\section{Introduction} In response to the demand for reduced pollution, electric vehicles (EVs) are becoming increasingly common throughout the world. EVs generally use Li-ion batteries as energy storage devices because of their many advantages over other battery types.\unskip~\cite{Cite1} A battery management system (BMS) that controls the battery is therefore important for extending battery life and maintaining safety. To control the battery efficiently, the BMS needs to know the current internal state of the battery; the two most important states are the state-of-charge (SOC), which indicates the current charge of the battery, and the state-of-health (SOH). The SOC essentially determines the distance the vehicle can travel, and the SOH represents the current condition and remaining service life of the EV battery. Real-time estimation of the SOC and SOH is therefore important for improving driving cost, driving convenience, and the overall economy of the vehicle. A representative approach for estimating the SOC and SOH is the model-based method, in which a system model taking the battery current as input and the voltage as output is established, and the internal state is estimated from this model. In general, the commercial BMS of an EV uses an Equivalent Circuit Model (ECM), which is easy to apply and has low computational complexity.\unskip~\cite{342681:7604108} Various SOC/SOH estimation algorithms based on the ECM have been studied; among them, filter-based and observer-based methods are mainly used in commercial BMSs.\unskip~\cite{342681:7589621,342681:7589623,342681:7589626,342681:7589631,342681:7589629} However, when these existing algorithms are used on their own to estimate the state of an EV, they generally estimate only the SOC and do not adequately cope with changes in the SOH and in the battery characteristics, which vary in real time. Therefore, algorithms that separate the battery parameter estimation from the state estimation, improving the real-time SOC estimation while simultaneously estimating the SOH, have been proposed. This approach is called joint estimation or dual estimation;\unskip~\cite{342681:7589622,342681:7589630} since joint estimation is the more common term in recent papers, we use it in the following.\unskip~\cite{342681:7589630,342681:7589628,342681:7589624} The most common structure of joint estimation combines two different algorithms that estimate the parameters and the states, respectively. The most studied combination in recent work uses RLS for parameter estimation and a KF for state estimation.\unskip~\cite{342681:15364123} The advantage of this combination is that it offers high battery parameter estimation performance and high SOC/SOH estimation performance at a computational cost applicable to a BMS. These methods commonly use conventional RLS with a single forgetting factor value (FV). When they are applied to the actual driving pattern of an EV, the RLS parameter estimation performance decreases significantly in certain periods. To solve this problem, a new SOC \& SOH estimation algorithm is proposed based on the following techniques. First, we propose a joint estimation algorithm combining the Extended Kalman Filter and the Diagonal Forgetting Factor RLS (DFF-RLS), which estimates the SOC and SOH based on real-time parameter estimation in the BMS of an EV.
Second, we propose an auto-tuning method for DFF-RLS based on the condition number, which applies the optimal forgetting factor in real time; it alleviates the `wind-up problem', a weakness of existing RLS-based techniques, and yields more accurate parameter estimates. This paper is organized as follows. Section 2 describes the equivalent circuit model (ECM) and the conventional algorithms (RLS, DFF-RLS, EKF). Section 3 describes the proposed adaptive diagonal forgetting factor recursive least square (ADFF-RLS). Section 4 presents the experimental results, which verify the performance of the proposed method using real data. Finally, Section 5 gives the overall summary and future work. \section{Joint estimation using RLS \& EKF} \subsection{Equivalent Circuit Model} There are various models representing a battery. Among them, the Equivalent Circuit Model (ECM), which combines low complexity with high reliability and accuracy, is easy to apply in actual applications. In general, the 1RC model is known to express the dynamics of a Li-NMC battery relatively accurately with little computational burden.\unskip~\cite{342681:7604108,342681:13254944} For this reason it is widely used in the BMS of commercial EVs, and this paper also uses the 1RC model.

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A5.png} \caption{1RC Equivalent circuit model} \label{figure-9b4a0e28b62377f918687c7fbd786daf} \end{figure}

The 1RC model consists of one voltage source, one series resistor, and one RC branch, where $V_L$ is the terminal voltage of the battery, $V_{OCV}$ is the open-circuit voltage, and $I_L$ is the terminal current. The branch composed of $R_1$ and $C_1$ represents the equivalent polarization resistance and capacitance responsible for the slow response, while $R_0$ is the equivalent series ohmic resistance representing the instantaneous response. \subsection{Joint estimation using DFF-RLS \& EKF} Because the characteristics of the battery change in real time, the model parameters need to be updated. Recently, `joint estimation', which combines a parameter update algorithm with a state estimation algorithm, has been studied extensively. With this method, high state estimation performance can be obtained while the disadvantages of each algorithm are compensated.\unskip~\cite{342681:8204099,342681:8204101}

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A1.png} \caption{Structure of Joint estimation} \label{figure-f1730111690e0b448395db7d320c479f} \end{figure}

In general, the EKF and RLS are the most commonly used algorithms for joint SOC/SOH estimation: the EKF is the most common algorithm for estimating the state of a system, and RLS is widely used for estimating and tracking time-varying parameters.\unskip~\cite{342681:8281700,342681:8281701} In this paper, a joint-estimation-type real-time SOC \& SOH estimation algorithm combining DFF-RLS and EKF is proposed. DFF-RLS adapts the forgetting to the characteristics of each parameter by expressing the FV as an n-dimensional diagonal matrix; it is highly effective when the sensitivities of the parameters differ considerably, as they do in a battery.
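To make this structure concrete, the following minimal Python sketch shows one possible implementation of the joint-estimation loop of Figure~\ref{figure-f1730111690e0b448395db7d320c479f}. The helper names (\texttt{rls\_update}, \texttt{ecm\_to\_state\_space}, \texttt{ekf\_step}) are hypothetical placeholders for the update equations detailed in the following subsections, not functions from an existing library.

\begin{verbatim}
def joint_estimation(current, voltage, dt, theta, P_rls, x, P_ekf,
                     rls_update, ecm_to_state_space, ekf_step):
    """Sketch of the joint-estimation loop: at each sample the parameter
    estimator (RLS) refreshes the ECM parameters and the state estimator
    (EKF) consumes them to update the SOC."""
    for k in range(1, len(current)):
        u, y = current[k], voltage[k]
        theta, P_rls = rls_update(theta, P_rls, u, y)  # ECM parameters
        A, B, H = ecm_to_state_space(theta, dt)        # rebuild 1RC model
        x, P_ekf = ekf_step(x, P_ekf, A, B, H, u, y)   # SOC state update
    return theta, x
\end{verbatim}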
\subsection{Extended Kalman Filter (EKF)} The extended Kalman filter (EKF) extends the optimal Kalman filter to nonlinear systems and estimates the true state recursively from the system model and the measurements.\unskip~\cite{342681:13001505} The EKF linearizes the system by computing the Jacobian of the nonlinear model equation $f(x)$, which makes it easy to apply the KF to nonlinear models. The algorithm first performs an `a priori estimate' that predicts the state from past data, and then performs an `a posteriori estimate' that corrects this prediction using the sensor measurement. Fig.~\ref{f-d0a38c6e73b6} conceptually illustrates the operation of the EKF for the system given by Equations~(\ref{dfg-6a65042c081c}) and~(\ref{dfg-fb60485ac8ac}):

\begin{equation} \tag{2.1} \label{dfg-6a65042c081c} X(k+1) = AX(k)+Bu(k)+w(k) \end{equation}

\begin{equation} \tag{2.2} \label{dfg-fb60485ac8ac} Y(k) = HX(k)+v(k) \end{equation}

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A2.jpg} \caption{Diagram of overall EKF algorithm} \label{f-d0a38c6e73b6} \end{figure}

where ${\widehat x}_{k-1}$ and ${\widehat x}_k$ are the a priori and a posteriori state estimates, $A$, $B$, and $H$ define the system model, $w$ and $v$ are the process and measurement noises with covariances $Q$ and $R$, and $P$ and $K$ are the updated error covariance and the Kalman gain. For the state equation of the EKF we use the 1RC model, which is widely used in real BMSs:

\begin{equation} \tag{2.3} \label{dfg-8efc4291f9bb} \begin{bmatrix}SOC_{k+1}\\I_{k+1}\end{bmatrix}=\begin{bmatrix}1&0\\0&A_{RC}\end{bmatrix}\begin{bmatrix}SOC_k\\I_k\end{bmatrix}+\begin{bmatrix}\frac{-\eta[k]\,\Delta t}{Q}&0\\B_{RC}&0\end{bmatrix}\begin{bmatrix}I_k\\0\end{bmatrix} \end{equation}

It is generally known that model accuracy increases with the number of RC branches in an nRC model; however, the computational burden also grows because of the larger number of parameters to estimate and the stronger nonlinearity.\unskip~\cite{342681:7604108,342681:13254944}
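As an illustration, a minimal NumPy sketch of one EKF iteration in the notation of Equations~(\ref{dfg-6a65042c081c}) and~(\ref{dfg-fb60485ac8ac}) is given below. The measurement model \texttt{h} and its Jacobian \texttt{H\_jac} are assumptions of this sketch (for the 1RC model they would encode the OCV--SOC curve and the resistive drops); the code is not taken from an existing BMS implementation.

\begin{verbatim}
import numpy as np

def ekf_step(x, P, A, B, Q, R, u, y, h, H_jac):
    """One EKF iteration: 'a priori' prediction through the model,
    then 'a posteriori' correction with the measurement y.
    x: state vector, u: input, Q/R: noise covariance matrices."""
    # a priori (prediction) step
    x_pred = A @ x + B @ np.atleast_1d(u)
    P_pred = A @ P @ A.T + Q
    # a posteriori (correction) step
    H = H_jac(x_pred)                        # linearization (Jacobian)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ np.atleast_1d(y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
\end{verbatim}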
\subsection{Diagonal Forgetting Factor Recursive Least Square (DFF-RLS)} When parameters are estimated using conventional RLS, the estimation accuracy degrades in regions where the data excitation is insufficient. This phenomenon is called `estimator wind-up' or `covariance wind-up'. One reason for this problem in conventional RLS is that it applies the same covariance matrix and FV to battery parameters that change in different ways. When the same error covariance and FV are applied to parameters with different dynamics, the errors of all parameters are lumped into one single scalar term. As a result, a drift occurring in the estimation of one parameter affects all parameter estimates, reducing the overall estimation performance. It is also difficult to design a single FV optimized for the variation characteristics of every parameter. To solve this problem, a separate covariance and FV must be applied to each parameter; for this purpose we use the diagonal forgetting factor RLS (DFF-RLS) as the parameter estimation algorithm in this paper. In this algorithm, the data vector $\phi$ and the diagonal forgetting factor matrix $\Lambda$ are given by

\begin{equation} \tag{2.4} \label{dfg-11a7752d85fb} \phi(t)=\begin{bmatrix}1\\v(t-1)\\I(t)\\I(t-1)\end{bmatrix} \end{equation}

\begin{equation} \tag{2.5} \label{dfg-fc32e713bd8a} \Lambda=\begin{bmatrix}\lambda_1&0&0\\0&\ddots&0\\0&0&\lambda_N\end{bmatrix} \end{equation}

First, the parameter vector $\theta(0)$ and the error covariance $P_0$ are initialized. At each time step, DFF-RLS then calculates the updating gain $K$:

\begin{equation} \tag{2.6} \label{dfg-06d0} K\left(t\right)=\frac{P\left(t-1\right)\Lambda^{-1}\varphi(t)}{I+\varphi\left(t\right)^{T}P(t-1)\Lambda^{-1}\varphi\left(t\right)} \end{equation}

and the estimation error $\alpha$:

\begin{equation} \tag{2.7} \label{dfg-637d} \alpha\left(t\right)=y\left(t\right)-\varphi\left(t\right)^{T}\theta(t-1) \end{equation}

Using these values, the model parameter $\theta$ and the error covariance $P$ are updated at each time step:

\begin{equation} \tag{2.8} \label{dfg-7957} \theta\left(t\right)=\theta\left(t-1\right)+K\left(t\right)\alpha(t) \end{equation}

where $\theta$ is the estimated parameter vector and $\varphi$ is the regressor composed of the measured voltage and current.
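For reference, one DFF-RLS iteration can be sketched in NumPy as follows. The covariance update in the last line is one common choice consistent with the gain of Equation~(2.6); the text above does not spell this update out explicitly, so it should be treated as an assumption of the sketch.

\begin{verbatim}
import numpy as np

def dff_rls_step(theta, P, lam, phi, y):
    """One DFF-RLS iteration. lam holds the diagonal of the forgetting
    matrix Lambda of Eq. (2.5), one forgetting factor per parameter."""
    PL = P @ np.diag(1.0 / lam)              # P(t-1) Lambda^{-1}
    K = PL @ phi / (1.0 + phi @ PL @ phi)    # updating gain, Eq. (2.6)
    alpha = y - phi @ theta                  # estimation error, Eq. (2.7)
    theta = theta + K * alpha                # parameter update, Eq. (2.8)
    P = (np.eye(len(theta)) - np.outer(K, phi)) @ PL  # assumed P update
    return theta, P
\end{verbatim}

With the regressor of Equation~(2.4), \texttt{phi} and \texttt{lam} are length-4 vectors, so each of the four parameters receives its own forgetting factor.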
\section{Adaptive Diagonal Forgetting Factor Recursive Least Square (ADFF-RLS)} \subsection{Real EV's driving pattern} The driving pattern of an actual EV is a hybrid profile mixing static sections (rest and CC-CV charging) with dynamic sections (driving). An algorithm to be mounted on the BMS of an actual EV must therefore show high estimation performance on such hybrid profiles. However, most existing studies have verified their performance only on dynamic (driving) profiles such as UDDS\unskip~\cite{342681:13001590} and NEDC\unskip~\cite{342681:13255303}. Even an algorithm with high estimation accuracy on a dynamic profile loses considerable accuracy when its parameters are estimated in real time on a hybrid profile; in particular, no meaningful estimation is performed in the static sections in the middle of the profile. For RLS, the covariance wind-up phenomenon occurs in these static sections and normal parameter estimation is not performed. Algorithms currently under study may therefore show lower performance than their simulation results when applied to the BMS of an actual EV, and an algorithm intended for a real BMS must be verified on a hybrid profile. In this paper we focus on this point when verifying the algorithm performance. \subsection{Covariance (estimator) wind-up problem} If the excitation of the current input data is insufficient, so that not enough dynamic information reaches the estimation algorithm to estimate the battery parameters, the existing data are continuously forgotten. This phenomenon leads to exponential growth of the RLS covariance, which is another cause of the `wind-up' problem.\unskip~\cite{342681:13001506} The parameter estimation performance of RLS therefore drops drastically in the intervals of the EV driving pattern where the excitation is relatively insufficient. This disadvantage reduces the overall parameter estimation performance on actual EV driving patterns, which alternate between driving (dynamic pattern) and charging and rest (static patterns).

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A8.png} \caption{Hybrid profile} \label{f-02743f77313b} \end{figure}
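The exponential covariance growth is easy to reproduce numerically. The scalar sketch below applies the standard scalar RLS recursion to a section with no excitation; the forgetting factor and the section length are arbitrary illustration values.

\begin{verbatim}
# With no excitation (phi = 0) the RLS gain vanishes and the forgetting
# division inflates the covariance geometrically: P(t) = P(t-1) / lam.
lam, P = 0.98, 1.0
for t in range(500):              # ~500 samples of a rest/charge section
    phi = 0.0                     # static profile: no dynamic information
    K = P * phi / (lam + phi * P * phi)
    P = (P - K * phi * P) / lam
print(P)                          # ~1 / 0.98**500, about 2.4e4: wind-up
\end{verbatim}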
\subsection{Condition number (CN) and Auto-tuning method} When the battery parameters are estimated over an EV driving cycle using RLS, the parameter estimation accuracy fluctuates greatly depending on the FV. For example, if the FV corresponding to a rapidly varying parameter is small (typically less than 0.8), the estimation accuracy for that parameter decreases, so setting this value is very important. The optimal FV changes greatly with the driving pattern of the EV, the characteristics of the battery, and the variation pattern of the corresponding parameter. Previous joint-estimation studies used a fixed forgetting factor. There are papers on how to set this value,\unskip~\cite{342681:15364208,342681:15364209} but they do not adequately reflect the characteristics of the system at hand. In actual commercial BMSs, the value is generally obtained by preprocessing that fine-tunes it through trial and error on specific data. In a real EV environment, however, where the driving pattern changes constantly and cannot be predicted, this approach is not optimal. Therefore, in this paper we apply an auto-tuning method that automatically estimates the optimal FV in real time in the real environment.\unskip~\cite{342681:16954759} With this technique, the SOC/SOH estimation accuracy is not significantly affected by the initial FV, and high estimation accuracy is maintained over various driving patterns. In this study, to limit the computational load, auto-tuning is performed only for the first of the four FVs, which has the greatest influence on the estimation result. The condition number (CN) is introduced as a measure of the parameter estimation accuracy. The CN quantifies the sensitivity of the estimated parameters and takes a lower value when the estimate is more robust to changes of the input;\unskip~\cite{342681:13001548} a parameter set with a lower CN can thus be judged to be more accurate.\unskip~\cite{342681:13446884,342681:13446885} Based on this, the auto-tuning algorithm computes the CN of the parameters estimated by the RLS and updates the FV in the direction that decreases the CN. By applying the resulting near-optimal FV, high SOC \& SOH estimation performance is obtained. The CN, $cond(A)$, represents the sensitivity of the solution $x$ to changes of $A$ and $B$ in the equation $Ax = B$.
The basic definition of the CN is:

\begin{equation} \tag{3.1} \label{dfg-1ce2} \frac{\dfrac{\left\Vert \Delta x \right\Vert}{\left\Vert x \right\Vert}}{\dfrac{\left\Vert \Delta A \right\Vert}{\left\Vert A \right\Vert}+\dfrac{\left\Vert \Delta B \right\Vert}{\left\Vert B \right\Vert}} \;\le\; cond(A) \end{equation}

The two matrices $A$ and $B$ used to estimate the parameters in the DFF-RLS are:

\begin{equation} \tag{3.2} \label{dfg-51e2} A = \sum_{i=0}^{t}\Lambda^{t-i}\varphi_i\varphi_i^T \end{equation}

\begin{equation} \tag{3.3} \label{dfg-0925} B = \sum_{i=0}^{t}\Lambda^{t-i}\varphi_i y_i \end{equation}

\begin{equation} \tag{3.4} \label{dfg-10af} x = \theta \end{equation}

$A$ and $B$ are built from the measured current and voltage and therefore include errors due to measurement noise, disturbance, and round-off. The reliability of the parameters estimated by DFF-RLS can thus be judged through the CN: when $cond(A)$ is high, the estimated parameter $x$ is sensitive to the errors of $A$ and $B$, so $x$ may lie far from the true value and its reliability can be judged to be low.

\begin{figure*}[!htbp] \centering \includegraphics[width=\linewidth]{images/A3.png} \caption{Algorithm Flowchart of auto-tuning method} \label{f-02ff69c89310} \end{figure*}
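A hedged sketch of the auto-tuning step is given below: the CN of the information matrix $A$ is computed, and the first FV is moved in the direction that decreased the CN at the previous step. The step size and the FV bounds are illustration values, and this simple sign-flip search is only one possible realization of the flowchart of Figure~\ref{f-02ff69c89310}.

\begin{verbatim}
import numpy as np

def autotune_first_fv(lam, A_mat, cn_prev, step, lo=0.90, hi=0.9999):
    """Move the first forgetting factor toward a lower condition number.
    lam: FV vector; A_mat: matrix A of Eq. (3.2); cn_prev: CN at the
    previous step; step: signed step size (all constants are assumed)."""
    cn = np.linalg.cond(A_mat)
    if cn > cn_prev:              # CN got worse: reverse search direction
        step = -step
    lam[0] = np.clip(lam[0] + step, lo, hi)
    return lam, cn, step
\end{verbatim}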
\subsection{Excitation-tag method} As described in the covariance wind-up subsection above, insufficient excitation of the input data makes the RLS parameter estimation performance drop drastically in the weakly excited intervals of the actual EV driving pattern, which alternates between driving (dynamic pattern) and charging \& rest (static patterns).

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A7.png} \caption{Ys} \label{f-283f11fdbeaf} \end{figure}

To solve this problem, a different technique can be applied to each section of the battery profile according to its excitation, so that the information on the battery dynamics is exploited in the sections with sufficient excitation while a different estimation strategy is used in the sections where it is insufficient. Based on this idea, we propose the following solution: the `Excitation Tag'. The concept is simple. The algorithm first determines whether the excitation is sufficient and tags each section accordingly (for example, sufficient sections are set to 1 and insufficient sections to 0); the state estimation behavior is then changed according to the tag, and a minimal sketch of such a checker is given at the end of this subsection. Depending on the excitation tag, the DFF-RLS and EKF operate differently. When the excitation tag is 1, the reliability of the ECM parameters estimated by the RLS from the present input signal is high, so the parameters are passed on to the EKF; since the model prediction inside the EKF is then also highly reliable, the process noise covariance Q is lowered. Conversely, when the excitation tag is 0, the reliability of the RLS parameter estimates is low, so the parameters are not passed to the EKF, and Q is increased. Applying this method significantly improves the SOC/SOH estimation performance on hybrid driving profiles.

\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{images/A4.png} \caption{Excitation tag checker} \label{f-dc040d5c60b8} \end{figure}
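Since the text does not give the exact excitation criterion, the sketch below assumes a simple one: the tag is raised when the input current varies enough over a sliding window, with an assumed threshold.

\begin{verbatim}
import numpy as np

def excitation_tag(current_window, var_threshold=0.5):
    """Assumed excitation checker: tag = 1 when the current shows enough
    variation over the window, tag = 0 otherwise (threshold is assumed)."""
    return 1 if np.var(current_window) > var_threshold else 0

# Tag-dependent behavior, as described in the text:
#   tag == 1 -> pass the RLS parameters to the EKF and lower Q
#   tag == 0 -> keep the previous EKF model parameters and raise Q
\end{verbatim}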
\subsection{Adaptive DFF-RLS (ADFF-RLS)} We propose the Adaptive Diagonal Forgetting Factor Recursive Least Square (ADFF-RLS), which improves on the existing DFF-RLS. The contributions of this proposal are as follows.

\begin{figure*}[!htbp] \centering \includegraphics[width=\linewidth]{images/A6.png} \caption{Flowchart of ADFF-RLS} \label{f-79bc95a8302b} \end{figure*}

\begin{enumerate} \item \relax The covariance wind-up problem, a chronic problem of RLS, is alleviated by applying DFF-RLS. \item \relax By combining DFF-RLS and EKF, high SOC and SOH estimation performance is obtained compared with other algorithms (single EKF, MFF-RLS \& EKF), while the shortcomings of each algorithm are compensated. \item \relax By auto-tuning the forgetting factors of the RLS through the condition number, the optimal forgetting factor is applied automatically and adaptively, without fine-tuning on the input data used for estimation. \item \relax The newly proposed excitation tag adaptively changes the tuning parameters of the EKF according to the excitation of the input data, improving the estimation performance beyond that of the existing EKF. \end{enumerate} To verify the performance of the proposed algorithm, it is applied in the next section to battery data used in an actual EV. \section{Results} This section presents a comparative study between common SOC/SOH estimation methods and the proposed ADFF-RLS method. The experimental data were collected from a Li-ion (NMC) battery pack used in an actual EV, with profiles covering different temperatures and SOH levels. \subsection{Experimental test setup} The experiments used a 72 Ah Li-ion NMC battery pack from an actual EV. The battery was tested with various current input profiles (static and dynamic) and at different life stages: BOL (beginning of life) and EOL (end of life). The SOC/SOH was estimated from the voltage (V), current (I), and temperature (T) data obtained in these experiments. \subsection{Error comparison result}

\begin{table}[!htbp] \caption{Driving profiles with different SOH} \label{tw-a0fda6458faf} \centering \begin{tabular}{llll} \hline Profile number & Test profile & Life status & Temperature\\ \hline 3-1 & Driving - GB/T & BOL & 30\ensuremath{^\circ}C\\ 3-2 & Driving - GB/T & EOL & 30\ensuremath{^\circ}C\\ 3-3 & Driving - US06 & BOL & 30\ensuremath{^\circ}C\\ 3-4 & Driving - GB/T+US06 & EOL & 30\ensuremath{^\circ}C\\ \hline \end{tabular} \end{table}

\begin{table}[!htbp] \caption{Maximum and average absolute estimation error of SOC} \label{tw-8ac14cbc7db0} \centering \begin{tabular}{lcccccc} \hline Test profile & \multicolumn{2}{c}{Single EKF} & \multicolumn{2}{c}{MFF-RLS\&EKF} & \multicolumn{2}{c}{DFF-RLS\&EKF}\\ Error [\%] & MAX & AVG & MAX & AVG & MAX & AVG\\ \hline Driving - GB/T (BOL) & 4.211 & 1.805 & 1.650 & 1.315 & 0.060 & 0.048\\ Driving - GB/T (EOL) & 6.450 & 2.763 & 16.734 & 2.603 & 0.459 & 0.144\\ Driving - US06 (BOL) & 6.649 & 3.204 & 7.648 & 4.727 & 0.644 & 0.497\\ Driving - GB/T+US06 (EOL) & 5.355 & 2.665 & 1.552 & 1.152 & 0.017 & 0.013\\ \hline \end{tabular} \end{table}

\begin{table}[!htbp] \caption{Maximum and average absolute estimation error of estimated voltage (SOH error)} \label{tw-a174a019501a} \centering \begin{tabular}{lcccccc} \hline Test profile & \multicolumn{2}{c}{Single EKF} & \multicolumn{2}{c}{MFF-RLS\&EKF} & \multicolumn{2}{c}{DFF-RLS\&EKF}\\ Error [mV] & MAX & AVG & MAX & AVG & MAX & AVG\\ \hline Driving - GB/T (BOL) & 60.600 & 4.620 & 72.913 & 8.792 & 27.854 & 0.337\\ Driving - GB/T (EOL) & 56.220 & 9.523 & 2916.000 & 18.775 & 49.428 & 4.815\\ Driving - US06 (BOL) & 110.300 & 7.968 & 126.390 & 22.616 & 208.130 & 10.602\\ Driving - GB/T+US06 (EOL) & 41.130 & 8.299 & 42.354 & 15.714 & 37.584 & 1.774\\ \hline \end{tabular} \end{table}

The battery was tested with the standardized car driving test profiles GB/T and US06, as well as with a mixture of the two. The single EKF, the previously proposed MFF-RLS \& EKF, and the algorithm proposed in this study (ADFF-RLS \& EKF) were applied to these data, and the results were compared. The overall data show that the proposed algorithm has the highest estimation accuracy. \section{Conclusions} In this paper we proposed a real-time joint estimation algorithm (ADFF-RLS \& EKF) using an excitation tag and verified it experimentally. Through the joint estimation of DFF-RLS and EKF, the algorithm adapts to the changing conditions of a real EV driving environment and achieves high SOC/SOH estimation performance. The contributions of this proposal are as follows. \begin{enumerate} \item \relax The condition number was introduced to represent the accuracy of the estimated parameters numerically, and auto-tuning was performed based on it. High estimation accuracy is therefore obtained without fine-tuning of the tuning parameters, and the algorithm responds adaptively through automatic tuning even when the driving pattern changes. \item \relax Through the above proposals, the `covariance wind-up problem', a disadvantage of RLS-based estimation methods, is mitigated and high estimation accuracy is obtained. \end{enumerate} To evaluate the accuracy of the SOC/SOH estimation results of this algorithm, we applied the proposed method to hybrid driving-pattern cell data of real EVs with two SOH levels (BOL \& EOL) and examined the SOC estimation results for each dataset. The same data were also processed with several conventional algorithms; the proposed algorithm was more accurate than the others, and this result held for all battery life stages. We therefore confirm that the proposed algorithm achieves higher SOC/SOH estimation accuracy than existing algorithms in an environment where the driving pattern changes in real time. This result is expected to provide more accurate information to the driver, who can then operate the EV more economically and obtain an improved driving experience. \section*{Acknowledgements}
\section{Detailed description of the CIANNA framework} \label{cianna_app} \clearpage \null \thispagestyle{empty} \newpage \thispagestyle{empty} \etocsetnexttocdepth{4} \etocsettocstyle{\subsection*{Detailed description of the CIANNA framework}}{} \localtableofcontents{} \clearpage \section*{Detailed description of the CIANNA framework} \vspace{-0.1cm} In this section we describe our numerical framework CIANNA (Convolutional Interactive Artificial Neural Networks by/for Astrophysicists) in its present state. Our framework is evolving fast, so some of the information presented here might become out of date at some point, but the main programming philosophy should remain the same. For up-to-date information, please look for CIANNA on GitHub or a similar code-hosting solution (currently at \href{https://github.com/Deyht/CIANNA}{github.com/Deyht/CIANNA}). All the results from the previous study were obtained using CIANNA or its precursors, from the simple illustrative examples (Sect.~\ref{global_ann_section}) up to the various advanced CNN architectures (Sect.~\ref{cnn_global_section}). Although it is a very generalist framework that could serve any ANN application, its development was mostly driven by its ability to solve various astrophysical problems. For example, we successfully used it for regression, classification, clustering, the acceleration of other programs, and dimensionality reduction, all in the context of astrophysical applications. This demonstrates an already satisfying maturity of the framework, which is available as open source under the Apache 2 license. In the present appendix section we detail the overall development philosophy, the global programming scheme, some very important functions like \textit{im2col}, how the interface works on a concrete example, and finally the framework performance in comparison with the widely used Keras (TensorFlow) framework.\\ \vspace{-0.7cm} \subsection{Global description} \vspace{-0.0cm} CIANNA is written in the C language (C99 revision), but it also contains a large amount of Nvidia CUDA code allowing for Nvidia GPU acceleration. Any application can be coded using a low-level C interface to access the full potential of the framework. Additionally, we constructed a high-level Python interface that resembles the Keras one. This interface is suitable for the vast majority of applications and can easily be modified to suit many other needs (see Sect.~\ref{cianna_interface}). CIANNA allows the use of several computing methods through different implementations: a very simple dependency-free CPU implementation, an OpenMP implementation, an OpenBLAS matrix formalism, and finally CUDA GPU matrix acceleration. Overall, the framework has been built to permit subsequent additions in a modular way. However, each time a choice had to be made between modularity or ease of use and performance, we systematically privileged performance. The rationale behind this decision is that, with this framework, we wanted the finest possible control over the detailed compute behavior. A focus on modularity would have enclosed this detailed behavior in too many high-level considerations, which would have strongly complicated any application falling outside the pre-determined range of use cases. This notably explains why there is some code repetition in the framework, in order to retain proper modularity over several independent fine-grained implementations.
Overall, the framework aims at providing basic elements, for which we propose several arrangements, that remain suitable for constructing unanticipated applications with less effort than a very high-level framework would require.\\ \vspace{-0.2cm} In practical terms, CIANNA presently allows the user to construct arbitrarily deep convolutional or fully connected networks with fine control over each layer's properties. In addition to the overall architecture construction, it is possible to control several aspects of the network, including: learning rate with decay, weight initialization, activation function, gradient descent scheme, shuffle method, momentum, weight decay, dropout, etc. Interestingly, since the framework is based on low-level functions, it can easily be adapted beyond the classical feedforward ANN with backpropagation scheme. For example, we were able to construct several other applications from CIANNA, namely: Generative Adversarial Networks, object detection networks, semi-supervised clustering by K-means, Self Organizing Maps (SOM), and Radial Basis Functions. Even if we did not have time to test other constructions, we are confident that the framework could be adapted to further applications, for example ANNs trained with a Genetic Algorithm formalism. \vspace{-0.2cm} \subsection{CIANNA objects} While CIANNA is coded in regular C, which is not focused on high-level object programming, we still reproduced some simple object properties in the framework using C data structures associated with specific sets of functions. This choice was mainly made out of strong pre-existing programming habits, but it still presents a few advantages. It is easier to manipulate for programmers who are not used to the object formalism, and it provides full access to every element of our data structures at any place. This allows one to elaborate on the pre-existing CIANNA functions from completely new files, without modifying the existing code, or to create derivative objects. We list here the principal data structures and briefly describe their role in the framework:\vspace{-0.6cm}\\ \begin{itemize}[leftmargin=0.5cm] \setlength\itemsep{0.01cm} \item \textbf{The network structure} allows the user to create several networks in the same instance of CIANNA. This structure contains all the properties of the network that have an impact on the subsequent structures. For example, it contains the choice of gradient descent scheme, the batch size if needed, and the learning rate and momentum, but also all the information about the dataset to be processed, with its input and output dimensions. More importantly, it is the home structure of the list of layer objects declared for this network. It also contains references to the training, test, and validation datasets associated with this network. Since this structure mostly contains references, layers or datasets can technically be associated with several networks. \item \textbf{The dataset structure} is the simplest one. It contains the memory location of the raw data, already divided into batches. This structure only contains the sizes of the data it handles, together with references to their location, which can be either in host memory or in device (GPU) memory.
The independence of this structure allows one to create and manipulate any dataset before it is associated with a network, in the rare cases where this is useful, but it also allows the creation of transition datasets for various internal operations of the framework. \item \textbf{The layer structure} is certainly the most important one. This structure is highly modular since it must handle all the types of layers in the same object: convolutional, pooling, dense, or any other kind of layer. Each layer is associated with a network structure but is constructed independently, so that a given layer could be used by different networks, for example to vary the dataset to be trained on. Each layer contains a reference to the previous layer, or to the input dataset if it is the first one. It contains all the internal data it needs, such as the weights, but also its output and any memory required for intermediate transformations between layers. It also usually contains all the equivalent memory required for backpropagation. Interestingly, the layer structure contains placeholder references, in the form of a per-type-of-layer structure and of activation function pointers, that are set during layer initialization. \item \textbf{The layer parameter structures} are as numerous as the layer types in the framework. Each of them is an independent structure able to store the information required by its layer type. For example, the parameter structure associated with a convolutional layer contains the number of filters, the filter size, the padding, the stride, etc. All these structures can take the same void placeholder in a layer structure and are then cast back to their true type in any function that requires this information. \end{itemize} \newpage \subsection{Description of the layers} \vspace{-0.2cm} The previous description of the data structures already provides many details on how the framework is constructed. Still, we want to add more details here by listing the typical major operations performed by each type of layer. Since it would be too long to distinguish the different implementations, we only discuss the matrix formalism in the general case.\\ \vspace{-0.7cm} \subsubsection{Dense layer} \vspace{-0.1cm} The dense layers follow the various prescriptions from Section~\ref{matrix_formal}. They are composed of 2D matrices representing the batched input, whose size depends on the previous layer output; the weight matrix, sized according to the input and to the number of neurons in the layer; and finally the batched output, which depends on the number of neurons in the layer. All these elements have a counterpart used for the error propagation through the layer. When the previous layer is also dense, the input is not duplicated: the previous layer output is directly in the right form to be used as input of the present layer, assuming the bias propagation trick described in Section~\ref{gpus_prog}. When the previous layer is a convolutional or pooling one, a temporary matrix is created to re-arrange the data into the proper format. We also note that this is the only layer type that can use dropout in CIANNA for now.\\ \vspace{-0.1cm} The dense layers are mainly characterized by two functions that strongly resemble what is illustrated in Figure~\ref{matricial_form_fig}: the forward propagation and the backpropagation. The forward function is mainly composed of a matrix multiplication between the weight matrix and the input, followed by the application of the layer activation function to each element of the output. Each activated neuron then gets a chance of being set to zero according to the dropout rate; therefore, the number of neurons dropped at each pass is not constant. The positions of the deactivated neurons are stored for the backpropagation phase. The layer does not actively transfer its output to the next layer; it is up to the next layer to gather this output. The backpropagation function is constructed very similarly, starting by setting to zero the propagated output-error elements corresponding to the dropped neurons. It then continues with a large matrix multiplication between the transposed weight matrix and the remaining sparse output error. The result goes through the derivative of the activation function of the \textit{previous} layer and is then propagated directly to the previous layer output error. Finally, the backpropagation function handles the weight update by multiplying the present layer output error by the transposed input it received and stored during the forward phase. This creates an update matrix that is used to change the weights; this update matrix is kept in the layer memory, and all subsequent update matrices use it to account for the momentum parameter.
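To fix ideas, the dense-layer forward and backward passes described above can be sketched in NumPy as follows; this is a simplified stand-in for CIANNA's C/CUDA matrix formalism, with illustrative names, and the bias propagation trick is omitted.

\begin{verbatim}
import numpy as np

def dense_forward(W, x, act, drop_rate, rng):
    """W: (n_out, n_in) weights; x: (n_in, batch) batched input."""
    a = act(W @ x)                           # weights times batched input
    mask = rng.random(a.shape) >= drop_rate  # per-neuron dropout draw
    return a * mask, mask                    # mask is kept for backprop

def dense_backward(W, err_out, mask, x_saved, d_act_prev,
                   lr, momentum, dW_prev):
    """err_out: output error; x_saved: stored forward input;
    d_act_prev: derivative of the *previous* layer activation."""
    err_out = err_out * mask                 # zero the dropped neurons
    err_prev = (W.T @ err_out) * d_act_prev  # error sent to previous layer
    dW = err_out @ x_saved.T + momentum * dW_prev  # update matrix
    return err_prev, W - lr * dW, dW
\end{verbatim}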
\vspace{-0.2cm} \subsubsection{Pooling layer} \vspace{-0.1cm} The pooling layers are by far the simplest ones. For now the pooling is systematically a max pooling, but other kinds could easily be added. The present pooling layers are characterized only by the pooling size, and by forward and backward functions that each consist of a single internal operation (see Figures~\ref{pool_op} and~\ref{pool_op_back}). The forward function performs the pooling operation by selecting the maximum input pixel value in each $P_o\times P_o$ area for each depth channel and stores the position of this maximum. The backpropagation function then takes the output error inherited from the previous propagation step and scatters it into a matrix that contains zeros everywhere except at the stored positions of the forward-phase maximums, which receive the error of the output pixel corresponding to their area. The pooling layer also handles the transformation of this propagated error by applying the derivative of the previous layer's activation function to its own output.
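A minimal NumPy sketch of these two pooling operations, for a single depth channel and assuming the image size is a multiple of the pooling size, could look as follows (illustrative code, not CIANNA's implementation).

\begin{verbatim}
import numpy as np

def maxpool_forward(x, p):
    """Keep the max of each p x p area and store its position."""
    h, w = x.shape
    out = np.zeros((h // p, w // p))
    argmax = np.zeros((h // p, w // p, 2), dtype=int)
    for i in range(h // p):
        for j in range(w // p):
            patch = x[i*p:(i+1)*p, j*p:(j+1)*p]
            k = np.unravel_index(np.argmax(patch), patch.shape)
            out[i, j] = patch[k]
            argmax[i, j] = (i*p + k[0], j*p + k[1])
    return out, argmax

def maxpool_backward(err_out, argmax, h, w):
    """Route each output-error pixel to the stored maximum position;
    every other input position stays at zero."""
    err_in = np.zeros((h, w))
    for i in range(err_out.shape[0]):
        for j in range(err_out.shape[1]):
            err_in[tuple(argmax[i, j])] = err_out[i, j]
    return err_in
\end{verbatim}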
\subsubsection{Convolutional layer} The convolutional layers are the most complicated to handle. They are characterized by all the necessary parameters: filter size, stride, padding, and number of filters. They are composed of many 2D matrices, several of which are in fact flattened versions of 3D volumes. Presently we consider that a convolutional layer can only be preceded by a convolutional or pooling layer. It should be possible to let the network grow convolutional layers from dense ones, in order to build for example convolutional auto-encoder architectures, but this is currently not present in CIANNA. The flattened input usually considers each depth channel as a contiguous 1D array, with the depth channels stacked one after another. There are two arrangements here: (i) one where all the first depth-channel images are contiguous in a 1D array and the second dimension is the number of input depths, and (ii) one where all the depth channels of a given image are flattened into a 1D array and the second dimension is the batch size. The input image is usually considered in the second format, while the activation maps at the output of a convolutional layer are in the first one. The layer then contains the filter volume, which depends both on the number of depth channels of the previous layer and on the number of filters of the present layer. Finally, the output takes the form of a flattened volume gathering the different activation maps. As before, all these elements have a duplicate form dedicated to the error backpropagation phase. We recall that a simple example of the matrix formalism for a single convolutional layer is given in Figure~\ref{matricial_form_fig}.\\ As for the previous layers, the convolutional ones are separated into a forward and a backpropagation function; but most important here is the im2col function, which transforms the input into the appropriate form to be handled as a matrix multiplication. We dedicate Section~\ref{im2col_function_algorithm} to this function and only refer to its use here. The forward function of a convolutional layer actually starts with the im2col transformation, which rearranges the input volume into a larger flattened volume that separates all the sub-regions to which the filters must be applied. This volume is then transposed and multiplied by the flattened weight filter matrix to obtain all the activation maps, with all their depth channels, for the full batch at once. The resulting volume then goes through the activation function of the present layer. While the ReLU activation is strongly recommended for convolutional layers, it is still possible to use other activation functions in CIANNA.\\ The backpropagation function is very similar. Based on the input and output shapes, it evaluates the parameters that must be used for the transposed convolution, such as the external and internal padding or the stride. It then rotates and rearranges the filters following the prescription from Section~\ref{conv_layer_learn}, so that each weight is effectively used to propagate each error pixel to the input pixels that were involved in its activation. The im2col function includes the padding and stride parameters in its transformation, so there is no need for an intermediate transformation. The transformed error volume is then transposed and multiplied by the rotated filter volume. The propagated error obtained this way goes through the derivative of the activation function of the previous layer. As for the dense layer, the convolutional layer handles the weight update after the propagation, by multiplying the im2col-transformed input from the forward phase by the output error \textit{before} the im2col transformation. The resulting weight-update matrix is combined with its previous value through the momentum and then used to change the weight filter volume.\\
\newpage \subsection{Im2col function} \label{im2col_function_algorithm} \vspace{-0.1cm} The im2col function is by far the most important one of the whole framework when considering convolutional networks. We recall that a graphical representation of the procedure is presented in Figure~\ref{im2col_fig}. The development of this function has concentrated a lot of optimization effort, and the version we present here is already the 4th major version. This function is critical because it is a prerequisite of the matrix formalism for convolutional layers, which is usually much quicker than a direct convolution implementation. The total time is then dominated by the im2col function itself, as discussed in Section~\ref{gpu_cnn}; any improvement to this function therefore leads to strong overall performance improvements for the whole network. In our approach, the im2col operation directly handles the transformation induced by the external and internal padding and by the choice of stride and filter size. This way we avoid any unnecessary intermediate computation and kernel launch time when using GPU acceleration. We note that the positions of the zeros induced by these parameters do not move in the expanded matrix form. For this reason the full matrix is set to zero at layer creation, and all elements that must stay at zero are left untouched by our im2col implementation. There is still room for improvement in this function, especially using advanced CUDA shared memory management and better thread-block fine-tuning. Our version currently minimizes the number of memory reads and writes by accessing each input and output pixel only once. The fact that our function remains memory bound with minimal cache misses in this situation is in fact a strong indication that we have constructed a computationally efficient implementation, mainly limited by the GPU memory clock.\\ \vspace{-0.2cm} We present our im2col implementation in Algorithm~\ref{im2col_algo}, using the notations from Table~\ref{im2col_algo_parameters}. While this algorithm is fairly difficult to follow, we summarize the overall approach here. The different loops go through all the pixels of the original input volume. The objective is then to find all the locations of the transformed volume that must contain a duplicate of the current input pixel. We recall that this duplication is due to the overlap of the different sub-regions in the image that occurs when $S < f_s$. With this approach there is only one read per input pixel, which then stays in the cache, only one assignment for each output pixel that must receive a value, and no memory action for those that must stay at zero. To do so, the algorithm searches all the possible filter positions around the current input pixel. This can be seen as overlapping a filter above the current pixel, starting at the position where the top-right corner of the filter covers it, and then moving the filter around this point following the stride value, searching for all filter placements that still contain the pixel. The contribution of the current input pixel is then assigned to all the transformed output pixels identified, taking into account the external and internal padding.\\ \vspace{-0.2cm} This algorithmic form is in fact slightly different from the concrete implementation in CIANNA: our im2col approach was designed directly as a CUDA kernel. The conversion between the presented algorithm and the kernel is relatively easy and mainly consists in replacing the $i$, $d$ and $z$ loops by the indices of a 3D CUDA thread block. The remaining differences are marginal and were only made to improve the readability of the algorithm. We note that, when using CUDA blocks, it is very important to consider the memory arrangement to avoid cache misses: the $z$ loop, which goes through contiguous pixels, must absolutely be associated with the fastest CUDA block index. We also note that the same function is suitable for both the forward and the backpropagation phases by adjusting the $I_{\mathrm{shift}}$, $D_{\mathrm{pad}}$ and $T_{\mathrm{flat}}$ parameters, which describe the arrangement of the images and their depth channels in memory. The function then handles the conversion corresponding to a fractionally-strided convolution with internal padding and various strides. The choice of parameters for the adequate backpropagation is made automatically by the convolutional layer in its backpropagation function.
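To clarify the role of the transformation before the formal listing, here is a minimal NumPy sketch of the im2col idea for a single channel without padding (illustrative only). It loops over output columns, whereas the CIANNA kernel of Algorithm~\ref{im2col_algo} instead loops over input pixels, reading each one exactly once and scattering it to all its duplicate positions.

\begin{verbatim}
import numpy as np

def im2col(x, f_s, S):
    """Flatten every f_s x f_s region reached with stride S into one
    column, so the convolution becomes a single matrix product."""
    h, w = x.shape
    n_h, n_w = (h - f_s) // S + 1, (w - f_s) // S + 1
    cols = np.empty((f_s * f_s, n_h * n_w))
    for i in range(n_h):
        for j in range(n_w):
            cols[:, i * n_w + j] = x[i*S:i*S+f_s, j*S:j*S+f_s].ravel()
    return cols

# Convolution as a matrix product (filters: (n_f, f_s, f_s)):
#   out = filters.reshape(n_f, -1) @ im2col(x, f_s, S)
\end{verbatim}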
Therefore, the $z$ loop that goes through contiguous pixels must absolutely be associated with the quickest CUDA block index. We also note that the present matrix is suitable for both forward and backpropagation by adjusting the $I_{\mathrm{shift}}$, $D_{\mathrm{pad}}$ and $T_{\mathrm{flat}}$ parameters that depict the arrangement of the images and their depth channels in memory. This function handles the conversion to correspond to a fractionally-stride convolution with internal padding and various strides. The choice of parameters that correspond to the adequate backpropagation is made automatically by the convolutional layer in the backpropagation function. \begin{table}[!t] \centering \vspace{-0.6cm} \caption{Variable list for the im2col algorithm} \vspace{-0.3cm} \footnotesize \def1.1{1.0} \begin{tabularx}{1.0\hsize}{l @{\hskip 0.06\hsize} l @{\hskip 0.06\hsize} l } \toprule \toprule {\small Symbol} & {\small Type} & {\small Description}\\ \toprule $i$ & variable & Index of current image in the batch\\ $d$ & variable & Index of depth channel in the input image\\ $z$ & variable & Index of pixel in the input depth channel\\ $w$ & variable & Pixel width coordinate in transformed image\\ $h$ & variable & Pixel height coordinate in transformed image\\ $x$ & variable & The corresponding region on the width axis\\ $y$ & variable & The corresponding region on the height axis\\ $p_w$ & variable & The width position inside the filter\\ $p_h$ & variable & The height position inside the filter\\ $pos$ & variable & Pixel 1-D flatten coordinate in transformed image\\ $\mathrm{in}$ & pointer & Temporary position in the input image\\ $\mathrm{mod}$ & pointer & Temporary position in the transformed image\\ $B_{\mathrm{size}}$ & constant & Batch size\\ $D$ & constant & Number of input depth channel\\ $I_{\mathrm{flat}}$ & constant & Size of a flattened input depth channel, i.e $w_{\mathrm{in}} \times h_{\mathrm{in}}$\\ $D_{\mathrm{pad}}$ & constant & Separation between two depth channels in memory ($I_{\mathrm{flat}}$ or $B_{\mathrm{size}} I_{\mathrm{flat}}$)\\ $P_{\mathrm{ext}}$ & constant & External padding\\ $P_{\mathrm{int}}$ & constant & Internal padding\\ $T_{\mathrm{flat}}$ & constant & Size of a flattened image in the transformed format, i.e $w_{\mathrm{out}} \times h_{\mathrm{out}} \times f_s^2D$\\ $I_{\mathrm{shift}}$ & constant & Separation between two input images in memory ($I_{\mathrm{flat}}$ or $D I_{\mathrm{flat}}$)\\ $f_s$ & constant & Filter 1D size\\ $S$ & constant & Convolution stride \\ $N_{\mathrm{area}}$ & constant & Number of regions in the image in one axis, i.e $w_{\mathrm{out}}$\\ \bottomrule \bottomrule \end{tabularx} \label{im2col_algo_parameters} \vspace{-0.2cm} \end{table} \begin{algorithm}[!h] \setstretch{1.0} \SetInd{0.3cm}{0.6cm} \For{ $i \leftarrow 0$ \KwTo $B_{\mathrm{size}}$ }{ \vspace{0.1cm} \For{ $d \leftarrow 0$ \KwTo $D$ }{ \vspace{0.1cm} $\mathrm{in}\quad \leftarrow i\times I_{\mathrm{shift}} + d\times D_{\mathrm{pad}}$\\ $\mathrm{mod} \leftarrow i\times T_{\mathrm{flat}} + d\times f_s^2$\\ \For{ $z \leftarrow 0$ \KwTo $I_{\mathrm{flat}}$ }{ \vspace{0.1cm} $w \leftarrow (z \div w_s) \times(1 + P_{\mathrm{int}}) + P_{\mathrm{ext}} $\\ $h \, \leftarrow (z\ \, / \ w_s) \times(1 + P_{\mathrm{int}}) + P_{\mathrm{ext}} $\\ $x \, \leftarrow w/S$\\ \While{$(w-xS) < f_s$ and $ x \geqslant 0 $}{ \vspace{0.1cm} $p_w \leftarrow w-xS$\\ $y \ \ \,\leftarrow h/S$\\ \While{$(h-yS) < f_s$ and $ y \geqslant 0$}{ \vspace{0.1cm} $p_h \leftarrow h-yS$\\ $pos \leftarrow x f_s^2 D + 
y N_{\mathrm{area}} f_s + p_h f_s + p_w$\\ \If{$pos \geqslant 0$ and $pos < I_{\mathrm{flat}}$}{ \textcolor{red}{$\bm{\mathrm{mod}[pos] \leftarrow \mathrm{in}[z] }$}\\ } $y \ \ \leftarrow y-1$\\ } $x \ \ \,\leftarrow x-1$\\ } } } } \caption{Im2col algorithm} \label{im2col_algo} \end{algorithm} \newpage \subsection{Other important functions} \vspace{-0.2cm} We have reviewed the most important elements in the CIANNA framework. Around them are also various auxiliary functions like the activation functions, weight initializations, dataset shuffling, normalizations, dataset loading, etc. It is already possible to assemble all these elements using C programming to create almost any network structure. Still, we added a few higher lever functions that provide an easier network construction for the classical cases. Among them, there are several construction functions that declare the network, dataset and layer structures. The link between them is then automatized. Once assembled the network can be used with a global training and forward functions that take various network hyperparameters as argument.\\ \vspace{-0.2cm} The simplest of these large scale functions is the one that performs a forward on a given dataset. It simply takes the dataset as input and forwards each pre-constructed batch through the network layers one after the other. The layer construction has already linked the respective layer inputs and outputs so that each of them only has to be called through its associated internal forward function. This global function is in fact built among another one that performs this task in addition to computing the error between the targets and the network outputs. In this case the error is only used as a monitoring information and not for training. It can noticeably display a confusion matrix on the provided dataset. This forward with error computation allows one to monitor the evolution of the error on the valid dataset during the training process.\\ \vspace{-0.2cm} Finally, the highest-level function is the training one. This function includes all the necessary elements to perform the full network training during a given number of epochs. It handles the learning rate with decay, momentum, the frequency of control steps, the frequency of shuffle, etc. It means that it has access to all the required datasets, to the network structure, and to all the hyperparameters that are not considered as parameters of the function itself. If needed, it also manages a part of the data transfers between the CPU host memory and the GPU device memory. Overall, the function proceeds by forwarding the training dataset batches one after another through all the network layers, then computes an output error that is used for the backpropagation through each layer again. After a full epoch, the function shuffles the training dataset if necessary and it computes the error on the valid dataset using the previous error function. 
Interestingly, we also added a performance measurement inside this function that takes the form of the number of objects processed per second by the network.\\ \vspace{-0.7cm} \subsection{Python and C interfaces} \label{cianna_interface} \lstinputlisting[float=!t, caption={Python interface},label=cianna_python_interf, language=Python, basicstyle=\fontsize{8pt}{8pt}\selectfont,frame=single]{cianna_interf.py} \lstinputlisting[float=!t, caption={C interface},label=cianna_c_interf, language=Python, basicstyle=\fontsize{8pt}{8pt}\selectfont,frame=single]{cianna_interf.c} \vspace{-0.2cm} Now that we have covered most of the internal elements of CIANNA, we present here the two interfaces, in C and Python, on a practical example. For this section and the following one about the network performance we used the example from Section~\ref{mnist_example} on MNIST. We illustrate the two interfaces in Listings~\ref{cianna_python_interf} and~\ref{cianna_c_interf} for Python and C, respectively. There are many tunable parameters in the CIANNA interface so we only discuss the global approach here since this section does not stand in place of the full instructions that will be provided with the source code. Both interfaces require the same list of actions that resemble the Keras approach. First, the network must be created. This is done using a function that takes as arguments the size of the input and output of the network and the batch size, which are all required to properly arrange memory in the dataset constructions. Then the data can be loaded directly into the dataset structure in the C interface, while for the Python one it requires the user to create Numpy arrays that are then converted into the right format. The network construction is made through individual layer creation functions for which the order is important to automatize the connection between the subsequent layers. It is then possible to call the train function on the constructed network. Once trained it is possible to forward the test dataset through the network to construct predictions.\\ \lstinputlisting[float=!t, caption={Typical CIANNA training monitor},label=cianna_output, language=sh, basicstyle=\fontsize{6pt}{6pt}\selectfont,frame=single, linewidth=1.1\textwidth, xleftmargin=-1.2cm]{cianna_output.txt} \vspace{-0.7cm} Now that the general procedure is described we list here a few details on the interfaces. First, it is visible that many of the network hyperparameters are expressed in the example we selected. It is possible to custom simple parameters like the batch size or the learning rate, momentum, etc; but also layer dependent parameters like the stride, padding, number of filters, number of neurons, dropout, etc; and more importantly it is possible to select the activation function for each layer. In the Python interface there are many optional parameters that are not all expressed here, while in our classic C interface all the parameters are required every time. This exposes some details of the underlying C structure of CIANNA. It is important to note that the Python interface only calls the equivalent functions from the C interface in a simplified way. Therefore, the C interface provides much more control over the network behavior. For example it is possible to design two networks that share some layers, or to have much more datasets that can be switched to emulate transfer learning. 
A more practical example that we actually made was a Generative Adversarial Network, which we were able to design using a succession of networks to associate the generative and the discriminative parts and train them separately or simultaneously. Finally, we note that both the interfaces allow for what we call "dynamical loading" on GPU, which consists in loading each batch individually on the GPU memory while keeping the all dataset on the host memory. While this approach is expected to add memory loading overhead during the training, we observed that this overhead mostly overlaps with other actions performed by the framework like kernel launch latency. In addition, this allows to delegate the shuffle task to the CPU host memory that is much more efficient than the GPU for this and that can therefore be done concurrently to GPU computations. For these reasons, the performance hit induced by the dynamic loading is most of the time negligible while allowing to handle much larger datasets that would not fit entirely inside the GPU memory. We also highlight that the network training and forward functions can be serialized to construct training blocks or to construct more complex evolution of the training hyperparameters or dataset. Finally, the CIANNA framework can save the network state regularly during the training, and consequently it is able to reload any previously trained network. This allows to pursue training from a given point, to make predictions using a saved network, or to perform transfer learning from any network.\\ We illustrate in Listing~\ref{cianna_output} a typical console output of the network construction and training for 3 epochs using the Python interface on the MNIST example. This constitutes the CIANNA log file that can be saved during training and that shows several hyperparameters as well as the network layer structure. This output was made using the option that displays the confusion matrix from the test dataset at each control step, here at every epoch. This allows one to have a detailed view of the network prediction for all the classes at each epoch. While the network appears to have already a very good prediction at the first epoch with a global accuracy of 97.72\%, we remind that the best accuracy achieved by this network is around 99.35\% as exposed in Section~\ref{mnist_example}. With this log it is also possible to monitor the network error on the validation dataset expressed as the "cumulated error" at each epoch, which correspond to the average of the cross-entropy error computed on the prediction of each validation object in this case. On this example it is visible that the error slowly decreases with the values 0.0758, 0.0570, and 0.0463. Because this error is measured from a given dropout selection and not from a scaling of the weights, it is expected that it will present significant oscillations during training, the best classification result on this example being obtained at an error around 0.036. Still, it is usually easy to spot overtraining by looking for a global increase of this error over several control steps. \clearpage \subsection{Performance comparison} \vspace{-0.2cm} Now that we have presented how CIANNA can be used, we proceed to an evaluation of its compute performance. To have a reference we will declare the exact same network on an identical dataset with CIANNA and with a Keras implementation that relies on TensorFlow, both using GPU acceleration. 
All the measurements were made on our Nvidia P2000 mobile since it is the system on which we have the finest control over the software versions and hardware properties. Even if we usually use the latest CUDA 10.2 version with CIANNA, the present TensorFlow stable release is limited to CUDA 10.1, we then compiled our framework with this version as well. We remind that the specifications of the P2000 mobile are: 768 CUDA cores clocked at 1544 MHz (for our version), with 4GB of GDDR5 dedicated memory clocked at 1502 MHz (6008 Mhz effective) and interfaced with a 128 bit connection to the host memory, achieving 96 GB/s of bandwidth. The host system uses a Xeon E-2176M, and both frameworks use only a single CPU thread to drive the GPU on a single core that sustains a sturdy 4.2 GHz clock under load. The reference network used for the comparison is again the one from the MNIST dataset, since it is one of our network with the most convolutional layers.\\ \vspace{-0.1cm} First we observed that both frameworks achieved very similar prediction quality on a identical architecture, even if the ADAM error gradient optimization is automatically used in the Keras version. The only advantage that this advanced gradient optimization provides is that it is more resilient to changes in the network learning rate, momentum, and batch size, and more generally of all elements that affect the size of the weight updates. In practical terms it means that the network is more likely to converge properly in a larger range of values for these parameters than with our present very naive learning rate decay implementation. In terms of number of epochs to converge, CIANNA has usually a lesser accuracy during the first few epochs on which the ADAM optimized Keras version is more efficient, but still the two frameworks reach their best prediction at a similar epoch.\\ \vspace{-0.1cm} In terms of compute performance, it is difficult to separate some network construction elements from the training function itself. For this reason we only excluded the initial data loading and kept the network initialization, the layers creation and the data conversion into the comparison, even if the times for these operations are marginal against the training time. Both framework implementations were executed several times in a row to account for possible overheating of the system that would lead to thermal throttling in one run and not in the other.\\ \vspace{-0.1cm} We present the performance comparison results in Table~\ref{cianna_vs_keras} for 4 network architectures that are variations around the one we used originally for MNIST. We used the same architecture description as in Section~\ref{cnn_architecture_test}. Before analyzing the results we note that: (i) all the convolutional layers use an external padding that preserves the image size between the input and the output, but that is not displayed to increase the readability, (ii) Keras performance metric are the time in second for an epoch, or the time in millisecond for a batch, both not being very accurate to compute the number of items used per second during training, which is the metric used in CIANNA. From these results we observed that CIANNA is most of the time faster for the considered architectures. For the dense-only network (N°2) CIANNA is even twice faster than Keras. The other architectures tend to confirm a general trend of CIANNA being much faster for dense layers, and Keras being significantly faster for the convolutional layers. 
The latter effect is expected considering that Keras/TensorFlow relies on the cuDNN closed framework from Nvidia that is strongly optimized for convolution, using dedicated kernels that was specifically designed for the task. \\ We also noticed a few interesting behavior during training. First, the Keras framework fully utilizes the GPU memory while CIANNA only uses a third of it, and does not suffer of important performance impact when using the dynamic load that allows to use only a few hundred MB on the GPU. Concurrently, we observed that the Nvidia monitoring tools report a $50$ to $70\%$ GPU utilization when using Keras, while the GPU utilization is always saturated at $100\%$ with CIANNA. We note that many behaviors of the frameworks can be affected by the present P2000 architecture and what are its inherent bottlenecks. For example we noticed that CIANNA as well does shows a $\sim 75\%$ GPU utilization on the V100 GPU. Still, this difference highlights that Keras must be mostly memory bound, while CIANNA remains compute bound for at list a significant subset of its operations. For this reason we expect that much deeper convolutional architectures will certainly train faster with Keras than CIANNA. Still, our results here are sufficient to predict that CIANNA will not be far behind in computational time, and that it will be as good or better on large networks with a more balanced architecture. \begin{table}[!t] \vspace{-0.5cm} \caption{Performance comparison CIANNA vs Keras/TensorFlow} \vspace{-0.1cm} \small \def1.1{1.6} \hspace{-0.6cm} \begin{tabularx}{1.1\hsize}{ l @{\hskip 0.06\hsize} *{2}{Y} c@{\hskip 0.04\hsize} *{2}{Y}} \toprule \toprule \multirow{2}{*}{\textbf{Network architecture}} & \multicolumn{2}{c}{\textbf{CIANNA}} & & \multicolumn{2}{c}{\textbf{Keras}}\\ \cmidrule[1pt](r){2-3} \cmidrule[1pt](r){5-6} & Time [s] & Object/s & & Time (s) & Object/s\\ \toprule \makecell[cl]{1. I-28.28, C-6.5, P-2, C-16.5, P-2, C-48.3,\\ D-1024\_0.5, D-256\_0.2, D10} & 226 & 11360 & & 241 & $\sim$9900\\ \midrule \makecell[cl]{2. D-1024\_0.5, D-1024\_0.5, D-256\_0.2, D10\\ \hfill } & 63 & 41000 & & 144 & $\sim$18700\\ \midrule \makecell[cl]{3. I-28.28, C-6.5, P-2, C-16.5, P-2, C-48.3,\\ D-256\_0.2, D10} & 185 & 14000 & & 208 & $\sim$12800\\ \midrule \makecell[cl]{4. I-28.28, C-6.5, P-2, C-16.5, P-2, C-48.3,\\ C-96.3, D-1024\_0.5, D-256\_0.2, D10} & 339 & 7540 & & 324 & $\sim$7800\\ \bottomrule \bottomrule \end{tabularx} \label{cianna_vs_keras} \vspace{-0.2cm} \end{table} \subsection{Future improvements} Presently, the objectives for the development of CIANNA are the addition of the "mixed precision" support to be used with Nvidia tensor cores, which should lead to a huge performance boost of the framework when using modern GPUs. We also want to include multi GPU support, at least on a single cluster node. Overall, we aim at improving the memory loading scheme since we were limited by the host memory usage in our extinction map application. Just like we added dynamic load to the GPU we would like to serialize the data loading from the permanent memory into the host RAM memory with a minimum performance impact overall and allow the possibility of dynamic data augmentation that is especially useful when working with images. Another major improvement would be to generalize CIANNA to handle inputs that present more than 2D spatial coherency, so we could have several 3D cubes as input "channels". We would also like to increase the diversity of weight initialization, activation functions and layers. 
We would like to create more high-level functions to handle some cases like GAN, Genetically trained ANN, semi-supervised learning including clustering, etc. Finally, we are looking forward to automatize more complex layer connections like in Recurrent Neural Network, or Residual Neural Networks, or even blocks of layers like in the Inception architecture. \part{Context} \newpage \null \thispagestyle{empty} \newpage \etocsetnexttocdepth{4} \etocsettocstyle{\subsection*{Part I: Context}}{} \localtableofcontents{} \newpage \null \thispagestyle{empty} \newpage \section{Milky Way 3D structure} In this section we introduce astronomical knowledge that is relevant to understand the context of the present study. We noticeably describe the presently admitted view of our galaxy the Milky Way along with a few order of magnitude for the useful astronomical objects and quantity. We describe the expected structural information of the Milky Way and highlight its support from observational constrains. In a second time we expose some properties of the interstellar medium, summarizing its link with the stars in the galaxy. We end by describing the extinction from the ISM and its link with the structural information of the Milky Way. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \subsection{Review of useful properties of the Milky Way} \subsubsection{The only galaxy that can be observed from the inside} A natural beginning of a work on the Milky Way galaxy structure would be to define what a galaxy exactly is. Still, the presently accepted definition is not that old. In the 1920's, two visions of our place in the universe was opposed in what was latter called the "Great debate". The argument was mostly opposing the two astronomers Harlow Shapley and Heber D. Curtis. The former defended the thesis that every astronomical object observed and especially what they called distant spiral nebulae was part of our Milky Way, so these nebulae must be close and small. The second one in contrast, argued that these objects were very distant and very large and were likely to be external galaxies that look very alike our own Milky Way of billions of stars. They published a common paper containing two parts where each of them exposed their arguments and that was titled \textit{The Scale of the Universe} \citet{shapley_curtis_1921}. The difference in physical scale between the two points of view was of several orders of magnitude, illustrating how much was remaining to understand just 100 years ago.\\ A few years later another famous astronomer, Edwin Hubble, published a paper that estimated the distance of these spiral nebulae based on the known absolute magnitude of Cepheid variable stars \citep{hubble_1926}. The distances he found are today known to be significantly underestimated, still they were already large enough distances to support the thesis defended by Curtis that these nebulae were very large, very massive, distant structures. Later, he published the study that gave birth to the Hubble law and that correlates the distance and radial velocity of other galaxies with their reddening \citep{Hubble_1929}, which is now understood as a cosmological effect of the universe expansion. 
The known scale of the universe had drastically changed in a few years.\\ \begin{figure*}[!t] \hspace{-1.4cm} \begin{minipage}{1.18\textwidth} \centering \begin{subfigure}[!t]{0.32\textwidth} \includegraphics[width=1.0\hsize, height=1.0\hsize]{images/m74_gemini_big.jpg} \end{subfigure} \begin{subfigure}[!t]{0.32\textwidth} \includegraphics[width=1.0\hsize, height=1.0\hsize]{images/Hubble2005-01-barred-spiral-galaxy-NGC1300.jpg} \end{subfigure} \begin{subfigure}[!t]{0.32\textwidth} \includegraphics[width=1.0\hsize, height=1.0\hsize]{images/NGC-7793.jpg} \end{subfigure} \end{minipage} \caption[Spiral galaxy examples]{Examples of spiral galaxies that present different detailed morphology. From left to right, \href{https://www.gemini.edu/gallery/media/perfect-spiral-m74}{NGC 628 (M74)}, a grand design spiral galaxy (SA(s)c) observed by the 8.1-meter North Telescope of the Gemini Observatory , \href{https://apod.nasa.gov/apod/ap200611.html}{NGC 1300} a barred spiral galaxy (SB(s)bc) observed by the Hubble space telescope, and \href{http://annesastronomynews.com/annes-image-of-the-day-spiral-galaxy-ngc-7793/}{NGC 7793} a flocculent galaxy (SA(s)d) observed by the ESO Very Large Telescope (VLT).} \label{spiral_gal_morphology_comparison} \end{figure*} We do not aim at making an historical overview of the astronomical knowledge about galaxies here, but this story illustrates the fact that knowing the physical scale and boundary of our own galaxy was a tricky question at this time \citep[e.g][]{Kapteyn_1922}. The currently accepted view of a galaxy is a system of stars, dust, gas, and dark matter that is gravitationally bound. Their size can vary from a few kpc to more than 100 kpc and their mass estimates are mostly between $10^5$ and $10^{13}$ $\mathrm{M_\odot}$ based on rotation curves. Galaxies have many different forms, as described by \citet{Hubble_1936} and successively refined in \citet[][,...]{DeVaucouleurs1959, DeVaucouleurs1991, Lintott2008}, and the most noticeable ones for the present study are spiral galaxies. Figure~\ref{spiral_gal_morphology_comparison} shows three typical observed galaxies of this type (NGC 628 - M74, NGC 1300 and NGC 7793) with a face-on view of their plane spanned by spiral-shaped arms that start from the bulge and coil out progressively. This figure also illustrates the large variety of possible spiral structures, with a variable number of arms and a center that can be a roughly-spherical bulge or in other cases an elongated bar. There is also an opposition between two views of galaxy structures: (i) the grand-design view that corresponds to very well resolved narrow spiral arms at large scale which could be the case of M74 in the left frame, and (ii) the flocculent view of more sub-structured galaxies with sub-arms, arm discontinuities, bridges between them, and that does not always follow the expected spiral shape as illustrated in the left frame with NGC 7793. Most of the star formation is believed to occur in the arms \citep{Solomon_1989, Salim_2010} even if the galaxy mass is mostly evenly sprayed over the whole disk \citep{McMillan_2017}. This disk is rotating around the central bulge or bar region that hosts a supermassive black hole for most galaxies \citep{Heckman_2014} and concentrates an important fraction of its mass. This global structure of a galaxy is expected to come from their formation process that started quickly in the early stages of the universe and that is still going on today \citep{Freeman_2002}. 
While there are multiple views on which process dominates the galaxies formation, it is mostly accepted that there is a gravitational collapse of matter at large scale \citep{Cooper_2010} and that the rotating accreted matter speeds up with the decrease in the structure size creating a flatten disk shape structure \citep{Brook_2004}. Interestingly, there are still to this day a lot of unknowns about our home galaxy structure, size, mass, detailed 3D distribution of star, etc \citep{Bland-Hawthorn_2016}. We are presently in an uncomfortable situation, where we know more about other galaxies large-scale structures that are far way from us, than about our own galaxy structure. This is due exactly to the fact the we are part of this galaxy. While it is possible to see other galaxies face-on, our own galaxy obscures itself since we are inside the galactic plane and relatively far away from the center. \\ \begin{figure}[!t] \centering \includegraphics[width=0.70\hsize]{images/RHurt_MW_artistic_view_grid.jpg} \caption[Artistic view of the Milky Way]{Most common artistic face-on view of the Milky Way that illustrates the expected Milky Way bulge and arms. \textit{ From} \citet{Hurt_2008}.} \label{milky_way_hurt} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1.0\hsize]{images/Gaia_DR2_milky_view_color.jpg} \caption[Gaia DR2 view of the Milky Way in the plane of the sky]{Gaia DR2 view of the Milky Way in the plane of the sky. The image is not a photograph, but a map of the 1.6 billion star brightness in the survey. The image is encoded using the Gaia magnitude bands following, Red: $\mathrm{G_{RP}}$, Green: G and Blue: $\mathrm{G_{BP}}$. {\it From } \citet{Gaia_Collaboration_2018_global}} \label{milky_way_Gaia_DR2} \end{figure} \newpage The most used Milky Way representation is the one presented in Figure~\ref{milky_way_hurt} from \citet{Hurt_2008}. Despite the fact that this view was constructed based on some observations and on strong theoretical knowledge from the observation of other galaxy structures, it remains mostly an artistic representation that is strongly underconstrained. This view conveys the idea that this is the present state-of-the-art astronomical knowledge of our galaxy structure, even though only sparse and heterogeneous observational evidences are available to this day. What can truly be observed from our standpoint looks like the Figure \ref{milky_way_Gaia_DR2} that contains all the observed stars from the Gaia DR2 mission that we will describe in Section~\ref{modern_large_scale_surveys}. This view illustrates the difficulty caused by our position inside the Milky Way. A great thought experiment that we got from a colleague, is to picture the Milky Way as an expanded forest. Once inside, it is possible to see through a tenth of meter depending on the tree density, but at some point the accumulation of trees and vegetation with the distance makes the view opaque. It is therefore impossible to properly assess the size of the forest from the inside. In this view the trees correspond to the stars and the most diffuse vegetation to the gas and dust distributed in the Milky Way disk. From this it is more clear why it is difficult to reconstruct the Milky Way large-scale structure. 
Still, it remains a favored position to study the interstellar medium and the stars themselves.\\ \vspace{-0.7cm} \subsubsection{Expected structural information} \label{milky_way_structure} In the present section we summarize some of the Milky Way (hereafter MW) properties based on the present knowledge \citep[mostly following][]{Bland-Hawthorn_2016} in order to contextualize the present study. The MW is a rather evolved galaxy that has a decreasing star formation and does not present traces of important merger history. It is usually classified as a Spiral Sb-Sbc galaxy, and most of the representations account for 4 spiral arms. Most measurements predict a stellar mass around $5 \times 10^{10}\, \mathrm{M_\odot}$ and total galactic mass from the large dark matter halo around $\sim 1.5 \times 10^{12}\, \mathrm{M_\odot}$. The stellar disk radius is often estimated at 10 kpc and is often separated in two stellar populations, one from a thin disk with a scale height estimated around $\mathrm{z^t} \simeq 300$ pc and an older one from a thick disk with a scale height $\mathrm{z^T}\simeq 900 pc$ depending on the study, both presenting a flaring, i.e. an increase in height scale with galactic radius. The Sun position is estimated at around 8 kpc from the center of the MW and roughly positioned at an elevation of $\mathrm{z_0}\simeq 25 pc$. At the center of the MW is a super massive black hole named Sagittarius $\mathrm{A^*}$ for which the mass is estimated at approximately $4 \times 10^6\, \mathrm{M_\odot}$, around which a nuclear star cluster is found. They are themselves embedded in an X-shaped (or peanut-shaped) bulge structure that is $\sim 3$ kpc long and with a scale height of $\sim 0.5$ kpc \citep{Robin_2012}. There is then a ``long bar'' or ``thin bar'' region that extend after the bulge up to a 5 kpc half-length \citep{Wegg_2015} but with a quickly decreasing height profile with a mean $180$ pc scale height.\\ The previous elements are considered to be the main components of the Milky Way and provide an accurate global representation based on relatively well-constrained observations. Then the arm structures are more difficult to constrain since they are mostly defined by their higher luminosity or peculiar stellar population and do not represent strong star over-density in stellar mass \citep{Salim_2010,McMillan_2017}. They are proposed to be self-propagated compression waves created by the differential rotation of the galaxy. In this model an arm is a spiral-shaped local compression that triggers star formation and propagates through the galactic plane, which explains that the star velocities do not match those of the arms \citep{Shu_2016}. This process triggers intense star formation episodes, where massive stars are more likely to form than in other regions of the galaxy, highlighting the spiral arm shape. Since these massive stars have a very short life-time, they are gone soon after the passage of the wave, inducing relatively narrow spiral arm structures. As we will expose in Section~\ref{stellar_formation_dense_environment}, stars form from dense cloud compression, therefore the arm structures are also traced by dense interstellar environment which will be at the center of the present study. 
Overall, in contrast with what Figure~\ref{milky_way_hurt} support, the MW arms are mostly under relatively weak observational constrains at these day which is discussed in Section~\ref{obs_constraints_MW}\\ \vspace{-0.5cm} \subsection{The interstellar medium} \subsubsection{The bridge between stellar population and interstellar medium} \label{stellar_formation_dense_environment} The main components of galaxies are stars, but they evolve in a more diffuse matter environment, the Inter Stellar Medium (ISM), to which they are bound through a complex interplay. Mainly, the ISM is a mixture of gas and dust with a huge diversity of states and detailed composition. Overall the mass of the ISM is divided as $70.4\%$ Hydrogen, $28.1\%$ Helium, and $1.5\%$ of heavier atoms, almost all of it being in a gas state with less than $1\%$ of the mass of this matter being in the form of solid dust grains \citep{Ferriere_2001}. This matter is distributed very heterogeneously in the galactic environment, from very warm diffuse ($T_K > 10^5\, \mathrm{K}$ and $n < 0.01\, \mathrm{cm^{-3}} $) and almost transparent large-scale structures to very dense and cold structures ($T_K \sim 10\, \mathrm{K}$ and $n > 10^3\, \mathrm{cm^{-3}} $) at much smaller scales with a continuum of structures between the two, including for example interstellar molecular filaments.\\ The ISM evolution is determined by the complex interplay between the magneto-hydrodynamics laws, which describe how the gas flows in the galaxy, gravity and self-gravity, which contribute to shaping and compressing the gas at all scales, as well as a number of processes related to stellar evolution, like the propagation of supernova shock waves, the gas heating by photoelectric effect on dust grains or gas ionization by stellar ultra-violet (UV) flux. The ISM represents a significant portion of the mass of the Milky way, equivalent to around 10 to 15\% of the total stellar mass. This is known to be the matter from which stars form as explained in detail, for example, by \citet{McKee_2007}, \citet{Kennicutt_2012}, and references therein, and described briefly here. From the proportions reported above, it is visible that the MW has already converted most of its gas into stars. Under the combined effects of dynamics and gravity the interstellar medium will contract hierarchically creating dense clouds. At some point they will become optically thick and their inside will get cooler by preventing the ambient UV light from stars to penetrate the cloud deeply. The low temperatures enable a complex chemistry catalyzed by dust grains that allows the creation of larger molecules and lets dust grain themselves grow in size, which changes the optical properties of the densest structures. The definition threshold of a dense cloud is a tricky question that is often solved using a certain amount of CO emission or by the dust reddening amount that is much higher in dense clouds. If the cloud is massive enough so that gravity is stronger than the gas support (kinetic energy, turbulence, magnetic field), it will collapse gravitationally, starting the formation of a protostar. It ultimately leads to the formation of a star that is supported by nuclear fusion in its core, i.e. a main-sequence star. The steps of star-formation, from the gravitational collapse to the main-sequence star are described where they are useful in Section~\ref{yso_def_and_use}.\\ The important point here is that stars are formed through the collapse of the dense interstellar medium. 
Once formed, the stars will progressively get away from their original structure, so that identifying Young Stellar Objects (YSO) that did not have time to move too much is a suitable way to reconstruct large-scale dense-cloud structures that are massive enough to form stars (Sect.~\ref{3d_yso_gaia}). A large part of the present study, namely the part II, is dedicated to the construction of a YSO identification method that is a prerequisite of the previous approach.\\ \vspace{-0.5cm} More generally the link between the stars and the ISM is not one-sided since there are a lot of feedback from the stars on the ISM, of which we give some examples. First the star light warms and ionizes the ISM but also breaks any complex molecule or even evaporate dust grains. In addition, any element other than hydrogen and helium originate from the stellar nucleosynthesis and are dispersed after the star end of life. The supernova explosions that ends the life of massive stars ($M > 8 \mathrm{M_\odot}$) play a major and ambiguous role in the ISM evolution. This phenomenon is known to inject a very significant mechanical energy to the ISM, which can blow away dense structures and enrich it with new elements that are only formed during such energetic events. It can also have the opposite effect and induce compression waves on the ISM, leading to triggered star formation \citep[e.g][]{padoan_2017}.\\ At large scales, the ISM is shaped by the global dynamic of the Milky Way. Indeed larger ISM structure have been observed to follow the spiral arms in other galaxies \citep[e.g][]{Elmegreen_2003} and in simulations \citep{Bournaud_2002} with the more local structures getting their energy mostly from gravitational instabilities from the spiral arms that drive the turbulent regime and by inward mass accretion \citep{Bournaud_2010}. This explains why the large-scale structures of galaxies can be traced using the distribution of the dense ISM. Still, smaller scales of the local ISM distribution are made more complex by various feedback effects like supernovae \citep{Hennebelle_2014} that are important to account for the flocculent substructures in galaxies, as it is illustrated by the Figure~\ref{milky_way_Gaia_DR2} to explain higher latitude dense clouds, or again in the right frame of Figure~\ref{spiral_gal_morphology_comparison}. \subsubsection{Interstellar medium extinction and emission} \label{intro_extinction} While the ISM can be studied through its several interactions with stars, it is possible to perform more direct detection of the ISM structures. The first observable effect, even by the human eye, is the screening effect of the background stars by the interstellar clouds as can be observed in the Milky Way plane in Figure~\ref{milky_way_Gaia_DR2}. This is due to an effect of the ISM called extinction and that sums two physical effects: the absorption and the scattering of the light by the matter in the light path. For astronomical considerations this effect induced predominantly by interstellar dust grains \citep[][and reference therein]{Draine_2003}. This extinction is usually characterized by the quantity $A_\lambda$: \begin{equation} A_{\lambda} = 2.5 \log \Bigg(\, \frac{F^0_{\lambda}}{F_{\lambda}}\, \Bigg) \label{basic_ext_eq} \end{equation} where $F^0$ is the luminosity flux before the clouds, $F$ is the flux after the cloud and $A_\lambda$ is the total extinction at the wavelength $\lambda$. An important point is that all these quantities depend of the wavelength of the light. 
This is due to the dependence of scattering to the ratio between the wavelength and the grain size, while the dust absorption spectrum also depends on the grain composition. Therefore, the extinction strongly correlates with the dust grain size distribution in the ISM defining what is called an extinction curve, or extinction law (Fig.~\ref{extinction_curve_fig}). It was exposed by \citet{Cardelli_1989} and refined by \citet{fitzpatrick_correcting_1999} that it is possible to parametrize this law using a single dimensionless parameter $R_V$ that is expressed as: \begin{equation} R_V = \frac{A_V}{E(B-V)} \label{extinction_curve_eq} \end{equation} where $A_V$ is the extinction in the V band ($\lambda = 550$\, nm, $\Delta \lambda = 88$\, nm), and $E(B-V)$ is the reddening (or selective extinction) between the B ($\lambda = 445$\, nm, $\Delta \lambda = 94$\, nm) and the V bands, defined as $E(B-V) = A_B - A_V$. \newpage This reddening is an important aspect of the process since it corresponds to an effective shift of the apparent color of the observed stars under the effect of extinction. We show the shape of the typical extinction curves for different values of $R_V$ in Figure~\ref{extinction_curve_fig}. In first approximation, the dust grain composition and size distribution in the diffuse interstellar medium is globally constant across the Milky Way. This leads to a rather constant extinction law in this medium, although significant variations are observed, mostly toward dense molecular gas \citep[e.g.][and references therein]{Schirmer_2020}. These variations are generally well parameterized by a single parameter \citep[$R_V$, Eq.~\ref{basic_ext_eq}, Fig.~\ref{extinction_curve_fig}][]{Cardelli_1989}, although more complex variations were reported toward the Galactic Center \citep{Nataf_2016}, and $R_V$ appears to vary even on large galactic scales for the diffuse ISM \citep{Schlafly_2016}. From this observationally constrained law, we see that shorter wavelengths are much more affected by extinction than the longer ones. This is the cause of the reddening of the observed light.\\ One of the most important aspects of extinction is that it is an integrated quantity over the full light path from the emitting astronomical object down to the observer. However, since the extinction quantity is characteristic of the amount of dust it can be used as a probe of the dense regions of the Milky Way. There are noticeable relations between the extinction and the column density of atomic or molecular gas. The main difficulty is then to reconstruct the distance distribution along a given line of sight, called an extinction profile. The reconstruction of the 3D distribution of the extinction in the MW is a powerful approach as it would directly map the 3D distribution of the dense ISM. This is theoretically a suitable method to provide constraints on the spiral arms of the MW that we described as underconstrained in Section~\ref{milky_way_structure}. The second half of the present study, namely Part III, is devoted to a new approach to reconstruct large-scale 3D extinction maps with a large distance range based on multiple observational surveys. More details on existing studies about this approach and their implications are given in the corresponding part introduction section \ref{ext_map_first_section}.\\ Finally, another ISM observable that is used in this work is dust emission. 
The heating of dust grains via the absorption of star light is balanced, in average, by their cooling due to a continuous thermal or stochastic emission at infrared (IR) wavelengths. We note that in very dense environments, collisions can also become a heating process. The typical wavelength range of dust emission is between $ 1 < \lambda < 10^3 \, \mathrm{\mu m}$ with various contributions induced from the diverse populations of dust grains. Figure~\ref{dust_emission_fig} shows the typical dust emission as a function of the wavelength, separating different grain population contributions as modeled by \citep{Compiegne_2011} along with observational constraints. We illustrate the use of dust emission in Figure~\ref{planck_optdepth_skyplane} that shows the reconstructed dust optical depth at 353 GHz based on a modified black body fitting of the dust emission observed by the Planck space telescope \citep{Planck_2016}. This map will noticeably be used to perform morphology comparison of the dust distribution over the plane of the sky in Sections~\ref{2mass_maps_section} and~\ref{gaia_2mass_ext_section}. The dust emission can also be used to distinguish different early-protostar stages. Indeed, stars begin their formation embedded into dense envelopes that are heated by the protostar that is then visible in the spectral energy distribution (SED) of the object. In subsequent stages, the envelope is evaporated and a dusty emitting disk remains. These emission properties are used in Section~\ref{intro_yso} as a tracer for YSO classification in the infrared. \\ \begin{figure}[!t] \centering \includegraphics[width=0.86\hsize]{images/draine_extinction_curve_from_fitzpatrick.png} \caption[Extinction curve variation as a function of $R_V$.]{Extinction curves based on the prescriptions from \citet{fitzpatrick_correcting_1999} for different values of $R_V$. {\it From} \citet{Draine_2003}.} \label{extinction_curve_fig} \end{figure} \begin{figure}[!t] \vspace{2.50cm} \begin{minipage}{1.0\textwidth} \centering \includegraphics[width=0.94\hsize]{images/2MASS_skyview.png} \end{minipage} \caption[2MASS view of the Milky Way in the plane of the sky]{2MASS view of the Milky Way in the plane of the sky. The image is encoded using the 2MASS magnitude bands following, Red: J, Green: H and Blue: $\mathrm{K_s}$. {\it From } \citet{skrutskie_two_2006}} \label{2MASS_skyplane} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.84\hsize]{images/dust_emission.png} \caption[Dust emission as a function of wavelength]{Typical observed dust emission for the diffuse interstellar medium at high-galactic latitude for a given $N_H = 10^{20} \mathrm{H\,cm^{-2}}$. The mid-IR ($\sim 5-15\, \mathrm{\mu m}$) and far-IR ($\sim 100-1000\, \mathrm{\mu m}$) spectra are from ISOCAM/CVF (ISO satellite) and FIRAS (COBE satellite), respectively. Squares are the photometric measurements from DIRBE (COBE). The continuous line is the DustEm model prediction. {\it From} \citet{Compiegne_2011}.} \label{dust_emission_fig} \end{figure} \begin{figure}[!t] \centering \vspace{-0.5cm} \includegraphics[width=1.0\hsize]{images/planck_tau353.jpg} \caption[Planck dust opacity view of the Milky Way in the plane of the sky]{Planck dust opacity at 353 GHz of the Milky Way in the plane of the sky, as fitted from a modified black body based on dust emission. 
{\it From } \citet{Planck_2014}.} \label{planck_optdepth_skyplane} \end{figure} \clearpage \subsection{Observational constraints on the Milky Way structure} \label{obs_constraints_MW} \begin{figure}[!t] \centering \includegraphics[width=0.65\hsize]{images/RBenjamin_milky_way_annotated.jpg} \caption[Milky Way illustration with observational constraints]{Artistic representation of the Milky Way annotated with compiled effective knowledge from 2014 on the spiral arms based on observational constraints. Each structure is associated with a reference publication. \textit{This image is taken from }\citet{Benjamin_2014} \textit{which adapted it from} \citet{Hurt_2008}.} \label{milky_way_annoted} \end{figure} We show in Figure~\ref{milky_way_annoted} a carefully annotated version of the artistic face-on view made by \citet{Benjamin_2014} and that represents a census of the existing published constraints on each expected spiral arm structure of the MW a few years ago. This figure puts the emphasis on the fact that there is a significant portion of the Milky Way structure that is not constrained due to its position behind the Galactic Center. Here we describe some of the present existing work that have added constrains on the Milky Way structure. One of the oldest method to infer the galaxy spiral arms existence and position has been to measure the atomic hydrogen HI emission \citep{van_de_Hulst_1954}. Since HI is already mainly presents in ISM clouds that follows the global Milky Way structure, it is possible to use it to reconstruct roughly the galactic structure \citep{Kalberla_2009}. HI is observed through its hyperfine transition that emits at a 21 cm wavelength at which the interstellar medium is mostly transparent, granting the possibility of high distance measurements. The main approach is then to use the Doppler frequency shifting to reconstruct the velocity of coherent structures in the spectra. With some assumptions on the Milk Way circular geometry it is noticeably possible to reconstruct the arms tangent position in order to reconstruct the galactic rotation curve. HI data were also used to infer the position of some galactic arms or substructure \citep{McClure-Griffiths_2004}, but it remains too diffuse to highlight very strongly a global spiral structures.\\ \begin{figure*}[!t] \hspace{-1.8cm} \begin{minipage}{1.25\textwidth} \centering \begin{subfigure}[t]{\textwidth} \includegraphics[width=1.0\hsize]{images/Dame_figure.pdf} \end{subfigure} \begin{subfigure}[t]{0.99\textwidth} \includegraphics[width=0.97\hsize]{images/Dame_figure_zoom.jpg} \end{subfigure} \end{minipage} \caption[Longitude-velocity map using CO(J=1-0) emission]{Longitude-velocity map of CO(J=1-0)) integrated for $|b| < 4$ and centered on the galactic plane. The map resolution is 2 $\mathrm{km.s^{-1}}$ in velocity and $12'$ in galactic longitude. The \textit{bottom} frame is a zoom on the annotated version of the map. \textit{From }\citet{Dame_2001}.} \label{dame_figure} \end{figure*} Another suitable tracer of much denser ISM environments and that can be used in the same manner is the CO molecule. Its rotational transition line at 115 GHz is consider as easy to observe and CO is present in every molecular clouds, allowing for very complete detection. The study from \citet{Dame_2001} has been a reference in the identification of the galactic structures. 
From their observations they reconstructed a longitude velocity map of the Galactic Plane integrated over a $\pm 4^\circ$ latitude range, which is presented in Figure~\ref{dame_figure}. From this map they identified what could correspond to the spiral arm structures as illustrate in the bottom frame of the figure. With this approach it remains difficult to disentangle structures in the central region and behind. It also relies on the assumption that it is effectively possible to separate the arms in the velocity space, which might not always be the case especially if we consider the existence of bridges, gaps, and overall less continuous structures in the Milky Way.\\ \begin{figure}[!t] \centering \includegraphics[width=0.71\hsize]{images/GLIMPSE_arms.png} \caption[Galactic plane source-count in the infrared from Spitzer]{Number of sources per $\mathrm{deg^2}$ as a function of galactic longitude using the 4.5 $\mathrm{\mu m}$ band of Spitzer, and the J, H, $\mathrm{K_s}$ bands from 2MASS. The vertical lines highlight the predicted position of some galactic arms. \textit{From }\citet{Churchwell_2009}.} \label{GLIMPSE_arms} \end{figure} An other approach performed by \citep{Benjamin_2005, Churchwell_2009} using the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) performed with the Spitzer Space Observatory \citep{werner_spitzer_2004}, was to compare the observed star count in the galactic plane with the expected exponential disk population. Higher star count, especially from specific stellar population, can trace the tangent of the arms as illustrated in Figure~\ref{GLIMPSE_arms}. However, even if Spitzer uses infrared wavelength to observe through relatively dense environments, the star count remains affected by the extinction in a complex manner. Therefore, it requires an accurate extinction prescription or to choose carefully not too much extincted lines of sight.\\ \begin{figure*}[!t] \hspace{-1.4cm} \begin{minipage}{1.18\textwidth} \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=1.0\hsize]{images/reid_masers.png} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=0.95\hsize]{images/hou_and_han.png} \end{subfigure} \end{minipage} \caption[Discrete catalogs for Galactic arm fitting]{Galactic structure from discrete object distribution using face-on views. On both images the Galactic Center is at the $X=0$,$Y=0$ coordinates and the Sun is around $X=0$,$Y=8$. {\it Left}: Maser with parallax observed using the VLBA, VERA and the EVN, from \citet{Reid_2014}. The color groups were made from velocity-longitude association, then the continuous lines are log-periodic spiral arms fitted using these groups. {\it Right}: HII regions collected by \citep{hou_and_han_2014}, their best 6-arm model fit on these regions is added in gray and associated to the usual arm names.} \label{punctual_dections_structure} \end{figure*} More discrete objects can also be used to reconstruct the galactic structures like HII regions, Giant Molecular Clouds (GMC) or masers. The latter are associated with young and massive stars that will therefore be present in active dense star forming regions, likely following the arms. They were used by \citet{Reid_2014} in combination with parallax measurements in order to add constraints of the portions of the arms that are relatively nearby as presented in the left frame of Figure~\ref{punctual_dections_structure}. From these results it is visible that for many arms the continuity is not straightforward. 
Therefore, it is only used to constrain an expected model and not to confirm its realism. In a similar fashion, a study from \citet{hou_and_han_2014} compiled more than 2500 HII regions and 1300 GMCs from the literature. They used the existing distance estimates when there was one and computed one from existing Milky Way rotation curves. This statistic allows them to try to fit various structures at a galactic scale, testing arm counts, logarithmic spiral arms, polynomial arms, influence of the rotation curves selection, etc. The right frame of Figure~\ref{punctual_dections_structure} shows their HII regions distribution along with their best fitting model re-associated to usual arm names. One drawback of this approach is that these types of regions are not always expected to follow the arms very tightly. Additionally the distance estimates for most regions still have an important uncertainty, and the one for which a rotation curve was used relies on its quality and could be biased.\\ Finally, another approach relies on the reconstruction of the extinction distribution in 3D in the Milky Way. This is the approach that will be explored in the Part III of the present study. It mainly relies on the fact that the extinction (see Sect.~\ref{intro_extinction}) directly depend on the dust density. Therefore reconstruction the extinction as a function of the distance is directly equivalent to map the dense structures of the ISM. Since it is one of our main application we delay the detailed discussion on present state-of-the-art extinction maps and the associated difficulties to Section~\ref{ext_properties_part3} as an introduction to our own approach. Still for illustration the Figure~\ref{marshall_early_2006} shows the widely used map from \citet{Marshall_2006}.\\ We note that all the presented observational constraints does not allow to firmly state that the Milky Way would correspond to the grand-design structure of galaxies. The present detection would be very representative of a more flocculent design with a lit of inter-arm structures and much less continuous large scale arms structures overall. \begin{figure}[!t] \centering \includegraphics[width=0.50\hsize]{images/marshall_early_2006.png} \caption[Extinction map from \citet{Marshall_2006}]{Extinction distribution in the Milky Way using a face-on view of the Galactic Plane in longitude-distance coordinates. The Sun is at the center left and the Galactic Center is marked by the black cross. \textit{From }\citet{Marshall_2006}.} \label{marshall_early_2006} \end{figure} \clearpage \section{The rise of AI in the current Big Data era} In this section we describe some of the modern aspects about managing very large amounts of data. We start by highlighting some orders of magnitude that are becoming common for Big Tech companies. Then we draw a simple picture of the artificial intelligence usage and history in order to explain their recent and quick widespread adoption over the past few years. We will end by showing that the use of artificial intelligence has also grown in astronomy studies, and why they are becoming a must-have for recent and future paradigm-breaking large surveys. \etocsettocstyle{\subsubsection*{\vspace{-1.2cm}}}{} \localtableofcontents \vspace{-0.3cm} \subsection{Proliferation of data and meta-data} Data is raw information, usually in a numerical form, and that are uninterpreted. Data are acquired from an observation, or acquisition of some sort, or can sometimes be generated from other data. 
A simple example of a dataset would be a collection of words arranged in a certain way and stored. These data have a minimal intrinsic information content, for example just a number corresponding to a letter, but are usually assembled to create more complex information. Using the same example, the order of the words in the dataset might form a sentence that has an associated meaning. In addition, there is also meta-data, which are data about other data. Again with the same example, a meta-data would be the time and date the sentence was written, or the time it took to write it. This allows one to build context about the initial dataset.\\ In the current all-numerical information exchange era, tremendous amounts of data and meta-data are generated or exchanged continuously. Every click, message, image, etc., is stored at some point, and only rarely deleted after its purpose has been served. This growing usage of numerical data is also sped up by the Internet Of Things (IOT) trend that consists in adding numerical elements to everyday objects, which acquire and share even more data than ever. In an attempt to provide orders of magnitude, the global IP traffic in 2017 was estimated at 122 exabytes ($10^{18}$ bytes) per month, and projections predict a value of almost 400 exabytes a month for 2022 (\href{https://www.cisco.com/c/dam/m/en_us/network-intelligence/service-provider/digital-transformation/knowledge-network-webinars/pdfs/1213-business-services-ckn.pdf}{Cisco Annual Internet Report, 2018–2023}). Also, as much as 60\% of the global internet traffic is related to Video On Demand (\href{https://www.sandvine.com/}{Sandvine Global Internet Phenomena Report}). \subsection{Artificial intelligence, a not-so-modern tool} \subsubsection{Beginnings of AI} \label{ai_begin} Due to the explosion of AI applications and demonstrated use cases in the past two decades, AI methods are often considered to be modern methods that rely on very new technology. While we will provide a detailed definition of AI in Section~\ref{AI_definition}, we will state for now that it is a category of methods that learn to solve a problem autonomously, with no details given on the way to find the solution other than examples with the expected answer. For now, they can be seen as methods that learn from experience. The first research on what would progressively evolve into modern AI methods started in the 1940s. One of the first elements was the paper by \citet{mcculloch_logical_1943} that described a mathematical model of what would later be called an artificial neuron. For comparison, the first presentation of what would evolve into our modern computer paradigm had been made just a few years earlier by \citet{Turing_1937}, also a precursor of modern AI concepts. From these basic elements, the research was mostly focused on artificial neural networks at the time, but other methods that are today considered as a part of the AI field were also designed around those years. The term AI was apparently adopted in 1956 from a conference on the topic of ``making machines behave intelligently'' \citep{McCorduck_2004}. One big step further was the publication by \citet{rosenblatt_perceptron:_1958} that described a model for connecting binary neurons in the form of a rudimentary network based on the weighted sum of input signals, and that was named the Perceptron (see Sect.~\ref{sect_perceptron}). Interestingly, this publication was made in the journal ``Psychological Review''.
All the basic elements were in place for already very capable neural network predictions, and the following 20 years are known as the Golden Age of AI. During this time there were large financial investments in the new field and a profusion of ideas about AI methods and techniques, but also dangerous claims about how these methods would reach near-human performance in a matter of years. \subsubsection{End of 20th century difficulties and successes} A few years later, the publication by \citet{Minsky_1969} raised strong limitations of the Perceptron formalism of the time, leading to a long rejection of all methods based on what was called ``connectionism''. Overall, the difficulties mainly originated in the lack of computational power at the time to build models large enough to perform properly. During this time the AI research focused on other approaches. It is only in the early 1980s that neural networks started to gain support again, mostly based on the publication by \citet{Hopfield_1982} that defined a new way of connecting and training a network architecture, now called Hopfield networks. A few years later, the publication by \citet{rumelhart_learning_1986} was a game changer since it summarized recent advances on neural networks and described the backpropagation algorithm for training them. This is still to this day one of the most used methods, even if it has been improved with some refinements (see Sects.~\ref{neuron_learn} and~\ref{mlp_backprop}). Despite these important steps, funding agencies and companies lost their interest in AI as the field was not yet successful in providing the industrial-scale applications that it had promised several years earlier. The lack of computational power remained an issue, and the methods themselves required an amount of training data that was not accessible at the time. Still, during the following years many adjustments were made behind the scenes by researchers who pushed the methods to the point where they could truly accomplish large-scale applications. \subsubsection{The new golden age of AI} In the late 1990s and early 2000s, large datasets started to become available and the computational power of recent hardware started to reach a point that was compatible with AI techniques. An important mindset shift occurred in 1997 when the AI-dedicated system Deep Blue from IBM \citep[described in][]{Campbell_2002} defeated a world-class champion at chess. Compared to modern architectures, this machine was mainly performing a brute-force approach of decision tree comparison. Still, it was sufficient to shed new light on AI and on what the improvements of the previous two decades had led to. After that, large technology companies invested massively in AI, successfully applying it to problems that had been predicted to be solvable with these methods decades earlier, like speech recognition, industrial robotics, data mining, computer vision, medical diagnosis, etc.\\ At this point there was a very strong mutual interest from technology companies and AI researchers, which led to a very quick improvement of these methods' capabilities, supported by new dedicated hardware technologies (see Sect.~\ref{matrix_formal}). The ``Deep Learning'' field (see definitions in Sects.~\ref{mlp_backprop} and~\ref{conv_layer_learn}) is the result of these recent advances, generalizing the approach of the past decades, overcoming many of the exposed difficulties, and improving their numerical efficiency.
We also note that it is easier than ever to use these methods from a completely external standpoint. A large variety of state-of-the-art user-friendly frameworks and pretrained models are freely accessible, even if this may occasionally lead to some misuse (see Sect.~\ref{tool_boxes}). To this day, the AI field is still moving very fast and the corresponding methods are progressively becoming the only suitable solution to work with ever-growing datasets.\\ \subsubsection{Astronomical uses of AI} \label{astro_ia_use} Artificial Intelligence is becoming a common tool in other research fields, where it is able to process large amounts of data or learn complex correlations automatically from very high dimensionality spaces. For research outside the AI field itself, we rather use the term Machine Learning (ML), which is a subpart of the larger AI field that excludes many very specific tasks dedicated to reproducing realistic intelligence and cognition. ML is more focused on practical methods like regression, classification, clustering, etc., without aiming for high-level abstraction. This does not mean in any way that ML methods are less powerful, since they mostly rely on the same algorithms and architectures, but it is a switch in focus.\\ Like many other research fields, Astronomy began to use ML methods as an analysis tool a few years ago. Here we list a few works that had a significant impact in our community and that relied on ML methods. The specific field of external-galaxy analysis and classification adopted ML methods earlier than others. The famous Galaxy Zoo study from \citet{Lintott2008} provided an unprecedentedly large catalog of galaxies with morphological classifications performed by matching multiple human visual classifications of SDSS images for each of them, providing very accurate labels. This dataset became a widely adopted playground for ML applications that attempted to automate the classification in order to create a high performance classifier that could be used on new galaxies. Various methods have been employed for this, including support vector machines, convolutional neural networks, Bayesian networks, and more \citep[e.g.,][]{Banerji_2010, Huertas-Company_2011, Huertas-Company2015, Dieleman_2015, Walmsley_2020}. On another topic, ML methods can sometimes be used to reproduce a very well defined problem for which there is an analytical solution but that is slow to compute. This way, an efficient ML algorithm like a light neural network can be used to significantly speed up the prediction, or can even be used as an accelerator for a larger computation \citep{Grassi_2011, Mijolla_2019}. By extending the previous approach, it is possible to use ML methods to interpolate between predictions that are time-consuming to make, in order to provide a full parameter space predictor from a sample of examples \citep{Shimabukuro_2017}. A few other examples are: ISM structure classification with support vector machines \citep{Beaumont_2011}, molecular cloud clustering using the unsupervised Mean Shift method \citep{Bron_2018}, or differentiating ISM turbulence regimes using, again, neural networks \citep{Peek_2019}. This is a very incomplete view, since ML methods have become very common in astronomical studies over the last few years.\\ \subsection{Astronomical Big Data scale surveys} \label{astro_big_data} \subsubsection{Previous large surveys} Astronomical surveys are known to produce very large datasets.
Telescopes can produce very high resolution images, and point source catalog surveys are usually very large and present a high dimensionality. For example, the Apache Point Observatory Galactic Evolution Experiment survey \citep[APOGEE,][]{APOGEE_2017} contains $\sim 146000$ stellar spectra of resolution $R \sim 22,500$, which is a relatively small number of objects but with a very high dimensionality. At higher orders of magnitude, we can cite the Wide-field Infrared Survey Explorer \citep[WISE,][]{wright_wide-field_2010} that contains $\sim 5.6 \times 10^8$ point source objects and a few parameters (4 bands, 4 uncertainties, sky position, other meta-data, ...) for each object. This size of dataset begins to be difficult to analyze on modest hardware infrastructures, and is clearly out of the scope of a domestic computer for a full dataset analysis. The Two Micron All Sky Survey \citep[2MASS,][]{skrutskie_two_2006} is another widely used survey of similar size. At an even higher size scale there is the Spitzer space observatory \citep{werner_spitzer_2004} point source catalog that contains almost $1.5\times 10^9$ objects with barely fewer parameters than the previous two surveys. In the same size category we can cite the U.S. Naval Observatory - B1 \citep[USNO-B1,][]{Monet_2003} that also contains $\sim 1 \times 10^9$ objects. \\ Such dataset sizes become very difficult to handle with classical methods, even on more advanced hardware. Even for the smaller datasets we started with, Machine Learning methods could provide significant processing time improvements or new insights due to new ways of exploring the parameter space. For the larger surveys, ML approaches really start to shine as they provide analysis possibilities that are not accessible using more classical tools. Another advantage of ML methods is automation. Even if some tasks can be performed using carefully designed classical analysis tools, an ML method will either be able to find the most suitable approach by itself, granting even better results, or at least find a more efficient process to speed up the analysis. We discuss more deeply the advantages an ML approach can provide, even on ``relatively small'' datasets, in Section~\ref{yso_ml_motivation}. \newpage \subsubsection{A new order of magnitude with PanSTARRS and Gaia} \label{modern_large_scale_surveys} A recent very large scale survey is the Panoramic Survey Telescope and Rapid Response System \citep[Pan-STARRS,][]{Chambers_2016} that is estimated to contain 1.6 PetaBytes of data in the form of very high resolution images. Dealing with such a dataset is a true challenge, and using methods that are able to work efficiently on image processing is absolutely necessary to perform analysis on the full dataset (see Sect.~\ref{image_process_section}). Another very popular survey right now is the second data release of the Gaia mission \citep{Gaia_Collaboration_2018} that contains more than $1.6 \times 10^9$ stars with astrometric parameters and a large variety of other acquired quantities like magnitudes, kinematics, or post-processed quantities. This dataset is a challenge not only due to its size but also due to its raw data acquisition rate. Indeed, the spacecraft itself sends around 30 GB of raw data per day and the end-of-mission catalog is estimated to be around 1 PetaByte. Additionally, Gaia data go through an advanced processing to reconstruct most of the parameters from observed star movements over successive scans.
This process justified the creation of the Gaia Data Processing and Analysis Consortium (DPAC) that shares the data treatment, in terms of compute resources and manpower, across several European countries. The Gaia mission also justified the construction of large computer clusters fully dedicated to the mission data analysis and end-product storage. An interesting fact is that the intermediate results are so large and must be spread over so many storage nodes that the Gaia DPAC has adopted the Hadoop file system tool, which is otherwise mostly used by Big Tech companies for their country-scale servers. Using Machine Learning to perform the analysis of such a large dataset then appears both as a fantastic opportunity to extract unmatched problem complexity thanks to the huge statistics in the dataset, and as a necessity since many classical methods would scale too badly with such dataset sizes. We note that, due to the distribution of the intermediate data over several storage clusters, using ML to perform intermediate computations requires very tricky distributed model learning, as exposed in \citet{Hsieh_2017}. \subsubsection{The historical challenge of SKA and following surveys} Finally, we discuss succinctly the case of upcoming very large scale astronomical instruments. The most illustrative example is what is expected to be produced by the Square Kilometer Array radio telescope interferometer \citep{Dewdney_2009}. This instrument will have such a wide field of view and spectral bandwidth that it will output a total of 15 TB/s and is expected to produce 600 PetaBytes per year of at least partly pre-processed data. Currently, there is no detailed plan on how the data processing will be managed, since no existing analysis method in the astronomical community would have a sufficient computational performance. On the other hand, by the time the instrument is finished, in a few years, computer technologies are expected to have significantly improved. Still, Machine Learning methods are at the center of the attention of the SKA development teams since it appears that they have the most suitable design for this upcoming challenge.\\ Presently, several ML applications are being tested on SKA precursors to assess their capability to replace some pre-processing steps or to serve as a posterior analysis tool. Indeed, even if the SKA data generation were tamed, methods must be spread to the astronomical community to enable it to analyze the data as well. Finally, the SKA instrument will certainly not be the only one of this scale to be built in the upcoming years; therefore these methods are likely to become a necessary part of an astronomer's regular toolkit. \part{Young Stellar Objects classification} \newpage \null \thispagestyle{empty} \newpage \etocsetnexttocdepth{2} \etocsettocstyle{\section*{\vspace{-0.1cm}Part II: Young Stellar Objects classification}}{} \localtableofcontents \clearpage \section{Young Stellar Objects as a probe of the interstellar medium} \label{intro_yso} \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \subsection{YSO definition and use} \label{yso_def_and_use} Young Stellar Objects (YSOs) refer to a relatively wide range of protostars. As described in Section~\ref{stellar_formation_dense_environment}, stars form in dense molecular clouds through gravitational collapse.
In the presently accepted view \citep[for example, ][]{McKee_2007, Kennicutt_2012}, a typical low- to intermediate-mass YSO ($<8$ M$_\odot$) continuously accretes matter from its parent cloud through a disk that is quickly formed and partly maintained by its rotation. The disk progressively disappears due to its matter falling into the star because of friction, being photo-evaporated as the star becomes energetic enough to radiate more, or being used to form planets. Simultaneously, the star accumulates a sufficient amount of matter to lead to the start of the nuclear fusion of hydrogen nuclei in its core. A star for which the nuclear fusion is inactive or non-dominant in its energy production is a pre-main sequence star. YSOs correspond to all the steps from the gravitational collapse to the advanced stages of pre-main sequence stars. The formation sequence of massive stars is less well understood \citep{Motte_2018}, but it does not impact our study because the scarcity of those stars makes them less useful targets for the purpose of the present work.\\ Observing YSOs in stellar clusters and molecular clouds is a common strategy to characterize star forming regions. Their presence attests to star formation activity, their spatial distribution within a molecular complex provides clues about its star formation history \citep{Gutermuth_2011}, and their surface density can be used as a measure of the local star formation rate \citep{heiderman_star_2010}. The youngest YSOs are the most interesting ones since they are more likely to be very close to their formation point, while more evolved ones have had more time to drift from their original location. This was demonstrated, for example, by \citet{Hacar_2017} in the case of the low-mass star forming region NGC 1333 (see their figures 9 and 10) or by \citet{Buckner_2020} in the case of the massive star forming region NGC 2264. This drift is due to the various ways of acquiring a velocity different from that of the forming cloud, for example by interaction with other stars. For example, \citet{Stutz_2016} observed in Orion that the velocity of younger protostars is more coherent with their parent filament than that of more advanced protostars. Interestingly, they concluded that protostars might be ejected by a slingshot-like mechanism from their oscillating original filament. This would also imply the interruption of the accretion mechanism, impacting the stellar IMF (Initial Mass Function). Recently, YSOs have also been combined with astrometric surveys like Gaia to recover the 3D structure and motion of star-forming clouds \citep{grossschedl_3d_2018}. This shows that YSOs may indeed be used to reconstruct the Milky Way structure in 3D more globally, by combining large YSO catalogs and large astrometric surveys. There are two main difficulties with this approach: (i) YSOs are embedded in an envelope that progressively disappears during the protostar evolution, making them usually very faint in the bands that are used to perform astrometry; (ii) even using more suitable infrared observations, as described below, it is still difficult to construct sufficiently large catalogs. Therefore, it is necessary to find efficient identification methods that are able to work on large catalogs and that are sensitive enough to detect a large number of them. \subsection{YSO candidate identification} YSO identification is often summarized as a classification problem.
As they are cooler than more evolved stars and due to their dusty environment, YSOs are much simpler to detect using infrared wavelengths. Consequently, their classification relies mainly on their Spectral Energy Distribution (SED) in the IR, which allows one to distinguish evolutionary steps that range from the prestellar core phase to the main sequence. These steps were translated into classes ranging from 0 to III by \citet{Lada87} and \citet{allen_infrared_2004}, corresponding to the observed slope of the IR SED, characterized by its spectral index. Objects that present a black body spectrum in the far-IR, and that are quiet in the mid-IR, are called Class 0 (C0) and correspond to dense cores or deeply embedded protostars. Objects that present a black body emission in the mid-IR and a strong excess in the far-IR are called Class I (CI) and correspond to protostars dominated by the emission of an infalling envelope. Objects that present a black body emission in the mid-IR but with a flattened emission in the far-IR are called Class II (CII) and correspond to pre-main sequence stars with an emissive thick disk. Objects that present a black body emission in the mid-IR and are devoid of far-IR emission are called Class III (CIII) and correspond to pre-main sequence stars without disks, or with disks too faint to be detected. Figure~\ref{yso_sed} shows simplified typical SEDs for each class, along with an illustration of the corresponding star formation step. We note that this classification is sometimes further refined to include other sub-classes like a flat-spectrum class between CI and CII, a Transition Disk class between CII and CIII \citep{gutermuth_spitzer_2009}, or even a fainter class for deeply embedded CI YSOs \citep{megeath_spitzer_2012}. Overall, it is an efficient classification but it can still lead to misinterpretation for specific objects, like when a CII or CIII YSO is observed behind a thick cloud, leading to confusion with a CI. Similarly, when a CII YSO is observed edge-on, it is obscured by its disk and can be confused with a CI object. More subtle effects have also been identified that increase the confusion between pre-main sequence stars with a disk and flat spectrum objects \citep{Crapsi_2008}. Still, this classification is very efficient for statistical studies, like when studying the 3D ISM structure. This subclassification can then be used to provide additional information on the structure and evolution of star-forming regions, since the youngest (up to class~I) objects are more likely to be close to their formation position than more evolved YSOs.\\ \begin{figure}[!t] \centering \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{images/yso_sed_A.png} \end{subfigure} \vspace{0.2cm} \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{images/yso_sed_B.png} \end{subfigure} \vspace{0.2cm} \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{images/yso_sed_C.png} \end{subfigure} \vspace{0.2cm} \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{images/yso_sed_D.png} \end{subfigure} \caption[Simplified YSO SED for each class.]{Simplified YSO SED for each class. {\it Left}: The spectral energy as a function of the wavelength. The contributions of the different elements of the system are shown in color. {\it Right}: Illustration of the star forming step associated with each class, which shows the different elements contributing to the SED.
Adapted from \citet{Greene_2001} and \href{https://doi.org/10.6084/m9.figshare.1121574.v2}{Persson (2014)}.} \label{yso_sed} \end{figure} \vspace{-0.25cm} One of the most famous classification methods based on the IR SED is the one described by \citet{gutermuth_spitzer_2009}, based on data from the Spitzer space observatory \citep{werner_spitzer_2004} and from the Two Micron All Sky Survey \citep[2MASS,][]{skrutskie_two_2006}, which is fully described in Section~\ref{data_prep}. The two papers that describe the first two versions of this classification \citep{Gutermuth_2008, gutermuth_spitzer_2009} have inspired other widely adopted methods on other surveys, like the one by \citet{koenig_wide-field_2012} for the use of data from the Wide-field Infrared Survey Explorer \citep[WISE,][]{wright_wide-field_2010}. It is to be compared to other similar methods, for example \citet{allen_infrared_2004} or \citet{Robitaille_2008}, that both use Spitzer.\\ \vspace{-0.25cm} We note that despite the YSO classes being historically defined using infrared criteria, other identification methods can be used. For example, centimeter or (sub)millimeter interferometers were used to assess the presence of the disk and its evolutionary state, like with the James Clerk Maxwell Telescope (JCMT) \citep{Brown_2000}, the SubMillimeter Array (SMA) \citep{Jorgensen_2009}, the Very Large Array (VLA) \citep{Segura-Cox_2018} or, even more recently, the Atacama Large Millimeter/submillimeter Array (ALMA) \citep{Yen_2014, Ohashi_2014}, etc. Many other similar studies are listed in \citet{tobin_2020}. YSOs are also known to be stronger X-ray emitters than more evolved stars. Such radiation is also capable of escaping dusty environments and can therefore be identified using a high-resolution X-ray telescope like the Chandra X-ray space Observatory \citep{Feigelson_2013}. \subsection{Machine Learning motivation and previous attempts} \label{yso_ml_motivation} \vspace{0.4cm} As we already discussed in Section \ref{astro_big_data}, the astronomical datasets are becoming too large for traditional analysis methods, and more automated statistical approaches like ML are used. They are able both to work efficiently on large datasets using many dimensions, and to take advantage of the increased statistics to often overcome limits of previously used methods. In this context, it is timely to try and design a classification method for YSOs, relying on current and future large surveys and taking advantage of ML tools. Such approaches have been attempted by \citet{marton_all-sky_2016}, \citet{marton_2019}, and \citet{miettinen_protostellar_2018}. The study by \citet{marton_all-sky_2016} used a supervised ML algorithm called Support Vector Machine (SVM) applied to the mid-IR ($3 - 22\ \mu$m) all-sky data of WISE \citep{wright_wide-field_2010}. The SVM used in this study offers great performance on linearly separable data. However, it is not able to separate more than two classes at the same time and scales less well with the number of dimensions than other methods. Besides, the full-sky approach produced large YSO candidate catalogs, but suffers from the uncertainties and artifacts of the WISE survey in star-forming regions \citep{lang_unwise:_2014}. Additionally, the YSO objects used for training were identified using SIMBAD, resulting in a strong heterogeneity in the reliability of the training sample.\\ \vspace{0.4cm} In their subsequent study, \citet{marton_2019} added Gaia magnitudes and parallaxes to the analysis.
Gaia is expected to add large statistics and to complete the SED coverage \citep{Gaia_Collaboration_2018}, but the necessary cross-match between Gaia and WISE excludes most of the youngest and most embedded stars. The authors also compared the performances of several ML algorithms (SVM, Neural Networks, Random Forest, ...) and reported the random forest to be the most efficient with their training sample. This is a better solution as it overcomes the exposed limitations of the SVM. However, as in their previous study, the training sample compiles objects from different identification methods, including SIMBAD. This adds more heterogeneity and is likely to reduce the reliability of the training sample, despite its larger size.\\ \citet{miettinen_protostellar_2018} adopted a different approach by comparing a large number of ML methods applied to reliably identified YSOs using 10 photometric bands ranging from $3.6$ to $870\ \mathrm{\mu m}$. For this, he used the Herschel Orion Protostar Survey \citep{HOPS_2013}, resulting in just under $300$ objects. Such a large number of input dimensions combined with a small learning sample is often highly problematic for most ML methods (Sect.~\ref{nb_neurons}). Moreover, this study focuses on the subclass distinction of YSOs and does not attempt to extract them from a larger catalog that contains other types of objects. In consequence, it cannot be generalized to currently available large surveys and relies on a prior YSO candidate selection. \\ \newpage \subsection{Objective and organization} \vspace{0.4cm} The aim of this part of the manuscript (Part II) is {\bf to propose a methodology to achieve YSO identification and classification, based on ML, and capable of taking advantage of present and future large surveys}. We describe some properties of ML methods in general, and explain our choice of method and our decision to build our own framework. We extensively detail the functioning of the selected ML method along with some basic application examples. We then describe how this method can be used to perform YSO classification using the Spitzer space observatory IR data, based on the widely used classification method developed by \citet{gutermuth_spitzer_2009}. We detail the data preparation phase and our choice of representations for the results along with their analysis, thereby exposing the encountered limitations. Finally, we discuss the caveats and potential improvements of our methodology, and propose a probabilistic characterization of our results. This Part ends with a reconstruction of the results from \citet{grossschedl_3d_2018} using our own YSO catalog to infer the 3D structure of the Orion molecular cloud. \\ \vspace{0.4cm} We emphasize that these results are to be published in A\&A \citep[][accepted]{cornu_montillaud_20} and that the present manuscript reproduces many sections of the published version, while also providing a large amount of additional material and a deeper analysis of the study. Also, this publication is associated with our catalog of YSO candidates, which is described in the present manuscript in Section~\ref{proba_discussion} and which will be publicly available at the CDS. \newpage \section{Classical Artificial Neural Networks} \label{global_ann_section} In this section we describe the theoretical and technical aspects of Artificial Neural Networks (ANN) in detail. For this, we present ML in general along with the corresponding categories of algorithms.
We will then describe a classical mathematical model to construct ANN and improve it until it is able to approximate any function. Finally, this section will describe a wide variety of in-depth tunings of such networks, with common examples. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \newpage \subsection{Attempt at an ML definition} \label{AI_definition} \subsubsection{``Animal'' learning and ``Machine'' Learning} In order to define what ML means, we need to find a definition of ``Learning'', which is often difficult as there are plenty of definitions depending on the field it is applied to (Psychology, Biology, Pedagogy, ...) or on the person you ask in a given field. The online Cambridge Dictionary holds the following definition: \begin{displayquote} \it ``The process of getting an understanding of something by experience''\footnote{\href{https://dictionary.cambridge.org/us/dictionary/english/learning?q=Learning}{https://dictionary.cambridge.org/}} \end{displayquote} while the \textit{Tresor de la langue Française informatisé} defines it with: \begin{displayquote} \it ``Acquérir la connaissance d'une chose par l'exercice de l'intelligence, de la mémoire, des mécanismes gestuels appropriés, etc.''\footnote{\href{http://stella.atilf.fr/Dendien/scripts/tlfiv5/advanced.exe?8;s=250699680;}{http://stella.atilf.fr}} \end{displayquote} which can be translated as: \begin{displayquote} \it ``Acquire the knowledge of something through the practice of intelligence, memory, appropriate gestures, etc.'' \end{displayquote} The two definitions appear somewhat different, but both contain elements that are commonly used to define ``Animal'' learning. Both definitions contain the idea of experience, or practice, meaning that in order to learn, an animal must face the appropriate situation, ideally several times. The second main element is the \textit{memory}, which is necessary in order to retain information about the experience. Then there is the understanding and correction, which is usually referred to as \textit{adaptability}. It means that the animal must be able to change its behavior in regard to the experience outcome. And finally, the last point is the \textit{generalization} ability, which allows the animal to adapt its behavior based on non-identical but similar experiences and draw a continuity of behaviors between them.\\ \vspace{-0.2cm} Animal learning and intelligence are based on these elemental abilities but are obviously more complex and require many other complex faculties. We would need to define, for example, reasoning or logical deduction before starting to talk about intelligence. However, the previous basic capacities are enough to perform a lot of tasks, and this already justifies the attempt to reproduce them artificially. Here comes ML (or Artificial Learning), for which there is, as well, no easy definition. Our personal definition, which merges many other definitions, is the following: \begin{displayquote} \it ``Make a computer extract statistical information and adapt to it through an iterative process.'' \end{displayquote} This definition is vast in order to include all the common algorithms that are granted the ML label. It echoes the previously defined learning ability, as the machine will need: \begin{itemize}[leftmargin=0.5cm] \setlength\itemsep{0.2em} \item \textit{Memory} to remember either the previous situations, or a reduced version of them. \item A way to estimate if its behavior (output) is appropriate.
\item A way to \textit{adapt} its behavior during the learning process. \item A way to \textit{generalize} its behavior to new inputs. \end{itemize} We acknowledge again that this is a very global view, and that it might not fit every algorithm that is considered as ML, but it should correspond to most of them. \subsubsection{Types of artificial learning} Not all ML methods work the same way. Therefore, it is necessary to identify the objective of the application in order to select a suitable method or algorithm. Usually, ML methods are separated into two main families, {\it supervised} or {\it unsupervised}. It is, however, also common to add three other families: {\it semi-supervised}, {\it reinforced} and {\it evolutionary} learning. \begin{itemize}[leftmargin=0.5cm] \item The {\bf supervised} methods use a training dataset that contains the expected output of each example. The algorithm attempts to reproduce the target. It learns by comparing its current output with the target and correcting itself based on this comparison. After training, such algorithms are able to generalize to objects that are similar to those that were used for training. This is the most common type of algorithm, because such methods are often simpler than the ones in other categories, and because it is easier to assess their prediction performance. They usually have a broad range of applications. \item The {\bf unsupervised} methods use a training dataset with no information on the expected output. The algorithm will then attempt to create categories in the dataset by itself, based on some pre-defined proximity estimator. Most of these methods are clustering ones, which can be used either for classification or for dimensionality reduction. Despite their reduced application range, they are commonly used as well. Interestingly, many clustering methods that have been widely used for decades have recently been rebranded as ML methods to follow the trend of this domain, which is legitimate considering the previous definition of ML. \item The hybrid {\bf semi-supervised} methods are often a combination of an unsupervised algorithm that does the first part of the work, either to simplify the problem or to remove some bias in target definitions, and of a supervised part to benefit from the application range of these methods. They are fairly common as they often merge the qualities of the two categories. Still, they are often more computationally heavy than purely supervised ones, and can be more difficult to constrain properly. \item Instead of a labeled dataset, the {\bf reinforcement} methods use a reward function to measure how appropriate the output of the algorithm is. Usually, there is a distinction between an agent part that acts on an environment, and an interpreter part that provides the reward regarding the action that was performed. One key difference with other methods is the often delayed reward, since a task is often a series of interdependent actions whose performance can only be assessed based on the final result. These methods are mostly used in robotics or to reproduce human tasks. They allow one to find a solution to a problem where one only knows some basic rules, and whether a specific output is better than another. For example, this approach was successfully used to make a robot learn how to walk without programming each motor, just by encouraging the robot to test all possibilities to increase its velocity, in both simulation and real-world applications \citep{Heess_2017,Haarnoja_2018}.
\item The {\bf evolutionary} methods are similar to the reinforcement ones. They work with a dataset, no target, and only a global performance measurement. However, they explore the possibility space in a different way that mostly mimics biological evolution. The algorithm prediction is described as a population that starts with random properties. It then learns by selecting the individuals that are the closest to the solution, creating a new population from them, and repeating this process until convergence. It also often uses some kind of mutation to ensure the exploration of new solutions. \end{itemize} In addition to these families, the methods can be categorized either as discriminative or generative. This is a vast topic but it can be summarized as: discriminative models learn boundaries between cases (i.e., they learn the probability distribution of the output given the input information), while generative models learn the actual distribution of the examples (i.e., they learn the probability distribution of the input that corresponds to a given output). It means that generative models can be used to create new mock objects or to predict the output from a specific input, while discriminative models can only perform prediction tasks. However, generative algorithms are often more difficult to train, computationally more demanding, and suitable for a relatively narrow range of problems. \subsubsection{Broad application range and profusion of algorithms} \label{ml_application_range} ML methods are recognized for their vast application range. It is one of their strengths that is currently driving their global adoption in many computational and scientific fields. They can perform classical tasks like classification, regression and clustering. But they can also be applied to dimension reduction, with unexpected results, as an alternative to usual compression algorithms. There is a strong interest in their capacity for time series prediction, notably for financial markets \citep{Sezer_2019} or climate change predictions \citep[e.g.,][]{Feng_2016,Ise_2019}. The most publicized application is obviously image recognition, with the strong appeal around autonomous vehicles \citep{Bojarski_2016} or facial recognition \citep{Wang_2018}, but with many other scientific applications in a lot of fields (see Sect.~\ref{astro_ia_use} for Astronomy). There are also more and more high-performance generative algorithms that allow one to see deceased actors in new movies, to simulate the aging of long-wanted criminals, or to create realistic numerical instruments \citep[e.g.,][]{Zhu_2017,Sawant_2019,Engel_2017}. We can also cite ease-of-life applications like new spell checkers, real-time vocal translators, real-time media upscaling \citep[e.g.,][]{Bahdanau_2014,Ghosh_2017,Shi_2016}, or, for scientists, the IArxiv application that sorts papers by probable interest for the reader \citep{iarxiv}. Convinced or not by these applications, ML is becoming a part of the scientific landscape and understanding how it works is becoming essential knowledge. We already highlighted some typical ML applications in astronomy in Section \ref{astro_ia_use}.\\ Just as there are many possible applications, there is also a profusion of algorithms. Among the famous ones are Artificial Neural Networks, Random Forests, K-means, and many others. But there are also less-known ones like Radial Basis Function Networks, Self Organizing Maps, Neural Gas, Deep Belief Networks, etc.
Moreover, algorithms are more and more likely to be combined to achieve either better performance or new capabilities, like Reinforced Generative Adversarial Networks. Unfortunately, not all those methods perform equally on each application, and some are even unable to perform certain tasks at all. In Figure~\ref{fig_ml_types} we list some well-known algorithms arranged by family and linked to their usual application cases. \begin{figure}[!t] \centering \includegraphics[width=0.92\textwidth]{images/ml_types.pdf} \caption[List of common ML algorithms packed by type]{List of common ML algorithms packed by type and linked to their usual application cases. We note that neither the list of methods, nor the list of applications, nor the links themselves are exhaustive.} \label{fig_ml_types} \end{figure} \subsubsection{Toolboxes versus home-made code} \label{tool_boxes} When searching for how to use ML, one faces an elementary dilemma: the choice of the tool or library. There are many possible answers to this question, which we will address here, with some partiality. First of all, one could choose to use none of the available frameworks and be tempted to program his/her own algorithm from scratch. This is not a common choice for many reasons, including time efficiency, computational performance, ease of modification, etc. Therefore, we will first discuss the scenario of the use of a pre-existing framework.\\ \newpage Nowadays, knowing a limited number of programming languages is not a problem to start using an ML framework, since there are implementations in almost any common language. Moreover, most of the frameworks are very user-friendly, with a really small number of high-level functions to call in order to train a highly-efficient modern algorithm. Consequently, most people stick to the most common framework in their preferred language. For more experienced users, the question of the framework performance is more important. Even if most frameworks are equivalent in terms of application capability, some of them are more frequently upgraded with the latest innovations in the AI field. Additionally, they are not equivalent in terms of raw computational performance. Many ML algorithms can benefit a lot from Graphics Processing Unit (GPU) acceleration, which is not included in all the frameworks, as well as from the capability to run in a scalable hardware environment like computer clusters. Among the most popular frameworks we can cite: \begin{itemize} \setlength\itemsep{0.3em} \item \href{https://www.tensorflow.org/}{TensorFlow} \citep{tensorflow2015-whitepaper}: certainly the most popular framework, with native APIs in Python, C++, Java, etc., and many community-supported APIs. Its popularity is mainly due to the fact that it is completely free and open source while being developed and maintained by Google. The updates are really frequent and the developers actively work with other software and hardware companies to include new capabilities concurrently with their official releases. See more capabilities in \citet{Abadi_2016}. \item \href{https://keras.io/}{Keras} \citep{chollet2015keras}: a strong ``addition'' to TensorFlow, it is known for its very high-level API that allows one to code complex ML algorithms with a minimal number of lines. It is also open source and developed by a large community. However, Keras being mainly included in TensorFlow as a higher-level interface, many of its developers are employed by Google.
\item \href{https://pytorch.org/}{PyTorch} \citep{paszke2017automatic}: an open source library with mainly Python and C++ APIs, but it is more independent from Big-Tech companies than most of the other frameworks cited here, and has its own low-level computational algorithms. Like the previous ones, it remains computationally very efficient, with most of the very modern hardware capabilities integrated. \item \href{https://github.com/Microsoft/CNTK}{CNTK}: the Microsoft Cognitive Toolkit is also an open source framework that is supported by a big company. It is highly comparable to the previous ones, with APIs in Python, C/C++ and Java. The choice of this framework is often motivated by very specific use-case optimizations that are not included in other frameworks. \item \href{https://www.r-project.org/}{R frameworks}: R is not as widespread as Python or C/C++ but it is widely used among statisticians. It contains some integrated functions that can be used to perform ML, but also various open source libraries that are mostly algorithm-specific. Additionally, over time it has gained an increasing number of interfaces to more widely used APIs like Keras. \item \href{https://developer.nvidia.com/cudnn}{cuDNN}: the Nvidia CUDA Deep Neural Network library is somewhat different from the previous examples. It is neither open source nor applicable to many algorithms. It is also the only one to require specific hardware, with an Nvidia GPU, and the specific programming language CUDA derived from C++ (also seen as an API, but one that requires a dedicated compiler), which is not very user-friendly. However, it is worth noting that, to this day, it is the most computationally efficient way of building neural networks that makes use of cutting-edge modern hardware technologies. Many of the previously cited frameworks contain specific calls to this library in order to provide the best performance. \end{itemize} There are obviously plenty of other frameworks that we did not describe here, like \href{https://caffe.berkeleyvision.org/}{Caffe}, \href{http://deeplearning.net/software/theano/#}{Theano}, \href{https://mxnet.apache.org/}{MXNET}, \href{https://scikit-learn.org/stable/}{scikit-learn} (very suitable for teaching), etc. \\ In contrast to the use of a pre-existing framework, one can develop a home-made ML application using any programming language. One drawback of the use of a framework or library is the induced dependency on it. In software communities the choice of a library is sometimes compared to a wedding. One will invest time to understand practices that are framework-specific, and develop specific applications. In the ``relatively rare'' case where the framework stops being developed, it can be very harmful for one's productivity as it will force one to completely re-develop many applications. For many of the previously cited frameworks, their open source nature strongly mitigates this issue. For the widespread frameworks that are supported by Big-Tech companies, this scenario seems very unlikely. Still, there are examples of widely adopted programmable solutions that stopped their activity. This is the case of the Adobe Flash platform, which progressively stopped being supported, despite the fact that it had been a dominating development platform for web-based content between 2000 and 2010.
Home-made codes, on the other hand, often rely only on the programming language or on low-level libraries that can often be easily replaced if needed.\\ \vspace{-0.3cm} Another general difference concerns the performance. It is commonly presumed that a framework developed by big-tech companies will be much more efficient than a home-made code, but this is not always the case. Particularly in the field of ML, the framework development is often more focused on ease-of-access and on widening the application range. It is true that many frameworks include automatic scalability or GPU acceleration, which provides more computational power. However, a specific low-level implementation that makes use of the same hardware is often able to achieve better raw performance. One example would be complex type conversion: when using a permissive framework, one is able to plug in almost any datatype. This eases the use of the framework, but will hamper the performance as the framework has to assess datatypes and perform the appropriate conversions. This might add overhead at many steps of the computation. This example can be generalized to many small aspects of the algorithms. We emphasize here that such effects are stronger in highly iterative algorithms where each iteration can be dominated by such overhead, which is the case with most ML approaches.\\ \vspace{-0.3cm} Finally, a more subjective argument is that fully programming the algorithm is an efficient way to learn the underlying theory. The time investment is significant, but this is much more transferable knowledge that can then be used with any of the frameworks, which are mostly used as ``black boxes''. The final decision is mostly a matter of individual sensibility to all these aspects. In our case, {\bf we chose to develop our own home-made framework} due to our desire to build a strong theoretical knowledge of ML applications and because we already had a solid High Performance Computing (HPC) experience. We still kept an interested eye toward the most used frameworks in order to compete in terms of computational performance and capabilities. We implemented several algorithms, but we mainly focused on Artificial Neural Networks.\\ \vspace{-0.3cm} Our framework, called CIANNA (Convolutional Interactive Artificial Neural Network for/by Astrophysicists), is a general purpose ANN framework that we started to develop independently of the present research, but whose development was then driven by the present study. To date, our framework is capable of creating arbitrarily deep, arbitrarily thick, convolutional neural networks and is CUDA GPU accelerated, but it also supports multi-threaded CPU computation through OpenBLAS and OpenMP. It is provided with Python and C high-level APIs, and has been successfully applied to many applications: classification, regression, clustering, computation acceleration, image recognition, detection in images, image generation, ... and is suitable for any ANN application. Its development is currently motivated by its ability to solve astrophysical problems. New elements are added as they are identified as useful for specific case studies. CIANNA is freely accessible on GitHub as open-source (ApacheV2) at \href{https://github.com/Deyht/CIANNA}{github.com/Deyht/CIANNA}. An in-depth presentation of the framework programming strategy and capabilities is given in the dedicated Appendix~\ref{cianna_app}. Additionally, some details of our implementation will also be discussed in various sections when it seems appropriate.
\subsection{Artificial Neuron} \subsubsection{Context and generalities} Artificial Neural Networks are one of the most famous Machine Learning algorithms. They belong to the supervised ML category and have a really broad variety of applications. Their popularity might be explained by their intuitive definition and construction. They are also truly computationally efficient when applying an already trained network, which allows them to work on really light systems, or even on small embedded devices. At the same time, they are perfectly suitable for really complex tasks with increasingly large networks. These days, despite other algorithms achieving better performance on specific tasks, there is a strong momentum toward the use of these methods. Many ML frameworks are even solely dedicated to ANN. Their vast adoption by the machine learning community also makes them better documented and more optimized, with many big-tech companies constantly pushing the limits of their performance.\\ As already described in Section \ref{ai_begin}, ANN are not new, despite the strong recent appeal about them, and about the AI field in general. While it is difficult to accurately date the appearance of ANN, one fundamental reference is \citet{mcculloch_logical_1943}, where the authors attempt to summarize the behavior of biological neurons with a mathematical model. This reveals that the biological brain was the obvious source of inspiration in the attempt to reproduce intelligence artificially. The brain is a wonderful biological machine that performs really interesting tasks. It computes predictions based on a census of biological sensors and using much contextual information (social environment, time, previous topics and knowledge, ...). This means that the brain is able to compile highly dimensional data in an impressively small amount of time, which allows one to react within a quarter of a second. It is also able to work with very noisy data, like in a noisy bar where it is able to filter out irrelevant discussions to focus on a specific one. It also presents a high capacity of generalization, like when one is able to properly walk on previously unseen ground just by efficiently compiling previous walking knowledge. One final impressive capability is its resiliency to the loss of neurons with aging, with an estimated loss of up to 10\% between the ages of 25 and 60 in certain specific brain areas \citep{Wickelgren96}. However, the brain maintains unchanged cognitive function in most cases, which demonstrates its ability to rearrange information in an efficient way. \\ More technically, the brain can be seen as a massively parallel computer with up to $10^{11}$ neurons, mostly binary compute units, and $10^{14}$ synaptic connections that retain information. With all these advantages and the apparent simplicity of the basic neuron behavior, many attempts were made to reproduce it artificially. As stated in Section \ref{ai_begin}, despite the growing interest in ANN between 1940 and 1970, the lack of computational power restrained their adoption by larger communities. It is between 1980 and 1990 that the interest started to rise again, with an outbreak of new applications due to the increased computational power, while the real ANN boom started with the 21st century. In this section we describe the most common approach to construct ANN and subsequently increase their complexity up to modern architectures that are able to efficiently solve concrete problems.
An extensive introduction can be found in \citet{Bishop:2006:PRM:1162264} or \citet{MarslandBook2}, which rely on several reference papers including \citet{rosenblatt_perceptron:_1958,rumelhart_learning_1986,rumelhart_parallel_1986,widrow_30_1990}. \newpage \subsubsection{Mathematical model} \label{neuron_math_model} \begin{figure}[!t] \centering \includegraphics[width=0.8\hsize]{images/neuron.pdf} \caption[Schematic view of a binary neuron]{Schematic view of a binary neuron. $X_{[1,...,m]}$ are the input features for one object in the training dataset, $w_{[1,...,m]}$ are the weights associated with each feature. $\sum$ is the sum function and $h$ its result. $\theta$ is the threshold of the step function, and $a$ is the final neuron activation state that is compared to the target $t$ of the current object.} \label{fig_neuron1} \end{figure} The following mathematical model is derived from \citet{mcculloch_logical_1943}. It consists of an input vector $X_i$ containing the $m$ dimensions of a specific object. In ML the input dimensions are called features, and the corresponding input dimension space is named the feature space. Each feature is associated with a weight $\omega_i$, to perform a weighted sum $h$: \begin{equation} \centering h = \sum_{i = 1}^{m}{X_i \omega_i}, \label{eq_weighted_sum} \end{equation} where the weights can take any positive or negative value. We note that there are many $X_i$ vectors while the $m$ weights are unique and shared for all the possible inputs. The neuron is associated with an activation function $g(h)$ which provides the activation $a=g(h)$ of the neuron, as a function of $h$. In very simple models a step function is often used: \begin{equation} \centering a = g(h) = \begin{cases} 1 & \text{if} \quad h > \theta, \\ 0 & \text{if} \quad h \leq \theta, \end{cases} \label{eq_activ_neuron} \end{equation} where $\theta$ is the threshold value and is set by the user, generally to zero. The weights quantify the correlations and anti-correlations of each input dimension with the current neuron. Figure~\ref{fig_neuron1} illustrates this model. This model mimics the behavior of a simple biological neuron which transforms an input signal into a binary response (often referred to as ``firing'' or ``not-firing''). This action of computing the activation of a neuron, or more generally of a network, for a given input vector is called a ``forward'' step.\\ Such a simple neuron can already be used as a binary classifier with any number of input dimensions. It can be seen as a simple linear separator in the feature space, with weights corresponding to the slope of the separation along each dimension. \subsubsection{Supervised learning of a neuron} \label{neuron_learn} Training such a neuron consists in finding a suitable set of weight values that minimizes a given error function. Since it belongs to the supervised ML family, it uses a training dataset of examples with pre-established solutions. This is achieved in an iterative fashion: the neuron is activated for an input vector of the training dataset, and the result $a$ of the activation is compared to the expected result, the so-called target $t$. An error is computed by comparing the activation and the target, which is then used to correct the weights, generally by a small amount. This step is called an update. The two steps, forward and update, are repeated for all the input vectors of the training dataset, making the weights converge toward suitable values.
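As an illustration, the following minimal Python sketch implements the forward step of Equations~\ref{eq_weighted_sum} and~\ref{eq_activ_neuron} for a single binary neuron, with the threshold set to zero. We stress that this is only a didactic sketch with arbitrary input and weight values, independent of any particular framework, and not an excerpt of our own implementation.
\begin{verbatim}
import numpy as np

def forward(X, w, theta=0.0):
    # Weighted sum h = sum_i X_i * w_i over the input features
    h = np.dot(X, w)
    # Step activation: the neuron "fires" (a = 1) only if h > theta
    return 1.0 if h > theta else 0.0

# A neuron with m = 3 input features and arbitrary weight values
X = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
a = forward(X, w)  # here h = 0.16 > 0, so a = 1.0
\end{verbatim}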
A learning phase performed on the complete dataset is called an "epoch". Many epochs are necessary to fully train such a neuron. In practice, the correction of the weights depends on the derivative of the chosen error function and is proportional to the input associated with each weight. For a binary neuron and a usual square error $ E = 0.5 \times (a - t)^2$, it can be computed using: \begin{equation} \centering \omega_i \leftarrow \omega_i - \eta \, \left(a - t\right) \, X_i \label{eq_update_neuron} \end{equation} where $\eta$ is a learning rate that can be defined according to the problem to solve, or adjusted automatically (Sect. \ref{learning_rate}). Considering that learning with this neuron is an iterative process that searches for the optimal weight values, one must define a starting state. Usually, the best performance is achieved when initializing the weights to small random values, which is discussed in Section \ref{weight_init}.
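For illustration purposes, the complete forward and update loop of a single binary neuron fits in a few lines. The following sketch uses Python with NumPy, which is not the implementation used in this manuscript; the dataset, learning rate and number of epochs are arbitrary, and the two classes are chosen to be separable through the origin so that no bias node (Sect.~\ref{bias_node}) is needed yet.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-class dataset: m = 2 features, separable through the origin
X = rng.normal(0.0, 1.0, size=(100, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)   # targets in {0, 1}

w = rng.uniform(-0.1, 0.1, size=2)          # small random initial weights
eta, theta = 0.1, 0.0                       # learning rate and threshold

for epoch in range(20):                     # repeated passes over the dataset
    for Xi, ti in zip(X, t):
        h = np.dot(Xi, w)                   # weighted sum of the features
        a = 1.0 if h > theta else 0.0       # step activation function
        w -= eta * (a - ti) * Xi            # weight update rule
\end{verbatim}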
\subsection{The bias node} \label{bias_node} Equations~\ref{eq_weighted_sum} and~\ref{eq_update_neuron} show that the particular vector $X_i=0$ for all $i$ is a pathological point: its weighted sum $h$ is independent of the weights, and the weight correction is always null, regardless of the error $a-t$. To circumvent this peculiarity, one approach consists in adding an $(m+1)$-th value to the input vector, fixed to $X_{m+1}=-1$, and connected to the neuron by an additional weight $\omega_{m+1}$, which behaves as any other weight. This addition is equivalent to allowing an adaptive value for the threshold $\theta$. Moreover, since the neuron acts as a linear separator, this additional weight on a fixed input allows the separation not to pass through the origin. We acknowledge that there are other possible implementations of this effect. For example, one can add a constant to the result of the weighted sum and share the responsibility for the additional activation over all inputs during the weight update process. This way the additional weight is shared by all input weights. Both approaches aim at minimizing the impact on the neuron formalism. Our choice of the first implementation is justified by performance concerns, as exposed in Section \ref{matrix_formal}.\\ Because the input values $X_i$ are often called input nodes, this additional input dimension is generally referred to as "the bias node". The additional degree of freedom provided by the bias node enables the neuron to behave normally when $X_i=0$ for $1 \le i \le m$. \\ Figure~\ref{bias_neuron} illustrates the addition of the bias node to the neuron schematic view, and Figure~\ref{bias_illus} provides two examples where the use of a bias node is necessary. The first one is a simple binary classification, but with the best separation not being aligned with the origin of the frame axes, while the second attempts to reproduce the "AND" logical gate.\\ \begin{figure}[!t] \centering \includegraphics[width=0.75\hsize]{images/neuron_bias.pdf} \caption[Schematic view of a binary neuron with bias node]{Schematic view of a binary neuron with the addition of the bias node. $X_{[1,...,m]}$ are the input features with the additional constant $-1$ bias input node, $w_{[1,...,m,m+1]}$ are the weights associated with each feature with the extra $w_{m+1}$ weight for the bias node. $a$ is the final neuron activation state. } \label{bias_neuron} \end{figure} \begin{figure}[!t] \vspace{2cm} \hspace{-1.2cm} \begin{minipage}{1.15\textwidth} \centering \includegraphics[width=0.45\hsize]{images/bias_illust_class.pdf} \hspace{0.8cm} \includegraphics[width=0.45\hsize]{images/bias_illust_and.pdf} \end{minipage} \caption[Effect of a bias node on a binary separation]{Illustration of the effect of adding a bias input node to a single neuron binary separation. {\it Left}: two classes in blue and orange are separated by a trained binary neuron with or without a bias node in red and green, respectively. {\it Right}: reproduction of an AND logical gate using a trained binary neuron with or without a bias node in red and green, respectively.} \label{bias_illus} \end{figure} \newpage \null \newpage \subsection{Perceptron algorithm: linear combination} \label{sect_perceptron} \vspace{0.2cm} As exposed in Sect.~\ref{neuron_math_model}, a single neuron is only able to perform a linear separation. To deal with complex problems, more neurons are necessary. A simple way to combine neurons consists in adding them in the same layer. Each neuron is then fully connected to the input layer, but is not connected to the other neurons. The neurons are thus fully independent, and each of them behaves and is trained exactly as in the previous sections, except that the equations of the weighted sum and of the activation become matrix equations: \begin{equation} h_j = \sum_{i = 1}^{m+1}{X_i \omega_{ij}} \label{weighted_sum} \end{equation} and \begin{equation} a_j = g(h_j) = \begin{cases} 1 & \text{if} \quad h_j > \theta \\ 0 & \text{if} \quad h_j \leq \theta \end{cases} \label{eq_activ_perceptron} \end{equation} where $j$ is the index of a neuron in the layer, and the sum runs from 1 to $m+1$ to account for the bias node. Similarly, the correction of the weights becomes: \begin{equation} \centering \omega_{ij} \leftarrow \omega_{ij} - \eta \left(a_j - t_j\right) \times X_i \label{eq_update_perceptron} \end{equation} where $a_j$ and $t_j$ are the activation and target of neuron $j$, respectively. This network and its training procedure altogether are called the Perceptron algorithm \citep{rosenblatt_perceptron:_1958}.\\ \vspace{0.4cm} This architecture is illustrated in Figure~\ref{bias_perceptron}. Again, in this model, neurons are equivalent to hyperplanes in the feature space, each hyperplane performing a linear splitting between two classes. As before, a slow training across multiple epochs is needed to let the weights of the network converge. In this structure, each neuron can learn a different part of the generalization \citep{rumelhart_parallel_1986}. One difficulty, though, is to find a proper way to encode a global information into a set of binary neurons. When doing classification, one can set one neuron per output class and encode the target in the form of only one specific neuron being activated and the others set to 0. With a three-class example the possible outputs would be: A: (1-0-0), B: (0-1-0) and C: (0-0-1). This classification case is illustrated in Figure~\ref{perceptron_illus}, which shows how the three neurons share the classification task; a corresponding layer-wise training sketch is given below. However, in order to work with the previous Perceptron algorithm, each class must be linearly separable from all the others. If this is not the case, this encoding strategy must be refined in order to allow more than one neuron to represent each class.
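Following the same conventions as the previous sketch, a minimal Perceptron with one-hot targets could read as follows; the three cluster centers and all hyperparameter values are arbitrary, and the weights of all neurons are gathered in a single matrix so that the whole layer is updated at once.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy three-class dataset: three arbitrary clusters in a 2D feature space
centers = np.array([[2.0, 0.0], [-1.0, 2.0], [-1.0, -2.0]])
labels = rng.integers(0, 3, size=150)
X = centers[labels] + rng.normal(0.0, 0.5, size=(150, 2))
X = np.hstack([X, -np.ones((150, 1))])      # append the -1 bias node

T = np.zeros((150, 3))                      # one-hot targets:
T[np.arange(150), labels] = 1.0             # A:(1-0-0), B:(0-1-0), C:(0-0-1)

W = rng.uniform(-0.1, 0.1, size=(3, 3))     # (m+1) x j weight matrix
eta = 0.1

for epoch in range(20):
    for Xi, Ti in zip(X, T):
        a = (Xi @ W > 0).astype(float)      # weighted sums + step activations
        W -= eta * np.outer(Xi, a - Ti)     # update rule, all neurons at once
\end{verbatim}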
Another approach for a regression example would be to use binary value encoding, with for example 4 neurons acting as bits to encode a range of 16 values. One can also perform image processing by using one neuron per pixel. \begin{figure}[!t] \centering \includegraphics[width=0.8\hsize]{images/illustr_prec.pdf} \caption[Schematic view of a simple Perceptron neural network]{Schematic view of a simple Perceptron neural network. The light dots are input dimensions for one object of the training dataset. The black dots are neurons with the linking weights represented as continuous lines. Learning with this network relies on Eqs.~(\ref{weighted_sum}), (\ref{eq_activ_perceptron}) and (\ref{eq_update_perceptron}). $X_{[1,\dots,i]}$ are the dimensions for one input vector with an additional bias node, $a_{[1,\dots,j]}$ are the activations of the neurons, while $W_{ij}$ represents the weight matrix that connects the input vector to the neurons.} \label{bias_perceptron} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.75\hsize]{images/perceptron_illust_class.pdf} \caption[Three class separation using a trained Perceptron]{Illustration of a three-class separation using a trained Perceptron with 3 neurons. Each red line corresponds to one binary neuron separation.} \label{perceptron_illus} \end{figure} \newpage \subsection{Multi Layer Perceptron: universal approximation} \label{mlp_sect} \subsubsection{Non linear activation function and neural layers stacking} The Perceptron network remains too simple to learn complex problems. Firstly, it is restricted to linear combinations. Secondly, the neurons are limited to two states (0 and 1), which makes it difficult to encode physical values. Deep artificial neural networks can solve these two points. One major modification is to change the activation function to one that is continuous and differentiable, with the constraint that it must keep the global behavior of the neuron with two well-distinct states. In a first step, we describe the sigmoid function \citep{rumelhart_parallel_1986}: \begin{equation} \centering g(h) = \frac{1}{1+ \exp(-\beta h)} \label{eq_sigm} \end{equation} where $\beta$ is a positive hyperparameter that defines the steepness of the curve. This function has an $S$ shape with results between $0$ and $1$, and it has a simple derivative form, which is illustrated in Figure~\ref{sigmoid_fig}. This addition noticeably allows easier regression with the Perceptron network, where each continuous variable can be represented with a single neuron.\\ \begin{figure}[!t] \centering \includegraphics[width=0.56\hsize]{images/sigmoid_fig.pdf} \caption[Illustration of a sigmoid activation]{Illustration of sigmoid activations with $\beta = 1$ and $\beta = 0.5$ and their derivatives.} \vspace{-0.1cm} \label{sigmoid_fig} \end{figure}
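For reference, a direct transcription of this activation and of its convenient derivative form, $\beta\, g(h)\, (1-g(h))$, used for the error propagation later in this section, could read as follows (a Python/NumPy sketch, independent of any specific framework).
\begin{verbatim}
import numpy as np

def sigmoid(h, beta=1.0):
    # S-shaped activation with results between 0 and 1
    return 1.0 / (1.0 + np.exp(-beta * h))

def sigmoid_prime(h, beta=1.0):
    # simple derivative form: beta * g(h) * (1 - g(h))
    g = sigmoid(h, beta)
    return beta * g * (1.0 - g)
\end{verbatim}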
This addition is complemented by a second major modification: adding more layers. Neurons are added behind the previous layer, where they take as input the result of the activation of the neurons from the previous layer. Like in the Perceptron network, the neurons within one layer are independent of one another, and their activation is computed following equations similar to Eqs.~(\ref{weighted_sum}) and (\ref{eq_sigm}). A bias node needs to be added to the previous layer to avoid any pathological behavior from the next one. This architecture is illustrated in Figure~\ref{fig_network}.\\ \begin{figure*}[!t] \centering \includegraphics[width=0.9\hsize]{images/illustr_net.pdf} \caption[Schematic view of a "deep" neural network with one hidden layer]{Schematic view of a simple "deep" neural network with only one hidden layer. The light dots are input dimensions. The black dots are neurons with the linking weights represented as continuous lines. Learning with this network relies on Eqs.~(\ref{eq_sigm}) to (\ref{eq_update_first}). $X_{[1,\dots,i]}$ are the dimensions for one input vector, $a_{[1,\dots,j]}$ are the activations of the hidden neurons, $a_{[1,\dots,k]}$ are the activations of the output neurons, while $V_{ij}$ and $W_{jk}$ represent the weight matrices between the input and hidden layers, and between the hidden and output layers, respectively.} \label{fig_network} \vspace{-0.2cm} \end{figure*} This procedure can be repeated to add multiple layers, constructing a "deep" network. The last layer is the output layer and the other neuron layers are the "hidden" layers. The input nodes are generally considered to form a first layer, dubbed the input layer, although they are not neurons. While the input and output layers are mostly constrained by the problem to solve, the number and size of the hidden layers directly represent the computational strength of the network and must be adapted to the difficulty of the task. This kind of network is called a Multi Layer Perceptron (MLP).\\ This multilayer architecture allows the network to combine sigmoid functions in a non-linear way, each layer increasing the complexity of the achievable generalization. The combination of sigmoid functions can be used to represent any function, which means that this new network is a "Universal Function Approximator", as demonstrated by \citet{cybenko_approximation_1989}. The combination of sigmoids is illustrated in Figure~\ref{sigmoid_comb}. It shows that sigmoids can be combined into hill shapes, that hill shapes can be combined to get bumps, and that these bumps can be combined to create any arbitrary point-like function. This result is formalized by the Universal Approximation Theorem, which demonstrates that a single hidden layer with enough neurons is able to approximate any function as accurately as an arbitrarily deep network. Therefore, it can be used to solve a very wide variety of problems, as discussed in Section \ref{ml_application_range}. We illustrate the capacity of such a network in Figure~\ref{mlp_illus_class}, where a network with two input dimensions learns non-linear splittings between three classes. This network uses one output neuron per class, similarly to Section \ref{sect_perceptron}, and only one hidden layer containing 6 sigmoid neurons. \begin{figure}[!t] \centering \includegraphics[width=1.0\hsize]{images/sigmoid_comb.pdf} \caption[Example of sigmoid combinations]{Example of sigmoid combinations. (A): one sigmoid alone, (B): two sigmoids combined into a hill shape, (C): two hill shapes combined at $90\deg$ to form a "bump", (D): several rotated bumps combined to obtain a localized peak function.} \label{sigmoid_comb} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.68\hsize]{images/mlp_illust_class.pdf} \caption[Three class separation in a two dimensional space using a MLP]{Illustration of a three-class separation in a two-dimensional feature space using a trained MLP with 3 output neurons and 8 hidden neurons, all with sigmoid activations.
The light background colors indicate the regions of the feature space that the network has attributed to each class.} \label{mlp_illus_class} \end{figure} \subsubsection{Supervised network learning using backpropagation} \label{mlp_backprop} Adding new layers introduces a difficulty in updating the weights, since the targets are only available for the output layer. The "Backpropagation" algorithm \citep{rumelhart_parallel_1986} allows one to compute an error gradient descent, starting from the output layer, that can be propagated through the entire network. This gradient calculation depends on the error function, which is often the simple sum-of-squares error: \begin{equation} \centering E(a,t) = \frac{1}{2}\sum_{k=1}^{N}{(a_k -t_k)^2} \label{eq_error_fct} \end{equation} where $k$ runs through the number $N$ of output neurons, $a_k$ is the activation of the $k$-th output neuron and $t_k$ the corresponding target. The weight corrections for a given layer $l$ are computed as follows: \begin{equation} \omega_{ij} \leftarrow \omega_{ij} - \eta \frac{\partial E}{\partial \omega_{ij}} \label{eq_grad_desc} \end{equation} where the gradient $\frac{\partial E}{\partial \omega_{ij}}$ can be expanded as: \begin{equation} \frac{\partial E}{\partial \omega _{ij}} = \delta _l (j) \frac{\partial h_j}{\partial \omega_{ij}} \quad \text{with} \quad \delta _l (j) \equiv \frac{\partial E}{\partial h_j} = \frac{\partial E}{\partial a_j}\frac{\partial a_j}{\partial h_j} = \frac{\partial a_j}{\partial h_j} \sum _k {\omega _{kj} \delta _{l+1}(k)}. \label{eq_update_full_network} \end{equation} In these equations, the indices $i$ and $j$ run through the number of input dimensions of the current layer and its number of neurons, respectively. These equations are the same for each layer. $\delta_l$ is a local error term that can be defined for each layer of neurons, so that, for a hidden layer $l$, the error derivative $\partial E / \partial a_j$ in Eqs.~(\ref{eq_grad_desc}) and (\ref{eq_update_full_network}) is replaced by the product of the downstream weight matrix and the next-layer error $\delta_{l+1}$, with $k$ running through the number of neurons of the next layer. It also depends on the activation function $a=g(h)$ at each layer through the derivative $\frac{\partial a_j}{\partial h_j}$. Thus, this kind of gradient can be evaluated for an arbitrary number of layers. These terms can be further simplified by considering the sum-of-squares error above and a sigmoid activation for all neurons: \begin{equation} \frac{\partial h_j}{\partial \omega_{ij}} = a_i, \end{equation} the activation of the previous layer (i.e. the input of the current one), \begin{equation} \frac{\partial E}{\partial a_j} = (a_j - t_j), \end{equation} the derivative of the error, used at the output layer, \begin{equation} \frac{\partial a_j}{\partial h_j} = \beta a_j (1 - a_j), \label{eq_sig_deriv} \end{equation} the derivative form of the sigmoid activation. Further details on the equations can be found in \citet{Bishop:2006:PRM:1162264} or \citet{MarslandBook2}.\\ While the {\bf definition of "deep learning"} is often fuzzy and varies from one user to another, some consider the MLP to be sufficient to fit in this category of ANN (see Sect.~\ref{conv_layer_learn} for a more advanced definition). However, constructing a many-layer network is often challenging. The main difficulty is the so-called "vanishing gradient".
When using the above equations to propagate the error through the network, each layer multiplies the $\delta_l$ value received from the downstream layer by the derivative of the activation function, in this case a sigmoid. The issue is that this factor is most of the time less than 1, and therefore the error information becomes smaller and smaller for the layers that are closest to the inputs, up to the point where the corresponding weights can no longer be updated. This issue, combined with the fact that one hidden layer is enough to approximate any function, explains why the most common MLP architecture contains only one hidden layer. However, such architectures are less employed these days, because most of the recent architectures overcome this limitation by adopting a different kind of activation function that removes the "vanishing gradient" issue, and therefore allows the construction of very deep networks. This point, along with a less disputed definition of "deep learning", will be covered later in the second part, Section \ref{conv_layer_learn}, of the present manuscript.\\ For the sake of simplicity, and due to the previous point, we detail here the update procedure for a network with only one hidden layer, like in Figure~\ref{fig_network}. The network is therefore composed of an input layer constrained by the input dimensions $m$ of the problem to solve, a hidden layer with a tunable number of neurons $n$, and an output layer with $o$ neurons. All the neurons use sigmoid activations in this example. The gradient descent is computed from the backpropagation equations (Eqs.~(\ref{eq_error_fct}) to (\ref{eq_sig_deriv})) as follows. The local error $\delta_o(k)$ of the $k$-th output neuron is computed using: \begin{equation} \centering \delta_o(k) = \beta a_k (1 - a_k) (a_k - t_k). \label{eq_deltao} \end{equation} The obtained values are combined with the weights between the hidden and output layers to derive the local error for neurons in the hidden layer, multiplied as before by the derivative of the sigmoid activation: \begin{equation} \centering \delta_h(j) = \beta a_j(1-a_j) \sum_{k=1}^{o}{\delta_o(k)\omega_{jk}} \label{eq_deltah} \end{equation} where $j$ is the index of a hidden neuron. Once the local errors are computed, the weights of both layers are updated: \begin{eqnarray} \centering \omega_{jk} & \leftarrow & \omega_{jk} - \eta \delta_o(k)a_j \label{eq_update_second}\\ v_{ij} & \leftarrow & v_{ij} - \eta \delta_h(j)x_i \label{eq_update_first} \end{eqnarray} where $\omega_{jk}$ and $v_{ij}$ denote the weights between the hidden and output layers, and between the input and hidden layers, respectively, $a_j$ is the activation value of the $j$-th hidden neuron, and $x_i$ is the $i$-th input value.
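These few equations are sufficient to train a complete network. As an illustration, the following NumPy sketch condenses the whole procedure for a single-hidden-layer network in an on-line fashion; the toy dataset, layer sizes and hyperparameters are arbitrary, and this is not the implementation of our own framework.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
beta, eta = 1.0, 0.5
m, n, o = 2, 8, 1                        # input, hidden and output sizes

def g(h):                                # sigmoid activation
    return 1.0 / (1.0 + np.exp(-beta * h))

# Toy non-linear problem: class 1 when the two features have the same sign
X = rng.uniform(-1, 1, size=(200, m))
T = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

V = rng.uniform(-1/np.sqrt(m + 1), 1/np.sqrt(m + 1), size=(m + 1, n))
W = rng.uniform(-1/np.sqrt(n + 1), 1/np.sqrt(n + 1), size=(n + 1, o))

for epoch in range(500):
    for Xi, Ti in zip(X, T):
        x = np.append(Xi, -1.0)                 # input with bias node
        a_h = np.append(g(x @ V), -1.0)         # hidden activations + bias
        a_o = g(a_h @ W)                        # output activations

        d_o = beta * a_o * (1 - a_o) * (a_o - Ti)   # local output error
        d_h = beta * a_h * (1 - a_h) * (W @ d_o)    # local hidden error
        W -= eta * np.outer(a_h, d_o)               # update hidden-to-output
        V -= eta * np.outer(x, d_h[:-1])            # update input-to-hidden,
                                                    # dropping the bias entry
\end{verbatim}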
\subsection{Limits of the model} Before diving into some more advanced details, we discuss here some of the intrinsic limitations of the current model in reproducing the biological brain. Many unknowns remain about biological neurons and the brain, and this is not our subject here, but a first-order comparison remains a good illustration of how simplified the ANN model is.\\ First of all, the mathematical neuron model itself is far less complex than a biological one. It is well established that a biological neuron does not fire a single impulse, but rather a series of complex impulses that can encode much more information. However, it conserves a "global" activated or non-activated state, but with an additional recovery time that adds an even more complex inhibition effect to its behavior. While in our model the activation function is fixed and only the weights can change, in the biological neuron the activation might change depending on environmental effects and even be a function of time or of the number of activations. Another difference is that the model permits weights to change sign, while this is not possible for the biological neuron.\\ In a more global view, the biological neuron activations are also non-sequential (asynchronous), while our model is designed to activate each layer in a specific order. Moreover, the biological neurons are not neatly arranged in independent layers. Each neuron can be connected to many others in a complex architecture that creates many neural "paths", and connections can actively be remapped, which strongly contributes to the brain plasticity. Biological neurons can even reconnect to themselves (feedback). All of these aspects are only the tip of the iceberg, even considering our partial understanding of the biological brain.\\ Yet, the simplicity of the model is not only a matter of how difficult it is to write algorithms that fulfill the brain capabilities. For example, feedback is a common feature of recent Recurrent Neural Network (RNN) architectures \citep[already described by][]{rumelhart_parallel_1986}. We can also cite Neural Gas Networks \citep{Fritzke95} that help to construct complex non-layered architectures. The main objective of all the ANN models (and ML models in general) is to find an appropriate balance between computing performance and capabilities. While RNN have proven to be efficient for specific tasks like speech recognition \citep{Robinson1996, Waibel89}, they do not improve the generalization potential in all cases, while the additional computational cost of this method is substantial. It is, therefore, often advised to start by trying simple algorithms and to properly assess whether a more complex one would really improve the results. The exposed ANN model has proven its great efficiency in a vast variety of cases, which explains its popularity in many communities. We adopt this progressive strategy in the current manuscript, with many advanced ANN capabilities being postponed to Part~2 where they are useful. \subsection{Neural network parameters} \subsubsection{Network depth and dataset size} \label{nb_neurons} All ML methods have in common that the quality of the prediction is directly linked to the number of examples provided. Most of the time it is useful to start by estimating the difficulty of the task to perform. For ANN, since individual neurons can be compared to linear separators, one can roughly estimate how many such separations would be necessary to isolate all the groups of objects identified in the feature space. Most of the time this only provides a starting point, from which it is necessary to search for the optimal number of neurons. Considering a network with a single hidden layer, only the number of neurons in this layer can be changed, therefore only this value has to be explored. Most commonly, the minimum error that can be achieved on the output layer is directly linked to the number of neurons. Increasing the number of neurons tends to improve the results (lowering the minimum error), but the gain provided by a new neuron decreases with the total number of neurons in the layer.
Beyond a certain number of neurons, an error plateau is reached where adding more neurons only adds noise to the minimal error value. \\ However, increasing the number of neurons also increases the number of weights (or degrees of freedom) and, therefore, requires more data for training. A widely used empirical rule prescribes that the number of objects for each class in the training sample must be an order of magnitude larger than the number of weights in the network. As an example, for a network with $m$ features, $n$ hidden neurons in only one hidden layer, and $o$ output neurons, the number of weights in the network is $(m+1)\times n + (n+1)\times o$. It corresponds to the number of elements of the weight matrices $V_{ij}$ and $W_{jk}$ of the example network in Figure~\ref{fig_network}. We note that this estimate also depends on the type of neuron, i.e. their activation function. Binary or linear neurons being simpler than sigmoid ones, they can be constrained with fewer examples. In order to reduce the number of weights needed, a deeper architecture with more but smaller hidden layers is often a successful strategy. However, such a network would be less stable, even if capable of better absolute generalization capabilities, and is more prone to the "vanishing gradient" issue. We also note that a network with several hidden layers will learn complex boundaries in fewer epochs than a single-hidden-layer one.\\
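This rule of thumb translates into a quick back-of-the-envelope estimate, as in the hypothetical sketch below, where the sizes correspond to the single-hidden-layer example of Section \ref{mlp_sect}.
\begin{verbatim}
def n_weights(m, n, o):
    # weights of a single-hidden-layer MLP, bias nodes included
    return (m + 1) * n + (n + 1) * o

# e.g. 2 features, 6 hidden neurons and 3 output classes
nw = n_weights(2, 6, 3)          # 39 weights
print(nw, "weights ->", 10 * nw, "objects per class advised")
\end{verbatim}
This order of magnitude is consistent with the behavior observed for the $\sim 400$-object case in Figure~\ref{nb_neurons_opt} below.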
\begin{figure}[!t] \centering \includegraphics[width=0.60\hsize]{images/nb_neurons_illust.pdf} \caption[Evolution of the error as a function of the number of neurons]{Evolution of the error as a function of the number of neurons in the network, for various numbers of examples. This figure uses the same network as for Figure~\ref{mlp_illus_class} with only one hidden layer. $n$ is the number of neurons in the hidden layer, $N_w$ is the total number of weights in the network depending on $n$, $E$ is the sum-of-squares error of the output layer (equation~\ref{eq_error_fct}) averaged over all the given examples. Each error point is the average of 5 independent trainings with different weight initializations and training selections from the same distribution.} \label{nb_neurons_opt} \end{figure} We used the same network as in the example of Section~\ref{mlp_sect}, illustrated in Figure~\ref{mlp_illus_class}, where we separate three classes in a two-dimensional feature space using one hidden layer. Figure~\ref{nb_neurons_opt} shows the search for the optimal number of neurons in the hidden layer, by looking at the average output-layer error for the corresponding dataset. Each point in this figure is the average of 5 independent trainings with the same network size, in order to mitigate the effect of the random weight initialization and of the random example selection from the same distribution. We see that, with few objects ($\sim 100$) in the training dataset, increasing the number of neurons fails to improve the results. This is due to the number of weights in the network quickly being of the same order as the number of examples. In this regime the network prediction is highly unstable, the weight updates being greatly underconstrained. The case with $\sim 400$ objects is more satisfying, with a quick reduction of the error up to 4 neurons. It then stabilizes at a constant value, with small variations when adding more neurons. The last case is with $\sim 1000$ objects. With only three neurons it is already more efficient than the previous cases, and it is near its optimum value with five neurons. This case also shows the smallest fluctuations when increasing the number of neurons, which indicates that the neurons are properly constrained thanks to the larger training dataset.\\ We note here that these results are highly dependent on the other network parameters that are discussed in the following sections, such as the learning rate, activation function, number of layers, weight initialization, etc., and are very problem specific. Moreover, as we did in the simple example of Figure~\ref{nb_neurons_opt}, it is often necessary to perform the same training several times with the exact same parameters, to see if the random selection of weights affects the result, and to compute an average, and ideally a dispersion. However, a strong limitation of this approach is that, as the number of hyperparameters increases, the time required to train the network becomes very large, and it can become completely unrealistic to fully explore the impact of each hyperparameter on the results. This is why it is very common to find widespread architectures that are known to work nicely on many problems and that are reused blindly for other applications, with an exploration of parameters reduced to the learning rate and a few other parameters. This also implies that the network architectures used are often over-sized for the considered problem, which can lead to other difficulties, as discussed in Section \ref{sect_overtraining}, and requires special care to still work properly on too-simple problems. \subsubsection{Learning rate} \label{learning_rate} \begin{figure}[!t] \centering \includegraphics[width=0.495\hsize]{images/learn_rate_path.pdf} \includegraphics[width=0.495\hsize]{images/learn_rate_local_minima.pdf} \caption[Error value in the weight space]{Error value in the weight space. {\it Left}: Simple binary separation between two classes, from the example in Figure~\ref{bias_illus}. The background color represents the logarithm of the error of the neuron, averaged over all objects in the dataset, as a function of the 2D weight space of the neuron (excluding the bias node). Red and purple lines represent the first 20 weight values of the neuron during learning, with $\eta = 0.1$ and $\eta = 0.32$, respectively. {\it Right}: Example of the 2D weight space of a neuron in the hidden layer used for Figure~\ref{mlp_illus_class}, separating three classes in a two-dimensional feature space.} \label{weight_spaces} \vspace{-0.2cm} \end{figure} The learning rate $\eta$ allows one to scale the weight updates in equations~(\ref{eq_update_perceptron}) and (\ref{eq_grad_desc}). This is useful to adapt the weight updates to the granularity of the feature space and of the various weight spaces. Lower values of $\eta$ increase the stability of the learning process, at the cost of a lower speed and a higher chance for the system to get stuck in a local minimum. Conversely, larger values increase the speed of the learning process and its ability to roam from one minimum to another, but too-large values might prevent it from converging to a good but narrow minimum. It should be noted that the correction also scales with the input value $X_i$, correcting more strongly the weights of the inputs that are most responsible for the neuron activation. \\ The effect of the learning rate value is illustrated in the left frame of Figure~\ref{weight_spaces}, which shows the error of a neuron performing a linear separation between two classes in the two-dimensional weight space of that neuron. Two learning rates are compared.
A small one at $\eta = 0.1$ leads to an efficient path in the weight space, illustrated by the red arrows, down to the minimum value. The second case with $\eta = 0.32$ leads to a zigzag path with very inefficient updates that overshoot the optimal value. In the best-case scenario, a too-large learning rate will only delay the convergence, but in most cases with a complex weight space it will not find the best minimum value. We note that values of $\eta$ larger than 0.32 most of the time induce divergence in this example, preventing the neuron from learning anything useful.\\ The right frame of Figure~\ref{weight_spaces} shows a more complex error distribution within a two-dimensional weight space, corresponding to one of the neurons in the hidden layer of the example given in Section \ref{mlp_sect} and Figure~\ref{mlp_illus_class}. In this case the hidden neuron activation is recombined with many other sigmoid neurons, which produces a complex error function. It is important to note that this is a very partial visualization. In order to obtain this error in the weight space, we had to freeze all the network weights except those of the considered neuron. Indeed, this is a "snapshot" representation of the optimal error. During the full network training all the weights vary toward the optimal error, which produces a significantly different optimal-error distribution in the weight space at the next iteration. The weights are then expected to converge and reach the optimum value; at this stage, the error distribution within the weight space becomes stable.\\ These examples show the importance of carefully choosing the learning rate value. However, a fixed learning rate is also a strong limitation. One easy addition consists in setting a large initial learning rate and a lower final learning rate, with a progressive decrease between the two during training. This allows the network to quickly reach a part of the weight space that is near the optimum value, and then to have a smaller learning rate to properly resolve the optimal values. It is common to adopt an exponential learning rate decay \citep[for details, see][]{schedulers18}, which is the approach implemented in our framework CIANNA (see Sect.~\ref{cnn_hyperparameters}, Eq.~\ref{eq_decay}). Following the same idea, advanced learning rate algorithms are becoming more and more common. The aim is to dynamically change the learning rate to follow the evolution of the weight-update scale. Such methods aim at always providing the optimal learning rate regarding the current state of the network \citep{kingma2014}.\\
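As an illustration, a generic exponential interpolation between an initial and a final learning rate can be written as below; this is only a sketch of the idea, and the exact expression implemented in CIANNA is the one given by Eq.~\ref{eq_decay}.
\begin{verbatim}
def decayed_lr(eta_start, eta_end, step, decay_steps):
    # exponential interpolation between two learning rates
    r = min(step / decay_steps, 1.0)
    return eta_start * (eta_end / eta_start) ** r

for step in (0, 250, 500, 750, 1000):
    print(step, decayed_lr(0.1, 0.001, step, 1000))
\end{verbatim}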
It should be noted here that there is no such thing as a perfect set of weight values. There are many sets of weights that can achieve a similarly good prediction. This does not prevent any of these weight-set solutions from being completely stable once the network has converged. This is a classic criticism against ANN methods, which are accused of being non-physical and impossible to interpret. However, there are many tools and techniques to link the input features to the predicted output by following the weights through the network. For example, assessing whether there is a continuous path of large weight values between the inputs and outputs might help determine which inputs are useful to the network and which are not. Another common heuristic is to remove some input features to see if it impacts the prediction quality, and infer the relative importance of each feature. Some applications also use an auto-encoder network to find the smallest number of dimensions necessary to represent all the input features, and draw insightful representations of the feature space in this reduced space \citep{Bengio_2012}. \subsubsection{Weight initialization} \label{weight_init} We mentioned in Section \ref{neuron_learn} that the weights must be initialized to "small random values", but we did not explain why. Considering a network where all weights were equal, for example to zero, all the neuron activations would be identical, and so would be the weight updates and their propagation to the previous layers. This means that the corresponding network would only be able to solve problems that work with all the weights being equal. This is a symmetry issue. To break such a behavior the weights are randomly initialized. However, all initializations are not equal. An initialization must guarantee that the weights are large enough to learn, and small enough to avoid divergence of the weights when the error of the neuron is large. Additionally, layers with many neurons will sum many individual errors. This can lead layers to learn at different paces, which increases the chance that the network weights get stuck in a local minimum, or at least slows down the network convergence. In the worst scenario, layers can even be stuck for many epochs in a saturated state where all neurons remain active, which is identical for all inputs, stopping the learning of all subsequent layers \citep{glorot_understanding_2010}. Therefore, it is often good practice to scale the weight matrix between two layers according to the size of the upstream one. A common weight initialization is then a random uniform value in the range: \begin{equation} -1/\sqrt{N} < \omega < 1/\sqrt{N} \end{equation} where $N$ is the number of neurons in the "input" layer. This way, the weight matrix keeps a zero mean with a dispersion that is properly scaled to the expected error on this layer.\\ On the other hand, the efficiency of a specific weight initialization is highly dependent on the chosen activation function. This is mainly due to the differences in the mean value of the activation. For example a sigmoid activation has a 0.5 mean value, as does a binary activation, while a linear activation has a zero mean, as does a hyperbolic-tangent activation. Additionally, it depends on the depth of the network, correlated with the activation function in a non-trivial way. More information about the effect of specific weight initializations regarding all those parameters can be found in \citet{glorot_understanding_2010} and \citet{he_delving_2015}, along with the much more modern and widespread Xavier and He-et-al initialization methods. It is worth noting that the {\bf weight initialization is not just a small improvement} and can make the difference between a network that is able to learn something interesting and another that will not learn anything and diverge. A vast part of the improvement of the ANN methods over the last two decades came from changes in favor of newer combinations of weight initialization and activation function.\\
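In practice, this initialization is a one-liner; a sketch for one weight matrix, with arbitrary layer sizes, could read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def init_layer(n_in, n_out):
    # uniform values in [-1/sqrt(N), 1/sqrt(N)], N = upstream layer size
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_in, n_out))

V = init_layer(3, 8)    # e.g. 2 features + bias node toward 8 hidden neurons
\end{verbatim}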
\vspace{-0.6cm} \subsubsection{Input data normalization} \label{input_norm} The size of the weight updates is not only shaped by the weights themselves, but also by the previous layer activation value, and ultimately by the value of the input features. This is visible in equations~(\ref{eq_update_neuron}) and (\ref{eq_update_first}) and is implicit in equation~(\ref{eq_update_full_network}). Therefore, it is important to scale the features to be of the same order as a typical layer activation. Otherwise, it could cause saturation, or weight updates too large to converge smoothly, similarly to what can be observed with a too-large learning rate (Sect.~\ref{learning_rate}). Additionally, all the features do not necessarily have the same range of values. For example, a feature that represents the age of a star will have a range of values between a few million and a few billion years, while a stellar apparent magnitude will be in the range of a few tens. This results in a meaningless dominance of the numerically-larger features over the activation of the neurons and the weight updates. \\ \vspace{-0.1cm} A widespread solution is to normalize each feature individually, for example in an interval of $-1$ to $+1$ with a zero mean. This can be done by subtracting the mean value of a feature and then dividing by the new absolute maximum value. The network can then start with an equivalent importance of each feature, which we observed to be an efficient solution. Additionally, this allows the weights to be of the same order of magnitude for each feature, which tends to strongly stabilize the weight updates. It is usual to scale the inputs in a range similar to that achieved by the typical activation function used in the corresponding network, which leads to the same weight initialization for all the layers. We note that the normalization solutions we presented here are mostly suited for sigmoid-activated neurons with $\beta=1$ and a $-1$ bias value, as described in the previous sections. \vspace{-0.2cm} \subsubsection{Weight decay} \label{weight_decay} We already established that the weights should be non-equal to break symmetries, should have a zero mean to avoid initial direction bias, and should be small enough to prevent saturation. However, there is an additional benefit of having very small weights in some cases, because small weights are more likely to induce sigmoid neurons to be in their middle linear phase, since $g(0) = 0.5$. It is interesting to keep as many neurons in a linear state as possible to ease the error propagation, as it is also where the sigmoid derivative reaches its maximum and where the "vanishing gradient" effects are the lowest. Ultimately, the network should use as few non-linear neurons as possible to represent the necessary non-linearity of the problem. \\ \vspace{-0.1cm} To achieve such a behavior, it is common to add a weight decay \citep{Hanson_89} to the network algorithm. The simplest one consists in multiplying all the weights of the network by a decay factor $ 0 < d_c < 1$, generally very close to 1, after each epoch. This way the network will only keep large weights where it is absolutely necessary. Here, the natural dependency on the activation function is obvious, meaning that such an improvement would not be necessary when using an activation function that behaves more like a linear activation. As before, very advanced weight decay methods can be found in some modern ANN architectures \citep{kingma2014}. It is worth noting that this technique can be used as a regularization (see Sect.~\ref{dropout_sect}) that helps to prevent overtraining on noisy data \citep{gupta_98}.
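Both of these housekeeping steps are only a few lines in practice. The sketch below, with illustrative values only, normalizes each feature once before training and applies a simple multiplicative decay after each epoch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def normalize(X):
    # center each feature, then scale it into [-1, 1]
    Xc = X - X.mean(axis=0)
    return Xc / np.abs(Xc).max(axis=0)

# e.g. stellar ages [yr] and apparent magnitudes on very different scales
X = np.array([[1.0e6, 20.5], [5.0e8, 18.0], [1.0e9, 22.0]])
X = normalize(X)

d_c = 0.999                       # decay factor, very close to 1
W = rng.uniform(-0.1, 0.1, size=(3, 8))
for epoch in range(100):
    # ... regular weight updates for this epoch, then:
    W *= d_c                      # simple multiplicative weight decay
\end{verbatim}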
\subsubsection{Monitor overtraining} \label{sect_overtraining} As we began to expose in the previous sections, tuning a network to make it learn can be tedious. But there can be difficulties even with a perfectly tuned network, a noticeable one being the overtraining of the network. In the most common scheme, the network weights start at a random inappropriate position and slowly converge to an optimal value. After this point, the learning process starts to over-fit the training data. This means that it starts to learn the specificities of the dataset. In principle, this would not be an issue with an infinite dataset that perfectly covers the appropriate feature space. In a real case, the data distribution will always have gaps in the feature space that the network might try to fit, making it necessary to monitor the learning phase. A widely adopted solution consists in monitoring the overtraining using additional datasets. Most commonly, the original dataset with labels is split into several parts: \begin{itemize} \setlength\itemsep{0.2em} \item A {\bf training dataset} that contains the vast majority of the objects and that is used to effectively train the algorithm. \item A {\bf validation set} that is used regularly during the training phase to compute an error, but without updating the weights, and enables one to monitor the training process. \item A {\bf test dataset} that is used after the training phase to assess the quality of the generalization on data that were not seen during the training phase or the validation phase. \end{itemize} In practice, during the training, the errors of both the training and validation datasets are computed. Overall, the error on the training dataset tends to decrease down to an asymptotic value, around which it may fluctuate, regardless of whether the network is over-trained or not. In contrast, the error on the validation dataset generally varies in two phases. In a first phase, the error decreases, similarly to what is observed for the training set. In a second phase, when the network starts over-training, the error on the validation set starts to rise. Therefore, the training of the network must be stopped close to this point, which is often done by monitoring the error on the validation set over several consecutive steps.\\ \begin{figure*}[!t] \centering \begin{subfigure}[!t]{0.65\textwidth} \centering \includegraphics[width=\hsize]{images/overtrain_error.pdf} \end{subfigure} \begin{subfigure}[!t]{0.63\textwidth} \centering \includegraphics[width=\hsize]{images/nicetrain_fit.pdf} \end{subfigure} \begin{subfigure}[!t]{0.65\textwidth} \centering \includegraphics[width=\hsize]{images/overtrain_fit.pdf} \end{subfigure} \caption[Effect of overtraining on a one dimensional regression]{Effect of overtraining on a one-dimensional regression problem using an oversized network. {\it Top}: Evolution of the output error, averaged over the corresponding dataset, as a function of the number of epochs. {\it Middle}: Network prediction for the optimal epoch $n_e = 40000$. The points represent the training and validation datasets in blue and orange, respectively. The original function is the black dashed line, while the network prediction is the continuous black line. {\it Bottom}: Network prediction for the last epoch $n_e = 192000$.} \label{overtrain_figs} \end{figure*} This behavior is illustrated in Figure~\ref{overtrain_figs} on a simple one-dimensional regression example.
In order to ease the reading by exaggerating the effect, we deliberately used a very oversized network composed of two hidden layers of 32 sigmoid neurons each. Furthermore, we added Gaussian noise to the original function to increase the disparity of the data selection. Half of the data are used as the training set, and the other half as the validation set. The bottom frame shows the final network prediction at epoch $n_e=192000$. The strong over-training is visible, since the prediction follows almost every point in the training dataset, losing track of the general shape of the function. If the network is trained for a very large number of epochs, the prediction ends up being only linear links between the training data, using the sigmoid neurons only in their linear regime. The top frame illustrates the evolution of the errors of both the training and validation sets. It shows the specific point where the two errors start to diverge, and where the training should have been stopped, represented in the middle frame, despite the error of the training set being far from its minimum value. Several breaking steps in the error curve are due to the high granularity of this specific problem and to new sets of neurons reaching their linear state to fit an outlier point.\\ Usual distribution proportions between the subsets are 50:25:25, or 60:20:20. However, this strongly depends on the original size of the labeled dataset. For very large datasets, the training set is usually reduced to prevent the algorithms from using too much memory, with little to no effect on the prediction quality. Large datasets can also be split into many subdatasets, from which the training, validation and test sets are randomly picked. This allows one to repeat several trainings with different datasets and select the most efficient one, which is the basis of Bootstrap and Boosting methods \citep{efron_86, freund_97}, and even allows one to combine several networks to make a decision by committee \citep{xu_92}.\\ Since very complex problems require a lot of neurons and weights, and therefore a lot of training data, it is more common that the limitation of the generalization capacity comes from having too few examples. In such conditions, it is preferable to have most of the data in the training set. Precautions must then be taken to avoid too-small test and validation sets that are not representative of the feature-space coverage of the problem, or that even suffer from small-number effects. In extreme cases, the test and validation sets can be merged into a single dataset that fulfills both roles \citep[e.g.][]{lecun-98,Bishop:2006:PRM:1162264}. However, additional precautions should be taken to ensure that the network does not stop training at a point that is overly favorable to this dataset, for example by verifying that the predictions on the training and test sets are similar (see Sect.~\ref{training_test_datasets}).
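The corresponding monitoring loop is straightforward to sketch; in the hypothetical skeleton below, the split proportions, the patience value and the placeholder validation error stand in for problem-specific quantities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

# 60:20:20 split of a labeled dataset of 1000 objects
idx = rng.permutation(1000)
train, valid, test = np.split(idx, [600, 800])

best_err, patience, bad_steps = np.inf, 20, 0
for epoch in range(10000):
    # ... one training epoch on the train subset, then:
    val_err = rng.random()        # placeholder for the validation error
    if val_err < best_err:
        best_err, bad_steps = val_err, 0     # keep the best weights here
    else:
        bad_steps += 1
        if bad_steps > patience:  # error rising over several consecutive
            break                 # monitoring steps: stop the training
\end{verbatim}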
\subsubsection{Shuffle and gradient descent schemes} \label{descent_schemes} Since the training set must be shown numerous times, the order of the objects and the frequency of the weight updates are two important parameters. In the previous sections we assumed, following equations~(\ref{eq_update_perceptron}) and (\ref{eq_grad_desc}), that the network training process uses one object of the training sample at a time, computes the corresponding activation and corrects the weights accordingly, before switching to the next object. This usual approach is called "on-line"; however, it can greatly suffer from ordering effects. Considering a simple one-dimensional regression example, if the data are given sorted, many of the first update steps will be biased toward the first part of the feature space. Because the first steps are usually larger, this can trap the network weights in a local minimum for a long time, only fitting part of the problem. Such ordering effects can have undesirable consequences in a vast variety of applications. Therefore, it is advised to periodically shuffle the training dataset, up to after each epoch.\\ An additional difficulty raised by this effect is that each object pulls the weights toward its own {\it current} minimum. It might happen that objects push the network weights in opposite ways and slow the training process down. This scheme is also very sensitive to other network aspects, like the weight initialization, and it often produces a noisy convergence toward the optimum value. For that reason, a widespread approach is to perform a so-called "Batch training". It consists in performing a forward step, therefore a prediction, for all the objects in the dataset, then summing or averaging the individual errors to perform a single weight update. It allows the network to better follow the general trend, and it often converges faster in the direction of the global error, with much less noise in the prediction than the on-line approach. It is also less sensitive to initialization and normalization effects. It additionally removes all ordering effects, and can be written in a much more computationally-efficient way (Sect.~\ref{matrix_formal}). However, it was demonstrated that the on-line approach can be more safely used with larger learning rates, and that it better resolves the error surface, leading to a much faster global convergence \citep{wilson_general_2003}. This implies that batch learning often takes many more epochs to learn, and is therefore inefficient in terms of the number of computations required to converge, despite being quicker to perform them. This is the reason why on-line training is still very common, especially when using lighter hardware.\\ An approach related to the on-line scheme is the Stochastic Gradient Descent (SGD), which randomly picks a training element in the dataset, uses it to perform one weight update, and then repeats the process until convergence. This is a draw with replacement, which removes the necessity to shuffle and obsoletes the notion of batch. It has proven to be very efficient on large datasets and on light hardware. It usually minimizes the number of computation operations to be performed to reach convergence.\\ Considering this information, a recent and broadly adopted hybrid scheme is the "mini-batch" scheme. It merges the best of both worlds by splitting the training dataset into several small groups that are used together to perform a weight update. It allows one to keep many small updates, as in the on-line approach, and it reduces the noise in the convergence by averaging a few errors. It can be implemented in two different ways, as sketched below: (i) either by selecting a few random objects to construct the next mini-batch, in an SGD fashion, or (ii) by splitting the whole dataset into groups that are used subsequently, then shuffling the complete dataset at each epoch to compose different groups.
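A minimal sketch of variant (ii), with a placeholder gradient standing in for the actual forward and backward computations, could read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 2))        # placeholder training set
w = np.zeros(2)
eta, batch_size = 0.05, 32

for epoch in range(10):
    order = rng.permutation(len(X))   # new groups at each epoch (variant ii)
    for start in range(0, len(X), batch_size):
        batch = X[order[start:start + batch_size]]
        grad = batch.mean(axis=0)     # stand-in for the averaged error
        w -= eta * grad               # a single update for the whole group
\end{verbatim}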
This scheme is implemented in most of the popular frameworks, as it has proven to have a good efficiency, both in terms of the number of required computations and in terms of computational performance. In our framework CIANNA, the mini-batch is the default scheme and the size of the batches is a tunable parameter. The framework is also able to perform batch and SGD trainings if specified, using specific arguments. \\ \subsubsection{Momentum conservation} \label{sect_momentum} Another optimization, still to avoid local minima, consists in adding a "momentum". This is a classic speed-up method for various iterative problems \citep{polyak_methods_1964}, which has been updated for modern problems \citep{qian_momentum_1999}. It consists in adding a fraction of the previous weight update to the next one during the training phase. This memory of the previous steps helps keep a global direction during the training, especially in the first steps. This additional inertia also prevents the network from staying stuck in local minima without resorting to a higher learning rate (Section \ref{learning_rate}), and usually allows much faster convergence.\\ Moreover, it can be seen as another form of adaptive learning rate. Indeed, it usually increases the weight updates in the first training steps, when the weights are the farthest away from their optimum values. Once closer to the global minimum error, the inertia reduces, going back to smaller updates. Therefore, it allows a faster training even when taking a smaller learning rate. It also helps reduce the spread between repeated trainings. Again, there are many different implementations, but a simple one is to add an inertia term to the update equation~(\ref{eq_grad_desc}), which can then be expressed as: \begin{equation} \omega^{t}_{ij} \leftarrow \omega^{t-1}_{ij} - \Delta\omega^{t}_{ij}, \end{equation} where $\omega_{ij}$ is the weight matrix between two layers, $t$ marks the value for the current epoch update, and $\Delta\omega_{ij}$ is the computed weight update. The latter is expressed with the momentum term as: \begin{equation} \Delta \omega^{t}_{ij} = \eta \frac{\partial E}{\partial \omega^{t-1}_{ij}} + \alpha \Delta \omega^{t-1}_{ij}, \end{equation} with $E$ the propagated error at the current layer, and where the hyperparameter $0 < \alpha < 1$ scales the momentum. Usual values are often between $0.6$ and $0.9$.
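On a toy quadratic error, these two equations translate directly into the following sketch; the gradient, learning rate and momentum values are arbitrary.
\begin{verbatim}
import numpy as np

eta, alpha = 0.05, 0.9            # learning rate and momentum scale
w = np.zeros(2)
dw_prev = np.zeros_like(w)        # memory of the previous update

def grad(w):                      # toy error gradient: E = |w - 1|^2 / 2
    return w - 1.0

for step in range(100):
    dw = eta * grad(w) + alpha * dw_prev    # update with the inertia term
    w -= dw
    dw_prev = dw
print(w)                          # converges toward the minimum at (1, 1)
\end{verbatim}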
\subsection{Matrix formalism and GPU programming} \label{matrix_formal} \subsubsection{Hardware considerations for matrix operations} We mentioned in Section~\ref{descent_schemes} that the batch, and consequently the mini-batch, gradient descent scheme is computationally more efficient. This is mainly due to the fact that batch methods can very efficiently be converted into matrix operations, mostly matrix multiplications. Indeed, matrix operations are extremely quick to compute, compared to scalar operations, regarding the raw number of computations to be performed. Matrix operations have been in the landscape of numerical computing for decades, and have been identified, especially by data and computer scientists, as computationally intensive while being necessary for a very wide range of workflows. Additionally, many computing operations, even if not explicitly written this way, can be and are computed as matrix operations. The very nature of computer hardware optimization (memory layout, batched operations, several operations per cycle, ...) follows the scheme of vector and matrix operations. Therefore, an enormous quantity of work and experience has been accumulated toward the optimization of matrix operations through the years. At some point, this began in turn to influence hardware technology, to make a better use of matrix operations and to express more applications in this formalism \citep{Du_2012, Cook_2013}.\\ The very expression of these optimizations is how matrix operations can be described as highly parallel operations, taking advantage of vector co-processors in Central Processing Units (CPU) and of the rising core count on a single processor chip (or multiple CPU chips on a single motherboard). Indeed, matrix computations can be efficiently expressed as Single Instruction Multiple Data (SIMD) operations. They also take strong advantage of the very quick low-level memory in CPUs, as the same data are used several times in a row. In summary, matrix operations influence and take advantage of many hardware novelties and optimizations. \\ The most extreme example of this situation is the Graphical Processing Unit (GPU), which is almost dedicated to SIMD operations. As their name indicates, these chips are designed for graphical applications that rely on a pixel formalism. Therefore, dealing with images is intrinsically a matter of tables, or matrices, of pixels. For example: dimming an image is equivalent to applying a factor to all pixel values, which is a pure SIMD operation; smoothing an image consists in the application of a filter to many zones of pixels in the image, which is made of many small matrix multiplications; the rotation of an image is also a SIMD operation that transforms input coordinates into others; etc. For this purpose, GPUs are built in the form of a very large number of very light compute units, or cores, with much more layered cache levels than CPUs. These cores are slow in terms of clock speed, which is around a GHz for high-end GPUs while it can be above 5\,GHz for modern CPUs, and in terms of instructions per clock cycle and of general-purpose capabilities. Noticeably, GPU cores cannot perform operations on double-precision real numbers, or do so with very poor performance (excluding the most advanced professional or science-dedicated ones), and are limited to single-precision float and integer operations. Such cores are simple and small enough to be stacked in large numbers on a single chip, along with large amounts of very fast dedicated memory, allowing GPUs to reach accumulated performances that are orders of magnitude above regular CPUs. However, this makes such chips very application-specific, while regular CPUs are able to handle much more diverse tasks, like running an Operating System. This is why GPUs rely on a CPU to handle the general programming and are plugged as accelerators into more classical computer systems. For all these reasons, they are the most efficient way to speed up matrix operations to this date. \newpage \subsubsection{Artificial Neural networks as matrix operations} \label{ANN_as_matrix} \afterpage{% \clearpage \ifodd\value{page} \expandafter\afterpage \fi { \begin{sidewaysfigure} \centering \includegraphics[width=0.80\hsize]{images/matricial_form_fig.pdf} \caption[Graphical representation of the matrix batch training]{Graphical representation of the matrix operations involved in the (mini-)batch training of a single hidden layer network. The large red arrows indicate the order in which the operations are performed. The result of a matrix multiplication is then used directly to perform the next operation.
Red arrows inside matrices indicate the activation function and are performed before the next operation. Large $\times$ symbols are matrix multiplications, while $\circ$ symbols stand for element-wise multiplication. The matrix sizes are as follows: $b$ is the batch size, $m$ is the number of input dimensions, $n$ is the number of neurons in the hidden layer, and $o$ is the number of output neurons.} \label{matricial_form_fig} \end{sidewaysfigure} } } All the equations of Section \ref{global_ann_section} can be expressed as matrix operations relatively easily. Assuming a batch or mini-batch approach, the input vector is replaced by an input matrix $X_{bm}$ with $m$ input dimensions and $b$ training objects. Then, following equation~(\ref{weighted_sum}), it is multiplied with the weight matrix, which produces a matrix of all the sums $h_{bn}$ for $n$ neurons, where each element has to go through the activation function to produce all the activations of the first layer, corresponding to all the inputs in the current batch. This operation can then be repeated for each layer up to the end of the network, where an output matrix $a_{bn}$ is produced. It can then be compared to a target matrix using the selected error function, to produce an error matrix. Following the same approach, the back propagation can be done in a similar matrix formalism. However, it must be noted that during the back propagation either the current weight matrix or the produced $\delta_{bn}$ must be transposed to respect the ordering in equation~(\ref{eq_update_full_network}) when propagating the error. Similarly, either the current $\delta_{bn}$ or the current activation $a_{bn}$ must be transposed to respect the ordering when computing the weight updates.\\ One specific point in this formalism is the inclusion of the bias node. In many methods that use the matrix formalism, it was decided to include it as a subsequent addition to the neuron weighted sum, as evoked in Section~\ref{bias_node}. However, this implementation induces performance penalties when using GPUs, as explained in the following Section~\ref{gpus_prog}. The difficulty with our approach of an extra node acting as a supplementary constant feature is that, even if it works on the input layer by having a column of bias values, the multiplication of this extended input matrix by the weight matrix creates a hidden layer activation without a bias node. One could solve this by appending a bias node column to the hidden activations, but this would result in even more performance penalties than the ones we are trying to avoid. A possible solution is then to also add an extra column in each weight matrix that is full of zeros except for one value set to 1. Then the weighted sum of the input automatically produces an extra column of bias with the same value as the input column.\\ The full procedure, including the forward pass, the error back-propagation, and the weight update, is illustrated in Figure~\ref{matricial_form_fig}, which shows the successive operations performed on the input data and the intermediate network results. We note that matrix multiplications are symbolized with $\times$ while the element-wise multiplication is denoted by $\circ$. This figure also illustrates the addition of the bias value using the previously described methodology in the input $X$ and weight matrix $V$, but not on the output layer for which it is unnecessary. In the back-propagation part, the red column in $\delta_{hid}$ is the echo of the extra bias column in $h_{hid}$ that comes from matrix size conservation through the process, but it does not contain meaningful information. This vector is then manually set to $0$ to preserve the extra column values in $\Delta V$ that enable the bias propagation. This solution to include the bias directly in the matrix could be seen as an unnecessary additional complexity and increase in memory usage. In practice, the difference in time needed for matrix multiplications, with or without the added bias, is completely negligible, and the memory usage increase is very marginal for a common application. But when considering the operation launch time overhead on a GPU, our solution is much more efficient since it requires significantly fewer operation launches than a subsequent bias addition, resulting in a very significant computation time reduction when using small batch sizes. Ultimately, this approach can be implemented using an efficient Basic Linear Algebra Subprograms (BLAS) library, which performs efficient multi-threaded matrix operations, like OpenBLAS, or a similar GPU library like cuBLAS.
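To make this more concrete, here is a minimal C sketch of a single-layer forward pass in this batch matrix formalism, including the bias-column trick described above. The naive triple loop stands in for the BLAS call (e.g. an \texttt{sgemm}) that would be used in practice, and the guard that keeps the bias column intact through the activation is one possible design choice; all sizes and names are illustrative:
\begin{verbatim}
#include <math.h>

#define B 4   /* batch size                  */
#define M 3   /* input dimensions + bias     */
#define N 5   /* hidden neurons + bias column */

/* X is B x M with its last column filled with the bias value.
   V is M x N with its last column all zeros except V[M-1][N-1] = 1,
   so that H = X * V automatically carries the bias column along. */
void forward_layer(const float X[B][M], const float V[M][N],
                   float H[B][N])
{
    for (int i = 0; i < B; i++) {
        for (int j = 0; j < N; j++) {
            float sum = 0.0f;
            for (int k = 0; k < M; k++)
                sum += X[i][k] * V[k][j];
            /* sigmoid activation, skipped on the propagated
               bias column to preserve its constant value */
            H[i][j] = (j == N - 1) ? sum
                                   : 1.0f / (1.0f + expf(-sum));
        }
    }
}
\end{verbatim}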
\newpage \subsubsection{GPUs variety} \label{gpus_variety} It is important to note that, historically, GPUs were not programmable hardware. They were completely closed, with given APIs that only allowed a limited number of tasks. The first uses of GPUs for computation were then some sort of "hacks" where a problem was reformulated into operations on images so that the GPU could work on it. While the ancestors of GPUs originate in the 70s, it is only in the early 2000s that General-Purpose computing on Graphical Processing Units (GPGPU) began \citep{Du_2012}. GPUs noticeably became more efficient than CPUs on some matrix operations in 2005 \citep{Galoppo_2015}.\\ At the moment, there are several manufacturers of GPUs, which leads to important variations in hardware architectures and software development tools. We list here the three major GPU manufacturers along with some of their specificities:\\ {\bf \large Intel}\vspace{-0.4cm}\\ Mostly known for their CPUs, they are in fact the company that sells the largest number of GPUs, in the form of integrated GPUs (iGPU) in their CPUs. However, despite this position, they do not permit GPGPU! Indeed, the iGPU format is not the most adapted one for this approach as it shares resources with the CPU. However, Intel has recently made large investments in dedicated GPU solutions in order to deliver them to the market, probably in the coming years, and these would be suitable for GPGPU. We still note that the manufacturer has already made such attempts in the past that were not successful enough to be released. We also note that Intel has led some part of the research on "many cores" CPUs, like their Xeon Phi lineup, which is however slowly being discarded in favor of GPUs.\vspace{+0.2cm}\\ {\bf \large AMD}\vspace{-0.4cm}\\ This manufacturer is mostly known in GPU technology through its sub-brand Radeon. AMD is also a very well-known brand for its CPUs; however, unlike Intel, it produces both iGPUs and dedicated GPUs. Regarding GPGPU, the company is strongly invested in the development of OpenCL (Open Computing Language), which is an open source framework mostly written in C that allows a very specific way of parallel computing.
Indeed, the aim of OpenCL is to be usable on any device, taking advantage of many various hardware types: CPUs (even ARM CPUs), GPUs, Digital Signal Processors (DSP), Field-Programmable Gate Arrays (FPGA), ... It aims at unifying the programming to automatically use all these hardware types. While this specificity, along with the open source aspect of this solution, is appealing, it suffers from several caveats. Firstly, the OpenCL framework is known to be laborious to use. Secondly, this very general approach does not enable optimum performance on each hardware. For AMD GPUs, OpenCL is the only usable approach and still does not permit using them at their maximum performance. Good performances are reachable, though, but at the cost of a very verbose OpenCL programming instead of the common approach. A last point is that AMD does not have as much economic leverage as other companies, and therefore is often behind in terms of modern GPU techniques, despite being currently one of the most innovative brands regarding CPUs.\vspace{+0.2cm}\\ \newpage {\bf \large Nvidia}\vspace{-0.4cm}\\ This brand is the biggest one to be dedicated to GPU solutions (excluding some chips like the {\it Tegra} that are closer to a full System on a Chip, SoC). They are comparable to AMD in terms of volume of dedicated GPU sales, though usually slightly ahead. They also provide the most commonly adopted GPUs in professional and scientific environments, with dedicated lineups. They are the main provider of GPU solutions for supercomputers, being included in 5 of the top 10 world supercomputers, including the first two positions. This very specialized expertise allows them to be on the edge of many GPU technologies. Their GPGPU support relies on a dedicated programming language slightly derived from C++ that is called CUDA (Compute Unified Device Architecture, which stands both for the hardware architecture and the associated language), and which also comes with a dedicated compiler based on gcc. However, it is only supported by Nvidia GPUs, and therefore is much less general than OpenCL. Additionally, this solution is not open source (with the induced dependency effect argued in Section~\ref{tool_boxes}), while still allowing for a very low-level programming capacity. Many higher-level functionalities are also proposed through closed-source dedicated CUDA libraries, even if they can mostly be reproduced using the low-level CUDA language. It is worth noting that one can also use OpenCL on Nvidia GPUs, which often results in much lower performance considering the same time investment, but allows widespread OpenCL applications to work on such GPUs. Finally, Nvidia has a fairly aggressive approach on the professional market, where some CUDA ``Pro'' accelerated applications can only be used with their professional GPU lineup, which is often much more expensive.\\ \vspace{1cm} We note that there are a few other software solutions for GPGPU on Nvidia or AMD hardware, namely Microsoft DirectCompute, which relies on their graphical library DirectX, only available on the Windows OS; the latest versions of OpenMP, which also support GPU acceleration; and the OpenACC solution, which is very promising and progressively adopted by many recent super-computing infrastructures, and which relies on compiler directives to automatically convert regular loops into GPU-accelerated equivalents with very efficient performance.\\ Regarding the specificities of each solution, we opted for Nvidia CUDA.
First, we note that it was not motivated by the material at our disposal, since we had access to reasonable hardware from both AMD and Nvidia. The choice of Nvidia was mostly motivated by the greater potential of Nvidia hardware for GPGPU. Additionally, they are deeply involved in the field of AI, and more specifically in ANNs, with many dedicated optimizations for these specific workflows. Moreover, they are currently adding more and more dedicated computing sub-units alongside their CUDA cores that are even more dedicated to small and light (lower bit-size numbers) matrix operations, aiming at further improving neural network training speeds. These specific sub-units, namely the Tensor Cores, are present in the last two generations of Nvidia GPUs, and have been further improved in their upcoming Ampere architecture. Concurrently, the choice of the CUDA language has been made over OpenCL, again for performance efficiency, and because CUDA is necessary to make use of all the dedicated AI materials we just presented. With most of the large computing clusters having modern Nvidia GPUs, this appears as a strategic choice in anticipation of a potential future proposal to use large-scale GPU facilities with dedicated access for AI research, for example on the new \href{http://www.idris.fr/annonces/annonce-jean-zay-eng.html}{Jean Zay} supercomputer at GENCI (14 PFLOPS peak performance).\\ \newpage \subsubsection{Insights on GPU programming} \label{gpus_prog} We describe here a few aspects that are particularly important in our implementation in CIANNA (see Appendix~\ref{cianna_app}), but also for any GPU-accelerated ANN. Our framework is able to handle CPU matrix operations through OpenBLAS, but its CUDA BLAS (cuBLAS) implementation drives most of its development. First, matrix operations are very nicely handled in cuBLAS, with a huge amount of small optimizations that make use of the accumulated knowledge on matrix computation and of the dedicated hardware. It is important to note that using GPUs for matrix computation strongly drives the hardware development of Nvidia. The Tensor Cores are the best example, but the same goes for cache optimization, choice of precision capabilities, memory bandwidth, memory-core layouts, etc. The general CUDA programming is used to construct extremely parallel kernels (i.e. individual functions that are executed simultaneously by many individual CUDA cores on the GPU, corresponding to the SIMD formalism) for all the operations that are not the matrix multiplications described in the previous section and illustrated in Figure~\ref{matricial_form_fig}. For example, we wrote kernels for activation functions, element-wise derivative multiplication, etc. In practice, cuBLAS operations also rely on underlying kernels that are launched using the dedicated cuBLAS API, so we will most of the time refer to any function that executes on the GPU, including matrix operations, as a kernel.\\
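As an illustration of such a kernel, here is a minimal CUDA sketch of a sigmoid activation applied element-wise to a layer output stored as a flat device array. It is a simplified stand-in for the kind of kernels we describe, not the actual CIANNA code:
\begin{verbatim}
#include <cuda_runtime.h>
#include <math.h>

/* Each thread applies the sigmoid to one element of the
   flattened (batch_size x nb_neurons) activation matrix. */
__global__ void sigmoid_activation(float *h, int n_elem)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_elem)           /* guard for the last partial block */
        h[i] = 1.0f / (1.0f + expf(-h[i]));
}

/* Host-side launch: one thread per matrix element. */
void apply_sigmoid(float *d_h, int n_elem)
{
    int threads = 256;
    int blocks = (n_elem + threads - 1) / threads;
    sigmoid_activation<<<blocks, threads>>>(d_h, n_elem);
}
\end{verbatim}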
Overall, the programming scheme of a CUDA application consists in declaring all the useful variables on the CPU side, in what is called the "host" RAM memory. Then, all the useful data are moved to the GPU (also called the "device") RAM memory. Even if modern GPUs have a very quick memory interface with the CPU, these data transfer operations still have an important time cost, therefore one must aim at minimizing them. For a neural network, it is easy to create the weights and leave them in the GPU memory. The same goes for many other network data, like the layer activations and the layer errors. Even when computing the error, it is common to send back to the host as little data as necessary to monitor the error evolution. The input data are trickier to manage because the size of the dataset can be large. With modern GPUs that have a very large dedicated memory, the dataset can be fully loaded on the GPU and all the training epochs can be performed there. When the dataset is too large, chunks of data must be sent to the GPU regularly. A nice property of GPUs is that memory transfers and computations can be launched concurrently without performance penalty. Therefore, one must aim at maximizing the overlap between the two, which can easily be done in this case. We note that this is considered as basic GPU programming knowledge and optimization by the \href{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html}{CUDA programming guide}.\\
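A simplified sketch of this overlap, using CUDA streams, is given below: it alternates between two device buffers, uploading the next chunk asynchronously while the current one is processed. The \texttt{train\_on\_chunk} kernel and all sizes are placeholders, and the host buffer would need to be allocated as pinned memory (\texttt{cudaMallocHost}) for the copy to be truly asynchronous:
\begin{verbatim}
#include <cuda_runtime.h>
#include <stddef.h>

/* Illustrative kernel standing in for a training pass on one chunk. */
__global__ void train_on_chunk(float *data, int n) { /* ... */ }

void train_chunked(const float *h_data, float *d_buf[2],
                   int n_chunks, int chunk_elems)
{
    cudaStream_t copy_s, comp_s;
    cudaStreamCreate(&copy_s);
    cudaStreamCreate(&comp_s);

    /* Upload the first chunk, then overlap: while chunk c is
       processed on comp_s, chunk c+1 is uploaded on copy_s
       into the other buffer. */
    cudaMemcpy(d_buf[0], h_data, chunk_elems * sizeof(float),
               cudaMemcpyHostToDevice);
    for (int c = 0; c < n_chunks; c++) {
        if (c + 1 < n_chunks)
            cudaMemcpyAsync(d_buf[(c + 1) % 2],
                            h_data + (size_t)(c + 1) * chunk_elems,
                            chunk_elems * sizeof(float),
                            cudaMemcpyHostToDevice, copy_s);
        train_on_chunk<<<64, 256, 0, comp_s>>>(d_buf[c % 2],
                                               chunk_elems);
        cudaDeviceSynchronize(); /* wait for both copy and compute */
    }
    cudaStreamDestroy(copy_s);
    cudaStreamDestroy(comp_s);
}
\end{verbatim}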
It is now the appropriate place to detail why the computation of the bias node can lead to a performance penalty (Sect.~\ref{ANN_as_matrix}). When using a GPU to perform the matrix operations of a neural network, the computations themselves are very efficient, and the same holds for the other kernel operations. However, because the CPU leads the execution of the program and orders the GPU to execute the various kernels, there is a launch latency of the order of a few tens of $\mathrm{\mu s}$. In most workflows that were used for years, it was not much of an issue since GPUs were used with very large kernels or to perform very large matrix operations, for example in numerical simulations. However, regarding ANNs, an epoch cannot be expressed efficiently as a single very large kernel because the activations of a layer depend on the activations of the previous one. Therefore, ANN implementations are a succession of many smaller matrix or kernel operations, and this is even worsened by the fact that they often require many epochs to learn. Obviously, this effect is additionally scaled when using a small mini-batch size, since an epoch is then split into several small forward and back-propagation passes on the network, vastly increasing the number of launch overheads. In this context, it has become very common for ANN frameworks to be fully bottlenecked by the kernel launch latency rather than by the compute performance of the GPU. Despite that, it has been much more efficient to use GPUs than CPUs for neural networks for several years now. But it remains a good idea to try to minimize this effect to get closer to the full performance of the GPU. Nvidia has invested a lot of effort in the last versions of CUDA (starting with 10.0) to reduce the kernel launch latency and to provide solutions that allow the program to become independent of the CPU by having groups of kernels executed several times (CUDA Graphs). In this case, the latency is reduced to the equivalent of only one kernel launch. However, reducing the number of kernels needed remains a good practice in order to obtain the best possible performance. This is what motivated our implementation of the bias node, which removes a kernel launch at each layer.\\ We also highlight that we could have used the cuDNN toolkit or cuFFT, which were designed exactly for such cases. They include many optimizations of this kind for many modern ANN practices. But, in contrast with cuBLAS, they usually impose constraints on the data format or the general approach. We therefore preferred to use cuBLAS to keep the freedom to try any unconventional method that could be appropriate for a future application of CIANNA.\\ Finally, we give here some details about the Tensor Cores that are present on the latest Nvidia GPU architectures (Volta, Turing and Ampere). They are dedicated computing units that are able to perform one size-specific matrix multiplication per GPU clock cycle. The Tensor Cores even incorporate a subsequent addition to the result in the same clock cycle to account for the bias addition. These cores work on numbers with lower precision (FP16, INT8 or even INT4, and more recently TF32, BF16, ...) because they are dedicated to ANN workflows where the precision is far less important. We recall that many early models of ANNs used binary neurons! Therefore, most of the network parameters and results can be safely expressed using lower precision numbers. However, the matrix operation itself makes a lot of sums, exposing the user to the accumulation of rounding errors more dangerously than with higher precision variables. This is why Tensor Cores are "Mixed Precision" compute units, meaning that they include an internal sum accumulator that uses at least twice the number of bits of the input data. They can even store the result of the multiplication of two half-precision matrices into a full precision one. It is then possible to construct very efficient "mixed precision" networks by using these new hardware cores wisely, which is described nicely by Nvidia in \citet{micikevicius2018mixed}. Unfortunately, we did not have access to such modern GPUs early enough for these capabilities to be present in CIANNA. However, we have since gained experience with them, and they will be included in the near future. \newpage \vspace{-0.3cm} \subsection{The specificities of classification} In this section we discuss some specificities of a classification application of ANNs, along with some necessary precautions, that will be useful for the classification of Young Stellar Objects in Section~\ref{yso_datasets_tuning}. We finish this subsection with an illustration of a few common classification examples using ANNs. \vspace{-0.3cm} \subsubsection{Probabilistic class prediction} \label{proba_class_intro} While it is possible to do a classification that has a "binary" behavior using neural networks, it is much more common to represent the membership of a class using a continuous value. This has the nice property that it can measure the degree of certainty of the network in its prediction. As exposed in Section \ref{sect_perceptron}, a common approach is to define the output layer to have one sigmoid neuron per class and a target with only the neuron of the expected class having a value of one, the others being set to zero. Then, after the training of the network, a prediction of $0.9$ for one neuron can lead us to consider the corresponding object as a reliable member of the corresponding class. Indeed, the neuron is far off the linear regime of the sigmoid, which indicates that it has accumulated a significant signal toward its activation. In contrast, such a neuron with an activation value close to $0.6$ would not be considered as a very reliable classification result, even if it is the highest activation of all output neurons. We note that the 1.0 and 0.0 targets are asymptotic values of the sigmoid, meaning that these perfect predictions are very unlikely to be reached.
To obtain a probability, an additional normalization over all the outputs is needed, so that the sum of the output values for one object is always 1 and each neuron provides a membership probability. This would give the network attributes of a Probabilistic Neural Network \citep[PNN; ][]{specht_probabilistic_1990, stinchcombe_universal_1989}. However, using sigmoid activation functions without normalization, there are no rules that prevent several output neurons from having simultaneously a value close to one, which might happen even if only one element of the target vector is set to one, due to random weight initialization. Such predictions should naturally disappear during the training process, but there is no strong condition that prevents such a case from happening after the training for an underconstrained input.\\ A more suitable activation function to perform a probabilistic classification is the so-called Softmax activation, also known as the normalized exponential \citep{Bridle_90}. It is expressed as: \begin{equation} \centering a_k = g(h_k) = \frac{\exp(h_k)}{\sum_{k^\prime=1}^{o}{\exp(h_{k^\prime})}}, \label{Softmax_activation} \end{equation} where $k$ is the neuron index in the output layer. Thanks to the normalization over all the output neurons, the $k$-th output neuron provides a real value between zero and one, which acts as a proxy for the membership probability of the input object in the $k$-th class. The global behavior of this activation is also considerably less prone to saturation effects (Sect.~\ref{weight_init}), and therefore should be able to get excellent predictions with reasonably small weights. But this function is not without caveats. If used with an inappropriate error function (as discussed in the next paragraph), it can push the weighted sums too high and therefore reach the overflow limit of the variables. This is specifically problematic for GPU implementations as they often use low-precision variables (e.g. FP32, FP16, or smaller, see Sect. \ref{gpus_prog}). A simple modification of the function is to subtract the maximum $h_k$ value from all the weighted sums: \begin{equation} a_k = g(h_k) = \frac{\exp(h_k - \max \{h_1, \dots, h_o\})}{\sum_{k^\prime=1}^{o}{\exp(h_{k^\prime} - \max \{h_1, \dots, h_o\})}} \label{mod_Softmax_activation} \end{equation} This allows us to shift the number comparison into a less numerically problematic part of the exponential, and almost always prevents the overflow.\\
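A direct C implementation of this stabilized Softmax for the output vector of a single object could look as follows (a minimal sketch, with names chosen to match the notation above):
\begin{verbatim}
#include <math.h>

/* Numerically stable Softmax over the o output neurons of one
   object: subtracting max(h) before the exponential avoids
   overflow, which matters with the low-precision variables
   common on GPUs. */
void softmax(const float *h, float *a, int o)
{
    float h_max = h[0], sum = 0.0f;
    for (int k = 1; k < o; k++)
        if (h[k] > h_max) h_max = h[k];
    for (int k = 0; k < o; k++) {
        a[k] = expf(h[k] - h_max);
        sum += a[k];
    }
    for (int k = 0; k < o; k++)
        a[k] /= sum;   /* outputs now sum to 1 */
}
\end{verbatim}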
On the other hand, the sum-of-square error is less suitable for a probability output, especially with Softmax. It is most of the time replaced by the so-called Cross-Entropy error (or loss), which is much more efficient for probabilistic outputs and produces a nice error propagation term when combined with the Softmax \citep{Bridle_90}. It is expressed as: \begin{equation} E = - \sum^{o}_{k=1}t_k \log(a_k), \label{cross_entropy_error} \end{equation} where $a_k$ are the activations of the output layer that counts $o$ neurons and $t_k$ the corresponding targets, both of which are probability values. It can then be combined with the derivative of the Softmax activation function: \begin{equation} \frac{\partial a_k}{ \partial h_{K} } = a_k(\delta_{k K } - a_K ) \label{soft_max_deriv} \end{equation} where $\delta_{k K }$ is the Kronecker symbol ($\delta_{k K } = 1$ if $k = K$ and 0 otherwise). Following equation~(\ref{eq_update_full_network}), the corresponding output error term is written as: \begin{equation} \delta_o(k) = a_k - t_k. \label{soft_max_output_error} \end{equation} Figure~\ref{errors_comparison} shows the difference between the cross-entropy and the sum-of-square errors on a probabilistic output for which the target is 1. It demonstrates that the cross-entropy quickly corrects the outputs that predict 0 when they should be 1. It also means that the output error term $\delta_o$ is linear, which allows a better error propagation and balance between the various outputs during the weight updates. This avoids attributing too much importance to specific output neuron regimes.\\ Finally, a probability per class does not directly define what is the predicted output class. The easiest method consists in selecting the highest probability, but more advanced methods also exist. For example, one can require that the maximum value must be higher than the sum of the others, or that it is higher than the second maximum by a certain amount. The most selective one would be to require that the output probability is above a defined threshold. All these methods exclude objects that are too "confused" to be considered as reliable, in order to improve the overall reliability of the selected objects.\\ However, it is important to emphasize again that this probability prediction only characterizes the network's estimated probability, reflecting how well it succeeded in fitting the presented distribution of objects in the feature space. It must not be used as a genuine probability, and selecting all objects with a neuron activation probability above $0.9$ does not necessarily mean that these objects have a $90\%$ probability of being of the corresponding class. In Section~\ref{proba_discussion}, we will use these activation probabilities as a criterion to disentangle the confused and reliable objects, but the actual probabilities will be estimated using the tools presented in the next section. \begin{figure}[!t] \centering \includegraphics[width=0.6\hsize]{images/errors_compare.pdf} \caption[Sum-of-square error against cross-entropy error]{Comparison between the behavior of the sum-of-square and cross-entropy error functions for a probabilistic output with a target of 1.} \label{errors_comparison} \end{figure} \newpage \subsubsection{The confusion matrix} \begin{table}[t] \centering \caption{Confusion matrix for a cats and dogs classification example.} \vspace{-0.1cm} \begin{tabularx}{0.55\hsize}{r l |*{2}{m}| r } \multicolumn{2}{c}{}& \multicolumn{2}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-5} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & Cat & Dog & Recall \\ \cmidrule(lr){2-5} & Cat & 93 & 7 & 93.0\% \\ & Dog & 14 & 86 & 86.0\% \\ \cmidrule(lr){2-5} & Precision & 86.9\% & 92.5\% & 89.5\%\\ \cmidrule[\heavyrulewidth](lr){2-5} \end{tabularx} \vspace{-0.05cm} \label{balanced_confmat} \end{table} We define here the concepts necessary to present results statistically and to characterize their quality. For this, it is common to use the so-called "confusion" matrix. It is defined as a two-dimensional table where the rows correspond to the targets, and the columns correspond to the classes predicted for the same objects by the classifier, in our context an ANN. A usual example is a classification between cats and dogs, as presented in Table~\ref{balanced_confmat} using made-up plausible results.
It shows the corresponding $2 \times 2$ confusion matrix, where the number $14$ that appears in the second row, first column, is the number of labeled dogs that were mistakenly classified as cats by the classifier. In this scheme, a $100\%$ "correct" classification gives a diagonal matrix, while the off-diagonal numbers count the misclassified objects. This representation directly provides a visual indication of the quality of the network classification. The confusion matrix allows us to define quality estimators for each class: \begin{equation} \centering \text{Recall} = \frac{TP}{TP+FN} \qquad \text{Precision} = \frac{TP}{TP+FP} \label{eq_conf} \end{equation} \vspace{-0.3cm} \begin{equation} \centering \text{Accuracy} = \frac{TP + TN}{TP+TN+FP+FN} \label{eq_accu} \end{equation} where: \begin{align*} & TP \equiv \text{True Positive} & TN \equiv \text{True Negative}\\ & FP \equiv \text{False Positive} & FN \equiv \text{False Negative}\\ \end{align*} The "recall" represents the proportion of objects from a given \textit{target} class that were correctly classified. The "precision" is a purity indicator of an output class. It represents the fraction of correctly classified objects in a \textit{predicted} class, as predicted by the network. Finally, the "accuracy" is a global quantity that gives the proportion of objects that are correctly classified with respect to the total number of objects. In our confusion matrix representation, we show the accuracy at the intersection of the recall column and the precision row. Limiting the result analysis to this latter quantity may be misleading, because it would hide the class-specific quality and would be strongly impacted by a possible imbalance between the output classes. The matrix format is particularly well suited to reveal the weaknesses of a classification. It could, for example, reveal that the vast majority of a subclass is misidentified as a specific other subclass, which is also informative about some degeneracy between the two classes.\\ In our example with cats and dogs (Table~\ref{balanced_confmat}), the global accuracy is $89.5\%$, while the recall of cats is much better at $93\%$ and the recall of dogs is $86\%$; therefore, there is a significant additional difficulty in fitting the distribution of dog examples in the feature space. This also has an effect on the precision values: the dogs being less well identified, the misclassified dogs induce a drop in the precision of cats, increasing the false positive rate of cat predictions. The differences between the two classes could even be exaggerated with the exact same global accuracy, which illustrates to what extent the latter is a bad indicator when doing classification. Interestingly, it also highlights a structural weakness in the way we defined our ANNs. Even if the error propagation is made by using each individual output error, one usually just monitors the average error over the output neurons, while it is possible that some are better fitted than others. It is possible that, during the training phase, some objects are overtraining a subset of the neurons, while others are still in the process of improving another subset of the neurons, with an average error that still decreases. This is why more elaborate error monitoring methods are sometimes used in some ANN implementations.\\
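For reference, these estimators are straightforward to compute from the raw counts of a two-class confusion matrix, as in the following small C sketch, here filled with the cats and dogs numbers of Table~\ref{balanced_confmat} and taking the first class as "positive":
\begin{verbatim}
#include <stdio.h>

int main(void)
{
    /* conf[i][j]: actual class i, predicted class j
       (cats and dogs example from the text) */
    int conf[2][2] = { {93, 7}, {14, 86} };

    int tp = conf[0][0], fn = conf[0][1];
    int fp = conf[1][0], tn = conf[1][1];

    double recall    = (double)tp / (tp + fn);
    double precision = (double)tp / (tp + fp);
    double accuracy  = (double)(tp + tn) / (tp + tn + fp + fn);

    printf("Recall: %.3f  Precision: %.3f  Accuracy: %.3f\n",
           recall, precision, accuracy); /* 0.930  0.869  0.895 */
    return 0;
}
\end{verbatim}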
\subsubsection{Class balancing and observational proportions} \label{class_balance} Machine Learning methods are known to work better on balanced datasets \citep[e.g.][]{Anand_93,yamin_2009}. In a classification case, this means that they work better if all classes have the same number of examples in the training dataset. The main reason is that, when more data of a specific class are used, they lead to proportionally more weight updates. The network therefore uses more weights and neurons to reconstruct this class, to the disadvantage of the others. However, this reasoning is based on the assumption that each class has the same intrinsic complexity in the feature space, which is a strong assumption for which we will give a counterexample in Section~\ref{yso_results}. In cases where this assumption is valid, a common approach consists in rebalancing the training dataset to have equal class proportions, regardless of the proportions in the original dataset. One should however be careful, because this can lead to several bad practices, the first being that the classification performance is also evaluated using balanced validation and test datasets, while the true underlying proportions are strongly imbalanced. The study of the impact of imbalance on ML algorithms is a dedicated field called "imbalanced learning", which is known to be much more complicated than learning on classical balanced datasets and has been studied for more than two decades \citep{Anand_93, he_learning_2009}.\\ \newpage To illustrate the effects of an imbalanced dataset, we selected a standard example of immunity detection for a specific virus. Assuming that one wants to build a serological test for that purpose, it can be seen as a simple two-category classification with positive and negative test results. For the purpose of this example, we assumed that only 1 out of 10 persons is truly infected. We did not make any assumption on how the classifier was built and just assumed a recall of $93\%$ for positive cases and of about $95\%$ for negative cases. Table~\ref{balanced_test_confmat} shows the confusion matrix using balanced proportions, which are likely to be used when tuning the method, for example when training the network. This gives apparently satisfying results, with a $94\%$ global accuracy and a precision of $94.9\%$ for positive cases, which limits false positives. However, as we said, only $10\%$ of people are truly positive in this example; therefore, using the true proportions, which we will refer to as "observational proportions", we get the confusion matrix in Table \ref{imbalanced_test_confmat}. This table shows that, despite the $95.4\%$ recall of negative cases, the imbalance leads to a large contamination of the predicted positive cases, whose precision drops to $66.9\%$, and therefore to a high false-positive rate. Such misleading results can have important consequences, like giving immunity passports to non-immune persons, or, in similar medical studies, giving a high-risk treatment to healthy persons.\\ We also note a communication caveat that can happen even when using observational proportions. Some authors choose to normalize the confusion matrix and even to colorize it accordingly \citep[e.g.][]{Richards2011, miettinen_protostellar_2018, Walmsley_2020}. It may help make the results more attractive and emphasize some aspects that are apparently easy to read. However, it implies choosing to normalize either over the rows or over the columns, respectively highlighting the recall or the precision values. This can subsequently hide the imbalance effect on either precision or recall.
In the present work, we prefer to show the complete confusion matrices without color or arbitrary normalization, to preserve their objectivity.\\ \begin{table}[t] \centering \caption{Imbalanced classification for a medical example using balanced proportions.} \vspace{-0.1cm} \begin{tabularx}{0.55\hsize}{r l |*{2}{m}| r } \multicolumn{2}{c}{}& \multicolumn{2}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-5} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & Positive & Negative & Recall \\ \cmidrule(lr){2-5} & Positive & 93 & 7 & 93.0\% \\ & Negative & 5 & 95 & 95.0\% \\ \cmidrule(lr){2-5} & Precision & 94.9\% & 93.1\% & 94.0\%\\ \cmidrule[\heavyrulewidth](lr){2-5} \end{tabularx} \vspace{-0.05cm} \label{balanced_test_confmat} \end{table} \begin{table}[t] \centering \caption{Imbalanced classification for a medical example using imbalanced (observational) proportions, with similar recalls for the two classes.} \vspace{-0.1cm} \begin{tabularx}{0.55\hsize}{r l |*{2}{m}| r } \multicolumn{2}{c}{}& \multicolumn{2}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-5} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & Positive & Negative & Recall \\ \cmidrule(lr){2-5} & Positive & 93 & 7 & 93.0\% \\ & Negative & 46 & 954 & 95.4\% \\ \cmidrule(lr){2-5} & Precision & 66.9\% & 99.27\% & 95.18\%\\ \cmidrule[\heavyrulewidth](lr){2-5} \end{tabularx} \vspace{-0.05cm} \label{imbalanced_test_confmat} \end{table} \newpage This example also highlights an interesting aspect of imbalanced classification, which is that, to improve the overall quality of a rare class, it can be necessary to further improve the recall of the dominant class. Using ANNs, it implies that one might deliberately increase the proportion of negative cases in the training set in order to give the network the opportunity to better constrain this case, increasing its recall. These do not have to be the observational proportions, because the optimal choice depends on many factors, notably the expected results. We illustrate such a case with a classifier achieving a lower recall of $87\%$ on the positive case and a much better recall of $99\%$ on the negative case. It results in the confusion matrix in Table \ref{imbalanced_test_confmat_improved}, where the precision of the positive cases has increased to $89.7\%$, which is much closer to the positive recall of $87\%$.\\ This example raises an important point, which is the absence of a universal quality estimator, since the appropriate one depends on the end objective. As for any classification problem, one must choose the appropriate balance between reliability and completeness. Thus, in the last example we traded a recall drop of about $6$ points for an improvement of $23$ points in precision for the positive case. Therefore, acting on the training set proportions allows one to put the emphasis on certain quality indicators, while it remains necessary to test the results on observational proportions. We note that there are many tools that play on the balance of the datasets. For example, the so-called "augmentation methods" can be used to artificially change the balance of the dataset by creating mock examples that follow the same feature space distribution. However, this would not improve the coverage of the class in the parameter space, which is a strongly related issue. Also, there are often cases where all the classes do not have equivalent feature space coverage.
In consequence, some classes might be intrinsically more difficult to classify than others, which is completely overlooked when using balanced training proportions. These aspects are examined in detail in our main application of YSO classification in the dataset and result analyses in Sections~\ref{yso_datasets_tuning} to \ref{yso_discussion}.\\ \begin{table}[t] \centering \caption{Imbalanced classification for a medical example using imbalanced proportions with a better recall for the dominant class.} \vspace{-0.1cm} \begin{tabularx}{0.55\hsize}{r l |*{2}{m}| r } \multicolumn{2}{c}{}& \multicolumn{2}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-5} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & Positive & Negative & Recall \\ \cmidrule(lr){2-5} & Positive & 87 & 13 & 87.0\% \\ & Negative & 10 & 990 & 99.0\% \\ \cmidrule(lr){2-5} & Precision & 89.7\% & 98.7\% & 97.9\%\\ \cmidrule[\heavyrulewidth](lr){2-5} \end{tabularx} \vspace{-0.05cm} \label{imbalanced_test_confmat_improved} \end{table} \newpage \subsection{Simple examples} In this section we present a few simple real examples. We pursue mainly two objectives: (i) to provide additional insights into how a concrete problem can be expressed in the ANN formalism we described, using freely accessible data to ensure reproducibility, and (ii) to illustrate a few of the advanced effects we have described in the previous sections, like the use of some hyperparameters. While numerous astrophysical examples could have been used, they usually employ large sets of parameters that need careful preparation and would have required too much context introduction to be suitable as quick and simple examples. \subsubsection{Regression} \label{regression_expl} We chose a regression that takes two inputs and produces two outputs. The selected relation is fairly simple and expressed as: \begin{equation} o_1 = 0.7 \sin(x_1) + 0.6 \cos(0.6\,x_2) + (0.15\,x_1 + 0.1\,x_2)^2, \end{equation} \begin{equation} o_2 = \sin(x_1) + \cos(0.8\,x_2) + 0.3(0.3\,x_1+ 0.6\,x_2), \end{equation} where $x_1$ and $x_2$ are the two input dimensions, and $o_1$ and $o_2$ the output targets.\\ In this example, we constructed a grid of $x$ values ranging from $-5$ to $5$ with $60\times 60$ elements. This is our full dataset. For the training set, we selected only a grid of $20\times 20$ elements, which means that we have a training point every 3 points of the original axis subdivision, for each axis. This translates into a training set size of $400$ couples of coordinates.\\ The corresponding network is necessarily constructed with 2 input nodes with the associated bias, one or several hidden layers, and 2 output neurons that are set to a linear activation, which is often much simpler for regression. We first decided to stick to a one hidden layer network with sigmoid-activated neurons. Considering that we have $400$ examples, we first tried to stay close to $40$ weights in the network, which is closely matched using $8$ neurons in the hidden layer, since $3\times 8 + 9\times 2 = 42$ weights, accounting for the bias node in the input and hidden layers. In this example we used many of the previously described network optimizations. We chose to have a constant learning rate but with a momentum, and the input dataset is normalized as described in Section~\ref{input_norm}.
The output is only divided by the maximum absolute value in the full dataset, forcing it into the $-1$ to $1$ range, which works well for linearly activated output neurons. Finally, the selected gradient descent scheme is the mini-batch one. This gave us a list of hyperparameters to set. A quick manual exploration of these parameters made us choose $\eta = 0.004$ (considering that our updates are summed and not averaged in a mini-batch), $\alpha = 0.8$, and a mini-batch size $b_s = 20$. We then trained the network for $4\times 10^5$ epochs while monitoring the error on the full $60\times 60$ grid.\\
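For reproducibility, the construction of this dataset is straightforward. The following C sketch generates the full $60\times 60$ grid and flags the $20\times 20$ training subset by keeping every third point per axis (the function and variable names are illustrative):
\begin{verbatim}
#include <math.h>

/* The two target functions of the regression example. */
static void targets(float x1, float x2, float *o1, float *o2)
{
    float s = 0.15f * x1 + 0.1f * x2;
    *o1 = 0.7f * sinf(x1) + 0.6f * cosf(0.6f * x2) + s * s;
    *o2 = sinf(x1) + cosf(0.8f * x2)
          + 0.3f * (0.3f * x1 + 0.6f * x2);
}

/* Fills the full 60x60 grid over [-5, 5]^2; every third point
   per axis (20x20 = 400 points) is flagged as a training point. */
void build_dataset(float x[60][60][2], float o[60][60][2],
                   int train[60][60])
{
    for (int i = 0; i < 60; i++) {
        for (int j = 0; j < 60; j++) {
            x[i][j][0] = -5.0f + 10.0f * i / 59.0f;
            x[i][j][1] = -5.0f + 10.0f * j / 59.0f;
            targets(x[i][j][0], x[i][j][1],
                    &o[i][j][0], &o[i][j][1]);
            train[i][j] = (i % 3 == 0 && j % 3 == 0);
        }
    }
}
\end{verbatim}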
\begin{figure}[!t] \centering \includegraphics[width=0.92\hsize]{images/regre_expl2d.pdf} \caption[Two dimensional regression example results.]{Two dimensional regression training results. Each frame represents the evolution of a quantity in the two-dimensional input space. {\it Rows} represent the two outputs of the network, respectively. {\it Columns} are the target, the prediction, and the difference between the first two scaled by a factor of 10.} \label{regre_expl2d} \end{figure} Figure~\ref{regre_expl2d} shows the results of this training. Because it is difficult to represent the evolution of the two outputs as a function of the two inputs in a single graph, we separated them into two rows. Three columns are used to represent the original function result, the prediction of the network, and 10 times the difference between the first two. The gray dots on the figures represent the training points. At first glance the prediction seems almost perfect; it requires a moment to spot the differences. The error frames strongly highlight that there is an edge effect, with either a stronger positive or negative difference that mostly depends on the surface slope at the edge. This is certainly due to the fact that the network lacks constraints where there are fewer nearby learning points. Moreover, these error frames also show that the highest errors are in places where the surface curvature dynamics is higher, and that they do not directly correlate with the places that have the highest absolute values. Despite this, the error without the $\times 10$ factor looks really flat, and the network nicely reproduces the shape of the function even with training points significantly distant from each other. \\ \begin{figure}[!t] \centering \includegraphics[width=0.58\hsize]{images/regre_expl2d_error.pdf} \caption[Evolution of the error during training]{Evolution of the output layer error on the validation dataset during training as a function of the number of epochs.} \label{regre_expl2d_error} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.60\hsize]{images/regre_expl2d_neurons_1l.pdf} \caption[Evolution of the error as a function of the number of neurons]{Evolution of the final output layer error on the validation dataset as a function of the number of neurons per layer. The orange and blue curves represent a one and a two hidden layer network, respectively. The added uncertainty of each point is the standard deviation over a few runs.} \label{regr_expl_nb_neurons} \end{figure} \newpage Figure~\ref{regre_expl2d_error} illustrates the output error value as a function of the number of epochs, which shows that we are not overtraining. Nevertheless, we tested the effect of the number of neurons in this example as a complement to Section~\ref{nb_neurons}. Figure~\ref{regr_expl_nb_neurons} shows that, for one hidden layer, the optimal error is reached around $8$ neurons and additional neurons do not provide substantial improvement. We also tested a two hidden layer case, this time with the error as a function of the number of neurons per layer. It shows that, for this specific case, two hidden layers with 4 neurons each are already very close to the one hidden layer case with 8 neurons. We also note that the two hidden layer solution converged in fewer epochs. However, the comparison is very limited here, since the limit imposed by the number of examples is quickly reached.\\ This simple example illustrates how regression can be performed using the described model of ANNs. Truly useful problems can be more complicated, but they can usually be expressed in a fashion very similar to this example. One connected application is the modern use of ANNs as computation accelerators. If we imagine a heavy function that is used several times in a numerical application, there is a chance that this function could be reproduced up to a satisfying precision using an ANN. Now, considering that modern ANNs use very simple activation functions and that they make use of powerful dedicated hardware, there is a strong chance that a pre-trained ANN could compute this approximated function much faster than it used to be done in the original application. This way, ANNs can be used as computation accelerators even on well-known functions \citep{Baymani10, tompson16} or potentials \citep[for example to replace a numerical solver for 3-body problems in][]{breen_2020}. This can be extended to training a network on a set of simulations that use basic physics, from which it will extract higher-level correlations, which can then be used to run new simulations faster. The global application in Part III is a regression problem; still, it requires the more advanced networks that are described in Section~\ref{cnn_global_section}. \newpage \subsubsection{Classification} \label{classification_examples} Prior to the YSO classification, we present simple examples to illustrate some effects that will be important in its analysis. For this reason, we present here two very simple and common classification examples in order to focus on the technical aspects, both of which have their dataset freely accessible at the online \href{https://archive.ics.uci.edu/ml/index.php}{UCI Machine Learning repository}.\\ \textbf{A - Iris}\\ The first one is the Iris dataset, which is the most popular on the UCI repository. It consists of 3 types of Iris flowers, Setosa, Versicolour, and Virginica, described by 4 flower properties: sepal length, sepal width, petal length, and petal width. The dataset contains 150 examples, perfectly balanced with 50 examples of each class.\\ To perform the example in a proper way, we randomly selected 35 examples of each class to construct our training dataset (105 objects), while keeping the rest, 15 examples of each class, to construct a merged validation-test dataset (45 objects) due to the very small number of objects. The adopted network consists of 4 input nodes, one hidden layer with sigmoid-activated neurons, and an output layer with 3 Softmax-activated neurons. We identified that 6 hidden neurons were enough, corresponding to 51 weights, which is already above the standard recommendation. Fewer neurons provided worse results, while more neurons tended to strongly increase the instability of the network, and often led to a strong overtraining.
The learning rate was set to $\eta = 5\times 10^{-4}$, the momentum to $\alpha = 0.8$, and we used mini-batches of size $b_s = 10$. This network was trained for 3200 epochs several times in order to characterize the result stability. Table~\ref{iris_confmat} shows the results of the network prediction for our test set. The very small number of objects in the test set makes it harder to interpret, because one misclassified object leads to a variation of several percent in the quality indicators. However, the network often succeeded in reaching a 100\% accuracy with this setting, and only misclassified three objects in two different classes in the worst case. Even if it is not a proper practice to look at the prediction for objects that were in the training sample, we still used the full dataset to ensure that the observed accuracy was not just a small-number effect.\\ \begin{table}[t] \centering \caption{Iris classification, confusion matrix of the test set.} \vspace{-0.1cm} \begin{tabularx}{0.70\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & Setosa & Versicolour & Virginica & Recall \\ \cmidrule(lr){2-6} & Setosa & 15 & 0 & 0 & 100\% \\ & Versicolour & 0 & 14 & 1 & 93.3\% \\ & Virginica & 0 & 1 & 14 & 93.3\% \\ \cmidrule(lr){2-6} & Precision & 100\% & 93.3\% & 93.3\%& 95.6\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.05cm} \label{iris_confmat} \end{table} It is noted by the authors of this dataset that one class is linearly separable from the two others, which are not linearly separable from each other. This can be observed by using a very simple network with linear neurons as output and no hidden layer. In this case, the degree of confusion between Versicolour and Virginica increased, while the Setosa remained well separated from the two others.\\ This example is also the occasion to illustrate the effect of the membership probability. Looking at the maximum output probability distribution shows that almost all objects have a probability greater than $0.9$, and that only a few of them are under this limit. As expected, the objects that are properly classified (147) show a higher mean membership value, around $0.978 \pm 0.06$, than the misclassified objects (3), around $0.741 \pm 0.08$. However, one properly classified object has only a $0.52$ probability, while a misclassified one has a probability of $0.85$, which is also highlighted by the dispersion around the mean values. The underlying issue is that, even if an object is properly classified, it can fall in a region of the feature space that is poorly constrained, and therefore less reliable. With applications that contain more objects, it is often a good choice to exclude the less reliable ones by using a membership threshold, which often significantly improves the classification results (Sect.~\ref{proba_discussion}). \\ \textbf{B - Wines}\\ The second example is the Wine dataset, which is the third most popular on the UCI repository. It also consists in the separation of three classes, which represent different unnamed vineyards. For this, it provides a set of 13 features that correspond to various concentration measurements in the wine: Alcohol, Malic Acid, Ash, Magnesium, etc.
The dataset contains 178 examples, with imbalanced classes of 59, 71, and 48 examples, respectively.\\ This example being imbalanced, we first extracted $\sim 28\%$ of the objects in each class to construct our test/validation dataset. The purpose of such a classifier being unclear, we assumed that it could be used to distinguish between the three vineyards on sold bottles, and that the given proportions correspond to the true production proportions of each vineyard. Then, considering that they are observational proportions, we tried to conserve them as much as possible in the test set. The adopted numbers of objects from each class in our test set are 16, 20, and 13, respectively, for a total of 49 objects. For the training dataset, we adopted the opposite approach by selecting the largest possible balanced dataset from the remaining data, which gave us 35 objects of each class for a training dataset with 105 objects. The adopted network is very close to the one of the previous example, with 13 input nodes, one hidden layer with sigmoid-activated neurons, and an output layer with 3 Softmax-activated neurons. We identified that 8 neurons are sufficient for this problem to reach its optimal prediction. Fewer neurons produced worse results, and more did not do any better, since 8 neurons already led to an overtraining that had to be monitored. As for the previous example, the learning rate was set to $\eta = 5 \times 10^{-4}$, the momentum to $\alpha = 0.8$, and we used mini-batches of size $b_s = 10$. This time, the network was trained several times for 4800 epochs. We observed that the network predictions were always above $93\%$ global accuracy (3 objects misclassified), but that the results mostly depended on the random object selection for the training and test datasets. With the most appropriate selection, the network reached a $100\%$ prediction, which is maintained when forwarding the full dataset through the same network. Still, we present in Table~\ref{wines_confmat} an imperfect example for illustration purposes. Additionally, since the minimum error value depends on the random data selection in either the training or test dataset, we repeated 20 trainings and computed a mean minimum error, averaged on the test dataset, of $0.086 \pm 0.062$. It illustrates the strong variability depending on the random selection of the training data over the dataset, and also that each individual neuron that contributes to this error is usually $\sim 0.03$ off the target value. This indicates that most objects will have a maximum prediction around $0.97$, which is a useful guideline before the application of a threshold.\\ \begin{table}[t] \centering \caption{Wines classification, confusion matrix of the test set.} \vspace{-0.1cm} \begin{tabularx}{0.70\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & A & B & C & Recall \\ \cmidrule(lr){2-6} & A & 16 & 0 & 0 & 100\% \\ & B & 1 & 18 & 1 & 90.0\% \\ & C & 0 & 1 & 12 & 92.3\% \\ \cmidrule(lr){2-6} & Precision & 94.1\% & 94.7\% & 92.3\%& 93.9\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.05cm} \label{wines_confmat} \end{table} Just like the previous example, it is very simple to reach $100\%$ every time with the exact same network when using the full dataset for training, in just 1000 epochs. It only means that the classes are separable, regardless of the complexity of their separation.
However, as in the previous example, it means that the feature space is poorly sampled, which is expected when using 13 dimensions with only $< 200$ objects. One approach to obtain better results out of fewer objects would be to test whether some input features are unnecessary to separate the classes. On the other hand, these imperfect results are another occasion to illustrate the membership probability. In this example, it is more visible that the misclassified objects are indeed predicted with a lower probability. Taking an example with 3 misclassified objects in the full dataset, they have a mean membership value around $0.52 \pm 0.02$, while the properly classified ones have a mean of $0.97 \pm 0.06$. Using a threshold value of $0.9$ excludes 6 objects out of the 178 in the dataset, including all the misclassified ones. It is also interesting to note that some objects are more often misclassified than others, and also have a low probability when properly classified, making them probable outliers of the main distribution. The predicted output class of these objects, and therefore their membership probability, mostly depends on their random association with either the training or test dataset. As in the previous example, applying a membership threshold here would not significantly improve the results because they are already very good, but we efficiently used this approach on the YSO classification application with a clear success (Section~\ref{proba_discussion}).\\ \newpage \section[Automatic identification of YSOs in infrared surveys]{Automatic identification of Young Stellar Objects in infrared surveys} \label{yso_datasets_tuning} In this section, we detail how we connected the general network presented in the previous sections with the YSO classification problem. We show how we arranged the data in a form usable by the network and describe the necessary precautions for this process. We also explain how we defined the various datasets used to train our network. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \vspace{1cm} \subsection{Problem description and class definition} \label{data_prep} As we exposed in Section~\ref{intro_yso}, our objective in this Part II is to design a methodology to perform YSO classification using ANNs. A new classification scheme could be created using ML by mostly two approaches: (i) using a supervised ML algorithm that is trained on observed objects labeled without any ambiguity, for example YSOs for which the disk is directly observed, or using simulated data (both discussed in Sect.~\ref{Method_discussion}); and (ii) using unsupervised learning to construct new, possibly more efficient, underlying classes (also in Sect.~\ref{Method_discussion}). These two approaches present major difficulties, due to the very small number of indisputably identified YSOs to train a ML algorithm for the first approach, and to the underlying feature space distribution properties of YSOs in infrared data for the second approach.\\ While these difficulties could be overcome, it would require a lot of work without any guarantee of success. For this reason, as a first step, we set ourselves the objective of reproducing the classification by \citet{gutermuth_spitzer_2009}, which makes use of Spitzer data, to evaluate the capacity of ML methods to perform such a task.
It allowed us to assess the numerous elements reported here, like the optimal amount of data (Sects.~\ref{training_test_datasets}, \ref{NGC2264_results}), the minimal architecture requirement for an ANN method (Sect.~\ref{network_tuning}), the needed hardware resources (Sect.~\ref{network_tuning}), the degree of confusion between classes (Sects.~\ref{yso_results}, \ref{proba_discussion}), the effect of class imbalance (Sect.~\ref{training_test_datasets}), etc. More importantly, it can provide significant additions to the reproduced classification, like identifying underconstrained feature space areas (Sects.~\ref{shocks_discussion}, \ref{proba_discussion}), generalizing the classification scheme to be used efficiently on very large catalogs (Sect.~\ref{yso_discussion}), or predicting a membership probability for the YSO candidates (Sect.~\ref{proba_discussion}).\\ Our choice of the \citet[][hereafter G09]{gutermuth_spitzer_2009} classification scheme to train from is motivated by two main considerations: (i) the fact that we needed a method that works on large datasets in order to be used with a ML method, and (ii) our will to use a survey that is able to distinguish class I from class II YSOs.\\ \newpage In the following section, we summarize the construction of the training sample using a simplified version of this classification scheme. The G09 method combines data in the J, H, and K$_s$ bands from the Two Micron All Sky Survey \citep[2MASS, ][]{skrutskie_two_2006} and data between 3 and 24 $\mu$m from the Spitzer Space Telescope \citep{werner_spitzer_2004}. In our approach we did not use the 2MASS data and solely used Spitzer data in order to have a more homogeneous classification, which is discussed later in the present section. We note that by using Spitzer data for YSO classification, rather than WISE data \citep[e.g.][]{koenig_wide-field_2012}, we expect to cover only specific regions on the sky, but with a better sensitivity ($ \approx~1.6$ to $27\ \mathrm{\mu Jy}$ for the Infrared Array Camera (IRAC) instrument) and spatial resolution ($1.2^{\prime\prime}$) than with WISE ($ \approx~80$ to $6000\ \mathrm{\mu Jy}$ and $ 6.1^{\prime\prime}$ to $12^{\prime\prime}$), the latter being used in \citet{marton_all-sky_2016} and \citet{marton_2019}. In the original G09 method, they performed the classification in several steps. In addition to Spitzer, they used 2MASS data as an additional step to refine the classification of some objects or to classify objects for which Spitzer bands are missing. Therefore, restricting our analysis to Spitzer data still allowed a reasonable classification. In our adapted method, we started with the four IRAC bands, at $3.6,\ 4.5,\ 5.8$ and $8\ \mathrm{\mu m}$, applying a preselection that kept only the sources with a detection in the four bands and with uncertainties $\sigma < 0.2$ mag, like in the original classification. This first classification is then refined using the Multiband Imaging Photometer (MIPS) $24\ \mathrm{\mu m}$ band. With this classification, it is possible to identify candidate contaminants and to recover YSO candidates at the end. Similarly to G09, we used the YSO classes described in Section~\ref{intro_yso}.\\ Using mainly IRAC data prevented us from identifying class 0 objects, since they do not emit in the IRAC wavelength range (Fig.~\ref{yso_sed}). Similarly, because of Spitzer uncertainties, the class III objects are too similar to main sequence stars to be distinguished.
For these reasons, we limited our objectives to the identification of CI and CII YSOs. We then proceeded to the so-called "phase~1" from G09 (their Appendix~A) to successively extract different contaminants using specified straight cuts into color-color and color-magnitude diagrams (CMDs) along with their respective uncertainties. This step enabled us to exclude star-forming galaxies, active galactic nuclei (AGN), shock emission, and resolved PAH emission. It ends by extracting the class I YSO candidates from the leftovers, and then extracting the remaining class II YSO candidates from more evolved stars. The cuts used in these steps are shown in Fig.~\ref{fig_gut_method}, with the final CI and CII YSO candidates from the Orion region (Sect.~\ref{data_setup}).\\ \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{images/cmd_orion_A.png} \end{subfigure} \begin{subfigure}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{images/cmd_orion_B.png} \end{subfigure}\\ \begin{subfigure}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{images/cmd_orion_C.png} \end{subfigure} \begin{subfigure}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{images/cmd_orion_D.png} \end{subfigure}\\ \begin{subfigure}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{images/cmd_orion_E.png} \end{subfigure} \caption[Selection of CMD diagrams from our simplified G09]{Selection of color-color and color-magnitude diagrams from our simplified multi-step G09 classification. The data used in this figure correspond to the Orion labeled dataset in Table~\ref{tab_selection}. The contaminants, CII YSOs, and CI YSOs are shown in blue, green, and red, respectively. They were plotted in that order and partly screen one another, as revealed by the histograms in the side frames. The area of each histogram is normalized to one. In frame A, some PAH galaxies are excluded. In frame B, leftover PAH galaxies are excluded based on another criterion. It also shows the criterion of class II extraction used in a later step. In frame C, AGN are excluded. In frame D, Shocks and PAH aperture contaminants are excluded. It also shows the last criterion of class I extraction. In frame E, one of the criteria from the MIPS $24\ \mathrm{\mu m}$ band is shown, which identifies reddened Class~II objects among the previously classified Class~I.} \label{fig_gut_method} \end{figure*} For the sake of simplicity, we adopted only 3 categories: CI YSOs, CII YSOs, and contaminants, which we also refer to as "others" in our tables. Doing so forced the network to focus on the separation of the contaminant class from the YSOs, rather than between different contamination classes. It can be seen as a simplification of the underlying classification if some subclasses of contaminants are close to each other in the feature space, reducing the required number of weights in the network. Therefore, we defined the output layer of our network with 3 neurons using a Softmax activation function, meaning that there was one neuron per class that returned a membership probability as described in Section~\ref{proba_class_intro}.\\ We chose not to use 2MASS, and therefore skipped the so-called "phase 2" of the G09 classification scheme. This is mainly motivated by the fact that it creates an artificial difference in degree of refinement between objects that have a 2MASS detection and objects that do not.
Additionally, it creates several more classification paths that only contain very few objects, which are much more difficult to constrain (highlighted in several places in Sects.~\ref{yso_results} and \ref{yso_discussion}). However, G09 also proposed a "phase 3" that uses the MIPS $24\,\mu$m band, and which might be useful for our classification. In this last phase, some objects that were misclassified in the previous two phases are rebranded. Although this can raise difficulties, as discussed in Section \ref{sec:mips24}, we used it in our analysis because it relies only on Spitzer data. Since MIPS $24\,\mu$m data are only used to refine the classification, we did not exclude objects without detection in this band. We only used it in phase 3 when it had an uncertainty $\sigma_{24} < 0.2$ mag. This additional phase ensured that the features identified in the SED with the four IRAC bands were consistent with longer wavelength data. It allowed: (i) testing the presence of transition disk emission in objects classified as field stars, rebranding them as class II; (ii) testing the presence of a strong excess at this longer wavelength that is characteristic of deeply embedded class I objects, potentially misclassified as AGNs or Shocks; (iii) refining the distinction between class I and II by testing whether the SED still rises at wavelengths longer than $8\,\mu$m for class I, otherwise rebranding them as reddened class II. Those refinements explain the presence of objects beyond the boundaries in almost all frames in Fig.~\ref{fig_gut_method}. For example in frame B, some class II objects, shown in green, are located beyond the boundary in the bottom-left part, in a region dominated by more evolved field stars. Not all the steps of this refinement are shown in this figure; only the criterion for reddened class II identification is illustrated in frame E. Our adapted classification scheme was therefore composed of five bands (4 IRAC, 1 MIPS).\\ \vspace{-0.2cm} Finally, we stress that the G09 classification uses all the band uncertainties ($[\sigma_{3.6}]$, $[\sigma_{4.5}]$, $[\sigma_{5.8}]$, $[\sigma_8]$, $[\sigma_{24}]$) to construct complementary conditions on the separation between stellar classes in all the phases. As an example, objects that satisfy the CI YSO condition but that have a high uncertainty in the bands used in this condition will be classified as CII YSOs. These uncertainties are therefore to be considered as direct input features of the classification. While such measurements could be used in a more complex way with modern ANN architectures, the present study focuses on using solely the G09 method for the construction of the training objects, making the band uncertainties required input features. In summary, our labeled dataset was structured as a list of (input, target) pairs, one per point source, where the input was a vector with 10 values ($[3.6]$, $[\sigma_{3.6}]$, $[4.5]$, $[\sigma_{4.5}]$, $[5.8]$, $[\sigma_{5.8}]$, $[8]$, $[\sigma_8]$, $[24]$, $[\sigma_{24}]$), and the target was a vector of 3 values ($P(\text{CI}), P(\text{CII}), P(\text{Contaminant})$). The input features are normalized over the considered labeled dataset as described in Section~\ref{input_norm}. Here $P()$ denotes the membership probability normalized over the three neurons of the output layer.
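To illustrate this structure, the short Python sketch below assembles such (input, target) pairs. It is only a minimal example under stated assumptions: the catalog column names are hypothetical, and a simple min-max scheme stands in for the actual normalization of Section~\ref{input_norm}.

\begin{verbatim}
import numpy as np

bands = ["mag_3.6", "sig_3.6", "mag_4.5", "sig_4.5", "mag_5.8",
         "sig_5.8", "mag_8", "sig_8", "mag_24", "sig_24"]

def build_dataset(catalog, labels, n_classes=3):
    # Stack the 10 photometric features into an (N, 10) input matrix
    X = np.stack([np.asarray(catalog[b], dtype=float) for b in bands],
                 axis=1)
    # Normalize each feature over the labeled dataset (here: to [-1, 1])
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    X = 2.0 * (X - x_min) / (x_max - x_min) - 1.0
    # One-hot targets: index 0 = CI, 1 = CII, 2 = contaminant
    T = np.zeros((len(labels), n_classes))
    T[np.arange(len(labels)), labels] = 1.0
    return X, T
\end{verbatim}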
An alternative choice of input space is discussed in Sect.~\ref{sec:color_usage}.\vspace{-0.2cm} \subsection{Labeled datasets in Orion, NGC 2264, 1\,kpc and combinations} \label{data_setup} We chose to use well-known and well-constrained star-forming regions, where YSO classification was already performed using Spitzer data. Although we employed an approach made of progressive steps, first drawing conclusions on one case before going to the next one, we summarize here all the datasets that were used. Therefore, in the following section we detail how we created the corresponding labeled datasets and what the optimal proportions were, along with the network parameters. Additional information on how these parameters were found for each individual case is discussed in Section~\ref{yso_results}. This organization allows us to group information to ease the comparison and to better reflect the global approach while summarizing the dataset construction in one place.\\ \vspace{-0.2cm} \begin{table*}[t] \vspace{-0.5cm} \hspace{-1.2cm} \begin{minipage}{1.15\hsize} \footnotesize \centering \vspace{0.2cm} \caption{Results of our simplified G09 method for our various datasets.} \label{tab_selection} \vspace{-0.2cm} \begin{tabularx}{1.0\hsize}{ l *{2}{x{0.06\hsize}} @{\hskip 0.065\hsize} *{5}{Y} @{\hskip 0.065\hsize} *{3}{x{0.075\hsize}}} \toprule \toprule \vspace{-0.3cm}\\ \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{\hspace{-0.6cm}Pre-selection} & \multicolumn{5}{c}{\hspace{-0.6cm}Detailed contaminants} & \multicolumn{3}{c}{\hspace{-0.6cm}Labeled classes}\\ \cmidrule(l){2-11} & Total & Selected & Gal. & AGNs & Shocks & PAHs & Stars & CI YSOs & CII YSOs & Others\\ \vspace{-0.3cm}\\ \midrule \vspace{-0.15cm}\\ Orion & 298405 & 19114 & 407 & 1141 & 28 & 87 & 14903 & 324 & 2224 & 16566\\ \vspace{-0.15cm}\\ NGC 2264 & 10454 & 7789 & 114 & 250 & 6 & 1 & 6893 & 90 & 435 & 7264\\ \vspace{-0.15cm}\\ Combined & 308859 & 26903 & 521 & 1391 & 34 & 88 & 21796 & 414 & 2659 & 23830\\ \vspace{-0.15cm}\\ 1\,kpc* & 2548 & 2171 & 1 & 57 & 0 & 1 & 3 & 370 & 1735 & 67\\ \vspace{-0.15cm}\\ Full 1\,kpc & 311407 & 29074 & 522 & 1448 & 34 & 89 & 21799 & 784 & 4396 & 23897\\ \vspace{-0.2cm}\\ \bottomrule \bottomrule \end{tabularx} \end{minipage} \caption*{\vspace{-0.4cm}\\ {\bf Notes.} The third group of columns gives the labels used in the learning phase. The last column is the sum of the columns in the "Detailed contaminants" group. *The 1\,kpc sample contains only pre-identified YSO candidates. Still, we classified some of them as contaminants because of the simplifications in our method.\vspace{-0.5cm}} \end{table*} The main idea was to test the learning process on individual regions, and then compare it with various combinations of these regions. It is expected that, due to the increased diversity in the training set, the combination of regions should improve the generalization capacity of the trained network and allow predictions on new regions. We selected regions analyzed in three studies, all using the original G09 method. However, some differences remain between the parameters adopted by the authors (e.g. the uncertainty cuts). Using our simplified G09 method, as presented in Section \ref{data_prep}, allowed us not only to base our study solely on Spitzer data, but also to build a homogeneous dataset with the exact same criteria for all regions.
We present here the three selected regions and the corresponding catalogs:\vspace{-0.1cm} \begin{itemize}[leftmargin=0.0cm] \setlength\itemsep{0.2em} \item[] {\hspace{0.3cm}\bf $\bullet \,$ Orion}: The first region we used was the Orion molecular cloud with the dataset from \citet{megeath_spitzer_2012} (Figs.~\ref{megeath_orion_cover} and \ref{orion_and_2264_wise}). This is the most studied star-forming region, due to its relative proximity ($\sim 420$ pc), and because of its large mass (above $10^5 \mathrm{M_\odot}$) and size (more than $\sim 50\ \mathrm{deg^2}$ on the sky plane). The presence of several young stellar clusters, including the massive Trapezium cluster, makes it a bright target across the whole electromagnetic spectrum and therefore an ideal target for most interstellar medium topics (for example, in massive star formation: \citet{Rivilla_2013}, in low-mass star formation: \citet{Nutter_2007}, in filament dynamics: \citet{Stutz_2016}, in photodissociation regions: \citet{Goicoechea_2016}, in astrochemistry: \citet{Crockett_2014}, etc.). It is composed of several parts in diverse evolutionary stages: Orion A is the most actively star-forming part, with a complex star formation history with various episodes from 12 Myr ago to this day \citep{Brown_94}; Orion B is in an earlier evolutionary stage, and is mostly quiescent in spite of a few well-known reflection nebulae \citep{Pety_2017}; and the $\Lambda$ Orionis shell is a spherical or toroidal structure shaped by the massive O-type star $\Lambda$ Orionis and a past supernova explosion \citep{Dolan_2001}, where the net effect of star formation feedback is debated \citep[][and references therein]{Yi_2018}. The corresponding catalog covers only Orion A and B, contains all the elements we needed, with the four IRAC bands and the MIPS $24\ \mu$m band, and relies on the G09 method. The authors provide the full point source catalog they used to perform their YSO candidate extraction. This is one of the most important elements in our study, since the network needs to see both the YSOs and the other types of objects to be able to learn the differences between them. \item[] {\hspace{0.3cm}\bf $\bullet \,$ NGC 2264}: For the second dataset, we used the catalog by \citet{rapson_spitzer_2014} (Figs.~\ref{rapson_ngc2264_cover} and \ref{orion_and_2264_wise}), who analyzed Spitzer observations of NGC 2264 in the Mon OB1 region using the same classification scheme. This is also one of the largest star-forming regions in the solar neighborhood \citep[$\sim 4 \times 10^4\,\mathrm{M_\odot}$, with an extent of $\sim 50$ pc, ][and references therein]{montillaud_2019_II} while being a bit more distant from us, at $\sim$ 723 pc \citep{Cantat-Gaudin_2018}. This region has sustained active star formation for at least the last 5 Myr near the center of NGC 2264 \citep{Dahm_2005} that has occurred sequentially \citep{Buckner_2020}, and a secondary convergence center seems to be currently forming in the northern edge of the cluster \citep[around $\delta=10^\circ30'$, ][also visible in Fig.~\ref{rapson_ngc2264_cover} as a group of Class~0/I YSOs]{montillaud_2019_II}. We note that, in contrast to the Orion dataset, this one does not provide the full point-source catalog, but a preprocessed object list compiled after performing band selection and magnitude uncertainty cuts. However, this should not affect the selection, since we used the exact same uncertainty cuts as the authors.
\item[] {\hspace{0.3cm}\bf $\bullet \,$ Combined}: We then defined a dataset that is the combination of the previous two catalogs, which we call the "combined" dataset. We used it to test the impact of combining different star-forming regions in the training process, because distance, environment, and star formation history can impact the statistical distributions of YSOs in CMDs. \item[] {\hspace{0.3cm}\bf $\bullet \,$ Full 1\,kpc}: We pushed this idea further by defining an additional "1\,kpc" catalog, directly from \citet{gutermuth_spitzer_2009} (Fig.~\ref{gutermuth_1kpc_dist}). It contains a census of the brightest star-forming regions closer than 1\,kpc, excluding both Orion and NGC 2264. However, this catalog only contains the extracted YSO candidates and not the original point source catalog with the corresponding contaminants. This is an important drawback, since it cannot be used to add diversity information to this category. Yet, it can be used to increase the number of class I and II YSOs and their respective diversity. We refer to the dataset that combines the three previous datasets, Orion, NGC 2264, and 1\,kpc, as the "full 1\,kpc" dataset. \end{itemize} \vspace{-0.2cm}This first classification provided various labeled datasets, which were used as targets for the learning process. The detailed distribution of the resulting classes for all our datasets is presented in a common table (Table~\ref{tab_selection}), in order to ease their comparison. This table also shows the subclass distribution of the contaminants, as obtained with our modified G09 method.\vspace{-0.2cm}\\ We examine here the discrepancies between our results and those provided in the respective publications. In the case of Orion, merging their various subclasses, \citet{megeath_spitzer_2012} found 488 class I and 2991 class II, no details being provided for the distribution of contaminant subclasses. This is consistent with our simplified G09 method, considering that the absence of the 2MASS phase prevented us from recovering objects that lacked detection in some IRAC bands, and that the authors also applied additional customized YSO identification steps. For the NGC 2264 region, \citet{rapson_spitzer_2014} report 308 sources that present an IR excess, merging class 0/I, II and transition disks. However, they used more conservative criteria than in the G09 method to further limit the contamination, which partly explains why our sample of YSOs is larger in this region. The authors do not provide all the intermediate numbers, but they state that they excluded 5952 contaminant stars from the Mon OB1 region, a number roughly consistent with our own estimate (6893). Finally, the 1\,kpc dataset only contains class I and II objects, which means that every object that we classified as contaminant is a direct discrepancy between the two classifications. This is again due to the absence of some refinement steps in our simplified G09 method. For the complete catalog containing 36 individual star-forming regions, \citet{gutermuth_spitzer_2009} report 472 class I and 2076 class II extracted, which is also consistent with our adapted method that extracts 370 class I and 1735 class II, taking into account the absence of the 2MASS phase.\vspace{-0.2cm}\\ From these results, the strong imbalance between the 3 labeled classes is striking. We detailed some of the difficulties that imbalanced learning raises in Section~\ref{class_balance}.
This aspect will be described and carefully handled in the following section, as it proved to have a very strong impact on our results (Sect.~\ref{orion_results}). \begin{figure}[!t] \centering \includegraphics[height=0.58\hsize]{images/megeath_orion_cover.png} \caption[Spitzer coverage of the Orion cloud]{Spitzer coverage of the Orion cloud. The grayscale is an extinction map of the region obtained using the 2MASS point source catalog. {\it Left}: in green, the fields surveyed with IRAC. {\it Right}: in green and red, the fields surveyed with MIPS. {\it Adapted from} \citet{megeath_spitzer_2012}.} \label{megeath_orion_cover} \end{figure} \begin{figure}[!t] \centering \includegraphics[height=0.58\hsize]{images/rapson_ngc2264_cover.png} \caption[Spitzer coverage of Mon OB1 East]{Spitzer coverage of Mon OB1 East. The grayscale is the inversion of the observed 4.5 $\mathrm{\mu m}$ band. {\it Left}: the red boxes define NGC 2264. The magenta boxes were used to estimate the amount of field stars in the original study. {\it Right}: retrieved class 0/I YSOs are in red, class II YSOs are in green and "transition disks" are in blue, from the original study. {\it Adapted from} \citet{rapson_spitzer_2014}.} \label{rapson_ngc2264_cover} \end{figure} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.49\textwidth} \caption*{\bf Orion} \includegraphics[width=\textwidth]{images/orion_wise.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \caption*{\bf NGC 2264} \includegraphics[width=\textwidth]{images/ngc2264_wise.png} \end{subfigure}\\ \caption[Orion and NGC 2264 as seen by AllWISE]{Orion and NGC 2264 as observed using the colored AllWISE data (Red, Green, Blue correspond to W4, W2, W1, respectively).} \label{orion_and_2264_wise} \end{figure*} \begin{figure*}[!t] \vspace{0.5cm} \hspace{-0.7cm} \begin{minipage}{1.15\hsize} \centering \includegraphics[width=\textwidth]{images/gutermuth_1kpc_dist.pdf} \end{minipage} \caption[Distribution of the $<$~1\,kpc regions]{Distribution of the $<$~1\,kpc regions from the corresponding catalog on the sky using an Aitoff projection with ICRS frame. The colors indicate the estimated distance of the regions, based on \citet{gutermuth_spitzer_2009}.} \label{gutermuth_1kpc_dist} \end{figure*} \clearpage \newpage \subsection{Construction of the test, validation, and training datasets} \label{training_test_datasets} As described in Section~\ref{sect_overtraining}, the learning process requires not only a training dataset to update the weights, but also a test set and a validation set, which contain sources that were not shown to the network during the learning process. This provides a criterion to stop the learning process. The class proportions in those datasets can be kept as in the labeled dataset or can be rebalanced to have an even number of objects per class. However, our sample suffers from two limitations: its small size and its strong imbalance. To optimize the quality of our results, we needed to carefully define our training and test datasets. Since one of our classes of interest (CI) is represented by a relatively small number of objects, the efficiency of the training strongly depends on how many of them we kept in the training sample. Therefore, we chose to adopt the strategy where the same dataset is used for both validation and test steps (Sect.~\ref{sect_overtraining}). It remains efficient to track overfitting, but it increases the risk of stopping the training in a state that is abnormally favorable for the test set.
As mentioned in Section~\ref{sect_overtraining}, discrepancies between results on the training and test datasets can be used to diagnose over-training and will be detailed for the present application in Section~\ref{orion_results}. Even with this strategy, it remains that the labeled dataset has only a few objects to be shared between the training and the test set for some subclasses, considering the expected complexity of the classification.\\ \begin{figure}[!t] \hspace{-1.2cm} \begin{minipage}{1.1\hsize} \centering \includegraphics[width=\hsize]{images/hist_train.png} \end{minipage} \caption[Rebalancing process illustrated on the Combined dataset]{Rebalancing process illustrated on the Combined dataset. The orange color shows the number of objects in the complete dataset, the blue color represents the remaining objects after exclusion of the test set in observational proportion using $\theta = 0.2$, and the green color represents the training objects using the $\gamma_i$ corresponding to this case. The vertical dashed black line is the $1-\theta$ value corresponding to CI, which is the $\gamma_i = 1$ reference.} \label{hist_train_rebalance} \vspace{-0.2cm} \end{figure} In addition to the previous point, to evaluate the quality of the results, it was necessary for the test set to be representative of the actual problem. As before, this is difficult mainly because our case study is strongly imbalanced. Therefore, we needed to keep "observational proportions" for the test set as discussed in Section~\ref{class_balance}. We defined a fraction $\theta$ of objects from the labeled dataset that was taken to form the test dataset. This selection was made independently for each of the seven subclasses provided by the modified G09 classification. It ensured that the proportions were respected even for highly diluted classes of objects (e.g. for Shocks). The effect of taking such proportions is discussed for our various results in Section~\ref{yso_results}.\\ In contrast, the training set does not need to have observational proportions. It needs more numerous objects from the classes that have a greater intrinsic diversity and populate a larger volume in the input parameter space. It is also necessary to be more accurate on the most abundant classes, since even a small error on them induces a large contamination of the diluted classes (Sect.~\ref{class_balance}). We chose to scale the number of objects from each class to the number of CI YSOs. The choice of this class is motivated by the fact that the identification of CI YSOs is missing in several other YSO identification methods that do not use surveys with enough sensitivity to detect them. Therefore, we want this class to be predicted with the highest achievable quality. Additionally, they are rare in our labeled dataset, mainly due to the fact that they are much fainter. This means that we wanted to have the maximum number of them in our training sample while also having a fine control over the degree of dilution of this class against the others. Consequently, the scaling is performed as follows. We shared all the CI objects between the training and the test samples as fixed by the fraction $\theta$, that is: \begin{equation} N^{\rm train}_{\rm CI} = (1-\theta) \times N^{\rm tot}_{\rm CI} \quad \text{and} \end{equation} \begin{equation} N^{\rm test}_{\rm CI} = \theta \times N^{\rm tot}_{\rm CI}, \end{equation} respectively, where $N^{\rm tot}_{\rm CI}$ is the total number of CI objects.
Then, we defined a new hyperparameter, the factor $\gamma_i$, as the ratio between the number of selected objects from a given subclass $N^{\rm train}_i$ and the number $N^{\rm train}_{\rm CI}$ of class I YSOs in the same dataset: \begin{equation} \gamma_i = \frac{N^{\rm train}_i}{N^{\rm train}_{\rm CI}}. \end{equation} If $N^{\rm train}_i$ were computed directly from this formula, it might exceed $(1-\theta)\times N^{\rm tot}_i$ in some cases, a situation incompatible with keeping $N^{\rm test}_i = \theta \times N^{\rm tot}_i$ objects in the test set. Thus, we limited the values of $N^{\rm train}_i$ as follows: \begin{equation} N^{\rm train}_i = \min\Big( \big(\gamma_i\times (1-\theta) \times N_{\rm CI}^{\rm tot}\big), \big((1-\theta) \times N_i^{\rm tot}\big)\Big), \label{eq:Ntrainmin} \end{equation} where the values of the $\gamma_i$ factors were determined manually by trying to optimize the results on each training set. We note that for the most populated classes, this approach implies that only part of the sample was used to build the training and test sets. As discussed below, this was a motivation to repeat the training with various random selections of objects, and thus assess the impact of this random selection on the results.\\ \begin{table*}[t] \small \centering \vspace{0.2cm} \caption{Composition of the training and test datasets for each labeled dataset.} \label{sat_factors} \begin{tabularx}{1.0\hsize}{l l@{\hskip 0.05\hsize} x{0.085\hsize} x{0.105\hsize} *{5}{Y} @{\hskip 0.07\hsize} c} \toprule \toprule \vspace{-0.3cm}\\ & & CI & CII & Gal. & AGNs & Shocks & PAHs & Stars & Total\\ \vspace{-0.3cm}\\ \toprule \vspace{-0.3cm}\\ \multicolumn{10}{c}{ Orion - $\theta = 0.3$}\\ \cmidrule(lr){1-10} Test: & & 97 & 667 & 122 & 342 & 8 & 26 & 4470 & 5732\\ \cmidrule(lr){2-10} \multirow{2}{*}{Train:} & $\gamma_i$ & 1.0 & 3.35 & 0.6 & 1.3 & 0.1 & 0.3 & 4.0 & \\ & $N_i$ & 226 & 757 & 135 & 293 & 19 & 60 & 904 & 2394\\ \vspace{-0.1cm}\\ \multicolumn{10}{c}{ NGC 2264 - $\theta = 0.3$}\\ \cmidrule(lr){1-10} Test: & & 27 & 130 & 34 & 75 & 1 & 0 & 2067 & 2334\\ \cmidrule(lr){2-10} \multirow{2}{*}{Train:} & $\gamma_i$ & 1.0 & 2.5 & 0.3 & 0.6 & 0.1 & 0.3 & 3.5 & \\ & $N_i$ & 62 & 155 & 18 & 37 & 4 & 0 & 217 & 493 \\ \vspace{-0.1cm}\\ \multicolumn{10}{c}{ Combined - $\theta = 0.2$}\\ \cmidrule(lr){1-10} Test:& & 82 & 531 & 104 & 278 & 6 & 17 & 4359 & 5377\\ \cmidrule(lr){2-10} \multirow{2}{*}{Train:} & $\gamma_i$ & 1.0 & 3.45 & 0.7 & 1.6 & 0.1 & 0.3 & 3.8 &\\ & $N_i$ & 331 & 1141 & 231 & 529 & 27 & 70 & 1257 & 3586\\ \vspace{-0.1cm}\\ \multicolumn{10}{c}{ Full 1\,kpc - $\theta = 0.2$}\\ \cmidrule(lr){1-10} Test**:& & 82 & 531 & 104 & 278 & 6 & 17 & 4359 & 5377\\ \cmidrule(lr){2-10} \multirow{2}{*}{Train:} & $\gamma_i$ & 1.0/1.0* & 3.3/3.0* & 1.0 & 1.4 & 0.1 & 0.3 & 8.0 &\\ & $N_i$ & 331/331* & 1092/993* & 331 & 463 & 27 & 70 & 2648 & 6286\\ \vspace{-0.2cm}\\ \bottomrule \bottomrule \end{tabularx} \caption*{\vspace{-0.3cm}\\ {\bf Notes.} *The first and second values of $\gamma_i$ are for YSOs from the Combined and 1\,kpc datasets, respectively.
\newline **The 1\,kpc dataset does not add contaminants, therefore the Full 1\,kpc test set is the same as the Combined test dataset to keep realistic observational proportions.} \vspace{-0.5cm} \end{table*} The adopted values of $\theta$, $\gamma_i$, and the corresponding numbers of objects in the training sample are given in Table~\ref{sat_factors} for each dataset, while Figure~\ref{hist_train_rebalance} shows a graphical representation of the rebalancing process for the Combined dataset. The figure compares the sizes of the complete labeled dataset, the training set, and the test set. Like for the other parameters, the choice of $\gamma_i$ values for each dataset was the result of an exploration and of the analysis of the results given for each case. The table shows the general trend that with larger labeled datasets, we can use smaller values of $\theta$ because they still correspond to a large enough number of objects in the associated test set. In addition, the number of objects in the training set of NGC 2264 is significantly smaller than in the other datasets. The sample also lacks some subclasses of contaminants, mainly due to the much smaller sky coverage of the region when compared to Orion, which impacted the results for the associated training. The fine tuning of the $\gamma_i$ values is discussed for each region in Section~\ref{yso_results} and aims at maximizing the precision for CI, while keeping a large enough recall (ideally $>90\%$ for both), and a good precision on CII as well. This choice strongly impacts the tuning of the $\gamma_i$ values, since they directly represent the emphasis given to a class against the others during the training phase, hence biasing the network toward the class that needs the most representative strength. This will slightly lower the quality of the other classes, but always to a very acceptable extent. Moreover, it is still possible to isolate objects with the best classification reliability using the probability output to overcome this effect, as discussed in Section~\ref{classification_examples}. One last point is that the exploration of the optimal proportion of each class in the training dataset allows one to account for intrinsically more complicated distributions in the input feature space for certain classes. Some subclasses might be very easy to isolate since they are linearly separable from the rest of the problem. Therefore, reducing the number of objects in such a subclass will mostly conserve its quality but will free space for other, more difficult, objects and will also reduce the dilution of other rare subclasses. This point was mentioned in Section~\ref{class_balance} and will also be covered more deeply in the results (Sect.~\ref{yso_results}).\\ \newpage Finally, to ensure that our results are statistically robust, each training was repeated several times with different random selections of the testing and training objects based on the $\theta$ and $\gamma_i$ factors. This allowed us to estimate the variability of our results as discussed during the result analysis in Section~\ref{yso_results}. We checked this variability after each change in any of the hyperparameters. In the case of subclasses with many objects, some objects were included neither in the training nor in the test set. This ensured that the random selection could pick up various combinations of them at each training.
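In practice, the per-subclass random selection just described can be summarized by the following minimal Python sketch. The rounding conventions and the selection helper are our own illustrative assumptions; the $\gamma_i$ values quoted are those adopted for Orion in Table~\ref{sat_factors}, so the exact counts may differ slightly from the table depending on the rounding.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def split_subclass(indices, theta, gamma_i, n_ci_tot):
    # 'indices' lists the objects of one subclass of the labeled dataset
    shuffled = rng.permutation(indices)
    n_test = int(round(theta * len(indices)))  # observational proportions
    # Training size capped as in the min() formula above
    n_train = min(int(gamma_i * (1.0 - theta) * n_ci_tot),
                  int((1.0 - theta) * len(indices)))
    return shuffled[:n_test], shuffled[n_test:n_test + n_train]

# Illustrative values for Orion: theta = 0.3 and 324 CI YSOs in total
gammas = {"CI": 1.0, "CII": 3.35, "Gal.": 0.6, "AGNs": 1.3,
          "Shocks": 0.1, "PAHs": 0.3, "Stars": 4.0}
test_ci, train_ci = split_subclass(np.arange(324), 0.3, gammas["CI"], 324)
\end{verbatim}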
In contrast to these well-populated subclasses, the rare ones are entirely shared between the training and the test sets, which makes it more difficult to ensure a large diversity in their selection when testing the stability against selection. For each result presented in Section~\ref{yso_results}, we took care to also dissociate this effect from the one induced by the random initialization of weights by doing several trainings with the same data selection, which is an indication of the intrinsic stability of the network for a specific set of hyperparameters. We acknowledge here that our approach to re-balancing might not be optimal. We have considered other approaches for this task, like various data augmentation methods, setting a different error cost for each class, having some priors in the class distribution, etc. Our method based on $\gamma_i$ values still has the advantage of being simple to implement and of relying solely on observed data. However, it has the major flaw of not using a significant part of the labeled dataset. \vspace{-0.3cm} \subsection{Network architecture and parameters} \label{network_tuning} \vspace{-0.1cm} We adjusted most of the network hyperparameters manually to find appropriate values for our problem. To ease the search for optimal values, we started with values from general recipes. To start, we defined the number of neurons in the hidden layer. The number of neurons can be roughly estimated with the idea that each neuron can be seen as a continuous linear separation in the input feature space (Sect.~\ref{nb_neurons}). Based on Figure~\ref{fig_gut_method}, at least $n = 10$ neurons should be necessary, since this figure does not represent all the possible combinations of inputs. We then progressively raised the number of neurons and tested whether the overall quality of the classification was improving as defined in the previous Section~\ref{training_test_datasets}, looking at CI and CII recall and precision. In most cases, it improved continuously and then fluctuated around a maximum value. The corresponding number of neurons and the maximum value can vary with the other network hyperparameters. The chosen number of neurons is then the result of a joint optimization of the different parameters. We observed that, depending on the other parameters, the average network reached its maximum value for $n \geq 15$ hidden neurons when trained on Orion. However, the network showed better stability with a slightly larger value. We adopted $n = 20$ hidden neurons for almost all the datasets, and increased it to $n = 30$ for the largest dataset, because it slightly improved the results in this case (Table~\ref{tab_hyperparam}). Increasing this number too much could lead to less stability and would increase the computation time. The corresponding network architecture for this example is illustrated in Figure~\ref{illustr_net_final}. It shows all the input features described in Section~\ref{data_prep}, the "large" hidden layer with 20 to 30 sigmoid neurons and the 3 Softmax probability outputs.\\ \begin{figure}[!t] \centering \includegraphics[width=0.95\hsize]{images/illustr_net_final.pdf} \caption[Illustration of the ANN used for the YSO classification]{Illustration of the ANN actually used for the YSO classification. The light dots with blue border are input nodes representing each feature and the necessary bias nodes, the black dots are hidden neurons with sigmoid activation, and the red dots are the output neurons with a probabilistic Softmax activation.
The black lines represent the linking weights. Only part of the hidden neurons and weights are represented to increase readability.} \label{illustr_net_final} \end{figure} \vspace{-0.1cm} The optimal number of neurons and the maximum quality of the classification also depend on the number of objects in the training dataset. As discussed in Section~\ref{nb_neurons}, we checked if we satisfied the recommended rule of having 10 times more training objects than weights in the network. In our case, including the bias nodes, we would need $\big((m+1)\times n + (n+1)\times o\big) \times 10$ objects in our training set, with the same notations as in Section~\ref{mlp_sect}. This gives us a minimum of $2830$ objects in the whole training set using our network structure with $n = 20$, assuming a balanced distribution among the output classes. As shown in Table~\ref{sat_factors}, some of our training samples are too small for the class I YSOs and critically small for various subclasses of contaminants. Still, each class does not get the same number of neurons from the network. As we already stated, we expect some classes to have a less complex distribution in the parameter space, meaning that they can be represented by a smaller number of weights, therefore with fewer training examples. The extra representative strength can then be used to better represent more complex classes that may be more abundant. Thus, it is a matter of balance between having a sufficient number of neurons to properly describe our problem and the limited amount of available data.\\ \begin{table} \centering \caption{Non-structural network hyperparameter values used in the training for each dataset. } \vspace{-0.1cm} \begin{tabularx}{0.95\hsize}{l @{\hskip 0.1\hsize} *{4}{Y}} \toprule & Orion & NGC 2264 & Combined & Full 1\,kpc \\ \vspace{-0.4cm}\\ \cmidrule(rr){2-5} \vspace{-0.3cm}\\ Train size & 2394 & 493 & 3586 & 7476 \\ \midrule $\eta$ & $3 \times 10^{-5}$ & $2 \times 10^{-5}$ & $4 \times 10^{-5}$ & $8 \times 10^{-5}$ \\ $\alpha$ & 0.7 & 0.6 & 0.6 & 0.8 \\ $n$ & 20 & 20 & 20 & 30 \\ $n_e$ & 5000 & 5000 & 5000 & 3000 \\ \bottomrule \end{tabularx} \label{tab_hyperparam} \caption*{\vspace{-0.3cm}\\ {\bf Notes.} The size of the corresponding training set is put for comparison. $\eta$ is the learning rate, $\alpha$ the momentum, $n$ the number of neurons in the hidden layer, and $n_e$ the number of epochs between two control steps.} \vspace{-0.4cm} \end{table} Our datasets were individually normalized in an interval of $-1$ to $1$ as described in Section~\ref{input_norm}. Therefore, we set the steepness $\beta$ of the sigmoid activation of the hidden neurons to $\beta=1$, which worked well with the adopted normalization and our weight initialization, which is the same as in Section~\ref{weight_init}. Regarding the gradient descent scheme (Sect.~\ref{descent_schemes}), all methods were compared at various steps of the study, but none significantly outperformed the others in terms of best reached prediction. Considering also the actual computation time necessary to converge, we selected the full batch method.\\ Concerning the computational performance, it is worth noting that at the moment of the described application, all our computations were made using a much simpler precursor of CIANNA that was also GPU accelerated. At that time, we used a now seven-year-old NVIDIA GTX 780, which is a non-professional GPU.
Still, it was able to train our networks in about 10-15 minutes using the batch formalism, while our CPU parallel implementation using OpenBLAS and OpenMP on an Intel 3770k (4C/8T, 3.5 GHz) required approximately 1 h. These results are for roughly $1.3 \times 10^6$ full epochs. This is consistent with a rough estimate of 537 GFLOPS for our overclocked 3770k against 4.93 TFLOPS for our version of the GTX 780 (both for FP32). This is a nice example of the aspects discussed in Section~\ref{matrix_formal}, showing that GPUs are very efficient to perform such tasks. As a matter of fact, this GPU draws up to 250 W of power while the 3770k CPU is rated for 77 W, which is certainly vastly underestimated considering our 4.2 GHz all-core overclock. Therefore, while having a 3.24 times higher power usage, the GPU has a raw compute capability 9.18 times higher. To end the comparison, we note that the framework used at that time was much more naive than our current version of CIANNA and relied on an old CUDA version ($<10.0$) that did not take advantage of some kernel latency improvements of the subsequent versions, which would have allowed mini-batches to converge efficiently with fewer epochs, i.e. fewer raw computations. Also, while modern GPUs now have a compute power above 15 TFLOPS in FP32, their power consumption is very similar. In contrast, the power consumption of modern high-end CPUs tends to increase (up to 250 W) following their higher core count. Also, we did not account for any ANN-specific capability of modern hardware in this comparison (Sect.~\ref{gpus_prog}), which would have further reinforced the advantage of GPUs.\\ More practically, the learning rates we finally adopted are in the range $\eta = 3 - 8 \times 10^{-5}$, with momentum values ranging from $\alpha = 0.6$ to $ \alpha = 0.8$ depending on the dataset, as shown in Table~\ref{tab_hyperparam}. We note that during training, and in contrast with what is described in Section~\ref{descent_schemes}, we summed the weight update contributions from each object in the training set over a batch, as in \citet{rumelhart_parallel_1986}, instead of averaging them as, for example, in \citet{glorot_understanding_2010}. This implies that our learning rate must be accordingly small when using large datasets. We observed that, for this specific study, the learning rate could instead be progressively increased when the training dataset was larger. This indicates that, in small training sets, the learning process is dominated by the lack of constraints, causing a less stable value of convergence. This translates into a convergence region in the weight space that contains numerous narrow minima due to the relatively larger granularity of the objects in a smaller dataset. The network can only properly resolve it with a smaller learning rate and will be less capable of generalization. This is an expected issue, because we intentionally included small datasets in the analysis to assess the limits of the method with few objects.\\ Finally, one less important hyperparameter is the number of epochs between two monitoring steps, which was set between $n_e = 3000$ and $ n_e = 5000$. It defines the frequency at which the network state is saved and checked, leaving the opportunity to decrease the learning rate $\eta$ if necessary. \subsection{Convergence criteria} Since training the network is an iterative process, a convergence criterion must be adopted.
In principle, this criterion should enable one to identify an iteration where the training has sufficiently converged for the network to capture the information in the training sample, but is not yet subject to over-training, as stated in Section~\ref{sect_overtraining}. However, in our case the global error used as a criterion is affected by the proportions in the training sample and does not necessarily reflect the underlying convergence of each subclass. Our approach to this issue was to let the network learn for an obviously unnecessary number of steps and regularly save the state of the network. This allowed us to better monitor the long term behavior of the error, and to compare the confusion matrices at regular steps. In most cases, the errors of the training and test sets both converged to a stable value and stayed there for many steps before the second one started to rise. During this apparently stable moment, the prediction quality of the classes oscillated, switching which classes get the most representative strength from the network. Because we want to put the emphasis on CI YSOs, we then manually selected a step that was near the maximum value for CI YSO precision, with special attention to avoiding the ones that would be too unfavorable to CII YSOs.\\ As one would expect, we observed that the convergence step changed significantly with the network weight random initialization, even with the exact same dataset and network, ranging from 100 to more than 1000, where each step corresponds to several thousand epochs (Table~\ref{tab_hyperparam}). Most of the time, the error plateau lasted around 100 steps. We emphasize that the number of steps needed to converge has no consequences on the quality of the results; it only reflects the length of the particular trajectory followed by the network during the training phase. \newpage \section{Subsequent application to multiple star-forming regions} \label{yso_results} This section presents the results of our YSO classification using ANN, obtained for the various labeled datasets described in Section~\ref{yso_datasets_tuning}. To ease the reading of this section, we summarize all the cases in Table~\ref{results_cases}. This section also includes some analysis of each case and some comparisons of the results that explain the motivations behind our choice of parameters.\\ \etocsettocstyle{\subsubsection*{\vspace{-1.5cm}}}{} \localtableofcontents \begin{table*}[!ht] \centering \caption{List of case studies regarding the dataset used to train the network and the dataset to which it was applied to provide predictions.} \vspace{-0.1cm} \renewcommand{\arraystretch}{1.3} \begin{tabularx}{0.8\hsize}{r l *{4}{m}} \multicolumn{2}{c}{} & \multicolumn{4}{c}{\textbf{\large Forward dataset}}\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\large Training dataset}}}} & & Orion & NGC 2264 & Combined & Full 1\,kpc \\ \cmidrule(lr){2-6} & Orion & O-O & \hspace{0.19cm}O-N* & & \\ & NGC 2264 & \hspace{0.1cm} N-O* & N-N & & \\ & Combined & & & C-C & \\ & Full 1\,kpc & \hspace{0.12cm}F-O* & \hspace{0.13cm}F-N* & F-C & \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \label{results_cases} \caption*{\vspace{-0.1cm}\\ {\bf Notes.} *These cases were only forwarded on the full corresponding dataset with no need for a test set.
There was no forward on the full 1\,kpc dataset since, as a combination of a complete catalog and a YSO-only catalog, it is not in observational proportions.} \end{table*} \newpage \subsection{First training on one specific region: the Orion molecular cloud} \label{orion_results} In this section, we consider the case where both the training and forward datasets were built from the Orion labeled dataset, hereafter denoted the "O-O" case. This first application is expected to {\bf draw a baseline of results on the simplest possible case}, since Orion is the star-forming region of our sample that contains the most YSOs. \\ \vspace{-0.8cm} \subsubsection{Hyper-parameter and training proportion evaluation} The network hyperparameters used for Orion are described in the previous Section~\ref{yso_datasets_tuning} and Table~\ref{tab_hyperparam}. The resulting confusion matrix is shown in Table~\ref{tab:OO} using the test set in observational proportions from Table~\ref{sat_factors}. The optimal $\gamma_i$ factors found for Orion show a stronger importance of the CII YSOs ($\gamma_{\text{CII}} = 3.35$) and of the Stars subclass ($\gamma_{\text{Stars}} = 4.0$) than for any other subclass ($\gamma_i \lesssim 1$). In contrast, the optimal values for Shocks and PAHs are saturated in the sense that in Eq.~(\ref{eq:Ntrainmin}), $N_i^{\rm train} = (1-\theta) \times N_i^{\rm tot}$, but they appeared to have a negligible impact on the classification quality in this case. Galaxies and PAHs appeared to be easily classified with a rather small number of them in the training sample. This is convenient since adding too many objects of any class hampers the capacity of the network to represent CI objects, i.e. the most diluted class of interest, degrading the reliability of their identification. Therefore, Stars and CII objects could be well represented with a large fraction of them in the training sample, still limiting their number to avoid an excessive dilution of CI YSOs (Sect.~\ref{class_balance}).\\ We note that we explored different values of the $\theta$ parameter. This exploration revealed that the network predictions improve continuously when increasing the number of objects in the training sample. However, to keep enough objects in the test dataset, we had to limit $\theta$ to 0.3 (Table~\ref{sat_factors}). The only classes for which the number of objects in the training sample is limited by the $\theta$ value rather than their respective $\gamma_i$ values are CI YSOs, Shocks, and PAHs. Since Shocks and PAHs are rare in the observational proportions, they are unlikely to have a dominant impact on the prediction quality; we discuss more deeply the results for these rare subclasses in Section~\ref{shocks_discussion}.
This leads to the outcome that the results on Orion are currently mainly limited by the number of CI YSOs, because their dilution prevents the use of more objects of the other subclasses for which we have numerous examples.\\ \subsubsection{Main result} \begin{table}[!t] \small \centering \caption{Confusion matrix for the O-O case for a typical run.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 88 & 4 & 5 & 90.7\% \\ & CII YSOs & 7 & 651 & 9 & 97.6\% \\ & Others & 11 & 58 & 4899 & 98.6\% \\ \cmidrule(lr){2-6} & Precision & 83.0\% & 91.3\% & 99.7\% & 98.4\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:OO} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the O-O case.} \vspace{-0.1cm} \label{rep_hist_orion} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 88 & 7 & 1 & 2 & 3 & 3 & 2 \\ & CII YSOs & 4 & 651 & 5 & 0 & 2 & 4 & 47 \\ & Others & 5 & 9 & 116 & 340 & 3 & 19 & 4421 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.8cm} \label{tab:OO-sub} \end{table} The global accuracy of this case is 98.4\%, but the confusion matrix (Table~\ref{tab:OO}) shows that this apparently good accuracy is unevenly distributed between the three classes. The best represented class is the contaminant class, with an excellent precision of 99.7\% and a very good recall of 98.6\%. The results are slightly less satisfying for the two classes of interest, with recalls of 90.7\% and 97.6\%, and precisions of 83.0\% and 91.3\% for the CI and CII YSOs, respectively. In spite of their very good recall, due to their widely dominant number, objects from the Other class are the major contaminants of both CI and CII YSOs, with 11 out of 18, and 58 out of 62 contaminants, respectively. Therefore, improving the relatively low precision of CI and CII objects mainly requires a better classification of the Other objects. In addition, less abundant classes are more vulnerable to contamination. This is well illustrated by the fact that the 7 CII YSOs misclassified as CI YSOs account for a loss of 7\% in precision for CI objects, while the 9 CII YSOs misclassified as Others account for a loss of only 0.2\% in the Others precision. Those properties are typical of classification problems with a diluted class of interest, where it is essential to compute the confusion matrix using observational proportions. Computing it from a balanced forward sample would have led to apparently excellent results, which would greatly overestimate the quality genuinely obtained in a real use case.
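As a concrete check, the per-class quantities above can be recomputed directly from the confusion matrix. The short Python sketch below does it from the numbers of Table~\ref{tab:OO}; only numpy is assumed.

\begin{verbatim}
import numpy as np

# O-O test confusion matrix (rows: actual, columns: predicted),
# in observational proportions, copied from the table above
conf = np.array([[  88,   4,    5],    # CI YSOs
                 [   7, 651,    9],    # CII YSOs
                 [  11,  58, 4899]])   # Others

recall = conf.diagonal() / conf.sum(axis=1)     # [0.907, 0.976, 0.986]
precision = conf.diagonal() / conf.sum(axis=0)  # [0.830, 0.913, 0.997]
accuracy = conf.diagonal().sum() / conf.sum()   # 0.984
\end{verbatim}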
Moreover, this comparison illustrates the necessity of a high $\gamma_i$ value for dominant classes regardless of their interest (e.g. Stars), as we need to maximize the recall of these classes to enhance the precision of the diluted ones.\\ \subsubsection{Test on a balanced dataset} To illustrate the interest of selecting our training proportions with the $\theta$ and $\gamma_i$ factors, we made a test with a balanced training set, where all three classes were represented by an equal number of objects. The result of this test for a typical training run is provided in Table \ref{tab:OO_balanced}. The best we could achieve this way was not more than $\sim 55\%$ precision on CI YSOs, shown in red in the table, and $\sim 87\%$ for CII YSOs, which is considerably less than the results with the rebalanced dataset. This was mostly due to the small size of the training sample (681 objects with $\theta = 0.3$), which was constrained by the less abundant class, and to the poor sampling of the Other class compared to its great diversity. In particular, with only $227$ contaminants it is impossible to represent all the subclasses. We could also have attempted a balanced training with more output classes in our network definition, to account for this diversity, but it resulted in other issues, with the CI recall getting far too low with respect to our expectations. In contrast, when using our more complex sample definition, despite the reduced proportion of YSOs in the training sample, the precision and recall quantities for both CI and CII remained above $80\%$ and $90\%$, respectively. This means that we found an appropriate balance between the representativity of each class and their dilution in the training sample.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for a balanced training on the O-O case for a typical run.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 94 & 3 & 0 & 96.9\% \\ & CII YSOs & 28 & 613 & 26 & 91.9\% \\ & Others & 50 & 93 & 4826 & 97.1\% \\ \cmidrule(lr){2-6} & Precision & \textcolor{red}{\bf 54.7\%} & 86.5\% & 99.5\% & 96.5\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:OO_balanced} \end{table} \subsubsection{Prediction stability} As discussed in Section \ref{training_test_datasets}, we tested the stability of those results regarding (i) the initial weight values using the exact same training dataset, and (ii) the random selection of objects in the training and test set. For point (i), we found that in Orion, the weight initialization has a weak impact, with approximately $\pm 0.5\%$ dispersion in almost all the quality estimators. For point (ii), we found the dispersion to average around $\pm 1\%$ for the recall of YSO classes. Contaminants were found to be more stable with a recall dispersion under $\pm 0.5\%$. Regarding the precision value, there is more instability for the CI YSOs, because they are weakly represented in the test set and one misclassified object changes the precision value by typically $1\%$. Overall, we observed values ranging from $77\%$ to $83\%$ for the CI YSOs precision. For the better represented classes, we obtained much more stable values with dispersions of $\pm 0.5$ to $\pm 1\%$ on class II, and less than $\pm 0.5\%$ on Other objects.
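The quoted sensitivity of the CI precision to a single object can be verified with a two-line check, using the O-O test set numbers from Table~\ref{tab:OO} (88 correctly identified CI YSOs out of 106 predicted):

\begin{verbatim}
# One additional misclassified object shifts the CI precision
# by close to 1%, as stated above
tp, n_pred = 88, 106
print(tp / n_pred)        # 0.830
print(tp / (n_pred + 1))  # 0.822, i.e. a ~0.8% drop
\end{verbatim}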
This relative stability is strongly related to the proper balance between classes, controlled by the $\gamma_i$ parameters, since strong variations between runs imply that selection effects are important and that there are not enough objects to represent the input parameter space properly.\\ \subsubsection{Detailed sub-classes distribution} We also looked at the detailed distribution of classified objects regarding their subclasses from the labeled dataset. These results are shown for Orion in Table~\ref{rep_hist_orion}. It is particularly useful to detail the distribution of contaminants across the three network output classes. For CI YSOs, the contamination appears to originate evenly from various subclasses, while for CII there is a strong contamination from non-YSO stars, though this represents only a small fraction ($\sim 1\%$) of the Stars population. The distribution of Other objects among the subclasses is very similar to the original one (Table~\ref{tab_selection}). Interestingly, the Shocks subclass is evenly scattered across the three output classes, which we interpret as a failure by the network to find a proper generalization for these objects. More generally, Table~\ref{rep_hist_orion} shows that the classes that are sufficiently represented in the training set, like AGNs or Stars, are well classified, while the Galaxies, Shocks, and PAHs are less well predicted. This is directly related to the fact that the training dataset does not fully cover their respective volumes in the input parameter space, or to the fact that they are too diluted in the dataset. Additionally, Stars and Galaxies mainly contaminate the CII class. This is a direct consequence of the proximity of these classes in the input parameter space, as can be seen in Figure~\ref{fig_gut_method}. \subsubsection{Full dataset result} To circumvent the limitations due to the small size of our test set, we also applied our network to the complete Orion dataset. The corresponding confusion matrix is in Table~\ref{tab:OO_all} and the associated subclass distribution is in Table~\ref{tab:OO_all-sub}. This may be considered a risky practice, because it includes objects from the training set that could be over-fitted, so it should not be used alone to analyze the results. Here, we used it jointly with the results on the test set as an additional over-fitting test. If the classes are well constrained, then the confusion matrix should be stable when switching from the test set to the complete dataset. For Orion, we see a strong consistency between Tables~\ref{tab:OO} and \ref{tab:OO_all} for the Other and CII classes, both in terms of recall and precision. For CI YSOs, the recall has increased by 3.4\%, and the precision has decreased by 1.2\%. These variations are of the same order as the variability observed when changing the training set random selection, indicating that over-fitting is unlikely here. If there is over-fitting, it should be weak and restricted to CI YSOs. Therefore, the results obtained from the complete Orion dataset appear to be reliable enough to take advantage of their greater statistics. Table~\ref{tab:OO_all} gives slightly more information than Table~\ref{tab:OO}, and mostly confirms the previous conclusions on the contamination between classes. Table~\ref{tab:OO_all-sub} provides further insight. AGNs, which seemed to be almost perfectly classified, are revealed to be misclassified as YSOs in 1.8\% of cases. It also shows that the missed AGNs are equally distributed across the CI and CII YSO classes.
Shocks are still evenly spread across the three output classes. Regarding PAHs, Table~\ref{tab:OO_all-sub} reveals that there is more confusion with the CII YSOs than with the CI YSOs. \begin{table}[!t] \small \centering \caption{Confusion matrix for the O-O case forwarded on the full dataset.} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 305 & 11 & 8 & 94.1\% \\ & CII YSOs & 34 & 2157 & 33 & 97.0\% \\ & Others & 34 & 201 & 16331 & 98.6\% \\ \cmidrule(lr){2-6} & Precision & 81.8\% & 91.1\% & 99.7\% & 98.3\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:OO_all} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the O-O case forwarded on the full dataset.} \vspace{-0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 305 & 34 & 2 & 11 & 11 & 7 & 3 \\ & CII YSOs & 11 & 2157 & 10 & 9 & 9 & 18 & 155 \\ & Others & 8 & 33 & 395 & 1121 & 8 & 62 & 14745 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.1cm} \label{tab:OO_all-sub} \end{table} \newpage \subsection{Effect of the selected region: training using NGC 2264} \label{NGC2264_results} \vspace{0.3cm} After having established base results using Orion, we wanted to {\bf test whether the learning process could be performed on another region}. As Orion is the largest star-forming region of our sample, this implies selecting a region with fewer YSOs, which was expected to have a strong impact on the results. Training on an individual region is very limiting, as very few regions contain a sufficient number of stars. However, we wanted to have two distinct one-region cases in order to ease their comparison and to assess the presence of region-specific features.\\ \subsubsection{Main result} For this, we used the training and forward datasets for NGC 2264 as described in Table~\ref{sat_factors} with the corresponding hyperparameters (Table~\ref{tab_hyperparam}). The results for this region alone, obtained by a forward on the test set, are shown in Table~\ref{tab:NN}, with the subclass distribution in Table~\ref{tab:NN-sub}. We refer to this case as the N-N case. The major differences with Orion are expected to come from the differences in input parameter space coverage and from the different proportions of each subclass. This N-N case is also useful to see how difficult it is to train our network with a small dataset. We notice that the recall and precision of CI YSOs are greater ($96.3\%$ and $89.7\%$, respectively) than in Orion, but the corresponding number of objects is too small to draw firm conclusions. For CII YSOs, the recall and precision are lower than in Orion by approximately $4\%$ and $12\%$, respectively.
The Other class shows values similar to those in Orion.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the N-N case for a typical run.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 26 & 1 & 0 & 96.3\% \\ & CII YSOs & 1 & 121 & 8 & 93.1\% \\ & Others & 2 & 31 & 2144 & 98.5\% \\ \cmidrule(lr){2-6} & Precision & 89.7\% & 79.1\% & 99.6\% & 98.2\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:NN} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the N-N case.} \vspace{-0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 26 & 1 & 0 & 2 & 0 & 0 & 0 \\ & CII YSOs & 1 & 121 & 4 & 5 & 1 & 0 & 21 \\ & Others & 0 & 8 & 30 & 68 & 0 & 0 & 2046 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{+0.2cm} \label{tab:NN-sub} \end{table} \newpage \subsubsection{Small dataset issues} We highlight here how having a small learning sample is problematic for this classification. First of all, the training set contains only 62 CI YSOs, which is far from enough given the size of the network (Sect. \ref{nb_neurons}). This difficulty is far worse than for Orion, because, to avoid dilution, we had to limit the number of objects in the two other classes, leading to the small size of the training sample (493 objects), and consequently to worse results for all classes. To mitigate these difficulties, and because the dilution effect occurs quickly, we adopted lower $\gamma_i$ values for CII YSOs and Stars, thus reducing their relative strength. This results in training set sizes that are too small for all the subclasses compared to the number of weights in the network. However, we observed that decreasing the number of neurons nevertheless degraded the quality of the results. Although a lower number of hidden neurons tended to increase stability, we chose (i) to keep them at $n = 20$ to get the best results, and (ii) to reduce the learning rate to achieve better stability. We note that, due to the use of batch training with the sum of contributions, the smaller dataset size compared to the O-O case is somewhat equivalent to an additional lowering of the learning rate (Sect.~\ref{descent_schemes}). For this dataset, slight changes in the $\gamma_i$ values led to large differences in results and stability, which indicates that the classification lacks constraints.\\ \subsubsection{Prediction stability} Even for a well-chosen $\gamma_i$ set, there is a large scatter in the results when changing the training and test set random selection. This leads to a dispersion of about $\pm 4\%$ in both recall and precision for the CI YSOs. This can be due to a lack of representativity of this class in our sample, but it can also come from small-number effects in the test set that are stronger than in Orion. These two points show that the quality estimators for CI YSOs are not trustworthy with such a small sample size.
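The magnitude of these small-number effects can be estimated with a simple binomial model, in which each genuine object is treated as an independent trial; this is a rough sketch, not a rigorous uncertainty analysis:
\begin{verbatim}
n_ci = 27                 # genuine CI YSOs in the N-N test set (Table tab:NN)
recall = 26 / n_ci        # measured CI recall, 96.3%

# Granularity: one reclassified object shifts the recall by 1/27
print(f"one-object step    : {100 / n_ci:.1f}%")          # ~3.7%

# Rough binomial standard deviation of the measured recall
sigma = (recall * (1 - recall) / n_ci) ** 0.5
print(f"binomial dispersion: {100 * sigma:.1f}%")         # ~3.6%
\end{verbatim}
Both numbers are consistent with the $\pm 4\%$ dispersion quoted above, showing that the sample size alone can account for a large part of the observed scatter.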
The results shown in Tables~\ref{tab:NN} and \ref{tab:NN-sub} correspond to one of the best trainings on NGC 2264, which achieves nearly the best values for the CI quality estimators. The CII precision dispersion is about $\pm 2\%$, and its average value is around $80\%$, which is higher than in the specific result given in Table~\ref{tab:NN}, but still significantly lower than for Orion. In contrast, the CII recall is fairly stable, with less than $\pm 1\%$ dispersion. Contaminants seem as stable as for Orion using these specific $\gamma_i$ values. However, this could come from the artificial simplification of the problem due to the quasi-absence of some subclasses (Shocks and PAHs, see Table~\ref{sat_factors}) in the test set. We note that the network would not be able to classify objects from these classes if this training were applied to any other region that contained such objects.\\ As in the previous section, we studied the effect of the random initialization of the weights. We found that both the precision and recall of the YSO classes are less stable than for the O-O case, with a dispersion of $\pm 1.5\%$ to $\pm 2.5\%$. The Other class shows a stability similar to that of Orion, with up to $\pm 0.5\%$ dispersion on precision and recall, which could again be biased by the fact that the absence of some subclasses simplifies the classification. These results indicate, as before, that this dataset alone does not sufficiently constrain our network, given the architecture complexity needed for YSO classification. \\ \newpage \subsubsection{Full dataset result} The forward on the complete NGC 2264 dataset is crucial in this case, since it may overcome small-number effects for many subclasses. The corresponding results are shown in Tables~\ref{tab:NN_all} and \ref{tab:NN-sub_all}. It is more difficult in this case than in the O-O one to be sure that there is no over-training, even with a careful monitoring of the error convergence on the test set during the training, because the small-number effects are important. As a precaution, in all the results for the N-N case, we chose to stop the training slightly earlier in the convergence phase than for Orion, for which we found over-training to be negligible or absent (Sect.~\ref{orion_results}). We expect this strategy to reduce over-training, at the cost of a higher noise.\\ With this assumption, the results show more similarities to the Orion case than those obtained with the test set only (comparing Tables~\ref{tab:OO_all} and \ref{tab:NN_all}). Because NGC 2264 contains fewer CI and CII YSOs than Orion, their boundaries with the contaminants in the parameter space are less constrained. This results in a lower precision for the YSO classes, which is mainly visible for the CII YSOs, with a drop in precision down to $83.7\%$. For NGC 2264, we have smaller optimal $\gamma_i$ values for the contaminants (especially the Stars) than in Orion. Since this implicitly forces the network to put the emphasis on CI and CII, it should result in recall values for these classes that are better than, or at least equivalent to, those in Orion. It appears to be the case for CI ($\approx 98\%$). It is less clear for CII ($93.3\%$), possibly because of their lower $\gamma_i$ value compared to the Orion case. For the sub-contaminant distributions, the statistics are more robust than in Table~\ref{tab:NN-sub}, and the Galaxies and AGNs are properly represented.
Still, it appears that the AGN classification quality is not sufficient and has a stronger impact on the CI precision than in the case of Orion. The other behaviors are similar to those identified in Orion.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the N-N case forwarded on the full dataset.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 88 & 2 & 0 & 97.8\% \\ & CII YSOs & 7 & 406 & 22 & 93.3\% \\ & Others & 12 & 77 & 7175 & 98.8\% \\ \cmidrule(lr){2-6} & Precision & 82.2\% & 83.7\% & 99.7\% & 98.4\%\\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:NN_all} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the N-N case forwarded on the full dataset.} \label{tab:NN-sub_all} \vspace{0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 88 & 7 & 0 & 8 & 3 & 0 & 0 \\ & CII YSOs & 2 & 406 & 8 & 10 & 1 & 0 & 58 \\ & Others & 0 & 22 & 106 & 232 & 2 & 0 & 6835 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{+0.1cm} \label{tab:NN_all-sub} \end{table} \newpage \subsection{Generalization capacity: crossed application} \label{cross_forward} In this section, we {\bf tested the generalization capacity of the trained networks} by using the network trained on one region to classify the sources of the other one. This test is important because it corresponds to a typical use case: training the network on well-known regions and using it on a new one. It is also a way to highlight more discrepancies between the datasets. \subsubsection{Cross forward considerations} For this, we used the trained networks obtained from the O-O and N-N cases described in Sects. \ref{orion_results} and \ref{NGC2264_results}. Since they are both built from the same original classification scheme (Sect. \ref{data_prep}), we directly applied each training to the other labeled dataset, which resulted in the two new cases O-N and N-O (see Table~\ref{results_cases}). However, the forwarded dataset must be normalized in the same way as the training set (Sect.~\ref{network_tuning}). Omitting this step would lead to deviations and distortions of our network class boundaries in the input parameter space, with a strong impact on the network prediction. One difficulty is that some objects end up with parameters outside the $[-1;1]$ range, corresponding to areas of the feature space where the network is not constrained. One could partly hide this effect by excluding those out-of-boundary objects. However, they give additional information about which kinds of objects are missing in the respective training datasets and about the corresponding input feature space areas. Therefore, we preferred to keep them in the forward samples. It is legitimate here to use the full dataset directly to test the networks, because none of its objects were used during the corresponding training.
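This normalization step can be summarized as follows. The sketch assumes a simple per-feature min-max scaling to $[-1;1]$ as an illustration of the scheme of Sect.~\ref{network_tuning}; the key point is that the scaling constants are computed on the training region only and re-used verbatim on the forwarded one.
\begin{verbatim}
import numpy as np

def fit_minmax(train_features):
    """Per-feature scaling constants, from the training region only."""
    return train_features.min(axis=0), train_features.max(axis=0)

def apply_minmax(features, lo, hi):
    """Rescale to [-1, 1] with the stored training-set constants."""
    return 2.0 * (features - lo) / (hi - lo) - 1.0

# Hypothetical arrays: 5 bands + 5 uncertainties per source.
rng = np.random.default_rng(0)
orion = rng.normal(0.0, 1.0, size=(1000, 10))     # training region
ngc2264 = rng.normal(0.2, 1.1, size=(500, 10))    # forwarded region

lo, hi = fit_minmax(orion)
x_fwd = apply_minmax(ngc2264, lo, hi)

# Objects outside [-1, 1] probe unconstrained parts of the feature
# space; we flag them instead of discarding them, as done in the text.
out_of_range = np.any(np.abs(x_fwd) > 1.0, axis=1)
print(f"{out_of_range.sum()} forwarded objects outside the training range")
\end{verbatim}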
Applying the trained networks in this way also means that we forwarded datasets with different proportions than those they were trained with, but this is the expected end use of such networks. Moreover, both datasets are the results of observations, which means that our tests measured the effective performance of the trained network on a genuine observational use case with the corresponding proportions of classes.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the O-N case forwarded on the full NGC 2264 dataset.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 74 & 2 & 14 & 82.2\% \\ & CII YSOs & 6 & 402 & 27 & 92.4\% \\ & Others & 9 & 52 & 7203 & 99.2\% \\ \cmidrule(lr){2-6} & Precision & 83.1\% & 88.2\% & 99.4\% & 98.6\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{+0.1cm} \label{tab:ON_all} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the O-N case forwarded on the NGC 2264 dataset.} \vspace{-0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 74 & 6 & 0 & 3 & 5 & 0 & 1 \\ & CII YSOs & 2 & 402 & 6 & 2 & 0 & 0 & 44 \\ & Others & 14 & 27 & 108 & 245 & 0 & 1 & 6848 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.2cm} \label{tab:ON_all-sub} \end{table} One must note that, in order to properly compare the results, we needed to keep the exact same networks that produced the results in Tables~\ref{tab:OO}, \ref{tab:OO_all}, \ref{tab:NN}, and \ref{tab:NN_all}. Therefore, we did not estimate the dispersion of the prediction regarding the weight initialization and the training set random selection for the O-N and N-O cases. \\ \subsubsection{O-N main result} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.44\textwidth} \caption*{\textbf{Orion}} \includegraphics[width=\textwidth]{images/space_coverage_orion.png} \end{subfigure} \begin{subfigure}[t]{0.44\textwidth} \caption*{\textbf{NGC 2264}} \includegraphics[width=\textwidth]{images/space_coverage_2264.png} \end{subfigure} \\\vspace{0.3cm} \begin{subfigure}[t]{0.44\textwidth} \caption*{\textbf{Combined}} \includegraphics[width=\textwidth]{images/space_coverage_combined.png} \end{subfigure} \begin{subfigure}[t]{0.44\textwidth} \caption*{\textbf{Combined + 1\,kpc}} \includegraphics[width=\textwidth]{images/space_coverage_combined_and_1kpc.png} \end{subfigure} \caption[Differences in feature space coverage for our datasets]{Illustration of the differences in feature space coverage for our datasets. The CI YSOs, CII YSOs, and contaminants are shown in red, green, and blue, respectively, according to the simplified G09 classification scheme. The crosses in the last frame show the YSOs from the 1\,kpc sample.
In the side frames, the area of each histogram is normalized to one.} \vspace{-1cm} \label{datasets_space_coverage} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.49\textwidth} \caption*{\textbf{Missed}} \includegraphics[width=\textwidth]{images/space_coverage_orion_fwd_2264_missed.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \caption*{\textbf{Wrong}} \includegraphics[width=\textwidth]{images/space_coverage_orion_fwd_2264_wrong.png} \end{subfigure} \caption[Space coverage of misclassified objects in the O-N case]{Space coverage of misclassified objects in the O-N case. \textit{Left}: Genuine CI and CII according to the labeled dataset that were misclassified by the network. Green is for CII YSOs, red for CI YSOs. The points and crosses indicate the network output as indicated in the legend. \textit{Right}: Predictions of the network that are known to be incorrect based on the labeled dataset. Green is for predicted CII YSOs, red for predicted CI YSOs. The points and crosses indicate the genuine class as indicated in the legend.} \label{orion_fwd_2264_space_coverage} \end{figure*} Regarding the results from O-N in Tables~\ref{tab:ON_all} and \ref{tab:ON_all-sub}, we see that the recall for CI YSOs is lower by approximately $8\%$ than in the O-O case (Table~\ref{tab:OO}), and lower by approximately $12\%$ when compared to the Orion full dataset results (Table~\ref{tab:OO_all}). Similarly, CII YSOs have a recall lower by approximately $5\%$. This difference, being much greater than the dispersion of our results in the O-O case, indicates that the Orion data lack some specific information that is contained in NGC 2264 for these classes. This should correspond to differences in feature space coverage, but these differences might be subtle in the limited set of CMDs considered in the G09 method, whereas the network works directly in the 10-dimensional space composed of the 5 bands and 5 errors. For example, as shown in Fig.~\ref{datasets_space_coverage}, it is striking that both YSO classes populate the upper part of the diagram ([4.5] < 9) less in the NGC 2264 case than in Orion. The slopes of the normalized histograms in this figure also illustrate that the density distributions are different between Orion and NGC 2264, especially for CI YSOs. For this population, Orion presents a virtually symmetrical peaked distribution of [4.5]-[8] centered near [4.5]-[8] = 1.9 mag, while NGC 2264 shows a flatter and more skewed distribution. Although subtle, this specificity of the parameter space coverage is in line with the drop in the CI YSO recall in the O-N case, since, in Orion, the area located at [4.5]-[8] $>2$ is less constrained than that at [4.5]-[8] $\approx 1.9$, while, in NGC 2264, the area at [4.5]-[8] $>2$ contains a larger fraction of CI YSOs. This interpretation is also consistent with the fact that, in the O-N case, CI YSOs are mostly confused with objects from the Other class, in contrast with the O-O and N-N cases, suggesting a lack of constraint for the boundary between the CI and Other classes in the lower-right area of the CI distribution in Fig.~\ref{datasets_space_coverage}, although the differences in class proportions may also contribute.
From the perspective of the network, it is likely that the weight values were more influenced by the more abundant updates from objects near the CI peak at [4.5]-[8] = 1.9 mag.\\ \subsubsection{Detailed feature space analysis for O-N} \label{detailed_feature_space_analysis_ON} To confirm the previous analysis, Figure~\ref{orion_fwd_2264_space_coverage} shows the distribution of misclassified objects for this O-N case using the same ([4.5]-[8], [4.5]) CMD. To ease the comparison, the misclassified objects are separated into two categories, with "Missed" objects standing for misclassified genuine YSOs, and "Wrong" objects standing for objects of any class that were wrongly predicted as YSOs. This representation is equivalent to taking either the YSO rows or the YSO columns of the confusion matrix, respectively. As a consequence, CI YSOs misclassified as CII, and CII YSOs misclassified as CI, both appear in the two representations. Despite the very small number of CI YSOs in the NGC 2264 dataset, and therefore the few CI YSOs that are misclassified, some trends can still be observed. Regarding missed CI YSOs, more than half of them are misclassified as Other in the bottom part ([4.5] > 12) of the CMD. This indicates that this region is less constrained when training on Orion than when training on NGC 2264, or at least that the learned boundary is not favorable to the NGC 2264 dataset. Although NGC 2264 has far fewer CI training examples in this part of the CMD, its more homogeneous CI YSO distribution over the feature space allows more weights to be dedicated to this specific part. We also noted that the number of CI misclassified as CII is almost the same in N-N and O-N, with 3 and 2 objects, respectively (Tables~\ref{tab:NN_all} and \ref{tab:ON_all}). Therefore, the drop in CI recall is dominated by these CI YSOs misclassified as contaminants. Interestingly, the O-N subclass distribution shows that the objects wrongly predicted as CI are mainly genuine CII YSOs and Shocks, the confusion with AGNs being of the same order as in the N-N case. Regarding the CII YSOs, the O-N case is more compelling since their recall only slightly dropped, and their precision increased. Therefore, the CII distribution in Orion appears suitable to constrain the CII YSO boundaries, which was not the case for NGC~2264.\\ Physically, the observed differences in this CMD are likely to come from the different star formation histories and from the difference in distance between the two regions, $\sim$ 420 pc for Orion \citep{megeath_spitzer_2012} versus $\sim$ 760 pc for NGC 2264 \citep{rapson_spitzer_2014}. In contrast, the Other class appears to be well represented, suggesting that the Orion training set contains enough objects to properly represent the inherent distribution of this class in NGC 2264 as well.\\ The changes in precision are less significant than those in recall, due to the differences in class proportions between the two datasets. For example, there is a factor of $1.58$ in the CI-to-Other ratio between Orion and NGC 2264. The number of Other objects misclassified as CI is then expected to rise, with a consequent impact on the CI precision. However, in this case, the $0.6\%$ improvement in the Other recall between the O-O and O-N cases seems to partly compensate for this effect. In contrast, the CII YSOs, for which the proportions are lowered by a factor of $2.24$, indeed suffer a drop of $\sim 8\%$ in precision. The sketch below makes this interplay explicit for the CI YSOs.
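A minimal numerical sketch (Python with NumPy; the counts are taken from Tables~\ref{tab:OO_all} and \ref{tab:ON_all}) shows what the CI precision would have been if the per-class confusion rates of the Orion training had transferred unchanged to the NGC 2264 proportions, which is of course only a rough assumption:
\begin{verbatim}
import numpy as np

# Probability for each actual class (CI, CII, Others) to be predicted
# as CI, measured on the O-O full-dataset matrix (Table tab:OO_all).
rate_to_ci = np.array([305 / 324, 34 / 2224, 34 / 16566])

# Actual class abundances in the forwarded NGC 2264 dataset
# (row totals of Table tab:ON_all).
n_ngc2264 = np.array([90, 435, 7264])

# Expected CI precision if the rates transferred unchanged:
predicted_ci = n_ngc2264 * rate_to_ci
print(f"naive CI precision: {100 * predicted_ci[0] / predicted_ci.sum():.1f}%")
# -> ~79.7%, to be compared with the observed 83.1% (Table tab:ON_all)
\end{verbatim}
The observed value is higher than this naive estimate, consistent with the slightly improved Other recall in the O-N case partly compensating for the less favorable class proportions.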
This strong interplay between proportions and changes in recall for each class makes the differences in precision less straightforward to analyze. \subsubsection{N-O main result} Concerning the results from N-O in Tables~\ref{tab:NO_all} and \ref{tab:NO_all-sub}, the precision of CI YSOs dropped to $65.2\%$, despite a number of objects large enough to avoid small-number effects. This is the worst quality estimator value we observed in the whole study. The precision drop for CII YSOs is less important, only $2\%$ below the NGC 2264 full dataset results. The impact of the differences in feature space coverage is even stronger than for the O-N case, since there are almost no YSOs brighter than [4.5]=9 mag in NGC 2264, so that a large part of the feature space where many Orion objects lie is left unconstrained. Moreover, the NGC 2264 dataset lacks Shocks and PAHs, which are present in non-negligible proportions in the Orion dataset. Therefore, the NGC 2264 trained network did not constrain them, as confirmed in Table~\ref{tab:NO_all-sub}, where PAHs are evenly scattered across all output classes, and where Shocks are almost completely misclassified as YSOs. In addition to these flaws, the number of objects in the training set is too small to properly constrain the overall network architecture that suits this problem (Sect. \ref{network_tuning}). \newpage \subsubsection{Detailed feature space analysis for N-O} \begin{table}[!t] \small \centering \caption{Confusion matrix for N-O case forwarded on the full Orion dataset.} \vspace{-0.3cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 285 & 33 & 6 & 88.0\% \\ & CII YSOs & 54 & 1967 & 203 & 88.4\% \\ & Others & 98 & 293 & 16175 & 97.6\% \\ \cmidrule(lr){2-6} & Precision & 65.2\% & 85.8\% & 98.7\% & 96.4\%\\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.1cm} \label{tab:NO_all} \end{table} \begin{table}[!t] \small \centering \vspace{-0.1cm} \caption{Subclass distribution for the N-O case forwarded on the full Orion dataset.} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 285 & 54 & 8 & 37 & 12 & 39 & 2 \\ & CII YSOs & 33 & 1967 & 18 & 34 & 15 & 27 & 199 \\ & Others & 6 & 203 & 381 & 1070 & 1 & 21 & 14702 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.3cm} \label{tab:NO_all-sub} \end{table} \vspace{-0.2cm} As in the previous O-N case, Figure~\ref{2264_fwd_orion_space_coverage} shows the distribution of misclassified objects for this N-O case. Thanks to the much larger Orion dataset size, it is much easier to extract the trends regarding the class distribution within the feature space. On the main vertical separation between CI and CII YSOs, around [4.5]-[8] = 1.8, there is a noticeable change in behavior at [4.5] = 9. Below this limit ([4.5] > 9), there is a large number of missed CII YSOs classified as CI, while above this limit ([4.5] < 9) it is reversed, with more CI YSOs misclassified as CII.
This perfectly illustrates that, in the N-N training, it was acceptable to consider as CII every object above this limit with [4.5]-[8] > 1.8. In the same way, the highest density of CI and CII YSOs near the vertical splitting between CI and CII that is present in Orion (top left frame in Figure~\ref{datasets_space_coverage}) is smoothed in NGC 2264 (top right frame in Figure~\ref{datasets_space_coverage}). Therefore, this boundary is less constrained, and the class proportions gave the advantage to CI YSOs, due to the much lower $\gamma_{CII}$ value in NGC 2264, which allowed a good CI recall to be reached in the N-N case but is unsuitable for a generalization to Orion. This behavior strongly decreases the predicted CI YSO precision, but it is not sufficient to explain the drop to $\sim 65\%$. Most of the contamination comes from contaminants misclassified as CI. In the figure, there are two main regions for these objects: below the AGNs limit, and on the far right side of the CMD, at [4.5]-[8] > 3. This is confirmed by the subclass distribution (Table~\ref{tab:NO_all-sub}), where many AGNs and PAHs are misclassified as CI, corresponding to these regions. This is directly due to the complete absence of identified PAHs in the NGC 2264 dataset, and to the very small number of AGNs, which is likely not enough to provide a complete coverage of their feature space. Finally, the CII YSOs are contaminated by genuine CI YSOs in the upper part of the diagram, but the main contaminants are the Stars. As before, the two frames of Figure~\ref{2264_fwd_orion_space_coverage} show that many misclassified objects fall at the boundary between the two classes. Interestingly, the region at [4.5] > 13 that contains genuine CII YSOs classified as contaminants (group of green dots in the left frame) is continuous with the region at [4.5] < 13 where contaminants (mainly Stars) are misclassified as CII (group of green dots in the right frame), illustrating the misplacement of the network boundary. The upper part of the CMD, at [4.5] < 9, contains as before many CII YSOs missed as contaminants, most likely bright stars. The remaining contamination is visible in the "wrong" frame, showing many CII predictions in the AGNs and PAHs region, again due to these regions not being properly constrained by the NGC 2264 training. \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.49\textwidth} \caption*{\textbf{Missed}} \includegraphics[width=\textwidth]{images/space_coverage_2264_fwd_orion_missed.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \caption*{\textbf{Wrong}} \includegraphics[width=\textwidth]{images/space_coverage_2264_fwd_orion_wrong.png} \end{subfigure} \caption[Space coverage of misclassified objects in the N-O case]{Space coverage of misclassified objects in the N-O case. \textit{Left}: Genuine CI and CII YSOs according to the labeled dataset that were misclassified by the network. Green is for CII YSOs, red for CI YSOs. The points and crosses indicate the network output as indicated in the legend. \textit{Right}: Predictions of the network that are known to be incorrect based on the labeled dataset. Green is for predicted CII YSOs, red for predicted CI YSOs.
The points and crosses indicate the genuine class as indicated in the legend.} \label{2264_fwd_orion_space_coverage} \end{figure*} \subsection{Improving diversity: combined training} \label{cross_train} The two major limitations identified in the cases of Orion and NGC 2264 are (i) the lack of CI YSOs in the training datasets for this class to be properly constrained by the network, with the associated reduction of other types of objects to avoid dilution, and (ii) the differences in feature space coverage between the two regions, which induce a lack of generalization capacity toward new star-forming regions. A simple solution to overcome those limitations is to {\bf perform a combined training with the two clouds} (Fig.~\ref{datasets_space_coverage}). We refer to this case, where we merged the labeled samples from Orion and NGC 2264 and used the result both to train the network and to perform the forward step, as the C-C case. Since the two labeled datasets were obtained with our modified G09 classification, they formed a homogeneous dataset and it was straightforward to combine them. We normalized this new combined dataset as explained in Section~\ref{network_tuning}. The detailed subclass distribution of the target sample for this dataset is presented in Table~\ref{tab_selection}. Thanks to the larger number of CI YSOs in the labeled dataset, we were able to adopt a lower value of $\theta$ ($\theta = 0.2$) to build the test set, which proved to be large enough to mitigate the small-number effects for our output classes. This conserved most of the data in the training set, where they were needed to improve the classification quality. We note that merging the datasets led to slightly different observational proportions, which remain realistic enough. \newpage \subsubsection{Hyper-parameter and training proportion changes} \vspace{-0.2cm} Table~\ref{sat_factors} shows the optimal $\gamma_i$ values obtained with the combined dataset. The $\gamma_i$ values are very similar to those of Orion, as a result of Orion providing two to five times more objects than NGC 2264 to the combined dataset. The dataset is globally larger, so the optimal number of neurons might have been expected to rise, to represent the more complex boundaries expected in the parameter space. However, increasing the number of hidden neurons did not show any improvement in the end results. Thus, we kept 20 hidden neurons for this C-C case. Nevertheless, the larger size of the training set tended to stabilize the convergence of the network during the training, which allowed us to increase the learning rate to $\eta = 4\times 10^{-5}$. As explained in Section~\ref{network_tuning}, this is counter-intuitive. Indeed, since the weight updates are computed as a sum over the objects in the training sample, they should be greater here than in previous cases, increasing the probability that the network misses potentially good but narrow minima, usually forcing a lower learning rate. On the other hand, the larger statistics improve the weight space resolution, mitigating meaningless local minima that originate in the limited number of objects. It appears that the latter effect was dominant: smaller datasets forced a smaller learning rate to properly explore all these minima and find the best one, while larger datasets allowed us to raise the learning rate, as the sketch below makes explicit.
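The update rule at stake can be written compactly. The following sketch is a generic batch gradient step with momentum, not our exact implementation; it only illustrates why summing (rather than averaging) the per-object contributions couples the effective step size to the dataset size (Sect.~\ref{descent_schemes}). The $\eta$ and $\alpha$ defaults are the values quoted for the C-C case.
\begin{verbatim}
import numpy as np

def batch_update(weights, velocity, per_object_grads, eta=4e-5, alpha=0.6):
    """One batch step: summed gradient plus momentum.

    Because the gradient is a sum over the training objects, a larger
    training set yields larger raw updates, so the learning rate and
    the dataset size are intertwined.
    """
    grad = per_object_grads.sum(axis=0)        # sum, not mean
    velocity = alpha * velocity - eta * grad   # momentum term
    return weights + velocity, velocity

# Toy illustration: same gradient statistics, twice as many objects.
rng = np.random.default_rng(0)
g = rng.normal(size=(1000, 50))                # 1000 objects, 50 weights
w, v = np.zeros(50), np.zeros(50)
_, v_small = batch_update(w, v, g[:500])
_, v_large = batch_update(w, v, g)
print(np.linalg.norm(v_large) / np.linalg.norm(v_small))  # > 1: bigger step
\end{verbatim}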
We kept the momentum value at $\alpha = 0.6$ (Sect.~\ref{sect_momentum}) because a greater value happened to make the network diverge in the first steps of training, when the weight corrections were too large.\\ \vspace{-0.9cm} \subsubsection{Main result} \begin{table}[!t] \small \centering \caption{Confusion matrix for the C-C case for a typical run.} \vspace{-0.2cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 77 & 2 & 3 & 93.9\% \\ & CII YSOs & 9 & 514 & 8 & 96.8\% \\ & Others & 9 & 49 & 4706 & 98.8\% \\ \cmidrule(lr){2-6} & Precision & 81.1\% & 91.0\% & 99.8\% & 98.5\%\\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.2cm} \label{tab:CC} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution of the C-C case.} \vspace{-0.2cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 77 & 9 & 1 & 3 & 3 & 2 & 0 \\ & CII YSOs & 2 & 514 & 0 & 3 & 3 & 4 & 39 \\ & Others & 3 & 8 & 103 & 272 & 0 & 11 & 4320 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.4cm} \label{tab:CC-sub} \end{table} \vspace{-0.2cm} The results of this C-C case, presented in Tables~\ref{tab:CC} and \ref{tab:CC-sub}, are very close to those of the O-O case with the full Orion dataset. The largest difference is $0.7\%$, for the precision of CI YSOs. The other differences are $\leq 0.2\%$. The stability of the results regarding both the weight initialization and the random selection of the test and training sets is also very similar to that of the O-O case (Sect. \ref{orion_results}), with recall and precision values scattered by typically $\pm 0.5\%$, except for the CI precision, which scatters by about $\pm 1\%$. These fluctuations exceed the differences between the O-O and C-C cases, as observed from their confusion matrices, when considering the full-dataset forward. This stability was not guaranteed, since, on the one hand, the combined training set is more general than the previous training sets, and, on the other hand, the combined training is a more complex problem than a single-cloud training, due to the more complex distribution of objects expected in the input parameter space, especially for YSOs. \newpage \subsubsection{Generalization capacity evaluation} If the latter effect dominated, the results could be expected to be poorer than both the O-O and N-N results individually, or than any linear combination of them. We illustrate this idea with the following conservative reasoning. If, when using the combined training dataset, the network had only learned from Orion objects, as might be argued due to their dominance in the combined sample, then the state of the network should be very similar to that obtained in the O-O case. The C-C confusion matrix should then be a linear combination of those of the O-O (Table \ref{tab:OO_all}) and O-N cases (Table \ref{tab:ON_all}), weighted by the respective abundances of Orion and NGC 2264 in the forwarded sample. The recalls of CI YSOs in the O-O and O-N cases were 94.1\% and 82.2\%, respectively.
Since, in the Combined dataset, 78.3\% of the CI YSOs come from Orion, the expected recall from an Orion-dominated network would be $0.783 \times 94.1\% + 0.217 \times 82.2\% \approx 91.5\%$. This result can also be obtained as the cell-wise sum of the two matrices, from which the recall and precision are re-computed using the new proportions (a numerical check is sketched below). Considering the obtained value of 93.9\% in the C-C case (Table \ref{tab:CC}), with a $\pm 1\%$ dispersion, the network has indisputably learned information from the NGC 2264 objects, and the increased complexity of the problem was more than balanced by the increased generality of the sample. In other words, the fact that the results of the C-C test are as good as those of the O-O test, in spite of the increased complexity, implies that the network managed to take advantage of the greater generality of the combined sample to find a better generalization.\\ The analysis of the other two classes does not contradict those conclusions, although the improvement for CII objects is only marginal, since the same reasoning applied to CII YSOs leads to a recall of 96.2\%, to be compared with the C-C value of 96.8\%, with $\pm 0.5\%$ dispersion. This is in line with the fact that the CII YSO coverage in Orion was already close to that of NGC 2264, as highlighted by the less than $1\%$ difference between the CII YSO recalls in O-N and N-N. Finally, contaminants are dominated by subclasses that were already nicely constrained in the O-O and N-N cases.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the C-C case forwarded on the full dataset.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 389 & 14 & 11 & 94.0\% \\ & CII YSOs & 53 & 2570 & 36 & 96.7\% \\ & Others & 50 & 254 & 23526 & 98.7\% \\ \cmidrule(lr){2-6} & Precision & 79.1\% & 90.6\% & 99.8\% & 98.4\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \label{tab:CC_all} \vspace{-0.1cm} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the C-C case forwarded on the full dataset.} \vspace{-0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 389 & 53 & 2 & 10 & 22 & 11 & 5 \\ & CII YSOs & 14 & 2570 & 4 & 16 & 11 & 15 & 208 \\ & Others & 11 & 36 & 515 & 1365 & 1 & 62 & 21583 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.1cm} \label{tab:CC_all-sub} \end{table} \newpage The fact that the network results for the C-C case are as good as, or better than, those of the Orion case despite the added complexity confirms that the number of objects was a strong limitation in the O-O and N-N cases. It also suggests that the O-O training might have provided better results with more observed objects in the same region, which was already established from the improvement of results with lower $\theta$ values in Section~\ref{orion_results}. Moreover, the absence of a positive effect when raising the number of neurons indicates that the network efficiently combined the respective input parameter space coverages of the two regions and that $n = 20$ is not limiting these results.
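The numerical check announced above is straightforward: summing the O-O and O-N full-dataset confusion matrices cell-wise and recomputing the recalls gives the values expected from a purely Orion-driven network (a short sketch in plain Python with NumPy, using the counts of Tables~\ref{tab:OO_all} and \ref{tab:ON_all}):
\begin{verbatim}
import numpy as np

# Full-dataset confusion matrices, rows = actual (CI, CII, Others),
# columns = predicted, from Tables tab:OO_all and tab:ON_all.
oo = np.array([[305, 11, 8], [34, 2157, 33], [34, 201, 16331]])
on = np.array([[74, 2, 14], [6, 402, 27], [9, 52, 7203]])

combined = oo + on                               # cell-wise sum
recall = 100 * np.diag(combined) / combined.sum(axis=1)
print(f"expected CI  recall: {recall[0]:.1f}%")  # ~91.5%
print(f"expected CII recall: {recall[1]:.1f}%")  # ~96.2%
\end{verbatim}
The measured C-C values of 93.9\% and 96.8\% (Table~\ref{tab:CC}) exceed these Orion-dominated expectations, in line with the conclusion that the network did learn from the NGC 2264 objects.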
The change in observational proportions that occurred when merging the two datasets seems to have a negligible impact, as they are still close to the Orion ones, but adding more regions with fewer YSOs is expected to decrease the precision values for YSOs by increasing their dilution by the Other class. \vspace{-0.2cm} \subsubsection{Full dataset result and analysis of rare sub-classes prediction} The results for the complete combined dataset are presented in Tables~\ref{tab:CC_all} and \ref{tab:CC_all-sub}. As before, the results appear to be free of over-training, since there is no noticeable increase in recall for any of our classes. These results are very similar to the previous ones, with differences in quality estimators of the same order as the dispersion observed with random weight initialization. The slight decrease in precision of CI YSOs is also of the same order as the dispersion obtained from the random selection of our training and test samples. The contaminants that are not sufficiently constrained, like Shocks, could also be affected by selection effects between the two sets, which could lead to such a dispersion in precision for CI YSOs. This seems to be confirmed by the fact that two thirds of the Shocks were misclassified as CI YSOs. \\ \label{shocks_discussion} \vspace{-0.1cm} Interestingly, this suggests a change in the network behavior compared to the O-O case, where Shocks were almost evenly distributed among the three output classes. We interpret the difference in Shock distributions as a consequence of the difference in the relative abundance of this subclass compared to the rest of the training set, and of its strong dependency on the MIPS rebranding step. Indeed, the special location of Shocks in the feature space, close to CII YSOs and mixed with the MIPS-identified CI YSOs (Fig.~\ref{fig_gut_method} D), makes the identification of this subclass sensitive to its small relative abundance during the learning process. Thus, in the O-O case, the number of Shocks in the sample enabled the network to place the boundary in the vicinity of the Shocks region, but in an inaccurate way, hence the even distribution. Conversely, the lower fraction of Shocks in the C-C sample probably made the network find an optimum where most of its representative strength was used for other parts of the feature space. In this situation, the majority of Shocks are likely to be included in one specific output class, which can vary according to the random training set selection, but is more likely to be a YSO class, and even more likely to be CI, due to the MIPS rebranding step.\\ \vspace{-0.1cm} To summarize the results of this combined training, we showed that {\bf combining two star-forming clouds has improved the underlying diversity of our prediction}, and therefore the generalization capability of our network over possible new regions. The added complexity was largely overcome by the increased statistics on our classes of interest, the CI and CII YSOs, which allowed us to conserve very good accuracy and precision for them. However, some rare contaminant subclasses suffered from their increased dilution. It is also worth mentioning that, despite the very good recall obtained in this C-C case, which could convincingly be used to predict other regions, the training sample is still composed of just two star-forming regions that are much more massive than any other at our disposal. This was our motivation to include more regions in the next Section despite the noted limitations of the corresponding 1\,kpc dataset (Sect.
\ref{data_setup}). \newpage \subsection{Further increase in diversity and dataset size: nearby regions (< 1kpc)} \label{1kpc_train} In this section, we present the advantages of the 1\,kpc dataset to {\bf further improve the network generalization capacity by increasing the underlying diversity} of the object sample. As discussed in Section~\ref{data_setup}, this dataset only contains YSOs. This is not a major issue, because most of our contaminant subclasses are already well constrained, while we have shown that this is not the case for YSOs, since adding more of them led to a better generalization. Moreover, as the dataset contains several regions, it should ensure an even better diversity and input parameter space coverage for YSOs than the previous C-C case, but it might also increase again the complexity of the underlying distribution (Fig.~\ref{datasets_space_coverage}). In this section, we study the F-C case, that is, a training on the full 1\,kpc dataset (combined + 1\,kpc YSOs) and a forward on the combined dataset, in order to keep a realistic test dataset with almost observational proportions. As before, the full 1\,kpc dataset is normalized as described in Section~\ref{network_tuning}.\\ \subsubsection{Hyper-parameter and training proportion changes} The detailed $\gamma_i$ selection for this more complicated dataset is presented in Table~\ref{sat_factors}. As we added YSOs, we had to increase the number of contaminants to preserve their dominant representation in the training sample. However, some subclasses of contaminants were already too few in the C-C case and were already included in the training set as much as possible. Therefore, we did not add all the CI YSOs at our disposal, to avoid a too strong dilution of these contaminant subclasses. For objects from the combined dataset, we kept $\theta = 0.2$, giving the now usual $(1-\theta)$ CI YSOs in the training sample, which are doubled using the 1\,kpc dataset. For CII YSOs, results were better when taking a slightly smaller proportion of them from the 1\,kpc dataset. In the same manner as for the other datasets, we tried various numbers of neurons in the hidden layer, finding for the first time a higher optimum value, around $n=30$. This means that we have most likely raised the number of objects enough to overcome the previous limitations on the size of the network. We also took advantage of the larger dataset and adopted greater values for $\eta = 8 \times 10^{-5}$ and $\alpha = 0.8$, which proved to stabilize the network more than smaller values, following the trend already described in Section~\ref{cross_train}.\\ \subsubsection{Main result} The results for this F-C case are presented in Tables~\ref{tab:FC} and \ref{tab:FC-sub}. Compared to the C-C case (Table~\ref{tab:CC}), the precision of CI YSOs rose by $2.8\%$, but their recall is significantly lower, with a drop of nearly $5\%$. In contrast, the precision of CII YSOs dropped by $1.2\%$, while their recall improved by $0.8\%$. Overall, these results are similar to those of the previous C-C case, despite the increase in complexity coming from the addition of YSOs from new star-forming regions. Similarly to the combination of Orion and NGC 2264, we might have expected a stronger drop in the quality estimators, because the problem becomes more general and therefore more difficult to constrain.
It is worth noting that the stability of the network somewhat decreased in comparison to the O-O and C-C cases. We observed a dispersion of the recall regarding the random weight initialization of about $\pm 1\%$ for CI and $\pm 0.7\%$ for CII YSOs. This dispersion affects the Other class less, with a value around $\pm 0.15\%$. The precision is less reliable, with a dispersion of nearly $\pm 1.5\%$ for CI YSOs. The precision dispersion for CII YSOs is around $\pm 0.5\%$, and it is less than $\pm 0.1\%$ for the Other class.\\ \vspace{-0.4cm} \subsubsection{More detailed analysis} More generally, the sources of contamination of the YSO classes have not changed; only their overall effect has risen. The fact that raising the number of neurons from $20$ to $30$ in the network leads to better results is certainly an indication of the increased complexity of this problem. This means that the network uses more refined splittings of the input parameter space. However, there might not be enough objects in our dataset to perfectly constrain this larger network, despite the added YSOs. This naturally leads to a stronger sensitivity to the weight initialization. In contrast, the dispersion over the training set random selection is similar to the one observed in the C-C case and is of the same order as the weight initialization dispersion. As in the previous cases, the results show that the main source of contamination of the CI YSOs are the CII YSOs, while the latter are mostly contaminated by the Other class. This is, again, an indication of the respective proximity of the three classes in the input parameter space.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the F-C case for a typical run.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 73 & 4 & 5 & 89.0\% \\ & CII YSOs & 9 & 518 & 4 & 97.6\% \\ & Others & 5 & 55 & 4704 & 98.7\% \\ \cmidrule(lr){2-6} & Precision & 83.9\% & 89.8\% & 99.8\% & 98.5\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.1cm} \label{tab:FC} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the F-C case.} \vspace{-0.1cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 73 & 9 & 0 & 0 & 1 & 2 & 2 \\ & CII YSOs & 4 & 518 & 1 & 6 & 5 & 6 & 37 \\ & Others & 5 & 4 & 102 & 272 & 0 & 9 & 4321 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.6cm} \label{tab:FC-sub} \end{table} The increased number of objects allowed us to see more details of the subclass distribution across the output classes. Similarly to the C-C case, the Shocks behave as if completely unconstrained, since they mostly end up in one class, which changes randomly when the training is repeated. Compared to the C-C case, this effect is stronger, most likely because we did not add any Shocks to the training sample, thereby increasing their dilution.
For almost all of the other subclasses, the variations are well within the dispersion, with a slight trend for contaminant subclasses (Galaxies, AGNs, Shocks, PAHs) to be less well classified, and for CII YSOs and Stars to be better classified. These results are to be expected, because we increased the number of YSOs and Stars in the training sample. On the other hand, we also increased the complexity of the YSO distribution, which could lead to worse overall results. Possibly, this induced the slight drop in CI YSO recall observed from C-C to F-C, whereas CII YSOs and Others kept their quality indicators stable, either due to the increased statistics, or because their input feature space was already properly constrained by the Combined dataset (C-C case).\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the F-C case forwarded on the full combined dataset.} \vspace{-0.2cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 378 & 22 & 14 & 91.3\% \\ & CII YSOs & 45 & 2584 & 30 & 97.2\% \\ & Others & 43 & 244 & 23543 & 98.8\% \\ \cmidrule(lr){2-6} & Precision & 81.1\% & 90.7\% & 99.8\% & 98.5\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.1cm} \label{tab:FC_all} \end{table} \begin{table}[!t] \small \centering \caption{Subclass distribution for the F-C case forwarded on the full combined dataset.} \vspace{-0.2cm} \begin{tabularx}{0.75\hsize}{r l *{6}{Y} l} \multicolumn{2}{c}{}& \multicolumn{7}{c}{\textbf{Actual}}\\ \cmidrule[\heavyrulewidth](lr){2-9} \parbox[l]{0.2cm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Predicted}}}} & & CI & CII & Gal & AGNs & Shocks & PAHs & Stars\\ \cmidrule(lr){3-9} & CI YSOs & 378 & 45 & 0 & 15 & 8 & 15 & 5 \\ & CII YSOs & 22 & 2584 & 6 & 22 & 25 & 14 & 177 \\ & Others & 14 & 30 & 515 & 1354 & 1 & 59 & 21614 \\ \cmidrule[\heavyrulewidth](lr){2-9} \end{tabularx} \vspace{-0.3cm} \label{tab:FC_all-sub} \end{table} \newpage \vspace{-0.3cm} \subsubsection{Full dataset result} \vspace{-0.1cm} The results of a forward of the complete combined dataset using this network are shown in Table~\ref{tab:FC_all}, with the subclass distributions in Table~\ref{tab:FC_all-sub}. These results show a $2.3\%$ increase in the CI YSO recall compared to Table~\ref{tab:FC} and a $2.8\%$ drop in precision for the same class. As for all the previous cases, the Other class remained almost identical. For CII YSOs and Others, the variations in precision and recall are within the weight initialization dispersion. The case of the CI YSOs is less clear, as their recall increase is greater than their dispersion, which could mean that there is a slight over-training. However, when searching for the optimum set of $\gamma_i$ values, we observed that the sets leading to less over-training of CI YSOs also degraded the overall quality of the results.
Still, it suggests that the genuine CI YSO recall is between the values of Table~\ref{tab:FC} and Table~\ref{tab:FC_all}.\\ \vspace{-0.8cm} \subsubsection{Misclassified objects distribution} \vspace{-0.1cm} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Missed}} \includegraphics[width=\textwidth]{images/diag_1kpc_missed_d4.png} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Wrong}} \includegraphics[width=\textwidth]{images/diag_1kpc_wrong_d4.png} \end{subfigure} \caption[Zoom on misclassified objects in the F-C case]{Zoom on the $[4.5] - [5.8]\, \text{vs.}\, [3.6] - [4.5]$ graph, for misclassified objects in the F-C case. \textit{Left:} Genuine CI and CII YSOs according to the labeled dataset that were misclassified by the network. Green is for CII YSOs, red for CI YSOs. The points and crosses indicate the network output as indicated in the legend. \textit{Right:} Predictions of the network that are known to be incorrect based on the labeled dataset. Green is for predicted CII YSOs, red for predicted CI YSOs. The points and crosses indicate the genuine class as indicated in the legend.} \label{missed_wrong_zoom} \end{figure*} Since {\bf the present F-C case is our most complete result}, and in order to provide additional verification of the limitations we exposed, we checked the distribution of the misclassified objects in the same fashion as in Section~\ref{cross_train}. For that, Figure~\ref{missed_wrong_space} at the end of the section shows the same five usual CMDs as in Section~\ref{data_prep} for the labeled distribution from our G09 method, the predicted distribution by the F-C network, and the ``missed'' and ``wrong'' CI and CII YSOs, all for the full Combined dataset. Figure~\ref{missed_wrong_zoom} presents a zoom on the fourth row to ease the comparison. As for the previous results, we did not forward the network on the 1\,kpc sample, since we do not have contaminant estimates for it and therefore cannot provide quality estimates. These data are only used to enhance the YSO diversity through their feature space coverage during the training process. From this figure, we observe that there are two main misclassification zones. The first one is between CI and CII YSOs, and can mostly be seen in the fourth row (Fig.~\ref{missed_wrong_zoom}). The second one is between CII YSOs and more evolved Stars, and can mostly be seen in the second row. This is strongly consistent with our previous interpretation of the object distribution in the various confusion matrices of the F-C case. Some other contamination areas can also be seen, for example in the AGN exclusion region close to the corresponding cut, or in the Shocks exclusion region. The fact that the misclassified objects mostly stack along the cuts is a strong indication of where the network is less precise; it is also a first indication that the membership probability could be used to improve the results.
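For clarity, the ``missed'' and ``wrong'' populations shown in these figures reduce to simple masks over the label arrays. A minimal sketch, with hypothetical array names and dummy labels standing in for our actual predictions (0 = CI, 1 = CII, 2 = Other):
\begin{verbatim}
import numpy as np

# Dummy stand-ins for the genuine labels and the network argmax output.
rng = np.random.default_rng(0)
actual = rng.integers(0, 3, size=1000)
pred = rng.integers(0, 3, size=1000)

mis = actual != pred
missed = mis & (actual < 2)  # genuine YSOs misclassified by the network
wrong = mis & (pred < 2)     # YSO predictions known to be incorrect

print(missed.sum(), wrong.sum(), (missed & wrong).sum())
\end{verbatim}
Objects confused between the two YSO classes satisfy both masks, which is why the same CI--CII pairs appear in both frames of Figure~\ref{missed_wrong_zoom}.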
\subsubsection{Forward of the trained network on Orion and NGC 2264} We also looked at the F-C trained network predictions over Orion and NGC 2264 individually, hereafter the F-O and F-N cases. This allowed us to verify that they are both properly represented by the F-C network and that one is not responsible for the majority of the misclassified objects. We present in Tables~\ref{tab:FO} and \ref{tab:FN} the confusion matrices that represent the predictions for the F-O and F-N cases, respectively. For this, we used the full labeled datasets of each region, meaning that they must be compared to the full combined dataset prediction from Table~\ref{tab:FC_all}. Regarding the F-O result, we can see that the changes in recall and precision are very small and within the dispersion for each class. The F-N result is also very stable, but with an average 2\% change with respect to Table~\ref{tab:FC_all} in all the YSO quality estimators. However, the two regions are not expected to yield the same predictions, since they can constrain different parts of the feature space. Therefore, the full combined dataset result in Table~\ref{tab:FC_all} is a sum of the two individual predictions from Tables~\ref{tab:FO} and \ref{tab:FN}, which implies that the quality estimators of each of them may individually fall outside the dispersion range estimated around the mean quality estimators of the full dataset. Overall, both individual predictions remain satisfactory.\\ Interestingly, these F-O and F-N predictions can be compared to the individual trainings on Orion and NGC 2264, with the O-O case in Table~\ref{tab:OO_all} and the N-N case in Table~\ref{tab:NN_all}, respectively. For Orion, the F-O results are very similar to the O-O results, with a slightly better recall for the CII YSOs (a 0.6\% increase). The CI YSOs, however, are slightly less well represented, with a 3.4\% drop in recall, which seems mainly caused by an increased confusion with CII YSOs. Still, we suspected a slight over-training of the CI YSOs in the O-O case, which might explain the difference; this is supported by the fact that the CI recalls of the F-O case and of the test-dataset-only O-O results are almost identical, the dispersion of the F-C training being lower (Table~\ref{tab:OO}). However, the O-O case did not show this stronger confusion between CI and CII, which tends to indicate that the numerical similarity between F-O and O-O does not correspond to the same underlying classification properties. This means that the two results cannot be directly compared, which prevents a strong conclusion on the over-training of the O-O case. Still, this result remains very strong, since we achieved almost identical predictions, and better CII predictions, with our much more generalist ANN training.
Indeed, as stated previously, we could have expected to obtain a classifier that works well enough on all regions but is significantly poorer than any individual training.\\ \begin{table}[!t] \small \centering \caption{Confusion matrix for the F-O case.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 294 & 20 & 10 & 90.7\% \\ & CII YSOs & 35 & 2170 & 19 & 97.6\% \\ & Others & 36 & 191 & 16339 & 98.6\% \\ \cmidrule(lr){2-6} & Precision & 80.5\% & 91.1\% & 99.8\% & 98.4\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.1cm} \label{tab:FO} \end{table} \begin{table}[!t] \small \centering \caption{Confusion matrix for the F-N case.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSOs & CII YSOs & Others & Recall \\ \cmidrule(lr){2-6} & CI YSOs & 84 & 2 & 4 & 93.3\% \\ & CII YSOs & 10 & 414 & 11 & 95.2\% \\ & Others & 7 & 53 & 7204 & 99.2\% \\ \cmidrule(lr){2-6} & Precision & 83.2\% & 88.3\% & 99.8\% & 98.9\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \vspace{-0.1cm} \label{tab:FN} \end{table} The results are much more striking for NGC 2264, with the F-N prediction being much better than the N-N prediction for CII YSOs (+1.9\%) and contaminants (+0.4\%), which allows a 1\% increase in CI precision despite a 4.5\% drop in recall. It is not very surprising that the CI YSOs from NGC 2264 are less well represented, since their distribution is very peculiar and they are not numerous enough in the full 1\,kpc training dataset to be better constrained than in the N-N case, which focuses only on this specific distribution. Still, the confusion between CI and CII YSOs is not much affected, and the confusion between CII YSOs and contaminants is significantly improved, which is, as before, a very strong result considering that it comes from the more generalist network. This F-N case can also be compared to the O-N case (Table~\ref{tab:ON_all}): the 11.1\% CI recall increase and the 2.8\% CII recall increase make it striking that the full 1\,kpc training is much more suitable than Orion alone to generalize over other regions.
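Since Orion and NGC 2264 form a partition of the combined dataset, the counts of Table~\ref{tab:FC_all} must be the element-wise sum of those of Tables~\ref{tab:FO} and \ref{tab:FN}, which can be checked directly with a small verification snippet:
\begin{verbatim}
import numpy as np

# Rows are actual (CI, CII, Other), columns are predicted, taken from
# Tables tab:FO, tab:FN, and tab:FC_all.
fo = np.array([[294, 20, 10], [35, 2170, 19], [36, 191, 16339]])
fn = np.array([[84, 2, 4], [10, 414, 11], [7, 53, 7204]])
fc_all = np.array([[378, 22, 14], [45, 2584, 30], [43, 244, 23543]])

# The two regions partition the combined dataset, so the counts add up.
assert np.array_equal(fo + fn, fc_all)
\end{verbatim}
This additivity is also why the per-region quality estimators can individually drift away from the combined ones while the underlying counts remain consistent.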
\\ \begin{sidewaysfigure} \centering \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Actual}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_actual_A.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Predicted}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_predicted_A.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Missed}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_missed_A.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Wrong}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_wrong_A.png} \end{subfigure}\\ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_actual_B.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_predicted_B.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_missed_B.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_wrong_B.png} \end{subfigure}\\ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_actual_C.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_predicted_C.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_missed_C.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_wrong_C.png} \end{subfigure} \caption*{} \end{sidewaysfigure} \addtocounter{figure}{-1} \begin{sidewaysfigure} \centering \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Actual}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_actual_D.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Predicted}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_predicted_D.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Missed}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_missed_D.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \caption*{\textbf{\hspace{0.3cm}Wrong}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_wrong_D.png} \end{subfigure}\\ \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_actual_E.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_predicted_E.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_missed_E.png} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_wrong_E.png} \end{subfigure} \caption[Input parameter space coverage in the F-C case]{Input parameter space coverage in the CMDs used for the G09 method in the F-C case on the full dataset regarding different populations. \textit{Actual:} distribution of genuine classes. CI YSOs are in red, CII YSOs are in green and Others are in blue. \textit{Predicted:} prediction given by the network with the same color-code as for the \textit{actual} frames. { \textit{Missed:} Genuine CI and CII according to the labeled dataset that were misclassified by the network. Green is for genuine CII YSOs, red for genuine CI YSOs. 
The points and crosses indicate the network output as specified in the legend. \textit{Wrong:} YSO predictions of the network that are known to be incorrect based on the labeled dataset. Green is for genuine CII YSOs, red for genuine CI YSOs, and blue is for genuine contaminants. The two types of crosses indicate the predicted YSO class as specified in the legend.}} \label{missed_wrong_space} \end{sidewaysfigure} \clearpage \subsection{Orion and NGC 2264 YSO candidates distribution maps} \label{yso_candidates_maps} With the predictions from the full 1\,kpc training over Orion and NGC 2264, we were able to look at the distribution of CI and CII YSOs in the corresponding regions. To represent the density of the regions, we chose to use data from the Herschel space observatory \citep{herschel_2010}, especially the Spectral and Photometric Imaging REceiver (SPIRE) 500 $\mathrm{\mu m}$ band, which is a reasonable proxy for the total gas column density.\\ Figure~\ref{orion_A_yso_dist} shows the distributions for the Orion A region, which contains the Orion nebula (Messier 42). It shows that our CI YSOs follow the main dense filament very closely, especially in the so-called ``integral''-shaped filament (between $l=208$ and 210 degrees). As expected, the CII YSOs, which are more evolved, spread more widely over the observed region. Still, the high CII density nicely maps the densest part of the Orion molecular cloud and traces the parts that are expected to form stars more actively.\\ The Orion B part of the molecular complex is presented in Figure~\ref{orion_B_yso_dist}. The number of YSOs in this part of the cloud is smaller than in Orion A. This is in line with Orion B being in an earlier evolutionary stage, but the comparison is hampered by the fact that in this region the Spitzer observations were not continuous (see Fig.~\ref{megeath_orion_cover}). Yet, as for Orion A, the CI YSOs tightly follow the densest parts of the star-forming region, while CII YSOs show a greater dispersion around the density peaks. We note that the number of YSOs found in each part is large enough to hope that some of them have a Gaia counterpart, which would allow us to estimate their distance (Sect.~\ref{3d_yso_gaia}). \\ The NGC 2264 region is presented in Figure~\ref{ngc2264_yso_dist}. As for the two other regions, the main star-forming region is well traced by our CI YSOs, with CII YSOs being more dispersed \citep[as in][]{Buckner_2020}. As in the rest of the study, the smaller number of YSOs in this region makes it slightly more difficult to analyze.\\ Interestingly, there is a small concentration of CI YSOs around $l = 202.3$, $b = 2.5$, which corresponds to the G202.3+02.50 molecular cloud region, where we showed that two filaments of the cloud are colliding \citep[][and Fig.~\ref{montillaud_2019b_fig2}]{montillaud_2019_I, montillaud_2019_II}. It is remarkable that this small CI cluster coincides tightly with the junction region between the two merging filaments, whereas the CII distribution seems mostly independent. Based on the typical time scale of the CI protostellar phase, and assuming that the formation of the small CI cluster was triggered by the filament collision, we conclude that the collision would have started typically $\lesssim 5 \times 10^5$ years ago \citep{Evans_2009}.
This age is compatible with the age estimate of $\sim 10^5$ yr obtained from N$_2$H$^+$ observations of the junction region \citep{montillaud_2019_II}.\\ Finally, we do not provide any prediction for the 1\,kpc dataset, because it is impossible for us to construct a confusion matrix and to provide quality estimators for it, due to the absence of contaminants. However, we have made some attempts to use our trained network on other star-forming regions using Spitzer data, which are partly discussed in Section~\ref{conclusion_perspectives}.\\ \begin{figure*}[!t] \begin{minipage}{1.0\textwidth} \centering \begin{subfigure}[t]{1.0\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_empty.png} \end{subfigure}\\ \vspace{-0.7cm} \begin{subfigure}[t]{1.0\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_all_CI.png} \end{subfigure}\\ \vspace{-0.7cm} \begin{subfigure}[t]{1.0\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_all_CII.png} \end{subfigure} \end{minipage} \caption[Orion A YSO candidate distribution]{Distribution of YSO candidates in the Orion A part of the molecular complex. The grey scale shows the Herschel SPIRE 500 $\mathrm{\mu m}$ map. CI and CII YSOs are shown in the middle and lower frames in red and green, respectively.} \label{orion_A_yso_dist} \end{figure*} \begin{figure*}[!t] \hspace{-0.5cm} \begin{minipage}{1.05\hsize} \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=1.0\hsize]{images/orion_B_herschel_empty.png} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=1.0\hsize]{images/orion_B_herschel_all_CI.png} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=1.0\hsize]{images/orion_B_herschel_all_CII.png} \end{subfigure} \end{minipage} \caption[Orion B YSO candidates distribution]{Distribution of YSO candidates in the Orion B part of the molecular complex. The grey scale shows the Herschel SPIRE 500 $\mathrm{\mu m}$ map. CI and CII YSOs are shown in the middle and right frames in red and green, respectively.} \label{orion_B_yso_dist} \end{figure*} \begin{figure*}[!t] \hspace{-2.0cm} \begin{minipage}{1.25\textwidth} \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_empty.png} \end{subfigure}\\ \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_all_CI.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_all_CII.png} \end{subfigure} \end{minipage} \caption[NGC 2264 YSO candidates distribution]{Distribution of YSO candidates in the NGC 2264 region. The grey scale shows the Herschel SPIRE 500 $\mathrm{\mu m}$ map. CI and CII YSOs are shown in the lower left and lower right frames in red and green, respectively.} \label{ngc2264_yso_dist} \end{figure*} \begin{figure}[!t] \includegraphics[width=0.9\hsize]{images/montillaud_2019b_fig2.png} \caption[Herschel view of the G202.3+2.5 region]{Herschel view of the G202.3+2.5 region, about 1 deg north of the open cluster NGC 2264. {\it Left}: column density of molecular hydrogen derived from the SED fit of the SPIRE bands. {\it Right}: dust temperature from the same SED fit. In both frames, the ellipses show the submillimeter compact sources extracted by Montillaud et al. (2015). The junction region, along with other important structures of the cloud, is indicated with white shapes. In both frames, the inset shows a zoom on the junction region.
{\it From} \citet{montillaud_2019_II}.} \label{montillaud_2019b_fig2} \end{figure} To conclude this section, we are confident that our Full 1\,kpc trained network contains a sufficient diversity of subclasses to be efficiently applied to most nearby ($\lesssim 1$ kpc) star-forming regions. Our results show that one can expect nearly $90\%$ of the CI YSOs to be properly recovered with a precision above $80\%$, while nearly $97\%$ of CII YSOs are expected to be recovered with a $90\%$ precision. \clearpage \section{Probabilistic prediction contribution to the analysis} \label{proba_discussion} \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \vspace{0.3cm} \begin{figure*}[!t] \vspace{-0.2cm} \centering \begin{subfigure}[t]{0.35\textwidth} \caption*{\textbf{Output}} \includegraphics[width=\textwidth]{images/ternary_proba.png} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[t]{0.35\textwidth} \caption*{\textbf{Correct}} \includegraphics[width=\textwidth]{images/ternary_proba_proper.png} \end{subfigure}\\ \vspace{0.1cm} \begin{subfigure}[t]{0.35\textwidth} \caption*{\textbf{Missed}} \includegraphics[width=\textwidth]{images/ternary_proba_missed.png} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[t]{0.35\textwidth} \caption*{\textbf{Wrong}} \includegraphics[width=\textwidth]{images/ternary_proba_wrong.png} \end{subfigure} \caption[Ternary plots of output membership probability in the F-C case]{\href{https://doi.org/10.5281/zenodo.2628066}{Ternary plots} of output membership probability for each class in the F-C case forwarded on the full dataset. \textit{Output:} all objects. \textit{Correct:} genuine and predicted classes are identical. \textit{Missed:} misclassified objects colored according to their genuine class. \textit{Wrong:} misclassified objects colored according to their predicted class.} \label{ternary_plots} \end{figure*} In this section, we discuss the inclusion of a membership probability prediction into our network. If we assumed that the original classification were absolutely correct, the discrepancies would only correspond to errors. However, as illustrated by the effect of the MIPS band, the original classification has its own limitations. Therefore, the objects misclassified by our network might have been less reliable in the original classification, or may even have been misclassified there. A membership probability allows one to refine this idea by quantifying the level of confidence of the network in each prediction, directly based on the observed distribution of the objects in the input parameter space. In practice, as already illustrated in Figure~\ref{missed_wrong_zoom}, where misclassified objects stack around the inter-class boundaries, the classification reliability of individual objects is mostly a function of their distance to these boundaries. One strength of the probabilistic output presented in Section~\ref{proba_class_intro} is that the probability values provided by the network take advantage of the network's ability to combine the boundaries directly in the ten dimensions of the feature space.\\ \subsection{Interpretation of the membership probability} We used the probabilistic predictions to measure the degree of confusion of an object between the output classes. This is illustrated by the ternary plots in Figure~\ref{ternary_plots}, where the location of the objects corresponds to their predicted probability to belong to each class. On these plots, an object with a high confidence level lies near one of the corners. Objects that are in the inner part of the graph are the most confused between the three classes, while objects on the edges illustrate a confusion between only two classes. The sample size obviously plays a role in this representation, but each class clearly shows a level of confusion that is higher with one specific other class. The graph for all outputs shows that the confusion between CI and Other is the lowest, followed by the confusion between CI and CII YSOs, with the highest confusion level being between the CII and Other classes. These observations are strongly consistent with our previous analysis based on the confusion matrices alone.\\
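The ternary plots themselves were produced with a dedicated tool (see the link in the caption of Fig.~\ref{ternary_plots}), but the underlying projection reduces to the standard barycentric-to-Cartesian mapping. A minimal sketch, with Dirichlet draws standing in for the network outputs:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# p holds the three normalized membership probabilities (CI, CII,
# Other) per object; dummy random draws replace the network outputs.
rng = np.random.default_rng(1)
p = rng.dirichlet((2.0, 2.0, 2.0), size=500)

# Barycentric-to-Cartesian mapping on an equilateral triangle:
# corners (1,0,0)->(0,0), (0,1,0)->(1,0), (0,0,1)->(0.5, sqrt(3)/2).
x = 0.5 * (2.0 * p[:, 1] + p[:, 2])
y = 0.5 * np.sqrt(3.0) * p[:, 2]

plt.scatter(x, y, s=4)
plt.gca().set_aspect("equal")
plt.savefig("ternary_sketch.png")
\end{verbatim}
Confident objects land near the triangle corners, while confused ones fall toward the edges or the center, as described above.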
\begin{figure*}[!t] \hspace{-1.2cm} \begin{minipage}{1.17\textwidth} \centering \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Output}} \includegraphics[width=\textwidth]{images/hist_proba.pdf} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Correct}} \includegraphics[width=\textwidth]{images/hist_proba_proper.pdf} \end{subfigure}\\ \vspace{0.2cm} \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Missed}} \includegraphics[width=\textwidth]{images/hist_proba_missed.pdf} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \caption*{\textbf{Wrong}} \includegraphics[width=\textwidth]{images/hist_proba_wrong.pdf} \end{subfigure} \end{minipage} \caption[Histograms of membership probability in the F-C case]{Histograms of membership probability for the YSO classes regarding different populations in the F-C case forwarded on the full dataset. \textit{Output:} all objects. \textit{Correct:} genuine and predicted classes are identical. \textit{Missed:} misclassified objects colored according to their genuine class. \textit{Wrong:} misclassified objects colored according to their predicted class.} \label{hist_proba} \end{figure*} \begin{table}[!t] \small \centering \caption{F-C case forwarded on the full dataset with membership probability $> 0.9$. } \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSO & CII YSO & Other & Recall \\ \cmidrule(lr){2-6} & CI YSO & 297 & 5 & 8 & 95.8\% \\ & CII YSO & 16 & 2412 & 13 & 98.8\% \\ & Other & 26 & 118 & 23247 & 99.4\% \\ \cmidrule(lr){2-6} & Precision & 87.6\% & 95.1\% & 99.9\% & 99.3\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \caption*{\vspace{-0.3cm}\\ {\bf Notes.} The selection removed 104 CI ($-25.1\%$), 218 CII ($-8.2\%$), and 439 Other ($-1.8\%$).} \vspace{-0.1cm} \label{conf_proba_09} \end{table} \begin{table}[!t] \small \centering \caption{F-C case forwarded on the full dataset with membership probability $> 0.95$.
} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSO & CII YSO & Other & Recall \\ \cmidrule(lr){2-6} & CI YSO & 274 & 2 & 7 & 96.8\% \\ & CII YSO & 11 & 2302 & 8 & 99.2\% \\ & Other & 23 & 92 & 23136 & 99.5\% \\ \cmidrule(lr){2-6} & Precision & 89.0\% & 96.1\% & 99.9\% & 99.4\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \caption*{\vspace{-0.3cm}\\ {\bf Notes.} The selection removed 131 CI ($-31.6\%$), 338 CII ($-12.7\%$), and 579 Other ($-2.4\%$).} \vspace{-0.1cm} \label{conf_proba_095} \end{table} \begin{table}[!t] \small \centering \caption{F-C case forwarded on the full dataset with membership probability $> 0.99$.} \vspace{-0.1cm} \begin{tabularx}{0.65\hsize}{r l |*{3}{m}| r } \multicolumn{2}{c}{}& \multicolumn{3}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-6} \parbox[l]{0.2cm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & CI YSO & CII YSO & Other & Recall \\ \cmidrule(lr){2-6} & CI YSO & 203 & 0 & 5 & 97.6\% \\ & CII YSO & 4 & 1970 & 4 & 99.6\% \\ & Other & 14 & 51 & 22747 & 99.7\% \\ \cmidrule(lr){2-6} & Precision & 91.9\% & 97.5\% & 99.9\% & 99.7\% \\ \cmidrule[\heavyrulewidth](lr){2-6} \end{tabularx} \caption*{\vspace{-0.3cm}\\ {\bf Notes.} The selection removed 206 CI ($-49.8\%$), 681 CII ($-25.6\%$), and 1018 Other ($-4.3\%$).} \vspace{-0.1cm} \label{conf_proba_099} \end{table} Additionally, as discussed in Section~\ref{proba_class_intro}, the probabilistic predictions can be used to remove objects that are not reliable enough. The misclassified objects show a higher degree of confusion, and therefore a lower maximum membership probability than properly classified objects. This characteristic is illustrated by Figures~\ref{ternary_plots} and \ref{hist_proba}. The latter compares histograms of the highest output probability for properly and wrongly classified objects. This figure reveals that the great majority of correctly classified YSOs have a membership probability greater than 0.95, whereas most missed or wrong YSOs have a membership probability below that threshold. In this context, applying a threshold on the membership probability will proportionally remove more misclassified objects than properly classified ones, therefore improving the recall and precision of our network. The threshold value is arbitrary, depending on the application. \\ \newpage We illustrate this selection effect on the F-C case in Tables~\ref{conf_proba_09}, \ref{conf_proba_095}, and \ref{conf_proba_099}. These tables represent the confusion matrices of the complete combined dataset after selecting objects with membership probability above 0.9, 0.95, and 0.99, respectively. In the 0.9 case (Table~\ref{conf_proba_09}), 25\% (104) of the CI YSOs were removed, while their recall increased by 4.5\%. In the same way, 8.2\% (218) of the CII YSOs were removed, leading to a 1.2\% increase in their recall. Contaminants were less affected, with only 1.8\% of objects removed, which still increased the recall by 0.6\%. This is an additional demonstration of the CI YSOs being less constrained than the other output classes. In the 0.95 case (Table~\ref{conf_proba_095}), the output classes have lost 31.6\% (131), 12.7\% (338), and 2.4\% (579) of objects, respectively.
This still improved the recall of the two YSO classes, with a 1\% increase for CI and a 0.4\% increase for CII when compared to the 0.9 case. This result is also the first one to come close to having all quality estimators above 90\%, since the CI YSO precision is 89\%, while losing only an acceptable fraction of objects. The 0.99 case (Table~\ref{conf_proba_099}) is more extreme, since almost 50\% (206) of CI YSOs were removed, but the recall of the remaining ones reached 97.6\%, which is a 6.3\% improvement over the regular F-C full dataset case. However, the CII YSOs are also strongly affected, with 25.6\% (681) of them removed, while only yielding a 0.4\% improvement in comparison to the 0.95 case. Another illustration that this strategy effectively excludes objects that are near the cuts is presented in Figure~\ref{membership_threshold_comparison}, where the objects above or below a $0.9$ membership threshold are plotted separately for a usual set of CMDs. This effect is particularly visible in the ([4.5]-[8],[3.6]-[5.8]) (second frame) and the ([4.5]-[5.8],[3.6]-[4.5]) (fourth frame) diagrams. This figure illustrates that objects with a membership probability of less than 0.9 can be considered unreliable. \\ \begin{figure*} \centering \begin{subfigure}[t]{0.49\textwidth} \caption*{{\bf \Large Above 0.9}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_above_A.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \caption*{{\bf \Large Below 0.9}} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_below_A.png} \end{subfigure}\\ \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_above_B.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_below_B.png} \end{subfigure}\\ \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_above_C.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_below_C.png} \end{subfigure} \caption{See caption on second half of the figure next page.} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_above_D.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_below_D.png} \end{subfigure}\\ \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_above_E.png} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{images/cmd_full_1kpc_below_E.png} \end{subfigure} \caption[Probability threshold effect on feature space coverage]{Input parameter space coverage using the usual G09 diagrams in the F-C case on the full dataset regarding their predicted membership probability. CI YSOs are in red, CII YSOs are in green, while Others are in blue. \textit{Left:} objects with membership probability greater than $0.9$. \textit{Right:} objects with membership probability less than $0.9$.} \label{membership_threshold_comparison} \end{figure*} \vspace{0.3cm} It is important to emphasize again, as stated in Section~\ref{proba_class_intro}, that the membership probability output is not a direct physical probability. It is a probability regarding the network's knowledge of the problem, which can be biased, incomplete, or both. Therefore, selecting a $0.9$ membership probability does not necessarily correspond to a $90\%$ certainty prediction level. The only usable probability is the one given by the confusion matrix. Consequently, according to Table~\ref{conf_proba_09}, when applying a $0.9$ membership limit, the probability that a predicted class I YSO is correct is estimated to be $87.6\%$, while, with the same limit, class II YSOs are correct in $95.1\%$ of the cases. These two values are not equivalent, and one must not use the network output membership probability as a true estimate of the reliability of an object. It can only be used to compare objects from the same network training, and must be converted into a true quality estimator using the confusion matrix. \\
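In practice, the selection behind Tables~\ref{conf_proba_09} to \ref{conf_proba_099} amounts to keeping only the objects whose maximum output probability exceeds the threshold and recomputing the confusion matrix on this subset. A minimal sketch with dummy outputs in place of the actual network predictions:
\begin{verbatim}
import numpy as np

def confusion(actual, pred, n_classes=3):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (actual, pred), 1)
    return cm

# Dummy stand-ins for the network outputs and genuine labels.
rng = np.random.default_rng(2)
proba = rng.dirichlet((1.0, 1.0, 1.0), size=2000)
actual = rng.integers(0, 3, size=2000)

pred = proba.argmax(axis=1)      # regular class association
keep = proba.max(axis=1) > 0.9   # membership probability threshold
cm = confusion(actual[keep], pred[keep])
print(cm, f"kept {100 * keep.mean():.1f}% of objects")
\end{verbatim}
The recall and precision derived from the resulting matrix are then the only reliability estimates that should be quoted, as argued above.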
\newpage \subsection{Graphical analysis of the membership probability} Printing the confusion matrix for each membership probability threshold is the most direct way to obtain the performance information at a given threshold value. Still, some common graphical representations can be built from the threshold value. The most common one is the Receiver Operating Characteristic (ROC) curve, which is a standard tool to assess the prediction quality of a binary probabilistic classifier. In our case, it is possible to plot the corresponding ROC curve for each output class by considering it as a binary output against the other two classes. The ROC curve is usually defined as the True Positive Rate (TPR), or sensitivity, which is equivalent to our previously defined recall (Sect.~\ref{class_balance}), plotted against the False Positive Rate (FPR), or $1 - {\rm specificity}$. To produce this curve, the threshold value is sampled and the previous two values are computed at each point for a specific class. We stress that, during this process, an object is counted as belonging to the considered class as soon as the corresponding neuron has a value higher than the threshold, regardless of whether it is the maximum value of the output neurons. This produces Figure~\ref{roc_curve}, which contains the corresponding ROC curve for each class. In this figure, a random classifier would produce a linear response, while a perfect classifier would have only one point at the top left corner of the graph, meaning that it has both a perfect sensitivity and a perfect specificity. The ROC curve allows one to compute the Area Under the Curve (AUC), which is an estimate of the global binary classifier performance. The regular ROC plot being generally used for less efficient classifiers, we made a zoom on the part of interest for our case in the right frame of Figure~\ref{roc_curve}. Looking at the AUC values and the curves, it is striking that our CI YSO class is less well represented. Interestingly, the CII and Other classes seem more or less equivalent using this quality estimator, which was not the case when looking at our confusion matrices.\\ \begin{figure}[!t] \hspace{-1.8cm} \begin{minipage}{1.20\hsize} \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\hsize]{images/roc_full1kpc.pdf} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\hsize]{images/roc_zoom_full1kpc.pdf} \end{subfigure} \end{minipage} \caption[ROC curves for output classes in the F-C case]{ROC curves for each of our output classes, CI in blue, CII in orange, and Other in green. The red curve illustrates a random classifier. Each point of the curve is obtained from a given threshold limit. The {\it right} frame is a zoom of the upper left part of the {\it left} frame.} \label{roc_curve} \vspace{1.5cm} \end{figure}
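A minimal sketch of the one-versus-rest ROC and AUC computation described above, with dummy probabilities in place of the network outputs (the exact threshold sampling is arbitrary):
\begin{verbatim}
import numpy as np

def roc_one_vs_rest(p_class, is_class, thresholds):
    # An object counts as positive as soon as the class neuron exceeds
    # the threshold, regardless of whether it is the output maximum.
    fpr, tpr = [], []
    pos, neg = is_class.sum(), (~is_class).sum()
    for t in thresholds:
        sel = p_class > t
        tpr.append((sel & is_class).sum() / pos)
        fpr.append((sel & ~is_class).sum() / neg)
    return np.array(fpr), np.array(tpr)

# Dummy probabilities and labels for illustration.
rng = np.random.default_rng(3)
proba = rng.dirichlet((1.0, 1.0, 1.0), size=2000)
actual = rng.integers(0, 3, size=2000)

thr = np.linspace(0.0, 1.0, 201)
fpr, tpr = roc_one_vs_rest(proba[:, 0], actual == 0, thr)
auc = -np.trapz(tpr, fpr)  # fpr decreases with threshold, hence the sign
\end{verbatim}
Note that the positive selection only tests the considered neuron against the threshold, matching the convention stressed above.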
\begin{figure}[!t] \centering \begin{subfigure}[t]{0.70\textwidth} \includegraphics[width=\hsize]{images/exc_threshold_full1kpc.pdf} \end{subfigure}\\ \vspace{0.4cm} \begin{subfigure}[t]{0.70\textwidth} \includegraphics[width=\hsize]{images/recall_threshold_full1kpc.pdf} \end{subfigure}\\ \vspace{0.4cm} \begin{subfigure}[t]{0.70\textwidth} \includegraphics[width=\hsize]{images/prec_threshold_full1kpc.pdf} \end{subfigure} \caption[Fraction of exclusion, recall and precision as a function of probability threshold]{Evolution of the quality estimators for each class as a function of the membership probability threshold. The {\it Top}, {\it Middle}, and {\it Bottom} frames show the evolution of the kept proportion, recall, and precision, respectively.} \label{threshold_curves} \end{figure} \clearpage The ROC curve is an interesting indicator when comparing different classifiers, but it is less useful in our case with just one final classifier. However, it motivated the estimation of other quantities as a function of the probability threshold. Three quantities are mainly of interest in our case: the proportion of objects that are excluded, the recall, and the precision. Figure~\ref{threshold_curves} shows all these quantities for our three output classes. We note that the curves do not go below a probability value of $1/3$ since, for a normalized output with three classes, when a class has a probability less than $1/3$, another one necessarily has a probability greater than this value. These curves perfectly illustrate the lower confidence level of the network in our CI predictions compared to the two other classes. Ultimately, they can be used to predict the threshold to apply in order to reach a given recall or precision on a given class, along with the number of objects that would be lost and the impact on the other classes.\\ However, we note that these curves are unable to reproduce our F-C full dataset result, since the class association used there consisted in taking the maximum of the three probability outputs. With this threshold approach, considering an object as a CI as soon as its membership probability is above 0.4 does not prevent another class from being at 0.6, and therefore a misclassification. However, doing so allows one to select objects that are at least close to the CI category. In order to avoid misclassifications, the threshold value must be above 0.5, since no other class can then be higher. Still, it will miss some objects that the maximum probability association would have found, for example a (0.4, 0.3, 0.3) probability output. These examples highlight the limits of the threshold approach, especially for lower threshold values.\\ Finally, we looked at the effect of the membership probability threshold on the distribution of the remaining YSOs in Orion and NGC 2264. Figures~\ref{yso_ci_dist_proba_orion} and \ref{yso_cii_dist_proba_orion} show the distribution of CI and CII YSOs, respectively. Although the objects classified as CI but removed by a probability threshold of 0.9 have a greater chance of not being genuine CI YSOs, this does not translate into an evident correlation in the sky-plane distribution of these objects. Indeed, the excluded objects seem to mostly follow the global distribution of their class.
Still, due to the higher concentration of objects in the densest part of the cloud, applying a threshold will result in an apparently narrower distribution on the filament. Figure~\ref{yso_dist_proba_ngc2264} shows similar results for NGC 2264. We note that such a cut is only interesting when looking at the statistical properties of the clouds. For more local, or per-star, studies, it could be useful to keep all candidates and proceed to further individual inspection.\\ To conclude, with the inclusion of this probability in our results, we provide a substantial addition to the original G09 classification, for which it might be more difficult to identify the reliable objects. The results of the F-C case will be published in the form of a public catalog available at CDS and associated with our paper \citep{cornu_montillaud_20}, which contains the class prediction along with the membership probability for each object in the Combined dataset. It includes all objects from the catalogs by \citet{megeath_spitzer_2012} and \citet{rapson_spitzer_2014}, as described in Section~\ref{data_setup}, and Table~\ref{yso_catalog} shows an excerpt from our catalog.\\ \clearpage \begin{figure*}[!t] \hspace{-1.5cm} \begin{minipage}{1.25\hsize} \centering \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_all_CI.png} \end{subfigure}\\ \vspace{-0.2cm} \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_CI_0.90.png} \end{subfigure}\\ \vspace{-0.2cm} \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_CI_0.99.png} \end{subfigure} \end{minipage} \caption[Probability filter on Orion CI YSO candidate distribution]{Distribution of CI YSO candidates for Orion A after a membership probability threshold. The background grayscale is the Herschel SPIRE 500 $\mathrm{\mu m}$ map. The {\it Top}, {\it Middle}, and {\it Bottom} frames show all candidates, and those above the 0.9 and 0.99 probability thresholds, respectively.} \label{yso_ci_dist_proba_orion} \end{figure*} \begin{figure*}[!t] \hspace{-1.5cm} \begin{minipage}{1.25\hsize} \centering \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_all_CII.png} \end{subfigure}\\ \vspace{-0.2cm} \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_CII_0.90.png} \end{subfigure}\\ \vspace{-0.2cm} \begin{subfigure}[t]{\textwidth} \includegraphics[width=\hsize]{images/orion_herschel_CII_0.99.png} \end{subfigure} \end{minipage} \caption[Probability filter on Orion CII YSO candidate distribution]{Distribution of CII YSO candidates for Orion A after a membership probability threshold. The background grayscale is the Herschel SPIRE 500 $\mathrm{\mu m}$ map.
The {\it Top}, {\it Middle}, and {\it Bottom} frames show all candidates, and those above the 0.9 and 0.99 probability thresholds, respectively.} \label{yso_cii_dist_proba_orion} \end{figure*} \begin{figure*}[!t] \hspace{-1.7cm} \begin{minipage}{1.2\hsize} \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_all_CI.png} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_all_CII.png} \end{subfigure}\\ \vspace{+0.4cm} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_CI_0.90.png} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_CII_0.90.png} \end{subfigure}\\ \vspace{+0.4cm} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_CI_0.99.png} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel_CII_0.99.png} \end{subfigure} \end{minipage} \caption[Probability filter on NGC 2264 YSO candidate distribution]{Distribution of YSO candidates for NGC 2264 after a membership probability threshold. The background grayscale is the Herschel SPIRE 500 $\mathrm{\mu m}$ map. {\it Left}: distribution of the CI YSOs. {\it Right}: distribution of the CII YSOs. The {\it Top}, {\it Middle}, and {\it Bottom} frames show all candidates, and those above the 0.9 and 0.99 probability thresholds, respectively.} \label{yso_dist_proba_ngc2264} \end{figure*} \input{catalog_excerpt} \clearpage \section{3D cloud reconstruction using cross-match with Gaia} \label{3d_yso_gaia} \etocsettocstyle{\subsubsection*{\vspace{-0.5cm}}}{} \localtableofcontents \vspace{0.5cm} \subsection{Orion A distance and 3D information} \label{sect:orion_A_dist} \vspace{0.2cm} We aim to demonstrate that our catalog of YSO candidates can be used to retrieve distance information about molecular clouds, and even to reveal more subtle 3D structural characteristics. For this, we mainly followed the approach presented by \citet{grossschedl_3d_2018} (hereafter GR18), who performed a 3D reconstruction of Orion A based on the \citet{megeath_spitzer_2012} catalog (with some refinements from \citet{Megeath_2016}) and an additional sample of 200 YSOs from the ESO-VISTA near-infrared survey \citep{Meingast_2016}, all cross-matched with Gaia DR2 in order to obtain parallax measurements. Therefore, using our own YSO candidate catalog, we tried to reproduce the results on Orion A, and subsequently applied a similar approach to Orion B and NGC 2264. Our approach is described in the following paragraphs.\\ We performed a cross-match with Gaia DR2 \citep{Gaia_Collaboration_2018} that associates objects based on a $1^{\prime\prime}$ sky distance, as in GR18. Since YSOs are embedded in an environment that is often optically thick for the optical Gaia G band, we only recovered a fraction of them, and lost almost all the CI YSOs for all datasets. Still following GR18, we applied cuts in parallax quality with the $\sigma_{\varpi}/\varpi < 0.1$ condition. We also stress that we kept their assumption that the direct inverse of the parallax is a sufficiently good distance approximation in the case of Orion A, with less than a 1\% difference with respect to the \citet{bailer-jones_2018} Bayesian inference distance estimate. Then, instead of applying subsequent Gaia astrometry filters and an additional color-excess exclusion as in GR18, we preferred to test whether improving the YSO selection quality leads to similar results.
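The cross-match and the parallax quality cut can be sketched as follows, assuming astropy and dummy catalog columns in place of the real YSO and Gaia DR2 tables (column names and values are illustrative only):
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Dummy coordinate and parallax columns; in practice these come from
# the YSO candidate catalog and from the Gaia DR2 source table.
rng = np.random.default_rng(4)
yso = SkyCoord(ra=rng.uniform(83, 88, 100) * u.deg,
               dec=rng.uniform(-9, 1, 100) * u.deg)
gaia = SkyCoord(ra=rng.uniform(83, 88, 5000) * u.deg,
                dec=rng.uniform(-9, 1, 5000) * u.deg)
plx = rng.normal(2.5, 0.3, 5000)               # parallaxes [mas]
plx_err = np.abs(rng.normal(0.1, 0.05, 5000))  # their uncertainties

idx, d2d, _ = yso.match_to_catalog_sky(gaia)   # nearest neighbour
m_plx, m_err = plx[idx], plx_err[idx]
good = (d2d < 1.0 * u.arcsec) & (m_plx > 0) & (m_err / m_plx < 0.1)

dist = 1000.0 / m_plx[good]                       # d [pc] = 1/plx [as]
dist_err = 1000.0 * m_err[good] / m_plx[good]**2  # sigma_plx / plx^2
\end{verbatim}
The last line applies the first-order error propagation $\sigma_d = \sigma_{\varpi}/\varpi^2$, which is what the error bars in the figures of this section represent.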
To improve the selection quality, we applied a 0.95 membership threshold, which was shown to conserve a large fraction of the objects while providing a $\sim$90\% precision on our CI YSOs (Table~\ref{conf_proba_095}), which is important for such an application. This threshold further reduces the number of objects recovered after the cross-match with Gaia. The rest of the analysis is then specific to each region.\\ \begin{table} \centering \caption{Orion A sample size for different selection criteria.} \vspace{-0.1cm} \begin{tabularx}{0.9\hsize}{l @{\hskip 0.1\hsize} *{2}{Y}} \toprule & CI YSOs & CII YSOs\\ \midrule (0): Raw catalog & 275 & 1957\\ (1): Raw X-match & 49 & 1612\\ (2): 1 with $\varpi$ & 36 & 1457\\ (3): 2 with $\sigma_{\varpi}/\varpi < 0.1$ & 12 & 1006 \\ (4): 2 with $P(X) > 0.99$ & 2 & 1038 \\ (5): 2 with $P(X) > 0.95$ and $\sigma_{\varpi}/\varpi < 0.1$ & 5 & 932\\ \bottomrule \end{tabularx} \label{orion_A_select_crit} \vspace{1.5cm} \end{table} \newpage \begin{figure}[!t] \hspace{-1.7cm} \begin{minipage}{1.2\hsize} \centering \includegraphics[width=1.0\hsize]{images/grossschedl_orion_dist.png} \end{minipage} \caption[Orion A distance distribution from \citet{grossschedl_3d_2018}]{Orion A distance distribution from GR18, with their full selection. Error bars correspond to $(\sigma_{\varpi}/\varpi^2)$. The orange and blue markers are the mean and median distance for each $\Delta l = 1^{\circ}$ distance bin, respectively. The light and dark blue areas represent the $2\sigma$ and $1\sigma$ percentiles for each bin, respectively. The background grayscale is a Herschel map. {\it From} \citet{grossschedl_3d_2018}.} \label{orion_A_dist_GR18} \end{figure} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.70\textwidth} \caption*{\bf Selection criteria (5)} \includegraphics[width=1.0\hsize]{images/orion_3d_dist_dispersion.pdf} \end{subfigure}\\ \vspace{-0.3cm} \begin{subfigure}[t]{0.70\textwidth} \includegraphics[width=1.0\hsize]{images/orion_herschel.png} \end{subfigure}\\ \vspace{0.2cm} \begin{subfigure}[t]{0.70\textwidth} \caption*{\bf Selection criteria (4)} \includegraphics[width=1.0\hsize]{images/orion_3d_dist_dispersion_prob.pdf} \end{subfigure}\\ \vspace{-0.3cm} \begin{subfigure}[t]{0.70\textwidth} \includegraphics[width=1.0\hsize]{images/orion_herschel_prob.png} \end{subfigure} \caption[Orion A distance distribution from the F-C YSO candidates]{Orion A distance distribution from the F-C YSO candidates, along with the sky distribution over the region. Error bars correspond to $(\sigma_{\varpi}/\varpi^2)$. CI and CII selected YSOs are in red and green, respectively. The orange and blue markers are the mean and median distance for each $\Delta l = 1^{\circ}$ distance bin, respectively. The light and dark blue areas represent the $2\sigma$ and $1\sigma$ percentiles for each bin, respectively. The background grayscale is the Herschel SPIRE 500 $\mathrm{\mu m}$ map. {\it Top duo}: YSO selection and distribution in case (5). {\it Bottom duo}: YSO selection and distribution in case (4).} \label{orion_A_dist} \end{figure*} \newpage For Orion A, we used galactic coordinates, both to properly compare with GR18 and because the filament structure is mostly aligned with the galactic longitude axis. A few YSO selection criteria have been tested and are summarized in Table~\ref{orion_A_select_crit}.
From 275 CI and 1957 CII YSO candidates corresponding to Orion A in our catalog (case (0)), the Gaia cross-match only preserves 49 CI and 1612 CII YSOs (case (1)), of which only 36 and 1457, respectively, have a parallax measurement (case (2)). Afterward, the use of our ``good YSO'' condition, which includes the membership probability threshold at 0.95, combined with an exclusion of objects with parallaxes clearly unrelated to Orion ($\varpi > 3.333$ or $\varpi < 1.666$), as in GR18, left only 5 CI and 932 CII YSOs (case (5)). We observed that the threshold is only responsible for the additional removal of 7 CI and 74 CII YSOs, while all the other removals are induced by the parallax condition (case (3)). Nevertheless, selecting YSOs with a membership probability $>0.95$ with no cut in parallax quality still led to a sample that is smaller than the raw cross-match, with 23 CI and 1441 CII YSOs. A 0.99 membership cut (case (4)) resulted in a sample with 2 CI and 1038 CII YSOs, on which the addition of the parallax condition left only 2 CI and 846 CII YSOs. This tends to indicate that there is an overlap between the two conditions, meaning that selecting very firmly established YSOs could be a sufficient criterion in the case of Orion A.\\ \newpage \begin{sidewaystable*} \footnotesize \centering \caption{Orion A distance estimates and dispersion for each galactic longitude bin.} \begin{tabularx}{1.0\hsize}{l @{\hskip 0.01\hsize} | Y l *{6}{Y} @{\hskip 0.05\hsize} | Y l *{6}{Y}} \toprule \multicolumn{17}{c}{}\vspace{-0.2cm}\\ \multicolumn{1}{c}{} & \multicolumn{8}{c}{{\bf Case (5): X-match with $\bm{ P(X) > 0.95}$ and $\bm{\sigma_{\varpi}/\varpi < 0.1}$}} & \multicolumn{8}{c}{{\bf Case (4): X-match with $\bm{P(X) > 0.99}$}}\\ \multicolumn{1}{c}{}\vspace{-0.2cm}\\ \midrule $\Delta l$ center & $N_{YSO}$ & Mean & Median & StD & $p(-2\sigma)$ & $p(-1\sigma)$ & $p(+1\sigma)$ & $p(+2\sigma)$ & $N_{YSO}$ & Mean & Median & StD & $p(-2\sigma)$ & $p(-1\sigma)$ & $p(+1\sigma)$ & $p(+2\sigma)$ \\ (deg) & & (pc) & (pc) & (pc) & (pc) & (pc) & (pc) & (pc) & & (pc) & (pc) & (pc) & (pc) & (pc) & (pc) & (pc) \\ \midrule 207.5 & 2 & $401.4\pm 19.4$ & 394.7 & 29.1 & 364.3 & 375.7 & 420.7 & 470.1 & 18 & $401.0\pm 38.4$ & 393.5 & 41.3 & 331.9 & 375.7 & 425.6 & 489.6 \\ 208.0 & 155 & $396.4\pm 17.2$ & 394.7 & 25.5 & 350.9 & 376.4 & 411.2 & 461.7 & 155 & $401.1\pm 42.9$ & 395.7 & 35.7 & 345.9 & 376.7 & 417.6 & 502.5 \\ 208.5 & 321 & $392.3\pm 17.9$ & 393.4 & 29.0 & 331.1 & 371.4 & 411.4 & 454.1 & 314 & $392.9\pm 35.4$ & 393.4 & 36.7 & 312.1 & 369.5 & 414.9 & 471.0 \\ 209.0 & 416 & $397.5\pm 18.5$ & 396.7 & 32.0 & 330.8 & 375.5 & 418.3 & 467.3 & 410 & $397.6\pm 29.5$ & 396.9 & 38.2 & 313.2 & 371.4 & 420.7 & 487.4 \\ 209.5 & 337 & $399.2\pm 18.5$ & 396.6 & 30.4 & 342.1 & 376.1 & 419.8 & 476.1 & 344 & $402.1\pm 35.3$ & 398.0 & 37.4 & 334.6 & 375.6 & 424.5 & 498.5 \\ 210.0 & 180 & $391.3\pm 18.4$ & 388.7 & 29.8 & 329.6 & 367.8 & 415.5 & 459.6 & 195 & $394.9\pm 38.9$ & 390.4 & 37.5 & 327.0 & 366.7 & 420.0 & 491.5 \\ 210.5 & 115 & $395.1\pm 19.5$ & 391.4 & 33.8 & 328.9 & 367.9 & 420.0 & 479.9 & 136 & $398.1\pm 60.7$ & 393.1 & 40.4 & 328.6 & 366.3 & 427.5 & 514.4 \\ 211.0 & 56 & $397.5\pm 20.4$ & 394.6 & 38.8 & 330.6 & 365.3 & 418.2 & 506.4 & 78 & $406.2\pm 84.8$ & 397.5 & 47.7 & 328.1 & 366.3 & 447.1 & 529.2 \\ 211.5 & 31 & $402.7\pm 20.2$ & 398.4 & 39.8 & 329.3 & 372.1 & 430.8 & 487.0 & 48 & $418.5\pm 66.6$ & 408.2 & 47.0 & 337.1 & 381.8 & 463.2 & 527.3 \\ 212.0 & 52 & $425.3\pm 22.0$ & 423.0 & 44.5 & 353.0 & 380.5 & 469.4 & 521.4 & 66 & $431.2\pm 51.7$ & 426.0 & 48.0 & 355.7 & 381.9 & 476.2 & 539.8 \\ 212.5 & 65 & $433.9\pm 23.4$ & 439.7 & 43.8 & 355.5 & 383.1 & 471.5 & 514.0 & 89 & $439.7\pm 61.4$ & 446.8 & 50.6 & 351.6 & 382.9 & 482.5 & 552.0 \\ 213.0 & 48 & $436.9\pm 23.9$ & 443.9 & 35.8 & 360.3 & 402.2 & 467.1 & 490.5 & 81 & $439.0\pm 66.5$ & 446.8 & 56.8 & 323.1 & 381.2 & 490.7 & 539.5 \\ 213.5 & 32 & $439.5\pm 24.5$ & 442.9 & 36.0 & 368.3 & 410.1 & 472.9 & 496.6 & 59 & $439.7\pm 69.8$ & 443.1 & 62.1 & 315.3 & 385.9 & 490.7 & 562.7 \\ 214.0 & 24 & $461.1\pm 29.1$ & 461.0 & 44.6 & 389.9 & 417.6 & 508.4 & 549.5 & 52 & $471.9\pm 95.8$ & 465.4 & 63.4 & 378.1 & 403.2 & 541.7 & 589.1 \\ 214.5 & 13 & $476.1\pm 30.2$ & 468.9 & 44.5 & 409.3 & 430.1 & 513.5 & 558.6 & 30 & $486.2\pm 104$ & 469.5 & 64.9 & 370.6 & 429.1 & 565.1 & 593.5 \\ \bottomrule \end{tabularx} \label{tab_orion_a_dist} \end{sidewaystable*} \clearpage To help the comparison of our results with GR18, Figure~\ref{orion_A_dist_GR18} from \citet{grossschedl_3d_2018} depicts the distribution of their YSO selection in Orion A using their full selection, resulting in a sample of 682 CII YSOs. The top frame shows the distance distribution along the galactic longitude, each point being a YSO with its uncertainty ($\sigma_{\varpi}/\varpi^2$). To extract continuous 3D information, they chose to make longitude bins of $\Delta l = 1^\circ$ width and evaluated the mean and median distance of such an interval every $0.5^\circ$. This led to their main result: a clear depth evolution along the molecular complex, with an estimated angle of $\sim 70^\circ$ with respect to the plane of the sky. This also allowed them to estimate that the physical length of Orion A is $\sim 90$ pc. These results strongly underline the limits of the commonly adopted 414 pc distance estimate for the whole molecular complex \citep{Menten2007}.\\ Using our catalog, we were able to observe a very similar distance distribution. Figure~\ref{orion_A_dist} shows our result with the combined selection based on probability and parallax uncertainty, corresponding to case (5), and for our membership selection alone, i.e., case (4). The first one, despite being noisier than the GR18 result, follows the same global trend, with a very similar $1\sigma$ dispersion. We used the same longitude binning approach and represented the same mean, median, and $1$--$2\sigma$ percentiles. While the distribution over the plane of the sky is slightly more crowded, it seems that we have fewer objects far away from the main filament. It is interesting to note that, in this example, we have $\sim 37\%$ more CII YSOs than GR18, which could result in better overall statistics. Also, we recall that, in this case, we only applied one criterion on astrometry quality, while many other filters are applied in GR18. Still, we reached the same conclusion as GR18, with at least around 80 pc of distance difference between the closest ($\sim$395 pc) and the farthest ($\sim$475 pc) points, and very similar bin averages and standard deviations. \\ The lower part of Figure~\ref{orion_A_dist} shows our attempt to use case (4), with no parallax quality criterion at all. As expected, the uncertainties are much larger, while the overall trend is mostly conserved. The absence of any parallax quality filter makes the mean uncertainty estimation very large for some regions, for which very uncertain objects are used. We observed that, in the longitude bins that contain many YSOs, the $1\sigma$ percentile of the distance estimate is very similar to case (5), while it significantly increases for areas that are less dense in YSOs, at greater longitudes. Still, the $2\sigma$ percentile is always larger, which is expected with the increased dispersion. Nevertheless, the distribution over the plane of the sky remains convincing, with very few objects far off the main filament. Additionally, we noticed that other selections did not lead to significant changes. A combination of the two cases, with both the 0.99 threshold and the astrometry criterion, only reduced the dispersion by a small amount, with no other significant changes. For an in-depth comparison between our two cases and the GR18 case, we provide all measurements for each of our galactic longitude bins in Table~\ref{tab_orion_a_dist}, containing the YSO counts, mean and median values, and the standard deviation along with the four percentile values.\\
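The binned statistics of Table~\ref{tab_orion_a_dist} follow the same recipe as GR18: $\Delta l = 1^\circ$ bins evaluated every $0.5^\circ$. A minimal sketch with dummy longitude and distance columns in place of the selected YSO sample (the percentile levels approximate the $1\sigma$ and $2\sigma$ intervals):
\begin{verbatim}
import numpy as np

# Dummy stand-ins for the galactic longitudes and distances of the
# selected YSOs; the real values come from the Gaia cross-match.
rng = np.random.default_rng(5)
gal_l = rng.uniform(207.0, 215.0, 900)
dist = rng.normal(410.0, 35.0, 900)

for c in np.arange(207.5, 215.0, 0.5):  # bin centers every 0.5 deg
    d = dist[np.abs(gal_l - c) < 0.5]   # Delta l = 1 deg sliding bins
    if d.size == 0:
        continue
    p = np.percentile(d, [2.3, 15.9, 84.1, 97.7])  # ~2 and 1 sigma
    print(f"{c:6.1f} N={d.size:3d} mean={d.mean():6.1f} "
          f"median={np.median(d):6.1f} std={d.std(ddof=1):6.1f} "
          f"p={np.round(p, 1)}")
\end{verbatim}
Because consecutive bins overlap by half their width, neighboring rows of the table are correlated, which should be kept in mind when interpreting the apparent smoothness of the depth trend.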
\newpage \subsection{Distances to Orion B sub-regions} \vspace{-0.2cm} We applied the same methodology to the Orion B part of the molecular complex. As before, we used the 0.95 membership threshold along with the $\sigma_{\varpi}/\varpi < 0.1$ condition. We also removed the maximum and minimum parallax limits, since there were far fewer objects that were evident distance outliers. We considered the following sub-regions: NGC 2024/2022, NGC 2071/2068, and LDN 1622, as in \citet{megeath_spitzer_2012}. We applied our selection to each of these regions and produced a distance estimate for the three of them. The results are summarized in Table~\ref{orion_B_select_crit_and_dist}. First, we observed that, despite the fact that no CI YSO passed the full selection criteria, NGC 2024/2022 and NGC 2071/2068 still contain a reasonable number of CII YSOs to estimate a distance, with 51 and 67 objects, respectively, which is more than in several of the bins we used for Orion A. The LDN 1622 region, despite actively forming stars in a very localized spot (see the CI YSOs in Fig.~\ref{orion_B_yso_dist}), is only characterized by 9 CII YSOs after the selection, which is similar to our least represented Orion A bin. We show in Figure~\ref{orion_B_selected_dist} the selected YSOs within Orion B.\\ \vspace{-0.1cm} The NGC 2024/2022 region, which is the closest one to Orion A in the plane of the sky, shows an estimated mean distance of $382 \pm 20$ pc and a median of 397 pc, very close to that of the Orion nebula. We emphasize that all the errors given alongside the mean predictions correspond to the propagated parallax errors. In this case, the standard deviation of $47$ pc is similar to the largest values observed in Orion A (Table~\ref{tab_orion_a_dist}) and could indicate either that the selection of YSO candidates in this region is not as good as expected, or that this cloud is particularly extended along the line of sight. Using the mean distance, it is possible to estimate the physical width of the region considering a $2^\circ$ width on the plane of the sky (as the circle in Fig.~\ref{orion_B_selected_dist}), which provides $14\,$pc (small-angle approximation, $w \simeq d\,\theta$ with $\theta = 2^\circ$ expressed in radians). Considering this result, it is unlikely that our dispersion represents the cloud depth. Also, a recent estimate of the distance of this region was made using the VLA, leading to an average distance of $423 \pm 15$ pc based on the stellar parallaxes of 5 very well identified YSOs \citep{Kounkel2017}.
Despite the relatively small quoted uncertainty, they have values ranging from 356 to 536 pc associated with this region, with a very small sample size. Interestingly, they refined their distance estimate in \citet{Kounkel2018}, where they used clustering in a 6D space that merges Gaia and Apogee data with several YSO catalogs including \citet{megeath_spitzer_2012}. In this study they found a $403\pm 4$ pc estimate that is slightly more compatible with our own result. \\ \vspace{-0.1cm} For the NGC 2071/2068 region, we found a mean distance of $431\pm 26$ pc, with an even larger standard deviation of 53 pc. This result suggests that the region is farther away by around 30 to 40 pc than the Orion nebula region, which is more than what their alignment in the plane of the sky suggested. In comparison, \citet{Kounkel2017} estimated a distance of $388 \pm 10$ pc, which is the opposite of the trend we observed. Still, this estimate was only based on 3 YSOs, with again strong differences between their sources, estimated at 383, 392 and 455 pc, respectively. Their distance refinement from \citet{Kounkel2018} is more compatible with our result, with a $417\pm 5$ pc estimation. \\ \vspace{-0.1cm} The LDN 1622 region appears much closer than expected, with a mean distance of $343\pm 13$ pc and a much smaller standard deviation of 17 pc. This result clearly separates the region from the rest of the molecular complex. The same conclusion was reached by \citet{Kounkel2018}, who found a distance of $345 \pm 6$ pc that is almost identical to our own estimate. These results highlight that the widely adopted 400 to 420 pc distance estimate might often be incorrect, since the star-forming regions in the Orion molecular complex can spread over an 80 pc distance range, and some of them that were thought to be linked to the complex are actually up to 60 pc closer to us than the Orion nebula. \begin{table} \centering \caption{Orion B distance estimates and dispersion for each identified region.} \vspace{-0.1cm} \begin{tabularx}{0.80\hsize}{l @{\hskip 0.05\hsize} @{\hskip 0.08\hsize} *{3}{Y}} \toprule & NGC 2024 & NGC 2071 & LDN 1622\\ & /2022 & /2068 & \\ \midrule Raw catalog & 31/182 & 52/223 & 7/18\\ Raw X-match & 5/131 & 4/151 & 1/15\\ Full selection & 0/51 & 0/67 & 0/9\\ Mean (pc) & 381.8 & 431.5 & 343.4\\ Median (pc) & 397.0 & 424.9 & 337.8\\ StD (pc) & 46.9 & 52.9 & 17.4 \\ $p(-2\sigma)$ & 261.9 & 346.5 & 323.3\\ $p(-1\sigma)$ & 340.7 & 399.1 & 327.3\\ $p(+1\sigma)$ & 421.7 & 464.2 & 357.8\\ $p(+2\sigma)$ & 443.3 & 509.1 & 375.7 \\ \bottomrule \end{tabularx} \caption*{\vspace{-0.4cm}\\ {\bf Notes.} The object numbers are given for CI/CII YSOs. The full selection applied here is the same as in case (5) of Table~\ref{orion_A_select_crit}.} \label{orion_B_select_crit_and_dist} \end{table} \begin{figure}[!t] \centering \includegraphics[width=0.68\hsize]{images/orion_B_herschel.png} \caption[Orion B distribution from the F-C YSO candidates]{Distribution of the YSO candidates that are kept by the selection criteria to compute the distance of each region. The blue circles show the three regions for which a distance is estimated.} \label{orion_B_selected_dist} \end{figure} \newpage \subsection{NGC 2264 distance and 3D information} The last region for which we were able to provide a distance estimate is NGC 2264. As for Orion A, it is possible to make slices using the galactic longitude that mostly follow the sub-structures of the region.
We used the same selection as for Orion B, with a 0.95 membership threshold and the same $\sigma_{\varpi}/\varpi < 0.1$ criterion. We summarize in Table~\ref{ngc2264_select_crit} the effect of the basic selections. We note that there were no CI YSOs remaining after our final selection criteria. First, we made an average distance estimate over the whole region, which gave a mean distance of $738 \pm 43$ pc, a median distance of 742 pc, a standard deviation of 48 pc, and the following $-2$, $-1$, $+1$ and $+2\,\sigma$ percentiles: 630, 699, 781 and 827 pc. This global estimate is consistent with the \citet{rapson_spitzer_2014} estimates, but is even closer to the $723 \pm 50$ pc distance of the open cluster estimated with Gaia, which has a dispersion similar to our result \citep{Cantat-Gaudin_2018}. We note that this last estimate was not made using YSOs, and therefore our estimated distance should better trace the star-forming region inside NGC 2264.\\ Since the number of selected CII is low, we limited ourselves to four longitude bins of unequal sizes in order to get more stars in the less crowded regions. The selected bins are $l = [202.50,202.85], [202.85,203.10], [203.10,203.35], [203.35, 203.70]$. Figure~\ref{ngc2264_dist} shows the distance distribution of our YSO selection using the same visualization as for Orion A in Section~\ref{sect:orion_A_dist}. We observed that the distance estimate is much more constant in longitude, with only a variation of around 20 pc in the mean values. We summarize in Table \ref{tab_ngc2264_dist} the results for each bin, including the YSO count, the mean and median values, and the standard deviation along with the four percentile values. These detailed values indicate a small variation in distance along the star-forming cloud; however, our dispersions are significantly larger than for Orion A, making this trend difficult to confirm. We also note that our selection criteria removed most of the stars in the filament junction region G202.3+2.5 \citep{montillaud_2019_II}.
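For reference, the combined selection used repeatedly in this section reduces to a single boolean mask; a minimal sketch with hypothetical array names (the default thresholds are those quoted above for Orion B and NGC 2264):
\begin{verbatim}
import numpy as np

def select_yso(p_member, plx, plx_err, p_thresh=0.95, q_max=0.1):
    # network membership probability and parallax quality cuts
    return (p_member > p_thresh) & (plx > 0) & (plx_err / plx < q_max)
\end{verbatim}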
\begin{table} \centering \caption{NGC 2264 sample size for different selection criteria.} \vspace{-0.1cm} \begin{tabularx}{0.8\hsize}{l @{\hskip 0.1\hsize} *{2}{Y}} \toprule & CI YSOs & CII YSOs\\ \midrule Raw catalog & 101 & 469\\ Raw X-match & 8 & 390\\ with $\varpi$ & 6 & 355\\ with $P(X) > 0.95$ and $\sigma_{\varpi}/\varpi < 0.1$ & 0 & 142\\ \bottomrule \end{tabularx} \label{ngc2264_select_crit} \end{table} \begin{table} \small \caption{NGC 2264 distance estimates and dispersion for each galactic longitude bin.} \hspace{-0.9cm} \begin{tabularx}{1.1\hsize}{l @{\hskip 0.04\hsize} l l *{6}{Y} } \toprule $l$ interval & $N_{YSO}$ & Mean & Median & StD & $p(-2\sigma)$ & $p(-1\sigma)$ & $p(+1\sigma)$ & $p(+2\sigma)$ \\ (deg) & & (pc) & (pc) & (pc) & (pc) & (pc) & (pc) & (pc)\\ \midrule $[202.50,202.85]$ & 19 & $724.3\pm 45.6$ & 707.1 & 52.6 & 636.0 & 678.5 & 780.8 & 810.8 \\ $[202.85,203.10]$ & 45 & $730.8\pm 42.5$ & 738.4 & 51.7 & 575.0 & 695.8 & 770.4 & 800.1 \\ $[203.10,203.35]$ & 59 & $746.3\pm 36.9$ & 743.3 & 43.1 & 652.1 & 706.4 & 785.5 & 830.4 \\ $[203.35,203.70]$ & 18 & $745.4\pm 52.7$ & 740.7 & 39.2 & 676.1 & 705.9 & 780.0 & 814.3 \\ \bottomrule \end{tabularx} \label{tab_ngc2264_dist} \end{table} \begin{figure*}[!t] \hspace{-0.7cm} \begin{minipage}{1.1\hsize} \centering \begin{subfigure}[t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_3d_dist_dispersion.pdf} \end{subfigure}\\ \vspace{-0.0cm} \begin{subfigure}[t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/ngc2264_herschel.png} \end{subfigure} \end{minipage} \caption[NGC 2264 distance distribution from the F-C YSO candidates]{NGC 2264 distribution of the F-C YSO candidates, along with the sky distribution over the region. Error bars correspond to $(\sigma_{\varpi}/\varpi^2)$. The selected YSOs (all CII) are in green. The orange and blue markers are the mean and median distance for each longitude bin, respectively. The light and dark blue areas represent the 2 and 1 $\sigma$ percentiles for each bin, respectively. The background grayscale is the Herschel SPIRE 500 $\mathrm{\mu m}$ map.} \label{ngc2264_dist} \end{figure*} \clearpage \section{Additional discussion and further improvements} \label{yso_discussion} \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \subsection{Identified limitations to our results} With the dataset selected for this study, the quality of our results is mostly dependent on the proper choice of the $\gamma_i$ factors; that is to say, the main limitation comes from the construction of our labeled dataset. This is indeed expected to be the most critical part of any ML application, because the network only provides results as good as the input data. One of our major issues is that some subclasses of rare contaminants remain poorly constrained, like Shocks or PAHs, which leads to a significant contamination of the YSO classes. The non-homogeneity between the 1\,kpc small cloud dataset and the other datasets worsens this effect by increasing the dilution of these rare subclasses (Sect. \ref{cross_train}). They are almost evenly distributed across output classes in the O-O case, revealing that the network was not able to identify enough constraints on those objects. In contrast, for the C-C and F-C cases, they are randomly assigned to an output class. This means that they are completely unconstrained by the network, which failed to disentangle them from the noise of another class.
This effect appeared in those specific cases due to the increased dilution of those subclasses of contaminants.\\ On the other hand, the main source of contamination for CII YSOs is the Star subclass. Adding more of them has proven to improve their classification quality (Sects.~\ref{orion_results} and \ref{cross_train}), but at the cost of even more dilution of all the other subclasses, which has a stronger negative impact on the global result. Similarly, the YSO classes themselves should be more present to further improve their recall, but again at the cost of an increased dilution of the contaminant subclasses. The confusion between CI and CII YSOs is illustrated by Figure~\ref{missed_wrong_zoom}, where the misclassified YSOs of both CI and CII accumulate at the boundary between them in the input parameter space. This figure also illustrates the CII contamination from Other with the same kind of stacking, where the two classes are close to each other. A similar representation for all the CMDs is provided in Figure~\ref{missed_wrong_space}.\\ Overall, we lack data to get better results. Large Spitzer point source catalogs are available, but the original classification from \citet{gutermuth_spitzer_2009} was tailored for relatively nearby star-forming regions, where YSOs are expected to be observed. Therefore, using a non-specific large Spitzer catalog would mostly add non star-forming regions, which would create a significant number of false positive YSOs. In practice, these false positive YSOs would overwhelmingly contaminate the results, and the network performance would drop to the point where more than $50\%$ of CI YSOs are false positives. However, since one of our main limitations is the number of contaminants, a large Spitzer catalog could be used to increase the number of rare contaminants in the training sample by selecting areas that are known to be clear of YSOs. Unfortunately, this approach would mostly provide us with more Stars, Galaxies and AGNs, which are already well constrained, while the two most critical contaminant subclasses, Shocks and PAHs, originate mostly from star-forming regions. \\ \newpage \subsection{MIPS 24 micron band effect on the results} \label{sec:mips24} We investigate here the impact of the MIPS $24\ \mathrm{\mu m}$ band on the original classification, and therefore on the results of the network. As stated in Section~\ref{data_prep}, this band is used as a refinement step of the G09 method. Considering the classification performed using the four IRAC bands, it ensures that it is consistent with the $24\ \mathrm{\mu m}$ emission where available, for example by testing whether the SED still rises at long wavelengths to better distinguish between different YSO classes. However, it adds heterogeneity in the classification scheme, since objects that do not present a MIPS emission cannot be refined. It makes the results harder to interpret and gives more work to the network, as it has to learn an equivalent of this additional step. Moreover, the effect of this band on the final classification strongly affects some subclasses that are very rare in the dataset. For example, almost half of the objects initially classified as Shocks are reclassified as CI YSOs after this refinement step. Therefore, as it corresponds to a significant increase in complexity for very few objects, it is difficult to get the network to constrain them, considering the other limitations. It results in a strong contamination of the CI YSOs, as highlighted multiple times in our results.
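As an illustration of the kind of consistency test performed by such a refinement step (purely illustrative, not the actual G09 criteria; the zero points in Jy are approximate), one can check whether $\lambda F_\lambda$ still rises between 8 and $24\ \mathrm{\mu m}$:
\begin{verbatim}
import numpy as np

ZP_JY = {"8.0": 64.13, "24": 7.14}  # approximate zero points in Jy

def sed_slope_8_24(mag8, mag24):
    # spectral index of lambda*F_lambda between 8 and 24 microns;
    # a positive value means the SED still rises at long wavelengths
    f8 = ZP_JY["8.0"] * 10 ** (-0.4 * np.asarray(mag8))
    f24 = ZP_JY["24"] * 10 ** (-0.4 * np.asarray(mag24))
    # lambda*F_lambda is proportional to f_nu / lambda
    return (np.log10(f24 / 24.0) - np.log10(f8 / 8.0)) / np.log10(24.0 / 8.0)
\end{verbatim}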
\\ On the other hand, most of the large Spitzer surveys lack a $24\ \mathrm{\mu m}$ MIPS band measurement, preventing us from generalizing our network to those datasets. Nevertheless, we chose to keep this band in our study to have the most complete view of its effect on our network. To quantify this effect, we trained networks that included neither the MIPS refinement step nor the $24\ \mathrm{\mu m}$ band as input. These networks have shown a small increase in performance, especially for CI YSOs, with a $2\%$ to $3\%$ improvement in recall and precision in the F-C case. This can mainly be explained by the simplification of the problem, but also by the greater number of objects in rare subclasses like Shocks. Moreover, such results could be generalized over larger datasets. In this case, a MIPS refinement step could still be performed \textit{a posteriori} on the network output, for objects where this band is available. Interestingly, although the absence of the MIPS refinement step could be expected to degrade the absolute reliability of the classification, the potentially large increase in the number of rare subclasses may improve the overall network performance sufficiently for the net effect on the absolute accuracy of the classification to be positive. \\ We emphasize here that the inherent difficulties that come from the use of the MIPS band can be generalized to the addition of any other band from a different survey. As we discussed in Section~\ref{yso_ml_motivation} with the study from \citet{miettinen_protostellar_2018}, cross-matching several surveys to obtain more bands comes at the cost of far fewer objects, or of divergent classification paths that are very difficult to constrain using ML methods. Still, very well identified YSOs with several bands that better reconstruct the SED could be used as a training dataset to construct a single large-scale infrared survey classification. In this case, selection effects should be looked into with care, and it would still require a significant number of examples. \subsection{Usage of Spitzer colors instead of bands} \label{sec:color_usage} We stated in Section~\ref{data_prep} that we chose to use the IRAC and MIPS bands, along with their respective uncertainties, as direct input features. The obvious alternative would have been to use colors, which have the advantage of being robust to many environmental properties of the star-forming regions of interest, such as the distance. While this approach could be efficient in principle, the G09 classification was constructed using a few direct band criteria, like in the C frame of Figure~\ref{fig_gut_method}. This allows the G09 method to more robustly exclude some extra-Galactic contaminants, but it also excludes the faintest YSOs and consequently limits the method to nearby star-forming regions. Since the present study sticks to the G09 scheme for training, we were constrained to use these magnitudes in one way or another. The next section discusses possible alternative methods to construct the training dataset, in which case the use of colors as input features could be much more relevant.\\ From the network standpoint, using bands or colors as input features is largely equivalent, since the network is able to reconstruct one from the other. Still, it would have an effect on the normalization of the features, since colors are, in principle, less prone to variations in feature space between various star-forming regions.
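As a minimal sketch of this alternative input construction (hypothetical array layout; the five columns are assumed to be the four IRAC bands plus MIPS $24\ \mathrm{\mu m}$), adjacent-band colors can be built directly from the magnitudes, optionally keeping one raw magnitude so that G09-like direct band cuts remain possible:
\begin{verbatim}
import numpy as np

def bands_to_colors(mags, keep_band=None):
    # mags: (N, 5) array of [3.6], [4.5], [5.8], [8.0], [24] magnitudes
    colors = mags[:, :-1] - mags[:, 1:]   # [3.6]-[4.5], ..., [8.0]-[24]
    if keep_band is None:
        return colors
    return np.column_stack([colors, mags[:, keep_band]])
\end{verbatim}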
Another argument for using colors is that they should already form a more appropriate space for the problem we want to solve. However, none of our attempts to use various color combinations, or mixtures of bands and colors, plus uncertainty combinations, outperformed our training using solely the bands and uncertainties directly. In all cases, the prediction results were very similar; the network tends to train slightly faster when using colors, but at the cost of a small increase in result dispersion. For this last reason, we chose to keep the bands as input features for the present study. \subsection{Method discussion} \label{Method_discussion} Our approach has several caveats, the main one being that we built our labeled dataset from the preexisting G09 classification, which has its own limitations, including the placement of the cuts, the fact that it was constrained on only a few star-forming regions, the use of magnitude cuts limiting the distance range, etc. As a consequence, our prediction is likely to inherit several of these limitations. The membership probability discussed in the previous section provides a first but limited view of the uncertainties of the original classification scheme. One approach to completely release our methodology from its dependence on the G09 scheme would consist in building our training set from a more conclusive type of observations, like visible spectroscopy to detect the $H_\alpha$ line that is usually attributed to gas accretion by the protostar \citep{Kun_2009}, or (sub-)millimeter interferometry to detect the disks \citep[e.g.][]{ruiz-rodriguez_2018, alma_disk_yso, tobin_2020}. Alternatively, a large set of photometric bands could be gathered to reconstruct the SED across a wider spectral range, as in \citet{miettinen_protostellar_2018}. Unfortunately, for now, too few objects have been observed that extensively to build a labeled sample large enough to efficiently train most ML algorithms.\\ Another approach would be to use simulations of star-forming regions \citep[e.g.][]{padoan_2017, vazquez-semadini_2019} and of star-forming cores \citep[e.g.][]{robitaille_interpreting_2006} to provide a mock census of YSOs and emulate their observational properties. This option would enable us to generate large training catalogs, and would provide additional control on the YSO classes, but at the cost of other kinds of biases coming from the simulation assumptions. An additional difficulty of this approach would be to find a way to generate the required large variety of contaminant objects, each of which would require a dedicated treatment.\\ A different strategy could consist of improving the method itself. With feedforward neural networks like in this study, there may still be room for improvement by using deeper networks with, for example, a different activation function, weight initialization, or a more complex error propagation. We explore this aspect later in the present manuscript (see Sect.~\ref{cnn_hyperparameters}), as it first requires an introduction to more complex networks (Sect.~\ref{cnn_global_section}). By choosing a completely different, unsupervised method, one could work independently of any prior classification. However, there is a risk that the classes identified by the method do not match the classical ones. In particular, the continuous distribution from CI to CII YSOs, and then to main sequence stars, is likely to be identified as a single class by such algorithms.
A middle ground could be semi-supervised learning algorithms such as Deep Belief Networks \citep{Hinton504}. Such algorithms were designed to find a dimensionality reduction of the given input feature space that is more suitable to the problem, thereby building their own classes based on the proximity of objects in the feature space. They could then be connected with a regular supervised feedforward neural network layer, which would combine the found classes into more usual ones. This approach would reduce the impact of the original classification on the training process, and therefore its impact on the final results.\vspace{-0.8cm}\\ \subsection{Conclusion and perspectives} \label{conclusion_perspectives} \vspace{-0.2cm} We have presented a detailed methodology to use Neural Networks to extract and classify YSO candidates from several star-forming regions using Spitzer infrared data, based on the method described by \citet{gutermuth_spitzer_2009}. This study led to the following conclusions.\vspace{-0.25cm}\\ Neural Networks are a suitable solution to perform an efficient YSO classification using the four Spitzer IRAC bands and the MIPS 24 $\mu$m band. When trained on one cloud only, the prediction performance mostly depends on the size of the sample. Fairly simple networks can be used for this task, with just one hidden layer that only consists of 15 to 25 neurons with a classical sigmoid activation function.\vspace{-0.25cm}\\ The prediction capability of the network on a new region strongly depends on the properties of the region used for training. Therefore, the study revealed the necessity to train the network on a census of star-forming regions to improve the diversity of the training sample. A network trained on a more diverse dataset has been able to maintain a high-quality prediction, which is promising for its ability to be applied to new star-forming regions.\vspace{-0.25cm}\\ The dataset imbalance has a strong effect on the results, not only on the classes of interest, but also for the hidden subclasses considered as contaminants. Carefully rebalancing each subclass in the training dataset, according to its respective feature space coverage complexity and to its proximity with other classes of interest, has proven to be of critical importance. The use of observational proportions has also been shown to be of major importance to properly assess the quality of the prediction.\vspace{-0.25cm}\\ This study showed that the network membership probability prediction complements the original G09 classification with an estimate of the prediction reliability. It allows one to select objects based on their proximity to the whole set of classification cuts in a multi-dimensional space, using a single quantity. In addition, the identification of objects with a higher degree of confusion highlights parts of the parameter space that might lack constraints and that would benefit from a refinement of the original classification. The corresponding catalog of YSO candidates in Orion and NGC 2264 predicted by our final ANN, along with the class membership probability for each object, is publicly available at CDS.\vspace{-0.25cm}\\ We showed that our prediction can efficiently be used in combination with a survey like Gaia to recover distance information on the star-forming regions.
In the most favorable cases, it allowed us to reconstruct continuous distance information, while in the other cases it provided competitive global distance estimates of the star-forming clouds. We also showed that the younger, more interesting CI YSO candidates either do not have sufficiently strong optical emission, or have recovered parallaxes with too large uncertainties. It would be interesting to be able to recover these objects with a good parallax measurement, as they proved to trace the star-forming regions better than the more evolved CII YSOs.\vspace{-0.25cm}\\ \vspace{-0.5cm} The current study contains various limitations, mainly the lack of additional nearby star-forming region catalogs containing the sub-contaminant distinction needed to construct complete training samples. Moreover, some sub-classes, namely Shocks and PAHs, remain strongly unconstrained due to their scarcity. Identifying additional shocks and resolved PAH emission in Spitzer archive data could significantly improve their classification by our networks, and consequently improve the YSO classification. Attention has also been drawn to the use of simulations to compile large training datasets, which might be used in ensuing studies.\vspace{-0.25cm}\\ Finally, our method could be improved by adopting more advanced networks, which would probably overcome some difficulties, for example by avoiding local minima more efficiently, and would improve the raw computational performance of the method. Semi-supervised or fully unsupervised methods may also be promising tracks to predict YSO candidates, and may surpass the supervised methods in terms of prediction quality. On the other hand, we have highlighted that most of the difficulties come from the training set construction, which is mostly independent of the chosen method. Therefore, future improvements in YSO identification and classification from ML applied to mid-IR surveys will require the compilation of larger and more reliable training catalogs, either by taking advantage of current and future surveys from various facilities, like the Massive Young Star-Forming Complex Study in Infrared and X-ray \citep[MYStIX,][]{Feigelson_2013} and the VLA/ALMA Nascent Disk and Multiplicity survey \citep[VANDAM,][]{tobin_2020}, or by synthesizing such catalogs from simulations.\vspace{-0.25cm}\\ Most of the {\bf methods and results discussed in this first part (Part I) are published} in the section "Numerical methods and codes" of the journal Astronomy and Astrophysics \citep[][accepted]{cornu_montillaud_20}. \part[Reconstruction of the 3D interstellar extinction of the MW]{Reconstruction of the 3D interstellar extinction of the Milky Way} \newpage \null \thispagestyle{empty} \newpage \etocsetnexttocdepth{2} \etocsettocstyle{\subsection*{Part II: Reconstruction of the 3D interstellar extinction of the Milky Way}}{} \localtableofcontents{} \clearpage \section{Using interstellar extinction to infer the 3D Milky Way structure} \label{ext_map_first_section} \etocsettocstyle{\subsubsection*{\vspace{-0.6cm}}}{} \localtableofcontents \subsection{Current state of 3D extinction maps} \label{ext_properties_part3} We introduced in Section~\ref{intro_extinction} that extinction is a physical process that reduces the apparent luminosity of an astronomical object.
The latter is also reddened, following an extinction law that relates the effect of the extinction to the wavelength for a given dust grain size and composition, which is considered constant in the diffuse ISM of the Milky Way. For large-scale Galactic studies or stellar population studies, having a good measurement of the extinction is a necessity, since all the light observed has traveled through at least a small piece of ISM. It can also be important for extra-galactic and cosmological studies to remove the Milky Way foreground extinction in order to measure absolute magnitudes, for example to estimate the distance to standard candles like type Ia supernovae or Cepheids \citep{Nataf_2016}.\\ One major issue with extinction is that it is an integrated quantity, accumulated along the whole light path from the source to the observer. Moreover, measuring the reddening of a source properly requires knowledge of its true emitted spectrum. Since the extinction value is directly linked to the dust density, being able to estimate the differential extinction as a function of the distance is a way to reconstruct the Milky Way dust structure. For these reasons, the reconstruction of 3D extinction maps of the Milky Way has been an active topic for several years, involving many research groups.\\ \afterpage{% \begin{figure*}[!t] \hspace{-1.1cm} \begin{minipage}{1.15\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \hspace{+0.2cm} \includegraphics[width=\textwidth]{images/redline_new_NGP_cropped_borders.jpg} \end{subfigure} \vspace{0.2cm} \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=\textwidth]{images/lallement2019_borders.jpg} \end{subfigure}\\ \vspace{0.2cm} \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=\textwidth]{images/green2019.jpg} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.48\textwidth} \vspace{-1.0cm} \includegraphics[width=\textwidth]{images/chen_2019.jpg} \end{subfigure} \vspace{0.2cm} \end{minipage} \caption[Examples of recent 3D extinction maps]{Four recent extinction maps based on different methods and data. These maps all represent a face-on view of the differential extinction in the Milky Way disk, integrated over a given galactic height or latitude. In all maps the Sun is in the middle and the Galactic center is to the right. {\it Top-left}: Map from \citet[][in prep.]{Marshall_2020} integrated for $|b| < 1$ deg using solely 2MASS. Circles are spaced by 2 kpc in radius; the purple and red squares represent the range of two of the other maps, as indicated. {\it Top-right}: Map from \citet{Lallement_2019} integrated for $|z|< 300$ pc using 2MASS and Gaia DR2 cross-matched data; the purple square represents the range of the bottom-left map. {\it Bottom-left}: Map from \citet{Green_2019} integrated for $|z|< 300$ pc using Pan-STARRS, 2MASS and Gaia DR2 cross-matched data. {\it Bottom-right}: Map from \citet{Chen_2019} integrated for $|b|< 0.1$ deg using WISE, 2MASS and Gaia DR2 cross-matched data.} \label{extinction_maps} \end{figure*} \clearpage } The usual approach consists in estimating the extinction for each star, or group of stars, along a line of sight (LOS). The extinction is usually measured in the infrared, which is less affected by extinction than optical wavelengths and thus provides a deeper view in dense environments.
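In this per-star picture, the quantity attached to a star at distance $d$ is the cumulative extinction at the observed wavelength, which relates to the differential extinction used throughout this part by the standard integral relation (written here in our notation):
\begin{align}
A_\lambda(d) = \int_0^d a_\lambda(s)\, \mathrm{d}s ,
\end{align}
where $a_\lambda(s)$ is the differential extinction per unit distance at position $s$ along the LOS. Since $a_\lambda$ traces the local dust density, reconstructing it as a function of distance amounts to reconstructing the 3D dust structure.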
The approaches that rely on stellar parallaxes to get the distance of each star and reconstruct the extinction distribution from it are usually limited to short distance estimates, and are not able to provide constraints on galactic-scale structures other than the closest arms (Local, Perseus, Sagittarius). Still, such approaches usually provide a better resolution at short distances. In contrast, methods that rely solely on infrared data are usually able to make predictions at much greater distances, but suffer from a lower resolution and a quickly increasing distance uncertainty, which produces elongated artifacts known as "fingers of God".\\ Here we discuss some of the best-known extinction maps in order to identify their present limitations. Among them we can cite \citet{Marshall_2006}, with its latest refinement in which we participated \citep[][in prep.]{Marshall_2020}. This map is built using a per-line-of-sight approach and works by comparing statistical predictions of the Besançon Galaxy Model (see Sect.~\ref{BGM_sect}) with the equivalent observed quantities. The map is made using solely 2MASS data, allowing a greater distance range (up to 14 kpc) thanks to the lower optical depth of interstellar clouds in the infrared than in the visible. The latest iteration of this map is visible in the top-left frame of Figure~\ref{extinction_maps} using a face-on view of the Galactic disk. It has been successfully used to identify some large-scale structures that are coherent with several of the expected Galactic arms. Still, they can be difficult to distinguish from one another due to the relatively low distance resolution in some places of the map (e.g. the Scutum-Crux and Norma arms at a distance of $\sim$4 kpc in the direction of the Galactic Center). It also detects a first part of the Galactic bar, centered around 8 kpc. The main limitations of this map are that the anti-center region is not strongly constrained due to a lower star count, and that the fingers-of-God artifacts remain significant.\\ The map from \citet{Lallement_2019} is not based on a line-of-sight approach, but on a global inversion from a given star list. The aim is to find the extinction value for each star and to reconstruct a 3D spatially coherent distribution from it. In practice, the latest iteration of the map is based on a cross-match between Gaia and 2MASS and uses the magnitudes from both surveys. The combination of: (i) a meticulously-constructed hierarchical inversion method that is based on Bayesian processes, and (ii) the very large set of individual Gaia distances from stars that have been carefully selected, leads to an unmatched map resolution. The corresponding map is shown in the top-right frame of Figure~\ref{extinction_maps}, also using a face-on view. This map efficiently disentangles several extinction regions that are aligned on the same line of sight, which is difficult to achieve with other methods. The structures present little to no stretching in distance. This map has been shown to reconstruct well-known structures at close range and to highlight a global curved and continuous structure associated with the Local arm. Some sub-structures also tend to match the expected position of some other arms, namely Perseus, and a Sagittarius-Carina foreground.
Still, the limits of this map are the possible biases in the star selection and the low distance range of the prediction, up to 3 kpc only, which is mainly due to the cross-match with Gaia, which is much more affected by extinction than 2MASS. The authors also highlight that the distance estimates from the parallax inversion might be underestimated, and that there is a lower limit in structure size induced by the method that remains unrealistic, which is more problematic at larger distances.\\ Another common map we can cite is that of \citet{Green_2019}, which uses individual lines of sight but with a prior on the correlation between adjacent ones. The method is made of several steps that consist in finding the extinction and distance modulus of each star from the observed parallax and photometric magnitudes using a set of priors. Then another step reconstructs the line-of-sight extinction distribution using the star list sampled into distance bins. Finally, a Gaussian process is added to correlate adjacent lines of sight. In practice they used a cross-match between Pan-STARRS 1 \citep{Chambers_2016}, 2MASS and Gaia DR2 (parallax only), inducing an even lower maximum distance estimate than the map from \citet{Lallement_2019}, with only a 2 kpc range of confident prediction. The bottom-left frame of Figure~\ref{extinction_maps} also shows a face-on view of this map. The authors observed a convincing match between their prediction and well-known star-forming regions that are associated with different arms that appear roughly aligned in the map. The Local arm structure is the most convincing, with Perseus, Sagittarius and Scutum being either not strongly predicted by the map or less well defined by the reference star-forming regions, which present a high distance uncertainty. The limitations of this map are the survey selection, which prevents any prediction in more than a quarter of the MW, the short prediction range, and finally the several individual priors that could accumulate biases. We also note that, despite the added correlation between lines of sight, there are still significant variations between adjacent ones, especially for distances greater than 1\,kpc.\\ Lastly, we mention the map by \citet{Chen_2019}, which uses a machine learning method based on the very efficient random forest algorithm, trained using well-constrained example stars. They use this method to predict various color excesses for individual stars using the 2MASS and Gaia magnitudes. Their full star list is then decomposed into lines of sight for which a color excess vs. distance profile is fitted using Gaia distances from \citet{bailer-jones_2018}. The bottom-right frame of Figure~\ref{extinction_maps} shows a face-on view of their prediction. The overplotted arms are from \citet{Reid_2014} and mostly match what would be the Local arm and the near part of Sagittarius. The match with the Perseus arm that the authors claim to observe in their map seems more uncertain, because many structures similar to those used for this assessment are observed outside any arm. Still, the map is mostly in good agreement with \citet{Lallement_2019} when looking at comparable distance ranges, which is expected since they used very similar input data and dimensions.
The limits of this map are, again, mainly the relatively small distance range, as well as strong fluctuations between adjacent lines of sight due to the absence of correlation between them.\\ We do not perform an exhaustive extinction map census here, and we will stop with these four more detailed maps, but there are still a few works worth mentioning: the work by \citet{Drimmel_2001}, which was a precursor of the present extinction maps; the map from \citet{Sale_2014} and all its refinements, which also relies on a hierarchical Bayesian approach but uses other types of surveys, e.g. in $H_\alpha$; and the recent work from \citet{Rezaei_2017, Rezaei_2018}, which uses Gaussian processes to overcome several difficulties of the previous maps and is discussed further in Section~\ref{other_ext_map_ML}.\\ For the present study, the aim is to construct a method that is efficient at an intermediate scale, between the large-distance maps from \citet[][in prep.]{Marshall_2006,Marshall_2020} and the closer-range ones like \citet{Lallement_2019}. More details on our objectives are given in Sect.~\ref{cnn_maps_objective}. \subsection{Per line of sight approach} We describe here the approach that consists in selecting a small cone observation in the sky that contains several stars. This cone is designated as a Line Of Sight (LOS), which is defined by its center position on the sky and by its radius or solid angle. We illustrate in Figure~\ref{los_illustration} a simple case where all the extinction is packed into two individual clouds. In this simple case, it is visible that the stars in front of the first cloud do not suffer from extinction, the stars between the two clouds are extincted only by the first one, while the stars behind the second cloud are extincted by both clouds.\\ In this context, we want to reconstruct the distribution of the extinction as a function of the distance for this LOS. If the majority of the extinction is concentrated in dense clouds, then this is equivalent to finding the positions of the clouds along the LOS. The corresponding cumulative or differential extinction profile, assuming that the clouds have a negligible extent along the LOS, is illustrated in Figure~\ref{simple_profile_expl}. To construct a large 3D extinction map, it is then possible to decompose the plane of the sky into several small individual lines of sight.\\ \newpage Inferring the extinction along the LOS can be done using different methods and data. They all work from observed stars, but use different quantities. One approach consists in estimating the distance and the intrinsic extinction of the stars and then inferring the extinction profile along the LOS. This approach was adopted by \citet{Green_2018}, who used Bayesian inference with a Markov Chain Monte Carlo (MCMC) technique to estimate the distance-extinction pairs of a sample of stars built by cross-matching Pan-STARRS-1, Gaia DR2, and 2MASS stars. Another MCMC step is then used to infer an extinction profile compatible with the distance-extinction pairs in each conical LOS. In \citet{Lallement_2019}, a catalog made of stars from the cross-match of 2MASS and Gaia DR2 was compiled, where the distances were derived from Gaia parallaxes with uncertainties better than 20\%, and the extinction from a fit of the observed Gaia-2MASS colors against the intrinsic colors by adjusting the extinction parametrization.
The differential extinction distribution is then inferred by a hierarchical, multi-scale Bayesian inversion in 3D, where a 3D Gaussian kernel, whose size depends on the current scale, is used to ensure the spatial coherence. \\ In contrast, \citet[][in prep.]{Marshall_2006, Marshall_2009, Marshall_2020} forgo determining the distance and extinction of individual stars. Instead, they rely on a stellar population model of the Milky Way (see next section), which provides the statistical distributions of the intrinsic stellar observational properties for each LOS. The extinction profile of each LOS is inferred from the statistical comparison between the intrinsic and observed distributions of stars. Different methods were attempted, including a genetic algorithm \citep{Marshall_2009} and MCMC \citep[][in prep.]{Marshall_2020}.\\ Figure~\ref{extinction_maps} compares some of the 3D extinction maps obtained by the authors mentioned above. They reveal some of the major limitations of these approaches: (i) it is difficult to recover spatially coherent structures between lines of sight without adding an ad-hoc correlation, (ii) once a first front of extinction has been localized it is much more difficult to reliably detect additional extinction beyond this front, and (iii) the large difference between the uncertainties parallel and perpendicular to the LOS leads to elongated radial artifacts often called "fingers of God", with a strong variation of the prediction between adjacent LOS. \\ In the present work, we elaborate on the approach by Marshall et al., comparing 2MASS and Gaia data to a stellar population model of the Milky Way: the Besançon Galaxy Model. \newpage \begin{figure}[!t] \centering \includegraphics[width=1.0\hsize]{images/icons_cloud_ext.pdf} \caption[Illustration of a simple LOS with two clouds]{Simple line of sight (cone view) example that contains two clouds, with the observer to the left. The star colors are reddened and faded according to the observed cumulative extinction effect on them from the observer's point of view.} \label{los_illustration} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.75\hsize]{images/simple_mock_profile.pdf} \caption[Extinction profile example]{Simple extinction profile example corresponding to the LOS with two clouds of Figure~\ref{los_illustration}. The profile is represented using both the cumulative and differential extinction from the same underlying quantity.} \label{simple_profile_expl} \end{figure} \clearpage \subsection{The Besançon Galaxy Model} \label{BGM_sect} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{images/2MASS_obs_l280bp0.pdf} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{images/Gaia_obs_l280bp0.pdf} \end{subfigure} \caption[Observed diagrams for comparison with the BGM]{Illustration of observed diagrams that can be reproduced using the Besançon Galaxy Model. The two diagrams are obtained from a $4^\circ$ radius centered at galactic coordinates $l=280$ deg, $b=0$ deg. {\it Left:} 2MASS observed [J-K]-[K] CMD. {\it Right:} Gaia DR2 observed [Gmag]-[Parallax] diagram.} \label{expl_observed_cmds} \vspace{-0.2cm} \end{figure*} \vspace{-0.1cm} We are fortunate to have privileged access to the Besançon Galaxy Model (BGM), a world-class stellar population synthesis model \citep{Robin_2003,Robin_2012}.
It was notably adopted to anticipate Gaia results \citep{Robin_2012b} and is still used as a validation tool for Gaia catalogs \citep{Arenou_2018}. This model is able to generate 3D stellar distributions that are statistically representative of many observables. It is based on four distinct stellar populations: a thin disk, a thick disk, a bulge and a halo. The model is constrained by both observations and theoretical recipes that account for stellar evolution \citep{Lagarde_2012}, dynamics \citep{Bienayme_1987}, the initial mass function \citep{Haywood_1997a, Haywood_1997b}, etc. A computed BGM realization takes the form of a star list that contains various physical quantities for each modeled star, like mass, velocity, age, magnitudes, stellar count, distance, etc. Regarding the stellar emission, the BGM uses color tables based on stellar atmosphere models to accurately reproduce stellar colors \citep[based on and refined in][]{Lejeune_1997, Westera_2002}.\\ \vspace{-0.1cm} It is interesting to note that, in order to convert absolute quantities to observable ones appropriately, the BGM must use an extinction map. Depending on the model version, it uses different extinction maps, and it is even possible to select the most appropriate map depending on the region of the Milky Way. Overall it relies mainly on those of \citet{Marshall_2006} and, more recently, \citet{Lallement_2019}. This is another example of the importance of producing good quality extinction maps.\\ \vspace{-0.1cm} In order to be representative of the real Milky Way, the BGM prediction must be used only with statistical representations and with a large enough number of stars. A suitable representation is a Color-Magnitude Diagram, which is similar to a 2D histogram of the star list, or any comparable representation involving a stellar observable (e.g. parallaxes). Figure~\ref{expl_observed_cmds} shows two examples for observed data, with a 2MASS [J-K]-[K] CMD and a Gaia [Gmag]-[Parallax] diagram using a $4^\circ$ radius line of sight centered on the galactic coordinates $l=280$ deg, $b=0$ deg. We detail in Sections~\ref{cmds_construction_section} and \ref{gaia_diag_constuction} how the BGM can be used to reconstruct these diagrams in a realistic fashion. \subsection{Measuring extinction using the BGM} \label{extinction_with_bgm_intro} The fact that the BGM first produces stellar quantities without the extinction contribution is a very useful feature in our case. On the left frame of Figure~\ref{obs_model_ext_comparison} we show the same [J-K]-[K] 2MASS CMD predicted by the model as in Figure~\ref{expl_observed_cmds}, but for a smaller radius of $0.25^\circ$ and without the extinction. Following the physical properties of the extinction exposed in Sections~\ref{intro_extinction} and~\ref{ext_properties_part3}, the stars will be both dimmed and reddened by the extinction, creating a translation toward fainter [K] and larger [J-K], due to the fact that the wavelength of J ($1.235\ \mathrm{\mu m}$) is shorter than that of $\mathrm{K_s}$ ($2.159\ \mathrm{\mu m}$), inducing a stronger extinction in the J band. The right frame of Figure~\ref{obs_model_ext_comparison} illustrates this effect by showing the equivalent observed 2MASS CMD. \\ \begin{figure}[!t] \centering \vspace{-0.3cm} \includegraphics[width=0.93\textwidth]{images/model_vs_obs_extinction.pdf} \caption[Model without extinction and observed extinction for the same LOS]{2MASS [J-K]-[K] CMD comparison between the model prediction without extinction and the observed extinction for the same LOS.
The two diagrams are obtained from a $0.25^\circ$ radius centered at galactic coordinates $l=280$ deg, $b=0$ deg. The red arrow illustrates the extinction translation direction for individual stars. The dashed blue line corresponds to the observation limit cut as described in Sect.~\ref{ext_profile_and_cmd_realism}. {\it Left:} 2MASS model without extinction. {\it Right:} 2MASS observed [J-K]-[K] CMD.} \label{obs_model_ext_comparison} \end{figure} From these results it is possible to formulate the following hypothesis: {\bf assuming a perfectly representative model, the difference between the BGM and the observed CMD is solely due to interstellar extinction}. Considering this, it should be possible to design a method that infers the extinction profile from the differences between synthetic and observed CMDs. This model-observation comparison is already at the heart of the \citet{Marshall_2020} method, which also uses the BGM as a reference. We emphasize that, even if the effect of extinction on a single star in this diagram is a simple translation, there is a complex distribution of these stars along the LOS, entangled with the extinction distribution. It means that all stars will move following their own local cumulative extinction, inducing a much more complex transformation of this diagram, including translation, stretching and overlap. It also has to account for a cut in magnitude that corresponds to the limits of the 2MASS observations (see Sect.~\ref{ext_profile_and_cmd_realism}).\\ \newpage \subsection{Using Machine Learning for this task} \label{other_ext_map_ML} Many different methods have been used to perform a similar comparison in order to reconstruct the extinction distribution (see references in Sect.~\ref{ext_properties_part3}), some recent ones relying on machine learning as well. Still, we observed that few attempts were made using classical algorithms, and that the solutions proposed are usually too computationally intensive for large map reconstructions. For example, in \citet{Marshall_2009} they used a Genetic Algorithm (GA) method (see Sect.~\ref{ml_application_range}, and Fig.~\ref{fig_ml_types}) to reconstruct the distances of individual dark clouds from extinction. Still, they used the GA to fit each LOS individually, in a very similar fashion to the recent work by \citet{Marshall_2020}, and did not construct a high-resolution extinction map. Another recent example comes from \citet{Rezaei_2017} and \citet{Rezaei_2018}, who use Gaussian Processes (GP) (Sect.~\ref{ml_application_range}, and Fig.~\ref{fig_ml_types}) to reconstruct the 3D extinction. In this approach there is no LOS consideration, and large regions are reconstructed at once. It notably reconstructs 3D spatially coherent structures in a very smooth way. The main difficulty is that, because it relies on a one-time large inversion that scales non-linearly with the number of data points and the resolution, it is very computationally heavy. To overcome this, the authors mostly used restricted datasets, which negatively impacted the statistical representation of the problem. Another approach, using Random Forests, is described by \citet{Chen_2019}, where they used it to fit the Gaia color excess of individual stars that are then positioned in distance using Gaia parallaxes.
This method is certainly computationally efficient, but the stars that have the largest reddening will have very uncertain distance estimates, which results in significant fingers-of-God effects.\\ We highlight that, to our present knowledge, there is no published application that uses any kind of Artificial Neural Network to reconstruct extinction profiles, or that uses the extinction distribution to infer distances, let alone a large 3D structural reconstruction of, or from, the extinction. The only links between ANN and extinction we found are in applications that assess the cumulative extinction as a single quantity for given stellar clusters \citep{Bialopetravivius_2020} or for galaxy observations \citep{Almeida_2010}. We believe that this method may have been considered too computationally heavy under the intuitive approach where each LOS would be fitted individually by an ANN, following an approach similar to that of \citet{Marshall_2009} with the GA method. The difficulty is that training an ANN for each LOS with a sufficient resolution on the plane of the sky to build a map is an unrealistic solution, due to the huge cumulative training dataset size it would require, and similarly the massive cumulative training time. The dataset size and computation time for a good single-LOS training, presented later in Section~\ref{2mass_single_los}, perfectly illustrate this point. However, {\bf we will demonstrate in the present study that it is possible to design an ANN formalism that can be trained one single time on various lines of sight simultaneously and that can still predict individual LOS extinction profiles} (Sect.~\ref{los_combination}). Additionally, since this type of method is capable of combining different quantities at the same time and finding the correlations automatically, it should permit a combination of photometric and astrometric surveys without the need for a cross-match (Sect.~\ref{gaia_2mass_ext_section}). We also note that, contrary to a widespread belief, simple ANN architectures can be tweaked to provide result uncertainties in the form of a posterior probability distribution, just like a Gaussian process method (Sect.~\ref{dropout_error}). \newpage \subsection{Objective and organization} \label{cnn_maps_objective} The aim of this second part of the manuscript (Part II) is {\bf to propose an ANN architecture that is capable of sharing information from various lines of sight in a single training and that can be used to predict large extinction maps}. For this we extensively describe a more advanced ANN formalism that is based on the redundancy of information when using images as input, namely Convolutional Neural Networks. We then describe how it can be used to reconstruct extinction profiles using CMDs from the Besançon Galaxy Model and from observational surveys. We also detail the construction of the training dataset, which requires several precautions that have a considerable impact on the prediction capability. We analyze the effect of several properties of the network on the prediction quality. We finally use this formalism to predict extinction maps for a portion of $45^\circ$ of the Milky Way disk using both a 2MASS-only dataset, and a {\bf 2MASS plus Gaia DR2 dataset without cross-match}.\\ As for the previous part, we emphasize that the results of the following part will soon be published in \citet{cornu_montillaud_20b} in the form of a short letter to the Astronomy and Astrophysics journal.
The present manuscript provides a large amount of additional material and analysis. \clearpage \section{Convolutional Neural Networks} \label{cnn_global_section} In this section we describe several additions to the ANN formalism introduced in Section~\ref{global_ann_section} that can be used to construct much more complex network architectures and to efficiently process images. We will first describe how classical neural networks can be used on images and what the corresponding limitations are. Then, we will define the convolution operation and the associated convolutional layers and explain their training procedure. Finally, we will discuss the construction of deep ANN architectures along with descriptions of various necessary parameters. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \subsection{The image processing impulse} \label{image_process_section} Machine Learning is historically tightly related to the field of signal processing. Similar methods are used in both fields and they regularly influence each other. Image processing is certainly the strongest bond between the two, with a long history in signal processing and with most modern corresponding applications being made using ML. In this section we focus on describing how to use images as input for ANNs. \subsubsection{Spatially coherent information} \label{spatial_coherence} As we discussed in the GPU section (Sect.~\ref{matrix_formal}), images can be considered as two-dimensional arrays of pixels. Each pixel contains at least one value, often an integer encoded using an 8-bit format (0-255), and more rarely up to 16 or 32 bits. To obtain color images, at least 3 of these 8-bit arrays must be superimposed, with each layer corresponding to a specific color intensity in each pixel, often following the Red-Green-Blue (RGB) encoding. Fundamentally, an image is a decomposition of information, for example a physical object, captured into a flat ``2D'' discrete representation. Image recognition performs the opposite operation, that is, finding coherent information in an array of pixels and associating it with the more abstract object that the image represents. For example, Figure~\ref{im_proc_numbers} represents $6\times 7$ binary pixel representations of digits, from which one may want to identify the digit that is represented. This is a typical image classification example that will be discussed more concretely in Section~\ref{mnist_example}.\\ \begin{figure}[!t] \centering \includegraphics[width=0.7\hsize]{images/im_proc_numbers.pdf} \caption[Simple digit image $6\times 7$]{Simple digit representation as a $6\times 7$ binary pixel image.} \label{im_proc_numbers} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.55\hsize]{images/im_proc_loc.pdf} \caption[Cross pattern localization example]{Representation of a simple cross pattern on a $5\times 5$ image as input, and the corresponding localization prediction on an equivalent size output image.} \label{im_proc_loc} \end{figure} \begin{figure}[!t] \vspace{0.3cm} \centering \includegraphics[width=0.97\hsize]{images/im_proc_illustr_net.pdf} \caption[FC network for the cross pattern localization example]{Fully connected network corresponding to the cross pattern identification and localization problem. The images are flattened and each pixel, acting as a feature, is connected to each neuron of the hidden layer.
The pixels of the output layer are considered neurons.} \label{im_proc_illustr_net} \end{figure} One key element of the extraction of image information is that it is spatially coherent. To illustrate why spatial coherence can be difficult to exploit, we first show how the classical ANN algorithm described in Section~\ref{global_ann_section} can be used to compute information using images as input. We consider a network for pattern recognition and localization. An image, which may or may not contain a specific pattern, is presented to this network. The task of the algorithm is then to predict the position of the pattern in the image when it is present. This application is illustrated by Figure~\ref{im_proc_loc}, where the pattern to identify is a $3\times 3$ binary pixel cross and the expected output is an identically sized image that contains a positive value at the pixel that represents the center of the cross. Although we chose a single and very simple pattern, this application illustrates many modern ML uses in computer vision, where the objective is to find whether an image contains a specific object from a given set (classification) and to predict its location or a bounding box around it. This is the typical example that is shown for self-driving vehicles in Figure~\ref{computer-vision}.\\ \begin{figure}[!t] \centering \includegraphics[width=0.9\hsize]{images/computer-vision.jpg} \caption[Computer vision example for autonomous vehicle]{Image of the prediction of a computer vision deep learning algorithm for an autonomous vehicle. Objects are classified based on a list of useful elements for driving and are localized with a bounding box. {\it Image credits \href{https://www.lebigdata.fr/computer-vision-definition}{www.lebigdata.fr}.}} \label{computer-vision} \end{figure} Using this representation, the easiest approach to connect an input image to an ANN is to consider that each pixel is an individual feature. An input vector can then be constructed, with the same size as the image. In this specific example the output vector has the same dimension. Figure~\ref{im_proc_illustr_net} shows a corresponding single hidden layer network that takes as input and output the corresponding flattened images. This network is actually suitable to perform this task. In order to ease the comparison with other network architectures, we will now refer to such a network arrangement as a "fully connected" or "dense" network, and we will also use these formulations as adjectives, as in "fully connected layer" or "dense layer", to denote an MLP-like weight connection.\\ The first striking limitation of this representation is that the number of weights is very large. It has a cost in terms of computing performance and requires a lot of examples to be properly constrained. The second limitation is that, in order to train the weights of each pixel, examples of a cross on each pixel must be contained in the training set. This means that such a network, in this specific case, will need to be provided with all the possible positions as examples, and is therefore inefficient at generalizing the problem. \subsubsection{Information redundancy: pattern recognition} \begin{figure}[!t] \hspace{-1.1cm} \begin{minipage}{1.15\textwidth} \centering \includegraphics[width=1.0\hsize]{images/im_proc_loc_mult.pdf} \end{minipage} \caption[Multiple cross pattern detection in a large image with noise]{Two examples, A and B, of input and target images for cross pattern localization.
Multiple objects in the same $10\times 10$ image are permitted and the complexity is raised by the addition of noise or other patterns.} \label{im_proc_loc_mult} \end{figure} We now consider an example where the image is larger and can contain the looked-for pattern multiple times. This case is illustrated by Figure~\ref{im_proc_loc_mult} where we represented two different $10\times 10$ cases in which we have added some irrelevant input pixels, that could be noise or non looked-for patterns, to make the example more realistic. The images in this case can be plugged as before into a fully connected network. This time, considering that there are irrelevant patterns increasing the complexity of the problem, the network should be able to generalize information. Indeed, even if it is mandatory to provide at least one example of a searched pattern at each point, it is not necessary to add examples that correspond to all the possible irrelevant information combinations on all other pixels. A fully connected ANN trained on this problem might be more time-efficient at predicting the solution than a naive algorithm that searches for the presence of a cross at each pixel position in the image. A similar illustration on differentiating T from C letter representations independently of translation and rotation using an MLP is presented in \citet{rumelhart_learning_1986}. \\ However, an instinctive reaction to this problem is to notice that there is only one pattern to look for, and that learning this pattern once and for all, and then searching for it at different places, would be much more efficient than learning how to react to this pattern at every possible position in the image. This is because one, as a human being, is sensitive to the redundancy of information. It should then be possible to construct a network architecture that is able to perform the same task. The objective is to build an operation that is capable of detecting a pattern in a way that is invariant under translation in the image. This can be done by creating a unique artificial receptive field that can be applied at several places on the image, strongly reducing the number of parameters that are needed by sharing them over the full image. In practice this is done using an operation that is called a convolution and that relies on a filter (also called a kernel). \newpage \subsubsection{Convolution filter} \label{conv_filter} A convolution operation consists in the application of a filter to an image through a decomposition in sub-regions. For this, the filter is considered as a set of numerical values that has the size of the desired receptive field. The values of this filter are then multiplied element-wise with a subset of pixels and the results are summed to obtain a single value. This operation can be performed at several places in the image to produce an output vector that contains all the corresponding results. Usually, the filter is applied at regular intervals on the input vector with a shift in pixels between each application that is called the stride $S$. We illustrate this operation in a one dimensional example in Figure~\ref{im_proc_1d_conv} with a 7-pixel input vector (here with positive or negative integer values) that is convolved using a 3-pixel filter. This figure shows two examples with strides $S=1$ and $S=2$ leading to output vectors of 5 and 3 elements, respectively.
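To make the operation more concrete, the following minimal Python sketch (with illustrative input and filter values, not those of the figure) performs this 1D convolution for an arbitrary stride:
\begin{verbatim}
import numpy as np

def conv1d(x, f, stride=1):
    # "Valid" 1D convolution in the ML convention (no filter flip):
    # slide the filter over x with the given stride and sum the products.
    n_out = (len(x) - len(f)) // stride + 1
    return np.array([np.sum(x[i*stride : i*stride + len(f)] * f)
                     for i in range(n_out)])

x = np.array([1, -2, 3, 0, 2, -1, 1])  # 7-pixel input vector
f = np.array([1, 0, -1])               # 3-pixel filter
print(conv1d(x, f, stride=1))          # 5 output elements
print(conv1d(x, f, stride=2))          # 3 output elements
\end{verbatim}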
We stress that, in the $S=1$ case, each input value is used between 1 and 3 times depending on the filter overlap and with a continuous contribution pattern, while in the $S=2$ case, each input is only used 1 or 2 times with a periodic overlap pattern. While such an overlap pattern from each input pixel is expected in this operation, it could lead to concerns in specific cases. To better visualize this property of the convolution operation, we included a representation of the contribution number of each pixel in Figure~\ref{im_proc_1d_conv} and in subsequent figures.\\ \begin{figure}[!t] \hspace{-0.9cm} \begin{minipage}{1.0\textwidth} \centering \includegraphics[width=1.0\hsize]{images/im_proc_1d_conv.pdf} \end{minipage} \caption[1D convolution example]{Illustration of a 1D convolution operation on a 7-element input vector (in blue) using a 3-element filter (in red). The output result is in green and the grayscale table represents the number of times each input was used by the operation. Two examples are given for $S=1$ (left) and $S=2$ (right) on the same input vector and using the same filter.} \label{im_proc_1d_conv} \end{figure} For images, the information is spatially coherent in two dimensions, therefore the convolution operation is performed using a 2D filter that "slides" over the image along one axis (say, along a line) with a shift between each application that is defined by the stride. When the end of the line is reached, the operation is repeated for another line according to the stride. This way the filter is applied regularly in both dimensions. One side effect of the convolution is to reduce the size of the image. Although this can be useful to reduce the dimensionality of the problem, in some cases it is preferable to conserve the image dimension by adding a zero-padding (often just referred to as padding) around the input image. It results in the following relation between input and output dimensions: \begin{align} & w_{out} = \frac{w_{in} - f_s + 2P}{S} + 1 & h_{out} = \frac{h_{in} - f_s + 2P}{S} + 1 \label{width_height_relation} \end{align} where $w$ and $h$ denote the input and output widths and heights, respectively, $f_s$ is the filter size, $P$ is the padding, and $S$ is the stride, considering that the last three quantities have the same values for both axes. \\ We illustrate this 2D convolution operation in Figure~\ref{im_proc_2d_conv} using a $5\times 5$ input 2D table ($7\times 7$ once padded) and a filter with $f_s=3$, $S= 1$ and $P=1$, where the $\bm *$ symbol represents the convolution operation. To ease the understanding of the operation, the figure also presents two colored squares that are used to highlight two specific sub-regions that are individually multiplied by the filter and result in the corresponding colored elements in the output table \footnote{A very nice animation of the convolution operation on a randomly generated input table can be found in the excellent ANN online course of Stanford University at \href{https://cs231n.github.io/convolutional-networks/}{cs231n.github.io}}.\\ \begin{figure}[!t] \hspace{-1.5cm} \begin{minipage}{1.20\textwidth} \centering \includegraphics[width=\hsize]{images/im_proc_2d_conv.pdf} \end{minipage} \caption[2D convolution example]{Illustration of a 2D convolution operation on a $5 \times 5$ input table (in blue) with an added zero padding $P=1$ represented in yellow, using a $f_s=3$ filter (in red). The resulting $5\times 5$ table is in green and the grayscale table is the number of contributions from each input element.
Purple and orange squares highlight two specific sub-region products and their corresponding output pixels.} \label{im_proc_2d_conv} \end{figure} \afterpage{ \begin{sidewaysfigure} \centering \begin{minipage}[t]{0.18\textwidth} \centering {\bf \large No filter} \end{minipage} \begin{minipage}[t]{0.18\textwidth} \centering {\bf \large Sharpen} \begin{equation*} \begin{bmatrix*}[r] 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix*} \end{equation*} \vspace{-0.1cm} \end{minipage} \begin{minipage}[t]{0.18\textwidth} \centering {\bf \large Gaussian blur} \begin{equation*} \frac{1}{16}\begin{bmatrix*}[r] 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix*} \end{equation*} \vspace{-0.1cm} \end{minipage} \begin{minipage}[t]{0.18\textwidth} \centering {\bf \large Edge detector} \begin{equation*} \begin{bmatrix*}[r] -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix*} \end{equation*} \vspace{-0.1cm} \end{minipage} \begin{minipage}[t]{0.18\textwidth} \centering {\bf \large Axis elevation} \begin{equation*} \begin{bmatrix*}[r] -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix*} \end{equation*} \vspace{-0.1cm} \end{minipage}\\ \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/carina_hubble_R.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/carina_hubble_sharpen.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/carina_hubble_gauss.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/carina_hubble_edge.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/carina_hubble_elevation.jpg} \end{subfigure}\\ \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/UGC1810_R.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/UGC1810_sharpen.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/UGC1810_gauss.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/UGC1810_edge.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/UGC1810_elevation.jpg} \end{subfigure}\\ \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/Jupiter_R.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/Jupiter_sharpen.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/Jupiter_gauss.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/Jupiter_edge.jpg} \end{subfigure} \begin{subfigure}[t]{0.18\textwidth} \includegraphics[width=\textwidth]{images/Jupiter_elevation.jpg} \end{subfigure} \caption[Common filters applied to a selection of astronomical images]{Convolution results using common filters applied to a selection of three astronomical images from the Hubble Space Telescope (HST) representing objects of very different physical size. The base images on the left are the red channels from the original images. {\it Top}: HST image of the \href{https://apod.nasa.gov/apod/ap170702.html}{Carina Nebula}. {\it Middle}: HST image of \href{https://apod.nasa.gov/apod/ap191120.html}{UGC 1810} along with the smaller galaxy UGC 1813.
{\it Bottom}: HST image of \href{https://apod.nasa.gov/apod/ap181016.html}{Jupiter}.} \label{conv_filter_examples} \end{sidewaysfigure} } Consequently, the convolution operation can be used to detect a specific pattern at any place in an image by using a filter that is a replica of the pattern. This property is very convenient for the example of the previous section, which is automatically solved using only a convolution with the appropriate padding to conserve the image size in the output. However, for more complex images the convolution can be seen as a specific image processing. The most common convolution operations are the ones that blur an image or apply an interpolation in order to resize the image. We illustrate in Figure~\ref{conv_filter_examples} the effect of standard convolution filters on three different astronomical images that represent very different physical scales. While the sharpen and blur operations are common in everyday life, the edge detector and the axis elevation ones demonstrate how convolution filters can be used to extract patterns in an image. Still, one convolution would often be insufficient to solve a complex problem directly, leading to a necessary combination of several convolution operations.\\ \newpage \subsubsection{Convolutional layer} \begin{figure*}[!t] \centering \begin{subfigure}[!t]{0.80\textwidth} \centering \caption*{\large {\bf Input images}} \includegraphics[width=\hsize]{images/im_proc_numbers.pdf} \end{subfigure} \vspace{0.6cm}\\ \begin{subfigure}[!t]{0.6\textwidth} \centering \caption*{\large {\bf Filters}} \includegraphics[width=\hsize]{images/mult_filt.pdf} \end{subfigure} \vspace{0.6cm}\\ \begin{subfigure}[!t]{\textwidth} \centering \caption*{\large {\bf Superimposed output images}} \includegraphics[width=0.70\hsize]{images/simple_numb_filtered.pdf} \end{subfigure} \caption[Multiple filters for simple digit recognition]{Multiple simple $3 \times 3$ filters applied to digit recognition on three $6\times 7$ images. Each filter can extract patterns at several points in each image and can be used for different digits. A filter predicts a positive output if there is a full match between its activated pixels and the activated pixels of each input sub-region. {\it Top}: the three input images. {\it Middle}: the colored filters. {\it Bottom}: outputs from all filters are stacked and color-coded in one $4\times 5$ output image for each input example.} \label{mult_filt} \end{figure*} There are two main cases that justify the use of several convolution filters. The first is when one filter alone, that directly represents the looked-for pattern, is too large compared to the size of the image. The second is when multiple patterns are looked for. In such cases it is more efficient to use several small convolution filters that represent sub-parts of the pattern and that can be shared in the case of multiple-object identification. If we consider the simple digit-identification example from Section~\ref{spatial_coherence}, it is manifest that the digits can be decomposed into much easier redundant parts. We illustrate this case in Figure~\ref{mult_filt}, where we propose four simple filters that are just different continuous and aligned pixel layouts. When used in a convolution operation, each of these filters will produce an individual output image. In this case we consider that there must be a perfect match between the filter's colored pixels and the pixels of the input image in a given sub-region for the corresponding output to be activated. If we consider that blank pixels are 0 and colored ones are 1, it means that a threshold of 3 must be reached for the output to be set to 1 as well.
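A minimal Python sketch of this thresholded matching, assuming binary NumPy arrays and one hypothetical 3-pixel line filter, could read:
\begin{verbatim}
import numpy as np

def match_filter(img, filt):
    # Binary pattern matching: an output pixel is activated only if all
    # the active pixels of the filter coincide with active pixels of the
    # input sub-region (threshold = number of active filter pixels).
    fh, fw = filt.shape
    thr = filt.sum()
    out = np.zeros((img.shape[0] - fh + 1, img.shape[1] - fw + 1), dtype=int)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = int(np.sum(img[i:i+fh, j:j+fw] * filt) >= thr)
    return out

v_line = np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]])  # vertical 3-pixel layout (illustrative)
\end{verbatim}
Applied to a $6\times 7$ binary digit image, such a filter produces the $4\times 5$ output described in the figure.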
In Figure~\ref{mult_filt} we used different colors for each filter and stacked up the color-coded output images in order to ease the representation. We note that there is no overlap in this specific case, but that nothing prevents multiple filters from producing the same output at a given pixel location. The figure illustrates that, by using adequate filters, it is possible to construct a smaller and simpler space that represents all digits differently.\\ Using the defined convolution operation, it is now possible to link it to our ANN formalism. There is a direct analogy between the dot product and sum of the result that occurs in the convolution operation and the weighted sum of input in a neuron operation that we described in Section~\ref{neuron_math_model}. Indeed, the sub-region of the image, when flattened, can be considered as our input vector $X_i$, the flattened filter as the weights $W_i$, and the result is summed into an $h$ quantity. The only missing part for a convolution output pixel to act as a neuron is the activation function along with a bias value, which are very simple to add to the operation. The choice of an appropriate activation function that works well with the convolution operation on the weights is discussed in Section \ref{sect_relu}. The convolution operation is then a suitable approach to share a small group of weights over the whole image, strongly reducing the number of parameters that need to be constrained. This formalism of neural receptive field has long been known and was already introduced in a very similar fashion by \citet{rumelhart_parallel_1986}.\\ To be usable for real image processing, this model of weight filters still has to cover the case of multiple color channels in an image, which can be considered as an added depth $d$ to the input, resulting in an input volume of $w_{in} \times h_{in} \times d$. This can be done by considering that the weight filter is a 3D table that has a width and height equivalent to the filter size $f_s$ and an additional depth $d$ that corresponds to the number of input depth channels. This way the weight filter combines spatial information that comes from all the input depth channels at the same place in the images and still only moves in a 2D space following the stride parameter. We note that this 3D filter still sums its $ f_s \times f_s \times d$ contributions into only one result for each area of the images, and therefore produces a 2D output. To stick to the network representation, each pixel of this output image goes through an activation function, which results in a so-called activation map. The input images can be convolved by a certain number $n_f$ of these 3D weight filters, which results in a number of activation maps equal to the number of filters, with a size that follows the relations of Equation~\ref{width_height_relation}. The output volume is then of $w_{out} \times h_{out} \times n_f$. This complete operation constitutes what is called a \textbf{convolutional layer} and is illustrated in Figure~\ref{convolutional_layer_im}.\\ \begin{figure}[!t] \vspace{+0.6cm} \hspace{-2.1cm} \begin{minipage}{1.25\textwidth} \centering \includegraphics[width=\hsize]{images/conv_layer.pdf} \end{minipage} \caption[Detailed convolutional layer]{Detailed representation of a convolutional layer.
The input image is made of $d=3$ layers (RGB), the filters (in light red) are cubes of dimension $f_s\times f_s \times d$. The transparent gray box denotes the region of the image that is multiplied by the filter to obtain a specific neuron activation. The activation maps from each filter are in green, with the first one showing each activated neuron pixel as a dark green square.} \label{convolutional_layer_im} \vspace{0.7cm} \end{figure} \vspace{0.5cm} The resulting activation maps that constitute a re-arrangement and often a dimensional reduction of the input images can then be fully connected to an MLP architecture with each pixel of each activation map considered as a feature. If we consider an example with input images sized as $w_{in} = 100$, $h_{in} = 100$ and $d = 3$, a full connection to an $n = 20$ hidden MLP layer would require $6\times 10^5$ weights. Using the same input images and $n_f = 6$ filters of size $f_s=4$ that are used with a stride of $S=3$, it is only $48$ weights per filter or $288$ weights for all the filters of the convolutional layer. Following Equation~\ref{width_height_relation}, this convolution produces a $33\times 33\times 6$ activation map volume that requires "only" $1.31 \times 10^5$ weights to be fully connected to the same $n = 20$ MLP layer.\\ \newpage This architecture already presents several advantages: (i) the exclusion of irrelevant patterns, (ii) the dimension reduction that occurs with a large stride and filter size, and (iii) a huge reduction of the number of weights compared to a fully connected layer thanks to the possibility of sharing the small filters that are applied over the whole image. While these additions are already interesting enough to justify such an architecture, the true interest is the possibility to easily stack such layers to extract more complex representations of the input, which is covered in detail in Section~\ref{convolution_stacking}. \newpage \subsubsection{A simpler activation function: the rectified linear unit} \label{sect_relu} \vspace{-0.1cm} While a convolution layer can technically be built using any activation function, some of them proved to be more efficient than others. The main issue is that adding a convolutional layer over a fully connected network, and moreover stacking several convolutional layers as we will discuss in the next section, adds depth to the network, which can result in a vanishing gradient issue (Sect.~\ref{mlp_backprop}). Additionally, since each output pixel is considered as a neuron, and therefore is activated, the computational efficiency of the activation function becomes more important when working with large images.\\ Because it is able to solve this problem, the most widely used activation function in deep networks with convolutions is the Rectified Linear Unit activation, or ReLU \citep{nair_2010}. This function is simply linear for any input value above zero and is equal to zero in the negative input part, which is summarized as: \begin{align} & a_j = g(h_j) = \begin{cases} h_j & \text{if} \quad h_j \geq 0 \\ 0 & \text{if} \quad h_j < 0 \end{cases} & \text{or} \hspace{1.5cm} & a_j = g(h_j) = \max(0,h_j) \label{relu_activ} \end{align} following the same notations as Equation~\ref{eq_activ_perceptron}.
Despite its simplicity this function presents several advantages: (i) it conserves the global non-linearity with two states, (ii) it is scale invariant, because it does not saturate, (iii) it is easy to compute with only a comparison and a memory affectation in the negative input case, (iv) it has a constant derivative equal to one, meaning that there is no loss of gradient information when the neuron is in its activated state, which also speeds up the learning process. We illustrate this function along with its derivative in the left frame of Figure~\ref{relu_fig}.\\ There is in fact a family of ReLU activation functions, starting with the leaky ReLU \citep{Maas_2013} that defines a leaking factor $\lambda$ for the negative input part of the function, making it linear as well. While the original paper suggested a leaking factor of $\lambda = 0.01$, recent applications have demonstrated that other leaking factors can produce better results. This activation is summarized as: \begin{align} &a_j = g(h_j) = \begin{cases} h_j & \text{if} \quad h_j \geq 0 \\ \lambda h_j & \text{if} \quad h_j < 0 \end{cases} & \text{or} \quad \quad & a_j = g(h_j) = \max(0,h_j) + \min(0, \lambda h_j). \label{leaky_relu_activ} \end{align} The leaky ReLU activation is illustrated in the right frame of Figure~\ref{relu_fig}. The idea behind this addition is that neurons that are not activated with the classical ReLU are not updated at all, which could lead to neurons that remain stuck in this state depending on the input features and weight values. Using a leaky parameter, neurons that were not responsible for a given activation can slowly get involved if they are not currently used to constrain another part of the feature space, or if they would be more useful to the present example than to the ones they were constraining before. A small leaking factor value ensures that activated ReLU neurons remain the main contributors to the output, preserving the global propagation scheme of the network. We note that this small propagation to non-activated neurons is sensitive to the vanishing gradient since the derivative is equal to $\lambda$. Therefore the propagation remains driven by the activated neurons in the previous layer, which is mostly a good thing as we want the global propagation to mostly follow the path of activated neurons. Still, it can be an issue in the very rare case of a continuous path of never activated neurons. \\ For now, the leaky ReLU activation is our preferred solution with $\lambda$ as a tunable hyperparameter. Other ReLU activations are for example the parametric ReLU (or PReLU) that is equivalent to the leaky case but with the optimal leaking factor being learned during the training process \citep{he_delving_2015}, the randomized leaky ReLU that randomly selects a $\lambda$ value for each neuron, or the exponential linear unit (ELU) that smooths the negative input part with an exponential.
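As a minimal sketch, the two activations of Equations~\ref{relu_activ} and \ref{leaky_relu_activ} and their derivatives can be expressed in a few lines of Python:
\begin{verbatim}
import numpy as np

def relu(h):
    # Classical ReLU and its derivative (Equation relu_activ).
    return np.maximum(0.0, h), (h >= 0).astype(float)

def leaky_relu(h, lam=0.2):
    # Leaky ReLU: linear above zero, slope lam below
    # (lam = 0.2 corresponds to the right frame of the figure).
    a = np.where(h >= 0, h, lam * h)
    da = np.where(h >= 0, 1.0, lam)
    return a, da
\end{verbatim}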
Empirical performance comparisons for networks with convolutional layers did not show any significant superiority of one of these ReLU variants over the others, while they all perform decisively better than the basic ReLU \citep{xu_2015}.\\ \begin{figure}[!t] \begin{subfigure}[!t]{0.49\textwidth} \centering \includegraphics[width=1.0\hsize]{images/relu_fig.pdf} \end{subfigure} \begin{subfigure}[!t]{0.49\textwidth} \centering \includegraphics[width=1.0\hsize]{images/leaky_relu_fig.pdf} \end{subfigure} \caption[Illustration of a ReLU activation]{Illustration of the ReLU activation function as a plain line and its derivative as a dashed line. {\it Left}: classical ReLU activation. {\it Right}: leaky ReLU activation with $\lambda = 0.2$.} \label{relu_fig} \end{figure} While the ReLU activation was introduced to help convolutional networks, it can efficiently be used on fully connected MLP networks. As we discussed in Section \ref{weight_decay}, the sigmoid neurons are expected to work mainly in their close-to-linear regime to help error propagation and network stability, while the non-linear part of the function should only be used when necessary. These constraints are released by the ReLU activation with a very nice error propagation even for deep networks. The trade-off is that many more ReLU neurons must be used when the boundaries to find are truly non-linear. However, ReLU neurons are much simpler to constrain and we observed in our applications that fewer data are usually necessary to properly constrain the network, in spite of the greater number of weights than in the case of sigmoid neurons (Sect. \ref{nb_neurons}). We show an example of a fully connected layer that uses ReLU in Section~\ref{dropout_error}. \newpage \subsubsection{Stacking convolutional layers} \label{convolution_stacking} Even if one convolutional layer can already identify a lot of patterns if provided with many filters, it remains limited to patterns of the size of the filter and is often unable to efficiently learn complex non-linear patterns \citep{Lecun-95}. Therefore, convolutional layers are usually repeated in an MLP-like fashion, each layer considering the outputs of the previous layer as its own input images. This is a very efficient way to identify more and more complex patterns as the network gets deeper. Usually the first layers act as basic detectors for edges, lines, colors, luminosity, ... that correspond to low-level representations; then the mid-network layers detect more advanced features of the image like textures, repetitive patterns, and very basic shapes that act as mid-level representations; the final network layers act as sub-object detectors, behaving like small classifiers of much more abstract content of the image. The end of the network is usually still connected to a few fully connected layers that merge these sub-classifiers into a final classification output or into any other kind of output that is targeted. Such an artificial neural network architecture is called a Convolutional Neural Network (CNN).\\ \begin{figure}[!t] \vspace{-0.5cm} \hspace{-0.6cm} \begin{minipage}[!t]{1.05\textwidth} \centering \includegraphics[width=1.0\hsize]{images/features_visual.jpg} \end{minipage} \vspace{-0.2cm} \caption[CNN filters representation-level examples]{Example of mock images generated to maximize the activation of specific filters in the network. {\it Left}: using filters from an early convolution layer. {\it Middle}: using filters from a deeper layer. {\it Right}: using filters close to the output of the network.
{\it Image from \href{https://distill.pub/2017/feature-visualization/}{Distill}}.} \vspace{-0.2cm} \label{features_visual} \end{figure} While it is difficult to look directly at the filters themselves because they are usually very small, there are optimization techniques that can be used to construct mock input images that maximize the activation of a specific filter. While such methods are beyond the scope of this thesis, they still provide a didactic illustration of the previous multi-level representation in CNNs. Figure~\ref{features_visual} shows an example from the \href{https://distill.pub/2017/feature-visualization/}{Distill} online CNN visualization tool. The filters become more and more precise at identifying specific sub-parts of the images. We note that the apparent repetition of the pattern in each image is just a construction effect, in the sense that the filters selected to produce the image are usually small ($3\times 3$ or $5 \times 5$) weight matrices, so that the complex pattern is constructed from the non-linear combination of all the previous-layer filters that contribute to the filter of interest, up to an input image that maximizes its activation.\\ \subsubsection{Pooling layer} \label{pool_layer} \begin{figure}[!t] \centering \includegraphics[width=0.85\hsize]{images/pool_op.pdf} \caption[Illustration of the Max-Pooling operation]{Illustration of the Max-Pooling operation with $P_o=2$ on a single layer image, colored by sub-regions.} \label{pool_op} \vspace{-1.5cm} \end{figure} Convolutional layers are often used in combination with another new type of layer that reduces the dimensionality of the output images (or activation maps). This layer performs a so-called pooling operation, and is therefore named a pooling layer. While there are different types of pooling operations, the most commonly used is the Max-Pooling. Considering a pooling size $P_o$, it decomposes each depth channel of the image in non-overlapping sub-regions of $P_o \times P_o$ pixels and produces an output image that is composed only of the maximum value of each of these sub-regions. It is common to use $P_o = 2$, which results in an image that is half the size in both dimensions, conserving only a quarter of the pixels of the original image. This operation is performed for each depth channel of the image, which can be the activation maps of a previous layer, so it conserves its depth. We note that a pooling layer does not use weights or activation in any way, so it does not increase the number of learned parameters in the network. This operation is illustrated in Figure~\ref{pool_op} using a different color for each sub-region. Among the most common max-pooling alternatives we can cite the average pooling, max-out layers, L2-norm pooling, global pooling, fractional pooling, etc. \citep[][...]{LeCun-98b, Scherer_2012,Gulcehre_2013, Graham_2014}\\ The aim of this operation is to reduce dimensionality by selecting only the dominant pixels in the input image. This can lead to a very convenient speed-up of the learning process with generally small or negligible degradation of the prediction capacity of a CNN network.
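A minimal sketch of this operation on one depth channel, in Python, could read (the stored maximum locations are reused for the error propagation described further below):
\begin{verbatim}
import numpy as np

def max_pool(img, po=2):
    # Max-pooling: keep the maximum of each non-overlapping po x po
    # sub-region, and remember its flat position inside the sub-region
    # (the "Max-Loc" information needed by the backward pass).
    h, w = img.shape
    out = np.zeros((h // po, w // po))
    loc = np.zeros((h // po, w // po), dtype=int)
    for i in range(h // po):
        for j in range(w // po):
            sub = img[i*po:(i+1)*po, j*po:(j+1)*po]
            out[i, j] = sub.max()
            loc[i, j] = sub.argmax()
    return out, loc
\end{verbatim}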
Using a pooling layer just after a convolutional one conserves an important overlap of the convolution filter with a small stride. This way the convolved image has a much better resolution and the max-pooling operation conserves only the most relevant information by selecting only one pixel. Such a construction has long proven to be a very efficient architecture as exposed in Section~\ref{cnn_architectures}. While pooling layers were widely used a few years ago with a pooling after each convolution, they tend to be less common in modern architectures, with a pooling layer only every several convolution layers. Moreover, pooling layers have been shown to be replaceable by carefully designed convolutions that perform an equivalent dimensionality reduction \citep{Springenberg_2014}.\\ \subsubsection{Learning the convolutional filters} \label{conv_layer_learn} \begin{figure}[!t] \centering \includegraphics[width=0.85\hsize]{images/pool_op_back.pdf} \caption[Illustration of the Max-Pooling error backpropagation]{Illustration of the Max-Pooling error backpropagation with $P_o=2$ on a single layer image, colored by sub-regions. The "Max-Loc" matrix gives the locations where the maximum values were prior to the Max-Pooling step, counted from zero. The maximum locations used are those of Figure~\ref{pool_op} for consistency.} \label{pool_op_back} \vspace{-1.0cm} \end{figure} In the previous sections we have expressed the convolution operation by using already appropriate weight filters. While it is possible to use only pre-defined filters, as was the case in the first few decades of machine learning image applications, the true objective with this architecture is to learn these filters \citep{lecun-98}. We note that, in echo to our first discussion in Section~\ref{mlp_backprop}, this is the boundary where many agree on the definition of "deep learning". The "deep" attribute here does not only represent the depth of the network as a stack of layers but refers to the fact that both the link between filters and the filters themselves are learned during the training process. Here, we describe all the elements that are necessary to propagate the error through a convolutional architecture. We note that this description is frequently missing in many deep learning courses or presentations (e.g., Stanford \href{https://cs231n.github.io/convolutional-networks/}{CS231} or \citet{Bishop:2006:PRM:1162264}), while it is often the most difficult part of a CNN construction.\\ First, the simplest operation to propagate is the pooling. Considering that the error has been propagated using the classical rules for fully connected layers as described in Section~\ref{mlp_backprop}, the propagation produces an error volume for the pooling layer that has the size of its output. The error then has to be propagated to the input image, with the appropriate error being associated with the input neuron (or pixel) that held the maximum value of each sub-region. In practice it means that one must conserve the memory of the position of the neuron that contained the maximum value. The error is assigned to this element's location and all the other elements involved in the associated pooling sub-region get their error set to zero. We illustrate this procedure in the typical case of a $P_o=2$ pooling size in Figure~\ref{pool_op_back}, where the input image was a single layer of size $4\times 4$, which is equivalent to the size of the propagated error. Identically to the forward pooling operation, the depth channels are independent.\\
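Reusing the maximum locations stored by the forward sketch above, this backward step can be written as:
\begin{verbatim}
import numpy as np

def max_pool_backward(err, loc, po=2):
    # Propagate the error through max-pooling: each error value goes to
    # the input position that held the maximum ("Max-Loc"); all the other
    # positions of the sub-region receive zero.
    h, w = err.shape
    back = np.zeros((h * po, w * po))
    for i in range(h):
        for j in range(w):
            di, dj = divmod(loc[i, j], po)  # decode the stored flat index
            back[i*po + di, j*po + dj] = err[i, j]
    return back
\end{verbatim}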
\vspace{-0.2cm} For the error propagation in the convolutional layers, we need to define an operation that is called a transposed convolution \citep{Dumoulin_2016}. It works in a way very symmetrical to the convolution, each pixel of an input image being multiplied by all the filter values to produce an identical number of elements in an output image. This operation is repeated for each pixel in the input image according to the stride. Pixels next to each other in the input image will often have an overlap of their projected field of contribution in the output image, depending on the stride. In this case the contributions from each input, weighted by the filter, are summed on overlapping output pixels. With this operation it is possible to propagate an error "image" of the size of the forward convolution output to produce an error table that has the size of the input image used by the forward convolution. This way each error pixel propagates its value to all the original input positions that were involved in its activation, accordingly weighted by the filter element used by each original input pixel. This operation is the exact equivalent of the Equation~\ref{eq_update_full_network}, or the simplified version from Equation~\ref{eq_deltah}, for each sub-region of the considered operation input image. Indeed, each error is scaled by each weight that was involved in the activation of the forward output neurons and the error is propagated to the corresponding forward input neurons.\\ \vspace{-0.2cm} We illustrate this transposed convolution operation in a one dimensional case in Figure~\ref{im_proc_1d_conv_back} that is the equivalent of the propagation of the case presented in Figure~\ref{im_proc_1d_conv}. We note that the stride in this image is the one that was used in the forward convolution. For the $S=2$ case an equivalent of the transposed convolution should be used, but it requires an addition that we explain below, in the 2D case. Interestingly, in this figure, the contribution pattern is identical to the one that was used for the forward convolution but this time it represents the overlapping of each pixel contribution.\\ \begin{figure}[!t] \hspace{-0.9cm} \begin{minipage}{1.0\textwidth} \centering \includegraphics[width=1.0\hsize]{images/im_proc_1d_conv_back.pdf} \end{minipage} \caption[Illustration of a 1D transposed convolution operation]{Illustration of a 1D transposed convolution operation using a 3-element filter (in red). The error is in green, the propagation result in blue and the grayscale table represents the number of error elements that contribute to each propagated pixel. Two examples are given for $S=1$ and $S=2$ that result in propagated errors with identical sizes.} \label{im_proc_1d_conv_back} \end{figure} \vspace{-0.2cm} We illustrate a 2D transposed convolution operation in Figure~\ref{im_proc_2d_conv_back}. We consider an original convolution operation that was performed on a $4\times 4$ input image using a $3\times 3$ filter producing a $2\times 2$ output image. Then the transposed convolution operation combines the error computed in this output with the same filter as the one used in the forward pass in order to propagate it into an input error that has the same size as the original input. To ease the understanding of the operation, the contribution from each error pixel has been represented independently and colored accordingly. These contributions are summed to produce the shown propagated error along with the associated contribution pattern \footnote{Useful animated illustrations of the transposed convolution can be found on the \href{https://github.com/vdumoulin/conv_arithmetic}{GitHub} page associated with the paper from \citet{Dumoulin_2016}}. \\
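A minimal Python sketch of this scatter-and-sum operation, matching the $4\times 4$ example of the figure (with arbitrary error and filter values), could read:
\begin{verbatim}
import numpy as np

def conv2d_transposed(err, filt, in_shape, stride=1):
    # Transposed convolution: scatter each error pixel, weighted by the
    # whole filter, back onto the positions of the forward-convolution
    # input; overlapping contributions are summed.
    back = np.zeros(in_shape)
    fh, fw = filt.shape
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            back[i*stride:i*stride+fh, j*stride:j*stride+fw] += err[i, j] * filt
    return back

err = np.arange(4.0).reshape(2, 2)   # 2x2 output error (illustrative)
filt = np.ones((3, 3))               # 3x3 filter (illustrative)
print(conv2d_transposed(err, filt, (4, 4)))
\end{verbatim}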
We stress that the transposed convolution is not a true "deconvolution" as it is often called. While the transposed convolution performs exactly the action we need, that is to distribute the error over all the original input neurons that were responsible for each output activation, a deconvolution in the sense of signal processing would reconstruct the convolution's original input. This is not the same operation as the transposed convolution. Another way to depict a transposed convolution is through a so-called fractionally-strided convolution \citep{Dumoulin_2016}, which is discussed in the next paragraph.\\ \begin{figure}[!t] \centering \includegraphics[width=1.0\hsize]{images/im_proc_2d_conv_back.pdf} \caption[Illustration of a 2D transposed convolution operation]{Illustration of a 2D transposed convolution operation on a $2\times 2$ error using a $3\times 3$ filter (in red). Each error pixel has a different color and the individual contribution to the operation is shown for each of them in the corresponding color. The resulting $4 \times 4$ propagated error is in blue and the grayscale table shows the number of contributions for each pixel.} \label{im_proc_2d_conv_back} \end{figure} While the transposed convolution performs the wanted operation, it is often re-expressed as a regular convolution. There are two main motivations for this: (i) the homogeneity of the operations to perform, which already led to replacing pooling and dense layers with convolutional ones \citep{Springenberg_2014}, and (ii) the fact that the convolution can efficiently be encoded as a matrix multiplication operation as we describe in Section~\ref{gpu_cnn}.\\ \begin{figure*}[!t] \centering \includegraphics[width=0.90\hsize]{images/im_proc_2d_conv_back_rot.pdf} \caption[Illustration of the complete forward and backpropagation process for $S=1$]{Illustration of the complete forward and backpropagation process for $4\times 4$ input images using a filter of size $f_s=2$ (in red) and a stride $S=1$. The input and propagated error are in blue, the output and its error are in green. The yellow outline is the external zero-padding $P = 1$. The grayscale table represents the number of contributions from each input pixel and is equivalent to the number of contributing elements from the error. The colored dashed squares highlight specific input-weight pairs to help following pixel paths in the operation.} \label{im_proc_2d_conv_back_rot} \vspace{0.3cm} \centering \includegraphics[width=0.90\hsize]{images/im_proc_2d_conv_back_stride.pdf} \caption[Illustration of the complete forward and backpropagation process for $S=2$]{Illustration of the complete forward and backpropagation process for $5 \times 5$ input image using a filter of size $f_s =3 $ (in red) and a stride $S=2$. The input and propagated error are in blue, the output and its error are in green. The yellow outline is the external zero-padding $P_o = 2$ and the purple internal zero-padding represents $P_s = 1$.
The grayscale table represents the number of contributions from each input pixel and is equivalent to the number of contributing elements from the error.} \label{im_proc_2d_conv_back_stride} \end{figure*} This alternative formulation of the back-propagation transposed convolution takes the same output error image as its own input but with a scaling transformation so that a regular convolution operation produces an output image that has the size of the one from the transposed convolution. In order to replicate the correct propagation, this convolution has to reproduce the input size of the forward convolution operation. It means that it usually has to "upscale" its input image, here the output error to be propagated. This is possible by adding zero padding around the image, similarly to the one that is used to preserve the size between the input and the output in the forward convolution. To avoid the filter being applied to zero values only, the padding must be smaller than the filter size (see Fig.~\ref{im_proc_2d_conv}). The quantity of padding to add in a $S=1$ case is defined as $P' = f_s - P - 1$ where $P'$ is the padding of the propagation convolution and $P$ is the padding of the forward convolution. The change of size between the input and output image of the forward operation can then be reversed. However, to reproduce the transposed convolution operation, the weight filter must be rotated by $180^\circ$ when using this propagation convolution. This way, each weight in the filter is applied to the appropriate output value to reconstruct the correct propagated error. This full operation including the filter rotation is often referred to as a full convolution \citep{Dumoulin_2016}. We illustrate this operation in Figure~\ref{im_proc_2d_conv_back_rot} that contains both the forward convolution and the associated error propagation using the full padded convolution. The $4\times 4$ input image is convolved by a $f_s = 2$ filter with a stride $S=1$ and no padding, resulting in a $3\times 3$ output image, here pictured with no activation function. To illustrate the propagation convolution we used an arbitrary error image that has the same size as the output, on which a $P'=1$ padding is applied. Two specific input-weight pairs are highlighted using dashed colored squares, to help verifying that the proper association is kept between the forward and the backward operations.\\ In the previous example we addressed the full convolution in a case with $S=1$, but the use of a larger stride in the original convolution requires additional transformations of the error image. Indeed, it is necessary to control the overlapping pattern as well as the resizing process. This is done by using an additional internal zero-padding (or stride padding) that will enlarge the error image. This padding is simply defined as $P_s = S - 1$ and is applied between each input image pixel in both dimensions. We illustrate the case of a $S=2$ convolution and the associated full convolution for error propagation in Figure~\ref{im_proc_2d_conv_back_stride}. In this example a $5\times 5$ input image is convolved by a $3 \times 3$ filter using a stride $S=2$, which results in a $2\times 2$ output image. The full convolution for the error propagation then uses an external padding $P_o = 2$ and an internal padding $P_s = 1$ on the $2\times 2$ error that is propagated into a $5\times 5$ error input image by using the weights from the rotated filter applied to the corresponding output. As before the contribution pattern is reproduced in the forward and back-propagation operations.\\
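The complete recipe can be summarized in a short Python sketch (illustrative only, assuming a square single-channel error and filter); it reproduces both examples above, returning a $4\times 4$ table for the $S=1$ case and a $5\times 5$ table for the $S=2$ case:
\begin{verbatim}
import numpy as np

def full_conv_backward(err, filt, S=1, P=0):
    # Error propagation as a regular convolution: rotate the filter by
    # 180 degrees, insert an internal padding P_s = S - 1 between error
    # pixels, add an external padding P' = f_s - P - 1, then convolve
    # with a stride of 1.
    f_s = filt.shape[0]
    rot = filt[::-1, ::-1]              # 180-degree filter rotation
    P_s, P_ext = S - 1, f_s - P - 1
    h, w = err.shape
    up = np.zeros((h + (h - 1) * P_s, w + (w - 1) * P_s))
    up[::S, ::S] = err                  # internal (stride) zero-padding
    up = np.pad(up, P_ext, mode='constant')  # external zero-padding
    out_h = up.shape[0] - f_s + 1
    out_w = up.shape[1] - f_s + 1
    back = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            back[i, j] = np.sum(up[i:i+f_s, j:j+f_s] * rot)
    return back

print(full_conv_backward(np.ones((3, 3)), np.ones((2, 2)), S=1).shape)  # (4, 4)
print(full_conv_backward(np.ones((2, 2)), np.ones((3, 3)), S=2).shape)  # (5, 5)
\end{verbatim}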
Finally, there are usually several input and output depth channels $d_i$ and $d_o$ in a convolutional layer. In the forward convolution each filter presents a depth that is equivalent to the number of input depth channels $d_i$. The output depth is then the number of filters in the layer $n_f = d_o$, forming an individual filter volume $V = f_s\times f_s\times d_i$ inside a larger volume of $V\times d_o$. The backpropagation convolution must produce a propagated-error volume with the same depth $d_i$ as the input layer. To achieve this, the error propagation convolution must reorganize the filters so that it creates a new volume where the elements at a given input depth $d_i$ from all the $d_o$ filters are associated in a new individual filter volume $V' = f_s \times f_s \times d_o$ inside a larger volume of $V'\times d_i$. The new filter volume then contains a number of filters corresponding to the number of depth channels of the input image of the forward convolution, and each of these filters contains as many depth channels as the output of the forward convolution. An illustration of this process using the matrix formalism is presented later in Figure~\ref{matricial_form_fig}.\\ \clearpage \subsection{Convolutional networks parameters} \label{cnn_parameters} \subsubsection{Convolutional Neural Network architectures} \label{cnn_architectures} \vspace{-0.2cm} Convolutional layers can be stacked on top of each other (Sect.~\ref{convolution_stacking}) and pooling layers can be added to reduce the dimensionality of the intermediate activation maps (Sect.~\ref{pool_layer}). To achieve this, many questions need to be addressed: How to choose the appropriate detailed architecture? How many layers are necessary? With how many filters in each? What should be the size of the filters? Is a pooling layer necessary after each convolution, etc.? The general answer to all these questions is: it depends. Some extreme boundaries are easy to identify: not having filters larger than the input image, avoiding reducing the dimensionality too quickly, or avoiding a stride larger than the filter size, which would cause some pixels to never be scanned. It remains, however, difficult to provide proper general advice. The ANN community has long adopted a "trial and error" approach to find the most effective network architectures, but the number of possible combinations is huge and continues to rise exponentially with the addition of ever-new ANN features and layer types.\\ \vspace{-0.2cm} For these reasons, the ML community is mainly moving forward by organizing contests on always more difficult tasks or by comparing the success of various architectures on freely accessible datasets. This leads to a proliferation of algorithms and architectures to be tested and the most successful ones then spread to the rest of the user base. This principle is going even further these days, with many architectures being so hard to train, and unstable over long training times, that it is advised to get a pre-trained network for very general-purpose classification, and then continue the training with completely different input images to adapt it to a new application.\\ \vspace{-0.2cm} For general considerations, it has been observed that architectures that have many convolutional layers with many small filters were much more efficient than fewer layers with larger filters \citep{vggnet-2014}.
While a large filter scans over a large region at once, the same large region can be scanned using successive layers with small filters. For example, two layers of $3 \times 3$ filters with a stride of $S=1$ scan up to a $5 \times 5$ area of the original image. Adding an extra layer with identically sized filters results in a $7 \times 7$ area. The advantage of this type of architecture is that it decomposes large-scale patterns into a non-linear combination of several small patterns, increasing the diversity of objects that can be identified for a given number of filters. This is exactly comparable to the difference between one MLP layer with many more neurons against the same number of neurons distributed over multiple MLP layers. Additionally, a complex $7 \times 7$ pattern can be decomposed into pieces that might be useful for another type of objects in the same dataset, reducing the global quantity of weights in the network. The most common filter sizes are $3 \times 3$ or $5\times 5$.\\ \vspace{-0.2cm} Other common practices are for example to use adequate zero-padding in order to conserve the image size in the convolutional layers. The dimension reduction is then completely handled by the pooling layers. Small strides are more common, generally with a simple $S=1$ value. In some cases larger filters are used in conjunction with a larger stride when the number of parameters in the network is problematic, but only on the first layers \citep{alexnet_2012}. A common practice is therefore to start with layers that have few filters, in order to reduce the number of activations, and that conserve the input image size. The subsequent layers are made denser with more filters, and the activation maps are pooled to reduce the dimensionality. Most networks finish their convolution part with a layer that contains many filters that produce very small activation maps. The objective is to list all the sub-patterns from the input images that are necessary to perform the end classification. Ultimately, the last convolution layer is fully connected to a smaller series of MLP layers with, as before, each pixel of all the activation maps considered as an independent input feature. It is also common to perform a last convolution operation with many filters of the size of the last activation maps to produce a large mono-dimensional activation before adding the fully connected layers, which results in the exact same number of parameters as the previous solution. We illustrate a very generic architecture of this type in Figure~\ref{cnn_network}, where only the activation maps are represented, but their size and number is characteristic of the filters used at each layer.\\ \vspace{-0.2cm} There are many other kinds of layers and tunings, with for example non-linear architectures, networks with outputs larger than their input, recurrent CNN, residual CNN... We can noticeably mention the widely used "depthwise separable convolution" that performs a regular convolution independently on each depth channel and then recombines the results using a $1 \times 1$ convolution. Some networks also work with tensor images that have more than two spatially coherent dimensions, and can still have multiple depth channels, needing tensor weight filters and a subsequent tensor representation of the network.
However, these advanced techniques are for now irrelevant for our applications.\\ \begin{sidewaysfigure} \hspace{-0.8cm} \begin{minipage}{1.05\textwidth} \centering \includegraphics[width=\hsize]{images/figure_cnn_no_background.pdf} \end{minipage} \caption[Illustration of a typical CNN architecture]{Illustration of a typical CNN architecture with subsequent convolutional layers with regular pooling. In this representation the gray slices represent activation maps, the filters are not represented. The end of the network is composed of several fully connected layers with gray circles representing regular individual neurons.} \label{cnn_network} \end{sidewaysfigure} \vspace{-0.2cm} Finally, we list here some famous CNN architectures that illustrate the previous general considerations. We note that, because the size of the activation maps, and therefore the depth of the network, depends on the size of the input map, each architecture is given with its typical input image size. All these architectures have won one or several image classification contests that are described in the reference paper for each of them, leading to their wide adoption in the ANN community. To ease the architecture description we use the following naming system: the input volume is denoted I along with its dimensions (width, height, depth) as I-W.H.D, a convolutional filter is denoted C with the number of filters N followed by the filter size dimension $f_s$ and the stride $S$ as C-N.$f_s$.$S$ (the stride $S$ is omitted in the case of $S=1$), a pooling layer is simply denoted P along with its $P_o$ value as P-$P_o$, and dense layers are denoted D followed by the number of neurons $n$ as D-$n$. The dimension walk-through sketched after this list illustrates the notation on the first architecture. \begin{itemize}[leftmargin=0.5cm] \setlength\itemsep{0.01cm} \item \textbf{LeNet:} a "classical" simple CNN architecture from \citet{lecun-98}, from which the best-known revision is the LeNet-5: [I-32.32, C-6.5, P-2, C-16.5, P-2, C-120.5, D-84, D-10]. \item \textbf{AlexNet:} a much more recent and larger architecture from \citet{alexnet_2012}, but that remains modest enough to be usable on most modern individual computers. It noticeably uses larger filters and stride in the first layer to strongly reduce the image size. Its architecture is [I-224.224.3, C-96.11.4, P-2, C-256.5, P-2, 2$\times$[C-384.3], C-256.3, P-2, 2$\times$[D-4096], D-1000]. \item \textbf{VGGNet:} that made the demonstration that a very deep network with only small filters can achieve top-tier performances. This network architecture from \citet{vggnet-2014} is still widely used today due to its simplicity of implementation and very good performance even on modern problems. It is made of chunks of identical convolutional layers. The architecture can be described as [I-224.224.3, 2$\times$ [C-64.3], P-2, 2$\times$ [C-128.3], P-2, 3$\times$ [C-256.3], P-2, 3$\times$ [C-512.3], P-2, 3$\times$ [C-512.3], P-2, D-4096, D-4096, D-1000]. \item \textbf{Inception:} a much more recent approach to CNN that is composed of several "blocks" of parallel networks that are concatenated after a number of layers \citep{inception_2014}. The number of continuous convolutional layers in the first version is 22 and it goes above 70 for more recent iterations of this network architecture \citep{inception_2016}, which we do not represent here because of its complexity. This category of networks can only be trained using powerful computing clusters but is capable of solving very diverse tasks efficiently using the very same architecture. However, the details of such architectures are beyond the objectives of this thesis. \end{itemize}
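As announced above, the notation can be made concrete with a minimal Python sketch that walks the spatial dimension through LeNet-5, assuming $P=0$ and $S=1$ in every convolution (consistent with the listed sizes):
\begin{verbatim}
def conv_out(w, f_s, S=1, P=0):
    # Output width/height of a convolution (Equation width_height_relation)
    return (w - f_s + 2 * P) // S + 1

w = 32                 # I-32.32
w = conv_out(w, 5)     # C-6.5   -> 28
w //= 2                # P-2     -> 14
w = conv_out(w, 5)     # C-16.5  -> 10
w //= 2                # P-2     -> 5
w = conv_out(w, 5)     # C-120.5 -> 1, a 1x1x120 volume feeding D-84, D-10
print(w)               # 1
\end{verbatim}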
\subsubsection{Weight initialization and bias value} \label{cnn_weight_init} As we exposed in Section~\ref{weight_init}, the method used to initialize the weights at the beginning of the training process can have a strong impact on the stability and convergence ability of ANN. We also discussed that it is strongly linked to the choice of activation function, bias value, and even to the network architecture. Overall, the ReLU activation function is much more reliable than the sigmoid activation for similarly deep networks. In the case of a very deep network like the ones we described in the previous section, the ReLU activation quickly becomes the only suitable solution. In such a case, the choice of an appropriate weight initialization is crucial because an inappropriate choice can completely prevent very deep networks from converging. The main objective remains to keep the weights small enough to preserve precision and stability but also large enough to quickly obtain very different behaviors of each neuron in the network. The performance comparison between the various weight initialization methods is mainly empirical. Still, the main objective of each method is to get as close as possible to the same initial weight variance over all the network layers, independently of their size.\\ In the VGG network for example, they used the Xavier initialization (also named Glorot initialization, depending on whether the author's first or last name is referenced) that is a uniform distribution scaled according to the size of the layers by $\sqrt{1/n_{l-1}}$, where $n_{l-1}$ is the size of the previous layer \citep{glorot_understanding_2010}. In the same paper they also propose what is now called the normalized Xavier initialization that scales the uniform distribution by: \begin{equation} \sqrt{\frac{6}{n_{l-1} + n_{l}}}. \end{equation} They claim that this initialization works better with layers that are unevenly sized. These initializations can be generalized to be used with a normal distribution instead. Using a zero mean and standard deviation of one, the Xavier initialization is scaled by: \begin{equation} \sqrt{\frac{2}{n_{l-1} + n_{l}}}. \label{eq_xavier_normal} \end{equation} We note that {\bf this is the weight initialization currently used by default in our CIANNA framework}.\\ Another initialization that is frequently used is the He initialization that follows the same idea but with a wider variance, by scaling the uniform or normal distribution by $\sqrt{6/n_{l-1}}$ and $\sqrt{2/n_{l-1}}$, respectively \citep{he_delving_2015}. This initialization is claimed to be more efficient on very deep ReLU-activated networks, while in practice it appears that both He and Xavier initializations are commonly used in such applications.\\ Regarding the bias value, there are many approaches for ReLU in convolutional layers that differ in their implementation. If the bias value itself changes during the learning phase, then it is often initialized to zero at the beginning of the training. The other approach, which uses a constant bias value and an adaptive weight, can emulate the same behavior with a weight set to zero for the bias at the initialization. It is common to rather use a small positive value in order to allow the ReLU to not start in an inactive state. {\bf Our approach to this problem has been to use a bias value of $\bm{ 0.1}$} with an associated weight that is randomly generated following the previously described rules for every neuron that uses a ReLU activation.
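As a minimal sketch, the two normal-distribution variants discussed above (this is not the CIANNA implementation, only an illustration with our notations) can be written as:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def xavier_normal(n_prev, n_next):
    # Normalized Xavier/Glorot initialization with a normal distribution:
    # zero mean, standard deviation sqrt(2 / (n_prev + n_next)).
    return rng.normal(0.0, np.sqrt(2.0 / (n_prev + n_next)),
                      size=(n_next, n_prev))

def he_normal(n_prev, n_next):
    # He initialization: wider variance, std = sqrt(2 / n_prev),
    # often preferred for very deep ReLU-activated networks.
    return rng.normal(0.0, np.sqrt(2.0 / n_prev), size=(n_next, n_prev))
\end{verbatim}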
\clearpage \subsubsection{Additional regularization: Dropout and momentum} \label{dropout_sect} In the context of modern ANN, regularization denotes any technique that is used to prevent overfitting and to even out the generalization of the network between training data points. These methods aim at gaining a better representation of the larger scale network prediction \citep{Goodfellow_2016}. Most of the modern CNN architectures use a regularization technique called dropout \citep{dropout_2014}. It consists in randomly removing a given proportion $d_r$ of neurons at each training step, taking into account that the $d_r$ proportion can be different for each layer. A dropout of $d_r = 0.6$ means that $60\%$ of neurons are dropped. It noticeably prevents overfitting by forcing the network weights to adapt in a more general way, preventing given weights from becoming the only, overspecialized representation of a training dataset specificity. This type of regularization enables the network to work with smaller datasets without overfitting.\\ \vspace{-0.2cm} Usually, this technique is used only on dense layers at the end of the CNN architecture. This presents a very interesting side effect, which is to better specialize the filters themselves, especially if dropout is used in combination with a momentum (Sect.~\ref{sect_momentum}). This way each filter really becomes responsible for one robust pattern as presented in Section~\ref{convolution_stacking}. Without dropout, multiple filters can share the responsibility for a pattern that could otherwise be represented by just one of them, making the interpretation of a filter more complex. Despite having fewer neurons to compute, using dropout usually makes the training process require many more epochs to converge due to the multiple suitable combinations of weights that must be learned to account for the random shutdown of neurons. However, the improved robustness of the network representation is often considered worth the slower training.\\ \vspace{-0.2cm} Usually, the first dense layers after the convolution part present a high dropout rate with up to $d_r = 0.9$, which is progressively reduced for layers closer to the output with a value depending on the usage. A value around $d_r=0.5$ is often adopted for the last dense layer in classification cases, but smaller values of $0.2$ or $0.1$ are sometimes adopted with great efficiency on suitable applications like regression cases \citep{Gal_2015}. Naturally, the output layer does not have dropout since it encodes the problem prediction. One side effect of the use of a high dropout value is that the number of neurons must be significantly raised. A common heuristic is to have an "active" number of neurons that is the equivalent of the same network without dropout, but it might be insufficient in some cases. Still, having $n/(1-d_r)$ neurons, where $n$ is the number of neurons necessary for a non-dropout model, is a useful minimal approach. We also note that a high momentum ($\alpha \geq 0.9$) is often advised to minimize undesirable dropout effects like gradients canceling each other and an increased noise in the global gradient descent. This can also be efficiently combined with a larger learning rate.\\
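As an illustration, a minimal sketch of a training-time dropout mask, following the above convention where $d_r$ is the dropped proportion (the layer sizes are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, d_r, training=True):
    # Drop a proportion d_r of the neurons during training; at prediction
    # time all neurons are kept and the weights are scaled instead
    if not training:
        return activations
    mask = rng.random(activations.shape) >= d_r  # keep probability 1 - d_r
    return activations * mask

# Decreasing rates toward the output, as discussed above
h1 = dropout(np.ones(1024), d_r=0.9)
h2 = dropout(np.ones(256), d_r=0.5)
\end{verbatim}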
\vspace{-0.2cm} Finally, once the network has been trained using dropout, there are two opposite approaches to compute the prediction of the network. The first one, which is recommended most of the time, is to scale down all the weights by the $(1-d_r)$ factor and then to use all the network weights without removing neurons. This way the global sum of the weights remains scaled similarly as during the training process, but all representations are averaged. This is usually the most efficient way to perform the prediction. The other approach is to perform a Monte-Carlo estimation of the output by performing several predictions with a different dropout activation for each layer. This method requires several predictions of the network before being close to the average predicted by weight scaling, but it is equivalent to sampling a prediction probability distribution, just like a Bayesian approach (see Sect.~\ref{dropout_error}). We stress that we only scratched the surface of the complexity and capacity of dropout for many artificial neuron based ML applications, more details and examples being presented in the reference paper from \citet{dropout_2014}. \subsubsection{Implications for GPU formalism} \label{gpu_cnn} We exposed in Section~\ref{matrix_formal} the advantages of a matrix formalism to speed up the network operations. Following the same objective, we discuss here a possible approach to express the convolutional layer operations as matrix multiplications. The element-wise multiplication between a filter and several sub-regions of an image is already very close to the operations performed in a matrix multiplication. Some methods also take advantage of its underlying SIMD structure to construct GPU kernels that perform the convolution operation directly. However, it is often more efficient to use the already strongly optimized linear algebra libraries.\\ \begin{figure}[!t] \hspace{-1.6cm} \begin{minipage}{1.2\textwidth} \centering \includegraphics[width=1.0\hsize]{images/im2col_fig.pdf} \end{minipage} \caption[Illustration of the im2col operation]{Illustration of the im2col (here, rather im2row) operation performed on a depth $d=2$ input image of size $w_i = h_i = 5$ using filters with $f_s=2$ represented in red. Elements from input depth channels 1 and 2 are colored in blue and orange, respectively. The expanded input presents $w_o \times h_o$ columns, which correspond to the flattened dimension of one activation map, and $d\times f_s \times f_s$ rows, which correspond to a flattened sub-region. $W$ represents the flattened filter matrix, with $d_o$ independent filters, that is multiplied by the expanded input to produce the activation maps. The red and blue dashed rectangles highlight two specific sub-regions that go through the conversion.} \label{im2col_fig} \end{figure} \begin{sidewaysfigure} \centering \includegraphics[width=0.82\hsize, height=0.75\paperwidth]{images/matricial_form_cnn_fig.pdf} \caption[Graphical representation of a CNN matrix batch training]{Graphical representation of the matrix batch operation for one convolutional layer. The large red arrows indicate the order of the operations. Large $\times$ symbols are matrix multiplications.
The matrix sizes are as follows: $b$ is the batch size, $w_i$, $h_i$ and $d_i$ the width, height and depth of the input image, respectively, $w_o$, $h_o$ are the width and height of the activation maps, $d_o$ is the number of filters and activation maps, and $f_s$ is the filter size in both spatial dimensions.} \label{matricial_form_fig} \end{sidewaysfigure} The most widely adopted approach is based on a representation of the weight filters as a matrix whose columns are the flattened filters ($f_s\times f_s \times d_i$ elements each), with as many columns as the number of filters ($d_o$) in the layer. To correspond to this weight matrix, the input matrix must be composed of rows that represent all the elements of each sub-region flattened accordingly. Besides, multiple images from the same batch can be concatenated in the same expanded representation, resulting in a $w_o \times h_o \times b$ number of rows. It is then possible to perform all the convolution operations of the batch using one single matrix operation \citep{Chellapilla_2006}. Using this representation, each sub-region of the input is multiplied by each filter of the layer and produces an output matrix where each column is the flattened activation map of one filter. This conversion operation is often referred to as "im2col", a name that stands for all the operations of this type, even if the operation we describe might be better suited by the "im2row" name. We illustrate this conversion in Figure~\ref{im2col_fig} where an input image with two depth channels of $5 \times 5$ pixels is converted into an expanded version that corresponds to filters of size $f_s = 2$ with the same number of dimensions. Two dashed areas highlight specific sub-regions that go through the conversion.\\
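To fix ideas, a naive NumPy version of this conversion (a reference sketch, not our optimized CUDA kernel discussed below) can be written as follows, the convolution then reducing to a single matrix multiplication:
\begin{verbatim}
import numpy as np

def im2row(image, f_s, stride=1):
    # Expand an image of shape (d, h_i, w_i): each row is one flattened
    # sub-region of d * f_s * f_s elements
    d, h_i, w_i = image.shape
    h_o = (h_i - f_s) // stride + 1
    w_o = (w_i - f_s) // stride + 1
    rows = np.empty((h_o * w_o, d * f_s * f_s))
    for y in range(h_o):
        for x in range(w_o):
            sub = image[:, y*stride:y*stride+f_s, x*stride:x*stride+f_s]
            rows[y * w_o + x] = sub.ravel()
    return rows

rng = np.random.default_rng(0)
image = rng.random((2, 5, 5))      # d=2 input of 5x5 pixels, as in the figure
W = rng.random((2 * 2 * 2, 4))     # d * f_s * f_s rows, d_o = 4 flat filters
maps = im2row(image, f_s=2) @ W    # activation maps, shape (h_o * w_o, d_o)
\end{verbatim}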
While it is relatively easy to construct the weight matrix in the right format and to conserve it during training, it is much more difficult to convert the input into the right form. The first problem is that this representation uses much more memory than the regular image form due to the redundancy of elements caused by the overlap between the sub-regions. This is a common algorithmic trade-off that consists in increasing the memory usage in order to improve computational performance. But, even if it is possible in some cases to keep the expanded form in memory for simple problems, most of the time the images must be converted dynamically to lower the memory footprint. The second problem is that the output format of the activation maps corresponds to regular flattened images. This means that they must also be converted into the expanded form if the next layer is a convolutional one, and successively for all the convolutional layers of the network, increasing the memory issue even more.\\ However, even with a very poor conversion performance the large single matrix operation is so much faster than a raw computation of the convolution that it is almost always worth the additional cost of the conversion. There are many reasons for the raw convolution to be slow, the main one being that the input elements are poorly arranged in memory for this operation, inducing regular cache-misses\footnote{Cache-miss refers to the CPU being forced to load data from the host memory rather than from the cache. Since memory copies are made by blocks, operating on contiguous data in memory allows the CPU to read the subsequent data from the cache loaded by a previous access.}. This is completely solved with the expanded matrix representation that fully benefits from the cache and redundancy optimizations of matrix multiplication. Again, we insist on the fact that in almost all cases it is faster to use this approach anyway, but it makes the conversion operation the most probable bottleneck of the whole network; therefore, any improvement in the conversion computational cost results in a large global network performance increase. There are many implementations of this function that make use of various hardware capabilities, with many of them being open source while others are protected inside closed frameworks. This function is so important in modern networks that it is the object of several studies and publications; a nice empirical method comparison can be found in \citet{Anderson_2017}. Still, we propose our own im2col implementation that takes the form of a CUDA kernel \iffalse that is described in Appendix~\ref{cianna_app} \fi. While we kept a simple approach, we minimized the number of memory operations with only one read per input pixel and just the absolutely necessary number of assignments. Despite this, the kernel remains completely memory bound, which indicates that it is as optimized as it can be without using advanced shared GPU memory management and low-level cache operations. We emphasize that, depending on the network architecture, our im2col implementation is efficient enough not to be the computational bottleneck, with most of the computation time spent in matrix multiplication even on a high-end GPU.\\ Using this approach we now have a method to convert convolutional layers into efficient matrix operations. Each subsequent layer can then use the same approach to construct the full network. The pooling operations are just a SIMD operation on all sub-regions and do not require any additional treatment. It is possible to use the matrix formalism for fully connected layers described in Section~\ref{matrix_formal} for the end of a CNN, when necessary. Regarding the error backpropagation, we exposed that it can be expressed as a classical convolution, meaning that the same im2col routine can be used with minimal adjustments to propagate the error in a convolutional layer. We illustrate this matrix formalism in Figure~\ref{matricial_form_fig}, which presents the forward step, the backpropagation, and the weight update for a convolutional layer, the latter consisting in the multiplication of the expanded input and the expanded output error. The dimensions of each element are provided in the figure following the same notations as in the previous sections. The figure also shows the addition of a bias value in the expanded image form, as before, to minimize the number of kernel launches that must be performed. Still, unlike the matrix formalism we presented for fully connected layers (Fig.~\ref{matricial_form_fig}), this representation is not exhaustive: to limit the complexity of the figure, we omitted many small adjustments that are needed to achieve a computationally efficient matrix operation.\\ \subsubsection{Example of a classical image classification} \label{mnist_example} To illustrate the classification capacity of a CNN architecture and connect the exposed theoretical aspects to a real simple example, we will use a very well known dataset named \href{http://yann.lecun.com/exdb/mnist/}{MNIST} (Modified NIST's Special Database) \citep{Lecun-95, lecun-98}.
It consists of a set of handwritten digits from 500 different writers, expressed as $28 \times 28$ grayscale images (0-255 pixel values) centered on their respective centers of mass. It is freely accessible in the form of a 60000-image training dataset and a 10000-image test set. We stress that although the digits are not perfectly equally represented, the proportions are sufficiently balanced to not cause any issue. Figure~\ref{mnist_digit_fig} shows the first 36 images of the training dataset with the corresponding labels. It has been the support of some of the first CNN architectures that automated the filter selection by training the filters like the rest of the network. It has then been used by many others to test the efficiency of architectures or even non-ANN classification methods due to its free access and balanced difficulty. The results for various architectures are listed on the dataset website along with the associated publications for each of them \citep[for example][]{lecun-98, Belongie_2002, Ciresan_2012}, with best results near a $0.23\%$ error rate, to be compared to a human performance estimated at a $0.2\%$ error rate. For comparison, a single-layer linear ANN gets only a $12\%$ error rate on this dataset. These days, MNIST remains a widely adopted benchmark set for CNN applications and is used for many pedagogical illustrations.\\ \begin{figure}[!t] \centering \includegraphics[width=0.8\hsize]{images/mnist_digit_fig.pdf} \caption[Excerpt of the first 36 images of the MNIST dataset]{Excerpt of the first 36 images of the MNIST dataset. Each image has a size of $28 \times 28$ pixels and is encoded using grayscale with integer values in the range 0-255. The corresponding targets are shown in red for each image.} \label{mnist_digit_fig} \end{figure} For this example we used our framework CIANNA to construct a simple CNN that is largely inspired by the LeNet-5 from \citet{lecun-98}, briefly described in Section~\ref{cnn_architectures}, which achieved a $0.95\%$ error rate on this dataset. We note that, for our application, we did not use any pre-processing on the input data, like dataset augmentation, image distortion or deskewing, and just used the raw MNIST training dataset. Firstly, we used a convolutional layer that is composed of 6 filters of size $f_s=5$ with a stride of $S=1$ and a padding of $P=2$, directly followed by a max-pooling of size $P_o = 2$. It results in a set of $14\times 14$ activation maps that use a leaky ReLU with $\lambda = 0.01$, and goes through a second convolutional and pooling combination using the same parameters but with 16 filters, resulting in $7\times 7$ activation maps. A last convolutional layer is added without pooling, using 48 filters of size $f_s=3$ and no padding ($P=0$). It produces $5\times 5$ activation maps. This output is flattened into a $5\times 5 \times 48 = 1200$ input vector that is connected to two fully connected layers with $n = 1024$, $d_r = 0.5$ and $n = 256$, $d_r = 0.2$, respectively, that both use the same leaky ReLU activation as the convolutional layers. The network ends with a fully connected layer of $o=10$ using a Softmax activation with a cross-entropy error (Sect.~\ref{proba_class_intro}). The network is trained using mini-batches of size $b=64$, a learning rate of $2\times 10^{-4}$ that slowly decays to $1 \times 10^{-4}$, and a momentum of $\alpha=0.9$, for a total of 40 epochs.
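For reference, an approximate declaration of this architecture using the Keras framework (to which we compare below) could look as follows; this is an illustrative sketch, the actual CIANNA interface being shown in Appendix~\ref{cianna_app}:
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input((28, 28, 1)),
    layers.Conv2D(6, 5, padding="same"), layers.LeakyReLU(0.01),
    layers.MaxPooling2D(2),                       # 14x14 maps
    layers.Conv2D(16, 5, padding="same"), layers.LeakyReLU(0.01),
    layers.MaxPooling2D(2),                       # 7x7 maps
    layers.Conv2D(48, 3, padding="valid"), layers.LeakyReLU(0.01),
    layers.Flatten(),                             # 5*5*48 = 1200 inputs
    layers.Dense(1024), layers.LeakyReLU(0.01), layers.Dropout(0.5),
    layers.Dense(256), layers.LeakyReLU(0.01), layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=2e-4,
                                             momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
\end{verbatim}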
\begin{table}[!t] \small \centering \caption{Confusion matrix for the MNIST prediction using our CNN implementation.} \begin{minipage}[!t]{1.2\textwidth} \hspace{-1.6cm} \begin{tabularx}{1.0\hsize}{r l |*{10}{m}| r } \multicolumn{2}{c}{}& \multicolumn{10}{c}{\textbf{Predicted}}&\\ \cmidrule[\heavyrulewidth](lr){2-13} \parbox[l]{0.2cm}{\multirow{13}{*}{\rotatebox[origin=c]{90}{\textbf{Actual}}}} & Class & C0 & C1 & C2 & C3 & C4 & C5 & C6 & C7 & C8 & C9 & Recall \\ \cmidrule(lr){2-13} & C0 & 976 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 99.6\% \\ & C1 & 0 & 1132 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 99.7\% \\ & C2 & 1 & 1 & 1027 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 99.5\% \\ & C3 & 0 & 0 & 1 & 1004 & 0 & 3 & 0 & 1 & 1 & 0 & 99.4\% \\ & C4 & 0 & 0 & 1 & 0 & 972 & 0 & 1 & 0 & 1 & 7 & 99.0\% \\ & C5 & 0 & 0 & 0 & 4 & 0 & 886 & 1 & 0 & 0 & 1 & 99.3\% \\ & C6 & 3 & 2 & 0 & 0 & 1 & 2 & 949 & 0 & 1 & 0 & 99.1\% \\ & C7 & 0 & 2 & 3 & 0 & 0 & 0 & 0 & 1020 & 1 & 2 & 99.2\% \\ & C8 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 968 & 1 & 99.4\% \\ & C9 & 0 & 0 & 0 & 0 & 3 & 1 & 0 & 4 & 0 & 1001 & 99.2\% \\ \cmidrule(lr){2-13} & Precision & 99.6\% & 99.6\% & 99.2\% & 99.5\% & 99.4\% & 99.2\% & 99.6\% & 99.2\% & 99.3\% & 98.9\% & 99.35\% \\ \cmidrule[\heavyrulewidth](lr){2-13} \end{tabularx} \end{minipage} \vspace{-0.1cm} \label{mnist_confmat} \end{table} With this simple network we reached a $0.65\%$ error rate on the test set at around epoch 30, corresponding to a $\sim 3$ minute training time on an Nvidia \href{https://www.techpowerup.com/gpu-specs/quadro-p2000-mobile.c3202}{Quadro P2000} mobile. The corresponding confusion matrix (Sect.~\ref{class_balance}) is shown in Table~\ref{mnist_confmat}, which reports a global accuracy of $99.35\%$. It also highlights some specific confusion between digits that are more alike, for example between C4 and C9. More importantly for this manuscript, this result demonstrates the overall effectiveness of our implementation and choice of optimization, as it is very competitive with similarly deep state-of-the-art network implementations. For example, the same network architecture declared using the Keras framework with the much more advanced Adam gradient descent optimization \citep{kingma2014} does not achieve better accuracy results. We illustrate the use of the CIANNA interfaces on this example in Appendix~\ref{cianna_app}, where we also use this example to make a performance comparison between CIANNA and Keras (TensorFlow). \subsection{Use of the dropout to estimate the uncertainty in a regression case} \label{dropout_error} An interesting side-usage of dropout (Sect.~\ref{dropout_sect}) in ANN is to provide an uncertainty measurement of the prediction. Indeed, a network trained using dropout can be used to make several predictions with different random selections of neurons, performing a Monte-Carlo estimate of the network prediction \citep{dropout_2014}. In fact, the dropout of neurons in dense layers forces the network to learn a probability distribution of the output, each random selection being responsible for a sub-set of this distribution. Many applications that need this feature use Bayesian Neural Networks \citep{MacKay_92} that achieve this task using modern variational inference \citep{Titsias_2014}, but often at a significant additional computational cost. However, it has been demonstrated that a regular ANN with dropout can achieve similar predictions while being much more efficient in terms of computational performance \citep{Blundell_2015, Gal_2015}.
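In a Keras-like framework where the model can be called with dropout kept active, this Monte-Carlo estimate reduces to a few lines; the following is a sketch, assuming that calling the model with training=True performs one stochastic forward pass:
\begin{verbatim}
import numpy as np

def mc_dropout_predict(model, x, n_samples=100):
    # Keep dropout active at prediction time and aggregate several
    # stochastic forward passes into a mean and an uncertainty
    preds = np.stack([model(x, training=True).numpy()
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
\end{verbatim}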
We illustrate this capacity here using the simple one-dimensional example from Section~\ref{regression_expl}. To better show the uncertainty measurement we slightly raised the added noise dispersion around our original function to $\sigma = 0.15$. The network we used here is composed of two hidden layers with a leaky-ReLU activation that contain $n=64$ and $n=48$ neurons, using a dropout rate of $d_r = 0.6$ and $d_r=0.5$, respectively. Having larger layers than these values with a higher dropout rate was shown to degrade the global prediction performance. In contrast, smaller layers were able to get a similar global prediction but the uncertainty appeared to be underestimated in these cases. As expected, overly large layers with a too small dropout rate led to overtraining. On this example the optimal learning rate was $\eta = 0.002$ using a batch size of $b_s = 64$ and a momentum of $0.8$.\\ The results are presented in Figure~\ref{drop_error_regre}, corresponding to 100 predictions using the trained network with random neuron exclusion. The figure shows two different representations, the first one using the mean and the standard deviation from all the predictions of each input point in our test set, and the second one by drawing a histogram of the predicted values for each input point. The two representations reveal that the global prediction mostly follows the original function in a good confidence interval. The points near the limits of the training interval ($X > 4$ and $X < -4$) are less constrained, as in the regular case, due to the combination of few training points and quick local changes of the original function at these places, i.e. a regular boundary effect. It is also visible that the network confidence interval is narrower in regions that have a steeper slope. The histogram representation is very informative since it directly represents the probability distribution, but it is less well suited for the visualisation of higher-dimension regressions. \begin{figure}[!t] \centering \includegraphics[width=0.95\hsize]{images/drop_error_regre.pdf} \caption[Error prediction using dropout in a 1D regression case]{Error prediction using dropout in a 1D regression example (from Sect.~\ref{regression_expl}) using a two fully-connected hidden layer network. {\it Top:} Average of 100 predictions using dropout. The gray area shows the uncertainty computed as the standard deviation of all the predictions at a given abscissa. {\it Bottom:} 2D histogram of the 100 predictions. Each input value corresponds to a vertical histogram of prediction values.} \label{drop_error_regre} \end{figure} \clearpage \section{Extinction profile reconstruction for one line of sight} \label{galmap_problem_description} In this section we describe how the discussed CNN formalism can be used to reconstruct an extinction profile based on the comparison between observed and modeled data (see Sect.~\ref{extinction_with_bgm_intro}). We explain our choice of observed quantities for this comparison and describe various processing steps that had to be performed in order to make the BGM predictions as realistic as possible. We also detail our methodology to construct mock extinction profiles that are sufficiently representative of the interstellar dust distribution to train our network. We then describe various effects, either from the previous construction or from the network architecture itself, that can affect the prediction.
Finally, we combine these elements to perform a first CNN prediction using one LOS and discuss its generalization capacity to neighboring lines of sight in the Galactic plane. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \vspace{0.5cm} \subsection[Construction of a simulated 2MASS CMD using the BGM]{Construction of a simulated 2MASS color-magnitude diagram using the Besançon Galaxy model} \label{cmds_construction_section} \subsubsection{Choice of BGM representation and observed quantity} \label{choice_of_BGM_cmd} We stated in Section \ref{extinction_with_bgm_intro} that we aim at {\bf using the Besançon Galaxy Model to reconstruct extinction profiles by comparison to observed quantities} that are also predicted by the model. We also explained that, because the BGM is a statistical stellar synthesis model, we have to use observations in a statistical form as well. As a first step, for the sake of simplicity, we chose to use solely 2MASS data. We thus take advantage of the potentially large distance range permitted by the relatively low extinction in the near IR, and of the possibility of a direct comparison to the other work we are involved in \citep{Marshall_2020}. There are 3 bands in the 2MASS survey that can also be predicted by the BGM, namely J ($1.235 \mu m$), H ($1.662 \mu m$) and K ($2.159 \mu m$). To constrain the extinction, it is better to have one color and one magnitude, to be sensitive to both the reddening and the brightness decrease that it induces. To maximize the leverage in the color dimension (i.e. the difference in extinction between two wavelengths) we chose to use a [J-K]-[K] CMD, which we already illustrated in Figures~\ref{expl_observed_cmds} and \ref{obs_model_ext_comparison}.\\ \begin{figure*}[!t] \hspace{-1.7cm} \begin{minipage}{1.2\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf Giant stars (Class III)} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_CL3.png} \end{subfigure} \hspace{0.3cm} \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf Main sequence stars (Class V)} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_CL5.png} \end{subfigure}\\ \vspace{0.3cm} \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf All modeled stars} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_complete.png} \end{subfigure} \end{minipage} \caption[Simulated 2MASS CMDs for giant and main sequence stars]{Color-magnitude diagram simulated by the BGM for mock 2MASS data, without extinction, in the direction $l = 280$ deg, $b = 0$ deg. The contributions of giant stars and main sequence stars are shown separately and together. The raw BGM values are shown in the left frame of each case.
The right frame of each case shows the same data after adding simulated 2MASS noise.} \vspace{-0.5cm} \label{stellar_types_in_CMD} \end{figure*} \begin{figure*}[!t] \hspace{-2.1cm} \begin{minipage}{1.22\textwidth} \centering \begin{subfigure}[!t]{0.41\textwidth} \caption*{$d = 0 \rightarrow 1$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d0-1.png} \end{subfigure} \hspace{1cm} \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 5 \rightarrow 6$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d5-6.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 1 \rightarrow 2$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d1-2.png} \end{subfigure} \hspace{1cm} \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 6 \rightarrow 7$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d6-7.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 2 \rightarrow 3$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d2-3.png} \end{subfigure} \hspace{1cm} \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 7 \rightarrow 8$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d7-8.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 3 \rightarrow 4$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d3-4.png} \end{subfigure} \hspace{1cm} \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 8 \rightarrow 9$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d8-9.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 4 \rightarrow 5$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d4-5.png} \end{subfigure} \hspace{1cm} \begin{subfigure}[!t]{0.41\textwidth} \caption*{ $d = 9 \rightarrow 10$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Diag_K_JmK_d9-10.png} \end{subfigure} \end{minipage} \caption[Effect of distance on a 2MASS CMD]{Effect of distance on a 2MASS CMD. The data are the same as in Fig.~\ref{stellar_types_in_CMD}, but split in 1 kpc bins of distance.} \label{distance_slices_CMD} \end{figure*} \newpage We highlight that the various stellar populations distribute differently in this diagram. This is an important aspect to understand which classes of stars are the most useful to the profile reconstruction. For example, without extinction, the giant stars mainly align around a color of [J-K]$\simeq$0.7, forming a continuous vertical distribution over almost all our K magnitude range. The rest of the diagram mostly corresponds to main sequence stars, with the highest density being due to the relatively low-mass star population centered around [J-K] = 0.25 and [K]$\simeq 15$ mag. These two populations are represented separately in Figure~\ref{stellar_types_in_CMD} for a modeled 2MASS CMD without extinction, with and without simulated noise (the details on the noise modeling are given in Section~\ref{ext_profile_and_cmd_realism}). Additionally, the respective proportions of each of them vary as a function of their location in the Milky Way, as parametrized by the BGM. This extinction-free diagram is also shaped by the effect of distance to the Sun. Figure~\ref{distance_slices_CMD} shows the corresponding CMD for slices of distance for an example LOS at $l = 280$ deg, $b = 0$ deg. It shows that the primary effect of increasing distance
(i.e. a vertical shift) also leads to a strong increase in the ratio of giant to main sequence stars, since the latter are too faint to be detected by 2MASS beyond $\sim 8$ kpc. These properties highlight again that the amount of information contained in a simple 2D image is considerable, and that powerful statistical techniques are needed to disentangle this complex information.\\ This choice of using a multi-faceted CMD as input for our method makes clear that, due to the properties exposed in Section~\ref{image_process_section}, the CNN architecture should be more suitable than a classical ANN for this task (see Sect.~\ref{input_output_cnn_dim}). Indeed, a CMD can be considered as an image with the pixel value encoding the quantity in each bin when using a sufficient numerical range, and in the comparison we mostly want to estimate the translation of specific patterns for each stellar population. In the present study we used CMDs of $64\times 64$ pixels with $-0.5<[J-K]<6.1$ and $10<[K]<16$. This choice of resolution and limits is discussed in Section~\ref{input_output_cnn_dim}. However, for a CNN architecture to properly predict on real data we have to assess to what extent the BGM prediction realistically represents an observation. This is of major importance because ANN can easily be biased by systematic differences in the training sample or by non-representative proportions (as seen in Sects.~\ref{class_balance}, \ref{training_test_datasets}, or \ref{detailed_feature_space_analysis_ON}), for example by populating a part of the training CMDs that is never present in real observations, or in the opposite case if the training data lack constraints on parts of the CMDs that contain information in the observations. \subsubsection{Reproducing realistic observations: uncertainty and magnitude cuts} \label{ext_profile_and_cmd_realism} \begin{table} \centering \caption{Uncertainty fitting free parameters for all 2MASS bands} \vspace{-0.1cm} \renewcommand{\arraystretch}{1.1} \begin{tabularx}{0.80\hsize}{l @{\hskip 0.05\hsize} @{\hskip 0.05\hsize}*{3}{Y}} \toprule & a & b & c\\ \midrule J & $7.253 \times 10^{-8}$ & $8.590 \times 10^{-1}$ & $2.258 \times 10^{-2}$\\ H & $1.807 \times 10^{-8}$ & $9.894 \times 10^{-1}$ & $2.802 \times 10^{-2}$\\ $\mathrm{K_s}$ & $2.242 \times 10^{-7}$ & $8.768 \times 10^{-1}$ & $2.044 \times 10^{-2}$\\ \bottomrule \end{tabularx} \label{table_2MASS_uncertainty_fitting} \end{table} On the one hand, an astronomical observation can be altered in several ways during the acquisition process. Observational instruments have sensitivity limits inducing incompleteness and measurement uncertainty, and can even have systematic biases. Usually, these effects are well documented for each instrument, which makes it possible to take them into account in the data analysis. On the other hand, a model also has biases or incompleteness of other types that are often difficult to assess, even for models that are constrained by observations. Our objective here is to make the training CMDs and the observed ones as alike as possible. There are two main properties that must be evaluated: the measurement photometric uncertainty, and the magnitude detection limit cut of the telescope. In our approach, these quantities are estimated individually.
Anticipating Section~\ref{2mass_single_los}, we note that we will focus on the Milky Way disk between galactic latitudes $|b|<5$ deg and galactic longitudes $257 < l < 303$ deg, centered on $l=280$ deg.\\ Regarding the magnitude cuts, we fitted data from the 2MASS point source catalog. For this, we downloaded the stars from a 1 $\mathrm{deg^2}$ region, centered on $l=280$ deg and $b=0$ deg. We excluded the stars for which one or more of the J, H, $\mathrm{K_s}$ bands was missing. We then fitted the magnitude histogram for each band individually using the following analytical formula: \begin{equation} f(x) = 0.5 a x^\alpha {\cal S}(x) \quad {\rm with} \quad {\cal S}(x) = 1 + \tanh \Bigg( b \times \frac{x_{50}-x}{x_{50}-x_{90}} \Bigg) \label{magnitude_cut_fit} \end{equation} where $a$ and $\alpha$ are free parameters that correspond to the first part of the function following a power law, used as a simple model for the underlying star distribution. The constant $b=\arctanh ( 0.8 ) $ is fixed, and $x_{50}$ and $x_{90}$ are free parameters that correspond to the abscissa values at half and $90\%$ increase of the selection function ${\cal S}$. To use this selection in our mock CMDs, for each star we drew a value randomly from the selection function according to the star magnitude, for each band involved in the CMD. Therefore the shape of the selection cut is reproduced statistically. Figure~\ref{2MASS_cut_fitting} shows the observed star distribution histograms (in blue) and the best fit obtained for each band. \\ To evaluate the photometric uncertainty for each 2MASS band we used the same 1 $\mathrm{deg^2}$ region centered on $l=280$ deg, $b=0$ deg. Similarly to the cut fitting, we excluded stars for which at least one of the J, H, $\mathrm{K_s}$ bands was missing, but also the stars that do not have all the respective uncertainties. For each band we represented the corresponding magnitude-uncertainty diagram. Figure~\ref{fig_2MASS_uncertainty_fitting} shows that the stars mostly distribute following an exponential law. Following the example of \citet{Robin_2003}, we fitted the distribution using the following form: \begin{equation} \sigma(x) = a \exp\big (b \, x\big) + c \label{uncertainty_fit} \end{equation} where $x$ is the magnitude of the fitted band and $a$, $b$, and $c$ are free parameters. To overcome the fact that the number of outliers greatly increases toward the higher magnitudes, we first computed the running median (RM) of the distribution, which has the advantage of being robust against outliers. In practice, the RM was evaluated for 100 magnitude bins of 0.3 mag evenly distributed between the 0.1 and 99.9 percentiles of the magnitude interval. The RM has then been fitted using Equation~\ref{uncertainty_fit} without weighting, in order to prevent the less represented magnitude bins from being less constrained due to their smaller proportion. The corresponding RM values and our fit results are illustrated in Figure~\ref{fig_2MASS_uncertainty_fitting}. Table~\ref{table_2MASS_uncertainty_fitting} summarizes the free parameter values obtained for the three 2MASS bands. To compute mock photometric errors, the $\sigma$ value of each star was computed using the best fit $a$, $b$, and $c$ values, and was used to draw a random Gaussian error that was added to the errorless magnitude computed by the BGM.\\
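Put together, a schematic NumPy version of this mock observation step (a random selection draw following Equation~\ref{magnitude_cut_fit}, then the magnitude-dependent Gaussian error of Equation~\ref{uncertainty_fit}; the fitted $a$, $b$, $c$, $x_{50}$, $x_{90}$ values are inputs) could be:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def selection_prob(m, x50, x90):
    # Detection probability 0.5 * S(m) from the fitted selection function
    b_sel = np.arctanh(0.8)
    return 0.5 * (1.0 + np.tanh(b_sel * (x50 - m) / (x50 - x90)))

def mock_observe(mag, a, b, c, x50, x90):
    # Statistically reproduce the 2MASS selection cut, then add the
    # magnitude-dependent Gaussian photometric error
    kept = rng.random(mag.shape) < selection_prob(mag, x50, x90)
    sigma = a * np.exp(b * mag) + c
    return (mag + rng.normal(0.0, sigma))[kept]
\end{verbatim}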
\begin{figure}[!t] \centering \includegraphics[width=0.99\textwidth]{images/Selection_function_2MASS_l280_b0_1deg.png} \caption[Fitting of the cut in magnitude for the three 2MASS bands]{Fitting of the cut in magnitude for the three 2MASS bands. The blue histograms show the observed distribution, the fitted models are in red. The gray area shows the range of magnitude values included in the fit.} \label{2MASS_cut_fitting} \vspace{-0.4cm} \end{figure} \begin{figure*}[!t] \hspace{-1.9cm} \begin{minipage}{1.23\textwidth} \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/2MASS_Jmag_2MASS_l280_b0_1deg.txt.png} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/2MASS_Hmag_2MASS_l280_b0_1deg.txt.png} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/2MASS_Kmag_2MASS_l280_b0_1deg.txt.png} \end{subfigure} \end{minipage} \caption[Fit of 2MASS uncertainties]{Fit of 2MASS uncertainties. The gray dots are 2MASS stars, the gray scale representing the star density in the diagram. The running median (blue dots) is fitted by an exponential model (orange line).} \label{fig_2MASS_uncertainty_fitting} \vspace{-0.4cm} \end{figure*} \clearpage \subsubsection{Simple extinction effect on the diagram} \label{simple_extinction_effect_cmd} Now that we have described the effects of observational noise and selection on the CMDs, we can build a realistic CMD without extinction. From this "bare" CMD we illustrate the effect of rudimentary extinction profiles in order to better understand what information the network will be tasked to extract. As we described in Section~\ref{extinction_with_bgm_intro}, a star that is present in a given CMD pixel at the beginning will translate toward a higher color and a fainter magnitude. If we consider the case with a single point-like cloud on the LOS, all stars that are in front of the cloud do not move at all, while all others are translated according to the extinction quantity of the cloud, as illustrated in Figure~\ref{simple_ext_examples}. In the same figure we also illustrate a simple two-cloud example in which the stars that are behind the clouds are affected by the cumulative extinction. The individual cloud extinction effect is especially visible in the vertical branch of the giant stars.\\ We note that with the typical angular resolution of our 3D extinction maps (15 arcmin per pixel), the sub-beam distribution of extinction is not uniform. For example, at 5 kpc, a 15 arcmin beam covers a physical size of $\sim 20$\,pc, which is enough to contain a whole molecular cloud, with its complex substructures (filaments, clumps, cavities, ...). To take this effect into account, we followed \citet{Marshall_2020} and modeled this so-called fractal structure of the ISM with a log-normal probability density function: \begin{equation} f(A_V) = \frac{1}{\sigma A_V \sqrt{2\pi}} \exp \left( - \frac{\log^2 (A_V)}{2\sigma^2} \right) \label{eq:lognorm_ext} \end{equation} where $A_V$ is the cumulative extinction from the star to the observer as obtained from our extinction profile. This value can be considered as the mean extinction in the beam. The constant $\sigma$ characterizes the width of the distribution. We adopted a value $\sigma = 0.4$, typical of the values estimated by \citet{Kainulainen_2009} in a score of nearby molecular clouds using near-IR 2D extinction maps derived from 2MASS data. In practice, we used this probability density function to randomly draw the actual value of $A_V$ of each star.
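In a sketch form, and assuming the log-normal is parametrized so that it is centered on the mean beam extinction obtained from the profile (our reading of Equation~\ref{eq:lognorm_ext}), this drawing step can be written as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def subbeam_av(av_mean, sigma=0.4):
    # Draw the effective A_V of each star from a log-normal distribution
    # centered on the mean cumulative extinction of the beam
    av_mean = np.asarray(av_mean, dtype=float)
    out = np.zeros_like(av_mean)
    pos = av_mean > 0.0
    out[pos] = rng.lognormal(np.log(av_mean[pos]), sigma)
    return out
\end{verbatim}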
The bottom frames of Figure~\ref{simple_ext_examples} show a comparison of the produced CMD using the same extinction profile but with a log-normal and a uniform extinction distribution, respectively.\\ \begin{figure*}[!t] \hspace{-1.4cm} \begin{minipage}{1.15\textwidth} \centering \begin{subfigure}{0.48\textwidth} \caption*{\bf \small No extinction} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_no-ext.png} \end{subfigure} \hspace{0.4cm} \begin{subfigure}{0.48\textwidth} \caption*{\bf \small 1 Cloud, $\bm{ A_{V} = 3}$ mag, $\bm{d = 2}$ kpc} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_Av3_d2_ln.png} \end{subfigure}\\ \vspace{0.6cm} \begin{subfigure}{0.48\textwidth} \caption*{\bf \small 1 Cloud, $\bm{A_{V} = 10}$ mag, $\bm{d = 2}$ kpc} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_Av10_d2_ln.png} \end{subfigure} \hspace{0.4cm} \begin{subfigure}{0.48\textwidth} \caption*{\bf \small 1 Cloud, $\bm{A_{V} = 10}$ mag, $\bm{d = 4}$ kpc} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_Av10_d4_ln.png} \end{subfigure}\\ \vspace{0.6cm} \begin{subfigure}{0.48\textwidth} \caption*{\bf \small 2 Clouds, $\bm{A_{V} = 1.5, 6 }$ mag, $\bm{d = 1, 6}$ kpc} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_2clouds_ln.png} \end{subfigure} \hspace{0.4cm} \begin{subfigure}{0.48\textwidth} \caption*{\bf \small Uniform ext - 2 Clouds, $\bm{A_{V} = 1.5, 6 }$ mag, $\bm{d = 1, 6}$ kpc} \includegraphics[width=\textwidth]{images/CMD_dissection/2MASS_2clouds_u.png} \end{subfigure} \end{minipage} \caption[Effect of individual clouds on the 2MASS CMD]{Effect of individual clouds on the 2MASS [J-K]-[K] CMD. The extinction is modeled as a log-normal distribution, except in the bottom-right panel where a uniform extinction is used.} \label{simple_ext_examples} \end{figure*} We showed in Fig.~\ref{obs_model_ext_comparison} that extinction corresponds to a shift in CMD diagrams. This provides a very important insight about the expected resolution of the predicted extinction. Indeed, due to the position of the stars in the CMD and to the translation direction, it is challenging to retrieve more than 30 discrete values for the height of an extinction bin in the profile. In addition, the distance at which the extinction must be placed in the profile corresponds to the ratio between the number of reddened stars and the number of non-reddened stars. Consequently the best achievable distance resolution is related to the number of stars per CMD bin, which is of a few tens of stars per bin for the example LOS. We elaborate more on the consequences of the choice of CMD resolution in Section~\ref{input_output_cnn_dim}.\\ Despite the relatively easy understanding of the extinction effect on individual stars in this CMD, it is striking from Figures~\ref{stellar_types_in_CMD}, \ref{distance_slices_CMD} and \ref{simple_ext_examples} that the combination of the different stellar classes, the spatial variations of the Galactic structures, the observational noise, and the sub-beam extinction distribution makes the 2D CMD representation a very complex problem.
This justifies the need for a highly non-linear method that would be able to automatically extract all these correlations from the intricate CMD, just like our CNN formalism. \clearpage \subsection{Creating realistic extinction profiles for training} \label{GRF_profiles_section} We remind the reader that the objective of the study is to reconstruct the underlying dust distribution of each LOS, represented by a differential extinction profile. In this section {\bf we prepare a training sample to train an ANN} to perform this task. We used the BGM to generate many realistic star distributions on given lines of sight. Then, applying a given extinction profile to this star distribution, we construct a mock extincted CMD. The network will then take this CMD as its input and will learn to predict the extinction profile that was used, by taking it as its target.\\ To be capable of predicting extinction profiles from observed CMDs, the ANN must have been trained using realistic extinction profile examples. It means that we have to find a prescription to construct sufficiently realistic examples to train the network. One approach could be to use simulations of the interstellar medium \citep[e.g.][]{padoan_2017}, but it would require very large hardware facilities considering the fact that we are interested in large distances at the Milky Way scale and due to the number of examples that will be necessary to train a relatively large ANN architecture (Sects.~\ref{cnn_hyperparameters}, \ref{2mass_single_los_training_and_test_set_prediction}, \ref{2mass_multi_los_training_and_test_set_prediction}, \ref{Gaia-2MASS_single_los_training_and_test_set_prediction}, and \ref{Gaia_2mass_multi_los_training_and_test_set_prediction}). Another approach would be to use previous 3D extinction maps from other methods as a prior for our training profiles, but it would be difficult to assess the induced bias in our own results. Instead, we adopted a lower level approach that consists in creating mock training profiles from Gaussian Random Fields \citep[GRF, e.g.][]{Sale_2014} and in tuning general construction parameters to correspond to our needs.\\ \subsubsection{Gaussian Random Fields} We succinctly describe here the necessary elements to construct the 1D Gaussian Random Fields (GRFs) that are then used to construct our profiles. Details on the formalism that we depict here can be found in the appendix B of \citet{Sale_2014}. The objective here is to construct a realistic dust density profile, for which the logarithm can be approximated by a GRF. In the present method we first start by assuming that the log of the density $\rho$ (i.e. of the differential extinction) has a power spectrum $|FT(\log \rho)|^2(k) \propto k^{-2\beta}$, where $FT$ is the Fourier transform, and $k$ is the spatial frequency. Each point in the Fourier space then receives a complex magnitude drawn randomly from a Gaussian probability distribution of standard deviation $k^{-\beta}$, which corresponds to the square root of its value from the power spectrum. A complex phase is added to each point randomly in the $0 < \phi < 2\pi$ range. Applying the inverse Fourier transform to this space generates a GRF that follows the desired power spectrum. The final profile of differential extinction is obtained by exponentiating the GRF following $\frac{dA_V}{dz}(z) = \exp( \sigma \mathrm{GRF} )$.
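As an illustration, a minimal NumPy sketch of this construction follows; the unit-variance normalization of the field before scaling is our own simplification, and the roles of the two tuning parameters $\beta$ and $\sigma$ are discussed just after:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def grf_profile(n_bins=128, beta=2.0, sigma=1.0):
    # 1D differential extinction profile from a Gaussian random field
    # with power spectrum proportional to k^(-2*beta)
    k = np.fft.rfftfreq(n_bins)[1:]                # positive frequencies
    amplitude = rng.normal(0.0, k**(-beta))        # Gaussian, std k^-beta
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)  # random phase in [0, 2pi)
    spectrum = np.concatenate(([0.0], amplitude * np.exp(1j * phase)))
    grf = np.fft.irfft(spectrum, n=n_bins)
    grf /= grf.std()                               # illustration choice
    return np.exp(sigma * grf)                     # dA_V/dz = exp(sigma*GRF)

# Combination of two GRFs as used for the training profiles:
profile = rng.random() * grf_profile(beta=2.0, sigma=1.0) \
        + rng.random() * grf_profile(beta=1.8, sigma=5.0)
\end{verbatim}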
Two parameters can be tuned in this formalism in order to control the properties of the constructed profile: (i) a $\beta$ parameter that controls whether the peaks of the predicted density are narrow and frequent or more sparse and large, and (ii) a $\sigma$ parameter that acts as a posterior scaling of the profile, being responsible for the contrast between the peaks and the lows in the profile. Examples of generated GRF profiles are shown in Figure~\ref{grf_param_illustrations} for two different sets of $\beta$ and $\sigma$ parameters.\\ \newpage \subsubsection{GRF generated profile} \begin{figure}[!t] \centering \includegraphics[width=0.9\hsize]{images/grf_param_illustrations.pdf} \caption[Examples of GRF realizations]{Examples of Gaussian random field realizations with the two sets of adopted parameters, as indicated above each column. We emphasize that the ordinate scales are different in each example.} \label{grf_param_illustrations} \end{figure} Using this formalism we tested many combinations of $\beta$ and $\sigma$ to assess which one was suitable for our application. One important point to notice here is that providing the network with a target that contains too many details in comparison to the amount of information contained in the inputs is counterproductive. Indeed, the fine-grained error that would be produced from the target-output comparison in such a case would not find any input information that correlates with this error. This would induce a non-meaningful correction to all the weights of active neurons, globally adding noise to the network and preventing it from properly converging on the information effectively accessible. Therefore, the training profile realism must also be limited. We notably tested training profiles that contain several narrow structures close to each other, and the network was unable to reconstruct them, only finding smoother structures. We also tried creating training datasets based on random $\beta$ and $\sigma$ values within given ranges, but the diversity was too large for the network to converge without over-sized datasets. \\ By looking at the predictions from other maps and also at the prediction capacity of the network on several tested parameters, we settled on an approach that combines two individual GRF profiles. We generated a profile with $\beta = 2.0$ and $\sigma = 1.0$ to obtain large structures (low spatial frequencies) along the LOS, in order to represent the more diffuse ISM. The second profile was generated with $\beta = 1.8$ and $\sigma = 5.0$ to represent more compact structures (higher spatial frequencies), representative of molecular complexes. We illustrate a few profiles from each of these two sets of parameters in Figure~\ref{grf_param_illustrations}. To construct a single training profile, a realization of each of these two types of GRF was summed using a random fraction between 0 and 1 for each of them. We illustrate a typical result of this combination in Figure~\ref{grf_fractional_sum}. The profile is then scaled based on its maximum value using a random $A_{V,\rm max}$ between $10^{-2}$ and 100 mag/kpc. To avoid having profiles with a very strong total extinction, which could occur with the GRF, we excluded any profile for which the total cumulative extinction exceeds $A_{V,\rm cumul} = 50$ mag. The latter effect is visible in several of our profiles in our result section (e.g.
in Figures~\ref{single_los_2MASS_test_profiles_prediction} and \ref{multi_los_2MASS_test_profiles_prediction}). The exact values of our profile generation parameters were tuned to maximize statistical similarities between our profiles and the predictions from other extinction maps, by looking for example at the maximum peak extinction distribution, or at the integrated extinction distribution. Tuning the parameters of the GRF profiles in order to modify their statistical properties is very similar to our training dataset rebalancing from Section~\ref{training_test_datasets}. More profiles generated using the same approach are visible in subsequent figures that illustrate the network predictions in the following sections (e.g. Figs.~\ref{single_los_2MASS_test_profiles_prediction} or \ref{multi_los_2MASS_test_profiles_prediction}).\\ \begin{figure}[!t] \centering \includegraphics[width=0.7\hsize]{images/grf_fractional_sum.pdf} \caption[Example of a combined Gaussian random field profile]{Example of a profile obtained by summing two Gaussian random field profiles, one with $\beta=2.0$ and $\sigma=1.0$, the other with $\beta=1.8$ and $\sigma=5.0$, each weighted with a random value drawn between 0 and 1. } \label{grf_fractional_sum} \end{figure} \subsubsection{Profile star count limit and magnitude cap} \label{zlim_subsection} The obtained profiles are the ones that are effectively applied to the bare modeled CMDs to obtain the final mock CMDs of the training sample. However, after computing the mock CMDs, we performed two last transformations of the profiles before using them as targets, still following the idea that we should not have targets that are impossible to reproduce. The first transformation is motivated by the possibility that, despite our limit in $A_{V, \rm cumul}$, some profiles can have a sufficient cumulative extinction to completely screen the stars beyond a certain distance. In other words, it is possible that parts of the profile are not constrained at all, or not by a sufficient number of stars. To account for this effect, we manually defined a star count limit $Z_{\rm lim}$ that was used to force the target profile to zero after a certain point. In practice, after the application of the full extinction profile to the star list, we searched the farthest distance beyond which only $Z_{\rm lim}$ stars remained. For each training profile every distance bin beyond this limit was set to zero, which is illustrated by the cut profiles in Figure~\ref{single_los_2MASS_test_profiles_prediction}. Interestingly, since the full profile and not the cut profile is used to compute the input CMD extinction, it only means that the network is trained to consider as zero all the confused cases where there are not enough stars left, while still conserving a fully realistic CMD. From a classification standpoint, it can be seen as making one large class that contains all the cases that are too difficult to discriminate and attributing the same target to all of them.\\ The second transformation followed a similar idea. We also capped the maximum extinction per bin in the target profile to $dA_V/dz = 50$ mag/kpc despite the $dA_V/dz = 100$ mag/kpc permitted by the GRF profile construction. Just like for the $Z_{\rm lim}$ cut, this modification is only made on the target profile while the CMD is still affected by the full profile. Again it just acts as an additional clustering of the cases with a large extinction per bin, strongly stabilizing the training and improving the global network prediction.
This choice is justified because we are more interested in the dust distribution than in the exact extinction quantity, at least at first. Additionally, our various results showed that the cases where the predicted profile is saturated are rare (Sect.~\ref{2mass_maps_section}, e.g. Fig.~\ref{single_los_2mass_polar_plan}).\\ Finally, even if it is possible to generate profiles with a very small total extinction, we observed that adding some flat extinction profiles in the training sample significantly improved the prediction results. This might be due to the fact that the network has to see the original distribution of the stars in the bare CMD, directly from the model without extinction. It helps to better constrain the reference pixels for each star. We control the proportion of profiles that we manually set to zero using the $f_{\rm naked}$ parameter, which is usually set at $0.01$ so that 1 out of 100 profiles is a flat zero extinction one. This might seem a large amount, but we expect to have many predicted profiles with very faint extinction, especially if we try to perform predictions outside of the Galactic plane. Still, we exposed in our YSO application that a better representation of the most common case is a way of reducing false positives in the rarer classes. In other words, since ANNs work by assessing differences between cases, having a strong constraint on what a null or faint profile looks like significantly reduces the noise in our prediction and increases the confidence one can have when there is a detection.\\ \newpage \subsection{Tuning the method} \subsubsection{Input and output dimensions} \label{input_output_cnn_dim} In the present section we describe some general network architecture properties that are shared by our different applications in Sections~\ref{2mass_maps_section} and \ref{gaia_2mass_ext_section}. First, it is important to note that the angular resolution in the plane of the sky will strongly affect how many stars will be present in our lines of sight, therefore affecting the proportions in the CMDs. For all our applications we adopted a pixel size of $0.25^\circ$, corresponding to a $0.2 \mathrm{deg^2}$ surface on the plane of the sky. In practice it means that for each pixel we built the forward sample from a query of the 2MASS or Gaia catalogs within an area of this size. \\ The input volume of our CNN is always composed of images of size $64\times 64$, representing different diagrams, in the case of 2MASS the [J-K]-[K] CMD. Considering the adopted LOS area, this CMD resolution has been identified as a proper balance between the number of pixels in the CMD and the number of stars per bin, following the considerations on how the extinction profile is encoded in the CMD (Section~\ref{simple_extinction_effect_cmd}). Indeed, in the case of a too low resolution (i.e. only few pixels in the CMD) the network is unable to properly assess both the original position of the stars and the shift length in the diagram, leading to an imprecise extinction quantity prediction at a given distance. In the opposite case of a too high resolution, the number of stars per bin becomes too small for the network to properly assess how many stars have moved due to extinction, making the distance estimate very inaccurate. We note that our input CMDs are systematically normalized by a simple scaling between 0 and 1 according to the maximum pixel value in the full training dataset, which works well with a CNN architecture.\\
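Schematically, the construction of one input image from a star list reduces to a 2D histogram with the adopted limits (a NumPy sketch):
\begin{verbatim}
import numpy as np

def build_cmd(j_mag, k_mag, n_pix=64):
    # Bin the star list into a [J-K]-[K] image with the adopted limits
    cmd, _, _ = np.histogram2d(j_mag - k_mag, k_mag, bins=n_pix,
                               range=[[-0.5, 6.1], [10.0, 16.0]])
    return cmd

# The whole training set is then scaled by its global maximum pixel value:
# cmds /= cmds.max()
\end{verbatim}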
Regarding the predicted profile resolution, we opted for 100 pc bins. We expect, somewhat naively, that this corresponds to the typical resolution one could expect at around 2 to 3 kpc. Closer distance estimates could be slightly under-resolved compared to the information we expect to be contained in the CMD. At larger distances, the profile will be clearly over-resolved compared to what can be extracted from the CMD. In this case the network prediction is expected to be spread across several bins, following the distance uncertainty. We note that the same resolution was adopted by \citet[][in prep.]{Marshall_2020}. Progressive bin sizes would certainly be worth testing in the future. \\ We chose to have profiles of 128 bins, allowing a maximum distance estimate of 12.8 kpc. We observed that having a maximum distance at 10 kpc induced boundary artifacts when there were indeed structures around this maximum distance. As we show in Section~\ref{los_combination}, our method manages to reconstruct extinction structures at distances around 10 kpc when looking at sufficiently populated lines of sight. From a network perspective, these profiles are encoded using a set of 128 linearly activated neurons representing each bin in the profile. It means that each bin is independently activated and that any correlation between nearby bins is the result of the network training, without any prior on the form of the output other than the list of training target profiles. The profile is numerically encoded using the differential extinction per bin $dA_V/dz$ in units of mag / 100 pc, which is similar to considering that the profile is an array of total extinction $A_V$ in each 100 pc bin. These values are normalized using a constant division by 5, which corresponds to the maximum possible value of a single bin. This leads to targets in the range 0 to 1, which works well for linearly activated neurons. We note that we did not force any output to be above 0, so it happened that some profiles presented slightly negative predictions. When negative values were present in a map prediction, they were changed to 0.
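The corresponding target encoding and prediction decoding can be summarized by the following short sketch, assuming the 5 mag / 100 pc maximum bin value discussed above:
\begin{verbatim}
import numpy as np

def encode_target(profile):
    # dA_V/dz in mag per 100 pc bin, mapped to the [0, 1] range
    return profile / 5.0

def decode_prediction(output):
    # linear output neurons can be slightly negative: clip to zero
    return np.clip(output, 0.0, None) * 5.0
\end{verbatim}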
\vspace{-0.2cm} \subsubsection{Network architecture} \label{cnn_architecture_test} \vspace{-0.2cm} We have described our input image and our output layer dimensions for the targeted and predicted profiles. They define the boundary dimensions of the network, between which we can arrange our internal network architecture. In the following paragraphs we compare the capacity of several network architectures based on their error on a modeled test dataset after training.\\ \vspace{-0.2cm} We first attempted a fully connected architecture, considering each pixel of the input CMD as an independent feature. We tested several variations with up to 5 layers of various sizes (up to 4096 neurons on each layer) and used several of the improvements from Section~\ref{cnn_parameters} that are suitable for regular ANNs, like the change of activation function to a leaky ReLU, a better weight initialization, and the use of dropout. While such an architecture is still suitable for the task to a certain extent, it never reached a prediction quality similar to that of a carefully designed CNN architecture. It is interesting to note that the high number of weights induced by a fully connected architecture is not the limiting factor in our case, since our best CNN network has a number of weights of the same order of magnitude.\\ \vspace{-0.2cm} We then tried to use classical architectures (LeNet, AlexNet, VGG, ..., Sect.~\ref{cnn_architectures}) as an inspiration for our own convolutional layer construction. Despite many careful attempts, almost no architecture with 3 or more convolutional layers was able to perform even as well as a fully connected one. We found that one solution to improve the performance is to have a few convolutional layers that quickly increase their number of filters up to above 256 before the fully connected layers, with a minimum amount of image size reduction. Such an architecture barely outperformed the fully connected one, in spite of a huge increase in computational time.\\ \vspace{-0.2cm} From these observations we evaluated why the fully connected architecture was performing so well on this task. First, we recall that a convolutional layer extracts patterns in the images. At first glance this seems to be the operation that we want to perform here, since we mostly want to detect an echo of a given pattern (typically, the giant branch) at several positions in the image. Additionally, it is more indicated when a pattern to detect can be at different places in the image, which is not the case of our CMD, for which the reference pattern is always at the same place for the stars that are not affected by extinction. Then, the most common CNN architectures progressively reduce the image dimensionality and increase the number of filters, corresponding to the number of features. This architecture is well suited to differentiate between many very different objects that might share some sub-patterns. In our case this is not really the expected operation. In Sections~\ref{simple_extinction_effect_cmd} and \ref{input_output_cnn_dim}, we stated that the information about the profile is encoded in the CMD in the form of: (i) a shift amount in pixels corresponding to the extinction quantity, and (ii) a ratio between pixels assessing how many stars have moved from their original position, corresponding to the distance. From this it is evident that common CNN architectures, which are mainly designed around classification, are not suitable for the task.\\ \vspace{-0.2cm} Our approach to improve the results using the CNN formalism was then to assess what a convolutional layer can add to the fully connected architecture. Noticeably, it is efficient at finding some patterns, at reducing the noise in the original image, and at finding a similar representation of the input with fewer dimensions. Each of these reasons is sufficient to justify the addition of at least one convolutional layer to the fully connected architecture. Indeed, even with as few as 4 filters of size $5\times 5$ with a stride of $1$ and then 3 fully connected layers sized similarly to the fully connected architecture, the prediction result was significantly improved. This attests that a few filters are sufficient to average, denoise and strengthen the most important patterns in the CMD. We then explored various small improvements around this very simple architecture.
\\ Following the notation introduced in Section~\ref{cnn_architectures} and including the dropout notation in a dense layer as D-n\_$d_r$, where $d_r$ is the dropout fraction, we list here a few of the architectures that we explored: \begin{enumerate} \setlength\itemsep{0.2em} \item {[I-64.64, C-4.5, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-8.5, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {\bf [I-64.64, C-12.5, P-2, $\bm{2\times}$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-8.5.2, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-6.5, C-8.5, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-6.5, C-8.5, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-6.5, P-2, C-8.5, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-6.5.2, C-8.5, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \item {[I-64.64, C-12.3, P-2, C-24.3, P-2, $2\times$[D-3072\_0.1], D-2048, D-128]} \end{enumerate} and many other variations, including modifications to the dense part.\\ We found that architecture 3 in this list was the one with the best balance between prediction quality, much better than the fully connected one, and computational efficiency. The first convolutional layer is used with a zero-padding of 2 to conserve the input dimensionality, and applies 12 filters of size $5\times 5$ with a stride of $S=1$. This leads to 12 activation maps that conserve the image resolution, and therefore preserve the shift quantity in the input CMD. Connecting these maps directly to the dense part of the network would induce a very large number of weights, which would make the training much more difficult and the error convergence much noisier as well as much slower. We had two choices for the dimensionality reduction, a max-pooling or a stride of 2 for the first convolution. We kept the first one since it provided significantly better predictions. Adding more convolutional layers after this first construction almost always led to worse predictions and a significant increase in computational time. The end of the network is then made of two dense layers of $n=3072$ leaky-ReLU neurons with dropout. We discuss the choice of dropout rate in the following Section~\ref{cnn_hyperparameters}. There is then a last smaller dense layer of $n=2048$ leaky-ReLU neurons without dropout, and finally the output layer and its 128 linear activations corresponding to the extinction profile.
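For illustration purposes only, the retained architecture 3 could be written as follows using the Keras API; our actual implementation relies on the CIANNA framework, so this sketch is just an approximate equivalent.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(12, 5, padding="same"),  # C-12.5, image size conserved
    layers.LeakyReLU(),
    layers.MaxPooling2D(2),                # P-2
    layers.Flatten(),
    layers.Dense(3072), layers.LeakyReLU(), layers.Dropout(0.1),
    layers.Dense(3072), layers.LeakyReLU(), layers.Dropout(0.1),
    layers.Dense(2048), layers.LeakyReLU(),
    layers.Dense(128)                      # linear output, one per bin
])
\end{verbatim}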
\subsubsection{Network hyperparameters} \label{cnn_hyperparameters} Our choice of hyperparameters for the selected architecture is the result of a meticulous manual exploration. Due to the number of hyperparameters to tune and to the time required to train a network on a given set (Sect.~\ref{cnn_computational}), it was unrealistic to attempt an automated exploration of a large hyperparameter space; therefore, there might exist a combination that works better than the one we adopted. We note that the adopted combination provides a similar network training behavior in all the cases presented in our results Sections~\ref{2mass_maps_section} and \ref{gaia_2mass_ext_section}, therefore the following description is valid for all the following applications. To better understand the choice of hyperparameters, we already state here that our typical training dataset size ranges between $5\times 10^5$ and $2\times 10^6$. We note that the few parameters that are not mentioned here follow the prescription from the corresponding section, usually following the CIANNA default values. \\ The network weights are initialized using the Xavier normal distribution (Sect.~\ref{cnn_weight_init}, equation~\ref{eq_xavier_normal}). We adopted a batch size of 32, which provided an appropriate balance between the computational time of a large enough batch (Sect.~\ref{gpus_prog}) and small weight updates to efficiently resolve the error in the weight space and converge in a reasonable number of epochs (Sect.~\ref{descent_schemes}). Interestingly, we observed that having momentum on this architecture mostly prevented the network from reaching its optimal value while only slightly speeding up the error convergence. It is possible that the learning rate decay that we used was somehow redundant with the momentum effect, and we decided not to use the latter. The training is then decomposed into 3 blocks of 50 epochs, each with its own learning rate. The blocks have an individual learning rate prescription with an exponential decay that follows the equation: \begin{equation} \eta(t) = \eta_{\rm min} + (\eta_{\rm max} - \eta_{\rm min}) \exp\big (-\tau t \big ) \label{eq_decay} \end{equation} where $\eta$ is the learning rate as a function of the epoch, $\eta_{\rm max}$ is the starting learning rate at the beginning of the block, $\eta_{\rm min}$ is the asymptotic minimum value, $\tau$ is the decay rate, and $t$ the current epoch number. We note that each block counts its own epochs from zero. The first block starts with $\eta_{\rm max} = 0.002$ and decreases exponentially toward $\eta_{\rm min} = 0.001$. The second and third blocks are identical, starting at $\eta_{\rm max} = 0.0015$ and aiming at $\eta_{\rm min} = 0.001$. All the blocks have the same decay rate of $\tau = 0.005$. Most of our networks converge between epoch 50 and epoch 100. We kept the third block in case, similarly to a simulated annealing technique, the sudden increase in learning rate at the beginning of the third block gets the weights out of a local minimum. In practice we found this to be rare, since the network properly converged almost all the time before epoch 100.\\
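The resulting schedule can be sketched in a few lines of Python, directly implementing equation~\ref{eq_decay} with the block values given above:
\begin{verbatim}
import numpy as np

def eta(t, eta_max, eta_min=0.001, tau=0.005):
    # learning rate at epoch t inside a block (equation eq_decay)
    return eta_min + (eta_max - eta_min) * np.exp(-tau * t)

# three 50-epoch blocks, each restarting its epoch counter at zero
blocks = [(50, 0.002), (50, 0.0015), (50, 0.0015)]
schedule = [eta(t, eta_max)
            for n_ep, eta_max in blocks for t in range(n_ep)]
\end{verbatim}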
Regarding the dropout rate in the dense layers at the end of the network, we observed that having at least a small dropout is absolutely necessary on this architecture. Without dropout, repeating the same training several times could lead to inconsistent predictions on observed data and very noisy predictions, even if the valid and test dataset errors were very similar between the different trainings. We note that this is not solely due to a possible overfitting induced by potentially too large dense layers, since we observed the same behavior even with much smaller ones. Still, the dropout solves this issue very nicely and allowed us to estimate uncertainties on our predicted profiles. In practice, we adopted a dropout rate of $dr = 0.1$ for the first two dense layers after the convolutional part, as in \citet{Gal_2015}, the last layer being smaller and free of dropout since we observed that it evened the predictions between several trainings without a significant impact on the uncertainty predictions. However, in order to assess whether the network produces uncertainties that are representative of a true underlying dispersion, we should have made several trainings, slowly increasing the dropout value to find the point where the dispersion does not increase anymore. A larger dropout would imply resizing the layers accordingly, which would increase the raw computation time of these larger layers and would significantly increase the number of epochs required to converge (Sect.~\ref{dropout_sect}). Such an exploration was not compatible with the time given to the present study, but it would be an interesting future development in order to obtain accurate prediction uncertainties. Still, we note that an identical value of the dropout rate was used to reproduce accurate posterior-error measurements on a regression case in \citet{Gal_2015}, even more efficiently than other usual methods. In any case, our dropout rate is useful to overcome undesirable effects on the prediction and provides at least a first estimate of the uncertainty morphology in large predictions (Sect.~\ref{2mass_single_los}).\\
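The uncertainty estimation itself follows the usual Monte-Carlo dropout scheme: the dropout is kept active at prediction time and several stochastic forward passes are drawn for the same input. A minimal sketch, assuming a Keras-like model object, would be:
\begin{verbatim}
import numpy as np

def mc_dropout_predict(model, cmd_input, n_draws=100):
    # keep dropout active at inference (training=True) and stack
    # several stochastic predictions of the extinction profile
    draws = np.stack([np.asarray(model(cmd_input, training=True))[0]
                      for _ in range(n_draws)])
    return draws.mean(axis=0), draws.std(axis=0)
\end{verbatim}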
\vspace{-0.6cm} \newpage \subsubsection{Computational aspects} \label{cnn_computational} \vspace{-0.2cm} All our network trainings were performed on Deep-Learning dedicated GPU cluster nodes from the ``Université de Franche-Comté'' Mesocenter. These nodes contain a total of 7 Nvidia Tesla V100 GPUs, which was at this point the most powerful Nvidia professional GPU for servers. Each of these cards contains 5120 CUDA cores clocked at 1.38 GHz, corresponding to 14.13 TFLOPS in FP32. We had access to two sub-models of the V100 with either 16 or 32 GB of dedicated HBM2 memory, which uses a 4096 bit interface to reach a bandwidth of $\sim$900 GB/s. Using one of these powerful GPUs, our CIANNA framework managed to train at a rate of $\sim$7000 examples/s. By increasing the batch size, it would be possible to achieve up to $\sim$18000 examples/s for training, but at the cost of more epochs needed and a poorer overall error convergence. We note that these GPUs are equipped with first generation Tensor Cores, but we did not use them since we did not have the time to add Mixed-Precision training to CIANNA.\\ \vspace{-0.1cm} Depending on the training dataset size, our training process required between 2h and 8h to converge, and had a memory usage between 6 and 190 GB. This timescale and memory usage match most CNN applications, considering for example that the training in the AlexNet \citep{alexnet_2012} study needed 6 days on an older generation $\sim$3 TFLOPS total GPU machine. Our persistent memory usage is also very high: testing different dataset constructions and storing all the network weights uncompressed, we accumulated several TB of data. While there is still a lot of room for improvement from a numerical performance and memory usage standpoint, we highlight that this is already the result of large optimizations and meticulous choices of relevant data. This study would certainly not have been possible on a similarly sized hardware infrastructure without all the time we invested in tuning our own CNN framework to be optimized for the task, and without our careful dataset construction.\\ \vspace{-0.1cm} Finally, while our training process is very computationally intensive, we stress that using the trained network to produce large scale maps with dropout uncertainty is a matter of minutes using a mid-range laptop GPU, an Nvidia P2000-mobile with 768 CUDA cores clocked at 1.6 GHz. Still, most of our map predictions using our trained networks were made on a server belonging to the UFC Computational Physics Master that is equipped with a much more recent Nvidia Quadro RTX-5000 GPU. Overall, the network architecture is very computationally efficient and the time to train the network is only a consequence of the complexity of the problem to solve, which requires constraining a large number of parameters. Otherwise, the prediction of a large scale map from a single pass without dropout is always a matter of seconds on any GPU. This opens the possibility to distribute the trained network along with the map itself, the former being much lighter with only 560 MB, while a reasonably sized map with the full prediction probability distribution weighs several GB. This would allow anyone to quickly reconstruct a map with an individual control of the probability distribution sampling or of the resolution.\\ \vspace{-0.1cm} For the present work we accumulated more than 1000 GPU hours on the UFC Mesocenter. While the comparison with CPU hours is not straightforward, since our code takes advantage of GPU architecture specificities, we can roughly estimate the conversion between GPU and CPU hours. A Tesla V100 is estimated at 14.13 TFLOPS, while the CPUs on the same machine are Intel 4110 at 2.1 GHz, which converts roughly to 67 GFLOPS per core in single precision. Using the ratio between the two raw compute powers, our GPU hour count converts to $\sim$210000 CPU hours. We note that for subsequent studies we plan to make a proposal to the \href{http://www.idris.fr/annonces/annonce-jean-zay-eng.html}{Jean Zay} GENCI supercomputer, as stated in Section~\ref{gpus_variety}, which has a dedicated entry program for AI projects, granting access to large GPU nodes equipped with several Tesla V100 GPUs. \newpage \section{2MASS only extinction maps} \label{2mass_maps_section} In this section we describe the results we obtained using our CNN architecture solely on 2MASS data. We first present results in a large zone from the generalization of a training on a single LOS, and in a second step we show how several lines of sight can be combined into a single training. We also illustrate the effect of some parameters of our training dataset, like the $Z_{\rm lim}$ value, on our network prediction. Because the results are mostly arranged in order of linearly increasing complexity, we perform most of the analysis of each case after the presentation of the results. \etocsettocstyle{\subsubsection*{\vspace{-1cm}}}{} \localtableofcontents \subsection{Training with one line of sight} \label{2mass_single_los} \subsubsection{Network training and test set prediction} \label{2mass_single_los_training_and_test_set_prediction} The simplest approach we can elaborate to train a CNN on 2MASS CMDs is the one that has been described along with the dataset construction in Section~\ref{galmap_problem_description}. For this first application, we considered only one LOS. Still, we will show that a training on a single LOS can be generalized to a relatively large galactic longitude range. We selected the LOS $l = 280$ deg, $b = 0$ deg in galactic coordinates because this region approximately corresponds to the observable tangent of the Sagittarius-Carina galactic arm. In this region we expect to have a significant diversity of extinction distributions in a relatively narrow galactic longitude window.\\ Using solely the central LOS value, we generated a training sample of various extincted CMDs. For this we first produced several BGM realizations, because they are always slightly different considering that the model represents stellar populations statistically (Sect.~\ref{BGM_sect}).
For each of these realizations, we generated 100 (i.e. $1/f_{\rm naked}$, Sect.~\ref{zlim_subsection}) mock extinction profiles following our GRF recipe (Sect.~\ref{GRF_profiles_section}), which are applied to the BGM star distance distribution. We then used the magnitude limit cuts to exclude too faint stars and added the modeled photometric errors, both following the description from Section~\ref{ext_profile_and_cmd_realism}. The extincted CMDs are generated from the extincted star lists and defined as the network inputs. Finally, the realistic list of stars for each example is used to modify the corresponding extinction profile based on a $Z_{\rm lim} = 100$ value, and the extinction profiles are capped (Sect.~\ref{zlim_subsection}). The modified profile is then defined as the network target. \\
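The following toy, self-contained sketch summarizes the logic of this pair generation; real BGM realizations, GRF profiles, and the 2MASS photometric model replace the simplistic draws and the approximate $A_K \simeq 0.11\,A_V$ conversion used here purely for illustration.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def toy_training_pair(n_stars=20000, n_bins=128, bin_pc=100.0):
    dist = rng.uniform(10.0, n_bins * bin_pc, n_stars)  # distances (pc)
    abs_k = rng.normal(0.0, 2.0, n_stars)               # toy luminosities
    profile = np.clip(rng.normal(0.3, 0.5, n_bins), 0.0, 5.0)  # toy dA_V/dz
    # cumulative extinction in front of each star
    a_v = np.interp(dist / bin_pc, np.arange(n_bins) + 1.0,
                    np.cumsum(profile))
    k_mag = abs_k + 5.0 * np.log10(dist / 10.0) + 0.11 * a_v
    keep = k_mag < 15.0                                 # magnitude limit cut
    return k_mag[keep], dist[keep], profile
\end{verbatim}
In the actual pipeline, the surviving star list is turned into the input CMD, while the profile is post-processed with the $Z_{\rm lim}$ cut and the cap to become the target.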
For our first application using a single LOS, we generated $5\times 10^5$ examples of 2MASS CMD-profile pairs, meaning that the training sample is based on $1000$ BGM realizations. Also, a part of the profiles are considered as flat with no extinction to better constrain the bare CMD distribution, using $f_{\rm naked} = 0.1$ as described in Section~\ref{zlim_subsection}. This dataset is separated into a training dataset that contains $94\%$ (470000) of the examples, a valid dataset with $5\%$ (25000), and the remaining $1\%$ (5000) are our test dataset (see Sect.~\ref{sect_overtraining}). The training is performed using the architecture we highlighted in Section~\ref{cnn_architecture_test}, with only one convolutional layer with few filters followed by three dense layers that include a small dropout. The network hyperparameters are described in Section~\ref{cnn_hyperparameters}. It took around 60 epochs for the network to converge, corresponding to a few hours on a Tesla V100 GPU; the prediction is then mostly stable up to epoch 100, where the network distinctly began to overtrain.\\ \begin{figure}[!t] \hspace{-0.9cm} \vspace{-0.2cm} \begin{minipage}{1.06\textwidth} \centering \includegraphics[width=1.0\hsize]{images/single_los_2MASS_test_set_predictions.pdf} \end{minipage} \caption[Single 2MASS LOS training prediction on the corresponding test set]{Excerpt of a few objects from the test dataset of the 2MASS single LOS training. {\it Left:} View of the CMD for which the prediction is made. {\it Right:} View of the corresponding profile. The dashed line shows the target of the network that accounts for the $Z_{\rm lim}$ maximum distance limit. The network prediction is presented in the form of a vertical histogram prediction for each distance bin, corresponding to 100 random dropout predictions.} \label{single_los_2MASS_test_profiles_prediction} \end{figure} \vspace{-0.3cm} A global error on the test dataset is not a very visual representation of the network capacity to reproduce the target, so we extracted a few extinction profile predictions of the network on the test dataset, which we present in Figure~\ref{single_los_2MASS_test_profiles_prediction}. This figure shows the CMD that is used as input for each case in the left column, and it compares the target for each case to the network prediction, using a sample of 100 random dropout predictions to construct the prediction probability distribution. From the figure it is striking that the network greatly succeeds in localizing the extinction peaks, but it is slightly less accurate at reproducing the maximum extinction amount. We also observe that some structures are not reconstructed properly after a first extinction peak, for example in frames 1 and 3. Frame 3 illustrates a case where the network manages to reconstruct a relatively low second extinction peak after a first one, even at a large distance $d \simeq 5$ kpc. In all the cases the network mostly succeeds in localizing the $Z_{\rm lim}$ maximum distance and appropriately predicts zero for larger distances. On the other hand, high extinction peaks are not always as nicely represented, as illustrated in frame 2, especially at large distances. We especially notice that in the case of a relatively strong first extinction peak, the network has much more difficulty predicting a second one that is still not cut by the $Z_{\rm lim}$ limit distance, as visible in frame 1. In such a case, we observe that the network still localizes an extinction increase at the appropriate location, but with an underestimated extinction value. However, as our following results will show, our map mostly predicts extinction that lies in a range where our test examples are properly reconstructed. Indeed, the most difficult cases are in fact less realistic or would be very uncommon. We kept them in the network training in order to ensure that we have a large enough feature space coverage, in an attempt to obtain a sufficient diversity to avoid leaving unusual observed CMDs completely unconstrained.\\ \vspace{-0.7cm} \subsubsection{Generalizing over a Galactic Plane portion} \vspace{-0.1cm} From this trained network we were able to make predictions on real observed data. For this, we used observed CMDs in place of the mock ones as the input of our network. While our network was trained using solely mock CMDs that correspond to the $l=280$ deg, $b = 0$ deg LOS, it is still possible to construct a map using close enough lines of sight. In practice we constructed our maps using cone queries with a $0.25$ deg radius on the 2MASS PSC, corresponding to the same $0.2\,\mathrm{deg^2}$ solid angle that was used to build our mock CMDs from the BGM. In order to be sure that our observed and mock CMDs are constructed from similar stellar distributions, we removed every 2MASS star that lacks one or more band detections, since this is the approach followed in our mock CMD construction. Our map is then considered as a grid of 0.2 deg sized square pixels, so that it comes close to following the Nyquist criterion. Our map range is as follows: $257 < l < 303$ deg and $ -5 < b < 5$ deg.
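Schematically, the map construction then amounts to querying, binning, and predicting each pixel of this grid. In the sketch below, query_2mass_cmd is a hypothetical helper standing for the cone query and CMD construction steps, while model and decode_prediction follow the previous sketches.
\begin{verbatim}
import numpy as np

l_grid = np.arange(257.1, 303.0, 0.2)   # 230 longitude pixels
b_grid = np.arange(-4.9, 5.0, 0.2)      # 50 latitude pixels
cube = np.zeros((len(l_grid), len(b_grid), 128))

for i, l in enumerate(l_grid):
    for j, b in enumerate(b_grid):
        cmd = query_2mass_cmd(l, b)     # hypothetical: cone query + CMD
        out = model(cmd[None, ..., None], training=False)
        cube[i, j] = decode_prediction(np.asarray(out)[0])

# integrated plane-of-the-sky view: sum over the distance axis
integrated = cube.sum(axis=2)
\end{verbatim}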
Since each pixel of the map is a LOS with an extinction profile prediction, our map is a 3D volume of $230\times 50\times 128$ bins of differential extinction values that represent a large squared-cone field of view.\\ \subsubsection{Integrated view of the plane of the sky} \begin{figure*}[!t] \hspace{-1.6cm} \begin{minipage}{1.25\textwidth} \centering \vspace{0.79cm} \begin{subfigure}[!t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/2mass_starcount_skyplane.pdf} \end{subfigure} \end{minipage} \caption[2MASS star count]{Map of star counts in the 2MASS [J-K]-[K] CMDs for each pixel on the same grid as in Fig.~\ref{single_los_2mass_integrated_result}.} \label{2mass_sky_star_count} \vspace{-1.2cm} \end{figure*} \begin{figure*}[!t] \hspace{-1.5cm} \begin{minipage}{1.22\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large Planck dust opacity $\bm{\tau_{353}}$} \includegraphics[width=1.0\hsize]{images/run147_planck_int_map.pdf} \end{subfigure}\\ \vspace{0.6cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large Single position training - $\bm{Z_{\rm lim}=50}$} \includegraphics[width=1.0\hsize]{images/run156_int_map_ep60.pdf} \end{subfigure}\\ \vspace{0.6cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large Single position training - $\bm{Z_{\rm lim}=100}$} \includegraphics[width=1.0\hsize]{images/run132_int_map_ep60.pdf} \end{subfigure} \end{minipage} \caption[Integrated view of the plane of the sky for the 2MASS single LOS training]{Comparison of the 2MASS single-LOS training results in a plane of the sky view using galactic coordinates. {\it Top:} Observed Planck dust opacity at 353 GHz. {\it Middle:} Integrated extinction over the whole LOS for each pixel corresponding to the 2MASS single-LOS training with $Z_{\rm lim}=50$. {\it Bottom:} Integrated extinction for the same case but with $Z_{\rm lim}=100$. The contours of the Planck map at $\tau_{353}=0.00016$, $0.00028$, and $0.0004$ are reproduced in the other two frames to ease the comparison.} \label{single_los_2mass_integrated_result} \end{figure*} From this predicted map we first reconstructed the integrated extinction as observed in the plane of the sky. With this test it is possible to verify that our method predicts a realistic total extinction in the various lines of sight, and more importantly to verify that the observed Galactic Plane morphology is properly reconstructed. We chose to use the Planck optical depth $\tau_{353}$, which is derived from dust emission, as a proxy for the dust distribution morphology. Figure~\ref{single_los_2mass_integrated_result} shows the comparison between the Planck sky map and our CNN predictions for $Z_{\rm lim} = 50$ and $Z_{\rm lim} = 100$. The morphology of the Planck map mostly follows the Galactic Plane, with structures largely contained in the $|b| < 2$ deg interval.\\ The most striking result of this figure is that there is a large difference induced by the choice of the $Z_{\rm lim}$ value, especially for the lines of sight that contain fewer stars, i.e. at higher latitudes outside the Plane, and at lower galactic longitudes away from the Galactic Center. When the drop in star count compared to the reference training LOS is not related to extinction, the network still predicts a significant amount of extinction. The star count distribution is illustrated in Figure~\ref{2mass_sky_star_count}. We observe that this star count distribution strongly (anti-)correlates with our spurious extinction predictions.
The $Z_{\rm lim} = 100$ value mitigates this effect, but we could not remove it completely; it probably cannot be done using a single LOS training.\\ In spite of the star count variations across the map, our $Z_{\rm lim} = 100$ result already reconstructs the Planck map morphology very well. Many of the strongest $\tau_{353}$ structures are also predicted as strong extinction LOS by our CNN, and the contours of these structures are accurately followed most of the time. We stress that generalizing over a $\pm 23^\circ$ longitude range from a single training is a very challenging task, since the corresponding CMD variations are important. It means that the CNN architecture manages to identify the parts of the CMD that are most relevant for the extinction, and probably also that the part that it learned to ignore happened to be the one that changes the most between galactic coordinate positions. We highlight that there is no strong constraint on the integrated extinction value in our CNN profile prediction: each output neuron being completely independent, there is no error propagation corresponding to a total extinction error on the profile. Therefore, having a properly predicted integrated morphology is already a sign that the reconstructed profile is likely to be realistic.\\ \vspace{-1cm} \subsubsection{Face-on view} \label{face_on_view} To visualize the distance prediction, we built face-on views of the Galactic Plane. To do so we had to select a latitude thickness in which we average our differential extinction cube. In order to be comparable to the other maps described in Section~\ref{ext_properties_part3} we used a slice of $|b| < 1$ deg. Following the remark we made on Figure~\ref{single_los_2mass_integrated_result}, that most of the structures truly lie in the $|b|<2$ deg range for our region, we compared the integrated maps obtained with $|b| < 1$ deg and $|b| < 2$ deg, and found only marginal differences, so in the following we only discuss the one with $|b| < 1$ deg. We present our first distance prediction results in Figure~\ref{single_los_2mass_polar_plan}, which presents the polar face-on view of the portion of the disk between $257 < l < 303$ deg. In this figure we show the full distance prediction up to 12.8 kpc and a zoom on the nearby predictions closer than 3.5 kpc, for both the $Z_{\rm lim} = 50$ and 100 values. On all the figures that follow this representation, we also display other observational constraints on the Milky Way morphology.\\ We added the HII regions and GMCs (red squares and purple circles) from \citet{hou_and_han_2014}, which are expected to trace the Galactic spiral arm structure. The distance to these regions was either compiled from various previous studies or estimated by the authors using the kinematic method based on a rotation curve of the Milky Way, resulting in a relatively heterogeneous sample with large uncertainties that are not provided in their fully accessible catalog. Still, from their catalog, the Carina arm appears very clearly, suggesting that the distance estimates are relatively reliable in this region of the Milky Way, although they could be subject to the same systematic error. The dispersion of their points suggests uncertainties of the order of a few hundred parsecs, but it is difficult to disentangle the genuine scatter of interstellar structures from the distance uncertainties.
\\ We also added the Gaia stellar cluster catalog (yellow crosses with green borders) from \citet{Cantat-Gaudin_2018}, whose objects are much more accurately positioned in distance thanks to the Gaia parallaxes, but are less likely to be directly related to the dust distribution because the catalog mostly includes relatively evolved stars (see our YSO-Gaia cross-match in Sect.~\ref{3d_yso_gaia}). Additionally, only a few of these clusters close to the Galactic Plane are found at large distances. We therefore only display these data points in our short-distance view. We note that for both catalogs, only the regions inside the $|b|<1$ deg range are displayed. \\ We also display a very simple elliptical arm model, represented by light gray dots and simply constrained by a distance and a pitch angle, that is parameterized to represent the Carina arm (we used $r_0 = 5.0$\,kpc, $p = 14.5^\circ$) and the end of the Perseus arm at high distance (we used $r_0 = 3.8$\,kpc, $p = 14.5^\circ$) in the low galactic longitude part of our map, both assuming a Sun distance to the galactic center of 8.4 kpc \citep{Marshall_phd_2006}. These are really simplistic arm equations, not adjusted on any data, that just provide a global insight into the arm shapes in such a representation. Still, we see that with our adopted parameters the mock Carina arm mostly follows the \citet{hou_and_han_2014} regions. \\
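For reference, a distance plus pitch-angle arm prescription can be sketched as follows; we write it here in the common logarithmic-spiral form, which may differ in detail from the exact elliptical parameterization adopted above.
\begin{verbatim}
import numpy as np

def arm_radius(phi, r0_kpc, pitch_deg):
    # galactocentric radius of the arm as a function of azimuth
    return r0_kpc * np.exp(phi * np.tan(np.radians(pitch_deg)))

phi = np.linspace(0.0, 2.0 * np.pi, 400)
r_carina = arm_radius(phi, 5.0, 14.5)   # Carina-like arm
r_perseus = arm_radius(phi, 3.8, 14.5)  # Perseus-like arm end
\end{verbatim}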
Finally, we note that the polar view is not the most representative of the data cube the CNN works with, especially at short distances, where the first bins of all our lines of sight overlap. As an alternative, we provide a Cartesian view of the same Galactic Plane region in Figure~\ref{single_los_2mass_cart_plan}, which can be used to complement the previous Figure~\ref{single_los_2mass_integrated_result} and ease the prediction interpretation.\\ \begin{figure*}[!t] \vspace{-0.5cm} \hspace{-0.5cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=50}$} \includegraphics[width=1.0\hsize]{images/run156_polar_plan_map_log_terrain_ep60.jpg} \end{subfigure} \vspace{0.5cm} \hspace{0.4cm} \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=100}$} \includegraphics[width=1.0\hsize]{images/run132_polar_plan_map_log_terrain_ep60.jpg} \end{subfigure}\\ \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=50}$} \includegraphics[width=1.0\hsize]{images/run156_polar_plan_map_log_terrain_close_ep60.jpg} \end{subfigure} \vspace{0.5cm} \hspace{0.4cm} \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=100}$} \includegraphics[width=1.0\hsize]{images/run132_polar_plan_map_log_terrain_close_ep60.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the 2MASS single-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the predicted Carina-arm region using our 2MASS single-LOS training. {\it Left column:} Network prediction with $Z_{\rm lim}=50$. {\it Right column:} Network prediction with $Z_{\rm lim}=100$. {\it Top row:} Full distance prediction. {\it Bottom row:} Zoom on the $d < 3.5$ kpc prediction. The HII regions and GMCs compiled by \citet{hou_and_han_2014} are displayed as red open squares and purple open circles, respectively. The yellow pluses are open clusters from the catalog by \citet{Cantat-Gaudin_2018}. In the top row, the simple spiral arm models from \citet{Marshall_phd_2006} are represented by gray dots for comparison.} \label{single_los_2mass_polar_plan} \end{figure*} \begin{figure*}[!t] \begin{minipage}{0.98\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=50}$} \includegraphics[width=1.0\hsize]{images/run156_cart_plan_map_log_terrain_ep60.pdf} \end{subfigure}\\ \vspace{0.6cm} \begin{subfigure}[!t]{0.98\textwidth} \caption*{\bf \large $\bm{Z_{\rm lim}=100}$} \includegraphics[width=1.0\hsize]{images/run132_cart_plan_map_log_terrain_ep60.pdf} \end{subfigure} \end{minipage} \caption[Cartesian face-on view for the 2MASS single-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in Cartesian galactic-longitude distance coordinates for the predicted Carina-arm region using our 2MASS single-LOS training. The axis on the right border of each frame corresponds to the pixel height as a function of the distance induced by the conic shape of our LOS. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Top:} Network prediction with $Z_{\rm lim}=50$. {\it Bottom:} Network prediction with $Z_{\rm lim}=100$.} \label{single_los_2mass_cart_plan} \vspace{-0.3cm} \end{figure*} \newpage From Figures \ref{single_los_2mass_polar_plan} and \ref{single_los_2mass_cart_plan}, we observe that the $Z_{\rm lim}=50$ case is much noisier and presents quasi-periodic artifacts in the $l < 275$ deg part, corresponding to the case where the latitude artifact joins the plane in Figure~\ref{single_los_2mass_integrated_result}. In contrast, the $Z_{\rm lim} = 100$ case is much more realistic and is devoid of these periodic artifacts in the low longitude region, with a convincing nearby extinction distribution. The other striking observation is how compatible the larger longitude results are with an arm shape. The most convincing part is the tangent structure found just above $l = 280$ deg at around 6 kpc, with a clear structure interruption along the longitude axis. The large blurry prediction between $285 < l < 303$ deg at 10 kpc is likely to be dominated by an artifact. The galactic disk morphology implies that the number of stars rises quickly with galactic longitude and distance, roughly following the shape of this structure. Surprisingly, the network tends to compensate a higher star count, for which it is not constrained, by an excess of extinction at large distances.\\ These maps also illustrate that the network manages to reconstruct structures that are very coherent between lines of sight. We recall that there is no prescription of LOS correlation between adjacent pixels, and the CNN still reconstructs convincing structures that are sometimes coherent over more than 10 adjacent pixels along the longitude axis. \\ There is also a notable separation between two continuous structures at distances of $\sim$2 and $\sim$4 kpc in the large longitude part ($290 < l < 303$ deg), which is more visible in Figure~\ref{single_los_2mass_cart_plan}. The arm model seems to follow the closer structure, but there are no HII regions or GMCs to confirm that this separation is real.
Still, the group of Gaia clusters and HII regions present at $l \simeq 287$ and $d = 2.5$ kpc, visible in the close distance representation of Figure~\ref{single_los_2mass_polar_plan}, suggests that the distance to the local maximum of extinction at $l \simeq 287$ and $d = 2.5$ kpc is underestimated by $\sim 500$ pc in our map.\\ One use of our network prediction probability density from dropout (Sect.~\ref{2mass_single_los_training_and_test_set_prediction}) is to assess which structures could be less realistic according to the network's own uncertainty. Figure~\ref{std_single_los_2mass_polar_plan} shows the same face-on view as before, but representing the averaged standard deviation of the individual predictions at each distance. This figure reveals that the large-longitude short-distance region discussed in the previous paragraph is the least well constrained, with the uncertainty maximum being reached for the secondary structure at $l \simeq 301$, $d = 3.5$ kpc. This is consistent with the fact that it is $\sim 23$ deg away from our training LOS, and that the star count starts to rise quickly at these galactic longitudes. It is likely that a similar issue occurs at the other boundary of the map, at $l = 257$, due to the lower star count, but it is partly compensated by the $Z_{\rm lim} = 100$ parameter, still causing the network to lose the larger-distance information. \\ Overall, this single LOS training is surprisingly capable of a convincing generalization over a large galactic longitude window once tuned appropriately. However, many of the limitations we exposed in the present section should be resolved if we were able to train the network on several lines of sight at different galactic coordinates.\\ \begin{figure*}[!t] \hspace{-0.8cm} \begin{minipage}{1.08\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run132_std_polar_plan_map_ep60.jpg} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run132_std_polar_plan_map_terrain_close_ep60.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the standard deviation of the 2MASS single-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the standard deviation predicted in the Carina arm region using our 2MASS single-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction. {\it Right:} Zoom in on the $d < 3.5$ kpc prediction region.} \label{std_single_los_2mass_polar_plan} \end{figure*} \newpage \subsection{Combination of several lines of sight in the same training} \label{los_combination} \subsubsection{Sampling in galactic longitude} There are a few methods that could be used to construct a map that takes several reference LOS into account. We saw in the previous Section~\ref{face_on_view} that the network prediction in the vicinity of the training LOS appears to be reconstructed properly within several degrees. The simplest approach would then be to sample the plane of the sky with a few reference LOS and to train an individual network on each of them. The map would then be made of several tiles centered on each reference LOS, each tile corresponding to an independent network training.
We tested this method as follows.\\ \vspace{-0.25cm} Considering that we are mainly interested in the Galactic Plane and that we are reasonably free of latitude artifacts in the range $|b| <1$ deg, we only sampled the galactic longitude axis, with a training every $5^\circ$ centered on the $l = 280$ deg, $b = 0$ deg position, ending up with 9 LOS from $l = 260$ deg to $l = 300$ deg. While this approach provides interesting results that solved some of the issues we had with the single training generalization (Fig.~\ref{single_los_2mass_polar_plan}), it creates strong discontinuities at the junctions between adjacent tiles, resulting in a very patchy prediction. Additionally, we saw that we needed up to $5\times 10^5$ training examples for a single LOS training, so 9 individual trainings were very expensive in terms of memory and training time.\\ \vspace{-0.5cm} \subsubsection{Multiple lines of sight in a single training} \label{multi_los_CMD_constrution} We found a more suitable approach in the idea that there should be redundant information between the different trainings. It is straightforward that convolutional filters that were found to be useful for one LOS are very likely to be useful for another LOS. This consideration can be generalized to the whole network architecture. One possible solution would have been to perform a single training on the central LOS and then use it as a pre-trained starting point for all the other LOS trainings. This would have significantly reduced the training time and possibly removed a part of the tiling effects, considering that all networks would have had mostly similar weights. However, this solution is still not as appealing as a single training capable of predicting the whole map at once. For this to be possible we made a few changes to our network input.\\ \vspace{-0.25cm} For a single network to work on multiple lines of sight, it must be provided with the CMD-profile pairs, but also with some information about the reference bare CMD. In our single training this was done by including an $f_{\rm naked}$ proportion of bare CMDs in the training, corresponding to a flat profile, so that the network could learn the reference statistically. In the present approach we chose to change the form of the input by adding an input depth channel containing the bare CMD of the BGM realization that was used to compute the extincted one. This way the network is provided in a single input with both the reference CMD of the given LOS and the corresponding extincted one. We kept our $f_{\rm naked} = 0.1$ value, corresponding to cases with the bare CMD in both input depth channels, so that the network can still associate this reference with a flat profile. We note that the bare CMD is presented with the same magnitude limit cut and uncertainties. Using a completely non-processed CMD as reference, the network was not able to extract the link between a reference CMD and a CMD with almost no extinction. With this approach it is possible to construct a single training that learns from various lines of sight at the same time. Because making BGM realizations requires a significant computational time, we still used the 9 LOS sampling described in the previous section, but here they were merged into one single training dataset. We highlight that in this approach we double the input dimensions, which results in a twice larger dataset memory usage.
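A minimal sketch of this dual-channel input construction, with hypothetical array names, would be:
\begin{verbatim}
import numpy as np

def stack_input(extincted_cmd, bare_cmd, norm_ext, norm_bare):
    # channel 0: extincted CMD; channel 1: bare reference CMD of the
    # same LOS; each channel uses its own dataset-wide maximum
    return np.stack([extincted_cmd / norm_ext,
                     bare_cmd / norm_bare], axis=-1)

# f_naked cases: the bare CMD is placed in both channels and the
# associated target is a flat zero-extinction profile
\end{verbatim}
The per-channel normalization itself is detailed in the next section.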
\subsubsection{Dataset construction, architecture effect and training} \label{2mass_multi_los_training_and_test_set_prediction} Considering that a non-negligible part of the information from different lines of sight will be redundant, we were able to reduce the number of examples to $2 \times 10^5$ for each reference LOS. This still results in a $1.8 \times 10^6$ example dataset that has twice the number of pixels per object compared to Section~\ref{2mass_single_los}, requiring a careful choice of the numerical range of each of our pixels to reduce a storage footprint that could easily reach 50 GB. From this dataset we used the same 0.94, 0.05, 0.01 proportions for the training, valid and test datasets, respectively, as in the previous single LOS training. We took care to apply these proportions to each reference LOS set of $2 \times 10^5$ examples before merging them. Additionally, each input depth channel is scaled separately by looking for the pixel with the highest star count in the whole dataset. This way, both our reference and observed CMDs fall in a 0 to 1 range, conserving the proportions between various examples of a given diagram and also ensuring that both diagrams have similar actual pixel values, just like we normalized every input feature individually in Section~\ref{input_norm}.\\ We highlight that despite this change in input size, the first convolutional layer of the network produces an activation volume with the same size as before. The only change in network parameters is that the filters of this convolutional layer get an extra depth channel, which is an insignificant increase in regard to the more than $50\times 10^6$ weights in our network architecture. We note that despite this change in input dimension and a much more general context, the CNN architecture described in Section~\ref{cnn_architecture_test} remained the one with the lowest error on the test dataset of all the architectures we tested. Since only the first convolutional layer is changed, the network mostly conserves its training speed in terms of number of objects per second. However, the dataset is much larger than in the single LOS case, so each epoch takes much more time. The typical number of epochs required to train remains mostly unchanged, meaning that we have provided a similar amount of information overall, considering both the increased generality of this case and the additional global statistics of the larger dataset. We note that using only $1 \times 10^5$ examples for each reference LOS provided significantly worse, though still very acceptable, predictions. Therefore, the redundancy of information is effectively present, but in a smaller amount than we expected. We could not try $3\times 10^5$ since it would exceed the maximum host RAM memory of the compute cluster we used for training (250 GB). Still, since going from $1 \times 10^5$ to $2 \times 10^5$ examples per LOS only slightly improved the results, we do not expect important improvements from a $3\times 10^5$ sample.
Overall, the training using this dataset with $2 \times 10^5$ examples per LOS requires between 8h and 12h to complete using the Tesla V100 GPU (Sect.~\ref{cnn_computational}), depending on the number of epochs required to converge.\\ \begin{figure}[!t] \begin{minipage}{1.00\textwidth} \centering \includegraphics[width=1.0\hsize]{images/single_los_2MASS_multi_los_test_set_predictions.pdf} \end{minipage} \caption[2MASS Multiple-LOS training prediction on the corresponding test set]{Excerpt of a few objects from the test dataset of the 2MASS multiple-LOS training, for the $l=280$ deg, $b=0$ deg reference LOS. {\it Left:} View of the CMD for which the prediction is made. {\it Right:} Corresponding profile. The dashed line shows the target of the network that accounts for the $Z_{\rm lim}$ maximum distance limit. The network prediction is presented in the form of a vertical histogram prediction for each distance bin, corresponding to 100 random dropout predictions.} \label{multi_los_2MASS_test_profiles_prediction} \end{figure} Following the approach of Section~\ref{2mass_single_los_training_and_test_set_prediction}, we present in Figure~\ref{multi_los_2MASS_test_profiles_prediction} a typical prediction of our multiple-LOS CNN training. All the predictions in the figure refer to the middle LOS $l=280$ deg, $b=0$ deg, so they can be compared to Figure~\ref{single_los_2MASS_test_profiles_prediction}. The reconstruction of the profiles is very similar to that of the single LOS training, despite the number of examples given for this specific LOS being reduced by 60\%. This confirms that our dual depth-channel input approach is suitable for the task, and that the network successfully shared a part of the information between several LOS to maintain a similar prediction capacity on each individual one. This figure noticeably highlights a case in frame 2, where a first extinction structure with a non-negligible total extinction is very well predicted and where the second structure at a high distance ($\sim$9 kpc) is perfectly localized. The amount of extinction, however, is significantly underestimated for the second structure; interestingly, the dropout dispersion is maximal at the same location and can therefore be used to diagnose the poor reliability of this structure in the map.\\ \vspace{-0.9cm} \subsubsection{Map results} \label{sec:2MASS_main_result} Performing a prediction on an observed CMD with this combined network is slightly more complicated than in the previous case. Indeed, in addition to the observed CMD, we have to provide a reference bare CMD from the BGM. Several approaches could be used for this, independently of the reference CMDs that were used for the training. One solution would be to perform a BGM realization for each pixel of our map. While it would obviously be at the cost of a significant computational time, it is indeed possible to have a bare BGM CMD at each pixel, although due to the statistical fluctuations of the realizations it may produce artifacts. Removing them would require several BGM realizations for each map pixel, and then either averaging them or including a random reference choice in our 100 predictions that already account for the dropout, making this solution too time- and resource-consuming. A much simpler approach consists in using a random reference CMD from the training dataset corresponding to the closest reference LOS.
This also induces a possible tiling effect, but a much lighter one than with a completely independent training on each reference LOS. We chose to use this approach, with the addition that we constructed an averaged reference CMD from 10 of the bare reference CMDs of each training reference LOS. From this, the map is constructed by using the observed CMD and the closest averaged reference CMD as input. We note that an intermediate approach would be either to interpolate the reference CMDs or to construct a sub-grid of BGM references that has a better resolution than the training one, but is still significantly less sampled than the map resolution.\\
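A sketch of this reference selection, using hypothetical containers for the reference CMDs, is:
\begin{verbatim}
import numpy as np

# ref_l: longitudes of the 9 reference LOS (260, 265, ..., 300 deg)
# ref_cmds[k]: 10 bare BGM CMDs generated for reference LOS k
def reference_for_pixel(l_pix, ref_l, ref_cmds):
    k = int(np.argmin(np.abs(np.asarray(ref_l) - l_pix)))
    return np.mean(ref_cmds[k], axis=0)  # averaged bare reference CMD
\end{verbatim}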
\begin{figure*} \hspace{-1.5cm} \begin{minipage}{1.15\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/run147_int_map_ep70.pdf} \end{subfigure} \end{minipage} \caption[Plane of the sky view for the 2MASS multiple-LOS training]{Integrated extinction for each pixel of the 2MASS multiple-LOS training prediction in a plane of the sky view using galactic coordinates. Contours are from Planck $\tau_{353}$.} \label{multi_los_2mass_polar_sky} \end{figure*} \begin{figure*}[!t] \vspace{2cm} \hspace{-1.5cm} \begin{minipage}{1.15\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large $\bm{0 \leqslant d < 4}$ kpc} \includegraphics[width=1.0\hsize]{images/run147_int_map_d0-4_ep70.pdf} \end{subfigure}\\ \vspace{0.8cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large $\bm{4 < d \leqslant 7}$ kpc} \includegraphics[width=1.0\hsize]{images/run147_int_map_d4-7_ep70.pdf} \end{subfigure}\\ \vspace{0.8cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large $\bm{ d \geqslant 7}$ kpc} \includegraphics[width=1.0\hsize]{images/run147_int_map_d7-p_ep70.pdf} \end{subfigure} \end{minipage} \caption[Integrated view as a function of distance]{Integrated extinction for different distance ranges from the 2MASS multiple-LOS training prediction in a plane of the sky view using galactic coordinates. Contours are from Planck $\tau_{353}$.} \label{multiple_los_2mass_integrated_distance_results} \end{figure*} The results for this network prediction are presented in Figure~\ref{multi_los_2mass_polar_sky}, which shows the plane of the sky map of integrated extinction, and in Figure~\ref{multi_los_2mass_polar_plan}, which shows the face-on polar view of our distance prediction. For this result we also present in Figure~\ref{multiple_los_2mass_integrated_distance_results} a decomposition of the integrated extinction corresponding to different distance ranges, $[0 \leqslant d < 4]$, $[4 < d \leqslant 7]$, and $[d \geqslant 7]$ kpc. From these figures it is visible that a tiling effect remains in the integrated extinction, especially for the large-longitude tile. However, it mainly impacts the highest latitudes and the close distance range ($d < 4$ kpc) of the map, which are not in the Galactic Plane slice represented in the face-on view. Interestingly, these results were made using $Z_{\rm lim}=50$, meaning that properly sampling the galactic longitude into several lines of sight already considerably reduces the very large number of artifacts we had in the equivalent training using the same value of $Z_{\rm lim}$. The reason we chose not to use a larger $Z_{\rm lim}$ is that it reduced the maximum distance estimation of the map, losing most of the structures at $d > 8$ kpc, which were recovered using a smaller $Z_{\rm lim}$ value. Another striking difference between this result and the single training one, even with a high $Z_{\rm lim}$, is the contrast we obtain in high-extinction regions. For example, the structure around $l = 283$ deg and $b=-1$ deg contains a much higher integrated extinction than before. The few structures at low longitude are also much more convincing, accurately following several of the Planck map structures highlighted by the contours. The structure at $l = 266$ deg and between $1 < b < 2 $ deg is much better identified as containing much more extinction than the surrounding small regions, which was not the case in our single LOS training. In the large-longitude part ($l > 297$ deg) the situation is more complicated. Even if we have a dedicated LOS for this region, the predictions appear to be poorer. They are still much better than in the previous single LOS training, but the structures follow the Planck morphology less accurately and are more prone to artifacts. Indeed, the large difference in star count with the other lines of sight (see Fig.~\ref{2mass_sky_star_count}) tends to indicate that either our longitude sampling is too coarse and more reference LOS should be used here, or the information is too different from the other lines of sight for the generalization on them to be useful here. This would be equivalent to considering that this LOS was trained solely on its $2\times 10^5$ examples, which is insufficient to constrain a single LOS training. However, voluntarily imbalancing this training dataset would require much care, and we preferred to defer such an approach to a future work.\\ \begin{figure*}[!t] \hspace{-0.8cm} \begin{minipage}{1.08\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run147_polar_plan_map_log_terrain_ep70.jpg} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run147_polar_plan_map_log_terrain_close_ep70.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the 2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the predicted Carina arm region using our 2MASS multiple-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction. {\it Right:} Zoom in on the $d < 3.5$ kpc prediction region.} \label{multi_los_2mass_polar_plan} \end{figure*} \newpage Regarding the details of the face-on view (Fig.~\ref{multi_los_2mass_polar_plan}), the extinction distribution in distance is quite similar to that obtained with our single-LOS training. Still, we stress again that this result is obtained using $Z_{\rm lim}=50$, which led to many artifacts in the single-LOS training, whereas they are totally absent here. This tends to confirm that there are effectively no strong extinction structures in the low-longitude ($l<280$ deg), large-distance ($d > 3$ kpc) region, since a lower value of $Z_{\rm lim}$ increases the sensitivity to structures traced by a limited number of stars, and since we used multiple dedicated reference lines of sight for this large region. Still, the short-distance figure (right frame) shows an interesting extinction dynamic in this region for $ 0.5 < d < 2.5$ kpc, with regions mostly in agreement with both the HII regions and the Gaia clusters.
Overall, the region at $l=283$ deg and $d=5.5$ kpc interpreted as the Carina arm tangent is still well resolved and is mostly in agreement with the HII regions. The regions at $d=10$ kpc and $295 < l < 303$ deg show a good match with many HII regions and a much more detailed structure than the diffuse structure obtained in this area with our single-LOS training. For this reason, although we found the single-LOS result in this area suspicious, we are quite confident in that of the multiple-LOS training. We note that these are typically the kind of structures that are removed if the $Z_{\rm lim}$ parameter is increased, since they are at large distances and are most likely constrained by a relatively small number of stars in the CMD. Finally, the problematic structure at $l \simeq 300$ deg, $d = 4$ kpc is still present and has a stronger maximum differential extinction than for the single-LOS training. The foreground for this large-longitude LOS still remains compatible with our arm shape. The result from this multiple-LOS training, presented in the two previous figures (\ref{multi_los_2mass_polar_sky} and \ref{multi_los_2mass_polar_plan}), constitutes our main reference result. {\bf For the rest of the present manuscript we refer to this result as our ``main 2MASS result'', to ease the comparison with our other cases.}\\ \begin{figure*}[!t] \hspace{-0.8cm} \begin{minipage}{1.08\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run147_std_polar_plan_map_ep70.jpg} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run147_std_polar_plan_map_terrain_close_ep70.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the standard deviation of the 2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the standard deviation predicted in the Carina arm region using our 2MASS multiple-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction. {\it Right:} Zoom in on the $d < 3.5$ kpc prediction region.} \label{std_multi_los_2mass_polar_plan} \end{figure*} \newpage Figure~\ref{std_multi_los_2mass_polar_plan} shows the averaged standard deviation of the prediction coming from the dropout, using the face-on view. Overall, this figure appears more contrasted than the equivalent one from the single-LOS training (Fig.~\ref{std_single_los_2mass_polar_plan}). Again, this is mostly explained by the lower $Z_{\rm lim}$ value, which allows the network to consider fewer stars as relevant for the extinction prediction, so that we recover structures that were potentially missed before, but also reduces the signal-to-noise ratio of the global map prediction. For this reason, and because we conserved the same scale for comparison, we expect only the two regions with the highest dispersion to reflect a true underlying issue. These regions are both found in the large-longitude region that corresponds to the same problematic reference LOS. The region at $d = 10$ kpc contains several small structures that were lost with the higher $Z_{\rm lim}$ and that are in fact expected to be uncertain from this standpoint, since they are at our detection limit.
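For reference, the dispersion map shown in Fig.~\ref{std_multi_los_2mass_polar_plan} is obtained by repeating the forward pass with dropout kept active and aggregating the outputs; a minimal sketch of this Monte-Carlo estimate (with a placeholder function standing in for our CNN forward pass) is:

\begin{verbatim}
import numpy as np

def predict_with_dropout(cmd_input, rng):
    """Placeholder for one CNN forward pass with dropout kept active;
    returns one extinction profile over the distance bins."""
    base = np.linspace(0.0, 2.0, 32)     # fake mean profile
    mask = rng.random(32) > 0.1          # fake dropout realization
    return base * mask / 0.9

rng = np.random.default_rng(0)
cmd_input = np.zeros((2, 64, 64))        # hypothetical network input

# Repeat the stochastic forward pass and aggregate the samples.
samples = np.stack([predict_with_dropout(cmd_input, rng)
                    for _ in range(100)])
profile_mean = samples.mean(axis=0)      # map prediction
profile_std = samples.std(axis=0)        # dispersion shown in the map
\end{verbatim}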
These small, distant structures do not raise much concern about whether they trace genuine physical structures; rather, their differential extinction might be off the true value and they are possibly too extended around their central position, similarly to what is illustrated in the second frame of Fig.~\ref{multi_los_2MASS_test_profiles_prediction}. In contrast, the region between $3 < d < 4$ kpc is much more likely to be an artifact, since it has a very large dispersion while it is much closer and does not have a high-extinction foreground. We expect this behavior for a significantly unconstrained region, or for an input that lies off the part of the feature space that was effectively constrained during training. Without additional investigation of this region we were unable to draw firm conclusions about the reality of this strong extinction structure.\\ \begin{figure}[!t] \hspace{-1.2cm} \begin{minipage}{1.10\textwidth} \centering \includegraphics[width=1.0\hsize]{images/run147_cart_plan_map_log_terrain_ep70.pdf} \end{minipage} \caption[Cartesian face-on view for the 2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in cartesian galactic-longitude distance coordinates for the predicted Carina arm region. The axis on the right border corresponds to the pixel height as a function of the distance induced by the conic shape of our LOS. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}.} \label{multi_los_2MASS_test_profiles_prediction} \end{figure} \newpage \subsubsection{Effect of the galactic latitude} In this section, we use a new multiple-lines-of-sight training to examine the effects of a latitude sampling. The general purpose of this test is to assess whether a multiple-LOS training based on a sampling both in longitude and latitude would be useful, since it would considerably increase the size of our training dataset. For this we performed a training again centered on $l=280$ deg, $b=0$ deg using 5 reference lines of sight between $-4 < b < 4$ deg, each LOS being used as a reference for a 2 deg slice in latitude. Each reference LOS is again provided with $2\times 10^5$ examples following the same construction described in Section \ref{multi_los_CMD_constrution}.\\ \begin{figure*}[!t] \hspace{-1.5cm} \begin{minipage}{1.15\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Single LOS} \includegraphics[width=1.0\hsize]{images/run132_int_map_narrow_ep60.pdf} \end{subfigure} \hspace{0.3cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Multi LOS longitude} \includegraphics[width=1.0\hsize]{images/run147_int_map_narrow_ep70.pdf} \end{subfigure}\\ \vspace{0.8cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Multi LOS latitude} \includegraphics[width=1.0\hsize]{images/run158_int_map_narrow_ep80.pdf} \end{subfigure} \hspace{0.3cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Planck $\tau_{353}$} \includegraphics[width=1.0\hsize]{images/planck_int_map_narrow.pdf} \end{subfigure} \end{minipage} \vspace{0.2cm} \caption[Plane-of-the-sky view for the 2MASS latitude sampling multiple-LOS training]{Comparison of the integrated extinction for different 2MASS training predictions in a plane of the sky view using galactic coordinates. All the predictions are cropped at the longitude prediction range of the 2MASS latitude sampling training. The contours are from Planck $\tau_{353}$. {\it Top-left:} 2MASS single-LOS training result.
{\it Top-right:} 2MASS multi-LOS longitude sampling training result. {\it Bottom-left:} 2MASS multi-LOS latitude sampling training result. {\it Bottom-right:} Observed Planck dust opacity at 353 GHz.} \label{multi_los_2MASS_latitude} \end{figure*} Figure~\ref{multi_los_2MASS_latitude} shows a comparison of the different integrated extinction maps over the plane of the sky, including this training in the bottom-left frame. From this figure, the three middle reference LOS at $b = -2, 0, 2$ deg seem to have slightly improved the boundaries of the structures in comparison to the two previous 2MASS trainings. Still, the most striking effect is that there are significantly fewer latitude artifacts and less extinction predicted in empty regions overall, especially considering that this training was made with $Z_{\rm lim}=50$. However, there is a relatively strong tiling effect. This should come from the same effect we exposed for the multiple-LOS training in the previous section, where the discontinuity between tiles is stronger when the star count changes quickly, which is likely the case here as we get farther from the Galactic Plane. Surprisingly, the network seems to react to an overall excess of stars compared to the reference LOS by overestimating the extinction, as revealed by the strong latitude gradient in the $+4$ and $-4$ deg tiles. Finally, we observed that the extinction visible for $|b|$ between 3 and 4 deg is mostly localized at large distances, $d > 10$ kpc.\\ There are a few solutions to overcome this effect and still take advantage of the added information coming from multiple LOS in latitude. First, since most of the errors appear to be at large distance, we could simply raise the $Z_{\rm lim}$ value. Another solution would be to increase the number of LOS to better sample the latitude axis. This second solution would again require a very large amount of data for large maps. Another approach would be to refine only the spatial grid of the reference CMDs that are used during the forward step along with the observed one. Indeed, we expect that from several reference LOS the network should have found a continuous transformation of the CMD to account for the variations in star count. Therefore, changing only the reference CMD to the one corresponding to each pixel LOS when constructing the map should benefit from this automated CMD interpolation by the network. However, as we stated before, creating that many BGM realizations would be very computationally intensive. As proposed before, a convenient solution would be to use a grid for the BGM CMDs used in the forward step with an intermediate resolution between that of the map and that used for training. In any case, this tiling effect should be fixed before attempting a multiple-LOS training on longitude and latitude simultaneously. \clearpage \subsection{Comparison with other 3D extinction maps} In this section we briefly discuss the comparison of our main 2MASS result with 3D extinction maps obtained with different methods. The simplest comparison to perform is with \citet{Marshall_2020} (hereafter M20), since both our maps rely on the BGM, use the same survey as input, use a LOS approach, and have the same distance bin resolution. Figure~\ref{cornu_marshall_int_maps_comparison} shows the comparison of our map with the M20 one. In this figure we added Planck data, for which we used the angular resolution of the M20 map rather than our lower resolution (the Planck map with our resolution is in Fig.~\ref{single_los_2mass_integrated_result}).
To ease the comparison we cropped our map to the $|b| < 1$ deg range of the M20 map. From this figure our prediction appears less noisy, but this might be affected by the resolution choice. Still, the prediction of larger structures along the radial axis could be explained by the typical structure width from our GRFs, pushing the network to avoid very narrow structures. It can also be an effect of the convolution and pooling steps of our network, which tend to smooth the fine-grained differences in the input CMD, while in the MCMC method of M20 these fine-grained differences are likely to be responsible for the strong contrast between some adjacent pixels in the map. \\ In Figure~\ref{cornu_marshall_maps_comparison} we compare our maps with the M20 map using the same face-on representation, with an identical plotting methodology applied to the M20 raw data cube. Both maps are averaged over a $|b| < 1$ deg slice. In this figure, the two maps are compared with the same color scale, the same colorbar range, and the same method to average over latitude pixels. At first glance, most of the important structures are reconstructed similarly. The Carina arm tangent is very similar in our two maps. However, in our prediction the low-longitude part is free of structures that could either be artifacts in the M20 map or be missed by our method. The $d = 10$ kpc, $l \simeq 300$ deg group of structures we recover in our map is absent from the M20 map, or may correspond to the extinction detected near 7-8 kpc in the M20 map. The fact that we have a very clear extinction-free area in our results at the same place, and that it matches very well the distribution of HII regions by \citet{hou_and_han_2014}, suggests that our map is more reliable in this area. The structure at $l \simeq 300$ deg, $3 < d < 4$ kpc has an equivalent in the M20 map but with a much lower extinction and a lesser extent along the longitude axis than in our prediction. As discussed in Section~\ref{sec:2MASS_main_result}, we doubt that our prediction is accurate for this structure.\\ We compare in Figure~\ref{cornu_marshall_lallement_close_maps_comparison} our short-distance prediction with M20, and also with \citet{Lallement_2019} (hereafter L19) and \citet{Chen_2019} (hereafter C19), since the distance range is now comparable. As before, all the maps are made from the accessible raw data cubes and therefore compared using the same color scale and intervals. The L19 map does not use galactic coordinates but a cartesian frame. Our approach for this comparison was to average the Galactic Plane in a constant $\pm 35$ pc slice around the plane, roughly corresponding to the height of our map at 2 kpc. We note that both the L19 and C19 maps use Gaia DR2 parallaxes through a cross-match in their process of recovering the extinction distance, which might explain the global morphology match between the two. We also did not degrade the resolution of any map to correspond to ours, since the very high resolution of L19 provides interesting substructures that are worth discussing. From this comparison we see that at distances $d > 2.5$ kpc there is almost no extinction left in both L19 and C19, while our result and M20 predict a significant extinction past this distance, with a similar morphology. The high-extinction structure at $l\simeq 300$ deg, $d=3.5$ kpc is the problematic one discussed in Section~\ref{sec:2MASS_main_result} and in the previous paragraph.
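As an aside, the latitude averaging used to build all these face-on comparison slices can be sketched as follows (hypothetical cube shape and axis sampling; for the cartesian L19 cube, the selection is done on a constant height $|z| < 35$ pc instead):

\begin{verbatim}
import numpy as np

# Hypothetical data cube: differential extinction per
# (longitude, latitude, distance) pixel, latitude sampled every 0.5 deg.
cube = np.random.rand(80, 17, 32)              # placeholder values
latitudes = np.linspace(-4.0, 4.0, 17)         # deg

# Face-on slice: average all latitude pixels within |b| < 1 deg.
plane_mask = np.abs(latitudes) < 1.0
face_on = cube[:, plane_mask, :].mean(axis=1)  # shape (80, 32)

# For a cartesian cube (as in L19), one would instead select a
# constant-height slice, e.g. |z| < 35 pc, before averaging that axis.
\end{verbatim}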
\\ \begin{sidewaysfigure} \hspace{-0.6cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large \hspace{0.8cm} Planck dust opacity $\bm{\tau_{353}}$} \includegraphics[width=1.0\hsize]{images/planck_int_map_slim.pdf} \end{subfigure}\\ \vspace{0.4cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large \hspace{0.8cm} Main 2MASS result} \includegraphics[width=1.0\hsize]{images/run147_int_map_slim_ep70.pdf} \end{subfigure}\\ \vspace{0.4cm} \begin{subfigure}[!t]{1.0\textwidth} \caption*{\bf \large \hspace{0.8cm} Marshall+20} \includegraphics[width=1.0\hsize]{images/int_map_marshall.pdf} \end{subfigure} \end{minipage} \caption[Plane-of-the-sky comparison of the 2MASS multiple-LOS training with M20]{Comparison of the 2MASS multiple-LOS training results with Planck opacity and with the 3D extinction map by M20, in a plane of the sky view using galactic coordinates. All the maps are cropped according to the M20 latitude limits. {\it Top:} Observed Planck dust opacity at 353 GHz using the pixel resolution of M20. {\it Middle:} Integrated extinction over the whole LOS for each pixel corresponding to the 2MASS multiple-LOS training. {\it Bottom:} Integrated extinction over the whole LOS for each pixel for the M20 prediction.} \label{cornu_marshall_int_maps_comparison} \end{sidewaysfigure} \begin{figure*}[!t] \vspace{-0.3cm} \begin{minipage}{1.0\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large Main 2MASS result} \includegraphics[width=1.0\hsize]{images/run147_polar_plan_map_log_terrain_ep70.jpg} \end{subfigure} \hspace{0.4cm} \begin{subfigure}[!t]{0.47\textwidth} \caption*{\bf \large Marshall+20} \includegraphics[width=1.0\hsize]{images/polar_plan_map_log_terrain_marshall.jpg} \end{subfigure} \end{minipage} \caption[Comparison of face-on views for the 2MASS multiple-LOS training with M20]{Comparison of the face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the Carina arm region. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Our main 2MASS multiple-LOS training result. {\it Right:} The M20 prediction using the same galactic slice construction.} \label{cornu_marshall_maps_comparison} \vspace{-0.5cm} \end{figure*} \newpage To compare the sub-structures in the close-range comparison (Fig.~\ref{cornu_marshall_lallement_close_maps_comparison}), we refer to the sub-structures observed in the L19 map. We call A the structure at a close distance $d = 0.8$ kpc observed in the range $257 < l < 280$ deg; the structures that all lie between $1.5 < d < 2.0$ kpc are called B, C, D, and E, and correspond to longitudes 260, 272, 283, and 302 deg, respectively. The last, more blurry structure around $l=280$ deg at a higher distance $2.5 < d < 3$ kpc is called F. We observe that the L19 map D structure has a counterpart in M20 and in our map as well. In the M20 map, this structure is blurred by the large uncertainties in distance and it cannot be disentangled from the arm tangent structure. Our method seems to dissociate this D structure from a more distant one that is compatible with the F structure from the L19 map and that is visible in the C19 maps as well. Regarding the regions A, B, and C, the morphology of the M20 map in the corresponding region seems compatible with that of L19 and C19, with a significant detection of the gap at $\simeq 265$ deg but with a higher distance prediction for the B and C structures that are behind A.
In our result it seems that our CNN has rather smoothed and packed all these structures together, with a continuous structure between $d=0.8$ and $d=2.5$ kpc. A gap may also be present at $\simeq 265$ deg, although it is less clear than in the three other maps. We note that the structure E from L19, which is also predicted by C19, has no counterpart in the M20 map nor in our prediction. However, our prediction and the M20 map both reconstruct a closer structure at $l=303$ deg around $d=1$ kpc, the more distant one at the same longitude being the probable artifact we described before.\\ \begin{figure*}[!t] \vspace{0.6cm} \hspace{-0.4cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Main 2MASS result} \includegraphics[width=1.0\hsize]{images/run147_polar_plan_map_log_terrain_close_ep70.jpg} \end{subfigure} \vspace{0.6cm} \hspace{0.1cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Marshall+20} \includegraphics[width=1.0\hsize]{images/polar_plan_map_log_terrain_close_marshall.jpg} \end{subfigure}\\ \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Lallement+19} \includegraphics[width=1.0\hsize]{images/gal_plan_polar_map_terrain_lallement.pdf} \end{subfigure} \vspace{0.6cm} \hspace{0.1cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Chen+19} \includegraphics[width=1.0\hsize]{images/polar_plan_map_log_terrain_close_chen.jpg} \end{subfigure} \end{minipage} \caption[Face-on view comparison at short distances for the 2MASS multiple-LOS training]{Short-distance comparison of the face-on view of the Galactic Plane in polar galactic-longitude distance coordinates for the Carina arm region using various maps. All the maps are limited to $d < 3.5$ kpc. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Top-left:} Our 2MASS multiple-LOS training result for $|b|< 1$ deg. {\it Top-right:} M20 prediction for $|b|< 1$ deg. {\it Bottom-left:} L19 prediction for $|z| < 35$ pc. {\it Bottom-right:} C19 prediction for $|b|< 1$ deg.} \label{cornu_marshall_lallement_close_maps_comparison} \end{figure*} \vspace{-0.2cm} From all these comparisons, our method seems to be at least as efficient as the M20 approach that uses the same data, with the advantage of predicting less noisy maps with more compact structures and better-resolved high-distance structures. Our map still contains uncertainty on the distance estimate that can spread over several distance bins, but the finger-of-god effect is greatly reduced. Our CNN might still lack distance resolution at closer distances to match the overall morphology of the L19 map, which might be improved by adding Gaia data to our approach (Section~\ref{gaia_2mass_ext_section}).
\clearpage \subsection{Addition of a second color-magnitude diagram} \begin{figure*}[!t] \vspace{-0.3cm} \centering \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/2MASS_obs_true_range_l280bp0.pdf} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/2MASS_CMD2_obs_true_range_l280bp0.pdf} \end{subfigure} \caption[Dual 2MASS CMD illustration]{Illustration of the two different 2MASS CMDs [J-K]-[K] and [J-H]-[H] as observed by 2MASS for $l = 280$ deg, $b = 0$ deg with a cone query radius of $0.25\, \mathrm{deg}$.} \label{second_2mass_cmd comparison} \end{figure*} \begin{figure}[!t] \hspace{-0.9cm} \begin{minipage}{1.10\textwidth} \centering \includegraphics[width=1.0\hsize]{images/run155_int_map_ep60.pdf} \end{minipage} \caption[Plane-of-the-sky view of the 2MASS dual-CMD multiple-LOS training]{Integrated extinction of the 2MASS dual-CMD multiple-LOS training prediction in a plane of the sky view using galactic coordinates. Contours are from Planck $\tau_{353}$.} \label{multi_los_dual_2MASS_int_map} \vspace{-0.2cm} \end{figure} Based on our multiple-LOS training, we demonstrated that it is possible to efficiently add a new input depth channel without significantly increasing the network computational time, and that the network manages to infer correlations between the two channels. While this approach was used to allow the network to distinguish different line-of-sight references, it can also be used to add more information in the form of additional images. Thus, we added a second 2MASS CMD representing [J-H]-[H] with the same $64 \times 64$ resolution. The J-H colors are smaller than the J-K colors because of the smaller difference in wavelengths, so we reduced the color range of this diagram to better resolve the induced star shift. Figure~\ref{second_2mass_cmd comparison} shows a comparison of the two observed diagrams. For this training we directly used the multiple reference LOS approach, conserving the same 9 reference LOS for training. Therefore, our input for each example is now a set of 4 CMDs, two observed ones and two bare references, consequently increasing the dataset size. We conserved $f_{\rm naked} = 0.1$ and $Z_{\rm lim} = 50$, but the definition of the latter changes slightly. It is still used to assess a maximum distance after which the target profile is set to zero (Sect.~\ref{zlim_subsection}), but this time it does so only after both CMDs have reached this limit individually. Each CMD input depth channel is normalized individually by looking for the highest pixel value in the whole dataset for a given depth channel. The training time required is similar to our main 2MASS result, although the convergence requires more epochs overall.\\ \begin{figure*}[!t] \hspace{-0.8cm} \begin{minipage}{1.10\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/run155_polar_plan_map_log_terrain_ep60.jpg} \end{subfigure} \hspace{0.5cm} \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/run155_polar_plan_map_log_terrain_close_ep60.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the dual-CMD 2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the predicted Carina arm region using our dual-CMD 2MASS multiple-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction.
{\it Right:} Zoom in on the $d < 3.5$ kpc prediction.} \label{multi_los_dual_2MASS_polar_map} \vspace{3cm} \end{figure*} \begin{figure}[!t] \hspace{-0.5cm} \begin{minipage}{1.05\textwidth} \centering \includegraphics[width=1.0\hsize]{images/run155_cart_plan_map_log_terrain_ep60.pdf} \end{minipage} \caption[Cartesian face-on view for the dual-CMD 2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in cartesian galactic-longitude distance coordinates for the predicted Carina arm region using our dual-CMD 2MASS multiple-LOS training. The axis on the right border corresponds to the pixel height as a function of the distance induced by the conic shape of our LOS. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}.} \label{multi_los_dual_2MASS_cart_map} \end{figure} \newpage We illustrate our prediction in the plane of the sky in Figure~\ref{multi_los_dual_2MASS_int_map}. It mainly highlights that there are more latitude artifacts and a stronger tiling effect than in our main 2MASS result. Still, the Galactic Plane in the $|b| < 2$ deg range is mainly free of these artifacts and accurately follows the Planck morphology contours. The most interesting difference with our main result is that the extinction quantity seems better constrained overall. We mostly conserved the same integrated extinction dynamic range in the map, but there are far fewer saturated pixels and the transition between the high- and low-extinction regimes is smoother. This could be explained by the fact that, even if the second CMD did not improve the distance estimate (which is mostly induced by the amount of pixel shift), it has certainly improved the resolution of the extinction value (which depends on the star ratio between pixels, Sect.~\ref{input_output_cnn_dim}). Indeed, two CMDs improve the global statistics, but there are also more stars per pixel overall in the [J-H]-[H] CMD, definitely improving the extinction quantity resolution. However, the fact that this result presents more latitude artifacts might indicate that the $Z_{\rm lim} = 50$ value is not adapted to this case. The [J-H]-[H] CMD generally being more populated than the [J-K]-[K] one, it usually reaches the limit later, and it can still be used to infer the extinction profile up to greater distances. However, the color leverage being smaller in this diagram, the network is tasked to predict more extinction with less information. A suitable solution to counter this effect would be to have a different $Z_{\rm lim}$ value for each CMD, but we have not had time for this test yet.\\ The face-on view for this training is presented in Figure~\ref{multi_los_dual_2MASS_polar_map} and the cartesian view of the same prediction is in Figure~\ref{multi_los_dual_2MASS_cart_map}. The lower-longitude part $l < 280$ deg is roughly identical to our main 2MASS result, with a bit more very close ($d < 0.4$ kpc) foreground. In contrast, the high-longitude part $l > 280$ deg presents a few differences that are more evident in Figure~\ref{multi_los_dual_2MASS_cart_map}. First, there is much more continuity along the structure at $1 < d < 2$ kpc, and along that at $3 < d < 5$ kpc. Even if this view strongly stretches the short distances, it is visible that the extinction is more evenly distributed between these two structures than in our main result, where the structure at $3 < d < 5$ kpc was much denser.
This latter structure now has a much lower extinction value and looks much more like the M20 prediction in Figures~\ref{cornu_marshall_maps_comparison} and~\ref{cornu_marshall_lallement_close_maps_comparison}. These elements support the idea that it is not an artifact but a genuine interstellar structure, although it remains more extended in longitude in the present prediction than in M20. Interestingly, the network tends to predict a connection between the tangent and this secondary structure more than with the closer one that would better correspond to our arm model. The more distant group of structures around $d\simeq 10$ kpc is roughly identical, with variations of the same order as those we would obtain from repeated trainings on the same dataset.\\ While these results are sufficiently improved to justify the addition of the second 2MASS CMD, we were not able to include it in all our tests since it would lead to very large datasets that are hard to work with using our main hardware infrastructure. Therefore, this diagram is not used in the following section.\\ \clearpage \section{Combined Gaia-2MASS extinction maps} \label{gaia_2mass_ext_section} In this section we generalize our approach of multiple CMDs as input by adding a Gaia data diagram. We present some specificities related to the Gaia dataset, such as the band and parallax error fittings. We also present the results from a CNN training on a single reference LOS case and from a multiple reference LOS one. We compare them to our main 2MASS result and to other extinction maps. We finally discuss the current limitations of our approach with Gaia, along with some possible adjustments that we considered. \etocsettocstyle{\subsubsection*{\vspace{-1.2cm}}}{} \localtableofcontents \vspace{-0.3cm} \subsection{Realistic Gaia diagram construction from the BGM} \label{gaia_diag_constuction} In the previous Section~\ref{2mass_maps_section}, we illustrated that the network architecture and the training dataset construction using the BGM and GRF-generated profiles (Sect.~\ref{galmap_problem_description}) were suitable to reconstruct large 3D extinction maps. We showed that our CNN architecture allows us to efficiently combine multiple diagrams as input depth channels, allowing the network to generalize complex problem representations. One of the main advantages of this construction is that each diagram can theoretically be completely independent, and that the network will automatically extract the relevant information contained in each of them regarding the task to perform. From these observations and theoretical elements, it should be possible to add an independent diagram from Gaia without the necessity of cross-matching the stars with 2MASS. For this to be possible we still had to follow the rules that allowed us to construct a suitable 2MASS CMD from the BGM, meaning that we had to select a statistical representation that accurately reproduces an observed quantity based on cuts and uncertainties (Sect.~\ref{ext_profile_and_cmd_realism}).\\ For this application we chose to use a [G]-[$\varpi$] diagram, where G is the photometric Gaia band at $\lambda_{\rm eff} = 0.623$\,$\mu m$, and $\varpi$ is the parallax measurement. We note that using directly the parallax instead of the distance removes the necessity of a possibly inaccurate or complex distance inversion \citep{bailer-jones_2018}. In this diagram the effect of extinction is to decrease the star luminosity (increasing the G magnitude value) of all stars beyond the corresponding distance.
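The construction of such a diagram, and the effect of a single cloud on it, can be sketched as follows (a synthetic star list in place of a BGM realization; the $A_G/A_V$ coefficient is a rough placeholder, not a fitted extinction-law value):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical star list: true distances (kpc) and unextincted G mag.
dist = rng.uniform(0.1, 10.0, 50_000)
g_mag = rng.normal(16.0, 2.0, 50_000)

# One cloud at d = 0.5 kpc with A_V = 3 mag; the A_G/A_V = 0.86
# coefficient here is only a placeholder value.
a_g = 3.0 * 0.86
g_ext = g_mag + np.where(dist > 0.5, a_g, 0.0)

# Build the [G]-[parallax] diagram as a 64x64 2D histogram
# (parallax in mas for distances in kpc).
parallax = 1.0 / dist
diagram, g_edges, plx_edges = np.histogram2d(
    g_ext, parallax, bins=64, range=[[8.0, 20.0], [0.0, 5.0]])
\end{verbatim}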
The main visible effect of extinction is thus a flattening of the continuous parallax distribution along the G-magnitude axis. We illustrate the effect of extinction on this diagram in Figure~\ref{Gaia_diag_dissection}. We note that this effect is very strong due to the relatively short wavelength of the G band, implying a greater extinction than with 2MASS, and that therefore the diagram will not provide any additional information for large-distance estimates. Still, it should be useful to increase the close-distance resolution, especially by providing a better first extinction-front position. It is also possible that it helps to better constrain the low-extinction lines of sight.\\ \begin{figure*}[!t] \hspace{-0.6cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf No extinction} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_no-ext.png} \end{subfigure} \vspace{0.6cm} \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf 1 Cloud, $\bm{A_v = 3}$ mag, $\bm{d = 0.5}$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_A3_d0p5_ln.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf 1 Cloud, $\bm{A_v = 5}$ mag, $\bm{d = 0.5}$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_A5_d0p5_ln.png} \end{subfigure} \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf 2 Clouds, $\bm{A_v = 5,3}$ mag, $\bm{d = 0.5,2}$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_2_nuages_ln.png} \end{subfigure}\\ \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf 1 Cloud, $\bm{A_v = 3}$ mag, $\bm{d = 2}$ kpc} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_A3_d2_ln.png} \end{subfigure} \begin{subfigure}[!t]{0.49\textwidth} \caption*{\bf Uniform ext - 1 Cloud, $\bm{A_v = 3}$ mag, $\bm{d = 2}$ kpc\\} \includegraphics[width=1.0\hsize]{images/CMD_dissection/Gaia_A3_d2_u.png} \end{subfigure} \end{minipage} \caption[Effect of individual clouds on the Gaia diagram]{Effect of individual clouds on the Gaia [G]-[$\varpi$] diagram. The extinction is modeled as a log-normal distribution, except in the bottom-right panel where a uniform extinction is used.} \label{Gaia_diag_dissection} \end{figure*} Like for 2MASS, we had to characterize the selection cut and the observational uncertainties for the Gaia data. For this we used the same approach as the one described in Section~\ref{ext_profile_and_cmd_realism}, on the same $l=280$ deg, $b=0$ deg LOS with a 1 deg$^2$ field. For the magnitude cut in the G, $\mathrm{G_{BP}}$ and $\mathrm{G_{RP}}$ bands, we excluded the stars that lack one or more detections and followed Equation~\ref{magnitude_cut_fit} to fit each magnitude histogram. Figure~\ref{Gaia_cut_fitting} shows the resulting fitted cuts in the three Gaia bands. For the parallax, we selected the stars based on the ratio of their parallax over parallax error, with $\varpi/\sigma(\varpi) > 3$, excluding all stars that miss any of these quantities. This last cut reduced the number of stars, which is then closer to the 2MASS star count.\\ \begin{figure}[!t] \centering \includegraphics[width=1.0\textwidth]{images/Selection_function_Gaia_l280_b0_1deg.png} \caption[Fitting of the cut in magnitude for the three Gaia bands]{Fitting of the cut in magnitude for the three Gaia bands. The blue histograms show the observed distribution; the fitted models are in red.
The gray area shows the range of magnitude values included in the fit.} \label{Gaia_cut_fitting} \end{figure} Regarding the uncertainties, we kept only stars that have all the bands, all the band uncertainties, and both the parallax value and its uncertainty, which conserves around 80\% of the stars on the present LOS. For the three magnitude-band uncertainties we followed the same procedure as described in Section~\ref{ext_profile_and_cmd_realism}, following Equation~\ref{uncertainty_fit}. Regarding the parallax uncertainty, we observed that it has a higher correlation with the G band than with the parallax itself, so we decided to fit this uncertainty using the [G]-[$\sigma(\varpi)$] diagram. All the fits are illustrated in Figure~\ref{fig_Gaia_uncertainty_fitting}, and Table~\ref{table_Gaia_uncertainty_fitting} contains the corresponding best-fit parameters for the Gaia magnitudes and the parallaxes. There is an important difference between the diagrams with and without the added uncertainties, which play a major role in making the diagram realistic.\\ \begin{table} \centering \caption{Uncertainty best-fit parameters for all Gaia bands and the parallax} \vspace{-0.1cm} \def1.1{1.1} \begin{tabularx}{0.80\hsize}{l @{\hskip 0.05\hsize} @{\hskip 0.05\hsize}*{3}{Y}} \toprule & a & b & c\\ \midrule G & $7.950 \times 10^{-12}$ & $1.013 \times 10^{0}$ & $4.069 \times 10^{-4}$\\ $\mathrm{G_{BP}}$ & $5.003 \times 10^{-8}$ & $7.074 \times 10^{-1}$ & $-8.295 \times 10^{-4}$\\ $\mathrm{G_{RP}}$ & $1.621 \times 10^{-11}$ & $1.147 \times 10^{0}$ & $1.187 \times 10^{-3}$\\ $\mathrm{\varpi}$ & $2.138 \times 10^{-8}$ & $8.539 \times 10^{-1}$ & $2.767 \times 10^{-2}$\\ \bottomrule \end{tabularx} \label{table_Gaia_uncertainty_fitting} \end{table} \begin{figure*}[!t] \hspace{-0.4cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/Gaia_flx_Gmag_Gaia_l280_b0_1deg.txt.png} \end{subfigure} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/Gaia_flx_BPmag_Gaia_l280_b0_1deg.txt.png} \end{subfigure}\\ \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/Gaia_flx_RPmag_Gaia_l280_b0_1deg.txt.png} \end{subfigure} \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{images/plots_noise/Gaia_plx_Gaia_l280_b0_1deg.txt.png} \end{subfigure} \end{minipage} \caption[Fit of Gaia uncertainties]{Fit of Gaia uncertainties. The gray dots are Gaia stars, the gray scale representing the star density in the diagram. The running median (blue dots) is fitted by an exponential model (orange line).} \label{fig_Gaia_uncertainty_fitting} \end{figure*} \newpage \vspace{-0.3cm} \subsection{Training with one line of sight} \label{Gaia-2MASS_single_los_training_and_test_set_prediction} For comparison with the single-LOS training using solely 2MASS from Section~\ref{2mass_single_los}, we present here a Gaia-2MASS training on a single LOS. For this we defined our input as a dual depth-channel CMD, the first channel being the same [J-K]-[K] CMD and the second the new [G]-[$\varpi$] diagram, both using the same $64 \times 64$ resolution. As for 2MASS only, the order of the operations is as follows: we started with raw BGM realizations, we generated composite GRF extinction profiles (Sect.~\ref{GRF_profiles_section}) that are used to extinct all stars from each list, then the fitted cuts are applied and the noise is added.
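Schematically, this cut-and-noise step works as follows for the G band (a minimal sketch: the completeness model is a simplified placeholder, and we assume here the exponential uncertainty form $\sigma(m) = a\,e^{b\,m} + c$ with the G-band coefficients of Table~\ref{table_Gaia_uncertainty_fitting}):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def sigma_g(m):
    # Assumed exponential uncertainty model sigma(m) = a*exp(b*m) + c,
    # with the G-band best-fit values from the table above.
    a, b, c = 7.950e-12, 1.013, 4.069e-4
    return a * np.exp(b * m) + c

def apply_cut_and_noise(g_true):
    # Simplified completeness cut: a smooth detection probability
    # dropping around a hypothetical limiting magnitude of 18.5.
    p_detect = 1.0 / (1.0 + np.exp(2.0 * (g_true - 18.5)))
    kept = g_true[rng.random(g_true.size) < p_detect]
    # Add the magnitude-dependent Gaussian noise.
    return kept + rng.normal(0.0, sigma_g(kept))

mock_g = apply_cut_and_noise(rng.normal(16.0, 2.0, 50_000))
\end{verbatim}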
From these extincted star lists, we determined the limiting distances using $Z_{\rm lim} = 100$ and forced all profiles to zero after this point. Finally, we cut every extinction peak above five in the targets (Sect.~\ref{zlim_subsection}). The created input-target pairs are then used to train the network, accounting for a $f_{\rm naked} = 0.1$ value in order to regularly present the bare diagrams as inputs. Similarly to the single-LOS training using 2MASS, we generated $5\times 10^5$ of these examples and kept the $0.94$, $0.05$, $0.01$ proportions for the training, validation, and test datasets. Each input depth channel is normalized into the 0 to 1 range according to the maximum pixel value in the full dataset. This step is even more important in this case because there are significantly more stars in the [G]-[$\varpi$] diagrams than in the [J-K]-[K] CMDs; the normalization therefore evens out the initial respective influence of the two diagram types. Training the network on this dataset is quick, since it only contains two input depth channels and $5\times 10^5$ examples per epoch. The convergence is reached at a very similar epoch as for the 2MASS single-LOS training.\\ \begin{figure*} \hspace{-0.6cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/run157_int_map_ep60.pdf} \end{subfigure} \end{minipage} \caption[Plane of the sky view for the Gaia-2MASS single-LOS training]{Integrated extinction for each pixel of the Gaia-2MASS single-LOS training prediction in a plane of the sky view using galactic coordinates. Contours are from Planck $\tau_{353}$.} \label{single_gaia_2mass_int_map} \end{figure*} \begin{figure*}[!t] \hspace{-0.9cm} \begin{minipage}{1.1\textwidth} \centering \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run157_polar_plan_map_log_terrain_ep60.jpg} \end{subfigure} \hspace{0.3cm} \begin{subfigure}[!t]{0.47\textwidth} \includegraphics[width=1.0\hsize]{images/run157_polar_plan_map_log_terrain_close_ep60.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the Gaia-2MASS single-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the predicted Carina arm region using our Gaia-2MASS single-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction. {\it Right:} Zoom in on the $d < 3.5$ kpc prediction region.} \label{single_gaia_2mass_polar_plan} \end{figure*} The predictions from the previously described network training are presented in the following figures: the integrated extinction plane of the sky view is presented in Figure~\ref{single_gaia_2mass_int_map}, the face-on view of the Galactic Plane corresponds to Figure~\ref{single_gaia_2mass_polar_plan}, and the cartesian view of the same quantity is in Figure~\ref{single_gaia_2mass_cart_plan}. From these results we observe that the prediction is very similar to the one from the 2MASS-only results. It mostly suffers from the same issues and presents the same strengths.
However, we note two major differences: (i) in the low-longitude part $l < 280$ deg, the prediction seems to better follow the L19 or C19 morphology from Figure~\ref{cornu_marshall_lallement_close_maps_comparison}, with a much clearer distinction between two groups of structures in distance, and (ii) in the high-longitude part $l > 290$ deg, almost all the extinction is concentrated in the structure we interpreted as an artifact in previous sections, in the region $295 < l < 303$ deg and $2.5 < d < 3.5$ kpc. There might be two explanations for the second point. One possibility is that the very large star-count increase from Gaia, when following the longitude axis, pushes the network more quickly to perform predictions on non-constrained parts of the feature space. Indeed, the network has never been presented with such highly populated CMDs in the training dataset. The other explanation would be that we did not manage to create a representative Gaia diagram and that this artifact is the result of a systematic difference between modeled and observed diagrams. We note that the high-distance diffuse structure is predicted very similarly to the 2MASS single-LOS training, indicating that the network must have automatically identified that it can rely on 2MASS alone for such high-distance predictions, independently of the foreground for which Gaia dominates. This might also come from an insufficiently restrictive $Z_{\rm lim}$, but we already adopted a relatively large $Z_{\rm lim} = 100$ value, as further discussed for the next case.\\ \begin{figure*}[!t] \centering \begin{subfigure}[!t]{0.98\textwidth} \includegraphics[width=1.0\hsize]{images/run157_cart_plan_map_log_terrain_ep60.pdf} \end{subfigure} \caption[Cartesian face-on view of the Gaia-2MASS single-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in cartesian galactic-longitude distance coordinates for the predicted Carina arm region using our Gaia-2MASS single-LOS training. The axis on the right border corresponds to the pixel height as a function of the distance induced by the conic shape of our LOS. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}.} \label{single_gaia_2mass_cart_plan} \end{figure*} \newpage \subsection{Combined sampled training} \label{Gaia_2mass_multi_los_training_and_test_set_prediction} For the same reason as exposed in Section~\ref{los_combination}, we proceeded to a single training using the same sampling of 9 LOS over the galactic longitude range $260 < l < 300$ deg. This should help take into account the specificities of each LOS and share the redundant information between them. The input is then adjusted to account for 4 individual diagrams at once: a 2MASS [J-K]-[K] extincted CMD and its bare reference, and a Gaia [G]-[$\varpi$] extincted diagram and its bare reference. The output is still a single LOS profile. We used the same previously defined parameters with $2\times 10^5$ examples per reference LOS, $Z_{\rm lim} = 100$, and kept $f_{\rm naked} = 0.1$ so that the network can associate the bare references with a flat zero-extinction profile. Each input depth channel is normalized as an independent feature from the maximum pixel value of each diagram in the whole dataset (see the sketch below). This last step is of critical importance because a bare Gaia reference diagram can contain a very large number of stars that would otherwise completely dominate the network training error at the beginning.\\ The integrated extinction plane of the sky view is presented in Figure~\ref{multi_gaia_2mass_int_map}.
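This per-channel normalization can be sketched as follows (hypothetical dataset array and channel scales; the channel ordering is illustrative):

\begin{verbatim}
import numpy as np

# Hypothetical dataset: N examples with 4 input depth channels
# (2MASS observed, 2MASS bare, Gaia observed, Gaia bare), 64x64 pixels.
# The per-channel scale factors mimic the larger Gaia star counts.
dataset = np.random.rand(1_000, 4, 64, 64) * \
          np.array([60.0, 60.0, 400.0, 400.0])[None, :, None, None]

# Normalize each depth channel independently to the [0, 1] range,
# using the maximum pixel value of that channel over the whole dataset.
channel_max = dataset.max(axis=(0, 2, 3), keepdims=True)
dataset_norm = dataset / channel_max
\end{verbatim}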
The map in Figure~\ref{multi_gaia_2mass_int_map} visibly presents significantly less contrast in the low-extinction regions, but the high-latitude predictions seem more even than in our previous maps. The central structure presents a very high integrated extinction that is significantly higher than in several of our previous results, but this high extinction is compatible with our main 2MASS result (Fig.~\ref{multi_los_2mass_polar_plan}). The low-longitude part appears properly reconstructed, again with less contrast, which is also the case for the high-longitude part but with an added diffuse latitude extinction.\\ \begin{figure*}[!t] \hspace{-0.6cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/run151_int_map_ep100.pdf} \end{subfigure} \end{minipage} \caption[Plane-of-the-sky view for the Gaia-2MASS multiple-LOS training]{Integrated extinction for each pixel of the Gaia-2MASS multiple-LOS training prediction in a plane of the sky view using galactic coordinates. Contours are from Planck $\tau_{353}$.} \label{multi_gaia_2mass_int_map} \end{figure*} \begin{figure*}[!t] \hspace{-1.2cm} \begin{minipage}{1.12\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/run151_polar_plan_map_log_terrain_ep100.jpg} \end{subfigure} \hspace{0.3cm} \begin{subfigure}[!t]{0.48\textwidth} \includegraphics[width=1.0\hsize]{images/run151_polar_plan_map_log_terrain_close_ep100.jpg} \end{subfigure} \end{minipage} \caption[Face-on view for the Gaia-2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in polar galactic-longitude distance coordinates for the predicted Carina arm region using our Gaia-2MASS multiple-LOS training. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Left:} Full distance prediction. {\it Right:} Zoom in on the $d < 3.5$ kpc prediction region.} \label{multi_gaia_2mass_polar_plan} \end{figure*} The usual face-on view of the Galactic Plane slice is presented in Figure~\ref{multi_gaia_2mass_polar_plan}, and the same quantity using a cartesian view is in Figure~\ref{multi_gaia_2mass_cart_plan}. We also provide a comparison of this result with our previous main 2MASS result, along with the M20 and L19 maps in a close-distance view, in Figure~\ref{gaia_2mass_cornu_marshall_lallement_close_maps_comparison}. These figures illustrate important issues we have with this result. First, from a simple prediction-stability standpoint, the prediction seems noisier, with several continuous and quasi-periodic stripes of 1 to 3 bins in width, which are much more visible in Figure~\ref{multi_gaia_2mass_cart_plan}. These are obvious network artifacts, although most of them have low extinction values ($\lesssim 1$ mag/kpc). Regarding the low-longitude part, the prediction roughly follows our main 2MASS result, but with a much larger absolute extinction, a later first extinction estimate, and more compact structures (see the short-distance comparison in Fig.~\ref{gaia_2mass_cornu_marshall_lallement_close_maps_comparison}). There is also a foreground extinction in the first 2 distance bins in this region. The high-longitude region is similarly dominated by the likely artifact structure at $d=3$ kpc, which is now closer by about 0.5 kpc in comparison to the main 2MASS result.
We also note that we do not detect any structure beyond $d = 6$ kpc anymore, which is unlikely to be due to the addition of Gaia, since we did not remove any information from the 2MASS data, and since structures were present at those distances in the single-LOS Gaia-2MASS result (Fig.~\ref{single_gaia_2mass_polar_plan}). At short distances in the middle longitude range $275<l<285$ deg, the prediction appears realistic. The first central extinction peak around region D, at $l \simeq 282$ deg and $d = 1.5$ kpc, roughly corresponds to the L19 one for the same structure, and our structures in this small area are compatible with the M20 prediction. Still, in the same middle longitude range, the secondary peak that could correspond to the arm tangent around $l = 282$ deg and $d = 6$ kpc in our main 2MASS result is predicted with a shorter distance by at least 1 kpc, and is not as elongated as in the previous Gaia-2MASS single-LOS result. Considering that the single-LOS training had 2.5 times the number of training examples for this centered LOS, it is more likely to yield a better prediction here, and with just $2 \times 10^5$ examples this LOS is underconstrained in the present result.\\ \begin{figure*}[!t] \begin{minipage}{1.0\textwidth} \centering \begin{subfigure}[!t]{1.0\textwidth} \includegraphics[width=1.0\hsize]{images/cart_plan_map_log_terrain_ep100.pdf} \end{subfigure} \end{minipage} \caption[Cartesian face-on view for the Gaia-2MASS multiple-LOS training]{Face-on view of the Galactic Plane $|b| < 1$ deg in cartesian galactic-longitude distance coordinates for the predicted Carina arm region using our Gaia-2MASS multiple-LOS training. The axis on the right border corresponds to the pixel height as a function of the distance induced by the conic shape of our LOS. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}.} \label{multi_gaia_2mass_cart_plan} \end{figure*} \begin{figure*}[!t] \hspace{-0.5cm} \begin{minipage}{1.05\textwidth} \centering \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Main 2MASS result} \includegraphics[width=1.0\hsize]{images/run147_polar_plan_map_log_terrain_close_ep70.jpg} \end{subfigure} \vspace{0.5cm} \hspace{0.3cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large 2MASS \& Gaia result} \includegraphics[width=1.0\hsize]{images/run151_polar_plan_map_log_terrain_close_ep100.jpg} \end{subfigure}\\ \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Marshall+20} \includegraphics[width=1.0\hsize]{images/polar_plan_map_log_terrain_close_marshall.jpg} \end{subfigure} \vspace{0.5cm} \hspace{0.3cm} \begin{subfigure}[!t]{0.48\textwidth} \caption*{\bf \large Lallement+19} \includegraphics[width=1.0\hsize]{images/gal_plan_polar_map_terrain_lallement.pdf} \end{subfigure} \end{minipage} \caption[Face-on view comparison at short distance for various maps]{Short-distance comparison of the face-on view of the Galactic Plane in polar galactic-longitude distance coordinates for the Carina arm region using various maps. All the maps are limited to $d < 3.5$ kpc. The symbols are the same as in Fig.~\ref{single_los_2mass_polar_plan}. {\it Top-left:} Our 2MASS multiple-LOS training result for $|b|< 1$ deg. {\it Top-right:} Our Gaia-2MASS multiple-LOS training result for $|b|< 1$ deg. {\it Bottom-left:} M20 prediction for $|b| < 1$ deg.
{\it Bottom-right:} L19 prediction for $|z| < 35$ pc.} \label{gaia_2mass_cornu_marshall_lallement_close_maps_comparison} \end{figure*} \newpage We do not push the detailed analysis of this result much further, since it presents clear signs of a very underconstrained training. There are several reasons that could explain this behavior. First, we observed that our prediction from a single-LOS training presented more plausible results than the multiple-LOS training. Cases where adding information causes the network to degrade its prediction are likely to be a sign of non-realistic input examples. This assumption is strongly supported by the fact that this network performs significantly better, with a lower average error on its test set, than an identical training with the same parameters but using 2MASS-only CMDs. It means that the network better reproduces the training profiles, but is still unable to predict the observed quantity correctly, confirming an intrinsic difference between our training inputs and the observed ones. Additionally, the fact that the central LOS prediction is also less well reconstructed illustrates that the information generalized from the other reference LOS is at best not used, and at worst badly affecting the central LOS prediction. The $Z_{\rm lim}$ maximum distance is unlikely to be affected by Gaia, since most of the corresponding stars are extincted very early; therefore, the high-distance range should be constrained solely based on 2MASS. The fact that the prediction at these distances is degraded means that the addition of Gaia affects the 2MASS prediction, which should not be the case. Here again, the explanation might lie in the existence of a systematic difference between our training Gaia diagrams and the observed ones. We note that such effects were also observed in our early-testing results using 2MASS only, and that managing to create more realistic training CMDs usually solved the issues. The main difference here is that the Gaia diagram seems more affected by aspects of the construction that were negligible for the 2MASS data. Overall, the present inconsistency of our Gaia-2MASS multiple reference LOS training is likely to come from a combination of all the limits we discuss in Section~\ref{extinction_maps_discussion}.\\ Independently of the previous discussion, we note that our choice of diagram for Gaia was not the most appropriate. Using a Gaia color [$\mathrm{G_{BP}}$ - $\mathrm{G_{RP}}$] in place of the G band would probably have worked better. This way the diagram may have contained more information thanks to the correlation between distance and extinction. Also, as we discussed for the 2MASS dual-CMD case, it should be possible to add another Gaia diagram. Ideally, we aim at using a [$\mathrm{G_{BP}}$ - $\mathrm{G_{RP}}$]-[$\varpi$] diagram and a [$\mathrm{G_{BP}}$ - $\mathrm{G_{RP}}$]-[G] CMD all together with one or several 2MASS CMDs. This is a step-by-step and ongoing work in which we try to get the most out of the two surveys.\\ \vspace{-1cm} \section{Method discussion and conclusion} \label{extinction_maps_discussion} \subsection{Dataset construction limits and improvements} \subsubsection{Magnitude cuts and uncertainty issues} We highlighted in several sections that the construction of realistic examples to train the network is the most critical aspect of this application.
It is difficult to assess whether our training data are realistic enough since, even if they are not, the network will still reconstruct a good prediction of the test dataset, because it is based on the same construction as the training dataset. Therefore, the test set only provides information on how well the network is able to perform the task it was given, but in no case whether the prediction using observed inputs will be realistic. One can directly look at the diagrams to search for striking differences, but the use of a complex ANN method was selected precisely because the fine analysis of these diagrams is difficult. Therefore, imperceptible differences could remain. Another solution is to look at the predicted map and its uncertainty, and to compare it to other predictions to identify clear errors. Depending on the region where major differences are noticed, it is possible to better analyze the underlying prediction and input to diagnose the origin of the problem. In the end, it is only by trying to reproduce the acquisition scheme of the observed data that a realistic example can be built.\\ One of our main assumptions, which we did not stress strongly before, is that we fit our magnitude cut limits and uncertainties solely on the $l = 280$ deg, $b = 0$ deg LOS. However, it is well known that 2MASS and Gaia have variations of these values across the plane of the sky, mainly due to the variations in stellar confusion across the Milky Way \citep{skrutskie_two_2006, Evans_2018}. For this reason it should be more efficient to perform at least an uncertainty and limit fitting for each of the reference LOS used in multiple-LOS trainings. From preliminary tests we observed significant variations of the cut limit along the galactic longitude axis for 2MASS. However, since the magnitude cut limit mainly affects faint main-sequence dwarf stars, this could explain why our 2MASS results are not much affected. Indeed, the network most likely does not use these limit stars, as a small extinction is sufficient to shift those stars beyond the detection limit in the CMD, preventing the network from extracting any information from them. In contrast, it is very probable that for the Gaia diagram the network is significantly affected by this limit. Overall, it means that the Gaia and 2MASS mock diagrams differ from the observed ones, which could affect some results like the last combined Gaia-2MASS multiple-LOS maps.\\ \subsubsection{Modular $Z_{\rm lim}$ value} Regarding the $Z_{\rm lim}$ value, we stressed that it is important to assess the network prediction limit capacity and to avoid providing target profiles that are impossible to reconstruct from the input data information. A simple addition to improve results from multiple-CMD trainings would be to have a different $Z_{\rm lim}$ for each CMD (see the sketch below). In particular, in the case of Gaia, since the number of stars is larger and because each of them contains less extinction information, it should be justified to adopt a significantly higher $Z_{\rm lim}$ for Gaia than for 2MASS in the same training. Another improvement would be to have a different $Z_{\rm lim}$ for each reference CMD. This could be used to force the network to perform higher-distance estimates in less crowded areas of the Milky Way, toward the anti-center for example. This would result in less precise predictions in these regions, but it could be better than no prediction at all.
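A minimal sketch of such a modular $Z_{\rm lim}$, per survey and per reference LOS, could look as follows (the baseline values and the star-count scaling are illustrative, not a validated recipe):

\begin{verbatim}
# Illustrative per-survey baseline values; not validated settings.
Z_LIM_BASE = {"2MASS": 50, "Gaia": 200}

def z_lim_for(survey, star_count, crowded=True):
    """Return a Z_lim adapted to the survey and to the LOS crowding.
    Less crowded LOS get a lower Z_lim to push the network toward
    higher-distance estimates, at the cost of noisier predictions."""
    base = Z_LIM_BASE[survey]
    if not crowded:
        base //= 2
    # Relative component: a fraction of the LOS star count, with the
    # fixed value acting as a floor for sparse LOS.
    return max(base, int(0.01 * star_count))

print(z_lim_for("2MASS", star_count=30_000))                 # crowded
print(z_lim_for("2MASS", star_count=2_000, crowded=False))   # sparse
\end{verbatim}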
More populated areas would keep a high $Z_{\rm lim}$ to improve the prediction uncertainty, since the number of stars will be sufficient to still reconstruct high distance structures. We note that we tried to use a relative $Z_{\rm lim}$ value so that each LOS has a $Z_{\rm lim}$ that corresponds to $1\%$ of its total star count. We combined this solution with a fixed $Z_{\rm lim}$ to prevent LOS with very low star counts from being too noisy. However, for now we did not manage to improve the results using this approach. Either the other limitations of the study remain the dominant issue, or we have not found an appropriate $Z_{\rm lim}$ recipe yet. Still, having the possibility to choose the map detection limit for each region would be a very useful addition to the map prediction, which would make it tunable as a function of the use case.\\ \subsubsection{Construction of realistic profiles} Our profile construction using GRFs could certainly benefit from some refinements as well. Our current prescription induces two main problems. The first one is that there is a characteristic structure size range. Similarly to the effect of a prior in a Bayesian approach, our predictions might then be biased toward structures of the same size. This means that it might be necessary to increase the diversity of our training dataset, and therefore to also increase its size. In contrast, a significant part of our generated profiles remains very unrealistic due to the intrinsic randomness of the GRF generation. Reducing the occurrence of these unrealistic examples would help reduce the training dataset size for an identical prediction capacity, and could enable the network to devote more of its weights to the actually realistic profiles. While our GRF recipe could definitely be improved, we could also use other profile prescriptions. We could for example use profiles predicted from a variety of other extinction maps. In this case, we would only keep the realistic profiles, independently of the studied LOS, so that we could apply all these profiles randomly to a large variety of CMDs from any region of the Milky Way. A similar solution would be to use simulations of the large-scale Milky Way distribution to recreate realistic profiles. One advantage would be that several simulation realizations could be used to significantly increase the training dataset size and diversity. Both of the previous approaches could also be used just to constrain the best parameters for our GRF construction. This would then allow us to generate as many profiles as we want, with the added property that it could create new realistic examples that were not seen in simulations or observations but that have a similar statistical distribution. On the other hand, in this approach the predictions would inherit the biases of the simulations or of the other maps used as a model for our profiles. \\ Another approach would be to reverse the methodology of the present study. We could use a generative method that learns to reconstruct realistic CMDs based on a mock profile. We performed preliminary attempts in this direction by designing a Generative Adversarial Network (GAN) for this task \citep{Goodfellow_2014}. This is a double-ANN setup where a network learns to predict a realistic extinction profile from a random vector, and a second network tries to distinguish extincted CMDs produced from the combination of a bare BGM CMD and the profile predicted by the first network.
This is then a "zero sum game" where the first network learns to fool the second one, while the second one tries to distinguish between true and fake extincted CMDs. This approach would allow us to have a GAN profile generator that was constrained to reconstruct realistic profiles based on observations. Still, GAN are long to train and usually do not have a transforming process between the generator prediction and the discriminator input as we described here. Getting to a working configuration of this approach would still require a lot of efforts. \newpage \subsubsection{The "perfect BGM model" assumption} \vspace{-0.1cm} One of our strongest assumptions is that the BGM prediction perfectly reproduces the observed LOS without extinction, at least statistically. Obviously the BGM itself has assumptions and limits, but one of the most important issues is that it predicts only the general shape of the Milky Way disk. In this model, there is no galactic arm, and no local stellar over-density. For example, the youngest stars are modeled with the same assumptions as the older stars, so that they are well mixed with the general population, while in observations they are more often grouped in open clusters. It means that the difference between our bare BGM and our observed quantity is certainly not solely due to the extinction. While it might be possible to construct an input quantity that would be sensitive to the presence of stellar clusters using Gaia, they are still missing in the training dataset for the network to learn to make the difference. A long term approach here would be to create star lists that may or may not contain star clusters using an additional construction recipe, that can again be constrained by observations or simulations. This would obviously be a considerable work to properly parameterize such datasets and it would very significantly increase the problem complexity and therefore the training dataset sizes and training times. It is not excluded that it would be possible to construct a very large network infrastructure that smartly combines all this information in a near future. \vspace{-0.3cm} \subsection{CNN method discussion} \vspace{-0.1cm} Regarding the CNN architecture itself, there is still room for improvement. The first modification to attempt in a near future will be to allow our network to work on Mixed precision datasets. This would enable us to significantly reduce the training dataset storage by at least a factor of two, and additionally it would strongly improve the network training speed without a significant prediction penalty regarding the expected precision of extinction maps. Still from a computational standpoint, allowing our framework to load data dynamically from the durable storage source would strongly reduce our RAM memory usage. The next step is then multi-GPU support. This way we could expect to work on much larger datasets and to add several input depth-channels from other studies to be all combined at once by our CNN, still without the necessity of a cross match.\\ Another modification would be to improve the network architecture itself. There are still recent CNN approaches that we did not attempt in the present construction like Residual Neural Networks \citep{He_ResidualNN_2015} or multi-path networks like in the Inception architecture \citep{inception_2016} that might improve the generalization capacity of the method. 
We note that we were surprised by the performance of a $1\times 1$-filter convolution in our architecture exploration, so a careful redesign of the network architecture with more of them could lead to prediction improvements. On a much longer time scale, we could consider including spatial coherency in our input, for example by providing the input depth channels not only for the presented LOS but also for all those close to it. Our input could also be constructed from higher-dimension depth channels, for example using 3D histograms like [G]-[H]-[K], or a similar one that includes the Gaia parallax. This way the network would be provided with a sampling of the star distance distribution and would look for coherent 3D patterns in this volume. Ultimately, we can imagine a network that would be able to take large-scale Milky Way volumes as its input at once (instead of discrete LOS profiles), allowing it to construct several realistic 3D Milky Way dust distributions as single examples and to generalize from them. Such an application would be a huge computational challenge, but it is becoming more and more realistic considering the important improvements in ANN-dedicated hardware in the past few years and the performance predicted for upcoming hardware technologies. \subsection{Conclusion and perspectives} \vspace{-0.0cm} In the final part (Part III) of the manuscript, we presented a Convolutional Neural Network methodology to reconstruct 3D extinction maps of the Milky Way, based on the Besançon Galaxy Model, applied to 2MASS and Gaia data, and suitable for large-scale predictions. This study led to the following conclusions. A comparison between modeled extinction-free and observed extincted CMDs (or 2D histograms) contains sufficient information to reconstruct extinction profiles with a large prediction distance range, $d \gtrsim 10$ kpc. Usual methods compare star lists directly to avoid the intricate mixing of information in a CMD formalism, for which it is necessary to construct a highly non-linear analysis method. \\ \vspace{-0.2cm} A Convolutional Neural Network is a suitable method for this task. It is able to efficiently extract the information contained in the input CMD. The most efficient CNN architecture that was found is based on very few convolutional layers with a minimal dimensionality reduction of the input, and then requires large dense layers to reconstruct extinction profiles accurately. This architecture is computationally efficient, but the training process requires large datasets of examples ($5\times 10^5$ to $2 \times 10^6$). The formalism can efficiently be generalized over a multiple-LOS combination in a single training by adding reference LOS as input depth channels.\\ \vspace{-0.2cm} The realism of the chosen input is of critical importance. Small differences between the training modeled examples and the observations lead to significant prediction artifacts in some cases. The magnitude limit cut and the uncertainty fitting of all the quantities used from both surveys have been identified as the most important steps. The target profile realism is of similar importance. The profiles must be, at the same time, diverse enough to properly constrain the feature space, and restricted enough to avoid too many unrealistic examples that slow the training process down and add noise to the predictions.
The target profiles must also be modified so that they do not correspond to results that are unpredictable from the information contained in the input volume.\\ \vspace{-0.2cm} The CNN construction we propose is able to reconstruct large portions of the Milky Way Galactic Plane by learning from a relatively sparse sampling in galactic longitude. The network predicts spatially coherent structures even without forcing any correlation between adjacent lines of sight. The distance dispersion of the prediction is smaller than in maps working from the same datasets, and the prediction contains far fewer finger-of-god artifacts.\\ \vspace{-0.2cm} We have shown that the CNN architecture can efficiently combine several diagrams as input, allowing information from multiple surveys like Gaia and 2MASS to be combined without a cross match of the stars. Presently, the results from large-scale applications of such a combination are limited by the realism of our modeled diagrams, especially in terms of the magnitude cut limit and the uncertainty fitting. The star count limit that the network considers relevant for making a prediction also appears to be insufficiently described for the combined-survey prediction to preserve the distant 2MASS-identified structures.\\ \vspace{-0.2cm} Our ongoing work focuses on the improvement of the training dataset construction and on the assessment of the hyperparameters related to the reference LOS used during the training. The objective is to correct the Gaia-2MASS prediction to better balance the contribution from each survey, and then to predict a full Galactic Plane map. Several important improvements are currently under study, such as adding other surveys, adding spatial coherence between lines of sight, changing the profile construction toward more realistic ones, and network architecture improvements to speed up the training and increase the generalization capacity. The method description and first results on 2MASS will soon be published in the form of a letter to the journal Astronomy and Astrophysics \citep{cornu_montillaud_20b}. \clearpage \section{General conclusion} Each of the previous parts has already been discussed in depth individually. In the present section we partly describe the timeline of the presented work. We remind the reader of the global questions, of our approach to solving them, and briefly of the main results and limitations. We also provide more general insights on our approach and briefly discuss the possible role of ML methods in astronomy in the coming years.\\ \vspace{-0.1cm} In this study we presented the work we have done toward the reconstruction of the large-scale 3D structure of the Milky Way. The objective of this work was to design a new methodology, based on Machine Learning, that could efficiently use infrared surveys in combination with Gaia to infer this Galactic structure. From the beginning of this work, the underlying goal was to assess whether these methods would be able to construct new 3D extinction maps. The main motivation for this was the present opposition between: (a) small distance range but high resolution maps that use astrometry from optical surveys in combination with infrared ones, and (b) high distance range but lower resolution maps that use infrared surveys only. The rationale was based on the reputation of ML methods for being able to combine heterogeneous datasets automatically.
This property would permit the construction of extinction maps for which the most suitable survey, or a complex combination of several of them, is used as a function of the distance in an automated way. Additionally, the very large datasets usually involved in such tasks would ensure a proper training of the ML methods, and their efficiency in handling a very high number of dimensions would open the possibility of combining several large surveys at once.\\ \vspace{-0.1cm} It quickly became apparent that, independently of the intrinsic strengths of ML methods, they require the given task to be constructed in a very specific form. Replacing some steps of existing methods with ML fitting would have been possible and relatively easy, but not satisfying, since it would not make use of the full generalization potential of an ML approach. For this reason we decided to build strong theoretical and practical knowledge of ML methods by designing our own framework from scratch. The choice of Artificial Neural Networks, or Deep Learning, was motivated by their almost unmatched versatility and their high computational performance. In order to build experience with their usage we approached a simpler problem, YSO classification, which would still be able to provide 3D spatial constraints on medium-scale structures.\\ \vspace{-0.1cm} As demonstrated in the present study, Young Stellar Objects trace dense clouds and can be used with Gaia to reconstruct their distance, elongation, and even global morphology in 3D. One difficulty of this approach is the lack of detected YSOs to enable robust predictions, and more importantly the fact that some regions that could contain YSOs have not been investigated. We therefore decided to construct an identification method focused on the deep infrared survey Spitzer. It provides additional constraints that help to distinguish the youngest Class I from the older Class II YSOs, which are missing in many surveys, and its sensitivity also allows it to detect more YSOs overall, even in dense environments. The two limitations are that Spitzer does not cover the full sky but mostly the Galactic Plane, and that even though it detects more deeply embedded YSOs, these are unlikely to have a Gaia counterpart. Putting aside this limitation, we managed to design an ANN that is able to accurately reproduce the classification scheme of \citet{gutermuth_spitzer_2009} and that also provides an additional membership probability for each object. It allowed us to identify a few weaker points in the usual classification scheme, but more importantly we demonstrated that this probability can be used to select the most reliably identified YSOs, providing additional constraints for the 3D reconstruction of observed clouds. This approach should be able to provide a large Spitzer census of YSO candidates in the near future that could be used to identify new regions of interest or even to constrain the morphology of more clouds. We identified several possible improvements of the method, mainly relying on further improving the training dataset, especially by reducing the reliance on the \citet{gutermuth_spitzer_2009} classification in favor of strong observational targets or simulations. This approach is also suitable for application to other all-sky surveys, or even for use with a combination of surveys without a cross-match.\\ \vspace{-0.1cm} This first application allowed us to build a much deeper understanding of ANN structure, behavior, limits, etc.
This allowed us to identify more clearly a specific problem construction that would be able to address the original objective of building a new 3D extinction map. After significant improvements of our framework and a careful problem description, we managed to use a Convolutional Neural Network to perform direct extinction profile reconstruction. The approach is based on the comparison of the Besançon Galaxy Model with observations from the 2MASS infrared survey. From a set of training mock profile examples, the CNN has proved capable of accurately predicting the position and size of typical extinction structures for individual lines of sight. We successfully generalized our approach in a manner that allows the network to be trained simultaneously on multiple lines of sight from different galactic longitudes. Thus, our network can be trained once and is then capable of generalizing its prediction to large Galactic Plane portions. From this we constructed large extinction maps that exhibit spatial coherence between adjacent lines of sight and that strongly reduce common elongation artifacts in distance. Our map based on 2MASS exhibits compact substructures up to 10 kpc and is in good agreement with many other maps; it also correlates with other tracers of high-density structures like HII regions, which are expected to trace the spiral arms. We also demonstrated that it is possible to combine Gaia and 2MASS data in a manner that does not require a cross-match. We observed that the network efficiently combines the information without a significant increase in the number of network parameters and with a negligible increase in computation time. Our results are promising, particularly in some places of the map where it is clear that the information from the two surveys is efficiently combined. However, most of our results from this 2MASS-Gaia combination remain dominated by artifacts that are likely to come from insufficiently realistic training examples that still have to be refined. Since these artifacts are similar to those we encountered in our first attempts at 2MASS-only predictions, we are confident that they will be solved and do not represent a fundamental limitation of either the data or the method.\\ \vspace{-0.1cm} To conclude, Machine Learning methods are becoming an essential tool in astronomy, especially in view of the future challenges posed by very large and highly dimensional surveys. In this work we successfully constructed ML approaches around the limitations identified in two different astronomical problems. These methods must be used properly in order to provide genuine improvements to various present works in astronomy. The easy accessibility of ML frameworks and the frequently publicized breakthroughs they permit often convey the idea that these methods are so efficient that they can be used even on poorly described problems. We showed in this work that this is the opposite of the proper approach, which consists in a fine identification of the parameter space in order to constrain the dataset reliability, the training and observed proportions, etc. These methods will solve many currently impossible problems in astronomy, but they will first need to be progressively tamed by the community. \section*{} \addcontentsline{toc}{section}{Abstract} \begin{center} \vspace{-1.5cm} \textbf{\Large Abstract}\\ \end{center} Large-scale structure in the Milky Way (MW) is, observationally, not well constrained.
Studying the morphology of other galaxies is straightforward, but the observation of our home galaxy is made difficult by our internal viewpoint. Stellar confusion and screening by interstellar matter are strong observational limitations for assessing the underlying 3D structure of the MW. At the same time, very large-scale astronomical surveys are becoming available and are expected to allow new studies to overcome the previous limitations. The Gaia survey, which contains around 1.6 billion star distances, is the new flagship of MW structure and stellar population analyses, and can be combined with other large-scale infrared (IR) surveys to provide unprecedented long-distance measurements inside the Galactic Plane. Concurrently, the past two decades have seen an explosion of the use of Machine Learning (ML) methods, which are also increasingly employed in astronomy. With these methods it is possible to automate complex problem solving and to efficiently extract statistical information from very large datasets.\\ \vspace{-0.2cm} In the present work we first describe our construction of an ML classifier to improve a widely adopted classification scheme for Young Stellar Object (YSO) candidates. Born in dense interstellar environments, these young stars have not yet had time to move significantly away from their formation sites and can therefore be used as a probe of the densest structures in the interstellar medium. The combination of YSO identification and Gaia distance measurements enables the reconstruction of dense cloud structures in 3D. Our ML classifier is based on Artificial Neural Networks (ANN) and uses IR data from the Spitzer Space Telescope to reconstruct the YSO classification automatically from given examples. We extensively explore dataset constructions and the effect of imbalanced classes in order to optimize our ANN prediction and to provide reliable estimates of its accuracy for each class. Our method is suitable for large-scale YSO candidate identification and provides a membership probability for each object. This probability can be used to select the most reliable objects for subsequent applications like cloud structure reconstruction.\\ \vspace{-0.2cm} In the second part, we present a new method for reconstructing the 3D extinction distribution of the MW, based on Convolutional Neural Networks (CNN). With this approach it is possible to efficiently predict individual line-of-sight extinction profiles using IR data from the 2MASS survey. The CNN is trained using a large-scale Galactic model, the Besançon Galaxy Model, and learns to infer the extinction distance distribution by comparing results of the model with observed data. This method has been employed to reconstruct a large Galactic Plane portion toward the Carina arm and has demonstrated predictions competitive with other state-of-the-art 3D extinction maps. Our results notably predict spatially coherent structures and significantly reduce artifacts that are frequent in maps using similar datasets. We show that this method is able to resolve distant structures up to 10 kpc with a formal resolution of 100 pc. Our CNN was found to be capable of combining the 2MASS and Gaia datasets without the necessity of a cross match. This allows the network to use the relevant information from each dataset depending on the distance in an automated fashion. The results from this combined prediction are encouraging and open the possibility of a future full Galactic Plane prediction using a larger combination of various datasets.
\newpage \thispagestyle{empty} \section*{} \addcontentsline{toc}{section}{Summary in French} \begin{center} \vspace{-1.5cm} \textbf{\Large Summary in French}\\ \end{center} \textbf{\large Title in French: \textit{Modélisation de la Voie Lactée en 3D par \textsc{machine learning} avec les données infrarouges et Gaia}}\\ The large-scale structure of the Milky Way (MW) is still not perfectly constrained. In contrast to other galaxies, it is difficult to observe its structure directly because we are located inside it. Stellar confusion and the occultation of light by the interstellar medium (ISM) are the main sources of difficulty preventing the reconstruction of the underlying structure of the MW. Meanwhile, more and more large-scale astronomical surveys are becoming available and make it possible to overcome these difficulties. The Gaia survey, with its 1.6 billion stellar distance measurements, is the new tool of choice for studying the structure of the MW and analyzing its stellar populations. These new data can be combined with other large infrared (IR) surveys in order to perform measurements at previously unequaled distances. In addition, the number of applications relying on machine learning (ML) methods has soared over the past twenty years, and such methods are increasingly employed in astronomy. They are able to automate the solving of complex problems and to efficiently extract statistical information from large datasets.\\ \vspace{-0.2cm} In this study, we begin by describing the construction of an ML classification tool used to improve the classical classification methods for Young Stellar Objects (YSOs). Since stars are born in dense interstellar environments, the youngest of them, which have not yet had time to move away from their formation sites, can be used to identify the dense structures of the ISM. Combining YSOs with the distances measured by Gaia then allows the 3D structure of dense clouds to be reconstructed. Our ML classification method is based on artificial neural networks and uses data from the Spitzer Space Telescope to reconstruct the YSO classification automatically from a list of examples. We detail the construction of the associated datasets as well as the effect of class imbalance, which allows us to optimize the network predictions and to estimate the corresponding accuracy. This method is able to identify YSOs in very large surveys while providing a membership probability for each tested object. This probability can then be used to retain the most reliable objects for reconstructing the cloud structures.\\ \vspace{-0.2cm} In a second part, we present a method for reconstructing the 3D distribution of extinction in the MW, based on convolutional neural networks. This approach makes it possible to predict extinction profiles from IR data of the 2MASS survey. The network is trained using the Besançon Galaxy Model in order to reproduce the large-scale distance distribution of extinction, relying on the comparison between the model and the observed data.
We thus reconstructed a large portion of the Galactic Plane in the region of the Carina arm, and showed that our prediction is competitive with other reference 3D extinction maps. In particular, our results predict spatially coherent structures and manage to reduce the frequent so-called ``fingers of god'' artifacts. The method is able to resolve structures out to distances of 10 kpc with a formal resolution of 100 pc. Our network is also capable of combining the 2MASS and Gaia data without resorting to a cross identification. This makes it possible to automatically use the most relevant dataset as a function of distance. The results of this combined prediction are encouraging and open the way to new reconstructions of the Galactic Plane combining more datasets. \newpage \null \thispagestyle{empty} \newpage \newgeometry{left=2.7cm, right=2.7cm, top=2.9cm,bottom=2.9cm} \renewcommand{\contentsname}{\vspace{-2cm}} \tableofcontents \newpage \restoregeometry \pagestyle{fancy} \fancyhead[LO]{} \fancyhead[RE]{} \fancyhead[LE]{\bf \rightmark} \fancyhead[RO]{\bf \nouppercase{\leftmark}} \pagenumbering{arabic} \setcounter{page}{0} \input{part1_context} \input{part2_ysos} \input{part3_galmap} \newpage \thispagestyle{empty} \part*{\null} \addcontentsline{toc}{part}{Appendix} \begin{appendix} \input{cianna_appendix} \end{appendix} \clearpage \listoffigures \addcontentsline{toc}{part}{\listfigurename} \clearpage \listoftables \addcontentsline{toc}{part}{\listtablename} \clearpage \addcontentsline{toc}{part}{References} \newgeometry{left=2.0cm,right=2.0cm,top=2.8cm,bottom=2.8cm} \setlength{\bibsep}{3pt} \bibliographystyle{aa}
\section{Introduction} \label{sec:introduction} It is an important task to compute (Landau or Coulomb) gauge gluon, ghost, fermion propagators and the basic vertex functions from non-perturbative approaches to $SU(N)$ gauge theories, like Dyson-Schwinger equations or the lattice formulation. On the one hand one is interested in their behavior in the infrared limit in order to extract non-perturbative information on various observables, e.g. the QCD running couplings $\alpha_s(q^2)$, to understand quark and gluon confinement within the Gribov-Zwanziger scenario \cite{Gribov:1977wm,Zwanziger:1993dh,Zwanziger:2003cf}, or to check the Kugo-Ojima confinement criterion for the absence of colored states \cite{Kugo:1979gm}. On the other hand it is technically important to see to what extent these different non-perturbative approaches provide results consistent with each other in the non-perturbative region, i.e. at low momenta. At present we are still far from drawing final conclusions in this respect. In particular the Dyson-Schwinger approach \cite{Alkofer:2000wg,Fischer:2003rp,Lerche:2002ep}, always relying on a truncated set of equations, provides results which look quite different in the infinite volume limit compared with those obtained on a torus \cite{Fischer:2002eq,Fischer:2002hn,Fischer:2005ui}, while the latter show at least qualitative agreement with recent results of numerical lattice simulations \cite{Sternbeck:2005tk}. It is well known that gauge fixing in the non-perturbative range is faced with the Gribov ambiguity problem, which means that there can be many gauge copies for a given gauge field satisfying the Landau gauge condition $~\partial_{\mu} A_{\mu} =0~$ within the Gribov region, the latter defined by the positivity of the Landau gauge Faddeev-Popov operator. In recent years it has been checked in greater detail how strongly Gribov copies can influence the infrared behavior especially of the gluon and ghost propagators. Several groups of authors came to the conclusion that while there is a clearly visible influence on the ghost propagator, the gluon propagator seems only weakly affected \cite{Cucchieri:1997dx,Bakeev:2003rr,Nakajima:2003my,Silva:2004bv,Sternbeck:2005tk}. Recently Zwanziger has argued that in the infinite volume limit the influence of Gribov copies ``... might be negligible, i.e. all averages taken over the Gribov region should become equal to averages over the fundamental modular region''~\cite{Zwanziger:2003cf}. However, in practical lattice simulations we are always restricted to finite volumes. Thus, Gribov copies have to be taken into account properly before extrapolating to the infrared and infinite volume. In this paper we present a reinvestigation of the Gribov copy problem for the $SU(2)$ case. The usual way to fix the (Landau) gauge on the lattice is to simulate the path integral in its gauge invariant form. Subsequently each of the produced lattice gauge fields $~U \equiv \{U_{x,\mu}\}~$ is subjected to an iterative procedure maximizing the gauge functional \begin{eqnarray} \nonumber F(g) &=& \frac{1}{d V} \sum_{x,\mu} \frac{1}{2} \mbox{Tr} ~U^g_{x,\mu}\,, \\ U^g_{x,\mu} &=& g(x) ~U_{x,\mu} ~g^{\dagger}(x+{\hat\mu}) \label{eq:functional} \end{eqnarray} with respect to local gauge transformations $~g \equiv \{g(x) \in SU(2)\}$. $~V = L^{d}~$ denotes the number of lattice sites in $~d=4~$ dimensions.
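For illustration, the functional \Eq{eq:functional} is straightforward to evaluate numerically. The following is a minimal sketch in Python/NumPy, assuming the links and the gauge transformation are stored as arrays of $2\times 2$ complex $SU(2)$ matrices; the array layout and function name are our own illustrative choices, not part of any standard code.
\begin{verbatim}
import numpy as np

def gauge_functional(U, g):
    """F(g) for SU(2) links on an L^4 lattice.
    U : complex array (4, L, L, L, L, 2, 2), links U_{x,mu}
    g : complex array (L, L, L, L, 2, 2), gauge transformation g(x)
    """
    d = U.shape[0]
    V = np.prod(U.shape[1:5])
    F = 0.0
    for mu in range(d):
        # g(x + mu-hat) via a periodic shift along direction mu
        g_shift = np.roll(g, -1, axis=mu)
        # U^g_{x,mu} = g(x) U_{x,mu} g^dagger(x + mu-hat)
        Ug = np.einsum('...ab,...bc,...dc->...ad',
                       g, U[mu], g_shift.conj())
        # (1/2) Tr of each transformed link (real for SU(2))
        F += 0.5 * np.einsum('...aa->...', Ug).real.sum()
    return F / (d * V)
\end{verbatim}
With periodic $g$ this is the quantity maximized by SOR; the $\mathbb{Z}(2)$ flips introduced below simply change the sign of all $U_{\nu}$ links on a fixed 3-plane before such an evaluation.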
The local maxima of $~F(g)~$ satisfy the differential lattice Landau gauge transversality condition \begin{equation} (\partial_{\mu} A^g_{\mu})(x) = A^g_{\mu}(x+{\hat \mu}/2)-A^g_{\mu}(x-{\hat\mu}/2) = 0\,, \label{eq:transversality} \end{equation} where the lattice gauge potentials are \begin{equation} A_{\mu}(x+ {\hat \mu}/2) = \frac{1}{2i}(U_{x,\mu}-U_{x,\mu}^{\dagger})\,. \label{eq:potential} \end{equation} The standard procedure assumes {\it periodic gauge transformations} and employs the {\it overrelaxation algorithm}. In what follows we shall abbreviate it by {\it SOR}. The influence of Gribov copies can easily be studied by taking various initial random gauge copies of the gauge field configurations before subjecting them to the SOR algorithm. At this point it is worth noting that the - at least until now - widely accepted approach to compute e.g. a gauge-variant propagator $G$ is to always choose the gauge copy with the highest value of the local maxima $F_{max}$ (the {\it best copy}) found for the gauge functional (\ref{eq:functional}). One can then hope to have found a copy belonging to the so-called {\it fundamental modular region}, or at least one not far from it. In order to find the {\it best copy} for each thermalized gauge field configuration one needs to compare the $F_{max}$-values for a fairly large number of gauge copies, which is a rather time-consuming procedure. A reasonable question is whether the use of only one gauge copy (the {\it first copy}) provides us with the same values of the propagator - within error bars - as the use of the best copy. This leads us to compare the propagator calculated on best copies ($G^{(bc)}$) with that on the first copies ($G^{(fc)}$). The relative deviation $~\delta G \equiv |(G^{(fc)}-G^{(bc)})/G^{(bc)}|~$ then provides a useful quantitative measure of the Gribov ambiguity of the quantity under consideration. We shall use this measure throughout the present paper. Of course, one can enhance the effect of Gribov copies by instead comparing the best copies with the {\it worst copies}, i.e. with those having the smallest values $F_{max}$ found from the repeated use of a given maximization method. This approach was taken in Ref. \cite{Silva:2004bv} in order to highlight a Gribov copy effect for the gluon propagator. In Refs. \cite{Bakeev:2003rr} for $SU(2)$ and \cite{Sternbeck:2005tk} for $SU(3)$ some of us have already thoroughly discussed the impact of Gribov copies within the SOR framework by comparing first and best copies. From this point of view the gluon propagator did not depend on the copies within the statistical noise, whereas the ghost propagator clearly depended on them in the infrared. However, the data for the ghost propagator obtained for different lattice sizes showed an indication of a weakening of the dependence on the choice of Gribov copies for increasing lattice size at fixed momentum, in agreement with Zwanziger's claim~\cite{Zwanziger:2003cf}. Here we enlarge the class of possible gauge transformations by also taking into account {\it non-periodic} center gauge transformations. This will allow us to further maximize the gauge functional and to observe a quite strong Gribov copy effect also for the gluon propagator at finite (lattice) volumes. In \Sec{sec:gaugefixing} we explain the improved gauge fixing procedure. In \Sec{sec:propagators} we define the propagators to be calculated.
In \Sec{sec:results} we present our results for the gluon and ghost propagators, whereas in \Sec{sec:conclusions} the conclusions are drawn. \section{Improved gauge fixing} \label{sec:gaugefixing} Throughout we deal with $SU(2)$ pure gauge lattice fields in four Euclidean dimensions produced by means of Monte Carlo simulations with the standard Wilson plaquette action. We restrict ourselves to the confinement phase at $T=0$. To fix the gauge we employ the standard Los Alamos type overrelaxation with $\omega = 1.7$. Our generalization of the standard gauge fixing procedure SOR comes from the simple observation that gauge covariance for periodic $SU(2)$ gauge fields on a $d$-dimensional torus of extension $L^d$ allows gauge transformations which are not necessarily periodic but can differ by a group center element at the boundary: \begin{equation} g(x+L\hat{\nu}) = z_{\nu} g(x)\,, z_{\nu}=\pm 1 \in \mathbb{Z}(2)\,. \end{equation} In light of this it is legitimate to allow, during the maximization of the gauge functional in the gauge fixing procedure, for gauge transformations which differ by a sign when winding around a boundary. Let $\nu$ be the direction of such a boundary. Any such gauge transformation can be decomposed into a standard periodic gauge transformation (which we may call a ``small'' one) and a flip of all links $~U_{\nu}(x) \to - ~U_{\nu}(x)~$ of a 3-plane at a given fixed $~x_{\nu}$. Given a ``small'' random gauge copy of the configuration, we have thus performed a pre-conditioning step for the gauge functional by sweeping through all 3-planes in every direction in succession and comparing the value of the flipped with the unflipped gauge functional. The flip is accepted if the gauge functional increases. It is easy to see that such a procedure is independent of the order in which the 3-planes are chosen and that only one sweep through the lattice is required to maximize the functional. The gauge copy obtained at the end of this procedure is then used as a starting point for the standard maximization procedure. We call the whole procedure FOR. Analogously to the SOR method, the FOR procedure can be repeated with different initial random gauges in order to find a best copy (\bc{}) in comparison with, e.g., the first random copy (\fc{}). We shall check the convergence of the \bc{} propagator results for the best copies as a function of the number $~n_{copy}~$ of random initial copies. \section{Gluon and ghost propagators} \label{sec:propagators} We turn now to the computation of the gauge-variant gluon and ghost propagators within the Landau gauge. The lattice gluon propagator $~D^{ab}_{\mu\nu}(p)~$ is taken as the Fourier transform of the gluon two-point function, {\it i.e.} the expectation value \begin{eqnarray} D^{ab}_{\mu\nu}(p) &=& \left\langle \widetilde{A}^a_{\mu}(\hat{k}) \widetilde{A}^b_{\nu}(-\hat{k}) \right\rangle_U \\ &=& \delta^{ab} \left(\delta_{\mu\nu} - \frac{p_{\mu}~p_{\nu}}{p^2} \right) D(p) \,. \nonumber \label{eq:D-def} \end{eqnarray} $\widetilde{A}^a_{\mu}(\hat{k})~$ is the Fourier transform of the lattice gauge potential $~A^a_\mu(x+\hat{\mu}/2)$. $~p~$ denotes the four-momentum \begin{equation} p_{\mu}(\hat{k}_{\mu}) = \frac{2}{a} \sin\left(\frac{\pi \hat{k}_{\mu}}{L}\right) \label{eq:p-def} \end{equation} with the integer-valued lattice momentum $~\hat{k}_{\mu} \in (-L/2, +L/2]$. $a$ is the lattice spacing.
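To make the definitions concrete, a minimal single-configuration estimator of the scalar function $D(p)$ may be sketched as follows in Python/NumPy, for an on-axis momentum $\hat{k}=(0,0,0,k)$ and the same array layout as above; the overall normalization (color and Lorentz factors) is convention dependent, and this sketch is illustrative rather than a transcription of our production code.
\begin{verbatim}
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])      # Pauli matrices

def gluon_propagator(U, k):
    """D(p) estimate for k_hat = (0,0,0,k) on one gauge-fixed
    configuration; to be averaged over the ensemble."""
    d, L = U.shape[0], U.shape[1]
    V = L**d
    # lattice potential A_mu = (U - U^dagger) / (2i)
    A = (U - np.conj(np.swapaxes(U, -1, -2))) / 2j
    # color components A^a = (1/2) Tr[sigma^a A]  (real)
    Aa = 0.5 * np.einsum('aij,m...ji->am...', sigma, A).real
    # Fourier transform over the four lattice axes
    At = np.fft.fftn(Aa, axes=(2, 3, 4, 5))
    D = 0.0
    for a in range(3):
        for mu in range(3):   # transverse directions for k along axis 3
            amp = At[a, mu, 0, 0, 0, k]
            D += (amp * np.conj(amp)).real
    # normalize by volume, 3 colors and (d-1) transverse polarizations
    return D / (V * 3 * (d - 1))
\end{verbatim}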
The lattice ghost propagator is defined by inverting the Faddeev-Popov (F-P) operator, the latter being the Hessian of the gauge functional \Eq{eq:functional}. The F-P operator can be written in terms of the (gauge-fixed) link variables $~U_{x,\mu}~$ as \begin{equation} M^{ab}_{xy} = \sum_{\mu} A^{ab}_{x,\mu}\,\delta_{x,y} - B^{ab}_{x,\mu}\,\delta_{x+\hat{\mu},y} - C^{ab}_{x,\mu}\,\delta_{x-\hat{\mu},y}\quad \label{eq:FP-def} \end{equation} with \begin{eqnarray*} A^{ab}_{x,\mu} &=& \frac{1}{2}~\delta_{ab}~\mbox{Tr}\left[ U_{x,\mu}+U_{x-\hat{\mu},\mu} \right]\,,\\ B^{ab}_{x,\mu} &=& \frac{1}{2}~ \mbox{Tr}\left[ \sigma^b \sigma^a\, U_{x,\mu}\right]\,, \\ C^{ab}_{x,\mu} &=& \frac{1}{2}~ \mbox{Tr}\left[ \sigma^a \sigma^b\, U_{x-\hat{\mu},\mu} \right]\,, \end{eqnarray*} where the $~\sigma^a\,, a=1,2,3~$ are the Pauli matrices. In the continuum $~M^{ab}_{xy}~$ corresponds to the operator $~M^{ab} = - \partial_{\mu} D^{ab}_{\mu}$, with $~D^{ab}~$ the covariant derivative in the adjoint representation. The ghost propagator in momentum space is calculated from the ensemble average \begin{eqnarray} G^{ab}(p) &=& \frac{1}{V} \sum_{x,y} \left\langle {\rm e}^{-2\pi i\,\hat{k} \cdot (x-y)} [M^{-1}]^{a\,b}_{x\,y}\right\rangle_U \\ &=& \delta^{ab}~G(p)\,. \label{eq:G-def} \end{eqnarray} Following Refs.~\cite{Suman:1995zg, Cucchieri:1997dx} we have used the conjugate gradient (CG) algorithm to invert $~M~$ on a plane wave $~\vec{\psi}_c = \{~\delta_{ac} \exp (2\pi i\,\hat{k}\!\cdot\! x)~\}$. After solving $~M \vec{\phi}=\vec{\psi}_c~$ the resulting vector $~\vec{\phi}~$ is projected back onto $~\vec{\psi}~$ so that the average $~G^{cc}(p)~$ over the color index $~c~$ can be taken explicitly. Since the F-P operator $~M~$ vanishes when acting on constant modes, only $~\hat{k} \ne (0,0,0,0)~$ is permitted. Due to the high computational cost of inverting the F-P operator for each $~\hat{k}$ separately, the estimators on a single gauge-fixed configuration are evaluated only for a preselected set of momenta $~\hat{k}$. \section{Results} \label{sec:results} We consider various bare couplings in the interval $~\beta=4/g_0^2~\in [2.1,2.5]~$ and lattice sizes up to $~20^4~$. We compare the gluon and ghost propagators obtained with the alternative gauge fixing methods SOR (`flips off') and FOR (`flips on'), both for the first (\fc{}) and the best copy (\bc{}). In order to find the best copies we always generate $~20~$ initial random gauge copies. \begin{figure*} \mbox{ \includegraphics[width=0.5\textwidth,height=0.48\textwidth]{fig1a.eps} \quad \includegraphics[width=0.5\textwidth,height=0.48\textwidth]{fig1b.eps} } \caption{ Gluon propagator (left) and ghost propagator (right) at lowest momentum $~p_{min}=(2/a) \sin (\pi / L)~$ versus number of random copies employing the FOR method ('flips on') at $~\beta=2.5~$ and $~2.4$, respectively (lattice size $~16^4~$). } \label{fig:Gl_Gh_copies} \end{figure*} In \Fig{fig:Gl_Gh_copies} we illustrate for the FOR method how fast the gluon and ghost propagators converge when determined from the best copy out of the first $~n_{copy}~$ copies. We see plateaus occurring for $~n_{copy} \ge O(10)$. We have convinced ourselves that $~O(20)~$ copies are sufficient, at least for $\beta \ge 2.3$ and lattice sizes up to $20^4$. For the SOR method the convergence is faster - although to worse values of the gauge functional - such that in principle a smaller number of copies would be sufficient within the given parameter range.
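To make the ghost estimator of \Sec{sec:propagators} explicit, the following Python sketch implements the matrix-vector product of \Eq{eq:FP-def} and solves $~M \vec{\phi}=\vec{\psi}_c~$ with SciPy's conjugate gradient; \texttt{sigma} is the Pauli-matrix array of the previous sketch, while the layout, tolerances, and normalization are illustrative assumptions only.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ghost_propagator(U, khat, sigma):
    """G(p) estimate on one gauge-fixed configuration; khat must be
    non-zero since M vanishes on constant modes."""
    d, L = U.shape[0], U.shape[1]
    V = L**d
    x = np.indices((L,) * d)            # x[mu] = mu-coordinate of site

    def matvec(phi_flat):
        phi = phi_flat.reshape(3, *(L,) * d)
        out = np.zeros_like(phi)
        for mu in range(d):
            Um = U[mu]                        # U_{x,mu}
            Ub = np.roll(U[mu], 1, axis=mu)   # U_{x-mu,mu}
            A = 0.5 * np.einsum('...ii->...', Um + Ub).real
            B = 0.5 * np.einsum('bij,ajk,...ki->ab...',
                                sigma, sigma, Um).real
            C = 0.5 * np.einsum('aij,bjk,...ki->ab...',
                                sigma, sigma, Ub).real
            out += A[None] * phi
            out -= np.einsum('ab...,b...->a...', B,
                             np.roll(phi, -1, axis=1 + mu))
            out -= np.einsum('ab...,b...->a...', C,
                             np.roll(phi, 1, axis=1 + mu))
        return out.ravel()

    M = LinearOperator((3 * V, 3 * V), matvec=matvec, dtype=complex)
    wave = np.exp(2j * np.pi * np.tensordot(khat, x, axes=1) / L)
    G = 0.0
    for c in range(3):      # plane-wave source psi^a = delta_{ac} wave
        psi = np.zeros((3,) + (L,) * d, dtype=complex)
        psi[c] = wave
        # M is real symmetric; within the Gribov region it is positive
        # on the k != 0 subspace, so CG is applicable
        phi, info = cg(M, psi.ravel())
        G += np.vdot(psi.ravel(), phi).real / V
    return G / 3.0
\end{verbatim}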
Mostly we have concentrated on the lowest non-trivial on-axis lattice momentum $~p_{min}~=~(2/a) \sin (\pi / L)~$ and some multiples of it in order to study the infrared limit for a given lattice size and bare coupling. We are aware of the fact that this choice is far too restrictive to obtain reliable results for the (renormalized) propagators in the continuum and thermodynamic limits. \begin{figure*} \mbox{ \includegraphics[width=0.48\textwidth,height=0.46\textwidth]{fig2a.eps} \quad \includegraphics[width=0.48\textwidth,height=0.47\textwidth]{fig2b.eps} } \caption{ Gluon propagator $~D(p_{min})~$ at lowest momentum for various $~\beta~$ and for lattice sizes $~12^4~$ (left) and $~16^4~$ (right). Full dots refer to FOR \bc{} and open squares (circles) correspond to SOR \fc{} (\bc{}). } \label{fig:gluon_prop} \end{figure*} \begin{figure*} \mbox{ \includegraphics[width=0.48\textwidth,height=0.48\textwidth]{fig3a.eps} \quad \includegraphics[width=0.48\textwidth,height=0.48\textwidth]{fig3b.eps} } \caption{ Ghost propagator $~G(p_{min})~$ as for \Fig{fig:gluon_prop}. } \label{fig:ghost_prop} \end{figure*} In \Fig{fig:gluon_prop} and \Fig{fig:ghost_prop} we show our results for the lattice gluon propagator $D(p_{min})$ and ghost propagator $G(p_{min})$ on $12^4$ and $16^4$ lattices, always for the smallest non-vanishing momentum. In order to demonstrate the effect of the $\mathbb{Z}(2)$ flips in comparison with the SOR results obtained with \bc{} and \fc{} copies \cite{Bakeev:2003rr} we show three sets of data points: full dots correspond to FOR - `flips on' - with \bc{} copies, and open circles (squares) correspond to SOR - `flips off' - with \bc{} (\fc{}) copies. The corresponding data are listed in \Tab{tab:Gl_Gh_prop}. \begin{table*} \begin{center} \mbox{ \begin{tabular}{|c|c|c||c|c|c|} \hline\hline \multicolumn{6}{|c|}{$12^4$} \\ \hline\hline $\beta$& $\#$ & $D_{\rm{FOR}}^{(bc)}$ & $\#$ & $D_{\rm{SOR}}^{(bc)}$ & $D_{\rm{SOR}}^{(fc)}$ \\ \hline\hline 2.10 & 1200 & $ 5.39 (6)$ & 900 & $ 5.79 (8)$ & $ 5.83 (8)$ \\ \hline 2.20 & 1200 & $ 7.94 (9)$ & 1200 & $ 8.74 (10)$ & $ 8.66 (10)$ \\ \hline 2.30 & 1200 & $12.16 (14)$ & 1200 & $12.69 (15)$ & $12.85 (15)$ \\ \hline 2.40 & 3600 & $15.10 (10)$ & 2080 & $17.06 (17)$ & $17.12 (17)$ \\ \hline 2.44 & 5100 & $15.13 (9)$ & & & \\ \hline 2.47 & 5700 & $14.64 (9)$ & & & \\ \hline 2.50 & 2650 & $14.16 (13)$ & 1760 & $17.34 (26)$ & $17.42 (26)$ \\ \hline\hline \multicolumn{6}{|c|}{$16^4$}\\ \hline\hline $\beta$& $\#$ & $D_{\rm{FOR}}^{(bc)}$ & $\#$ & $D_{\rm{SOR}}^{(bc)}$ & $D_{\rm{SOR}}^{(fc)}$ \\ \hline\hline 2.10 & 1042 & $ 5.59 (7)$ & 918 & $ 5.93 (8)$ & $ 5.95 (8)$ \\ \hline 2.20 & 900 & $ 9.01 (12)$ & 740 & $ 9.35 (14)$ & $ 9.58 (14)$ \\ \hline 2.30 & 1100 & $14.88 (18)$ & 510 & $16.16 (31)$ & $15.97 (29)$ \\ \hline 2.40 & 1032 & $22.65 (29)$ &1020 & $24.36 (32)$ & $25.03 (32)$ \\ \hline 2.45 & 1020 & $25.69 (32)$ &1030 & $28.19 (36)$ & $28.21 (38)$ \\ \hline 2.50 & 1040 & $26.86 (35)$ &1060 & $30.64 (44)$ & $30.37 (45)$ \\ \hline\hline \end{tabular} } \hspace*{0.5cm} \mbox{ \begin{tabular}{|c|c|c||c|c|c|} \hline\hline \multicolumn{6}{|c|}{$12^4$} \\ \hline\hline $\beta$& $\#$ & $G_{\rm{FOR}}^{(bc)}$ & $\#$ & $G_{\rm{SOR}}^{(bc)}$ & $G_{\rm{SOR}}^{(fc)}$ \\ \hline\hline 2.10 & 1200 & $11.58 (4)$ & 900 & $11.87 (4)$ & $12.48 (7)$ \\ \hline 2.20 & 1200 & $10.10 (8)$ &1200 & $10.39 (3)$ & $10.90 (5)$ \\ \hline 2.30 & 1200 & $ 8.37 (2)$ &1200 & $ 8.99 (6)$ & $ 9.27 (4)$ \\ \hline 2.40 & 3600 & $ 7.04 (1)$ &2080 & $ 7.80 (3)$ & $ 7.97 (4)$ \\ \hline
2.44 & 5100 & $ 6.65 (1)$ & & & \\ \hline 2.47 & 5700 & $ 6.36 (1)$ & & & \\ \hline 2.50 & 2650 & $ 6.11 (1)$ &1760 & $ 7.26 (5)$ & $ 7.40 (5)$ \\ \hline\hline \multicolumn{6}{|c|}{$16^4$}\\ \hline\hline $\beta$& $\#$ & $G_{\rm{FOR}}^{(bc)}$ & $\#$ & $G_{\rm{SOR}}^{(bc)}$ & $G_{\rm{SOR}}^{(fc)}$ \\ \hline\hline 2.10 &1042 & $22.89 (6)$ & 918 & $23.12 (13)$ & $24.15 (8)$ \\ \hline 2.20 & 900 & $19.83 (6)$ & 740 & $20.29 (6)$ & $21.34 (9)$ \\ \hline 2.30 &1100 & $16.83 (5)$ & 510 & $17.27 (8)$ & $18.05 (10)$ \\ \hline 2.40 &1032 & $14.00 (4)$ &1020 & $14.88 (6)$ & $15.60 (8)$ \\ \hline 2.45 &1020 & $12.92 (5)$ &1030 & $13.86 (6)$ & $14.41 (11)$ \\ \hline 2.50 &1040 & $12.02 (4)$ &1060 & $13.26 (6)$ & $13.45 (7)$ \\ \hline\hline \end{tabular} } \end{center} \caption{Data for the gluon propagator $D(p)$ (left) as well as for the ghost propagator $G(p)$ (right) at lowest momentum $p=p_{min}$ obtained with FOR (\bc{}) and SOR (\bc{} and \fc{}) methods on $12^4$ and $16^4$ lattices. } \label{tab:Gl_Gh_prop} \end{table*} We clearly see that the FOR method leads to an additional visible Gribov copy effect not only for the ghost propagator but also for the gluon propagator. The effect is even more pronounced at higher $\beta$-values, i.e. at smaller `physical' lattice sizes. We have convinced ourselves that this is compatible with the behavior of the average maximal gauge functional $\langle F_{max} \rangle$. Its relative difference determined with \bc{} copies for the FOR method versus the SOR method also rises with $\beta$. Later on we shall see that this observation is also in one-to-one correspondence with the gauge copy dependence at fixed $\beta$ and varying lattice size. The anatomy of the (new) FOR gauge copies deserves further study in the future. In order to illustrate the strong Gribov copy effect in a slightly different manner we compare smoothed distributions of the mean value estimators for the gluon and ghost propagators on the \bc{} with the FOR and SOR method, respectively (see \Fig{fig:Gl_Gh_dist}). The mean value distributions have been obtained in accordance with the bootstrap method \cite{efron} from replicas of sequences of randomly selected data. Such bootstrapped resampling was applied to the initial MC data set as a whole, the number of replicas typically being 200. To smooth the distribution we have used the standard Nadaraya--Watson method with a normal kernel \cite{bowman}, and an improved Silverman's rule of thumb for the choice of the corresponding bandwidth. It is worth mentioning that the statistical errors for most of our data have also been estimated through bootstrapped resampling. \begin{figure*} \mbox{ \includegraphics[width=0.50\textwidth,height=0.48\textwidth]% {fig4a.eps} \includegraphics[width=0.50\textwidth,height=0.48\textwidth]% {fig4b.eps} } \caption{SOR- and FOR-distributions for $D^{(bc)}(p_{min})$ (left) and $G^{(bc)}(p_{min})$ (right) at $\beta=2.5$ and $16^4$ lattice.} \label{fig:Gl_Gh_dist} \end{figure*} We have also studied how the Gribov copy effect develops for larger momenta $p(\hat{k})$. We have used multiples of the minimal lattice momentum $\hat{k}=(0,0,0,k),~k=1,2,3,4~$ along one axis.
For the gluon propagator we compare the \bc{} SOR results with the \bc{} FOR results in terms of the relative deviation \begin{equation} \delta D(p)=(D_{\rm{SOR}}^{(bc)}-D_{\rm{FOR}}^{(bc)})/D_{\rm{FOR}}^{(bc)}~, \end{equation} \noindent and analogously for the ghost propagator $~G(p)~$ at various $\beta$-values and fixed lattice size $~16^4~$ (see \Fig{fig:Gl_Gh_rel_k}). For the gluon propagator our results are restricted to only one $\beta$-value because of the much stronger statistical noise. Nevertheless, the results presented for the gluon propagator point in the same direction as those for the ghost propagator. The effect of Gribov copies remains noticeable at $p>p_{min}$, although it decreases with rising momenta. \begin{figure*} \mbox{ \includegraphics[width=0.50\textwidth,height=0.48\textwidth]{fig5a.eps} \includegraphics[width=0.50\textwidth,height=0.48\textwidth]{fig5b.eps} } \caption{ Left: relative deviation $~\delta D(p) = (D_{\rm{SOR}}^{(bc)}-D_{\rm{FOR}}^{(bc)})/D_{\rm{FOR}}^{(bc)}~$ in percent for the gluon propagator at various (on-axis) lattice momenta $~p(k)~$ (lattice size $16^4$, $\beta=2.5$). \\ Right: the analogous relative deviation for the ghost propagator for the same lattice size but for $\beta=2.2, 2.3, 2.4$ and $2.5$. } \label{fig:Gl_Gh_rel_k} \end{figure*} The data for the ghost propagator at various momenta obtained from independent Monte Carlo runs are also collected in \Tab{tab:Gh_p_16x16}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline\hline \multicolumn{7}{|c|}{FOR} \\ \hline\hline $\beta$& $\#$ & & $G(k=1)$ & $G(k=2)$ & $G(k=3)$ & $G(k=4)$ \\ \hline 2.20 &400 & fc & 21.2(1) & 3.96(1) & 1.510(2) & 0.8116(5) \\ \hline & & bc & 19.88(8)& 3.868(7)& 1.493(1) & 0.8076(4) \\ \hline 2.30 &400 & fc & 18.2(2) & 3.39(3)& 1.313(2)& 0.7276(6) \\ \hline & & bc & 16.88(8)& 3.267(6) & 1.299(1)& 0.7241(4) \\ \hline 2.40 &356 & fc & 15.4(1)& 2.87(1)&1.171(1) & 0.6693(3) \\ \hline & & bc & 13.8(1)&2.770(8) &1.156(2) & 0.6647(4) \\ \hline 2.50 &400 & fc & 13.7(1)& 2.578(5)& 1.0897(8)& 0.6357(2) \\ \hline & & bc & 12.2(1)& 2.508(5) & 1.079(1)& 0.6325(3) \\ \hline\hline \multicolumn{7}{|c|}{SOR} \\ \hline\hline $\beta$& $\#$ & & $G(k=1)$ & $G(k=2)$ & $G(k=3)$ & $G(k=4)$ \\ \hline 2.20 & 200& fc &21.2(2) &3.97(2) &1.511(3) &0.8117(7) \\ \hline & & bc &20.23(12)&3.885(10)&1.4971(25)&0.8084(7) \\ \hline 2.30 & 200& fc &18.2(1)& 3.35(1) &1.312(2) & 0.7272(5) \\ \hline & & bc &17.3(1)& 3.297(8) &1.304(1)& 0.7253(5) \\ \hline 2.40 & 370& fc & 15.6(1)& 2.87(1)& 1.171(1) & 0.6690(3) \\ \hline & & bc & 14.8(1)& 2.83(1)& 1.165(1) & 0.6673(3) \\ \hline 2.50 & 200& fc & 14.1(2)& 2.586(8) & 1.090(1)& 0.6359(4) \\ \hline & & bc & 13.4(1)& 2.564(6) & 1.088(1)& 0.6352(3) \\ \hline \hline \end{tabular} \end{center} \caption{Ghost propagators $G(p)$ on the $16^4$ lattice for various on-axis lattice momenta $~p(k)$.} \label{tab:Gh_p_16x16} \end{table} We have also made a corresponding check for the gluon propagator at zero momentum. On a lattice of size $20^4$ and for the same $\beta=2.5$ we observed a deviation between the \bc{} FOR and SOR results of the order of $25\%$. This would of course have consequences for estimates like those in Refs.~\cite{Bonnet:2001uh,Boucaud:2006pc}, since the infinite volume extrapolation of $D(0)$ performed there, although probably remaining finite, will definitely suffer from uncontrolled systematic uncertainties.
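As an aside, the bootstrap error estimate and the smoothed distributions of \Fig{fig:Gl_Gh_dist} can be sketched as follows in Python; SciPy's Gaussian kernel density estimate with Silverman's bandwidth rule serves here as a stand-in for the Nadaraya--Watson smoothing described in \Sec{sec:results}, and the routine is an illustrative assumption rather than our production code.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_distribution(data, n_replicas=200, seed=0):
    """Bootstrap mean-value replicas and a smoothed density estimate.
    data : 1d array of per-configuration propagator estimates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    means = np.array([rng.choice(data, size=n, replace=True).mean()
                      for _ in range(n_replicas)])
    err = means.std(ddof=1)        # bootstrap estimate of the error
    # Gaussian-kernel smoothing with Silverman's rule for the bandwidth
    density = gaussian_kde(means, bw_method='silverman')
    return means.mean(), err, density
\end{verbatim}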
It is interesting to study the volume dependence of the Gribov copy effect, in view of Zwanziger's recent claim mentioned at the beginning \cite{Zwanziger:2003cf}. \begin{figure}[htm] \mbox{ \includegraphics[width=0.47\textwidth,height=0.47\textwidth]% {fig6.eps} } \caption{Distributions of the number of different gauge copies found with the FOR method at $\beta=2.40$ for lattice sizes $8^4$ and $16^4$. } \label{fig:no_copies} \end{figure} \begin{figure}[htm] \mbox{ \includegraphics[width=0.47\textwidth,height=0.47\textwidth]% {fig7.eps} } \caption{Distributions of the deviation of the gauge functional values for different gauge copies relative to the best copy per configuration. FOR method at $\beta=2.40$ for lattice sizes $8^4$ and $16^4$. } \label{fig:func_copies} \end{figure} \begin{figure}[htm] \mbox{ \includegraphics[width=0.47\textwidth,height=0.47\textwidth]% {fig8.eps} } \caption{Distributions of ghost propagator values at lowest non-trivial momentum for different gauge copies as in \Fig{fig:func_copies}. } \label{fig:ghost_copies} \end{figure} \begin{figure}[htm] \mbox{ \includegraphics[width=0.50\textwidth,height=0.42\textwidth]% {fig9.eps} } \caption{ Relative deviation $~\delta D_L(p_{min})~\equiv ~(D_{\rm{SOR}}^{(fc)}-D_{\rm{FOR}}^{(bc)})/D_{\rm{FOR}}^{(bc)}~$ in percent for the gluon propagator $~D~$ for various linear lattice sizes $~L~$ and smallest non-vanishing momentum $~p_{min}~=~(2/a) \sin(\pi/L)~$ ($\beta=2.4$). } \label{fig:Gl_rel_L} \end{figure} First of all we have convinced ourselves that the number of gauge copies rises strongly with the lattice volume, as it should. This is clearly demonstrated in \Fig{fig:no_copies}, which provides the distributions of the number of gauge copies per configuration found with the FOR method (`flips on') for lattice sizes $8^4$ and $16^4$ at $\beta=2.40$. In both cases we have generated 100 configurations with 100 gauge copies each. It turns out that identical (or degenerate) copies can be well recognized at an accuracy for the gauge functional \Eq{eq:functional} of $O(10^{-10})$. Adjacent copies normally differ in the values of the gauge functional at a level of $O(10^{-6})$. Now let us compare the distributions of the corresponding values of the functional $F$ for each copy found. In order to normalize the values with respect to the highest (i.e. best) value per configuration we show the relative deviation $(F_{max}^{(bc)}-F_{max})/F_{max}^{(bc)}$. The frequency distributions of these values are shown in \Fig{fig:func_copies} for the same ensembles as used for \Fig{fig:no_copies}. There is a very clear tendency for the variance of the gauge functional to become much smaller as we increase the lattice volume. A similar tendency becomes visible in \Fig{fig:ghost_copies}, where we plot for the same set of configurations and gauge copies the distributions of the single values of the ghost propagator at the lowest non-vanishing on-axis momentum. Also in this case we have normalized the single values as $(G^{(bc)}-G)/G^{(bc)}$, i.e. taking the relative deviation of the propagator at a given copy $G$ from the value computed on the best copy $G^{(bc)}$, the latter again chosen with respect to the gauge functional value. We see that the long tail seen for the smaller lattice disappears for the larger lattice.
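The bookkeeping behind these distributions is simple; a minimal sketch (our own illustrative routine, with the tolerance quoted above) reads:
\begin{verbatim}
import numpy as np

def distinct_copies(F_values, tol=1e-10):
    """Group the F_max values of the gauge copies of one configuration;
    values closer than tol are counted as the same Gribov copy."""
    F = np.sort(np.asarray(F_values))[::-1]       # best copy first
    groups = [F[0]]
    for f in F[1:]:
        if groups[-1] - f > tol:
            groups.append(f)
    # relative deviations from the best copy, as shown in the histograms
    rel_dev = (groups[0] - np.array(groups)) / groups[0]
    return len(groups), rel_dev
\end{verbatim}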
Although close values of the gauge functional do not tell us anything about how much the corresponding gauge configurations differ from each other (irrespective of a global relative gauge transformation), we would like to interpret our finding of shrinking distributions as a weakening of the Gribov problem with increasing `physical' lattice size. Moreover, we have plotted the relative deviation \begin{equation} \delta D_L(p_{min})~\equiv (D_{\rm{SOR}}^{(fc)}-D_{\rm{FOR}}^{(bc)})/D_{\rm{FOR}}^{(bc)} \end{equation} for the gluon propagator (see \Fig{fig:Gl_rel_L}) and analogously for the ghost propagator (see l.h.s. of \Fig{fig:Gh_rel_Lk}) as functions of the inverse linear lattice size $~1/L~$, both determined at the minimal momentum $~p_{min}$. Here we have used data for fixed $~\beta=2.4~$ and lattice sizes from $~L=5~$ up to $~L=20$. In close correspondence with our observations presented in Figs. \ref{fig:gluon_prop} and \ref{fig:ghost_prop} we see that the Gribov copy effect becomes weaker (stronger) for increasing (decreasing) `physical' lattice size and correspondingly decreasing (increasing) minimal momentum, at least up to a certain value of the lattice size ($\mbox{}_{\textstyle \sim}^{\textstyle < } 15 $). One would of course need larger values of $L$ to draw a reliable conclusion about the limit $L\to\infty$. Anyway, at our largest lattice size $L=20$ the Gribov copy effect is still quite strong. For the ghost propagator, where the signal-to-noise ratio is more favourable, we have found an analogous behavior also for the multiple on-axis momenta $~k=2,3,4$ (see r.h.s. of \Fig{fig:Gh_rel_Lk}). \begin{figure*} \mbox{ \includegraphics[width=0.50\textwidth,height=0.45\textwidth]% {fig10a.eps} \includegraphics[width=0.50\textwidth,height=0.45\textwidth]% {fig10b.eps} } \caption{ Relative deviation $~\delta G_L(p)~\equiv ~(G_{\rm{SOR}}^{(fc)}-G_{\rm{FOR}}^{(bc)})/G_{\rm{FOR}}^{(bc)}~$ in percent for the ghost propagator $~G~$ at $\beta=2.4$ for various linear lattice sizes $~L~$ and the smallest non-vanishing momentum $~p_{min}~=~(2/a) \sin(\pi/L)~$ (left) as well as for on-axis momenta \mbox{$~p(k), ~k=2,3,4$ (right)}. } \label{fig:Gh_rel_Lk} \end{figure*} In \cite{Bakeev:2003rr} two of us reported on rare Monte Carlo events with exceptionally large values of the ghost propagator occurring for the SOR gauge fixing method at larger $~\beta$ values. In \Fig{fig:Gl_Gh_hist_16x16_b2p50} we show some time histories of the gluon and ghost propagators for $\beta=2.5$ and a $16^4$ lattice, comparing \bc{} SOR with \bc{} FOR. We see that for the `best copy - flips on' case (FOR) the fluctuations of both propagators are smaller. But for the ghost propagator the effect of exceptionally large values, in general related to small eigenvalues of the F-P operator \cite{Sternbeck:2005vs}, is still there. \begin{figure*} \mbox{ \includegraphics[width=0.50\textwidth,height=0.50\textwidth]% {fig11a.eps} \includegraphics[width=0.50\textwidth,height=0.50\textwidth] {fig11b.eps} } \caption{ Time histories for $D^{(bc)}(p_{min})$ (left) and $G^{(bc)}(p_{min})$ (right) for both SOR and FOR methods at $\beta=2.5$ and $16^4$ lattice.} \label{fig:Gl_Gh_hist_16x16_b2p50} \end{figure*} Concluding, we show the form factors of the gluon propagator $~p^2 D(p)~$ and of the ghost propagator $~p^2 G(p)~$ in physical units as functions of the physical momentum for fixed $~\beta=2.4~$ and lattice sizes varying from $~10^4~$ to $~20^4~$.
We have rescaled the gluon propagator values $~D(p)~$ with factors $~a^2~$ and $~g_0^2~$ and the ghost propagator $~G(p)~$ with $~a^2~$, respectively, in order to translate to the corresponding continuum (bare) propagators (compare with \cite{Bloch:2003sk}). To estimate the lattice spacing in physical units we have used the string tension: $~a^2 \sigma = 0.071~$ \cite{Fingberg:1992ju} with the standard value $~\sqrt{\sigma} = 440~\mathrm{MeV}$. The form factor results for both methods \bc{} SOR and \bc{} FOR are shown together in \Fig{fig:Gl_Gh_scale}. Again the figure shows clear Gribov copy effects for both propagators, not only for the ghost propagator. We did not apply any overall renormalization here. The statistics collected for these runs are listed in \Tab{tab:stat}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline\hline \multicolumn{11}{|c|}{FOR} \\ \hline\hline $L$&5 &6 &8 & 9 & 10 &12&15&16&18&20\\ \hline $N_{conf}$ &1000 & 1000 & 800 & 600 & 600 & 500&400&356&200&200 \\ \hline\hline \multicolumn{11}{|c|}{SOR} \\ \hline\hline $L$&5 &6 &8 & 9 & 10 &12&15&16&18&20\\ \hline $N_{conf}$ &1000 & 1000 & 800 & 500 & 500 & 400&400&370&100&100 \\ \hline \hline \end{tabular} \end{center} \caption{Statistics for the measurements at different $L$ and $\beta=2.4$.} \label{tab:stat} \end{table} \begin{figure*} \mbox{ \includegraphics[width=0.50\textwidth,height=0.50\textwidth]% {fig12a.eps} \includegraphics[width=0.50\textwidth,height=0.50\textwidth] {fig12b.eps} } \caption{ Gluon form factor $~p^2 D(p)~$ (left) and ghost form factor $~p^2 G(p)~$ (right) both for the \bc{} SOR and \bc{} FOR methods versus momentum obtained for various lattice sizes and fixed $~\beta=2.4$. } \label{fig:Gl_Gh_scale} \end{figure*} \section{Conclusions} \label{sec:conclusions} In this paper we have demonstrated that there is a visible Gribov problem for the ghost propagator as well as for the gluon propagator computed in $SU(2)$ lattice gauge theory within the Landau gauge. In order to show this we have enlarged the gauge orbits of given Monte Carlo generated gauge fields by non-periodic $\mathbb{Z}(2)$ transformations, flipping all links in a given direction on a slice orthogonal to that direction. This allows a preconditioning which maximizes the gauge functional before applying the overrelaxation algorithm. We have found indications for a weakening of the Gribov copy effect both when going to larger momenta at fixed volume and when increasing the lattice size $L$ while correspondingly lowering the minimal non-zero momentum, at least up to a certain value of the lattice size ($\mbox{}_{\textstyle \sim}^{\textstyle < } 15$). However, one would need larger values of $L$ to draw a reliable conclusion about the limit $L\to\infty$. We have not shown the momentum-scheme running coupling, which can be determined from the form factors of the propagators discussed here assuming that the renormalization factor for the ghost-gluon vertex is constant. This will be discussed in a future paper, where we want to present data for larger lattices and a larger spectrum of (off-axis) momenta. \section*{ACKNOWLEDGEMENTS} This investigation has been supported by the Heisenberg-Landau program of collaboration between the Bogoliubov Lab of Theoretical Physics of the Joint Institute for Nuclear Research Dubna, Russia and German institutes. V.K.M. acknowledges support by RFBR grant 05-02-16306. G.B. acknowledges support from an INFN fellowship. M.M.-P. thanks the DFG for support under grant FOR 465 / Mu932/2-2.
\bibliographystyle{apsrev}
\section{Introduction} This article, which is a continuation of our earlier work \cite{Grohs2013}, studies off-diagonal decay properties of Moore-Penrose pseudoinverses $A^+$ of symmetric (bi-infinite) matrices $A=(A_{\lambda,\lambda'})_{\lambda,\lambda'\in\Lambda}$, with $\Lambda$ a discrete index set. More precisely, our results are of the following general type: assume that $A$ is \emph{localized}, in the sense that \begin{equation}\label{eq:locintro} |A_{\lambda,\lambda'}|\le C\omega(\lambda,\lambda')^{-N}\quad \mbox{for all }\lambda,\ \lambda'\in \Lambda \end{equation} \new{with respect to some nice function $\omega$ measuring the distance between the indices}. Then the Moore-Penrose pseudoinverse of $A$ satisfies the analogous inequality with a different constant $C$ and a parameter $N^+\le N$ which we describe explicitly. Typically $A$ arises as a Gram matrix $A=\left(\langle\psi_\lambda,\psi_{\lambda'}\rangle_\mathcal{H}\right)_{\lambda,\ \lambda'\in\Lambda}$ of a frame $(\psi_{\lambda})_{\lambda\in \Lambda}$ of a Hilbert space $\mathcal{H}$. In that case the Moore-Penrose pseudoinverse $A^+$ corresponds to the Gram matrix of the canonical dual frame of $(\psi_{\lambda})_{\lambda\in \Lambda}$. Hence, localization properties of $A^+$ provide useful information about the canonical dual frame. For more information regarding frames we refer to \cite{Christensen2003}. For a more detailed motivation of the problem that we consider in the present paper (for instance in the context of operator compression) we refer to our earlier work \cite{Grohs2013}. The 'localization problem' as described above has been studied in several contexts, see \cite{Aldroubi2008,Balan2006,Balan2006a,Balan2008,Baskakov2011,Baskakov1997,Baskakov1997a,Grochenig2004,Fornasier2005,Cordero2004,Sun2007,Sun2011,Demko1984,Jaffard1990,Futamura2009,Krishtal2011}. In these works the index set $\Lambda$ arises as a sampling set for either Gabor or wavelet frames. In both cases there exists a canonical index distance function $\omega$ for which localization results have been established in the aforementioned works. Recently, these results have been extended to anisotropic frame systems such as curvelets \cite{Candes2004a} or shearlets \cite{Labate2005}, and more generally parabolic molecules \cite{Grohs2011}. The present paper extends and unifies these results. More precisely, we shall prove localization results for index distance functions $\omega$ which are associated with frames of so-called $\alpha$-molecules as introduced in \cite{Grohs2013a}. The notion of $\alpha$-molecules includes wavelets, ridgelets \cite{CandesPhD,GrohsRidge}, shearlets, curvelets, parabolic molecules and $\alpha$-shearlets \cite{Keiper2013} as special cases. Consequently, the results of the present paper are applicable to all these systems at once. {\bf Outline. }We proceed as follows. In Section \ref{sec:framework} we provide an abstract framework for index distance functions in which localization results can be established. The main result of this section is Theorem \ref{thm_main}, which states that, if an index distance function $\omega$ satisfies certain properties, then localization of a matrix $A$ in the sense of \ref{eq:locintro} implies a similar property for its Moore-Penrose pseudoinverse $A^+$. To further motivate the importance of localization properties we also provide several results stating that localized matrices are automatically bounded on a wide class of weighted $\ell^p$ Banach spaces.
Then in Section \ref{sec:alpha} we apply the abstract framework of Section \ref{sec:framework} to specific index distance functions, namely those associated with frames of $\alpha$-molecules as introduced in \cite{Grohs2013a}. More precisely, we verify that those index distance functions satisfy the assumptions of the abstract theory developed in Section \ref{sec:framework} and hence provide localization results for the whole class of $\alpha$-molecules. We collect some auxiliary results in Appendix \ref{appendix}. \section{Abstract Framework}\label{sec:framework} In the present section we set up the abstract framework which we later apply in Section \ref{sec:alpha} to establish localization results for frames of $\alpha$-molecules. Subsection \ref{subsec:basicnotions} below starts by introducing the kind of index distance functions $\omega$ with which we are working. We then define the Banach space of localized matrices, for which a submultiplicativity property is established in Theorem \ref{thm_submult}. This property provides a key technical tool to prove the main result of the section, namely Theorem \ref{thm_main}. In Subsection \ref{subsec:bounded} we show that localization with respect to such index functions implies boundedness on a large range of weighted $\ell^p$ spaces. Finally, in Subsection \ref{subsec:invclos} we establish Theorem \ref{thm_main}, which states the localization of the Moore-Penrose pseudoinverse of matrices which are localized with respect to $\omega$ as introduced in Subsection \ref{subsec:basicnotions}. Most of the material in this section is well-known. Using the proof techniques developed in \cite{Grohs2013}, it is not too hard to establish Theorem \ref{thm_main}. The difficult part of the present paper is contained in Section \ref{sec:alpha}, where we shall verify that canonical index distances associated with $\alpha$-molecules fit into the abstract framework developed in the present section. \subsection{Basic Notions}\label{subsec:basicnotions} We shall prove a localization result in a general framework which we describe in the present section. Here we introduce the notations and definitions which we shall use, starting with the following definition of an \emph{index distance} function. \begin{Def} \label{def:index_dist} Let $\Lambda$ be a discrete index set. An \emph{index distance} is a function $ \omega : \Lambda \times \Lambda \longrightarrow [1,\infty) $ such that there exist constants $ C_S,\,C_T \geq 1 $ with \begin{enumerate}[(i)] \item\label{item:pseudo_sep} $ \omega(\lambda,\lambda) = 1 $ for all $ \lambda \in \Lambda $; \item\label{item:pseudo_sym} $ \omega(\lambda,\lambda') \le C_S \omega(\lambda',\lambda) $ for all $ \lambda,\lambda' \in \Lambda $; \item\label{item:pseudo_tri} $ \omega(\lambda,\lambda') \le C_T \omega(\lambda,\lambda'') \omega(\lambda'',\lambda') $ for all $ \lambda,\lambda',\lambda'' \in \Lambda $. \end{enumerate} \end{Def} \begin{Def}\label{def:sep} We say that $\Lambda$ is \emph{separated} by $\omega$ if \begin{flalign*} & C_\Lambda := \inf_{\lambda \neq \lambda'} \omega(\lambda,\lambda') > 1 . & \end{flalign*} \end{Def} \begin{Def}\label{def:Schur} Let $ K \geq 1 $. We say that $\omega$ is $K$-\emph{admissible} if \begin{flalign*} & C_\omega := \sup_{\lambda \in \Lambda} \sum_{\lambda' \in \Lambda} \omega(\lambda,\lambda')^{-K} < \infty . & \end{flalign*} \end{Def} \new{ \begin{Rmk} The pseudo-symmetry property \ref{def:index_dist}(\ref{item:pseudo_sym}) is not strictly necessary.
However, our examples of index distance are all pseudo-symmetric in a natural way, and this allows us to state the Schur-type condition \ref{def:Schur} in any fixed order of the indices. Furthermore, one can always replace $\omega(\lambda,\lambda')$ with its symmetrization $ \omega^{\operatorname{sym}}(\lambda,\lambda') := \frac{1}{2} (\omega(\lambda,\lambda') + \omega(\lambda',\lambda)) $: if $\omega$ enjoys \ref{def:index_dist}(\ref{item:pseudo_sep}) and (\ref{item:pseudo_tri}), \ref{def:sep} and \ref{def:Schur} with constants $C_T$, $C_\Lambda$ and $C_\omega$, then $\omega^{\operatorname{sym}}$ will enjoy the same properties with constants $2C_T$, $ \inf_{\lambda \neq \lambda'} \omega^{\operatorname{sym}}(\lambda,\lambda') \geq C_\Lambda $ and $ \sup_{\lambda \in \Lambda} \sum_{\lambda' \in \Lambda} \omega^{\operatorname{sym}}(\lambda,\lambda')^{-K} < 2^K C_\omega $. By contrast, the pseudo-triangle inequality \ref{def:index_dist}(\ref{item:pseudo_tri}) is technically crucial. \end{Rmk} } Having introduced the required properties of an index distance function we now define the Banach space of \emph{localized} operators. \begin{Def}\label{def:loc} Let $\omega$ be an admissible index distance and $ N \geq 1 $. A matrix $ A \in \mathbb{C}^{\Lambda\times\Lambda} $ is said to be $N$-\emph{localized} (with respect to $\omega$) if $ |A_{\lambda,\lambda'}| \lesssim \omega(\lambda,\lambda')^{-N} $ for all $ \lambda,\lambda' \in \Lambda $. We define $\mathcal{B}_N$ as the space of all $N$-localized matrices, $$ \mathcal{B}_N := \{ A \in \mathbb{C}^{\Lambda\times\Lambda} : |A_{\lambda,\lambda'}| \lesssim \omega(\lambda,\lambda')^{-N} \ \mbox{ for all } \lambda,\lambda' \in \Lambda \} , $$ with associated norm $$ \|A\|_{\mathcal{B}_N}:=\inf\{C>0:\,|A_{\lambda,\lambda'}| \le C \omega(\lambda,\lambda')^{-N} \mbox{ for all }\lambda,\lambda' \in \Lambda\} = \sup_{\lambda,\lambda'\in\Lambda} \omega(\lambda,\lambda')^N |A_{\lambda,\lambda'}| . $$ \end{Def} Notice that $ \mathcal{B}_N \subseteq \mathcal{B}_M $ whenever $ N \geq M $. We next show that $\mathcal{B}_N$ is complete. \begin{Prop} The set $\mathcal{B}_N$ constitutes a Banach space with respect to the norm $\|\ \|_{\mathcal{B}_N}$. \end{Prop} \begin{proof} Take a Cauchy sequence $(A_n)$ in $\mathcal{B}_N$. This means that $\omega^N(A_n)$ is uniformly Cauchy. Moreover, $(A_n)$ is pointwise Cauchy, since $$ |A_n(\lambda,\lambda') - A_m(\lambda,\lambda') | \leq \| A_n - A_m \|_{\mathcal{B}_N} \omega(\lambda,\lambda')^{-N} . $$ Hence $(A_n)$ converges pointwise to some $ A \in \mathbb{C}^{\Lambda\times\Lambda} $. Now $\omega^N(A_n)$ converges pointwise to $\omega^N A$, and it is uniformly Cauchy, therefore it converges uniformly to $\omega^N A$. Then, since $ \sup \omega^N A_n < \infty $ for all $n$, we also have $ \sup \omega^N A < \infty $, namely $ A \in \mathcal{B}_N $. \end{proof} We close this subsection with the following result \new{regarding the action of $\mathcal{B}_M$ on $\mathcal{B}_N$, whenever $M$ is sufficiently large, made possible by} a submultiplicativity property of the $\mathcal{B}_N$-norm. This result is crucial for the proof of our main Theorem \ref{thm_main}. \new{Notice that, unlike \cite[Proposition 2.13]{Grohs2013}, it is not required that $C_S=1$ or that $AB$ be symmetric.} \begin{Thm}\label{thm_submult} Let $\Lambda$ be a discrete set, separated by a $K$-admissible index distance $\omega$. Let $ A \in \mathcal{B}_{N+L} $ with $L\geq \max\left(2N\log_{C_\Lambda}C_T,2K\right)$, and $ B \in \mathcal{B}_{N} $.
Then $ AB \in \mathcal{B}_N $, with $$ \|AB\|_{\mathcal{B}_N} \le (1+C_\omega)\|A\|_{\mathcal{B}_{N+L}}\|B\|_{\mathcal{B}_N}. $$ \end{Thm} \begin{proof} We have \begin{align*} |(AB)_{\lambda,\lambda'}| &= |\sum_{\lambda'' \in \Lambda} A_{\lambda,\lambda''} B_{\lambda'',\lambda'}| \\ &\lesssim \sum_{\lambda'' \in \Lambda} \omega(\lambda,\lambda'')^{-N-L} \omega(\lambda'',\lambda')^{-N} \quad \mbox{ by \ref{def:loc}} \\ &= \omega(\lambda,\lambda)^{-N-L} \omega(\lambda,\lambda')^{-N} + \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-N-L} \omega(\lambda'',\lambda')^{-N} \\ &= \omega(\lambda,\lambda')^{-N} + \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-N-L} \omega(\lambda'',\lambda')^{-N} \quad \mbox{ by \ref{def:index_dist}(\ref{item:pseudo_sep})} \\ &= \omega(\lambda,\lambda')^{-N} + \sum_{\lambda'' \neq \lambda} [\omega(\lambda,\lambda'') \omega(\lambda'',\lambda')]^{-N} \omega(\lambda,\lambda'')^{-L} \\ &\leq \omega(\lambda,\lambda')^{-N} + {C_T}^N \omega(\lambda,\lambda')^{-N} \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-L} \quad \mbox{ by \ref{def:index_dist}(\ref{item:pseudo_tri})} \\ &= \omega(\lambda,\lambda')^{-N} \left( 1 + {C_T}^N \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-L/2}\omega(\lambda,\lambda'')^{-L/2} \right) \\ &\leq \omega(\lambda,\lambda')^{-N} \left( 1 + {C_T}^N C_\Lambda^{-L/2} \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-L/2} \right) \quad \mbox{ by \ref{def:sep}} \\ &\leq \omega(\lambda,\lambda')^{-N} \left( 1 + \sum_{\lambda'' \neq \lambda} \omega(\lambda,\lambda'')^{-L/2} \right) \quad \mbox{ as $ L \geq 2N\log_{C_\Lambda}{C_T} $} \\ &\leq \omega(\lambda,\lambda')^{-N} (1 + C_\omega) \quad \mbox{ as $ L \geq 2K $, by \ref{def:Schur}.} \qedhere \end{align*} \end{proof} \subsection{Localization implies Boundedness}\label{subsec:bounded} One important feature of localization is that it implies boundedness on a large class of weighted $\ell^p$ spaces. A classical instance of this type of result is the boundedness of Calder\'on-Zygmund operators on Besov spaces, which can be shown by representing the operators in a wavelet basis and using localization, together with the fact that Besov space norms can be characterized in terms of weighted $\ell^p$ norms of wavelet coefficients \cite{Frazier1991}. As another example we mention Fourier integral operators which can be shown to be localized if represented in a frame of parabolic molecules \cite{Grohs2013,Candes2004}. Consequently, such operators are bounded on the associated function spaces, as described e.g. in \cite{Borup2007}. In the present section we establish results stating that localized matrices always induce bounded operators on weighted $\ell^p$ spaces, whenever the index distance $\omega$ satisfies certain admissibility properties. Given a weight function $ \mathsf w : \Lambda \to (0,\infty) $, for $ p \in (0,\infty] $ we define the weighted $\ell^p$ spaces $$ \ell^p_\mathsf w(\Lambda) := \{ a \in \mathbb{C}^\Lambda : a\mathsf w \in \ell^p(\Lambda) \} $$ with weighted norms $$ \| a \|_{p,\mathsf w} := \|a\mathsf w\|_p , $$ where we write $a\mathsf w=(a(\lambda)\mathsf w(\lambda))_{\lambda\in \Lambda}$. We recall the following weighted version of the Schur test (\cite[Lemma 4]{GS}). \begin{Lemma} \label{lemma:Schur1} Let $ A \in \mathbb{C}^{\Lambda\times\Lambda} $.
For $ \mathsf w_1,\mathsf w_2 : \Lambda \to (0,+\infty) $ and $ p_0 \in (0,1] $, consider the Schur conditions \begin{subequations} \label{eq:weighted_Schur1} \begin{align} & \sum_{\lambda \in \Lambda} \mathsf w_2(\lambda)^{p_0} |A_{\lambda,\lambda'}|^{p_0} \leq C_1^{p_0} \mathsf w_1(\lambda')^{p_0} \mbox{ for some $ C_1 > 0 $,} \label{eq:w+Schur1} \\ & \sum_{\lambda' \in \Lambda} |A_{\lambda,\lambda'}| \mathsf w_1(\lambda')^{-1} \leq C_2 \mathsf w_2(\lambda)^{-1} \mbox{ for some $ C_2 > 0 $,} \label{eq:w-Schur1} \end{align} \end{subequations} and define the formal matrix operator \begin{equation*} (Aa)_\lambda := \sum_{\lambda'\in\Lambda} A_{\lambda,\lambda'} a_{\lambda'} \qquad a \in \mathbb{C}^\Lambda . \end{equation*} Then: \begin{enumerate}[(a)] \item \label{item:1-Schur1} \new{if $A$ enjoys \ref{eq:w+Schur1}, then it is bounded from $\ell^{p}_{\mathsf w_1}(\Lambda)$ to $\ell^{p}_{\mathsf w_2}(\Lambda)$ for all $ p \in [p_0,1] $;} \item \label{item:inf-Schur1} if $A$ enjoys \ref{eq:w-Schur1}, then it is bounded from $\ell^\infty_{\mathsf w_1}(\Lambda)$ to $\ell^\infty_{\mathsf w_2}(\Lambda)$; \item \label{item:p-Schur1} if $A$ enjoys \ref{eq:w+Schur1} and \ref{eq:w-Schur1}, then it is bounded from $\ell^p_{\mathsf w_1}(\Lambda)$ to $\ell^p_{\mathsf w_2}(\Lambda)$ for all $ p \in [p_0,\infty] $. \end{enumerate} In each case, $ \|A\|_{\ell^p_{\mathsf w_1}\to\ell^p_{\mathsf w_2}} \leq C_1^{1/p} C_2^{1/p'} $ for $ p \in [1,\infty] $ (where $ 1/p + 1/p' = 1 $ and $ 1/\infty = 0 $), and $ \|A\|_{\ell^p_{\mathsf w_1}\to\ell^p_{\mathsf w_2}} \leq C_1 $ for $ p \in (p_0,1] $. \end{Lemma} \begin{proof} \new{ First notice that, if \ref{eq:w+Schur1} is true for $ p_0 \in (0,1] $, then it holds true with $ p_0 = 1 $ by the $p$-triangle inequality.} Further \begin{align*} \| Aa \|_{p_0,\mathsf w_2}^{p_0} &= \sum_{\lambda\in\Lambda} \left| \sum_{\lambda'\in\Lambda} A_{\lambda,\lambda'} a_{\lambda'} \right|^{p_0} \mathsf w_2(\lambda)^{p_0} \\ &\leq \sum_{\lambda\in\Lambda} \sum_{\lambda'\in\Lambda} |A_{\lambda,\lambda'}|^{p_0} |a_{\lambda'}|^{p_0} \mathsf w_2(\lambda)^{p_0} \\ &= \sum_{\lambda'\in\Lambda} |a_{\lambda'}|^{p_0} \sum_{\lambda\in\Lambda} \mathsf w_2(\lambda)^{p_0} |A_{\lambda,\lambda'}|^{p_0} \\ &\leq C_1^{p_0} \sum_{\lambda'\in\Lambda} |a_{\lambda'}|^{p_0} \mathsf w_1(\lambda')^{p_0} \\ &= C_1^{p_0} \| a \|_{p_0,\mathsf w_1}^{p_0} . \end{align*} \new{Therefore, the interpolation theorem (\cite[Corollary 2.2]{Gustavsson1982}) yields item (\ref{item:1-Schur1}).} Now, assuming \ref{eq:w-Schur1} we can estimate \begin{align*} \| Aa \|_{\infty,\mathsf w_2} &\leq \sup_{\lambda\in\Lambda} \sum_{\lambda'\in\Lambda} |A_{\lambda,\lambda'}| |a_{\lambda'}| \mathsf w_2(\lambda) \\ &= \sup_{\lambda\in\Lambda} \mathsf w_2(\lambda) \sum_{\lambda'\in\Lambda} |A_{\lambda,\lambda'}| \mathsf w_1(\lambda')^{-1} |a_{\lambda'}| \mathsf w_1(\lambda') \\ &\leq \sup_{\lambda\in\Lambda} \mathsf w_2(\lambda) \sum_{\lambda'\in\Lambda} |A_{\lambda,\lambda'}| \mathsf w_1(\lambda')^{-1} \sup_{\lambda'\in\Lambda} |a_{\lambda'}| \mathsf w_1(\lambda') \\ &\leq C_2 \sup_{\lambda\in\Lambda} \mathsf w_2(\lambda) \mathsf w_2(\lambda)^{-1} \sup_{\lambda'\in\Lambda} |a_{\lambda'}| \mathsf w_1(\lambda') \\ &= C_2 \| a \|_{\infty,\mathsf w_1} , \end{align*} whence we obtain item (\ref{item:inf-Schur1}). Finally assume both \ref{eq:w+Schur1} and \ref{eq:w-Schur1}. Then, by the interpolation theorem (\cite[Corollary 2.2]{Gustavsson1982}), items (\ref{item:1-Schur1}) and (\ref{item:inf-Schur1}) imply item (\ref{item:p-Schur1}). 
The estimate for $ \|A\|_{\ell^p_{\mathsf w_1}\to\ell^p_{\mathsf w_2}} $ with $ p \in (p_0,1] $ follows from \cite[Proposition 1.1, Theorem 2.4]{Gustavsson1982}, interpolating between $p_0$ and $1$. As for the case $ p \in [1,\infty] $, the bound follows easily by applying the H\"older inequality, \ref{eq:w-Schur1} and \ref{eq:w+Schur1} with $p_0 = 1$ (see \cite[Lemma 4]{GS}). \end{proof} If a matrix $ A \in \mathbb{C}^{\Lambda\times\Lambda} $ decays with respect to some bounding function (e.g.\ an index distance), $$ |A_{\lambda,\lambda'}| \lesssim \omega(\lambda,\lambda')^{-N}, $$ one can test the boundedness of $A$ by testing estimates of the form $$ \sum_{\lambda \in \Lambda} \mathsf w_2^{p_0}(\lambda) \omega(\lambda,\lambda')^{-p_0K} \lesssim \mathsf w_1^{p_0}(\lambda') , \quad \sum_{\lambda' \in \Lambda} \omega(\lambda,\lambda')^{-K} \mathsf w_1^{-1}(\lambda') \lesssim \mathsf w_2^{-1}(\lambda), $$ which imply conditions \ref{eq:weighted_Schur1} for $ N $ sufficiently large. In order to give a precise statement, we introduce the concept of \emph{admissibility} with respect to two weight sequences $\mathsf w_1,\mathsf w_2$ and a root $p_0$. \begin{Def}\label{def:pKadmiss1} Let $ \mathsf w_1,\mathsf w_2 :\Lambda \to (0,+\infty) $, $ p_0 \in (0,1] $ and $ K \geq 1 $. An index distance $\omega$ is called $(\mathsf w_1,\mathsf w_2,p_0,K)$-\emph{admissible} if $$ \sum_{\lambda \in \Lambda} \mathsf w_2^{p_0}(\lambda) \omega(\lambda,\lambda')^{-p_0K} \leq C_1^{p_0} \mathsf w_1^{p_0}(\lambda') , \quad \sum_{\lambda' \in \Lambda} \omega(\lambda,\lambda')^{-K} \mathsf w_1^{-1}(\lambda') \leq C_2 \mathsf w_2^{-1}(\lambda), $$ for some $C_1,\ C_2>0$. \end{Def} Note that, thanks to property \ref{def:index_dist}(\ref{item:pseudo_sym}), $K$-admissibility as defined in \ref{def:Schur} is equivalent to $(1,1,1,K)$-admissibility as defined in \ref{def:pKadmiss1}. \begin{Prop} Let $\omega$ be a $(\mathsf w_1,\mathsf w_2,p_0,K)$-admissible index distance. If $ A \in \mathcal{B}_N $ for some $ N \geq K $, then it defines a bounded operator from $\ell^p_{\mathsf w_1}(\Lambda)$ to $\ell^p_{\mathsf w_2}(\Lambda)$ for all $ p \in [p_0,\infty]$, with \begin{align*} & \|A\|_{\ell^p_{\mathsf w_1}\to\ell^p_{\mathsf w_2}} \leq C_1^{1/p}C_2^{1/p'}\|A\|_{\mathcal{B}_N} \quad p \in [1,\infty] , \\ & \|A\|_{\ell^p_{\mathsf w_1}\to\ell^p_{\mathsf w_2}} \leq C_1\|A\|_{\mathcal{B}_N} \quad p \in (p_0,1]. \end{align*} If $\omega$ is $K$-admissible, then $A$ defines a bounded operator from $\ell^p(\Lambda)$ to $\ell^p(\Lambda)$ for all $ p \in [1,\infty]$, with $$ \|A\|_{\ell^p\to\ell^p} \leq C_S^{N/p} C_\omega \|A\|_{\mathcal{B}_N} \quad p \in [1,\infty] . $$ \end{Prop} \begin{proof} The proof follows directly by applying Lemma \ref{lemma:Schur1}. \end{proof} \subsection{Inverse Closedness}\label{subsec:invclos} For several applications it is important to know the localization properties of the operator $A^{-1}$, assuming that $A$, restricted to its image, constitutes an isomorphism $A:\ell^2(\Lambda)\to \ell^2(\Lambda)$. For instance, as we have seen in the previous subsection, if it can be shown that $A^{-1}\in \mathcal{B}_N$ for sufficiently large $N$ one can deduce the boundedness of $A^{-1}$ on a large class of sequence spaces. Except for very special cases of $\omega$ it cannot be expected that $A^{-1}\in \mathcal{B}_N$ if $A\in \mathcal{B}_N$.
However, we shall show that the Moore-Penrose pseudoinverse satisfies $A^+ \in \mathcal{B}_{N^+}$ whenever $A\in \mathcal{B}_N$, where $N^+\le N$ depends only on $\omega$ and the spectrum of $A$. Moreover, this dependence will be made completely explicit. We now describe the spectral assumption on $A$, which we shall impose in our analysis. \begin{Def} The matrix $A$ viewed as an operator from $\ell^2(\Lambda)$ to itself possesses a \emph{spectral gap} if there exist numbers $0<a\le b<\infty$ such that % \begin{equation*} \sigma_2\left(A\right) \subset \left\{0\right\}\cup [a,b], \end{equation*} % where $\sigma_2\left(A\right)$ denotes the $\ell^2(\Lambda)$-spectrum of $A$. \end{Def} If $A$ is symmetric and possesses a spectral gap we can define its Moore-Penrose pseudoinverse $A^+$ which satisfies the normal equations \begin{equation}\label{eq:MP} A^2A^+ = A. \end{equation} Having stated all necessary definitions we can now state our main result. \begin{Thm}\label{thm_main} % Assume that $A\in \mathcal{B}_{N+L}$ with \begin{equation} \label{eq:hp} N \geq K \qquad L\geq\max \left(2N\log_{C_\Lambda}C_T,2K\right) \end{equation} is symmetric and possesses a spectral gap, say $$ \sigma_2(A) \subset \{0\}\cup [a,b]. $$ Then with $A^+$ denoting its Moore-Penrose pseudoinverse we have $$ A^+\in \mathcal{B}_{N^+} $$ with \begin{equation}\label{eq:nplus} N^+ = N \left(1-\frac{\log\left(1 + \frac{2}{a^2+b^2} \|A\|_{\mathcal{B}_{N+L}}^2\left(1+C_\omega\right)^2\right)} {\log\left(\frac{b^2 - a^2}{b^2 + a^2}\right)}\right)^{-1}. \end{equation} \end{Thm} \begin{proof} The proof goes exactly as that of \cite[Theorem 2.12]{Grohs2013}, using our submultiplicativity result, Theorem \ref{thm_submult}. \end{proof} \section{Application to $\alpha$-Molecules}\label{sec:alpha} We intend to apply the general results of the previous section to the study of $\alpha$-molecules \cite{Grohs2013a}. This class of systems includes as special cases wavelets, curvelets, shearlets, hybrid shearlets and ridgelets; therefore our results allow us to obtain localization results for all these systems simultaneously. We proceed as follows. In Subsection \ref{subsec:indexdist} we describe the index distance $\omega$ which has been introduced in \cite{Grohs2013a} and which is defined on a continuous phase space $P$. Then we prove that this function $\omega$ satisfies all the assumptions of Definition \ref{def:index_dist}. This turns out to be the most technical part of this work. Only later, in Subsection \ref{subsec:alphamol}, do we briefly introduce the notion of $\alpha$-molecules. In a system of $\alpha$-molecules, every function is associated with a point in the phase space $P$ and therefore every such system is associated with a discrete sampling set $\Lambda\subset P$. We discuss two canonical choices of $\Lambda$ in detail: so-called curvelet-type systems in Subsubsection \ref{subsubsec:curve} and so-called shearlet-type systems in Subsubsection \ref{subsubsec:shear}. In both cases we show that $\Lambda$ is separated by $\omega$ and that $\omega$ restricted to $\Lambda$ is admissible. In summary, $\alpha$-molecules, together with the index distance introduced in \cite{Grohs2013a}, fit into the abstract framework developed earlier in Section \ref{sec:framework}. As an application we present localization results for canonical duals of frames of $\alpha$-curvelets (Theorem \ref{thm:curveloc}) and $\alpha$-shearlets (Theorem \ref{thm:shearloc}).
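Before turning to concrete index distances, we remark that the exponent $N^+$ of Theorem \ref{thm_main} is elementary to evaluate numerically. The following minimal sketch (in Python; the input values are purely illustrative and not derived from an actual frame) simply implements the formula \ref{eq:nplus}:
\begin{verbatim}
import math

def n_plus(N, a, b, normA, C_omega):
    # decay exponent N^+ of the Moore-Penrose pseudoinverse:
    # a, b    : spectral gap bounds, sigma_2(A) in {0} u [a, b]
    # normA   : the norm ||A||_{B_{N+L}}
    # C_omega : admissibility constant of the index distance
    num = math.log(1 + 2 / (a**2 + b**2) * normA**2 * (1 + C_omega)**2)
    den = math.log((b**2 - a**2) / (b**2 + a**2))   # negative for b > a
    return N / (1 - num / den)

print(n_plus(N=10, a=0.9, b=1.1, normA=1.0, C_omega=2.0))  # ~ 4.14
\end{verbatim}
Note that as $b\to a$ the denominator $\log((b^2-a^2)/(b^2+a^2))$ tends to $-\infty$, so that $N^+\to N$; a tight spectral gap thus preserves the full decay rate.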
\subsection{Index Distance}\label{subsec:indexdist} Before we describe the notion of $\alpha$-molecules we start by defining the corresponding index distance $\omega_\alpha$ and show that our main result can indeed be applied to this index distance. Roughly speaking, $\alpha$-molecules can be associated with a scale, an orientation and a location. Therefore we first define a continuous parameter space $P$ as the product \begin{equation} P := \mathbb{R}_+ \times S^1 \times \mathbb{R}^2 . \end{equation} The parameters in $P$ will be denoted by $ p = (s,\theta,x) $, $ p' = (s',\theta',x') $, and so on. We also define, for each $ \alpha \in [0,1] $, the function \begin{equation} \label{eq:alpha-index_dist} \omega_\alpha := M(1+d_\alpha) , \end{equation} where \begin{align*} & M(p,p') := \max(s/s',s'/s) \\ & d_\alpha(p,p') := \min(s,s')^{2(1-\alpha)}|\theta-\theta'|^2 + \min(s,s')^{2\alpha}\|x-x'\|^2 + \min(s,s') |\langle x-x' , e_\theta \rangle| , \end{align*} $e_\theta$ being the ``co-direction'' $(\cos\theta,-\sin\theta)$. We shall often adopt the abbreviations $ \Delta\theta := \theta - \theta' $ and $ \Delta x := x - x' $. \new{ \begin{Rmk} This definition of $\omega_\alpha$ differs from the one presented in \cite{Grohs2013a}, where the last term of $d_\alpha(p,p')$ is replaced by $$ \frac{\min(s,s')^2 |\langle \Delta x , e_\theta \rangle|^2}{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2} . $$ However, an application of the inequality of arithmetic and geometric means yields \begin{align*} \ & 1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2 + \frac{\min(s,s')^2 |\langle \Delta x , e_\theta \rangle|^2}{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2} \\ = \ & \left(\sqrt{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2}\right)^2 + \left( \frac{\min(s,s') |\langle \Delta x , e_\theta \rangle|}{\sqrt{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2}} \right)^2 \\ \geq \ & 2 \sqrt{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2} \frac{\min(s,s') |\langle \Delta x , e_\theta \rangle|}{\sqrt{1 + \min(s,s')^{2(1-\alpha)}|\Delta\theta|^2}} \\ = \ & 2 \min(s,s') |\langle \Delta x , e_\theta \rangle| ; \end{align*} then, if we call $\tilde{\omega}_\alpha$ the index distance introduced in \cite{Grohs2013a}, we obtain $ \omega_\alpha \leq 2 \tilde{\omega}_\alpha $. Now, the key concept to preserve here is \emph{almost orthogonality} (see \cite{Grohs2013a} for details), and this inequality shows precisely that systems which are almost orthogonal with respect to $\tilde{\omega}_\alpha$ are still almost orthogonal with respect to $\omega_\alpha$. Also note that, for suitable choices of the parameters, $\omega_1$ corresponds to the wavelet index distance, whereas $\omega_{\frac{1}{2}}$ returns the curvelet-shearlet index distance studied in \cite{Grohs2013}. \end{Rmk} } The restriction of $\omega_\alpha$ to any discrete index set $ \Lambda \subset P $ describes an index distance, as we show in the next result. \begin{Prop} $\omega_\alpha$ is an index distance for all $ \alpha \in [0,1] $ and all discrete index sets $\Lambda \subset P$. The resulting constants obey $ C_S \leq 2 $ and $ C_T \leq 4 $. \end{Prop} \begin{proof} For ease of notation, we shall suppress the index $\alpha$ in $\omega_\alpha$. It is apparent that $\omega$ equals $1$ on the diagonal, which is property \ref{def:index_dist}(\ref{item:pseudo_sep}). We next prove properties \ref{def:index_dist}(\ref{item:pseudo_sym}) and (\ref{item:pseudo_tri}). \ref{def:index_dist}(\ref{item:pseudo_sym}).
Notice that the only non-symmetric term in $\omega(p,p')$ is $ \min(s,s') |\langle \Delta x , e_\theta \rangle| $, so it suffices to show that $|\langle \Delta x , e_\theta \rangle| \lesssim \min(s,s')^{-1} d(p',p) $. Since $$ |\langle \Delta x , e_\theta \rangle| \leq |\langle \Delta x , e_\theta \rangle| + |\langle \Delta x , e_{\theta'} \rangle| \leq |\langle \Delta x , e_\theta \rangle - \langle \Delta x , e_{\theta'} \rangle| + 2 |\langle \Delta x , e_{\theta'} \rangle| , $$ it remains to estimate $$ |\langle \Delta x , e_\theta \rangle - \langle \Delta x , e_{\theta'} \rangle| = |\langle \Delta x , e_\theta - e_{\theta'} \rangle| \leq \|e_\theta - e_{\theta'}\| \|\Delta x\| . $$ By prosthaphaeresis $$ e_\theta - e_{\theta'} = (\cos\theta - \cos\theta',\sin\theta - \sin\theta') = 2 \sin\left(\frac{\theta-\theta'}{2}\right) (-\sin\left(\frac{\theta+\theta'}{2}\right) , \cos\left(\frac{\theta+\theta'}{2}\right)) , $$ then $$ \|e_\theta - e_{\theta'}\| = 2 \left|\sin\left(\frac{\theta-\theta'}{2}\right)\right| \leq 2 \left|\frac{\theta-\theta'}{2}\right| = |\theta-\theta'| . $$ Therefore \begin{align*} |\langle \Delta x , e_\theta \rangle - \langle \Delta x , e_{\theta'} \rangle| &\leq |\Delta\theta| \|\Delta x\| \\ &= \min(s,s')^{1/2-\alpha} |\Delta\theta| \min(s,s')^{\alpha-1/2} \|\Delta x\| \\ &\leq \frac{1}{2} (\min(s,s')^{1-2\alpha} |\Delta\theta|^2 + \min(s,s')^{2\alpha-1} \|\Delta x\|^2) \\ &\leq \frac{1}{2} \min(s,s')^{-1} d(p,p') \end{align*} by the inequality of arithmetic and geometric means. Thus we get \ref{def:index_dist}(\ref{item:pseudo_sym}) with bound $ C_S \leq 2 $. \ref{def:index_dist}(\ref{item:pseudo_tri}). We shall write indices $01$ for the expressions evaluated in $(p,p')$, $02$ for $(p,p'')$ and $21$ for $(p'',p')$. The letter $s_{01}$ will be shorthand for $\min(s,s')$, and similarly for $s_{02}$ and $s_{21}$. We have to show that $ \omega_{01} \leq C \omega_{02}\omega_{21} $ for some constant $ C \geq 1 $. Write \begin{align*} \omega_{01} &= M_{01}(1 + d_{01}) \\ &= M_{01}(1 + s_{01}^{2-2\alpha}\Delta\theta^2 + s_{01}^{2\alpha}\Delta x^2 + s_{01} |\langle \Delta x , e_\theta \rangle| ) \\ &= \underbrace{M_{01}(s_{01}^{2-2\alpha}\Delta\theta^2 + s_{01}^{2\alpha}\Delta x^2)}_{=:A} + \underbrace{M_{01} (1 + s_{01}|\langle \Delta x , e_\theta \rangle|)}_{=:B} . \end{align*} We shall prove that \begin{align} &A \leq 2 M_{02}M_{21}(d_{02} + d_{21}) \leq 2 \omega_{02}\omega_{21} , \label{eq:A} \\ &B \leq 2 M_{02}M_{21}(1 + d_{02} + d_{21} + d_{02}d_{21}) = 2 \omega_{02}\omega_{21} , \label{eq:B} \end{align} so that $ \omega_{01} = A + B \leq 4 \omega_{02}\omega_{21} $, as we claim. In order to verify \ref{eq:A} and \ref{eq:B}, we first observe that $\omega$ is translation invariant, namely $$ \omega((s,\theta,x),(s',\theta',x')) = \omega ((s,\theta,x+t),(s',\theta',x'+t)) \quad \forall t \in \mathbb{R}^2 , $$ and therefore we can set $ x = 0 $. Moreover, we can work in coordinates $ e_\theta , e_\theta^\perp $, so that $ e_\theta = (1,0) $ and $ \theta = 0 $. Coordinates for $x'$ and $x''$ are called $(x_1,y_1)$ and $(x_2,y_2)$, respectively. With these choices we have: \begin{align*} &d_{01} = s_{01}^{2-2\alpha} |\theta'|^2 + s_{01}^{2\alpha} (|x_1|^2 + |y_1|^2) + s_{01}|x_1| , \\ &d_{02} = s_{02}^{2-2\alpha} |\theta''|^2 + s_{02}^{2\alpha} (|x_2|^2 + |y_2|^2) + s_{02}|x_2| , \\ &d_{21} = s_{21}^{2-2\alpha} |\theta''-\theta'|^2 + s_{21}^{2\alpha} (|x_2-x_1|^2 + |y_2-y_1|^2) + s_{21}|\cos\theta''(x_2-x_1) - \sin\theta''(y_2-y_1)| .
\end{align*} Let us start with \ref{eq:A}. We estimate \begin{align*} M_{01}s_{01}^{2-2\alpha}|\theta'|^2 &\leq 2 (M_{01}s_{01}^{2-2\alpha}|\theta''|^2 + M_{01}s_{01}^{2-2\alpha}|\theta''-\theta'|^2 ) \\ &\leq 2 (M_{02}M_{21}s_{02}^{2-2\alpha}|\theta''|^2 + M_{02}M_{21}s_{21}^{2-2\alpha}|\theta''-\theta'|^2) \end{align*} by Lemma \ref{lemma:trimin} (just exponentiate the appropriate inequality to use it in multiplicative form). Likewise we have \begin{align*} M_{01}s_{01}^{2\alpha}\|x'\|^2 &\leq 2 (M_{01}s_{01}^{2\alpha}\|x''\|^2 + M_{01}s_{01}^{2\alpha}\|x''-x'\|^2) \\ &\leq 2 (M_{02}M_{21}s_{02}^{2\alpha}\|x''\|^2 + M_{02}M_{21}s_{21}^{2\alpha}\|x''-x'\|^2) , \end{align*} again by Lemma \ref{lemma:trimin} (in multiplicative form). Thus we get \ref{eq:A}. Now we move on to \ref{eq:B}. One has $$ B = M_{01} + M_{01}s_{01}|x_1| \leq M_{02}M_{21} + M_{01}s_{01}|x_1| ; $$ then, if we show that \begin{equation*} \underbrace{M_{01}s_{01}|x_1|}_{=: L} \leq \underbrace{M_{02}M_{21}[1 + 2(d_{02} + d_{21} + d_{02}d_{21})]}_{=: R} , \end{equation*} we have \ref{eq:B}. First notice that \begin{align*} L &= \frac{\max(s,s')}{\min(s,s')} \min(s,s') |x_1| \\ &= \max(s,s')|x_1| . \end{align*} Our estimates for $R$ depend on how large the angle $\theta''$ is. We shall occasionally write $s_{012}$ for $\min(s,s',s'')$. \begin{description} \item[$ |\theta''| \geq \pi/4 $.] We have \begin{align*} R &\geq M_{02}M_{21}[1 + 2(s_{02}|x_2| + s_{02}^{2-2\alpha}|\theta''|^2 s_{21}^{2\alpha}|x_2-x_1|^2)] \\ &\geq M_{02}M_{21}[1 + 2(s_{02}|x_2| + \frac{\pi^2}{16}s_{02}^{2-2\alpha}s_{21}^{2\alpha}|x_2-x_1|^2)] \\ &\geq M_{02}M_{21}[1 + 2(s_{012}|x_2| + \frac{\pi^2}{16}s_{012}^2|x_2-x_1|^2)] =: R' . \end{align*} Dividing by $M_{02}M_{21}$, we have that $ R \geq L $ if $$ 1 + 2s_{012}|x_2| + \frac{\pi^2}{8}s_{012}^2|x_2-x_1|^2 \geq \frac{\max(s,s')}{M_{02}M_{21}}|x_1| . $$ But \begin{align*} \frac{\max(s,s')}{M_{02}M_{21}} &= \max(s,s') \ \frac{\min(s,s'')}{\max(s,s'')} \ \frac{\min(s'',s')}{\max(s'',s')} \\ &= \max(s,s') \ \frac{\max(\min(s,s''),\min(s'',s')) \ \min(s,s',s'')}{\min(\max(s,s''),\max(s'',s')) \ \max(s,s',s'')} \\ &\leq \max(s,s') \ \frac{\min(s,s',s'')}{\max(s,s',s'')} \quad \mbox{ by Lemma \ref{lemma:maxmin}} \\ &\leq \max(s,s',s'') \ \frac{\min(s,s',s'')}{\max(s,s',s'')} \\ &= \min(s,s',s'') = s_{012} , \end{align*} whence $ R \geq L $ provided that $$ 1 + 2a|x_2| + \frac{\pi^2}{8}a^2|x_2-x_1|^2 \geq a|x_1| $$ for every $ a > 0 $ and $ x_1,x_2 \in \mathbb{R} $. It is actually equivalent to show this for $ a = 1 $, since we can always replace $(x_1,x_2)$ with $(ax_1,ax_2)$. By the usual triangle inequality, we have $$ |x_1| \leq |x_2-x_1| + |x_2| . $$ Now, if $ |x_2-x_1| \geq 1 $, then $ |x_2-x_1| \leq |x_2-x_1|^2 $ and we are done. Otherwise $ |x_2-x_1| \leq 1 $, so that $ |x_1| \leq 1 + |x_2| $, and we are done as well. \item[$ |\theta''| \leq \pi/4 $.]
We have \begin{align*} R &\geq M_{02}M_{21} \{1 + 2[s_{02}|x_2| + s_{21}|\cos\theta''(x_2-x_1) - \sin\theta''(y_2-y_1)| + s_{02}^{2-2\alpha}|\theta''|^2 s_{21}^{2\alpha}|y_2-y_1|^2]\} \\ &\geq M_{02}M_{21} \{1 + 2[s_{012}|x_2| + s_{012}|\cos\theta''(x_2-x_1) - \sin\theta''(y_2-y_1)| + s_{012}^2|\theta''|^2|y_2-y_1|^2]\} \\ &\geq M_{02}M_{21} \{1 + 2[s_{012}|x_2| + s_{012}(\cos\theta''|x_2-x_1| - \sin|\theta''||y_2-y_1|) + s_{012}^2|\theta''|^2|y_2-y_1|^2]\} \\ &\geq M_{02}M_{21} \{1 + 2s_{012}|x_2| + \sqrt{2}s_{012}|x_2-x_1| - 2s_{012}\sin|\theta''||y_2-y_1| + 2s_{012}^2|\theta''|^2|y_2-y_1|^2\} \\ &= M_{02}M_{21}\underbrace{ ( 1 - 2s_{012}\sin|\theta''||y_2-y_1| + 2s_{012}^2|\theta''|^2|y_2-y_1|^2 ) }_{=: R_1} \\ &+ M_{02}M_{21} ( 2 s_{012}|x_2| + \sqrt{2}s_{012}|x_2-x_1| ) \\ &\geq M_{02}M_{21}R_1 + \underbrace{ M_{02}M_{21}s_{012}(|x_2| + |x_2-x_1|) }_{=: R_2} . \end{align*} In $R_1$ we can regard $s_{012}$ as any $ a > 0 $, and we can actually set $ a = 1 $ by replacing $(y_1,y_2)$ with $(ay_1,ay_2)$. Thus we get a polynomial in $|y_2-y_1|$ with discriminant $$ \Delta/4 = \sin^2|\theta''| - 2 |\theta''|^2 \leq 0 , $$ whence $ R_1 \geq 0 $. It follows that $ R \geq R_2 . $ On the other hand, we have \begin{align*} M_{02}M_{21}s_{012} &= \frac{\max(s,s'')}{\min(s,s'')} \ \frac{\max(s'',s')}{\min(s'',s')} \ \min(s,s',s'') \\ &= \frac{\max(s,s',s'') \ \min(\max(s,s''),\max(s'',s'))}{\min(s,s',s'') \ \max(\min(s,s''),\min(s'',s'))} \ \min(s,s',s'') \\ &= \max(s,s',s'') \ \frac{\min(\max(s,s''),\max(s'',s'))}{\max(\min(s,s''),\min(s'',s'))} \\ &\geq \max(s,s',s'') \quad \mbox{ by Lemma \ref{lemma:maxmin}} \\ &\geq \max(s,s') , \end{align*} whence \begin{align*} R_2 &\geq \max(s,s')(|x_2| + |x_2-x_1|) \\ &\geq \max(s,s')|x_1| = L . \end{align*} \end{description} The property \ref{def:index_dist}(\ref{item:pseudo_tri}) is finally proven, with constant bound $ C_T \leq 4 $. \end{proof} \new{ \begin{Rmk} Following \cite{Grohs2013}, one might think of writing $ \omega_\alpha(p,p') = \max(s,s')(1 + \min(s,s')\tilde{d}_\alpha(p,p')) $, with $$ \tilde{d}_\alpha(p,p') := \min(s,s')^{1-2\alpha} |\Delta\theta|^2 + \min(s,s')^{2\alpha-1} \|\Delta x\|^2 + |\langle \Delta x , e_\theta \rangle| , $$ and checking the assumptions made in \cite{Grohs2013}. It turns out that all the hypotheses are satisfied, except for the very important pseudo-triangle inequality $$ \tilde{d}_\alpha(p,p') \lesssim \tilde{d}_\alpha(p,p'') + \tilde{d}_\alpha(p'',p') , $$ which is true if and only if $ \alpha = \frac{1}{2} $. To see this, begin by fixing $ \alpha \in [0,\frac{1}{2}) $, so that $ 1-2\alpha \in (0,1] $. For any $ C \geq 1 $ pick $$ s = s' > C^{1/(1-2\alpha)} , \quad s''= 1 , \quad \theta = 0 , \quad \theta' = \theta'' \neq 0 , \quad x = x' = x'' = 0 , $$ whence we obtain $$ \tilde{d}_\alpha(p,p') = s^{1-2\alpha} \theta'^2 > C \theta'^2 = C \tilde{d}_\alpha(p,p'') = C (\tilde{d}_\alpha(p,p'') + \tilde{d}_\alpha(p'',p')) . $$ Similarly, if $ \alpha \in (\frac{1}{2},1] $, $ 2\alpha-1 \in (0,1] $, for any $ C \geq 1 $ we can set $$ s = s' > C^{1/(2\alpha-1)} , \quad s'' = 1 , \quad \theta = \theta' = \theta'' = 0 , \quad x = 0 , \quad x' = x'' = (0,y) \neq 0 , $$ whence $$ \tilde{d}_\alpha(p,p') = s^{2\alpha-1} y^2 > C y^2 = C \tilde{d}_\alpha(p,p'') = C (\tilde{d}_\alpha(p,p'') + \tilde{d}_\alpha(p'',p')) . $$ \end{Rmk} } \subsection{$\alpha$-Molecules}\label{subsec:alphamol} We now introduce the notion of $\alpha$-\emph{molecules} as in \cite{Grohs2013a}.
There, $\alpha$-molecules are defined as systems of functions $(m_{\lambda})_{\lambda\in \Lambda}$, where each $m_{\lambda}\in L^2(\mathbb{R}^2)$ has to satisfy some additional properties. In particular, each function $m_{\lambda}$ will be associated with a unique point in $P$, which is done via a \emph{parametrization} as defined below. \begin{Def}A \emph{parametrization} consists of a pair $(\Lambda,\Phi_\Lambda)$ where $\Lambda$ is an index set and $\Phi_\Lambda$ is a mapping % $$ \Phi_\Lambda:\left\{\begin{array}{ccc}\Lambda &\to & P\\ \lambda & \mapsto & \left(s_\lambda , \theta_\lambda , x_\lambda\right) \end{array}\right. $$ % which associates with each $\lambda\in \Lambda$ a \emph{scale} $s_\lambda$, a \emph{direction} $\theta_\lambda$ and a \emph{location} $x_\lambda$. \end{Def} \new{With slight abuse of notation, below we shall identify $\Lambda$ with the image $\Phi_\Lambda(\Lambda)$, and $\omega$ with the pull-back $\omega\circ\Phi_\Lambda$.} Let \begin{equation} D_s := \begin{pmatrix} s & 0 \\ 0 & s^{\alpha} \end{pmatrix} , \qquad R_\theta := \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \end{equation} denote respectively the anisotropic dilation matrix associated with $s>0$ and $\alpha\in[0,1]$ and the rotation matrix by an angle $\theta\in S^1$. Now we have collected all the necessary ingredients for defining $\alpha$-molecules. \begin{Def} Let $(\Lambda,\Phi_\Lambda)$ be a parametrization \new{and $R,M,N_1,N_2 >0$}. A family $(m_\lambda)_{\lambda \in \Lambda}$ \new{$\subset L^2(\mathbb{R}^2)$} is called a family of $\alpha$-\emph{molecules} with respect to $(\Lambda,\Phi_\Lambda)$ of order $(R,M,N_1,N_2)$, if it can be written as % $$ m_\lambda (x) = s_\lambda^{(1+\alpha)/2} a^{(\lambda)} \left(D_{s_\lambda}R_{\theta_\lambda}\left(x - x_\lambda\right)\right) $$ % such that % \begin{equation*} \left| \partial^\beta \hat a^{(\lambda)}(\xi)\right| \lesssim \min\left(1,s_\lambda^{-1} + |\xi_1| + s_\lambda^{-(1-\alpha)}|\xi_2|\right)^M \left\langle |\xi|\right\rangle^{-N_1} \langle \xi_2 \rangle^{-N_2} \quad \mbox{for all $|\beta|\le R$}. \end{equation*} % The implicit constants are uniform over $\lambda\in \Lambda$. \end{Def} It is instructive to look at some special cases. For instance the case $\alpha = 1$ corresponds to wavelet-type systems, whereas the case $\alpha = \frac{1}{2}$ corresponds to parabolic molecules \cite{Grohs2011}, which include curvelets \cite{Candes2004} and shearlets \cite{Labate2005}. The case $\alpha = 0$ corresponds to ridgelet-type systems \cite{Grohs2011a,CandesPhD}. The systems with $\alpha\in (0,\frac{1}{2})$ have been called 'hybrid' systems. Such systems, together with their approximation properties, have been studied recently in \cite{Keiper2013}. Systems of $\alpha$-molecules are useful for the decomposition and reconstruction of functions $f\in L^2(\mathbb{R}^2)$ in a numerically stable fashion. To this end it is required that a system $(m_\lambda)_{\lambda\in \Lambda}$ constitutes a \emph{frame} in the sense that there exist constants $0<a\le b <\infty$ such that \begin{equation}\label{eq:framedef} a^2\|f\|_{L^2(\mathbb{R}^2)}^2\le \sum_{\lambda\in \Lambda}|\langle f, m_\lambda \rangle_{L^2(\mathbb{R}^2)}|^2 \le b^2\|f\|_{L^2(\mathbb{R}^2)}^2\quad \mbox{for all }f\in L^2(\mathbb{R}^2).
\end{equation} If \ref{eq:framedef} holds true, there exists a canonical dual frame $(\tilde{m}_\lambda)_{\lambda\in \Lambda}$ satisfying $$ f = \sum_{\lambda\in \Lambda}\langle f, m_\lambda \rangle_{L^2(\mathbb{R}^2)}\tilde{m}_\lambda = \sum_{\lambda\in \Lambda}\langle f, \tilde{m}_\lambda \rangle_{L^2(\mathbb{R}^2)}m_\lambda \quad \mbox{for all }f\in L^2(\mathbb{R}^2), $$ see e.g. \cite{Christensen2003}. \new{Unless $ a = b $, in which case $ \tilde{m}_\lambda = a^{-2} m_\lambda $, the canonical dual frame is not in general explicitly known. Nevertheless,} for a number of applications it is important to study its structure, in particular its similarity or dissimilarity to the primal frame $(m_\lambda)_{\lambda\in \Lambda}$. Again we refer to \cite{Grohs2013} for more detailed information. Crucial in this respect is the \emph{localization} \new{property which we define next. \begin{Def} We say that a system $(m_\lambda)_{\lambda\in\Lambda}$ is $N$-\emph{localized} (with respect to the index distance \ref{eq:alpha-index_dist}) if such is its Gramian, that is $$ \left(\langle m_\lambda, m_{\lambda'} \rangle_{L^2(\mathbb{R}^2)}\right)_{\lambda,\lambda'\in \Lambda} \in \mathcal{B}_N .$$ \end{Def} Notice that $\omega_\alpha$ provides a measure of the off-diagonal decay of the Gramian. In the following we shall study conditions under which the dual of a frame of $\alpha$-molecules is localized, provided that the primal frame is. In order to apply the machinery of Section \ref{sec:framework} we first need to observe a couple of facts. \begin{Lemma} \label{lemma:gram} Given a frame $ (m_\lambda)_{\lambda\in\Lambda} \subset L^2(\mathbb{R}^2) $ with frame constants $a,b$, the associated Gramian possesses the spectral gap $$ \sigma_2\left(\left(\langle m_\lambda, m_{\lambda'} \rangle_{L^2(\mathbb{R}^2)}\right)_{\lambda,\lambda'\in \Lambda}\right)\subset \{0\}\cup[a,b]. $$ Furthermore, the Moore-Penrose pseudoinverse of the Gramian $(\langle m_\lambda, m_{\lambda'} \rangle_{L^2(\mathbb{R}^2)})_{\lambda,\lambda'\in \Lambda}$ is given by the dual Gramian $(\langle \tilde{m}_\lambda, \tilde{m}_{\lambda'} \rangle_{L^2(\mathbb{R}^2)})_{\lambda,\lambda'\in \Lambda}$. \end{Lemma} \begin{proof} \cite[Lemma 3.3]{Grohs2013}. \end{proof} In view of Lemma \ref{lemma:gram}}, if we can show that the index distance $\omega_\alpha$ restricted to a suitable discrete index set satisfies the assumptions of Section \ref{sec:framework}, we can directly appeal to Theorem \ref{thm_main} to deduce localization results for the dual frame. The verification of these latter properties is the subject of the remainder of this section. In particular we shall consider curvelet-type and shearlet-type sampling sets below. \subsubsection{Curvelet-type Parametrization}\label{subsubsec:curve} We start by considering curvelet-type parametrizations which arise by discretizing the scale parameter on a logarithmic scale and the directional parameter uniformly in polar angle (see \cite{Grohs2013a} for more details). We show the admissibility and the separatedness of the resulting parametrization, which allows us to directly appeal to Theorem \ref{thm_main}. \begin{Def} Let $\alpha\in[0,1]$ and $g>1$, $\tau>0$ be some fixed parameters.
Further, let $(\gamma_j)_{j\in\mathbb{N}}$ and \new{$(L_j)_{j\in\mathbb{N}}$} be sequences of positive real numbers with $\gamma_j\asymp g^{-j(1-\alpha)}$, i.e.\ there are constants $C,c>0$ independent of $j$ such that $c g^{-j(1-\alpha)}\le \gamma_j \le C g^{-j(1-\alpha)}$, and $ L_j \lesssim g^{j(1-\alpha)} $. An \emph{$\alpha$-curvelet parametrization} is given by an index set of the form $$ \Lambda^c_\alpha:=\left\{ (j,l,k) \in \mathbb{N} \times \mathbb{Z} \times \mathbb{Z}^2 : |l| \le L_j \right\} $$ and a mapping $$ \Phi^c (\lambda) := (s_\lambda,\theta_\lambda,x_\lambda) := (g^j,l\gamma_j,R_{\theta_\lambda}^{-1}D_{s_\lambda}^{-1} \tau k) . $$ \end{Def} The parameters $g>1$ and $\tau>0$ are sampling constants which determine the fineness of the sampling grid, $g$ for the scales and $\tau$ for the locations. Our next goal is to prove that the index distance $\omega_\alpha$ which arises from a curvelet-type parametrization separates the index set $\Lambda^c_\alpha$ and is admissible as defined in Subsection \ref{subsec:basicnotions}. We start by proving separatedness below. \begin{Prop} The index set $\Lambda^c_\alpha$ is separated by $\omega_\alpha$ with $$ C_{\Lambda_\alpha^c} = \min\{g,1+c^2,1+\tau^2,1+\tau\} , $$ where $ c = \inf\limits_{\lambda\in\Lambda^c_\alpha} \gamma_j g^{j(1-\alpha)} $. \end{Prop} \begin{proof} Let $ \lambda,\lambda' \in \Lambda $, $ \lambda \neq \lambda' $. If $ j \neq j' $, then $$ \omega(\lambda,\lambda') = g^{|j-j'|}(1 + d(\lambda,\lambda')) \geq g . $$ Thus we can suppose $ j = j' $, so that $ \omega(\lambda,\lambda') = 1 + d(\lambda,\lambda') $. Now, if $ l \neq l' $ we estimate $$ |\Delta\theta_\lambda|^2 = |l-l'|^2 \gamma_j^2 \geq c^2 |l-l'|^2 g^{-j(2-2\alpha)} \geq c^2 g^{-j(2-2\alpha)} , $$ whence $$ \omega(\lambda,\lambda') \geq 1 + c^2 g^{j(2-2\alpha)} g^{-j(2-2\alpha)} = 1 + c^2 . $$ Thus we can finally suppose $ j = j' $, $ l = l' $ and $ k \neq k' $. If $ k_2 \neq k'_2 $ we estimate \begin{align*} \|\Delta x_\lambda\|^2 &= \| R_{\theta_\lambda}^{-1} D_{s_\lambda}^{-1} \tau (k-k') \|^2 \\ &= \tau^2 \| D_{s_\lambda}^{-1} (k-k') \|^2 \\ &= \tau^2 [g^{-2j}(k_1-k'_1)^2 + g^{-2j\alpha}(k_2-k'_2)^2] \\ &\geq \tau^2 g^{-2j\alpha}(k_2-k'_2)^2 \\ &\geq \tau^2 g^{-2j\alpha} , \end{align*} whence $$ \omega(\lambda,\lambda') \geq 1 + \tau^2 g^{2j\alpha} g^{-2j\alpha} = 1 + \tau^2 . $$ Otherwise, if $ k_2 = k'_2 $ and $ k_1 \neq k'_1 $ we estimate $$ |\langle \Delta x_\lambda , e_{\theta_\lambda} \rangle| = | g^{-j}\tau(k_1-k'_1 ) | = g^{-j} \tau |k_1-k'_1| \geq g^{-j} \tau , $$ whence \begin{equation*} \omega(\lambda,\lambda') \geq 1 + g^{j} g^{-j} \tau = 1 + \tau . \qedhere \end{equation*} \end{proof} Next we examine the admissibility of $\omega_\alpha$ restricted to $\Lambda_\alpha^c$. This property has actually been verified in \cite{Grohs2013a} and we arrive at the following result. \begin{Prop} The index distance $\omega_\alpha$ on $\Lambda_\alpha^c$ is $2$-admissible for all $\alpha \in [0,1] $. \end{Prop} \begin{proof} \cite[Proposition 3.6]{Grohs2013a}. \end{proof} We can finally apply our machinery to obtain the following localization result for $\alpha$-curvelets. \begin{Thm} \label{thm:curveloc} Assume we have an $\alpha$-curvelet frame which is $(N\!+\!L)$-localized with respect to $\omega_\alpha$, with $N$ and $L$ satisfying \ref{eq:hp}. Then the dual frame is $N^+\!$-localized, with $N^+$ given by \ref{eq:nplus}. \end{Thm} \begin{proof} Just apply Theorem \ref{thm_main}\new{, taking into account Lemma \ref{lemma:gram}}.
\end{proof} \subsubsection{Shearlet-type Parametrization}\label{subsubsec:shear} This subsection studies the admissibility and separatedness of so-called shearlet-type parametrizations as introduced in \cite{Grohs2013a}. This parametrization discretizes the directional parameter uniformly in slope rather than in angle, which is advantageous for digital implementations. \new{This is done by means of the \emph{shear transformation} \begin{equation} S_t := \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} , \end{equation} which replaces the usual rotation $R_\theta$ with $ t = \tan\theta $.} \begin{Def} Let $ \alpha \in [0,1] $, $ g > 1 $ and $ \tau > 0 $. Further, let $(\eta_j)_{j\in\mathbb{Z}}$ and \new{$(L_j)_{j\in\mathbb{Z}}$} be sequences of positive real numbers with $ \eta_j \asymp g^{-j(1-\alpha)}$ and \new{$ L_j \lesssim g^{j(1-\alpha)} $}. An \emph{$\alpha$-shearlet parametrization} is given by an index set of the form $$ \Lambda^s_\alpha := \{ (j,l,k) \in \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}^2 : |l| \leq \new{L_j} \} $$ and a mapping $$ \Phi^s (\lambda) := (s_\lambda,\theta_\lambda,x_\lambda) := (g^j,\arctan(l\eta_j),S_{\tan\theta_\lambda}^{-1}D_{s_\lambda}^{-1} \tau k) . $$ \end{Def} \new{ \begin{Rmk} We could also work with a scaling function and thus consider only positive scales $j\in\mathbb{N}$, but this would require additional notation and introduce inessential complications in the parametrization. In any case, all the arguments go through with or without scaling functions. \end{Rmk} } Similarly to the $\alpha$-curvelet case, we first prove that $\omega_\alpha$ separates $\Lambda_\alpha^s$. \begin{Prop} The index set $\Lambda_\alpha^s$ is separated by $\omega_\alpha$, with $$ C_{\Lambda_\alpha^s} = \min\{g,1+c^2(1+C^2)^{-2},1+\tau^2,1+\tau(1+C^2)^{-1/2}\} , $$ where $ c = \inf\limits_{\lambda\in\Lambda^s_\alpha} \eta_j g^{j(1-\alpha)} $ and $ C = \new{\sup\limits_{j\in\mathbb{Z}} L_j \eta_j} $. \end{Prop} \begin{proof} Let $ \lambda,\lambda' \in \Lambda $, $ \lambda \neq \lambda' $. If $ j \neq j' $, then $$ \omega(\lambda,\lambda') = g^{|j-j'|}(1 + d(\lambda,\lambda')) \geq g . $$ Thus we can suppose $ j = j' $, so that $ \omega(\lambda,\lambda') = 1 + d(\lambda,\lambda') $. If $ l \neq l' $ we estimate $ |\Delta\theta_\lambda|^2 $. By the mean value theorem we have $$ |\Delta\theta_\lambda|^2 = |\arctan(l\eta_j) - \arctan(l'\eta_j)|^2 = \eta_j^2 |l-l'|^2 \left( \frac{1}{1+\xi^2} \right)^2 $$ for some $\xi$, $ |\xi| \leq \max(|l|,|l'|)\eta_j \new{\leq L_j\eta_j} \leq C $, so that $$ |\Delta\theta_\lambda|^2 \geq c^2 g^{-j(2-2\alpha)} |l-l'|^2 (1+C^2)^{-2} \geq c^2 g^{-j(2-2\alpha)} (1+C^2)^{-2} , $$ whence $$ \omega(\lambda,\lambda') \geq 1 + c^2 g^{j(2-2\alpha)} g^{-j(2-2\alpha)} (1+C^2)^{-2} = 1 + c^2(1+C^2)^{-2} . $$ Thus we can finally suppose $ j = j' $, $ l = l' $ and $ k \neq k' $. If $ k_2 \neq k'_2 $ we estimate \begin{align*} \|\Delta x_\lambda\|^2 &= \| S_{l\eta_j}^{-1}D_{g^j}^{-1} \tau (k-k') \|^2 \\ &= \tau^2 \{[g^{-j}(k_1-k'_1) - l\eta_j g^{-j\alpha}(k_2-k'_2)]^2 + g^{-2j\alpha}(k_2-k'_2)^2\} \\ &\geq \tau^2 g^{-2j\alpha}(k_2-k'_2)^2 \\ &\geq \tau^2 g^{-2j\alpha} , \end{align*} whence $$ \omega(\lambda,\lambda') \geq 1 + \tau^2 g^{2j\alpha} g^{-2j\alpha} = 1 + \tau^2 .
$$ Otherwise, if $ k_2 = k'_2 $ and $ k_1 \neq k'_1 $ we estimate \begin{align*} |\langle \Delta x_\lambda , e_{\theta_\lambda} \rangle| &= | g^{-j}\tau(k_1-k'_1 ) \cos\theta_\lambda | \\ &= g^{-j}\tau|k_1-k'_1| \cos\theta_\lambda \\ &= g^{-j}\tau|k_1-k'_1| \cos\arctan(l\eta_j) \\ &= g^{-j}\tau|k_1-k'_1| \frac{1}{\sqrt{1 + l^2 \eta_j^2}} \\ &\geq g^{-j}\tau (1+C^2)^{-1/2}, \end{align*} whence \begin{equation*} \omega(\lambda,\lambda') \geq 1 + g^{j} g^{-j}\tau (1+C^2)^{-1/2} = 1 + \tau (1+C^2)^{-1/2} . \qedhere \end{equation*} \end{proof} Finally, it only remains to address the matter of admissibility. As before, we refer to \cite{Grohs2013a}. \begin{Prop} The index distance $\omega_\alpha$ on $\Lambda_\alpha^s$ is $2$-admissible for all $\alpha \in [0,1] $. \end{Prop} \begin{proof} \cite[Proposition 3.6]{Grohs2013a}. \end{proof} We conclude by stating the corresponding localization result for $\alpha$-shearlet frames. \begin{Thm} \label{thm:shearloc} Assume we have an $\alpha$-shearlet frame which is $(N\!+\!L)$-localized with respect to $\omega_\alpha$, with $N$ and $L$ satisfying \ref{eq:hp}. Then the dual frame is $N^+\!$-localized, with $N^+$ given by \ref{eq:nplus}. \end{Thm} \begin{proof} Just apply Theorem \ref{thm_main}\new{, taking into account Lemma \ref{lemma:gram}}. \end{proof}
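As a final sanity check, the index distance \ref{eq:alpha-index_dist} and the constants $C_S\le 2$, $C_T\le 4$ of Subsection \ref{subsec:indexdist} are easy to probe numerically. The following sketch (in Python; the sampling ranges and the seed are arbitrary, and such a test is of course no substitute for the proofs above) evaluates $\omega_\alpha$ at random points of $P$ and records the largest observed pseudo-symmetry and pseudo-triangle ratios:
\begin{verbatim}
import numpy as np

def omega_alpha(p, q, alpha):
    # index distance omega_alpha = M * (1 + d_alpha) on P,
    # with p = (s, theta, x), s > 0, theta in S^1, x in R^2
    (s, th, x), (sp, thp, xp) = p, q
    M = max(s / sp, sp / s)
    m = min(s, sp)
    e_th = np.array([np.cos(th), -np.sin(th)])   # the "co-direction"
    dx = np.asarray(x) - np.asarray(xp)
    d = (m**(2 * (1 - alpha)) * (th - thp)**2
         + m**(2 * alpha) * dx.dot(dx)
         + m * abs(dx.dot(e_th)))
    return M * (1.0 + d)

rng = np.random.default_rng(0)
def rand_p():
    return (rng.uniform(0.5, 8.0), rng.uniform(-np.pi, np.pi),
            rng.uniform(-2.0, 2.0, size=2))

r_sym, r_tri = 0.0, 0.0
for _ in range(10000):
    p, q, r = rand_p(), rand_p(), rand_p()
    r_sym = max(r_sym, omega_alpha(p, q, 0.5) / omega_alpha(q, p, 0.5))
    r_tri = max(r_tri, omega_alpha(p, q, 0.5)
                / (omega_alpha(p, r, 0.5) * omega_alpha(r, q, 0.5)))
print(r_sym, r_tri)   # empirically below C_S = 2 and C_T = 4
\end{verbatim}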
\section{Introduction} The study of correlation functions of integrable quantum spin chains has a very long history \cite{BOOK-KBI,BOOK-JM}. There exist many results for the case of the $SU(2)$ spin-$1/2$ chain \cite{TAKA,JMMN92,JM96,KMT00,GAS05,BOKO01,BOOS05,BOOS2,BGKS06,DGHK07,KKMST09,AuKl12} and its higher-spin realizations \cite{BoWe94,Idzumi94,Kitanine01,DeMa10,GSS10,KNS013,RK2016}. The correlation functions in the $SU(2)$ spin-$1/2$ case were found to be given in terms of values of Riemann's zeta function at odd arguments \cite{BOKO01}. Later, higher-spin cases were studied and explicit results were obtained in which zeta function values at even arguments also appeared \cite{KNS013,RK2016}. These results were obtained from solutions of functional equations for suitably defined correlation functions. The derivation of the functional equations is based on the Yang-Baxter equation and on crossing symmetry (valid in the $SU(2)$ case) of the $R$-matrix; see \cite{AuKl12} for the finite temperature case. Nevertheless, one still lacks a better understanding of the correlation properties of models based on higher-rank algebras. In the $SU(n)$ case for $n>2$ \cite{UIMIN,SUTHERLAND}, there are no results for correlation functions, and this has remained a longstanding problem for decades. In this paper, we devise a framework to tackle the problem of computing the short-range correlations of the integrable $SU(n)$ spin chains. We also provide explicit solutions for the first correlation functions for the $SU(3)$ case, where already for the two-site case the solution is given in terms of Hurwitz' zeta function (generalized zeta function). Besides that, we find indications that the correlations do not factorize in terms of two-point correlations. This paper is organized as follows. In section \ref{INTEGRA}, we introduce the integrable Hamiltonians and their associated integrable structure. In section \ref{density}, we introduce the density operator containing all correlation data as well as a generalized density operator. In contrast to the standard density operator, the generalized density operator allows for the derivation of discrete functional equations. This and the analyticity properties of the generalized density operator are presented in section \ref{functional}. In section \ref{su3section}, we exemplify our approach for the case of $SU(3)$ spin chains and we present the zero temperature solution for two- and three-site correlation functions for which the use of a mixed density operator proves to be sufficient. In section \ref{lack}, we present some evidence for the absence of factorization of the correlations in terms of two-point correlations. Finally, our conclusions are given in section \ref{conclusion}. Additional details are provided in the appendices. \section{The integrable model}\label{INTEGRA} The Hamiltonian of the integrable $SU(n)$ spin chain is given by \cite{UIMIN,SUTHERLAND}, \eq H^{(n)}=\sum_{j=1}^L P_{j,j+1}, \label{hamiltonian} \en where $P_{j,j+1}$ is the permutation operator and $L$ is the number of sites. The Hilbert space is $V^{\otimes L}$ with local space $V=\mathbb{C}^n$. For instance, in the case of $SU(3)$ spin chains the Hamiltonian can be written in terms of spin-$1$ matrices as follows, \eq H^{(3)}=\sum_{j=1}^L [ \vec{S}_{j}\cdot\vec{S}_{j+1} + (\vec{S}_{j}\cdot\vec{S}_{j+1})^2 ].
The integrable Hamiltonian (\ref{hamiltonian}) is obtained as the logarithmic derivative of the row-to-row transfer matrix \eq T^{(n)}(\lambda)=\tr_{\cal A}{[ R^{(n,n)}_{{\cal A}L}(\lambda)\dots R^{(n,n)}_{{\cal A} 1}(\lambda)]}, \en where the $R$-matrix $R^{(n,n)}_{ab}(\lambda)=P_{ab}\check{R}^{(n,n)}_{ab}(\lambda)$ acts non-trivially on the indicated space $V_a\otimes V_b$ of the (long) tensor product, where $V_a$ and $V_b$ are copies of the local space $V$. The operation of $R^{(n,n)}_{ab}$ is covariant under $SU(n)$ acting by the product of two fundamental representations $[n]$. The representation $[n]$ is the irreducible representation of dimension $n$ denoted by a single box in the Young-Tableaux notation. Later in applications we will associate spectral parameters $\lambda$, $\mu$ with the two local vector spaces the $R$-matrix acts on, and the difference $\lambda-\mu$ will enter as the argument. The rational solution of the Yang-Baxter equation can be written and depicted as, \eq \begin{tikzpicture}[scale=1] \draw (-3.75,0) node {$\check{R}_{12}^{(n,n)}(\lambda-\mu)= I_{12} + (\lambda-\mu ) P_{12}=$}; \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .35 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .35 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (-0.5,0) [color=black, directed, thick, rounded corners=7pt] +(0,0) -- +(1,0); \draw (-0.5,0) [color=black, directed, thick, rounded corners=7pt] +(0.5,-0.5) -- +(0.5,0.5); \draw (0.85,0) node {$=$}; \draw (-0.25,0.25) node {$\lambda$}; \draw (0.2,-0.35) node {$\mu$}; \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (1.5,0) [-,color=black, thick, rounded corners=8pt] +(0,0) -- +(0.5,0) -- +(0.5,0.5); \draw (1.5,0) [-,color=black, thick, rounded corners=8pt] +(0.5,-0.5) -- +(0.5,0) -- +(1,0); \draw (3.5,0) node {$+(\lambda-\mu)$}; \draw (4.5,0) [color=black, thick, rounded corners=7pt] +(0,0) -- +(1,0) +(0.5,0.05) -- +(0.5,0.5) +(0.5,-0.5) -- +(0.5,-0.05); \draw (5.7,0) node {$,$}; \end{tikzpicture} \label{Rmatrix} \en where $P_{12}$ is the standard permutation operator such that $P_{12}=\sum_{i,j,k,l=1}^n P_{ik}^{jl} \hat{e}_{ij}^{(1)}\otimes \hat{e}_{kl}^{(2)}$ with $P_{ik}^{jl}=\delta_{il}\delta_{jk}$ and where $\hat{e}_{ij}^{(a)}\in C_a^n$ are the standard $n\times n$ Weyl matrices acting in the $a$-space. Likewise, the matrix elements of the identity matrix are given as $I_{ik}^{jl}=\delta_{ij}\delta_{kl}$. Let us motivate the graphical depiction of algebraic quantities. In the main body of this paper we are going to study correlation functions which occur as ratios of certain (large) sums. The denominator will be the partition function of a certain classical vertex model on a square lattice, or a minor modification thereof, and the numerator will be a similar partition function of a slightly modified geometry with a few bonds cut and specifically chosen spin values at the open ends. The general rule for turning a graph into a number is the one familiar from Feynman diagrams: we place spin variables on closed bonds, evaluate all local objects for the given spin configuration, multiply these results, and finally sum over all allowed spin configurations.
In particular, a trace over a product of (transfer) matrices naturally turns into a (huge) sum over products of local objects. Very generally, graphs encode contractions of products of tensors. Note that $R^{(n,n)}$ acts on $[n]\otimes [n]$, understood as $SU(n)$ module, which is graphically indicated by arrows from left to right and from bottom to top. By use of the isomorphism of $\mbox{End}(W)$ and $W^*\otimes W$ for any linear space $W$ and its dual $W^*$ we may alternatively view $R^{(n,n)}$ as a vector in the tensor space $[\bar n]\otimes [\bar n]\otimes [n]\otimes [n]$, i.e. a multilinear map of the type $[n]\times [n] \times [\bar n]\times [\bar n]\to \mathbb{C}$. In order to further illustrate, we show how to read the matrix elements of the $R$-matrix (\ref{Rmatrix}) and the other operators in the graphical notation as follows, \eq \begin{tikzpicture}[scale=1] \draw (-3.75,0) node {${[\check{R}^{(n,n)}(\lambda-\mu)]}_{ik}^{jl}= I_{ik}^{jl} + (\lambda-\mu ) P_{ik}^{jl}=$}; \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .35 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .35 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (-0.15,0) [color=black, directed, thick, rounded corners=7pt] +(0,0) -- +(1,0); \draw (-0.15,0) [color=black, directed, thick, rounded corners=7pt] +(0.5,-0.5) -- +(0.5,0.5); \draw (-0.25,0) node {$i$}; \draw (0.35,0.8) node {$j$}; \draw (0.35,-0.8) node {$k$}; \draw (1.,0) node {$l$}; \draw (1.25,0) node {$=$}; \draw (0.1,0.25) node {$\lambda$}; \draw (0.6,-0.35) node {$\mu$}; \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (1.68,0) node {$i$}; \draw (2.2,0.8) node {$j$}; \draw (2.2,-0.8) node {$k$}; \draw (2.9,0) node {$l$}; \draw (1.75,0) [-,color=black, thick, rounded corners=8pt] +(0,0) -- +(0.5,0) -- +(0.5,0.5); \draw (1.75,0) [-,color=black, thick, rounded corners=8pt] +(0.5,-0.5) -- +(0.5,0) -- +(1,0); \draw (3.9,0) node {$+(\lambda-\mu)$}; \draw (4.85,0) node {$i$}; \draw (5.5,0.8) node {$j$}; \draw (5.5,-0.8) node {$k$}; \draw (6.1,0) node {$l$}; \draw (5,0) [color=black, thick, rounded corners=7pt] +(0,0) -- +(1,0) +(0.5,0.05) -- +(0.5,0.5) +(0.5,-0.5) -- +(0.5,-0.05); \draw (6.3,0) node {$.$}; \end{tikzpicture} \en The $R$-matrix with mixed representations of the fundamental $[n]$ and anti-fundamental $[\bar n]$ representation of the $SU(n)$ can be written as follows, \eq \begin{tikzpicture}[scale=1] \draw (-1.75,0) node {$\check{R}_{12}^{(n,\bar{n})}(\lambda-\mu)= E_{12} + (\lambda-\mu ) P_{12}=$}; \draw (1.5,0) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(0.5,0) -- +(0.5,-0.5); \draw (1.5,0) [-,color=black, thick, rounded corners=7pt] +(0.5,0.5) -- +(0.5,0) -- +(1,0); \draw (3.5,0) node {$+(\lambda-\mu)$}; \draw (4.5,0) [color=black, thick, rounded corners=7pt] +(0,0) -- +(1,0) +(0.5,0.05) -- +(0.5,0.5) +(0.5,-0.5) -- +(0.5,-0.05); \draw (5.7,0) node {$.$}; \end{tikzpicture} \en where $E_{12}$ is the standard Temperley-Lieb operator such that $E_{ik}^{jl}=\delta_{ik}\delta_{jl}$ and the anti-fundamental representation is the other $n$ dimensional irreducible representation denoted as a column of $n-1$ boxes in the Young-Tableaux notation. Note the reversed direction of the arrow on the vertical line. 
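The operators entering the two $R$-matrices are straightforward to realize numerically. The following Python/NumPy sketch (illustrative only; the tensors carry the index order $[i,k,j,l]$ of the pictures above) implements $I$, $P$, $E$ and the $R$-matrices as arrays of matrix elements:
\begin{verbatim}
import numpy as np

def operators(n):
    """I, P, E as tensors with index order [i, k, j, l]."""
    I = np.einsum('ij,kl->ikjl', np.eye(n), np.eye(n))  # delta_ij delta_kl
    P = np.einsum('il,jk->ikjl', np.eye(n), np.eye(n))  # delta_il delta_jk
    E = np.einsum('ik,jl->ikjl', np.eye(n), np.eye(n))  # delta_ik delta_jl
    return I, P, E

def R_fund(lam, n=3):
    """check-R^(n,n)(lam) = I + lam*P."""
    I, P, _ = operators(n)
    return I + lam * P

def R_mixed(lam, n=3):
    """check-R^(n,nbar)(lam) = E + lam*P."""
    _, P, E = operators(n)
    return E + lam * P

print(R_fund(0.7)[0, 1, 1, 0])   # i=1, k=2, j=2, l=1 carries the weight lam=0.7
\end{verbatim}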
For rational models, the remaining combinations can be expressed in terms of the previous ones such that $\check{R}^{(\bar{n},n)}(\lambda)=\check{R}^{(n,\bar{n})}(\lambda)$ and $\check{R}^{(\bar{n},\bar{n})}(\lambda)=\check{R}^{(n,n)}(\lambda)$ as linear operators on $V\otimes V$. Note that covariance with respect to $SU(n)$ is guaranteed, i.e. $g\otimes g^*$ and $g^*\otimes g$ for any $g\in SU(n)$ commute with $\check{R}^{(\bar{n},n)}$ and of course with $\check{R}^{(n,\bar{n})}(\lambda)$. Up to a caveat explained below, these four $R$-matrices are solutions of the Yang-Baxter equation \eq \check{R}^{(r_1,r_2)}_{12}(\lambda-\mu) \check{R}^{(r_1,r_3)}_{23}(\lambda-\nu) \check{R}^{(r_2,r_3)}_{12}(\mu-\nu) =\check{R}^{(r_2,r_3)}_{23}(\mu-\nu) \check{R}^{(r_1,r_3)}_{12}(\lambda-\nu) \check{R}^{(r_1,r_2)}_{23}(\lambda-\mu), \label{YB} \en where $r_i \in \{n,\bar{n}\}$ for $i=1,2,3$. In order for (\ref{YB}) to hold literally for all combinations of $r_1$, $r_2$, $r_3$ we would have to introduce a shift by $n$ in the argument of, for instance, $\check{R}^{(n,\bar{n})}$ wherever it appears. However, we keep the definitions of the $R$-matrices as given above and have (\ref{YB}) for all $r_1$, $r_2$, $r_3$ except for $n, \bar{n}, n$ and $\bar{n}, n, \bar{n}$. In order to simplify our notation it is convenient to list these equations in a group of six standard equations as above (\ref{YB}) and a group of two special ones which have shifted arguments in the intertwining matrix (see Figure \ref{YBeqs}). We explicitly write one of the special Yang-Baxter equations, \eq \check{R}^{(n,\bar{n})}_{12}(\lambda-\mu+n) \check{R}^{(n,n)}_{23}(\lambda-\nu) \check{R}^{(\bar{n},n)}_{12}(\mu-\nu) =\check{R}^{(\bar{n},n)}_{23}(\mu-\nu) \check{R}^{(n,n)}_{12}(\lambda-\nu) \check{R}^{(n,\bar{n})}_{23}(\lambda-\mu+n), \label{specialYB} \en and the other one is obtained by exchanging the representations $[n]$ and $[\bar{n}]$. It is worth noting that in our graphical notation, e.g.\ in Figure \ref{YBeqs}, the lines upwards and to the right are associated to the fundamental representation $[n]$ and conversely the lines downwards and to the left are associated to the anti-fundamental representation $[\bar n]$.
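For the purely fundamental choice $r_1=r_2=r_3=n$, equation (\ref{YB}) is easily verified numerically for generic spectral parameters; a small Python/NumPy sketch (illustrative only):
\begin{verbatim}
import numpy as np

n = 3
P = np.eye(n*n).reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n*n, n*n)
Rc = lambda u: np.eye(n*n) + u * P            # check-R^(n,n)(u) = I + u*P

R12 = lambda u: np.kron(Rc(u), np.eye(n))     # acts on spaces 1,2 of (C^n)^3
R23 = lambda u: np.kron(np.eye(n), Rc(u))     # acts on spaces 2,3

lam, mu, nu = 0.3, -1.1, 0.7
lhs = R12(lam - mu) @ R23(lam - nu) @ R12(mu - nu)
rhs = R23(mu - nu) @ R12(lam - nu) @ R23(lam - mu)
assert np.allclose(lhs, rhs)                  # eq. (YB) with r_1=r_2=r_3=n
\end{verbatim}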
\begin{figure}[h] \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.25] \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .5 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,-0.25) -- +(1.25,1.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.25,-0.4) -- +(-0.25,1.35); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,1) -- +(1.25,1); \draw (1.5 ,0.5) node {$=$}; \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,-0.25) -- +(1.25,1.25); \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(1.,-0.4) -- +(1.,1.35); \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,0) -- +(1.25,0); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (0.5,0.25) node {$\mu$}; \draw (0.5,1.25) node {$\lambda$}; \draw (-0.5 ,0.5) node {$\nu$}; \draw (2.25,0.75) node {$\mu$}; \draw (2.5,-0.25) node {$\lambda$}; \draw (3.25,0.5) node {$\nu$}; \draw (1.5,-0.75) node {a)}; \draw (3.75,0.25) node {,}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.25] \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .5 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(1.25,1.25) -- +(-0.5,-0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.25,-0.4) -- +(-0.25,1.35); \draw (1.5 ,0.5) node {$=$}; \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(1.25,1.25) -- +(-0.5,-0.25); \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(1.,-0.4) -- +(1.,1.35); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .5 with {\arrow[arrowstyle]{>|}}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,1) -- +(1.25,1); \draw (2,0) [-,color=black, thick,directed, rounded corners=8pt]+(-0.5,0) -- +(1.25,0); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (0.5,0.25) node {$\mu$}; \draw (0.1,1.35) node {$\lambda$}; \draw (0.85,1.35) node {$\lambda+n$}; \draw (-0.5 ,0.5) node {$\nu$}; \draw (2.25,0.75) node {$\mu$}; \draw (2.75,-0.35) node {$\lambda$}; \draw (1.95,-0.35) node {$\lambda+n$}; \draw (3.25,0.5) node {$\nu$}; \draw (1.5,-0.75) node {b)}; \end{tikzpicture} \end{minipage} \end{center} \caption{Graphical illustration of the Yang-Baxter equation (where vertices from lower left to upper right correspond to $R$-matrices in (\ref{YB}) and (\ref{specialYB}) from right to left): a) the standard Yang-Baxter equation (\ref{YB}) for the fundamental representation $r_1=r_2=r_3=[n]$ (the $5$ remaining standard equations are obtained from a) by rotation); b) the special Yang-Baxter equation (\ref{specialYB}), where the shift in the argument of the $R$-matrix can be conveniently seen as a discontinuity of the spectral parameter along that line.} \label{YBeqs} \end{figure} The fundamental $R$-matrix has important properties, \begin{align} \check{R}^{(n,n)}_{12}(0)&= I, \qquad &\mbox{initial condition}, \label{regularity} \\ \check{R}^{(n,n)}_{12}(\lambda-\mu)\check{R}_{21}^{(n,n)}(\mu-\lambda) &= (1-(\lambda-\mu)^2)I, \qquad & \mbox{standard unitarity}, \label{unitarity} \end{align} where again $I$ is the $n^2\times n^2$ identity matrix. 
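Both properties follow directly from $\check{R}^{(n,n)}(\lambda)=I+\lambda P$ and $P^2=I$; note also that $P\check{R}^{(n,n)}(\lambda)P=\check{R}^{(n,n)}(\lambda)$, so $\check{R}^{(n,n)}_{21}=\check{R}^{(n,n)}_{12}$ and (\ref{unitarity}) reduces to a one-line check. Numerically (an illustrative Python/NumPy sketch):
\begin{verbatim}
import numpy as np

n, u = 3, 0.37
P = np.eye(n*n).reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n*n, n*n)
Rc = lambda x: np.eye(n*n) + x * P

assert np.allclose(Rc(0.0), np.eye(n*n))                      # initial condition
assert np.allclose(Rc(u) @ Rc(-u), (1 - u**2) * np.eye(n*n))  # unitarity
\end{verbatim}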
These relations hold literally also for $\check{R}^{(\bar n,\bar n)}$. However, in contrast to the $SU(2)$ case, the $SU(n)$ case for $n>2$ does not have crossing symmetry, which makes this model special in the realm of integrable models. This is because for $n>2$ the conjugate of the representation $[n]$, namely $[\bar{n}]$, is inequivalent to $[n]$. In order to circumvent the difficulties which arise from the fact that the model lacks crossing symmetry, one has to add a few more ingredients to formulate a consistent framework for the computation of correlation functions. The crucial observation is that one has to work with the fundamental representation $[n]$ and the anti-fundamental representation $[\bar{n}]$ of $SU(n)$ largely on the same footing. This is possible since, as presented above, the Yang-Baxter equation accommodates different representations in each vector space. Besides that, the above $R$-matrices with mixed representations also have symmetry properties, which we call special unitarity (see Figure \ref{Unitarities}b), \bear \check{R}^{(n,\bar{n})}_{12}(\lambda-\mu+n)\check{R}_{21}^{(\bar{n},n)}(\mu-\lambda) &=& (\mu-\lambda)(\lambda-\mu+n) I, \label{s-unitarity1}\\ \check{R}^{(\bar{n},n)}_{12}(\lambda-\mu)\check{R}_{21}^{(n,\bar{n})}(\mu-\lambda+n) &=& (\lambda-\mu)(\mu-\lambda+n)I. \label{s-unitarity2} \ear \begin{figure}[h] \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.25] \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .45 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0.5) [-,color=black, thick, directed, rounded corners=8pt]+(-0.35,-0.75) -- +(0.35,0)--+(-0.35,0.75); \draw (0,0.5) [-,color=black, thick,directed, rounded corners=8pt]+(0.35,-0.75) -- +(-0.35,0)--+(0.35,0.75); \draw (1.7,0.5) node {$=(1-(\lambda-\mu)^2)$}; \draw (3.15,0.5) [-,color=black, thick, directed, rounded corners=8pt]+(-0.35,-0.75) -- +(0.15,0)--+(-0.35,0.75); \draw (3.5,0.5) [-,color=black, thick,directed, rounded corners=8pt]+(0.35,-0.75) -- +(-0.15,0)--+(0.35,0.75); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (-0.4,0.25) node {$\mu$}; \draw (0.4,0.25) node {$\lambda$}; \draw (1.5,-0.85) node {a)}; \draw (3.95,0.25) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.25] \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .5 with {\arrow[arrowstyle]{>|}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .49 with {\arrowreversed[arrowstyle]{>};}}}] \draw (0,0.5) [-,color=black, thick, directed, rounded corners=8pt]+(-0.35,-0.75) -- +(0.35,0)--+(-0.35,0.75); \draw (0,0.5) [-,color=black, thick, reverse directed, rounded corners=8pt]+(0.35,-0.75) -- +(-0.35,0)--+(0.35,0.75); \draw (2,0.5) node {$=(\mu-\lambda)(\lambda-\mu+n)$}; \draw (3.65,0.5) [-,color=black, thick, directed, rounded corners=8pt]+(-0.35,-0.75) -- +(0.15,0)--+(-0.35,0.75); \draw (4,0.5) [-,color=black, thick, reverse directed, rounded corners=8pt]+(0.35,-0.75) -- +(-0.15,0)--+(0.35,0.75); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (-0.35,0.25) node {$\mu$}; \draw (0.6,0.15) node {$\lambda+n$}; \draw (0.3,0.8) node {$\lambda$}; \draw
(1.5,-0.85) node {b)}; \end{tikzpicture} \end{minipage} \end{center} \caption{Graphical illustration of the unitarity relations (two more are obtained by 180$^{\circ}$ rotations): a) the standard unitarity (\ref{unitarity}); b) the special unitarity (\ref{s-unitarity1}). Again we consider the spectral parameter to be discontinuous in order to describe the shift in the $R$-matrix, i.e. the spectral parameter value is $\lambda+n$ in the bottom part and $\lambda$ in the top part of the graph.} \label{Unitarities} \end{figure} Finally, in order to exploit the full $SU(n)$ symmetry we introduce relations of suitable products of $n$ many $R$-matrices with the completely antisymmetric state in $V^{\otimes n}$, i.e. the totally antisymmetric tensor $\epsilon$. For instance, in the $SU(3)$ case these relations read, \eq \begin{tikzpicture}[scale=1.7] \draw (0.3,0.2) node {$\lambda$}; \draw (0.3,-0.3) node {$\lambda+1$}; \draw (0.3,-0.8) node {$\lambda+2$}; \draw (0,0) [<->,color=black, thick, rounded corners=7pt] +(0,0) -- +(1.0,0) -- +(1.0,-1.) -- +(0.0,-1.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.0,-0.5) -- +(0,-0.5); \draw (0.5,-1.3) node {$\mu$}; \draw (0.1,-0.5) [->,color=black, thick, rounded corners=7pt] +(0.5,-1) -- +(0.5,1); \draw (2.65,-0.5) node {$=(\lambda+2-\mu)(1-(\lambda-\mu)^2)$}; \draw (1.,-0.5)[fill=black] circle (0.15ex); \draw (4.5,0.2) node {$\lambda$}; \draw (4.5,-0.3) node {$\lambda+1$}; \draw (4.5,-0.8) node {$\lambda+2$}; \draw (4.2,0) [<->,color=black, thick, rounded corners=7pt] +(0,0) -- +(1.0,0) -- +(1.0,-1.) -- +(0.0,-1.0); \draw (4.2,0) [->,color=black, thick, rounded corners=7pt] +(1.0,-0.5) -- +(0,-0.5); \draw (5.3,-0.5) [->,color=black, thick, rounded corners=7pt] +(0.5,-1) -- +(0.5,1); \draw (5.4,-1.3) node {$\mu$}; \draw (5.2,-0.5)[fill=black] circle (0.15ex); \draw (6,-0.5) node {$,$}; \end{tikzpicture} \label{sym-property1} \en and \eq \begin{tikzpicture}[scale=1.7] \draw (0.3,0.2) node {$\lambda$}; \draw (0.3,-0.3) node {$\lambda+1$}; \draw (0.3,-0.8) node {$\lambda+2$}; \draw (0,0) [<->,color=black, thick, rounded corners=7pt] +(0,0) -- +(1.0,0) -- +(1.0,-1.) -- +(0.0,-1.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.0,-0.5) -- +(0,-0.5); \draw (0.5,-1.3) node {$\mu$}; \draw (0.1,-0.5) [->,color=black, thick, rounded corners=7pt] +(0.5,1) -- +(0.5,-1); \draw (2.6,-0.5) node {$=(\mu-\lambda)(1-(\lambda+2-\mu)^2)$}; \draw (1.,-0.5)[fill=black] circle (0.15ex); \draw (4.5,0.2) node {$\lambda$}; \draw (4.5,-0.3) node {$\lambda+1$}; \draw (4.5,-0.8) node {$\lambda+2$}; \draw (4.2,0) [<->,color=black, thick, rounded corners=7pt] +(0,0) -- +(1.0,0) -- +(1.0,-1.) -- +(0.0,-1.0); \draw (4.2,0) [->,color=black, thick, rounded corners=7pt] +(1.0,-0.5) -- +(0,-0.5); \draw (5.3,-0.5) [->,color=black, thick, rounded corners=7pt] +(0.5,1) -- +(0.5,-1); \draw (5.6,-1.3) node {$\mu$}; \draw (6,-0.5) node {$,$}; \draw (5.2,-0.5)[fill=black] circle (0.15ex); \end{tikzpicture} \label{sym-property2} \en and the depicted objects are \eq \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(0.5,0) -- +(0.5,-1.) -- +(0.0,-1.0); \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(0.5,-0.5) -- +(0,-0.5); \draw (1.5,0.) 
node {$=\epsilon_{ijk},$}; \draw (0.,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (-0.25,-0.5) node {$i$}; \draw (-0.25,0) node {$j$}; \draw (-0.25,0.5) node {$k$}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.] \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.,-0.75) --+(0.,0.75); \draw (0,-1.1) node {$i$}; \draw (0,1.1) node {$j$}; \draw (0.75,0) node {$=\delta_{ij}$}; \draw (0.,-0.75)[fill=black] circle (0.3ex); \draw (0.,0.75)[fill=black] circle (0.3ex); \end{tikzpicture} \end{center} \end{minipage} \label{anti-symmetrizer} \en the fully anti-symmetric tensor (Levi-Civita tensor) and the Kronecker delta. Besides that, a number of simple identities among the previous objects are used throughout this work, e.g. tensor products and contractions \bear \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(0.5,0) -- +(0.5,-1.) -- +(0.0,-1.0); \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(-0.5,0) -- +(-0.5,-1.) -- +(0.0,-1.0); \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(0.5,-0.5) -- +(0,-0.5); \draw (0,0.5) [-,color=black, thick, rounded corners=7pt] +(-0.5,-0.5) -- +(0,-0.5); \draw (1.55,0.) node {$=\epsilon_{ijk} \epsilon^{ijk}=6,$}; \draw (0.,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (-0.,-0.7) node {$i$}; \draw (-0.,-0.2) node {$j$}; \draw (-0.,0.3) node {$k$}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.] \draw (0,0.5) node {$i$}; \draw (1.6,0) node {$=\delta_{ii}=3$}; \draw (0.,0) circle (0.75); \draw (0.,0.75)[fill=black] circle (0.3ex); \end{tikzpicture} \end{center} \end{minipage} \label{other1}\\ \begin{minipage}{0.45\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2] \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(0,0.5) -- +(-0.5,0.5) -- +(-0.5,0); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0,0) -- +(0,-0.5) -- +(-0.5,-0.5) -- +(-0.5,0); \draw (0,0) [-,color=black, thick, rounded corners=7pt]+(-0.25,0.5) -- +(-0.25,1); \draw (0,0) [-,color=black, thick, rounded corners=7pt]+(-0.25,-0.5) -- +(-0.25,-1); \draw (0.,0)[fill=black] circle (0.3ex); \draw (-0.5,0)[fill=black] circle (0.3ex); \draw (-0.25,1)[fill=black] circle (0.3ex); \draw (-0.25,-1)[fill=black] circle (0.3ex); \draw (-0.15,-0.8) node {$i$}; \draw (-0.7,-0.) node {$j$}; \draw (0.2,0.) node {$k$}; \draw (-0.15,0.8) node {$l$}; \draw (0.75,0.) node {$=2$}; \draw (1.2,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.,-0.75) --+(0.,0.75); \draw (1.2,-1.1) node {$i$}; \draw (1.2,1.1) node {$l$}; \draw (1.2,-0.75)[fill=black] circle (0.3ex); \draw (1.2,0.75)[fill=black] circle (0.3ex); \draw (2.45,0.)
node {$=\epsilon_{ijk} \epsilon^{jkl}=2 \delta_{il},$}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{0.55\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2] \draw (-0.5,0) [-,color=black, thick, rounded corners=7pt] +(0,0.5) -- +(0,0) -- +(0.5,0) -- +(0.5,0.5); \draw (-0.5,-0.5) [-,color=black, thick, rounded corners=7pt] +(0,-0.5) -- +(0,0) -- +(0.5,0) -- +(0.5,-0.5); \draw (0,-1.0) [-,color=black, thick, rounded corners=7pt]+(-0.25,0.5) -- +(-0.25,1); \draw (0.,-1)[fill=black] circle (0.3ex); \draw (-0.5,0.5)[fill=black] circle (0.3ex); \draw (0,0.5)[fill=black] circle (0.3ex); \draw (-0.5,-1)[fill=black] circle (0.3ex); \draw (-0.25,-0.25)[fill=black] circle (0.3ex); \draw (-0.5,-1.25) node {$i$}; \draw (-0.0,-1.25) node {$j$}; \draw (-0.5,0.8) node {$l$}; \draw (-0.45,-0.25) node {$k$}; \draw (-0.,0.8) node {$m$}; \draw (0.25,-0.3) node {$=$}; \draw (0.65,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.,-1) --+(0.,0.5); \draw (0.65,-1.25) node {$i$}; \draw (0.65,0.75) node {$l$}; \draw (0.65,-1)[fill=black] circle (0.3ex); \draw (0.65,0.5)[fill=black] circle (0.3ex); \draw (0.9,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.,-1) --+(0.,0.5); \draw (0.9,-1.25) node {$j$}; \draw (0.9,0.75) node {$m$}; \draw (0.9,-1)[fill=black] circle (0.3ex); \draw (0.9,0.5)[fill=black] circle (0.3ex); \draw (1.15,-0.3) node {$-$}; \draw (1.4,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.,-1)--+(0.11,-0.25); \draw (1.4,0) [-,color=black, thick, rounded corners=8pt]+(0.15,-0.15)--+(0.25,0.5); \draw (1.4,0) [-,color=black, thick, directed, rounded corners=8pt]+(0.25,-1)--+(0.,0.5); \draw (1.4,-1.25) node {$i$}; \draw (1.4,0.75) node {$l$}; \draw (1.4,-1)[fill=black] circle (0.3ex); \draw (1.4,0.5)[fill=black] circle (0.3ex); \draw (1.65,-1.25) node {$j$}; \draw (1.65,0.75) node {$m$}; \draw (1.65,-1)[fill=black] circle (0.3ex); \draw (1.65,0.5)[fill=black] circle (0.3ex); \draw (3.65,-0.3) node {$=\epsilon_{ijk} \epsilon^{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}.$}; \end{tikzpicture} \end{center} \end{minipage} \label{other2} \ear \section{Density matrices}\label{density} The framework for calculating thermal correlation functions of integrable Hamiltonians was introduced in \cite{GoKlSe04} and has been applied to the case of integrable $SU(2)$ spin chains several times, see e.g.~\cite{GSS10,KNS013,RK2016}. This approach makes use of the usual inhomogeneous reduced density operator, see Figure \ref{figDmatrix}a, in the thermodynamic limit $L \to \infty$, but with finite Trotter number $N$ \cite{GoKlSe04}. This formulation can be naturally extended to the case of $SU(n)$ spin chains. As the infinitely many column-to-column transfer matrices on the left (right) project onto the leading eigenstate $\bra{\Phi_L}$ ($\ket{\Phi_R}$) we obtain the compact form \eq D_m(\lambda_1,\cdots,\lambda_m)=\frac{ \bra{\Phi_L}{\cal T}^{(n)}_1(\lambda_1)\cdots {\cal T}^{(n)}_m(\lambda_m) \ket{\Phi_R}}{ \Lambda_0^{(n)}(\lambda_1)\cdots \Lambda_0^{(n)}(\lambda_m) }, \label{i-dm} \en where ${\cal T}^{(n)}_j(x)$ is the usual $j$-th monodromy matrix ${\cal T}^{(n)}_j(x)=R^{(n,n)}_{j,N}(x-u_N)\dots $ $R^{(n,n)}_{j,2}(x-u_2) R^{(n,n)}_{j,1}(x-u_1)$ associated with the quantum transfer matrix for the $SU(n)$ quantum spin chains $t_j^{QTM}(x)=\tr[{\cal T}_j^{(n)}(x)]$, $\Phi_L$ and $\Phi_R$ represent the left and right leading eigenstates of the quantum transfer matrix and $\Lambda_0^{(n)}(x)$ is the leading eigenvalue.
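To make these objects concrete, the following Python/NumPy toy sketch (illustrative only; $n=3$ and Trotter number $N=2$ hard-coded, following the displayed formula for ${\cal T}^{(n)}_j(x)$ literally with sample inhomogeneities) builds $t^{QTM}(x)$ by brute force and extracts the leading eigenvalue $\Lambda_0^{(n)}(x)$:
\begin{verbatim}
import numpy as np

n, N = 3, 2                                   # SU(3), toy Trotter number
Id = np.eye(n * n)
P = Id.reshape(n, n, n, n).transpose(0, 1, 3, 2).reshape(n * n, n * n)

def R(lam):
    """R(lam) = P * check-R(lam) = lam*I + P, as tensor [a', s', a, s]."""
    return (lam * Id + P).reshape(n, n, n, n)

def t_qtm(x, u):
    """tr_a[ R_{a,2}(x-u_2) R_{a,1}(x-u_1) ] as an n^N x n^N matrix."""
    T = np.einsum('aqcv,cpaw->pqwv', R(x - u[1]), R(x - u[0]))
    return T.reshape(n ** N, n ** N)

u = [0.1, -0.1]                               # sample inhomogeneities u_1, u_2
Lambda0 = max(np.linalg.eigvals(t_qtm(0.3, u)), key=abs)
\end{verbatim}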
For instance, the matrix element ${D_m}_{11\cdots1}^{11\cdots1}=\tr{\left[\hat{e}^{(1)}_{11}\hat{e}^{(2)}_{11}\cdots \hat{e}^{(m)}_{11}D_m\right]}$ (which in Figure \ref{figDmatrix} corresponds to assigning $1$ to all the indices sitting at the black dots) is the standard emptiness formation probability $P_m(\lambda_1,\cdots,\lambda_m)$ \cite{BOOK-KBI,EFP}. \begin{figure}[h] \begin{tikzpicture}[scale=1.6] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,3); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.8,1.75)--(0.8,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.8,0)--(0.8,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.4,1.75)--(1.4,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.4,0)--(1.4,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.05,1.75)--(2.05,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.05,0)--(2.05,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.65,0)--(2.65,3); \draw (0.25,-0.25) node {$0$}; \draw (0.8,-0.25) node {$\lambda_1$}; \draw (1.4,-0.25) node {$\lambda_2$}; \draw (1.75 ,-0.25) node {$\dots$}; \draw (2.05,-0.25) node {$\lambda_m$}; \draw (2.65,-0.25) node {$0$}; \draw (0.8,1.25)[fill=black] circle (0.15ex); \draw (0.8,1.75)[fill=black] circle (0.15ex); \draw (1.4,1.25)[fill=black] circle (0.15ex); \draw (1.4,1.75)[fill=black] circle (0.15ex); \draw (2.05,1.25)[fill=black] circle (0.15ex); \draw (2.05,1.75)[fill=black] circle (0.15ex); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.5)--(3.5,0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,1.0)--(3.5,1); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,2.5)--(3.5,2.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,2.0)--(3.5,2.0); \draw (-0.4,0.5) node {$\dots$}; \draw (-0.4,1.0) node {$\dots$}; \draw (-0.4,2.5) node {$\dots$}; \draw (-0.4,2.0) node {$\dots$}; \draw (3.7,0.5) node {$\dots$}; \draw (3.7,1.0) node {$\dots$}; \draw (3.7,2.5) node {$\dots$}; \draw (3.7,2.0) node {$\dots$}; \draw (-1.5,3.)
node {a)}; \draw (-1.7,1.5) node {$D_m(\lambda_1,\dots,\lambda_m)=$}; \draw (3.1,0.6) node {$u_1$}; \draw (3.1,1.1) node {$u_2$}; \draw (3.1,1.6) node {$\vdots$}; \draw (3.1,2.6) node {$u_{N}$}; \draw (3.1,2.1) node {$u_{N-1}$}; \end{tikzpicture} \begin{tikzpicture}[scale=1.6] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.8,1.75)--(0.8,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.8,0)--(0.8,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.4,1.75)--(1.4,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.4,0)--(1.4,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.05,1.75)--(2.05,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.05,0)--(2.05,1.25); \draw (0.8,-0.25) node {$\lambda_1$}; \draw (1.4,-0.25) node {$\lambda_2$}; \draw (1.75 ,-0.25) node {$\dots$}; \draw (2.05,-0.25) node {$\lambda_m$}; \draw (0.8,1.25)[fill=black] circle (0.15ex); \draw (0.8,1.75)[fill=black] circle (0.15ex); \draw (1.4,1.25)[fill=black] circle (0.15ex); \draw (1.4,1.75)[fill=black] circle (0.15ex); \draw (2.05,1.25)[fill=black] circle (0.15ex); \draw (2.05,1.75)[fill=black] circle (0.15ex); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.5,0.5)--(3.,0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.5,1.0)--(3.,1); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.5,2.5)--(3.,2.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.5,2.0)--(3.,2.0); \draw (-1.5,3.) node {b)}; \draw (-1.7,1.5) node {$D_m(\lambda_1,\dots,\lambda_m)=$}; \draw (2.5,0.6) node {$u_1$}; \draw (2.5,1.1) node {$u_2$}; \draw (2.5,1.6) node {$\vdots$}; \draw (2.5,2.6) node {$u_{N}$}; \draw (2.5,2.1) node {$u_{N-1}$}; \draw (0,1.5) node {$\Phi_L$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,3)--(-0.3,1.5)--(0.25,0); \draw (3.25,1.5) node {$\Phi_R$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(3.,0)--(3.,3)--(3.55,1.5)--(3.,0); \end{tikzpicture} \caption{Graphical illustration of the un-normalized density operator $D_m(\lambda_1,\dots,\lambda_m)$: a) an infinite cylinder with $N$ infinitely long horizontal lines carrying spectral parameters $u_j$ and $m$ open bonds associated to the spectral parameters $\lambda_1,\dots,\lambda_m$; b) the infinitely many column-to-column transfer matrices to the left and to the right are replaced by the boundary states they project onto.} \label{figDmatrix} \end{figure} The physically interesting result is typically obtained from the above reduced density operator (\ref{i-dm}) by taking the homogeneous limit $\lambda_j \to 0$ and the Trotter limit $N\to\infty$. However, we take advantage of the dependence on arbitrary $\lambda\in\mathbb{C}$. For the density operator of the $SU(2)$ case a set of discrete functional equations can be derived by use of the usual integrability structure plus the crossing symmetry \cite{AuKl12}. This and transparent analyticity properties allow for the complete determination of the reduced density operator at finite temperature and for an alternative proof of factorization of the correlation functions in terms of sums over products of nearest-neighbor correlators. Unfortunately, for the case of $SU(n)$ spin chains with $n>2$ we do not have crossing symmetry and hence adopting the line of reasoning of the $SU(2)$ case does not result in a closed set of functional equations for $D_m$. 
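As an independent cross-check of what $D_m$ encodes, the two-site reduced density matrix and the emptiness formation probability can also be obtained by brute-force diagonalization of (\ref{hamiltonian}) for a very small chain; the following Python/NumPy sketch (a toy at $L=6$ with periodic boundary conditions, zero temperature) is purely illustrative:
\begin{verbatim}
import numpy as np

n, L = 3, 6                                       # SU(3), toy chain length

def swap(i, j):
    """Permutation operator P_{i,j} on (C^n)^{(x) L}."""
    axes = list(range(L))
    axes[i], axes[j] = axes[j], axes[i]
    M = np.eye(n ** L).reshape([n] * (2 * L))
    return M.transpose(axes + list(range(L, 2 * L))).reshape(n ** L, n ** L)

H = sum(swap(j, (j + 1) % L) for j in range(L))   # eq. (hamiltonian), periodic
w, v = np.linalg.eigh(H)
psi = v[:, 0]                                     # ground state

M = psi.reshape(n * n, n ** (L - 2))              # split off two neighboring sites
D2 = M @ M.T                                      # reduced density matrix D_2
P2 = D2[0, 0]                                     # emptiness formation probability
\end{verbatim}
Such a brute-force route is of course limited to very small $L$ and gives no access to the thermodynamic limit, which is precisely what the functional-equation approach developed below provides.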
However, we can derive an analogous set of discrete functional equations provided we consider a slightly more general density operator we denote by ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$. Like the usual density operator, the generalized density operator ${\mathbb D}_m$ is defined on a horizontal infinite cylinder with $N$ horizontal lines, carrying spectral parameters $u_j$, however with additional semi-infinite rows as depicted in Figure \ref{figBunch}a. Alternatively, this generalized correlator can be written with boundary states where now $\widetilde{\Phi}_L$ is the leading eigenstate of the modified quantum transfer matrix acting on a tensor product of $N+m\cdot n$ copies of $V$ (see Figure \ref{figBunch}b). The monodromy matrix is given by $\widetilde{\cal T}_j^{(n)}(x)={\cal T}_j^{(n)}(x) \cdot \prod_{\alpha=1}^m \prod_{\beta=1}^{n} R^{(n,\bar{n})}_{j,N+(\alpha-1)n+\beta}(x-(\lambda_{\alpha}+\beta-1))$. Besides the correlations contained in $D_m$, the generalized density operator ${\mathbb D}_m$ also contains other correlation functions, like those contained in variants of $D_m$ with just anti-fundamental representations or any mixture of fundamental and anti-fundamental representations in the spaces indexed by 1 to $m$. Note that $D_m$ may be viewed as a vector in the tensor space $[\bar n]^{\otimes m}\otimes [n]^{\otimes m}$, i.e. a multilinear map $[n]^m\times [\bar n]^m\to \mathbb{C}$, and likewise ${\mathbb D}_m$ is a vector in the tensor space $[\bar n]^{\otimes mn}$, i.e. a multilinear map $[n]^{mn}\to \mathbb{C}$. \begin{figure} \begin{tikzpicture}[scale=1.6] \draw (-1.5,4.5) node {a)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4.7); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.8,0)--(0.8,4.7); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(1.3,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(1.3,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(1.3,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(1.3,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(1.3,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.0)--(1.3,3.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.4)--(1.3,3.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.8)--(1.3,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,4.2)--(1.3,4.2); \draw (-0.4,1.0) node {$\dots$}; \draw (-0.4,1.4) node {$\dots$}; \draw (-0.4,1.8) node {$\dots$}; \draw (-0.4,2.2) node {$\dots$}; \draw (-0.4,2.6) node {$\dots$}; \draw (-0.4,3.0) node {$\dots$}; \draw (-0.4,3.4) node {$\dots$}; \draw (-0.4,3.8) node {$\dots$}; \draw (-0.4,4.2) node {$\dots$}; \draw (-0.4,0.25) node {$\dots$}; \draw (-0.4,0.6) node {$\dots$}; \draw (-0.4,4.55) node {$\dots$}; \draw (3.95,0.25) node {$\dots$}; \draw (3.95,0.6) node {$\dots$}; \draw (3.95,4.55) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.6,0)--(2.6,4.7); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(3.05,0)--(3.05,4.7); \draw (1.97,0.98) node {$\lambda_m+n-1$}; \draw (1.97,1.23) node {$\vdots$}; \draw (1.7,1.4) node {$\lambda_m+1$}; \draw (1.5,1.8) node {$\lambda_m$}; \draw (1.0,2.06) node {$\vdots$}; \draw (1.95,2.18) node {$\lambda_2+n-1$}; \draw (1.95,2.43) node {$\vdots$}; \draw (1.7,2.6) node {$\lambda_2+1$}; \draw (1.5,3.0) node {$\lambda_2$}; \draw (1.95,3.38) node 
{$\lambda_1+n-1$}; \draw (1.95,3.63) node {$\vdots$}; \draw (1.7,3.8) node {$\lambda_1+1$}; \draw (1.5,4.2) node {$\lambda_1$}; \draw (0.25,-0.15) node {$0$}; \draw (0.8,-0.15) node {$0$}; \draw (2.6,-0.15) node {$0$}; \draw (3.05,-0.15) node {$0$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.25)--(3.75,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.6)--(3.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,4.55)--(3.75,4.55); \draw (3.4,0.35) node {$u_1$}; \draw (3.4,0.75) node {$u_2$}; \draw (3.4,1.1) node {$\vdots$}; \draw (3.4,4.35) node {$u_{N}$}; \draw (1.3,1)[fill=black] circle (0.15ex); \draw (1.3,1.4)[fill=black] circle (0.15ex); \draw (1.3,1.8)[fill=black] circle (0.15ex); \draw (1.3,2.2)[fill=black] circle (0.15ex); \draw (1.3,2.6)[fill=black] circle (0.15ex); \draw (1.3,3)[fill=black] circle (0.15ex); \draw (1.3,3.4)[fill=black] circle (0.15ex); \draw (1.3,3.8)[fill=black] circle (0.15ex); \draw (1.3,4.2)[fill=black] circle (0.15ex); \draw (-1.8,2.5) node {${\mathbb D}_m(\lambda_1,\dots,\lambda_m)=$}; \end{tikzpicture} \begin{tikzpicture}[scale=1.65] \draw (-1.,4.5) node {b)}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,1.0)--(1.3,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,1.4)--(1.3,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,1.8)--(1.3,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,2.2)--(1.3,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,2.6)--(1.3,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,3.0)--(1.3,3.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,3.4)--(1.3,3.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,3.8)--(1.3,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,4.2)--(1.3,4.2); \draw (1.97,0.98) node {$\lambda_m+n-1$}; \draw (1.97,1.23) node {$\vdots$}; \draw (1.7,1.4) node {$\lambda_m+1$}; \draw (1.5,1.8) node {$\lambda_m$}; \draw (1.0,2.06) node {$\vdots$}; \draw (1.95,2.18) node {$\lambda_2+n-1$}; \draw (1.95,2.43) node {$\vdots$}; \draw (1.7,2.6) node {$\lambda_2+1$}; \draw (1.5,3.0) node {$\lambda_2$}; \draw (1.95,3.38) node {$\lambda_1+n-1$}; \draw (1.95,3.63) node {$\vdots$}; \draw (1.7,3.8) node {$\lambda_1+1$}; \draw (1.5,4.2) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.75,0.25)--(3.25,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.75,0.6)--(3.25,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.75,4.55)--(3.25,4.55); \draw (0.25,2.5) node {$\widetilde{\Phi}_L$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.5,0)--(0.5,4.85)--(-0.05,2.5)--(0.5,0); \draw (3.5,2.5) node {$\Phi_R$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(3.25,0)--(3.25,4.85)--(3.8,2.5)--(3.25,0); \draw (2.9,0.35) node {$u_1$}; \draw (2.9,0.75) node {$u_2$}; \draw (2.9,1.1) node {$\vdots$}; \draw (2.9,4.35) node {$u_{N}$}; \draw (1.3,1)[fill=black] circle (0.15ex); \draw (1.3,1.4)[fill=black] circle (0.15ex); \draw (1.3,1.8)[fill=black] circle (0.15ex); \draw (1.3,2.2)[fill=black] circle (0.15ex); \draw (1.3,2.6)[fill=black] circle (0.15ex); \draw (1.3,3)[fill=black] circle (0.15ex); \draw (1.3,3.4)[fill=black] circle (0.15ex); \draw (1.3,3.8)[fill=black] circle (0.15ex); \draw (1.3,4.2)[fill=black] circle (0.15ex); \draw (-1.3,2.5) node {${\mathbb D}_m(\lambda_1,\dots,\lambda_m)=$}; \end{tikzpicture} 
\caption{Graphical illustration of the un-normalized generalized density operator ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$: a) the infinite cylinder with $N$ infinitely long horizontal lines carrying spectral parameters $u_j$ and $m$ bunches of $n$ semi-infinite lines with spectral parameters $\{\lambda_j,\lambda_j+1,\dots,\lambda_j+n-1 \}$ for $j=1,\dots, m$; b) the infinitely many column-to-column transfer matrices to the left and to the right are replaced by the boundary states they project onto.} \label{figBunch} \end{figure} We have to show that \begin{itemize} \item the generalized density operator ${\mathbb D}_m$ contains the physically interesting correlations (and more), \item it admits a closed set of functional equations, \item it has controlled analyticity properties (which are not obvious from the definition). \end{itemize} Before turning to the proofs we would like to mention the normalization of the generalized density operator. For this we take the action of ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$ on $m$ completely antisymmetric states (the $SU(n)$ singlets), one in each bunch of $n$ basis states. We also mention the useful reduction property of ${\mathbb D}_m$ applied to just $k$ antisymmetric states, resulting in a density matrix ${\mathbb D}_{m-k}$; see Appendix A. Next we turn to the embedding of $D_m$ in ${\mathbb D}_m$. For this we apply anti-symmetrizations to the lower (upper) $n-1$ lines of a bunch of $n$ lines in ${\mathbb D}_m$, resulting in a line carrying the conjugate representation. For the $SU(3)$ case we have: \bear \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.5)--(0.25,2.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,0.5)--(0.5,2.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.0)--(1.6,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.4)--(1.6,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.8)--(2.3,1.8); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.75,1.2)--(2.3,1.2); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,1.0) -- +(1.8,1.2) -- +(1.6,1.4); \draw (2.3,1.8)[fill=black] circle (0.15ex); \draw (2.3,1.2)[fill=black] circle (0.15ex); \draw (1.15,1.2) node {$\lambda+2$}; \draw (1.15,1.6) node {$\lambda+1$}; \draw (0.85,2.) node {$\lambda$}; \draw (3,1.5) node {$=$}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.5)--(0.25,2.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,0.5)--(0.5,2.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.8)--(2.3,1.8); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0,1.2)--(2.3,1.2); \draw (2.3,1.8)[fill=black] circle (0.15ex); \draw (2.3,1.2)[fill=black] circle (0.15ex); \draw (1.15,1.45) node {$\lambda$}; \draw (1.15,2.)
node {$\lambda$}; \draw (2.75,1.5) node {$,$}; \end{tikzpicture} \end{center} \end{minipage} \label{fus1}\\ \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.5)--(0.25,2.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,0.5)--(0.5,2.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.0)--(2.3,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.4)--(1.6,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.8)--(1.6,1.8); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.75,1.6)--(2.3,1.6); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,1.4) -- +(1.8,1.6) -- +(1.6,1.8); \draw (2.3,1.6)[fill=black] circle (0.15ex); \draw (2.3,1.0)[fill=black] circle (0.15ex); \draw (1.15,1.2) node {$\lambda+2$}; \draw (1.15,1.6) node {$\lambda+1$}; \draw (0.85,2.) node {$\lambda$}; \draw (3,1.5) node {$=$}; \end{tikzpicture} \end{center} \end{minipage \begin{minipage}{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.25] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.5)--(0.25,2.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,0.5)--(0.5,2.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0,1.8)--(2.3,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.2)--(2.3,1.2); \draw (2.3,1.8)[fill=black] circle (0.15ex); \draw (2.3,1.2)[fill=black] circle (0.15ex); \draw (1.15,1.45) node {$\lambda+2$}; \draw (1.15,2.) node {$\lambda-1$}; \draw (2.75,1.5) node {$.$}; \end{tikzpicture} \end{center} \end{minipage} \label{fus2} \ear Let us consider the anti-symmetrization of the lower $n-1$ lines. {In Figure \ref{DD3-D3} we depict the simplest case of two-point ($m=2$) correlations for $SU(3)$. First, the antisymmetrizers are carried to the very left by virtue of (\ref{fus1}), see Figure \ref{DD3-D3} a) and b).} We modify the lattice at the far left by bending the upper horizontal line with arrow pointing to the left upwards and bending the lower horizontal line with arrow pointing to the right downwards, finally connecting the two ends {(carrying the same spectral parameter)} by exploiting the periodic boundary condition in vertical direction, Figure \ref{DD3-D3} c). This manipulation of the far left boundary may introduce a factor which however is independent of the spins on the open bonds inside the lattice. Finally, we use the Yang-Baxter equation and unitarity to move the closed loops at the far left as simple vertical lines to the center of the lattice, {Figure \ref{DD3-D3} d) and e). The resulting object is equal to the density operator ${D}_2(\lambda_1,\lambda_2)$ under the action of two $R$-matrices.} \begin{figure}[h] \begin{center} \begin{minipage}{0.33\linewidth} \begin{tikzpicture}[scale=1.25] \draw (-0.25,4.) 
node {a)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.0)--(1.6,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.4)--(1.6,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.8)--(2.3,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,2.2)--(1.6,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,2.6)--(1.6,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,3.0)--(2.3,3.0); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.75,1.2)--(2.3,1.2); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,1.0) -- +(1.8,1.2) -- +(1.6,1.4); \draw (1.74,1.2)[fill=black] circle (0.15ex); \draw (2.3,1.2)[fill=black] circle (0.15ex); \draw (2,1.36) node {$\lambda_2$}; \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.75,2.4)--(2.3,2.4); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,2.2) -- +(1.8,2.4) -- +(1.6,2.6); \draw (1.74,2.4)[fill=black] circle (0.15ex); \draw (2.3,2.4)[fill=black] circle (0.15ex); \draw (2,2.56) node {$\lambda_1$}; \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (-0.15,1) node {$\dots$}; \draw (-0.15,1.4) node {$\dots$}; \draw (-0.15,1.8) node {$\dots$}; \draw (-0.15,2.2) node {$\dots$}; \draw (-0.15,2.6) node {$\dots$}; \draw (-0.15,3.0) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.5,0)--(2.5,4); \draw (0.7,1.15) node {$\lambda_2+2$}; \draw (0.7,1.55) node {$\lambda_2+1$}; \draw (2,1.95) node {$\lambda_2$}; \draw (0.7,2.35) node {$\lambda_1+2$}; \draw (0.7,2.75) node {$\lambda_1+1$}; \draw (2.,3.15) node {$\lambda_1$}; \draw (0.25,-0.25) node {$0$}; \draw (2.5,-0.25) node {$0$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(3.25,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(3.25,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(3.25,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(3.25,3.75); \draw (2.8,0.35) node {$u_1$}; \draw (2.8,0.75) node {$u_2$}; \draw (2.8,1.15) node {$\vdots$}; \draw (2.8,3.85) node {$u_{N}$}; \draw (2.85,3.47) node {$u_{N-1}$}; \draw (1.2,1)[fill=black] circle (0.15ex); \draw (1.2,1.4)[fill=black] circle (0.15ex); \draw (2.3,1.8)[fill=black] circle (0.15ex); \draw (1.2,2.2)[fill=black] circle (0.15ex); \draw (1.2,2.6)[fill=black] circle (0.15ex); \draw (2.3,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage \begin{minipage}{0.3\linewidth} \begin{tikzpicture}[scale=1.25] \draw (-0.25,4.) 
node {b)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,1.8)--(0.75,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0,3.0)--(0.75,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0,1.2)--(0.75,1.2); \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (-0.15,1.2) node {$\dots$}; \draw (-0.15,1.8) node {$\dots$}; \draw (-0.15,2.4) node {$\dots$}; \draw (-0.15,3.0) node {$\dots$}; \draw (0.75,1.2)[fill=black] circle (0.15ex); \draw (0.5,1.36) node {$\lambda_2$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0,2.4)--(0.75,2.4); \draw (0.75,2.4)[fill=black] circle (0.15ex); \draw (0.5,2.56) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,4); \draw (0.5,1.95) node {$\lambda_2$}; \draw (0.5,3.15) node {$\lambda_1$}; \draw (0.25,-0.25) node {$0$}; \draw (1.25,-0.25) node {$0$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(2,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(2,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(2,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(2,3.75); \draw (1.65,0.35) node {$u_1$}; \draw (1.65,0.75) node {$u_2$}; \draw (1.65,1.15) node {$\vdots$}; \draw (1.65,3.85) node {$u_{N}$}; \draw (1.63,3.47) node {$u_{N-1}$}; \draw (0.75,1.8)[fill=black] circle (0.15ex); \draw (0.75,3)[fill=black] circle (0.15ex); \draw (-0.8,2) node {$=$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.3666\linewidth} \begin{tikzpicture}[scale=1.25] \draw (-0.9,4.) 
node {c)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.,0)--(1.,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.75,4)--(-0.75,1.8)--(0,1.8); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,0)--(-0.75,1.2)--(0,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,4)--(-0.25,3.0)--(0,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.25,0)--(-0.25,2.4)--(0,2.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,1.8)--(1.5,1.8); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,1.2)--(1.5,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.5,3.0)--(1.5,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,2.4)--(1.5,2.4); \draw (0.25,0.25) node {$\dots$}; \draw (0.25,0.6) node {$\dots$}; \draw (0.25,1.2) node {$\dots$}; \draw (0.25,1.8) node {$\dots$}; \draw (0.25,2.4) node {$\dots$}; \draw (0.25,3.0) node {$\dots$}; \draw (0.25,3.35) node {$\dots$}; \draw (0.25,3.75) node {$\dots$}; \draw (1.5,1.2)[fill=black] circle (0.15ex); \draw (1.5,1.8)[fill=black] circle (0.15ex); \draw (1.5,2.4)[fill=black] circle (0.15ex); \draw (1.5,3)[fill=black] circle (0.15ex); \draw (1.25,1.36) node {$\lambda_2$}; \draw (1.25,2.56) node {$\lambda_1$}; \draw (1.25,1.95) node {$\lambda_2$}; \draw (1.25,3.15) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,4); \draw (1.0,-0.25) node {$0$}; \draw (1.75,-0.25) node {$0$}; \draw (-0.5,-0.25) node {$-\infty$}; \draw (0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(2.5,0.25); \draw (0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(2.5,0.6); \draw (0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(2.5,3.35); \draw (0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(2.5,3.75); \draw (0.,0) [->,color=black, thick, rounded corners=7pt] +(-1.,0.25)--(0.,0.25); \draw (0.,0) [->,color=black, thick, rounded corners=7pt] +(-1,0.6)--(0,0.6); \draw (0.,0) [->,color=black, thick, rounded corners=7pt] +(-1.,3.35)--(0,3.35); \draw (0.,0) [->,color=black, thick, rounded corners=7pt] +(-1.,3.75)--(0,3.75); \draw (2.15,0.35) node {$u_1$}; \draw (2.15,0.75) node {$u_2$}; \draw (2.15,1.15) node {$\vdots$}; \draw (2.15,3.85) node {$u_{N}$}; \draw (2.13,3.47) node {$u_{N-1}$}; \draw (-1.,2) node {$=$}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.28\linewidth} \begin{tikzpicture}[scale=1.25] \draw (-0.25,4.) 
node {d)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.,0)--(1.,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.25,4)--(0.25,1.8)--(1.5,1.8); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,1.2)--(1.5,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.6,4)--(0.6,3.0)--(1.5,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.6,0)--(0.6,2.4)--(1.5,2.4); \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (1.5,1.2)[fill=black] circle (0.15ex); \draw (1.5,1.8)[fill=black] circle (0.15ex); \draw (1.5,2.4)[fill=black] circle (0.15ex); \draw (1.5,3)[fill=black] circle (0.15ex); \draw (1.25,1.36) node {$\lambda_2$}; \draw (1.25,2.56) node {$\lambda_1$}; \draw (1.25,1.95) node {$\lambda_2$}; \draw (1.25,3.15) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,4); \draw (1.0,-0.25) node {$0$}; \draw (1.75,-0.25) node {$0$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(2.5,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(2.5,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(2.5,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(2.5,3.75); \draw (2.15,0.35) node {$u_1$}; \draw (2.15,0.75) node {$u_2$}; \draw (2.15,1.15) node {$\vdots$}; \draw (2.15,3.85) node {$u_{N}$}; \draw (2.13,3.47) node {$u_{N-1}$}; \draw (-0.25,2) node {$=$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.28\linewidth} \begin{tikzpicture}[scale=1.25] \draw (-0.25,4.) node {e)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.75,2.4)--(0.75,4); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.75,0.0)--(0.75,1.95); \draw (0.75,2.4)[fill=black] circle (0.15ex); \draw (0.75,1.95)[fill=black] circle (0.15ex); \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,1.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,1.6)--(0.5,1.6)--(0.5,2.75)-- (1.25,2.75)--(1.25,4); \draw (1.25,1.25)[fill=black] circle (0.15ex); \draw (1.25,1.6)[fill=black] circle (0.15ex); \draw (0.75,-0.25) node {$\lambda_1$}; \draw (1.25,-0.25) node {$\lambda_2$}; \draw (0.25,-0.25) node {$0$}; \draw (1.75,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,4); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(2.5,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(2.5,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(2.5,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(2.5,3.75); \draw (2.1,0.35) node {$u_1$}; \draw (2.1,0.75) node {$u_2$}; \draw (2.1,1.15) node {$\vdots$}; \draw (2.1,3.85) node {$u_{N}$}; \draw (2.13,3.47) node {$u_{N-1}$}; \draw (-0.5,2) node {$=$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.44\linewidth} \begin{tikzpicture}[scale=0.9] \draw (-1.75,2.25) node {$= R(\lambda_1-\lambda_2) D_2(\lambda_1,\lambda_2) R(\lambda_2-\lambda_1).$}; \end{tikzpicture} \end{minipage} \caption{Graphical illustration of the reduction of the generalized density 
operator ${\mathbb D}_2(\lambda_1,\lambda_2)$ to the density operator $D_2(\lambda_1,\lambda_2)$ for $SU(3)$. } \label{DD3-D3} \end{center} \end{figure} Quite generally, by the procedure described above of anti-symmetrization of the lower $n-1$ lines of each bunch of $n$ lines we have the reduction of the operator ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$ to the usual density operator $D_m(\lambda_1,\dots,\lambda_m)$ with additional action of $m(m-1)$ $R$-matrices. As a short remark we point out that the procedure of anti-symme\-tri\-zation of the {\em upper} $n-1$ lines of a bunch of $n$ lines and subsequent use of Yang-Baxter and (special) unitarity leads to a vertical line with conjugate representation of $SU(n)$ and carrying the spectral parameter $\lambda+n-1$. It is worth recalling that the physically interesting object we want to compute is precisely the full density operator $D_m$. However, in order to formulate consistent functional equations we have to work in the more general setting of ${\mathbb D}_m$ and at the end of the calculation project onto the physically relevant subspace. The derivation of functional equations and analyticity properties for ${\mathbb D}_m$ is the subject of the next section. \section{Discrete functional equations and analyticity}\label{functional} In order to derive closed functional equations for the correlators of the $SU(n)$ quantum spin chain, we explore the consequences of setting the value of, for instance, $\lambda_1$ equal to one of the spectral parameters $u_i$ on the horizontal lines. We illustrate a sequence of manipulations in Figure \ref{derivationfunc-eqs} for the case $m=2$ of $SU(3)$. Having $\lambda_1=u_i$ allows us to connect the left-going semi-infinite line carrying $\lambda_1$ with the right-going line carrying $u_i$, Figure \ref{derivationfunc-eqs}a, and to use the unitarity property (\ref{unitarity}) for moving the link towards the right, Figure \ref{derivationfunc-eqs} b) and c). Note that operation a) may change the partition function by some factor which, however, is independent of the spins on the interior open bonds. Next, we use special unitarity (\ref{s-unitarity1}) to move the line around and back to the left, Figure \ref{derivationfunc-eqs} d) and e). \begin{figure} \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-1.25,4.)
node {a)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1,0.4)--(-1,4); \draw (0,0) [-,color=black, thick, rounded corners=9pt] +(-1.25,3)--(-1.5,3.2)--(-1.25,3.35); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,1.0)--(-0.75,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,1.4)--(-0.75,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,1.8)--(-0.75,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,2.2)--(-0.75,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,2.6)--(-0.75,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-1.25,3.0)--(-0.75,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.25,0.6)--(-0.75,0.6); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.25,3.35)--(-0.75,3.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.25,3.75)--(-0.75,3.75); \draw (-1.1,0.25) node {$-\infty$}; \draw (-0.5,0.6) node {$\dots$}; \draw (-0.5,3.35) node {$\dots$}; \draw (-0.5,3.75) node {$\dots$}; \draw (-0.5,2.2) node {$\dots$}; \draw (-0.5,2.6) node {$\dots$}; \draw (-0.5,3.0) node {$\dots$}; \draw (-0.5,1.0) node {$\dots$}; \draw (-0.5,1.4) node {$\dots$}; \draw (-0.5,1.8) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.4)--(0.25,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(0.5,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.0)--(0.5,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,0.4)--(2.1,4); \draw (1.,1.0) node {$\lambda_2+2$}; \draw (1.,1.4) node {$\lambda_2+1$}; \draw (0.75,1.8) node {$\lambda_2$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.75,3.0) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.6)--(2.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.35)--(2.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.75)--(2.75,3.75); \draw (2.4,0.45) node {$u_1$}; \draw (2.4,3.6) node {$u_{N}$}; \draw (2.4,3.2) node {$u_{i}$}; \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,1.8)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \draw (3.54,2.2) node {$\stackrel{\lambda_1=u_i}{=}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-0.25,4.) 
node {b)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.4)--(0.25,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(0.5,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.0)--(0.5,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,0.4)--(2.1,4); \draw (1.,1.0) node {$\lambda_2+2$}; \draw (1.,1.4) node {$\lambda_2+1$}; \draw (0.75,1.8) node {$\lambda_2$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.75,3.0) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.6)--(2.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.35)--(2.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.75)--(2.75,3.75); \draw (2.4,0.45) node {$u_1$}; \draw (2.4,3.6) node {$u_{N}$}; \draw (2.4,3.2) node {$u_{i}$}; \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,1.8)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \draw (0,0) [-,color=black, thick, rounded corners=9pt] +(-0.25,3)--(-0.42,3.2)--(-0.25,3.35); \end{tikzpicture} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-0.5,3.75) node {c)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.4)--(0.25,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(0.5,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.5,3.0)--(2.75,3.0); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,0.4)--(2.1,3.8); \draw (1.,1.0) node {$\lambda_2+2$}; \draw (1.,1.4) node {$\lambda_2+1$}; \draw (0.75,1.8) node {$\lambda_2$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.75,3.2) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.6)--(2.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.45)--(2.75,3.45); \draw (2.4,0.45) node {$u_1$}; \draw (2.4,3.6) node {$u_{N}$}; \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,1.8)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \draw (-0.75,2.2) node {$=$}; \draw (3.54,2.2) node {$\stackrel{\lambda_1=u_i}{=}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-0.,3.75) node {d)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.4)--(0.25,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw 
(0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(0.5,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.5,3.0)--(2.35,3.0)--(2.35,2); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(2.35,2)--(2.35,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.35,1.35)--(2.35,0.85)--(2.75,0.85); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,0.4)--(2.1,3.8); \draw (1.,1.0) node {$\lambda_2+2$}; \draw (1.,1.4) node {$\lambda_2+1$}; \draw (0.75,1.8) node {$\lambda_2$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (2.57,2.35) node {$\lambda_1$}; \draw (2.85,1.65) node {$\lambda_1+3$}; \draw (2.57,1.1) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.6)--(2.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.45)--(2.75,3.45); \draw (2.4,0.45) node {$u_1$}; \draw (2.4,3.6) node {$u_{N}$}; \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,1.8)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-0.75,3.8) node {e)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,-0.25)--(0.25,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(0.5,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.5,3.0)--(2.35,3.0)--(2.35,1.85); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(2.35,1.85)--(2.35,0.7)--(-0.25,0.7); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.25,0.35)--(2.75,0.35); \draw (0,0) [-,color=black, thick, rounded corners=9pt] +(-0.25,0.35)--(-0.42,0.55)--(-0.25,0.7); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,-0.25)--(2.1,3.8); \draw (1.,1.0) node {$\lambda_2+2$}; \draw (1.,1.4) node {$\lambda_2+1$}; \draw (0.75,1.8) node {$\lambda_2$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (2.55,2.5) node {$\lambda_1$}; \draw (2.8,1.25) node {$\lambda_1+3$}; \draw (2.4,0.2) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.)--(2.75,0.); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.65)--(2.75,3.65); \draw (2.4,-0.15) node {$u_1$}; \draw (2.4,3.5) node {$u_{N}$}; \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,1.8)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \draw (3.54,2.2) node {$\stackrel{\lambda_1=u_i}{=}$}; \draw (-0.75,2.1) node {$=$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-1.65,3.65) node {f)}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.,-0.25)--(0.,3.8); \draw 
(0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,0.6)--(0.5,0.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.5,3.0) -- (1.85,3.0)--(1.85,1.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.85,1.75)--(1.85,0.4)--(0.25,0.4)--(0.25,1.8)--(-0.25,1.8); \draw (-0.5,0.6) node {$\cdots$}; \draw (-0.5,1.) node {$\cdots$}; \draw (-0.5,1.4) node {$\cdots$}; \draw (-0.5,1.8) node {$\cdots$}; \draw (-0.5,2.2) node {$\cdots$}; \draw (-0.5,2.6) node {$\cdots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.,-0.25)--(-1.,3.8); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(-0.75,1.8) -- (-1.25,1.8)--(-1.25,0.25)--(-0.75,0.25); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.75,3.65) -- (-1.5,3.65); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.75,-0.1) -- (-1.5,-0.1); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,2.2) -- (-1.5,2.2); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,2.6) -- (-1.5,2.6); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,1.4) -- (-1.5,1.4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,1.) -- (-1.5,1.); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,0.6) -- (-1.5,0.6); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,-0.25)--(2.1,3.8); \draw (1.,0.6) node {$\lambda_2+2$}; \draw (1.,1.0) node {$\lambda_2+1$}; \draw (0.75,1.4) node {$\lambda_2$}; \draw (1.,1.8) node {$\lambda_1+3$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.8,3.15) node {$\lambda_1$}; \draw (-0.5,0.25) node {$\cdots$}; \draw (-0.5,-0.1) node {$\cdots$}; \draw (-0.5,3.65) node {$\cdots$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.25)--(2.75,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,-0.1)--(2.75,-0.1); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.65)--(2.75,3.65); \draw (2.4,-0.25) node {$u_1$}; \draw (2.4,0.1) node {$\lambda_1$}; \draw (2.4,3.5) node {$u_{N}$}; \draw (-1.15,-0.35) node {$-\infty$}; \draw (0.5,0.6)[fill=black] circle (0.15ex); \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.15] \draw (-0.25,3.85) node {g)}; \draw (-0.75,2.1) node {$=$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.,0.)--(0.,3.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.0)--(0.5,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.4)--(0.5,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,0.6)--(0.5,0.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.5,3.0) -- (1.85,3.0)--(1.85,1.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] 
+(1.85,1.75)--(1.85,0.4)--(0.25,0.4)--(0.25,1.8)--(-0.25,1.8); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,-0.)--(2.1,3.8); \draw (1.,0.6) node {$\lambda_2+2$}; \draw (1.,1.0) node {$\lambda_2+1$}; \draw (0.75,1.4) node {$\lambda_2$}; \draw (1.,1.8) node {$\lambda_1+3$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.8,3.15) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.25)--(2.75,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.35)--(2.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.65)--(2.75,3.65); \draw (2.4,0.10) node {$u_1$}; \draw (2.4,3.5) node {$u_{N}$}; \draw (2.4,3.2) node {$u_{i}$}; \draw (0.5,0.6)[fill=black] circle (0.15ex); \draw (0.5,1)[fill=black] circle (0.15ex); \draw (0.5,1.4)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage} \caption{Outline of the derivation of the functional equation for the two-sites ($m=2$) generalized density operator for the $SU(3)$ case.} \label{derivationfunc-eqs} \end{center} \end{figure} Then we use standard unitarity and widen the narrow loop as shown in Figure \ref{derivationfunc-eqs} f). This introduces the action of additional $R$-matrices at the open ends in the middle part of the lattice and at the far left, Figure \ref{derivationfunc-eqs} f). The boundary part at $-\infty$ is dropped, and the horizontal line carrying $\lambda_1$ is moved upwards like a $u_i$ line by use of the periodic boundary condition in the vertical direction. Finally, we obtain the original density operator with the spectral parameter $\lambda_1$ shifted by $1$ and with the action of $R$-matrices upon it, Figure \ref{derivationfunc-eqs} g). In summary, for arbitrary $m$ and $SU(n)$ we derive a functional equation for the generalized density operator ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$ given by, \eq {\mathbb A}_m(\lambda_1,\dots,\lambda_m)[{\mathbb D}_m(\lambda_1+1,\lambda_2,\dots,\lambda_m)] = {\mathbb D}_m(\lambda_1,\lambda_2,\dots,\lambda_m), \qquad \lambda_1 = u_i, \label{qkzsun} \en where the linear operator ${\mathbb A}_m(\lambda_1,\dots,\lambda_m)$ is a product of $(m-1)n$ $R$-matrices given as \bear {\mathbb A}_m(\lambda_1,\dots,\lambda_m)^{{i}_1 \cdots {i}_{nm}}_{\bar{i}_1 \cdots \bar{i}_{nm}}&:=& \prod_{k=1}^{m-1} \prod_{j=1+(k-1)n}^{ k n} \check{R}^{(n,n)}(\lambda_1-\lambda_{m-k+1}+j)_{\alpha_j, \bar{i}_j}^{i_j, \alpha_{j-1}} \nonumber \\ &\times& \delta_{\alpha_0}^{i_{nm}} \delta^{\alpha_{n(m-1)}}_{\bar{i}_{n(m-1)+1}} \delta^{i_{n(m-1)+1}}_{\bar{i}_{n(m-1)+2}} \cdots \delta^{i_{nm-1}}_{\bar{i}_{nm}}, \ear where summation over the repeated indices $\alpha_j$ is assumed. The action of ${\mathbb A}_2(\lambda_1,\lambda_2)$ for arbitrary $SU(n)$ is illustrated in Figure \ref{func-eqs}.
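To make the index structure explicit, it may help to spell out the simplest instance of this formula: for $m=2$ only the factor $k=1$ survives, and the product collapses to $n$ $\check{R}$-matrices acting on the single remaining bunch, \eq {\mathbb A}_2(\lambda_1,\lambda_2)^{{i}_1 \cdots {i}_{2n}}_{\bar{i}_1 \cdots \bar{i}_{2n}}= \prod_{j=1}^{n} \check{R}^{(n,n)}(\lambda_1-\lambda_{2}+j)_{\alpha_j, \bar{i}_j}^{i_j, \alpha_{j-1}} \, \delta_{\alpha_0}^{i_{2n}} \delta^{\alpha_{n}}_{\bar{i}_{n+1}} \delta^{i_{n+1}}_{\bar{i}_{n+2}} \cdots \delta^{i_{2n-1}}_{\bar{i}_{2n}}. \en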
\begin{figure}[h] \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.5] \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,0.8)--(0.5,0.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.2)--(0.5,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.6)--(0.5,1.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,3.0)--(0.5,3.0); \draw (1.2,0.8) node {$\lambda_2+n-1$}; \draw (1.15,1.03) node {$\vdots$}; \draw (1.,1.2) node {$\lambda_2+1$}; \draw (0.75,1.6) node {$\lambda_2$}; \draw (1.2,2.2) node {$\lambda_1+n-1$}; \draw (1.15,2.43) node {$\vdots$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.75,3.0) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.)--(2.75,0.); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.35)--(2.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.75)--(2.75,3.75); \draw (-0.5,2) node {$\widetilde{\Phi}_L$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.25,-0.25)--(-0.25,4)--(-0.8,2)--(-0.25,-0.25); \draw (3,2) node {$\Phi_R$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.75,-0.25)--(2.75,4)--(3.3,2)--(2.75,-0.25); \draw (2.4,0.1) node {$u_1$}; \draw (2.4,0.5) node {$\vdots$}; \draw (2.4,3.85) node {$u_{N}$}; \draw (2.4,3.45) node {$u_{i}$}; \draw (0.5,0.8)[fill=black] circle (0.15ex); \draw (0.5,1.2)[fill=black] circle (0.15ex); \draw (0.5,1.6)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \draw (3.54,2.2) node {$\stackrel{\lambda_1=u_i}{=}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.5] \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,0.8)--(0.5,0.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.2)--(0.5,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,0.4)--(0.5,0.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.2)--(0.5,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,2.6)--(0.5,2.6); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.5,3.0) -- (1.85,3.0)--(1.85,1.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.85,1.75)--(1.85,0.15)--(-0,0.15)--(0,1.8)--(-0.25,1.8); \draw (1.2,0.4) node {$\lambda_2+n-1$}; \draw (1.15,0.63) node {$\vdots$}; \draw (1.,0.8) node {$\lambda_2+1$}; \draw (0.75,1.2) node {$\lambda_2$}; \draw (2.05,2.1) node {$\lambda_1$}; \draw (2.3,1.45) node {$\lambda_1+n$}; \draw (1.,1.8) node {$\lambda_1+n$}; \draw (1.15,2.03) node {$\vdots$}; \draw (1.,2.2) node {$\lambda_1+2$}; \draw (1.,2.6) node {$\lambda_1+1$}; \draw (0.8,3.15) node {$\lambda_1$}; \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.)--(2.75,0.); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.35)--(2.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,3.75)--(2.75,3.75); \draw (-0.5,2) node {$\widetilde{\Phi}_L$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.25,-0.25)--(-0.25,4)--(-0.8,2)--(-0.25,-0.25); \draw (3,2) node {$\Phi_R$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.75,-0.25)--(2.75,4)--(3.3,2)--(2.75,-0.25); \draw (2.4,0.1) 
node {$u_1$}; \draw (2.4,0.5) node {$\vdots$}; \draw (2.4,3.85) node {$u_{N}$}; \draw (2.4,3.45) node {$u_{i}$}; \draw (0.5,0.4)[fill=black] circle (0.15ex); \draw (0.5,0.8)[fill=black] circle (0.15ex); \draw (0.5,1.2)[fill=black] circle (0.15ex); \draw (0.5,2.2)[fill=black] circle (0.15ex); \draw (0.5,2.6)[fill=black] circle (0.15ex); \draw (0.5,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage} \caption{Graphical illustration of the functional equations for two-sites ($m=2$) for the generalized density operator ${\mathbb D}_2(\lambda_1,\lambda_2)$ valid for $\lambda_1=u_i$. Note that the spectral parameter on the manipulated line is $\lambda_1$ on the left hand side, and $\lambda_1+n$ on the right hand side.} \label{func-eqs} \end{center} \end{figure} Next we turn to the analytical properties of ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$. By definition, an infinite number of vertices carrying the parameters $\lambda_j$ enter which may result in an uncontrolled analytical dependence. Here we are going to show that fortunately this is not so. In order to represent the density operator in a way that the analyticity properties become transparent, we attach at the far left boundary the operator defined graphically on the left hand side of Figure \ref{analyticity}. This operator can be moved inside the lattice by use of the 180$^{\circ}$ rotated version of (\ref{sym-property1}-\ref{sym-property2}), as well as unitarity and special unitarity, see Figure \ref{analyticity}. \begin{figure} \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.7] \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(-0.5,1.6)--(-0.5,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(-1.1,0)--(-1.1,1.9); \draw (0,0) [-,color=black, reverse directed, thick, rounded corners=7pt] +(-1.1,1.9)--(-0.5,1.9); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.5,2.2)--(-1.1,2.55)--(-1.1,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(-0.5,3)--(-0.5,2.55)--(-1.1,1.9); \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(-1.5,0.7)--(-1.5,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(-2.1,0)--(-2.1,1.); \draw (0,0) [-,color=black, reverse directed, thick, rounded corners=7pt] +(-2.1,1.)--(-1.5,1.); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.5,1.3)--(-2.1,1.65)--(-2.1,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(-1.5,3)--(-1.5,1.65)--(-2.1,1.); \draw (-0.27,0.7) node {$\dots$}; \draw (-0.27,1.) 
node {$\dots$}; \draw (-0.27,1.3) node {$\dots$}; \draw (-0.27,1.6) node {$\dots$}; \draw (-0.27,1.9) node {$\dots$}; \draw (-0.27,2.2) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-2.25,0.25)--(-0.4,0.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-2.25,2.75)--(-0.4,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-1.5,0.7)--(-0.45,0.7); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-1.5,1.)--(-0.45,1.); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-1.5,1.3)--(-0.45,1.3); \draw (-0.22,0.25) node {$\dots$}; \draw (-0.22,2.75) node {$\dots$}; \draw (-0.45,-0.25) node {$\lambda_1+2$}; \draw (-0.85,1.75) node {{\tiny $\lambda_1+1$}}; \draw (-1.05,-0.25) node {$\lambda_1$}; \draw (-1.55,-0.25) node {$\lambda_2+2$}; \draw (-1.85,0.85) node {{\tiny $\lambda_2+1$}}; \draw (-2.1,-0.25) node {$\lambda_2$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,0.7)--(1.,0.7); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.)--(1.,1.); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.3)--(1.,1.3); \draw (1.,0.7)[fill=black] circle (0.15ex); \draw (1.,1.)[fill=black] circle (0.15ex); \draw (1.,1.3)[fill=black] circle (0.15ex); \draw (0.65,0.82) node {$\lambda_2+2$}; \draw (0.65,1.12) node {$\lambda_2+1$}; \draw (0.48,1.42) node {$\lambda_2$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.6)--(1.,1.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.9)--(1.,1.9); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,2.2)--(1.,2.2); \draw (1.,1.6)[fill=black] circle (0.15ex); \draw (1.,1.9)[fill=black] circle (0.15ex); \draw (1.,2.2)[fill=black] circle (0.15ex); \draw (0.65,1.72) node {$\lambda_1+2$}; \draw (0.65,2.02) node {$\lambda_1+1$}; \draw (0.48,2.32) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,3); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,3); \draw (0.25,-0.25) node {$0$}; \draw (1.25,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,0.25)--(1.65,0.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,2.75)--(1.65,2.75); \draw (1.45,0.1) node {$u_1$}; \draw (1.45,0.45) node {$\vdots$}; \draw (1.45,2.6) node {$u_{N}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.7] \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(0.25,1.6)--(0.25,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(-0.35,0)--(-0.35,1.9); \draw (0,0) [-,color=black, reverse directed, thick, rounded corners=7pt] +(-0.35,1.9)--(0.25,1.9); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,2.2)--(-0.35,2.55)--(-0.35,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(0.25,3)--(0.25,2.55)--(-0.35,1.9); \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(-0.75,0.7)--(-0.75,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(-1.35,0)--(-1.35,1.); \draw (0,0) [-,color=black, reverse directed, thick, rounded corners=7pt] +(-1.35,1.)--(-0.75,1.); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.75,1.3)--(-1.35,1.65)--(-1.35,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(-0.75,3)--(-0.75,1.65)--(-1.35,1.); \draw (0.3,-0.25) node {$\lambda_1+2$}; \draw (-0.1,1.75) node {{\tiny $\lambda_1+1$}}; \draw (-0.3,-0.25) node {$\lambda_1$}; \draw (-0.8,-0.25) node 
{$\lambda_2+2$}; \draw (-1.1,0.85) node {{\tiny $\lambda_2+1$}}; \draw (-1.35,-0.25) node {$\lambda_2$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.75,0.7)--(1.,0.7); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.75,1.)--(1.,1.); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.75,1.3)--(1.,1.3); \draw (1.,0.7)[fill=black] circle (0.15ex); \draw (1.,1.)[fill=black] circle (0.15ex); \draw (1.,1.3)[fill=black] circle (0.15ex); \draw (0.65,0.82) node {$\lambda_2+2$}; \draw (0.65,1.12) node {$\lambda_2+1$}; \draw (0.48,1.42) node {$\lambda_2$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.25,1.6)--(1.,1.6); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.25,1.9)--(1.,1.9); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(0.25,2.2)--(1.,2.2); \draw (1.,1.6)[fill=black] circle (0.15ex); \draw (1.,1.9)[fill=black] circle (0.15ex); \draw (1.,2.2)[fill=black] circle (0.15ex); \draw (0.65,1.72) node {$\lambda_1+2$}; \draw (0.65,2.02) node {$\lambda_1+1$}; \draw (0.48,2.32) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.65,0)--(-1.65,3); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,3); \draw (-1.65,-0.25) node {$0$}; \draw (1.25,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.75,0.25)--(1.65,0.25); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.75,2.75)--(1.65,2.75); \draw (1.45,0.1) node {$u_1$}; \draw (1.45,0.45) node {$\vdots$}; \draw (1.45,2.6) node {$u_{N}$}; \draw (-2,1.5) node {$=$}; \end{tikzpicture} \end{minipage} \end{center} \caption{The generalized density operator ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$ for the case $SU(3)$ and $m=2$, i.e. two bunches of semi-infinite lines. The semi-infinite lines can be rearranged without changing the correlations in the form of two vertical lines with spectral parameters $\lambda_i$ and $\lambda_i+2$, however with different arrow directions and, correspondingly, different representations.} \label{analyticity} \end{figure} This manipulation makes the analytical structure of the generalized density operator ${\mathbb D}_m(\lambda_1,\dots,\lambda_m)$ clear. For the $SU(3)$ case illustrated in Figure \ref{analyticity} the numerator of the (unnormalized) matrix elements must be a multivariate polynomial of degree $2N$ in the variables $\lambda_j$ times an $N$-independent number of linear factors of the type $\lambda_i-\lambda_j+\mathrm{const}$. Therefore, the matrix elements of the generalized density operator after normalization are of the form \eq \frac{P(\lambda_1,\dots,\lambda_m)}{\prod_{i=1}^m\Lambda_0^{(3)}(\lambda_i)\Lambda_0^{(\bar 3)}(\lambda_i+2) \cdot \prod_{i<j}\Phi(\lambda_i-\lambda_j)}, \label{analyt-prop} \en where $P(\lambda_1,\dots,\lambda_m)$ is a multivariate polynomial of degree up to $2N + 6(m-1)m/2$ in each variable, $\Phi(\delta):=(4-\delta^2)(1-\delta^2)^2$ stems from the intersection of three semi-infinite lines with two vertical lines, $\Lambda_0^{(3)}(\lambda)$ is the leading eigenvalue of the quantum transfer matrix with fundamental representation in the auxiliary space, and $\Lambda_0^{(\bar 3)}(\lambda)$ is the leading eigenvalue of the quantum transfer matrix with anti-fundamental representation in the auxiliary space. The normalization in the denominator is obtained by use of (\ref{sym-property1}-\ref{sym-property2}) to move the lower anti-symmetrizer to the left, which generates the $\Phi(\delta)$ function.
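To be concrete, for $m=2$ the normalized matrix elements read \eq \frac{P(\lambda_1,\lambda_2)}{\Lambda_0^{(3)}(\lambda_1)\Lambda_0^{(\bar 3)}(\lambda_1+2)\, \Lambda_0^{(3)}(\lambda_2)\Lambda_0^{(\bar 3)}(\lambda_2+2)\, \Phi(\lambda_1-\lambda_2)}, \en with $P(\lambda_1,\lambda_2)$ of degree up to $2N+6$ in each variable; the excess of $6$ over $2N$ matches the degree of the single factor $\Phi(\lambda_1-\lambda_2)$ and reflects the $N$-independent linear factors mentioned above.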
Finally, by use of properties (\ref{other1}-\ref{other2}) we obtain two decoupled up- and down-going lines with spectral parameters $\lambda_i$ and $\lambda_i+2$, which are associated with the corresponding leading eigenvalues. In the next section, we discuss the solution of the above functional equations for the two- and three-sites density operators of the $SU(3)$ spin chain. \section{SU(3) spin chain}\label{su3section} In order to compute the two ($m=2$) and three ($m=3$) sites density operator $\mathbb D_m(\lambda_1,\dots,\lambda_m)$ we have to propose a suitable ansatz. This is usually done by choosing a certain number of linearly independent operators (states) as a basis and working out the resulting equations for the expansion coefficients. However, the number of these operators is determined by the dimension of the singlet subspace of the total space of density operators on $m$ sites, referred to as $\dim(m)$, which for the two- and three-sites cases is $5$ and $42$, respectively. Although the two-sites case can still be handled, the large number of required independent operators makes the three-sites problem very hard to treat. \begin{figure}[h] \begin{tikzpicture}[scale=1.5] \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.2)--(1.1,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.5)--(1.1,1.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(1.1,1.8); \draw (1.1,1.2)[fill=black] circle (0.15ex); \draw (1.1,1.5)[fill=black] circle (0.15ex); \draw (1.1,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.35) node {$\lambda_1+2$}; \draw (0.65,1.65) node {$\lambda_1+1$}; \draw (0.45,1.95) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(0.25,2.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.6,1.65)--(1.6,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,0.25)--(1.6,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.15,1.65)--(2.15,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.15,0.25)--(2.15,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.65,0.25)--(2.65,2.75); \draw (0.25,0.) node {$0$}; \draw (1.6,0.) node {$\lambda_2$}; \draw (1.9,0.) node {$\dots$}; \draw (2.2,0.) node {$\lambda_m$}; \draw (2.65,0.)
node {$0$}; \draw (1.6,1.35)[fill=black] circle (0.15ex); \draw (1.6,1.65)[fill=black] circle (0.15ex); \draw (2.15,1.35)[fill=black] circle (0.15ex); \draw (2.15,1.65)[fill=black] circle (0.15ex); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.5)--(3.2,0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,2.5)--(3.2,2.5); \draw (-1.9,2.5) node {a)}; \draw (-1.9,1.5) node {${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)=$}; \draw (-0.4,0.5) node {$\dots$}; \draw (-0.4,2.5) node {$\dots$}; \draw (3.4,0.5) node {$\dots$}; \draw (3.4,2.5) node {$\dots$}; \draw (-0.4,1.2) node {$\dots$}; \draw (-0.4,1.5) node {$\dots$}; \draw (-0.4,1.8) node {$\dots$}; \draw (2.9,0.6) node {$u_1$}; \draw (2.9,0.95) node {$\vdots$}; \draw (2.9,2.6) node {$u_{N}$}; \end{tikzpicture} \begin{tikzpicture}[scale=1.5] \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.2)--(1.1,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.5)--(1.1,1.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,1.8)--(1.1,1.8); \draw (1.1,1.2)[fill=black] circle (0.15ex); \draw (1.1,1.5)[fill=black] circle (0.15ex); \draw (1.1,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.35) node {$\lambda_1+2$}; \draw (0.65,1.65) node {$\lambda_1+1$}; \draw (0.45,1.95) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.6,1.65)--(1.6,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,0.25)--(1.6,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.15,1.65)--(2.15,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.15,0.25)--(2.15,1.35); \draw (1.6,0) node {$\lambda_2$}; \draw (1.9,0) node {$\dots$}; \draw (2.2,0) node {$\lambda_m$}; \draw (1.6,1.35)[fill=black] circle (0.15ex); \draw (1.6,1.65)[fill=black] circle (0.15ex); \draw (2.15,1.35)[fill=black] circle (0.15ex); \draw (2.15,1.65)[fill=black] circle (0.15ex); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,0.5)--(3.25,0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0,2.5)--(3.25,2.5); \draw (-1.9,2.5) node {b)}; \draw (-1.9,1.5) node {${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)=$}; \draw (2.9,0.6) node {$u_1$}; \draw (2.9,0.95) node {$\vdots$}; \draw (2.9,2.6) node {$u_{N}$}; \draw (-0.5,1.5) node {$\widetilde{\Phi}_L$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(-0.25,0.25)--(-0.25,2.75)--(-0.8,1.5)--(-0.25,0.25); \draw (3.5,1.5) node {$\Phi_R$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(3.25,0.25)--(3.25,2.75)--(3.8,1.5)--(3.25,0.25); \end{tikzpicture} \caption{Graphical illustration of the un-normalized mixed density operator ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)$ for the $SU(3)$ case. a) We have an infinite cylinder with $N$ infinitely long horizontal lines carrying spectral parameters $u_j$, $1$ bunch of three semi-infinite lines carrying spectral parameters $\{\lambda_1,\lambda_1+1,\lambda_1+2 \}$ and $m-1$ vertically open bonds associated to the spectral parameters $\lambda_2,\dots,\lambda_m$; b) the infinitely many column-to-column transfer matrices to the left and right replaced by the boundary states they project onto.} \label{mixDmatrix} \end{figure} A crucial observation to turn the three-sites case feasible is to reduce the number of bunches of semi-infinite horizontal lines to the minimal possible. 
For the problem at hand we found that this can be done by working with only one bunch of semi-infinite horizontal lines and the remaining $m-1$ bunches reduced by partial anti-symmetrization to $m-1$ vertically open bonds resulting in a mixed density operator denoted by ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)$ (see Figure \ref{mixDmatrix}). This strategy reduces the dimension of the singlet subspace for the two and three-sites case to $\dim(2)=3$ and $\dim(3)=11$, respectively. Therefore the density operator can be written as ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)=\sum_{k=1}^{\dim(m)} \rho_k^{[m]}(\lambda_1,\dots,\lambda_m) P_{k}^{[m]}$, for conveniently chosen operators $P_{k}^{[m]}$. In addition, the mixed density operator (Figure \ref{mixDmatrix}) has also the advantage of simpler reduction properties, since under partial anti-symmetrization of the semi-infinite lines it is reduced directly to the physical density operator without the action of $R$-matrices. \begin{figure} \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.6] \draw (-0.,2.7) node {a)}; \draw (-0.27,1.2) node {$\dots$}; \draw (-0.27,1.5) node {$\dots$}; \draw (-0.27,1.8) node {$\dots$}; \draw (-0.22,0.5) node {$\dots$}; \draw (-0.22,2.5) node {$\dots$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.2)--(1.,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.5)--(1.,1.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.8)--(1.,1.8); \draw (1.,1.2)[fill=black] circle (0.15ex); \draw (1.,1.5)[fill=black] circle (0.15ex); \draw (1.,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.32) node {$\lambda_1+2$}; \draw (0.65,1.62) node {$\lambda_1+1$}; \draw (0.48,1.95) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,2.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,1.65)--(1.25,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,1.65)--(1.75,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.25,0)--(2.25,2.75); \draw (0.25,-0.25) node {$0$}; \draw (1.2,-0.25) node {$\lambda_2$}; \draw (1.52 ,-0.25) node {$\dots$}; \draw (1.52 ,0.25) node {$\dots$}; \draw (1.8,-0.25) node {$\lambda_m$}; \draw (2.25,-0.25) node {$0$}; \draw (1.25,1.35)[fill=black] circle (0.15ex); \draw (1.25,1.65)[fill=black] circle (0.15ex); \draw (1.75,1.35)[fill=black] circle (0.15ex); \draw (1.75,1.65)[fill=black] circle (0.15ex); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,0.5)--(2.75,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,2.5)--(2.75,2.5); \draw (2.5,0.6) node {$u_1$}; \draw (2.5,0.9) node {$\vdots$}; \draw (2.5,2.6) node {$u_{N}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.6] \draw (-0.27,1.2) node {$\dots$}; \draw (-0.27,1.5) node {$\dots$}; \draw (-0.27,0.9) node {$\dots$}; \draw (-0.22,0.5) node {$\dots$}; \draw (-0.22,2.5) node {$\dots$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.2)--(1.,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.5)--(1.,1.5); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(1.,1.8)--(1.,2)--(2,2)--(2,1.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.,1.5)--(2.,0.9)--(-0.1,0.9); \draw (1.,1.2)[fill=black] circle 
(0.15ex); \draw (1.,1.5)[fill=black] circle (0.15ex); \draw (1.,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.02) node {$\lambda_1+3$}; \draw (0.65,1.32) node {$\lambda_1+2$}; \draw (0.65,1.62) node {$\lambda_1+1$}; \draw (0.48,1.95) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,2.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,1.65)--(1.25,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,1.65)--(1.75,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.25,0)--(2.25,2.75); \draw (0.25,-0.25) node {$0$}; \draw (1.2,-0.25) node {$\lambda_2$}; \draw (1.52 ,-0.25) node {$\dots$}; \draw (1.52 ,0.25) node {$\dots$}; \draw (1.8,-0.25) node {$\lambda_m$}; \draw (2.25,-0.25) node {$0$}; \draw (1.25,1.35)[fill=black] circle (0.15ex); \draw (1.25,1.65)[fill=black] circle (0.15ex); \draw (1.75,1.35)[fill=black] circle (0.15ex); \draw (1.75,1.65)[fill=black] circle (0.15ex); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,0.5)--(2.75,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,2.5)--(2.75,2.5); \draw (2.5,0.6) node {$u_1$}; \draw (2.5,0.9) node {$\vdots$}; \draw (2.5,2.6) node {$u_{N}$}; \draw (-1.,1.5) node {$\stackrel{\lambda_1=u_i}{=}$}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.6] \draw (-0.,2.7) node {b)}; \draw (-0.27,1.2) node {$\dots$}; \draw (-0.27,1.5) node {$\dots$}; \draw (-0.27,1.8) node {$\dots$}; \draw (-0.22,0.5) node {$\dots$}; \draw (-0.22,2.5) node {$\dots$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.2)--(1.,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.5)--(1.,1.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.8)--(1.,1.8); \draw (1.,1.2)[fill=black] circle (0.15ex); \draw (1.,1.5)[fill=black] circle (0.15ex); \draw (1.,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.32) node {$\lambda_1+2$}; \draw (0.65,1.62) node {$\lambda_1+1$}; \draw (0.48,1.95) node {$\lambda_1$}; \draw (1.15,1.2) node {$i_1$}; \draw (1.15,1.5) node {$i_2$}; \draw (1.15,1.8) node {$i_3$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,2.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.45,1.8)--(1.45,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.45,0)--(1.45,1.2); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.85,1.8)--(1.85,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.85,0)--(1.85,1.2); \draw (1.45,-0.25) node {$\lambda_2$}; \draw (1.9,-0.25) node {$\lambda_3$}; \draw (1.45,1.35) node {$s_1$}; \draw (1.45,1.65) node {$r_1$}; \draw (1.85,1.35) node {$s_2$}; \draw (1.85,1.65) node {$r_2$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.25,0)--(2.25,2.75); \draw (1.45,1.2)[fill=black] circle (0.15ex); \draw (1.45,1.8)[fill=black] circle (0.15ex); \draw (1.85,1.2)[fill=black] circle (0.15ex); \draw (1.85,1.8)[fill=black] circle (0.15ex); \draw (0.25,-0.25) node {$0$}; \draw (2.25,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,0.5)--(2.75,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,2.5)--(2.75,2.5); \draw (2.5,0.6) node {$u_1$}; \draw (2.5,0.9) node {$\vdots$}; \draw (2.5,2.6) node {$u_{N}$}; \end{tikzpicture} 
\end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.6] \draw (-0.77,1.2) node {$\dots$}; \draw (-0.77,1.5) node {$\dots$}; \draw (-0.77,0.9) node {$\dots$}; \draw (-0.72,0.5) node {$\dots$}; \draw (-0.72,2.5) node {$\dots$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.6,1.2)--(0.7,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.6,1.5)--(0.7,1.5); \draw (0,0) [->|,color=black, thick, rounded corners=7pt] +(0.7,2)--(2.1,2)--(2.1,1.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.1,1.5)--(2.1,0.9)--(-0.6,0.9); \draw (0.7,1.2)[fill=black] circle (0.15ex); \draw (0.7,1.5)[fill=black] circle (0.15ex); \draw (0.7,2)[fill=black] circle (0.15ex); \draw (-0.05,1.02) node {$\lambda_1+3$}; \draw (-0.05,1.32) node {$\lambda_1+2$}; \draw (-0.05,1.62) node {$\lambda_1+1$}; \draw (0.95,2.15) node {$\lambda_1$}; \draw (0.75,1.34) node {$\bar i_2=i_1$}; \draw (0.75,1.65) node {$\bar i_3=i_2$}; \small \draw (1.2,1.85) node {$i_3$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.4,0)--(-0.4,2.75); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.45,1.8)--(1.45,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.45,0)--(1.45,1.2); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.85,1.8)--(1.85,2.75); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.85,0)--(1.85,1.2); \draw (1.25,1.05) node {$\bar i_1$}; \draw (1.45,-0.25) node {$\lambda_2$}; \draw (1.9,-0.25) node {$\lambda_3$}; \draw (1.35,0.65) node {$\bar s_1$}; \draw (2.0,0.65) node {$\bar s_2$}; \draw (1.35,2.2) node {$\bar r_1$}; \draw (2.,2.2) node {$\bar r_2$}; \draw (1.65,2.1) node {$\alpha_1$}; \draw (2.25,1.75) node {$\alpha_2$}; \draw (1.65,1.05) node {$\beta_2$}; \draw (2.25,1.15) node {$\beta_1$}; \draw (1.45,1.35) node {$s_1$}; \draw (1.45,1.65) node {$r_1$}; \draw (1.85,1.35) node {$s_2$}; \draw (1.85,1.65) node {$r_2$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.4,0)--(2.4,2.75); \draw (1.45,1.2)[fill=black] circle (0.15ex); \draw (1.45,1.8)[fill=black] circle (0.15ex); \draw (1.85,1.2)[fill=black] circle (0.15ex); \draw (1.85,1.8)[fill=black] circle (0.15ex); \draw (-0.4,-0.25) node {$0$}; \draw (2.4,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.6,0.5)--(2.85,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.6,2.5)--(2.85,2.5); \draw (2.6,0.6) node {$u_1$}; \draw (2.6,0.9) node {$\vdots$}; \draw (2.6,2.6) node {$u_{N}$}; \draw (-1.4,1.5) node {$\stackrel{\lambda_1=u_i}{=}$}; \end{tikzpicture} \end{minipage} \end{center} \caption{Graphical depiction of the functional equations for the mixed density operator ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)$ valid for $\lambda_1=u_i$: for: a) $m$-sites; b) $3$-sites matrix elements. } \label{mixed-funeqs} \end{figure} The mixed density operator also fulfills a functional equation of the form (\ref{qkzsun}). More specifically the equation satisfied by $\mathfrak D_m(\lambda_1,\dots,\lambda_m)$ is shown for the $SU(3)$ case in Figure \ref{mixed-funeqs}a, which again is derived for $\lambda_1=u_i$ by use of unitarity, Yang-Baxter and special unitarity condition. 
\eq {\mathfrak A}_m(\lambda_1,\dots,\lambda_m)[{\mathfrak D}_m(\lambda_1+1,\lambda_2,\dots,\lambda_m)] = {\mathfrak D}_m(\lambda_1,\lambda_2,\dots,\lambda_m), \qquad \lambda_1 = u_i, \label{qkzsun-frak} \en where ${\mathfrak A}_m(\lambda_1,\dots,\lambda_m)$ is a linear operator consisting of a product of $2(m-1)$ $R$-matrices, depicted in Figure \ref{mixed-funeqs} for the case $SU(3)$ and given by (see Figure \ref{mixed-funeqs}b), \bear &&{{\mathfrak A}_m(\lambda_1,\dots,\lambda_m)}^{i_1 i_2 i_3 r_1 \cdots r_{m-1} s_1 \cdots s_{m-1}}_{\bar i_1 \bar i_2 \bar i_3 \bar r_1 \cdots \bar r_{m-1} \bar s_1 \cdots \bar s_{m-1}}= \label{frakA}\\ &=& [\check{R}^{(3,3)}(\lambda_1-\lambda_2)]_{ i_3 r_1}^{ \bar r_1 \alpha_1} [\check{R}^{(3,3)}(\lambda_1-\lambda_3)]_{ \alpha_1 r_2}^{ \bar r_2 \alpha_2}\cdots [\check{R}^{(3,3)}(\lambda_1-\lambda_m)]_{\alpha_{m-2} r_{m-1} }^{\bar r_{m-1} \alpha_{m-1}} \delta_{\beta_1}^{\alpha_{m-1}} \delta_{ i_1}^{\bar i_2} \delta_{ i_2}^{\bar i_3} \nonumber \\ &\times& [\check{R}^{(3,\bar 3)}(\lambda_1+3-\lambda_m)]_{ \beta_1 s_{m-1}}^{ \bar s_{m-1} \beta_2 }\cdots [\check{R}^{(3,\bar 3)}(\lambda_1+3-\lambda_3)]_{\beta_{m-2} s_2}^{ \bar s_2 \beta_{m-1}}[\check{R}^{( 3,\bar 3)}(\lambda_1+3-\lambda_2)]_{ \beta_{m-1} s_1}^{ \bar s_1 \bar i_1}. \nonumber \ear As for the generalized density operator, the analytical properties of the mixed density operator $\mathfrak D_m(\lambda_1,\dots,\lambda_m)$ with regard to the dependence on $\lambda_1$ are not obvious from the definition of the operator. However, by exploiting the properties (\ref{sym-property1}-\ref{sym-property2}) as above, we transform the semi-infinite horizontal lines carrying the spectral parameter $\lambda_1$ into two vertical lines, see Figure \ref{analyticitymixed}.
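For later reference we note that in the two-sites case only two $\check{R}$-matrices remain in (\ref{frakA}): setting $m=2$ and contracting $\delta_{\beta_1}^{\alpha_1}$ gives \eq {{\mathfrak A}_2(\lambda_1,\lambda_2)}^{i_1 i_2 i_3 r_1 s_1}_{\bar i_1 \bar i_2 \bar i_3 \bar r_1 \bar s_1}= [\check{R}^{(3,3)}(\lambda_1-\lambda_2)]_{ i_3 r_1}^{ \bar r_1 \alpha_1} [\check{R}^{(3,\bar 3)}(\lambda_1+3-\lambda_2)]_{ \alpha_1 s_1}^{ \bar s_1 \bar i_1}\, \delta_{ i_1}^{\bar i_2} \delta_{ i_2}^{\bar i_3}. \en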
\begin{figure} \begin{center} \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.7] \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(-0.5,1.2)--(-0.5,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(-1.1,0)--(-1.1,1.5); \draw (0,0) [-,color=black, reverse directed, thick, rounded corners=7pt] +(-1.1,1.5)--(-0.5,1.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.5,1.8)--(-1.1,2.1)--(-1.1,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(-0.5,3)--(-0.5,2.1)--(-1.1,1.5); \draw (-0.27,1.2) node {$\dots$}; \draw (-0.27,1.5) node {$\dots$}; \draw (-0.27,1.8) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.2,0.5)--(-0.4,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-1.2,2.5)--(-0.4,2.5); \draw (-0.22,0.5) node {$\dots$}; \draw (-0.22,2.5) node {$\dots$}; \draw (-0.45,-0.25) node {$\lambda_1+2$}; \draw (-0.85,1.35) node {{\tiny $\lambda_1+1$}}; \draw (-1.1,-0.25) node {$\lambda_1$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.2)--(1.,1.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.5)--(1.,1.5); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.1,1.8)--(1.,1.8); \draw (1.,1.2)[fill=black] circle (0.15ex); \draw (1.,1.5)[fill=black] circle (0.15ex); \draw (1.,1.8)[fill=black] circle (0.15ex); \draw (0.65,1.32) node {$\lambda_1+2$}; \draw (0.65,1.62) node {$\lambda_1+1$}; \draw (0.48,1.95) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,3); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,1.65)--(1.25,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.25,0)--(1.25,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.75,1.65)--(1.75,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.75,0)--(1.75,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.25,0)--(2.25,3); \draw (0.25,-0.25) node {$0$}; \draw (1.2,-0.25) node {$\lambda_2$}; \draw (1.52 ,-0.25) node {$\dots$}; \draw (1.8,-0.25) node {$\lambda_m$}; \draw (2.25,-0.25) node {$0$}; \draw (1.25,1.35)[fill=black] circle (0.15ex); \draw (1.25,1.65)[fill=black] circle (0.15ex); \draw (1.75,1.35)[fill=black] circle (0.15ex); \draw (1.75,1.65)[fill=black] circle (0.15ex); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,0.5)--(2.75,0.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(-0.05,2.5)--(2.75,2.5); \draw (2.5,0.6) node {$u_1$}; \draw (2.5,0.9) node {$\vdots$}; \draw (2.5,2.6) node {$u_{N}$}; \end{tikzpicture} \end{minipage \begin{minipage}{0.5\linewidth} \begin{tikzpicture}[scale=1.7] \draw (0,0) [->,color=black, thick , rounded corners=7pt] +(1.25,1.3)--(1.25,0); \draw (0,0) [-,color=black, directed, thick , rounded corners=7pt] +(0.65,0)--(0.65,1.5); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.25,1.5)--(0.65,1.5); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.25,1.7)--(0.65,2.1)--(0.65,3); \draw (0,0) [-,color=black, directed, thick, rounded corners=7pt] +(1.25,3)--(1.25,2.1)--(0.65,1.5); \draw (1.25,1.3)[fill=black] circle (0.15ex); \draw (1.25,1.5)[fill=black] circle (0.15ex); \draw (1.25,1.7)[fill=black] circle (0.15ex); \draw (0.65,1.5)[fill=black] circle (0.15ex); \draw (1.15,-0.25) node {$\lambda_1+2$}; \draw (0.95,1.4) node {{\tiny $\lambda_1+1$}}; \draw (0.65,-0.25) node {$\lambda_1$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,3); 
\draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.6,1.65)--(1.6,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.6,0)--(1.6,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.15,1.65)--(2.15,3); \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(2.15,0)--(2.15,1.35); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.65,0)--(2.65,3); \draw (0.25,-0.25) node {$0$}; \draw (1.65,-0.25) node {$\lambda_2$}; \draw (1.95 ,-0.25) node {$\dots$}; \draw (2.25,-0.25) node {$\lambda_m$}; \draw (2.65,-0.25) node {$0$}; \draw (1.6,1.35)[fill=black] circle (0.15ex); \draw (1.6,1.65)[fill=black] circle (0.15ex); \draw (2.15,1.35)[fill=black] circle (0.15ex); \draw (2.15,1.65)[fill=black] circle (0.15ex); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.5)--(3.25,0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,2.5)--(3.25,2.5); \draw (-0.5,1.5) node {$=$}; \draw (2.9,0.6) node {$u_1$}; \draw (2.9,0.9) node {$\vdots$}; \draw (2.9,2.6) node {$u_{N}$}; \end{tikzpicture} \end{minipage} \end{center} \caption{The mixed density operator ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)$ for the case $SU(3)$ and $m=3$, i.e. one bunch of semi-infinite horizontal lines and two open vertical lines. The semi-infinite lines can be rearranged without changing the correlations in the form of two vertical lines with spectral parameters $\lambda_1$ and $\lambda_1+2$, however with different arrow directions and, correspondingly, different representations. } \label{analyticitymixed} \end{figure} This makes the analytical structure of the mixed density operator ${\mathfrak D}_m(\lambda_1,\dots,\lambda_m)$ clear, since for the $SU(3)$ case the numerator of the matrix elements must be a multivariate polynomial of degree $2N$ in the variable $\lambda_1$ and of degree $N$ in each of the remaining $\lambda_i$, $i=2,3,\dots,m$, \eq \frac{P(\lambda_1,\dots,\lambda_m)} {\Lambda_0^{(3)}(\lambda_1)\Lambda_0^{(\bar 3)}(\lambda_1+2)\prod_{i=2}^m\Lambda_0^{(3)}(\lambda_i) }, \label{analyt-prop-mixed} \en where again $\Lambda_0^{(3)}(\lambda)$ is the leading eigenvalue of the quantum transfer matrix with fundamental representation in the auxiliary space and $\Lambda_0^{(\bar 3)}(\lambda)$ is the leading eigenvalue of the quantum transfer matrix with anti-fundamental representation in the auxiliary space. \subsection{Computation of the two-sites density operator} Due to $SU(n)$ symmetry, the usual (normalized) two-point density operator for the fundamental--fundamental and also for the anti-fundamental--fundamental representation can be written as follows, \bear D_2^{(nn)}(\lambda_1,\lambda_2)&=&\left(\frac{1}{n^2}-\frac{\alpha_{nn}(\lambda_1,\lambda_2)}{n} \right) I + \alpha_{nn}(\lambda_1,\lambda_2) P, \label{qq}\\ D_2^{(\bar{n}n)}(\lambda_1,\lambda_2)&=&\left(\frac{1}{n^2}-\frac{\alpha_{\bar{n}n}(\lambda_1,\lambda_2)}{n} \right) I + \alpha_{\bar{n}n}(\lambda_1,\lambda_2)E. \ear It is convenient to define some simple two-point correlation functions to work with, \bear \omega_{nn}(\lambda_1,\lambda_2)&=&\tr[P D_2^{(nn)}] =\frac{1}{n}+ (n^2-1) \alpha_{nn}(\lambda_1,\lambda_2), \label{ome1}\\ \omega_{\bar{n}n}(\lambda_1,\lambda_2)&=&\tr[E D_2^{(\bar{n}n)}]=\frac{1}{n}+ (n^2-1) \alpha_{\bar{n}n}(\lambda_1,\lambda_2). \ear The above correlation functions will be useful in the coming computations.
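As a quick consistency check of (\ref{ome1}): using $\tr I=n^2$, $\tr P=n$ and $P^2=I$ one finds \eq \tr[P D_2^{(nn)}]=\left(\frac{1}{n^2}-\frac{\alpha_{nn}}{n}\right)\tr P+\alpha_{nn}\tr I =\frac{1}{n}+(n^2-1)\,\alpha_{nn}, \en while $\tr D_2^{(nn)}=1$ for any value of $\alpha_{nn}$, as required of a density operator. The relation for $\omega_{\bar{n}n}$ follows in the same way, assuming the Temperley-Lieb normalization $E^2=nE$, $\tr E=n$ for the operator $E$ defined before.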
The operator (\ref{qq}) represents the full density matrix whose non-trivial matrix elements are ${D_2^{(nn)}}_{ii}^{ii}(\lambda_1,\lambda_2)=\frac{1}{n^2}+\frac{(n-1)}{n} \alpha_{nn}(\lambda_1,\lambda_2)$, ${D_2^{(nn)}}_{ij}^{ji}(\lambda_1,\lambda_2)=\alpha_{nn}(\lambda_1,\lambda_2)$, ${D_2^{(nn)}}_{ij}^{ij}(\lambda_1,\lambda_2)=\frac{1}{n^2}-\frac{1}{n}\alpha_{nn}(\lambda_1,\lambda_2)$ for $i,j=1,\cdots,n$, $i\neq j$. Therefore, in order to fully determine the two-sites density operator (\ref{qq}), one only needs to compute $\alpha_{nn}(\lambda_1,\lambda_2)$ or equivalently $\omega_{nn}(\lambda_1,\lambda_2)$ above. As discussed above, for deriving a closed set of functional equations we have to use the mixed density operator ${\mathfrak D}_2$ and due to the $SU(3)$ symmetry this operator can be explicitly written as a superposition of $3$ linearly independent operators as follows, \eq {\mathfrak D}_2(\lambda_1,\lambda_2)= \rho_1^{[2]}(\lambda_1,\lambda_2) P_{1}^{[2]} + \rho_2^{[2]}(\lambda_1,\lambda_2) P_{2}^{[2]} + \rho_3^{[2]}(\lambda_1,\lambda_2) P_{3}^{[2]}, \label{D2frak} \en where the operators $P_{k}^{[2]}$ are chosen as, \eq \begin{minipage}{0.33\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_1^{[2]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.4,0.4) -- +(0.5,0.); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.5,0.); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.4) -- +(0.5,0.); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.33\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_2^{[2]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.25,-0.45) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,-0.05)-- +(0.5,-0.25); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .45 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.5,-0.25); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.4) -- +(0.55,0.1); \draw (0,0) [-,color=black, thick, rounded corners=8pt]+(0.65,-0.05) -- +(0.75,-0.3) -- +(0.75,-0.5); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.33\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_3^{[2]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.35) -- +(0.75,-0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,0.05)-- +(0.45,0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.45) -- +(0.45,0.25); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.45,0.25); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \en and the functions $\rho_{k}^{[2]}(\lambda_1,\lambda_2)$ are to be determined. It is worth noting that we only need three linearly independent operators/states, which can be seen as tensors anti-symmetric in the three connected indices (black dots) and symmetric in the other two. Alternatively, we can consider the operators $P_j^{[2]}$ as vectors in $[\bar 3]^{\otimes 4} \otimes [3]$, with the indices $i_1,i_2,i_3,r_1,s_1=1,2,3$ assigned to the dots, such that e.g. for $P_1^{[2]}$ we have \eq \ \begin{tikzpicture}[scale=1] \draw (-5,0) node {$\left(P_1^{[2]}\right)_{i_1,i_2,i_3,r_1}^{s_1} :=\quad\epsilon_{i_1,i_2,i_3}\cdot\delta_{r_1}^{s_1}\quad \equiv$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.4,0.4) -- +(0.5,0.); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.5,0.); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.4) -- +(0.5,0.); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (-0.25,0.5) node {$i_1$}; \draw (-0.25,0) node {$i_2$}; \draw (-0.25,-0.5) node {$i_3$}; \draw (0.75,0.75) node {$r_1$}; \draw (0.75,-0.75) node {$s_1$}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {.}; \end{tikzpicture} \en Note that upper indices refer to states from $[3]$ and lower indices refer to states from $[\bar 3]$. Furthermore, this object has the right transformation properties, namely invariance under $SU(3)$, because for arbitrary $g\in SU(3)$ we have $g^*\otimes g^*\otimes g^*\cdot\epsilon = \mbox{det}(g^*)\epsilon=\epsilon$ and $g^*\otimes g\cdot \mbox{id} = \mbox{id}$. For convenience we have chosen to work with operators which resemble the usual identity, permutation and Temperley-Lieb operators defined before and which, after partial anti-symmetrization, reduce to a combination of identity, permutation and Temperley-Lieb. This way, the density operator (\ref{D2frak}) nicely reduces to the usual density operator (\ref{qq}) as $D_2(\lambda_1,\lambda_2)=(2 \rho^{[2]}_{1} + \rho^{[2]}_{3}) I + (2 \rho^{[2]}_{2} - \rho^{[2]}_{3}) P_{12}$ thanks to the properties (\ref{other2}).
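These invariance and independence statements are easy to verify by direct computation. The following minimal numerical sketch (Python/NumPy; a cross-check only, not part of the derivation) builds the three tensors in the index order $(i_1,i_2,i_3,r_1;s_1)$; the assignments $P_2^{[2]}=\epsilon_{i_2 i_3 r_1}\delta_{i_1}^{s_1}$ and $P_3^{[2]}=\epsilon_{i_1 i_2 r_1}\delta_{i_3}^{s_1}$ are read off the diagrams and are our assumption. \begin{verbatim}
import numpy as np

# Levi-Civita tensor epsilon_{abc} for SU(3)
eps = np.zeros((3, 3, 3))
for a, b, c in [(0,1,2), (1,2,0), (2,0,1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
delta = np.eye(3)

# index order (i1, i2, i3, r1, s1); P2, P3 assignments read off the diagrams
P1 = np.einsum('abc,de->abcde', eps, delta)  # eps_{i1 i2 i3} delta_{r1}^{s1}
P2 = np.einsum('bcd,ae->abcde', eps, delta)  # eps_{i2 i3 r1} delta_{i1}^{s1}
P3 = np.einsum('abd,ce->abcde', eps, delta)  # eps_{i1 i2 r1} delta_{i3}^{s1}

# the three operators are linearly independent
assert np.linalg.matrix_rank(np.stack([P.ravel() for P in (P1, P2, P3)])) == 3

# SU(3) invariance: lower indices transform with g*, the upper one with g
rng = np.random.default_rng(1)
q, r = np.linalg.qr(rng.normal(size=(3,3)) + 1j*rng.normal(size=(3,3)))
g = q @ np.diag(np.diag(r) / abs(np.diag(r)))  # random unitary
g = g / np.linalg.det(g)**(1/3)                # fix det g = 1
for P in (P1, P2, P3):
    Pg = np.einsum('ia,jb,kc,ld,me,abcde->ijklm',
                   g.conj(), g.conj(), g.conj(), g.conj(), g, P + 0j)
    assert np.allclose(Pg, P)
\end{verbatim}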
Using the above representation of the density operator (\ref{D2frak}), equation (\ref{qkzsun-frak}) implies the following set of functional equations \eq \left(\begin{array}{c} \rho_{1}^{[2]}(\lambda_1,\lambda_2) \\ \rho_{2}^{[2]}(\lambda_1,\lambda_2) \\ \rho_{3}^{[2]}(\lambda_1,\lambda_2) \end{array}\right)= A^{[2]}(\lambda) \cdot\left(\begin{array}{c} \rho_{1}^{[2]}(\lambda_1+1,\lambda_2) \\ \rho_{2}^{[2]}(\lambda_1+1,\lambda_2) \\ \rho_{3}^{[2]}(\lambda_1+1,\lambda_2) \end{array}\right), \qquad \lambda_1=u_i, \label{matrixL-qKZ} \en where $\lambda=\lambda_1-\lambda_2$ and the matrix $A^{[2]}(\lambda)$ is given by, \eq A^{[2]}(\lambda)=\left(\begin{array}{ccc} \frac{(-1 + 3\lambda + \lambda^2)}{\lambda( \lambda + 3)} & \frac{(-2 + 2\lambda + \lambda^2)}{\lambda( \lambda+3)} & \frac{1}{ \lambda + 3} \\ \frac{3}{\lambda( \lambda + 3)} & \frac{-(-3 + \lambda + \lambda^2)}{\lambda( \lambda + 3)} & \frac{\lambda}{ \lambda + 3} \\ 0 & \frac{-(-1 + \lambda)}{\lambda} & 0 \end{array}\right). \nonumber \en These equations can be disentangled by the transformation matrix \eq \left(\begin{array}{c} 1 \\ \omega_{33}(\lambda_1,\lambda_2) \\ \omega_{\bar{3}3}(\lambda_1-1,\lambda_2) \end{array}\right) =\left(\begin{array}{ccc} 18 & 6 & 6\\ 6 & 18 &-6 \\ 6 & -6 & 18 \end{array}\right) \cdot \left(\begin{array}{c} \rho_{1}^{[2]}(\lambda_1,\lambda_2) \\ \rho_{2}^{[2]}(\lambda_1,\lambda_2) \\ \rho_{3}^{[2]}(\lambda_1,\lambda_2) \end{array}\right), \en which expresses the normalization condition and the partial antisymmetrization in the lower and upper two semi-infinite lines of the density operator represented in Figure \ref{mixDmatrix}. Here, the properties (\ref{fus1}-\ref{fus2}) were used. In terms of the above functions, the functional equation becomes, \bear \omega_{33}(\lambda_1,\lambda_2)&=& \frac{(\lambda-1)(\lambda+1)}{\lambda(\lambda+3)} \omega_{\bar{3}3}(\lambda_1,\lambda_2) +\frac{1}{\lambda}, \label{qKZ1}\\ \omega_{\bar{3}3}(\lambda_1-1,\lambda_2)&=&-\frac{(\lambda-1)(\lambda+3)}{\lambda(\lambda+3)} \omega_{33}(\lambda_1+1,\lambda_2) -\frac{(\lambda-1)(\lambda+2)}{\lambda(\lambda+3)} \omega_{\bar{3}3}(\lambda_1,\lambda_2)\nonumber\\ &&+\frac{\lambda-1}{\lambda}, \label{qKZ2} \ear for $\lambda_1=u_i$ and arbitrary $\lambda_2$. \subsubsection{Zero temperature solution} At zero temperature, the above functional equations hold for arbitrary $\lambda_1$. This is because at zero temperature one has to take the Trotter limit ($N\to\infty$) and therefore the horizontal spectral parameters $u_i$ can take an infinite number of continuous values. Therefore, we solve Eq.(\ref{qKZ1}) for $\omega_{\bar{3}3}(\lambda_1,\lambda_2)$ and insert this into Eq.~(\ref{qKZ2}), resulting in \bear \frac{\omega_{33}(\lambda_1+1,\lambda_2)}{\lambda(\lambda+2)} + \frac{\omega_{33}(\lambda_1,\lambda_2)}{(\lambda-1)(\lambda+1)} + \frac{\omega_{33}(\lambda_1-1,\lambda_2)}{\lambda(\lambda-2)} = \frac{\lambda^2+2}{(\lambda^2-4)(\lambda^2-1)}. \label{eqq} \ear In addition, at zero temperature the bi-variate function $\omega_{33}(\lambda_1,\lambda_2)$ turns into a single-variable function $\omega_{33}(\lambda_1-\lambda_2)$ depending only on the difference of the arguments, which allows us to define \eq \sigma(\lambda)=\frac{\omega_{33}(\lambda)}{(\lambda-1)(\lambda+1)}. 
\en Therefore, equation (\ref{eqq}) can be written as \bear \sigma(\lambda+1) + \sigma(\lambda) + \sigma(\lambda-1) = \frac{\lambda^2+2}{(\lambda-2)(\lambda-1)(\lambda+1)(\lambda+2)}, \label{eqqq} \ear whose solution, obtained via Fourier transform, is given by \bear \sigma(\lambda)=- \frac{d}{d\lambda}\log\left\{ \frac{\Gamma(1+\frac{1}{3}+\frac{\lambda}{3})\Gamma(1-\frac{\lambda}{3})}{\Gamma(1+\frac{1}{3}-\frac{\lambda}{3}) \Gamma(1+\frac{\lambda}{3}) } \right\}- \frac{1}{\lambda^2-1}. \label{sol-lam} \ear Having this solution we obtain $\omega_{33}(\lambda_1,\lambda_2)$. Taking the homogeneous limit ($\lambda_k \to 0$), we obtain \bear \omega_{33}(0,0)=-\sigma(0)=1-\frac{\pi}{3\sqrt3}- \log3 \approx -0.70321207674618, \label{sol} \ear which is precisely the ground state energy per site, as expected \cite{UIMIN,SUTHERLAND}; the energy is itself a special correlation function. The $\alpha_{33}$ coefficient in the density operator (\ref{qq}) is obtained from (\ref{ome1}) and (\ref{sol}) such that, \bear \alpha_{33}(0,0)=\frac{1}{24}\left[2-\frac{\pi}{\sqrt{3}} -3 \log3\right]\approx -0.12956817625994. \label{soldens} \ear \subsubsection{Properties of the two-point function} In contrast to $SU(2)$, in the higher-rank case of $SU(3)$ the function $\omega_{33}(\lambda)$ is a generating function of special combinations of modified $\zeta$ functions (Hurwitz zeta functions). We define a function $G(\lambda)$ as follows \bear G(\lambda)&=& \frac{\omega_{33}(\lambda)+1 }{\lambda^2-1} \\ &=& \frac{1}{3}\left[\psi_0\left(1-\frac{\lambda}{3}\right)-\psi_0\left(1+\frac{1}{3}-\frac{\lambda}{3}\right) +\psi_0\left(1+\frac{\lambda}{3}\right)-\psi_0\left(1+\frac{1}{3}+\frac{\lambda}{3}\right)\right], \nonumber \ear where $\psi_0(z)$ is the digamma function $\psi_0(z)=\frac{d}{dz}\log \Gamma(z)$. Expanding $G(\lambda)$ in a power series we obtain \bear 3 G(\lambda)=2 \sum_{k=0}^{\infty} \frac{1}{(2 k)!} \left[\psi_{2k}\left( 1 \right) - \psi_{2k}\left(1+\frac{1}{3}\right) \right] \left(\frac{\lambda}{3}\right)^{2k}, \label{func} \ear where now $\psi_m(\lambda)$ is the polygamma function. We can use the fact that $\psi_{m}(z)=(-1)^{m+1}(m)!\zeta(m+1,z)$, where $\zeta(m,z)$ is the modified zeta function defined as \eq \zeta(m,a)=\sum_{k=0}^{\infty} \frac{1}{(k+a)^m}, \en and re-write expression (\ref{func}) as \eq 3 G(\lambda)=2 \sum_{k=1}^{\infty} \left[\zeta\left(2k+1,1+\frac{1}{3}\right) - \zeta\left(2k+1,1\right) \right] \left(\frac{\lambda}{3}\right)^{2k} + 2\left[\psi_0\left(1\right)-\psi_0\left(1+\frac{1}{3}\right)\right]. \en Then we see that $G(\lambda)$ is a generating function of the differences of modified zeta functions $\zeta\left(2k+1,1+\frac{1}{3}\right) -\zeta\left(2k+1,1\right)$, albeit not of the modified zeta function itself. \subsection{Computation of the three-sites density operator} In the three-sites case, the density operator can be written as a superposition of $11$ linearly independent operators as follows, \eq {\mathfrak D}_3(\lambda_1,\lambda_2,\lambda_3)=\sum_{k=1}^{11} \rho_k^{[3]}(\lambda_1,\lambda_2,\lambda_3) P_{k}^{[3]} \label{D3frak} \en where the operators $P_{k}^{[3]}$ are chosen as, \eq \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_1^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.4,0.4) -- +(0.5,0.); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.5,0.); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.4) -- +(0.5,0.); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(1,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_2^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.25,-0.45) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,-0.05)-- +(0.5,-0.25); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .45 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.5,-0.25); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.4) -- +(0.55,0.1); \draw (0,0) [-,color=black, thick, rounded corners=8pt]+(0.65,-0.05) -- +(0.75,-0.3) -- +(0.75,-0.5); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(1,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_3^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.4,0.4) -- +(0.5,0.); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.5,0.); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.4) -- +(0.5,0.); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [-,color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.85,0.05); \draw (0,0) [color=black, thick, rounded corners=8pt]+(0.9,-0.05) -- +(1.,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .3 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(0.75,-0.5); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrow[arrowstyle]{stealth}}}}] \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_4^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.25,-0.45) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,-0.05)-- +(0.5,-0.25); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .85 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(0.5,-0.25); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.4) -- +(0.67,0.16); \draw (0,0) [-,color=black, thick, rounded corners=8pt]+(0.8,0.08) -- +(1,-0.3) -- +(1.,-0.5); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5)[fill=black] circle (0.3ex); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .75 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \nonumber \en \eq \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_5^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.25,-0.45) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,-0.05)-- +(0.5,-0.25); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .55 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.4) -- +(0.55,0.1); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (0,0) [-,color=black, thick, rounded corners=8pt]+(0.7,-0.05) -- +(0.85,-0.2) -- +(1.,-0.5); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(0.75,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_6^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.25,-0.45) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,-0.05)-- +(0.5,-0.25); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(0.5,-0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.4) -- +(0.6,0.05); \draw (0,0) [-,color=black, thick, rounded corners=8pt]+(0.71,-0.16) -- +(0.75,-0.3) -- +(0.75,-0.5); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(1,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_7^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.35) -- +(0.75,-0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,0.05)-- +(0.45,0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.45) -- +(0.45,0.25); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.45,0.25); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(1,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_8^{[3]}=$}; \draw (0,0) [-,color=black, thick, reverse directed, rounded corners=8pt]+(0.75,-0.5) -- +(0.45,-0.25) --+(0.,0.); \draw (0,0) [-,color=black, thick, reverse directed, rounded corners=8pt]+(1,-0.5) -- +(0.45,0.15)-- +(0.,0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,-0.5) -- +(0.35,-0.45) -- +(0.87,0.15); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1,0.5) -- +(0.87,0.15); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.87,0.15); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \nonumber \en \eq \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_9^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.5,-0.25) -- +(1.,-0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,0.05)-- +(0.45,0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.45) -- +(0.45,0.25); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .85 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) --+(0.75,0.15) -- +(0.45,0.25); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .6 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_{10}^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.5,-0.25) -- +(1.,-0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,0.05)-- +(0.45,0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.45) -- +(0.45,0.25); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.45,0.25); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) -- +(0.75,-0.5); \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.25\linewidth} \begin{tikzpicture}[scale=1] \draw (-1.,0) node {$P_{11}^{[3]}=$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.5,-0.25) -- +(0.75,-0.5); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.25,0.05)-- +(0.45,0.25); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.25,0.45) -- +(0.45,0.25); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \tikzstyle directed=[postaction={decorate,decoration={markings, mark=at position .85 with {\arrow[arrowstyle]{stealth}}}}] \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.,0.5) --+(0.75,0.15) -- +(0.45,0.25); \tikzstyle reverse directed=[postaction={decorate,decoration={markings, mark=at position .65 with {\arrowreversed[arrowstyle]{stealth};}}}] \draw (1.,-0.5)[fill=black] circle (0.3ex); \draw (1.,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(1.,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (1.25,0) node {,}; \end{tikzpicture} \end{minipage} \nonumber \en and the functions $\rho_{k}^{[3]}(\lambda_1,\lambda_2,\lambda_3)$ are to be determined. As in the two-sites case, we have conveniently chosen the operators as tensor products of the totally anti-symmetric tensor and the identity/Kronecker symbol, resembling identity, permutation and Temperley-Lieb operators acting in different spaces as $I, P_{12}, P_{23}, P_{13}, P_{12} P_{23},$ $ P_{23} P_{12}, E_{12}, E_{13}$, $ E_{12}P_{23}, P_{23}E_{12}$, plus one operator which due to symmetry must allow for a symmetric combination of the indices in the first column of dots, as given in $P_8^{[3]}$. We have checked that the above chosen operators are indeed linearly independent. Again, after partial anti-symmetrization, the density operator (\ref{D3frak}) can be reduced to the usual density operator for three-sites $D_3(\lambda_1,\lambda_2,\lambda_3)=(2\rho^{[3]}_1 -\rho^{[3]}_7-\rho^{[3]}_9) I +(2\rho^{[3]}_2+\rho^{[3]}_7) P_{12} + (2\rho^{[3]}_3-\rho^{[3]}_{10}-\rho^{[3]}_{11}) P_{23} + (2\rho^{[3]}_4-\rho^{[3]}_8+\rho^{[3]}_9) P_{13} + (2\rho^{[3]}_5 +\rho^{[3]}_8 +\rho^{[3]}_{10}) P_{12} P_{23} + (2\rho^{[3]}_6+\rho^{[3]}_{11}) P_{23} P_{12}$ by means of the properties (\ref{other2}). Analogously to the two-sites case, the operator $P_j^{[3]}$ can be seen as a vector with the indices $i_1,i_2,i_3,r_1,r_2,s_1,s_2=1,2,3$ assigned to the dots, such that e.g. for $P_1^{[3]}$ we have \eq \ \begin{tikzpicture}[scale=1] \draw (-6.25,0) node {$\left(P_1^{[3]}\right)_{i_1,i_2,i_3,r_1,r_2}^{s_1,s_2} :=\quad\epsilon_{i_1,i_2,i_3}\delta_{r_1}^{s_1}\delta_{r_2}^{s_2} \quad \equiv$}; \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,0.5) -- +(0.4,0.4) -- +(0.5,0.); \draw (0,0) [-,color=black, thick, directed, rounded corners=8pt]+(0,0.) -- +(0.5,0.); \draw (0,0) [-,color=black, thick,directed, rounded corners=8pt]+(0,-0.5) -- +(0.4,-0.4) -- +(0.5,0.); \draw (0.,0.5)[fill=black] circle (0.3ex); \draw (0,0)[fill=black] circle (0.3ex); \draw (0.,-0.5)[fill=black] circle (0.3ex); \draw (-0.25,0.5) node {$i_1$}; \draw (-0.25,0) node {$i_2$}; \draw (-0.25,-0.5) node {$i_3$}; \draw (0.75,0.75) node {$r_1$}; \draw (0.75,-0.75) node {$s_1$}; \draw (1.25,0.75) node {$r_2$}; \draw (1.25,-0.75) node {$s_2$}; \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(0.75,0.5) -- +(0.75,-0.5); \draw (0.75,-0.5)[fill=black] circle (0.3ex); \draw (0.75,0.5,0)[fill=black] circle (0.3ex); \draw (0,0) [color=black, directed, thick, rounded corners=8pt]+(1.25,0.5) -- +(1.25,-0.5); \draw (1.25,-0.5)[fill=black] circle (0.3ex); \draw (1.25,0.5,0)[fill=black] circle (0.3ex); \draw (1.55,0) node {,}; \end{tikzpicture} \en which shows that the computation for three-sites goes along the same lines as in the two-sites case; we just have to deal with a larger number of extended operators $P_j^{[3]}$. Inserting the above expansion of the density operator (\ref{D3frak}) into equation (\ref{qkzsun-frak}) yields the set of functional equations \eq \vec{\rho}(\lambda_1,\lambda_2,\lambda_3)= A^{[3]}(\lambda_1,\lambda_2,\lambda_3)\cdot\vec{\rho}(\lambda_1+1,\lambda_2,\lambda_3), \label{funct11} \en where $\vec{\rho}(\lambda_1,\lambda_2,\lambda_3)$ is an $11$-dimensional vector whose entries are the expansion coefficients $\rho_k^{[3]}(\lambda_1,\lambda_2,\lambda_3)$ for $k=1,\dots,11$ and the matrix $A^{[3]}(\lambda_1,\lambda_2,\lambda_3)$, which is obtained from the action of the linear operator ${\mathfrak A}_3(\lambda_1,\lambda_2,\lambda_3)$ (\ref{frakA}) on the mixed density operator (\ref{D3frak}), is given in Appendix B. For convenience of presentation, we define the intermediate auxiliary functions $f_k(\lambda_1,\lambda_2,\lambda_3)= ({P}_k^{[3]})^t\cdot {\mathfrak D}_3(\lambda_1,\lambda_2,\lambda_3)$ by \eq \vec{f}(\lambda_1,\lambda_2,\lambda_3)= M \cdot \vec{\rho}(\lambda_1,\lambda_2,\lambda_3), \label{aux} \en where the matrix $M$ is given by \eq M =\left(\begin{array}{ccccccccccc} 54 & 18 & 18 & 18 & 6 & 6 & 18 & 6 & 18 & 6 & 6 \\ 18 & 54 & 6 & 6 & 18 & 18 & -18 & -6 & 6 & -6 & -6 \\ 18 & 6 & 54 & 6 & 18 & 18 & 6 & -6 & 6 & 18 & 18 \\ 18 & 6 & 6 & 54 & 18 & 18 & 6 & 18 & -18 & -6 & -6 \\ 6 & 18 & 18 & 18 & 54 & 6 & -6 & -18 & -6 & -18 & 6 \\ 6 & 18 & 18 & 18 & 6 & 54 & -6 & 6 & -6 & 6 & -18 \\ 18 & -18 & 6 & 6 & -6 & -6 & 54 & -6 & 6 & 18 & 18 \\ 6 & -6 & -6 & 18 & -18 & 6 & -6 & 54 & -6 & 6 & 6 \\ 18 & 6 & 6 & -18 & -6 & -6 & 6 & -6 & 54 & 18 & 18 \\ 6 & -6 & 18 & -6 & -18 & 6 & 18 & 6 & 18 & 54 & 6 \\ 6 & -6 & 18 & -6 & 6 & -18 & 18 & 6 & 18 & 6 & 54 \end{array}\right). \en The equations (\ref{funct11}) can be disentangled by making a suitable transformation.
This can be done by using reduction properties such as the intertwining symmetry and the partial trace in order to identify 8 linearly independent combinations of the functions $f_k(\lambda_1,\lambda_2,\lambda_3)$ as two-site functions (or simpler ones) and the 3 remaining combinations as true three-site functions. Therefore, the suitable functions can be written in terms of the auxiliary functions as follows, \bear &&1=f_1, \nonumber\\ &&\omega_{33}(\lambda_1,\lambda_2)=f_2, \nonumber\\ &&\omega_{\bar{3}3}(\lambda_1-1,\lambda_2)=f_7, \nonumber\\ &&\omega_{33}(\lambda_1,\lambda_3) (1-y^2)= f_3 + y f_6 - y f_5 -y^2 f_4, \nonumber\\ &&\omega_{\bar{3}3}(\lambda_1-1,\lambda_3) (1-y)(2+y)= f_7 -(y+2) f_{10} + (y-1)f_{11} - (y-1)(y+2)f_9, \nonumber \\ &&\omega_{33}(\lambda_1,\lambda_2) (1-x^2) (1-(x-y)^2) =(1 - (x - y)^2) f_3 + x(x - 2y) f_4 \nonumber\\ &&+ x(-1 + xy - y^2) f_5 + x(1 - xy + y^2) f_6 + x(x - y)(-2 + x^2 - xy) f_2, \nonumber\\ &&\omega_{\bar{3}3}(\lambda_1-1,\lambda_2) (1-x)(2+x)(1-(x-y)^2)= \nonumber\\ &&(1 - (-1 + x)(x - y) - (2 + x)(x - y) + (-1 + x)(2 + x)(x - y)^2) f_7 \nonumber\\ &&+ (2 - y - y^2) f_9 + (2 + y)(-1 + x^2 + y - x(1 + y)) f_{10} \nonumber\\ && + (-1 + y)(1 - x^2 + x(-2 + y) + 2y) f_{11}, \nonumber\\ &&\omega_{33}(\lambda_2,\lambda_3)=f_3, \nonumber \\ &&F_1(\lambda_1,\lambda_2,\lambda_3)=2 x (2 + x) y (2 + y) f_1 + 2 x (2 + x) (2 + y) f_2 \nonumber \\ && + 2 (2 + x) y (2 + y) f_4 + 2 (2 + x) (2 + y) f_5, \label{inter}\\ &&F_2(\lambda_1,\lambda_2,\lambda_3)= 2 (-2 - y - x (2 + y) + x (2 + x) y (2 + y)) f_1 \nonumber\\ && - 2 (-1 + x + x^2) (2 + y) f_2 + 2 (1 + x) (2 + y) f_3 \nonumber\\ && + 2 (1 + x + (2 + x) y - (2 + x) y (2 + y)) f_4 - 2 (1 + x + 2 y + x y) f_5 \nonumber\\ && + 2 (2 + y) f_6 - 2 (-2 - y + x y + x^2 (1 + y)) f_7 + 2 (1 + x - y) f_8 \nonumber\\ && - 2 (1 + x) (-2 + y + y^2) f_9 - 2 (1 + x) (2 + y) f_{10} - 2 x f_{11}, \nonumber \\ &&F_3(\lambda_1,\lambda_2,\lambda_3)= 2 (x^2-1) (y^2-1)f_1 + 2 (x^2-1) (1 + y) f_7 \nonumber \\ &&+ 2 (1 + x) (y^2-1) f_9 + 2 (1 + x) (1 + y) f_{10}, \nonumber \ear where $x=\lambda_1-\lambda_3$ and $y=\lambda_1-\lambda_2$ and the combination $f_1(\lambda_1,\lambda_2,\lambda_3)$ is the normalization condition (analogue of the total trace). Substituting Eq. (\ref{aux}) in Eqs. (\ref{inter}) and solving the 11 linear equations for $\rho_1^{[3]},...,\rho_{11}^{[3]}$ in terms of the known two-site functions and the yet unknown three-site functions $F_1, F_2, F_3$, inserting these expressions into the functional equations (\ref{funct11}) and using those for the two-point functions, reduces these to just three equations for the unknown functions. So the only remaining unknowns are $F_1, F_2, F_3$, and they satisfy a set of linear functional equations.
By a suitable rescaling, the coefficients of the resulting matrix take a very simple form \eq \left(\begin{array}{c} G_1(\lambda_1,\lambda_2,\lambda_3) \\ G_2(\lambda_1,\lambda_2,\lambda_3) \\ G_3(\lambda_1,\lambda_2,\lambda_3) \end{array}\right) =\left(\begin{array}{ccc} 0 & 0 & 1\\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right) \cdot \left(\begin{array}{c} G_1(\lambda_1+1,\lambda_2,\lambda_3) \\ G_2(\lambda_1+1,\lambda_2,\lambda_3) \\ G_3(\lambda_1+1,\lambda_2,\lambda_3) \end{array}\right) + \left(\begin{array}{c} r(\lambda_1,\lambda_2,\lambda_3) \\ 0 \\ 0 \end{array}\right), \label{3ptseq} \en where $\lambda_1=u_i$, and we have introduced for convenience the $G_k$-functions as, \bear G_1(\lambda_1,\lambda_2,\lambda_3)&=& \frac{x y}{(x^2-1)(y^2-1) (x+2)(y+2)} F_1(\lambda_1,\lambda_2,\lambda_3), \nonumber \\ G_2(\lambda_1,\lambda_2,\lambda_3)&=& \frac{(x+1)(y+1)}{(x^2-1)(y^2-1)(x+2)(y+2)} F_2(\lambda_1,\lambda_2,\lambda_3), \\ G_3(\lambda_1,\lambda_2,\lambda_3)&=& \frac{1}{(x^2-1)(y^2-1)} F_3(\lambda_1,\lambda_2,\lambda_3), \nonumber \ear and \bear r(\lambda_1,\lambda_2,\lambda_3)& =& \frac{2 (-1 + 2 x^2 + 2 y^2)}{(x^2-1)(y^2-1)} + \frac{2(x + y)}{(x^2-1)(y^2-1)} \omega_{33}(\lambda_2, \lambda_3) \\ &+&\frac{2 (-1 + 3 x + x^2 - 3 y - 2 x y + y^2 - 3 x y^2 + 3 y^3)}{x ( x+3)(x - y)(y^2-1)} \omega_{\bar{3}3}(\lambda_1, \lambda_3) \nonumber \\ &-& \frac{2(-1 - 3 x + x^2 + 3 x^3 + 3 y - 2 x y - 3 x^2 y + y^2)} {(x^2-1)(x - y) y ( y+3)} \omega_{\bar{3}3}(\lambda_1, \lambda_2). \nonumber \ear \subsubsection{Zero temperature solution} At zero temperature, again, the above functional equations hold for arbitrary $\lambda_1$. Since we have already obtained the solution for the two-site functions from Eq.~(\ref{qKZ1}-\ref{qKZ2}), it only remains to solve equation (\ref{3ptseq}). However, equations (\ref{3ptseq}) are more complicated to deal with, since one of the equations contains the inhomogeneity $r(\lambda_1,\lambda_2,\lambda_3)$ with a more complicated pole structure than in the two-site case. The inhomogeneity can be written in terms of digamma functions. This increases the complexity of obtaining a closed solution for (\ref{3ptseq}). In order to avoid carrying out, for the time being, the Fourier transform of rational functions times digamma functions, we have chosen to write the solution in terms of convolutions. Eventually, the convolution integrals can be evaluated numerically as we will describe in what follows. The problem can be significantly simplified by partially taking the homogeneous limit $\lambda_2=\lambda_3=0$ and decoupling the equations (\ref{3ptseq}) by the following transformation \bear g_0(\lambda_1) &=& G_1(\lambda_1,0,0) + G_2(\lambda_1,0,0) +G_3(\lambda_1,0,0), \nonumber\\ g_{1}(\lambda_1) &=& G_1(\lambda_1,0,0) +w G_2(\lambda_1,0,0) + w^2 G_3(\lambda_1,0,0), \\ g_{-1}(\lambda_1) &=& G_1(\lambda_1,0,0) +w^{-1} G_2(\lambda_1,0,0) + w^{-2} G_3(\lambda_1,0,0), \nonumber \ear where $w=e^{\frac{2\pi\im}{3}}$.
Therefore, the resulting equations become (we now set $\lambda_1=\lambda$) \bear g_l(\lambda) &=&w^l g_l(\lambda+1) + \varphi(\lambda), \ear where $l=0,1,-1$ and $\varphi(\lambda)=\lim_{\lambda_2,\lambda_3\to 0}r(\lambda,\lambda_2,\lambda_3)$, \bear \varphi(\lambda)&=&-\frac{12}{(\lambda^2-1)} \omega_{33}(\lambda,0)-\frac{2}{(\lambda^2-1)^2} \omega_{33}'(\lambda,0) +\frac{4\lambda}{(\lambda^2-1)^2}\omega_{33}(0,0)\nonumber\\ && +\frac{2(4 \lambda^4 +6 \lambda^3-\lambda^2-6 \lambda-1)}{\lambda^2 (\lambda^2-1)^2}, \ear and the prime denotes the derivative with respect to the argument $\lambda$. Naturally, the zero temperature solution for the three-sites correlation must depend on the two-sites function $\omega_{33}(\lambda,0)$ and its derivative $\omega_{33}'(\lambda,0)$ via $\varphi(\lambda)$; therefore the modified zeta function would also appear explicitly in an analytic solution for the three-sites correlation. We use analyticity in the variable $\lambda$ and Fourier transform the above equations. The resulting equations are algebraically solved for the Fourier coefficients and yield product expressions. Then, we Fourier transform back and find integrals of convolution type \bear g_l(\lambda) &=& \int_{-\infty}^{\infty} h_l(\lambda-\mu) \varphi(\mu) \frac{d\mu}{2\pi}, \label{gconv} \ear where \eq h_l(z)= \int_{{\mathbb R} +\im 0} \frac{ e^{\im k z}}{1-w^{l}e^k} dk. \en The integral expression can be evaluated numerically at the homogeneous point $\lambda=0$. This allows us to obtain the functions $G_i(0,0,0)$ (and derivatives of $G_i$ at $(0,0,0)$) from which we compute $F_i(0,0,0)$ directly in the homogeneous limit. The function $F_1(0,0,0)$ is related to a simple three-point correlation function, $F_1(0,0,0)=8\langle P_{12} P_{23} \rangle$. Using the result of the numerical evaluation of the integral equation (\ref{gconv}), we obtain \eq \langle P_{12} P_{23} \rangle = 0.191368820116674. \en \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|} \hline Length & $\omega_{33}(0,0)$ & $\langle P_{12} P_{23} \rangle$ \\ \hline $L=3$ & $-1.000000000000000$ & $1.000000000000000$ \\ \hline $L=6$ & $-0.767591879243998$ & $0.309579305659537$ \\ \hline $L=9$ & $-0.731082881703061$ & $0.239661721591669$ \\ \hline $L\rightarrow\infty$ & $-0.703212076746182$ & $0.191368820116674 $ \\ \hline \end{tabular} \end{center} \caption{Comparison of numerical results from exact diagonalization for $L=3,6$ and Lanczos calculations for $L=9$ sites with the analytical result in the thermodynamic limit.} \end{table} The numerical data for finite lattices indicate agreement with the infinite lattice result obtained from the solution of the functional equations, see Table 1. Although the numerical evaluation of the integral equations (\ref{gconv}) is not computationally demanding, it would be desirable to have an exact analytical expression. As indicated above, the analytical calculation of the convolution integral requires the computation of the Fourier transform of a product of a rational function with digamma functions, which for the moment we leave as an open problem. \section{Lack of factorization of the correlation functions}\label{lack} In the case of $SU(2)$ spin chains, it is well known that the correlation functions factorize in terms of two-site correlations \cite{BOKO01,BGKS}. This property was useful in obtaining the correlation functions for the spin-1/2 system and also for higher-spin cases \cite{KNS013,RK2016}.
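Before turning to the factorization question, we note that the finite-size entries of Table 1 are straightforward to reproduce independently. The following minimal sketch (Python/NumPy; assuming the Hamiltonian $H=\sum_i P_{i,i+1}$ with periodic boundary conditions and a unique ground state) computes $\omega_{33}(0,0)=\langle P_{12}\rangle$ and $\langle P_{12}P_{23}\rangle$ by exact diagonalization: \begin{verbatim}
import numpy as np

n = 3  # SU(3)

def perm_op(i, j, L):
    """Permutation operator exchanging sites i and j of an n^L-dim chain."""
    dim = n ** L
    P = np.zeros((dim, dim))
    for s in range(dim):
        digits = [(s // n**k) % n for k in range(L)]
        digits[i], digits[j] = digits[j], digits[i]
        P[sum(d * n**k for k, d in enumerate(digits)), s] = 1.0
    return P

for L in (3, 6):
    H = sum(perm_op(i, (i + 1) % L, L) for i in range(L))
    w, v = np.linalg.eigh(H)
    gs = v[:, 0]                        # assumes a unique ground state
    P12, P23 = perm_op(0, 1, L), perm_op(1, 2, L)
    print(L, gs @ P12 @ gs, gs @ P12 @ P23 @ gs)
# L=3: -1.0, 1.0 ; L=6: -0.76759..., 0.30957...  (cf. Table 1)
# thermodynamic limit, eq. (sol): 1 - pi/(3*sqrt(3)) - log(3) = -0.70321...
\end{verbatim} For $L=3$ the ground state is the totally anti-symmetric singlet, so $\langle P_{12}\rangle=-1$ and $\langle P_{12}P_{23}\rangle=1$, reproducing the first row of the table.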
Unfortunately, in the case of $SU(3)$ our attempts at solving Eq.~(\ref{3ptseq}) in terms of a naive factorized ansatz failed. Besides, we have also investigated the factorization at finite Trotter number and, at first sight surprisingly, found that the three-point correlation functions are expressed in terms of the two-sites ($m=2$) and also the three-sites ($m=3$) emptiness formation probability (EFP). For instance, the following correlation function is given by \bear &&\tr{\left[P_{singlet} D_3(\lambda_1,\lambda_2,\lambda_3) \right]}= \frac{Q_3^{(s)}(\lambda_1,\lambda_2,\lambda_3)}{\Lambda_0^{(n)}(\lambda_1)\Lambda_0^{(n)}(\lambda_2)\Lambda_0^{(n)}(\lambda_3)} \nonumber \\ &=&6-24 \Big[ (1 +\frac{1}{\lambda_{13}\lambda_{23}}) P_2(\lambda_1,\lambda_2) + (1 +\frac{1}{\lambda_{12}\lambda_{32}}) P_2(\lambda_1,\lambda_3) \\ &+&(1 +\frac{1}{\lambda_{21}\lambda_{31}}) P_2(\lambda_2,\lambda_3) \Big] + 60 P_3(\lambda_1,\lambda_2,\lambda_3), \nonumber \ear where $P_{singlet} $ is the $SU(3)$ singlet projector and \bear P_2(\lambda_1,\lambda_2)&=& {\left[D_2^{(3,3)}(\lambda_1,\lambda_2)\right]}_{11}^{11}=\frac{Q_2(\lambda_1,\lambda_2)}{\Lambda_0^{(n)}(\lambda_1)\Lambda_0^{(n)}(\lambda_2)}, \\ P_3(\lambda_1,\lambda_2,\lambda_3)&=&{\left[D_3^{(3,3)}(\lambda_1,\lambda_2,\lambda_3)\right]}_{111}^{111}=\frac{Q_3(\lambda_1,\lambda_2,\lambda_3)}{\Lambda_0^{(n)}(\lambda_1)\Lambda_0^{(n)}(\lambda_2)\Lambda_0^{(n)}(\lambda_3)}, \ear and $Q_m(\lambda_1,\dots,\lambda_m)$ are polynomials of known degree in each variable and $\Lambda_0^{(n)}(\lambda)$ is again the leading eigenvalue of the quantum transfer matrix but with finite Trotter number $N$. It is worth emphasizing that $P_3(\lambda_1,\lambda_2,\lambda_3)$ cannot be written only in terms of the $P_2(\lambda_i,\lambda_j)$, so it does not factorize in terms of the two-point emptiness formation probability. The correlations for the case $m=4$ also do not factorize only in terms of $m=2,3$ correlators, but require one four-point function, e.g. the four-sites emptiness formation probability $P_4(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$. Therefore, this might indicate that the correlations of higher-rank spin chains cannot be factorized in terms of two-point functions alone, which makes it considerably harder to push the calculation to correlations at longer distances. \section{Conclusions}\label{conclusion} We have formulated a consistent approach to deal with short-distance correlation functions of the $SU(n)$ spin chains for $n>2$ at zero and finite temperature. The fact that the model does not have crossing symmetry made the derivation of functional equations for the correlation functions much more challenging than in the $SU(2)$ case. The difficulties which arise were circumvented by working with generalized density operators of two types. These operators not only contain the physically interesting correlation functions, but also many other correlations with mixed representations. In this sense, this approach exploits the full $SU(n)$ structure. We considered in detail the special case of $SU(3)$ for two- and three-site correlations ($m=2,3$). We used the discrete functional equations to obtain the equations which fix the two-point correlation functions. Besides, we have solved these equations at zero temperature via Fourier transform, and the solutions are explicitly given in terms of digamma functions. The correlation function associated with the local Hamiltonian gives the ground state energy, as expected.
In addition, we considered the case of three-point correlations. The computation is much more involved in this case, since the dimension of the singlet space of the generalized mixed density operator is $11$. Therefore, we had to obtain this large number of equations and, by appropriate identification of two- and three-site functions, we reduced these $11$ equations to just 3 decoupled functional equations. We derived an integral expression of convolution type for the remaining three-point functions. The integrals were evaluated numerically, giving the result for the three-point correlation functions in the thermodynamic limit. We compared the infinite system size result with results obtained for small finite lattices of $L=3,6$ and $9$ sites, which show the correct trend. Moreover, we have also investigated the possibility that the three-point function factorizes in terms of two-point correlations. Our attempts were based on proposing an ansatz for the solution of the functional equations for the three-point functions (\ref{3ptseq}), which always led us to contradictions, indicating that a factorized ansatz does not apply to the model. Additionally, we investigated the possibility of factorization at finite Trotter number. At finite Trotter number the correlators are rational functions, which allowed us to realize that three-point correlations can be decomposed in terms of two-point functions and an additional three-point function. This is another indication of the lack of factorization. In this work, we have obtained the first results for the correlation functions of Yang-Baxter integrable $SU(n)$ quantum spin chains. We obtained analytical and numerical results for nearest ($m=2$) and next-nearest ($m=3$) correlators. We would still like to obtain an analytical evaluation of the convolution integrals for the $m=3$ case. It is completely open how to solve the functional equations for the cases $m\ge 4$. In the general case of $SU(n)$ ($n>3$), we have only the solution for $m=2$. Of course, here it would also be desirable to obtain the explicit functional equations for $m=3$ and their solution, at least for $SU(4)$. Another interesting goal is the explicit evaluation of the correlations at finite temperature. In order to do that, we have to derive the non-linear integral equation for the generalized quantum transfer matrix and, more challenging still, we have to devise a way to evaluate the three-point functions at finite temperature. The standard trick for the evaluation of the two-point function at finite temperature involves the derivative of the leading eigenvalue with respect to some inhomogeneity parameter \cite{AuKl12}; however, this trick cannot be applied to three-point correlations. The above mentioned issues are currently under investigation. \section*{Acknowledgments} G.A.P. Ribeiro thanks the S\~ao Paulo Research Foundation (FAPESP) for financial support through the grants 2017/16535-1 and 2015/01643-8. He also acknowledges the hospitality of Bergische Universit\"at Wuppertal. This work has been carried out within the DFG research unit Correlations in Integrable Quantum Many-Body Systems (FOR2316). Note added: After our paper appeared on the arXiv, we became aware of the related preprint \cite{BOOS18}.
\newpage \section*{\bf Appendix A: Reduction of ${\mathbb D}_m$ to ${\mathbb D}_{m-1}$} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{figure}{0} \renewcommand{\thefigure}{A.\arabic{figure}} We may apply the completely anti-symmetric tensor $\epsilon$ to any of the bunches of $n$ semi-infinite lines of ${\mathbb D}_m$. Then by use of the properties (\ref{sym-property1}) the anti-symmetrizer can be moved towards the far left, resulting in ${\mathbb D}_{m-1}$ times some proportionality factor. This is illustrated in Figure \ref{reduction} for the case $m=2$ and $SU(3)$. We would like to point out that repeated applications of anti-symmetrizers to ${\mathbb D}_m$ yield ${\mathbb D}_{\widetilde m}$ with arbitrary $\widetilde m\ (\le m)$. Note that applying the anti-symmetric tensor $\epsilon$ $m$ times removes all degrees of freedom and serves as the normalization of ${\mathbb D}_m$. \begin{figure}[h] \begin{center} \begin{minipage}{0.55\linewidth} \begin{tikzpicture}[scale=1.45] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.8,0)--(0.8,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,1.0)--(1.6,1.0); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,1.4)--(1.6,1.4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,1.8)--(1.6,1.8); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,2.2)--(1.6,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,2.6)--(1.6,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(0.,3.0)--(1.6,3.0); \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (-0.15,1) node {$\dots$}; \draw (-0.15,1.4) node {$\dots$}; \draw (-0.15,1.8) node {$\dots$}; \draw (-0.15,2.2) node {$\dots$}; \draw (-0.15,2.6) node {$\dots$}; \draw (-0.15,3.0) node {$\dots$}; \draw (0,0) [-,color=black, thick, rounded corners=7pt] +(1.8,1.4)--(1.6,1.4); \draw (0,0) [-,color=black, thick, rounded corners=7pt]+(1.6,1.0) -- +(1.8,1.0) -- +(1.8,1.8)-- +(1.6,1.8); \draw (1.8,1.4)[fill=black] circle (0.15ex); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.5,0)--(2.5,4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(3.05,0)--(3.05,4); \draw (1.2,1.15) node {$\lambda_2+2$}; \draw (1.2,1.55) node {$\lambda_2+1$}; \draw (1.,1.95) node {$\lambda_2$}; \draw (1.2,2.35) node {$\lambda_1+2$}; \draw (1.2,2.75) node {$\lambda_1+1$}; \draw (1.,3.15) node {$\lambda_1$}; \draw (0.25,-0.25) node {$0$}; \draw (0.8,-0.25) node {$0$}; \draw (2.5,-0.25) node {$0$}; \draw (3.05,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.8,-0.5)--(3.7,-0.5); \draw (1.65,-0.5) node {$\infty$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,-0.5)--(1.5,-0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(3.75,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(3.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(3.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(3.75,3.75); \draw (3.4,0.35) node {$u_1$}; \draw (3.4,0.75) node {$u_2$}; \draw (3.4,1.1) node {$\vdots$}; \draw (3.4,3.85) node {$u_{N}$}; \draw (3.4,3.45) node {$u_{N-1}$}; \draw (1.6,1)[fill=black] circle (0.15ex); \draw (1.6,1.4)[fill=black] circle (0.15ex); \draw (1.6,1.8)[fill=black] circle (0.15ex); \draw (1.6,2.2)[fill=black] circle (0.15ex); \draw (1.6,2.6)[fill=black] circle (0.15ex); \draw (1.6,3)[fill=black] circle (0.15ex); \draw (4.3,2) node {$=$ const.}; \end{tikzpicture} \end{minipage} \begin{minipage}{0.45\linewidth} \begin{tikzpicture}[scale=1.45] \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0)--(0.25,4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(0.8,0)--(0.8,4); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0,2.2)--(1.6,2.2); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0,2.6)--(1.6,2.6); \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0,3.0)--(1.6,3.0); \draw (-0.15,0.25) node {$\dots$}; \draw (-0.15,0.6) node {$\dots$}; \draw (-0.15,3.35) node {$\dots$}; \draw (-0.15,3.75) node {$\dots$}; \draw (-0.15,2.2) node {$\dots$}; \draw (-0.15,2.6) node {$\dots$}; \draw (-0.15,3.0) node {$\dots$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(2.5,0)--(2.5,4); \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(3.05,0)--(3.05,4); \draw (1.2,2.35) node {$\lambda_1+2$}; \draw (1.2,2.75) node {$\lambda_1+1$}; \draw (1.,3.15) node {$\lambda_1$}; \draw (0.25,-0.25) node {$0$}; \draw (0.8,-0.25) node {$0$}; \draw (2.5,-0.25) node {$0$}; \draw (3.05,-0.25) node {$0$}; \draw (0,0) [->,color=black, thick, rounded corners=7pt] +(1.8,-0.5)--(3.7,-0.5); \draw (1.65,-0.5) node {$\infty$}; \draw (0,0) [<-,color=black, thick, rounded corners=7pt] +(-0.25,-0.5)--(1.5,-0.5); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.25)--(3.75,0.25); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,0.6)--(3.75,0.6); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.35)--(3.75,3.35); \draw (-0.25,0) [->,color=black, thick, rounded corners=7pt] +(0.25,3.75)--(3.75,3.75); \draw (3.4,0.35) node {$u_1$}; \draw (3.4,0.75) node {$u_2$}; \draw (3.4,1.1) node {$\vdots$}; \draw (3.4,3.85) node {$u_{N}$}; \draw (3.4,3.45) node {$u_{N-1}$}; \draw (1.6,2.2)[fill=black] circle (0.15ex); \draw (1.6,2.6)[fill=black] circle (0.15ex); \draw (1.6,3)[fill=black] circle (0.15ex); \end{tikzpicture} \end{minipage} \caption{Graphical illustration of the reduction property from the two-sites to the one-site correlation. This property can be iterated until we reach the normalization condition of the generalized density operator ${\mathbb D}_2(\lambda_1,\lambda_2)$.} \label{reduction} \end{center} \end{figure} \section*{\bf Appendix B: The matrix $A^{[3]}$ for the three point case.} \setcounter{equation}{0} \renewcommand{\theequation}{B.\arabic{equation}} Here we give the $11\times 11$ matrix $A^{[3]}$ which defines the system of functional equations for the functions $\rho^{[3]}_k(\lambda_1,\lambda_2,\lambda_3)$. This was obtained by inserting (\ref{D3frak}) into equation (\ref{qkzsun-frak}), along the same lines as in the two-sites case.
\eq A^{[3]}=\left(\begin{array}{ccccccccccc} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} & a_{1,5} & a_{1,6} & a_{1,7} & a_{1,8} & a_{1,9} & a_{1,10} & a_{1,11} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} & a_{2,5} & a_{2,6} & a_{2,7} & a_{2,8} & a_{2,9} & a_{2,10} & a_{2,11} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} & a_{3,5} & a_{3,6} & a_{3,7} & a_{3,8} & a_{3,9} & a_{3,10} & a_{3,11} \\ a_{4,1} & 0 & a_{4,3} & a_{4,4} & a_{4,5} & a_{4,6} & 0 & a_{4,8} & a_{4,9} & 0 & a_{4,11} \\ a_{5,1} & a_{5,2} & a_{5,3} & a_{5,4} & a_{5,5} & a_{5,6} & a_{5,7} & a_{5,8} & a_{5,9} & a_{5,10} & a_{5,11} \\ a_{6,1} & a_{6,2} & a_{6,3} & a_{6,4} & a_{6,5} & a_{6,6} & 0 & a_{6,8} & a_{6,9} & 0 & a_{6,11} \\ 0 & a_{7,2} & 0 & a_{7,4} & a_{7,5} & a_{7,6} & 0 & a_{7,8} & 0 & 0 & 0 \\ 0 & a_{8,2} & 0 & a_{8,4} & a_{8,5} & a_{8,6} & 0 & a_{8,8} & 0 & 0 & 0 \\ 0 & 0 & 0 & a_{9,4} & a_{9,5} & 0 & 0 & a_{9,8} & 0 & 0 & 0 \\ 0 & 0 & 0 & a_{10,4} & a_{10,5} & 0 & 0 & a_{10,8} & 0 & 0 & 0 \\ 0 & a_{11,2} & 0 & a_{11,4} & a_{11,5} & a_{11,6} & 0 & a_{11,8} & 0 & 0 & 0 \\ \end{array}\right) \label{matrixL3-qKZ} \en where the non-trivial matrix elements are written as follows, \begin{align} &a_{1,1}=\frac{(-1+3 x+x^2) (-1+3 y+y^2)}{x (3+x) y (3+y)}, a_{1,2}=\frac{(-1+3 x+x^2) (-2+2 y+y^2)}{x (3+x) y (3+y)}, \nonumber \\ &a_{1,3}=-\frac{3}{x (3+x) y (3+y)}, a_{1,4}=\frac{(1+y) (-8+3 x+2 x^2-2 y+2 x y+x^2 y)}{x (3+x) y (3+y)}, \nonumber \\ &a_{1,5}=\frac{(1+y) (-7+x^2-3 y-x y)}{x (3+x) y (3+y)}, a_{1,6}=\frac{-3+y+y^2}{x (3+x) y (3+y)}, \nonumber \\ &a_{1,7}=\frac{-1+3 x+x^2}{x (3+x) (3+y)}, a_{1,8}=\frac{-1+3 x+x^2+y+3 x y+x^2 y}{x (3+x) y (3+y)}, \nonumber \\ \end{align} \begin{align} &a_{1,9}=\frac{1+3 x+x y}{x (3+x) (3+y)}, a_{1,10}=-\frac{-1+x^2-x y}{x (3+x) y (3+y)}, a_{1,11}=-\frac{y}{x (3+x) (3+y)}, \nonumber \\ &a_{2,1}=\frac{3 (-1+3 x+x^2)}{x (3+x) y (3+y)}, a_{2,2}=-\frac{(-1+3 x+x^2) (-3+y+y^2)}{x (3+x) y (3+y)}, \nonumber \\ &a_{2,3}=-\frac{-1+3 y+y^2}{x (3+x) y (3+y)}, a_{2,4}=\frac{2 (1+y)}{x (3+x) y (3+y)}, a_{2,5}=\frac{(1+y)^2}{x (3+x) y (3+y)} \nonumber\\ &a_{2,6}=-\frac{-2+2 y+y^2}{x (3+x) y (3+y)}, a_{2,7}=\frac{(-1+3 x+x^2) y}{x (3+x) (3+y)}, a_{2,8}=-\frac{-1+y}{x (3+x) y (3+y)},\nonumber \\ &a_{2,9}=\frac{1+3 x+x y}{x (3+x) y (3+y)}, a_{2,10}=-\frac{-1+x^2-x y}{x (3+x) (3+y)}, a_{2,11}=-\frac{1}{x (3+x) (3+y)}, \nonumber \\ &a_{3,1}=-\frac{3}{x (3+x) y (3+y)}, a_{3,2}=-\frac{3 (2+y)}{x (3+x) y (3+y)}, \nonumber\\ &a_{3,3}=\frac{-3 x-3 y+7 x y+3 x^2 y+3 x y^2+x^2 y^2}{x (3+x) y (3+y)}, a_{3,4}=-\frac{-6+6 x+3 x^2-6 y-2 x y}{x (3+x) y (3+y)}, \nonumber\\ &a_{3,5}=\frac{3-6 x-3 x^2+6 y+6 x y+x^2 y+3 y^2+4 x y^2+x^2 y^2}{x (3+x) y (3+y)}, \nonumber\\ &a_{3,6}=\frac{-3 x-6 y+6 x y+3 x^2 y-3 y^2+2 x y^2+x^2 y^2}{x (3+x) y (3+y)}, \nonumber\\ &a_{3,7}=\frac{3}{x (3+x) (3+y)}, a_{3,8}=-\frac{-3+3 y+4 x y+x^2 y}{x (3+x) y (3+y)}, a_{3,9}=-\frac{1}{x (3+y)},\nonumber\\ &a_{3,10}=\frac{-3+3 x^2+x^2 y}{x (3+x) y (3+y)}, a_{3,11}=\frac{y}{x (3+y)},\nonumber \\ &a_{4,1}=\frac{3}{x (3+x)}, a_{4,3}=-\frac{-9+x^2-3 y-2 x y}{x (3+x) y (3+y)}, \nonumber\\ &a_{4,4}=-\frac{-3 x-x^2-9 y+3 x y+3 x^2 y-3 y^2+x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber \\ &a_{4,5}=-\frac{-3+x-y}{x y (3+y)}, a_{4,6}=\frac{3+x-y}{(3+x) y (3+y)},\nonumber\\ &a_{4,8}=-\frac{9-3 x-2 x^2-6 y+x y+2 x^2 y-3 y^2+x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber\\ &a_{4,9}=\frac{-3-x+y+3 x y+x y^2}{(3+x) y (3+y)}, a_{4,11}=\frac{3+x-y}{(3+x) (3+y)},\nonumber \\ &a_{5,1}=-\frac{3}{x (3+x) (3+y)}, \qquad a_{5,2}=\frac{3}{x (3+x) (3+y)},\nonumber\\ &a_{5,3}=\frac{-3+8 
x+3 x^2+x^2 y-x y^2}{x (3+x) y (3+y)}, a_{5,4}=-\frac{-1+y}{x y (3+y)},\nonumber\\ \end{align} \begin{align} &a_{5,5}=-\frac{3-8 x-3 x^2-3 y+2 x y+2 x^2 y+2 x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber\\ &a_{5,6}=\frac{1}{x y (3+y)}, a_{5,7}=\frac{3 y}{x (3+x) (3+y)},\nonumber\\ &a_{5,8}=\frac{6-7 x-3 x^2-3 y+2 x y+2 x^2 y-3 y^2+x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber\\ &a_{5,9}=-\frac{1}{x y (3+y)}, a_{5,10}=\frac{-3+3 x^2+x^2 y}{x (3+x) (3+y)}, a_{5,11}=\frac{1}{x (3+y)},\nonumber \\ &a_{6,1}=\frac{3}{x (3+x) y}, a_{6,2}=\frac{3}{x (3+x) y}, a_{6,3}=-\frac{-x-9 y+x^2 y-3 y^2-x y^2}{x (3+x) y (3+y)},\nonumber\\ &a_{6,4}=\frac{2+y}{(3+x) y (3+y)}, a_{6,5}=\frac{1}{(3+x) y (3+y)},\nonumber\\ &a_{6,6}=-\frac{-2 x-9 y+2 x y+2 x^2 y-3 y^2+2 x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber\\ &a_{6,8}=\frac{1}{(3+x) y (3+y)}, a_{6,9}=\frac{1+3 x-3 y}{(3+x) y (3+y)}, a_{6,11}=\frac{-1+3 y+x y}{(3+x) (3+y)},\nonumber\\ &a_{7,2}=-\frac{(-1+3 x+x^2)(-1+y)}{x (3+x) y}, a_{7,4}=-\frac{-8+6 x+3 x^2-4 y-x y}{x (3+x) y (3+y)},\nonumber\\ &a_{7,5}=-\frac{-1-8 y+x^2 y-3 y^2-x y^2}{x (3+x) y (3+y)}, a_{7,6}=-\frac{-1+y}{x (3+x) y},\nonumber\\ &a_{7,8}=\frac{7-6 x-3 x^2-5 y+x y+x^2 y-2 y^2+2 x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber \\ &a_{8,2}=\frac{3}{x (3+x)}, a_{8,4}=-\frac{3 (-3+2 x+x^2-y)}{x (3+x) y (3+y)}, \nonumber\\ &a_{8,5}=-\frac{-9-x+x^2-3 y-x y}{x (3+x) (3+y)}, a_{8,6}=-\frac{-3+2 x+x^2-x y}{x (3+x) y},\nonumber \\ &a_{8,8}=\frac{9-6 x-3 x^2-6 y-x y+x^2 y-3 y^2+x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber \\ &a_{9,4}=\frac{(1+y) (3-2 x+y-x y)}{x y (3+y)}, a_{9,5}=\frac{(3-x+y) (1+y)}{x y (3+y)}, a_{9,8}=-\frac{1+y}{y (3+y)},\nonumber\\ &a_{10,4}=\frac{-2+3 x-2 y}{x y (3+y)}, a_{10,5}=-\frac{1-3 x+2 y+x y+y^2+x y^2}{x y (3+y)}, a_{10,8}=\frac{-1+y+x y}{x y (3+y)}, \nonumber \\ &a_{11,2}=\frac{3}{x (3+x) y}, a_{11,4}=\frac{-9+5 x+3 x^2-3 y-x y}{x (3+x) y (3+y)}, \nonumber\\ &a_{11,5}=\frac{x-9 y+x^2 y-3 y^2-x y^2}{x (3+x) y (3+y)}, a_{11,6}=-\frac{-x-3 y+2 x y+x^2 y}{x (3+x) y},\nonumber\\ &a_{11,8}=-\frac{9-4 x-3 x^2-6 y+2 x y+x^2 y-3 y^2+2 x y^2+x^2 y^2}{x (3+x) y (3+y)},\nonumber \end{align} where $x=\lambda_1-\lambda_3$ and $y=\lambda_1-\lambda_2$.
\section{Introduction} The canonical thermodynamic model (CTM) as applied to the nuclear multifragmentation problem addresses the following scenario. Assume that because of collisions we have a finite piece of nuclear matter which is heated and begins to expand. During the expansion the nucleus will break up into many fragments (composites). In the expanded volume, the nuclear interaction between different composites can be neglected and the long range Coulomb force can be absorbed in a suitable approximation. The partitioning of this hot piece of matter according to the availability of phase space can be calculated exactly (but numerically) in CTM. Many applications of this model have been made and compared to experimental data \cite{Das1}. The model is very similar to the statistical multifragmentation model (SMM) developed in Copenhagen \cite{Bondorf}. SMM is more general but requires complicated Monte-Carlo simulations. In typical physical situations the two models give very similar results \cite{Botvina1}. In usual situations, the piece of nuclear matter has neutrons and protons, that is, it is a two-component system. Initially CTM was formulated for one kind of particle \cite{Dasgupta1}, and already many interesting properties, such as phase transitions, could be studied. Subsequent to the extension of CTM to two kinds of particles \cite{Bhattacharyya}, many applications of the model to compare with experimental data were made \cite{Das1,Tsang1}. The objective of this paper is to extend CTM to three-component systems. While this is interesting in general, it can also be useful for calculations in an area of current interest. I refer here to the production of hyperons (usually $\Lambda$) in heavy ion reactions in the 1 GeV/A to 2 GeV/A beam energy range. The $\Lambda$'s can get attached to nuclei, turning them into composites with three species. The conventional thinking is this. The $\Lambda$ particle is produced in the participant zone, i.e., the region of violent collisions. The produced $\Lambda$'s have an extended rapidity distribution and some of them can be absorbed in the much colder spectator parts. These will form hypernuclei. Those absorbed in the projectile-like fragment (PLF) can be more easily studied experimentally because they emerge in the forward direction. This idea was recently used to study the production of hypernuclei with the SMM model \cite{Botvina2}. Our work closely follows the same physics, but uses a different and, we believe, much easier prescription. In addition our focus is different and we emphasize other aspects. The question of hypernucleus production in heavy ion reactions was already examined in detail more than twenty years ago \cite{Wakai}. The authors used a coalescence model: both the break-up of a PLF into composites and the absorption of the $\Lambda$ were treated by coalescence. The coalescence approach has been revived in a much more ambitious calculation recently \cite{Gaitanos}. Intuitively the coalescence model is appealing, but a satisfactory formulation of the PLF breaking up into many composites involves many very difficult details which need to be worked out. Certainly there are some points of similarity between the thermodynamic model for multifragmentation and the production of composites by coalescence. For the production of the deuteron, the simplest composite, the two models were compared \cite{Jennings}.
But that study also shows that for a heavier fragment (say $^{12}$C) the two routes become impossible to disentangle from each other. However, there is at least one argument in favour of the thermodynamic model (both SMM and CTM). They have been widely used for composite production and have enjoyed very good success \cite{Das1,Bondorf}. The physics ansatz for the calculation reported in the earlier work \cite{Botvina2} and the present work is the same. The $\Lambda$ particle (particles) which arrive at the PLF interact strongly with the nucleons. Thus fragments can be calculated as in normal prescriptions. Hypernuclei as well as normal (non-strange) composites will be formed. The model gives definitive predictions as the following sections will show. Experiments can vindicate or contradict these predictions. \section{Mathematical Details} The case of the two-component system (neutrons and protons) has been dealt with in many places including \cite{Das1}. The generalisation to three components is straightforward. The multifragmentation of the system we study has a given number of baryons $A$, charges $Z$ and strangeness number $H$. This will break up into composites with mass $a$, charge $z$ and $h$ number of $\Lambda$ particles. The canonical partition function of the system $Q_{A,Z,H}$ is given by the following equation. Once the partition function is known, observables can be calculated. \begin{equation} Q_{A,Z,H}=\sum\prod \frac{(\omega_{a,z,h})^{n_{a,z,h}}} {n_{a,z,h}!} \end{equation} Here $\omega_{a,z,h}$ is the partition function of one composite which has mass number $a$, charge number $z$ and $h$ hyperons (here $\Lambda$'s) and $n_{a,z,h}$ is the number of such composites in a given channel. The sum over channels in eq.(1) is very large and each channel must satisfy \begin{eqnarray} \sum an_{a,z,h} &=& A \nonumber \\ \sum zn_{a,z,h} &=& Z \nonumber \\ \sum hn_{a,z,h} &=& H \end{eqnarray} Proceeding further, we have \begin{equation} \langle n_{a,z,h} \rangle =\frac{1}{Q_{A,Z,H}}\sum\prod n_{a,z,h} \frac{(\omega_{a,z,h})^{n_{a,z,h}}}{n_{a,z,h}!} \end{equation} which readily leads to \begin{equation} \langle n_{a,z,h} \rangle=\frac{1}{Q_{A,Z,H}}\omega_{a,z,h}Q_{A-a,Z-z,H-h} \end{equation} Since $\sum a\langle n_{a,z,h}\rangle=A$ we have \begin{equation} \sum a\frac{1}{Q_{A,Z,H}}\omega_{a,z,h}Q_{A-a,Z-z,H-h}=A \end{equation} which immediately leads to a recurrence relation which can be used to calculate the many-particle partition function: \begin{equation} Q_{A,Z,H}=\frac{1}{A}\sum _{a=1}^A a\omega_{a,z,h}Q_{A-a,Z-z,H-h} \end{equation} It is obvious that other formulae similar to the one above exist: \begin{equation} Q_{A,Z,H}=\frac{1}{H}\sum h\omega_{a,z,h}Q_{A-a,Z-z,H-h} \end{equation} The above equations are general. In this paper we do numerical calculations for the cases $H$=1 and $H$=2. The composites considered have either $h$=0 (non-strange composites) or $h$=1. For the case $H$=2 this means that in a given channel there will be two composites each with $h$=1. A more general treatment would include composites with two $\Lambda$'s. To complete the story we need to write down the specific expressions for $\omega_{a,z,h}$ that we use. The one-particle partition function is a product of two parts: \begin{equation} \omega_{a,z,h}=z_{kin}(a,z,h)z_{int}(a,z,h) \end{equation} The kinetic part is given by \begin{equation} z_{kin}(a,z,h)=\frac{V}{h^3}(2\pi MT)^{3/2} \end{equation} where $h$ in the denominator is Planck's constant and $M$ is the mass of the composite: $M=(a-h)m_n+hm_{\Lambda}$. Here $m_n$ is the nucleon mass (we use 938 MeV) and $m_{\Lambda}$ is the $\Lambda$ mass (we use 1116 MeV). For low mass nuclei, we use experimental values to construct $z_{int}$ and for higher masses a liquid-drop formula is used. The neutron, proton and $\Lambda$ particle are taken as fundamental building blocks and so $z_{1,0,0}=z_{1,1,0}=z_{1,0,1}=2$ (spin degeneracy). For the deuteron, triton, $^3$He and $^4$He we use $z_{int}(a,z,0)= (2s_{a,z,0}+1)\exp (-e_{a,z,0}(gr)/T)$ where $e_{a,z,0}$ is the ground state energy and $(2s_{a,z,0}+1)$ is the experimental spin degeneracy of the ground state. Contributions to $z_{int}$ from excited states are left out for these low mass nuclei. Similarly experimental data are used for $^3_{\Lambda}$H, $^4_{\Lambda}$H, $^4_{\Lambda}$He and $^5_{\Lambda}$He. For heavier nuclei ($h$=0 or 1), a liquid-drop formula is used for the ground state energy. This formula is taken from \cite{Botvina2}. All energies are in MeV. \begin{equation} e_{a,z,h}=-16a+\sigma(T)a^{2/3}+0.72z^2/(a^{1/3})+25(a-h-2z)^2/(a-h) -10.68h+21.27h/(a^{1/3}) \end{equation} Here $\sigma(T)$ is the temperature dependent surface tension: $\sigma(T)=18[\frac{T_c^2-T^2}{T_c^2+T^2}]^{5/4}$. A comparative study of the above binding energy formula can be found in \cite{Botvina2}. This formula also defines the drip lines. We include all nuclei within the drip lines in constructing the partition function. With the liquid-drop formula we also include the contribution to $z_{int}(a,z,h)$ coming from the excited states. This gives a multiplicative factor $=\exp (r(T)Ta/\epsilon_0)$ where we have introduced a correction term $r(T)=\frac{12}{12+T}$ to the expression used in \cite{Bondorf}. This slows down the increase of $z_{int}(a,z,h)$ due to excited states as $T$ increases. Reasons for this correction can be found in \cite{Bhattacharyya,Koonin} although for the temperature range used in this paper the correction is not important. We also incorporate the effects of the long-range Coulomb force in the Wigner-Seitz approximation \cite{Bondorf}. We have used eq.(7) to compute the partition functions. If the PLF which absorbs the $\Lambda$ has mass number $A$ and proton number $Z$, we first calculate all the relevant partition functions for $H$=0. This requires calculating up to $Q_{A,Z,0}$. We then calculate, for $H$=1, partition functions up to $Q_{A+1,Z,1}$. We can then proceed to calculate for $H$=2 up to $Q_{A+2,Z,2}$ and so on. 
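To make the computational recipe concrete, the following is a minimal numerical sketch of the recurrence in Python. The $\omega$ table and the critical temperature $T_c=18$ MeV are illustrative assumptions (the value of $T_c$ is not quoted in this paper, and a realistic calculation would build $\omega_{a,z,h}$ from eqs.~(8)--(10) for all nuclei inside the drip lines); only the recurrence of eq.~(6) with $Q_{0,0,0}=1$ and the yield formula of eq.~(4) are taken from the text.
\begin{verbatim}
from functools import lru_cache

def surface_tension(T, Tc=18.0):
    # sigma(T) = 18 [(Tc^2 - T^2)/(Tc^2 + T^2)]^{5/4}; Tc = 18 MeV is an assumption
    return 18.0 * ((Tc**2 - T**2) / (Tc**2 + T**2)) ** 1.25

def liquid_drop_energy(a, z, h, T):
    # ground-state energy e_{a,z,h} in MeV, eq. (10); meaningful only for a > h
    return (-16.0 * a + surface_tension(T) * a ** (2.0 / 3.0)
            + 0.72 * z**2 / a ** (1.0 / 3.0)
            + 25.0 * (a - h - 2 * z) ** 2 / (a - h)
            - 10.68 * h + 21.27 * h / a ** (1.0 / 3.0))

# Toy one-composite partition functions omega[(a, z, h)]: placeholder numbers,
# not the z_kin * z_int values a real calculation would use.
omega = {(1, 0, 0): 2.0, (1, 1, 0): 2.0, (1, 0, 1): 2.0,
         (2, 1, 0): 3.0, (4, 2, 0): 1.5, (5, 2, 1): 2.5}

@lru_cache(maxsize=None)
def Q(A, Z, H):
    # eq. (6): Q_{A,Z,H} = (1/A) sum_{a,z,h} a * omega_{a,z,h} * Q_{A-a,Z-z,H-h}
    if (A, Z, H) == (0, 0, 0):
        return 1.0
    if A <= 0 or Z < 0 or H < 0:
        return 0.0
    return sum(a * w * Q(A - a, Z - z, H - h)
               for (a, z, h), w in omega.items()
               if a <= A and z <= Z and h <= H) / A

def mean_multiplicity(a, z, h, A, Z, H):
    # eq. (4): <n_{a,z,h}> = omega_{a,z,h} * Q_{A-a,Z-z,H-h} / Q_{A,Z,H}
    return omega[(a, z, h)] * Q(A - a, Z - z, H - h) / Q(A, Z, H)

print(mean_multiplicity(5, 2, 1, 20, 8, 1))  # toy yield of a small hypernucleus
\end{verbatim}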
\section{Results for H=1} We assume one $\Lambda$ is captured in the projectile-like fragment (PLF). The PLF breaks up into various fragments. In an event one of these fragments will contain the $\Lambda$ particle; the rest of the fragments will have $h=0$. There is also a probability that the $\Lambda$ remains unattached, i.e., after break-up it emerges as a free $\Lambda$. There is also another extreme possibility (this requires a very low temperature in the PLF) that the $\Lambda$ gets attached to the entire PLF, which does not break up. In such an event the number of composites with $h=0$ is zero. The average over all events gives the average multiplicity of all composites, with $h=0$ and $h=1$ (eq.(4)). We will show results for a $\Lambda$ captured by a system of $A=100, Z=40$ and $A=200, Z=80$. These are the same systems considered in \cite{Botvina2}. The results for $\langle n_{a,z,h} \rangle$ depend quite sensitively on the temperature and less so on the assumed freeze-out density. Except in one case, all the results shown use a freeze-out density of one-third normal density. 
Past experience has shown \cite{Gargi1,Gargi2} that a freeze-out density of one-third normal density gives better results for the disassembly of a PLF than, for example, the value of one-sixth normal density which is more appropriate for the participating zone. Again from past experience, temperatures in the range 5 to 10 MeV are considered to be appropriate. In Fig.1 we show results for $A=100, Z=40$ at a low temperature of $T$=4 MeV. In order to display the results easily we sum over the charge and plot $\langle n_{a,h} \rangle =\sum_z\langle n_{a,z,h} \rangle$. Note that for this choice of temperature, the average mass number of the hypernucleus formed is very high, about 95. The multiplicity of non-strange composites is low (1.24). The average mass of a non-strange composite is about 5 and the average charge is about 1.7. We thus have a curious situation. The non-strange part is a gas with very few particles and the strange part of the matter is a liquid, since in heavy ion physics a large blob of matter is taken to be the liquid part. While this aspect of hybrid liquid-gas co-existence may lead to an interesting study, our focus here will be the population of hypernuclei. For brevity we do not show the population of composites at this temperature for a system of $A=200, Z=80$. There are remarkable similarities in the shapes of the curves, but the differences are also significant and the curve for $A=200$ cannot be scaled onto the curve for $A$=100. At higher temperature, however, one can guess the results for $A=200$ knowing, for example, the results for $A$=100. Fig. 2 shows the graph of $\langle n_{a,h} \rangle$ at 8 MeV temperature for both $A=100, Z=40$ and the system double its size. The important feature which allows one to scale the results of one system to another is this. At this temperature the relative population of $\langle n_{a,h} \rangle$ drops off rapidly with $a$, so that the population beyond, say, $a$=40 can be ignored. For composites with $h$=1, both the systems $A=100$ and $A$=200 are virtually the same: for both, $\sum_{a=1}^{40}\langle n_{a,1} \rangle =1$ and hence it is possible to have the same value of $\langle n_{a,1}\rangle$ for the two systems. The graphs for $h=1$ bear this out. But for $h=0$, $\sum_{a=1}^{a=40}a\langle n_{a,0}\rangle$ have to add up to different numbers. For $A=100$ they have to add up to $[101-\langle a(h=1)\rangle]$ and for $A=200$ they have to add up to $[201-\langle a(h=1)\rangle]$. The simplest ansatz is that $\langle n_{a,0}\rangle$ for $A$=200 is larger than the corresponding quantity for $A=100$ by the ratio $[201-\langle a(h=1)\rangle]/[101-\langle a(h=1)\rangle]\approx 2$. Fig. 2 shows this to be approximately correct. For our model to be physically relevant, we expect the yield $\langle n_{a,h} \rangle$ to be proportional to the measured cross-section $\sigma (a,h)$, although the model, at the moment, is not capable of providing the value of the proportionality constant. The average value $\langle a(h=1)\rangle= \sum_a a\langle n_{a,1} \rangle/\sum_a\langle n_{a,1}\rangle= \sum_a a\langle n_{a,1} \rangle$ is a useful quantity and is predicted to be the average value of the mass number of the hypernuclei measured in experiment. This is plotted in Fig.3 as a function of temperature both for $A$=100 and $A=200$ calculated at one-third the nuclear density (graphs labelled 1 and 3 respectively). Curves labelled 2 and 4 refer to the cases when $H$=2 and we deal with them in the next section. 
In the figure we also plot the value of $\langle a(h=1) \rangle$ if this is calculated at a lower, one-sixth normal density for $A$=100 (graph labelled 5). Several comments can be made. Assuming that the PLF temperature is in the expected 6 MeV to 10 MeV range, the average mass number of hypernuclei should be in the 20 to 7 range. Secondly this value is insensitive to the PLF mass number so long as it is reasonably large. We can also use the graph to state that if the temperature is above 6 MeV the grand canonical model can give a dependable estimate, but if the temperature is significantly lower, say 5 MeV, a grand canonical calculation can be in significant error. As expected, if a lower value for the freeze-out density is used, the predicted value for $\langle a(h=1) \rangle$ is lowered. A more detailed plot of yields for $^a_{\Lambda}z$ for $z$ in the range 1 to 6 and all relevant $a$'s is given in Fig.4. There are two curves for each $z$. Let us concentrate on the lower curves. These belong to the case considered here, i.e., $H$=1. Although we have drawn this for $A=100, Z=40$ at $T$=8 MeV it is virtually unchanged for $A=200, Z=80$. The reasons were already given. These plots provide a very stringent test of the model as these yields are proportional to experimental cross-sections. \section{Results for H=2} We consider now $A=100, Z=40$ and $A=200, Z=80$ but these systems have captured two $\Lambda$'s rather than one. How do we expect the results to change? Fig.5 compares the yields of the composites with two $\Lambda$'s entrapped in $A=$100 at 8 MeV with the already studied case of one $\Lambda$ in $A$=100 at 8 MeV. For $H$=2 the number of hypernuclei (and also the number of free $\Lambda$'s) is doubled, with only very small changes in the number of non-strange composites. We can understand why this happens following a similar chain of arguments as presented in the previous section. The reason for this correspondence is that at temperature 8 MeV there is only an insignificant number of composites beyond $a$=40. The average value of $\langle a(h=1)\rangle$ as a function of temperature is shown in Fig.3 for $H$=2. Curve 2 is for $A$=100, $H$=2 and curve 1 is for $A$=100, $H$=1. As explained above, for $T>7$ MeV the average values $\langle a(h=1)\rangle$ will be very close, but at lower temperature (e.g., $T$=4 MeV) the situation is very different. For $H$=1 there is a very large hypernucleus containing most of the nucleons (Fig.1), but for $H$=2 there are two hypernuclei and they will together share the bulk of the nucleons. Thus the average value of $\langle a(h=1) \rangle$ will drop to about half the value obtained for $H$=1. In Fig.3 curve 4 is for $H$=2 in a system with $A$=200; curve 3 is for $H$=1 in a system with $A$=200. \section{Discussion} We have given a detailed description of what happens once the PLF captures the produced $\Lambda$ particle. How $\Lambda$'s are produced in the violent collision zone and the probability of arrival, both temporally and positionally, at the PLF are not described here. This will depend strongly on the experiment: for example, the case of, say, $^{197}$Au hitting $^{12}$C will have to be treated differently from that of Sn on Sn collisions. We hope to embark upon this aspect in the future. We have looked at statistical aspects only. This can be investigated more easily using the canonical thermodynamic model. The calculations here looked at the production of hypernuclei in the PLF. The technique can also be applied in the participant zone. 
In the participant zone the temperature will be higher. Also the freeze-out density is expected to be lower. As an example, if we use a freeze-out density 1/6-th of the normal density, $A=100, Z=40$ and temperature 18 MeV, the average value of $a$ for $h$=1 is 2.55. We produce more single $\Lambda$'s than hypernuclei. Heavier hypernuclei are not favoured at high temperature. \section{Acknowledgement} The author is indebted to A. Botvina for drawing his attention to the topic of this paper. He also thanks him for many discussions. This work is supported by the Natural Sciences and Engineering Research Council of Canada.
2,869,038,155,934
arxiv
\subsection{Fixed-Rate Coding} \label{frc} For fixed-rate coding, the per-user rate is defined as \begin{equation} R_N\triangleq \frac{K}{N}~\mathbb{P}\left(\mathrm{SIR}>2^{K/N}-1\mid \Phi\right). \label{FRC_CP} \end{equation} The CCDF of the rate $R_N$ is given by \begin{equation}\label{Frc_rn} \mathbb{P}(R_N>r)=\frac{\bar{\mathrm{B}}\left(rN/K, \bar \gamma, \beta\right)}{\mathrm{B}\left( \bar \gamma, \beta\right)}. \end{equation} In the expressions for the parameters $\bar \gamma$ and $\beta$ in (\ref{Frc_rn}), the moments $M_n$ given in (\ref{MnCI}) are used. Adaptive modulation and coding (AMC) is a scheme used in current 4G networks to adapt the rate to the channel conditions\cite{GoldsmithII,LTEBook,CaireII}. AMC chooses a fixed-rate code and its code rate to most closely match the channel conditions. The analytical discussion in the current paper, focused on rateless codes, is also applicable to the case of AMC, with the important change of the packet-time resolution. For the rateless case, the packet time satisfies $t\in \mathbb{N}$, whereas for the AMC case the resolution changes to $t=\{N_i\}$, where $N_i$ is the number of parity symbols for AMC index $i$. The CCDF of the rate $R_N$ with fixed-rate coding and power control is also given by (\ref{Frc_rn}). However, to compute the parameters $\bar \gamma$ and $\beta$, the moments $M_n$ are obtained from \cite{Wang}. \subsection{Insights from Theorem \ref{Th3Rn}} \label{The3Insi} \begin{itemize} \item From (\ref{Frc_rn}), we can see that the tail of the $R_N$ CCDF for fixed-rate coding decays very rapidly as $r\rightarrow K/N$. However, for rateless coding, it can be observed from (\ref{Rnpdf}) and (\ref{Rn_dis}) that the decay of the tail is much slower. The tail of the rateless $R_N$ CCDF is more heavy tail-like and has a slower decay due to the expectation $\mathbb{E}[\cdot]$ taken w.r.t. $T_\phi/N$. \item The presence of a heavy tail in the $R_N$ CCDF implies a smaller total energy consumption for a $K$-bit packet transmission, thus resulting in enhanced energy efficiency in the cellular downlink. \item In the $R_N$ expressions of (\ref{Rn}) and (\ref{FRC_CP}), we can see that both schemes have the $P_s(N)$ term. The distribution of $P_s(N)$ is the same for both fixed-rate coding and rateless coding under the CI model. The profound impact on the $R_N$ distribution in the rateless case arises due to the $T_\phi$ term, which is a result of variable-length coding in the physical layer leading to a variable transmission time. \end{itemize} \section{Theoretical Analysis} \label{AnaResu} \input{TivarIntMod} \input{ConstIntMod} \section{Distribution Approximations} \label{dis_appr} Based on the moments $\tilde{M}_n$, a closed-form expression for the CDF (or its bounds) of the RV $P_s(t)$ in (\ref{BapCP}) can be written. Since $P_s(t)$ is supported on the interval $[0,1]$, the beta distribution yields a simple yet useful alternative. We also obtain a distribution approximation for $R_N$ in (\ref{Rn}). 
\subsection{Beta Approximation of $P_s(t)$ for TvI and CI models} \label{betapr} The PDF of the beta-approximated $P_s(t)$ is given by \begin{equation}\label{be_pdf} f(x)=\frac{x^{\bar \gamma-1}\left(1-x\right)^{\beta-1}}{\mathrm{B}\left(\bar \gamma,\beta\right)},~~x \in [0,1], \end{equation} where $\mathrm{B}\left(a, b\right)=\int_{0}^{1} x^{a-1} (1-x)^{b-1} \,\mathrm{d} x$ is the beta function, and $\bar \gamma$ and $\beta$ are related to the moments of $P_s(t)$ as \cite{MeDi_Pap} \begin{align} &\bar \gamma = \frac{\gamma_1 \beta}{1-\gamma_1}; ~~\beta=\frac{(\gamma_1-\gamma_2)(1-\gamma_1)}{\gamma_2-\gamma_1^2}, \label{bepa} \end{align} where $\gamma_n=\tilde{M}_n$. Both parameters $\bar \gamma$ and $\beta$ are functions of $t$. Note that for the CI model, the moments $\tilde{M}_n$ are given in (\ref{MnCI}). Now we provide the CCDF result for $P_s(t)$ at $t=N$. \begin{Propi1} \label{PrPsn} The CCDF of the per-user coverage probability $P_s(N)$ in (\ref{p_s}) is approximated as \begin{align} \mathbb{P}(P_s(N)>p)&=\frac{\bar{\mathrm{B}}\left(p, \bar \gamma, \beta\right)}{\mathrm{B}\left( \bar \gamma, \beta\right)} \label{Ps_dis}, \end{align} where $\bar{\mathrm{B}}\left(a, b, c\right)=\int_{a}^{1} y^{b-1}\left(1-y\right)^{c-1} \,\mathrm{d} y$ is the upper incomplete beta function and $\bar \gamma$, $\beta$ are defined in (\ref{bepa}). \end{Propi1} \begin{IEEEproof} Using the beta approximation for $P_s(N)$ in (\ref{be_pdf}) completes the proof. \end{IEEEproof} \subsection{$R_N$ Distribution Approximation for TvI and CI models} \label{PsRn} From (\ref{Rn}), the distribution of $R_N$ can be obtained from the PDF of $P_s(N)$ given in (\ref{be_pdf}) and (\ref{bepa}) with $t=N$, and also the distribution of $\mathbb{E}\left[T\mid \Phi\right]$. Let \begin{equation}\label{etcon} T_\phi\triangleq \mathbb{E}\left[T\mid \Phi\right]=\int_0^N \left(1-\mathbb{P}(\hat T\leq t\mid \Phi)\right) \,\mathrm{d} t. \end{equation} Now, the CCDF of the rate $R_N$ in (\ref{Rn}) is given by \begin{align} \mathbb{P}(R_N>r)&=\mathbb{E}\left[\mathbb{P}\left(P_s(N)>\frac{rT_\phi}{K}\Big | T_\phi\right)\right]\nonumber\\ &=\frac{\mathbb{E}\left[\bar{\mathrm{B}}\left(rT_\phi/K, \bar \gamma, \beta\right)\right]}{\mathrm{B}\left( \bar \gamma, \beta\right)}.\label{Rnpdf} \end{align} To evaluate (\ref{Rnpdf}), the distribution of $T_\phi$ in (\ref{etcon}) is critical. We first obtain the moments of $T_\phi$. From (\ref{etcon}), the first two moments of $T_\phi$ are given by \begin{align} \nu_1&= \mathbb{E}\left[T_\phi\right]=N-\int_0^N \tilde{M}_1(t)\,\mathrm{d} t \label{TpE1}\\ \nu_2&=\mathbb{E}\left[T_\phi^2\right]=\mathbb{E}\Big[\big(N-\int_0^N P_s(t)\,\mathrm{d} t\big)^2\Big]\nonumber\\ &\stackrel{(a)}{=} N\left(2\nu_1-N\right)+ \mathbb{E}\Big[\big(\int_0^N P_s(t)\,\mathrm{d} t\big)^2\Big],\label{TpE2} \end{align} where in (a), the second term is given below. \begin{align} \mathbb{E}\Big[\big(\int_0^N P_s(t)\,\mathrm{d} t\big)^2\Big]&= \mathbb{E}\Big[\int_0^N P_s(t)\,\mathrm{d} t \int_0^N P_s(u)\,\mathrm{d} u\Big]\nonumber\\ &=\int_0^N \int_0^N \mathbb{E}\Big[ P_s(t) P_s(u) \Big] \,\mathrm{d} t \,\mathrm{d} u\nonumber\\ &\stackrel{(b)}{=}\int_0^N \int_0^N \frac{1} {1+J\left(\bar{\theta}_t,\bar{\theta}_u\right)} \,\mathrm{d} t \,\mathrm{d} u \label{Ept2}, \end{align} where (b) is obtained from Theorem \ref{Th2Mn}. Using (\ref{Ept2}) and (\ref{J2exp}), the second moment $\nu_2$ in (\ref{TpE2}) can be computed. For the CI model, the moment expressions are exact. However, for the TvI model, the moment expressions are bounds since they are based on the tractable bound of $P_s(t)$ in (\ref{LBeq}). It is not feasible to express the $T_\phi$ moments in closed form based on the exact $P_s(t)$ in (\ref{Exeq}). 
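The moment-matching step and the resulting CCDF evaluations are straightforward to script. The sketch below (Python with SciPy; the moment values in the usage lines are placeholders, not computed from (\ref{MnCI})) implements the parameter map of (\ref{bepa}) and evaluates $\bar{\mathrm{B}}(p,\bar\gamma,\beta)/\mathrm{B}(\bar\gamma,\beta)=1-I_p(\bar\gamma,\beta)$, where $I_p$ is the regularized incomplete beta function; this gives (\ref{Ps_dis}) directly, and the same function evaluated at $p=rN/K$ gives the fixed-rate CCDF (\ref{Frc_rn}).
\begin{verbatim}
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def beta_params(m1, m2):
    # map the first two moments of a [0,1]-valued RV to beta parameters, eq. (bepa)
    b = (m1 - m2) * (1.0 - m1) / (m2 - m1**2)
    g = m1 * b / (1.0 - m1)
    return g, b

def beta_ccdf(p, g, b):
    # P(X > p) for X ~ Beta(g, b): the ratio B-bar(p, g, b) / B(g, b)
    return 1.0 - betainc(g, b, min(max(p, 0.0), 1.0))

# placeholder moments M1, M2 of P_s(N):
g, b = beta_params(0.80, 0.68)
print(beta_ccdf(0.9, g, b))        # approx. P(P_s(N) > 0.9), eq. (Ps_dis)
\end{verbatim}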
The PDF of $T_\phi \in [0,N]$ is approximated by a known distribution, whose parameters are expressed in terms of the moments of $T_\phi$. The beta distribution has been widely used to approximate the distribution of a RV with finite support. Hence, we propose to model $T_\phi/N$ as a beta distributed RV. The first two moments of $T_\phi/N$ are given by \begin{equation}\label{TNmom} \kappa_1=\frac{\nu_1}{N};~~\kappa_2=\frac{\nu_2}{N^2}. \end{equation} Now, the two parameters $\bar{\kappa}$ and $\vartheta$ of the beta distribution for the RV $T_\phi/N$ are given by \begin{equation}\label{betpar} \bar{\kappa}=\frac{\kappa_1\vartheta}{1-\kappa_1};~~ \vartheta=\frac{(\kappa_1-\kappa_2)(1-\kappa_1)}{\kappa_2-\kappa_1^2}. \end{equation} The PDF of $T_\phi/N$ is similar in form to (\ref{be_pdf}) except for the parameters $\bar{\kappa}$ and $\vartheta$. Below, we summarize the main result. \begin{Theoi3} \label{Th3Rn} The CCDF of the per-user rate $R_N$ in (\ref{Rn}) is approximated as \begin{align} \mathbb{P}(R_N>r)&=\int_{0}^{1}\frac{\bar{\mathrm{B}}\left(rNy/K, \bar \gamma, \beta\right)}{\mathrm{B}\left( \bar \gamma, \beta\right)} \frac{y^{\bar{\kappa}-1}(1-y)^{\vartheta-1}}{\mathrm{B}\left(\bar{\kappa}, \vartheta\right)} \,\mathrm{d} y.\label{Rn_dis} \end{align} \end{Theoi3} \begin{IEEEproof} The CCDF of $R_N$ is given in (\ref{Rnpdf}). Using the beta approximation for $T_\phi/N$ in (\ref{Rnpdf}) completes the proof. The parameters used in (\ref{Rn_dis}) are defined in (\ref{bepa}) and (\ref{TNmom})-(\ref{betpar}). The moments $\nu_i$ are given in (\ref{TpE1})-(\ref{Ept2}). \end{IEEEproof} The CCDF result in (\ref{Rn_dis}) is used in the section on numerical results. Note that Theorem \ref{Th3Rn} applies to both the TvI and CI models. The result in (\ref{Rn_dis}) is based on modeling $T_\phi/N$ as a beta distributed RV. The time to decode a $K$-bit packet, $\hat{T}$ in (\ref{Rx_pkt}), has been fitted with a Gamma distribution in \cite{RHI}. Since $\hat{T}\in [0,\infty)$, the Gamma PDF seems to be a better match there. However, $T_\phi \in [0,N]$ and hence the beta PDF is justified for $T_\phi/N$. 
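Numerically, (\ref{Rn_dis}) is a one-dimensional integral and can be evaluated by direct quadrature; a sketch (Python/SciPy, with placeholder parameter values) follows.
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import betainc, beta as beta_fn

def rate_ccdf(r, K, N, gam, bet, kap, the):
    # eq. (Rn_dis): beta(kap, the)-weighted average over y = T_phi/N of the
    # P_s(N) tail probability evaluated at r*N*y/K
    def integrand(y):
        a = r * N * y / K
        if a >= 1.0:
            return 0.0                       # P_s(N) <= 1: empty tail
        tail = 1.0 - betainc(gam, bet, a)    # B-bar(a, gam, bet)/B(gam, bet)
        return tail * y**(kap - 1.0) * (1.0 - y)**(the - 1.0)
    val, _ = quad(integrand, 0.0, 1.0)
    return val / beta_fn(kap, the)

# placeholder parameters, for illustration only
print(rate_ccdf(r=0.3, K=75, N=200, gam=2.4, bet=0.6, kap=3.0, the=1.5))
\end{verbatim}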
\section{Conclusion} \label{sec:Concl} In this letter, we characterize the per-user performance of the cellular downlink when physical layer rateless codes are used for adaptive transmission. The BS locations are modeled by a uniform Poisson point process. The performance of rateless codes was presented under both the constant and time-varying interference models. Accurate approximations to the distributions of the per-user coverage probability and transmission rate are derived. The advantages of physical layer rateless codes are clearly illustrated by comparing their performance to fixed-rate based adaptive modulation and coding. \section{Introduction} \label{sec:Intro} One of the key technologies for 5G NR is the Raptor-like LDPC code for physical layer error correction\cite{ToRich}. It is anticipated that future cellular standards/technologies will evolve towards error correction schemes with more rateless-like properties. Modelling the locations of BSs and users by Poisson point processes (PPPs), it is shown in \cite{RHI,RDI} that adaptive transmission based on physical layer rateless codes is very robust in terms of providing enhanced coverage and rate relative to that based on fixed-rate coding and power control. In \cite{RDI}, the metrics used to compare the two adaptive transmission schemes are the typical-user coverage probability and rate. The typical-user metrics are deterministic values and correspond to the spatial averages of the coverage probability and rate across the network, with the expectation taken w.r.t. the point process. The meta-distribution of the SIR is the distribution of the coverage probability in the network conditioned on the point process\cite{MeDi_Pap,Wang}. It gives a fine-grained statistical probe into the location-dependent user performance across the network. In other words, the meta-distribution provides detailed information on the entire distribution of the coverage probability rather than just its spatial average across the network. In this letter, we focus on the coverage probability and rate conditioned on the point process. For the first time in the literature (to the best of our knowledge), this letter presents a location-dependent analysis of user performance in the cellular downlink when physical layer rateless codes are used. Using an accurate approximation for the distribution of bounded non-negative RVs, this letter quantifies the distribution of the per-user metrics, i.e., the coverage probability and rate conditioned on the point process, for rateless codes in the cellular downlink. In \cite{MeDi_Pap,Wang}, the authors provide a location-dependent performance analysis when power control is used in the cellular downlink. The inherent assumption is that fixed-rate coding is used. In such a setup, the only metric that needs to be characterized is the per-user coverage probability. The authors do not study the per-user rate in \cite{MeDi_Pap,Wang}; the per-user rate is obtained by scaling the per-user coverage probability by the fixed rate of transmission. On the contrary, the main contribution of this letter is the characterization of the per-user (location-dependent) rate in the cellular downlink, which is very different from the metrics pursued in \cite{RHI,RDI,MeDi_Pap,Wang}. The simulation and analytical results in this letter show significant enhancements of the per-user rate in the cellular downlink due to rateless codes relative to adaptive modulation and coding, and to fixed-rate coding with power control. \section{Numerical Results} \label{sec:Num_Res} In this section, numerical results showing the efficacy of the proposed per-user performance analysis are presented. For the network simulation, the following parameters were chosen: $\lambda=1$ and $K=75$. Fig. \ref{Psccdf} shows the plots of the CCDF of $P_s(N)$ for the rateless coding scenario. It is observed that the curves corresponding to the beta distribution approximation for $P_s(N)$ given in Proposition 1 match the simulation curves very well for both the TvI and CI models. \begin{figure}[!hbtp] \centering \includegraphics[scale=0.55, width=0.5\textwidth]{Psccdf} \caption{The CCDF of the per-user coverage probability $P_s(N)$ in (\ref{p_s}) in a cellular downlink with $\lambda=1$, $\alpha=\{3,4\}$ and $N=\{200, 90\}$ respectively. 
The analytical curve is based on (\ref{Ps_dis}).} \label{Psccdf} \end{figure} \begin{figure}[!hbtp] \centering \includegraphics[scale=0.55, width=0.5\textwidth]{Rnccdfci_al3} \caption{The CCDF of the per-user rate $R_N$ in (\ref{Rn}) in a cellular downlink with $\lambda=1$, $\alpha=3$ and $N=200$. The rateless coding curve is based on (\ref{Rn_dis}), while the fixed-rate coding curves for both constant power and power control are based on (\ref{Frc_rn}). For rateless coding, the curve is based on the \emph{constant interference} (CI) model.} \label{Rnccdf} \end{figure} \begin{figure}[!hbtp] \centering \includegraphics[scale=0.55, width=0.5\textwidth]{Rnccdftvi_al4} \caption{The CCDF of the per-user rate $R_N$ in (\ref{Rn}) in a cellular downlink with $\lambda=1$, $\alpha=4$ and $N=100$. For rateless coding, the curve is based on the \emph{time-varying interference} (TvI) model.} \label{Rnccdftvial4} \end{figure} In Fig. \ref{Rnccdf}, a plot of the CCDF of the per-user rate $R_N$ in a cellular network at $\alpha=3$ and $N=200$ is shown. Fig. \ref{Rnccdftvial4} shows a plot of the CCDF of the per-user rate $R_N$ for a cellular downlink at $\alpha=4$ and $N=100$. The focus is on comparing three types of transmission schemes, i.e., fixed-rate coding based on (\ref{Frc_rn}), AMC as described in Section \ref{frc}, and the rateless coding scheme as per (\ref{Rn_dis}). Note that the performance of AMC is based on simulation only. For the AMC case, we consider a list of four AMC indices. The packet time for AMC index $i$ is set to $t=i\cdot N/4$, $1\leq i \leq 4$. As mentioned before in Section \ref{frc}, the rate $R_N$ for the AMC case can be computed as per (\ref{Rn}) taking into consideration the above defined AMC packet times. In terms of matching the rate to the instantaneous channel conditions, fixed-rate coding with constant power has poor efficiency and rateless coding has high efficiency, while AMC and fixed-rate coding with power control have intermediate performance. The high efficiency of rateless coding is captured by the term $\mathbb{E}\left[T\mid \Phi\right]$ in the expression for $R_N$ in (\ref{Rn}). For fixed-rate coding, the packet time is fixed to $N$. AMC is also based on fixed-rate codes and thus the packet time does not adapt with as fine a resolution as in rateless coding. In Fig. \ref{Rnccdftvial4}, the CCDF curve for AMC decays to zero at $r=3$. For rateless coding, the CCDF at $r=3$ is $0.15$. Since rateless codes adapt robustly to the instantaneous channel conditions, the scheme yields much higher per-user rates relative to AMC. These higher per-user rates for the rateless scheme have implications for the energy efficiency of the BS-UE links and also for the congestion, QoS and end-to-end delay in the network. Note that the analytical results are based on ITM in Section \ref{AnaResu} and the moment lower bound $\tilde{M}_n$ in (\ref{Mnti}). These two approximations are necessary to obtain simplified expressions. The accuracy of the analytical curves is also influenced by the distribution of the downlink distance $D\sim$ Rayleigh$(1/\sqrt{2\pi c\lambda})$. A value of $c=1.25$ has been used in \cite{Wang}. However, in this letter, we use $c=1$ to remain consistent with the majority of the literature\cite{ElSawyII}. 
\section{System Model} \label{sys_mod} We consider a single tier cellular downlink in which the locations of BSs are modeled by a PPP $\Phi \triangleq \Phi_b \cup \{o\}$, where $\Phi_b=\{X_i\},~i=1,2,\cdots$ is a homogeneous PPP of intensity $\lambda$\cite{ElSawyII} and $X_i$ denotes the location of BS $i$. A user served by a BS $X_i$ is located uniformly at random within the Voronoi cell of $X_i$. The typical user is located within the \emph{typical cell}, the Voronoi cell of the typical BS at the origin. The distance between the typical user and the typical BS of $\Phi$ is $D$. Its approximate distribution is $D\sim$ Rayleigh$\left(\sigma\right)$, with the scale parameter $\sigma=1/\sqrt{2\pi\lambda}$\cite{RHI}. We consider a translated version of the PPP $\Phi$ so that the typical user is at the origin. On the downlink, each BS transmits a $K$-bit packet to its user using a physical layer rateless code. Each BS transmits with constant power $\rho$. The channel experiences quasi-static flat fading and path loss. The interference power and SIR at the typical user based on the typical BS transmission are given by \begin{equation} I=\sum_{k\neq 0}\rho h_{k} \abs{X_k}^{-\alpha} \label{int_eq} \end{equation} \begin{equation} \mathrm{SIR}=\frac{\rho h D^{-\alpha} }{I}\label{sir_in}, \end{equation} where $h$ and $h_k$ have an $\mathrm{Exp}(1)$ distribution. Each packet transmission of $K$ bits has a delay constraint of $N$ channel uses. Define $\hat{T}$ as the time to decode a $K$-bit packet, and $T$ as the packet transmission time. They are defined as \begin{align} &\hat{T}\triangleq \min\left\{t:K<t\cdot C\right\}\label{Rx_pkt}\\ &T\triangleq \min (N,\hat{T}),\label{pkt_Ti} \end{align} where $C=\log_2\left(1+ \mathrm{SIR}\right)$ is the achievable rate of the typical BS transmission and depends on the type of receiver used. Note that $T$ is a truncated version of $\hat{T}$ at the sample value $t=N$. Now, we focus on a framework introduced in \cite{MeDi_Pap} to study the network performance conditioned on the PPP $\Phi$. In this letter, the two metrics used to quantify the performance of rateless coded transmission are the success probability and the rate of $K$-bit packet transmission conditioned on $\Phi$, defined as \begin{align} P_s(N)&\triangleq 1-\mathbb{P}(\hat T> N\mid \Phi)\label{p_s}\\ R_N&\triangleq \frac{KP_s(N)}{\mathbb{E}\left[T\mid \Phi\right]}.\label{Rn} \end{align} Since $T$ is basically $\hat{T}$ truncated at the sample value $t=N$, both $P_s(N)$ and $R_N$ depend on the distribution of $\hat{T}$ conditioned on $\Phi$. $R_N$ in (\ref{Rn}) is a random variable (RV). It quantifies the per-user rate achieved in a given PPP realization $\Phi$. From (\ref{pkt_Ti}), the CCDF of $T$ is $\mathbb{P}\left(T>t\right)=\mathbb{P}(\hat{T}>t)$, $t<N$. Plugging the expression for $C$ into (\ref{Rx_pkt}), we obtain \begin{align} &\mathbb{P}(\hat{T}>t) =\mathbb{P}\left(K/t\geq \log_2\left(1+ \mathrm{SIR}\right)\right)\label{ccdf_eq}\\ &P_s(t)\triangleq \mathbb{P}(\hat{T}\leq t\mid \Phi)=\mathbb{P}\left(\mathrm{SIR}\geq \theta_t\mid \Phi\right),\label{cdf_con} \end{align} where $\theta_t=2^{K/t}-1$. Note that the CDF in (\ref{cdf_con}) is a RV due to the conditioning on $\Phi$. Below, we discuss the conditional packet transmission time distribution for two types of interference models. One type is the time-varying interference (TvI) model and the second is the constant interference (CI) model.
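The per-user quantities defined above can also be estimated by direct simulation. The following sketch (Python/NumPy) draws one PPP realization in a finite disk and Monte Carlo averages over $\mathrm{Exp}(1)$ fading, under the simplifying assumption that the fading, and hence the SIR and $\hat{T}$, stays fixed over a whole packet; interferers closer than the serving BS are discarded as a crude stand-in for nearest-BS association. It is a toy illustration of (\ref{p_s})--(\ref{Rn}), not the TvI/CI analysis of this letter.
\begin{verbatim}
import numpy as np

def per_user_stats(K=75, N=200, alpha=4.0, lam=1.0, radius=30.0,
                   n_fade=5000, seed=0):
    rng = np.random.default_rng(seed)
    # serving distance D ~ Rayleigh(1/sqrt(2*pi*lam))
    d0 = rng.rayleigh(1.0 / np.sqrt(2.0 * np.pi * lam))
    # interfering BSs: PPP of intensity lam in a disk, beyond the serving BS
    n_bs = rng.poisson(lam * np.pi * radius**2)
    r_int = radius * np.sqrt(rng.random(n_bs))
    r_int = r_int[r_int > d0]
    h0 = rng.exponential(1.0, n_fade)                 # serving-link fading
    hk = rng.exponential(1.0, (n_fade, r_int.size))   # interferer fading
    sir = h0 * d0**(-alpha) / (hk @ r_int**(-alpha))  # eq. (sir_in); rho cancels
    t_hat = np.floor(K / np.log2(1.0 + sir)) + 1.0    # min{t : K < t * C}
    ps_n = np.mean(t_hat <= N)                        # eq. (p_s)
    e_t = np.mean(np.minimum(t_hat, N))               # E[T | Phi], T = min(N, T-hat)
    return ps_n, K * ps_n / e_t                       # (P_s(N), R_N) for this Phi

print(per_user_stats())
\end{verbatim}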
2,869,038,155,935
arxiv
\section{Introduction} \label{sect1} The study of polarized deep inelastic scattering off polarized targets has revealed a rich structure of phenomena during the last years~\cite{REV}. So far mainly the case of deep inelastic photon scattering has been studied experimentally. A future polarized proton option at RHIC and HERA, however, would allow one to probe the spin structure of nucleons also at much higher $Q^2$~(cf.~\cite{JB95A}). In this range $Z$--exchange contributions become relevant and one may investigate charged current scattering as well. For this general case the scattering cross section is determined by (up to) five polarized structure functions per current combination, if lepton mass effects are disregarded. In previous investigations different techniques have been used to derive relations between these structure functions, and discrepancies between several derivations were reported~(cf. e.g.~\cite{a1}). In refs.~\cite{a1,a0} the structure functions were calculated in the parton model. Some of the investigations deal with the case of longitudinal polarization only~\cite{a3}. In other studies light-cone current algebra~\cite{DIC,A1C} and the operator product expansion were used~\cite{AR}--\cite{a77}. Furthermore the structure functions $g_1^{em}$ and $g_2^{em}$ were also calculated in the covariant parton model~\cite{a1B,a17}. Still, a thorough agreement between the different approaches has not been obtained. It is the aim of the present paper to derive the relations for the complete set of the polarized structure functions including weak interactions which are not associated with terms in the scattering cross section vanishing as $m_{lepton} \rightarrow 0$. The calculation is performed applying two different techniques:~the operator product expansion and the covariant parton model~\cite{LP}. The latter method is furthermore used to obtain also the quark mass corrections in lowest order QCD. As it turns out, the twist-2 contributions of only two out of the five polarized structure functions, for the respective current combinations, are linearly independent. Therefore three linear operators have to exist which determine the remaining three structure functions over a basis of two in lowest order QCD. Two of them are given by the Wandzura--Wilczek\cite{WW} relation and a relation by Dicus\footnote{This relation corresponds to the Callan--Gross~\cite{CG} relation for unpolarized structure functions since the spin dependence enters the tensors of $g_4$ and $g_5$ in $W_{\mu\nu}^{ij}$,~eq.~(\ref{eqz4}), in terms of a factor $S.q$.}~\cite{DIC}. A third {\it new} relation is found. New sum rules based on this relation are derived and discussed in the context of quark mass corrections. Extending a recent analysis carried out for the case of photon scattering~\cite{a18} to the complete set of neutral and charged current interactions, we also investigate the validity of known relations, such as the Burkhardt--Cottingham~\cite{BC} sum rule and other relations, in the presence of quark mass effects. \section{Basic Notation} \label{sect2} The hadronic tensor for polarized deep inelastic scattering is given by \begin{equation} W_{\mu\nu}^{ab}=\frac{1}{4\pi}\int d^4xe^{iqx} \langle pS\mid[J_\mu^a(x),J_\nu^b(0)]\mid pS\rangle, \label{eqHAD} \end{equation} where, in the framework of the quark model, the currents are \begin{equation} J_\mu^a(x)=\sum_{f,f'} U_{ff'} \overline{q}_{f'}(x)\gamma_\mu(g_V^a+g_A^a\gamma_5)q_f(x). 
\end{equation} In terms of structure functions the hadronic tensor reads: \begin{eqnarray} W_{\mu\nu}^{ab} &=&(-g_{\mu\nu}+\frac{q_\mu q_\nu}{q^2})F_1^i(x,Q^2)+ \frac{\widehat{p}_\mu\widehat{p}_\nu}{p.q} F_2^i(x,Q^2)- i\varepsilon_{\mu\nu\lambda\sigma}\frac{q_\lambda p_\sigma}{2 p.q} F_3^i(x,Q^2)\nonumber\\ &{+}& i\varepsilon_{\mu\nu\lambda\sigma}\frac{q^\lambda S^\sigma}{p.q} g_1^i(x,Q^2)+ i\varepsilon_{\mu\nu\lambda\sigma}\frac{q^{\lambda}(p.q S^\sigma - S.q p^\sigma)} {(p.q)^2} g_2^i(x,Q^2)\nonumber\\ &{+}& \left[ \frac{\widehat{p_\mu} \widehat{S_\nu} + \widehat{S_\mu} \widehat{p_\nu}}{2}- S.q \frac{\widehat{p_\mu} \widehat{p_\nu}}{(p.q)} \right] \frac{g_3^i(x,Q^2)}{p.q}\nonumber\\ &+& S.q \frac{\widehat{p_\mu}\widehat{p_\nu}}{(p.q)^2} g_4^i(x,Q^2)+ (-g_{\mu\nu}+\frac{q_\mu q_\nu}{q^2})\frac{(S.q)}{p.q} g_5^i(x,Q^2), \label{eqz4} \end{eqnarray} with $ab \equiv i$ and \begin{equation} \widehat{p_\mu} = p_\mu-\frac{p.q}{q^2} q_{\mu},~~~~~~\widehat{S_\mu} = S_\mu-\frac{S.q}{q^2} q_{\mu}. \label{eqz5} \end{equation} Here $x = Q^2/2p.q \equiv Q^2/2M\nu$ and $Q^2 = -q^2$ is the transferred four-momentum squared. $p$ and $S$ denote the four vectors of the nucleon momentum and spin, respectively, with $S^2=-M^2$ and $S.p = 0$. $g_{V_i}$ and $g_{A_i}$ are the vector and axial-vector couplings of the bosons exchanged in the respective subprocesses. For charged current interactions $U_{ff^\prime}$ denotes the Cabibbo-Kobayashi-Maskawa matrix. The hadronic tensor (\ref{eqz4}) was constructed using both Lorentz and time reversal invariance and current conservation. In previous analyses partly different notations for the hadronic tensor have been used. To allow for direct comparisons with earlier results we relate the definition of structure functions given in eq.~(\ref{eqz4}) to that of other authors in table~1 for convenience~\footnote{A more comprehensive comparison is given in~\cite{BK}.}. \begin{center} \begin{tabular}{||c||c|c|c|c||}\hline \hline {\sf our notation} & $\ct{a1}$ & $\ct{a3}$ & $\ct{a7}$ & $\ct{a77}$\\ \hline \hline $g_1$ & $g_1$ & $g_1$ & $g_1$ & $g_1$\\ $g_2$ & $g_2$ & $g_2$ & $g_2$ & $g_2$\\ $g_3$ & $-g_3$ & $ (g_4-g_5)/2$ & $b_1+b_2$ & $(A_2-A_3)/2$ \\ $g_4 $ & $g_4-g_3$ & $g_4$ &$a_2+b_1+b_2$ &$A_2$ \\ $g_5$ & $-g_5$ &$ g_3$ &$a_1$ & $A_1$ \\ \hline \end{tabular} \end{center} \small \vspace{3mm} \noindent \small {\sf Table~1:}~The definition of polarized deep inelastic scattering structure functions in different conventions.\footnote{Note that in part of the above papers only the structure functions related to longitudinal nucleon polarization were dealt with.} \normalsize \vspace{2mm} \section{Operator Product Expansion} \label{sect3} The forward Compton amplitude, $T_{\mu\nu}^{ij}$, is related to the hadronic tensor by \begin{equation} W_{\mu\nu}^{ij} = \frac{1}{2\pi} Im T^{ij}_{\mu\nu} \label{eqB1} \end{equation} where \begin{equation} T_{\mu\nu}^{ij} =i\int d^4xe^{iqx}\langle pS\mid(T{J_\mu^i}^\dagger (x)J_\nu^j(0))\mid pS\rangle. \label{eqB2} \end{equation} It may be represented in terms of the amplitudes $\left . T_k^i(q^2, \nu) \right|_{k=1}^3$ and $\left . A_k^i(q^2, \nu) \right|_{k=1}^5$ analogously to (\ref{eqz4}) substituting \begin{equation} F_1 \rightarrow T_1,~~~~~~~F_{2,3} \rightarrow \frac{p.q}{M^2} T_{2,3} \label{eqB3} \end{equation} and \begin{equation} g_{1,5} \rightarrow \frac{p.q}{M^2} A_{1,5},~~~~~~~g_{2,3,4} \rightarrow \frac{(p.q)^2}{M^4} A_{2,3,4}. 
\label{eqB3A} \end{equation} Near the light cone the forward Compton amplitude has the representation \begin{eqnarray} T^{ab}_{\mu\nu, ij} &=& \frac{2i}{(2\pi)^2(x^2 - i0)^2} \left [ \bar q(x)\gamma_\mu(g_{V_i}+g_{A_i} \gamma_5) \not \!{x} \gamma_\nu(g_{V_j}+ g_{A_j}\gamma_5)\lambda^a\lambda^b q(0) \right. \nonumber\\ &{-}&~~~~~~~~~~~~~~~~~~~~ \left . \bar q(0)\gamma_\nu(g_{V_j}+g_{A_j}\gamma_5) \not \!{x} \gamma_\mu(g_{V_i}+ g_{A_i}\gamma_5)\lambda^b\lambda^a q(x) \right ], \label{eqB4} \end{eqnarray} where $\lambda^a$ denote the $SU(N_f)$ matrices. The spin dependent part of $T_{\mu\nu, ij}^{ab}$ is \begin{equation} \begin{array}{cl} T_{\mu\nu, ij}^{spin, ab} & = \frac{\displaystyle 2 x^\alpha}{\displaystyle (2\pi)^2(x^2-i0)^2} \\ \times & \left \{ (g_{V_1}g_{V_2}+g_{A_1}g_{A_2}) \varepsilon_{\mu\alpha\nu\beta} \left [ (if^{abc}+\widetilde{d}^{abc}) \bar q(x)\gamma_\beta\gamma_5\lambda^c q(0) -(if^{abc}-\widetilde{d}^{abc}) \bar q(0)\gamma_\beta\gamma_5\lambda^c q(x) \right ] \right. \\ +& \left. (g_{V_1}g_{A_2}+g_{A_1}g_{V_2}) S_{\mu\alpha\nu\beta} \left[ (if^{abc} + \widetilde{d}^{abc})\bar q(x)\gamma_\beta\gamma_5\lambda^c q(0) + (if^{abc} - \widetilde{d}^{abc}) \bar q(0)\gamma_\beta\gamma_5\lambda^c q(x) \right ] \right \},\\ \end{array} \label{eqB5} \end{equation} with \begin{equation} S_{\mu \alpha \nu \beta} = g_{\mu \alpha} g_{\nu \beta} + g_{\mu \beta } g_{\nu \alpha} - g_{\mu \nu } g_{\alpha \beta} \end{equation} and \begin{equation} \widetilde{d}^{abc} \lambda_c = \frac{2}{N_f} \delta^{ab} + d^{abc} \lambda_c. \end{equation} We further represent (\ref{eqB5}) in terms of a Taylor series around $x=0$. The amplitudes $A_k(q^2, \nu)|_{k=1}^5$ can be related to the expectation values of a symmetric and an antisymmetric operator emerging in the Taylor expansion, $\langle pS|\Theta_{S,A}^{\beta \left\{\mu_1 ... \mu_n\right\}}|pS \rangle$, and obey the following crossing relations: \begin{eqnarray} A_{1,3}(q^2, -\nu) &=& ~A_{1,3}(q^2, \nu) \label{eqXMP} \\ A_{2,4,5}(q^2, -\nu) &=& -A_{2,4,5}(q^2, \nu) \label{eqAMP} \end{eqnarray} for neutral current interactions. One finally obtains the following expressions for the moments of the structure functions using standard techniques. \begin{eqnarray} \int_0^1 dx x^n g_1^j(x,Q^2) &=& \frac{1}{4} \sum_q \alpha_j^q a_n^q,~~~{\ } n=0,2..., \label{eqYMP} \\ \int_0^1 dx x^n g_2^j(x,Q^2) &=& \frac{1}{4} \sum_q \alpha_j^q \frac{n (d_n^q -a_n^q)}{n + 1} ,~~~{\ } n=2,4..., \\ \int_0^1 dx x^n g_3^j(x,Q^2) &=& \sum_q \beta_j^q \frac{a_{n+1}^q}{n + 2} ,~~~{\ } n=0,2..., \\ \int_0^1 dx x^n g_4^j(x,Q^2) &=& \frac{1}{2} \sum_q \beta_j^q a_{n+1}^q ,~~~{\ } n=2,4..., \\ \int_0^1 dx x^n g_5^j(x,Q^2) &=& \frac{1}{4} \sum_q \beta_j^q a_{n}^q ,~~~{\ } n=1,3...~~. \label{eqg1M} \end{eqnarray} Here we adopt the notation of~\cite{RLJ}, and $a_n^q$ and $d_n^q$ are the matrix elements which are related to the expectation values of $\langle pS|\Theta_{S}^{\beta\left\{ \mu_1 ... \mu_n\right\}}|pS \rangle$ and $\langle pS|\Theta_{A}^{\beta\left\{ \mu_1 ... \mu_n\right\}}|pS \rangle$, respectively. The factors $\alpha_j^q$ and $\beta_j^q$ are given by \begin{eqnarray} \left( \alpha_{|\gamma|^2}^q, \alpha_{|\gamma Z|}^q, \alpha_{|Z|^2}^q \right) &=& \left [ e_q^2, 2 e_q g_V^q, (g_V^q)^2 + (g_A^q)^2 \right ] \\ \left( \beta_{|\gamma Z|}^q, \beta_{|Z|^2}^q \right) &=& \left [ 2 e_q g_V^q, 2 g_V^q g_A^q \right ]. \label{eqAlBe} \end{eqnarray} Analogous relations to (\ref{eqXMP}--\ref{eqg1M}) are derived for the charged current structure functions~(cf.~\cite{BK}). 
As is well known, the structure function $g_2(x,Q^2)$ contains also twist--3 contributions corresponding to the matrix elements $d_n^q$. On the other hand, all the remaining structure functions are {\it not} related to $d_n^q$, and contain at lowest twist contributions of twist--2 only. We will disregard the terms $d_n^q$ in the subsequent discussion. The twist--2 contributions are related by the equations: \begin{eqnarray} g_2^i(x) &=& -g_1^i(x) + \int_x^1\frac{dy}{y}g_1^i(y), \label{qq7} \\ g_4^j(x) &=& 2xg_5^j(x), \label{qq8} \\ g_3^j(x)&=&4x\int_x^1\frac{dy}{y}g_5^j(y), \label{qq9} \end{eqnarray} where $i=\gamma,\gamma Z, Z, W $ and $j=\gamma Z, Z, W $. Eqs.~(\ref{qq7}) and (\ref{qq8}) are the Wandzura--Wilczek~\cite{WW} and Dicus~\cite{DIC} relations, and eq.~(\ref{qq9}) is a {\it new} relation. Recently the first two moments of $g_3$ were calculated in~\cite{FRANKF}. They agree with our general relation eq.~(\ref{qq9}). We do not confirm a corresponding relation for the structure function $A_3$~(cf.~table~1) given in~\cite{a77} previously, which also disagrees with the lowest moments given in~\cite{FRANKF}. Eqs.~(\ref{qq8},\ref{qq9}) yield the sum rules \begin{equation} \int_0^1 dx x^n \left [ g_3^k(x,Q^2) - \frac{2}{n+2} g_4^k(x,Q^2) \right] = 0. \label{qq10} \end{equation} For $n = 0$ one obtains \begin{equation} \int_0^1 dx g_3^k(x,Q^2) = \int_0^1 dx g_4^k(x,Q^2). \label{qq11} \end{equation} Two of the five spin--dependent structure functions $\left. g_k^j\right |_{k=1}^5$ are linearly independent. We will express the remaining ones using $g_1^j$ and $g_5^j$ as a basis, given by \begin{eqnarray} g_1^j(x,Q^2) &=& \frac{1}{2} \sum_q \alpha_j^q \left [ \Delta q(x,Q^2) + \Delta \overline{q}(x,Q^2) \right ],\\ g_5^j(x,Q^2) &=& \frac{1}{2} \sum_q \beta_j^q \left [\Delta q(x,Q^2) - \Delta \overline{q}(x, Q^2) \right ] \end{eqnarray} for the neutral current reactions. For charged current $lN$ scattering one obtains: \begin{eqnarray} g_1^{W^{-(+)}}(x,Q^2) &=& \sum_q \left [\Delta q_{u(d)}(x,Q^2) + \Delta \overline{q}_{d(u)}(x, Q^2) \right ], \\ g_5^{W^{-(+)}}(x,Q^2) &=& - \sum_q \left [\Delta q_{u(d)}(x,Q^2) - \Delta \overline{q}_{d(u)}(x, Q^2) \right ]. \end{eqnarray} In figure~1 the behaviour of the twist--2 contributions to the structure functions $\left. g_k^j(x,Q^2) \right|_{k=1}^5$ is compared for $j = |\gamma|^2, |\gamma Z|$ and $|W^-|^2$ in leading order QCD for the range $10^{-4} < x$ and $10 \GeV^2 \leq Q^2 \leq 10^4 \GeV^2$. Here we refer to the parametrization~\cite{GRVS} of the parton densities as one example. Whereas the absolute values of the structure functions $g_{1,2,5}(x,Q^2)$ grow for $x \rightarrow 0$, $g_{3}^j$ and $g_4^j$ are predicted to vanish as $x \rightarrow 0$. In the parametrization~\cite{GRVS} the structure functions $g_3^j$ to $g_5^j$ are found to be positive for $j = \gamma Z$ and negative for $j = W^-$, while $g_1$ takes negative values for $x \stackrel{<}{\sim} 10^{-3}... 3 \cdot 10^{-4}$. For larger values of $Q^2$ the twist--2 contribution to $g_2^k$ is predicted to be positive, while for some current combinations it can take negative values in the small $x$ region again. Currently the experimental data on $g_1^n$ and $g_1^p$ constrain the parton densities $\Delta q$ and $\Delta \overline{q}$ in the kinematical range $10^{-2} \stackrel{<}{\sim} x$ and the predictions for the small $x$ range result from extrapolations only. 
Other parametrizations (see~\cite{LADIN} for a recent compilation) agree in the range of the current data but differ in size in the range of small~$x$. Clearly more data, particularly in the low $x$ region, are needed to yield better constraints on the flavour structure of the polarized structure functions. \section{Covariant Parton Model} \label{sect4} In the covariant parton model the hadronic tensor for deep inelastic scattering is given by \begin{equation} W_{\mu\nu, ab}(q,p,S)=\sum_{\lambda, i} \int d^4k f_{\lambda}^{q_i}(p,k,S) w_{\mu\nu ,ab, \lambda}^{q_i}(k,q) \delta[(k+q)^2-m^2]. \label{eq1} \end{equation} Here $w_{\mu\nu, ab, \lambda}^{q_i}(k,q)$ denotes the hadronic tensor at the quark level, $f_{\lambda}^{q_i}(p,k,S)$ describes the quark and antiquark distributions of the hadron, $\lambda$ is the quark helicity, $k$ the four-momentum of the off-shell initial state parton, and $m$ is the quark mass. The spin-dependent part of the hadronic tensor at the quark level takes the following form: \begin{eqnarray} w_{\mu\nu, ab, \lambda}^{q_i, spin}(k,q) &=&\lambda \left\{ 2i\varepsilon_{\mu\alpha\nu\beta} [g_{A_a}^{q_i}g_{A_b}^{q_i}k_\alpha n_\beta+(g_{A_a}^{q_i}g_{A_b}^{q_i}+ g_{V_a}^{q_i}g_{V_b}^{q_i})q_\alpha n_\beta] \right. \nonumber\\ &+& \left. g_{V_a}^{q_i}g_{A_b}^{q_i}[2k_\mu n_\nu-(n.q)g_{\mu\nu}]+ g_{A_a}^{q_i}g_{V_b}^{q_i}[2n_\mu k_\nu-(n.q)g_{\mu\nu}] \right \}, \label{eq2} \end{eqnarray} where $n$ is the spin vector of the off-shell parton~\cite{a17} \begin{equation} n_\sigma=\frac{m p.k}{\sqrt{(p.k)^2 k^2 - M^2 k^4}}(k_\sigma - \frac{k^2}{p.k} p_\sigma). \label{eq3} \end{equation} For massless quarks the spin dependent quark densities $\Delta f^{q_i} = f_+^{q_i} - f_-^{q_i}$ obey, due to covariance (cf. \cite{a18}), \begin{equation} \Delta f^{q_i} (p.k,S.k,k^2) = - \frac{S.k}{M^2} \hat{f}^{q_i}(p.k,k^2). \label{eq43} \end{equation} We further decompose the spin dependent part of the hadronic tensor $W_{\mu\nu}$ into a longitudinal and a transverse component with respect to the nucleon spin $S^{\mu}_{\parallel} = p^{\mu} + {\cal O}(M^2/\nu)$ and $S^{\mu}_{\perp} = M(0,1,0,0)$ in the infinite momentum frame $p = (\sqrt{M^2 + \pvec^2},0,0,\pvec)$: \begin{eqnarray} W_{\mu\nu}^{j,\|} =i\varepsilon_{\mu\alpha\nu\beta} \frac{q_\alpha p_\beta}{\nu}g_1^j(x)+\frac{p_\mu p_\nu}{\nu}g_4^j(x)-g_{\mu\nu}g_5^j(x) ,\nonumber\\ W_{\mu\nu}^{j, \bot}= i\varepsilon_{\mu\alpha\nu\beta} \frac{q_\alpha S_\beta^\bot}{\nu} \left [ g_1^j(x)+g_2^j(x) \right ] +\frac{p_\mu S_\nu^\bot+p_\nu S_\mu^\bot}{2\nu}g_3^j(x). \label{eq4} \end{eqnarray} with $j \equiv ab$. Using the Sudakov representation for \begin{equation} k = xp + \frac{k^2 + k_{\perp}^2 - x^2 M^2}{2 x \nu} (q + xp) + k_{\perp} \label{eq5} \end{equation} the structure functions $g_k^j(x)$ are obtained from (\ref{eq1},\ref{eq2}) in the Bjorken limit $Q^2, \nu \rightarrow \infty$, $x = const.$, by \begin{eqnarray} g_1^j(x)&=&\frac{\pi xM^2}{8} \sum_q \alpha_q^j \int_x^1dy(2x-y) \widehat{h}_{q}(y) ,\nonumber\\ g_2^j(x)&=& \frac{\pi xM^2}{8} \sum_q \alpha_q^j \int_x^1dy(2y-3x) \widehat{h}_{q}(y) ,\nonumber\\ g_3^j(x)&=& \frac{\pi x^2M^2}{2} \sum_q \beta_q^j \int_x^1dy(y-x) \widehat{h}_{q}(y) ,\nonumber\\ g_4^j(x) &=&\frac{\pi x^2M^2}{4} \sum_q \beta_q^j \int_x^1dy(2x-y) \widehat{h}_{q}(y) ,\nonumber\\ g_5^j(x) &=&\frac{\pi xM^2}{8} \sum_q \beta_q^j \int_x^1dy(2x-y) \widehat{h}_{q}(y). \label{eq6} \end{eqnarray} for neutral current interactions, where $ y=x+k^2_\bot/(xM^2)$ and $\widehat{h}_{q}(y)=\int dk^2 \hat{f}_{q}(y,k^2)$. The corresponding relations for charged current scattering are given in~\cite{BK}. The expressions for $g_1^{em}$ and $g_2^{em}$ have been obtained in \cite{a17,a18} already. Again the structure functions given in eqs.~(\ref{eq6}) may be expressed in terms of two independent structure functions in lowest order QCD: \begin{eqnarray} g_2^i(x)&=& -g_1^i(x) + \int_x^1\frac{dy}{y}g_1^i(y), \label{eq7} \\ g_4^j(x)&=& 2x g_5^j(x), \label{eq8} \\ g_3^j(x)&=&4x\int_x^1\frac{dy}{y}g_5^j(y). \label{eq9} \end{eqnarray} These relations agree with those found using the operator product expansion in section~3, eqs.~(\ref{qq7}--\ref{qq9}). 
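These integral relations are straightforward to verify numerically. The following sketch (Python/SciPy) builds the five structure functions of eqs.~(\ref{eq6}) from a toy profile $\widehat{h}_q(y)$ (a made-up shape; a single flavour with unit couplings $\alpha_q^j=\beta_q^j=1$ and $M=1$) and checks eqs.~(\ref{eq7})--(\ref{eq9}) and the sum rule (\ref{qq11}) by quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hhat = lambda y: (1.0 - y) ** 5      # toy profile, not a fitted parametrization
PRE = np.pi / 8.0                    # pi M^2 / 8 with M = 1

def g1(x): return PRE * x * quad(lambda y: (2*x - y) * hhat(y), x, 1)[0]
def g2(x): return PRE * x * quad(lambda y: (2*y - 3*x) * hhat(y), x, 1)[0]
def g3(x): return 4 * PRE * x**2 * quad(lambda y: (y - x) * hhat(y), x, 1)[0]
def g4(x): return 2 * PRE * x**2 * quad(lambda y: (2*x - y) * hhat(y), x, 1)[0]
def g5(x): return PRE * x * quad(lambda y: (2*x - y) * hhat(y), x, 1)[0]

x = 0.3
assert np.isclose(g2(x), -g1(x) + quad(lambda y: g1(y) / y, x, 1)[0])  # eq. (eq7)
assert np.isclose(g4(x), 2 * x * g5(x))                                # eq. (eq8)
assert np.isclose(g3(x), 4 * x * quad(lambda y: g5(y) / y, x, 1)[0])   # eq. (eq9)
# sum rule (qq11): equal first moments of g3 and g4
assert np.isclose(quad(g3, 0, 1)[0], quad(g4, 0, 1)[0])
\end{verbatim}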
As examples one may derive from (\ref{eqYMP}--\ref{eqg1M},\ref{eq6}) the relations \begin{equation} \left [ g_3^{\nu n}(x,Q^2) - g_3^{\nu p}(x,Q^2) \right ] = 12x \left [ \left ( g_1(x,Q^2) + g_2(x,Q^2) \right )^{ep} - \left ( g_1(x,Q^2) + g_2(x,Q^2) \right )^{en} \right ]^{|\gamma|^2} \label{DIC1} \end{equation} \begin{eqnarray} 6x \left [ g_2^{en}(x,Q^2) - g_2^{ep}(x,Q^2) \right ]^{|\gamma|^2} = \left [ \left ( g_4(x,Q^2) - \frac{g_3(x,Q^2)}{2} \right )^{\nu n} - \left ( g_4(x,Q^2) - \frac{g_3(x,Q^2)}{2} \right )^{\nu p} \right ]. \nonumber\\ \label{DIC2} \end{eqnarray} Eqs.~(\ref{DIC1}) and (\ref{DIC1}, \ref{DIC2}) were given first in~\cite{a1B} and \cite{DIC}, respectively. In a similar way various other relations follow for other current combinations. Let us now derive the light quark mass corrections to the structure functions $\left. g_j(x)\right|_{j=1}^5$. We follow the treatment of ref.~\cite{a18}, where it was shown that, as in the massless case, the polarized structure functions can be expressed in terms of functions $\tilde{h}_q(y, \rho)$, with $\rho = m^2/M^2$, and the corresponding perturbative coefficient functions. Due to the flavour dependence of the couplings $g_{V_i}$ and $g_{A_i}$, one has in general to introduce the functions $\tilde{h}_{q}(y, \rho)$ even if the ratio $m/M$ is treated as a single parameter. In most of the cases given below a {\it single} function, however, suffices for an {\it effective} parametrization. The mass dependent structure functions are given by: \begin{eqnarray} g_1^j(x,\rho) &=& \frac{\pi M^2}{8} \sum_q \alpha_q^j \int_{x+\frac{\rho}{x}}^{1+\rho}dy \left [ x(2x-y) + 2 \rho \right ] \tilde{h}_{q}(y,\rho), \label{eqAA} \\ g_2^j(x,\rho) &=& \frac{\pi M^2}{8} \sum_q \alpha_q^j \int_{x+\frac{\rho}{x}}^{1+\rho}dy \left [x (2y-3x) - \rho \right ] \tilde{h}_{q}(y,\rho) - \frac{\pi m^2}{4} \sum_q \gamma_q^j \int_{x+\frac{\rho}{x}}^{1+\rho}dy \tilde{h}_{q}(y,\rho), \label{eqAC} \label{eqVIO} \\ g_3^j(x,\rho) &=& \frac{\pi M^2 x^2}{2} \sum_q \beta_q^j \int_{x+\frac{\rho}{x}}^{1+\rho}dy (y-x) \tilde{h}_{q}(y, \rho), \\ g_4^j(x,\rho) &=& 2x g_5^j(x,\rho), \label{eqAD} \\ g_5^j(x,\rho) &=& \frac{\pi M^2}{8} \sum_q \beta_q^j \int_{x+\frac{\rho}{x}}^{1+\rho}dy \left [x (2x-y) + 2 \rho \right ] \tilde{h}_q(y,\rho), \label{eqAB} \end{eqnarray} with $\gamma_q^j = g_{A_a}^q g_{A_b}^q, j~\equiv~ab$, and \begin{equation} \tilde{h}_{q}(y, \rho) = \int dk^2 \hat{f}_{q}(y,k^2, \rho). \label{eq11} \end{equation} Corresponding relations are obtained for charged current scattering. The last definition applies a slightly different convention from that used in ref.~\cite{a18}. The functions $\tilde{h}_{q}(y, \rho)$ can be determined from the different measured structure functions in phenomenological analyses. In the case of the non--photonic structure functions the direct determination of $\tilde{h}_q$ is complicated due to the fact that these structure functions are difficult to unfold from the measured scattering cross sections. However, one may still use the relations~(\ref{eqAA}--\ref{eqAB}) in global analyses of polarization asymmetries at large $Q^2$ as corrections in the determination of $g_1(x,Q^2)$ in this kinematical range. The relations (\ref{eqAA}) and (\ref{eqAC}) agree with those derived in ref.~\cite{a18} recently for photon exchange, where $\gamma_q^j = 0$. Note that for the contributions due to $Z$ or $W^{\pm}$ exchange a new contribution to $g_2 \propto m^2/M^2$ emerges. In a different context similar terms were obtained in \cite{a1} as the only contributions to $g_2^B$, $B=Z, W^{\pm}$. The Burkhardt--Cottingham sum rule \begin{equation} \int_0^1 dx g_2^k(x) = 0 \end{equation} is valid for $\rho \neq 0$ iff $\gamma_{q}^j = g_{A_a}^{q_i} g_{A_b}^{q_i} = 0$, i.e., for pure $Z$ and $W^{\pm}$ exchange it is violated due to the second term in (\ref{eqVIO}). For charged current interactions, on the other hand, \begin{equation} \int_0^1 dx~x \left [ g_1^k(x)+ 2 g_2^k(x) \right ] = 0 \end{equation} is valid for all values of $\rho$. The sum rule eq.~(\ref{qq11}) \begin{equation} \int_0^1 dx g_3^k(x, \rho) = \int_0^1 dx g_4^k(x, \rho) \end{equation} holds also for massive quarks. Finally, the Dicus relation between $g_4(x)$ and $g_5(x)$~(\ref{eqAD}) also receives no quark mass corrections. 
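These statements are simple to check numerically. The sketch below (Python/SciPy) uses a made-up profile $\tilde{h}(y,\rho)$, a single flavour with $\alpha_q^j=1$, $M=1$ and $\rho=10^{-4}$, and evaluates the first moment of $g_2^j(x,\rho)$ from (\ref{eqVIO}): it vanishes (within quadrature accuracy) for $\gamma_q^j=0$ and is clearly nonzero for $\gamma_q^j=1$, in line with the statement on the Burkhardt--Cottingham sum rule above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

RHO = 1.0e-4                              # rho = m^2/M^2, with M = 1
htil = lambda y: (1.0 + RHO - y) ** 3     # toy profile, support y <= 1 + rho

def g2(x, gamma=0.0):
    # eq. (eqVIO) for one flavour, alpha = 1, M = 1, m^2 = rho
    if x <= 0.0:
        return 0.0
    lo, hi = x + RHO / x, 1.0 + RHO
    if lo >= hi:
        return 0.0
    t1 = quad(lambda y: (x * (2*y - 3*x) - RHO) * htil(y), lo, hi)[0]
    t2 = quad(htil, lo, hi)[0]
    return np.pi / 8.0 * t1 - np.pi * RHO / 4.0 * gamma * t2

bc_photon = quad(lambda x: g2(x, gamma=0.0), 0.0, 1.0)[0]  # ~ 0: BC holds
bc_weak   = quad(lambda x: g2(x, gamma=1.0), 0.0, 1.0)[0]  # nonzero: BC broken
print(bc_photon, bc_weak)
\end{verbatim}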
An illustration of the relative size of the mass terms for the different structure functions is given in figure~2 for $m/M = 0.005$~and~0.010 in terms of relative correction factors. These mass values mark the typical range for the light current quark masses, $m_u = 5.1 \pm 0.9 \MeV$ and $m_d = 9.3 \pm 1.4 \MeV$~\cite{BERN}. To obtain a first estimate we use the same parametrization for all the functions $\tilde{h}(y,\rho)$~\footnote{We would like to thank R.G. Roberts for providing us with the fit parameters of the function $\tilde{h}(y,\rho)$ determined in ref.~\cite{a18}.}. Due to the proportionality of $g_1^j(x, \rho)$, $g_4^j(x, \rho)$, and $g_5^j(x, \rho)$, only the ratios $\left . g_k^j(x, \rho)/g_k^j(x)\right|_{k=1}^3$ are different. The present data constrain these ratios to a range around unity in the region $x \stackrel{>}{\sim} 0.02$~\footnote{ The spike in $g_2^j(x, \rho)/g_2^j(x)$ is due to the zero in $g_2(x)$.}. The ratio for $g_3^j$ is somewhat closer to unity than those for $g_1$ and $g_2$ at lower $x$ values. At smaller values of $x$ and larger values of $\rho$ the correction factors differ for the various structure functions. \section{Conclusions} \label{sect6} We have derived the twist--2 contributions to the polarized structure functions in lowest order QCD including weak currents. The results obtained using the operator product expansion and the covariant parton model agree. In lowest order two out of five structure functions are independent for the respective current combinations and the remaining structure functions are related by three linear operators. A new relation between the structure functions $g_3^j$ and $g_5^j$ was derived. As a consequence, the first moments of $g_3^j$ and $g_4^j$ are predicted to be equal. The light quark mass corrections to the structure functions $\left. g^j_k \right|_{k=1}^5$ were calculated in the covariant parton model. The first moments of the structure functions $g_3$ and $g_4$ are equal also in the presence of the quark mass corrections. 
The Dicus relation remains valid. The Burkhardt--Cottingham sum rule is broken by a term $\propto g_{A_a} g_{A_b} m^2/M^2$, i.e. for pure $Z$~exchange and in charged current interactions. \vspace{3mm} {\bf Acknowledgement} N.K.~would like to thank DESY for the hospitality extended to him.
\section{Introduction} Semiconductor mesoscopic systems have been extensively studied since the establishment of microfabrication techniques~\cite{DattaETMS,ImryIHP,IhnEQTMSS} in the 1980s. These systems allow us to artificially realize and control various quantum phenomena of electron charge and spin. In this review, we summarize the basics and recent progress of shot-noise measurements in mesoscopic physics. While conductance, the most fundamental transport property, provides information on the time-averaged electron transport, shot noise offers more in-depth insights into non-equilibrium electron dynamics. Shot noise, one of the most important topics in mesoscopic physics, has been theoretically studied since the early 1990s~\cite{deJong1997,BlanterPR2000,MartinBook}. Initially, however, shot-noise measurements were not commonly performed in experiments because of technical difficulties. What convinced researchers of the importance of shot-noise measurements was the detection of fractionally charged quasiparticles in fractional quantum Hall (QH) systems~\cite{SaminadayarPRL1997,de-PicciottoNature1997}; the discovery of the fractional QH effect itself was awarded the Nobel Prize in Physics in 1998. Subsequently, several excellent reviews written by theorists around 2000 extensively promoted shot-noise research~\cite{deJong1997,BlanterPR2000,MartinBook}. This review will introduce various experiments, including those performed by us~\cite{FerrierNatPhys2016,HashisakaPRL2015}, reported since these early reviews. Although there already exists an instructive review recently written by experimentalists~\cite{PiatrushaJETP2018}, it is worth reviewing the shot-noise measurements over a broad range of mesoscopic systems, from experiments understood within the single-particle picture to those targeting quantum many-body physics. The central idea of this review is as follows. Current and noise, corresponding to the average and variance of the number of electrons passing through a conductor per unit time, respectively, provide different information on a transport phenomenon. For example, because the shot-noise intensity is given as the product of the current and the effective charge of a charge carrier, the effective charge can be evaluated by measuring both the current and shot noise. Indeed, combining these measurements has revealed a two-particle scattering process in the Kondo effect, fractional charges in fractional QH systems, and Cooper pairs in superconducting junctions. This review focuses on such a combination of conductance and shot-noise measurements. This review is organized as follows. In Section~\ref{sec:current_noise}, we discuss the basics of the current-noise theory within the Landauer-B\"{u}ttiker formalism. Section~\ref{sec:techniques} presents the experimental techniques for current-noise measurements. Section~\ref{sec:shotexample} introduces several experiments in which the transport phenomena can be understood within a single-particle picture, including those on a quantum point contact (QPC), two-channel or multichannel systems, and fermion quantum optics. Section~\ref{sec:quantumliquid} describes several shot-noise measurements performed on quantum liquids, or equivalently, quantum many-body states, namely the Kondo states, the fractional QH states, and superconductors. Section~\ref{sec:fluctuationtheorem} introduces a noise study based on the fluctuation theorem, a different approach from that using the Landauer-B\"{u}ttiker picture.
Section~\ref{sec:closing} summarizes this review with reference to future experimental issues. \section{Basics of current-noise theory} \label{sec:current_noise} \subsection{Classical current-noise theory} \label{sec:classicalnoise} Suppose a bias voltage $V$ is applied to a conductor, for example, a resistor, as shown in Fig.~\ref{noisemeasurement}. We monitor the time $t$ dependence of current $I(t)$ with a high-precision ammeter. Besides a time-averaged value $\langle I(t) \rangle$ of current, there always exists a fluctuation (noise) $\Delta I(t) \equiv I(t) -\langle I(t) \rangle$ around it. \begin{figure}[!b] \center \includegraphics[width=8.5cm]{ShotReviewFig01.eps} \caption{Current- and noise-measurement setup. A constant bias voltage $V$ is applied to a conductor and current $I$ is measured with a time-resolved high-precision ammeter. The current noise $\langle \Delta I^2 \rangle$ is evaluated by using a spectrum analyzer.} \label{noisemeasurement} \end{figure} Let us consider the Fourier transform of $\Delta I(t)$ over the measurement time interval $-\tau/2\leq t \leq \tau/2$: $\Delta I(\omega) \equiv \int_{-\tau/2}^{\tau/2}dt \Delta I(t)e^{i \omega t} $, where $\omega \equiv 2\pi f$ is the angular frequency for frequency $f$. The time-averaged variance of the current noise $\Delta I(t)$, given by $\langle \Delta I(t)^2\rangle \equiv \lim_{\tau\rightarrow \infty} \frac{1}{\tau} \int_{-\tau/2}^{\tau/2} \Delta I(t)^2 dt $, satisfies the following relation, known as Parseval's theorem. \begin{equation} \langle \Delta I(t)^2 \rangle = \frac{1}{2\pi}\lim_{\tau\rightarrow \infty}\frac{1}{\tau}\int_{-\infty}^{\infty} \vert \Delta I(\omega) \vert^2 d\omega. \end{equation} The power spectral density (PSD) $S(\omega)$ of the current is defined as \begin{equation} \begin{split} &S(\omega) \equiv \lim_{\tau\rightarrow \infty} \frac{2}{\tau}\vert \Delta I(\omega) \vert^2 \\ &=\lim_{\tau\rightarrow \infty} \frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt \int_{-\tau/2}^{\tau/2}dt' \Delta I(t)\Delta I(t') e^{i \omega (t-t')}. \label{Definition_PSD_classic} \end{split} \end{equation} Since $\Delta I(t)$ is real, $\Delta I(-\omega) =[\Delta I(\omega)]^*$, and hence $S(\omega)$ is real and satisfies $S(\omega)=S(-\omega)$. Therefore, we can restrict the angular frequency $\omega$ to the non-negative range by redefining $S(\omega)$ as twice its two-sided value, which accounts for the factor of 2 in Eq.~(\ref{Definition_PSD_classic}). Henceforth, we use $\omega \in [0, \infty)$ in principle~\cite{NyquistPR1928,JohnsonPR1928,BeenakkerPT2003}. Equation (\ref{Definition_PSD_classic}) satisfies the following relation: \begin{equation} \langle \Delta I(t)^2 \rangle =\frac{1}{2\pi}\int_{0}^{\infty} S(\omega) d\omega. \label{eq_PSD2} \end{equation} This equation quantifies the current-noise intensity with the PSD measured with a spectrum analyzer (see Fig.~\ref{noisemeasurement}). In some textbooks and technical literature, the PSD is often defined, based on Eq.~(\ref{eq_PSD2}), as~\cite{ZielNoise1986} \begin{equation} S(f) =\frac{2\langle \Delta I(t)^2\rangle_f}{\Delta f}, \label{noisedef2} \end{equation} where $\langle \Delta I(t)^2\rangle_f$ is the current noise measured around the frequency $f$ with the bandwidth of $\Delta f$.
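As a minimal numerical sketch of Eqs.~(\ref{Definition_PSD_classic}) and (\ref{noisedef2}) (not part of the original text; the sampling rate, record length, and white-noise level below are assumed values), the one-sided PSD of a synthetic white current noise can be estimated by averaging periodograms:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r, T = 1.0e6, 1.0            # sampling rate (Hz) and record time (s)
N = int(r * T)
S_true = 1.0e-26             # one-sided white PSD (A^2/Hz), assumed

# Eq. (eq_PSD2): a white spectrum of level S_true carries a variance
# <dI^2> = S_true * r / 2 within the accessible band [0, r/2]
dI = rng.normal(0.0, np.sqrt(S_true * r / 2.0), size=N)

# Average periodograms over many segments to reduce the scatter
n_seg = 1000
n_fft = N // n_seg
segs = dI[: n_seg * n_fft].reshape(n_seg, n_fft)
spec = np.fft.rfft(segs, axis=1)
# One-sided normalization: factor 2 as in Eq. (noisedef2)
S_est = 2.0 * np.mean(np.abs(spec) ** 2, axis=0) / (r * n_fft)
print(f"estimated level {np.median(S_est):.2e} A^2/Hz"
      f" (true {S_true:.1e})")
\end{verbatim}
Averaging over segments trades frequency resolution for a smaller statistical scatter of the estimated PSD, a point we return to in Sect.~\ref{sec:basics_of}.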
The current is nothing but the average of charge $Q$, or the number of electrons $N$, passing through a device per unit time: \begin{equation} \langle I \rangle=\frac{\langle Q \rangle}{\tau}=\frac{e\langle N \rangle}{\tau}, \quad [\textrm{A}]=\left[\frac{\textrm{C}}{\textrm{s}}\right], \label{Naverage} \end{equation} where $e$ is the elementary charge and $\tau$ is the measurement time. Meanwhile, the current noise corresponds to the time-averaged variance $\Delta Q^2$ or $\Delta N^2$ as \begin{equation} S(f)=\frac{2\langle \Delta Q^2 \rangle}{\tau} =\frac{2e^2 \langle \Delta N^2 \rangle}{\tau},\quad \left[\frac{\textrm{A}^2}{\textrm{Hz}}\right] =\left[\frac{\textrm{C}^2}{\textrm{s}}\right]. \label{Nvariance} \end{equation} Comparing the experimental results of these two quantities, we can extract information that is not accessible by standard dc-current measurements alone, such as the charge of carriers. \subsubsection{Thermal and shot noise} When the system shown in Fig.~\ref{noisemeasurement} is in equilibrium at $V=0$, the average current is zero ($\langle I \rangle=0$). However, even in this case, there is a finite current noise, referred to as thermal noise or Johnson-Nyquist noise, at finite temperature~\cite{JohnsonPR1928,NyquistPR1928}. The thermal noise $S_\textrm{th}$ is described as \begin{equation} S_\textrm{th} = 4 k_{\rm{B}} T_{\rm{e}} G, \label{thermalnoise} \end{equation} where $G$, $T_{\rm{e}}$, and $k_{\rm{B}}$ are the conductance, electron temperature, and Boltzmann constant, respectively. Nyquist derived Eq.~(\ref{thermalnoise}) from the second law of thermodynamics to explain the results of Johnson's current-noise measurements~\cite{JohnsonPR1928} and indicated its link with black-body radiation~\cite{NyquistPR1928}. In Eq.~(\ref{thermalnoise}), both the conductance $G$ and the electron temperature $T_{\rm{e}}$ appear. Since the conductance characterizes the linear response of the system to the external bias ($I=GV$) and Joule heat ($GV^2$), we see that Eq.~(\ref{thermalnoise}) reflects the fluctuation-dissipation relation. Later, in Sect.~\ref{subsubsec:examples}, we derive the same result [Eq.~(\ref{Eq:JNLandauer})] based on the scattering theory, a different approach from Nyquist's. It is noteworthy that Johnson discussed the evaluation of $k_{\rm{B}}$ from the measured thermal noise~\cite{JohnsonPR1928}. This discussion was a pioneering attempt at the precise evaluation of the Boltzmann constant in metrology~\cite{QuMetrologia2017}. Next, let us consider shot-noise generation under a non-equilibrium condition. Here, we assume that $I$ is carried by electron tunneling through a potential barrier (scatterer), as sketched in Fig.~\ref{PartitionProcess}. When the transmission probability $\cal{T}$ is small, the shot-noise intensity is described as \begin{equation} S_\textrm{shot}=2e\vert \langle I \rangle\vert \label{Schottky}. \end{equation} The numerical factor 2 comes from the definition of PSD at positive frequencies, as explained earlier [see Eq.~(\ref{Definition_PSD_classic})]. Schottky derived this expression to investigate the flow of electrons in a vacuum tube~\cite{SchottkyAP1918}.
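To get a feeling for the magnitudes involved, the following sketch evaluates Eqs.~(\ref{thermalnoise}) and (\ref{Schottky}) for assumed but typical values of conductance, temperature, and current:
\begin{verbatim}
e, kB, h = 1.602e-19, 1.381e-23, 6.626e-34   # SI constants

G  = 2.0 * e**2 / h   # conductance quantum (spin degenerate), ~77.5 uS
Te = 0.050            # electron temperature (K), assumed
I  = 1.0e-9           # dc current (A), assumed

S_th   = 4.0 * kB * Te * G   # thermal (Johnson-Nyquist) noise
S_shot = 2.0 * e * I         # Poissonian shot noise
print(f"S_th = {S_th:.2e} A^2/Hz, S_shot = {S_shot:.2e} A^2/Hz")
\end{verbatim}
Both values come out of the order of $10^{-28}~{\rm A^2/Hz}$, which is why the dedicated amplification schemes of Sect.~\ref{sec:techniques} are required.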
\begin{figure}[!t] \begin{center} \includegraphics[width=7cm]{ShotReviewFig02.eps} \end{center} \caption{Scattering of a particle at a potential barrier. The transmission and reflection probabilities are ${\cal T}$ and $1-{\cal T}$, respectively.} \label{PartitionProcess} \end{figure} To understand the meaning of Eq.~(\ref{Schottky}), let us consider that $N$ particles emitted from a source impinge on the barrier, and each particle is either independently transmitted or reflected with a probability of ${\cal T}$ or $1-{\cal T}$, respectively. The detector measures the transmitted particles. The probability $P_{N}(N_1)$ of detecting $N_1$ particles is given by the binomial distribution as \begin{equation} P_{N}(N_1) = \frac{N!}{N_1!(N-N_1)!}{\cal T}^{N_1}(1-{\cal T})^{N-N_1}. \end{equation} The average $\langle N_1 \rangle$ and variance $\langle \Delta N_1^2 \rangle$ are given by \begin{equation} \begin{split} \langle N_1 \rangle &= N{\cal T},\\ \langle \Delta N_1^2 \rangle &\equiv \langle (N_1- \langle N_1 \rangle)^2 \rangle \\ &= N{\cal T}(1-{\cal T}) = \langle N_1 \rangle (1-{\cal T}). \end{split} \end{equation} When the transmission probability is very small (${\cal T} \ll 1$), both $\langle N_1 \rangle$ and $\langle \Delta N_1^2 \rangle$ are equal to $N{\cal T}$, and this is nothing but the signature of the Poisson distribution. Using Eqs.~(\ref{Naverage}) and (\ref{Nvariance}), we obtain $S_\textrm{shot}/\vert \langle I \rangle\vert = 2e\langle\Delta N_1^2 \rangle/\langle N_1 \rangle=2e$. Thus, the shot noise reflects the discrete nature of charge carriers. Shot noise is sometimes referred to as partition noise since it is generated when a current is partitioned into transmitted and reflected parts. It is useful to introduce the Fano factor $F$~\cite{FanoPR1947}, a dimensionless parameter that quantifies the current noise: \begin{equation} F \equiv \frac{\langle \Delta N_1^2 \rangle}{\langle N_1 \rangle} = \frac{S_\textrm{shot}}{2e\vert \langle I \rangle\vert}. \label{Fano_classic} \end{equation} By definition, $F = 1$ for the Poisson distribution. In this case, scattering events are independent of each other, namely there is no correlation between them. By comparing Eqs.~(\ref{thermalnoise}) and (\ref{Schottky}), one can notice that noise properties in equilibrium and non-equilibrium situations are qualitatively different. In particular, the elementary charge $e$ appears only in the shot-noise formula [Eq.~(\ref{Schottky})], indicating that the shot noise serves as a unique probe for charge transport. As an interesting historical note, the non-equilibrium shot noise~\cite{SchottkyAP1918} was found ten years earlier than the equilibrium thermal noise~\cite{JohnsonPR1928,NyquistPR1928}, which might reflect how inherent non-equilibrium phenomena are in nature. \subsection{Noise in quantum transport} \label{subsec:noise_in_quantum_transport} Conductance through a mesoscopic system can be understood within the scattering approach referred to as the Landauer-B\"uttiker formalism~\cite{DattaETMS,ImryIMP,IhnEQTMSS}. In this subsection, we introduce the current-noise theory using the same framework~\cite{BlanterPR2000}. \subsubsection{Scattering approach} Here, from the pedagogical perspective, we consider a simple two-terminal device coupled to a single conduction channel on both the left and right sides of the device. The theoretical description up to Sect.~\ref{subsubsec:examples} follows Ref.~[\onlinecite{KatoBussei2014}]. Note that the scattering approach can be straightforwardly generalized to multiterminal and multichannel cases~\cite{BlanterPR2000}. We present the general result in Sect.~\ref{subsub:generalshot}.
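Before setting up the scattering model, the classical partition statistics discussed above can be checked with a short Monte Carlo sketch; the particle number $N$, the number of trials, and the transmission probabilities below are assumed values for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, trials = 10_000, 100_000       # particles per trial and trials

for T in (0.01, 0.5, 0.9):        # transmission probabilities, assumed
    N1 = rng.binomial(N, T, size=trials)   # transmitted counts
    F = N1.var() / N1.mean()               # Fano factor, Eq. (Fano_classic)
    print(f"T = {T:4.2f}: F = {F:.3f} (binomial value 1 - T = {1-T:.2f})")
\end{verbatim}
The simulated Fano factor reproduces $F = 1-{\cal T}$ and approaches the Poisson value $F=1$ for ${\cal T}\ll1$.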
Figure~\ref{fig_setup}(a) shows a schematic of the setup. The spinless Hamiltonian of the left (L) and right (R) leads, which are regarded as one-dimensional free electron systems, is expressed using the creation ($c^\dagger_k$) and annihilation ($c_k$) operators as \begin{equation} {\cal H} = \sum_k (\varepsilon_k-\mu) c^\dagger_k c_k, \label{eq:hamiltonian_freeelectron} \end{equation} where $\varepsilon_k=\hbar^2k^2/2m$ ($m$, electron mass; $k$, wavenumber of electron; $\hbar$, reduced Planck constant) is the kinetic energy of an electron and $\mu$ is the chemical potential. Here, we perform a linear approximation to the parabolic band dispersion such that \begin{equation} \varepsilon_k -\mu= \pm \hbar v_\textrm{F}(k \mp k_\textrm{F}), \label{Eq_linearapprox} \end{equation} as shown in Fig.~\ref{fig_setup}(b), considering only low-energy excitations near the Fermi surface. Here, $v_\textrm{F}=\hbar k_\textrm{F}/m$ is the Fermi velocity, and $k_\textrm{F}$ is the Fermi wavenumber. \begin{figure}[!b] \begin{center} \includegraphics[width=8.5cm]{ShotReviewFig03.eps} \end{center} \caption{(Color online) (a) Two-terminal scattering model. The conduction channels of injected ($a_{\textrm{L}, k}$, $a_{\textrm{R}, k}$) and scattered ($b_{\textrm{L}, k}$, $b_{\textrm{R}, k}$) electrons are represented by solid and dashed arrows, respectively. (b) Parabolic dispersion relation in the leads. Dotted lines indicate the linear approximation near the Fermi surface $k \simeq \pm k_\textrm{F}$ [see Eq.~(\ref{Eq_linearapprox})].} \label{fig_setup} \end{figure} There exist right- and left-moving electrons in lead L. The annihilation (creation) operators of the former and the latter are described as $a_{\textrm{L}, k}$ ($a^\dagger_{\textrm{L}, k}$) and $b_{\textrm{L}, k}$ ($b^\dagger_{\textrm{L}, k}$), respectively. Within the linear approximation, the Hamiltonian can be written by modifying Eq.~(\ref{eq:hamiltonian_freeelectron}) as \begin{equation} \begin{split} {\cal H} &= \sum_k \hbar v_\textrm{F}(k-k_\textrm{F}) a_{\textrm{L}, k}^\dagger a_{\textrm{L}, k} \\ &+\sum_k (-\hbar v_\textrm{F})(k+k_\textrm{F}) b_{\textrm{L}, k}^\dagger b_{\textrm{L}, k}. \end{split} \end{equation} We define the current operator at position $x$ in lead L as \begin{equation} \hat{I}(x)=\frac{\hbar e}{2im}\left( \hat{\psi}^\dagger(x) \frac{d\hat{\psi}(x)}{dx} - \frac{d\hat{\psi}^\dagger(x)}{dx}\hat{\psi}(x) \right), \end{equation} where $\hat{\psi}(x)= \sum_k (1/\sqrt{L}) \exp(ikx) c_k$ is the field operator, and $L$ is the length of the lead. $\hat{I}(x)$ can be expressed using $c^\dagger_k$ and $c_k$ as \begin{equation} \hat{I}(x)=\frac{\hbar e}{L}\sum_{k, k'} \frac{k+k'}{2m}c^\dagger_k c_{k'} e^{i(k'-k)x}. \end{equation} By considering the contribution only around $k = \pm k_\textrm{F}$ and using $a_{\textrm{L}, k}$ and $b_{\textrm{L}, k}$ instead of $c_k$, we obtain the following formula: \begin{equation} \hat{I}(x)=\frac{ev_\textrm{F}}{L}\sum_{k, k'} (a_{\textrm{L}, k}^\dagger a_{\textrm{L}, k'} - b_{\textrm{L}, k}^\dagger b_{\textrm{L}, k'}) e^{i(k'-k)x}. \end{equation} With the assumption that the sample is connected to the lead at $x=0$, the current flowing from lead L into the sample becomes \begin{equation} \hat{I}_\textrm{L}=\hat{I}(x=0)=\frac{ev_\textrm{F}}{L}\sum_{k, k'} (a_{\textrm{L}, k}^\dagger a_{\textrm{L}, k'} - b_{\textrm{L}, k}^\dagger b_{\textrm{L}, k'}). 
\label{Eq_current_operator} \end{equation} The scattering process between the incoming ($a_{\alpha, k}$) and outgoing ($b_{\alpha, k}$) electrons in lead $\alpha$ ($\alpha=\textrm{L}$ or $\textrm{R}$) is described as \begin{equation} \begin{pmatrix} b_{\textrm{L}, k} \\ b_{\textrm{R}, k} \\ \end{pmatrix} =S \begin{pmatrix} a_{\textrm{L}, k} \\ a_{\textrm{R}, k} \\ \end{pmatrix}. \label{Eq_Scattering_Matrix} \end{equation} The components of the $S$ matrix are given by \begin{equation} S= \begin{pmatrix} s^{\textrm{L}\textrm{L}}(k) & s^{\textrm{L}\textrm{R}}(k) \\ s^{\textrm{R}\textrm{L}}(k) & s^{\textrm{R}\textrm{R}}(k) \\ \end{pmatrix} = \begin{pmatrix} r & t' \\ t & r' \\ \end{pmatrix}. \end{equation} Note that, to preserve the fermionic anticommutation relations $\{a_{\alpha, k}, a_{\alpha', k'}^\dagger\}=\{b_{\alpha, k}, b_{\alpha', k'}^\dagger\}=\delta_{\alpha, \alpha'}\delta_{k, k'}$, the $S$ matrix must be unitary, namely $|t|^2=|t'|^2=1-|r|^2=1-|r'|^2$. Using the $S$ matrix, we express the current operator [see Eq.~(\ref{Eq_current_operator})] as \begin{equation} \hat{I}_\textrm{L}=\frac{ev_\textrm{F}}{L}\sum_{\alpha=\textrm{L}, \textrm{R}} \sum_{\beta=\textrm{L}, \textrm{R}} \sum_{k, k'} a_{\alpha, k}^\dagger A_{\textrm{L}}^{\alpha \beta}(k, k') a_{\beta, k'}, \label{Eq_current} \end{equation} where \begin{equation} A_{\textrm{L}}^{\alpha \beta}(k, k') =\delta_{\textrm{L},\alpha}\delta_{\textrm{L},\beta}- \left[ s^{\textrm{L}\alpha}(k) \right]^* s^{\textrm{L}\beta}(k'). \end{equation} \subsubsection{Landauer formula} \label{sec:LandauerFormula} Here, we derive the conductance formula. Assuming that incident electrons are in thermal equilibrium and taking their statistical average $\langle \cdots \rangle$, we obtain \begin{equation} \langle a_{\alpha, k}^\dagger a_{\beta, k'} \rangle = \delta_{\alpha, \beta}\delta_{k, k'}f_\alpha(k). \label{Eq_StAverage} \end{equation} $f_\alpha(k)$ is the Fermi-Dirac distribution function in lead $\alpha$: \begin{equation} f_\alpha(k)=\frac{1}{\exp \left[(\varepsilon_k-\mu_\alpha)/k_\textrm{B} T_\alpha \right]+1}, \end{equation} where $T_\alpha$ and $\mu_\alpha$ are the temperature and chemical potential of lead $\alpha$, respectively. The statistical average of Eq.~(\ref{Eq_current}) is calculated as \begin{equation} \begin{split} \langle \hat{I}_\textrm{L} \rangle &=\frac{ev_\textrm{F}}{L}\sum_k \sum_\alpha A_\textrm{L}^{\alpha\alpha}(k, k)f_\alpha(k) \\ &=\frac{e}{2\pi\hbar}\int_{-\infty}^{\infty} d\varepsilon \sum_\alpha A_\textrm{L}^{\alpha\alpha}(\varepsilon , \varepsilon )f_\alpha(\varepsilon). \end{split} \end{equation} Note that we replace the summation $(1/L)\sum_k \cdots$ with the integral $\int dk/(2\pi)\cdots$, assuming sufficiently large $L$, and using Eq.~(\ref{Eq_linearapprox}). The relations $A_\textrm{L}^{\textrm{L}\textrm{L}}=1-(s^{\textrm{L}\textrm{L}})^* s^{\textrm{L}\textrm{L}}=1-|r|^2=|t|^2\equiv {\cal T}$ and $A_\textrm{L}^{\textrm{R}\textrm{R}}=-(s^{\textrm{L}\textrm{R}})^* s^{\textrm{L}\textrm{R}}=-|t'|^2=-|t|^2=- {\cal T}$, where ${\cal T}$ is the transmission probability, lead to \begin{equation} \langle \hat{I}_\textrm{L} \rangle =\frac{e}{2\pi\hbar}\int_{-\infty}^{\infty} d\varepsilon {\cal T}(\varepsilon)[f_\textrm{L}(\varepsilon)-f_\textrm{R}(\varepsilon)]. \label{Eq_current_Landauer} \end{equation} For simplicity, let us assume that ${\cal T}$ is energy independent and the system is at absolute zero temperature.
When lead L is biased by a voltage $V$, the Fermi-Dirac distribution function in each lead can be written as $f_\textrm{R}(\varepsilon)=\Theta(-\varepsilon)$ and $f_\textrm{L}(\varepsilon)=\Theta(-\varepsilon+eV)$, where $\Theta(x)$ is the Heaviside function. In this case, $f_\textrm{L}(\varepsilon)-f_\textrm{R}(\varepsilon)$ is one only when $0<\varepsilon<eV$ and is zero otherwise, resulting in $\langle \hat{I}_\textrm{L} \rangle =\frac{e^2}{2\pi\hbar}{\cal T}V$. Thus, we obtain the well-known Landauer conductance formula as \begin{equation} G=\frac{\langle \hat{I}_\textrm{L} \rangle}{V} =\frac{e^2}{h} {\cal T}, \label{LandauerConductance} \end{equation} where $h=2\pi\hbar$ is the Planck constant. If the channel is spin degenerate, the equation is modified to $G=\frac{2e^2}{h} {\cal T}$. When the system is at finite temperature, Eq.~(\ref{Eq_current_Landauer}) is described as \begin{equation} G=\frac{e^2}{h}\int d\varepsilon {\cal T}(\varepsilon)\left(-\frac{df}{d\varepsilon} \right), \label{LandauerConductanceFiniteT} \end{equation} where \begin{equation} f(\varepsilon)\equiv\frac{1}{{\exp \left[(\varepsilon-\mu)/k_\textrm{B} T_\textrm{e} \right]+1}}. \end{equation} To obtain this formula, we use $f_\textrm{L}(\varepsilon)-f_\textrm{R}(\varepsilon)=f(\varepsilon-eV)-f(\varepsilon)\simeq \left(-\frac{df}{d\varepsilon}\right)eV$. When ${\cal T}$ is energy-independent, we again obtain Eq.~(\ref{LandauerConductance}) at finite temperature. \subsubsection{Current noise} \label{sec:noisetheory} We define the time evolution of the current operator in the Heisenberg representation \begin{equation} \hat{I}_\textrm{L}(t)=\exp\left(\frac{i{\cal H}t}{\hbar}\right)\hat{I}_\textrm{L}\exp\left(-\frac{i{\cal H}t}{\hbar}\right), \end{equation} and introduce the current-noise operator \begin{equation} \Delta\hat{I}_\textrm{L}(t) =\hat{I}_\textrm{L}(t)-\langle \hat{I}_\textrm{L}(t)\rangle. \end{equation} Using this operator, we describe the second-order current-current correlation function as \begin{equation} \begin{split} C(t, t')&\equiv\langle \Delta\hat{I}_\textrm{L}(t) \Delta\hat{I}_\textrm{L}(t') \rangle \\ &= \langle \hat{I}_\textrm{L}(t) \hat{I}_\textrm{L}(t') \rangle - \langle \hat{I}_\textrm{L}(t) \rangle\langle\hat{I}_\textrm{L}(t') \rangle. \end{split} \label{correlation_func} \end{equation} Note that the current operators $\hat{I}_\textrm{L}(t)$ and $\hat{I}_\textrm{L}(t')$ do not commute. Following Eq.~(\ref{Definition_PSD_classic}), the current-noise PSD $S(\omega)$ is given by \begin{equation} S(\omega) \equiv \lim_{\tau\rightarrow \infty} \frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt \int_{-\tau/2}^{\tau/2}dt' C(t, t') e^{i \omega (t-t')}. \label{Definition_PSD_quantum} \end{equation} Unlike the classical case discussed in Sect.~\ref{sec:classicalnoise}, $S(\omega)$ in Eq.~(\ref{Definition_PSD_quantum}) is not necessarily a real number, and $S(\omega)= S^*(-\omega)$ holds instead of $S(\omega) = S(-\omega)$. The real part of $S(\omega)$ can be expressed as $\textrm{Re}[S(\omega)]=\left[S(\omega)+S(-\omega)\right]/2\equiv S_{\textrm{sym}}(\omega)$, where $S_{\textrm{sym}}(\omega)$ is referred to as symmetrized noise~\cite{BlanterPR2000}. While the imaginary part of $S(\omega)$ is sometimes important at high frequencies, particularly in the quantum-noise regime~\cite{DeblockScience2003} [see Fig.~\ref{fig3_2}(a)], in this review we focus on the noise at low frequencies, where $S(0)=S_{\textrm{sym}}(0)$ generally holds.
If the Hamiltonian ${\cal H}$ is time-independent, $C(t, t')$ depends only on the time difference $\Delta t = t-t'$, and $C(t, t')=C(\Delta t)$ equals zero at large $\Delta t$ [also see the discussion in Sect.~\ref{sec:basics_of}]. In this case, Eq.~(\ref{Definition_PSD_quantum}) is modified to \begin{equation} \begin{split} S(\omega) &= \lim_{\tau\rightarrow \infty} \frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt' \int_{-\infty}^{\infty}d(\Delta t)C(\Delta t) e^{i \omega\Delta t} \\ &= 2 \int_{-\infty}^{\infty}d(\Delta t)C(\Delta t) e^{i \omega\Delta t}. \label{Definition_PSD_quantum2} \end{split} \end{equation} Thus, the noise PSD is formulated as the Fourier transform of $C(\Delta t)$. The zero-frequency noise is expressed as \begin{equation} S\equiv S(0) = 2 \int_{-\infty}^{\infty}dt \left[\langle \hat{I}_\textrm{L}(t) \hat{I}_\textrm{L}(0) \rangle - \langle \hat{I}_\textrm{L}(t) \rangle\langle\hat{I}_\textrm{L}(0) \rangle \right]. \label{s_zero} \end{equation} By substituting $a_{\alpha, k}(t) = a_{\alpha, k}\exp(-i \varepsilon_k t/\hbar)$, Eq.~(\ref{Eq_current}) becomes \begin{equation} \begin{split} \hat{I}_\textrm{L}&=\frac{ev_\textrm{F}}{L}\sum_{\alpha=\textrm{L}, \textrm{R}} \sum_{\beta=\textrm{L}, \textrm{R}} \sum_{k, k'} a_{\alpha, k}^\dagger A_{\textrm{L}}^{\alpha \beta}(k, k') a_{\beta, k'} \\ &\times \exp \left[\frac{i(\varepsilon_k-\varepsilon_{k'})t}{\hbar}\right]. \label{Eq_current_time} \end{split} \end{equation} Therefore, Eq.~(\ref{s_zero}) can be described as \begin{equation} \begin{split} &S = 2 \int_{-\infty}^{\infty}dt \left(\frac{ev_\textrm{F}}{L} \right)^2 \sum_{k, k', k'', k'''} \sum_{\alpha, \beta, \alpha', \beta'} \\ &A_{\textrm{L}}^{\alpha \beta}(k, k') A_{\textrm{L}}^{\alpha' \beta'}(k'', k''') [\langle a_{\alpha, k}^\dagger a_{\beta, k'} a_{\alpha', k''}^\dagger a_{\beta', k'''} \rangle \\ &-\langle a_{\alpha, k}^\dagger a_{\beta, k'} \rangle \langle a_{\alpha', k''}^\dagger a_{\beta', k'''} \rangle ] \exp \left[\frac{i(\varepsilon_k-\varepsilon_{k'})t}{\hbar}\right]. \label{Eq36} \end{split} \end{equation} Wick's theorem and Eq.~(\ref{Eq_StAverage}) lead to \begin{equation} \begin{split} &\langle a_{\alpha, k}^\dagger a_{\beta, k'} a_{\alpha', k''}^\dagger a_{\beta', k'''} \rangle -\langle a_{\alpha, k}^\dagger a_{\beta, k'} \rangle \langle a_{\alpha', k''}^\dagger a_{\beta', k'''} \rangle \\ &= \langle a_{\alpha, k}^\dagger a_{\beta', k'''} \rangle \langle a_{\beta, k'} a_{\alpha', k''}^\dagger \rangle \\ &=\delta_{\alpha, \beta'}\delta_{k, k'''}\delta_{\beta, \alpha'}\delta_{k', k''}f_\alpha(k)[1-f_\beta(k')]. \end{split} \end{equation} The second term on the leftmost side of this equation is the exchange term that takes the statistical nature of particles into account. The resultantly obtained factor $f_\alpha(k)[1-f_\beta(k')]$ represents the fermionic nature of electrons. By replacing the summation of the wavenumbers $(1/L)\sum_k \cdots$ with the integral $\int dk/(2\pi)\cdots$ and changing the integration variable from $k$ to $\varepsilon$ using Eq.~(\ref{Eq_linearapprox}) in Eq.~(\ref{Eq36}), we obtain \begin{equation} \begin{split} S&=\frac{2e^2}{(2\pi\hbar)^2}\int d\varepsilon \int d\varepsilon' \int^{\infty}_{-\infty}dt \\ &\sum_{\alpha,\beta}A_\textrm{L}^{\alpha\beta}(\varepsilon, \varepsilon') A_\textrm{L}^{\beta\alpha}(\varepsilon', \varepsilon)f_\alpha(\varepsilon)[1-f_\beta(\varepsilon')] \exp \left[ \frac{i(\varepsilon-\varepsilon')t}{\hbar}\right]. 
\end{split} \end{equation} In the end, using the relation $\int^{\infty}_{-\infty}dt e^{i(\varepsilon-\varepsilon')t/\hbar}=2\pi \hbar\delta(\varepsilon-\varepsilon')$, the general formula for the current noise is written as \begin{equation} S=\frac{e^2}{\pi\hbar} \int d\varepsilon \sum_{\alpha, \beta}A_\textrm{L}^{\alpha\beta}(\varepsilon, \varepsilon) A_\textrm{L}^{\beta\alpha}(\varepsilon, \varepsilon) \\ f_\alpha(\varepsilon) \left[1-f_\beta(\varepsilon)\right]. \label{NoiseSingleChannel} \end{equation} \subsubsection{Derivations of thermal and shot noise} \label{subsubsec:examples} Both the thermal- and shot-noise formulae are derived from Eq.~(\ref{NoiseSingleChannel}). When the system is in equilibrium, or $|eV|\ll k_\textrm{B}T_\textrm{e}$, the thermal noise dominates over the shot noise. Using the three relations, $eV=0$, \begin{equation} f(\varepsilon)\left[1-f(\varepsilon)\right]=k_\textrm{B}T_\textrm{e} \left(-\frac{\partial f}{\partial \varepsilon}\right), \end{equation} and \begin{equation} \sum_{\alpha,\beta}A_\textrm{L}^{\alpha\beta}(\varepsilon, \varepsilon)A_\textrm{L}^{\beta\alpha}(\varepsilon, \varepsilon)=2{\cal T}(\varepsilon), \end{equation} Eq.~(\ref{NoiseSingleChannel}) is modified to \begin{eqnarray} S=\frac{2e^2k_\textrm{B}T_\textrm{e}}{\pi\hbar}\int d\varepsilon {\cal T}(\varepsilon)\left(-\frac{df}{d\varepsilon} \right). \end{eqnarray} By comparing it with Eq.~(\ref{LandauerConductanceFiniteT}), we obtain \begin{equation} S=4k_\textrm{B}T_\textrm{e}G. \label{Eq:JNLandauer} \end{equation} This is the thermal-noise formula introduced above [see Eq.~(\ref{thermalnoise})]. Let us consider the case of $eV \neq 0$ at zero temperature, where we have $f_\textrm{L}(\varepsilon)=\Theta(-\varepsilon+eV)$ and $f_\textrm{R}(\varepsilon)=\Theta(-\varepsilon)$. When $eV>0$, $f_\alpha(\varepsilon)[1-f_\beta(\varepsilon)]\neq 0$ holds only when $\alpha = \textrm{L}$, $\beta = \textrm{R}$, and $0<\varepsilon<eV$. Hence, zero-frequency noise $S$ is given by \begin{equation} S=\frac{e^2}{\pi\hbar} A_\textrm{L}^{\textrm{L}\textrm{R}}(\varepsilon, \varepsilon) A_\textrm{L}^{\textrm{R}\textrm{L}}(\varepsilon, \varepsilon)\times |eV|. \label{Eq44} \end{equation} Suppose that the energy dependence of ${\cal T}$ is negligibly small. Since \begin{equation} A_\textrm{L}^{\textrm{L}\textrm{R}}A_\textrm{L}^{\textrm{R}\textrm{L}}=|t|^2(1-|t|^2)={\cal T}(1-{\cal T}), \end{equation} Eq.~(\ref{Eq44}) can be described as \begin{equation} S=\frac{e^2}{\pi\hbar}{\cal T}(1-{\cal T}) |eV| = 2e\vert\langle I \rangle\vert(1-{\cal T}). \label{Eq:zeroTshot} \end{equation} At finite temperature, Eq.~(\ref{Eq:zeroTshot}) is modified to \begin{equation} \begin{split} S &= 4 k_\textrm{B}T_\textrm{e}G \\ &+ 2e\vert\langle I \rangle\vert (1-{\cal T}) \left[ \coth\left(\frac{eV}{2k_\textrm{B}T_\textrm{e}}\right) - \frac{2k_\textrm{B}T_\textrm{e}}{eV} \right]. \label{ShotTheory1} \end{split} \end{equation} In the weak transmission limit (${\cal T}\ll 1$) at zero temperature, Eq.~(\ref{Eq:zeroTshot}) becomes \begin{equation} S=\frac{2e^2}{h} {\cal T} \vert eV\vert=2e\vert\langle I \rangle\vert. \end{equation} This equation corresponds to the classical Schottky-type shot-noise formula [see Eq.~(\ref{Schottky})]. Similarly, the finite-temperature shot-noise formula in the weak transmission limit is written as \begin{equation} S = 2e \vert \langle I \rangle\vert \coth\left(\frac{eV}{2k_\textrm{B}T_\textrm{e}}\right). 
\label{ShotTunnel} \end{split} \end{equation} Equation (\ref{Eq:zeroTshot}) shows that a conductor with ${\cal T}=1$ has no current noise at zero temperature. This noiseless feature explicitly indicates that charge current fed from a reservoir does not fluctuate at zero temperature due to the fermionic nature of electrons. \subsubsection{General formula} \label{subsub:generalshot} While so far we have assumed a conductor with a single conduction channel for simplicity [see Fig.~\ref{fig_setup}], there can exist many channels in actual mesoscopic systems. Here, we present a generalized formula in multichannel cases. Assuming that the transmission probability ${\cal T}_n$ $(n=1, 2, 3,\ldots)$ is energy independent, we obtain the current $\langle I \rangle$ and the conductance $G$ at zero temperature as \begin{equation} \langle I \rangle =GV, \quad G = \frac{2e^2}{h}\sum_n {\cal T}_n. \label{Landauer} \end{equation} This is the well-known Landauer formula, in which $G$ is given as the sum of contributions from the parallel channels. The factor 2 represents the spin degeneracy at zero magnetic field. The low-frequency noise $S$ is described as \begin{equation} S=\frac{2e^2}{\pi\hbar} \sum_n{\cal T}_n(1-{\cal T}_n) \vert eV\vert =2e\vert \langle I \rangle\vert F, \label{shot_47} \end{equation} where $F$ is the Fano factor defined in Eq.~(\ref{Fano_classic}). In the present case, $F$ is given by \begin{equation} F = \frac{\sum_n {\cal T}_n(1-{\cal T}_n)}{\sum_n {\cal T}_n}. \label{FanoTheory} \end{equation} Equations (\ref{shot_47}) and (\ref{FanoTheory}) show that the current noise is given as the sum of noise contributions from the parallel channels, similarly to the case of $\langle I \rangle$. Current noise at finite temperature is \begin{equation} S = 4 k_\textrm{B}T_\textrm{e}G + 2e\vert\langle I \rangle\vert F \left[ \coth\left(\frac{eV}{2k_\textrm{B}T_\textrm{e}}\right) - \frac{2k_\textrm{B}T_\textrm{e}}{eV} \right]. \label{ShotTheory} \end{equation} This equation is the most commonly used shot-noise formula in experiments. As clearly shown in Eq.~(\ref{ShotTheory}), current noise is related to both temperature ($k_\textrm{B}T_\textrm{e}$) and bias ($eV$) in a mixed way. While Landauer argued that thermal noise and shot noise cannot strictly be separated~\cite{LandauerPRB1993}, it is convenient in many cases to regard them as additive, independent contributions. In this review, the term ``shot noise'' means the quantity obtained by subtracting the first term ($4 k_\textrm{B}T_\textrm{e}G$) from Eq.~(\ref{ShotTheory}), which represents the excess noise generated by a finite bias. Equation~(\ref{ShotTheory}) shows that the dimensionless quantity $S/(4 k_\textrm{B}T_\textrm{e}G)$ is a function of $X \equiv eV/2k_\textrm{B}T_\textrm{e}$ as \begin{equation} \frac{S}{4k_\textrm{B}T_\textrm{e}G} = 1 + F\left[X \coth(X) -1\right]. \label{eq:shotnoisenormalized} \end{equation} Figure~\ref{ShotTheoryFig} displays $S/(4 k_\textrm{B}T_\textrm{e}G)$ for the cases of $F=0, 0.5$, and 1. The $F=1$ case is sometimes referred to as the Poisson limit, where we observe that $S/4k_\textrm{B}T_\textrm{e}G \rightarrow |X|$ for $|X|\rightarrow \infty$, namely that current noise at high bias or low temperature corresponds to the classical Schottky-type shot noise.
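As a numerical companion to Eqs.~(\ref{FanoTheory}) and (\ref{eq:shotnoisenormalized}), the following sketch evaluates the Fano factor and the normalized noise for hypothetical channel transmissions (the values are ours, not from a specific experiment):
\begin{verbatim}
import numpy as np

def fano(Ts):
    # Fano factor of parallel channels: F = sum T_n(1-T_n) / sum T_n
    Ts = np.asarray(Ts, dtype=float)
    return np.sum(Ts * (1.0 - Ts)) / np.sum(Ts)

def noise_normalized(X, F):
    # S / (4 kB Te G) = 1 + F [X coth(X) - 1], X = eV / (2 kB Te)
    X = np.asarray(X, dtype=float)
    X = np.where(X == 0.0, 1.0e-12, X)   # X coth(X) -> 1 as X -> 0
    return 1.0 + F * (X / np.tanh(X) - 1.0)

print(fano([0.5]))        # single half-open channel: F = 0.5
print(fano([1.0, 0.5]))   # a fully open channel adds no noise: F = 1/6
X = np.linspace(-10.0, 10.0, 9)
print(noise_normalized(X, fano([0.5])))  # tends to 1 + F(|X| - 1)
\end{verbatim}
Note that a fully transmitted channel contributes to the conductance but not to the shot noise, which lowers $F$ below the single-channel value.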
\begin{figure}[!t] \center \includegraphics[width=8cm]{ShotReviewFig04.eps} \caption{(Color online) $S/(4 k_\textrm{B}T_\textrm{e}G)$ plotted as a function of $X \equiv eV/2k_\textrm{B}T_\textrm{e}$ for $F=0, 0.5$, and 1. The dotted line is the asymptotic line in the Poisson limit.} \label{ShotTheoryFig} \end{figure} \section{Noise measurement} \label{sec:techniques} Compared to standard conductance measurements, current-noise measurements have not been widely performed because of technical difficulties. The main problem is that the current-noise intensity in mesoscopic devices is often too small to measure with a commercially available ammeter. A variety of experimental techniques have been used to solve this problem. In this section, we first explain the basics of current-noise measurements and then introduce several techniques that have provided accurate measurements. \subsection{Current-noise PSD} \label{sec:spectrum density} \begin{figure*}[t] \begin{center} \includegraphics[width=16cm]{ShotReviewFig05.eps} \end{center} \caption{(Color online) (a) Schematic of measurement setup using a current amplifier with gain $A$ and an ammeter. (b) Schematic of time-domain current-noise data $\Delta J_n$. Noise data are plotted as a function of the measurement number $n$. (c)(d) Schematic of $C(m)$ (c) and $S^J(p)$ (d). Dashed black lines are for the $N_T = \infty$ case. Solid red and green lines are for the $N_T \neq \infty$ case with a noiseless measurement setup and a noisy setup, respectively. (e) Schematic of the histogram analysis for data in the effective frequency band. $S^J(p)$ takes a fixed value over the frequency band when $N_T = \infty$ (black), while $S^J(p)$ scatters around $S^J_0=\langle S^J(p)\rangle$ with a standard deviation $\sigma$ when $N_T \neq \infty$ (red and green). Extrinsic noise in the measurement setup enhances $\sigma$ as well as $\langle S^J(p)\rangle$ (from $S^J_0$ to $S^J_1$) and thus decreases measurement accuracy.} \label{fig3_1} \end{figure*} Generally, in electronic transport experiments, one applies an input voltage $V_{\rm{in}}$ or current $I_{\rm{in}}$ to a mesoscopic device (``sample,'' hereafter) and measures the response to evaluate the sample's transport properties. In a conductance $G = \langle I(t)\rangle /V_{\rm{in}}$ measurement, the time average $\langle I(t)\rangle$ of the output current $I(t)$ is often measured with the input voltage $V_{\rm{in}}$ applied. In contrast, in current-noise measurements, the measured quantity is not $\langle I(t)\rangle$ but the variance $\langle [I(t)-\langle I(t)\rangle]^2\rangle \equiv \langle \Delta I(t)^2\rangle$. Let us consider a current $I_{\alpha}(t)$ output from a terminal $\Omega_{\alpha}$ of a sample. The magnitude of the current noise $\Delta I_{\alpha}(t) \equiv I_{\alpha}(t)-\langle I_{\alpha}(t)\rangle$ is often evaluated by its power spectral density (PSD) $S_{\alpha\alpha}(f) = 2\langle \Delta I_{\alpha}(t)^{2}\rangle_{f} / \Delta f$ [Eq.~(\ref{noisedef2})]. As explained in Sect.~\ref{sec:current_noise}, $S_{\alpha\alpha}(f)$ is given by the Fourier transform of the noise auto-correlation function $C_{\alpha\alpha}(\tau)$ as [see Eqs.~(\ref{noisedef2}), (\ref{correlation_func}), and (\ref{Definition_PSD_quantum2})] \begin{equation} \begin{split} S_{\alpha\alpha}(f)&=2\int_{-\infty}^{\infty}C_{\alpha\alpha}(\tau) e^{2\pi if\tau} d\tau, \label{auto_Sbb} \end{split} \end{equation} \begin{equation} \begin{split} C_{\alpha\alpha}(\tau)&=\lim_{T\rightarrow \infty} \frac{1}{T}\int_{-T/2}^{T/2} \Delta I_{\alpha}(t)\Delta I_{\alpha}(t+\tau)dt.
\label{autocorr_beta} \end{split} \end{equation} We can also evaluate the correlation between current noise in different terminals $\Omega_{\alpha}$ and $\Omega_{\beta}$ by the cross-PSD $S_{\alpha\beta}(f) = 2\langle \Delta I_{\alpha}(t)\Delta I_{\beta}(t)\rangle _{f}/ \Delta f$, which is the Fourier transform of the cross-correlation function $C_{\alpha\beta}(\tau)$ of $I_{\alpha}(t)$ and $I_{\beta}(t)$: \begin{equation} \begin{split} S_{\alpha\beta}(f)&=2\int_{-\infty}^{\infty}C_{\alpha\beta}(\tau) e^{2\pi if\tau} d\tau, \label{crosscorr_Sbc} \end{split} \end{equation} \begin{equation} \begin{split} C_{\alpha\beta}(\tau)&=\lim_{T\rightarrow \infty} \frac{1}{T}\int_{-T/2}^{T/2} \Delta I_{\alpha}(t)\Delta I_{\beta}(t+\tau)dt. \label{crosscorr_beta} \end{split} \end{equation} \subsection{Basics of current-noise measurements} \label{sec:basics_of} When the noise auto-correlation function [Eq.~(\ref{autocorr_beta})] of a current $I(t)$ is a delta-type function, the noise PSD, $S^{I}(f)$, is independent of frequency; in this case, $S^{I}(f)$ is referred to as ``white noise'' (see discussion in Sect.~\ref{sec:noisetheory}). In this subsection, we discuss a virtual measurement of white current noise. Because $S^{I}(f)$ is usually too small (typically of the order of $10^{-28} \rm{A}^{2}/\rm{Hz}$) to measure with a standard ammeter, we consider amplifying noise $\Delta I$ to $\Delta J = A \times \Delta I$ using an amplifier with gain $A$. Figure~\ref{fig3_1}(a) shows a schematic of a measurement setup using an amplifier. We evaluate $S^{J}(f) = 2\langle \Delta J(t)^{2}\rangle_{f} / \Delta f$, where $\Delta J(t) \equiv J(t)-\langle J(t)\rangle$ is the measured current noise, to estimate the intrinsic noise $S^{I}(f)$. Here, we briefly discuss an ideal measurement using a noiseless amplifier and ammeter. By a current-noise measurement for $T$ seconds with a sampling rate $r$, one obtains time-domain data $\Delta J_n~(n = 0, 1, \ldots, N_{T} - 1)$, where $N_{T} = r \times T$ is the total number of data points. We calculate the noise auto-correlation function $C(m)~(m = 0, 1, \ldots, N_{T} - 1)$ by replacing the integral in Eq.~(\ref{autocorr_beta}) with the sum of the products among the data, as follows. \begin{equation} \begin{split} C(m)=\frac{1}{N_{T}}\sum_{n=0}^{N_{T}-1}\Delta J_n\Delta J_{\rm{mod(\it{n+m, N_T})}}. \label{p_sum} \end{split} \end{equation} One obtains the current-noise PSD $S^{J}(p)~(p = 0, 1, \ldots, N_{T}-1)$ by the discrete Fourier transform \begin{equation} \begin{split} S^{J}(p)=\frac{2}{N_{T}}\sum_{m=0}^{N_{T}-1}C(m)e^{2\pi ipm/N_{T}}. \label{sj_sum} \end{split} \end{equation} The frequency resolution is $1/T$ (Hz), and the upper limit of the frequency band is $1/T \times N_{T} = r$ (Hz). First, let us consider the case of an infinitely fast ($r = \infty$) and infinitely long ($T = \infty$) measurement. In this case, because $C(m)$ is a delta-type function, $S^{J}$ becomes a white-noise spectrum over the whole frequency band. In actual experiments, $r$ is finite, and the measurement has to be completed in a finite time $T$. In this case, we need to analyze a finite number of discrete time-domain data [see Fig.~\ref{fig3_1}(b)].
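The discrete analysis of Eqs.~(\ref{p_sum}) and (\ref{sj_sum}) can be written down directly; the following sketch (with an assumed record length and unit-variance synthetic white noise) computes $C(m)$ and $S^{J}(p)$ for a single finite record:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def psd_from_record(dJ):
    # Circular auto-correlation C(m), Eq. (p_sum), and its discrete
    # Fourier transform S(p) = (2/N) sum_m C(m) e^{+2 pi i p m / N}
    N = len(dJ)
    C = np.array([np.mean(dJ * np.roll(dJ, -m)) for m in range(N)])
    S = 2.0 * np.fft.ifft(C).real  # ifft supplies the 1/N and + sign
    return C, S

dJ = rng.normal(0.0, 1.0, size=2048)   # synthetic white-noise record
C, S = psd_from_record(dJ)
print(f"C(0) = {C[0]:.3f}, a typical C(m != 0) = {C[1]:+.4f}")
print(f"<S(p)> = {S.mean():.2e}, relative scatter = {S.std()/S.mean():.2f}")
\end{verbatim}
For a single finite record the spectrum scatters strongly around its mean, which is exactly the situation analyzed next.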
Here, we first consider the case of finite $r$ while assuming $T = \infty$. Because $C(m)$ is a discrete delta-type function [black dashed line in Fig.~\ref{fig3_1}(c)], $S^{J}(p)$ takes a fixed value $S^{J}_0$ [black dashed line in Fig.~\ref{fig3_1}(d)] in the ``effective frequency band'', which is determined by the upper-frequency limit of the measurement (typically about $0.4 \times r$ Hz): the factor 0.4 reflects a low-pass filtering generally applied to prevent aliasing errors. The histogram analysis of the data in the effective band is shown in the upper panel in Fig.~\ref{fig3_1}(e), where all the data points are on $S^{J}_0$. One can accurately estimate $S^{I}$ from the measured $S^{J}_0$ as $S^{I} = S^{J}_0/A^{2}$, where $A$ is the gain of the amplifier. Let us consider a current-noise measurement in a finite time ($T \neq \infty$, and hence $N_{T} \neq \infty$). In this case, to avoid the influence of the data truncation, one needs to multiply the time-domain data by a window function, e.g., a Hanning window. In contrast to the $T = \infty$ case, $C(m \neq 0)$ fluctuates around $C = 0$ [red line in Fig.~\ref{fig3_1}(c)], causing the fluctuation of $S^{J}(p)$ in the effective frequency band [red line in Fig.~\ref{fig3_1}(d)]. The middle panel in Fig.~\ref{fig3_1}(e) shows the histogram analysis of the $S^{J}(p)$ data. The current-noise intensity is given by the peak $S^{J}(p)$ value $S^{J}_0 = \langle S^{J}(p)\rangle$, and the accuracy of the analysis can be evaluated as the standard deviation $\sigma \equiv \langle [S^{J}(p) -\langle S^{J}(p)\rangle]^{2}\rangle ^{1/2}$. Because $\sigma$ decreases in inverse proportion to $\sqrt{N_{T}}=\sqrt{r\times T}$, the accuracy is improved by increasing $r$ or $T$. So far, we have discussed current-noise measurements using a noiseless amplifier and ammeter. In contrast, below we consider the influence of extrinsic noise generated in these measurement devices. When the input-referred noise of the amplifier and the ammeter is given by $S^{I}_{\rm{amp}}$ and $S^{I}_{\rm{meas}}$, respectively, the relation between $S^I$ in a sample and the measured noise $S^J$ is described as \begin{equation} \begin{split} S^{J}=A^2(S^I+S^{I}_{\rm{amp}})+S^{I}_{\rm{meas}}. \label{inputnoise} \end{split} \end{equation} When the gain $A$ is large enough that $A^2\times S^{I}_{\rm{amp}} \gg S^{I}_{\rm{meas}}$ holds, $S^{I}_{\rm{amp}}$ dominates the extrinsic noise in the measurement setup. The extrinsic noise enhances both the $C(m)$ peak at $m = 0$ and the fluctuations at $m \neq 0$ [green line in Fig.~\ref{fig3_1}(c)], resulting in the increase in $\langle S^{J}(p)\rangle$ (from $S^{J}_0$ to $S^{J}_1$) and the fluctuation of $S^{J}(p)$ in the effective frequency band [green line in Fig.~\ref{fig3_1}(d)]. The lowest panel in Fig.~\ref{fig3_1}(e) displays the histogram representation of the $S^{J}(p)$ data. The measurement accuracy for $S^{I}=S^{J}_0/A^{2}-S^{I}_{\rm{amp}}$ drops due to the increase in $\sigma$. Although the accuracy can be improved by increasing $T$ and/or $r$ as in the noiseless-measurement case, $S^{I}_{\rm{amp}}$ is often larger than $S^{I}$, so that it takes a long time to obtain high accuracy. Thus, the extrinsic noise in the measurement devices degrades the efficiency of current-noise measurements. When one uses two current amplifiers in series, the relation between $S^I$ and $S^J$ is given by \begin{equation} \begin{split} S^{J}=A_2^2[A_1^2(S^I+S^{I}_{\rm{amp1}})+S^{I}_{\rm{amp2}}]+S^{I}_{\rm{meas}}.
\label{doubleamp} \end{split} \end{equation} Here, $A_1$ and $S^{I}_{\rm{amp1}}$, respectively, are the gain and the input-referred current noise of the first amplifier, and $A_2$ and $S^{I}_{\rm{amp2}}$ are those of the second one. When $A_1$ is large enough that $A_1^2\times S^{I}_{\rm{amp1}} \gg S^I_{\rm{amp2}}$ holds, the influence of $S^{I}_{\rm{amp2}}$, as well as $S^{I}_{\rm{meas}}$, can be neglected, and $S^{I}_{\rm{amp1}}$ dominates the system performance. \subsection{Noise sources in a mesoscopic device} Generally, a mesoscopic device has a variety of current-noise origins, each of which has its characteristic PSD [Fig.~\ref{fig3_2}(a)]. One important example is $1/f$ noise (red line), which originates from the trapping of electrons in unintentionally formed discrete levels in a sample~\cite{McWhorter1957,HoogePR1969,ZielNoise1986}. While we have considered a measurement for white noise above, we take the frequency dependence into account below. \begin{figure}[tb] \begin{center} \includegraphics[width=7cm]{ShotReviewFig06.eps} \end{center} \caption{(Color online) (a) Schematic of noise PSDs of several noise sources. (b) Simulated current-noise PSD in a spin-degenerate QPC with conductance $0.5 \times 2e^2/h$. We assumed that the temperature is 100 mK, the source-drain bias is 100 $\rm{\mu}$V, and the $1/f$ corner frequency is 10 kHz~\cite{AguadoPRL2000,GustavssonPRL2007}.} \label{fig3_2} \end{figure} Figure~\ref{fig3_2}(b) shows a representative current-noise PSD in a QPC fabricated in a two-dimensional electron system (2DES)~\cite{AguadoPRL2000,GustavssonPRL2007}. Whereas shot noise and thermal noise usually have broadband spectra up to gigahertz frequencies, depending on the applied bias ($eV$) and temperature ($k_{\rm{B}}T_{\rm{e}}$), they can be regarded as white noise at low frequencies (typically below a few hundred megahertz). Indeed, in Fig.~\ref{fig3_2}(b), one observes that the current noise is almost independent of frequency from 100 kHz to 100 MHz, where shot noise and thermal noise are dominant. Most shot-noise measurements evaluate the PSD in this white-noise regime. At very low frequencies, we observe that the PSD monotonically increases with decreasing frequency due to $1/f$ noise and random telegraph (RT) noise [Fig.~\ref{fig3_2}(a)]. This prevents us from evaluating the shot noise and thermal noise. The frequency at which the $1/f$-noise intensity is comparable to that of white noise is often referred to as the corner frequency ($f_{\rm{c}}$). For example, $f_{\rm{c}}$ is typically about 100 kHz for a sample fabricated in a GaAs-based heterostructure. Besides, the quantum noise [Fig.~\ref{fig3_2}(a)], which is beyond the scope of this review, becomes dominant at very high frequencies. \subsection{Calibration of noise-measurement systems} \label{sec:calibration_of_noise-measurement} In estimating $S^I$ from the measured noise $S^J$, it is essential to know precisely the gain $A$ and input-referred noise $S^I_{\rm{amp}}$ of the amplifier [see Eq.~(\ref{inputnoise})]. We also need to know the unintentional attenuation of the current noise in the wiring between the sample and measurement instruments. Meeting these requirements calls for calibration of the experimental setup. Below, we introduce two techniques often used for calibrating noise-measurement setups.
\subsubsection{Calibration by thermal-noise measurement} \label{sec:johnson_noise_thermometry} When a sample is in thermal equilibrium (no bias applied), thermal noise $S^I_{\rm{th}}$ dominates current noise $S^I$. The magnitude of the low-frequency thermal noise is described as \begin{equation} \begin{split} S^I_{\rm{th}}=4k_{\rm{B}}T_{\rm{e}}{\rm{Re}}(Y), \label{johnson_y} \end{split} \end{equation} where $T_{\rm{e}}$ is the electron temperature, $Y$ is the admittance of the sample, and ${\rm{Re}}(Y)$ is the real part of $Y$, namely the dc conductance $G$. Let us consider the change in $S^J$ induced by varying $T_{\rm{e}}$ or $G$~\cite{DiCarloRSI2006,HashisakaRSI2009,ArakawaAPL2013,LeeRSI2021}. Suppose that $S^J$ is given by $A^2(S^I+S^{I}_{\rm{amp}})$ [see Eq.~(\ref{inputnoise})]. An increase in temperature from $T_{\rm{e}}$ to $T_{\rm{e}}+\Delta T_{\rm{e}}$ results in an increase in $S^I_{\rm{th}}$ and hence $S^J$ to $S^J +\Delta S^J$, where $\Delta S^J \equiv S^J(T_{\rm{e}}+\Delta T_{\rm{e}})-S^J(T_{\rm{e}})$. The gain $A$ can be evaluated as $A=[\Delta S^J/4k_{\rm{B}}\Delta T_{\rm{e}}{\rm{Re}}(Y)]^{1/2}$. Note that $A$ is the total gain of the whole measurement system, including the amplifier's gain and the attenuation in the wiring; for simplicity, we use the same symbol $A$ whether or not the attenuation is included. $S^I_{\rm{amp}}$ can be evaluated by extrapolating the $T_{\rm{e}}$ dependence of $S^J$ to $T_{\rm{e}} \rightarrow0$ [$S^I_{\rm{amp}}= S^J(T_{\rm{e}} \rightarrow 0)/A^2$]. \subsubsection{Calibration by shot-noise measurement} \label{sec:shot_noise_thermometry} Shot noise is caused by stochastic charge-scattering processes in a conductor [see Eqs.~(\ref{ShotTunnel}) and (\ref{ShotTheory})]. When the scattering process is well known, shot noise is also useful for calibrating the measurement system. Let us consider simple scattering processes with transmission probability ${\cal T}\ll1$. When a current flowing through the scatterer is given by $\langle I\rangle$, shot noise $S^I_{\rm{shot}}$ at zero temperature is given by \begin{equation} \begin{split} S^I_{\rm{shot}}=2e\langle I\rangle. \label{shot_calib} \end{split} \end{equation} At finite temperature, Eq.~(\ref{shot_calib}) is modified to \begin{equation} \begin{split} S^I_{\rm{shot}}=2e\langle I\rangle\left[\coth\left(\frac{eV}{2k_{\rm{B}}T_{\rm{e}}}\right)-\frac{2k_{\rm{B}}T_{\rm{e}}}{eV}\right]. \label{shot_calib_finite} \end{split} \end{equation} Note that Eq.~(\ref{shot_calib_finite}) is obtained by subtracting 4$k_{\rm{B}}T_{\rm{e}}G$ from $S^I$ on the right-hand side of Eq.~(\ref{ShotTunnel}) (see discussion in Sect.~\ref{subsub:generalshot}). From the measured $\langle I\rangle$ dependence of $S^J$, one can estimate the amplifier's gain as $A = (\Delta S^J/2e\Delta \langle I\rangle)^{1/2}$, where $\Delta S^J$ and $\Delta \langle I\rangle$ are the changes in $S^J$ and $\langle I\rangle$, respectively. The electron temperature is evaluated from a fit to the experimental data with Eq.~(\ref{shot_calib_finite})~\cite{SpietzScience2003,OtaJPCM2017,TikhonovPRB2020}. The amplifier's noise $S^I_{\rm{amp}}$ is obtained by substituting $S^I_{\rm{th}} = 4k_{\rm{B}}T_{\rm{e}}G$ into $S^I_{\rm{amp}}= S^J(\langle I\rangle=0)/A^2-S^I_{\rm{th}}$. Thus, one can evaluate both $A$ and $S^I_{\rm{amp}}$ from the shot-noise measurements.
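A sketch of the shot-noise calibration just described is given below. The conductance, true gain, electron temperature, offset, and noise level are all assumed values for illustration; the fit recovers $A$ and $T_{\rm{e}}$ from synthetic data via Eq.~(\ref{shot_calib_finite}):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

e, kB = 1.602e-19, 1.381e-23
G = 7.75e-6                  # junction conductance (S), assumed known

def SJ_model(I, A2, Te, S_off):
    # A^2 times the shot noise of Eq. (shot_calib_finite) plus a
    # constant offset lumping thermal and amplifier noise together
    x = e * (I / G) / (2.0 * kB * Te)
    return A2 * 2.0 * e * I * (1.0 / np.tanh(x) - 1.0 / x) + S_off

# Synthetic "measured" data with true A = 1000 and Te = 50 mK (assumed)
rng = np.random.default_rng(0)
I = np.linspace(-2.0e-9, 2.0e-9, 41)
I = I[I != 0.0]              # drop the indeterminate point V = 0
S_meas = SJ_model(I, 1.0e6, 0.050, 3.0e-21)
S_meas *= 1.0 + 0.01 * rng.normal(size=I.size)

(A2, Te, S_off), _ = curve_fit(SJ_model, I, S_meas,
                               p0=(5.0e5, 0.10, 1.0e-21))
print(f"fitted gain A = {np.sqrt(A2):.0f}, Te = {Te * 1e3:.1f} mK")
\end{verbatim}
The crossover region $|eV| \sim k_{\rm{B}}T_{\rm{e}}$ is what constrains $T_{\rm{e}}$ in the fit, so the current sweep must extend well into that regime.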
\subsection{Examples of current-noise measurements} \label{sec:measurement_systems} Noise-measurement techniques can be categorized into several groups. This section describes the concept, advantages, and drawbacks of each category by introducing examples from past experiments. Before starting the discussion, we summarize some assumptions. First, we assume that a sample is placed in a cryostat to focus on quantum transport at low temperatures. The current noise generated in the sample is taken out of the cryostat through coaxial cables, whose length is typically a few meters. If the cables directly connect the sample to measurement instruments, the sample is ac grounded through their stray capacitance of a few hundred picofarads, causing the attenuation of current noise. For an accurate current-noise measurement, it is crucial to suppress the attenuation. Second, while above we have assumed current-noise measurements using an ammeter and a current amplifier, as shown in Fig.~\ref{fig3_1}(a), below we consider converting $\Delta I$ to voltage noise $\Delta V$ to measure it with an oscilloscope or spectrum analyzer with a broad frequency band and a wide dynamic range. Third, it is essential to meet standard requirements for low-temperature experiments, e.g., minimizing heat inflow into the low-temperature environment and screening external electromagnetic disturbances. \subsubsection{Measurement setup using a voltage amplifier} \label{sec:voltage_amp} \begin{figure}[tb] \begin{center} \includegraphics[width=8cm]{ShotReviewFig07.eps} \end{center} \caption{(a) Schematic of current-noise measurement using a voltage amplifier and a high-speed voltmeter. (b) Impedance conversion with a load resistance $R_{\rm{L}}$. (c) Example of current-noise measurements using voltage amplifiers. Current noise generated in a QPC causes voltage noise across the QPC. The voltage noise is measured through different pairs of voltage probes and analyzed to evaluate their cross-correlation. Reprinted figure with permission from Ref.~[\onlinecite{KumarPRL1996}]. {\copyright} (1996) American Physical Society.} \label{fig3_3} \end{figure} Current noise $\Delta I$ causes voltage noise $\Delta V = \Delta I~\times ~R$ between the input and output terminals of a sample (resistance $R$). One of the simplest experimental techniques for evaluating $\Delta I$ is to measure $\Delta V$ with a voltage amplifier and a high-speed voltmeter, as shown in Fig.~\ref{fig3_3}(a). Figure~\ref{fig3_3}(c) shows a schematic of the measurement setup using this technique reported in Ref.~[\onlinecite{KumarPRL1996}]. The sample is a QPC fabricated in a 2DES in an AlGaAs/GaAs heterostructure. Voltage noise $\Delta V=\Delta I/G_{\rm{QPC}}$ between the input and output terminals flows through coaxial cables and is amplified to $\Delta W = A \times \Delta V$ at room temperature. Here, $G_{\rm{QPC}}$ is the two-terminal conductance of the QPC, and $A$ is the amplifier's gain. The spectrum analyzer converts the time-domain data $\Delta W(t)$ to the noise auto-correlation PSD $S^W(f) \equiv 2\langle \Delta W(t)^2\rangle_{f}/\Delta f$. In this setup, the relation between $S^I$ and $S^W$ is given by \begin{equation} \begin{split} S^W=A^2\times (S^I/G_{\rm{QPC}}^2+S^V_{\rm{input}})+S^V_{\rm{output}}, \label{sw} \end{split} \end{equation} where $S^V_{\rm{input}}$ is the PSD of the input-referred voltage noise of the amplifier, and $S^V_{\rm{output}}$ is the extrinsic noise arising after the amplification.
In the experimental setup shown in Fig.~\ref{fig3_3}(c), both time-domain data $\Delta W_1(t)$ and $\Delta W_2(t)$, on the right- and left-hand sides of the Hall bar, respectively, are analyzed to evaluate their cross-correlation $S^W_{12}(f) \equiv 2\langle \Delta W_1(t)\Delta W_2(t)\rangle_{f}/\Delta f$ [see Eqs.~(\ref{crosscorr_Sbc}) and (\ref{crosscorr_beta})]~\cite{KumarPRL1996,SampietroRSI1999}. When both the gain and input noise of the two amplifiers are the same ($A_1 = A_2 = A$ and $S^V_{\rm{input1}}=S^V_{\rm{input2}}=S^V_{\rm{input}}$), $S^W_{12}$ is described as \begin{equation} \begin{split} S^W_{\rm{12}}\simeq A^2\times (S^I/G_{\rm{QPC}}^2+S^V_{\rm{input}}). \label{sw_cross} \end{split} \end{equation} Note that the output noise $S^V_{\rm{output1}}$ and $S^V_{\rm{output2}}$ are washed out for large $N_T$ because they are uncorrelated; hence, the cross-correlation measurement suppresses the influence of the extrinsic noise. The above measurement setup can be assembled using commercially available amplifiers and a spectrum analyzer. Because of its simplicity, this method is useful for some current-noise measurements; it was applied for measuring shot noise generated by spin accumulation in a tunnel-magneto-resistance device~\cite{ArakawaPRL2015}, for example. On the other hand, it only works at very low frequencies because the sample resistance $R$ and the stray capacitance $C_{\rm{Coax}}$ of the coaxial cables form an RC low-pass filter that sets an upper-frequency limit [cut-off frequency $f_{RC} = 1/(2\pi RC_{\rm{Coax}})$]. The RC filtering is problematic, particularly when $f_{RC}$ is lower than the corner frequency of the $1/f$ noise. In this case, the $1/f$ noise buries the other noise contributions over the entire measurable frequency band, preventing us from evaluating the shot noise. The frequency bandwidth can be expanded by shunting the output terminal of a sample to ground with a load resistance $R_{\rm{L}}$ smaller than $R$ [see Fig.~\ref{fig3_3}(b)]~\cite{DelattreNatPhys2009,OkazakiAPL2013,ArakawaPRL2015,OkazakiNatCom2016}. The drawback is that this method suppresses the magnitude of the voltage noise by a factor of $[R_{\rm{L}}/(R + R_{\rm{L}})]^2$, which degrades the resolution of the current-noise measurement. \subsubsection{Measurement setup using an inductor-capacitor resonant circuit} \label{sec:LC_resonance} In the experimental setup explained above [see Fig.~\ref{fig3_3}(a)], the resistance of a sample ($R\sim h/e^2 \approx 26~\rm{k}\Omega$) and the stray capacitance ($C_{\rm{Coax}} \sim$ 100 pF) typically give $f_{RC}$ of the order of 100 kHz. This $f_{RC}$ value is not high enough for some current-noise measurements. A method using an inductor-capacitor (LC) resonant circuit has been widely used to solve this problem~\cite{DiCarloRSI2006,HashisakaRSI2009,ArakawaAPL2013,LeeRSI2021}. Figure~\ref{fig3_4}(a) shows the typical measurement setup. The dc output current flows to ground through the inductor $L$ at low temperature. The inductor forms an LC resonant circuit with $C_{\rm{Coax}}$ to have a high impedance $Z_0$ at the resonance frequency $f_{\rm{LC}}=1/(2\pi\sqrt{LC_{\rm{Coax}}})$. Current noise $\Delta I$ generated in a sample causes voltage noise $\Delta V = \Delta I~\times~Z_0$ near $f_{\rm{LC}}$. By choosing an appropriate inductance $L$, one can set $f_{\rm{LC}}$ much higher than the $1/f$ corner frequency $f_c$ and thus enable the evaluation of the shot noise.
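The bandwidth estimates quoted above follow from elementary circuit formulas; a quick numerical check (the component values are assumed, but typical) is:
\begin{verbatim}
import numpy as np

R      = 26.0e3      # sample resistance ~ h/e^2 (ohm)
C_coax = 100.0e-12   # stray capacitance of the cabling (F), assumed
L      = 66.0e-6     # inductance (H), assumed, chosen for f_LC ~ 2 MHz

f_RC = 1.0 / (2.0 * np.pi * R * C_coax)           # ~ 61 kHz, below f_c
f_LC = 1.0 / (2.0 * np.pi * np.sqrt(L * C_coax))  # ~ 2.0 MHz, above f_c
print(f"f_RC = {f_RC / 1e3:.0f} kHz, f_LC = {f_LC / 1e6:.1f} MHz")
\end{verbatim}
With these values the resonance sits roughly an order of magnitude above a typical $1/f$ corner frequency, moving the detection band into the white-noise regime.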
This method is suitable for measuring current noise in various mesoscopic devices, such as QPCs~\cite{DiCarloPRL2008,HashisakaPRB2008, MuroPRB2016}, QH devices~\cite{de-PicciottoNature1997,BartolomeiScience2020}, and quantum dots (QDs)~\cite{McClurePRL2007,FerrierNatPhys2016}. \begin{figure}[tb] \begin{center} \includegraphics[width=8cm]{ShotReviewFig08.eps} \end{center} \caption{(a) Schematic of measurement setup using an LC resonant circuit and a common-source voltage amplifier. (b), (c) Experimental setup with two noise-measurement lines (b) and representative current-noise PSDs obtained with it (c). Reprinted from Ref.~[\onlinecite{DiCarloRSI2006}], with the permission of AIP Publishing.} \label{fig3_4} \end{figure} Whereas this technique successfully excludes the influence of $1/f$ noise, it has a narrow frequency bandwidth. The narrow bandwidth decreases the number of data points available for the analysis, which increases the standard deviation $\sigma$ of $S^I$ data (see discussion in Sect.~\ref{sec:basics_of}). A cryogenic low-noise amplifier is often used to compensate for the degradation of the resolution. Note that one needs to carefully calibrate the measurement setup at low temperature because $Z_0$ of the LC circuit and the performance of a cryogenic amplifier differ from those at room temperature (see discussion in Sect.~\ref{sec:calibration_of_noise-measurement}). Figure~\ref{fig3_4}(b) shows an example of an experimental setup using LC resonant circuits~\cite{DiCarloRSI2006}. Current noise generated in a sample causes voltage noise near the resonance frequency $f_{\rm{RLC}} \approx 2$ MHz of the resistor-inductor-capacitor (RLC) circuit. The voltage noise is amplified by a homemade cryogenic common-source amplifier at 4.2 K and then taken out of the cryostat using a 50-$\Omega$ impedance-matched coaxial cable. The output current noise is again amplified at room temperature and recorded by an analog-to-digital converter (digitizer) for the FFT analysis. The low-noise performance of the cryogenic amplifier contributes to increasing the resolution. The resolution is further improved by evaluating the cross-correlation between the two measurement lines. Figure~\ref{fig3_4}(c) displays representative experimental results of the auto-correlation PSD $P_i$ ($i$=1 or 2) and the real ($X_R$) and imaginary ($X_I$) parts of the cross-correlation PSD. Both $P_i$ and $X_R$ show RLC resonance line shapes. The shot noise generated in the sample is estimated from the peak height, which can be evaluated by a Lorentzian fit, as shown for $X_R$ in the inset. \subsubsection{Measurement setup using a transimpedance amplifier} \label{sec:transimpedance} In the two methods introduced above, current noise is converted to voltage noise and then amplified by a voltage amplifier. On the other hand, a transimpedance amplifier (TA), which converts current noise $\Delta I$ to voltage noise $\Delta V$ with high transimpedance $Z_{\rm{trans}} = \Delta V/\Delta I \approx R_{\rm{FB}}$, can also be used for current-noise measurements~\cite{HashisakaRSI2014}. Here, $R_{\rm{FB}}$ is the feedback resistance. Figure~\ref{fig3_5}(a) shows a schematic of a measurement setup using a TA. The TA converts $S^I(f)$ generated in a sample to $S^V_{\rm{out}}(f) = \vert Z_{\rm{trans}}(f)\vert^2 S^I(f)$. \begin{figure}[tb] \begin{center} \includegraphics[width=8cm]{ShotReviewFig09.eps} \end{center} \caption{(Color online) (a) Schematic of a measurement setup using a TA at 4 K.
The TA converts $S^I$, generated in a sample placed on a mixing-chamber (MC) plate in a dilution refrigerator, to $S^V_{\rm{out}}(f) = \vert Z_{\rm{trans}}(f)\vert^2 S^I(f)$. (b) Current-noise measurement for a QPC in a QH system. Current $I_1$ flows into the sample through the ohmic contact $\rm{\Omega}_1$ to impinge on the QPC. The shot noise generated at the QPC is evaluated by measuring the reflected ($I_3$) and transmitted ($I_5$) currents and evaluating their cross-correlation. (c) Representative cross-correlation data $S_{35}=2\langle {\rm{Re}}[\Delta I_3(t)\Delta I_5(t)]\rangle_f/\Delta f$. Reproduced from Ref.~[\onlinecite{HashisakaRSI2014}], with the permission of AIP Publishing.} \label{fig3_5} \end{figure} Compared with the LC resonant circuit (see Sect.~\ref{sec:LC_resonance}), the wider frequency bandwidth of a TA enables us to use many data points for the histogram analysis, enhancing the resolution of current-noise measurements [see Fig.~\ref{fig3_1}(e)]. For example, in Ref.~[\onlinecite{HashisakaRSI2014}], although the input-referred noise of a TA is relatively high (higher than that of the common-source HEMT amplifier in Ref.~[\onlinecite{DiCarloPRL2008}]), the resolution is as good as that of a measurement setup using an LC resonant circuit~\cite{DiCarloPRL2008}. It is important to note that this method is advantageous for two-current cross-correlation measurements because the TA's low input impedance suppresses crosstalk caused by capacitive couplings. For example, estimating the crosstalk as the voltage-divider ratio $Z_{\rm{in}}/(Z_{\rm{in}}+1/2\pi f C)$, when the input impedance of the amplifiers is $Z_{\rm{in}} = 10$ k$\Omega$, a capacitive coupling of $C = 1$ pF induces crosstalk of about 6 \% at $f = 1$ MHz (voltage noise $\Delta V_A$ in one of the measurement lines leads to $\Delta V_B = 0.06\Delta V_A$ in the other line), while it is only 0.06 \% in the case of 100 $\Omega$ input impedance. Figure~\ref{fig3_5}(b) shows a schematic of a shot-noise measurement on a QPC fabricated in a QH system using TAs (see Sect.~\ref{sec:current_noise_chiral_edge}), and Fig.~\ref{fig3_5}(c) shows a representative result. Current noise $\Delta I_3$ and $\Delta I_5$ at ohmic contacts $\Omega_3$ and $\Omega_5$ were measured, and their cross-correlation $S_{35}$ was evaluated. The experimental data (diamonds) agree very well with the theoretical curve (solid line), demonstrating the high reliability of this measurement technique. While the above experiment used TAs based on HEMTs~\cite{HashisakaRSI2014}, a TA using a superconducting-quantum-interference device (SQUID) can also be employed for current-noise measurements~\cite{JehlRSI1999,JehlNature2000,TranJJAP2017}. The advantage of the SQUID is that it can be placed on a mixing-chamber plate, namely close to a sample, because of its small energy consumption. However, it cannot be used in finite magnetic fields because the superconductivity breaks down. \subsubsection{High-frequency shot-noise measurements} \label{sec:high_frequency} While above we have introduced shot-noise measurements in the white-noise limit, other experiments have demonstrated shot-noise measurements at higher frequencies. The gigahertz shot-noise intensity corresponds to the number of gigahertz photons generated by charge scattering; hence, measuring it is important for understanding the correlation between electrons and photons in mesoscopic systems. Such high-frequency measurements can provide unique information on correlated electron systems, for example, the Josephson frequency in fractional QH systems~\cite{KapferScience2019,BisogninNatCom2019}.
Figure~\ref{fig3_6}(a) presents a schematic of a high-frequency shot-noise measurement (from 4 to 8 GHz)~\cite{ZakkaPRL2007}. Shot noise is generated at a QPC placed on a coplanar waveguide due to a dc source-drain bias applied through bias-tee circuits. The high-frequency shot noise enhanced by the reflections at both ends of the coplanar waveguide is amplified by a cryogenic low-noise amplifier (LNA) and a room-temperature amplifier before it is detected as rf photons by photodiodes. Although LNAs usually generate large extrinsic noise, circulators installed at low temperatures prevent its backflow to the sample. Thus, cryogenic high-speed measurement techniques have allowed rf shot-noise detection. High-frequency shot noise can also be measured using an on-chip photon detector~\cite{AguadoPRL2000,DeblockScience2003,OnacPRL2006,GustavssonPRL2007}. Figure~\ref{fig3_6}(b) shows a schematic of such a measurement using a double quantum dot (DQD), which detects the shot noise generated in a nearby QPC~\cite{GustavssonPRL2007}. When the DQD absorbs a photon emitted from the QPC, the energy of which corresponds to the energy-level separation $\delta$ in the DQD, an electron located in the left QD (QD1) is transferred to the right QD (QD2) and is detected as a current flowing through the DQD. This process allows us to perform frequency-selective shot-noise detection by tuning $\delta$ with gate voltages. Figure~\ref{fig3_6}(c) shows representative shot-noise PSDs measured as a function of the level separation ($\delta = \Delta_{\rm{12}}$). For the three different source-drain biases applied to the QPC, the PSDs agree well with the theoretical shot-noise curves (dashed lines) over a broad frequency range (from 15 to 80 GHz). Similar on-chip rf shot-noise detection has also been demonstrated using carbon nanotubes~\cite{OnacPRL_2_2006} and semiconductor nanowires~\cite{GustavssonPRB2008}. High-frequency shot noise has also been measured by bolometric-detection techniques using a 2DES as a detector~\cite{HashisakaPRB2008,JompolNatCom2015}. \begin{figure}[htb] \begin{center} \includegraphics[width=8cm]{ShotReviewFig10.eps} \end{center} \caption{(Color online) (a) Schematic of a current-noise measurement at gigahertz frequencies. Reprinted figure with permission from Ref.~[\onlinecite{ZakkaPRL2007}]. Copyright (2007) by the American Physical Society. Shot noise generated in a QPC placed on a coplanar waveguide is amplified by a cryogenic LNA and then measured at room temperature. Circulators prevent the backflow of microwave photons from room temperature. (b) Schematic of a frequency-selective shot-noise measurement using a DQD photon detector. The DQD detects microwave photons emitted from the QPC, the energy of which corresponds to the level separation $\delta$. (c) Representative shot-noise PSD measured using the setup shown in (b). The measurements were performed at several source-drain biases ($V_{\rm{QPC}}$) applied to the QPC. Panels (b) and (c) are reprinted with permission from Ref.~[\onlinecite{GustavssonPRL2007}]. {\copyright} (2007) American Physical Society.} \label{fig3_6} \end{figure} \subsubsection{Counting of single electrons} \label{sec:FCS} If one monitors all the electrons flowing through a sample, the obtained time-domain data provide perfect information on the probability distribution of the electron scattering process.
Although such a measurement is difficult for general mesoscopic devices, it has been achieved for QD devices, thanks to the small number of electrons transmitted per unit time and to charge detectors sufficiently sensitive to the charging effect in QDs~\cite{GustavssonPRL2006,FujisawaScience2006,BelzigPRB2005,KaasbjergPRB2015}. \begin{figure}[htb] \begin{center} \includegraphics[width=8cm]{ShotReviewFig11.eps} \end{center} \caption{(Color online) (a) Schematic of an electron counting device. Current $I_{\rm{PC}}$ flowing through a point contact (PC) varies depending on the electron occupation in the DQD placed near the PC, forming a charge detector. (b) Representative time-domain signals of $I_{\rm{PC}}$ taken in different regimes of the DQD (B, blockade regime; H, hole-like transport regime; M, middle regime; and E, electron-like transport regime). Stepwise fluctuation indicates the change in the charge state in the DQD. (c) Representative result of a histogram analysis for the number of electron transmissions during a time interval. Second-order noise $S=e^2\langle\delta N^2\rangle /T_{\rm{avg}}$ and third-order noise $C=e^3\langle\delta N^3\rangle /T_{\rm{avg}}$ are estimated from the fit, where $\delta N = N - \langle N\rangle$ and $T_{\rm{avg}}$ is the averaging time. Panels are reprinted with permission from Ref.~[\onlinecite{FujisawaScience2006}]. {\copyright} (2006) American Association for the Advancement of Science.} \label{fig3_7} \end{figure} Figure~\ref{fig3_7}(a) shows a schematic of a counting-statistics experiment for a series DQD~\cite{FujisawaScience2006}. The QPC asymmetrically coupled to the DQD operates as a charge detector because the transmitted current $I_{\rm{PC}}$ flowing through it depends on the electron numbers ($n$, $m$) in the left and right QDs, respectively. Figure~\ref{fig3_7}(b) shows representative time-domain data of $I_{\rm{PC}}$ measured under different DQD conditions. The stepwise fluctuation of $I_{\rm{PC}}$ reflecting changes in ($n$, $m$) enables us to monitor the one-by-one electron transport through the DQD. Figure~\ref{fig3_7}(c) shows the result of a histogram analysis for the transmitted current. One observes that about two electrons are transmitted through the DQD per unit time and that the electron number has a finite variance due to the shot-noise generation. Moreover, the asymmetric distribution indicates finite skewness, namely third-order noise generated in the transmission process. Thus, even higher-order cumulants can be evaluated in counting-statistics experiments.
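The cumulant estimation underlying Fig.~\ref{fig3_7}(c) can be sketched numerically as follows; this is a toy illustration assuming an ideal Poisson process, for which all cumulants equal the mean, whereas the measured distribution deviates from Poissonian (the rate and averaging time are hypothetical).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
e = 1.602e-19      # elementary charge in C
T_avg = 1.0        # averaging time per trace in s (hypothetical)
rate = 2.0         # mean electron number per T_avg (hypothetical)

# Electron numbers N counted in many traces of duration T_avg.
N = rng.poisson(rate, size=100_000)
dN = N - N.mean()

S = e**2 * np.mean(dN**2) / T_avg   # second-order noise
C = e**3 * np.mean(dN**3) / T_avg   # third-order noise (skewness)

# For a Poisson process both normalized cumulants are ~1:
print(S / (e**2 * rate / T_avg), C / (e**3 * rate / T_avg))
\end{verbatim}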
\section{Examples of shot noise studies} \label{sec:shotexample} The purpose of this review is to show what can be learned from a combination of current and current-noise measurements. We present several shot-noise experiments that are helpful for this purpose. Before discussing the quantum many-body phenomena in Sect.~\ref{sec:quantumliquid}, in this section, we introduce experiments, most of which can be understood within the Landauer formalism based on the single-particle picture. Although excellent studies were performed in the 1990s as well, we mainly focus on experiments that have been conducted since the previous reviews published around 2000~\cite{deJong1997,BlanterPR2000,MartinBook}. \subsection{Single-channel transport through a QPC} \label{subsec:QPC} The shot-noise formula [Eq.~(\ref{ShotTheory})] has been quantitatively verified in experiments on QPCs, where the number of conduction channels and their transmission probabilities can be precisely controlled. Therefore, here we first address the shot noise in a QPC. A QPC is often fabricated in a 2DES formed in a GaAs/AlGaAs heterostructure using a pair of gate electrodes (``split-gate'' electrodes). A negative split-gate voltage depletes the 2DES underneath the electrodes, as shown in Fig.~\ref{QPC_Fano}(a), and then decreases the width of the 2DES constriction. The constriction works as a point contact between the two large 2DES regions. In a high-mobility 2DES, the electron mean free path can be much longer than the length of the constriction (typically $\simeq 1~\mu$m) so that electrons are ballistically transmitted through it. When the constriction width is comparable to the Fermi wavelength ($\simeq$ 40 nm for electrons in a typical 2DES), only a small number of conduction channels exist in the constriction, and the transmission probability ${\cal{T}}$ of each channel can be varied continuously as a function of the gate voltage. In such a case, the conductance shows a stepwise or ``quantized'' behavior due to the electron's wave nature; therefore, the constriction is called a QPC. \begin{figure}[tbhp] \center \includegraphics[width=8cm]{ShotReviewFig12.eps} \caption{(Color online) (a) Schematic of a QPC fabricated in a 2DES. Two large 2DES regions are connected through a point-like constriction formed using split-gate electrodes. (b) Conductance quantization through a QPC measured at 100 mK and 0 T. Conductance $G$ varies as a function of gate voltage, showing stepwise behavior in units of $2e^{2}/h$~\cite{MuroPRB2016}. (c) Measured Fano factor (circles) plotted as a function of the conductance. The solid line is the theoretical curve [see Eq.~(\ref{FanoTheory})]. (inset) Fano factor over a wide range of conductance up to $10\times 2e^2/h$. Panels (b) and (c) are reproduced from Ref.~[\onlinecite{MuroPRB2016}]. } \label{QPC_Fano} \end{figure} Figure~\ref{QPC_Fano}(b) shows typical conductance-quantization data obtained from a QPC formed in a high-mobility ($1,000-2,000$~m$^2/$Vs) 2DES~\cite{MuroPRB2016}. When the gate voltage is increased from the pinch-off voltage ($\simeq -1.9~$V), conductance $G$ increases in a stepwise manner in units of $2e^2/h$, where the factor 2 reflects the spin degeneracy. The conductance plateaus around $-1.8$ and $-1.65$~V indicate that each conduction channel is fully transmitted or reflected. Such conductance quantization, which is fully explained by Eq.~(\ref{Landauer}), was first reported in 1988~\cite{vanWeesPRL1988}. Today, one can observe more than 20 conductance steps ($\sim20\times 2e^2/h$) for a high-quality QPC~\cite{vanWeesPRL1988,RosslerNJP2011}. Immediately after the first QPC experiment~\cite{vanWeesPRL1988}, shot noise in a QPC was intensively studied both theoretically~\cite{LesovikJETP1989,ButtikerPRL1990,ButtikerPRB1992,MartinPRB1992} and experimentally~\cite{ReznikovPRL1995,KumarPRL1996,LiuNature1998,NakamuraPRB2009,MuroPRB2016}. Figure~\ref{QPC_Fano}(c) displays the shot-noise data, namely the Fano factor (circles) estimated from the bias $V$ dependence of the current noise [see Eq.~(\ref{ShotTheory})], observed in the QPC, the conductance $G$ of which is shown in Fig.~\ref{QPC_Fano}(b)~\cite{MuroPRB2016}. The experimental data agree very well with the theoretical curve (solid curve) calculated using Eq.~(\ref{FanoTheory}); for example, both experimental and theoretical values are zero at $2e^2/h$ and $4e^2/h$.
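The shape of this theoretical curve can be sketched in a few lines; the following is our own illustration of Eq.~(\ref{FanoTheory}), assuming spin-degenerate channels whose transmissions ramp sequentially from 0 to 1 as the QPC is opened.

\begin{verbatim}
import numpy as np

def fano(g, n_ch=12):
    # Transmissions of spin-degenerate channels opening one by one:
    # T_n ramps from 0 to 1 as g = G / (2e^2/h) passes n - 1.
    T = np.clip(g - np.arange(n_ch), 0.0, 1.0)
    return np.sum(T * (1.0 - T)) / np.sum(T)

for g in (0.5, 1.0, 1.5, 2.0):
    print(g, fano(g))   # F = 0.5, 0.0, ~0.17, 0.0
# The Fano factor vanishes on every conductance plateau, where all
# open channels are fully transmitted, as observed in the experiment.
\end{verbatim}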
The inset of Fig.~\ref{QPC_Fano}(c) shows the measured Fano factor over a wider range of $G$ (up to $\simeq 10\times 2e^2/h$), again demonstrating the agreement between the experiment and theory. These data indicate that the shot noise is generated only in channels with intermediate transmission probabilities ($0 < {\cal T}_n < 1$). It is important to note that at low temperatures, the electron occupation probability in the leads is either 0 or 1 at each energy (see Sect.~\ref{sec:LandauerFormula}). In this case, the impinging current does not fluctuate, and hence, the excess noise, or the shot noise, directly reflects the scattering process at the sample (here, a QPC)~\cite{OliverScience1999,BeenakkerPT2003}. Because the shot noise generated in a QPC is well explained by considering electron partitioning in each conduction channel, the stepwise conductance trace in Fig.~\ref{QPC_Fano}(b) and the shot-noise result in Fig.~\ref{QPC_Fano}(c) provide the same information. In contrast, below we introduce several experiments on two-channel transport, where the shot noise provides information different from the conductance. \subsection{Two-channel transport} Let us consider spin-dependent transport through the lowest one-dimensional subband in a QPC. Here, the transmission probabilities of spin-up and spin-down electrons are given by ${\cal{T}}_\uparrow$ and ${\cal{T}}_\downarrow$, respectively. The conductance $G$ through the QPC is described as \begin{equation} G = \frac{e^2}{h}({\cal{T}}_\uparrow+{\cal{T}}_\downarrow). \label{eq:spinpol_G} \end{equation} From Eq.~(\ref{FanoTheory}), the Fano factor $F_\textrm{sp}$ is written as \begin{equation} F_\textrm{sp} =\frac{(1-{\cal{T}}_\uparrow){\cal{T}}_\uparrow+(1-{\cal{T}}_\downarrow){\cal{T}}_\downarrow}{{\cal{T}}_\uparrow+{\cal{T}}_\downarrow}. \label{Fano_Spin} \end{equation} Because we can evaluate both ${\cal{T}}_\uparrow$ and ${\cal{T}}_\downarrow$ by solving these two equations, the combination of conductance ($G$) and shot-noise ($F_\textrm{sp}$) measurements enables a fully quantitative estimation of the transmission probabilities. The spin polarization $P$ of the transmitted current can be defined as $P\equiv \vert{\cal{T}}_\uparrow-{\cal{T}}_\downarrow\vert/({\cal{T}}_\uparrow+{\cal{T}}_\downarrow)$. When $P=0$, namely ${\cal{T}}_\uparrow = {\cal{T}}_\downarrow = {\cal{T}}_0$, we obtain $F_\textrm{sp} = 1-{\cal{T}}_0$ from Eq.~(\ref{Fano_Spin}). On the other hand, when $P>0$, we find \begin{equation} F_\textrm{sp}=\frac{(1-{\cal{T}}_\uparrow){\cal{T}}_\uparrow+(1-{\cal{T}}_\downarrow){\cal{T}}_\downarrow}{{\cal{T}}_\uparrow+{\cal{T}}_\downarrow}< 1-\frac{{\cal{T}}_\uparrow+{\cal{T}}_\downarrow}{2}. \end{equation} This relation shows that spin polarization always decreases the shot-noise intensity, as visually presented in Fig.~\ref{QPC_FanoSpin}. \begin{figure}[tbp] \center \includegraphics[width=7.5cm]{ShotReviewFig13.eps} \caption{(Color online) Spin polarization $P$ and the Fano factor $F$ as a function of the conductance $G$. Compare it with Fig.~\ref{QPC_KohdaNatComm}(a).} \label{QPC_FanoSpin} \end{figure}
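As a worked illustration of this inversion, the following minimal sketch solves Eqs.~(\ref{eq:spinpol_G}) and (\ref{Fano_Spin}) for hypothetical measured values of $G$ and $F_\textrm{sp}$ (the numbers are placeholders, not data from the experiments below).

\begin{verbatim}
import numpy as np

def transmissions(G, F):
    # With G in units of e^2/h, the sum s = T_up + T_down equals G,
    # and the Fano factor gives T_up^2 + T_down^2 = G * (1 - F).
    # (T_up, T_down) then follow as roots of x^2 - s*x + q = 0.
    s = G
    q = (s**2 - G * (1.0 - F)) / 2.0     # product T_up * T_down
    r = np.sqrt(max(s**2 - 4.0 * q, 0.0))
    return (s + r) / 2.0, (s - r) / 2.0

# Hypothetical measured values, for illustration only:
T_up, T_dn = transmissions(G=1.0, F=0.3)
P = abs(T_up - T_dn) / (T_up + T_dn)
print(T_up, T_dn, P)    # ~0.82, ~0.18, P ~ 0.63
\end{verbatim}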
\subsubsection{Spin polarization} \label{secInGaAs} One of the authors of this review performed a shot-noise measurement to evaluate the spin-polarized transport in a solid-state Stern-Gerlach-type experiment~\cite{KohdaNatComm2012}. The device was a QPC in a 2DES in an InGaAsP/InGaAs heterostructure, where the spin-orbit interaction is significant. In this device, the Rashba spin-orbit interaction emerges due to the potential modulation at the edges fabricated by chemical etching. Electrons propagating through the constriction undergo the spatial modulation of the effective magnetic field induced by the spin-orbit interaction, resulting in the separation of propagation trajectories between spin-up and spin-down electrons. Figure~\ref{QPC_KohdaNatComm} shows the results of measurements performed at a zero magnetic field at 4.2 K. The solid curve in Fig.~\ref{QPC_KohdaNatComm}(b) shows the QPC conductance $G$ as a function of the side-gate voltage ($V_{\textrm{SG}}$). The conductance shows a plateau at $\simeq 0.5(2e^2/h)$ between $V_{\textrm{SG}}=-3.15$ and $-3.25$~V, suggesting the lifting of the spin degeneracy at a zero magnetic field. Figure~\ref{QPC_KohdaNatComm}(a) plots the Fano factor $F$ evaluated by shot-noise measurements as a function of $G$. The measured $F$ is smaller than the theoretical curve calculated assuming spin degeneracy, namely $P=0$ (dashed line), and is close to the one assuming $P=1$ (dotted line). The observed Fano-factor reduction is the signature of spin polarization. When we solve Eqs.~(\ref{eq:spinpol_G}) and (\ref{Fano_Spin}) together using the measured $G$ and $F$ values, ${\cal{T}}_\downarrow$ and ${\cal{T}}_\uparrow$ are evaluated as shown in Fig.~\ref{QPC_KohdaNatComm}(b). They differ from each other below $G = 0.5(2e^2/h)$. Figure~\ref{QPC_KohdaNatComm}(c) summarizes the $V_{\textrm{SG}}$ dependence of $P$, where one observes $P \simeq 0.7$ at $0.5(2e^2/h)$ and its further increase at lower $G$. \begin{figure}[tbp] \center \includegraphics[width=8.5cm]{ShotReviewFig14.eps} \caption{(Color online) (a) Fano factor plotted as a function of conductance $G$ through an InGaAs-based QPC. Experimental results are marked with filled circles. The dashed blue curve presents the theoretical calculation assuming $P=0$, while the dotted red one is that assuming $P=1$. See also Fig.~\ref{QPC_FanoSpin}. (b) $G$ as a function of side gate voltage $V_{\textrm{SG}}$ (solid curve; right axis). Filled circles show the estimated ${\cal{T}}_\downarrow$ and ${\cal{T}}_\uparrow$ (left axis). (c) $V_{\textrm{SG}}$ dependence of $P$ ($=P_{\textrm{S}}$ in the figure). Figures are reprinted from Ref.~[\onlinecite{KohdaNatComm2012}]. {\copyright} (2012) The Author(s).} \label{QPC_KohdaNatComm} \end{figure} The theoretical simulation well supports the observed spin-polarized transport~\cite{KohdaNatComm2012}; it demonstrates that the narrowing of the transport channel enhances the reflection rate of the spin-down electrons more than that of the spin-up ones, resulting in the spin polarization of the transmitted current. The above results point to the potential of the Stern-Gerlach-type device as a spin-polarized-current source in spintronics applications, and at the same time, demonstrate the usefulness of combining conductance and shot-noise measurements for analyzing two-channel transport. \subsubsection{0.7 conductance anomaly} Since the mid-1990s, many experiments have reported a peculiar conductance behavior of QPCs fabricated in standard GaAs/AlGaAs heterostructures. An unexpected plateau-like structure appears below the first conductance plateau at $2e^2/h$, often near $0.7\times 2e^2/h$; therefore, the behavior is referred to as the ``0.7 (conductance) anomaly''. Various theoretical and experimental studies have attempted to find the origin of the 0.7 anomaly (e.g.
Refs.~[\onlinecite{IqbalNature2013}] and~[\onlinecite{BauerNature2013}]). However, even now, its origin and mechanism are not completely understood. Here, we introduce a few theoretical and experimental works related to the shot-noise studies (for details of the 0.7 anomaly, see the special section in \textit{Journal of Physics: Condensed Matter} published in 2008~\cite{PepperJPC2008}). In 1996, Thomas \textit{et al.} experimentally demonstrated that the 0.7 anomaly continuously changes to a spin-polarized conductance plateau at $e^2/h$ by applying a high in-plane magnetic field~\cite{ThomasPRL1996}. This observation suggests that the spontaneous spin polarization at a zero magnetic field is responsible for the 0.7 anomaly. Another important observation is the resemblance between the 0.7 anomaly and the Kondo effect observed in QDs~\cite{DiCarloPRL2006}. In contrast to QDs, a QPC does not have a well-defined localized state; however, theories have discussed the idea that spin-dependent localized states could appear even in a QPC~\cite{RejecNature2006} and that the Kondo effect via the localized state could be the origin of the 0.7 anomaly~\cite{CronenwettPRL2002}. Shot-noise studies have been performed to obtain more profound insight into this phenomenon~\cite{RochePRL2004,DiCarloPRL2006,NakamuraPRB2009}. The experiments have found that the 0.7 anomaly causes a reduction of the Fano factor, as illustrated in Fig.~\ref{QPC_FanoSpin}. One possible explanation for this observation is spin-polarized transport, as in the case of the Stern-Gerlach-type experiment (see the previous subsection). However, in contrast to the Stern-Gerlach-type experiment that can be understood within the single-particle picture, the 0.7 anomaly manifests the presence of a spin-related many-body effect and requires more careful analysis to identify its origin. We expect that recent progress in Kondo physics [see Sect.~\ref{Subsec:KondoNoise}] may provide a better understanding of the 0.7 anomaly. \subsubsection{Spin current} While we have discussed shot-noise measurements on spin-polarized transport in semiconductor devices, they have also been employed to evaluate spin-polarized transport in metallic devices. Such experiments are of particular importance for spintronics, where spin current, a flow of spin angular momentum, is the central issue. Here, we take Schottky's discussion in 1918~\cite{SchottkyAP1918} one step further. Because an electron carries charge and spin, the discrete nature of spin, as well as that of charge, may cause current noise. Therefore, it is natural to ask whether shot noise is generated when a tunnel barrier scatters spin current, as shown in Fig.~\ref{SpinCurrentBarrier}(a). \begin{figure}[!b] \center \includegraphics[width=7cm]{ShotReviewFig15.eps} \caption{(Color online) (a) Shot-noise generation by scattering of spin-up and spin-down charge currents at a tunnel barrier. (b) Scattering of a pure spin current.
In this case, although the net charge current flowing through the barrier is zero because $I_\uparrow=-I_\downarrow$, shot noise is generated proportionally to $|\langle I_\uparrow \rangle |+|\langle I_\downarrow \rangle |$.} \label{SpinCurrentBarrier} \end{figure} In the zero-temperature case where the Fano factor is one, shot noise generated at a tunnel barrier is described as $S=2e(|\langle I_\uparrow\rangle |+|\langle I_\downarrow\rangle |)$ within the single-particle picture, where $I_\uparrow$ ($I_\downarrow$) is the charge current of spin-up (spin-down) electrons [Fig.~\ref{SpinCurrentBarrier}(a)]. Charge and spin currents are defined as $I_\textrm{C}= I_\uparrow+ I_\downarrow$ and $I_\textrm{S}= I_\uparrow- I_\downarrow$, respectively. Suppose that a pure spin current impinges on the barrier ($I_\textrm{C}=0$ and $I_\textrm{S}>0$). In this case, although no net current flows through the barrier because $I_\uparrow=-I_\downarrow$, finite shot noise is generated in proportion to $|\langle I_\uparrow \rangle |+|\langle I_\downarrow \rangle |$ [Fig.~\ref{SpinCurrentBarrier}(b)]. This relation shows that the shot noise directly measures the magnitude of the spin current. Based on this idea, one of the authors of this review conducted an experiment to detect the shot noise associated with spin currents~\cite{ArakawaPRL2015}. In this experiment, a spin-polarized current ($I_\uparrow\neq I_\downarrow$) flowing in a non-magnetic metal was applied to a tunnel junction; the spin-polarized current was fed from a ferromagnetic metal through the other tunnel junction. Whereas the experiment was performed not for a pure spin current but for a spin-polarized charge current due to a technical issue, the spin-current-induced shot noise was successfully detected. While there have been numerous theoretical studies of spin-current noise since the early 2000s~\cite{MishchenkoPRB2003,BelzigPRB2004,LamacraftPRB2004,MeairPRB2011}, this experiment was, to the best of our knowledge, its first experimental demonstration. The spin-current detection by shot-noise measurement is promising for studying various spintronics issues, such as spin-transfer torque and thermal spin phenomena. \subsubsection{Edge mixing} Orbitals, like spins, sometimes lead to two-channel transport. Shot-noise measurements are helpful in investigating such two-orbital (or two-pseudo-spin) transport. Here, as an example, we introduce shot-noise measurements performed on graphene $pn$ junctions in QH regimes. Graphene is a typical two-dimensional system that behaves as a zero-gap semiconductor with Dirac-like linear band dispersion, where the polarity of charge carriers can be controlled by applying a gate voltage. When we prepare $p$- and $n$-type regions, where charge carriers are holes and electrons, respectively, in a single graphene device, a $pn$ junction is formed at their boundary. A graphene $pn$ junction has been extensively studied as a promising candidate for observing various intriguing phenomena, such as Klein tunneling~\cite{KatsnelsonNatPhys2006} and the Veselago lens~\cite{CheianovScience2007}. \begin{figure}[!b] \center \includegraphics[width=6cm]{ShotReviewFig16.eps} \caption{(Color online) (a) Schematic of QH edge channels in a unipolar graphene device. The edge channels are either fully transmitted or reflected at a boundary of different QH states. (b) Schematic of edge channels in a bipolar graphene device.
Because of the opposite chiralities in the $p$- and $n$-type regions, the edge channels copropagate along the junction and equilibrate with each other~\cite{AbaninScience2007}.} \label{Fig_GraphenePN} \end{figure} When graphene is in the QH regime, unique two-channel transport appears at the $pn$ junction, as demonstrated in both experiments~\cite{WilliamsScience2007, OzyilmazPRL2007} and theories~\cite{AbaninScience2007}. In a QH system, charge carriers propagate along unidirectional one-dimensional channels referred to as edge channels, as discussed in Sect.~\ref{subsec:qhe}. Here, we consider a graphene device, where two different QH regions (Landau-level filling factors $\nu_1$ and $\nu_2$) form a boundary at the center of a sample. In Figs.~\ref{Fig_GraphenePN}(a) and (b), the arrows schematically show the $n$- and $p$-type edge channels in samples with (a) unipolar ($nn$) and (b) bipolar ($pn$) junctions. In the former case, each edge channel is either fully transmitted or reflected at the junction and hence generates no shot noise. In the latter case, on the other hand, edge channels fed from the left and right reservoirs encounter each other at the junction bottom, reflecting the opposite chirality in the $p$- and $n$-type regions. The edge channels copropagate along the junction and then separate again at the junction top. The $pn$ junction mixes the potentials between the $p$- and $n$-type edge channels during the copropagation. Let us discuss the transport properties of graphene QH junctions in more detail. In the case of a unipolar junction between the $\nu_1$ and $\nu_2$ states, the two-terminal conductance $G_{\textrm{uni}}$ of the sample is given by \begin{equation} G_{\textrm{uni}}=\min(\vert \nu_1\vert, \vert\nu_2\vert)\frac{e^2}{h}, \label{Eq_GrapheneNNQH} \end{equation} where $\min(\vert \nu_1\vert, \vert\nu_2\vert)$ is the smaller of $\vert \nu_1\vert$ and $\vert\nu_2\vert$, which corresponds to the number of transmitted channels through the sample. On the other hand, in the case of a bipolar junction, the $\nu_1$ and $\nu_2$ edge channels are mixed at the junction, as shown in Fig.~\ref{Fig_GraphenePN}(b). If the charge excitations are evenly redistributed across all the copropagating channels, we can describe the transmission and reflection probabilities of the channels fed from the left contact as ${\cal{T}}_n = \vert \nu_2 \vert/N$ and ${\cal{R}}_n=\vert \nu_1 \vert/N$, respectively (here, $N = \vert \nu_1 \vert +\vert \nu_2 \vert$). In this case, $G_{\textrm{bi}}$ is expressed as \begin{equation} G_{\textrm{bi}}=\frac{e^2}{h}\sum_n^{\vert \nu_1 \vert}{\cal{T}}_n =\frac{\vert \nu_1 \vert \vert \nu_2 \vert}{\vert \nu_1 \vert +\vert \nu_2 \vert}\frac{e^2}{h}. \label{Eq_GraphenePNQH} \end{equation} Suppose that the edge mixing is caused by elastic charge-scattering processes between the copropagating channels. In this case, the $pn$ junction works as a beam splitter for charge carriers, like a standard QPC does in a GaAs/AlGaAs heterostructure. The Fano factor, which quantifies the shot-noise intensity generated at the junction, is described as~\cite{AbaninScience2007} \begin{equation} F=\frac{\vert \nu_1 \vert\vert \nu_2 \vert}{(\vert \nu_1 \vert+\vert \nu_2 \vert)^2}. \label{eq:fano_pn} \end{equation} For example, Eq.~(\ref{eq:fano_pn}) predicts $F = 1/4$ for $(\nu_1, \nu_2)=(\pm 2, \mp 2)$ and $F = 3/16$ for $(\nu_1, \nu_2)=(\pm 2, \mp 6), (\pm 6, \mp 2)$.
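These values, together with the corresponding conductances, can be checked in a few lines; a minimal sketch of Eqs.~(\ref{Eq_GraphenePNQH}) and (\ref{eq:fano_pn}):

\begin{verbatim}
def g_bi(nu1, nu2):
    # Bipolar-junction conductance in units of e^2/h.
    n1, n2 = abs(nu1), abs(nu2)
    return n1 * n2 / (n1 + n2)

def fano_pn(nu1, nu2):
    # Fano factor of the shot noise generated at the pn junction.
    n1, n2 = abs(nu1), abs(nu2)
    return n1 * n2 / (n1 + n2) ** 2

print(g_bi(2, -2), fano_pn(2, -2))   # 1.0 (R = h/e^2),    0.25
print(g_bi(2, -6), fano_pn(2, -6))   # 1.5 (R = 2h/3e^2),  0.1875
\end{verbatim}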
Equation~(\ref{eq:fano_pn}) contrasts with the case of a unipolar junction [Fig.~\ref{Fig_GraphenePN}(a)], where no carrier partitioning occurs to generate the shot noise. \begin{figure*}[!t] \center \includegraphics[width=12cm]{ShotReviewFig17.eps} \caption{(Color online) (a) Color plot of the two-terminal resistance as a function of $V_{\textrm{tg}}$ and $V_{\textrm{bg}}$ at 8 T. (b) Cross sections of the color plot at $V_{\textrm{bg}}=22, 16, 4$, and $-5$ V. (c) Measured shot noise $S_I$ as a function of $V_{\textrm{sd}}$ at 8 T at $(V_{\textrm{tg}}, V_{\textrm{bg}})=(2.5~\textrm{V}, 4~\textrm{V})$ and $(-0.5~\textrm{V}, 16~\textrm{V})$, which correspond to $(\nu_1, \nu_2)=(6, -2)$ and $(2, 2)$, respectively. The solid curve is the numerical fit. (d) $S_I$ as a function of $V_{\textrm{sd}}$ at 0 T under the same gate-voltage conditions as in (c). The solid curves are the numerical fits. Figures are reproduced from Ref.~[\onlinecite{MatsuoNatComm2015}]. {\copyright} (2015) The Author(s).} \label{Fig_GraphenePNExp} \end{figure*} Several experiments demonstrated that the two-terminal conductance of graphene $pn$ junctions is well explained by Eq.~(\ref{Eq_GraphenePNQH})~\cite{WilliamsScience2007,OzyilmazPRL2007,MatsuoSciRep2015}. Moreover, one of the authors of this review measured the shot noise in a narrow ($< 10~\mu$m) $pn$-junction device to confirm the relation in Eq.~(\ref{eq:fano_pn})~\cite{MatsuoNatComm2015}. In the shot-noise experiment, the QH junction was formed by applying a back-gate voltage ($V_{\textrm{bg}}$) to tune the carrier density over the whole graphene device and a top-gate voltage ($V_{\textrm{tg}}$) to modify the density in half the area of the device. Figure~\ref{Fig_GraphenePNExp}(a) shows a color plot of the measured two-terminal resistance $R$ as a function of $V_{\textrm{tg}}$ and $V_{\textrm{bg}}$ near the Dirac point ($V_{\textrm{tg}}\simeq 0$ V and $V_{\textrm{bg}}\simeq 10$ V), where the formation of $nn$, $pn$, $np$, and $pp$ junctions is observed. Figure~\ref{Fig_GraphenePNExp}(b) shows the resistance traces along the cross sections in Fig.~\ref{Fig_GraphenePNExp}(a) at $V_{\textrm{bg}}=22, 16, 4$, and $-5$ V. The observed quantized resistances at $R=h/2e^2$ and $h/6e^2$ in the $nn$ and $pp$ regimes demonstrate the formation of unipolar QH junctions [Fig.~\ref{Fig_GraphenePN}(a)]. On the other hand, the $R=h/e^2$, $\frac{2}{3}h/e^2$, and $\frac{1}{3}h/e^2$ plateaus in the $pn$ and $np$ regimes demonstrate the formation of bipolar QH junctions [Fig.~\ref{Fig_GraphenePN}(b)]. These results are well explained by Eqs.~(\ref{Eq_GrapheneNNQH}) and (\ref{Eq_GraphenePNQH})~\cite{WilliamsScience2007, OzyilmazPRL2007,AbaninScience2007}. Figure~\ref{Fig_GraphenePNExp}(c) shows the current-noise data measured for the unipolar $(\nu_1, \nu_2)=(2, 2)$ [$(V_\textrm{tg}, V_\textrm{bg}) = (-0.5~\textrm{V}, 16~\textrm{V})$, $R=\frac{1}{2}\frac{h}{e^2}$] junction and the bipolar $(\nu_1, \nu_2)=(6, -2)$ [$(V_\textrm{tg}, V_\textrm{bg}) = (2.5~\textrm{V}, 4~\textrm{V})$, $R=\frac{2}{3}\frac{h}{e^2}$] one. Shot noise is absent in the unipolar case, which agrees with the above explanation for Fig.~\ref{Fig_GraphenePN}(a). The absence of shot noise was observed at $(\nu_1, \nu_2)=(-2, -2)$ and $(2, 6)$, too. In contrast, finite shot noise is observed in the bipolar case. The Fano factor evaluated by the numerical fit is $0.18\pm0.01$, close to the theoretical value of $3/16=0.1875$ [see Eq.~(\ref{eq:fano_pn})]. We also obtained $F=0.18\pm 0.01$ at $(\nu_1, \nu_2)=(2, -6)$.
These observations show that the edge mixing in the narrow $pn$ junction can be regarded as an elastic charge-scattering, or ``beam-splitting,'' process between the channels. While it is difficult to fabricate a QPC in graphene, a zero-gap semiconductor with linear dispersion, the above results indicate that a $pn$ junction works as a beam splitter, which is a fundamental building block for fermion quantum optics in condensed matter (for examples, see Refs.~[\onlinecite{WeiSciAdv2017}] and~[\onlinecite{MorikawaAPL2015}]). Note that entirely different experimental results are obtained at a zero magnetic field, namely in non-quantum-Hall systems. Figure~\ref{Fig_GraphenePNExp}(d) shows the shot-noise data under the same gate-voltage conditions as in Fig.~\ref{Fig_GraphenePNExp}(c). Finite shot-noise generation is observed in both unipolar and bipolar junctions, indicating the difference from the quantum-Hall-junction case. While theories predict $F=1-1/\sqrt{2}\sim 0.29$~\cite{CheianovPRB2006} at a zero field, we observed $F\sim 0.5$ due to the influence of disorder in the sample~\cite{LewenkopfPRB2008}. A closely related study of a graphene QH $pn$ junction was reported by Kumada \textit{et al.} at the same time~\cite{KumadaNatComm2015}. They measured the $pn$-junction-width dependence of the shot-noise intensity and observed a monotonic decrease with increasing junction width. This observation indicates that the copropagating channels relax to the thermal equilibrium state after a long propagation distance, and in the long-channel limit, the $pn$ junction behaves as a floating ohmic contact. While we have discussed junction devices in graphene so far, here, we briefly mention the shot noise in a uniform graphene device at a zero magnetic field. Theory predicts shot-noise generation in the ballistic regime even when the graphene is ideally homogeneous, and the Fano factor at the Dirac point is $1/3$ in a short and wide graphene strip~\cite{TworzydloPRL2006}. Interestingly, the Fano factor of $F=1/3$ equals that of disordered metals in the classical diffusive regime. The $F=1/3$ shot noise in graphene has been examined in several experiments~\cite{DanneauPRL2008,DiCarloPRL2008,FayPRB2011,TanPRB2013,SahuPRB2019}. \subsection{Multiple-parameter case} \label{subsec:multichannel} In the previous subsection, we discussed transport phenomena dominated by two parameters: the transmission probabilities of spin-up (${\cal{T}}_\uparrow$) and spin-down (${\cal{T}}_\downarrow$) electrons. This subsection shows how shot-noise measurements have been applied to cases of three or more parameters, for example, in the quantum-Hall-effect breakdown regime (Sect.~\ref{subsub:QHEBD}) or tunnel-junction devices with multiple tunneling paths (Sect.~\ref{subsub:MTJ}). Generally, transport properties in such nonequilibrium and/or large systems are difficult to evaluate quantitatively. However, shot-noise measurements sometimes provide critical information to solve such complicated problems. \subsubsection{Breakdown of the QH effect} \label{subsub:QHEBD} Here, we introduce experiments in which three different measurements---conductance, shot-noise, and resistively-detected nuclear-magnetic-resonance (RD-NMR) measurements---were performed to investigate the breakdown of the QH effect, a typical non-equilibrium phenomenon in mesoscopic systems~\cite{ChidaPRB2012,HashisakaPRB2020}.
\begin{figure*}[tb] \center \includegraphics[width=15cm]{ShotReviewFig18.eps} \caption{(Color online) (a) Schematic of a spin-polarized $\nu=1$ QH state locally formed at a narrow constriction in a bulk $\nu=2$ QH system. The $\nu=1$ and $\nu=2$ edge channels are represented by solid red and blue arrows, respectively. Possible spin-conserved inter-channel tunnelings are shown by the dashed red and blue arrows. (b) Differential conductance $g$ measured as a function of $V_{\rm{g}}$. The conductance plateau at $g = e^2/h$ observed at $V_{\rm{in}} = 0$ breaks down at finite bias. (c) $V_{\rm{in}}$ dependence of $\alpha_{\rm{shot}}$ and $P_{\rm{NMR}}$ evaluated by shot-noise measurement and RD-NMR measurement, respectively, at $V_{\rm{g}}=-0.96$ V. $\alpha_{\rm{shot}}$ starts to decrease from $\alpha_{\rm{shot}} = 1$ at $V_{\rm{in}} = V_{\rm{th1}}$ and saturates at $\alpha_{\rm{shot}} \simeq 0.9$ above $V_{\rm{in}} = V_{\rm{th2}}$, while $P_{\rm{NMR}}$ monotonically decreases with increasing $V_{\rm{in}}$. (d) $V_{\rm{in}}$ dependence of Fano factor $F_C$ estimated by solving equations for the measured conductance, shot noise, and Knight shift of NMR. Reproduced figure with permission from Ref.~[\onlinecite{HashisakaPRB2020}]. {\copyright} (2020) American Physical Society.} \label{fig4_1} \end{figure*} As discussed in Sect.~\ref{subsec:qhe}, the QH effect is a phenomenon in which the Hall conductance of a 2DES is quantized in units of $e^2/h$ under a high perpendicular magnetic field. In a QH regime, the bulk region of a 2DES becomes insulating (incompressible) due to the complete occupation of the Landau levels below the Fermi energy, and chiral one-dimensional channels are formed at the edge of the 2DES. Accordingly, the longitudinal resistance becomes zero, and the Hall conductance is quantized, reflecting the absence of electron backscattering along the edge channels. When one applies a high source-drain voltage to a QH system, the Hall conductance deviates from the quantized value, while the longitudinal resistance takes on a finite value. This nonlinear behavior is referred to as quantum-Hall-effect (QHE) breakdown~\cite{NachtweiPhysicaE1999}. It is well known that the quantized Hall resistance of the QH state is used as a resistance standard, and an accurate Hall-resistance measurement should apply a current as high as possible within the linear-response regime. In this context, it is essential to understand the QHE-breakdown mechanism, which is the cause of the nonlinear behavior at finite bias. Many experiments have been conducted on the breakdown mechanism in macroscopic Hall-bar or Corbino-type samples of several $\mu$m to mm in size~\cite{NachtweiPhysicaE1999}. Current noise in such a macroscopic sample has also been measured to observe a precursory phenomenon of the QHE breakdown, namely the generation of finite excess noise in the linear-response regime~\cite{ChidaPRB2013}. The QHE breakdown has also been studied as a representative nonequilibrium phenomenon in a mesoscopic system~\cite{NachtweiPhysicaE1999}. Notably, the breakdown of a spin-polarized QH state has come under scrutiny as a possible source of nuclear spin polarization in GaAs-based heterostructures. In this context, breakdown phenomena in locally-formed mesoscopic QH systems have often been examined in experiments~\cite{WaldPRL1994,DixonPRB1997,YusaNature2005,MasubuchiAPL2006,CorcolesPRB2009,ChidaPRB2012,HennelPRL2016,FauziPRB2017,HashisakaPRB2020}.
Current-noise measurements are even more potent for investigating such small systems than they are for macroscopic systems. Below we discuss the QHE breakdown of a local $\nu=1$ system formed in a bulk $\nu=2$ system~\cite{ChidaPRB2012,HashisakaPRB2020}. Figure~\ref{fig4_1}(a) shows a schematic of such a local $\nu=1$ system. When one applies a negative split-gate voltage $V_{\rm{g}}$ to form a narrow constriction in the $\nu=2$ system, the zero-bias conductance through the constriction varies as a function of $V_{\rm{g}}$, as shown by the red trace in Fig.~\ref{fig4_1}(b). The conductance plateau at $e^2/h$ ($-1.0~{\rm{V}} < V_{\rm{g}} < -0.7~{\rm{V}}$) indicates the formation of the local $\nu=1$ state due to the decrease in electron density in the constriction. When a high source-drain bias $V_{\rm{in}}$ is applied, the transmitted current varies nonlinearly, breaking down the conductance plateau [see the green and blue traces in Fig.~\ref{fig4_1}(b)]. Intuitively, the most likely mechanism for such a nonlinear behavior is the spin-conserving tunneling of spin-down electrons between the $\nu=2$ edge channels or that of spin-up electrons between the $\nu=1$ channels [schematically shown in Fig.~\ref{fig4_1}(a)]. Here, we formulate the transmitted current $I_t$ across the constriction as $I_t = I_{\uparrow}{\cal{T}}_{\uparrow}+I_{\downarrow}{\cal{T}}_{\downarrow}$, where $I_{\uparrow(\downarrow)}$ is the spin-up (spin-down) current impinging on the constriction and ${\cal{T}}_{\uparrow(\downarrow)}$ is the transmission probability of the spin-up (spin-down) electrons. If we assume that the inter-channel tunneling current is carried by stochastic electron tunneling, i.e., with no correlation [see Fig.~\ref{fig4_1}(a)], we can evaluate ${\cal{T}}_{\uparrow}$ and ${\cal{T}}_{\downarrow}$ by solving Eqs.~(\ref{eq:spinpol_G}) and (\ref{Fano_Spin}) together and estimate the spin polarization $\alpha_{\rm{shot}} \equiv ({\cal{T}}_{\uparrow}-{\cal{T}}_{\downarrow})/({\cal{T}}_{\uparrow}+{\cal{T}}_{\downarrow})$. Open green circles in Fig.~\ref{fig4_1}(c) show the $V_{\rm{in}}$ dependence of $\alpha_{\rm{shot}}$. In the linear-response regime at low bias ($V_{\rm{in}}<V_{\rm{th1}}$), we observe $\alpha_{\rm{shot}}=1$ because spin-up electrons are fully transmitted through the constriction (${\cal{T}}_{\uparrow}=1$) while spin-down electrons are completely reflected (${\cal{T}}_{\downarrow}=0$). In the nonlinear regime ($V_{\rm{in}}>V_{\rm{th1}}$), $\alpha_{\rm{shot}}$ decreases with increasing $V_{\rm{in}}$, and when $V_{\rm{in}}$ is further increased ($V_{\rm{in}}>V_{\rm{th2}}$), $\alpha_{\rm{shot}}$ saturates at about 0.9. The observed decrease in $\alpha_{\rm{shot}}$ in the first breakdown regime ($V_{\rm{th1}}<V_{\rm{in}}<V_{\rm{th2}}$) is interpreted as the result of the inter-channel electron tunneling [see Fig.~\ref{fig4_1}(a)]. Tunneling of spin-up electrons decreases ${\cal{T}}_{\uparrow}$ from 1, while that of spin-down electrons increases ${\cal{T}}_{\downarrow}$ from 0~\cite{ChidaPRB2012}. In the second breakdown regime ($V_{\rm{in}}>V_{\rm{th2}}$), on the other hand, the saturation of $\alpha_{\rm{shot}}$ suggests that a different mechanism causes the nonlinear behavior. Figure~\ref{fig4_1}(c) compares $\alpha_{\rm{shot}}$ with the spin polarization in the constriction $P_{\rm{NMR}} \equiv (n_{\uparrow}-n_{\downarrow})/(n_{\uparrow}+n_{\downarrow})$, where $n_{\uparrow}$ ($n_{\downarrow}$) is the spin-up (spin-down) electron density, evaluated from the Knight shift of NMR~\cite{HashisakaPRB2020}.
One observes that $P_{\rm{NMR}}$ monotonically decreases with increasing $V_{\rm{in}}$ over the entire range. This result indicates that the saturation of $\alpha_{\rm{shot}}$ reflects a mechanism different from the decrease in the spin polarization, that is, the breakdown of the incompressibility of the local $\nu=1$ state. In the second breakdown regime, spin-down electrons frequently tunnel through the local $\nu=1$ region, leading to a decrease in $P_{\rm{NMR}}$ and the resultant suppression of the exchange energy. Accordingly, the spin gap in the constriction closes, and the stochastic electron-tunneling picture breaks down, causing the current noise to deviate from the theoretical shot-noise value [Eq.~(\ref{ShotTheory})]. This scenario was confirmed by solving together the three independent equations for the experimental data, i.e., the dc conductance, the shot noise, and the NMR Knight shift. The solution indicates that the Fano factor $F_C$ of the shot noise monotonically decreases from $F_C \simeq 1$ to 1/3 with increasing $V_{\rm{in}}$, as shown in Fig.~\ref{fig4_1}(d). The value of $F_C \simeq 1/3$ suggests that a classical diffusive conductor~\cite{BeenakkerPRB1992,NagaevPRB1995,KozubPRB1995,OppenPRB1997,SteinbachPRL1996} or a local $\nu=1/3$ fractional QH state~\cite{RoddaroPRL2003,RoddaroPRL2004,HashisakaPRL2015} is formed in the second nonlinear regime. Although the electron dynamics in this regime is still unclear, the experimental results unambiguously signal the two-step breakdown mechanism, that is, the electron tunneling through the local $\nu=1$ state in the first step and the breakdown of the incompressibility of the $\nu=1$ state in the second step. Here, we again emphasize that combining the three measurement techniques enables us to identify the two-step QHE breakdown. The above experiment clearly indicates that current-noise measurements provide essential information for understanding complicated nonlinear phenomena in nonequilibrium systems. Shot-noise measurements have also served as efficient probes for QHE breakdown in other experiments performed on GaAs/AlGaAs heterostructures~\cite{ChidaPRB2014,HataJPCM2016} and graphene~\cite{YangPRL2018,LaitinenJLTP2018}, where collective excitations, referred to as magneto-excitons, play an important role in the breakdown mechanism~\cite{YangPRL2018}. \subsubsection{Coherent tunneling} \label{subsub:MTJ} A tunnel junction composed of a thin insulator layer between metals is a representative example of multichannel systems. In contrast to a QPC, where the charge current flows through only a few conduction channels, a large number of channels with small transmission probabilities carry the current through a conventional tunnel junction. This has been confirmed by shot-noise measurements demonstrating the Poissonian Fano factor $F=1$. On the other hand, ``coherent tunneling'' through magnetic tunnel junctions (MTJs)---a highly transmissive tunneling process conserving all of the energy, momentum, and spin---identified in dc transport measurements, calls for shot-noise studies to confirm the highly transmissive nature of the tunneling process. Here, we introduce a shot-noise measurement performed on MTJs showing coherent tunneling. An MTJ is a junction consisting of a tunnel barrier between ferromagnetic metal layers. The tunneling resistance depends on whether the configuration of the magnetization directions is parallel or antiparallel.
The resistance in the former case is lower than that in the latter one, as schematically shown in Figs.~\ref{MTJ_Arakawa}(a) and (b). The magnetization-configuration dependence of resistance, referred to as the tunneling magnetoresistance (TMR) effect, is a central topic in spintronics. \begin{figure}[tbp] \center \includegraphics[width=8.5cm]{ShotReviewFig19.eps} \caption{(Color online) Schematic of a CoFeB/MgO/CoFeB MTJ (a) in the parallel (P) configuration and (b) in the antiparallel (AP) configuration. (c) Source-drain bias $V_{\rm{sd}}$ dependence of $dV/dI$ (solid mark, right axis) and $S_I$ (open mark, left axis) in the P configuration. The solid curve fits the current-noise data with $F=0.91 \pm 0.01$, showing deviation from the dashed line assuming $F=1.0$. (d) Enlarged view of the part of (c) surrounded by a dotted-dashed rectangle, showing that the experimental result clearly deviates from the $F=1.0$ case. (e) and (f) are the counterparts for the AP configuration of Figs.~\ref{MTJ_Arakawa}(c) and (d), respectively. The shot noise fits well with the curve assuming $F=1.0$. Reprinted from Ref.~[\onlinecite{ArakawaAPL2011}], with the permission of AIP Publishing.} \label{MTJ_Arakawa} \end{figure} Compared with an MTJ with an amorphous AlO$_x$ barrier~\cite{GuerreroPRL2006,ScolaAPL2007,GuerreroAPL2007,CascalesPRL2012}, an MTJ composed of a crystallized magnesium-oxide (MgO) barrier shows a huge magnetoresistance, exceeding 1,000\%~\cite{YuasaNatMat2004,ParkinNatMat2004,YuasaJPSJ2008}. Theory explains that the presence of the coherent-tunneling process only in the parallel configuration is responsible for the huge magnetoresistance~\cite{ButlerPRB2001,MathonPRB2001}. Shot-noise measurements performed on MgO-based MTJ devices provide evidence of coherent tunneling~\cite{SekiguchiAPL2010,ArakawaAPL2011}. Figures~\ref{MTJ_Arakawa}(c) and (e) show the results of shot-noise measurements in the parallel and antiparallel configurations, respectively (MgO layer thickness of 1.05~nm). The solid curves are fits to the experimental data using Eq.~(\ref{ShotTheory}). Figures~\ref{MTJ_Arakawa}(d) and (f) present magnified views of a part (surrounded by a dotted-dashed rectangle) of Figs.~\ref{MTJ_Arakawa}(c) and (e), respectively. Figures~\ref{MTJ_Arakawa}(e) and (f) show that $F$ is very close to 1 ($F=0.98 \pm 0.01$) in the antiparallel configuration, indicating that the Schottky-type tunneling carries the current; namely, all the tunneling paths have small transmission probabilities (${\cal T}_n \ll 1$). On the other hand, the Fano factor is $F=0.91 \pm 0.01$ in the parallel configuration, as seen in Fig.~\ref{MTJ_Arakawa}(d). The decrease in $F$ suggests the presence of highly transmissive paths due to the coherent tunneling~\cite{ButlerPRB2001,MathonPRB2001}. A first-principles calculation for a realistic MgO barrier quantitatively explains the observed shot-noise reduction~\cite{LiuPRB2012}. After the MgO-based MTJ experiments, a similar experiment was performed on an epitaxial-spinel-barrier (MgAl$_2$O$_4$) junction, which also showed the presence of coherent tunneling~\cite{TanakaAPEX2012}. \subsubsection{Atomic and single-molecule junctions} \label{subsec:molecularjunction} Shot-noise measurements have also been performed to investigate atomic or single-molecule junctions that show conductance quantization~\cite{AgraitPR2003}.
Such junctions are often fabricated using mechanically controllable break-junctions (MCBJs), which enable us to form an extremely small gap between two metal electrodes and hold atoms or molecules in the gap. Various intriguing phenomena appear in such a junction, depending on the transport properties of both the held atoms or molecules and the metal electrodes. For example, Cron \textit{et al.} measured charge transport through an aluminum MCBJ holding a few aluminum atoms and observed multiple Andreev reflections at the junction~\cite{CronPRL2001}. The multiple Andreev reflections result in rich features in the current-voltage ($IV$) characteristics, from which Cron \textit{et al.} extracted the entire set of transmission probabilities ${\cal{T}}_n$, which are referred to as mesoscopic PIN (personal-identification-number) codes. The experiment measured the shot noise to evaluate the effective charge ($2e, 4e, \ldots$) associated with the multiple Andreev reflections. Shot-noise measurements have also been performed on other atomic or molecular junctions. For example, Fig.~\ref{AtomicContact_Tewari} shows the Fano factors measured for gold or platinum atomic junctions~\cite{TewariRSI2017}. The data obtained from 200 different MCBJs are scattered close to the theoretical shot-noise curve (see also Fig.~\ref{QPC_Fano}), manifesting the appearance of quantized channels in such atomic contacts. \begin{figure}[!t] \center \includegraphics[width=8cm]{ShotReviewFig20.eps} \caption{(Color online) Fano factor of 200 different Au or Pt MCBJs. Reprinted from Ref.~[\onlinecite{TewariRSI2017}], with the permission of AIP Publishing.} \label{AtomicContact_Tewari} \end{figure} The coupling of an electronic system with other degrees of freedom, such as phonons (or vibration modes) of MCBJs, can be probed by shot-noise measurements~\cite{TalPRL2008,KumarRPL2012,ChenSR2014}. For such measurements, high-stability and high-conductivity molecular junctions, such as the benzene molecular junction~\cite{KiguchiPRL2008}, are fascinating targets. Another direction of the shot-noise study of atomic junctions is to combine it with scanning tunneling microscopy (STM)~\cite{MasseeRSI2018}. \subsubsection{Quantum dots} \label{subsub:QD} Let us consider electron transport through a QD connected to metallic leads. When the capacitance $C$ between the QD and the environment, e.g., leads and gate electrodes, is small, the energy $e^2/2C$ required to add one electron to the QD can be larger than the thermal energy $k_\textrm{B}T_\textrm{e}$, where $T_\textrm{e}$ is the electron temperature. In this case, the number of electrons in the QD changes one by one as a function of the applied gate voltage (Coulomb blockade), and finite conductance through the system is observed when the energy level of the QD and the chemical potential of the leads coincide (Coulomb oscillation). Furthermore, when the QD is as small as the de Broglie wavelength of electrons, the separation between discrete energy levels exceeds $k_\textrm{B}T_\textrm{e}$. In this case, electron transport occurs through each discrete level. One may expect that Coulomb repulsion in a QD always suppresses the shot-noise intensity to be sub-Poissonian ($F<1$), as the Pauli exclusion principle does in a QPC. Indeed, sub-Poissonian shot noise was observed in the single-electron tunneling regime~\cite{BirkPRL1995}.
However, in practice, the shot noise generated in a QD is sometimes super-Poissonian ($F>1$)~\cite{GustavssonPRL2006,OnacPRL2006,ZhangPRL2007,kiesslichPRL2007,FrickePRB2007,ZarchinPRL2007,OkazakiPRB2013,UbbelohdePRB2013,HarabulaPRB2018,SeoPRL2018}, indicating that electrons are ``bunched'' when they transmit through a QD. One of the mechanisms that enhance the shot noise is a non-Markovian process in a QD~\cite{SukhorukovPRB2001,BelzigPRB2005,ThielmannPRL2005}. For example, let us consider a situation where multiple discrete levels exist in the energy window in a voltage-biased QD. When an electron stays in one of the levels, electrons cannot use the other levels to pass through the QD due to the Coulomb blockade. Suppose that the dwell times of the levels differ. When a long-dwell-time level traps an electron, electron transport is suppressed; otherwise, the current is larger than its time average. In this way, transmitted electrons are bunched in the time domain. Another mechanism is cotunneling, where multiple electrons are involved in a tunneling process. As a final note, the mechanism of electron transport through a QD is much simpler when only one discrete level contributes to it. This situation is seen, for example, in a small QD fabricated in a carbon nanotube, where the energy separation between discrete levels is large. In this case, the shot noise generated in the QD is well explained by the standard shot-noise formula $S = 2e\vert\langle I \rangle\vert \left(1-{\cal T}\right)$ [see Figs.~\ref{FerrierNatPhysFig2}(d) and~\ref{FerrierNatPhysFig2noise}(a)]~\cite{FerrierNatPhys2016}. \subsection{Fermion quantum optics} \label{sec:fermion_optics} The factor $f_\alpha(\varepsilon)[1-f_\beta(\varepsilon)]$ in Eq.~(\ref{NoiseSingleChannel}) reflects the Pauli exclusion principle of electrons; in the experiments presented above, the fermionic nature of electrons manifests itself in this factor. In contrast, in the research field referred to as ``fermion quantum optics,'' the fermionic nature is observed more directly~\cite{ButtikerScience1999}. Let us consider a simple example where each of two particles, A and B, randomly takes one of two states, $\vert 1\rangle$ or $\vert 2\rangle$. In this case, the possible states are the following four: $\vert 1\rangle_{\rm{A}} \vert 1\rangle_{\rm{B}}, \vert 2\rangle_{\rm{A}}\vert 2\rangle_{\rm{B}}, \vert 1\rangle_{\rm{A}}\vert 2\rangle_{\rm{B}},$ and $\vert 2\rangle_{\rm{A}}\vert 1\rangle_{\rm{B}}$. When the two particles are distinguishable, they take the state $\vert 1\rangle_{\rm{A}} \vert 1\rangle_{\rm{B}}$ ($\vert 2\rangle_{\rm{A}}\vert 2\rangle_{\rm{B}}$) with the probability $P_{\rm{11}}(P_{\rm{22}})=25\%$, independently of their quantum statistical nature. When they are indistinguishable, in contrast, the probability depends on the quantum statistics; the two particles never take one state together in the case of fermions, while they tend to take the same state in the case of bosons. Thus, compared with classical particles, fermions avoid each other (antibunching), while bosons tend to bunch up (bunching). The quantum statistical nature of particles has a vital influence on the shot-noise generation in their scattering processes. \begin{figure}[tb] \center \includegraphics[width=6cm]{ShotReviewFig21.eps} \caption{(Color online) Schematic of exchange-interference experiments: (a) Hanbury-Brown-Twiss experiment and (b) collision experiment.} \label{2typeexp} \end{figure} Bosonic bunching was first observed in 1954 by Hanbury Brown and Twiss.
They estimated the angular diameter of stars by measuring the intensity correlation of light~\cite{HanburyBrown1954,HanburyBrownNature1956}. Purcell interpreted the experimental result as reflecting the bosonic bunching of photons~\cite{PurcellNature1956}. Since the development of the laser, the Hanbury-Brown-Twiss (HBT) setup [Fig.~\ref{2typeexp}(a)] has been widely examined in quantum optics. The electron-collision experiment in 1998~\cite{LiuNature1998} and the HBT interference experiment in 1999~\cite{HennyScience1999,OliverScience1999} are well-known early fermion-quantum-optics experiments. In the former, electrons randomly ejected from two different sources, 1 and 2, sometimes collide at a beam splitter, as shown in Fig.~\ref{2typeexp}(b). The collisions, which deterministically output one electron each to the two exits, decrease the number of random scattering events at the beam splitter and thus suppress shot-noise generation. Liu {\it{et al}}. observed shot-noise suppression using a beam splitter fabricated in a 2DES in a GaAs/AlGaAs heterostructure~\cite{LiuNature1998}. In the latter, Henny {\it{et al}}. demonstrated the Fermi statistics of electrons in an HBT experiment~\cite{HennyScience1999}. They prepared the HBT setup shown in Fig.~\ref{2typeexp}(a) using a quantum Hall (QH) device and observed negative current-noise cross-correlation reflecting the fermionic nature of electrons. On the other hand, Oliver {\it{et al}}. measured the cross-correlation between the two outputs of a beam splitter in the time domain~\cite{OliverScience1999}. While these HBT experiments were performed using GaAs/AlGaAs semiconductor devices, similar experiments were later conducted using graphene~\cite{TanSciRep2018,EloPRB2019} and free electrons in a vacuum~\cite{KieselNature2002}. \begin{figure}[tb] \center \includegraphics[width=7cm]{ShotReviewFig22.eps} \caption{(Color online) (a) Schematic of a QD single-electron source. An electron and a hole are ejected one by one from a QD by applying square-wave voltage pulses $V_{\rm{exc}}$ to the gate electrode. Reprinted figure with permission from Ref.~[\onlinecite{FeveScience2007}] Copyright (2007) by American Association for the Advancement of Science. (b) Scanning electron micrograph of an electron-collision device fabricated on an AlGaAs/GaAs 2DES. Electrons ejected from two single-electron sources collide at a beam splitter. (c) Current noise generated at a beam splitter measured as a function of the time delay $\tau$ between the electron ejections. Suppression of the noise at $\tau \simeq 0$ reflects antibunching of electrons due to the Pauli exclusion principle. Panels (b) and (c) are reprinted with permission from Ref.~[\onlinecite{BocquillonScience2013}]. {\copyright} (2013) American Association for the Advancement of Science.} \label{fig4_x3} \end{figure} Another well-known example of a fermion-quantum-optics experiment is Mach-Zehnder interferometry using QH edge channels~\cite{JiNature2003}. This experiment has confirmed the long coherence length of electron waves in a solid-state device and has stimulated various studies on electron-wave interferometry. Recent experiments have demonstrated coherent electron transport over a long distance of 100~$\mu$m~\cite{DuprezPRX2019}. A significant example of current-noise studies on such interferometers is the one by Neder {\it{et al.,}} who observed exchange interference in a two-particle interferometer~\cite{NederNature2007}.
Their study is based on a theoretical proposal of demonstrating electron entanglement in a solid-state device by observing a violation of the Bell inequality~\cite{SamuelssonPRL2004}. These experiments may lead to unique developments in fermion quantum optics beyond a mere analogy with quantum optics because electronic systems often produce peculiar quantum many-body states. A recent remarkable example is the demonstration of anyonic statistics of fractionally charged quasiparticles in fractional QH states (for details, see Sect.~\ref{sec:anyonic_statistics})~\cite{BartolomeiScience2020}. Whereas the above experiments have examined the fermionic nature of electrons by applying a direct current to a mesoscopic device, recent experiments using high-speed electronics have succeeded in observing scattering processes of individual electrons. One of the core technologies in such experiments is a single-electron source, of which several types have been reported~\cite{BauerleRPP2018,FeveScience2007,MaireAPL2008,UbbelohdeNNANO2015,DuboisNature2013,FletcherNatCommun2019}. Since shot noise is generated due to the charge discreteness, the shot-noise measurement plays an essential role in evaluating these single-electron sources. Here, we present two experiments demonstrating single-electron sources, one using a quantum-dot device~\cite{FeveScience2007,BocquillonScience2013} and another using a Lorentzian electron wave packet~\cite{DuboisNature2013}. \begin{figure}[tb] \center \includegraphics[width=7cm]{ShotReviewFig23.eps} \caption{(Color online) (a) Schematic of a Leviton-collision experiment. (b) Current noise observed as a function of time delay. The noise suppression at $\tau /T=0$ reflects the HOM interference of Levitons. Reprinted with permission from Ref.~[\onlinecite{DuboisNature2013}]. {\copyright} (2013) Springer Nature.} \label{fig4_x4} \end{figure} Figure~\ref{fig4_x3}(a) shows a schematic of a single-electron source using a QD~\cite{FeveScience2007}. A single electron (hole) is ejected from a QD into the lead when a negative (positive) gate-voltage step is applied to the QD to control the number of electrons. Although the ejected electron interacts with electrons in the lead below the Fermi energy to excite many electron-hole pairs after a long propagation time, it propagates coherently within a short time. Figure~\ref{fig4_x3}(b) shows a schematic of an electron-collision experiment using two quantum-dot single-electron sources~\cite{BocquillonScience2013}. When two electrons are incident on the central QPC simultaneously, they collide with each other, causing exchange interference. Figure~\ref{fig4_x3}(c) presents the measured current-noise cross-correlation between the two outputs from the QPC as a function of the time difference $\tau$ between the electron ejections. One observes a suppression of the cross-correlation at $\tau \simeq 0$~ps, which reflects the Pauli exclusion principle of electrons. This experiment can be regarded as a fermionic version of the Hong-Ou-Mandel (HOM) coincidence measurement~\cite{HongPRL1987}. The fermionic HOM interference effect has also been observed in a Leviton-collision experiment. A Lorentzian voltage pulse can excite a collective excitation of electrons without holes. Levitov {\it{et al}}. proposed that a minimal charge excitation, referred to as a Leviton, transferring an elementary charge is possible by controlling the Lorentzian pulse size~\cite{LevitovJMP1996,IvanovPRB1997,KeelingPRL2006}.
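As a quick consistency check (our own sketch in reduced units, not a calculation from the cited works), one can verify that a Lorentzian pulse of the form $eV(t)=2\hbar w/(t^2+w^2)$ drives exactly one elementary charge through a single spin-resolved channel of unit transmission, since $q=(e^2/h)\int V(t)\,dt=e$:

\begin{verbatim}
# Charge carried by a Lorentzian pulse eV(t) = 2*hbar*w / (t**2 + w**2)
# through one channel with transmission 1: q = (e**2/h) * integral V dt.
import math
from scipy.integrate import quad

hbar, e = 1.0, 1.0                      # reduced units
h = 2 * math.pi * hbar
w = 1.0                                 # pulse width (arbitrary)
V = lambda t: (2 * hbar / e) * w / (t**2 + w**2)
integral, _ = quad(V, -math.inf, math.inf)
print(e**2 * integral / h / e)          # -> 1.0 electron per pulse
\end{verbatim}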
One can expect to observe the exchange interference of two electrons when two Levitons collide with each other at a QPC [Fig.~\ref{fig4_x4}(a)]. Figure~\ref{fig4_x4}(b) demonstrates the current-noise cross-correlation measured as a function of time delay $\tau$ normalized by the Leviton-ejection period $T$. The measured cross-correlation agrees well with the theoretical curve (solid line), in which the noise suppression at $\tau/T=0$ reflects the Pauli exclusion principle. \section{Current noise in quantum liquids} \label{sec:quantumliquid} \subsection{Quantum liquids and their non-equilibrium} The behavior of a single particle, an electron, for example, can be explained by solving the Schr\"{o}dinger equation. On the other hand, it is usually impossible to solve the equation rigorously when many particles correlate. Exotic behaviors of such many-particle systems, which cannot be expected from the single-particle picture, have attracted great attention from researchers in condensed-matter physics. We call such a many-particle system, in which many indistinguishable particles correlate and show liquid-like behaviors, a ``quantum liquid''~\cite{NozieresTQL1999}. Quantum liquids have long been an important topic in condensed-matter physics, and we can now understand the equilibrium properties of several quantum liquids to a considerable extent. However, we do not have any generalized method for predicting their non-equilibrium properties; constructing a canonical way of describing non-equilibrium behavior is one of the most significant challenges in modern physics. Non-equilibrium phenomena are everywhere: light-matter interaction, transistors in electronic devices, chemical reactions, and life. Despite being so familiar, such phenomena are inherently challenging to analyze due to their complexity. Quantum liquids provide a quantum-mechanical prototype of such intriguing non-equilibrium issues and serve as good touchstones for understanding non-equilibrium phenomena. This section discusses shot-noise measurements on three types of quantum liquids, namely those formed by the Kondo effect, the fractional quantum Hall effect, and superconductivity. \subsection{Non-equilibrium fluctuations in the Kondo effect} \label{Subsec:KondoNoise} \subsubsection{Kondo effect and local Fermi liquid} The Kondo effect is a typical quantum many-body phenomenon. The state created by this effect (Kondo state) is a type of quantum liquid called a ``local Fermi liquid''. In this subsection, we briefly introduce the Kondo effect and discuss the shot noise in a quantum dot (QD) where the Kondo effect emerges. We start from the Kondo effect in bulk materials~\cite{KondoPro1988}. Usually, the electrical resistivity of nonmagnetic metals decreases with decreasing temperature because the electron-phonon scattering is suppressed at low temperature. However, we often observe that the resistivity of nonmagnetic metals with a small concentration of magnetic impurities (say, 0.1--0.001\%) starts to increase at low temperatures, showing a resistivity minimum at a particular temperature. Since the 1930s, this phenomenon had been a long-standing mystery called the ``resistivity minimum phenomenon'' (for historical background, see Ref.~[\onlinecite{KondoJSPS2005}]). In 1964, Kondo theoretically solved this problem by considering spin-dependent electron scattering at a single magnetic impurity atom~\cite{KondoPTP1964}.
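Kondo's perturbative result adds a term growing as $-\ln T$ to the phonon contribution, so a toy resistivity $\rho(T)=\rho_0+aT^5-c\ln T$ already reproduces the minimum; the coefficients below are invented for illustration only (arbitrary units):

\begin{verbatim}
# Toy resistivity with made-up coefficients (arbitrary units):
# rho(T) = rho0 + a*T**5 - c*ln(T); minimum where d(rho)/dT = 0,
# i.e., at Tmin = (c / (5*a))**(1/5).
import math

rho0, a, c = 1.0, 2.0e-4, 1.0e-2
rho = lambda T: rho0 + a * T**5 - c * math.log(T)
Tmin = (c / (5 * a)) ** 0.2
print(Tmin, rho(Tmin))        # position and value of the minimum
\end{verbatim}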
Figure~\ref{KondoSchematicFig}(a) shows a random distribution of magnetic impurity atoms with localized spins in a nonmagnetic metal. We assume that each impurity atom has a single discrete level. The level forms a resonant state with a finite width $\Gamma$ due to hybridization with the surrounding conduction electrons. Electrons move in and out of the level on a time scale characterized by $\Gamma$. Now, when the Coulomb energy $U$ between the electrons occupying the level is sufficiently large, $U\gg \Gamma$, only one electron can enter it at a time. \begin{figure*}[t] \center \includegraphics[width=12.5cm]{ShotReviewFig24.eps} \caption{(Color online) (a) Conceptual view of magnetic impurity atoms randomly distributed in a nonmagnetic metal. (top) At high temperature, $T_\textrm{e} \gg T_\textrm{K}$, the localized spins are paramagnetic. (middle) At low temperatures, $T_\textrm{e} \leq T_\textrm{K}$, the spin in the impurity atom and the conduction-electron spin begin to form a bound state, and the system becomes non-magnetic. The motion of the conduction electrons is inhibited. (bottom) As a result, the resistivity logarithmically increases. At even lower temperature, the resistivity becomes constant (unitary limit). (b) A similar phenomenon occurs in a QD. Consider a situation where the QD has a single level and contains only one electron. (top) At high temperature, $T_\textrm{e} \gg T_\textrm{K}$, no other electrons are allowed to enter due to $U$ (Coulomb blockade). (middle) At lower temperature, $T_\textrm{e} \leq T_\textrm{K}$, the Kondo state forms, and electrons can pass the QD. (bottom) The conductance of the QD increases logarithmically with decreasing temperature and shows a constant value in the low-temperature limit (unitary limit).} \label{KondoSchematicFig} \end{figure*} At high temperature, the electrons rapidly enter and exit the level one by one on a time scale characterized by $\Gamma$ ($\gg k_\textrm{B}T_\textrm{e}$), leading to fluctuations of the spin direction in the level. Thus, the spins in the magnetic impurity atoms are paramagnetic. However, a different situation arises at low temperature: the spin direction of the electrons entering and exiting the level becomes correlated, which can be described by the second-order perturbation in the Anderson impurity model~\cite{YosidaTM1996}. The correlation becomes increasingly significant as temperature decreases, and finally, the spin in the impurity atom and the conduction-electron spin begin to form a spin-singlet bound state (Kondo state), making the system non-magnetic. The resistivity logarithmically increases due to the scattering by the Kondo state that develops at each impurity atom [see the middle panel of Fig.~\ref{KondoSchematicFig}(a)]. The Kondo temperature $T_\textrm{K}$ is the temperature at which the Kondo state starts to form. At sufficiently low temperature ($T_\textrm{e} \ll T_\textrm{K}$), the resistivity approaches a constant value, meaning that the scattering by the Kondo state is the dominant factor determining the resistivity. This situation is called the unitary limit, where the Kondo state is a perfect spin-singlet formed around the discrete level of each impurity atom. The Kondo effect is thus a phenomenon where the magnetism and resistivity of a nonmagnetic metal gradually change with decreasing temperature due to magnetic impurities.
The essence of the Kondo effect is that the level, which initially forms a resonance of width $\Gamma$, comes to be governed by a new energy scale $k_\textrm{B}T_\textrm{K}$ ($k_\textrm{B}T_\textrm{K} \ll \Gamma, U$) due to the presence of the many-body interaction $U$ [see Eq.~(\ref{TKexpression}) for the expression of $k_\textrm{B}T_\textrm{K}$]. Kondo calls the emergence of this nontrivial energy scale $k_\textrm{B}T_\textrm{K}$ the ``Fermi surface effect'' because the abrupt change in the occupation number at the Fermi surface, peculiar to the Fermi-Dirac distribution function, is responsible for the logarithmic behavior~\cite{KondoPro1988}. Note that the Kondo effect is not a phase transition but a crossover across $T_\textrm{K}$. It is also possible to understand the Kondo effect in terms of the ``Fermi liquid'', a kind of quantum liquid: Landau proposed a phenomenological Fermi liquid theory in 1956 and gave its microscopic proof based on many-body quantum theory in 1958~\cite{LandauJETP1956}. Roughly speaking, as far as the low-energy physics is concerned, the Fermi liquid theory enables us to treat an interacting fermion system as if it were a ``free'' fermion system by renormalizing the interaction. More accurately, due to the renormalization, we have to consider quasi-particles rather than free fermions because a residual interaction exists between quasi-particles. Landau's phenomenology describes many-body quantum states in the interacting system using an energy functional. It assumes that the low-energy eigenvalues of the system are a functional of the deviation of the quasi-particle distribution function from that of the ground state. The excitation energy spectrum expressed in this functional form enables us to predict several observables, such as the effective mass of a quasi-particle and the magnetic susceptibility of the system. For example, in liquid~$^3$He, a representative Fermi liquid, we can experimentally determine these parameters and quantitatively predict many-body effects in other physical quantities~\cite{LeggettRPP2016}. Such a method to describe many-body states by a few parameters was also successful in Kondo physics in the 1970s~\cite{NozieresJLTP1974,YamadaPTP1975,YosidaPTP1975,Yamada2PTP1975,ShibaPTP1975,YoshimoriPTEP1976}. In this case, we use the term ``local Fermi liquid'' because we are dealing with a state formed around a localized level. \subsubsection{Kondo effect in QDs} The Kondo effect also occurs in QDs. While the underlying physics is the same between the Kondo effect in bulk metals and that in QDs, it is instructive to discuss the QD case here based on a microscopic model. Now, consider a QD with only a single level and only one electron occupying it, as shown in Fig.~\ref{KondoSchematicFig}(b).
This situation is described by the impurity Anderson model $\mathcal{H}_A=\mathcal{H}_0+\mathcal{H}_T+\mathcal{H}_I$: \begin{equation} \begin{split} \mathcal{H}_0 &= \sum_{k\alpha\sigma}\varepsilon_{k\alpha}c^{\dagger}_{k\alpha\sigma} c_{k\alpha\sigma}+ \sum_{\sigma}\epsilon_d d^{\dagger}_{\sigma}d_{\sigma}, \\ \mathcal{H}_T &= \sum_{k\alpha\sigma} (v_\alpha d^{\dagger}_{\sigma}c_{k\alpha\sigma} +v_\alpha^* c^{\dagger}_{k\alpha\sigma}d_{\sigma}),\\ \mathcal{H}_I &= U d^{\dagger}_{\uparrow} d_{\uparrow}d^{\dagger}_{\downarrow} d_{\downarrow}, \end{split} \label{eq:AndersonModel} \end{equation} where $c^{\dagger}_{k\alpha\sigma}$ is an operator that creates an electron with wavenumber $k$ and spin $\sigma = \uparrow, \downarrow$ in the left or right lead ($\alpha =\textrm{L}, \textrm{R}$), and $d^{\dagger}_{\sigma}$ is an operator that creates an electron with spin $\sigma$ in the level $\epsilon_d$ of the QD. The electrons move between the lead $\alpha$ and the QD with a tunneling matrix element $v_\alpha$, and through this tunneling the level acquires a line width $\Gamma = \Gamma_\textrm{L} + \Gamma_\textrm{R}$, where $\Gamma_\alpha = 2\pi \rho_c |v_\alpha|^2$ and $\rho_c$ is the density of states of the conduction electrons in the leads. In addition, electrons in the QD are subject to the Coulomb repulsion $U$. The chemical potential of the left and right leads is set to $\mu_\textrm{L/R}=\pm eV/2$, and a voltage $V\geq 0$ is applied between the leads. If the energy $U$ is sufficiently large such that $U \gg \Gamma$, no other electrons can enter the QD, resulting in a Coulomb blockade, and conduction is inhibited. However, at low temperature $T \lesssim T_\textrm{K}$, a different situation emerges due to the Kondo effect. Even if an electron occupies the QD, another electron with the opposite spin can enter it from either lead, allowing two electrons to coexist virtually, a process treated by second-order perturbation theory in Eq.~(\ref{eq:AndersonModel}). Although this state is energetically unstable, the two electrons can still coexist as long as Heisenberg's time-energy uncertainty relation allows. As temperature decreases, these virtual processes become more frequent, which leads to the formation of a new resonant state that bridges the left and right leads through the QD, even though the QD is in the Coulomb blockade regime [see the middle panel of Fig.~\ref{KondoSchematicFig}(b)]. This state is nothing more than the spin-singlet bound state (the Kondo state) discussed above. With the formation of the Kondo state, the conductance of the QD increases logarithmically with decreasing temperature [see the bottom panel of Fig.~\ref{KondoSchematicFig}(b)]. The Kondo effect in a QD was first realized experimentally in 1998~\cite{Goldhaber-GordonNature1998,CronenwettScience1998,SchmidPB1998}. The conductance reaches $2e^2/h$ at sufficiently low temperature $T \ll T_\textrm{K}$, signaling the unitary limit~\cite{vanderWielScience2000}. In the Kondo effect in bulk metals, the resistivity increases with decreasing temperature, as shown in Fig.~\ref{KondoSchematicFig}(a). The Kondo effect occurs due to the formation of Kondo states around magnetic impurities, which inhibits the transport of the conduction electrons. In contrast, in the Kondo effect in QDs, the transport through the QDs, which are the magnetic impurities themselves, is relevant. In this case, the conductance increases with the formation of the Kondo state, as shown in Fig.~\ref{KondoSchematicFig}(b).
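In practice, this logarithmic rise is often fitted with an empirical form of the numerical-renormalization-group result, $G(T)=G_0[1+(2^{1/s}-1)(T/T_\textrm{K})^2]^{-s}$ with $s\simeq 0.22$ for spin-1/2, which satisfies $G(T_\textrm{K})=G_0/2$ by construction; this formula is standard in the QD literature but is not quoted in the text above, so the sketch below is only an illustration of its behavior:

\begin{verbatim}
# Empirical Kondo conductance G(T) = G0*(1 + (2**(1/s)-1)*(T/TK)**2)**(-s),
# with s ~ 0.22 for a spin-1/2 QD; by construction G(TK) = G0/2.
def g_kondo(T, TK, G0=1.0, s=0.22):
    return G0 * (1 + (2**(1 / s) - 1) * (T / TK)**2) ** (-s)

for T in (0.01, 0.1, 1.0, 10.0):       # temperatures in units of TK
    print(T, g_kondo(T, TK=1.0))       # G rises toward G0 as T -> 0
\end{verbatim}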
Figure~\ref{KondoEnergyFig}(a) shows the energy diagram of a QD. A discrete energy level $\epsilon_d$, localized inside the double potential barrier, has a finite resonance width $\Gamma$ (dashed curve) due to the tunneling of the conduction electrons from the left and right leads. If the energy level is lower than the chemical potential of the leads ($\mu_\textrm{L}, \mu_\textrm{R}$), the Kondo effect occurs because of the repulsion $U$ between the electrons occupying this level. The resonance peak becomes sharper such that $k_\textrm{B}T_\textrm{K}\ll \Gamma$ and shifts very close to the Fermi level (solid curve). The resonance appears as a peak at zero bias in the differential conductance. Figure~\ref{KondoEnergyFig}(a) shows a situation in which electron-hole symmetry holds, $\epsilon_d/U = -0.5$, while $\epsilon_d$ can be tuned by the gate voltage. In this case, $T_\textrm{K}$ varies as~\cite{HaldanePRL1978,Goldhaber-GordonPRL1998,vanderWielScience2000} \begin{equation} k_\textrm{B}T_\textrm{K}=\frac{\sqrt{\Gamma U}}{2} \exp \left[\frac{\pi \epsilon_d (\epsilon_d +U)}{\Gamma U} \right]. \label{TKexpression} \end{equation} Figure~\ref{KondoEnergyFig}(b) shows the Kondo temperature $k_\textrm{B}T_\textrm{K}/U$ for $U/\Gamma =3$ as a function of $\epsilon_d/U$. \begin{figure}[!t] \center \includegraphics[width=6.5cm]{ShotReviewFig25.eps} \caption{(Color online) (a) Schematic energy diagram of a QD in the Kondo regime. The discrete energy level $\epsilon_d$, which is localized inside the double potential barrier, has a finite resonance width $\Gamma$ due to the tunneling of the conduction electrons (dashed curve). If $\epsilon_d$ is lower than the chemical potential of the conduction band of the leads ($\mu_\textrm{L}$ and $\mu_\textrm{R}$), the resonance peak becomes sharper and shifts closer to the Fermi level due to the repulsion $U$ between the electrons occupying this level (solid curve). The figure shows a situation in which the electron-hole symmetry holds, $\epsilon_d/U = -0.5$. (b) Kondo temperature $k_\textrm{B}T_\textrm{K}/U$ for $U/\Gamma =3$ is shown as a function of $\epsilon_d/U$.} \label{KondoEnergyFig} \end{figure} \subsubsection{Non-equilibrium transport} The realization of the Kondo effect in QDs has a significant meaning. Since the 1960s, the Kondo effect has been one of the central topics in the study of strongly correlated electron systems (e.g., heavy-fermion systems and high-temperature superconductors). Experimentally, many studies have used macroscopic samples to measure ensemble-averaged properties of many spins. By using QDs, however, we can now control all the parameters related to the Kondo effect in a single site, such as the Kondo temperature, the number of electrons in the QD, the spin states, the orbital states, and the bias voltage to drive the QD out of equilibrium. For example, the sensitivity of the Kondo effect to the even/odd parity of the number of electrons in the QD was demonstrated by controlling the discrete level position~\cite{Goldhaber-GordonNature1998,CronenwettScience1998,SchmidPB1998} and thus $T_\textrm{K}$ [see Fig.~\ref{KondoEnergyFig}(b)]~\cite{Goldhaber-GordonPRL1998,vanderWielScience2000}. In addition, researchers have observed Zeeman splitting of the Kondo resonance by applying a magnetic field~\cite{Goldhaber-GordonNature1998,CronenwettScience1998,SchmidPB1998}. Thus, the Kondo effect in QDs is ideal for accurately verifying theories of strong electron correlation and quantum liquids.
The new opportunity to study non-equilibrium states is particularly remarkable. A quantitative understanding of the excited states of the local Fermi liquid becomes possible by precisely investigating the universal behavior of the Kondo effect, which appears in transport phenomena. For this purpose, it is necessary to consider the impact of the bias voltage on the quasi-particle lifetime due to electron correlation. Such theories include phenomenological Fermi liquid theory, microscopic Fermi liquid theory, and renormalized perturbation theory. The renormalized perturbation theory was introduced for the impurity-Anderson model by Hewson~\cite{HewsonPRL1993,HewsonJPCM2001} and extended to low-bias steady states~\cite{OguriPRB2001,OguriJPSJ2005}. Three important parameters in the impurity-Anderson model [Eq.~(\ref{eq:AndersonModel})] are $\epsilon_d$, $\Gamma$, and $U$. Due to the Kondo effect, these quantities are renormalized to $\tilde{\epsilon}_d$, $\tilde{\Gamma}$, and $\tilde{U}$, respectively. For example, $\tilde{U}$ corresponds to a residual interaction between quasi-particles. The renormalized level width $\tilde{\Gamma}$ corresponds to the Kondo temperature $k_\textrm{B} T_\textrm{K} = \pi \tilde{\Gamma}/4$ in the Kondo regime~\cite{OguriJPSJ2005}. Following the spirit of the Fermi liquid theory, the dynamics of the Kondo effect at low energy is expected to be described by these parameters alone. In the following, we consider the electron-hole symmetric case, namely $\epsilon_d/U = -0.5$ and $v_\textrm{L}=v_\textrm{R}$, in the Hamiltonian given by Eq.~(\ref{eq:AndersonModel}). In this case, the differential conductance of the QD can be calculated exactly up to second order in the bias voltage $V$, temperature $T_\textrm{e}$, and magnetic field $B$: \begin{equation} \begin{split} \frac{d\langle I \rangle}{dV} &=\frac{2e^2}{h}\left[ 1 - c_V \left( \frac{eV}{\tilde{\Gamma}} \right)^2 \right. \\ &\left. - c_T \left( \frac{\pi k_\textrm{B} T_\textrm{e}}{\tilde{\Gamma}} \right)^2 - c_B \left( \frac{g\mu_\textrm{B} B}{\tilde{\Gamma}} \right)^2\right], \end{split} \label{kondononeq} \end{equation} where $g$ represents the $g$-factor and $\mu_\textrm{B}$ represents the Bohr magneton. Here, \begin{equation} \begin{split} c_V &= \frac{1+5(R-1)^2}{4}, \\ c_T &= \frac{1+2(R-1)^2}{3}, \\ c_B &= \frac{R^2}{4}. \end{split} \end{equation} The expression of $c_V$ was derived by Oguri~\cite{OguriPRB2001,OguriJPSJ2005}. That of $c_T$ originates from Refs.~[\onlinecite{YamadaPTP1975}] and~[\onlinecite{YoshimoriPTEP1976}]. For $c_B$, refer to Refs.~[\onlinecite{FerrierNatPhys2016}] and~[\onlinecite{MoraPRB2015}]. Equation~(\ref{kondononeq}) describes how the Kondo resonance, which appears in the differential conductance near zero bias, behaves at finite bias, finite temperature, and finite field. $R$ is a quantity called the Wilson ratio, expressed in terms of the magnetic susceptibility $\chi_s$ and the electronic specific-heat coefficient $\gamma$ as follows: \begin{equation} R \equiv \frac{4\pi^2 k_\textrm{B}^2}{3g^2\mu_\textrm{B}^2}\frac{\chi_s}{\gamma} =1+\frac{\tilde{U}}{\pi \tilde{\Gamma}}. \end{equation} $R$ is a measure of the strength of the electron-electron interaction at the fixed point of the Fermi liquid. It increases monotonically from $R=1$ in the case of $U/\Gamma=0$, where there is no interaction, to $R=2$ in the Kondo limit $U/\Gamma \rightarrow \infty$. The conductance at $V=0$ and $B=0$ is the unitary-limit value $2e^2/h$ at $T_\textrm{e}=0$, as shown in Eq.~(\ref{kondononeq}).
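A quick numerical sketch of the coefficients in Eq.~(\ref{kondononeq}) shows how the curvature of the Kondo resonance grows with the Wilson ratio $R$ between the non-interacting limit ($R=1$) and the Kondo limit ($R=2$):

\begin{verbatim}
# Fermi-liquid coefficients of Eq. (kondononeq) as functions of
# the Wilson ratio R.
def coeffs(R):
    c_V = (1 + 5 * (R - 1)**2) / 4
    c_T = (1 + 2 * (R - 1)**2) / 3
    c_B = R**2 / 4
    return c_V, c_T, c_B

print(coeffs(1.0))   # non-interacting limit: (0.25, 0.333..., 0.25)
print(coeffs(2.0))   # Kondo limit U/Gamma -> infinity: (1.5, 1.0, 1.0)
\end{verbatim}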
At very low bias, the Kondo resonance looks to the electrons like a simple resonance state centered at the Fermi level with a width of $\sim k_\textrm{B}T_\textrm{K}$ [see Fig.~\ref{KondoEnergyFig}(a)]. As the resonance symmetrically couples to both leads ($v_\textrm{L}=v_\textrm{R}$), the transmission probability becomes 100\% at the level position. The Kondo state becomes ``invisible'' to the conduction electrons. This fact is also consistent with the spirit of the Fermi liquid theory: by renormalizing the interaction, we can treat an interacting fermion system as if it were a free-particle system. On the other hand, at finite bias, finite temperature, and finite magnetic field, the conductance decreases from the unitary limit, since the current starts to be reflected by the Kondo state. Thus, this backscattered current contains information on the excited states corresponding to the backflow of the Fermi liquid~\cite{YamadaPTP1986}. Much theoretical work has been done on the universal behavior of the differential conductance based on Eq.~(\ref{kondononeq})~\cite{RinconPRB2009,RouraBasPRB2010,SelaPRB2009,MoraPRB2009_2,MoraPRB2015,FilipponePRB2017,OguriPRB2018,OguriPRL2018,TerataniPRL2020}. Several experiments have also been reported~\cite{GrobisPRL2009,ScottPRB2009,DelattreNatPhys2009,YamauchiPRL2011,KretininPRB2011,KretininPRB2012,FerrierNatPhys2016,HataNatComm2021}. Such studies are an essential step towards understanding the Fermi liquid in the non-equilibrium regime. \subsubsection{Backscattering and shot noise} The realization of the Kondo effect in QDs in 1998~\cite{Goldhaber-GordonNature1998,CronenwettScience1998,SchmidPB1998} has also triggered a great deal of interest in shot noise. In quantum many-body systems, effective charge states are formed as a result of electron-electron correlations. As we discuss in this review, in the fractional quantum Hall effect, the fractional charge $e/3$ was observed~\cite{SaminadayarPRL1997,de-PicciottoNature1997} in 1997, and later $e/5$ and $e/4$ were also observed~\cite{ReznikovNature1999,DolevNature2008}. In a normal-metal-superconductor junction, the formation of the Cooper pair charge $2e$ was detected through shot noise~\cite{JehlNature2000,KozhevnikovJLTP2000,KozhevnikovPRL2000} in 2000. It is an essential and exciting fact that shot noise provides us with direct and quantitative information about non-equilibrium quantum states that is impossible to obtain with other experimental methods. The expressions for the shot noise at temperature and bias sufficiently low compared to the Kondo temperature were obtained by several groups~\cite{SelaPRL2006,GogolinPRL2006,VitushinskyPRL2008,MoraPRL2008,SelaPRB2009,FujiiJPSJ2010,SakanoPRB2011,SakanoPRB2011_2}. Here, before turning to the shot noise, we discuss the current at $T_\textrm{e}=0$ and $B=0$ to give a physical picture of the backscattering. We again assume the electron-hole symmetric case ($\epsilon_d/U = -0.5$ and $v_\textrm{L}=v_\textrm{R}$). In this case, the current is given up to cubic order in the bias voltage $V$ as follows: \begin{equation} \langle I\rangle = \frac{2e^2}{h}V-e P_{\textrm{b}0}-e P_{\textrm{b}1}-(2e) P_{\textrm{b}2}. \label{noneqcurrent_scattering} \end{equation} $P_{\textrm{b}0}$, $P_{\textrm{b}1}$, and $P_{\textrm{b}2}$, all of which are proportional to $V^3$, represent the probabilities per unit of time of the different backscattering processes~\cite{SelaPRL2006,SakanoPRB2011_2}.
While Eq.~(\ref{noneqcurrent_scattering}) is consistent with the conductance given by Eq.~(\ref{kondononeq}) at $T_\textrm{e}=0$ and $B=0$, it makes the individual backscattering processes easier to understand. Following Ref.~[\onlinecite{SelaPRL2006}], we describe the backscattering processes near the unitary limit by using the terms ``right movers (R movers)'' and ``left movers (L movers)'', as shown in Fig.~\ref{KondoScatteringSchematicFig}(a). The R (L) movers correspond to electron propagation from the left (right) lead to the right (left) lead. The chemical potential of the R (L) movers is $\mu_\textrm{L}$ ($\mu_\textrm{R}$) [readers may recall that the current operator is expressed in terms of the R movers ($a_{\textrm{L}, k}^\dagger a_{\textrm{L}, k'}$) and the L movers ($b_{\textrm{L}, k}^\dagger b_{\textrm{L}, k'}$) in Eq.~(\ref{Eq_current_operator}); see also Fig.~\ref{fig_setup}]. There is no scattering between the two movers at zero bias, as the transmission probability is 100\%. On the other hand, the conductance decreases at finite bias, which can be interpreted as some R (L) movers being backscattered into L (R) movers, as indicated by the vertical dashed line in Fig.~\ref{KondoScatteringSchematicFig}(a). Figures~\ref{KondoScatteringSchematicFig}(b), (c), (d), and (e) illustrate several backscattering processes between the two movers. \begin{figure}[tbhp] \center \includegraphics[width=8.5cm]{ShotReviewFig26.eps} \caption{(Color online) (a) Electron transport through the Kondo resonance is schematically shown. Near the unitary limit, electrons incident from the left (right) are almost totally transmitted to the right (left), which is represented by the motion of the R (L) movers~\cite{SelaPRL2006}. The vertical dashed line to connect the R and L movers indicates the backscattering at finite bias. (b) Elastic scattering between the R and L movers by the Kondo resonance centered at the Fermi level is schematically depicted with the resonance peak superposed. One R mover tunnels into the L movers, creating one particle-hole pair. (c) In addition to one particle-hole pair creation, a second particle-hole pair appears in the L movers. (d) The second particle-hole pair appears in the R movers. (e) Both particle-hole pairs are created separately in the R and L movers, corresponding to the two-particle backscattering process.} \label{KondoScatteringSchematicFig} \end{figure} $P_{\textrm{b}0}$ in Eq.~(\ref{noneqcurrent_scattering}) expresses the probability of elastic scattering by the quasi-particle level in the QD. As $V$ becomes finite, the Kondo resonance starts to reflect electrons because the electron energy shifts away from the resonance peak. Figure~\ref{KondoScatteringSchematicFig}(b) shows that one R mover stochastically tunnels into the L movers, creating one (quasi-)particle-hole pair. This process decreases the current and generates the Poissonian shot noise corresponding to the backscattered current. As the resonance peak has a parabolic energy dependence, the scattering occurs with a probability proportional to $V^3$, which defines the $V$-dependence of $P_{\textrm{b}0}$. This process occurs even without the residual interaction ($\tilde{U}=0$). In contrast, the probabilities $P_{\textrm{b}1}$ and $P_{\textrm{b}2}$ reflect the residual interaction. As shown in Figs.~\ref{KondoScatteringSchematicFig}(c), (d), and (e), two particle-hole pairs are excited by the second-order process of $\tilde{U}$.
The hole and the particle of one pair always appear separately in the two movers. For the other pair, there are two cases. \begin{enumerate} \item As shown in Figs.~\ref{KondoScatteringSchematicFig}(c) and (d), the second pair appears in one of the two movers with probability $P_{\textrm{b}1}$, and therefore it does not contribute to the current. In the end, this process amounts to the backscattering of a single charge $e$. \item As shown in Fig.~\ref{KondoScatteringSchematicFig}(e), the hole and the particle of the second pair appear separately in the R and L movers, respectively, as those of the first pair do. This process produces the backscattering of a unit charge $2e$. Peculiar to Kondo physics, it is called ``two-particle backscattering'' and occurs with probability $P_{\textrm{b}2}$. \end{enumerate} The phase-space restrictions for particle-hole-pair creation require $P_{\textrm{b}1}$ and $P_{\textrm{b}2}$ to be proportional to $V^3$, just like $P_{\textrm{b}0}$. All these backscattering processes contribute to Eq.~(\ref{noneqcurrent_scattering}). Accordingly, the shot noise $S$ is obtained. Since $P_{\textrm{b}0}$ and $P_{\textrm{b}1}$ describe the backscattering of the unit charge $e$, and $P_{\textrm{b}2}$ that of $2e$, the following holds: \begin{equation} S = 2 \left[ e^2 P_{\textrm{b}0}+e^2 P_{\textrm{b}1}+(2e)^2 P_{\textrm{b}2} \right]. \label{Kondo_shot_scattering} \end{equation} Because there is not only $e^2$ but also $(2e)^2$, the noise is enhanced compared to the case of a simple backscattering of $e$. It is useful to think of a quantity defined by the ratio between the shot noise and the backscattered current \begin{equation} I_\textrm{b} = \frac{2e^2}{h}V-I \label{eq:backscattered} \end{equation} rather than the ordinary Fano factor defined by Eq.~(\ref{Fano_classic}). Let us call this ratio the effective charge $e^*$. Equations~(\ref{noneqcurrent_scattering}) and (\ref{Kondo_shot_scattering}) yield \begin{equation} e^* \equiv \frac{S}{2\vert\langle I_\textrm{b}\rangle\vert}=\frac{e^2 P_{\textrm{b}0}+e^2 P_{\textrm{b}1}+(2e)^2 P_{\textrm{b}2}}{e P_{\textrm{b}0}+e P_{\textrm{b}1}+(2e) P_{\textrm{b}2}}. \label{Kondo_shot_estar} \end{equation} In the limit of strong electron correlation, $U/\Gamma\rightarrow \infty$, we can show that the probability that the unit charge $e$ is scattered ($\propto P_{\textrm{b}0}+P_{\textrm{b}1}$) and the probability that two particles $2e$ are scattered ($\propto P_{\textrm{b}2}$) are the same~\cite{SelaPRL2006,SakanoPRB2011_2}, resulting in \begin{equation} e^* \rightarrow \frac{5}{3}e. \label{eq:5_3} \end{equation} This result can also be expressed using the Wilson ratio $R$~\cite{SelaPRB2009,FujiiJPSJ2010,SakanoPRB2011,SakanoPRB2011_2}: \begin{equation} \frac{e^*}{e}=\frac{1+9(R-1)^2}{1+5(R-1)^2} \rightarrow \frac{5}{3} \quad (R\rightarrow 2). \label{eq_estar_wilson} \end{equation} Although we call $e^*$ the effective charge in line with the literature, we should not confuse it with an exotic charge like that in the fractional quantum Hall effect; the present $e^*$ is the consequence of several scattering processes. The above discussion illustrates that we can obtain the Wilson ratio $R$ from the current and the shot noise in the non-equilibrium state. In particular, the shot noise directly provides information on the two-particle backscattering due to the residual interaction, i.e., information corresponding to the ``internal structure'' of the Kondo state.
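The limiting value in Eq.~(\ref{eq:5_3}) follows from simple arithmetic on Eq.~(\ref{Kondo_shot_estar}); in the sketch below the probabilities are arbitrary except for the strong-correlation constraint $P_{\textrm{b}0}+P_{\textrm{b}1}=P_{\textrm{b}2}$:

\begin{verbatim}
# Effective charge e*/e from Eq. (Kondo_shot_estar); the factor 2 in
# S = 2 e* |I_b| cancels between numerator and denominator.
def e_star(p0, p1, p2, e=1.0):
    s = p0 * e**2 + p1 * e**2 + p2 * (2 * e)**2
    i_b = p0 * e + p1 * e + p2 * (2 * e)
    return s / i_b

print(e_star(0.3, 0.7, 1.0))                  # P0 + P1 = P2 -> 5/3
R = 2.0                                       # Kondo limit
print((1 + 9*(R-1)**2) / (1 + 5*(R-1)**2))    # Eq. (eq_estar_wilson): 5/3
\end{verbatim}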
As already mentioned, in the unitary limit, the transmission of the Kondo QD is 100\%: the conduction electrons cannot ``see'' the Kondo effect in equilibrium. However, in the non-equilibrium state, the many-body interaction that produces the Kondo state manifests itself again. \subsubsection{Orbital degeneracy} The discussion so far treats the most common Kondo effect, called the SU(2) Kondo effect, to which only the spin degree of freedom contributes. However, when there exist other degrees of freedom, such as the orbital one, a more exotic SU($n$) Kondo effect may occur~\cite{CoqblinPRB1969}. In the mesoscopic research field, such QDs have been realized and actively studied. The SU(4) Kondo effect in carbon nanotubes (CNTs) is a representative example~\cite{ChoiPRL2005,Jarillo-HerreroNature2005,DelattreNatPhys2009,LairdRMP2015}. The electrons in a CNT have two orbitals, one clockwise and the other counterclockwise with respect to the axis of the tube. They are doubly degenerate when effects that lift the orbital degeneracy, such as an external magnetic field, are absent. Each of these two orbitals is also degenerate with respect to the spin, leading to the SU(4) Kondo effect due to the four-fold degenerate electron levels in total. Shot noise has also been theoretically studied for the general SU($n$) Kondo effect. In the case of electron-hole symmetry, no magnetic field, and sufficiently low temperature, the effective charge is theoretically given as follows~\cite{GogolinPRL2006,SelaPRB2009,FujiiJPSJ2010} \begin{equation} \frac{e^*}{e}=\frac{1+9(n-1)(R-1)^2}{1+5(n-1)(R-1)^2}. \label{eq_estar_SUn} \end{equation} For example, $e^*/e=3/2$ is predicted for the SU(4) Kondo effect ($R\rightarrow 4/3$ for $U/\Gamma \rightarrow \infty$). In Sect.~\ref{subsub:su2-su4}, we discuss experimental results that validate this formula by addressing the crossover between the SU(2) and SU(4) Kondo effects. \subsubsection{Shot-noise experiments} \label{subsub:kondonoise} As mentioned earlier, many theoretical studies have been conducted on the shot noise in the Kondo state. In contrast, there have been only a limited number of experimental studies. In 2008, Zarchin \textit{et al.} reported the first shot-noise measurement in the Kondo state using a lateral QD fabricated in a two-dimensional electron system (2DES) in a GaAs/AlGaAs heterostructure~\cite{ZarchinPRB2008}. They argued that, although the unitary limit was not reached, $\frac{5}{3}e$ was measured as the effective charge, as predicted by the theory. Delattre \textit{et al.} observed the current noise due to the SU(4) Kondo effect in the $1/4$-filling region in a CNT QD~\cite{DelattreNatPhys2009}. They found that the noise is larger than the shot noise expected from a single-particle picture and explained this enhancement quantitatively by the slave-boson method as being due to the Kondo effect. They measured in a bias region above $k_\textrm{B}T_\textrm{K}$ and did not address the low-energy excited states inherent to the Fermi liquid~\cite{EggerNatPhys2009}. One of the authors performed measurements using a lateral QD fabricated in a 2DES~\cite{YamauchiPRL2011} in 2011 and found that the noise increases with the development of the Kondo effect as the temperature is lowered. This finding is qualitatively consistent with the increase in two-particle backscattering described above. However, the effective charge obtained from the shot noise exceeded $5/3$, which is not compatible with the theory.
Regarding this discrepancy, it was pointed out that finite transport through other levels in the QD contributes to the shot noise~\cite{YamauchiPRL2011}. Because $U$, $\Gamma$, and the spacing between the discrete levels were on the same order in that experiment, transport via the other adjacent levels, which are irrelevant to the Kondo state but are strongly coupled with the leads, might be possible. Even if the transmission probabilities of those levels are small, such multi-level transport enhances the shot noise, as we have seen in Sect.~\ref{subsub:QD}. \paragraph{Realization of the unitary limit Kondo effect} To overcome the above problem, one of the authors performed shot-noise measurements in the Kondo state using a CNT QD~\cite{FerrierNatPhys2016,FerrierPRL2017,FerrierJLTP2019,HataSpringer2019}. CNT QDs have the advantage of being much smaller in volume than the QDs in a GaAs/AlGaAs 2DES described above, so the discrete level spacing is large enough compared to $k_\textrm{B}T_\textrm{K}$ to neglect the effects of adjacent discrete levels. Fortunately, in our experiments, we could observe the unitary limit in perfect accordance with theory since our QD was symmetric: the two barriers forming it are almost the same, i.e., the symmetric condition $v_\textrm{L}=v_\textrm{R}$ in Eq.~(\ref{eq:AndersonModel}) is satisfied. This situation allowed us to unambiguously test the theory for both the SU(2) and SU(4) Kondo effects. In the following, until the end of Sect.~\ref{Subsec:KondoNoise}, we discuss the results obtained with this device~\cite{FerrierNatPhys2016,FerrierPRL2017}. Figure~\ref{FerrierNatPhysFig2}(a) shows a scanning electron microscope (SEM) image of the CNT QD and a schematic diagram of the measurement setup. The device consists of a single CNT sandwiched between the two leads. The Kondo effect was obtained by controlling the gate voltage $V_\textrm{g}$ applied to the gate electrode close to the QD. Figure~\ref{FerrierNatPhysFig2}(b) shows an intensity plot of the differential conductance as a function of the bias voltage $V (=V_\textrm{sd})$ and $V_\textrm{g}$. This figure, the so-called stability diagram of the QD, shows a Coulomb diamond for every fourfold-degenerate shell of combined spin and orbital states, characteristic of a CNT QD. $N (=0, 1, 2, 3)$ represents the number of electrons in the outermost shell. \begin{figure}[tbhp] \center \includegraphics[width=8.5cm]{ShotReviewFig27.eps} \caption{(Color online) (a) SEM image of a CNT connected to two metallic leads on a silicon wafer along with the noise measurement setup. (b) Intensity plot of the conductance as a function of the bias voltage $V (=V_\textrm{sd})$ and $V_\textrm{g}$. Kondo ridges appear as bright horizontal lines at $V=0$ for $N=1$ and $N=3$ electrons. (c) Conductance of the Kondo region at zero bias as a function of $V_\textrm{g}$ for several different temperatures between 16 and 780 mK. Two Kondo ridges at $N=1$ and $N=3$ are clearly visible. The differential conductance and the current noise obtained at the gate voltage positions indicated by $\bigtriangledown$ and $\bullet$ are shown in Figs.~\ref{FerrierNatPhysFig2noise}(a) and (b), respectively. (d) Fano factor extracted from the linear part of the current noise in the regime $eV\ll k_\textrm{B}T_\textrm{K}$ ($I\leq 5$ nA) using the definition $S = 2eF \vert I\vert$. Figures are reproduced from Ref.~[\onlinecite{FerrierNatPhys2016}].
{\copyright} (2015) Nature Publishing Group.} \label{FerrierNatPhysFig2} \end{figure} In Fig.~\ref{FerrierNatPhysFig2}(c), the horizontal and vertical axes of the graph represent $V_\textrm{g}$ and the equilibrium conductance of the QD, respectively. At 780 mK, four peaks appear in the conductance as $V_\textrm{g}$ is increased. This behavior indicates that $N$ changes one by one such that $N=0 \rightarrow 1\rightarrow 2\rightarrow 3$. Now, in the $N=1$ and $N=3$ regions, lowering the temperature from 780 to 16 mK leads to an increase in the conductance, signaling the Kondo effect. On the other hand, in the $N=2$ region, which is in between, the conductance remains small and almost temperature-independent, meaning that the QD is in an ordinary Coulomb blockade. The Kondo effect is usually expected when $N$ is odd, which is consistent with the observation. It is also important to note that for $N=3$, the conductance at 16 mK almost reaches the quantized conductance $2e^2/h$. This value indicates the unitary limit. The detailed analysis of the temperature dependence of the conductance reveals that $U=6\pm 0.5$~meV, $\Gamma = 1.8 \pm 0.2$~meV, and $T_\textrm{K} = 1.6 \pm 0.05$~K at the electron-hole symmetry point for the case of $N=3$ [indicated by $\bullet$ in Fig.~\ref{FerrierNatPhysFig2}(c)]~\cite{FerrierNatPhys2016}. The unitary limit is expected to occur in the $N=1$ region as well. However, the conductance does not rise above $0.85 \times 2e^2/h$ even at the lowest temperature. In this case, the coupling strength between the left lead and the QD is different from that between the right lead and the QD, namely the lead-QD coupling is asymmetric ($v_\textrm{L} \neq v_\textrm{R}$). The variation of $V_\textrm{g}$ to change $N$ may affect the spatial distribution of the wave function, and consequently, the coupling strength between the QD and the leads would change. The unitary limit has occurred for $N=3$, which, in hindsight, implies that the two couplings are almost the same. \paragraph{Shot noise in Coulomb blockade regime} We now discuss the results of the shot-noise measurements. First, we examine the simple case where the Kondo effect does not occur. The gate voltage is set to the location indicated by $\bigtriangledown$ in Fig.~\ref{FerrierNatPhysFig2}(c), where conventional Coulomb blockade ($N=2$) occurs. In Fig.~\ref{FerrierNatPhysFig2noise}(a), the horizontal axis shows the current $\langle I\rangle$ flowing through the QD, the left vertical axis shows the differential conductance $G$, and the right vertical axis shows the current noise $S$. Regarding $S$, we plot the value after subtracting the contribution of the thermal noise as usual (see Sect.~\ref{subsub:generalshot}). $G$ is almost independent of $\langle I\rangle$, reflecting that the QD is in Coulomb blockade. Clearly, the current noise is proportional to the absolute value of the current $\langle I\rangle$, that is, $S\propto \vert \langle I\rangle\vert$. \begin{figure}[tbhp] \center \includegraphics[width=8cm]{ShotReviewFig28.eps} \caption{(Color online) (a) Conductance (black line, left axis) and noise (red dots, right axis) as a function of $\langle I \rangle$ for the Coulomb blockade region ($N=2$). $V_\textrm{g}$ is set to the location indicated by $\bigtriangledown$ in Fig.~\ref{FerrierNatPhysFig2}(c). The noise is linear in $\vert \langle I \rangle\vert$, with a slope around $2e$. (b) Conductance (black line, left axis) and noise (red dots, right axis) as a function of $\langle I \rangle$ on the Kondo ridge ($N=3$).
$V_\textrm{g}$ is set to the location indicated by $\bullet$ in Fig.~\ref{FerrierNatPhysFig2}(c). The slope of $S$ around $I=0$ is almost zero owing to the perfect transmission ($F=0.06$). (c) The Kondo-effect-induced shot noise $S_\textrm{K}$ as a function of the backscattered current $I_\textrm{b}$ is plotted for the data shown in (b). Around $I_\textrm{b}=0$, $S_\textrm{K} \propto \vert I_\textrm{b} \vert$ holds. Figures are reproduced from Ref.~[\onlinecite{FerrierNatPhys2016}]. {\copyright} (2015) Nature Publishing Group.} \label{FerrierNatPhysFig2noise} \end{figure} The Fano factor $F=S/2e\vert \langle I\rangle\vert$ obtained from the shot-noise measurements is shown in Fig.~\ref{FerrierNatPhysFig2}(d) as a function of $V_\textrm{g}$. In the Coulomb blockade regime ($N=0$ and $2$), $F\sim 1$: the shot noise is in the Poisson limit, as expected for a common tunneling barrier. The overall behavior of the Fano factor in Fig.~\ref{FerrierNatPhysFig2}(d) is just an upside-down version of the behavior of the conductance at the lowest temperature, 16~mK, shown in Fig.~\ref{FerrierNatPhysFig2}(c). This is consistent with the theoretically expected behavior of $F=1-{\cal T}$. In Sect.~\ref{subsub:QD}, we introduced several experimental results regarding the Fano factor in QDs. In particular, we raised the possibility of electron bunching leading to $F>1$. The present results show that for a QD in the resonant-tunneling regime, the noise behaves exactly as predicted by the theory described in Sect.~\ref{subsec:noise_in_quantum_transport}, even if the QD has a fixed number of electrons because of the charging effect. This agreement also reflects coherent electron transport through the QD. \paragraph{Shot noise in the Kondo state} Figure~\ref{FerrierNatPhysFig2noise}(b) shows the conductance and shot noise obtained at the gate voltage marked with $\bullet$ shown in Fig.~\ref{FerrierNatPhysFig2}(c). This gate voltage corresponds to the vicinity of the electron-hole symmetry point of the Kondo state ($N=3$). When the current is around zero ($\vert I \vert <2$ nA), the conductance is $2e^2/h$, reflecting that the Kondo state is fully developed. In this case, the current noise $S$ is almost zero, and thus the Fano factor is close to zero [see Fig.~\ref{FerrierNatPhysFig2}(d)]. The absence of shot noise means that electrons can pass through the QD as if there were no scatterer at all. Consequently, this situation is similar to a QPC with a quantized conductance of $2e^2/h$, where the shot noise disappears. When the current is close to zero, the Kondo state is a ``non-viscous'' liquid that does not reflect any electrons. As shown in Fig.~\ref{FerrierNatPhysFig2noise}(b), the conductance decreases rapidly as the current $\vert I \vert$ is increased. We consider the high-bias region within the range satisfying $k_\textrm{B} T_\textrm{K}/2<\vert eV_\textrm{sd} \vert < k_\textrm{B} T_\textrm{K}$. In Fig.~\ref{FerrierNatPhysFig2noise}(b), the conductance at $ \vert I\vert=10$~nA is almost half ($0.5 \times 2e^2/h$) of the equilibrium value. As electrons are constantly injected, the Kondo state gradually breaks down, making it harder for electrons to flow. Correspondingly, backscattering sets in. It is also evident that the noise increases as this backscattered current increases. For more quantitative analysis, we utilize the backscattered current $I_\textrm{b} = \frac{2e^2}{h}V-I$ as we did in Eq.~(\ref{eq:backscattered}).
Additionally, we define the increase of the shot noise as \begin{equation} S_\textrm{K} = S -2eF \vert \langle I\rangle\vert. \label{eq_def_SK} \end{equation} The second term $-2eF \vert \langle I\rangle\vert$ is intended to subtract the shot-noise contribution that does not originate from the Kondo effect, but instead is caused by slight lead-dot asymmetry. In this way, we extract the shot noise purely due to the Kondo effect discussed in Eq.~(\ref{Kondo_shot_scattering}). In Fig.~\ref{FerrierNatPhysFig2noise}(c), the noise $S_\textrm{K}$ is plotted as a function of the backscattered current $\langle I_\textrm{b} \rangle$. We obtain $S_\textrm{K}/2e \vert \langle I_\textrm{b} \rangle\vert =1.7\pm 0.1$, thus $e^*/e=1.7\pm 0.1$. This value is consistent with $5/3$ [Eq.~(\ref{eq:5_3})] and validates the two-particle backscattering process in the Kondo regime. In addition, we derive $R=1.95 \pm 0.1$ from Eq.~(\ref{eq_estar_wilson}), which is consistent with the value expected from the experiment ($U=6 \pm 0.5$~meV and $\Gamma = 1.8 \pm 0.2$~meV). Figure~\ref{Fig_FerrierNatPhys2016Fig3} presents the behavior of the Wilson ratio expected from the theory as a function of $U/\Gamma$. The experimental result corresponds to the mark $\square$ [SU(2)], which agrees with the theoretical prediction. Since $R=2$ is the limit of strong correlations in the Kondo effect, the result here clearly verifies that the present QD is genuinely close to a quantum liquid in the strong correlation limit. \begin{figure}[!b] \center \includegraphics[width=7.5cm]{ShotReviewFig29.eps} \caption{(Color online) Theoretical Wilson ratio for the SU(2) and SU(4) Kondo effects is shown by solid curves as a function of $U/\Gamma$~\cite{SakanoPRB2011}. The experimental results plotted with the mark $\square$ are almost on the theoretical curves. This consistency indicates that our experimental result agrees with the Fermi liquid theory. The yellow part of the graph represents the region of universality: all the properties depend on a single parameter $T_\textrm{K}$~\cite{OguriJPSJ2005}. Reprinted from Ref.~[\onlinecite{FerrierNatPhys2016}]. {\copyright} (2015) Nature Publishing Group.} \label{Fig_FerrierNatPhys2016Fig3} \end{figure} Note that in the $N=1$ region, the unitary limit is not reached due to asymmetric lead-dot coupling. We obtained $e^*/e=1.2\pm 0.08$ from the shot noise in this region. The lead-dot coupling asymmetry $\delta$ is defined by $G(V=0)= (1-\delta )2e^2/h$. The result of the conductance measurement is $G(V=0)=0.85(2e^2/h)$, yielding $\delta=0.15$. The experimental result $1.2\pm 0.08$ is consistent with the theoretical prediction $e^*/e=5/3-(8/3)\delta =1.26$~\cite{MoraPRB2009_2}. \paragraph{Intuitive picture of the enhanced noise} Why is the effective charge larger than 1 in the limit of strong electron correlation? It is essentially impossible to describe the quantum many-body effect intuitively with a classical picture. However, we believe that such a description can sometimes provide helpful physical intuition, and we try it here. The experimental achievement is an observation of the two-particle backscattering process, which we explained in Fig.~\ref{KondoScatteringSchematicFig}. Consider this phenomenon very intuitively from the standpoint of the electrons passing through the QD. In the non-equilibrium state, electrons constantly injected from one lead escape into the other, feeling strong interaction as they pass through the quantum liquid. 
\paragraph{Intuitive picture of the enhanced noise} Why is the effective charge larger than 1 in the limit of strong electron correlation? It is essentially impossible to describe the quantum many-body effect intuitively with a classical picture. However, we believe that such a description can sometimes provide helpful physical intuition, and we try it here. The experimental achievement is an observation of the two-particle backscattering process, which we explained in Fig.~\ref{KondoScatteringSchematicFig}. Consider this phenomenon very intuitively from the standpoint of the electrons passing through the QD. In the non-equilibrium state, electrons constantly injected from one lead escape into the other, feeling a strong interaction as they pass through the quantum liquid. As seen from the decrease in the conductance, they collide with the quantum liquid to cause backscattering into the original lead. Thus, to the electrons, the quantum liquid is no longer felt as a ``non-viscous'' state but as a ``viscous'' entity. This viscosity due to the many-body interaction results in the ``two-electron'' bunching~\cite{ZarchinPRB2008}, like a ``water splash'', and increases the current noise [see Fig.~\ref{KondoScatteringSchematicFig}(e)]. This phenomenon is unique to the strongly correlated quantum liquid in the non-equilibrium: the Kondo effect creates the quantum liquid, but this fact is somehow hidden in the equilibrium, as the Fermi liquid theory tells us. Only in the non-equilibrium state does the true nature of the quantum liquid manifest itself. \subsubsection{SU(2)-SU(4) crossover} \label{subsub:su2-su4} The significance of studying the Kondo effect in QDs lies not only in exploring its non-equilibrium properties but also in the potential to achieve more exotic Kondo effects using other degrees of freedom. As mentioned earlier, in CNT QDs, the SU(4) Kondo effect has been realized using orbital degrees of freedom~\cite{FerrierNatPhys2016,ChoiPRL2005,Jarillo-HerreroNature2005,DelattreNatPhys2009,LairdRMP2015}. One of the authors experimentally verified $e^*/e=3/2$ predicted by Eq.~(\ref{eq_estar_SUn}) with an accuracy of $\pm 0.1$ by realizing the SU(4) Kondo effect close to the unitary limit~\cite{FerrierNatPhys2016}. The obtained Wilson ratio is consistent with the theory, as plotted by the $\square$ mark in Fig.~\ref{Fig_FerrierNatPhys2016Fig3}. Consider what happens if a sufficiently strong magnetic field is applied to lift only the spin degeneracy in the SU(4) Kondo state realized at zero magnetic field. Naively, it would lead to a novel SU(2) Kondo state involving only the orbital degeneracy. In actuality, the situation is not so simple because the orbital state of the CNT is also affected by the magnetic field. Nevertheless, the crossover from SU(4) to SU(2) occurs by applying the magnetic field to the CNT at a specific angle~\cite{TerataniJPSJ2016,FerrierPRL2017,TerataniPRB2020}. Figure~\ref{Fig_FerrierPRL2017}(a) shows the conductance when the gate voltage is varied to change $N$ in the QD. Around 0~T, the conductance at $N=2$ reaches $1.85(2e^2/h)$, which is close to $2(2e^2/h)$, the conductance of the unitary limit of the SU(4) Kondo effect. The dashed curves in Fig.~\ref{Fig_FerrierPRL2017}(a) show the results of the numerical renormalization group (NRG) calculations. From these theoretical calculations and conductance measurements, we confirm that the SU(4) Kondo state appears in $N=1, 2$, and $3$. \begin{figure}[!t] \center \includegraphics[width=8.5cm]{ShotReviewFig30.eps} \caption{(Color online) (a) Comparison of the zero-bias conductance between the experiment (solid curves) and the numerical renormalization group (NRG) calculations (dashed curves) for several magnetic fields. $\blacktriangledown$ shows the electron-hole symmetry point at $N=2$. (b) Kondo-effect-induced noise $S_\textrm{K}$ as a function of the backscattered current $I_K$ [$=I_\textrm{b}$ defined by Eq.~(\ref{eq:backscattered})] at $B = 0$~T [SU(4) state] and $B = 13$~T [SU(2) state]. (c) The filled circles show the effective charge $e^*/e$ as a function of $R$, which quantifies the strength of quantum fluctuations. The three square symbols represent the theoretical prediction for SU(4), SU(2), and noninteracting particles.
The dashed curve is the extended theoretical prediction based on Eq.~(\ref{eq_estar_SUn}). Figures are reproduced from Ref.~[\onlinecite{FerrierPRL2017}]. {\copyright} (2017) American Physical Society.} \label{Fig_FerrierPRL2017} \end{figure} By applying a magnetic field at a specific angle to the CNT QD in this situation, we found that the SU(4) Kondo effect can gradually transform into an SU(2) Kondo effect~\cite{FerrierPRL2017}. Experimental results and theoretical simulations of the conductance in several magnetic fields are shown in Fig.~\ref{Fig_FerrierPRL2017}(a). The results reveal that the Kondo effect at $N=1$ and $N=3$ disappears but the Kondo effect at $N=2$ remains. The conductance at a high magnetic field of 12~T is $2e^2/h$, as expected in the unitary limit of the SU(2) Kondo effect. Thus, at 12~T, a perfect SU(2) Kondo state is realized. This SU(2) Kondo state is a special one with two electrons arising from the hybridization of orbital and spin degrees of freedom~\cite{TerataniJPSJ2016,TerataniPRB2020}. The shot noise is measured at the gate voltage where the electron-hole symmetry holds [indicated by $\blacktriangledown$ in Fig.~\ref{Fig_FerrierPRL2017}(a)], as shown in Fig.~\ref{Fig_FerrierPRL2017}(b). At zero magnetic field, the SU(4) Kondo state exists. The noise purely associated with the Kondo effect, $S_\textrm{K}$ defined by Eq.~(\ref{eq_def_SK}), is plotted as a function of the backscattered current $I_\textrm{b}$ [denoted as $I_K$ in Fig.~\ref{Fig_FerrierPRL2017}(b)]. There is a clear linear relationship from which $e^*/e=1.4 \pm 0.1$ is obtained: this value is consistent with $3/2$ expected for the SU(4) Kondo effect. The relationship between $S_\textrm{K}$ and $I_\textrm{b}$ in a strong magnetic field (13~T) is also shown in the same figure. This yields $e^*/e=1.7 \pm 0.1$ as expected for the SU(2) Kondo effect, which guarantees that the Kondo effect in high magnetic fields is indeed the SU(2) Kondo effect. Importantly, as mentioned already, this SU(2) Kondo effect is no longer the conventional one. Nevertheless, the fact that the result is consistent with the theoretically expected value of $5/3$ exemplifies the universality of Kondo physics. This result means that we can control the symmetry of the Kondo effect from SU(4) to SU(2) by varying the magnetic field. In Fig.~\ref{Fig_FerrierPRL2017}(c), the effective charge $e^*/e$ obtained from the shot noise is plotted as a function of $R$ derived from the NRG simulation of the conductance shown in Fig.~\ref{Fig_FerrierPRL2017}(a). In a free electron system, $R=1$ and $e^*/e=1$. As $R$ increases, many-body correlations develop, two-particle backscattering from the QD occurs, and $e^*/e$ grows. The limit of the effective charge is $e^*/e=5/3$ at $R=2$. By changing the magnetic field, we can see how the Kondo effect evolves continuously from SU(4) to SU(2). According to the theory of the SU($n$) Kondo effect~\cite{SakanoPRB2011}, in the Kondo limit $U \rightarrow \infty$, the Wilson ratio can be expressed as $R=1+1/(n-1)$ and the effective charge as $e^*/e=(n+8)/(n+4)$. Therefore, from Eq.~(\ref{eq_estar_SUn}), $e^*/e =[1+9(R-1)]/[1+5(R-1)]$ is expected. This is plotted in Fig.~\ref{Fig_FerrierPRL2017}(c) by the dashed curve. It explains the experimental results well, establishing a convincing link between the nonlinear noise and the Wilson ratio.
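For completeness, we note how this expression follows from the two Kondo-limit relations quoted above. Eliminating $n$ via $n=1+1/(R-1)$ yields \begin{equation} \frac{e^*}{e}=\frac{n+8}{n+4}=\frac{9+\frac{1}{R-1}}{5+\frac{1}{R-1}}=\frac{1+9(R-1)}{1+5(R-1)}, \end{equation} which reproduces $e^*/e=3/2$ at $R=4/3$ ($n=4$) and $e^*/e=5/3$ at $R=2$ ($n=2$).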
\subsubsection{Short summary} Throughout this Sect.~\ref{Subsec:KondoNoise}, we have discussed the shot-noise study in a QD in the Kondo regime. The Kondo effect creates a strongly correlated quantum liquid, but this is unseen in the equilibrium state as the transmission probability is unity. However, by injecting electrons into the QD to drive the system into the non-equilibrium state, the nontrivial behavior of the shot noise is revealed, and the true nature of the quantum liquid with residual interaction emerges via two-particle backscattering. Shot-noise measurements were also successfully used to explore the symmetry crossover of the Kondo effect. \subsection{Noise in quantum Hall systems} \label{subsec:qhe} The detection of fractional quasiparticles in fractional quantum Hall (FQH) systems is one of the most epoch-making experimental accomplishments in mesoscopic physics~\cite{LaughlinPRL1983,SaminadayarPRL1997,de-PicciottoNature1997}. Current-noise measurements proved the effective charge $e/3$ of quasiparticles tunneling through the FQH systems at Landau-level filling factors $\nu=1/3$ and $2/3$. After the discovery of $e/3$ quasiparticles, current-noise measurements detected various fractional quasiparticles in other FQH systems~\cite{ReznikovNature1999,ChungPRL2003,DolevNature2008}. Furthermore, they have also revealed various phenomena peculiar to quantum Hall (QH) systems, such as anyonic quantum statistics of $e/3$ quasiparticles~\cite{BartolomeiScience2020}, Josephson relations for fractional charges~\cite{KapferScience2019,BisogninNatCom2019}, heat transport along QH edge channels~\cite{BidNature2010,DolevPRL2011,GrossPRL2012,InoueNatCommun2014,SaboNatPhys2017,BanerjeeNature2017,BanerjeeNature2018,CohenNatCommun2019,JezouinScience2013,SivreNatPhys2017,SivreNatCommun2019}, and the Tomonaga-Luttinger (TL) liquid nature of QH edge channels~\cite{InouePRL2014}. These observations unambiguously demonstrate the power of current-noise measurements for investigating not only QH systems but also other topological quantum many-body systems in the near future. This section summarizes the current-noise measurements performed on QH systems after the early fractional-charge-detection experiments in 1997~\cite{SaminadayarPRL1997,de-PicciottoNature1997}. In the following, we overview the basics of current-noise measurements on a QH device and then review recent topics. \subsubsection{Current-noise measurements based on chiral edge transport} \label{sec:current_noise_chiral_edge} When a 2DES is subjected to a strong perpendicular magnetic field, the Hall conductance of the system takes an integer or a rational fractional value in units of $e^2/h$. The former is the integer quantum Hall (IQH) effect, caused by the Landau-level formation and the Anderson localization~\cite{vonKlitzingPRL1980}. The latter is the FQH effect, resulting from the energy gap opening due to the many-body Coulomb interaction. Both of these effects reflect the incompressibility of the bulk 2DES, and the resulting transport is captured by the Landauer-B\"uttiker edge picture~\cite{DattaETMS}. \begin{figure}[bt] \center \includegraphics[width=8.5cm]{ShotReviewFig31.eps} \caption{(Color online) (a) Schematic of Landau levels in the $\nu=2$ IQH system. These levels are lifted by a confinement potential and cross the Fermi energy at the sample edge. (b) Chiral edge channels in the $\nu=2$ IQH system. The red and blue arrows indicate the spin-up and spin-down channels, respectively. (c) Schematic of a shot-noise measurement in the $\nu=1$ IQH system. Electronic current fed from $\Omega_1$ is partitioned and generates shot noise at a narrow constriction formed by applying a split-gate voltage.
The current noise in $\Omega_2$ and $\Omega_4$ is measured to evaluate the shot noise.} \label{fig5_x1} \end{figure} QH edge channels are unidirectional one-dimensional (1D) electronic states arising at the edge of QH regions. Figure~\ref{fig5_x1}(a) shows a schematic of the $\nu=2$ IQH state in a 2DES confined by an electrostatic potential. The lowest Landau levels of spin-up and spin-down electrons are filled in the bulk region, while the confinement potential lifts them at the sample edges to cross the Fermi energy, forming conductive edge channels. The red and blue arrows in Fig.~\ref{fig5_x1}(b) are schematics of the spin-up and spin-down edge channels, respectively. Because electrons coherently flow along an edge channel without backscattering, the channel is sometimes regarded as an electronic analog of an optical laser path; one can construct Fabry-P\'erot~\cite{ChamonPRB1997,CaminoPRL2005} or Mach-Zehnder~\cite{JiNature2003} interferometers using the edge channels. Thus, an IQH edge channel is a promising platform for fermion-quantum-optics experiments (see Sect.~\ref{sec:fermion_optics})~\cite{GrenierModPhsLett,BocquillonAnnPhys,RousselPhysStatSolidiB,GlattliPhysStatSolidiB}. A QPC fabricated in a QH system works as a beam splitter for an edge channel. Let us consider a QPC formed in the $\nu=1$ IQH system, as schematically shown in Fig.~\ref{fig5_x1}(c). A source-drain bias $V_{\rm{in}}$ lifts the Fermi energy of the edge channel stemming from the ohmic contact $\Omega_{\rm{1}}$ to $E_{\rm{F}}=eV_{\rm{in}}$. On the other hand, $\Omega_{\rm{3}}$ is connected to the ground so that the Fermi energy of the corresponding channel is $E_{\rm{F}}=0$. These two channels approach each other at the QPC to cause stochastic electron tunneling between them, generating shot noise. The outputs flow along the transmitted and reflected channels to reach the contacts $\Omega_{\rm{2}}$ and $\Omega_{\rm{4}}$, respectively, raising current noise in these contacts. Importantly, in this setup, the current noise reflects only the tunneling process occurring between the channels from $\Omega_{\rm{1}}$ and $\Omega_{\rm{3}}$. The channels from $\Omega_{\rm{2}}$ and $\Omega_{\rm{4}}$ do not influence the measurement since they are well separated from the other channels by wide incompressible bulk regions. \begin{figure*}[t] \center \includegraphics[width=17cm]{ShotReviewFig32.eps} \caption{(Color online) (a) Shot-noise data for the $e/3$ quasiparticle tunneling in the $\nu=1/3$ FQH state. (b) Shot-noise data in the $\nu=2/3$ state. (c) Shot-noise data of the $e/4$ quasiparticle tunneling through the $\nu=5/2$ state. Panels (a) and (c) are reprinted with permission from Ref.~[\onlinecite{de-PicciottoNature1997}] ({\copyright} 1997 Springer Nature) and Ref.~[\onlinecite{DolevNature2008}] ({\copyright} 2008 Springer Nature), respectively. Panel (b) is reprinted with permission from Ref.~[\onlinecite{SaminadayarPRL1997}] ({\copyright} 1997 American Physical Society).} \label{fig5_x2} \end{figure*} Within the single-particle picture, the current-noise auto-correlation PSD in $\Omega_2$ is described as [see Eq.~(\ref{auto_Sbb})] \begin{equation} \begin{split} S_{22}=2e\langle I_{\rm{in}}\rangle \times {\cal{T}}(1-{\cal{T}}), \label{qhe_s22} \end{split} \end{equation} at zero temperature. Similarly, the cross-correlation PSD between $\Omega_2$ and $\Omega_4$ is given by [see Eq.~(\ref{crosscorr_Sbc})] \begin{equation} \begin{split} S_{24}=-2e\langle I_{\rm{in}}\rangle\times {\cal{T}}(1-{\cal{T}}).
\label{qhe_s24} \end{split} \end{equation} Here, ${\cal{T}}$ is the transmission probability through the QPC, and $\langle I_{\rm{in}}\rangle = GV_{\rm{in}}$ is the impinging current fed from $\Omega_{\rm{1}}$. The factor ${\cal{T}}(1-{\cal{T}})$ reflects the partitioning process at the QPC. These equations can be modified using the tunneling current $\langle I_{\rm{T}}\rangle = {\cal{T}} \times \langle I_{\rm{in}}\rangle $ through the QPC as \begin{equation} \begin{split} S_{22}=2e\langle I_{\rm{T}}\rangle\times \frac{{\cal{T}}(1-{\cal{T}})}{{\cal{T}}}=2e\langle I_{\rm{T}}\rangle(1-{\cal{T}}), \label{qhe_s22_2} \end{split} \end{equation} \begin{equation} \begin{split} S_{24}=-2e\langle I_{\rm{T}}\rangle\times \frac{{\cal{T}}(1-{\cal{T}})}{{\cal{T}}}=-2e\langle I_{\rm{T}}\rangle(1-{\cal{T}}). \label{qhe_s24_2} \end{split} \end{equation} Equations (\ref{qhe_s22_2}) and (\ref{qhe_s24_2}) are derived from Eqs.~(\ref{shot_47}) and (\ref{FanoTheory}) in Sect.~\ref{sec:current_noise} with the Fano factor $F=1-{\cal{T}}$. The negative sign of Eq.~(\ref{qhe_s24_2}) reflects the negative correlation between the two outputs due to the binomial partitioning at the QPC [see Fig.~\ref{fig3_5}(c)]. Thus, one can perform shot-noise measurements by focusing on scattering processes between selected channels. This is true not only in the $\nu=1$ IQH system but also in other QH systems at different filling factors. When a QH system has two or more channels at an edge, e.g., as shown in Fig.~\ref{fig5_x1}(b) for the $\nu=2$ case, the complexity of the system increases due to the increased degrees of freedom and the inter-channel Coulomb interaction and tunneling. However, even in such multiple-channel cases, one can measure the shot-noise generation between selected edge channels. \subsubsection{Fractional charge of tunneling quasiparticles} \label{sec:fractional_charge_tunneling} The presence of fractionally charged quasiparticles was one of the most important predictions in early theories of the FQH effect~\cite{LaughlinPRL1983}. Whereas the charging energy of an antidot measured in an FQH system suggested the presence of fractional quasiparticles~\cite{GoldmanScience1995}, direct evidence was obtained by shot-noise measurements in the $\nu=1/3$ and $\nu=2/3$ systems in 1997~\cite{de-PicciottoNature1997,SaminadayarPRL1997}. When quasiparticles tunnel through an FQH system stochastically, namely without correlation, the zero-temperature shot noise is described as \begin{equation} \begin{split} S=2e^*\langle I_{\rm{B}}\rangle. \label{qhe_s1/3} \end{split} \end{equation} Here, $\langle I_{\rm{B}}\rangle$ is the backscattered current, and $e^*$ is the effective charge of tunneling quasiparticles. Equation (\ref{qhe_s1/3}) is modified at a finite temperature $T_{\rm{e}}$ due to the crossover between the thermal noise and the shot noise as [see Eq.~(\ref{shot_calib_finite})]~\cite{MartinPRB1992,MartinBook} \begin{equation} \begin{split} S=2e^*\langle I_{\rm{B}}\rangle\times \left[\coth\left(\frac{e^*V}{2k_{\rm{B}}T_{\rm{e}}}\right)-\frac{2k_{\rm{B}}T_{\rm{e}}}{e^*V}\right]. \label{qhe_s1/3_2} \end{split} \end{equation} Figures~\ref{fig5_x2}(a) and~\ref{fig5_x2}(b) compare the experimental shot-noise data at $\nu=1/3$ and $\nu=2/3$, respectively, with the theoretical curves simulated using Eq.~(\ref{qhe_s1/3_2})~\cite{de-PicciottoNature1997,SaminadayarPRL1997}.
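For orientation, the following minimal Python sketch evaluates Eq.~(\ref{qhe_s1/3_2}) for the electron and $e/3$ tunneling charges; the bias range, backscattered current, and electron temperature are illustrative assumptions, not the parameters of the actual experiments.
\begin{verbatim}
import numpy as np

e, kB = 1.602176634e-19, 1.380649e-23   # C, J/K

def crossover_noise(V, I_B, e_star, T_e):
    # S = 2 e* I_B [coth(e* V / 2 kB T_e) - 2 kB T_e / (e* V)]
    x = e_star * V / (2.0 * kB * T_e)
    return 2.0 * e_star * I_B * (1.0 / np.tanh(x) - 1.0 / x)

V = np.linspace(1e-6, 100e-6, 200)      # bias (V), avoiding V = 0
for q in (e, e / 3.0):                  # compare tunneling charges
    S = crossover_noise(V, I_B=0.1e-9, e_star=q, T_e=0.03)
    print(f"e* = {q / e:.2f} e: S(100 uV) = {S[-1]:.2e} A^2/Hz")
\end{verbatim}
At high bias ($e^*V \gg k_\textrm{B}T_\textrm{e}$) the bracket approaches unity and the Poissonian form of Eq.~(\ref{qhe_s1/3}) is recovered.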
When the backscattered current $\langle I_{\rm{B}}\rangle$ increases with the source-drain bias, the shot noise also increases, agreeing with the theoretical curves for $e^* = e/3$. These observations are the hallmark of Laughlin's $e/3$ quasiparticles in FQH systems. After these observations, shot-noise measurements demonstrated various fractional charges, e.g., $e/5$ quasiparticles in the $\nu=2/5$ state~\cite{ReznikovNature1999,ChungPRL2003} and $e/7$ quasiparticles in the $\nu=3/7$ state~\cite{ChungPRL2003}, clearly exhibiting the significance of shot-noise measurements for observing exotic charge carriers. In addition to the odd-denominator fractional charges, the $e/4$ charge of quasiparticles in the $\nu=5/2$ even-denominator FQH system was observed in 2008~\cite{DolevNature2008}. As discussed in the next subsection, theories predict the non-Abelian nature of the $\nu=5/2$ state as a candidate for fault-tolerant quantum computing. The $e/4$ charge is a necessary condition for the non-Abelian nature; hence, its experimental confirmation is essential. Figure~\ref{fig5_x2}(c) shows the shot-noise data obtained from a $\nu=5/2$ system formed in a split-gate device. The data agree well with the calculation assuming $e^*=e/4$ charges, signaling the presence of $e/4$ quasiparticles in the $\nu=5/2$ state. Note that the $e/4$ charge was also confirmed by analysis of the bias dependence of the direct tunneling current~\cite{RaduScience2008} and by measurement of the charging energy of microscopic $\nu=5/2$ regions~\cite{VenkatachalamNature2011}. In the above experiment [Fig.~\ref{fig5_x2}(a)], the measurements were performed in the weak-backscattering limit (transmission probability through the constriction ${\cal{T}}\simeq 1$), where a backscattering event corresponds to a quasiparticle tunneling process through the FQH region [see Fig.~\ref{fig5_x3}(a)~\cite{ChamonPRB1995}]. In this limit, the tunneling process is so infrequent that there is no correlation between the quasiparticles, allowing one to compare the experimental data with simulations using Eq.~(\ref{qhe_s1/3_2}). A similar stochastic tunneling process occurs in the strong-backscattering limit (${\cal{T}}\simeq 0$) [Fig.~\ref{fig5_x3}(b)~\cite{ChamonPRB1995}], where a forward scattering event is considered as a tunneling process through the depleted region. Because the tunneling quasiparticles are electrons in this case, we observe the $e^*=e$ tunneling charge~\cite{GriffithsPRL2000}. \begin{figure}[!b] \center \includegraphics[width=8cm]{ShotReviewFig33.eps} \caption{(a) Schematic of fractional-charge tunneling in the weak-backscattering limit. (b) Electron tunneling in the strong-backscattering limit. Reprinted figures with permission from Ref.~[\onlinecite{ChamonPRB1995}]. {\copyright} (1995) American Physical Society.} \label{fig5_x3} \end{figure} In intermediate backscattering regimes, tunneling events occur so frequently that the correlations between tunneling events become relevant. In this case, one cannot use Eq.~(\ref{qhe_s1/3_2}) for evaluating the effective charge of tunneling quasiparticles~\cite{FendleyPRL1995,FendleyPRB1996,TrauzettelPRL2004}. Conversely, however, Eq.~(\ref{qhe_s1/3_2}) allows us to evaluate the correlation of tunneling quasiparticles using the ``effective charge'' $e^*$ as an index. Indeed, $e^*$ continuously varies with the backscattering strength when shot-noise data are analyzed using Eq.~(\ref{qhe_s1/3_2})~\cite{GriffithsPRL2000,ChungPRL2003,HeiblumBook,FeldmanPRB2017}.
A quasiparticle tunneling between FQH edge channels can be modeled as tunneling between chiral Tomonaga-Luttinger (TL) liquids~\cite{KanePRL1994,ChamonPRB1995,ChamonPRB1996,FendleyPRL1995,FendleyPRB1996,MartinBook}. Within this model, both the dc transport and shot-noise properties can be calculated analytically over the entire range of the backscattering strength~\cite{FendleyPRL1995,FendleyPRB1996}. The TL-liquid nature of FQH edge channels manifests itself in the power-law behaviors observed in transport properties~\cite{ChangRevModPhys2003}. Figure~\ref{fig5_x4}(a) shows the bias $V_{\rm{DS}}$ dependence of the differential conductance $g$ through a constriction formed in the $\nu = 1/3$ state~\cite{ChungPRB2003}. The several black curves, showing the power-law behaviors of $g$, were measured at different split-gate voltages applied to form the constriction and modulate the backscattering strength~\cite{ChungPRB2003}. At a fixed gate voltage, $g$ varies with $V_{\rm{DS}}$ from the strong-backscattering regime ($g \simeq 0$) to the weak-backscattering regime ($g \simeq 0.8 \times e^2/3h$). \begin{figure}[tb] \center \includegraphics[width=7cm]{ShotReviewFig34.eps} \caption{(a) Differential conductance through a narrow constriction formed in the $\nu=1/3$ state. Each trace corresponds to the result measured at a different gate voltage that varies the backscattering strength. (b) Source-drain bias dependence of the shot noise at several temperatures. Reprinted figures with permission from Ref.~[\onlinecite{ChungPRB2003}]. {\copyright} (2003) American Physical Society.} \label{fig5_x4} \end{figure} The nonlinear bias dependence of the backscattering strength gives rise to irregular behavior of the shot noise. Figure~\ref{fig5_x4}(b) shows the shot-noise data obtained from the same $\nu = 1/3$ FQH device as that in Fig.~\ref{fig5_x4}(a)~\cite{ChungPRB2003}. The shot noise below 100~mK shows an irregular increase with $V_{\rm{DS}}$: a steep increase at $0 < V_{\rm{DS}} < 20~\mu$V and a slowing down of the increase at $V_{\rm{DS}} > 20~\mu$V. This behavior qualitatively agrees with a simulation using the TL-liquid model~\cite{TrauzettelPRL2004}, indicating the relevance of the model~\cite{GlattliPhysicaE2000,ChungPRB2003}. The irregular behavior becomes more significant at lower temperatures [see Fig.~\ref{fig5_x4}(b)], consistent with the prediction of the TL-liquid theory, in which the impact of electron correlation becomes more pronounced at low temperatures. While the experimental data qualitatively agree with the predictions of the TL-liquid theory, as discussed above, they disagree with them quantitatively. The disagreement may reflect the difference between an actual device and the ideal point-like scatterer assumed in the TL-liquid theory. For example, additional Coulomb interaction between the channels and unintentional localized states may be responsible for the disagreement~\cite{RosenowPRL2002}. Interestingly, some features of the results of several inter-channel tunneling experiments differ even qualitatively from the predictions of the TL-liquid model~\cite{FendleyPRL1995,FendleyPRB1996}. For example, in the weak-backscattering regime, the conductance through a constriction decreases with increasing bias in experiments, whereas the TL-liquid theory predicts a monotonic increase.
The disagreement may result from the switching of edge configurations between the ones shown in Fig.~\ref{fig5_x3}(a) and Fig.~\ref{fig5_x3}(b)~\cite{RoddaroPRL2005,DolevNature2008,HeiblumBook}, while the TL-liquid model only considers a change in the coupling strength in the latter configuration. More interestingly, an increase in the effective tunneling charge, unexpected in the TL-liquid theory, is observed at low temperatures. Figure~\ref{fig5_x5} shows a representative result observed in the $\nu=2/5$ state, where the effective charge, $e/5$ at 82~mK, is doubled to $2e/5$ at 9~mK~\cite{ChungPRL2003}. Similar increases in the effective charges also occur in other FQH states, e.g., the $\nu=3/7$~\cite{ChungPRL2003} and $\nu=5/2$~\cite{DolevPRB2010} states. Bunching of tunneling quasiparticles has been discussed as a possible cause of these observations. \begin{figure}[tb] \center \includegraphics[width=6cm]{ShotReviewFig35.eps} \caption{Shot noise measured in the $\nu=2/5$ FQH system. The increase in the shot-noise intensity at low temperature suggests the bunching of fractional quasiparticles. Reprinted figure with permission from Ref.~[\onlinecite{ChungPRL2003}]. {\copyright} (2003) American Physical Society.} \label{fig5_x5} \end{figure} We have discussed the tunneling experiments by implicitly assuming that the whole 2DES is in an FQH state. However, because the shot noise reflects the effective charge of tunneling quasiparticles, only the barrier region needs to be in the FQH state for the fractional-charge detection, not the whole sample. Indeed, the $e/4$ charge [see Fig.~\ref{fig5_x2}(c)] was measured in a local $\nu=5/2$ region formed in a bulk $\nu=3$ IQH system~\cite{DolevNature2008}. Such local FQH systems are often observed in split-gate devices~\cite{RoddaroPRL2003,RoddaroPRL2004,RoddaroPRL2005,MillerNatPhys2007}. Here, we introduce a striking example of such quasiparticle-tunneling experiments through a local FQH state~\cite{HashisakaPRL2015}. Figure~\ref{fig5_x6}(a) shows a schematic of a tunneling experiment through a local $\nu=1/3$ state formed in a bulk $\nu=1$ system. A split-gate voltage applied to constrict the $\nu=1$ system decreases the electron density in the constriction, sometimes forming a local $\nu=1/3$ state. When a source-drain voltage is applied across such a constriction, the electronic current flowing between the separated $\nu=1$ regions may be carried by quasiparticle tunneling through the incompressible $\nu=1/3$ region. In this case, we can expect to observe the $e/3$ fractional charge via shot-noise measurements. \begin{figure*}[bt] \center \includegraphics[width=17cm]{ShotReviewFig36.eps} \caption{(Color online) (a) Schematic of $e/3$ quasiparticle tunneling through a local $\nu=1/3$ state formed in a bulk $\nu=1$ system. (b) Split-gate voltage $V_{\rm{g}}$ dependence of $g$ through a local $\nu=1/3$ state measured at $V_{1}=450~\mu$V at several magnetic fields. (c) Source-drain bias $V_1$ dependence of $g$ and (d) $S^I$ obtained at several $V_{\rm{g}}$. (e) Transmission probability $T_1$ dependence of $S^I$ measured at $V_{1}=450~\mu$V. Reprinted figures with permission from Ref.~[\onlinecite{HashisakaPRL2015}]. {\copyright} (2015) American Physical Society.} \label{fig5_x6} \end{figure*} The experiment was performed on a QPC fabricated in a 2DES in an AlGaAs/GaAs heterostructure~\cite{HashisakaPRL2015}.
Figure~\ref{fig5_x6}(b) presents the split-gate voltage $V_{\rm{g}}$ dependence of the differential conductance $g$ at the source-drain bias $V_{1}=450~\mu$V at several magnetic fields. When $V_{\rm{g}}$ decreases from zero, $g$ decreases from $e^2/h$, shows a plateau at $e^2/3h$ around $-1.7~\rm{V}$, and vanishes below $V_{\rm{g}} = -1.9~\rm{V}$. The $e^2/3h$ plateaus signal the formation of a local $\nu=1/3$ state in the constriction. The plateau structure becomes more pronounced at higher magnetic fields, suggesting the increased FQH energy gap of the $\nu=1/3$ state. The $V_{1}$ dependence of $g$ measured near $V_{\rm{g}}=-1.7~\rm{V}$ at 8~T ($e^2/3h$ plateau region), shown in Fig.~\ref{fig5_x6}(c), exhibits conductance suppression near $V_{1} = 0$. Despite the electronic current flowing between the $\nu=1$ regions, the measured zero-bias anomaly reminds us of the TL-liquid nature of the FQH edge channels [see Fig.~\ref{fig5_x4}(a)]. The formation of strip-like FQH edge states at the smooth edge of the $\nu=1$ regions is responsible for the observation~\cite{RoddaroPRL2003,RoddaroPRL2004,ParadisoPRL2012}. In the low-$g$ region at low bias, the transmitted current is carried by electron tunneling through the depleted region between the strip-like FQH edge states [see Fig.~\ref{fig5_x3}(b)]. On the other hand, at high bias, the conductance increases to saturate at $g\simeq e^2/3h$, suggesting that the $\nu=1/3$ state develops over the constriction region, as shown in Fig.~\ref{fig5_x6}(a). While the $e/3$ charge tunneling through the $\nu=1/3$ system has been observed by analyzing the differential conductance~\cite{RoddaroPRL2003,RoddaroPRL2004,RoddaroPRL2005}, shot-noise measurements provide further evidence of it~\cite{HashisakaPRL2015}. Figure~\ref{fig5_x6}(d) shows the shot-noise data measured simultaneously with $g$ presented in Fig.~\ref{fig5_x6}(c). In this measurement, the current-noise cross-correlation $S^I$ between the transmitted and reflected channels was evaluated. The negative sign of $S^I$ originates from the partitioning of the tunneling quasiparticles, as expressed in Eq.~(\ref{qhe_s24}). Whereas the experimental data are close to the theoretical shot-noise curve with $e^*=e$ at low bias, they approach the curve with $e^*=e/3$ at high bias. The latter observation corresponds to the fractional-quasiparticle tunneling picture illustrated in Fig.~\ref{fig5_x6}(a). Figure~\ref{fig5_x6}(e) shows the transmission probability $T_{1}=G\times h/e^2$ dependence of $S^I$ measured at $V_{1}=450~\mu$V, where $G$ is the conductance through the constriction. The shot-noise data near $T_{1}=0$ and $T_{1}=1$ are close to the theoretical curve with $e^*=e$ (dashed black curve), indicating the electron tunneling through the depleted region and the incompressible $\nu = 1$ region, respectively. Meanwhile, the data in the intermediate $T_{1}$ region agree well with the $e^*=e/3$ curve (solid blue curve). The latter result indicates that the charge-transfer process over the broad intermediate $T_{1}$ regime is the stochastic $e/3$ tunneling between the $\nu = 1$ edge channels. The stochasticity, namely the absence of correlation between quasiparticles, can be interpreted as the result of the 1D free-electron-system nature of the $\nu = 1$ edge channels.
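The comparison in Fig.~\ref{fig5_x6}(e) can be mimicked by a simple zero-temperature sketch in which an effective charge $e^*$ rescales the partition noise of Eq.~(\ref{qhe_s24}); this ignores the finite-temperature crossover of Eq.~(\ref{qhe_s1/3_2}), and all numbers are illustrative assumptions.
\begin{verbatim}
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34
I_in = (e**2 / h) * 450e-6              # impinging current at 450 uV bias

T1 = np.linspace(0.0, 1.0, 101)         # transmission probability
for e_star in (e, e / 3.0):             # electron vs e/3 curves
    S_I = -2.0 * e_star * I_in * T1 * (1.0 - T1)
    print(f"e* = {e_star / e:.2f} e: "
          f"S_I(T1 = 0.5) = {S_I[50]:.2e} A^2/Hz")
\end{verbatim}
The $e^*=e/3$ curve is shallower by a factor of three at every $T_1$, which is what distinguishes the two tunneling processes in the measured data.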
The above experimental results demonstrate that the effective tunneling charge corresponds to the charge of elementary excitations in the barrier region and also that the correlation between tunneling quasiparticles reflects the interaction in the edge channels. While the above data are obtained in a highly non-equilibrium regime, finite current noise on plateaus is also observed at low bias in a similar setup~\cite{BidPRL2009,RosenblattNatComm2017}. Upstream charge-neutral modes in QH systems are considered to be responsible for the current noise at low bias. The relation between the observations in the high- and low-bias regimes is unclear and requires further study. \subsubsection{Anyonic statistics of fractional quasiparticles} \label{sec:anyonic_statistics} A marked feature of FQH quasiparticles is not only their fractional charge but also their anyonic quantum statistics~\cite{WilczekPRL1982}. Unlike elementary excitations in three-dimensional systems, quasiparticles in 2D systems can be neither bosons nor fermions but anyons. When the wave function of the system acquires the phase $\theta \neq \pi$ or $2\pi$ by an exchange operation of two quasiparticles, they are referred to as ``Abelian anyons''. On the other hand, when the operation is described not by the phase evolution but by an arbitrary unitary transformation, they are called ``non-Abelian anyons''. Theories predict that the non-Abelian statistics provide the basis of fault-tolerant quantum computing, stimulating intensive studies on the quantum statistical nature of FQH quasiparticles~\cite{MooreNuclPhysB1991,ReadPhysicaB,KitaevAnnalsPhys2003,NayakRevModPhys2008}. While quasiparticles in some FQH states, such as the well-known $\nu=1/3$ and $\nu=2/5$ states, are Abelian anyons, other FQH states, e.g., $\nu=5/2$ and $\nu=12/5$, may support non-Abelian anyons. Although the wave function of the $\nu=5/2$ state is still under debate, some candidates are considered to support quasiparticles having $e/4$ charge and non-Abelian statistics. The shot-noise measurement performed in the $\nu=5/2$ state provides evidence of the $e/4$ charge [see Fig.~\ref{fig5_x2}(c)], which is the necessary condition for the $\nu=5/2$ state being non-Abelian~\cite{DolevNature2008}. Thus, in FQH systems, various Abelian and non-Abelian anyons can appear in a single device by varying the filling factor using external parameters such as the magnetic field; hence, the FQH state is a promising testbed for investigating anyons. The anyonic statistics of FQH quasiparticles, both Abelian and non-Abelian, were not confirmed in experiments until more than three decades after the first theoretical prediction~\cite{WilczekPRL1982}. However, very recently, the Abelian statistics of the $\nu=1/3$ quasiparticles were observed by shot-noise measurement~\cite{BartolomeiScience2020} and Fabry-P\'erot interferometry~\cite{NakamuraNatPhys2020}. Here, we introduce the shot-noise experiment, in which $e/3$ quasiparticles collide and reveal their Abelian anyonic statistics. \begin{figure*}[bt] \center \includegraphics[width=16cm]{ShotReviewFig37.eps} \caption{(Color online) (a) Schematic of an anyon-collision experiment. (b) Experimental setup for the collision experiment using three QPCs. Some of the anyons randomly emitted from QPC1 and QPC2 collide at cQPC and cause cross-correlation between the output currents $I_3$ and $I_4$. (c) False-color scanning electron micrograph of the three-QPC device and the measurement setup.
Electronic currents injected from ohmic contacts 7 and 8 flow along the white arrows and are partitioned at QPC1 and QPC2, respectively, to generate currents $I_1$ and $I_2$ accompanied by anyon excitations (dashed red/white arrows). The anyons randomly impinge on cQPC from both the top and bottom sides and sometimes collide to experience the exchange interference. The output signals are measured through contacts 3 and 4 to evaluate the current-noise cross-correlation. (d) Input current $I_+$ dependence of the cross-correlation $S_{I_3I_4}/(2e^*)$ measured at three different cQPC transmission probabilities ${\cal{T}}$. The dashed lines are linear fits to the $S_{I_3I_4}/2e^*$ data. (Inset) Slope $\alpha$ extracted from the fit. The dashed line is a fit to $\alpha = P{\cal{T}}(1-{\cal{T}})$ with $P = -2.1$. Reprinted with permission from Ref.~[\onlinecite{BartolomeiScience2020}]. {\copyright} (2020) American Association for the Advancement of Science.} \label{fig5_x7} \end{figure*} Figure~\ref{fig5_x7}(a) shows a schematic of a collision experiment, where two quasiparticles impinge on a beam splitter of transmission probability ${\cal{T}}$ at the same time. Here, we define the probability $K$ of both quasiparticles being scattered to the left side of the beam splitter. In a classical model that considers the quasiparticles to be distinguishable, $K={\cal{T}}(1-{\cal{T}})$, while $K={\cal{T}}(1-{\cal{T}})(1-p)$ when they are indistinguishable. As discussed in Sect.~\ref{sec:fermion_optics}, one finds $p=1$ and $K=0$ due to the Pauli exclusion principle in the fermion case. In contrast, in the boson case, $p<0$ and $K$ is larger than that in the classical model. If the quasiparticles are anyons, one can expect that $p$ takes an intermediate value between those of fermions and bosons. For example, $e/3$ quasiparticles in the $\nu=1/3$ state are predicted to take $p<0$ since the exchange phase $\theta = \pi/3$ is close to $\theta=0$ of bosons. Although we considered a single collision event above, in practice, it is difficult to perform such an experiment due to the lack of an on-demand single-anyon source. This difficulty contrasts with the case of electron-collision experiments, which have been achieved using single-electron sources (see Sect.~\ref{sec:fermion_optics})~\cite{BocquillonScience2013, DuboisNature2013}. For observing the anyonic statistics of FQH quasiparticles, Rosenow \textit{et al.} proposed a different approach using two anyon sources that randomly emit anyons in the time domain~\cite{RosenowPRL2016}. Figure~\ref{fig5_x7}(b) shows a schematic of the approach. In this setup, QPC$_1$ and QPC$_2$ are set in the weak-backscattering regime. At finite biases $V_1$ and $V_2$, these QPCs randomly emit $e/3$ quasiparticles due to the tunneling, serving as the anyon sources. Here, ${\cal{T}}_1$ and ${\cal{T}}_2$ are the QPC transmission probabilities. The tunneling currents $I_1$ and $I_2$, carrying $e/3$ quasiparticles, impinge on the center QPC (cQPC) that works as an anyon beam splitter (transmission probability ${\cal{T}}$). The exchange interference between two quasiparticles enhances the shot noise accompanying the output currents $I_3$ and $I_4$.
When ${\cal{T}}_1$ = ${\cal{T}}_2$ and $\langle I_1\rangle=\langle I_2\rangle$, the current-noise cross-correlation $S_{I_{3}I_{4}}$ between $I_3$ and $I_4$ is described as \begin{equation} \begin{split} S_{I_3I_4}=2e^*P{\cal{T}}(1-{\cal{T}})I_+, \label{anyon} \end{split} \end{equation} \begin{equation} \begin{split} P=\frac{-2}{m-2}, \end{split} \end{equation} where $I_+ = \langle I_1\rangle+\langle I_2\rangle$ is the sum of the currents impinging on cQPC, and $m$ is a factor characterizing the exchange phase $\theta=\pi/m$. In the case of $e/3$ quasiparticles, one can expect to observe $P=-2$ since $m=3$. Figure~\ref{fig5_x7}(c) displays a false-colored scanning electron micrograph of the sample and the experimental setup examined by Bartolomei \textit{et al.}~\cite{BartolomeiScience2020}. The edge currents $I_1$ and $I_2$ carrying $e/3$ quasiparticles are mixed at cQPC to generate finite $S_{I_{3}I_{4}}$. Figure~\ref{fig5_x7}(d) shows $S_{I_{3}I_{4}}$ measured at ${\cal{T}}_1$ = ${\cal{T}}_2 = 0.05$ (weak-backscattering regime) for three different cQPC transmission probabilities (${\cal{T}}$). The $S_{I_{3}I_{4}}$ data show a negative correlation at finite $I_+$, and the $I_+$ dependence is well fitted by linear functions with $P=-2.1\pm0.1$ in Eq.~(\ref{anyon}). This $P$-value is close to the theoretical prediction $P=-2$, being a hallmark of the anyonic nature of $e/3$ quasiparticles. Note that Bartolomei \textit{et al.} also evaluated $S_{I_{3}I_{4}}$ in the case of $\langle I_1\rangle \neq \langle I_2\rangle$ and confirmed good agreement between the experimental results and the theoretical predictions~\cite{RosenowPRL2016}. The above experimental result is the first evidence of anyons. This achievement is a significant milestone toward the realization of future topological quantum computation using FQH anyons.
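A minimal numerical sketch of Eq.~(\ref{anyon}) reads as follows; the cQPC transmission and the current range are assumed values chosen only for illustration.
\begin{verbatim}
import numpy as np

e = 1.602176634e-19

def P_factor(m):
    # Exchange phase theta = pi/m gives P = -2/(m - 2)
    return -2.0 / (m - 2.0)

T = 0.3                                  # cQPC transmission (assumed)
I_plus = np.linspace(0.0, 2e-9, 50)      # summed impinging current (A)
S34 = 2.0 * (e / 3.0) * P_factor(3) * T * (1.0 - T) * I_plus
print(f"P(m = 3) = {P_factor(3):.1f}")   # -> -2.0
print(f"S34 at I+ = 2 nA: {S34[-1]:.2e} A^2/Hz")
\end{verbatim}
The negative value of $P$ reflects the bunching tendency of the $e/3$ anyons discussed above, whose exchange phase lies closer to the bosonic limit.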
\subsubsection{Quantum many-body effects in edge channels} Electron correlation in a QH edge channel sometimes causes peculiar transport phenomena. For example, one observes TL-liquid-like behaviors~\cite{WenPRB1990,ChangRevModPhys2003} in quasiparticle tunneling processes in FQH systems, as discussed in Sect.~\ref{sec:fractional_charge_tunneling}. Here, we introduce other intriguing behaviors originating from electron correlation in QH edge channels. While IQH edge channels are often described as 1D free-electron systems, their charge excitations often show the TL-liquid nature due to the long-range intra- and inter-edge Coulomb interaction~\cite{HashisakaRevPhys2018}. For example, in the $\nu=2$ state, the transport eigenmodes of the copropagating edge channels, namely the charge and charge-neutral (spin) modes with different velocities, cause spatial separation of charge and spin excitations. This well-known phenomenon is referred to as the spin-charge separation in the QH TL liquid~\cite{InouePRL2014,FreulonNatCommun2015,HashisakaNatPhys2017}. Current-noise measurements allow us to identify these transport eigenmodes, which mix the current noise in the two channels~\cite{InouePRL2014}. A larger variety of correlation phenomena is observed in FQH systems. One representative example is the charge-neutral transport in the $\nu=2/3$ FQH state. Below, we discuss the $\nu=2/3$ edge channels, which contain the essence of quantum many-body physics at the edge of topological quantum liquids. The $\nu=2/3$ state is particle-hole symmetric to the $\nu=1/3$ state; it is the hole $\nu_{\rm{h}}=1/3$ state in the lowest Landau level. Based on this picture, in 1990, MacDonald proposed a model of the $\nu=2/3$ edge state, in which a hole $\nu_{\rm{h}}=1/3$ edge state and an electron $\nu_{\rm{e}}=1$ edge state counter-propagate~\cite{MacDonaldPRL1990,JohnsonPRL1991}. The formation of such counter-propagating channels is referred to as ``edge reconstruction'' in hole-conjugate FQH states. MacDonald's $\nu=2/3$ edge-state model is theoretically reasonable. However, this model contradicted the experimental results at that time: whereas the model predicts the two-terminal conductance $G=4/3\times e^2/h$ of the $\nu=2/3$ system, experiments reported $G=2/3\times e^2/h$. Kane \textit{et al.} studied the transport eigenmodes in the reconstructed $\nu=2/3$ edge state using the renormalization-group theory~\cite{KanePRL1994,KanePRB1995}. They found that because of the mode mixing between the counter-propagating channels due to the random inter-channel tunneling and Coulomb interaction, an upstream charge-neutral mode appears, in addition to the charge mode that gives a quantized Hall conductance of $G=2/3\times e^2/h$. This conductance value agrees with the experimental observations, indicating the validity of both the edge-reconstruction and mode-mixing pictures. Moreover, the charge-neutral transport was also observed later, as introduced below. Note that similar peculiar edge transport can occur not only in the $\nu=2/3$ state but also in various other FQH systems and in non-QH 2D topological quantum liquids. From this perspective, the $\nu=2/3$ edge state has been a significant testbed for studying edge transport in topological systems. Current-noise measurements have played an essential role in observing the charge-neutral transport in the $\nu=2/3$ state. Figure~\ref{fig5_x8}(a) is a schematic of the measurement setup in the first experiment~\cite{BidNature2010}. Charge excitations in the counter-propagating $\nu=1$ and $\nu=1/3$ edge channels are mixed via the inter-channel Coulomb interaction and random tunneling to form the charge mode (blue arrows) and the charge-neutral mode (red arrows), propagating clockwise and anticlockwise, respectively. When a current $\langle I_n\rangle$ is applied to an ohmic contact ``Source 2'', the charge mode is grounded at the contact ``Ground 1''. In contrast, the charge-neutral mode impinges on a QPC and generates excess current noise at the ``Voltage probe''. \begin{figure}[tb] \center \includegraphics[width=8.5cm]{ShotReviewFig38.eps} \caption{(Color online) (a) Schematic of the experimental setup for observing the upstream charge-neutral transport in the $\nu=2/3$ state. Charge and charge-neutral transport are indicated by blue and red arrows, respectively. The charge-neutral mode excited at ``Source 2'' by applying a current $I_n$ impinges on the QPC and generates excess current noise. (b) Excess current noise as a function of $I_n$ measured at several QPC transmission probabilities $t$. Reprinted with permission from Ref.~[\onlinecite{BidNature2010}]. {\copyright} (2010) Springer Nature.} \label{fig5_x8} \end{figure} Figure~\ref{fig5_x8}(b) displays the measured excess current noise as a function of $\langle I_n\rangle$. The monotonic increase in the excess noise with $|\langle I_n\rangle|$ signals the presence of the upstream charge-neutral transport. The excess noise depends on the transmission probability $t$ of the QPC, indicating that the excess noise is generated at the QPC that scatters the charge-neutral excitations.
This observation proves both edge reconstruction~\cite{MacDonaldPRL1990,JohnsonPRL1991} and the formation of transport eigenmodes~\cite{KanePRL1994,KanePRB1995} in the $\nu=2/3$ state. It is noteworthy that the charge-neutral transport in the $\nu=2/3$ state was also confirmed using another method based on QD thermometry~\cite{VenkatachalamNatPhys2012,GurmanNatCommun2012}. Additionally, the charge-neutral transport has been observed in other FQH systems, such as the $\nu=4/3$, $\nu=1/3$, and $\nu=3/5$ states in GaAs/AlGaAs heterostructures~\cite{AltimirasPRL2012,InoueNatCommun2014,SaboNatPhys2017}. While the early theories predicted the presence of the charge-neutral mode only in hole-conjugate FQH systems, the charge-neutral transport observed in the $\nu=4/3$ and $\nu=1/3$ states suggests the edge-reconstruction physics in non-hole-conjugate FQH systems confined by a smooth edge potential. While the above experiment confirmed the presence of upstream heat transport in the $\nu=2/3$ state, recently, the heat flow along QH edge channels has been investigated quantitatively. Such an experiment was first performed to observe the quantized heat transport along IQH edge channels~\cite{JezouinScience2013}. Figure~\ref{fig5_x9}(a) shows the experimental setup, where a micrometer-scale ohmic contact $\Omega$ (dark red region) separates the two IQH systems (blue regions), which are individually equilibrated through different ohmic contacts (temperature $T_0$). A current injected into the 2DES on the right-hand side of $\Omega$ propagates through QPC$_1$ to reach $\Omega$. The dc excitation is equilibrated with electrons fed from the left 2DES through QPC$_2$, leading to an increase in the electron temperature $T_{\rm{\Omega}}$. $T_{\rm{\Omega}}$ is evaluated by current-noise thermometry performed on both the left and right 2DESs. \begin{figure}[tb] \center \includegraphics[width=8.5cm]{ShotReviewFig39.eps} \caption{(Color online) (a) Experimental setup for measuring quantized heat transport along IQH edge channels. A micrometer-scale ohmic contact (dark red region) divides the 2DES into two regions (light blue regions). Heat transport along edge channels (red arrows) is measured by the two current-noise-measurement setups using LC resonance circuits. (b) Measured heat-current factor $\alpha_n$ versus the number of edge channels $n$. The gray line is the prediction for the quantized heat transport. Reprinted with permission from Ref.~[\onlinecite{JezouinScience2013}]. {\copyright} (2013) American Association for the Advancement of Science.} \label{fig5_x9} \end{figure} The heat flow along the edge channels is quantitatively evaluated from the relation between the Joule heat and the increase in $T_{\rm{\Omega}}$. The Wiedemann-Franz law predicts that the heat flow $J^e_Q(T_{\Omega},T_0)$ along a single edge channel is described as \begin{equation} \begin{split} J^e_Q(T_{\Omega},T_0)=\frac{\pi^2k_{\rm{B}}^2}{6h}(T_{\Omega}^2-T_0^2), \label{heatflow} \end{split} \end{equation} where $T_{\Omega}$ and $T_0$ are the temperatures of the reservoirs. Figure~\ref{fig5_x9}(b) shows the heat-current factor $\alpha_n = nJ^e_Q/(T_{\Omega}^2-T_0^2)$ evaluated at several filling factors. One observes that the $\alpha_n$ data fall near the prediction of the Wiedemann-Franz law (gray line): namely, each edge channel carries the heat current $J^e_Q$. This result proved the quantized heat flow along IQH edge channels.
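As a numerical illustration of Eq.~(\ref{heatflow}), the following Python sketch evaluates the heat current carried by a single channel; the reservoir temperatures are assumed values in the millikelvin range typical of such experiments.
\begin{verbatim}
import math

kB, h = 1.380649e-23, 6.62607015e-34

def J_Q(T_omega, T_0):
    # Single-channel heat current (W), Eq. (heatflow)
    return math.pi**2 * kB**2 / (6.0 * h) * (T_omega**2 - T_0**2)

print(f"J_Q = {J_Q(0.1, 0.02):.2e} W per channel")  # ~ 4.5 fW
\end{verbatim}
The femtowatt scale of this heat current indicates why sensitive current-noise thermometry is required for such measurements.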
Furthermore, other experiments have revealed the impact of inter-channel Coulomb interaction on heat transport along co-propagating IQH edge channels~\cite{SivreNatPhys2017,SivreNatCommun2019}. The heat flow along FQH edge channels has also been investigated in several FQH states, e.g., the $\nu=1/3, 2/3, 3/5$, and $4/7$ states~\cite{BanerjeeNature2017}. The experiments confirmed that both IQH and FQH channels carry quantized heat flow. Furthermore, a similar heat-transport measurement was performed on the $\nu=5/2$ state, which may support the presence of non-Abelian quasiparticles~\cite{BanerjeeNature2018}. The experimental result shows a half-integer quantized heat conductance, suggesting the Majorana edge mode of the particle-hole Pfaffian state. This observation contradicts numerical calculations predicting the Pfaffian or the anti-Pfaffian state and therefore requires further theoretical and experimental studies. Although still inconclusive, the experimental result implies a non-Abelian topological order of the $\nu=5/2$ state. \subsubsection{Short summary} Current-noise measurements have revealed various phenomena in QH systems, such as fractional charge, anyonic statistics, and quantum many-body effects in edge channels. There remain many other interesting issues requiring shot-noise measurements, such as Andreev reflection of fractional quasiparticles at QH interfaces~\cite{HashisakaNatComm2021}. Current-noise measurements will allow us to gain more in-depth insight into QH systems and other 1D electron systems, including helical edge states in 2D topological materials. \subsection{Noise in superconductor-based junctions} Superconductivity is one of the most representative examples of the quantum many-body effect. While a supercurrent does not generate shot noise in a bulk superconductor, it can be backscattered at a junction between a superconductor and a normal conductor to generate shot noise. Blanter and B\"uttiker introduced several theoretical treatments for shot-noise generation at superconductor junctions~\cite{BlanterPR2000}. While there were only a few experiments at that time, we now have many examples of such experimental shot-noise studies. Here, we introduce some experiments that studied Andreev reflection, Cooper-pair splitting, the Kondo-Andreev effect, and superconductor-QH junctions. As we will see below, these junctions provide valuable platforms for exploring quantum many-body phenomena. \subsubsection{Andreev reflection} Electron injection from a normal metal to a superconductor results in hole reflection to form a Cooper pair in the superconductor. This intriguing phenomenon, directly manifesting the electron pairing in a superconductor, is referred to as Andreev reflection. The Andreev process leads to charge-$2e$ shot-noise generation at a normal metal-superconductor (NS) junction~\cite{KhlusJETP1987,deJongPRB1994}. The charge-$2e$ shot noise in the Andreev process was first observed in 2000~\cite{JehlNature2000,KozhevnikovJLTP2000,KozhevnikovPRL2000}. Figure~\ref{Fig:JehlNature2000} shows the data obtained by Jehl \textit{et al.} for a Nb/Cu junction~\cite{JehlNature2000}. For this measurement, a SQUID was used to detect the current noise in the junction, whose resistance is small ($\sim 0.8~\Omega$)~\cite{JehlRSI1999} [see Sect.~\ref{sec:transimpedance}].
In the low-bias regime, the experimental data (circles) agree well with the following theoretical curve [see Eq.~(\ref{ShotTheory})]: \begin{equation} S=\frac{2}{3}\left[\frac{4k_\textrm{B} T}{dV/dI} +e^* \langle I \rangle \coth\left(\frac{e^*V}{2k_\textrm{B}T_\textrm{e}}\right)\right], \end{equation} where $e^*=2e$ reflects the Cooper-pair charge and the denominator ``3'' of the factor $2/3$ represents the Fano factor $F=1/3$ of a diffusive mesoscopic device~\cite{NagaevPLA1992,NagaevPRB2001}. The deviation of the experimental data from the $2e$ curve at high bias ($\gtrsim 1.2$~mA) may reflect the occurrence of charge-$e$ quasiparticle tunneling. \begin{figure}[tbp] \center \includegraphics[width=7cm]{ShotReviewFig40.eps} \caption{Shot noise at an NS junction as a function of bias current $\langle I \rangle$ at 1.35~K. Reprinted with permission from Ref.~[\onlinecite{JehlNature2000}]. {\copyright} (2000) Springer Nature.} \label{Fig:JehlNature2000} \end{figure} Similar charge-$2e$ shot noise was also observed in a superconductor-semiconductor junction~\cite{LeflochPRL2003} and a superconductor-QD-superconductor device~\cite{DasNatComm2012,HataPRL2018}. The former experiment, where electrons are emitted into the superconducting gap with a Poisson distribution, is a superconducting analog of Schottky's experiment on electron emission into a vacuum~\cite{SchottkyAP1918}. The measured shot noise agrees with the theoretical curve of $S=4e\vert\langle I\rangle\vert$ at low bias, indicating the $2e$ effective charge. Furthermore, multiple Andreev reflection (MAR) occurs at a junction where two superconductors sandwich a normal conductor. When such a junction is voltage-biased ($\vert eV \vert < 2\Delta$, where $2\Delta$ is the superconducting gap), Andreev processes occur many times at the two NS interfaces. In this case, the shot noise generated at the whole junction is sometimes enhanced such that $S/2e\vert\langle I \rangle\vert =1+\Delta/eV$, indicating that multiple charge quanta, $2e, 3e$, etc., carry the transmitted current~\cite{DielemanPRL1997,HossPRB2000,CronPRL2001,RonenPNAS2016}. On the other hand, another recent experiment demonstrated shot-noise suppression with a Fano factor less than one at high bias $\vert eV\vert \sim 2\Delta$, where the quasiparticle tunneling can arise~\cite{RonenPNAS2016}. \subsubsection{Cooper-pair splitting} When two normal conductors contact a superconductor with a spacing shorter than the superconducting coherence length, a correlation appears between the charge-scattering processes at the contacts. When an electron is incident from one of the normal conductors to the superconductor, a hole is ejected from the other contact to form a Cooper pair. This process is referred to as the crossed Andreev reflection (CAR), or nonlocal Andreev reflection. In a similar setup, the inverse process of CAR can occur: a Cooper pair in the superconductor splits into two electrons, and one electron is ejected into each of the two normal conductors. The Cooper-pair splitting was observed, for example, in a ferromagnet-superconductor junction~\cite{BeckmannPRL2004}. Because two electrons generated by a single Cooper-pair splitting are entangled quantum mechanically, this process has been intensively studied for realizing a solid-state entangled-pair generator. In experiments, a Cooper-pair-splitting device is often constructed by attaching two QDs to a superconductor~\cite{HofstetterNature2009}.
Thanks to the Coulomb blockade, each QD traps only a single electron at a time, so that the device enables us to observe a single Cooper-pair splitting process. Time-correlation measurement of the currents flowing through the QDs provides striking evidence for the Cooper-pair splitting. Das \textit{et al.} performed a current-noise cross-correlation measurement between InAs QDs attached to aluminum~\cite{DasNatComm2012}. The observed positive cross-correlation indicates that almost all the current is ejected from the superconductor by the Cooper-pair splitting processes. \subsubsection{Kondo-Andreev effect} The coexistence of and competition between superconductivity and the Kondo effect have attracted attention since the 1960s~\cite{SodaPTP1967}. Superconductivity in $s$-wave superconductors emerges via a macroscopic wave function of spin-singlet Cooper pairs. The Kondo effect, in contrast, occurs due to the spin-singlet (Kondo-singlet) formation between a localized spin and conduction electrons. Because these two quantum many-body effects originate from different electronic states, one might think that they cannot coexist. Alternatively, because electrons screen magnetic impurities in the Kondo state, one might expect the Kondo effect to enhance superconductivity. In reality, the coexistence of the two effects can be observed in bulk materials, such as heavy-fermion systems. A QD-superconductor device provides a vital platform for observing the Kondo-Andreev effect because a QD behaves as a controllable magnetic impurity. Figure~\ref{Fig:HataAndreevKondo}(a) shows a schematic of such a device, a superconductor-QD-superconductor (S-QD-S) junction, where we can examine the relationship between the two quantum many-body effects in a controlled way~\cite{BuitelaarPRL2002,KimPRL2013,HataPRL2018}. \begin{figure}[tbhp] \center \includegraphics[width=7cm]{ShotReviewFig41.eps} \caption{(Color online) (a) Schematic of an S-QD-S device, where the Kondo effect and superconductivity coexist. (b) $2\Delta/|eV|$ dependence of the Fano factor $F =S/2e|\langle I\rangle |$ at the electron-hole-symmetry point [circles and squares: experimental data at the SU(4) and SU(2) Kondo states, respectively. Triangles: numerical calculation result for a QPC]. The data points are well fitted by the simulations using Eq.~(\ref{eq:hataPRL2018}). $\alpha$ is the noise-enhancement factor in the Kondo regime. Panel (b) is reproduced from Ref.~[\onlinecite{HataPRL2018}]. {\copyright} (2018) American Physical Society.} \label{Fig:HataAndreevKondo} \end{figure} Recently, one of the authors of this review reported a shot-noise measurement performed on the QD device shown in Fig.~\ref{Fig:HataAndreevKondo}(a)~\cite{HataPRL2018}. The QD was adjusted to the SU(2) or SU(4) Kondo state, while a weak magnetic field was applied to break superconductivity and prepare a normal state in the leads. When the magnetic field is turned off, the lead wires enter the superconducting state, and the Kondo-Andreev state can appear. Figure~\ref{Fig:HataAndreevKondo}(b) presents the shot-noise results at the electron-hole-symmetry point. The Fano factor $F$ measured at low bias voltage $V$, below the superconducting gap $2\Delta/e$, is plotted as a function of $2\Delta/|eV|$. The experimental data demonstrate strong shot-noise enhancement ($F\propto V^{-1}$) in both the SU(2) and SU(4) Kondo states.
These observations qualitatively agree with the theory of $n$th-order MAR processes ($n\sim 2\Delta/eV$), where a single transport process is considered to carry the $e^*=ne$ effective charge~\cite{CuevasPRL1999}. On the other hand, if we take a more quantitative look at the experimental data, we find that the observed shot-noise enhancement is much larger than the theoretical prediction. In terms of the enhancement factor $\alpha$, the Fano factor is written as \begin{equation} F\equiv \frac{2\Delta}{|eV|}\alpha. \label{eq:hataPRL2018} \end{equation} For a simple junction without the Kondo correlation, $\alpha$ equals one, whereas the experiment yielded $\alpha =2.2$ for SU(2) and $\alpha =10.8$ for SU(4), as shown in Fig.~\ref{Fig:HataAndreevKondo}(b). This means that MAR enhances the noise in the Kondo state several-fold. As seen in Sect.~\ref{Subsec:KondoNoise}, the shot noise in the Kondo state connected to normal leads is in quantitative agreement with theory for both the SU(2)- and SU(4)-Kondo states. Although there is a theory of the noise in the Kondo-Andreev state~\cite{AvishaiPRB2003}, further theoretical development is necessary to explain the experimental data. Elucidating the non-equilibrium behavior of systems in which two different singlet states compete remains an important issue for the future. \subsubsection{Junction with quantum Hall states} Before closing this section, we discuss a junction between a topological edge state (including a chiral edge state in a QH system) and a superconductor. While such systems have been studied theoretically since the 1990s~\cite{MaEPL1993,FisherPRB1994}, recent theories predicting the emergence of non-Abelian anyons at a superconductor-edge state interface~\cite{FuPRL2008,MongPRX2014} have stimulated many theoretical and experimental studies. Despite their fundamental importance, quantum Hall-superconductor (QH-S) junctions had until recently not been fabricated in experiments due to technical difficulties. The first difficulty was that superconductivity usually disappears at the high magnetic fields where the QH effect occurs; this problem was solved by superconductors with a high critical magnetic field. A more severe problem was the Schottky barrier that degrades the proximity effect at a QH-S interface. Recently, this problem was also solved by using novel 2DESs such as graphene~\cite{KomatsuPRB2012} and ZnO-based 2DES~\cite{KozukaJPSJ2018}. A shot-noise measurement, which enables us to evaluate the effective charge of an elementary excitation, serves as a powerful probe for a superconducting correlation in QH edge channels. Recently, Sahu \textit{et al.} reported a shot-noise measurement in a bilayer graphene-superconductor junction~\cite{SahuPRB2019}. They observed shot-noise enhancement, with a Fano factor of about two, due to the Andreev reflection. The enhancement is more significant than that at a metal-superconductor interface formed in the same device at zero magnetic field (the Fano factor is about 1.5 in this case). These observations may be an indication of a superconducting correlation in QH edge channels. \section{Fluctuation Theorem and Current Noise} \label{sec:fluctuationtheorem} \subsection{Fluctuation Theorem} We have so far discussed current noise in mesoscopic systems based on the Landauer-B\"{u}ttiker picture; we would now like to discuss it from a different perspective, namely that of the Fluctuation Theorem (FT).
In the 1950s, the linear response theory was formulated in the field of statistical mechanics~\cite{KuboJPSJ1957}. Whereas this theory provides a powerful methodology for studying the response of a system to an external field near equilibrium, it cannot be applied to highly non-equilibrium systems. There have been many attempts over the years to investigate non-equilibrium systems beyond the linear response theory. One of the significant achievements is the FT reported in 1993~\cite{EvansPRL1993}. Let us consider a small system connected to a reservoir as shown in Fig.~\ref{Fig_FTconcept}~\cite{EvansAP2002}. While the entropy of the entire system does not decrease, due to the second law of thermodynamics, the entropy of the small system fluctuates, since it exchanges energy, particles, heat, work, entropy, etc., with the reservoir. We are interested in the rate of entropy generation $\sigma$ in the small system and its average over a finite time $t$, $\overline{\sigma}_t\equiv \frac{1}{t}\int_0^t \sigma(s)\,ds$. The FT claims that the probability $P(\overline{\sigma}_t)$, i.e., the probability that the time-averaged entropy generation rate equals $\overline{\sigma}_t$, strictly satisfies the following equation: \begin{equation} \frac{P(\overline{\sigma}_t=A)}{P(\overline{\sigma}_t=-A)}=\exp\left(\frac{At}{k_\textrm{B}}\right). \label{Eq_FT} \end{equation} This equation is derived from microreversibility and conservation laws. According to this equation, for a large $t$, the probability of entropy increase (numerator of the left-hand side) is overwhelmingly larger than that of entropy decrease (denominator of the left-hand side) for $A>0$. In this sense, this equation corresponds to the second law of thermodynamics. The FT has been studied as a new guiding principle in statistical mechanics because Eq.~(\ref{Eq_FT}) can reproduce the fundamental equations of linear response theory, such as the fluctuation-dissipation relations~\cite{GallavottiPRL1996} and the Onsager-Casimir reciprocity~\cite{SaitoPRB2008,UtsumiPRB2009}. \begin{figure}[!t] \center \includegraphics[width=7cm]{ShotReviewFig42.eps} \caption{The FT considers a small system connected to a reservoir.} \label{Fig_FTconcept} \end{figure} Experimentally, the FT has been verified by observing the motion of particles in fluids and of RNA molecules, for example~\cite{WangPRL2002,BustamantePhysToday2005}. It has also been confirmed in transport measurements, e.g., performed on bulk resistors~\cite{GarnierPRE2005} and in electron-counting experiments using QDs~\cite{UtsumiPRB2010,KungPRX2012}. \subsection{FT in quantum transport} While the experiments mentioned above verified the FT in classical systems, the authors reported the first experiment examining the FT in a quantum system~\cite{NakamuraPRL2010,NakamuraPRB2011}, based on a theoretical proposal~\cite{UtsumiPRB2009}. Below, we briefly introduce this experiment. We consider a general situation where a voltage induces a current $\langle I \rangle$ in a sample. The current-voltage characteristics can be written as a polynomial in $V$ as follows: \begin{equation} \langle I \rangle= G_1V + \frac{1}{2!} G_2 V^2 + \frac{1}{3!} G_3 V^3 + \cdots, \label{Eq_IinV} \end{equation} where the first term represents Ohm's law, and $G_1$ is the conductance. When the current $\langle I \rangle$ is described only by the first term, which is often the case at low bias, the conventional linear response theory holds.
On the other hand, at high bias, the system generally goes into a non-equilibrium state showing a non-linear response (as an example, think of a strongly stretched spring deviating from Hooke's law). In that case, higher-order terms in Eq.~(\ref{Eq_IinV}), characterized by the response coefficients $G_2, G_3$, and so on, become significant. In mesoscopic systems, $G_1$ is directly related to the transmission probability, as shown in Eq.~(\ref{LandauerConductance}). On the other hand, the higher-order response coefficients reflect electron correlation under non-equilibrium conditions~\cite{SanchezPRL2004,SpivakPRL2004,WeiPRL2005,LeturcqPRL2006}. Unlike $G_1$, which obeys the Onsager-Casimir reciprocity, these quantities are not symmetric with respect to magnetic-field reversal and cause nonreciprocal transport. Similarly, the current noise $S$ can be expressed as a polynomial in the applied voltage $V$ such that \begin{equation} S = S_0 + S_1V + \frac{1}{2!} S_2 V^2 + \cdots, \label{Eq_SinV} \end{equation} where $S_0$ is the thermal noise. The fluctuation-dissipation relation $S_0 = 4 k_\textrm{B}T_\textrm{e}G_1$ holds between the first terms of Eq.~(\ref{Eq_IinV}) and Eq.~(\ref{Eq_SinV}), as we saw in Eq.~(\ref{thermalnoise})~\cite{JohnsonPR1928,NyquistPR1928}. This suggests analogous relations between the higher-order terms. Indeed, the aforementioned FT predicts~\cite{UtsumiPRB2009} \begin{equation} S_1=2k_\textrm{B} T_\textrm{e}G_2, \label{Eq_FT_S1G2} \end{equation} for example. This relation between the second terms can be understood as follows. Electron transport occurs in a ``conductor-device-conductor'' system, which can be viewed as an exchange of electrons between two reservoirs via the device. A finite voltage $V$ between the two reservoirs causes a difference of $eV$ between their chemical potentials. Let us consider the probability $P(N)$ of $N$ electrons flowing from the left to the right lead. Due to time-reversal symmetry, particle-number conservation, and energy conservation, the following equation holds~\cite{UtsumiPRB2009,NakamuraPRB2011}: \begin{equation} \frac{P(N)}{P(-N)}=\exp \left( \frac{eV}{k_\textrm{B}T_\textrm{e}}N \right). \label{Eq_FT_electron} \end{equation} The original FT, expressed in Eq.~(\ref{Eq_FT}), relates the probabilities of the entropy generation and reduction processes. In electron transport, the entropy generation is related to Joule heating. When $N$ electrons move across a potential difference of $eV$, the Joule heat of $NeV$ is produced after equilibration in the reservoir at temperature $T_\textrm{e}$, so that the entropy generation is $NeV/T_\textrm{e}$. Thus, we can evaluate the probability of the transfer of $N$ electrons and obtain Eq.~(\ref{Eq_FT_electron}). The equation gives a strong constraint on electrical conduction, from which Eqs.~(\ref{thermalnoise}) and (\ref{Eq_FT_S1G2}) are derived, as described in Ref.~[\onlinecite{NakamuraPRB2011}].
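As a brief sketch of the first step of that derivation, note that summing Eq.~(\ref{Eq_FT_electron}) over $N$ yields the integral identity \begin{equation} \left\langle e^{-eVN/k_\textrm{B}T_\textrm{e}} \right\rangle = \sum_N P(N)\, e^{-eVN/k_\textrm{B}T_\textrm{e}} = \sum_N P(-N) = 1. \end{equation} Expanding the logarithm of the left-hand side in cumulants of $N$, and using $\langle I \rangle = e\langle N \rangle/t$ and $S=2e^2\left(\langle N^2\rangle - \langle N\rangle^2\right)/t$ for a long measurement time $t$, one finds $\langle I \rangle = SV/4k_\textrm{B}T_\textrm{e}$ to lowest order in $V$, which reproduces the fluctuation-dissipation relation $S_0=4k_\textrm{B}T_\textrm{e}G_1$; carrying the expansion to the next order in $V$ leads to Eq.~(\ref{Eq_FT_S1G2})~\cite{UtsumiPRB2009,NakamuraPRB2011}.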
\begin{figure*}[bthp] \begin{center} \includegraphics[width=12.5cm]{ShotReviewFig43.eps} \end{center} \caption{(Color online) (a) Atomic-force micrograph of the AB ring with the DC and noise measurement setup in a dilution refrigerator. The in-plane gates defined by the oxide lines are grounded in this experiment. The carrier density in the AB ring can be controlled by the gate voltage $V_\textrm{g}$ applied to the back gate of the substrate. (b) Conductance of the AB ring as a function of $V_\textrm{g}$ and $B$. (c) $G_1$ (left axis) and $S_0$ (right axis) as a function of $V_\textrm{g}$ at $B=0$~T. (d) $S_0$ is plotted as a function of $G_1$. The solid line indicates the fluctuation-dissipation relation $S_0 = 4 k_\textrm{B}T_\textrm{e}G_1$ with $T_\textrm{e}=125$~mK. (e) $G_2$ (left axis) and $S_1$ (right axis) as a function of $V_\textrm{g}$. (f) $S_1$ is plotted as a function of $G_2$. The solid line is the linear fit ($S_1 = 3.64 \times 4k_\textrm{B}T_\textrm{e}G_2$ with $T_\textrm{e}=125$~mK). Figures are reprinted from Ref.~[\onlinecite{NakamuraPRL2010}]. {\copyright} (2010) American Physical Society.} \label{Fig_NakamuraPRL2010} \end{figure*} Electron-counting experiments to verify Eq.~(\ref{Eq_FT_electron}) have been performed~\cite{UtsumiPRB2010,KungPRX2012}. They used QDs coupled to a nearby QPC as a charge detector. This charge-detection technique was discussed in Sect.~\ref{sec:FCS}. In the counting experiments, the electron transport is not coherent but lies in the incoherent tunneling regime. The relation in Eq.~(\ref{Eq_FT_S1G2}) holds in the nonlinear non-equilibrium regime and is a new ``nonequilibrium fluctuation relation'' that goes beyond the known fluctuation-dissipation relation. In our experiment, we aimed to demonstrate this equation in the quantum coherent regime. The device is an Aharonov-Bohm (AB) ring (460 nm in diameter) fabricated in a GaAs/AlGaAs 2DES, as shown in Fig.~\ref{Fig_NakamuraPRL2010}(a). The electron interference in the ring was controlled by applying a magnetic field $B$ or a gate voltage $V_\textrm{g}$. Figure~\ref{Fig_NakamuraPRL2010}(b) shows the conductance in the $B$-$V_\textrm{g}$ plane, where the periodic oscillations of the conductance as a function of the magnetic field signal the AB oscillation, manifesting coherent electron transport. First, we discuss the results obtained at equilibrium ($V=0$). Figure~\ref{Fig_NakamuraPRL2010}(c) shows the conductance $G_1$ and the thermal noise $S_0$ as $V_\textrm{g}$ is varied. As the fluctuation-dissipation relation tells us, the behaviors of $G_1$ and $S_0$ agree with each other. In Fig.~\ref{Fig_NakamuraPRL2010}(d), $S_0$ is plotted as a function of $G_1$. From this linear fit, we estimate the electron temperature to be $T_\textrm{e}=125$~mK. Then, we estimated $G_2$ and $S_1$ from the current-voltage characteristics and the voltage dependence of the current noise, respectively. The $V_\textrm{g}$ dependence of these two quantities is shown in Fig.~\ref{Fig_NakamuraPRL2010}(e). We see that the behaviors of $G_2$ and $S_1$ coincide with each other. Figure~\ref{Fig_NakamuraPRL2010}(f) shows that the experimental data yield $S_1 = 3.64 \times 4k_\textrm{B}T_\textrm{e}G_2$. Although the numerical factor does not agree with that of the theoretical prediction, $S_1=2k_\textrm{B} T_\textrm{e}G_2$, Fig.~\ref{Fig_NakamuraPRL2010}(f) manifests the presence of a nontrivial proportionality between them ($S_1 \propto G_2$). Some readers may consider that the current noise should be expressed by Eq.~(\ref{ShotTheory}) even in the non-equilibrium state. Equation (\ref{ShotTheory}) is, however, an expression derived for free-fermion systems and does not include the influence of electron-electron correlation, except for the Pauli exclusion principle. In Sect.~\ref{subsub:kondonoise}, we have already seen that Eq.~(\ref{ShotTheory}) is not sufficient to calculate the noise in the Kondo effect. There, the correlation is phenomenologically taken into account as an ``effective charge'', while, microscopically, there are multiple scattering mechanisms.
More generally, when a system is driven out of equilibrium, many-body effects often play significant roles~\cite{SanchezPRL2004,SpivakPRL2004,WeiPRL2005,LeturcqPRL2006}. Such correlation effects are not included in the framework of the Landauer-B\"{u}ttiker formula based on the single-particle picture. In the present experiment, we also observed a distinct nonreciprocity arising from the quantum many-body effect in a non-equilibrium state at a finite magnetic field~\cite{LeturcqPRL2006,NakamuraPRL2010}. We demonstrated that the relation derived from the FT is relevant even in this case~\cite{NakamuraPRL2010,NakamuraPRB2011}. \section{Conclusion and Future Perspectives} \label{sec:closing} In this review, we have presented the advances in mesoscopic shot-noise experiments over the past two decades. We hope that this review helps convince readers of the advantages of shot-noise measurements and encourages experimentalists to perform such measurements themselves. While we have made every effort to cover a wide range of topics, unfortunately, several important ones could not be included in this review. Below, we introduce some of them. The first is higher-order cumulants. In this review, we have mainly focused on current noise given by $\langle \Delta I^2 \rangle$, which is the second-order cumulant, i.e., the variance of the number of transmitted electrons. However, as briefly mentioned in Sect.~\ref{sec:FCS}, higher-order cumulants such as $\langle \Delta I^3\rangle$, $\langle \Delta I^4\rangle$, $\ldots$, which can be evaluated in full-counting statistics, also provide fruitful information on transport phenomena~\cite{LevitovJMP1996}. Experimentally, third-order and higher-order cumulants have been measured for several devices, e.g., tunnel junctions~\cite{ReuletPRL2003}, quantum dots~\cite{GustavssonPRL2006,FujisawaScience2006}, avalanche diodes~\cite{GabelliPRB2009}, and short diffusive conductors~\cite{PinsollePRL2018}. However, measuring higher-order cumulants is still challenging for most mesoscopic systems. We believe that advances in experimental techniques will allow us to measure full-counting statistics in various samples and that the results will provide deeper insight into their transport properties. The second is shot noise at high frequencies. While we have mainly focused on shot noise in the low-frequency limit, shot noise becomes frequency-dependent and sometimes provides essential information on electron dynamics at high frequencies. For example, the Josephson relation of fractional quasiparticles in the fractional quantum Hall state is a representative result obtained by measuring high-frequency shot noise~\cite{KapferScience2019,BisogninNatCom2019}. High-frequency shot noise in the Kondo state~\cite{DelagrangePRB2018,FerrierJLTP2019} is a challenging issue for future experiments. Such a measurement will allow us to evaluate the dynamics of the Kondo singlet below and above the Kondo temperature. Thirdly, expanding the scope of shot-noise measurements is a promising direction. Most of the shot-noise studies in mesoscopic physics have been performed on semiconductor devices fabricated in a 2DES. However, as described in this review, the noise measurement is applicable to other systems such as magnetic tunnel junctions, atomic/molecular junctions, carbon nanotubes, and spintronics devices. In this context, the shot-noise measurement performed on copper-oxide high-temperature superconductors~\cite{ZhouNature2019} is a notable example.
In that study, an increase in the effective charge at a temperature above the superconducting transition temperature was observed, implying a precursor phenomenon of Cooper-pair formation. Finally, shot-noise measurements can be combined with other experimental techniques, e.g., scanning microscopy~\cite{MasseeRSI2018,WengScience2018}. Such experiments will provide unique information on charge, heat, and spin transport phenomena at the surface of a solid-state device. In the 1990s, a shot-noise measurement itself was challenging. Today, however, it is a standard experimental technique in mesoscopic physics. There are still many fascinating theoretical proposals left unexplored, which are attracting much attention from experimentalists. We hope that this review will encourage many researchers to join this rich research field. \begin{acknowledgments} For the shot-noise theory described in Sect.~\ref{subsec:noise_in_quantum_transport}, we thank Takeo Kato for his enlightening lecture note~\cite{KatoBussei2014}. We acknowledge instructive comments on the writing from Heorhii Bohuslavskii and fruitful discussions in the meeting of the Cooperative Research Project of RIEC, Tohoku University. This work was partially supported by JSPS KAKENHI (Grants No. JP19H00656, JP19H05826, JP16H06009, 19H05603) and JST PRESTO (Grant No. JP17940407). \end{acknowledgments}
\section{Introduction} The well-known Bohr-van Leeuwen Theorem (BvL) \cite{lee21,boh11,vle32,pei79,jay81,sah08,jay07,jay08} forbids the presence of orbital diamagnetism in classical equilibrium systems. The essential point of the proof of this theorem is that the magnetic field ${\bf B}$ enters the particle Hamiltonian through the replacement of the particle momenta ${\bf p}$ by ${\bf p}+\frac{e{\bf A(r)}}{c}$, where ${\bf A(r)}$ is the associated vector potential and $-e$ is the charge of the particle. Since the partition function involves integration of the momenta over the entire momentum space, the origin of ${\bf p}$ can be trivially shifted by $\frac{e{\bf A(r)}}{c}$, and as a result, ${\bf A(r)}$ disappears from the partition function. This in turn implies that the free energy undergoes no change in the presence of a magnetic field and hence gives zero orbital magnetic moment. This result is rather surprising, given the fact that each particle must trace a cyclotron orbit in the presence of a magnetic field and, therefore, contribute to the diamagnetic moment. This was resolved by noting that the skipping orbits of the electron at the boundary generate a paramagnetic moment equal and opposite to that due to a carrier in the bulk \cite{lee21,boh11,vle32,pei79,jay81}. Thus the bulk diamagnetic contribution is exactly cancelled by the boundary (surface) contribution, leading to the total absence of orbital magnetism in classical equilibrium systems. The canonical statistical mechanical treatment, however, makes no explicit reference to such boundary effects. Now, let us consider a finite or infinite classical system in which the particle never hits a geometrical boundary during its motion. In such a situation, classical diamagnetism is expected, as the skipping trajectories carrying paramagnetic current along the boundary are absent \cite{jay81}. This subtle role of the boundary has been revisited by Kumar and Kumar \cite{kum09} by considering the motion of a charged particle which is constrained to move on the surface of a sphere, i.e., on a finite but unbounded system. The surface of a sphere has no boundary, and to the pleasant surprise of the authors, they did find a non-zero classical orbital diamagnetic moment by following the space-time approach. This effect has been attributed to the dynamical correlation, induced by the Lorentz force, between velocity and transverse acceleration when the problem is treated as per the Einsteinian approach, i.e., in this case, via the Langevin dynamics \cite{risk,cof96}. Such subtle dynamical correlations are presumably not captured by the classical Gibbsian statistical mechanics based on the equilibrium partition function. In our present work, we explore this system further using the recently discovered Fluctuation Theorems (FTs), namely, the Jarzynski Equality (JE) and the Crooks' Fluctuation Theorem (CFT) \cite{jar97,cro99}. These FTs address the calculation of the equilibrium free energy difference $\Delta F$ between two thermodynamic states from irreversible (nonequilibrium) trajectories. We come across other intriguing consequences. If the system is driven out of equilibrium by perturbing its Hamiltonian ($H_\lambda$) by an externally controlled time-dependent protocol $\lambda (t)$, the thermodynamic work done on the system is given by \cite{jar97} \beq W = \int_0^\tau\dot{\lambda}\pd{H}{\lambda}~dt \label{Wdef} \eeq over a phase-space trajectory, where $\tau$ is the time through which the system is driven.
$\lambda (0)=A$ and $\lambda(\tau)=B$ are the thermodynamic parameters of the system. The JE states \beq \la e^{-\beta W}\ra = e^{-\beta\Delta F}, \label{JE} \eeq where $\Delta F = F_B-F_A$ is the free energy difference between the equilibrium states corresponding to the thermodynamic parameters $B$ and $A$, and the angular brackets denote the average taken over different realizations for a fixed protocol $\lambda(t)$. In eq.(\ref{JE}), $\beta=1/k_BT$, $T$ being the temperature of the medium and $k_B$ the Boltzmann constant. Initially the system is in the equilibrium state determined by the parameter $\lambda(0)=A$. The work done $W$ during each repetition of the protocol is a random variable which depends on the initial microstate and on the microscopic trajectory followed by the system. The JE acts as a bridge between the statistical mechanics of equilibrium and nonequilibrium systems and has been used experimentally \cite{rit06} to calculate free energy differences between thermodynamic states. The CFT predicts a symmetry relation between the work fluctuations associated with the forward and the reverse processes undergone by the system. This theorem asserts that \beq \frac{P_f(W)}{P_r(-W)} = e^{\beta(W-\Delta F)}, \label{cft} \eeq where $P_f(W)$ and $P_r(W)$ denote the distributions of work values for the forward process and its time-reversed counterpart. During the forward process, the system is initially in equilibrium with parameter $A$. During the reverse process, the system is initially in equilibrium with parameter $B$ and the protocol is changed from $\lambda_B$ to $\lambda_A$ over a time $\tau$ in a time-reversed manner ($\lambda(\tilde{t}) = \lambda(\tau-t)$); in our present problem, the magnetic field also has to be reversed in sign \cite{sah08}. From equation (\ref{cft}), it is clear that the two distributions cross at $W=\Delta F$, thus giving a prescription to calculate $\Delta F$. In the present work, we show that the case of a charged particle moving on a sphere leads to a free energy of the system which depends on the magnetic field and on the friction coefficient, which is inconsistent with the prediction of canonical equilibrium statistical mechanics. The same system gives orbital diamagnetism when calculated via the space-time approach, again in contradiction with equilibrium statistical mechanics \cite{kum09}. For the recently studied case of a particle moving on a ring \cite{kap09}, the Langevin approach predicts zero orbital magnetism, just as in the present treatment. Thus, in this case, the free energy obtained by using the FTs is consistent with canonical equilibrium statistical mechanics. \section{Charged particle on the surface of a sphere} We take up the model proposed in \cite{kum09}, which consists of a Brownian particle of charge $-e$ constrained to move on the surface of a sphere of radius $a$, but now with a time-dependent magnetic field ${\bf B}(t)$ in the $\hat{\bf z}$ direction. The Hamiltonian of the system in the absence of the heat bath is given by: \begin{equation} H = \frac{1}{2m}\left({\bf p}+\frac{e{\bf A}({\bf r},t)}{c}\right)^2, \end{equation} which in polar coordinates reduces to \begin{equation} H = \frac{1}{2m}\left[\left(\frac{p_\theta}{a}+\frac{eA_\theta(t)}{c}\right)^2 + \left(\frac{p_\phi}{a\sin\theta}+\frac{eA_\phi(t)}{c} \right)^2\right]. \label{Hpolar} \end{equation} In a symmetric gauge, $A_\theta=0$ and $A_\phi=(1/2)aB(t)\sin\theta$.
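For completeness, we recall how the BvL result follows for this geometry. For a static field, the classical partition function constructed from eq. (\ref{Hpolar}), \beq Z = \int d\theta~d\phi~dp_\theta~dp_\phi~ e^{-\beta H}, \eeq becomes independent of the field upon the substitution $p_\phi \rightarrow p_\phi - (ea\sin\theta/c)A_\phi$ carried out at fixed $\theta$ (unit Jacobian, unchanged infinite limits of integration). Hence the free energy does not depend on $B$, and canonical equilibrium statistical mechanics predicts $M_{eq} = -\partial F/\partial B = 0$ for this system; this is the benchmark against which the dynamical results below are to be compared.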
In the presence of the heat bath, the dynamics of the particle is described by the Langevin equation \cite{sah08}: \begin{eqnarray} m\frac{d{\bf v}}{dt} &=& -\frac{e}{c}({\bf v \times B}(t))-\Gamma {\bf v} -\frac{e}{2c}\left({\bf r}\times \frac{d{\bf B}(t)}{dt}\right)\nn\\ &&\hspace{3cm}+ \sqrt{2T\Gamma}~{\bm \xi}(t), \label{Lang} \end{eqnarray} where $m$ is the particle mass and $\Gamma$ is the friction coefficient. ${\bm \xi}(t)$ is a Gaussian white noise with the properties $\la\xi_k(t)\ra=0$ and $\la\xi_k(t)\xi_l(t')\ra = \delta_{kl}\delta(t-t').$ The first term on the right hand side is the Lorentz force. If the magnetic field varies with time, it also produces an electric field ${\bf E}$, hence the presence of the force term $-e{\bf E} = -(e/2c)\left({\bf r}\times \frac{d{\bf B}(t)}{dt}\right)$ in eq.(\ref{Lang}). This is an additional element of physics not present in reference \cite{kum09}. Switching over to spherical polar coordinates \cite{kum09}, eq. (\ref{Lang}) assumes the following form in terms of dimensionless variables: \begin{subequations} \begin{equation} \ddot{\theta}-\dot{\phi}^2\sin\theta\cos\theta = -\frac{a\omega_c(B(t))}{c}\dot{\phi}\sin\theta\cos\theta-\frac{a\gamma}{c}\dot{\theta}+\sqrt{\eta}~\xi_\theta; \end{equation} \begin{eqnarray} \ddot{\phi}\sin\theta+2\dot{\theta}\dot{\phi}\cos\theta &=& \frac{a\omega_c(B(t))}{c}\dot{\theta}\cos\theta+\frac{ab}{c}\dot{B}(t)\sin\theta \nn\\ &&-\frac{a\gamma}{c}\dot{\phi}\sin\theta+\sqrt{\eta}~\xi_\phi. \end{eqnarray} \end{subequations} In the above equations, the dots represent differentiation with respect to the dimensionless time $\tau = (c/a)t$. Here $\gamma=\Gamma/m$, $\omega_c(B(t))= eB(t)/mc$, $b=e/(2mc)$ and $\eta = 2Ta\gamma/mc^3$. \section{Results and discussions} First we consider the case of a static magnetic field ${\bf B}$ of magnitude $B$ in the $\hat{{\bf z}}$ direction. The ensemble-averaged orbital magnetic moment, which by symmetry is also in the $\hat{{\bf z}}$ direction, is given by \beq \la M(t)\ra = -\frac{ea}{2}\la\dot{\phi}\sin^2\theta\ra \eeq where $\la\cdots\ra$ denotes the ensemble average over different realizations of the stochastic process. Following the same procedure as in \cite{kum09}, we have calculated the equilibrium magnetic moment by double averaging: first over a large observation time and then over the ensemble: \beq M_{eq} = \la\la M(t)\ra\ra = \frac{1}{\tau}\int_0^\tau dt~\la M(t)\ra \eeq as $\tau\to\infty$. As a numerical check, we have reproduced the results of figures 2 and 3 of \cite{kum09}. Throughout our analysis, we have used dimensionless variables; $e$, $c$, $m$ and $a$ are all taken to be unity. \begin{figure}[h] \vspace{0.5cm} \centering \epsfig{file=fig1.eps,width=7cm} \caption{Plots of magnetic moment $M_{eq}$ versus magnetic field $B$ for different $\gamma$ and for a given temperature $T=1$.
The different plots are for $\gamma$=1, 1.5 and 2, as mentioned in the figure.\vspace{0cm}} \label{M_B_gamma} \end{figure} \begin{figure}[h] \vspace{0.5cm} \centering \epsfig{file=fig2.eps,width=7cm} \caption{Plots of $M_{eq}$ versus $B$ for different $T$ and for a given friction coefficient $\gamma=1.$ The different plots are for $T$=1, 1.5 and 2.\vspace{0cm}} \label{M_B_T} \end{figure} \begin{figure}[h] \vspace{0.2cm} \centering \epsfig{file=fig3.eps,width=7cm,clip=} \caption{Plots of $M_{eq}$ as a function of $T$ for $\gamma=1$ and for three different values of the external magnetic field: $B$=3, 5 and 7.\vspace{0cm}} \label{M_T} \end{figure} \begin{figure} \vspace{0.2cm} \centering \epsfig{file=fig4.eps,width=7cm} \caption{Plots of $M_{eq}$ as a function of the friction coefficient $\gamma$, for 4 different values of $B$: $B$=3, 5, 7 and 10, with $T$=1. Note that the axis for $\gamma$ starts from 0.8. In the inset we have plotted the curves $M_{eq}$ versus $\gamma$ for $B$=12 and 15. \vspace{0cm}} \label{M_gamma} \end{figure} \begin{figure} \vspace{0cm} \centering \epsfig{file=fig5.eps,width=7cm} \caption{Plots of $\Delta F$ versus the final value of the magnetic field $B(\tau)$ for $\gamma=1$ and $\gamma=2$. The protocol used is a ramp, $B=B_0t/\tau$, for a time of observation $\tau=2000$, with the temperature fixed at $T$=1. The inset shows the variation of $\Delta F$ as a function of the friction coefficient $\gamma$, with the parameters $T$=1, $B(\tau)=B_0$=10. \vspace{0cm}} \label{F_B} \end{figure} \begin{figure} \vspace{0cm} \centering \subfigure[]{ \epsfig{file=fig6a.eps,width=7cm} }\vspace{0.8cm} \subfigure[]{ \epsfig{file=fig6b.eps,width=7cm} } \caption{Determination of $\Delta F$ using the CFT. (a) Plots of $P_f(W)$ and $P_r(-W)$ at $B(\tau)=B_0=10$, which cross at $W(=\Delta F)=0.22$. (b) Plots of $P_f(W)$ and $P_r(-W)$ at $B(\tau)=B_0=15$, which cross at $W(=\Delta F)=0.65$.} \label{fig4} \end{figure} The thermodynamic work done by the external time-dependent magnetic field on the system up to time $t$ is given by \begin{equation} W(t) = \int_0^t \pd{H}{t'}dt' = \frac{ea}{2}\int_0^t dt'~\dot{\phi}(t')\sin^2\theta(t')\dot{B}(t'). \end{equation} In our case, $B(t)$ acts as the external protocol $\lambda(t)$. The Langevin equations are solved numerically by using the Euler method of integration with time step $\Delta t = 0.01$. The same boundary conditions and numerical procedure as in \cite{kum09} are used. In figure \ref{M_B_gamma}, we have plotted the dimensionless magnetic moment $M_{eq}(\equiv \frac{M_{eq}}{ea})$ versus the magnetic field in dimensionless units $B (\equiv \frac{eBa}{mc^2})$ for different values of the friction coefficient $\gamma (\equiv \frac{\Gamma a}{mc})$, as mentioned in the figure. At each point, the sign of $M_{eq}$ is opposite to that of $B$, providing clear evidence of diamagnetism. Initially, the magnitude of $M_{eq}$ increases with $B$ (linear response) and, after exhibiting a peak, approaches zero at high fields. At high fields, the radius of the cyclotron orbits is expected to tend towards zero, and hence the magnetic moment naturally vanishes. With increasing friction coefficient $\gamma$, the peak shifts towards higher magnitudes of the magnetic field.
It should be noted that this behaviour is qualitatively consistent with the exact result obtained for the orbital magnetic moment $M_{2d}$ of a charged particle in a two-dimensional plane in the absence of a boundary, following the real space-time approach (see eq.(8) of \cite{jay81}). The expression for the orbital magnetic moment $M_{2d}$ is given by \begin{equation} M_{2d} = -\frac{e}{2c}\left(\frac{T\omega_c}{\gamma^2 + \omega_c^2}\right). \label{M2d} \end{equation} Here, $\omega_c=eB/mc$, where $B$ is the magnitude of the static magnetic field. Compared to the analysis in \cite{kum09}, we have gone beyond the linear response regime. In figure \ref{M_B_T} we have plotted the magnitude of $M_{eq}$ as a function of $B$ for different values of the temperature $T$. From figures \ref{M_B_gamma} and \ref{M_B_T}, it can be inferred that the magnetic moment can be monotonic or non-monotonic in $T$ and $\gamma$, depending on whether the values of $B$ lie within the linear response regime or beyond. To this end, in figures \ref{M_T} and \ref{M_gamma}, we have plotted the equilibrium magnetic moment as a function of the temperature $T$ and the friction coefficient $\gamma$ respectively, for various values of $B$. The magnetic moment is zero at $T=0$ as well as at $T=\infty$. It exhibits a minimum in the intermediate range of temperature. This minimum shifts towards lower temperature with increasing $B$. It should be noted that for larger temperatures, a larger number of realizations is required to generate accurate data points. In figure \ref{M_gamma}, we notice that in the parameter range that we have considered, the equilibrium magnetic moment decreases monotonically with the friction coefficient. For large $\gamma$, the particle motion gets impeded by the medium and, as expected, $M_{eq}\to 0$ as $\gamma\to\infty$. As $\gamma\to 0$, the magnetic moment saturates at a value which depends on the parameters $B$ and $T$; this is not shown in the figure. It is evident from figure \ref{M_B_gamma} that for large values of $B$ ($B>10$), the dependence of $M_{eq}$ on $\gamma$ is non-monotonic. This is shown in the inset, where $M_{eq}$ is plotted as a function of $\gamma$ for $B=12$ and for $B=15$. It is observed that the dip in $M_{eq}$ shifts towards higher $\gamma$ for higher values of $B$. For small friction coefficients, the saturation value is very small (for large $B$) and a much larger number of realizations is required to achieve reliable results. The details of these results will be published elsewhere. Our results clearly indicate that the temperature and friction dependence of the classical magnetic moment obtained via the real space-time approach is qualitatively different for an infinite unbounded system (eq. (\ref{M2d})) and for the finite unbounded system considered here. From eq. (\ref{M2d}) we can readily infer that the dependence of $M_{2d}$ on the temperature $T$ and on the friction coefficient $\gamma$ is monotonic. Having shown that the space-time approach leads to a finite diamagnetic moment, in contrast to its absence in canonical equilibrium, we can now turn to the calculation of free energy differences for the same problem using the FTs. We subject the system to a time-dependent magnetic field (protocol) in the form of a ramp, $B(t)=B_0t/\tau$, where $\tau$ denotes the total time of observation. We use the ramp with an observation time $\tau=2000$. The final value of the magnetic field is $B(\tau)=B_0$. A minimal numerical sketch of this simulation procedure is given below.
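The following sketch, written in the dimensionless units $e=c=m=a=1$ used above (so that $\omega_c=B$, $b=1/2$ and $\eta=2T\gamma$), illustrates for a single realization the Euler integration of the dimensionless Langevin equations under the ramp protocol and the accumulation of the thermodynamic work; averaging $e^{-W/T}$ over many realizations then gives the JE estimate of $\Delta F$. It is an unoptimised illustration rather than our production code, and details such as the handling of the coordinate singularity at the poles are omitted.
\begin{verbatim}
import numpy as np

# Dimensionless units: e = c = m = a = 1, so omega_c = B, b = 1/2, eta = 2*T*gamma
gamma, T, B0, tau, dt = 1.0, 1.0, 10.0, 2000.0, 0.01
eta = 2.0 * T * gamma
rng = np.random.default_rng()

def work_one_realization():
    # Initial state drawn from equilibrium at B(0) = 0: uniform position
    # on the sphere, Maxwellian angular velocities
    theta = np.arccos(1.0 - 2.0 * rng.random())
    phi = 2.0 * np.pi * rng.random()
    thdot = rng.normal(0.0, np.sqrt(T))
    phidot = rng.normal(0.0, np.sqrt(T)) / np.sin(theta)
    Bdot = B0 / tau                     # ramp protocol B(t) = B0*t/tau
    W = 0.0
    for n in range(int(tau / dt)):
        B = Bdot * n * dt
        s, c = np.sin(theta), np.cos(theta)
        xi_th, xi_ph = rng.normal(size=2) / np.sqrt(dt)   # white noise
        # Euler step for the two coupled Langevin equations
        thddot = (phidot**2 - B * phidot) * s * c - gamma * thdot \
                 + np.sqrt(eta) * xi_th
        phddot = (B * thdot * c + 0.5 * Bdot * s - gamma * phidot * s
                  - 2.0 * thdot * phidot * c + np.sqrt(eta) * xi_ph) / s
        W += 0.5 * phidot * s**2 * Bdot * dt   # dW = (ea/2) phidot sin^2(th) Bdot dt
        theta += thdot * dt
        phi += phidot * dt
        thdot += thddot * dt
        phidot += phddot * dt
    return W

# Jarzynski estimate of Delta F from many realizations
Ws = np.array([work_one_realization() for _ in range(10**4)])
dF = -T * np.log(np.mean(np.exp(-Ws / T)))
\end{verbatim}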
To calculate the free energy difference, $\Delta F = F(B_0)-F(0)$, we have used the JE (eq (\ref{JE})). To calculate $\Delta F$ numerically, we have generated $10^4$ realizations of the process, making sure that the system is initially in canonical equilibrium in the absence of the magnetic field ($B(0)=0$). The results for $\Delta F$ are plotted as a function of $B(\tau)$ in figure \ref{F_B} for two values of $\gamma$. All physical parameters are in dimensionless units and are as mentioned in the figure. Surprisingly, we notice that $\Delta F$ depends on the magnetic field $B(\tau)$. This is in sharp contrast to the equilibrium result, namely that $\Delta F$ should be identically zero. To our knowledge, this is the \emph{first} example wherein the Fluctuation Theorem fails to reproduce the result obtained from equilibrium statistical mechanics. This is yet another surprise in the field of classical diamagnetism. Moreover, $\Delta F$ depends on the type of protocol. The dependence of $\Delta F$ on the friction coefficient is shown in the inset of figure \ref{F_B}. It should be noted that, in classical equilibrium, the free energy does not depend on the friction coefficient. From this free energy, one can obtain the magnetic moment by calculating the derivative of the obtained free energy with respect to $B$. However, the magnetic moment thus obtained does not agree with that obtained through the simulation of the Langevin equations. This we have verified separately. In figures \ref{fig4}(a) and (b), we have plotted $P_f(W)$ and $P_r(-W)$ as a function of $W$ for the same protocol ending with two different values of the magnetic field ($B(\tau)=10$ and 15). The crossing point of $P_f(W)$ and $P_r(-W)$, according to the CFT, gives the value of $\Delta F$, which we have found to be equal to 0.22 for $B_0=10$ and 0.65 for $B_0=15$; these agree, within our numerical accuracy, with the values obtained using the JE, namely 0.22 and 0.65 respectively. Thus we have shown that a charged particle on a sphere exhibits a finite diamagnetic moment, calculated via the real space-time approach, and a magnetic-field-dependent free energy, calculated via the Fluctuation Theorems. As mentioned earlier, these results contradict equilibrium statistical mechanics. \section{Charged particle on a ring} Now we turn to the simpler problem of a charged particle moving on a ring in a magnetic field perpendicular to the plane of the ring, i.e., in the $\hat{{\bf z}}$ direction. This problem has been studied recently \cite{kap09} in connection with the BvL for particle motion in a finite but unbounded space, where it was shown that this system, analyzed via the Langevin dynamics, does not exhibit orbital diamagnetism, consistent with equilibrium statistical mechanics. This is not surprising, as the equation of motion for the relevant dynamical variable, namely the azimuthal angle $\phi$, does not depend on the strength of the static magnetic field. Hence, the magnetic field has no effect on the motion of a particle constrained to move in a circle of fixed radius $a$. We analyze the same problem, however, in the presence of a time-dependent magnetic field (protocol), within the framework of the Jarzynski Equality, to obtain the dependence of the free energy on the magnetic field in this case. To this end, the Hamiltonian of the system is given by \beq H = \frac{1}{2m}\left(\frac{p_\phi}{a}+\frac{eA_\phi(t)}{c}\right)^2, \label{Hring} \eeq where, for a magnetic field in the $\hat{{\bf z}}$ direction, $A_\phi(t) = (a/2)B(t)$.
The corresponding Langevin equation for the relevant variable $\phi$ is given by \begin{equation} ma\ddot{\phi} = -\Gamma a\dot{\phi}+\frac{ea}{2c}\dot{B}(t)+\sqrt{2T\Gamma}~\xi_\phi . \end{equation} The above equation can be written in the compact form \beq \ddot{\phi} = -\gamma\dot{\phi}+\lambda\dot{B}(t)+\sqrt{\eta}~\xi_\phi, \label{ring} \eeq with $\gamma=\frac{\Gamma}{m}$, $\lambda = \frac{e}{2mc}$ and $\eta = \frac{2\gamma T}{ma^2}.$ In this section, the dots represent differentiation with respect to the real time $t$. It may be noted that if the magnetic field is independent of time, i.e., $\dot{{\bf B}}=0$, then the field has no effect on the $\phi$ variable, as can be seen from equation (\ref{ring}). The thermodynamic work $W$, using equations (\ref{Wdef}) and (\ref{Hring}), is given by \beq W(t) = \int_0^t\pd{H}{t'}~dt' = \frac{ea^2}{2c}\int_0^t\dot{\phi}(t')\dot{B}(t')~dt'. \label{Wring} \eeq The formal solution for $\dot{\phi}$ is given by \begin{equation} \dot{\phi}(t) = \dot{\phi}(0) e^{-\gamma t} + e^{-\gamma t}\int_0^t dt'~e^{\gamma t'}[\lambda \dot{B}(t')+\sqrt{\eta}~\xi_\phi(t')]. \end{equation} Substituting this solution in eq (\ref{Wring}) for $W$, we get \begin{eqnarray} W(t) &=& g\int_0^t dt'\dot{B}(t')[\dot{\phi}(0)e^{-\gamma t'}+e^{-\gamma t'}\int_0^{t'} \{ \lambda \dot{B}(t'')\nn\\ &&+\sqrt{\eta}~\xi_\phi(t'') \}e^{\gamma t''}~dt''], \label{Wexpr} \end{eqnarray} where $g=ea^2/2c$. Since the expression for $W$ in the above equation is linear in the Gaussian stochastic variable $\xi_\phi(t)$, $W$ itself follows a Gaussian distribution. To obtain $P(W)$, we simply need to evaluate the average work $\la W\ra$ and the variance $\sigma_W^2 = \la W^2\ra-\la W\ra^2$. The full probability distribution $P(W)$ is given by \begin{equation} P(W) = \frac{1}{\sqrt{2\pi\sigma_W^2}}\exp\left[-\frac{(W-\la W\ra)^2}{2\sigma_W^2}\right]. \label{PW} \end{equation} Averaging eq. (\ref{Wexpr}) over random realizations of $\xi_\phi(t)$, and noting that $\la\xi_\phi(t)\ra=0$, we get, for the average work done up to time $\tau$: \begin{equation} \la W\ra = g\lambda\int_0^\tau dt'\dot{B}(t')e^{-\gamma t'}\int_0^{t'}dt''~\dot{B}(t'')e^{\gamma t''}. \label{Wavg} \end{equation} Again using eqs. (\ref{Wexpr}) and (\ref{Wavg}), after straightforward but tedious algebra, the variance $\sigma_W^2$ can be obtained and is given by \begin{equation} \sigma_W^2 = \frac{g^2}{2\gamma}\eta \int_0^\tau dt'~\dot{B}(t')\int_0^\tau dt_1\dot{B}(t_1)e^{-\gamma |t'-t_1|}. \label{Wvar} \end{equation} In arriving at the above expression, we have used the fact that the variance of the initial equilibrium distribution of the angular velocity $\dot{\phi}(0)$ is given by $\la\dot{\phi}^2(0)\ra = \frac{T}{ma^2} = \frac{\eta}{2\gamma}$. Comparison between (\ref{Wavg}) and (\ref{Wvar}) gives the result \beq \sigma_W^2 = 2T\la W\ra, \label{FD} \eeq a fluctuation-dissipation relation. Using eqs. (\ref{PW}) and (\ref{FD}), together with the Gaussian identity $\la e^{-\beta W}\ra = \exp\left(-\beta\la W\ra+\frac{1}{2}\beta^2\sigma_W^2\right)$, and noting that $\frac{1}{2}\beta^2\sigma_W^2 = \beta^2 T\la W\ra = \beta\la W\ra$ (in units where $k_B=1$), we get \beq \la e^{-\beta W}\ra = 1, \eeq which, according to the JE, implies $\Delta F = F(B(\tau))-F(B(0)) = 0$, where $B(0)$ and $B(\tau)$ are the values of the magnetic field at the initial and final times of the protocol respectively. The magnitudes of $B(0)$ and $B(\tau)=B$ can take any value. Thus, $\Delta F=0$ implies that the free energy is independent of the magnetic field, the result being consistent with equilibrium statistical mechanics.
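As a concrete example of eq. (\ref{Wavg}), consider the ramp protocol $B(t)=B_0t/\tau$ used in the previous section, for which $\dot{B}=B_0/\tau$ is constant. The inner integral evaluates to $(B_0/\tau)(e^{\gamma t'}-1)/\gamma$, and hence \beq \la W\ra = \frac{g\lambda B_0^2}{\gamma\tau^2}\left[\tau-\frac{1-e^{-\gamma\tau}}{\gamma}\right], \eeq which reduces to $g\lambda B_0^2/\gamma\tau$ for $\gamma\tau\gg 1$. The average work is thus positive and depends on $B_0$, $\tau$ and $\gamma$, even though $\la e^{-\beta W}\ra=1$ identically.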
It is interesting to note that the averaged work $\la W\ra$ (eq. (\ref{Wavg})) and its variance $\sigma_W^2$ (eq. (\ref{Wvar})) depend on the functional form of $B(t)$ and on the magnetic fields at the end points of the observation time, and yet $\la\exp(-\beta W)\ra$ is independent of the magnetic field. We have obtained this exact result independently of the functional form of the protocol $B(t)$. \section{Conclusion} In conclusion, whenever the real space-time approach for a charged particle in the presence of a magnetic field predicts a finite diamagnetic moment, the Fluctuation Theorems too fail to reproduce results consistent with equilibrium statistical mechanics. These conclusions have also been supported by the results for the motion of a charged particle in a two-dimensional plane in the absence of a boundary \cite{jay81,sah09}. In cases where the real space-time approach to diamagnetism is not in conflict with equilibrium statistical mechanics, an example being a charged particle on a ring or in the presence of a confining boundary \cite{sah08}, the Fluctuation Theorems lead to results consistent with equilibrium statistical mechanics. Only experiments can resolve whether orbital diamagnetism really exists in classical equilibrium systems (such as a charged particle on the surface of a sphere). \acknowledgements One of us (A.M.J) thanks Prof. N. Kumar and K. Vijay Kumar for several useful discussions and also thanks DST, India for financial support. A.S. thanks IOP, Bhubaneswar (where part of the work was carried out) for hospitality.
\subsection{Pre-processing of Data} The data set contained over 1300 multiple-choice question-answer pairs (including their explanations), along with the level of difficulty and the overall quality score given by the students. As our data set of questions was in a .txt format, it was necessary to import it into a suitable format in order to be able to process the data, as well as retain its inherent structure. In order to do this, we used Python to clean the data set and convert it to .csv format, which gave us a large degree of control over the structure of our final data set. The next hurdle was to deal with questions that contained code scripts. This posed a problem with the available LDA models, as they did not recognise code terms as potential tags for questions. We manually searched through the entire data set for code keywords and added respective tags at the start of those questions. The keywords tagged were: BigO, Modulo, for, if, while, else, print. Prior to applying our model to the data set, we needed to make sure that it did not contain any grammatical errors. Since our data set contained various subject-specific terms not found in the English dictionary, we manually searched for these terms and corrected any grammatical errors. The next step of preparing the data for the LDA model was removing punctuation and converting the data to lowercase text. This was easily implemented within the DataFrame structure using the 'sub' and 'lower' functions. \subsubsection{Examples of Implementation} Its non-parametric nature allows the HDP model to be applied to various uses, some of which are: \begin{itemize} \item Multi-population Haplotype Phasing \cite{Xing2006}: A new Bayesian approach to haplotype inference for multiple populations was developed, which incorporates HDP and has an improved accuracy over other haplotype inference algorithms. \item Word Segmentation \cite{Pei2013}: A refined model based on HDP was developed for word segmentation in the Chinese language, with improvements to the base measure using a dictionary-based model. \item Musical Similarity Computation \cite{Hoffman2008}: A method based on HDP for estimating the timbral similarity between recorded songs, which are represented as feature vectors. \end{itemize} \subsubsection{Dirichlet Distribution} For more information on the Dirichlet Distribution, see \citet{book}. In the context of LDA application, it is unlikely that an individual document will consist of all the topics found in the corpus, so parameter values in the range $[0,1]$ are generally considered to be an ideal testing range. When fitting an LDA model to the provided dataset, the parameters $\alpha$, $\eta$ will need to be adjusted to reflect the word-topic and topic-document distributions found in the corpus data. Using the intuition behind their effect on the distributions produced, it is straightforward to see that adjusting the $\alpha$ and $\eta$ parameters should determine the 'mixture' of topics in a document/words in a topic. Figure \ref{fig:LDAPlateNotation} shows an overall view of the dependencies within the LDA model: \vspace{2pt} \begin{center} \includegraphics[width=6.5cm]{Smoothed_LDA_2.png} \captionof {figure} {Plate Notation for LDA \cite{lda2003}} \label{fig:LDAPlateNotation} \begingroup \fontsize{8pt}{12pt}\selectfont \textit{Here, M denotes a document, N denotes the words contained within the document, k denotes the identified topics, and z denotes the topic(s) assigned to the observed word w.
Each rectangle denotes a repetition within the model to form the overall structure of the corpus data.} \endgroup \end{center} \vspace{7pt} The main advantage when using LDA for topic modelling and clustering is the production of latent groups, which can be readily interpreted as topics. Because LDA is a generative probabilistic model, it is possible to generalise an LDA model to classify documents outside of the training corpora. For a more in-depth look into the process of extending and training an LDA model into a classifier using the identified latent topics, see \citet{phan2008}, \citet{PAVLINEK201783}. The significant limitations of an LDA model arise from the inability to capture topic correlations (due to the independence assumed by using the Dirichlet distribution), its use with short-text documents, and the ambiguity around evaluation. For further reading, the problem surrounding correlations is directly approached in \citet{blei2007}, where the 'Correlated Topic Model' is proposed. The limitation encountered when using short-text documents stems from the lack of information available. By the nature of the data set being fitted to the LDA model, as less information is provided, it is harder for the model to establish meaningful associations between documents, so the documents within a topic are generally weaker in terms of 'similarity'. In terms of evaluating an LDA model, there are probability-based metrics available which provide a quantitative means by which the quality of any two topic models can be compared. However, as stated in \citet{HAGEN20181292}, 'the best-fitting model and human judgment are negatively correlated'. As a solution, an approach they detailed involved two steps. Firstly, a topic model is fitted by optimisation of some measure (e.g. perplexity) for each iterative step involved in the training method. Then humans are required to manually assess the quality of the topics formed, allowing an optimal topic number to be derived that 'produces the best quality topics within the range [established from] the first step'. \subsubsection{Examples of Implementation} The paper in which the LDA model was first proposed \cite{Pritchard945} used the model in the context of population genetics to gain insights into the population structure and assign individuals to groups according to indications from their genotypes. Due to the nature of the LDA model, it can be used in a wide range of applications, but it naturally lends itself to the field of natural language processing. Some applications of LDA are: \begin{itemize} \item Tag Recommendation System \cite{Kresteletala09}: Inspired by the tagging of online multimedia sources with irrelevant or niche tags, resources that had been tagged by users were used to find latent topics to map new resources to fewer, more relevant tags. In this case, each tag would be viewed as a 'word', and the tags associated to some resource as a 'document'. \item Review Analysis \cite{GUO2017467}: A data mining approach was used on a data set of 266,544 online hotel reviews for hotels across 16 countries. Using LDA, this approach identified different 'controllable dimensions' and their significance, to inform hotels on how best to interact with their visitors. \item Fraud Detection \cite{XING20071727}: Used in the telecommunications industry, the aim was to detect fraudulent calls from the high-volume network.
LDA was used to develop profile signatures, so that any activity that deviated from what was considered 'normal' would be flagged for possible fraudulence. \end{itemize} \subsubsection{Determining an Optimum Topic Number} Choosing the number of topics (K) was a significant challenge we faced as part of the implementation. For the LDA model, this parameter needs to be specified before the data set is fitted. The optimal value tends to be found via experimental methods, as a standard method to determine it does not exist \cite{Croft2009SearchE}. When choosing the number of topics, it is important to consider the cases where K is less/more than the optimum number of topics. In the former case, the LDA model would produce over-generalised topics where data objects are over-aggregated, resulting in a model with clusters that don't produce any significant insights into a structure behind the data set. In the latter case, the LDA model would produce latent topics with very few data objects clustered within each. This would reduce the possibility of the LDA model detecting an association between data objects and clustering them together. The extremes of both cases would, either way, reduce the LDA model to an unusable state. In rare cases, the context of application can dictate the optimum number of topics. Since this wasn't the situation here, we decided to implement a Hierarchical Dirichlet Process (HDP) to provide an insight into the range of topic numbers we would take forward for further testing. \subsubsection{HDP 1 vs HDP 2} In our implementation, we make use of the Python package 'gensim' to form our HDP model. In the 'gensim' version of HDP, the model produces a probability distribution, detailing the likelihood of a given topic appearing in the ‘optimal’ set of topics. Initially, we used a minimum likelihood criterion of 1/n, where n was the size of our data set, as the minimum probability of a topic being significant. We then used this number of significant topics as a prediction to use within the LDA model. However, when running the LDA model with this HDP estimate for the number of topics, the clustering output was not optimised in all cases, as a number of topics were never clustered to, for various values of n. Indeed, this problem seemed to worsen for larger data sets. For an initial corpus size of 100, the ratio of clustered topics to the initial number predicted by the HDP model was 1, which decreased to around 0.6 for a data set of more than 900 questions. In order to overcome this issue, we explored using a stricter minimum significance criterion, and ran the HDP model a second time using 1/x, where x was the number of topics given in the first HDP estimate. This had the effect of yielding a smaller number of topics in all cases, as well as a more effective ratio, as seen in the second run in Fig. 1. The ratios in this case are all larger than 0.8, thus almost all topics are being clustered for various data set sizes, resulting in an improved clustering outcome from the LDA model. A sketch of this two-stage procedure is given below.
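The following is a minimal illustration of one way to realise the two-stage criterion with gensim; it is an illustrative reconstruction rather than our exact code. Here, 'texts' stands for the pre-processed, tokenised questions, the corpus-averaged topic weights serve as the significance measure, and 150 is gensim's default topic truncation.
\begin{verbatim}
from gensim.corpora import Dictionary
from gensim.models import HdpModel
import numpy as np

def significant_topics(corpus, dictionary, threshold):
    """Fit an HDP model and count topics whose corpus-averaged
    weight exceeds the given significance threshold."""
    hdp = HdpModel(corpus, id2word=dictionary)
    weights = np.zeros(150)              # gensim's default topic truncation
    for bow in corpus:
        for topic_id, prob in hdp[bow]:  # per-document topic mixture
            weights[topic_id] += prob
    weights /= len(corpus)
    return int(np.sum(weights > threshold))

# 'texts' is the list of pre-processed, tokenised questions
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
x = significant_topics(corpus, dictionary, 1.0 / len(texts))  # HDP-1
k = significant_topics(corpus, dictionary, 1.0 / x)           # HDP-2
\end{verbatim}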
\subsection{Zeno's Paradox} When implementing the recursion process to fine-tune our topic parameter, an important consideration was that of Zeno's Paradox \cite{sep-paradox-zeno}. Within our recursive method we constantly checked the efficiency of the LDA model, i.e., the number of topics given as input compared to the number of topics actually used within the clustered output. Since these processes are probabilistic, there is always a slight chance that the efficiency ratio might decrease at points within the progression of the runs. To avoid encountering Zeno's Paradox, we set up a subroutine, with initialised parameters $\gamma$ and $\eta$, such that the run would be ended if one of the following conditions were met: \begin{itemize} \item The difference in the efficiency ratio of successive runs is greater than $\gamma$. \item The number of successive runs with a steady decrease in the efficiency ratio is greater than $\eta$. \item The ratio of effective topics hits 1. \item The efficiency ratio hits 1. \end{itemize} The first three points mentioned above help prevent entering an infinite loop (Zeno's Paradox). The final point is intended for exit from the recursive loop with a positive run. When carrying out the previously detailed iterative process over our question data set, we examined the impact of two specific variations: incrementally increasing the size of the questions data set being fitted with the LDA model, and applying various permutations to the data set. In the first variation, we were expecting to observe some degree of trend when fitting the data set with the first 100, 200, 300,... questions. The second variation was implemented with the intention of observing the extent to which the questions involved in the various question sets would impact the LDA model that is fitted. Executing these variations required very different approaches. Testing and fitting the various question sets involved isolating the appropriate questions from the data set, which was readily achieved using available set functions commonly found in programming languages. Permuting the questions data set required making alterations to the raw data set before any pre-processing would take place. Before any iterative runs could be carried out, the $\alpha$ and $\eta$ parameters had to be optimised for the specific data set that would be used. A range of possible values was obtained by maximisation of the coherence measure included in the 'gensim' package. This approach proved sufficient, as these parameters were varied over the range [0,1] -- a finite parameter space (justified earlier), as opposed to the infinite space for topics -- allowing greater ease when identifying and inferring meaning from values of high coherence. The decision was made to use the default coherence measure, but further reading into the most effective topic measure implementations could start with \citet{10.1145/2684822.2685324}. \subsection{Validation of Methodology and Results} This subsection contains a detailed analysis for validating our methodology. We split each of our data sets, i.e., the original data and its 5 permutations, into the first 100, 200,... questions. We then produced results from 100 runs on each of these data files using HDP-2 and our iterative model. We started by recording the average estimated topics reached by our model against the HDP-2 model for each data set. The x-axis of the graphs below lists the number of questions contained in the data, while the y-axis contains the mean-mode metric. \begin{rmk} We computed the mean-mode by first computing the mode of the topic numbers. If multiple modes were found, we chose the value closest to the mean. Choosing the mean itself could result in obtaining a value that wasn't present within the set, resulting in a bad estimate of the output of our algorithm. \end{rmk}
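For concreteness, the following is a small illustrative helper implementing this tie-breaking rule (not our exact code):
\begin{verbatim}
from collections import Counter
import numpy as np

def mean_mode(topic_numbers):
    """Mode of the recorded topic numbers; ties are broken by
    picking the modal value closest to the mean."""
    counts = Counter(topic_numbers)
    top = max(counts.values())
    modes = [v for v, c in counts.items() if c == top]
    return min(modes, key=lambda v: abs(v - np.mean(topic_numbers)))
\end{verbatim}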
\vspace{-10pt} \begin{center} \includegraphics[width=6.5cm]{Comp_Mode.png} \captionof{figure}{} \label{fig:Mode Metric Comp} \end{center} The lines in yellow represent the topic numbers obtained from the HDP-2 model and the lines in red represent the final output of our model. The different shades in each colour represent the varied permutations of our data set. Notice that, for data sets containing a low number of questions (i.e. 100, 200, 300), the two models give roughly the same output. This further justifies our previous claim that, for sufficiently small data sets, HDP-2 would give an adequate estimate of the optimal number of topics. However, as the size of the data set increases, HDP-2 provides an excess number of topics. This can be further seen in the graphs in Figure \ref{fig:Prop Ratio Collection}. \begin{figure}[htp] \centering \includegraphics[width=0.2318\textwidth]{Track_RatiosA_1303.png} \hfill \includegraphics[width=0.2318\textwidth]{Track_PropA_1303.png} \medskip \includegraphics[width=0.234\textwidth]{Track_RatiosC_1303.png} \hfill \includegraphics[width=0.234\textwidth]{Track_PropC_1303.png} \medskip \includegraphics[width=0.234\textwidth]{Track_RatiosF_1303.png} \hfill \includegraphics[width=0.234\textwidth]{Track_PropF_1303.png} \caption{} \label{fig:Prop Ratio Collection} \end{figure} \vspace{-5pt} From the first column of Figure \ref{fig:Prop Ratio Collection}, which illustrates the evolution of the ratio of effective topics over 100 runs, we can see that the majority of runs saw a large improvement after the first few iterations, before reaching termination in the general range of 4 – 6 iterative steps, with Permutation C acting as the sole anomaly to this statement. Although we took into account the possibility of encountering Zeno’s Paradox, of the 100 runs for each of these permutations, there were no runs that ended in ‘Failure’. This result is promising, as it suggests that the chance of our algorithm outputting a failed run is extremely low. When observing the graphs in the second column of Figure \ref{fig:Prop Ratio Collection}, notice the distinction between the two lines. The blue line represents the cumulative proportion of the 100 runs that terminated at each step, while the orange line shows the marginal proportion that each step contributes. These graphs complement their corresponding run diagram well, as they identify any trends in path termination. Note that each graph finishes at 1, confirming that we have no failed cases. Similarly, the proportion graphs show that each set of runs terminates in the range of 4 – 6 iterative steps, with the exception of Permutation C, where most runs terminated in 6 or 7 steps, consistent with the anomaly noted above. Each range derived from the two figures provides an estimate of how many steps are required (on average) for our data sets. \begin{rmk}This approximation for the average number of iterative steps is dependent on the data set. This number would probably be different if another data set were to be used, but a similar process could readily be implemented to get a new range.\end{rmk} Extending this further, we ran a time analysis on all six of our data sets, comparing the HDP-2 model against our recursive model. \begin{center} \includegraphics[width=6.5cm]{TimingResults_A_2.png} \captionof{figure}{} \label{fig:Time Results} \end{center} Figure \ref{fig:Time Results} clearly shows that the HDP-2 model exhibits slow exponential growth, while our model displays a linear increase.
This characterisation is an approximation, as our data set contains a maximum of 1300 questions. Given that the times are in seconds, the difference in timing of the two runs, although roughly 4-fold, remains small in absolute terms for practical use. Hence the trade-off of a slower algorithm for a better clustering is justified. All the other data sets showed similar time graphs, providing further evidence that our model is not unduly influenced by the ordering of the question-answer pairs. \section{Introduction} \input{EAAI21_Introduction.tex} \section{Latent Dirichlet Allocation} \input{EAAI21_LDA.tex} \section{Hierarchical Dirichlet Process} \input{EAAI21_HDP.tex} \section{The Dataset}\label{sec:dataset} \input{EAAI21_Dataset.tex} \section{Methodology}\label{sec:method} \input{EAAI21_Method.tex} \section{Empirical Results} \input{EAAI21_Results.tex} \section{Conclusion and Further Work} \input{EAAI21_Conclusion.tex}
\section{Introduction} A pseudoprime is a composite number that satisfies some necessary condition for primality. Since primes are necessary building blocks for so many algorithms, and since the most common way to find primes in practice is to apply primality testing algorithms based on such necessary conditions, it is important to gather what information we can about pseudoprimes. In addition to the practical benefits, pseudoprimes have remarkable divisibility properties that make them fascinating objects of study. The most common necessary condition used in practice is that the number has no small divisors. Another common necessary condition follows from a theorem of Fermat, that if $n$ is prime and $\gcd(a,n)=1$ then $a^{n-1} = 1 \pmod{n}$. If composite $n$ satisfies $a^{n-1} = 1 \pmod{n}$ for some $a$ with $\gcd(a,n)=1$, we call $a$ a Fermat liar, and we denote by $F(n)$ the set of Fermat liars with respect to $n$. For the purposes of generalization, it is useful to translate the Fermat condition to polynomial rings. Let $n$ be prime, let $R = \mathbb{Z}/n \mathbb{Z}$, assume $a \in R^{\times}$, and construct the polynomial ring $R[x]/\langle x-a \rangle$. Then a little work shows that $x^n = x$ in $R[x]/\langle x-a \rangle$ \cite[Proof of Theorem 4.1]{Grantham01}. Indeed, $x = a$ in $R[x]/\langle x-a \rangle$, we have $R[x]/\langle x-a \rangle \cong R$ as fields, and $a^n = a$ in $R$. The advantage of this view is that $x-a$ may be replaced by an arbitrary polynomial. \begin{definition}[\cite{Grantham01}] Let $f(x) \in \mathbb{Z}[x]$ be a monic polynomial of degree $d$ and discriminant $\Delta$. Then composite $n$ is a Frobenius pseudoprime with respect to $f(x)$ if the following conditions all hold. \begin{enumerate} \item (Integer Divisibility) We have $\gcd(n, f(0)\Delta) = 1$. \item (Factorization) Let $f_0(x) = f(x) \pmod{n}$. Define $F_i(x) = {\rm gcmd}(x^{n^i}-x, f_{i-1}(x))$ and $f_i(x) = f_{i-1}(x)/F_i(x)$ for $1 \leq i \leq d$. All of the ${\rm gcmd}$s exist and $f_d(x) = 1$. \item (Frobenius) For $2 \leq i \leq d$, $F_i(x) \mid F_i(x^n)$. \item (Jacobi) Let $S = \sum_{2 \mid i} \deg(F_i(x))/i$. Then $(-1)^S = \jacs{\Delta}{n}$, where $ \jacs{\Delta}{n}$ is the Jacobi symbol. \end{enumerate} \end{definition} Here ${\rm gcmd}$ stands for ``greatest common monic divisor'' \cite{Grantham01}. If $g_1(x), g_2(x), f(x)$ are all monic and ${\rm gcmd}(g_1(x), g_2(x)) = f(x)$ with respect to $n$ this means that the ideal generated by $g_1(x), g_2(x)$ equals the ideal generated by $f(x)$ in $( \mathbb{Z}/n \mathbb{Z})[x]$. The ${\rm gcmd}$ may not exist, but when it does it is unique. Grantham shows that if ${\rm gcmd}(g_1(x), g_2(x))$ exists in $( \mathbb{Z}/n \mathbb{Z})[x]$, then for all primes $p \mid n$, $\gcd(g_1(x), g_2(x))$ has the same degree when taken over $ \mathbb{Z}/p \mathbb{Z}$ \cite[Corollary 3.3]{Grantham01}. Furthermore, the Euclidean algorithm when applied to $g_1(x), g_2(x)$ will either correctly compute their ${\rm gcmd}$, or find a proper factor of $n$ \cite[Proposition 3.5]{Grantham01}. \begin{example} Suppose $d = 1$ and $n$ is a Frobenius pseudoprime with respect to $f(x) = x-a$. Then ${\rm gcmd}(x^n-x, x-a) = x-a$, which means $a^n = a \pmod{n}$, and hence $a$ is a Fermat liar with respect to $n$. Conversely, if $a$ is a Fermat liar then $\gcd(a,n)=1$ and ${\rm gcmd}(x^n-x, x-a) = x-a$, from which we conclude that $n$ is a Frobenius pseudoprime with respect to $x-a$. \end{example} We denote by $L_d(n)$ the set of Frobenius liars of degree $d$ with respect to $n$, and note by the example above that $L_1(n) = F(n)$.
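Since the ${\rm gcmd}$ and the behavior of the Euclidean algorithm modulo $n$ recur throughout this paper, we include a small computational sketch (an illustration only, in Python, with polynomials stored as coefficient lists in increasing degree; the helper names are ours). Mirroring \cite[Proposition 3.5]{Grantham01}, the routine either returns the monic ${\rm gcmd}$ or surfaces a proper factor of $n$ when a required leading coefficient fails to be invertible.

\begin{verbatim}
from math import gcd

def _trim(p, n):
    # Reduce coefficients mod n and drop leading zeros.
    p = [c % n for c in p]
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def _rem(a, b, n):
    # Remainder of a modulo b in (Z/nZ)[x]; raises ValueError carrying
    # a proper factor of n if the leading coefficient of b is not a unit.
    a, b = _trim(a, n), _trim(b, n)
    g = gcd(b[-1], n)
    if 1 < g < n:
        raise ValueError(("proper factor of n found", g))
    inv = pow(b[-1], -1, n)
    while len(a) >= len(b) and a != [0]:
        q, shift = (a[-1] * inv) % n, len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] = (a[i + shift] - q * c) % n
        a = _trim(a, n)
    return a

def gcmd(g1, g2, n):
    # Euclidean algorithm in (Z/nZ)[x], made monic at the end.
    g1, g2 = _trim(g1, n), _trim(g2, n)
    while g2 != [0]:
        g1, g2 = g2, _rem(g1, g2, n)
    g = gcd(g1[-1], n)
    if 1 < g < n:
        raise ValueError(("proper factor of n found", g))
    return [(c * pow(g1[-1], -1, n)) % n for c in g1]
\end{verbatim}

For example, applying the routine to $x-3$ and $x-8$ with $n = 15$ surfaces the factor $5$. In practice one never forms $x^n - x$ coefficient by coefficient: $x^n$ is first reduced modulo $f(x)$ by repeated squaring, and only then is the Euclidean algorithm applied.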
We will further divide the set $L_2(n)$ into $L^+_2(n)$ and $L^-_2(n)$. A degree $2$ polynomial $f(x)$ with discriminant $\Delta$ will be in $L^+_2(n)$ (respectively $L^-_2(n)$) if $ \jacs{\Delta}{n} = 1$ (respectively $-1$). Notice that if $ \jacs{\Delta}{n} = 0$, $f(x)$ is not a liar since it fails the Integer Divisibility step. Let ${\rm Frob}_2(y, f(x))$ be the set of degree-$2$ Frobenius pseudoprimes with respect to $f(x)$, up to bound $y$, and similarly divide them into $+$ and $-$ sets according to the Jacobi symbol. Further, let ${\rm Frob}_2(f(x))$ be the (possibly infinite) set of all such pseudoprimes. By abuse of notation, the same symbols will be used for the size of each set. The main goal of this work is to generalize \cite{ErdosPomerance01}, which bounds the average number of Fermat liars, strong liars, and Euler liars. We prove the following two theorems. \begin{theorem}\label{thm:th1} For all $\alpha$ satisfying Proposition \ref{prop:lb1}, in particular $\alpha \leq \frac{23}{8}$, we have $$ y^{3-\alpha^{-1}-o(1)} < \sum_{n \leq y} L_2^+(n) < y^3 \cdot \mathcal{L}(y)^{-1 + o(1)} $$ where the sum is restricted to composite $n$. Moreover, the same bounds hold if we replace $L_2^+(n)$ by $L_2(n)$. Here $\mathcal{L}(y) = {\rm exp}( (\log{y})(\log\log\log{y})/\log\log{y})$, with $\log$ being the natural logarithm. \end{theorem} \begin{theorem}\label{thm:th2} For all $\alpha$ satisfying Proposition \ref{prop:lb2}, in particular $\alpha \leq \frac{4}{3}$, we have $$ y^{3-\alpha^{-1}-o(1)} < \sum_{n \leq y} L_2^-(n) < y^3 \cdot \mathcal{L}(y)^{-1 + o(1)} $$ where the sum is restricted to composite $n$. \end{theorem} As a comparison, if $n$ is prime then the size of $L_2(n)$ is $(n-1)^2$, $L_2^+(n) = \frac{1}{2}(n-1)(n-2)$, and $L_2^-(n) = \frac{1}{2}n(n-1)$. Thus the average count of liars for composites is rather large. \begin{rmk} We obtain the same results if we restrict to odd composite $n$, or more generally if we restrict to composite $n$ coprime to some fixed value. \end{rmk} These theorems count pairs $(f(x), n)$ where $n \leq y$ and $n$ is a degree-$2$ Frobenius pseudoprime with respect to $f(x)$. We thus have the following corollary on the average count of degree-$2$ Frobenius pseudoprimes with Jacobi symbol $-1$. \begin{corollary} \label{cor:psp_count} Suppose $\alpha$ satisfies the conditions outlined in Theorem \ref{thm:th2}. Then $$ \frac{1}{y^2} \sum_{a,b \leq y} {\rm Frob}_2^-(y, x^2+ax+b) \geq y^{1-\alpha^{-1}-o(1)} \enspace . $$ \end{corollary} In \cite{Grantham01}, Grantham offers \$6.20 for exhibiting a Frobenius pseudoprime with respect to $x^2+5x+5$ that is congruent to $2$ or $3$ modulo $5$. The proper generalization for these Grantham challenge pseudoprimes are the sets ${\rm Frob}_2^-(x^2+ax+b)$, since the condition of being $2,3 \bmod{5}$ is equivalent to $\jacs{\Delta(x^2+5x+5)}{n} = -1$. By Corollary \ref{cor:psp_count} these sets are infinite on average, providing good evidence that there are infinitely many Grantham challenge pseudoprimes. Further motivation for the present work comes from other challenge pseudoprimes. Pomerance, Selfridge, and Wagstaff ask in \cite{PSW01} whether there exists composite $n$ that is simultaneously a base-$2$ Fermat pseudoprime, a Fibonacci pseudoprime, and congruent to $2$ or $3$ modulo $5$. 
Potentially even rarer are Baillie pseudoprimes \cite{BaillieWagstaff01} (or Baillie-PSW pseudoprimes), which ask for composite $n$ that are simultaneously base-$2$ strong pseudoprimes and strong Lucas pseudoprimes with respect to a polynomial $x^2-Px+Q$ chosen in a prescribed way to ensure that $\jacs{P^2-4Q}{n} = -1$. Though it is not clear in either case whether the conditions correspond to Frobenius pseudoprimes or strong Frobenius pseudoprimes to a single polynomial $f(x)$, quadratic Frobenius pseudoprimes provide a natural generalization of the types of conditions requested. From this we conclude that the division of $L_2(n)$ into $\jacs{\Delta}{n} = \pm 1$ cases is of fundamental importance, and in particular that bounding $\sum_{n \leq y} L_2^-(n)$ is of strong interest. Though not explored in this work, since ${\rm Frob}_2(x^2-Px+Q)$ is a subset of the set of $(P,Q)$-Lucas pseudoprimes \cite[Theorem 4.9]{Grantham01}, there are potential applications to the theory of Lucas pseudoprimes. \section{Degree-$2$ Frobenius pseudoprimes} This work focuses on the degree $2$ case. We reproduce the definition and give some basic facts about Frobenius pseudoprimes and liars. \begin{definition} \label{def:2Frob} Let $f(x) \in \mathbb{Z}[x]$ be a degree $2$ monic polynomial with discriminant $\Delta$, and let $n$ be composite. Then $n$ is a degree-$2$ Frobenius pseudoprime with respect to $f(x)$ if the following four conditions hold. \begin{enumerate} \item (Integer Divisibility) We have $\gcd(n, f(0)\Delta) = 1$. \item (Factorization) Let $F_1(x) = {\rm gcmd}(x^n-x, f(x))$, $f_1(x) = f(x)/F_1(x)$, $F_2(x) = {\rm gcmd}(x^{n^2}-x, f_1(x))$, and $f_2(x) = f_1(x)/F_2(x)$. All these polynomials exist and $f_2(x) = 1$. \item (Frobenius) We have $F_2(x) \mid F_2(x^n)$. \item (Jacobi) We have $(-1)^S = \jacs{\Delta}{n}$, where $S = \deg(F_2(x))/2$. \end{enumerate} Alternatively, in this case we call $f(x)$ a degree-$2$ Frobenius liar with respect to $n$. \end{definition} The first condition ensures that $\Delta \neq 0$ and $0$ is not a root of $f(x)$. Since the discriminant is nonzero, $f(x)$ is squarefree. Thus the roots of $f(x)$ are nonzero and distinct. \begin{example} Consider $f(x) = x^2-1$ with $\Delta = 4$. If $n$ is odd, $F_1(x) = f(x)$ and $F_2(x) = 1$, so the Frobenius step is trivially satisfied. Since $S = 0$, $n$ will be a Frobenius pseudoprime as long as $ \jacs{\Delta}{n} = 1$. Since $4$ is a square modulo $n$ for all $n \geq 5$, we conclude that all odd $n \geq 5$ have at least one degree-$2$ Frobenius liar. \end{example} \begin{example} Next consider $f(x) = x^2+1$ with $\Delta = -4$. Observe that $n = 1 \pmod{4}$ if and only if $(-1)^{(n-1)/2} = 1$, which is true if and only if ${\rm gcmd}(x^n-x, f(x)) \neq 1$. In this case $F_2(x) = 1$ and $S = 0$, so $(-1)^S = 1 = \jacs{-1}{n} = \jacs{\Delta}{n}$. In the other case, $n = 3 \pmod{4}$ if and only if ${\rm gcmd}(x^n-x, f(x)) = 1$. However, $(-1)^{(n^2-1)/2} = 1$ and so ${\rm gcmd}(x^{n^2}-x, f(x)) = f(x)$. For the Frobenius step, we know $x^2+1 \mid x^{2n}+1$ since if $a$ is a root of $x^2+1$, $n$ odd implies that $(a^2)^n = -1$ and hence $a$ is also a root of $x^{2n}+1$. Finally, the Jacobi step is satisfied since $S = 1$ and $(-1)^S = -1 = \jacs{\Delta}{n}$. This demonstrates that $x^2+1$ is also a liar for all odd composite $n$. The minimum number of degree-$2$ Frobenius liars for odd composite $n$ is in fact $2$, first achieved by $n=15$.
\end{example} If we fix $n$ and instead restrict to liars with $ \jacs{\Delta}{n} = -1$ then it is possible that no such liars exist. See Section \ref{subsec:vanish} for a more in depth discussion of this case. We next give several reinterpretations of the conditions under which a number $n=\prod_i p_i^{r_i}$ is a degree-$2$ Frobenius pseudoprime with respect to a polynomial $f$. We treat cases $\jac{\Delta}{n} = +1$ and $\jac{\Delta}{n} = -1$ separately. \subsection{The case $\jac{\Delta}{n} = +1$} Supposing we already know that $\jac{\Delta}{n} = +1$, $n$ is a degree-$2$ Frobenius pseudoprime with respect to $f(x)$ if and only if \begin{enumerate} \item (Integer Divisibility) we have $\gcd(n, f(0)\Delta) = 1$, and \item (Factorization) ${\rm gcmd}(x^n-x, f(x)) = f(x) \pmod{n}$. \end{enumerate} All other conditions follow immediately. In particular, because $f(x) \mid x^n-x$ modulo $n$, it is not possible for the Euclidean algorithm to discover any non-trivial factors of $n$. We observe that these conditions can be interpreted locally, giving us the following result. \begin{proposition} \label{newdef_plus} Positive integer $n = \prod_i p_i^{r_i}$ satisfies Definition \ref{def:2Frob} in the case $\jacs{\Delta}{n} = 1$ if and only if \begin{enumerate} \item (Integer Divisibility) $\Delta$ is a unit modulo $n$ and $0$ is not a root of $f(x)$ modulo $p_i$ for all $i$, and \item (Factorization) ${\rm gcmd}(x^n-x, f(x)) = f(x) \pmod{p_i^{r_i}}$ for all $i$. \end{enumerate} \end{proposition} \begin{proof} First assume that $n$ is a degree-$2$ Frobenius pseudoprime with respect to $f(x)$ according to Definition \ref{def:2Frob} and that $\jacs{\Delta}{n} = 1$. Then $\gcd(n, f(0) \Delta) = 1$, so $\gcd(\Delta, n) = 1$ making $\Delta$ a unit, and $\gcd(f(0), n) = 1$. It follows that $f(0) \neq 0 \pmod{p}$ for all $p \mid n$. The Jacobi condition in Definition \ref{def:2Frob} along with the assumption that $\jacs{\Delta}{n} = 1$ ensures $S = 0$ and so ${\rm deg}(F_2(x)) = 0$. All the polynomials in condition (2) are monic, so $F_2(x) = 1$, which implies $f_1(x) = 1$, so that ${\rm gcmd}(x^n - x, f(x)) = f(x)$. Since this identity is true modulo $n$, it is true modulo $p_i^{r_i}$ for all $i$. Conversely, if ${\rm gcmd}(x^n - x, f(x)) = f(x) \pmod{p_i^{r_i}}$ for all $i$, then the identity is true modulo $n$ by the Chinese remainder theorem. It follows that $f_1(x) = 1$ and so $F_2(x) = 1$. Thus condition (2) of Definition \ref{def:2Frob} is true, condition (3) follows trivially, and condition (4) is true since $S = 0$. We are assuming that $\Delta$ is a unit modulo $n$, from which it follows that $\gcd(\Delta, n) = 1$. Furthermore, $f(0) \neq 0 \pmod{p}$ for all $p \mid n$ implies $\gcd(f(0), n) =1$. Thus condition (1) is satisfied. \end{proof} \subsection{The Case $\jac{\Delta}{n} = -1$} When $\jacs{\Delta}{n} = -1$ we need a couple more conditions. \begin{proposition} \label{newdef_minus} Positive integer $n = \prod_i p_i^{r_i}$ satisfies Definition \ref{def:2Frob} in the case $\jacs{\Delta}{n} = -1$ if and only if it satisfies the following conditions: \begin{enumerate} \item (Integer Divisibility) discriminant $\Delta$ is a unit modulo $n$ and $0$ is not a root of $f(x)\pmod{p_i}$ for all $i$, \item (Factorization 1) ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{p_i^{r_i}}$ for all $i$, \item (Factorization 2) ${\rm gcmd}(x^{n^2}-x, f(x)) = f(x) \pmod{p_i^{r_i}}$ for all $i$, \item (Frobenius) if $\alpha$ is a root of $f(x)$ modulo $p_i^{r_i}$, then so too is $\alpha^n$ for all $i$. 
\end{enumerate} In particular, these conditions are sufficient to ensure that ${\rm gcmd}(x^n-x, f(x))$ and ${\rm gcmd}(x^{n^2}-x, f(x))$ exist modulo $n$. \end{proposition} \begin{proof} Following the argument from Proposition \ref{newdef_plus}, condition (1) from Definition \ref{def:2Frob} holds if and only if $\Delta$ is a unit modulo $n$ and $0$ is not a root of $f(x) \pmod{p_i}$ for all $i$. Now, if we assume $n$ satisfies Definition \ref{def:2Frob}, then by condition (4) we must have $S = 1$ and hence $\deg(F_2(x)) = 2$. Thus ${\rm gcmd}(x^{n^2}-x, f_1(x)) = f(x)$, and since $f_2(x) = 1$ we further have $f_1(x) = f(x)$. This is only possible if ${\rm gcmd}(x^n - x, f(x)) = 1$. Since these identities hold modulo $n$, they hold modulo $p_i^{r_i}$ for all $i$. Finally, $F_2(x) \mid F_2(x^n)$ means $f(x) \mid f(x^n) \pmod{n}$ and hence that $\alpha^n$ is a root of $f(x)$ modulo $p_i^{r_i}$ whenever $\alpha$ is. Conversely, assume $n$ satisfies conditions (2), (3), (4) from the statement of the proposition. By the Chinese remainder theorem, conditions (2) and (3) mean that ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{n}$ and ${\rm gcmd}(x^{n^2}-x, f(x)) = f(x) \pmod{n}$. In the language of Definition \ref{def:2Frob}, we have $F_1(x) = 1$, $F_2(x) = f(x)$, and $f_2(x) = 1$ as required. It follows that the Jacobi step is satisfied. And finally, condition (4) means that $f(x) \mid f(x^n) \pmod{p_i^{r_i}}$ for all $i$, and so the Frobenius step is satisfied modulo $n$. If all ${\rm gcmd}$ calculations exist modulo $n$, then they exist modulo $p_i^{r_i}$ for all $i$, so to finish the proof we need to show that the latter condition is sufficient to ensure ${\rm gcmd}(x^n-x, f(x))$ and ${\rm gcmd}(x^{n^2}-x, f(x))$ exist. Since ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{p_i^{r_i}}$ for all $i$, by \cite[Proposition 3.4]{Grantham01} we know that ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{n}$ and thus exists. If ${\rm gcmd}(x^{n^2}-x, f(x)) = f(x) \pmod{p_i^{r_i}}$, then $p_i^{r_i}$ divides $x^{n^2}-x - f(x)g_i(x)$ for some polynomial $g_i(x)$. However, using the Chinese remainder theorem on each coefficient in turn, we can construct a polynomial $g(x) \in ( \mathbb{Z}/n \mathbb{Z})[x]$ such that $g(x) = g_i(x) \pmod{p_i^{r_i}}$ for all $i$. Then for all $i$, $p_i^{r_i}$ divides $x^{n^2}-x - f(x)g(x)$ and hence $n$ divides $x^{n^2}-x-f(x)g(x)$. This shows that ${\rm gcmd}(x^{n^2}-x, f(x)) = f(x) \pmod{n}$, and in particular that it exists. \end{proof} \begin{rmk} It is worth noting that the existence of a ${\rm gcmd}$ does not imply that the Euclidean algorithm will not detect a factorization of $n$ while computing it. That said, for the calculations involved in checking for degree-$2$ Frobenius pseudoprimes this can only happen in the $\jac{\Delta}{n} = -1$ case and only if either $n$ is even or if one of the conditions (1)--(4) of Proposition \ref{newdef_minus} would already fail. When $n$ is even, it will only discover a power of $2$ (and the complementary factor). The rest of this remark justifies these claims. First, assume the Euclidean algorithm would discover factors of $n$. If the Factorization 1 and Factorization 2 conditions pass, this implies there exist primes $p_i$ and $p_j$ such that at some iteration of the Euclidean algorithm to compute ${\rm gcmd}(x^n-x, f(x))$, the degrees of the polynomials being considered differ.
We note that given ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{n}$ we must have for each $p \mid n$ that \[ x^n-x = f(x)g(x) + ax+b \pmod{p} \] where either $a=0$ and $b$ is a unit, or $a$ is a unit. However, if $a=0$, then condition (Frobenius) implies the roots of $f(x)$ modulo $p$ are $\alpha$ and $\alpha+b$. But this can only happen for $p=2$. In particular, if $n$ is odd, then we must have that $a\neq 0$ is a unit for all $p \mid n$ and thus \[ x^n-x = f(x)g(x) + ax+b \pmod{n} \enspace . \] Given that ${\rm gcmd}(x^n-x, f(x)) = 1 \pmod{n}$ we then have that \[ f(x) = (ax+b)h(x) + e \pmod{n} \] where $e$ is a unit. It follows that the only possible discrepancy between $p_i$ and $p_j$ is if one of the primes is $2$. Finally, the Euclidean algorithm will not discover a factor of $n$ while computing ${\rm gcmd}(f(x),g(x))$ if the result is $f(x)$. \end{rmk} \section{Monier formula for degree-$2$ Frobenius pseudoprimes} In this section we give explicit formulas, analogous to those of Monier \cite{Monier01} for $F(n)$, for the quantity $L_2(n)$ of polynomials $f(x)$ modulo $n=\prod_i p_i^{r_i}$ for which $n$ is a degree-$2$ Frobenius pseudoprime. The key step will be reinterpreting the conditions of the previous section in terms of conditions on the roots $\alpha$ and $\beta$ of $f(x)$ modulo $p_i^{r_i}$ for each $i$. As in the previous section, it shall be useful to distinguish the cases $\jac{\Delta}{n} = \pm1$, and as such we will give separate formulas for $L_2^\pm(n)$. \begin{nota} For each fixed value of $n$, denote by $L_2^+(n)$ the total number of quadratic polynomials $f \pmod{n}$ such that $(f,n)$ is a liar pair and $\jac{\Delta}{n} = +1$. Similarly, denote by $L_2^-(n)$ the total number of quadratic polynomials $f \pmod{n}$ such that $(f,n)$ is a liar pair and $\jac{\Delta}{n} = -1$. \end{nota} At the heart of the formula is the size and structure of the ring $R:= ( \mathbb{Z}/p^r \mathbb{Z})[x]/\langle f(x) \rangle$, so we spend a little time discussing some basic facts. Recall that in the case where $r=1$, if $\jac{\Delta}{p} = 1$ then $R \simeq \mathbb{F}_p \times \mathbb{F}_p$ and $|R^{\times}| = (p-1)^2$, while if $\jac{\Delta}{p} = -1$ then $R \simeq \mathbb{F}_{p^2}$ and $R^{\times}$ is cyclic of order $p^2-1$. When $r > 1$ we have the canonical surjective homomorphism $$ \phi: ( \mathbb{Z}/p^r \mathbb{Z})[x]/\langle f(x) \rangle \rightarrow ( \mathbb{Z}/p \mathbb{Z})[x]/\langle f(x) \rangle $$ and a similar map on the unit groups. Furthermore, $f(x)$ will split in $ \mathbb{Z}/p^r \mathbb{Z}$ if and only if $\jac{\Delta}{p}=1$. Thus $|R^{\times}| = p^{2r-2}(p-1)^2$ if $\jac{\Delta}{p} = 1$ and $|R^{\times}| = p^{2r-2}(p^2-1)$ if $\jac{\Delta}{p} = -1$. In the latter case, since $R^{\times}$ maps surjectively onto a cyclic group of order $p^2-1$, with the kernel a $p$-group, it has a cyclic subgroup $S$ of order $p^2-1$. This fact follows from the fundamental theorem of abelian groups \cite[Exercise 1.43]{Lang02}, and implies that $\phi$ restricts to an isomorphism from $S$ onto the unit group $(( \mathbb{Z}/p \mathbb{Z})[x]/\langle f(x) \rangle)^{\times}$.
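These unit-group sizes are easy to confirm numerically. In the sketch below (an illustration only; the function name and conventions are ours), an element $a_0 + a_1x$ of $R$ with $f(x) = x^2 + c_1x + c_0$ is a unit exactly when its norm $(a_0 + a_1\alpha)(a_0 + a_1\beta) = a_0^2 - c_1a_0a_1 + c_0a_1^2$ is prime to $p$:

\begin{verbatim}
from math import gcd
from itertools import product

def unit_count(p, r, c0, c1):
    # Brute-force |R^x| for R = (Z/p^r Z)[x]/<x^2 + c1*x + c0>:
    # a0 + a1*x is a unit iff its norm is prime to p.
    n = p ** r
    count = 0
    for a0, a1 in product(range(n), repeat=2):
        norm = (a0 * a0 - c1 * a0 * a1 + c0 * a1 * a1) % n
        if gcd(norm, p) == 1:
            count += 1
    return count

# unit_count(3, 2, 1, 0) == 72 == 3**2 * (3**2 - 1): f = x^2 + 1 is
# inert at 3.  unit_count(5, 2, -1, 0) == 400 == 5**2 * (5 - 1)**2:
# f = x^2 - 1 splits at 5.
\end{verbatim}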
\subsection{The case $\jac{\Delta}{n} = +1$} We note that in this case there must be an even number of primes $p_i$ for which $r_i$ is odd and $\jac{\Delta}{p} = -1$. In order to count the number of $f(x)$ modulo $n$, we shall count for each $i$ the number of modulo $p_i^{r_i}$ false witnesses for which $\jac{\Delta}{p} = \pm1$. By the Chinese remainder theorem, the desired count is then the product over all combinations which ensure the above parity condition. \begin{lemma} \label{lem:L2++} The number of degree $2$ polynomials over $( \mathbb{Z}/p^{r} \mathbb{Z})$ with $\jac{\Delta}{n} = +1$ and $\jac{\Delta}{p} = +1$ which satisfy the conditions of being a quadratic Frobenius pseudoprime at $p$ is exactly \[ L_2^{++}(n,p) = \frac{1}{2}\left(\gcd(n-1,p-1)^2 - \gcd(n-1,p-1)\right) \enspace . \] \end{lemma} \begin{proof} Referring to Proposition \ref{newdef_plus}, ${\rm gcmd}(x^n - x, f(x)) = f(x) \pmod{p^r}$ means that $\alpha^n = \alpha$ and $\beta^n = \beta$ modulo $p^r$ for roots $\alpha, \beta$ of $f(x)$. In addition, the roots are distinct and nonzero by the integer divisibility condition. The group $( \mathbb{Z}/p^r \mathbb{Z})^{\times}$ is cyclic, so it has $\gcd(n-1, p^{r-1}(p-1)) = \gcd(n-1,p-1)$ elements whose order divides both $n-1$ and $p-1$. Choosing two such elements, which are not congruent modulo $p$, gives the result. \end{proof} \begin{lemma} The number of degree $2$ polynomials over $( \mathbb{Z}/p^{r} \mathbb{Z})$ with $\jac{\Delta}{n} = +1$ and $\jac{\Delta}{p} = -1$ which satisfy the conditions of being a quadratic Frobenius pseudoprime at $p$ is exactly \[ L_2^{+-}(n,p) = \frac{1}{2}\left(\gcd(n-1,p^2-1) - \gcd(n-1,p-1)\right) \enspace . \] \end{lemma} \begin{proof} We again refer to Proposition \ref{newdef_plus}. Since $\jac{\Delta}{p} = -1$, $R:= ( \mathbb{Z}/p^r \mathbb{Z})[x]/\langle f(x) \rangle$ maps surjectively onto $ \mathbb{F}_{p^2}$ and the cofactor has size $p^{2r-2}$. Furthermore, the distinct, nonzero roots $\alpha, \beta$ of $f(x)$ are not lifts of elements of $ \mathbb{F}_p$, and $\alpha^{p^r} = \beta \pmod{p^r}$. The factorization condition implies that $\alpha^n = \alpha \pmod{p^r}$, so that the order of $\alpha$ in $R^{\times}$ divides $n-1$. All elements of $R^{\times}$ have order dividing $p^{2r-2}(p^2-1)$. Hence the number of options for $\alpha$ is exactly $\gcd(p^2-1,n-1)-\gcd(p-1,n-1)$, and we divide by $2$ since the polynomial $f(x)\pmod{p^r}$ is symmetric in $\alpha$ and $\beta$. \end{proof} In order to capture the requirement that we have an even number of contributions from primes where $\jac{\Delta}{p} = -1$ when $r_i$ is odd, we anti-symmetrize with respect to these terms to obtain the formula for $L_2^+(n)$. \begin{theorem} \label{thm:L2+} The number of degree $2$ polynomials over $( \mathbb{Z}/n \mathbb{Z})$ with $\jac{\Delta}{n} = +1$ which give a quadratic Frobenius pseudoprime is exactly \begin{align*} \frac{1}{2}\prod_{i}\left( L_2^{++}(n,p_i)+ L_2^{+-}(n,p_i) \right) +\frac{1}{2}\prod_{2\mid r_i}\left( L_2^{++}(n,p_i)+ L_2^{+-}(n,p_i) \right) \prod_{2 \nmid r_i} \left( L_2^{++}(n,p_i)- L_2^{+-}(n,p_i) \right) \enspace . \end{align*} \end{theorem} \begin{corollary} \label{plusformula} If $n$ is squarefree, the formula in Theorem \ref{thm:L2+} becomes \begin{align*} L_2^+(n)= \frac{1}{2} \prod_{p \mid n}& \frac{1}{2}\left( \gcd(n-1,p^2-1) + \gcd(n-1,p-1)^2 - 2\gcd(n-1,p-1) \right)\\ & + \frac{1}{2}\prod_{p \mid n} \frac{1}{2}\left( \gcd(n-1,p-1)^2 - \gcd(n-1,p^2-1) \right) \enspace . \end{align*} \end{corollary} \subsection{The case $\jac{\Delta}{n} = -1$} In this case there must be an odd number of primes $p_i$ for which $r_i$ is odd and $\jac{\Delta}{p} = -1$. As above, the liar count is first computed separately for each $p$.
\begin{lemma} The number of degree $2$ polynomials over $( \mathbb{Z}/p^{r} \mathbb{Z})$ with $\jac{\Delta}{n} = -1$ and $\jac{\Delta}{p} = +1$ which satisfy the conditions of being a quadratic Frobenius pseudoprime at $p$ is exactly \[ L_2^{-+}(n,p) = \frac{1}{2}\left(\gcd(n^2-1,p-1) - \gcd(n-1,p-1)\right) \enspace . \] \end{lemma} \begin{proof} Since $\jac{\Delta}{p} = 1$, the roots $\alpha, \beta$ of $f(x)$ are in $( \mathbb{Z}/p^r \mathbb{Z})$. Referring to Proposition \ref{newdef_minus}, the roots are distinct and nonzero by the integer divisibility condition. Furthermore, ${\rm gcmd}(x^n - x, f(x)) = 1$ means that $\alpha^n \neq \alpha \pmod{p^r}$, but we do have $\alpha^{n^2} = \alpha \pmod{p^r}$ by the factorization 2 condition. The Frobenius condition implies $\alpha^n$ is a root of $f(x)$, and thus $\alpha^n = \beta$. The group $( \mathbb{Z}/p^r \mathbb{Z})^{\times}$ is cyclic of order $p^{r-1}(p-1)$, and so the number of elements with order dividing both $n^2-1$ and $p^{r-1}(p-1)$ is $\gcd(n^2-1, p-1)$. We subtract off the subset of elements with order dividing $n-1$, then divide by $2$ since $f(x) \pmod {p^r}$ is symmetric in $\alpha$ and $\beta$. \end{proof} \begin{lemma}\label{Lem:L2--} The number of degree $2$ polynomials over $( \mathbb{Z}/p^{r} \mathbb{Z})$ with $\jac{\Delta}{n} = -1$ and $\jac{\Delta}{p} = -1$ which satisfy the conditions of being a quadratic Frobenius pseudoprime at $p$ is exactly \[ L_2^{--}(n,p) = \frac{1}{2}\left(\gcd(p^2-1,n^2-1,n-p)- \gcd(n-1,p-1)\right) \enspace . \] \end{lemma} \begin{proof} Since $\jac{\Delta}{p} = -1$, $R:= ( \mathbb{Z}/p^r \mathbb{Z})[x]/\langle f(x) \rangle$ maps surjectively onto $ \mathbb{F}_{p^2}$ and $R^{\times}$ has order $p^{2r-2}(p^2-1)$. Furthermore, roots $\alpha, \beta$ of $f(x)$ are not in $ \mathbb{Z}/p^r \mathbb{Z}$, and by the divisibility condition in Proposition \ref{newdef_minus} we know those roots are distinct units modulo $p$. The factorization conditions tell us that $\alpha^{n^2} = \alpha \pmod{p^r}$ and the Frobenius condition implies $\alpha^n = \beta \pmod{p^r}$. We claim a further relation on the roots, namely that $\alpha^p = \beta \pmod{p}$ implies $\alpha^p = \beta \pmod{p^r}$. Recall from the discussion above that $R^{\times}$ has a cyclic subgroup $S$ of order $p^2-1$. The multiplicative orders of $\alpha, \beta$ divide $n^2-1$, and since those orders are not divisible by $p$ we have $\alpha, \beta \in S$. Let $g$ be a generator and write $\alpha = g^a, \beta = g^b$. Then $\alpha^p = \beta \pmod{p}$ implies $g^{pa-b} = 1 \pmod{p}$. Since the image of $g$ under reduction modulo $p$ is a generator of $ \mathbb{F}_{p^2}^{\times}$, $p^2-1 \mid pa-b$ and hence $\alpha^p = \beta$ in $S$, i.e. $\alpha^p = \beta \pmod{p^r}$. We conclude that the order of $\alpha$ in $R^{\times}$ must divide $n^2-1$, $p^{2r-2}(p^2-1)$, and $n-p$. The number of options for $\alpha$ is thus exactly $\gcd(p^2-1,n^2-1,n-p)-\gcd(p-1,n-1)$, and we divide by $2$ since the polynomial $f(x)\pmod{p^r}$ is symmetric in $\alpha$ and $\beta$. \end{proof} In order to capture the requirement that we have an odd number of contributions from primes where $\jac{\Delta}{p} = -1$ when $r_i$ is odd, we anti-symmetrize with respect to these terms to obtain the formula for $L_2^-(n)$. 
\begin{theorem} \label{thm:negliars} The number of degree $2$ polynomials over $( \mathbb{Z}/n \mathbb{Z})$ with $\jac{\Delta}{n} = -1$ which give a quadratic Frobenius pseudoprime is exactly \begin{align*} \frac{1}{2}\prod_{i}\left( L_2^{-+}(n,p_i)+ L_2^{--}(n,p_i) \right) -\frac{1}{2}\prod_{2\mid r_i}\left( L_2^{-+}(n,p_i)+ L_2^{--}(n,p_i) \right) \prod_{2\nmid r_i} \left( L_2^{-+}(n,p_i) -L_2^{--}(n,p_i) \right) \enspace . \end{align*} \end{theorem} \begin{corollary} \label{minusformula} If $n$ is squarefree, the formula in Theorem \ref{thm:negliars} becomes \begin{align*} L_2^-(n) = \frac{1}{2} \prod_{p \mid n}&\frac{1}{2}\left( \gcd(n^2-1,p-1)+ \gcd(n^2-1,p^2-1,n-p) -2\gcd(n-1,p-1) \right)\\& -\frac{1}{2} \prod_{p \mid n} \frac{1}{2}\left( \gcd(n^2-1,p-1)- \gcd(n^2-1,p^2-1,n-p) \right) \enspace . \end{align*} \end{corollary} \subsection{Upper bounds} In this section we give simpler upper bounds for $L_2^+(n)$ and $L_2^-(n)$, which will be needed in Section \ref{sec:upper}. \begin{lemma}\label{lemma:liarcountupperbounds} If $n$ is a composite integer then \begin{align*} & L_2^+(n) \leq \prod_{p \mid n} \max( \gcd(n-1, p^2-1), \gcd(n-1, p-1)^2) \mbox{ and } \\ & L_2^-(n) \leq \prod_{p \mid n} \gcd(n^2-1, p^2-1) \enspace . \end{align*} \end{lemma} \begin{proof} For each prime factor $p$ of $n$, we choose the greater of $L_2^{++}(n,p)$ and $L_2^{+-}(n,p)$. That is, $$ L_2^+(n) \leq \prod_i \max(L_2^{++}(n, p_i), L_2^{+-}(n,p_i)) \leq \prod_{p \mid n} \max( \gcd(n-1,p-1)^2, \gcd(n-1,p^2-1)) \enspace . $$ For $L_2^-(n)$ a similar argument gives the simpler upper bound $$ \prod_{p \mid n} \max( \gcd(n^2-1, p^2-1, n-p), \gcd(n^2-1, p-1)) \leq \prod_{p \mid n} \gcd(n^2-1, p^2-1) \enspace . $$ \end{proof} \subsection{The vanishing of $L_2^-(n)$}\label{subsec:vanish} A major theme of this work is that odd composites have many quadratic Frobenius liars on average, even if we restrict to the case $\jac{\Delta}{n} = -1$. With this in mind, it is useful to note that $L_2^{-}(n)$ can be $0$. For example, $L_2^-(9) = 0$ and $L_2^-(21) = 0$. As a first general example, write $n=ps$ with $p$ prime and $\gcd(p,s) = 1$. If the quantities \[\gcd(p^2-1,n^2-1,n-p)- \gcd(n-1,p-1) \quad \text{and} \quad \gcd(n^2-1,p-1) - \gcd(n-1,p-1) \] are both zero, then it is immediate from Theorem \ref{thm:negliars} that $L_2^-(n) = 0$. These conditions are met if whenever $\ell^r \mid \gcd(p^2-1,n^2-1,n-p)$ or $\ell^r \mid \gcd(n^2-1,p-1)$ we also have $\ell^r \mid \gcd(n-1,p-1)$. For odd primes $\ell \mid p^2-1$ this is accomplished by the requirement that \[ s \neq -p^{-1} \pmod \ell \enspace , \] as this implies that if $\ell\mid p^2-1$ then $\ell \nmid sp+1$. For the prime $2$, if we write $p = 1+2^r \pmod{2^{r+1}}$ then the requirement \[ s = 1+2^r \pmod{2^{r+1}} \] implies the exact power of $2$ dividing each of $\gcd(p^2-1,n^2-1,n-p)$, $\gcd(n^2-1,p-1)$, and $\gcd(n-1,p-1)$ is $2^r$. A more general example comes from Carmichael numbers, which are squarefree $n$ with $\gcd(n-1, p-1) = p-1$ for all primes $p \mid n$. \begin{rmk} If $n$ is a classical Carmichael number, then $L_2^{-+}(n,p) =0$ for all $p$ and \[ L_2^-(n) = \prod_{p \mid n} \frac{1}{2} \left( \gcd(n^2-1,p^2-1,n-p) -\gcd(n-1,p-1) \right) \] if $n$ has an odd number of prime factors, and $0$ otherwise (see Corollary \ref{minusformula}). In particular, the only $f$ for which $(f,n)$ would be a liar pair with $\jac{\Delta}{n} = -1$ have $f$ inert at all primes dividing $n$.
Furthermore, if $n=1\pmod{4}$ then for each $p \mid n$ with $p=3\pmod{4}$ we naively estimate the probability that $L_2^{--}(n,p) = 0$ as $\prod_{\ell \mid p+1}' \frac{\ell-2}{\ell-1}$, where the product is over odd primes $\ell$. \end{rmk} As a final example, let $n$ be a rigid Carmichael number of order $2$ in the sense of \cite{Howe00}, so that $n$ is squarefree and $p^2-1 \mid n-1$ for every prime factor $p$ of $n$. Then $\gcd(n^2-1, p^2-1, n-p) = \gcd(n-1, p-1)$ and $\gcd(n^2-1, p-1) = \gcd(n-1, p-1)$, so that $L_2^-(n) = 0$. \section{Number theoretic background} \begin{nota} Let $L$ be an upper bound for Linnik's constant. That is, the constant $L$ satisfies: \[ \text{if } (a,m) = 1 \text{ then there exists } p= a \pmod{m} \text{ with } p < m^L \enspace . \] It is known that $L\leq 5.$ (See \cite{Xylouris01}) For each value $x$ denote by $M(x)$ the least common multiple of all integers up to $\frac{\log(x)}{\log\log(x)}$. For each value $x$ and for each $\alpha > 0$ denote by $P_{\alpha}^{(+)}(x)$ the set \[ \left\{ \text{prime} ~ p < (\log(x))^{\alpha} ~ \text{such that} ~(p-1) \mid M(x) \right\} \] and by $P_{\alpha}^{(-)}(x)$ the set \[ \left \{ \text{prime} ~ p < (\log(x))^{\alpha} ~ \text{such that} ~(p^2-1) \mid M(x) \right\} \enspace . \] Now, given functions $M_1(x)$ and $M_2(x)$ of $x$ which satisfy \[ M(x) = M_1(x)M_2(x) \qquad \text{and} \qquad \gcd(M_1(x),M_2(x)) = 2 \] we define for each value $x$ and for each $\alpha > 0$ the set \[ P_{\alpha}\left(M_1(x),M_2(x),x\right) = \left\{ \text{prime} ~ p < (\log(x))^{\alpha} ~ \text{such that} ~(p-1) \mid M_1(x) \text{ and } (p+1) \mid M_2(x) \right\} \enspace . \] \end{nota} \begin{proposition} \label{prop:Msize} We have $M(x) = x^{o(1)}$. \end{proposition} \begin{proof} We can estimate $M(x)$ by: \[ \prod_{p< \frac{\log(x)}{\log\log(x)}} p^{\floor{\frac{\log\log(x)-\log\log\log(x)}{\log(p)}}} < \prod_{p< \frac{\log(x)}{\log\log(x)}} \frac{\log(x)}{\log\log(x)} = \left( \frac{\log(x)}{\log\log(x)} \right)^{\pi\left(\frac{\log(x)}{\log\log(x)}\right)} = x^{o(1)} \enspace . \] \end{proof} The next two propositions follow from results on the smoothness of shifted primes. The conclusion is that the sets $P_{\alpha}^{(+)}(x)$ and $P_{\alpha}^{(-)}(x)$ are relatively large. As a comparison, by the prime number theorem the asymptotic count of all primes $p < (\log{x})^{\alpha}$ is $\frac{(\log{x})^{\alpha}}{\alpha \log\log{x}}$. \begin{proposition}\label{prop:lb1} There exists $\alpha>1$ such that $\abs{P_{\alpha}^{(+)}(x) } > \log(x)^{\alpha-o(1)}$. In particular we may take $\alpha = 23/8$. \end{proposition} The result follows from work of Erd\H{o}s; the best bound is from \cite{Balog01}. \begin{proposition}\label{prop:lb2} There exists $\alpha>1$ such that $\abs{P_{\alpha}^{(-)}(x) } > \log(x)^{\alpha-o(1)}$. In particular we may take $\alpha = 4/3$. \end{proposition} The result as well as the best bound is from \cite{DMT01}. The next proposition is a novel contribution to the theory of constructing pseudoprimes. \begin{proposition}\label{prop:lb2X} Given $\alpha$ such that $\abs{P_{\alpha}^{(-)}(x) } > \log(x)^{\alpha-o(1)}$, there exist $M_1(x)$, $M_2(x)$ such that \[ \abs{P_{\alpha}\left(M_1(x),M_2(x),x\right) } > \log(x)^{\alpha-o(1)} \enspace . \] \end{proposition} \begin{proof} Let $M$ be the fixed choice of $M(x)$ that follows from a fixed choice of $x$. 
Each prime $p\in P_{\alpha}^{(-)}(x)$ is also in $P_{\alpha}\left((p-1)d_1,(p+1)d_2,x\right)$ for all pairs $(d_1, d_2)$ satisfying \[ d_1d_2 = \frac{M}{p^2-1} \quad \text{and} \quad \gcd(d_1,d_2) = 1 \enspace . \] The number of pairs $(M_1, M_2)$ satisfying the conditions laid out in the notation comment at the beginning of the section is \[ 2^{\pi(\log(x)/\log\log(x))} \] since each prime up to $\frac{\log{x}}{\log\log{x}}$ is assigned to either $M_1$ or $M_2$. To count the number of choices for $d_1$ and $d_2$ we subtract from the exponent the count of prime factors of $p^2-1$. This work yields \[\sum_{M_1,M_2} \abs{ P_{\alpha}\left(M_1,M_2,x\right) } = \sum_{p\in P_{\alpha}^{(-)}(x) } 2^{ \omega\left( \frac{M}{p^2-1} \right) } > 2^{\pi \left( \frac{\log x}{\log\log x} \right) - \omega_{{\rm max}}(p^2-1)}(\log x)^{\alpha-o(1)} \] where $\omega_{{\rm max}}(p^2-1)$ denotes the maximum number of distinct prime factors of $p^2-1$ for all $p$ under consideration. Because $\omega_{{\rm max}}(p^2-1) = o(\log(p^2-1))$ we obtain the estimate \[ 2^{\omega_{{\rm max}}(p^2-1)} < (p^2-1)^{o(1)} < (\log x)^{o(1)} \enspace . \] Now, if $\abs{ P_{\alpha}(M_1, M_2, x)} < (\log x)^{\alpha - o(1)}$ for all pairs $(M_1, M_2)$ we would conclude that $$ \sum_{M_1, M_2} \abs{ P_{\alpha}(M_1, M_2, x)} < 2^{\pi\left( \frac{\log x}{\log\log x } \right)}(\log{x})^{\alpha - o(1)} \enspace , $$ but since this contradicts the earlier lower bound we instead conclude that $\abs{ P_{\alpha}(M_1, M_2, x)} > (\log x)^{\alpha - o(1)}$ for at least one pair $(M_1, M_2)$. \end{proof} \begin{rmk} From the proof we expect the result will in fact hold for most choices of $M_1$ and $M_2$. The proof we have given does not actually imply any relationship between $M_i(x)$ for different values of $x$. In particular, though one perhaps expects that there exists a complete partitioning of all primes into two sets and that the $M_i$ are simply constructed by considering only those primes in the given range, we do not show this. \end{rmk} It is generally expected (see for example \cite{ErdosPomerance01}) that the values $\alpha$ under consideration can be taken arbitrarily large. In particular we expect the following to hold. \begin{conj} In each of the above three propositions, the result holds for all $\alpha>0$. \end{conj} The following lemma will be useful in the next section. \begin{lemma}\label{lem:cong-cases} Fix $n$ and $p \mid n$. If $n=-1 \pmod{q}$ and $p=1 \pmod{q}$ for $q\ge 3$ then \[ \gcd(n^2-1,p-1) - \gcd(n-1,p-1) > 0 \enspace . \] If $n=-1 \pmod{q}$ and $p= -1 \pmod{q}$ for $q\ge 3$ then \[ \gcd(n^2-1,p^2-1,n-p) - \gcd(n-1,p-1) > 0 \enspace . \] If $n=p=1 \pmod{2}$ then \[ \gcd(n-1,p-1)^2-\gcd(n-1,p-1) > 0 \enspace . \] \end{lemma} \begin{proof} For $n=-1 \pmod{q}$ and $p=1 \pmod{q}$ we have $q \mid \gcd(n+1,p-1)$ while $q \nmid \gcd(n-1,p-1)$. If $n=-1 \pmod{q}$ and $p= -1 \pmod{q}$ then $q \mid \gcd(n^2-1,p^2-1,n-p)$ and $q \nmid \gcd(n-1,p-1)$. Finally, for $n=p=1 \pmod{2}$ it follows that $\gcd(n-1,p-1)> 1$, and so $$\gcd(n-1,p-1)(\gcd(n-1,p-1) - 1)$$ is nonzero. \end{proof} \section{Lower bounds on the average number of degree-$2$ Frobenius pseudoprimes} In this section we will prove the lower bound portion of the two theorems in the introduction. Specifically we shall prove the following results.
\begin{theorem}\label{thm:one} For any value of $\alpha > 1$ satisfying Proposition \ref{prop:lb1} we have the asymptotic inequality \[ \sum_{n<x} L_2^+(n) \geq x^{3-\alpha^{-1} - o(1)} \enspace .\] \end{theorem} \begin{theorem}\label{thm:two} For any value of $\alpha > 1$ satisfying Proposition \ref{prop:lb2} we have the asymptotic inequality \[ \sum_{n<x} L_2^-(n) \geq x^{3-\alpha^{-1} - o(1)} \enspace .\] \end{theorem} The proofs of the above two theorems are at the end of this section. We shall first introduce some notation and prove several necessary propositions. \begin{nota} For fixed $0 < \epsilon < \alpha-1$ and for all $x > 0$ let \begin{itemize} \item $k^{(+)}_\alpha(x) =\floor{ \frac{\log(x) - L\log(M)}{\alpha\log\log(x)}}$ \enspace , \item $k^{(-)}_\alpha(x) =\floor{ \frac{\log(x) - 2L\log(M)}{\alpha\log\log(x)}}$ \enspace , \item $S_{\alpha,\epsilon}^{(+)}(x)$ be the set of integers $s$ which are the product of $k^{(+)}_\alpha(x)$ distinct elements from \[ P_\alpha^{(+)}(x) \setminus P_{\alpha-\epsilon}^{(+)}(x) \enspace , \] \item ${S_{\alpha,\epsilon}^{(-)}}\left(M_1(x),M_2(x),x\right)$ be the set of integers $s$ which are the product of the largest odd number not larger than $k^{(-)}_\alpha(x)$ many distinct elements from \[ P_{\alpha}\left(M_1(x),M_2(x),x\right) \setminus P_{\alpha-\epsilon}\left(M_1(x),M_2(x),x\right) \enspace. \] \end{itemize} \end{nota} The following two claims are immediate consequences of the construction. \begin{claim} The elements $s$ of $S_{\alpha,\epsilon}^{(+)}(x)$ all satisfy \[ \left(\log(x)^{-k^{(+)}_\alpha(x)\epsilon}\right)\frac{x^{1-o(1)}}{M^L} < s < \frac{x}{M^L} \enspace . \] \end{claim} \begin{claim} The elements $s$ of $S_{\alpha,\epsilon}^{(-)}(x)$ all satisfy \[ \left(\log(x)^{-k^{(-)}_\alpha(x)\epsilon}\right)\frac{x^{1-o(1)}}{M^{2L}} < s < \frac{x}{M^{2L}} \enspace .\] \end{claim} The next two propositions follow from the lower bound on the size of $P_\alpha^{(\pm)}$ and the definition of $k_\alpha^{(\pm)}$. \begin{proposition} \label{prop:Splussize} If $\alpha$ satisfies the conditions of Proposition \ref{prop:lb1} then \[ \abs{S_{\alpha,\epsilon}^{(+)}(x) }> x^{1-\alpha^{-1} + o(1)} \enspace .\] \end{proposition} \begin{proof} A standard bound on a binomial coefficient is given by ${n \choose k} \geq (n/k)^k$. We are choosing $k_{\alpha}^{(+)}$ many primes from a set of size at least $(\log{x})^{\alpha - o(1)} - (\log{x})^{\alpha - \epsilon} = (\log{x})^{\alpha - o(1)}$. The resulting lower bound on $\abs{S_{\alpha,\epsilon}^{(+)}(x) }$ is $$ \left( \frac{ (\log{x})^{\alpha - o(1)} }{ (\log{x})^{1+o(1)}} \right)^{\frac{ \log(x) - L\log(M)}{\alpha \log\log(x)} - 1} \geq ( (\log{x})^{\alpha - 1 + o(1)})^{(\alpha^{-1} + o(1))\frac{\log{x}}{\log\log{x}}} = x^{1-\alpha^{-1} + o(1)} \enspace . $$ \end{proof} \begin{proposition} \label{prop:Sminussize} If $\alpha$, $M_1(x)$, and $M_2(x)$ satisfy the conditions of Proposition \ref{prop:lb2X} then \[ \abs{S_{\alpha,\epsilon}^{(-)}\left(M_1(x),M_2(x), x \right)} > x^{1-\alpha^{-1} + o(1)} \enspace . \] \end{proposition} \begin{proof} The proof is identical to that of Proposition \ref{prop:Splussize}. \end{proof} The next two lemmas construct a composite $n$ with many degree-$2$ Frobenius liars. The strategy in the plus one case is to start with a composite $s$ that is the product of many primes $p$ such that $p-1$ is smooth, then find a prime $q$ such that $n = sq$ is congruent to $1$ modulo $M$.
While the liar count primarily comes from the primes $p$ dividing $s$, we need to ensure at least one modulo $q$ liar, else the entire modulo $n$ liar count becomes $0$. \begin{lemma}\label{lem:consQ1} As before, let $L$ be an upper bound for Linnik's constant. Given any element $s$ of $S_{\alpha,\epsilon}^{(+)}(x)$ there exists a prime $q < M^L$ such that \begin{itemize} \item $sq=1 \pmod{M}$, \item $ \gcd(q,s) = 1$, and \item $ \frac{1}{2}\left( \gcd(q-1,sq-1)^2- \gcd(q-1,sq-1)\right) > 0$. \end{itemize} Moreover, the number of liars of $n=sq$ with $\jac{\Delta}{n} = +1$ is at least $x^{2 - \epsilon \frac{2}{\alpha} - o(1)}$. \end{lemma} \begin{proof} By construction, every $s \in S_{\alpha, \epsilon}^{(+)}(x)$ satisfies $\gcd(s,M) = 1$. Then by the definition of $L$, we can choose $M < q < M^{L}$ to be the smallest prime such that $sq = 1 \pmod{M}$. Since $q > M$ and the factors of $s$ are all smaller than $M$, we have $\gcd(q,s) = 1$. With $q,n$ both odd, the third condition follows from Lemma \ref{lem:cong-cases}. For a lower bound on $L_2^+(n)$ for $n = sq$ we count only the liars from primes $p \mid s$ with $\jac{\Delta}{p} = +1$. This gives $$ \prod_{p \mid s} L_2^{++}(n,p) = \prod_{p \mid s} \frac{1}{2} (\gcd(n-1,p-1)^2 - \gcd(n-1,p-1)) $$ by Lemma \ref{lem:L2++}. By construction, for $p \mid s$ we have $p-1 \mid M$ and $M \mid n-1$, so the product becomes \begin{align*} 2^{-k^{(+)}_\alpha(x)}\prod_{p \mid s} (p-1)(p-2) &\geq 2^{-k^{(+)}_\alpha(x)} \cdot s^{2-o(1)} \\ &\geq x^{-o(1)} \left(\log(x)^{-k_\alpha^{(+)}(x) \epsilon (2-o(1))} \right) \frac{x^{2-o(1)}}{M^{L (2-o(1))}} \\ & \geq x^{-o(1)} x^{-\epsilon \cdot \frac{2}{\alpha}(1+o(1))} \frac{x^{2-o(1)}}{x^{o(1)}} = x^{2 - \epsilon \frac{2}{\alpha} - o(1)} \end{align*} where the upper bound on $M$ comes from Proposition \ref{prop:Msize}. \end{proof} In the minus one case we have two different divisibility conditions to satisfy, and as a result require two primes $q_1$ and $q_2$ to complete the composite number $n$. \begin{lemma}\label{lem:consQ2} Let $L$ be an upper bound for Linnik's constant. Given any element $s$ of $S_{\alpha,\epsilon}^{(-)}(x)$ there exists a number $q < M^{2L}$ such that \begin{itemize} \item $sq=1 \pmod{M_1}$, \item $sq=-1 \pmod{M_2}$, \item $ \gcd(q,s) = 1$, and \item $\prod_{p \mid q} \frac{1}{2}\left( \gcd((sq)^2-1,p-1)- \gcd(sq-1,p-1)\right) > 0.$ \end{itemize} Moreover, the number of liars of $n=sq$ with $\jac{\Delta}{n} = -1$ is at least \[ 2^{-k^{(-)}_\alpha(x)}\prod_{p \mid s} \left(p^2-1\right) = x^{2-\epsilon \frac{2}{\alpha}-o(1)} \enspace . \] \end{lemma} \begin{proof} We construct $q$ as the product of two primes $q_1$ and $q_2$. Let $\ell_1,\ell_2$ be two distinct odd primes which divide $M_2$ and write $M_2 = M_2'\ell_1^{r_1}\ell_2^{r_2}$. Choose $q_1$ to be the smallest prime greater than $M$ satisfying the following four conditions: $$ \begin{array}{ll} sq_1 = 1 \pmod{M_1} \hspace{1in} & sq_1 = -1 \pmod{M_2'} \\ q_1 = 1 \pmod{\ell_1^{r_1}} \hspace{1in}& sq_1 = -1 \pmod{\ell_2^{r_2}} \end{array} $$ and choose $q_2$ to be the smallest prime greater than $M$ satisfying the following four conditions: $$ \begin{array}{ll} q_2 = 1\pmod{M_1} \hspace{1in} &q_2 = 1 \pmod{M_2'} \\ sq_2 = -1 \pmod{\ell_1^{r_1}} \hspace{1in} &q_2 = 1 \pmod{\ell_2^{r_2}} \enspace . \end{array} $$ Note that $q_1,q_2 > M$ implies they are greater than any factor of $s$, and thus relatively prime to $s$. 
Then $q_1$, $q_2$ exist due to the definition of Linnik's constant, with $q_1, q_2 < (M_1 M_2' \ell_1^{r_1} \ell_2^{r_2})^L$ so that $q < M^{2L}$. Note $s q_1 q_2 = 1 \pmod{M_1}$, which satisfies the first bulleted condition. In addition, $sq_1 q_2 = -1 \pmod{M_2'}$, $sq_1 q_2 = -1 \pmod{\ell_1^{r_1}}$, and $sq_1 q_2 = -1 \pmod{\ell_2^{r_2}}$ so that $sq = -1 \pmod{M_2}$. For the fourth bullet point, $sq_1 q_2 = -1 \pmod{\ell_1^{r_1}}$ and $q_1 = 1 \pmod{\ell_1^{r_1}}$ gives the result by Lemma \ref{lem:cong-cases}; the same argument with $\ell_2$ in place of $\ell_1$ handles the prime $q_2$. To bound $L_2^{-}(n)$ we count only those $f$ for which $\jac{\Delta}{p} = +1$ for all $p \mid q$ and $\jac{\Delta}{p} = -1$ for all $p \mid s$. By Lemma \ref{Lem:L2--} we have \begin{align*} \prod_{p \mid s} L_2^{--}(n,p) &= \prod_{p \mid s} \frac{1}{2}\left(\gcd(p^2-1,n^2-1,n-p)- \gcd(n-1,p-1)\right) \\ & = 2^{-k_{\alpha}^{(-)}(x)} \prod_{p \mid s} \left( (p^2-1) - (p-1) \right) \end{align*} since by construction $p-1 \mid n-1$ and $p+1 \mid n+1$. This product is $x^{2-\epsilon \frac{2}{\alpha} - o(1)}$ by the same argument as that in Lemma \ref{lem:consQ1}. \end{proof} \begin{proof}[Proof of Theorems \ref{thm:one} and \ref{thm:two}] In each case the theorem is an immediate consequence of the lower bounds on the number of liars for each value of $n=sq$ constructed in Lemma \ref{lem:consQ1} or \ref{lem:consQ2}, together with the size of the set $S$ under consideration. More specifically, for each element $s$ of $S_{\alpha,\epsilon}^{(\pm)}(x)$, by Lemma \ref{lem:consQ1} or \ref{lem:consQ2} we can associate a distinct number $n$ with $L_2^\pm(n) > x^{2-\epsilon \frac{2}{\alpha}-o(1)}$. For each of the plus, minus cases we have that \[ \abs{ S_{\alpha,\epsilon}^{(\pm)}(x)} > x^{1-\alpha^{-1}-o(1)} \] for $\alpha$ satisfying as appropriate Proposition \ref{prop:lb1} or \ref{prop:lb2}. We conclude that for all $\epsilon>0$ and appropriately chosen $\alpha$ we have \[ \sum_n L_2^\pm(n) > x^{3-\alpha^{-1}-\epsilon \frac{2}{\alpha}-o(1)} \enspace . \] Allowing $\epsilon$ to go to $0$, we obtain the result. \end{proof} \section{Upper bounds on the average number of degree-$2$ Frobenius pseudoprimes} \label{sec:upper} Our proof will follow \cite[Theorem 2.2]{ErdosPomerance01} quite closely. First we need a key lemma, the proof of which follows a paper of Pomerance \cite[Theorem 1]{Pomerance81}. \begin{nota} Given an integer $m$, define \[ \lambda(m) = \underset{p \mid m} {\rm lcm} (p-1) \qquad\text{and}\qquad\lambda_2(m) = \underset{p \mid m} {\rm lcm} (p^2-1) \enspace . \] Note that $\lambda(m)$ is not Carmichael's function, though it is equivalent when $m$ is squarefree. Moreover, given $x>0$ we shall define \[ \mathcal{L}(x) = {\rm exp}\left( \frac{\log(x) \log_3(x)}{\log_2(x)} \right) \] where $\log_2(x) = \log\log(x)$ and $\log_3(x) = \log\log\log(x)$. Here $\log$ is the natural logarithm. \end{nota} \begin{lemma} \label{lemma:lambda2count} For all sufficiently large $x$ we have $$ \# \{ m \leq x \ : \ \lambda_2(m)=n\} \leq x \cdot \mathcal{L}(x)^{-1 + o(1)} \enspace . $$ \end{lemma} \begin{proof} For $c > 0$ we have $$ \sum_{m \leq x \atop \lambda_2(m)=n} 1 \leq x^c \sum_{\lambda_2(m)=n} m^{-c} \leq x^c \sum_{p \mid m \Rightarrow p^2-1 \mid n} m^{-c} \leq x^c \sum_{p \mid m \Rightarrow p-1 \mid n} m^{-c} \enspace . $$ By the theory of Euler products, we can rewrite the sum as $\prod_{p-1 \mid n} (1-p^{-c})^{-1}$. Call this product $A$. With $c = 1 - \frac{\log_3(x)}{\log_2(x)}$, the result follows if we can show that $\log{A} = o(\log(x)/\log_2(x))$.
Take $x$ large enough so that $\frac{\log_3(x)}{\log_2(x)} \leq \frac{1}{2}$; from this it follows that for all primes $p$, $\frac{1}{1-p^{-c}} \leq 4$. Following Pomerance in \cite[Theorem 1]{Pomerance81}, via the Taylor series for $-\log(1-x)$ we can show that $$ \log{A} \leq \sum_{p-1 \mid n} \frac{p^{-c}}{1-p^{-c}} \leq 4 \sum_{d \mid n} d^{-c} \leq 4 \prod_{p \mid n} (1-p^{-c})^{-1} $$ and similarly $$ \log\log{A} \leq \log(4) + \sum_{p \mid n} \frac{p^{-c}}{1-p^{-c}} \leq \log(4) + 4 \sum_{p \mid n} p^{-c} \enspace . $$ Since the sum is maximized with many small primes, an upper bound is $$ \log(4) + \sum_{p \leq 4 \log{x}} 4 p^{-c} = O\left( \frac{ (\log{x})^{1-c}}{(1-c) \log\log{x}} \right) $$ where the sum is evaluated using partial summation. With $c = 1 - \log_3(x)/\log_2(x)$ we achieve \[ \log\log{A} = O\left(\frac{\log_2(x)}{\log_3(x)}\right) \] so that $\log{A} = o(\log(x)/\log_2(x))$ as requested. \end{proof} An interesting question is whether the upper bound in that lemma can be lowered. If so, a more clever upper bound would be required for the sum over primes $p$ dividing $m$ such that $p^2-1 \mid n$. In \cite{ErdosPomerance01} the key idea is to parameterize composite $n$ according to the size of the subgroup of Fermat liars, and then to prove a useful divisibility relation involving $n$. Here we reverse this strategy: we parameterize according to a divisibility condition and prove an upper bound on the size of the set of Frobenius liars. \begin{lemma} \label{lemma:minus} Assume $n$ is composite and let $k$ be the smallest integer such that $\lambda_2(n) \mid k(n^2-1)$. Then \[ L_2^-(n) \leq \frac{1}{k} \prod_{p \mid n} (p^2-1) \enspace . \] \end{lemma} \begin{proof} We have $k = \lambda_2(n)/\gcd(\lambda_2(n), n^2-1)$. Our goal will be to show that \begin{equation} \label{division1} \frac{\lambda_2(n)}{\gcd(\lambda_2(n), n^2-1)} \prod_{p \mid n} \gcd(p^2-1, n^2-1) \left| \ \prod_{p \mid n} p^2-1 \right. \enspace . \end{equation} If this is true, then combined with Lemma \ref{lemma:liarcountupperbounds} we have $$ L_2^-(n) \leq \prod_{p \mid n} \gcd(n^2-1, p^2-1) \leq \frac{1}{k} \prod_{p \mid n} p^2-1 \enspace . $$ Fix arbitrary prime $q$ and let $q^{e_i}$ be the greatest power of $q$ that divides $p_i^2-1$. Suppose we have ordered the $r$ primes dividing $n$ according to the quantity $e_i$. Let $q^d$ be the power of $q$ that divides $n^2-1$. Consider first the case where $d \geq e_r$, the largest of the $e_i$. Then $q^{e_r}$ divides $\lambda_2(n)$ since it is defined as an $ {\rm lcm}$ of the $p^2-1$, and $q^{e_r}$ divides $\gcd(\lambda_2(n), n^2-1)$ since $d \geq e_r$. We are left with the observation that $\prod_{p \mid n} \gcd(p^2-1, n^2-1)$ is a divisor of $\prod_{p \mid n} p^2-1$, and thus in particular the corresponding power of $q$ divides. Next consider the case where $d \geq e_i$ for $i \leq \ell$ and $d < e_i$ for $i > \ell$. Then $q^{e_r}$ divides $\lambda_2(n)$ since it is defined as an $ {\rm lcm}$, and $q^d$ divides $\gcd(\lambda_2(n), n^2-1)$ since $d < e_r$. The total power of $q$ dividing the LHS is then $e_r - d + (\sum_{i=1}^{\ell} e_i) + (r-\ell) d$. We have \begin{align*} \left(\sum_{i=1}^{\ell} e_i\right) + e_r-d + d(r-\ell) &= \left(\sum_{i=1}^{\ell} e_i\right) + d(r-\ell-1) + (d + e_r-d)\\ &\leq \left(\sum_{i=1}^{\ell} e_i\right) + \left(\sum_{i=\ell+1}^{r-1} e_i \right) + e_r\\ &= \sum_{i=1}^{r} e_i \enspace , \end{align*} which is the power of $q$ dividing $\prod p^2-1$. Since $q$ was arbitrary, (\ref{division1}) holds, which finishes the proof.
\end{proof} The result for $L_2^+(n)$ is similar. We do need a new piece of notation, namely given a prime $p$ we shall define \[ d_n(p) = \begin{cases} (p-1)^2 & \text{if }\gcd(n-1, p-1)^2 > \gcd(n-1, p^2-1) \\ p^2-1 &\text{if } \gcd(n-1, p-1)^2 \leq \gcd(n-1, p^2-1) \enspace . \end{cases} \] \begin{lemma} \label{lemma:plus} Suppose $n$ is composite and let $k$ be the smallest integer such that $\lambda(n) \mid k(n-1)$. Then \[ L_2^+(n) \leq \frac{1}{k} \prod_{p \mid n} d_n(p) \enspace . \] \end{lemma} \begin{proof} From Lemma \ref{lemma:liarcountupperbounds} we know that $$ L_2^+(n) \leq \prod_{p \mid n} \max( \gcd(n-1,p^2-1), \gcd(n-1,p-1)^2) $$ and from the definition of $k$ we know that $k$ is exactly $\lambda(n)/\gcd(\lambda(n), n-1)$. It thus suffices to show that \begin{align} \left. \frac{\lambda(n)}{\gcd(\lambda(n), n-1)} \prod_{p \mid n} \max(\gcd(n-1,p^2-1), \gcd(n-1, p-1)^2) \ \right| \prod_{p \mid n} d_n(p) \enspace . \label{eqn:gcd_division} \end{align} For an arbitrary prime $q$, let $q^{e_i}$ be the power of $q$ dividing $d_n(p)$ and let $q^d$ be the power of $q$ dividing $n-1$. Order the $e_i$, and suppose that $d \geq e_i$ for $i \leq \ell$ and $d < e_i$ for $i > \ell$. Then the exponent of $q$ dividing $\prod_{p \mid n} d_n(p)$ is $\sum_{i=1}^r e_i$. Following the same argument as in Lemma \ref{lemma:minus}, the exponent of $q$ dividing the left hand side of (\ref{eqn:gcd_division}) is $$ (e_r - d) + \left( \sum_{i=1}^{\ell} e_i \right) + (r-\ell)d \leq \sum_{i=1}^r e_i \enspace . $$ Since $q$ was arbitrary, the division in (\ref{eqn:gcd_division}) holds. \end{proof} \begin{theorem} For all sufficiently large $x$ we have $$ \sum_{n \leq x}' L_2(n) \leq x^3 \mathcal{L}(x)^{-1 + o(1)} $$ where $\sum'$ signifies the sum is only over composite integers. \end{theorem} \begin{proof} Let $C_k(x)$ denote the set of composite $n \leq x$ where $k$ is the smallest integer such that $\lambda(n) \mid k(n-1)$, and let $D_k(x)$ denote the set of composite $n \leq x$ where $k$ is the smallest integer such that $\lambda_2(n) \mid k(n^2-1)$. By Lemma \ref{lemma:minus}, if $n \in D_k(x)$ then $L_2^-(n) \leq n^2/k$. Similarly, by Lemma \ref{lemma:plus}, if $n \in C_k(x)$ then $L_2^+(n) \leq n^2/k$. Then \begin{align*} \sum_{n \leq x}' L_2(n) &= \sum_{n \leq x}' L_2^+(n) + L_2^-(n) \\ &= \sum_{k}\sum_{n \in C_k(x)} L_2^+(n) + \sum_k \sum_{n \in D_k(x)} L_2^-(n) \\ &\leq \sum_{k} \sum_{n \in C_k(x)} \frac{n^2}{k} + \sum_k \sum_{n \in D_k(x)} \frac{n^2}{k} \\ & \leq \sum_{n \leq x} \frac{n^2}{\mathcal{L}(x)} + \sum_{k \leq \mathcal{L}(x)} \sum_{n \in C_k(x)} \frac{n^2}{k} + \sum_{n \leq x} \frac{n^2}{\mathcal{L}(x)} + \sum_{k \leq \mathcal{L}(x)} \sum_{n \in D_k(x)} \frac{n^2}{k} \\ & = \frac{2x^3}{\mathcal{L}(x)} + x^2 \sum_{k \leq \mathcal{L}(x)} \frac{\abs{C_k(x)}}{k} + x^2 \sum_{k \leq \mathcal{L}(x)} \frac{\abs{D_k(x)}}{k} \end{align*} and thus the proof is complete if we can prove that $\abs{C_k(x)} \leq x \mathcal{L}(x)^{-1 + o(1)}$ and $\abs{D_k(x)} \leq x \mathcal{L}(x)^{-1 + o(1)}$ hold uniformly for $k \leq \mathcal{L}(x)$. We focus first on the $D_k(x)$ result. For every $n \in D_k(x)$, either \begin{enumerate} \item[(1)] $n \leq x/\mathcal{L}(x)$, \item[(2)] $n$ is divisible by some prime $p > \sqrt{k\mathcal{L}(x)}$, and/or \item[(3)] $n \geq x/\mathcal{L}(x)$ and $p \mid n$ implies $p \leq \sqrt{k\mathcal{L}(x)}$. \end{enumerate} The number of integers in case (1) is at most $x \mathcal{L}(x)^{-1}$ by assumption. 
Turning to case (2), if $n \in D_k(x)$ and $p \mid n$ then $p^2-1$ is a divisor of $\lambda_2(n)$ and hence of $k(n^2-1)$. This means that \[ \left. \frac{p^2-1}{\gcd(k, p^2-1)} \right| n^2-1 \enspace . \] A straightforward application of the Chinese remainder theorem shows that the count of residues $b \pmod{a}$ with $b^2 \equiv 1 \pmod{a}$ is at most $2^{\omega(a)+1}$. Thus the count of $n \in D_k(x)$ with $p \mid n$ is at most $$ \left \lceil \frac{2x 2^{\omega(p^2-1)}}{p (p^2-1)/\gcd(p^2-1, k)} \right \rceil \leq \frac{2xk 2^{\omega(p^2-1)}}{p(p^2-1)} = \frac{xk \mathcal{L}(x)^{o(1)}}{p (p^2-1)} \enspace . $$ The equality $2^{\omega(p^2-1)+1} = \mathcal{L}(x)^{o(1)}$ follows from the fact that the number of distinct prime factors of any integer $m \leq x^2$ is at most $2\log(x^2)/\log\log(x^2)$ for $x$ large enough. We conclude that the number of $n$ in case $(2)$ is at most $$ \sum_{p > \sqrt{k \mathcal{L}(x)}} \frac{2xk \mathcal{L}(x)^{o(1)}}{p^3} \leq xk \mathcal{L}(x)^{o(1)} \sum_{p > \sqrt{k \mathcal{L}(x)}} \frac{1}{p^3} =x \mathcal{L}(x)^{-1 + o(1)} \enspace . $$ For $n$ in case (3), since all primes dividing $n$ are small we know that $n$ has a divisor $d$ satisfying $$ \frac{x}{\mathcal{L}(x) \sqrt{k\mathcal{L}(x)}} < d \leq \frac{x}{\mathcal{L}(x)} \enspace . $$ To construct such a divisor, remove primes from $n$ until the remaining integer is smaller than $x/\mathcal{L}(x)$; since each prime dividing $n$ is at most $\sqrt{k\mathcal{L}(x)}$ the lower bound follows. Let $A$ be the set of integers $d$ that fall between the bounds given. We have $\lambda_2(d) \mid \lambda_2(n) \mid k(n^2-1)$, and so by a similar argument we know that the number of $n \in D_k(x)$ with $d \mid n$ is at most $$ \frac{x \mathcal{L}(x)^{o(1)}}{d \lambda_2(d)/\gcd(k, \lambda_2(d))} \enspace . $$ Unlike the case where $d$ is prime, here we might have $\gcd(d, \lambda_2(d)) \neq 1$. But then the set of $n \in D_k(x)$ with $d \mid n$ is empty, so the bound given remains true. Now, the number of $n \in D_k(x)$ in case (3) is at most \begin{align*} \sum_{d \in A} \frac{x \mathcal{L}(x)^{o(1)} \gcd(k, \lambda_2(d))}{d \lambda_2(d)} &= x \mathcal{L}(x)^{o(1)} \sum_{d \in A} \frac{\gcd(k, \lambda_2(d))}{d \lambda_2(d)} \\ & = x \mathcal{L}(x)^{o(1)} \sum_{m \leq x} \frac{1}{m} \sum_{d \in A \atop \lambda_2(d)/\gcd(k, \lambda_2(d)) = m} \frac{1}{d} \\ &\leq x \mathcal{L}(x)^{o(1)} \sum_{m \leq x} \frac{1}{m} \sum_{u \mid k} \sum_{d \in A \atop \lambda_2(d) = mu} \frac{1}{d} \enspace . \end{align*} Note that if $\lambda_2(d)/\gcd(k, \lambda_2(d)) = m$, then $\lambda_2(d) = mu$ for some $u \mid k$, and thus summing over all $u \mid k$ gives an upper bound. To evaluate the inner sum we use partial summation and Lemma \ref{lemma:lambda2count} to get \begin{align*} \sum_{d \in A \atop \lambda_2(d) = mu} \frac{1}{d} &\leq \frac{1}{x/\mathcal{L}(x)} \sum_{d \in A \atop \lambda_2(d)=mu} 1 + \int_{x/(\mathcal{L}(x)\sqrt{k\mathcal{L}(x)})}^{x/\mathcal{L}(x)} \frac{1}{t^2} \sum_{d < t \atop \lambda_2(d) = mu} 1 \ {\rm d}t \\ & \leq \frac{\mathcal{L}(x)}{x} \frac{x/\mathcal{L}(x)}{\mathcal{L}(x/\mathcal{L}(x))^{1+o(1)}} + \int_{x/(\mathcal{L}(x)\sqrt{k\mathcal{L}(x)})}^{x/\mathcal{L}(x)} \frac{1}{t^2} \frac{t}{\mathcal{L}(t)^{1+o(1)}} \ {\rm d}t \\ & \leq \frac{1}{\mathcal{L}(x/\mathcal{L}(x))^{1+o(1)}} + \frac{\log(x)}{\mathcal{L}\left(x/(\mathcal{L}(x)\sqrt{k\mathcal{L}(x)})\right)} = \mathcal{L}(x)^{-1+o(1)} \end{align*} for large enough $x$ and uniformly for $k \leq \mathcal{L}(x)$.
Note that the count of divisors of an integer $k$ is bounded above by $2^{(1+o(1))\log{k}/\log\log{k}}$ (see for instance \cite[Theorem 317]{HardyWright}). Thus the count in case (3) is $$ \frac{x}{\mathcal{L}(x)^{1+o(1)}} \sum_{m \leq x} \frac{1}{m} \sum_{u \mid k} 1 \leq \frac{x \log{x}}{\mathcal{L}(x)^{1+o(1)}} 2^{(1+o(1))\frac{\log{k}}{\log\log{k}}} = \frac{x}{\mathcal{L}(x)^{1+o(1)}} $$ uniformly for $k \leq \mathcal{L}(x)$ and large enough $x$. Proving $\abs{C_k(x)} \leq x \mathcal{L}(x)^{-1+o(1)}$ uniformly for $k \leq \mathcal{L}(x)$ is similar. Here the three cases are: \begin{enumerate} \item[(1)] $n \leq x/\mathcal{L}(x)$, \item[(2)] $n$ is divisible by some prime $p > k\mathcal{L}(x)$, and \item[(3)] $n \geq x/\mathcal{L}(x)$ and $p \mid n$ implies $p \leq k\mathcal{L}(x)$. \end{enumerate} If $n \in C_k(x)$ then $\lambda(n) \mid k(n-1)$. Thus the number of $n \in C_k(x)$ with $p \mid n$ is at most $$ \left \lceil \frac{x}{p(p-1)/\gcd(p-1,k)} \right \rceil \leq \frac{2xk}{p^2} $$ and so the count of $n$ in case (2) is at most $x \mathcal{L}(x)^{-1 +o(1)}$. For $n$ in case (3) we know $n$ has a divisor $d$ satisfying $$ \frac{x}{k \mathcal{L}(x)^2} < d \leq \frac{x}{\mathcal{L}(x)} $$ and so the bound of $x\mathcal{L}(x)^{-1+o(1)}$ follows exactly as in case (3) of \cite[Theorem 2.2]{ErdosPomerance01}. \end{proof} \section{Conclusions and further work} A very naive interpretation of Theorems \ref{thm:th1} and \ref{thm:th2} is that for any given $f$, one should expect that there are $n$ for which $f$ is a false witness. Moreover, one expects to find this in both the $+1$ and $-1$ cases. Likewise, one expects that given $n$, there will exist $f$ which is a false witness in both the $+1$ and $-1$ cases. To emphasize how careful one must be with the word `expect', we remind the reader that in Section \ref{subsec:vanish} we describe infinite families of $n$ for which $L_2^-(n)=0$. It would perhaps be interesting to know how often $L_2^-(n)$ vanishes for $n<x$. It is useful to note that the vanishing described in Section \ref{subsec:vanish} gives some heuristic evidence that the Baillie-PSW test is significantly more accurate than other primality tests. Further work to make these heuristics more precise may be worth pursuing. In contrast to the above, the proof of Theorem \ref{thm:th2} suggests that one should expect there to exist many Frobenius-Carmichael numbers (see \cite{Grantham01} for a definition) relative to quadratic fields $K$ for which $\jac{n}{\delta_K} = -1$. It is likely that this heuristic can be extended to show that for each fixed quadratic field $K$ there exist infinitely many Frobenius-Carmichael numbers $n$ relative to $K$ with $\jac{n}{\delta_K} = -1$. Since any such number would also be a classical Carmichael number, these numbers would tend to lead to failures of the Baillie-PSW test. If this could be done for all $K$, it would show that all $f$ admit $n$ for which $f$ is a false witness and the Jacobi symbol is $-1$. It remains an open problem to prove that such numbers exist. It remains unclear from our results whether the expected value of $L_2^-(n)$ is actually less (in an asymptotic sense) than the expected value of $L_2^+(n)$. Various heuristics suggest that it ought to be. A result of this sort would put further weight behind the Baillie-PSW test. \section{Acknowledgments} The authors would like to thank Carl Pomerance for several comments which improved the results presented.
\section{Introduction}% \label{Sec:Intro} % Assessing the geometric and electronic structures of molecules via different scattering techniques using x-rays, ultrafast electrons and high harmonic generation (HHG) is a very active topic of current research in molecular physics, since these techniques provide a gateway to image chemical reactions in real time\;\cite{PhysRevLett.114.255501, PhysRevLett.109.133202}. Conventional scattering techniques based on photons and electrons are able to achieve the spatial resolution needed for imaging static molecular geometries, but they lack the temporal resolution required for a dynamic picture. Techniques based on strong field ionization and ultrafast lasers are promising as they can provide both sub-\AA ngstrom spatial and sub-femtosecond temporal resolutions\;\cite{JPhysB.49.062001, RevModPhys.72.545} for dynamic imaging purposes. It is one of these techniques, associated with strong-field ionization of molecules, that is of interest in the present paper. For a thorough account of the latest trends in ultrafast molecular imaging methods, we refer the reader to Ref.\;\cite{DiMauro_et_al_JPhysB}. The currently accepted picture of strong field ionization is the celebrated three-step model\;\cite{PhysRevLett.71.1994, PhysRevA.49.2117}. When an atom or a molecule is excited with an intense infrared (IR) laser pulse, a quasi-static potential barrier is formed in the combined potential of the system and the field, through which a bound electron can tunnel out\;\cite{jphysb47.204001}. This creates a laser-driven electron wave packet in the ionization continuum, which is driven back and forth to the parent core by the applied field. On its return to this ionic core, the electron wave packet is scattered, resulting either in elastic scattering or in inelastic collision processes such as high harmonic generation or non-sequential double ionization (NSDI)\;\cite{Zuo1996313, JPhysB21.l31, PhysRevA.48.R894}. Compared to the relatively inefficient inelastic processes, the elastic scattering of the ionized wave packet is the predominant outcome of the recollision. It is known as Laser Induced Electron Diffraction (LIED). Following the first theoretical discussions in 1996\;\cite{Zuo1996313}, experimental realization of LIED on simple molecular systems was first reported in 2008\;\cite{Meckel13062008}. Since then, LIED has been considered as a tool to study the strong field dynamics of isolated molecules. It is well known from optical physics that a diffraction pattern can be seen as the image, in reciprocal space, of an object from which light, or an incident matter wave, has been scattered. By designing an inverse algorithm, one can reconstruct the image of that object in real space. Laser induced electron diffraction can be seen in the same perspective\;\cite{Zuo1996313, Meckel13062008} but, unlike in traditional scattering processes, in LIED the scattering beam of electrons is extracted from the molecular system itself, the molecule acting as its own electron gun. After ionization, and after a possible recollision event, the outgoing electron wave carries information about the scattering centers. The photoelectron spectrum can thus be considered as an image of the system in reciprocal space. Given the similarity of LIED with traditional diffraction techniques, it should therefore be possible to extract information about the molecule from its LIED photoelectron spectra.
It was demonstrated both theoretically and experimentally that LIED can be used for extracting structural information about the equilibrium geometry of molecules with great accuracy\;\cite{Pullen2015, nat.483.194, PhysRevA.85.053417}. These recent experiments motivate the development of LIED-based techniques for imaging molecular dynamics. In particular, experimental developments reported in Ref.\;\cite{Pullen2015} demonstrate the simultaneous measurement of both C-H and C-C bond lengths of aligned C$_2$H$_2$ using LIED spectra obtained with mid-IR laser fields. That the LIED spectral data can be inverted to retrieve precise information on the molecular geometry is not surprising, but it undoubtedly represents a major advance in molecular physics: this measurement can in principle be made on a very short time scale, allowing molecular geometry changes, during a reactive collision for example, to be followed in a time-resolved manner. Recently, one of the key research topics in strong field physics has become the exploitation of the recollision process to retrieve not only structural or geometrical images but also to infer information on the electronic charge distribution of a molecule, and even details of its field-free quantum eigenstates. Those pieces of information are of great interest, especially for the understanding and the imaging of reaction dynamics, where the changes in the electronic charge distribution play a major role. Achieving the required spatial and temporal resolution could provide a tool for probing the transition states of a chemical reaction, for example by observing the time-resolved deformation of the orbitals as transition states are crossed. Such a tool would also be of tremendous value to image the rapid dynamics which takes place close to conical intersections\;\cite{jphyschemA.102.4031, annurev-physchem-032210-103522}. Currently, HHG is the only strong field process that has been explored as a tool for imaging molecular orbitals using tomographic techniques\;\cite{nat.7.822}, as originally demonstrated in\;\cite{nat.432.867}. This HHG-based orbital imaging approach involves a rather elaborate inversion procedure, requiring the HHG spectra to be recorded at various laser-molecule alignment angles, and their treatment, \textit{i.e.} the inversion procedure per se, rests on a number of assumptions that are still a matter of debate. In this paper, we propose an alternative route that can be used to extract both structural and orbital information of a molecule directly from its LIED spectra. Previously, we demonstrated how LIED signals, for a symmetric molecule such as CO$_2$, reflect the conservation of the nodal structure, \textit{i.e.} the symmetry character, of the initial molecular orbital (MO) from which the ionized electron has been extracted. Here, we will show that more detailed information on this initial orbital can be retrieved from this signal, culminating in an explicit, complete MO reconstruction procedure. The outline of the paper is as follows. In Sec.\;\ref{Sec:Model} we briefly recall the single-active electron (SAE) model of the CO$_2$ molecule as defined in the previous work and used in the present study, together with the numerical procedure for electron wave packet calculations within this model. Then, in Sec.\;\ref{Sec:Num_Res}, through results of numerical simulations, we illustrate the specific features of the photoelectron LIED spectrum associated with a \textit{molecular} orbital compared to that of a typical \textit{atomic} orbital.
In Sec.\;\ref{Sec:SFA_mod}, we derive an analytical expression for the LIED photoelectron momentum distribution, starting from formally exact integral expressions of the time-evolution operator describing the SAE dynamics. The final analytical model makes use of the strong field approximation (SFA), and the inversion procedure used for the MO reconstruction assumes a simple LCAO expression as a guess for the initial MO. Finally, in Sec.\;\ref{Sec:Reconstruction}, we demonstrate this procedure in the case of the highest occupied molecular orbital (HOMO) of the carbon dioxide molecule. We present some examples of reconstruction and we specify the accuracy and limits of our approach. The last section gives some concluding remarks and perspectives for future work. Atomic units are used throughout the paper unless stated otherwise. \section{Theoretical model}% \label{Sec:Model} % To demonstrate how molecular orbitals can be imaged using LIED, we consider the specific case of the symmetric, linear, carbon dioxide molecule, CO$_2$, one of the most studied systems in strong field physics\;\cite{PhysRevLett.98.243001, PhysRevA.83.051403, 0953-4075-27-7-002, Elshakre201337, doi:10.1021/ja0344819}. It is sufficiently complex to represent an interesting test case while remaining simple enough for calculations. It enables one to demonstrate the key features of electron dynamics in the presence of intense NIR fields\;\cite{PhysRevA.85.053417}. The electronic dynamics induced by the field is described by the time-dependent Schr\"o\-dinger equation (TDSE) \begin{equation} \hat{\mathcal{H}}(t)\,|\psi(t)\rangle = i\,\partial_{t}\,|\psi(t)\rangle\, , \label{Eq:TDSE} \end{equation} where $|\psi(t)\rangle$ denotes the time-dependent electronic state of the model system constituted of the most weakly bound electron of the molecule and \begin{equation} \hat{\mathcal{H}}(t) = - \bm{\nabla}^{2}/2 + V(\bm{r}) - \bm{\mu}\cdot\bm{E}(t) \label{Eq:Hamiltonian_1} \end{equation} is its Hamiltonian in the length gauge. Here $V(\bm{r})$ is an effective field-free binding potential and $-\bm{\mu}\cdot\bm{E}(t)$ is the interaction of the active electron with the laser field. The linearly polarized electric field along $\hat{\bm{e}}_x$ is defined as \begin{equation} \bm{E}(t) = - \partial_t\,\bm{A}(t)\,, \label{Eq:E_of_t} \end{equation} where $\bm{A}(t)$ is the vector potential given by \begin{equation} \bm{A}(t) = \frac{E_0}{\omega_L}\,f(t)\,\cos(\omega_L t + \phi)\,\hat{\bm{e}}_x\,. \label{Eq:Vect_pot} \end{equation} $\omega_L$ is the IR carrier frequency and $E_0$ the electric field amplitude. $\phi$ is the Carrier-Envelope Phase (CEP) and \begin{equation} f(t) = \sin^2\left(\frac{\pi t}{2\tau}\right) \label{Eq:Envelop} \end{equation} denotes the temporal envelope of the pulse, of Full Width at Half Maximum (FWHM) $\tau$. The effective multi-well potential $V(\bm{r})$ is as given in\;\cite{PhysRevA.85.053417}. It is a soft Coulomb potential describing the attraction exerted on the single electron of the model system by screened nuclear charges, with a screening factor that, for each nucleus, varies slowly with the distance separating the electron from the nuclear charge. We assume that the CO$_2$ molecule is pre-aligned along the $y$ direction. The intense IR laser pulse given by Eq.\,(\ref{Eq:E_of_t}) is therefore applied normal to the molecular axis.
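As a side note, the pulse defined by Eqs.\,(\ref{Eq:E_of_t})--(\ref{Eq:Envelop}) is straightforward to tabulate numerically. The following minimal Python sketch does so for a single-cycle pulse; the parameter values are illustrative conversions to atomic units of a 2.0\,$\mu$m, $10^{14}\,$W/cm$^2$ pulse, and are not tied to any specific calculation below.
\begin{verbatim}
import numpy as np

# Illustrative values in atomic units (assumed conversions):
# 2.0 um wavelength -> omega_L ~ 0.0228 au;
# 10^14 W/cm^2     -> E0      ~ 0.0534 au.
omega_L = 0.0228           # IR carrier frequency
E0      = 0.0534           # electric field amplitude
phi     = 0.0              # carrier-envelope phase
tau     = np.pi / omega_L  # so that 2*tau is one optical cycle

t = np.linspace(0.0, 2.0 * tau, 4001)
f = np.sin(np.pi * t / (2.0 * tau)) ** 2            # sin^2 envelope
A = (E0 / omega_L) * f * np.cos(omega_L * t + phi)  # vector potential
E = -np.gradient(A, t)                              # E(t) = -dA/dt
\end{verbatim}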
Thus the ionization and associated dynamics are assumed to take place in the plane defined by the molecular $y$-axis and the polarization $x$-axis of the applied time-dependent electric field. \begin{figure}[ht] \includegraphics[width=8.5cm]{figure1.pdf} \caption{(Color online). (a) Model system with typical recollision trajectories. (b) HOMO wave function of a symmetric CO$_2$ molecule with the CO internuclear distance $R=R_e\simeq 1.4\,$\AA\,$\simeq 2.6\,$au.} \label{fig:system} \end{figure} Fig.\;\ref{fig:system}\,(a) depicts the geometry of the system within these assumptions and shows schematically three typical ionization and recollision trajectories. The most probable recollision processes take place following a short trajectory\;\cite{RevModPhys.81.163}, in about half an optical cycle, and therefore on a time scale of the order of 1 to 3 femtoseconds for wavelengths between 800\,nm and 2\,$\mu$m. The electronic dynamics that takes place on this typical time scale can be separated from the nuclear dynamics, whose time scale is of the order of 15\,fs for the asymmetric stretch, 25\,fs for the symmetric stretch and 60\,fs for the bending modes of CO$_2$. We therefore consider, in a first approximation, that the nuclear motion is frozen with a fixed CO bond length $R$. The TDSE\;(\ref{Eq:TDSE}) describing the electronic dynamics is solved with the split-operator method\;\cite{QUA:QUA51}. The initial state is calculated using the imaginary time propagation technique\;\cite{Lehtovaara2007148} and the ionization and recollision events are simulated by propagating the calculated initial state during the pulse. During the interaction with the field, the asymptotic part of the wave packet is extracted and projected onto Volkov states in order to describe analytically the long-range electronic dynamics\;\cite{PhysRevA.52.1450}. At the end of the pulse, corresponding to the time $t=t_f=2\tau$, the asymptotic part of the wave packet is collected to obtain the energy-resolved transition amplitudes and hence the photoelectron spectrum. The entire numerical procedure is detailed in\;\cite{PhysRevA.85.053417}. The calculated photoelectron spectrum is the laser-induced electron diffraction spectrum or LIED spectrum $\mathcal{I}(k_x,k_y)$, which gives the two-dimensional momentum distribution of the elastically scattered electron wave packets. \section{LIED Spectra}% \label{Sec:Num_Res} % We discuss here the salient features of typical LIED spectra in preparation for the derivation of the inversion procedure of the next section. These spectra are calculated for the HOMO orbital of CO$_2$, seen in Fig.\;\ref{fig:system}(b), as the initial state. We also consider the spectra associated with the ionization out of a $2p_x$ atomic orbital centered on the carbon atom. This will be referred to as the `atomic' case. \subsection{Influence of the wavelength}% LIED photoelectron spectra provide a picture of the momentum ($\bm{k}$) distribution of the ionized electron. A typical photoelectron spectrum $\mathcal{I}(k_x,k_y)$ obtained from the solution of the TDSE for the HOMO orbital of CO$_2$ at an extended geometry $R=5\,$\AA\/ is given in log scale in Fig.\;\ref{fig:spectra_lambda} for three different wavelengths and a single optical cycle pulse $(2\tau=2\pi/\omega_L)$ with no CEP $(\phi=0)$. Panel (a) shows the spectrum at the wavelength 800\,nm, panel (b) at 1.4\,$\mu$m and panel (c) at 2.0\,$\mu$m, for a laser intensity of $10^{14}\,$W/cm$^2$.
The highest probabilities are in red and the lowest in blue. The outermost contour of the circular shape of the spectrum is elongated along $k_x$, \textit{i.e.} in the direction of the polarization of the field. Two successive ionization events, corresponding to the maximum and minimum of $E(t)$ in this ultra-short pulse, create an oscillating continuum wave packet which is ultimately driven away from the molecule. The ionization events happen along the direction of the field, giving photoelectrons with momenta distributed as shown in the figure. The circular shape corresponds to the maximum recollision energy $3.17\,U_p= (k_x^2+k_y^2)/2 $, where $U_p$ is the ponderomotive energy\;\cite{RevModPhys.81.163}. Since $U_p$ is proportional to $\lambda^2$, an increase of the wavelength directly increases the size of the 2D photoelectron spectrum, as we can see in Fig.\;\ref{fig:spectra_lambda}. Longer wavelengths thus help resolve the interference patterns of the spectrum. In the following we will use the largest wavelength, $\lambda=2.0\,\mu$m. \begin{figure}[ht] \includegraphics[width=7.5cm]{figure2.pdf} \caption{(Color online). 2D photoelectron spectra $\mathcal{I}(k_x,k_y)$ (log scale) obtained from the HOMO of CO$_2$ for $R=5\,$\AA\/ when exposed to a single optical cycle pulse of intensity $I=10^{14}\,$W/cm$^2$ and zero CEP. The wavelengths used are (a) $\lambda=800\,$nm, (b) $\lambda=1.4\,\mu$m and (c) $\lambda=2.0\,\mu$m.} \label{fig:spectra_lambda} \end{figure} \subsection{Interference patterns}% To analyze in detail the interference patterns which build up in the photoelectron spectra, we compare in panels (c) and (d) of Fig.\;\ref{fig:mol_atom_1_opt} the spectrum obtained from a $2p_x$ atomic orbital centered on the carbon atom with the spectrum obtained from the HOMO of CO$_2$, at a wavelength of 2.0\,$\mu$m. All other parameters are as in Fig.\;\ref{fig:spectra_lambda}. Panels (a) and (b) of the same figure show respectively the time variations of the electric field and of the total ionization probability for the atomic (red solid line) and for the molecular (dashed blue line) cases. For the atomic calculation, the parameters of the soft-core potential $V(\bm{r})$ in Eq.\;(\ref{Eq:Hamiltonian_1}) have been modified such that the atom has the same ionization potential as the HOMO of CO$_2$, \textit{i.e.} 9.2\;eV at $R=5\,$\AA. The electric field $E(t)$ presents two main symmetric maxima pointing in opposite directions. For both the atomic and the molecular cases, the ionization takes place in two successive bursts. The probability of ionization rises just after each maximum of the field, and the delay separating a maximum of the field from the associated ionization burst is simply related to the time necessary for the ionized wave function to reach the asymptotic region. \begin{figure}[ht] \includegraphics[width=7.5cm]{figure3.pdf} \caption{(Color online). (a) Normalized electric field $E(t)$ as a function of time. (b) Ionization probability as a function of time for an atom (solid red line) and a molecule (dashed blue line) with the same ionization potential IP$=9.2\,$eV. (c) and (d) Associated 2D photoelectron spectra $\mathcal{I}(k_x,k_y)$ for the atom (c) and the molecule at $R=5\,$\AA\/ (d).
A single optical cycle pulse of intensity $I=10^{14}\,$W/cm$^2$ and wavelength $\lambda=2.0\,\mu$m is used.} \label{fig:mol_atom_1_opt} \end{figure} In the atomic photoelectron spectrum shown in panel (c), very clear ring-like structures can be seen, which come from the interference between different rescattered electron wave packets. More precisely, these structures are due to the interference between long and short trajectories followed by recolliding electrons\;\cite{Spanner:2004:12L02}. They have a circular shape because, for a given energy, long and short trajectories accumulate a fixed phase shift which is independent of the electron emission angle. Another interesting interference in the atomic LIED spectrum is due to the superposition of the pathways corresponding to direct ionization and to ionization preceded by recollision (\textit{i.e.} to rescattering). This holographic interference of the electron wave occurs only over a window of small $k_y$ values due to the limited spread of directly ionized electrons in the transverse direction\;\cite{Meckel2014, Chen:14, Huismans07012011}. It also appears mainly in the $k_x>0$ region of Fig.\;\ref{fig:mol_atom_1_opt}(c) (intense red colored region) due to the particular field $E(t)$ seen in panel (a), which drags the electron in the positive direction during the recollision. The associated interference patterns are relatively localized, \textit{i.e.} limited in extension, and are therefore difficult to measure in an experiment. In addition, they are seen both in the atomic and molecular cases, as can be seen by comparison with panel (d), and they are therefore not the best candidates for an analysis of the molecular structure. There is, however, a very clear and important difference between the atomic and molecular spectra, which lies in the $k_y$ variation of the spectra. Indeed, out of the different interference patterns seen in the molecular spectrum, a multiple-slit-like interference can be distinguished in the $k_y$ momentum distribution. This multiple-slit-like interference pattern is due to the scattering of the electron by the multi-well ionic potential describing the interaction with the nuclei. The molecular information, including the relative position of the nuclei, is therefore mainly imprinted in the $k_y$ momentum distribution, along the direction of the molecular axis. To get a simpler spectrum that we can more easily analyze, we average the electron signal $\mathcal{I}(k_x,k_y)$ over the $k_x$ momentum, keeping only the $k_y$ variation. This yields the averaged 1D LIED spectrum \begin{equation} \mathcal{S}(k_y) = \int \mathcal{I}(k_x,k_y) \, dk_x\,. \label{Eq:1D_spec_Eqn} \end{equation} It was already demonstrated that the bond length $R$ can be measured directly from the fringe width seen in this 1D spectrum\;\cite{PhysRevA.85.053417}. Two such log-scale spectra are shown in Fig.\;\ref{fig:1DSpectrum}(a) for the cases presented in 2D in Figs.\;\ref{fig:mol_atom_1_opt}(c) and\;\ref{fig:mol_atom_1_opt}(d). The averaged 1D atomic spectrum is shown as a solid black line and the molecular spectrum as a dashed red line. We clearly see strong differences between these 1D spectra, which lie both in the oscillatory behavior of the molecular spectrum and in the slower decrease (with respect to $k_y$) of the mean signal of the molecular spectrum compared to the atomic one. \begin{figure}[ht] \includegraphics[width=8.5cm]{figure4.pdf} \caption{(Color online).
Averaged 1D LIED spectra $\mathcal{S}(k_y)$ (log scale) in the atomic case (solid black line) and in the molecular case (dashed red line). In panels (a) and (b) the parameters are as in Figs.\;\ref{fig:mol_atom_1_opt} and\;\ref{fig:mol_atom_4_opt}, respectively: (a) is for a total pulse duration of one optical cycle while (b) is for 3.5 optical cycles. All other parameters are identical.} \label{fig:1DSpectrum} \end{figure} \begin{figure}[ht] \includegraphics[width=7.5cm]{figure5.pdf} \caption{(Color online). (a) Normalized electric field $E(t)$ as a function of time. (b) Ionization probability as a function of time for an atom (solid red line) and a molecule (dashed blue line) with the same ionization potential IP$=9.2\,$eV. (c) and (d) Associated 2D photoelectron spectra $\mathcal{I}(k_x,k_y)$ for the atom (c) and the molecule at $R=5\,$\AA\/ (d). A 3.5-optical cycle pulse characterized with an intensity of $I=10^{14}\,$W/cm$^2$ and the wavelength $\lambda=2.0\,\mu$m is used.} \label{fig:mol_atom_4_opt} \end{figure} Up to this point, the LIED spectra were calculated for a single optical cycle only. Fig.\;\ref{fig:mol_atom_4_opt} shows similar atomic and molecular spectra, calculated with a 3.5-optical cycle laser pulse. Even though the pulse duration is much longer, there are only three main maxima of the electric field which contribute significantly to the ionization signal, as seen in panels (a) and (b) of Fig.\;\ref{fig:mol_atom_4_opt}. These maxima give rise to three bursts of ionization taking place in opposite directions. As a consequence, the associated 2D momentum spectra are much more symmetric with respect to $k_x = 0$ than the spectra associated with a single-cycle pulse seen in Fig.\;\ref{fig:mol_atom_1_opt}. The different kinds of interference patterns discussed above are still visible. In particular, the multiple-slit-like interference seen in the $k_y$ variation of the 2D molecular spectrum is still present. The associated $k_x$-averaged 1D spectra seen in Fig.\;\ref{fig:1DSpectrum}(b) therefore show a behavior similar to that obtained with the ultra-short single-cycle pulse. A comparison of Fig.\;\ref{fig:1DSpectrum}(a) and Fig.\;\ref{fig:1DSpectrum}(b) shows that the longer pulse yields a larger value of the cutoff energy. This is because the maximum value of $E(t)$ is larger for the longer pulse (see the panels (a) of Figs.\;\ref{fig:mol_atom_1_opt} and\;\ref{fig:mol_atom_4_opt}). The particular oscillatory behavior of $\mathcal{S}(k_y)$ in the molecular case of Fig.\;\ref{fig:1DSpectrum}(b) shows that it is possible to attempt an analysis of the molecular structure from LIED spectra using few-cycle laser pulses. \subsection{Influence of the internuclear distance}% In Fig.\;\ref{fig:spectra_R} we explore the $R$ dependence of the LIED spectra. The intensity is $10^{14}\,$W/cm$^2$ and the pulse duration is 3.5 optical cycles at the wavelength $2.0\,\mu$m. Panels (a), (b) and (c) are for $R=2.0$\,\AA, $R=3.5$\,\AA\/ and $R=5.0$\,\AA, respectively. We can conclude from this figure that the interference between long and short trajectories and the interference between direct ionization and ionization preceded by recollision (rescattering signal) are not seriously affected by a variation of the internuclear distance. On the other hand, the multiple-slit-like interference patterns seen in the $k_y$ variation of the 2D molecular spectrum change appreciably when the internuclear distance varies. \begin{figure}[ht] \includegraphics[width=7.5cm]{figure6.pdf} \caption{(Color online).
$R$-dependence of the 2D photoelectron spectra $\mathcal{I}(k_x,k_y)$ for the CO$_2$ molecule. The intensity is $I=10^{14}\,$W/cm$^2$ and the pulse duration is 3.5 optical cycles. The wavelength is $\lambda=2.0\,\mu$m. The internuclear distance is (a) $R=2\,$\AA, (b) $R=3.5\,$\AA\/ and (c) $R=5\,$\AA.} \label{fig:spectra_R} \end{figure} This strong variation is confirmed by Fig.\;\ref{fig:1DSpectrumR}, which shows the associated $k_x$-averaged one-dimensional LIED spectra. Panels (a), (b) and (c) are for $R=2.0$\,\AA, $R=3.5$\,\AA\/ and $R=5.0$\,\AA, respectively. We see here that the analysis of the spectrum is easier at large internuclear distances, since the oscillation period of the 1D averaged spectrum decreases with $R$. Indeed, it was shown in\;\cite{PhysRevA.85.053417} that the fringe width $\Delta k$ varies as $\pi/R$. This result will be used in section\;\ref{Sec:Reconstruction} for the reconstruction of the initial molecular orbital. For the laser parameters used in the present calculation, \textit{i.e.} $I=10^{14}\,$W/cm$^2$ and $\lambda=2.0\,\mu$m, the ponderomotive energy is $U_p=1.38$\,a.u., and, as seen in Fig.\;\ref{fig:spectra_R}, the electron spectrum extends over a range of momenta of a few atomic units only, with $k_y \leqslant 2.95$\,a.u. As we can already infer from Fig.\;\ref{fig:1DSpectrumR}(a), this range is not sufficient for an accurate analysis of the spectrum when $R < 3$\,\AA. In the following we will discuss this analysis for the cases $R=3.5\,$\AA\/ and $R=5.0\,$\AA. Analyzing the LIED spectra at smaller internuclear distances would require higher laser intensities or longer wavelengths. \begin{figure}[ht] \includegraphics[width=8.5cm]{figure7.pdf} \caption{(Color online). $R$-dependence of the averaged 1D LIED spectra $\mathcal{S}(k_y)$ (log scale). In panels (a), (b) and (c) the internuclear distance is $R=2.0$\,\AA, $R=3.5$\,\AA\/ and $R=5.0$\,\AA, respectively. The other parameters are as in Fig.\;\ref{fig:spectra_R}. The vertical dotted lines mark the regularly spaced local minima of the three different spectra.} \label{fig:1DSpectrumR} \end{figure} The understanding of the LIED spectra developed in detail in this section can now be put to use for the ultimate goal of this manuscript: the derivation of an inversion procedure. In the next section we describe the main ingredients of an analytical model that can lead to the image of the molecular orbital, in the present case the HOMO, by inverting the LIED spectrum. This model will then be used in the last section to analyze the spectra and to reconstruct the initial molecular orbital. \section{The Inverse Problem: An Analytical Model}% \label{Sec:SFA_mod} % The 2D LIED spectrum $\mathcal{I}(k_x,k_y)$ calculated by solving the TDSE contains information about the molecule within the diffraction patterns, as described in Section\;\ref{Sec:Num_Res}. Since this spectrum originates from the HOMO orbital of CO$_2$, both structural and orbital information are necessarily imprinted in it. Here the goal is to reconstruct the initial orbital from which the photoelectrons are extracted. We are thus facing what could be called an \emph{inverse problem}, where we need a compact analytical form for the photoelectron spectra $\mathcal{S}(k_y)$, accurate enough to assess both orbital and geometrical information. This analytical form will contain some parameters describing the initial state.
These parameters will be fitted such that the analytical form of $\mathcal{S}(k_y)$ reproduces its ``exact'' counterpart obtained from the solution of the TDSE. Finally, the fitted parameters will be used to reconstruct the initial molecular orbital. In general, for the case discussed here, two main ingredients are necessary: (i)\;an approximate description of the ionization and associated dynamics that result in the photoelectron spectra and (ii)\;a simplified functional form for the initial state which will be used for the reconstruction. The first part is the most challenging feature of the inverse problem and is discussed in this section. \subsection{Description of the Dynamics}% \subsubsection{Exact Transition Amplitude}% The field-induced dynamics can be modeled by depicting the different steps of a recollision event\;\cite{PhysRevLett.71.1994} separately. In agreement with this mechanism describing the ionization and recollision processes, we separate the transition amplitude $a(k_x,k_y)$ into two parts, corresponding to directly ionized electrons and to electrons ionized after a recolliding event. If the exact solution $|\Psi(t_f)\rangle$ of the TDSE is known at the end of the pulse, at time $t_f$, the relevant transition amplitude for LIED can be written as \begin{equation} a(k_x,k_y) = \langle \Psi^{+}_{\bm{k}}|\Psi(t_f) \rangle\,, \label{Eq:Analytic_tran_amp_1} \end{equation} where $|\Psi^{+}_{\bm{k}}\rangle$ is the outgoing wave elastically scattered in the direction of the electron wave vector $\bm{k}$ for a prescribed asymptotic kinetic energy $\varepsilon_k=k^2/2$. The formal solution of the TDSE may be written at time $t_f$ as \begin{equation} |\Psi(t_f)\rangle = \hat{U}(t_f \!\leftarrow\! 0)\,|\Psi(0)\rangle\,, \end{equation} where $|\Psi(0)\rangle$ is the initial state and $\hat{U}(t \!\leftarrow\! 0)$ is the evolution operator obeying the TDSE \begin{equation} i\,\partial_t\,\hat{U}(t \!\leftarrow\! 0) = \hat{\mathcal{H}}(t)\,\hat{U}(t \!\leftarrow\! 0)\,. \label{Eq:evolution_op_TDSE} \end{equation} $\hat{\mathcal{H}}(t)$, given in Eq.\,(\ref{Eq:Hamiltonian_1}), contains both the binding and the driving potentials. Depending on the situation, one of them could be more influential than the other and could determine the outcome of the dynamical process\;\cite{ATI_top_rev}. The simplest realistic picture of strong field ionization, including the essential ingredients of tunnel ionization followed by recollision, requires one to consider at least a complete optical cycle. For the derivation of the model, we therefore consider a single optical cycle of duration $t_f=2\pi/\omega_L$. For certain times $t'$ within this cycle, the field reaches values sufficient to trigger both tunnel ionization and the subsequent dynamics of the wave packet, which can be represented using the following exact form of the Dyson equation\;\cite{Frasca2195, Reiss_SFA, PhysRevA.74.063404}: \begin{eqnarray} \hat{U}(t_f \!\leftarrow\! 0) & = & \hat{U}_0(t_f \!\leftarrow\! 0) \label{Eq:Dyson_Eq_1} \\ & +i\, & \int_{0}^{t_f} \hat{U}(t_f\! \leftarrow\! t')\, \hat{\bm{\mu}}\cdot\bm{E}(t')\,\hat{U}_0(t' \!\leftarrow\! 0)\,dt'\,,\nonumber \end{eqnarray} where $\hat{U}_0(t \!\leftarrow\! 0)$ is the evolution operator associated with the field-free Hamiltonian \begin{equation} \hat{\mathcal{H}}_{0}=-\bm{\nabla}^2/2+V(\bm{r})\,. \end{equation} The Dyson equation (\ref{Eq:Dyson_Eq_1}) is exact in so far as it involves the exact evolution operator $\hat{U}(t_f\! \leftarrow\! t')$ between the time of ionization $t'$ and the final time $t_f$. During this time interval a recollision event may take place, whenever the electron wave packet propagating in the laser field comes close enough to the parent ionic core that the Coulomb attraction starts to dominate over the driving dipole interaction. To express this idea, we then split the evolution operator $\hat{U}(t_f\! \leftarrow\! t')$, found in the integral on the right-hand side (r.h.s.) of Eq.\,(\ref{Eq:Dyson_Eq_1}), as \begin{eqnarray} \hat{U}(t_f\! \leftarrow\! t') & = & \hat{U}_v(t_f\! \leftarrow\! t') \label{Eq:Dyson_Eq_2} \\ \nonumber & -i\, & \int_{t'}^{t_f} \hat{U}(t_f\! \leftarrow\! t'')\,V(\bm{r})\,\hat{U}_v(t''\! \leftarrow\! t')\,dt''\,, \end{eqnarray} where $\hat{U}_v$ is the evolution operator associated with the Volkov Hamiltonian\;\cite{Reiss_SFA, NDSenGupta} \begin{equation} \hat{\mathcal{H}}_v(t)=-\bm{\nabla}^2/2-\hat{\bm{\mu}}\cdot\bm{E}(t)\,. \end{equation} The Volkov evolution operator $\hat{U}_v(t_2 \leftarrow t_1)$ can be formally written as \begin{equation} \hat{U}_v(t_2 \leftarrow t_1) = \int d\bm{k}\, \left|\Phi_{\bm{k}}^v(t_2)\right\rangle\left\langle\Phi_{\bm{k}}^v(t_1)\right|\,, \label{Eq:volkov_projector_2D} \end{equation} where \begin{equation} \Phi_{\bm{k}}^v(\bm{r},t) = \frac{\mathrm{e}^{\,i\,\left[\bm{k}+ \,\bm{A}(t)\right]\,\cdot\,\bm{r}\,-\,i\,S(\bm{k},t)}}{2\pi}\,, \label{Eq:volkov_states_2D} \end{equation} $S(\bm{k},t)$ being the classical action \begin{equation} S(\bm{k},t) = \frac{1}{2} \int_{0}^t \left[ \bm{k} + \,\bm{A}(\tau) \right]^2\,d\tau\,. \end{equation} Substituting Eq.\,(\ref{Eq:Dyson_Eq_2}) in Eq.\,(\ref{Eq:Dyson_Eq_1}) we get \begin{equation} \hat{U}(t_f\! \leftarrow\! 0) = \hat{U}_0(t_f) + \hat{U}_d(t_f) + \hat{U}_r(t_f) \,, \label{Eq:Dyson_Eq_3} \end{equation} with the following definitions: \begin{subequations} \label{Eq:Evolution_Operators} \begin{eqnarray} \!\!\!\!\hat{U}_0(t_f) & = & \exp\big(-i\,\hat{\mathcal{H}}_0\,t_f\,\big)\,, \label{Eq:field_free_Evolution} \\[0.2cm] \!\!\!\!\hat{U}_d(t_f) & = & i\int_{0}^{t_f}\!\!\!\!dt'\,\mathcal{D}(t_f,t')\,, \label{Eq:direct_ionization_part_1} \\[0.2cm] \!\!\!\!\hat{U}_r(t_f) & = & \int_{0}^{t_f}\!\!\!\!dt'\!\!\int_{t'}^{t_f}\!\!\!\!dt''\,\hat{U}(t_f\! \leftarrow\! t'')\,V(\bm{r})\,\mathcal{D}(t'',t')\,, \end{eqnarray} \end{subequations} and \begin{equation} \mathcal{D}(t_2,t_1) = \hat{U}_v(t_2 \leftarrow t_1)\,\hat{\bm{\mu}}\cdot\bm{E}(t_1)\,\hat{U}_0(t_1 \leftarrow 0) \,. \label{Eq:D_operator} \end{equation} Among the three terms composing $\hat{U}(t_f\! \leftarrow\! 0)$ in Eq.\;(\ref{Eq:Dyson_Eq_3}), $\hat{U}_d(t_f)$ is responsible for direct ionization whereas $\hat{U}_r(t_f)$ includes recollision. It is to be stressed that Eq.\,(\ref{Eq:Dyson_Eq_3}), with the definitions given in Eqs.\,(\ref{Eq:Evolution_Operators}) and\,(\ref{Eq:D_operator}), is still exact. This type of Dyson expansion could be iterated by considering multiple ionization and higher-order recollisions. In the present simplified model we stop at this second-order decomposition. Now, using these equations, we can split the ionization amplitude into two contributions: \begin{equation} a(k_x,k_y) = a_d(k_x,k_y) + a_r(k_x,k_y)\,, \label{Eq:Analytic_tran_amp_2} \end{equation} with \begin{equation} a_d(k_x,k_y) = \langle \Psi^{+}_{\bm{k}}|\,\hat{U}_d(t_f)\,|\Psi(0)\rangle \label{Eq:direct_amp_SFA1} \end{equation} and \begin{equation} a_r(k_x,k_y) = \langle \Psi^{+}_{\bm{k}}|\,\hat{U}_r(t_f)\,|\Psi(0)\rangle\,.
\label{Eq:Recollision_amp_SFA1} \end{equation} Eq.\,(\ref{Eq:direct_amp_SFA1}) gives the transition amplitude associated with direct ionization whilst Eq.\,(\ref{Eq:Recollision_amp_SFA1}) gives the transition amplitude associated with ionization preceded by recollision. Hence, the 2D LIED spectrum can be written as \begin{equation} \mathcal{I}(k_x,k_y) = \big|a_d(k_x,k_y) + a_r(k_x,k_y)\big|^2\,, \label{Eq:Trans_prob_1} \end{equation} an expression which shows clearly the appearance of an interference between the direct and recolliding ionization pathways. Note that such an expression is common when describing strong field ionization using an SFA approach\;\cite{Suarez_2015}. \subsubsection{Approximate Transition Amplitude}% Evaluating the direct ionization amplitude $a_d(k_x,k_y)$ is relatively easy as compared to the recollision amplitude $a_r(k_x,k_y)$ because of the appearance of \mbox{$\hat{U}(t_f \!\leftarrow \!t'')$} in the expression of $\hat{U}_r(t_f)$. To make this evaluation tractable, we use the Strong Field Approximation (SFA)\;\cite{PhysRevA.74.063404, PhysRevA.78.033412}, and we replace $\hat{U}(t_f\!\leftarrow\!t'')$ by the Volkov evolution operator $\hat{U}_v(t_f\!\leftarrow\!t'')$, with \begin{equation} \hat{U}_r(t_f) \simeq \int_{0}^{t_f}\!\!\!\!dt'\!\!\int_{t'}^{t_f}\!\!\!\!dt''\,\hat{U}_v(t_f\! \leftarrow\! t'')\,V(\bm{r})\,\mathcal{D}(t'',t')\,. \label{Eq:ApproxUr} \end{equation} Replacing $\hat{U}(t_f\!\leftarrow\!t'')$ by $\hat{U}_v(t_f\!\leftarrow\!t'')$ in Eq.\,(\ref{Eq:ApproxUr}) means that after the first recollision event, we neglect the Coulomb force compared to the interacting IR field, an approximation valid in the asymptotic region, where the Coulomb interaction is negligible. As a second step for simplifying the model, the outgoing waves $|\Psi^{+}_{\bm{k}}\rangle$ are approximated by plane waves $|\Phi^{\text{pw}}_{\bm{k}}\rangle$. This approximation is justified asymptotically. Within these approximations we obtain \begin{equation} a_d(k_x,k_y) \simeq\, i \int_{0}^{t_f}\!\!dt'\,e^{-i\bar{S}_1}\, \big\langle\Phi^{\text{pw}}_{\bm{k'}}\,\big|\,\hat{\bm{\mu}}\cdot\bm{E}(t')\,\big|\Psi(0)\big\rangle\,, \label{Eq:direct_amp_SFA3} \end{equation} and \begin{eqnarray} a_r(k_x,k_y) & \simeq & \int_{0}^{t_f}\!\!E(t')dt'\!\int_{t'}^{t_f}\!\!dt''\, e^{-i\bar{S}_2}\nonumber\\ & & \big\langle\Phi^{\text{pw}}_{\bm{k''}}\,\big|\,V(\bm{r})\, \big|\Psi_r\big\rangle\,, \label{Eq:Recollision_amp_SFA3} \end{eqnarray} where \begin{subequations} \begin{eqnarray} \bar{S}_1 & = & \frac{1}{2}\int_{t'}^{t_f} \left[\bm{k}+\bm{A}(\tau)\right]^2d\tau - I_p\,t'\,, \label{Eq:Sbar_1} \\ \bm{k'} & = & \bm{k}+\bm{A}(t')\,, \label{Eq:kprime}\\ \bar{S}_2 & = & \frac{1}{2}\int_{t''}^{t_f} \left[\bm{k}+\bm{A}(\tau)\right]^2d\tau - I_p\,t'\,, \label{Eq:Sbar_2} \\ \bm{k''} & = & \bm{k}+\bm{A}(t'')\,, \label{Eq:ksecond} \\ \big|\Psi_r\big\rangle & = & \hat{U}_v(t''\!\leftarrow\!t')\,x\,\big|\Psi(0)\big\rangle\,. \end{eqnarray} \end{subequations} In Section\;\ref{Sec:Num_Res}, it has been noted that the most interesting features of the photoelectron spectrum lie in the high momentum ($k_y$) part of the 1D averaged spectra. This is because these electrons are characterized by de Broglie wavelengths small enough to resolve sub-\AA\/ spatial scales. Thus, describing accurately the low-energy part of the spectrum and the parallel momentum ($k_x$) distribution of the photoelectrons is not essential.
It has also been shown that large energies are reached by electrons ionized around a maximum of the field, and hence around a minimum of the vector potential\;\cite{PhysRevLett.71.1994, PhysRevA.54.742}. Thus $\bm{A}(t')$ can be neglected in Eq.\,(\ref{Eq:kprime}). In addition, electrons with high kinetic energies mainly recollide with the ionic core at a minimum of the field, corresponding to a maximum of the vector potential\;\cite{PhysRevLett.71.1994, PhysRevA.54.742}. In Eq.\,(\ref{Eq:ksecond}) we will therefore use $\bm{A}(t'') \simeq \pm E_0/\omega_L\,\hat{\bm{x}}$. The vector potential $\bm{A}(t'')$ therefore induces a strong shift on the parallel component $k_x$ of the electron momentum. In practice, this shift is of no significance in the present approach, since it will be averaged out in the calculation of Eq.\,(\ref{Eq:1D_spec_Eqn}), and we will therefore not take it into account in the following. Within these approximations, and to avoid discrepancies between the SFA spectrum and the spectrum obtained from the solution of the TDSE, one should restrict the analysis of the 1D averaged signal to the highest $k_y$ momentum components only. The interest of this severe approximation lies, however, in the fact that it simplifies the model by allowing the separation of the temporal from the spatial integrals involved in Eq.\,(\ref{Eq:direct_amp_SFA3}). Thus, for the direct ionization amplitudes, one has: \begin{equation} a_d(k_x,k_y) \simeq A_{d}\,\langle\Phi^{\text{pw}}_{\bm{k}}|\,x\,|\Psi(0)\rangle\,, \label{Eq:Appox_ad} \end{equation} where \begin{equation} A_{d} = i\int_{0}^{t_f} E(t')\,e^{-i\bar{S}_1}\,dt' \,. \end{equation} Similarly, the recollision amplitude becomes \begin{eqnarray} a_r(k_x,k_y) & \simeq & \int_{0}^{t_f}\!\!E(t')dt'\!\int_{t'}^{t_f}\!\!dt''\,e^{-i\bar{S}_2}\,\nonumber\\ & & \big\langle\Phi^{\text{pw}}_{\bm{k}}\,\big|\,V(\bm{r})\,\big|\Psi_r\big\rangle\,. \label{Eq:Appox_ar} \end{eqnarray} Using the closure property of the plane wave basis set one obtains \begin{equation} \big|\Psi_r\big\rangle = \int\!\!d\bm{k'} e^{-ik'^2\Delta t/2} \big\langle\Phi^{\text{pw}}_{\bm{k'}}\big|x\big|\Psi(0)\big\rangle\, \big|\Phi^{\text{pw}}_{\bm{k'}}\big\rangle\,, \end{equation} where $\Delta t = t'' - t' \simeq 0.7\,(2\pi/\omega_L)$ is the mean time during which the electron wave packet propagates in the continuum\;\cite{PhysRevLett.71.1994, PhysRevA.54.742}. The temporal and spatial integrals can thus be separated in the expression (\ref{Eq:Appox_ar}) of the recollision amplitude as \begin{equation} a_r(k_x,k_y) \simeq A_{r}\,\big\langle\Phi^{\text{pw}}_{\bm{k}}\big|\,V(\bm{r})\,\big|\Psi_r\big\rangle\,, \label{Eq:ar_k_Approx} \end{equation} where \begin{equation} A_{r} = \int_{0}^{t_f}\!\!dt'\int_{t'}^{t_f}\!\!dt''\,E(t')\,e^{-i\bar{S}_2}\,. \label{Eq:Const_amp_recoll} \end{equation} Finally, the approximate transition amplitude is given by \begin{equation} a(k_x,k_y) \simeq A_{d}\big\langle\Phi^{\text{pw}}_{\bm{k}}\big|x \big|\Psi(0)\big\rangle + A_{r}\big\langle\Phi^{\text{pw}}_{\bm{k}}\big|V(\bm{r})\big|\Psi_r \big\rangle\,. \label{Eq:Trans_amp_approximate_general} \end{equation} This equation has to be developed on a suitable basis of initial states to obtain the final analytical form of the LIED spectra.
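Although the temporal factor $A_d$ will ultimately be treated as an adjustable coefficient, it can also be evaluated by direct quadrature. The following minimal Python sketch does so for a given final momentum $(k_x,k_y)$, reusing the arrays \texttt{t}, \texttt{A} and \texttt{E} of the pulse sketch of Sec.\;\ref{Sec:Model}; the default ionization potential is the 9.2\,eV value quoted above ($\simeq 0.338$\,au), and the grid and quadrature rule are illustrative assumptions rather than a description of our actual implementation.
\begin{verbatim}
import numpy as np

def A_direct(kx, ky, t, A, E, Ip=0.338):
    # [k + A(tau)]^2 with A(t) polarized along x (atomic units)
    ksq = (kx + A) ** 2 + ky ** 2
    # prefix[i] ~ integral of ksq from 0 to t[i] (trapezoidal rule)
    prefix = np.concatenate(
        ([0.0], np.cumsum(0.5 * (ksq[1:] + ksq[:-1]) * np.diff(t))))
    S1 = 0.5 * (prefix[-1] - prefix) - Ip * t  # action Sbar_1(t')
    # A_d = i * integral of E(t') exp(-i Sbar_1) dt'
    return 1j * np.trapz(E * np.exp(-1j * S1), t)
\end{verbatim}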
\subsection{Initial Molecular Wave Function}% \label{subsec:ini_guess} % Eq.\,(\ref{Eq:Trans_amp_approximate_general}) expresses the ionization amplitude as a sum of two terms, each written in the form of a product of spatial and temporal integrals. The first term is associated with direct ionization and the second one with recollision events. As we see from the LIED spectra, the information we are interested in is encoded in the spatial integrals. Thus, in the following discussions, the temporal integrals $A_{d}$ and $A_{r}$ will be taken as adjustable coefficients in order to match the approximate spectrum with the calculated spectrum. To proceed further with the evaluation of the spatial integrals, we need to specify the initial wave function $\Psi(\bm{r},0)=\big\langle\bm{r}\big|\Psi(0)\big\rangle$. Insofar as the SAE approximation is valid, this is a molecular orbital. In quantum chemistry, such an orbital is usually expressed as a linear combination of atomic orbitals (LCAO method), and there are many basis set ans\"atze for representing localized atomic wave functions. Here the initial HOMO orbital is taken as an anti-symmetric linear combination of $2p_x$ atomic orbitals (see Fig.\;\ref{fig:system}(b)) \begin{subequations} \begin{eqnarray} \Psi(\bm{r},0) & = & \Phi_{2p_x}(\bm{r}+\bm{R}) - \Phi_{2p_x}(\bm{r}-\bm{R})\\ & = & \Phi_{2p_x}^{-}(\bm{r}) - \Phi_{2p_x}^{+}(\bm{r})\,. \label{Eq:Ini_Guess_2px} \end{eqnarray} \end{subequations} Ideally one would choose for $\Phi_{2p_x}(\bm{r})$ a Slater-type orbital of the form \begin{eqnarray} \Phi_{2p_x}^s(\bm{r}) &=& \mathcal{N}_s\;x\;\mathrm{e}^{-\zeta\,r}\, , \quad r=(x^2+y^2)^{1/2} \label{Eq:STO_2px} \end{eqnarray} with the normalization factor \mbox{$\mathcal{N}_s = \zeta^2 \sqrt{8/3\pi}$} in two dimensions, where $\zeta$ is the Slater exponent. This analytical form, once introduced in Eq.\,(\ref{Eq:Ini_Guess_2px}), is a reasonable candidate for representing the HOMO orbital, but an important disadvantage then lies in the difficulty of evaluating multi-center integrals such as the recollision integral of Eq.\,(\ref{Eq:Trans_amp_approximate_general}). This difficulty can be removed if the Slater orbital (\ref{Eq:STO_2px}) is replaced by a Gaussian-type orbital of the form \begin{equation} \Phi_{2p_x}^g(\bm{r}) = \mathcal{N}_g\;r\cos\theta_r\;\mathrm{e}^{-\alpha\,r^2}\,, \label{Eq:STO-1G-2px} \end{equation} with the normalization factor \mbox{$ \mathcal{N}_g = \alpha\,\sqrt{8/\pi}$}, where $\alpha$ is the Gaussian exponent. Indeed, the Gaussian function (\ref{Eq:STO-1G-2px}) can be made a good approximation of the Slater orbital\;(\ref{Eq:STO_2px}) with an appropriate choice of $\alpha$. \begin{figure}[ht] \includegraphics[width=8.5cm]{figure8.pdf} \caption{(Color online). Overlap between the Gaussian-type (\ref{Eq:STO-1G-2px}) and Slater-type (\ref{Eq:STO_2px}) orbitals used in the present study as a function of the dimensionless ratio $\zeta/\sqrt{\alpha}$ (see text for details).} \label{fig:STO-GTO-overlap} \end{figure} Fig.\;\ref{fig:STO-GTO-overlap} shows the overlap between the wave functions (\ref{Eq:STO-1G-2px}) and (\ref{Eq:STO_2px}) as a function of the dimensionless ratio $\zeta/\sqrt{\alpha}$. It is clear that for $\zeta \simeq 2.165\,\sqrt{\alpha}$, the Slater and Gaussian orbitals are very similar, with an overlap of about $98\%$.
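This overlap curve is straightforward to reproduce. In the minimal Python sketch below (an illustrative implementation, not the code used to produce Fig.\;\ref{fig:STO-GTO-overlap}), the angular integration of $x^2$ contributes a factor $\pi$ and the remaining radial integral is evaluated numerically.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def overlap(ratio, alpha=1.0):
    # 2D Slater- and Gaussian-type 2p_x orbitals of the text
    zeta = ratio * np.sqrt(alpha)
    Ns = zeta**2 * np.sqrt(8.0 / (3.0 * np.pi))  # Slater norm (2D)
    Ng = alpha * np.sqrt(8.0 / np.pi)            # Gaussian norm (2D)
    # angular part of x^2 gives pi; radial integral done by quadrature
    radial, _ = quad(
        lambda r: r**3 * np.exp(-zeta * r - alpha * r**2), 0.0, np.inf)
    return Ns * Ng * np.pi * radial

print(overlap(2.165))  # ~0.98 near the optimal ratio
\end{verbatim}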
In the following, for the evaluation of the integrals, we will use Gaussian-type atomic orbitals, but for the reconstruction of the initial molecular state we will use Slater-type orbitals whose size is defined by the optimal ratio $\zeta/\sqrt{\alpha} = 2.165$. \subsection{Approximate 1D Photoelectron Spectrum}% The expression for the approximate transition amplitude $a(k_x,k_y)$ given in Eq.\,(\ref{Eq:Trans_amp_approximate_general}) can now be evaluated for the initial HOMO wave function given in Eq.\,(\ref{Eq:Ini_Guess_2px}), as \begin{eqnarray} a(k_x,k_y) & = & A_{d}\Big[\langle\Phi^{\text{pw}}_{\bm{k}}|\,x\,|\Phi_{\mathrm{2p}_x}^{-}\rangle - \langle\Phi^{\text{pw}}_{\bm{k}}|\,x\,|\Phi_{\mathrm{2p}_x}^{+}\rangle\Big] \nonumber\\ & + & A_{r}\Big[\langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi_{\text{rec}}^{-}\rangle - \langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi_{\text{rec}}^{+}\rangle\Big] \label{Eq:Trans_amp_CO2_general1} \end{eqnarray} where $|\Phi_{\text{rec}}^{\pm}\rangle$ denotes \begin{equation} |\Phi_{\text{rec}}^{\pm}\rangle = \int\!d\bm{k'} e^{-ik'^2\Delta t/2} \langle\Phi^{\text{pw}}_{\bm{k'}}|\,x\,|\Phi_{\mathrm{2p}_x}^{\pm}\rangle\; |\Phi^{\text{pw}}_{\bm{k'}}\rangle\label{Phi_rec_pm}\,. \end{equation} The first two integrals in Eq.\,(\ref{Eq:Trans_amp_CO2_general1}) represent direct ionization from displaced (oxygen $2p_x$) orbitals and the last two represent the ionization amplitudes after a recollision event. The two integrals associated with direct ionization amplitudes are just Fourier transforms (FT) of products of the dipole operator $x$ with displaced $2p_x$ orbitals. In momentum space, this spatial translation becomes a simple phase shift of the form $\exp\left[\pm ik_{y}R\right]$ of the FT signal of $\Phi_{\mathrm{2p}_x}^{\pm}$. Taking this simplification into account, Eq.\,(\ref{Eq:Trans_amp_CO2_general1}) can be reduced to \begin{eqnarray} a(k_x,k_y) & = & A_{d}\,\sin(k_yR)\,\langle\Phi^{\text{pw}}_{\bm{k}}|\,x\,|\Phi_{\mathrm{2p}_x}\rangle\nonumber\\ & + & A_{r}\Big[\langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi_{\text{rec}}^{-}\rangle - \langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi_{\text{rec}}^{+}\rangle\Big]\,. \label{Eq:Trans_amp_CO2_general2} \end{eqnarray} The evaluation of the direct ionization amplitude using a Gaussian-type orbital yields \begin{equation} a_{d}(k_x,k_y) = A_{d}\,\sin(k_yR)\,(k_x^2-2\alpha)\,e^{-\frac{k_x^2+k_y^2}{4\alpha}}\,, \label{Eq:direct_CO2_g} \end{equation} provided $A_{d}$ accounts for all constant factors. The calculation of the recollision amplitude $a_{r}(k_x,k_y)$ is more involved since it requires the knowledge of the functional form of the recolliding wave functions $\Phi_{\text{rec}}^{\pm}(\bm{r})$. Using for the initial state a Gaussian-type orbital $\Phi_{\mathrm{2p}_x}$ located at the origin we obtain \begin{equation} \Phi_{\text{rec}}^{0}(r) \propto \frac{\alpha-i\beta-2\beta^2x^2}{(\alpha-i\beta)^{3}}\;e^{i\gamma r^2}\,, \label{Eq:Convolution_Carbon} \end{equation} where $\gamma=\alpha\beta/(\alpha-i\beta)$ and $\beta=1/(2\Delta t)$. The wave functions $\Phi_{\text{rec}}^{\pm}(r)$ are identical to $\Phi_{\text{rec}}^{0}(r)$ except for a phase shift, so that the corresponding recollision wave functions are given by \begin{subequations} \begin{eqnarray} \Phi_{\text{rec}}^{-}(r) & = & e^{i\gamma R^2}\,e^{+i2y\gamma R}\;\Phi_{\text{rec}}^{0}(r)\,,\\ \Phi_{\text{rec}}^{+}(r) & = & e^{i\gamma R^2}\,e^{-i2y\gamma R}\;\Phi_{\text{rec}}^{0}(r)\,,
\end{eqnarray} \label{Eq:Spatial_Covolution_CO2} \end{subequations} In the near IR ($\lambda$ = 800\,nm to 2.5\,$\mu$m) the parameter $\beta$ of Eq.\,(\ref{Eq:Convolution_Carbon}) is in the range $10^{-3}$ to $10^{-2}$\;a.u. In comparison, the Gaussian orbital exponent $\alpha$ is usually of the order of 1\,a.u. These orders of magnitude can be used to simplify further the expression of the ionization amplitude. Since the binding potential $V(\bm{r})$ is characterized by three attractive centers, the recollision amplitude $a_{r}(k_x,k_y)$ (second part on the \textit{r.h.s.} of Eq.\,(\ref{Eq:Trans_amp_CO2_general2})) contains, for the HOMO of CO$_2$, six integrals. Indeed, from the HOMO, ionization may originate from either of the two oxygen atoms and recollision may take place on any of the three atoms. Fortunately, these six integrals are similar. In the case of the HOMO, the electron wave packet is launched from both of the oxygen atoms, marked as O$_1$ and O$_2$ in Fig.\;\ref{fig:system}. On recollision, the contribution from the first oxygen atom O$_1$ will scatter from the parent atom O$_1$ itself as well as from the two neighboring atoms: from the carbon atom C and from the second oxygen atom O$_2$. This part of the rescattering amplitude, shown in Fig.\;\ref{fig:system}, can be written as \begin{equation} a_{r}^{\text{O}_1}(k_x,k_y) = \langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi^{+}_{\text{rec}}\rangle\,, \end{equation} where the three-center potential $V$, assumed to be of Coulomb form, is given by \begin{equation} V(\bm{r}) = - \dfrac{q_{\mathrm{O}}}{|\bm{r}+\bm{R}|} - \dfrac{q_{\mathrm{C}}}{|\bm{r}|} - \dfrac{q_{\mathrm{O}}}{|\bm{r}-\bm{R}|}\,. \label{Eq:SoftCoulombPotential_SAE-molecule-1bis} \end{equation} What matters most for the recollision is the scattering taking place in the vicinity of the Coulombic cores. At first order near the singularities of the potential wells, \textit{i.e.} for $x \rightarrow 0$ and $y \rightarrow \{\,-R,\,0,\,R\,\}$, and taking into account Eqs.\,(\ref{Eq:Convolution_Carbon}) and\;(\ref{Eq:Spatial_Covolution_CO2}), the above integral can be reduced to \begin{equation} a_{r}^{\text{O}_1}(k_x,k_y) \propto \frac{-\mathrm{e}^{ik_y R}-\mathrm{e}^{i\beta R^{2}}-\mathrm{e}^{-ik_y R}\,\mathrm{e}^{i\beta 4R^{2}}}{|k_y|}\,. \label{Eq:Approx_recoll_1st_O} \end{equation} Similarly, for the wave packet originating from the second oxygen atom, we obtain: \begin{eqnarray} a_{r}^{\text{O}_2}(k_x,k_y) & = & \langle\Phi^{\text{pw}}_{\bm{k}}|\,V\,|\Phi^{-}_{\text{rec}}\rangle\,\notag\\ & \propto & \frac{\mathrm{e}^{-ik_y R}+\mathrm{e}^{i\beta R^{2}}+\mathrm{e}^{ik_y R}\,\mathrm{e}^{i\beta 4R^{2}}}{|k_y|}\,. \label{Eq:Approx_recoll_2nd_O} \end{eqnarray} Finally, the total recollision amplitude is \begin{equation} a_{r}(k_x,k_y) = A_{r}\,\frac{1-\mathrm{e}^{i4\beta R^{2}}}{|k_{y}|}\,\sin(k_{y}R)\,. \label{Eq:HOMO_recollision_CO2} \end{equation} Combining Eqs.\,(\ref{Eq:direct_CO2_g}) and\;(\ref{Eq:HOMO_recollision_CO2}) we obtain the 2D transition amplitude. The transition probability is the square modulus of this transition amplitude. Finally, averaging over the parallel momentum component $k_x$, the 1D spectrum is written as \begin{equation} \mathcal{S}(k_y)=\left(|A_d|^2\,\mathrm{e}^{-\frac{k_{y}^2}{2\alpha}}+ \frac{|A_r|^2}{k_y^2}\,\right)\, \sin^2(k_{y}R)\,. \label{Eq:1D_spectrum_HOMO} \end{equation} This is the compact analytical form we will use in the next section for our inversion procedure.
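For later reference, Eq.\,(\ref{Eq:1D_spectrum_HOMO}) is trivial to implement; a minimal Python sketch (the parameter names are chosen here for illustration) reads
\begin{verbatim}
import numpy as np

def S_model(ky, Ad, Ar, alpha, R):
    """Analytical 1D LIED spectrum; all arguments in atomic units."""
    ky = np.asarray(ky, dtype=float)
    direct = Ad**2 * np.exp(-ky**2 / (2.0 * alpha))
    rescatter = Ar**2 / ky**2   # recollision term, valid for ky != 0
    return (direct + rescatter) * np.sin(ky * R) ** 2
\end{verbatim}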
\section{Results: Reconstruction of Orbitals}% \label{Sec:Reconstruction} % Eq.\,(\ref{Eq:1D_spectrum_HOMO}) is the final result we intended to derive for solving the inverse problem. Taking $|A_d|$, $|A_r|$, $\alpha$ and $R$ as four independent adjustment variables, this expression can be compared with 1D averaged LIED spectra calculated from the solution of the TDSE. In general, the model can be used for any internuclear distance of the CO$_2$ molecule. However, as discussed previously, with the particular laser parameters chosen in the present study our model is not expected to perform well for small values of $R$. We thus chose only two cases for this comparison: $R=3.5$\,\AA\/ and $R=5.0$\,\AA\/. As is well known, to ease a multi-parameter fitting procedure the search for the best fit should start from good initial guess values. Here the range of the parameter $R$ can be obtained easily from the spectrum itself by measuring the fringe width $\Delta k = \pi/R$, as discussed in\;\cite{PhysRevA.85.053417}. Thus we are left with three completely unknown parameters and one partially known parameter. The fitting process is performed here by using the well-known Levenberg-Marquardt algorithm (LMA)\;\cite{Algorithm}, because of its robustness in finding the best possible solution even if the procedure starts with initial guess values relatively far from the final ones. \begin{table}[ht] \caption{Fitted values of the parameters involved in the SFA analytical model of Eq.\,(\ref{Eq:1D_spectrum_HOMO}).} \begin{ruledtabular} \begin{tabular}{c c c c c} & $A_d$ (au) & $A_r$ (au) & \;\;\;\;$R$ (\AA)\;\; & $\alpha$ (au) \\ [0.5ex] \hline \noalign{\vskip 1.0ex} & 0.00478 & 0.000808 & 3.628 & 0.535 \\ [0.5ex] For $R=3.5$\,\AA & 0.00506 & 0.000720 & 3.616 & 0.516 \\ [0.5ex] & 0.00513 & 0.000634 & 3.624 & 0.527 \\ [0.5ex] & 0.00405 & 0.001087 & 3.619 & 0.520 \\ [0.5ex] \hline \noalign{\vskip 1.0ex} Average & 0.00476 & 0.000812 & 3.622 & 0.525 \\ [0.5ex] \hline \noalign{\vskip 0.5ex} \hline \noalign{\vskip 1.0ex} & 0.0157 & 0.00385 & 5.141 & 0.676 \\ [0.5ex] For $R=5.0$\,\AA & 0.0121 & 0.00639 & 5.141 & 0.625 \\ [0.5ex] & 0.0137 & 0.00507 & 5.142 & 0.657 \\ [0.5ex] & 0.0145 & 0.00452 & 5.142 & 0.669 \\ [0.5ex] \hline \noalign{\vskip 1.0ex} Average & 0.0140 & 0.00496 & 5.142 & 0.657 \\ [0.5ex] \end{tabular} \end{ruledtabular} \label{Table_fit} \end{table} The fitting process is performed on the high kinetic energy part of the spectra. The highest accessible kinetic energy, and hence the highest momentum component $k_y^{max}$, is defined by the cut-off energy $3.17\,U_p$, which is fixed by the laser parameters used in the calculation or experiment. In order to obtain reliable values for the parameters, the fitting process must be repeated several times. This is done by varying the lower limit of the kinetic momentum $k_y^{min}$ taken into account, between $1.15$\,au and $1.25$\,au in the present calculation. Values of the relevant parameters obtained for four different lower limits are given in Table\;\ref{Table_fit}. The values obtained for the internuclear distance $R$ are very stable and accurate. In addition, the values obtained for the orbital exponent $\alpha$ are also relatively stable. Since these two parameters are the ingredients used to reconstruct the molecular orbital (\ref{Eq:Ini_Guess_2px}), they will lead to a very similar orbital, whatever the other parameters chosen in Table\;\ref{Table_fit}.
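As a sketch of this procedure, the following Python snippet fits Eq.\,(\ref{Eq:1D_spectrum_HOMO}) to a tabulated TDSE spectrum with SciPy's \texttt{curve\_fit}, whose \texttt{method="lm"} option wraps the MINPACK implementation of the Levenberg-Marquardt algorithm. The data file name, the fitting window and the initial guesses are placeholders; in practice the guess for $R$ comes from the measured fringe width $\Delta k = \pi/R$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(k_y, A_d, A_r, alpha, R):
    # Analytical 1D spectrum of the SFA model
    return ((A_d ** 2) * np.exp(-k_y ** 2 / (2.0 * alpha))
            + (A_r ** 2) / k_y ** 2) * np.sin(k_y * R) ** 2

# Hypothetical data file holding the TDSE spectrum (k_y, S) in a.u.
k_y, S_tdse = np.loadtxt("tdse_spectrum.dat", unpack=True)

# Fit only the high kinetic energy part of the spectrum.
mask = k_y > 1.2
# Initial guesses: R from an assumed fringe width of ~0.47 a.u.,
# the remaining parameters from rough order-of-magnitude values.
p0 = [1e-2, 1e-3, 0.5, np.pi / 0.47]

# method="lm" selects the MINPACK Levenberg-Marquardt routine.
popt, pcov = curve_fit(model, k_y[mask], S_tdse[mask],
                       p0=p0, method="lm")
print("A_d, A_r, alpha, R =", popt)
\end{verbatim}
Repeating the call with different lower limits of the fitting window then provides the spread of parameter values collected in Table\;\ref{Table_fit}.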
In practice, we average over several fits in order to extract these parameters (see Table\;\ref{Table_fit}). Typical numerical and model 1D LIED spectra are given in Fig.\;\ref{fig:Fit_spectra}. Panel (a) is for $R=3.5$\,\AA\/ and panel (b) is for $R=5.0$\,\AA. The numerical spectra $\mathcal{S}(k_y)$ obtained by solving the TDSE are shown as bold blue curves and the fitted (model) spectra are shown as red dashed curves, as a function of $k_y$. As is apparent from the figure, the model fits the numerical calculations well. For both cases considered here, the relative errors in the retrieved internuclear distances are of the order of $3\,\%\,$: we obtained $R=3.62\,$\AA\/ instead of $3.50\,$\AA\/ and $R=5.14\,$\AA\/ instead of $5.00\,$\AA\/ (see averaged $R$ values in Table\;\ref{Table_fit}). \begin{figure}[ht] \includegraphics[width=7.5cm]{figure9.pdf} \caption{(Color online). 1D averaged LIED spectra $\mathcal{S}(k_y)$. The blue solid lines are the spectra calculated using the time-dependent Schr\"odinger equation and the red dashed lines show the results of the best fits using the analytical SFA model. Panel (a) is for $R=3.5\,$\AA\/ and panel (b) is for $R=5.0\,$\AA.} \label{fig:Fit_spectra} \end{figure} Taking the average values of the fitted internuclear distances $R$ and Gaussian exponents $\alpha$, we can reconstruct the initial state used for deriving Eq.\,(\ref{Eq:1D_spectrum_HOMO}). Finally, as discussed in Section\;\ref{subsec:ini_guess}, this initial state can also be given in terms of Slater-type orbitals. These functions will give the best possible simple form of the initial state. Reconstructed approximate Slater forms of the initial states are given in Fig.\,\ref{fig:Recontruct}. Panels (a) and (c) are the initial states used in the TDSE calculation for $R=3.5$\,\AA\/ and $R=5.0$\,\AA. Panels (b) and (d) are the corresponding reconstructed molecular orbitals. \begin{figure}[ht] \includegraphics[width=7.5cm]{figure10.pdf} \caption{(Color online). Panels (a) and (c): Initial wave functions used for the TDSE calculation with $R=3.5$\,\AA\/ and $5.0$\,\AA\/ respectively. Panels (b) and (d): Associated reconstructed molecular orbitals.} \label{fig:Recontruct} \end{figure} The overlap between the reconstructed orbital and the initial state used in the numerical TDSE calculations is higher than $96\%$: $96.3\%$ for $R=3.5\,$\AA\/ and $97.2\%$ for $R=5.0\,$\AA. This reconstruction shows that for large internuclear distances LIED techniques could be used to image molecular orbitals with rather good accuracy using a simple multi-parameter fitting procedure. It is also possible to depict the discrepancy in the reconstructed orbitals caused by the inaccuracies in $R$ and $\alpha$ by plotting the difference between the exact and reconstructed orbitals. These differences are shown in Fig.\;\ref{fig:Error} using the same color code as in Fig.\;\ref{fig:Recontruct}. Panel (a) is for $R=3.5\,$\AA\/ and panel (b) is for $R=5.0\,$\AA. The discrepancy shown in this figure is due to the combined errors in the reconstructed values of both the orbital exponent $\alpha$ and the internuclear distance $R$. Any error in the internuclear distance $R$ would be crucial since it would cause a significant mismatch in the location of the reconstructed orbital. Here, since the fitted value of $R$ is very close to its exact value, this problem does not appear.
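An overlap of this kind can be estimated with a simple grid computation, sketched below. The antisymmetric LCAO combination of Slater-type $2p_x$ orbitals with $\zeta = 2.165\sqrt{\alpha}$ is assumed here as a stand-in for the ansatz of Eq.\,(\ref{Eq:Ini_Guess_2px}); the grid, the normalization on the grid and the specific parameter values are illustrative choices, not the exact procedure used in the text.
\begin{verbatim}
import numpy as np

def slater_2px(x, y, zeta):
    # Unnormalized Slater-type 2p_x orbital: x * exp(-zeta * r)
    r = np.sqrt(x ** 2 + y ** 2)
    return x * np.exp(-zeta * r)

def lcao_homo(x, y, R, zeta):
    # Antisymmetric combination of 2p_x orbitals displaced by +/- R
    # along the molecular axis (assumed form of the LCAO ansatz).
    return slater_2px(x, y + R, zeta) - slater_2px(x, y - R, zeta)

def overlap(a, b):
    # |<a|b>| with both wave functions normalized on the grid
    return abs(np.sum(a * b)) / np.sqrt(np.sum(a**2) * np.sum(b**2))

x, y = np.meshgrid(np.linspace(-25, 25, 500),
                   np.linspace(-25, 25, 500))

# Reference orbital (optimal alpha) vs. reconstruction (fitted
# averages) for R = 3.5 Angstrom; all lengths in atomic units.
psi_ref = lcao_homo(x, y, 6.61, 2.165 * np.sqrt(0.624))
psi_rec = lcao_homo(x, y, 6.84, 2.165 * np.sqrt(0.525))
print(overlap(psi_ref, psi_rec))
\end{verbatim}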
A small error in the orbital exponent $\alpha$ is, on the other hand, not as crucial since the overlap between the exact wave function and the reconstructed orbital varies smoothly with $\alpha$. We have calculated the optimal $\alpha$ values for our exact initial states by computing the overlap between the initial state and the LCAO form we have adopted in this study. We have obtained $\alpha_{\mathrm{opt}} = 0.624\,$au for $R=3.5\,$\AA\/ and $\alpha_{\mathrm{opt}} = 0.626\,$au for $R=5.0\,$\AA. The relative errors in the fitted values of $\alpha$ are therefore of the order of 16\% for $R=3.5\,$\AA\/ and 5\% for $R=5.0\,$\AA. We again see here that our inversion procedure is more accurate for the largest internuclear distance. \begin{figure}[ht] \includegraphics[width=8.5cm]{figure11.pdf} \caption{(Color online). Difference between the initial states and their reconstructions. Panel (a) is for $R=3.5\,$\AA\/ and panel (b) is for $R=5.0\,$\AA. The color map used is the same as in Fig.\;\ref{fig:Recontruct}.} \label{fig:Error} \end{figure} Since the model is developed within a single active electron approximation, analyzing LIED processes with multi-electron ionization channels may give additional discrepancies in the retrieved values of the parameters. One of the main problems in including interactions between electrons is the difficulty of solving such situations analytically. Since we aim for a compact analytical form with a relatively small number of fitting parameters, an extension of the model beyond the single active electron approximation is far from trivial in the spirit of an inverse problem. Other approximations could be relaxed more easily. For example, higher-order recollision processes can be included in the picture by extending the strong field approximation to the desired higher-order terms. This may improve the model, but to the best of our understanding, the second-order development used here retains the main elements necessary for an accurate reconstruction procedure for linear molecules with large internuclear distances ($R > 3\,$\AA\/). The inverse problem discussed in this manuscript in the case of the HOMO orbital of the CO$_2$ molecule can be relatively easily extended to the deeper \mbox{HOMO-1} orbital by slightly modifying the analytical model. In this case, the atomic orbitals of the three composite atoms have a significant overlap and form a symmetric molecular orbital. But the relative contributions of the C and O atoms are different. This gives an additional parameter which should be introduced in the model and which would also have to be retrieved by the inversion procedure. It should also be relatively easy to make some other simple modifications to the analytical model to treat other linear molecules. \section{Conclusion}% \label{Sec:Conclude}% In this paper, we discuss some possibilities for imaging molecular orbitals offered by laser induced electron diffraction following the strong field ionization of a pre-aligned linear molecule. The problem is discussed in detail for the HOMO orbital of the carbon dioxide molecule. The system is described theoretically in the framework of a single active electron model. The strong field photoelectron spectra are obtained by solving the time-dependent Schr\"odinger equation (TDSE) for different initial internuclear distances.
An approximate but compact analytical model is developed for these photoelectron spectra using three classes of approximations: (i)\;the single active electron approximation, (ii)\;the strong field approximation and (iii)\;an approximate LCAO ansatz for the initial molecular orbital. This analytical model contains some parameters which are fitted by comparison with the TDSE results. This fitting procedure allows for the extraction of the internuclear distance and the corresponding Slater-type orbital exponents. The initial ansatz for the molecular orbital is then reconstructed with these parameters, providing an accurate representation of the initial state used in the TDSE, with an overlap which is higher than $96\%$. This approach can be effectively used for the reconstruction of the HOMO molecular orbital with good accuracy. It should be possible to extend this model to other initial orbitals and to other linear molecules. In the future, the inclusion of the nuclear dynamics could enable this model to image reaction dynamics such as the photo-dissociation of linear molecules. \section*{Acknowledgment}% We thank Misha Ivanov for stimulating discussions. R.P.J. and E.C. acknowledge support from the EU (Project ITN-2010-264951, CORINF). We also acknowledge the use of the computing center GMPCS of the LUMAT federation (FR LUMAT 2764).
\section{Introduction} A significant amount of network traffic \cite{games3, camera1} nowadays originates from the new multimedia services over the Internet (e.g. videoconferencing, video surveillance, VoIP, online games and P2P-TV), and a large increase in the number of users can be observed. Moreover, the expected future growth in the use of multimedia applications indicates that this tendency will continue in the next years. On the one hand, users demand better experiences in multimedia services; on the other hand, the heterogeneous characteristics of the different Internet access technologies make it necessary to take into account the Quality of Service (QoS) that they offer, especially when the accesses have to support real-time applications. At the same time, traffic behaviour may have a significant impact on network resources. The traffic behaviour of each service varies according to the nature of the information and its size, so different applications (e.g. video surveillance) generate bursts of traffic when a lot of information has to be sent in a short time. These bursts include different numbers of frames, and this may congest network devices when the amount of transmitted packets is significant with respect to the buffer size. On the other hand, some applications work to generate smooth traffic, with the aim of providing a certain QoS and a better user experience while not being detrimental to the network, at the cost of an increase in processing capacity. The size of the packets generated by these applications may vary between different Internet services: while some of them (e.g. VoIP) generate small packets of a few tens of bytes, others (e.g., videoconferencing and video surveillance) use large packets. By contrast, the buffer size and the available bandwidth are maintained at the same value, creating congestion problems on sensitive access links. In this context, when planning a network, the size of the access router buffer is an important design parameter, because there is a relationship between its size and link utilization: when the buffer is full and the amount of memory is big, it generates a significant latency increase (\textit{bufferbloat}). On the other hand, a very small amount of memory will increase packet loss during congestion periods. As a result, the buffer behaviour is an important parameter which should be considered when trying to improve link utilization. In recent years, many studies related to buffer size issues have been published, but they are mainly focused on backbone routers and TCP flows \cite{buffers5}. There are many techniques to improve link utilization, but most of them are focused on bandwidth. Nevertheless, bandwidth is not the only parameter to take into account. The buffer size and its behaviour are of primary importance when studying network traffic, because the buffer is used as a traffic regulation mechanism: it may modify some network parameters, such as delay or jitter, and may also drop packets. As a consequence, the influence of the buffer is important in order to reduce packet loss and offer a better user experience, especially when multimedia flows are being transmitted. The paper is organized as follows: Section II is a review of buffer dimensioning; Section III discusses the influence of the buffer on different real-time services, to highlight the need of taking the buffer behaviour into account when planning a network.
Section IV describes the specific congestion problems we are addressing in this paper and Section V covers the experimental results. The paper ends with the conclusions. \section{Buffer size issues} \subsection{Buffer sizing} Buffers are used to reduce packet loss by absorbing transient bursts of traffic when routers cannot forward them at that moment. This problem exists because the router has differences between the input and output rates that produce bottlenecks in the network, so packet loss may occur. Buffers are instrumental in keeping output links fully utilized during congestion times. With respect to buffer dimensioning, the accepted \textit{rule of thumb} was using the BDP (Bandwidth Delay Product) \cite{buffers2} as a method to obtain the buffer size needed at a router's output interface. This rule was proposed in $ 1994 $ \cite{buffers6} and it is given by $ B=C \times RTT $, where $ B $ is the buffer size, $ RTT $ is the average round-trip time and $ C $ the capacity of the router's network interface. It was experimentally obtained using at most $ 8 $ TCP flows on a $ 40 \; Mbps $ core link, so there is no recommendation for sizing buffers when there is a significant number of TCP flows with different $ RTTs $. In \cite{buffers7}, the authors proposed a reduced buffer size by dividing the BDP by the square root of the number of TCP flows, $ B=C \times RTT / \sqrt{N} $. This new approximation assumes that the number of TCP flows is large enough so as to consider them asynchronous and independent from each other. This model was called \textit{small buffer} \cite{buffers8}. In \cite{buffers10} the use of even smaller buffers, called \textit{tiny buffers}, was suggested, considering a size of some tens of packets. However, the use of this model presents a packet discarding probability of $ 10\%-20\% $. The model was obtained based on non-bursty traffic. However, some real-time IP flows are bursty, e.g., video streaming, so this leaves some uncertainty in buffer sizing. In \cite{buffers9} and \cite{buffers5}, combined TCP and UDP traffic in very small buffers was tested using non-bursty traffic, finding an anomalous region for UDP packets, where the loss probability grows when the buffer size increases. It has also been observed in the literature that the buffer size is measured in different ways: e.g., in \cite{buffers1} the routers of two manufacturers are compared, and one gives the information in packets, whereas the other one measures it in milliseconds, which is equivalent to bytes. As a consequence, the buffer behaviour is an interesting parameter which should be considered when trying to improve link utilization. \subsection{Buffer overflow with medium link utilization} The congestion of the buffer is not only caused by bandwidth scarcity; some problems can also be caused by network devices' implementations. As we can see in figure \ref{fig:buffer}, the time required to get into overflow is related to the filling rate, given by the input and output rates \cite{yo1}. Let $ R_{in} $ and $ R_{out} $ be the input and output rates of the buffer, respectively. We define $ R_{fill} $ as the rate at which the buffer fills when $ R_{in} $ is higher than $ R_{out} $ ($ R_{fill}=R_{in}-R_{out} $).
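As a minimal illustration of this relationship, the following Python sketch computes the time an initially empty FIFO buffer needs to overflow for a given $ R_{fill} $; the link speeds and the buffer size are example values only, chosen to be consistent with the scenarios presented later.
\begin{verbatim}
def time_to_overflow(buffer_bytes, r_in_bps, r_out_bps):
    # R_fill = R_in - R_out; an empty buffer overflows after
    # t = B / R_fill seconds (infinite if it never fills).
    r_fill = r_in_bps - r_out_bps
    if r_fill <= 0:
        return float("inf")
    return buffer_bytes * 8 / r_fill

# Example: a fast LAN (assumed 100 Mbps) feeding a 3.5 Mbps access
# link through a buffer of 40 packets of 1500 bytes.
print(time_to_overflow(40 * 1500, 100e6, 3.5e6))  # ~5 ms
\end{verbatim}
Note how switching the LAN to a higher rate while keeping the same access link shortens this time, which is precisely the effect discussed next.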
\begin{figure}[t] \centering \includegraphics[height=4in,width=0.45\textwidth]{Images/buffer.jpg} \caption{Principal characteristics of buffers.} \label{fig:buffer} \end{figure} So, when bursty traffic is generated in the network, a quick buffer filling rate may cause packet loss at certain moments, even when average link utilization is medium or even low. This may happen when the burst length is close to the buffer size, because the buffer easily gets into overflow; when the burst length is bigger than the buffer size, packet loss is unavoidable. It is known that some applications work to generate smooth traffic, but the aggregate Internet traffic shows a bursty behaviour at all time scales \cite{bursty1}. In this case, it is useful to manage traffic generation by taking advantage of the traffic-smoothing configuration of certain applications. In some scenarios, when network congestion problems arise, a common practice is to increase the bandwidth of the local network. For this reason many companies change their internal network devices (e.g. switching from the lowest to the highest network rates) trying to resolve congestion problems. But if $ R_{out} $ remains the same and $ R_{in} $ is switched to a higher rate, the buffer filling rate ($ R_{fill} $) is bigger in the new network. For this reason the buffer may get into overflow more quickly. Thus, in certain cases this speed increase may translate into a worse network response, so this improvement becomes a failure. All in all, the relationship between Internet access and local network speeds, and the relationship between buffer size and burst length, are in fact important parameters which cannot be neglected. In the next section, we present a number of tests with the aim of illustrating this phenomenon: some cases in which the combination of certain buffer sizes with bursty applications may cause network congestion and QoS problems. \begin{figure*}[t] \centering \subfigure[Real videoconferencing traffic capture scenario.]{\includegraphics[height=2.5in,width=0.47\textwidth]{Images/vidyo.jpg}} \subfigure[Real video surveillance traffic capture scenario.]{\includegraphics[height=2.5in,width=0.47\textwidth]{Images/test_traffic.jpg}} \caption{Real traffic capture scenarios.} \label{fig:test_traffic} \end{figure*} \section{Review of the influence of buffers on different services} Many scientific publications related to the influence of the buffer on different services and applications show how QoS is affected by the buffer behaviour, which is mainly determined by its size and management policies. In these cases, knowing the technical and functional characteristics of this device becomes important. This knowledge can be useful for applications and services in order to make decisions on the way the traffic is generated. In addition, packet management techniques can be applied, such as multiplexing a number of small packets into a big one, or fragmentation, according to each case \cite{gtc17}. The influence of the buffer on multimedia traffic is mainly studied by determining QoS characteristics based on well-known network parameters (e.g., jitter, packet loss, etc.); subjective quality evaluation is also used to determine the users' perception of certain services. The ITU-T E-Model \cite{emodel} \cite{mos} provides a procedure for calculating the Mean Opinion Score (MOS), which is useful in network transmission planning.
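For reference, a common form of this estimate is the R-factor to MOS conversion of the E-Model (ITU-T G.107), sketched below in Python; this is the standard textbook mapping and is given here only as an illustration of how such quality estimates are produced.
\begin{verbatim}
def r_to_mos(r):
    # R-factor to MOS mapping of the ITU-T E-Model (G.107)
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# The "medium quality" MOS range 3.60-4.03 used later corresponds
# roughly to R-factors between 70 and 80.
print(r_to_mos(70), r_to_mos(80))
\end{verbatim}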
Other authors \cite{games5} have developed a similar model for online games based on delay and jitter. The influence of the buffer on VoIP was studied in \cite{gtc17}, where three different router buffer policies (dedicated, big and time-limited buffer) were tested, also using two multiplexing schemes. Router buffer policies cause different packet loss behaviours, and also modify voice quality, measured by means of the R-factor. In the same paper a multiplexing method for VoIP flows is studied, thus reducing bandwidth at the cost of increasing packet size, which has an influence on packet loss, depending on the implementation and size of the router buffer. In this case the native VoIP traffic showed a good behaviour when using a small buffer measured in bytes, as small packets have less probability of being discarded than big ones. In \cite{gtc14} the authors present a simulation study of the influence of a multiplexing method on the parameters that define the subjective quality of online games, mainly delay, jitter and packet loss. The results show that small buffers present better characteristics for maintaining delay and jitter at adequate levels, at the cost of increasing packet loss. In addition, buffers whose size is measured in packets also increase packet loss. Many access network devices are designed for bulk data transfers \cite{p2p4}, such as mail, web or FTP services. However, other applications (e.g., P2P video streaming, online games, etc.) generate high rates of small packets, so the routers may experience problems to manage all the packets. In this case, their processing capacity can become a bottleneck if they have to manage too many packets per second \cite{games4}. The generation of high rates of small packets \cite{p2p3y18} may penalize the video packets, and consequently the peers' behaviour within the P2P structure will not be as expected. \section{Testbed and simulation results} In this section, the phenomena derived from the buffer filling rate are studied in three test environments, using different buffer characteristics. These scenarios have been simulated using NS-2 to analyze the effect of the buffer size in the presence of bursty traffic and the possible implications for the traffic of other applications. We will focus on a typical SME environment with one Internet access link. Different buffer sizes have been chosen according to the suggestions of the different works cited above and the real sizes of commercial network devices (e.g., Linksys WAP54G) \cite{yo1}. \begin{figure*}[ht] \centering \subfigure[First scenario: two and three camera connections.]{\includegraphics[height=2in,width=0.47\textwidth]{Images/test_camera.jpg}} \subfigure[Second scenario: two camera connections, videoconferencing and two VoIP calls.]{\includegraphics[height=2in,width=0.47\textwidth]{Images/test_40_70_v2.jpg}} \caption{Simulated scenarios.} \label{fig:scenarios} \end{figure*} \subsection{Traffic used} We have used three different multimedia traffic sources: videoconferencing, video surveillance and VoIP. The methodology for obtaining the traffic captures can be seen in figure \ref{fig:test_traffic}. Real traces of videoconferencing and video surveillance applications were first captured in real scenarios and then replayed in NS-2, following the same packet sizes and inter-packet times. VoIP traffic has been generated with the NS-2 CBR agent. For the videoconferencing traces, the Vidyo\texttrademark architecture was used.
Vidyo\texttrademark incorporates Adaptive Video Layering (AVL) technology, which permits dynamic video optimization for each endpoint, leveraging $ H.264 $ Scalable Video Coding (SVC)-based compression technology. The videoconferencing software was configured with $ 2 \; Mbps $ and $ 800 \times 450 $ resolution, and the camera was capturing a high-motion video (a rugby game). The video surveillance traffic traces have been obtained (see figure \ref{fig:test_traffic}) from a popular IP camera device (AXIS $ 2120 $). This kind of traffic is particularly bursty; table \ref{table:camera_packets} shows the relationship between the compression level and the number of packets per burst for two different resolutions when the camera bandwidth was set to $ 1 \; Mbps $. For all the tests we have chosen traces with $ 704 \times 576 $ resolution and a compression level of $ 32 \; kbytes $; the time between bursts is $ 0.278 s \pm 0.06 s $, the number of packets per burst is $ 26 $ and the packet size is $ 1500 \; bytes $. \begin{table}[t] \renewcommand{\arraystretch}{1.3} \caption{Number of packets per burst depending on camera compression.} \label{table:camera_packets} \centering \begin{tabular}{lcc} \hline \hline $ Resolution $ & $ Compression \, level $ & $ Packets \, per \, burst $\\ \hline & $ 50 \; Kbytes $ & $ 41 $ \\ \rowcolor[gray]{0.9}$ 704\times576 \; pixels $ & $ 32 \; Kbytes $ & $ 26 $ \\ & $ 16 \; Kbytes $ & $ 10 $ \\ \hline \multirow{2}{2.5cm}{$ 352\times288 \; pixels $} & $ 13 \; Kbytes $ & $ 9 $ \\ & $ 4 \; Kbytes $ & $ 3 $ \\ \hline \hline \end{tabular} \end{table} Voice traffic was generated according to the $ G.729 $ recommendation ($ 20 \, ms $ inter-packet time and $ 2 $ samples per packet), resulting in a packet size of $ 60 \; bytes $. As in a real scenario, flows do not start at the same time; there is a starting period in which all flows start randomly. Thus, bursts have not been forced: they appear due to the specific application behaviour or to the overlapping of the traffic of the different applications. Packet loss differs for each test because in some cases the overlapping of flows increases. Accurate average results have been obtained by repeating each test $ 40 $ times. Simulation time is $ 60 \; s $ for each test. \subsection{First scenario} The first scenario is shown in figure \ref{fig:scenarios}. In this case two and three different camera communications share the same Internet access, limited to $ 3.5 \; Mbps $. This bandwidth has been chosen in order to set the offered bandwidth to $ 85\% $ of the link capacity when three cameras are present. The main aim of the test is to determine the packet loss rate of the mixed bursty traffic for different buffer sizes. Although the offered traffic is roughly $ 57\% $ and $ 85\% $ of the link capacity for two and three cameras respectively, packet loss may be unacceptable, as shown in figure \ref{fig:camera}; the cause is that the number of packets per burst exceeds the capacity of the buffer when more than one connection is set. As an example, we can think of a buffer size of $ 30 $ packets. If the number of packets in a burst is $ 26 $, the buffer easily gets full whenever a burst arrives. Furthermore, the relationship between packet loss and the number of camera flows is not linear. This relationship decreases when the buffer size increases.
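The effect can be reproduced qualitatively with a crude FIFO simulation, sketched below in Python under strong simplifying assumptions (each burst is treated as arriving instantaneously, which is the worst case for the buffer, and bursts are jittered uniformly within each period); the traffic parameters are taken from the camera traces described above.
\begin{verbatim}
import random

def burst_loss(buffer_pkts, n_cameras, pkts_per_burst=26,
               burst_period=0.278, pkt_size=1500, link_bps=3.5e6,
               sim_time=60.0):
    # Packets the 3.5 Mbps access link can serve per second
    drain_pps = link_bps / (pkt_size * 8)
    # One burst per camera per period, jittered inside the period
    times = sorted(k * burst_period + random.uniform(0, burst_period)
                   for _ in range(n_cameras)
                   for k in range(int(sim_time / burst_period)))
    queue, last_t, lost, total = 0.0, 0.0, 0, 0
    for t in times:
        queue = max(0.0, queue - drain_pps * (t - last_t))  # drain
        last_t = t
        total += pkts_per_burst
        lost += max(0, pkts_per_burst - int(buffer_pkts - queue))
        queue = min(float(buffer_pkts), queue + pkts_per_burst)
    return lost / total

for b in (30, 60, 90):                    # buffer sizes in packets
    print(b, round(burst_loss(b, 3), 3))  # three cameras (~85% load)
\end{verbatim}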
\begin{figure}[t] \centering \includegraphics[height=3in,width=0.45\textwidth]{Images/cameras.pdf} \caption{Packet loss for two and three camera flows for different buffer sizes.} \label{fig:camera} \end{figure} \begin{figure*}[ht] \centering \subfigure[Packet loss by flows.]{\includegraphics[height=3in,width=0.45\textwidth]{Images/scenario_70_flows.pdf}} \subfigure[Packet loss distribution by flows.]{\includegraphics[height=3in,width=0.45\textwidth]{Images/scenario_70_flows_percent.pdf}} \caption{Packet loss when link utilization is $ 70\% $ for different buffer sizes.} \label{fig:scenario_70} \end{figure*} \begin{figure*}[ht] \centering \subfigure[Packet loss by flows.]{\includegraphics[height=3in,width=0.45\textwidth]{Images/scenario_40_flows.pdf}} \subfigure[Packet loss distribution by flows.]{\includegraphics[height=3in,width=0.45\textwidth]{Images/scenario_40_flows_percent.pdf}} \caption{Packet loss when buffer size is $ 40 $ packets for different values of link utilization.} \label{fig:scenario_40} \end{figure*} \subsection{Second scenario} The second scenario is shown in figure \ref{fig:scenarios}. Two IP camera flows, one videoconferencing session and two VoIP calls are used as test traffic, so the total bandwidth generated is $ 3.5 \; Mbps $. \subsubsection{General study} Two different tests have been performed in this scenario: in the first one, the Internet access link has been set to $ 5 \; Mbps $, so average link utilization is fixed ($ 70\% $) and different values of the buffer size are tested. In the second test, the buffer size of the Internet access router is fixed at $ 40 $ packets and the simulations are run using different values of the access bandwidth, and consequently different levels of link utilization, ranging from $ 50\% $ to $ 90\% $. The presented results are for the aggregate traffic of the three applications. For the first case, the packet loss per flow can be observed in figure \ref{fig:scenario_70}. Packet loss affects all the applications, so the presence of a bursty application (video surveillance) causes packet loss for all the coexisting applications, even for those generating constant bit rate traffic (VoIP). In addition, packet loss decreases when the buffer size is increased, because big buffers can absorb the bursts produced by the traffic mix. On the other hand, the packet loss distribution is not the same for all the buffers tested: although packet loss is small for big buffers, the percentage of losses increases for videoconferencing and VoIP. As expected, packet loss increases when link utilization grows in the case of the $ 40 $ packets buffer (figure \ref{fig:scenario_40}). Again, the packet loss distribution is not the same and the percentage of losses increases for videoconferencing and VoIP. \subsubsection{Deep study} In order to analyze in depth the effect of packet loss for each flow, we have selected a scenario with $ 70\% $ link utilization, a buffer size of $ 40 $ packets and the same applications mentioned above. In this specific scenario, tests were repeated $ 200 $ times to observe the effect of the overlapping flows cited above and their relationship with packet loss. The results are presented as a histogram in figure \ref{fig:histogram}, in which the y-axis represents the number of iterations and the x-axis shows the packet loss ranges.
\begin{figure*}[ht] \centering \includegraphics[height=3in,width=1\textwidth]{Images/histogram.jpg} \caption{Packet loss histogram for different multimedia flows with a buffer size of $ 40 $ packets and $ 70\% $ link utilization.} \label{fig:histogram} \end{figure*} Packet loss presents different values for different iterations because in some cases the flow overlapping is bigger. This phenomenon mainly harms sensitive traffic such as VoIP. Almost $ 80\% $ of the calls present a packet loss smaller than $ 0.75\% $. Packet loss reaches $ 3\% $ in $ 0.5\% $ of the cases (equivalent to $ 20 $ calls). On the other hand, the E-Model provides a means to estimate the subjective Mean Opinion Score (MOS) rating of voice quality over these planned network environments. For this reason, we have obtained the MOS for each iteration with the aim of comparing the quality in each case. The results are presented as a histogram. The chart in figure \ref{fig:mos} shows a significant number of calls with a \textit{medium quality} according to \cite{mos}, in a scenario with optimal conditions to obtain the \textit{best quality}. \begin{figure*}[ht] \centering \includegraphics[height=2.5in,width=\textwidth]{Images/mos.jpg} \caption{MOS histogram for 2 VoIP communications with a buffer size of $ 40 $ packets and $ 70\% $ link utilization.} \label{fig:mos} \end{figure*} Additional useful information can be obtained from the cumulative MOS probability (see figure \ref{fig:mos_0-40_1}), in which the MOS has been calculated for $ 5 $ different values of the network delay. The results show that about $ 98\% $ of the calls can obtain a MOS of $ 3.6 $, which is equivalent to \textit{medium quality}. A \textit{high quality} can never be reached, even if the network delay is $ 0 \, ms $. The cumulative MOS probability decreases quickly within the \textit{medium quality} range ($ 3.60-4.03 $) and cannot reach a higher quality. Figure \ref{fig:mos_0-40_1} illustrates this phenomenon. \begin{figure*}[ht] \centering \includegraphics[height=3in,width=\textwidth]{Images/mos_0-40_1.jpg} \caption{Cumulative MOS probability for different network delays with a buffer size of $ 40 $ packets and $ 70\% $ link utilization.} \label{fig:mos_0-40_1} \end{figure*} \section{Conclusion} This paper has studied packet loss caused by the router buffer in the presence of applications generating bursty traffic, and its influence on VoIP subjective quality. Tests in two scenarios using NS-2 with real traces of different multimedia applications were presented, always with medium link utilization. The buffer size has been identified as a critical parameter for network planning in these environments. The reason is that the relationship between buffer size and burst length has to be coherent in order to accommodate all the packets, thus avoiding packet discarding when the number of packets in a burst exceeds the capacity of the buffer. In addition, it has been observed that bursty traffic affects the other applications sharing the same link. In order to show the effect of the bursty nature of these applications, we have measured the MOS of concurrent VoIP calls. The results show that VoIP calls are only able to obtain a \textit{medium quality}, failing to reach better qualities even when link utilization is set to $ 70\% $.
\section*{Acknowledgment} This work has been partially financed by the European Social Fund in collaboration with the Government of Arag\'{o}n, CPUFLIPI Project (MICINN TIN2010-17298), Ibercaja Obra Social, Project of C\'atedra Telef\'onica, University of Zaragoza, Banco Santander and Fundaci\'on Carolina. \bibliographystyle{IEEEtran}
\section{Introduction} The domain shift problem has been drawing increasing attention in recent years \cite{Hoffman_cycada2017, zhu2017unpaired, Tsai_adaptseg_2018, sankaranarayanan2017unsupervised, ghifary2015domain, StarGAN2018}. In particular, there are two tasks that are of interest in the computer vision community. One is the \emph{domain adaptation} problem, where the goal is to learn a model for a given task from a label-rich data domain (\ie, the source domain) that performs well in a label-scarce data domain (\ie, the target domain). The other one is the \emph{image translation} problem, where the goal is to transfer images in the source domain to mimic the image style in the target domain. \begin{figure}[t] \includegraphics[width=\linewidth]{comparision_tradiational_intemediate.png} \caption{Illustration of domain flow generation. Traditional image translation methods directly map the image from the source domain to the target domain, while our DLOW model is able to produce a sequence of intermediate domains shifting from the source domain to the target domain.} \label{DG_Transfer} \end{figure} Generally, most existing works focus on the target domain only. They aim to learn models that fit the target data distribution well, \eg, achieving good classification accuracy in the target domain, or transferring source images into the target style. In this work, we are instead interested in the intermediate domains between source and target domains. We present a new \emph{domain flow generation} (DLOW) model, which is able to translate images from the source domain into an arbitrary intermediate domain between source and target domains. As shown in Fig \ref{DG_Transfer}, by translating a source image along the domain flow from the source domain to the target domain, we obtain a sequence of images that naturally characterize the distribution shift from the source domain to the target domain. The benefits of our DLOW model are two-fold. First, those intermediate domains are helpful for bridging the distribution gap between the two domains. By translating images into intermediate domains, those translated images can be used to ease the domain adaptation task. We show that traditional domain adaptation methods can be boosted to achieve better performance in the target domain with intermediate domain images. Moreover, the obtained models also exhibit good generalization ability on new datasets that are not seen in the training phase, benefiting from the diverse intermediate domain images. Second, our DLOW model can be used for style generalization. Traditional image-to-image translation works~\cite{zhu2017unpaired, pix2pix2017, pmlr-v70-kim17a, NIPS2017_6672} mainly focus on learning a deterministic one-to-one mapping that transfers a source image into the target style. In contrast, our DLOW model allows us to translate a source image into an intermediate domain that is related to multiple target domains. For example, when performing photo-to-painting transfer, instead of obtaining a Monet or Van Gogh style, our DLOW model could produce a mixed style of Van Gogh, Monet, etc. Such a mixture can be customized in the inference phase by simply adjusting an input vector that encodes the relatedness to different domains. We implement our DLOW model based on CycleGAN~\cite{zhu2017unpaired}, which is one of the state-of-the-art unpaired image-to-image translation methods. We augment CycleGAN to include an additional domainness variable as input.
On one hand, the domainness variable is injected into the translation network using the conditional instance normalization layer to affect the style of the output images. On the other hand, it is also used to weight the discriminators to balance the relatedness of the output images to different domains. For multiple target domains, the domainness variable is extended as a vector containing the relatedness to all target domains. Extensive results on benchmark datasets demonstrate the effectiveness of our proposed model for domain adaptation and style generalization. \section{Related Work} \textbf{Image to Image Translation:} Our work is related to the image-to-image translation works. The image-to-image translation task aims at translating images from one domain into another. Inspired by the success of Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative}, many works have been proposed to address the image-to-image translation based on GANs~\cite{pix2pix2017, wang2018pix2pixHD,zhu2017unpaired,NIPS2017_6672,lu2017conditional,he2017arbitrary,zhu2017toward,huang2018munit,almahairi2018augmented, StarGAN2018, DRIT, yi2017dualgan,lin2018conditional}. The early works~\cite{pix2pix2017, wang2018pix2pixHD} assume that paired images between two domains are available, while the recent works such as CycleGAN~\cite{zhu2017unpaired}, DiscoGAN~\cite{pmlr-v70-kim17a} and UNIT~\cite{NIPS2017_6672} are able to train networks without using paired images. However, those works focus on learning deterministic image-to-image mappings. Once the model is learnt, a source image can only be transferred to a fixed target style. A few recent works~\cite{lu2017conditional,he2017arbitrary,zhu2017toward,huang2018munit,almahairi2018augmented, StarGAN2018, DRIT, yi2017dualgan, lin2018conditional, lample2017fader} concentrate on learning a unified model to translate images into multiple styles. These works can be divided into two categories according to the controllability of the target styles. The first category, such as \cite{huang2018munit, almahairi2018augmented}, realizes multimodal translation by sampling different style codes which are encoded from the target style images. However, those works focus on modelling intra-domain diversity, while our DLOW model aims at characterizing the inter-domain diversity. Moreover, they cannot explicitly control the translated target style using the input codes. The second category, such as \cite{StarGAN2018, lample2017fader}, assigns domain labels to different target domains, and these labels are proven to be effective in controlling the translation direction. Among those, \cite{lample2017fader} shows that one can interpolate between target domains by continuously shifting the domain labels to change the extent of the contribution of the different target domains. However, these methods only use discrete binary domain labels during training. Unlike the above works, the domainness variable proposed in this work is derived from the data distribution distance, and is used explicitly to regularize the style of the output images during training. \textbf{Domain Adaptation and Generalization:} Our work is also related to the domain adaptation and generalization works.
Domain adaptation aims to utilize a labeled source domain to learn a model that performs well on an unlabeled target domain~\cite{ganin2015unsupervised, gopalan2011domain, fernando2013unsupervised, tzeng2017adversarial, jhuo2012robust, baktashmotlagh2013unsupervised, kodirov2015unsupervised, gong2012geodesic, chen2018domain, zhang2018collaborative, wulfmeier2018incremental}. Domain generalization is a similar problem, which aims to learn a model that can be generalized to an unseen target domain by using multiple labeled source domains~\cite{pmlr-v28-muandet13, ghifary2015domain, niu2015visual, motiian2017unified, niu2015multi,Li2018Domain, li2018deep,li2018domain_gene}. Our work is partially inspired by \cite{gopalan2011domain,gong2012geodesic,cui2014flowing}, which have shown that the intermediate domains between source and target domains are useful for addressing the domain adaptation problem. They represent each domain as a subspace or covariance matrix, and then connect them on the corresponding manifold to model intermediate domains. Different from those works, we model the intermediate domains by directly translating images at the pixel level. This allows us to easily improve the existing deep domain adaptation models by using the translated images as training data. Moreover, our model can also be applied to image-level domain generalization by generating mixed-style images. Recently, there has been an increasing interest in applying domain adaptation techniques for semantic segmentation from synthetic data to the real scenario \cite{hoffman2016fcns, Hoffman_cycada2017, chen2018road, zou2018unsupervised, luo2018taking, huang2018domain, dundar2018domain, pan2018IBN-Net, saleh2018effective, sankaranarayanan2018learning, hong2018conditional, peng2018visda, zhang2018fully, Tsai_adaptseg_2018, murez2017image, saito2017maximum, sankaranarayanan2017unsupervised, zhu2018penalizing, chen2018learning}. Most of those works conduct the domain adaptation by adversarial training on the feature level with different priors. The recent Cycada \cite{Hoffman_cycada2017} also shows that it is beneficial to first perform pixel-level domain adaptation by transferring source images into the target style based on image-to-image translation methods like CycleGAN \cite{zhu2017unpaired}. However, those methods address domain shift by adapting to only the target domain. In contrast, we aim to perform pixel-level adaptation by transferring source images to a flow of intermediate domains. Moreover, our model can also be used to further improve the existing feature-level adaptation methods. \section{Domain Flow Generation} \subsection{Problem Statement} In the domain shift problem, we are given a source domain $\cS$ and a target domain $\cT$ containing samples from two different distributions $P_S$ and $P_T$, respectively. Denoting a source sample as $\x^s \in \cS$ and a target sample as $\x^t \in \cT$, we have $\x^s \sim P_S$, $\x^t \sim P_T$, and $P_S \neq P_T$. Such distribution mismatch usually leads to a significant performance drop when applying the model trained on $\cS$ to $\cT$. Many works have been proposed to address the domain shift for different vision applications.
A group of recent works aim to reduce the distribution difference on the feature level by learning domain-invariant features~\cite{ganin2015unsupervised, gopalan2011domain, kodirov2015unsupervised, gong2012geodesic}, while others work on the image level to transfer source images to mimic the target domain style~\cite{zhu2017unpaired,NIPS2017_6672,zhu2017toward,huang2018munit,almahairi2018augmented, StarGAN2018}. In this work, we also propose to address the domain shift problem on the image level. However, different from existing works that focus on transferring source images into only the target domain, we instead transfer them into all intermediate domains that connect the source and target domains. This is partially motivated by the previous works \cite{gopalan2011domain,gong2012geodesic,cui2014flowing}, which have shown that the intermediate domains between source and target domains are useful for addressing the domain adaptation problem. In what follows, we first briefly review the conventional image-to-image translation model CycleGAN. Then, we formulate the problem of modeling intermediate domains based on the data distribution distance. Next, we present our DLOW model based on the CycleGAN model. We then show the benefits of our DLOW model with two applications: 1) improving existing domain adaptation models with the images generated by the DLOW model, and 2) transferring images into arbitrarily mixed styles when there are multiple target domains. \subsection{The CycleGAN Model} We build our model based on the state-of-the-art CycleGAN model~\cite{zhu2017unpaired}, which is proposed for unpaired image-to-image translation. Formally, the CycleGAN model learns two mappings between $\cS$ and $\cT$, \ie, $G_{ST}: \cS\rightarrow\cT$ which transfers the images in $\cS$ into the style of $\cT$, and $G_{TS}: \cT\rightarrow\cS$ which acts in the inverse direction. We take the $\cS\rightarrow\cT$ direction as an example to explain CycleGAN. To transfer source images into the target style and also preserve the semantics, CycleGAN employs an adversarial training module and a reconstruction module, respectively. In particular, the adversarial training module is used to align the image distributions of the two domains, such that the style of the mapped images matches the target domain. Let us denote the discriminator as $D_T$, which attempts to distinguish the translated images from the target images. Then the objective function of the adversarial training module can be written as, \begin{eqnarray} \label{eqn:cyclegan_adv} \min_{G_{ST} }\max_{D_T} \!\!\!\!&& \!\!\!\!\mathbb{E}_{\x^t\sim{P_T}}\left[\log(D_{T}(\x^t))\right]\\ \!\!\!\!&+& \!\!\!\!\mathbb{E}_{\x^s\sim{P_S}}\left[\log(1-D_{T}(G_{ST}(\x^s)))\right].\nonumber \end{eqnarray} Moreover, the reconstruction module ensures that the mapped image $G_{ST}(\x^s)$ preserves the semantic content of the original image $\x^s$. This is achieved by enforcing a cycle consistency loss such that $G_{ST}(\x^s)$ is able to recover $\x^s$ when being mapped back to the source style, \ie, \begin{eqnarray} \min_{G_{ST}} \quad \mathbb{E}_{\x^s\sim{P_S}}\left[\|G_{TS}(G_{ST}(\x^s))-\x^s\|_{1}\right]. \end{eqnarray} Similar modules are applied to the $\cT\rightarrow\cS$ direction. By jointly optimizing all modules, the CycleGAN model is able to transfer source images into the target style and vice versa. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{optimized_path.png} \caption{Illustration of domain flow.
Many possible paths (the green dashed lines) connect the source and target domains, while the domain flow is the shortest one (the red line). There are multiple domains (the blue dashed line) keeping the expected relative distances to the source and target domains. An intermediate domain (the blue dot) is the point on the domain flow that keeps the right distances to the two domains.} \label{optimized_path} \vspace{-10pt} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[height=0.23\paperheight]{camera_ready_DLOWonetoone_structure_2.png} \label{fig_domaininter} \setlength{\abovecaptionskip}{0pt} \caption{The overview of our DLOW model: the generator takes the domainness $z$ as additional input to control the image translation and to reconstruct the source image; the domainness $z$ is also used to weight the two discriminators.} \vspace{-10pt} \label{fig_domaininter:fig} \end{figure*} \subsection{Modeling Intermediate Domains}\label{sec:intermediate} Intermediate domains have been shown to be helpful for domain adaptation \cite{gopalan2011domain,gong2012geodesic,cui2014flowing}, where they are modeled as a geodesic path on a Grassmannian or Riemannian manifold. Inspired by those works, we also characterize the domain shift using intermediate domains that connect the source and target domains. Different from those works, we directly operate at the image level, \ie, translating source images into different styles corresponding to intermediate domains. In this way, our method can be easily integrated with deep learning techniques for enhancing the cross-domain generalization ability of models. In particular, let us denote an intermediate domain as $\cM^{(z)}$, where $z \in [0, 1]$ is a continuous variable which models the relatedness to the source and target domains. We refer to $z$ as the domainness of the intermediate domain. When $z = 0$, the intermediate domain $\cM^{(z)}$ is identical to the source domain $\cS$; and when $z = 1$, it is identical to the target domain $\cT$. By varying $z$ in the range of $[0, 1]$, we thus obtain a sequence of intermediate domains that flow from $\cS$ to $\cT$. There are many possible paths to connect the source and target domains. As shown in Fig \ref{optimized_path}, we assume there is a manifold of domains, where a domain with a given data distribution can be seen as a point residing on the manifold. We expect the domain flow $\cM^{(z)}$ to be the shortest geodesic path connecting $\cS$ and $\cT$. Moreover, given any $z$, the distance from $\cS$ to $\cM^{(z)}$ should also be proportional, by the value of $z$, to the distance between $\cS$ and $\cT$. Denoting the data distribution of $\cM^{(z)}$ as $P_M^{(z)}$, we expect that, \begin{equation} \begin{aligned} \frac{dist\left(P_S, P_M^{(z)}\right)}{dist\left(P_T, P_M^{(z)}\right)}=\frac{z}{1-z}, \end{aligned} \label{equation_domainness} \end{equation} where $dist(\cdot, \cdot)$ is a valid distance measurement over two distributions. Thus, generating an intermediate domain $\cM^{(z)}$ for a given $z$ becomes finding the point satisfying Eq. (\ref{equation_domainness}) that is closest to $\cS$ and $\cT$, which leads to minimizing the following loss, \begin{eqnarray} \cL = (1-z) \cdot dist\left(P_S, P_M^{(z)}\right) + z\cdot dist\left(P_T, P_M^{(z)}\right). \label{eqn:intermediate_domain} \end{eqnarray} As shown in \cite{arjovsky2017wasserstein}, many types of distance have been exploited for image generation and image translation.
The adversarial loss in Eq.~(\ref{eqn:cyclegan_adv}) can be seen as a lower bound of the Jensen-Shannon divergence. We also use it to measure distribution distance in this work. \subsection{The DLOW Model} We now present our DLOW model to generate intermediate domains. Given a source image $\x^s \sim P_S$, and a domainness variable $z \in [0, 1]$, the task is to transfer $\x^s$ into the intermediate domain $\cM^{(z)}$ with the distribution $P_M^{(z)}$ that minimizes the objective in Eq.~(\ref{eqn:intermediate_domain}). We take the $\cS\rightarrow\cT$ direction as an example, and the other direction can be similarly applied. In our DLOW model, the generator $G_{ST}$ no longer aims to directly transfer $\x^s$ to the target domain $\cT$, but to move $\x^s$ towards it. The interval of such a move is controlled by the domainness variable $z$. Let us denote $\cZ = [0, 1]$ as the domain of $z$, then the generator in our DLOW model can be represented as $G_{ST}(\x^s, z): \cS \times \cZ \rightarrow\cM^{(z)}$ where the input is a joint space of $\cS$ and $\cZ$. \textbf{Adversarial Loss:} As discussed in Section~\ref{sec:intermediate}, we deploy the adversarial loss as the distribution distance measurement to control the relatedness of an intermediate domain to the source and target domains. Specifically, we introduce two discriminators, $D_{S}(\x)$ to distinguish $\cM^{(z)}$ and $\cS$, and $D_{T}(\x)$ to distinguish $\cM^{(z)}$ and $\cT$, respectively. Then, the adversarial losses between $\cM^{(z)}$ and $\cS$, and between $\cM^{(z)}$ and $\cT$, can be written respectively as, \begin{eqnarray} \!\!\!\!\cL_{adv} (\!\!\!\!&G_{ST}&\!\!\!\!, D_S)=\mathbb{E}_{\x^s\sim{P_S}}\left[\log(D_{S}(\x^s))\right] \\ \!\!\!\!&+&\!\!\!\!\mathbb{E}_{\x^s\sim{P_S}}\left[\log(1-D_{S}(G_{ST}(\x^s, z)))\right] \nonumber \\ \!\!\!\!\cL_{adv} (\!\!\!\!&G_{ST}&\!\!\!\!, D_T)=\mathbb{E}_{\x^t\sim{P_T}}\left[\log(D_{T}(\x^t))\right] \\ \!\!\!\!&+&\!\!\!\!\mathbb{E}_{\x^s\sim{P_S}}\left[\log(1-D_{T}(G_{ST}(\x^s, z)))\right]. \nonumber \end{eqnarray} By using the above losses to model $dist(P_S, P_M^{(z)})$ and $dist(P_T, P_M^{(z)})$ in Eq.~(\ref{eqn:intermediate_domain}), we derive the following loss, \begin{eqnarray} \cL_{adv} = (1-z) \cL_{adv}(G_{ST}, D_S) + z \cL_{adv}(G_{ST}, D_T). \end{eqnarray} \textbf{Image Cycle Consistency Loss:} As in CycleGAN, we also apply a cycle consistency loss to ensure that the semantic content is well-preserved in the translated image. Let us denote the generator in the other direction as $G_{TS}(\x^t, z): \cT \times \cZ \rightarrow\cM^{(1-z)}$, which transfers a sample $\x^t$ from the target domain towards the source domain by an interval of $z$. Since $G_{TS}$ acts in an inverse direction to $G_{ST}$, we can use it to recover $\x^s$ from the translated version $G_{ST}(\x^s, z)$, which gives the following loss, \begin{equation} \begin{aligned} L_{cyc} = &\mathbb{E}_{\x^s\sim{P_s}}\left[\|G_{TS}(G_{ST}(\x^s,z),z)-\x^s\|_{1}\right]. \end{aligned} \end{equation} \textbf{Full Objective:} Integrating the losses defined above, the full objective can be defined as, \begin{equation} \begin{aligned} &\cL=\cL_{adv} + \lambda_1\cL_{cyc}, \end{aligned} \end{equation} where $\lambda_{1}$ is a hyper-parameter used to balance the two losses in the training process. A similar loss can be defined for the other direction $\cT\rightarrow\cS$. Due to the usage of the adversarial loss $\cL_{adv}$, the training is performed in an alternating manner.
We first minimize the full objective with regard to the generators, and then maximize it with regard to the discriminators. \textbf{Implementation: } We illustrate the network structure of our DLOW model in Fig \ref{fig_domaininter:fig}. First, the domainness variable $z$ is taken as the input of the generator $G_{ST}$. This is implemented with the Conditional Instance Normalization (CN) layer~\cite{almahairi2018augmented,huang2017adain}. We first use one deconvolution layer to map the domainness variable $z$ to a vector of dimension $(1,16,1,1)$, and then use this vector as the input for the CN layer. Moreover, the domainness variable also plays the role of weighting the discriminators to balance the relatedness of the generated images to the different domains. It is also used as input in the image cycle consistency module. During the training phase, we randomly generate the domainness parameter $z$ for each input image. Inspired by \cite{zhang2018mixup}, we force the domainness variable $z$ to obey the beta distribution, i.e. $f(z,\alpha, \beta)=\frac{1}{B(\alpha,\beta)}z^{\alpha-1}(1-z)^{\beta-1}$, where $\beta$ is fixed as $1$, and $\alpha$ is a function of the training step $\alpha=e^{\frac{t-0.5T}{0.25T}}$ with $t$ being the current iteration and $T$ being the total number of iterations. In this way, $z$ is more likely to be sampled as small values at the beginning, and gradually shifts to larger values at the end, which gives slightly more stable training than uniform sampling. \subsection{Boosting Domain Adaptation Models}\label{sec:boostadaptation} With the DLOW model, we are able to translate each source image $\x^s$ into an arbitrary intermediate domain $\cM^{(z)}$. Let us denote the source dataset as $\cS = \{(\x^s_i, y_i)|_{i=1}^n\}$ where $y_i$ is the label of $\x^s_i$. By feeding each of the images $\x^s_i$ combined with a $z_{i}$ randomly sampled from the uniform distribution $\cU(0,1)$, we then obtain a translated dataset $\tilde{\cS} = \{(\tx^s_i, y_i)|_{i=1}^n\}$ where $\tx^s_i = G_{ST}(\x^s_i, z_i)$ is the translated version of $\x^s_i$. The images in $\tilde{\cS}$ spread along the domain flow from the source to the target domain, and therefore become much more diverse. Using $\tilde{\cS}$ as the training data is helpful for learning domain-invariant models for computer vision tasks. In Section~\ref{sec:exp_da}, we demonstrate that a model trained on $\tilde{\cS}$ achieves good performance for the cross-domain semantic segmentation problem. Moreover, the translated dataset $\tilde{\cS}$ can also be used to boost the existing adversarial training based domain adaptation approaches. Images in $\tilde{\cS}$ fill the gap between the source and target domains, and thus ease the domain adaptation task. Taking semantic segmentation as an example, a typical way is to append a discriminator to the segmentation model, which is used to distinguish the source and target samples. Using the adversarial training strategy to optimize the discriminator and the segmentation model, the segmentation model is trained to be more domain-invariant. As shown in Fig~\ref{domainnessadaptseg}, we replace the source dataset ${\cS}$ with the translated version $\tilde{\cS}$, and apply a weight $\sqrt{1-z_i}$ to the adversarial loss. The motivation is as follows: for each sample $\tx^s_i$, if the domainness $z_i$ is higher, the sample is closer to the target domain, so the weight of the adversarial loss can be reduced; otherwise, we should enhance the loss weight.
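As a minimal sketch of these two ingredients, the Python snippet below samples the domainness from the scheduled beta distribution and computes a domainness-weighted generator loss. The non-saturating form of the adversarial loss and the probability-valued discriminator outputs are simplifying assumptions of the sketch, not the exact training code.
\begin{verbatim}
import numpy as np

def sample_domainness(t, T):
    # z ~ Beta(alpha, 1) with alpha = exp((t - 0.5T) / (0.25T)):
    # small z is favoured early in training, large z towards the end.
    alpha = np.exp((t - 0.5 * T) / (0.25 * T))
    return np.random.beta(alpha, 1.0)

def generator_adv_loss(d_s_fake, d_t_fake, z, eps=1e-8):
    # Domainness-weighted generator loss (non-saturating form):
    # the translated image should look source-like to D_S with
    # weight (1 - z) and target-like to D_T with weight z, which
    # places it in the intermediate domain M^(z).
    loss_s = -np.log(d_s_fake + eps)
    loss_t = -np.log(d_t_fake + eps)
    return (1.0 - z) * loss_s + z * loss_t

z = sample_domainness(t=1000, T=100000)
print(z, generator_adv_loss(0.3, 0.6, z))
\end{verbatim}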
\begin{figure} \includegraphics[width=\linewidth]{DomainnessAdaptSegNet.png} \caption{Illustration of boosting the domain adaptation model for cross-domain semantic segmentation with the DLOW model. Intermediate domain images are used as the source dataset, and the adversarial loss is weighted by the domainness.} \label{domainnessadaptseg} \vspace{-10pt} \end{figure} \subsection{Style Generalization}\label{sec:stylegeneralization} Most existing image-to-image translation works learn a deterministic mapping between two domains. After the model is learnt, source images can only be translated to a fixed style. In contrast, our DLOW model takes a random $z$ to translate images into various styles. When multiple target domains are provided, it is also able to transfer the source image into a mixture of different target styles. In other words, we are able to generalize to an unseen intermediate domain that is related to the existing domains. In particular, suppose we have $K$ target domains, denoted as $\cT_1, \ldots, \cT_K$. Accordingly, the domainness variable $z$ is expanded into a $K$-dim vector $\z = [z_1, \ldots, z_K]'$ with $\sum_{k=1}^Kz_k = 1$. Each element $z_k$ represents the relatedness to the $k$-th target domain. To map an image from the source domain to the intermediate domain defined by $\z$, we need to optimize the following objective, \begin{eqnarray} \cL = \sum_{k=1}^K z_{k} \cdot dist(P_M, P_{T_k}), \quad\mbox{s.t.}\quad \sum_{k=1}^K z_k = 1, \end{eqnarray} where $P_M$ is the distribution of the intermediate domain, and $P_{T_k}$ is the distribution of $\cT_k$. The network structure can be easily adjusted from our DLOW model to optimize the above objective. We leave the details in the Supplementary due to the space limitation.
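As a small illustration of this objective (our own sketch, with hypothetical names; the per-target adversarial distances would be estimated by $K$ discriminators exactly as in the two-domain case), the mixed loss is a convex combination over the targets:
\begin{verbatim}
import numpy as np

def mixed_style_loss(adv_losses, z):
    # adv_losses[k] estimates dist(P_M, P_{T_k}); z lies on the simplex.
    z = np.asarray(z, dtype=float)
    assert z.min() >= 0 and np.isclose(z.sum(), 1.0)
    return float(np.dot(z, adv_losses))

def sample_domainness_vector(K, rng=np.random.default_rng()):
    # One way to draw a random z on the K-simplex; using
    # Dirichlet(1, ..., 1) here is our assumption.
    return rng.dirichlet(np.ones(K))
\end{verbatim}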
\begin{figure*}[t] \centering \begin{subfigure}[c]{0.19\textwidth} \centering \includegraphics[width=\linewidth]{label0.png} \caption{$z=0$} \label{fig_inter:sfig1} \end{subfigure} \begin{subfigure}[c]{0.19\textwidth} \centering \includegraphics[width=\linewidth]{label3.png} \caption{$z=0.3$} \label{fig_inter:sfig2} \end{subfigure} \begin{subfigure}[c]{0.19\textwidth} \centering \includegraphics[width=\linewidth]{label6.png} \caption{$z=0.6$} \label{fig_inter:sfig3} \end{subfigure} \begin{subfigure}[c]{0.19\textwidth} \centering \includegraphics[width=\linewidth]{label8.png} \caption{$z=0.8$} \label{fig_inter:sfig4} \end{subfigure} \begin{subfigure}[c]{0.19\textwidth} \centering \includegraphics[width=\linewidth]{label10.png} \caption{$z=1$} \label{fig_inter:sfig5} \end{subfigure} \caption{Examples of intermediate domain images from GTA5 to Cityscapes. As the domainness variable increases from 0 to 1, the styles of the translated images shift gradually from the synthetic GTA5 style to the realistic Cityscapes style.} \label{fig_inter:fig} \end{figure*} \begin{table*}[h] \setlength{\tabcolsep}{3pt} \centering \resizebox{\textwidth}{20mm} { \begin{tabular}{c|ccccccccccccccccccc|c} \hline \multicolumn{21}{c}{GTA5 $\rightarrow$ Cityscapes}\\ \hline Method&\rotatebox{90}{road}&\rotatebox{90}{sidewalk}&\rotatebox{90}{building}&\rotatebox{90}{wall}&\rotatebox{90}{fence}&\rotatebox{90}{pole}&\rotatebox{90}{traffic light}&\rotatebox{90}{traffic sign}&\rotatebox{90}{vegetation}&\rotatebox{90}{terrain}&\rotatebox{90}{sky}&\rotatebox{90}{person}&\rotatebox{90}{rider}&\rotatebox{90}{car}&\rotatebox{90}{truck}&\rotatebox{90}{bus}&\rotatebox{90}{train}&\rotatebox{90}{motorbike}&\rotatebox{90}{bicycle}&mIoU\\ \hline NonAdapt\cite{Tsai_adaptseg_2018}&75.8&16.8&77.2&12.5&\textbf{21.0}&25.5&\textbf{30.1}&20.1&81.3&24.6&70.3&53.8&\textbf{26.4}&49.9&17.2&25.9&\textbf{6.5}&\textbf{25.3}&\textbf{36.0}&36.6\\ CycleGAN\cite{Hoffman_cycada2017}&81.7&27.0&\textbf{81.7}&\textbf{30.3}&12.2&28.2&25.5&27.4&82.2&\textbf{27.0}&77.0&\textbf{55.9}&20.5&\textbf{82.8}&30.8&38.4&0.0&18.8&32.3&41.0\\ \hline DLOW($z=1$)&\textbf{88.5}&\textbf{33.7}&80.7&26.9&15.7&27.3&27.7&\textbf{28.3}&80.9&26.6&74.1&52.6&25.1&76.8&30.5&27.2&0.0&15.7&\textbf{36.0}&40.7\\ DLOW &87.1&33.5&80.5&24.5&13.2&\textbf{29.8}&29.5&26.6&\textbf{82.6}&26.7&\textbf{81.8}&\textbf{55.9}&25.3&78.0&\textbf{33.5}&\textbf{38.7}&0.0&22.9&34.5&\textbf{42.3}\\ \hline \end{tabular} } \caption{\label{table_pixel} Results of semantic segmentation on the Cityscapes dataset based on the DeepLab-v2 model with a ResNet-101 backbone, using the images translated with different models. The results are reported as mIoU over 19 categories. The best result is denoted in bold.} \vspace{-10pt} \end{table*} \begin{table} \setlength{\tabcolsep}{3pt} \centering \begin{tabular}{c|cccc} \hline &Cityscapes&KITTI&WildDash&BDD100K\\ \hline Original~\cite{Tsai_adaptseg_2018}&42.4&30.7&18.9&37.0\\ DLOW&\textbf{44.8}&\textbf{36.6}&\textbf{24.9}&\textbf{39.1}\\ \hline \end{tabular} \caption{\label{table_generalization} Comparison of the performance of AdaptSegNet~\cite{Tsai_adaptseg_2018} when using original source images and intermediate domain images translated with our DLOW model for semantic segmentation under domain adaptation (1st column) and domain generalization (2nd to 4th columns) scenarios. The results are reported as mIoU over 19 categories. The best result is denoted in bold.} \vspace{-10pt} \end{table} \section{Experiments} In this section, we demonstrate the benefits of our DLOW model with two tasks. In the first task, we address the domain adaptation problem, and train our DLOW model to generate intermediate domain samples to boost the domain adaptation performance. In the second task, we consider the style generalization problem, and train our DLOW model to transfer images into new styles that are unseen in the training data. \subsection{Domain Adaptation and Generalization}\label{sec:exp_da} \subsubsection{Experiments Setup} For the domain adaptation problem, we follow~\cite{hoffman2016fcns, Hoffman_cycada2017, chen2018road, zou2018unsupervised} to conduct experiments on urban scene semantic segmentation, learning from synthetic data to the real scenario. The GTA5 dataset~\cite{Richter_2016_ECCV} is used as the source domain, while the Cityscapes dataset~\cite{Cordts2016Cityscapes} serves as the target domain.
Moreover, we also evaluate the generalization ability of the learnt segmentation models to unseen domains, for which we take the KITTI~\cite{Geiger2012CVPR}, WildDash~\cite{Zendel_2018_ECCV} and BDD100K~\cite{yu2018bdd100k} datasets as additional unseen datasets for evaluation. We also conduct experiments using the SYNTHIA dataset~\cite{Ros_2016_CVPR} as the source domain, and provide the results in the Supplementary. \textbf{Cityscapes} is a dataset consisting of urban scene images taken in several European cities. We use the $2,993$ training images without annotation as unlabeled target samples in the training phase, and 500 validation images with annotation for evaluation, which are densely labelled with 19 classes. \textbf{GTA5} is a dataset consisting of $24,966$ densely labelled synthetic frames generated from the computer game whose scenes are based on the city of Los Angeles. The annotations of the images are compatible with Cityscapes. \textbf{KITTI} is a dataset consisting of images taken in the mid-size city of Karlsruhe. We use 200 densely labeled validation images compatible with Cityscapes. \textbf{WildDash} is a dataset covering images from different sources, different environments (place, weather, time and so on) and different camera characteristics. We use 70 labeled validation images whose annotations are compatible with Cityscapes. \textbf{BDD100K} is a driving dataset covering diverse images taken in the US, whose label maps use the training indices specified in Cityscapes. We use $1,000$ densely labeled images for validation in our experiment. In this task, we first train our proposed DLOW model using the GTA5 dataset as the source domain, and Cityscapes as the target domain. Then, we generate a translated GTA5 dataset with the learnt DLOW model. Each source image is fed into DLOW with a random domainness variable $z$. The new translated GTA5 dataset contains exactly the same number of images as the original one, but the styles of the images randomly drift from the synthetic style to the real style. We then use the translated GTA5 dataset as the new source domain to train segmentation models. We implement our model based on Augmented CycleGAN \cite{almahairi2018augmented} and CyCADA~\cite{Hoffman_cycada2017}. Following their setup, all images are resized to have width $1024$ while keeping the aspect ratio, and the crop size is set as $400\times 400$. When training the DLOW model, the image cycle consistency loss weight is set as 10. The learning rate is fixed as 0.0002. For the segmentation network, we use the AdaptSegNet~\cite{Tsai_adaptseg_2018} model, which is based on DeepLab-v2~\cite{chen2018deeplab} with ResNet-101~\cite{he2016deep} as the backbone network. The training images are resized to $1280\times 720$. We follow exactly the same training policy as in AdaptSegNet. \subsubsection{Experimental Results} \textbf{Intermediate Domain Images:} To verify the ability of our DLOW model to generate intermediate domain images, in the inference phase, we fix the input source image, and vary the domainness variable from 0 to 1. A few examples are shown in Fig.~\ref{fig_inter:fig}. It can be observed that the styles of the translated images gradually shift from the synthetic style of GTA5 to the real style of Cityscapes, which demonstrates that the DLOW model is capable of modeling the domain flow to bridge the source and target domains as expected. Enlarged images and more discussion are provided in the Supplementary.
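Constructing the translated dataset is straightforward to express in code; the sketch below assumes a trained generator \texttt{g\_st} with the interface $G_{ST}(\x, z)$ (hypothetical names, for illustration only):
\begin{verbatim}
import numpy as np

def build_translated_dataset(samples, g_st, rng=np.random.default_rng()):
    # Translate every labeled source image (x, y) into a random
    # intermediate domain, keeping the label and the sampled z
    # (z is reused for the sqrt(1 - z) adversarial loss weight).
    translated = []
    for x, y in samples:
        z = rng.uniform(0.0, 1.0)
        translated.append((g_st(x, z), y, z))
    return translated
\end{verbatim}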
\begin{figure*} \centering \includegraphics[height=0.3\paperheight]{camera_ready_style_translation_square_2.png} \caption{Examples of style generalization. Results with red rectangles at the four corners are images translated into the four target domains, and those with green rectangles in between are images translated into intermediate domains. The results show that our DLOW model generalizes well across styles, and produces new image styles smoothly.} \label{fig_example} \vspace{-10pt} \end{figure*} \noindent \textbf{Cross-Domain Semantic Segmentation:} We further evaluate the usefulness of intermediate domain images in two settings. In the first setting, we compare with the CycleGAN model~\cite{zhu2017unpaired}, which is used in the CyCADA approach~\cite{Hoffman_cycada2017} for performing pixel-level domain adaptation. The difference between CycleGAN and our DLOW model is that CycleGAN transfers source images to mimic only the target style, while our DLOW model transfers source images into random styles flowing from the source domain to the target domain. We first obtain a translated version of the GTA5 dataset with each model. Then, we respectively use the two translated GTA5 datasets to train DeepLab-v2 models, which are evaluated on the Cityscapes dataset for semantic segmentation. We also include the ``NonAdapt" baseline which uses the original GTA5 images as training data, as well as a special case of our approach, ``DLOW($z=1$)", where we set $z = 1$ for all source images when making the image translation using the learnt DLOW model. The results are shown in Table~\ref{table_pixel}. We observe that all pixel-level adaptation methods outperform the ``NonAdapt" baseline, which verifies that image translation is helpful for training models for cross-domain semantic segmentation. Moreover, ``DLOW($z=1$)" is a special case of our model that directly translates source images into the target domain, which unsurprisingly gives a comparable result to the CyCADA-pixel method ($40.7\%$ vs. $41.0\%$). By further using intermediate domain images, our DLOW model is able to improve the result from $40.7\%$ to $42.3\%$, which demonstrates that intermediate domain images are helpful for learning a more robust domain-invariant model. In the second setting, we further use intermediate domain images to improve the feature-level domain adaptation model. We conduct experiments based on the AdaptSegNet method~\cite{Tsai_adaptseg_2018}, which is open source and has reported state-of-the-art results for GTA5$\rightarrow$Cityscapes. It consists of multiple levels of adversarial training, and we augment each level with the loss weight discussed in Section~\ref{sec:boostadaptation}. The results are reported in Table~\ref{table_generalization}. The ``Original" method denotes the AdaptSegNet model that is trained using GTA5 as the source domain, for which the results are obtained using their released pretrained model. The ``DLOW" method is AdaptSegNet trained on the dataset translated with our DLOW model. From the first column, we observe that the intermediate domain images are able to improve the AdaptSegNet model by $2.5\%$ from $42.3\%$ to $44.8\%$. More interestingly, we show that the AdaptSegNet model with DLOW translated images also exhibits excellent domain generalization ability when applied to unseen domains, achieving significantly better results than the original AdaptSegNet model on the KITTI, WildDash and BDD100K datasets, as reported in the second to the fourth columns, respectively.
This shows that intermediate domain images are useful for improving the model's cross-domain generalization ability. \subsection{Style Generalization} We conduct the style generalization experiment on the Photo to Artworks dataset~\cite{zhu2017unpaired}, which consists of real photographs ($6,853$ images) and artworks from Monet ($1,074$ images), Cezanne ($584$ images), Van Gogh ($401$ images) and Ukiyo-e ($1,433$ images). We use the real photographs as the source domain, and the remaining four as target domains. As discussed in Section~\ref{sec:stylegeneralization}, the domainness variable in this experiment is expanded into a $4$-dim vector $[z_1, z_2, z_3, z_4]'$ meeting the condition $\sum_{i=1}^{4}z_i=1$. Here, $z_{1}, z_{2}, z_{3}$ and $z_{4}$ correspond to Monet, Van Gogh, Ukiyo-e and Cezanne, respectively. Each element $z_i$ can be seen as how much each style contributes to the final mixture style. In every 5 steps of the training, we set the domainness variable $z$ to $[1,0,0,0]$, $[0,1,0,0]$, $[0,0,1,0]$, $[0,0,0,1]$ and a uniformly distributed random vector, respectively. The qualitative results of the style generalization are shown in Fig.~\ref{fig_example}. They show that our DLOW model can translate a photo image into artworks of the corresponding styles. When varying the values of the domainness vector, we can also successfully produce new styles related to the different painting styles, which demonstrates the good generalization ability of our model to unseen domains. Note that, different from~\cite{zhang2018unified, huang2017adain}, we do not need any reference image in the test phase, and the domainness vector can be changed instantly to generate different new styles of images. We provide more examples in the Supplementary. \textbf{Quantitative Results:} To verify the effectiveness of our model for style generalization, we conduct a user study on Amazon Mechanical Turk (AMT) to compare with the existing methods FadNet~\cite{lample2017fader} and MUNIT~\cite{huang2018munit}. Two cases are considered: style transfer to Van Gogh, and style generalization to mixed Van Gogh and Ukiyo-e. For FadNet, domain labels are treated as attributes. For MUNIT, we mix Van Gogh and Ukiyo-e as the target domain. The data for each trial is gathered from 10 participants and there are 100 trials in total for each case. For the first case, participants are shown an example Van Gogh style painting and are required to choose the image whose style is more similar to the example. For the second case, participants are shown example Van Gogh and Ukiyo-e style paintings and are required to choose the image with a style that is more like the mixed style of the two example paintings. The user preference is summarized in Table~\ref{tab:sg_res}, which shows that DLOW outperforms FadNet and MUNIT on both tasks. A qualitative comparison between the different methods is provided in the Supplementary due to the space limitation. \begin{table} \setlength{\tabcolsep}{3pt} \small \centering \begin{tabular}{c|c|c} \hline & FadNet\cite{lample2017fader} / DLOW & MUNIT\cite{huang2018munit} / DLOW\\ \hline Van Gogh & $1.4\%$ / $98.6\%$ & $21.4\%$ / $78.6\%$\\ Van Gogh + Ukiyo-e & $1.6\%$ / $98.4\%$ & $15.3\%$ / $84.7\%$ \\ \hline \end{tabular} \caption{User preference for style transfer and generalization.
More users prefer our translated results on both the style transfer and generalization tasks compared with the existing methods FadNet and MUNIT.} \label{tab:sg_res} \vspace{-10pt} \end{table} \section{Conclusion} In this paper, we have presented the DLOW model to generate intermediate domains for bridging different domains. The model takes a domainness variable $z$ (or domainness vector $\z$) as a conditional input, and transfers images into the intermediate domain controlled by $z$ or $\z$. We demonstrate the benefits of our DLOW model in two scenarios. Firstly, for the cross-domain semantic segmentation task, our DLOW model can improve the performance of pixel-level domain adaptation by taking the translated images in intermediate domains as training data. Secondly, our DLOW model also exhibits excellent style generalization ability for image translation, and we are able to transfer images into a new style that is unseen in the training data. Extensive experiments on benchmark datasets have verified the effectiveness of our proposed model. \vspace{5mm} \noindent\textbf{Acknowledgments} The authors gratefully acknowledge the support by armasuisse. \newpage {\small \bibliographystyle{ieee}
\section*{Introduction} \label{sec:intro} \begin{figure}[b] \centering \includegraphics[width=0.475\textwidth]{image-noid3-h3.pdf} \includegraphics[width=0.475\textwidth]{image-noid5-h3.pdf} \caption{\small Cutaway views of equilateral minimal noids in $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$ with end counts and cyclic symmetry orders $3$ and $5$. In all figures, the surface is stereographically projected to $\mathbb{R}^3$, the wireframe designates the ideal boundary, and the lines on the surface are curvature lines.} \label{fig:noid-h3} \end{figure} The AdS/CFT correspondence predicts that the physics of the gravitational theory of anti-de~Sitter (AdS) spacetime is equivalent to the physics of conformal field theory (CFT) on the boundary of that spacetime~\cite{Maldacena_1998}. A particularly important instance is the computation of the Wilson loop expectation value, which by work of Maldacena is given by the (regularized) area of a spacelike minimal surface in AdS spacetime with the loop as boundary~\cite{Drukker_Gross_Ooguri_1999,Alday_Maldacena_2007,Alday_Maldacena_2009,Rey_Yee_2001}. The aim of this paper is to provide new examples of minimal surfaces in anti-de~Sitter spaces of non-trivial topological type and with several boundary components. It is well known that the equations for minimal surfaces in anti-de~Sitter space are related to Hitchin self-duality equations~\cite{Hitchin_1987,Alday_Maldacena_2009}. In particular, minimal surfaces in $\mathrm{AdS}_3$ are given by solutions corresponding to points in the Hitchin component for rank 2, and minimal surfaces in a totally geodesic $\mathrm{H}^3\subset \text{AdS}_4$ are given by rank 2 solutions of the self-duality equations with nilpotent Higgs field. As such, they fall into the class of integrable PDEs, and there are powerful tools for computing large classes of examples~\cite{Sakai_Satoh_2010,Ogata_2017,Dorfmeister_Inoguchi_Kobayashi_2014}. On the other hand, it is hard to construct surfaces with non-trivial finitely generated topology which are complete, i.e., such that the surface can be continued to the boundary at infinity of the anti-de~Sitter space and the intersection is a finite number of topological circles. It is worth remarking that surfaces given by global solutions of the self-duality equations are of less interest in the AdS/CFT correspondence, as the extrinsic monodromy is always non-trivial and the {\em intersection} with the boundary at infinity becomes very complicated. By finite gap integration, cylindrical solutions have been constructed; see for example~\cite{Bakas_Pastras_2016}. In~\cite{Fonda_Giomi_Salvio_Tonni_2015} a detailed numerical study of surfaces bounded by a finite number of special curves (including circles, (super)ellipses and boundaries of spherocylinders) is carried out, and the holographic entanglement entropy and the holographic mutual information for those entangling curves have been numerically computed. For the construction of meaningful examples via loop group factorization methods, two main problems need to be solved. The first is the proof of existence of potentials, depending on a loop parameter, on a surface with non-trivial monodromy which satisfy a certain reality condition (see remark~\ref{rem:closing}). This reality condition can be made explicit only by solving ODEs along non-trivial curves. We solve this problem for a large class of examples.
The second problem is concerned with the construction of the minimal surfaces from potentials satisfying the reality conditions: it is based on the fact that the generalized Iwasawa factorization is not global, i.e., there exist loops which do not admit an Iwasawa factorization (remark~\ref{rem:iwasawa}). It is hard to determine the curves on the surface along which the factorization breaks down, and to characterize the behavior of the minimal surface along those curves in general. On the other hand, under a mild assumption on the degeneracy of the Iwasawa decomposition, the minimal surface intersects the boundary at infinity transversally; see for example~\cite[section 5]{Heller_Heller_2018} for technical details and figures~\ref{fig:delaunay-h3} and \ref{fig:noid-h3} for visualizations. It is this failure of the global Iwasawa decomposition which allows us to produce minimal surfaces in anti-de~Sitter spaces with non-trivial (finitely generated) topology and predicted topological intersections with the boundary at infinity, at least numerically. We plan to investigate the remaining theoretical questions concerning the intersections at infinity in forthcoming work. The structure of the paper is as follows: In section~\ref{sec:CMCDPW} we briefly describe a unified loop group approach for constant mean curvature surfaces in the symmetric spaces $\mathrm{S}^3$, $\mathrm{AdS}_3$, $\mathrm{H}^3\cup\mathrm{H}^3$ and $\mathrm{dS}_3$, including the case of minimal surfaces. In section~\ref{sec:H3} we first recall conformal surface geometry in the lightcone model, with special emphasis on minimal surfaces in hyperbolic space. We discuss some simple examples of minimal surfaces in $\mathrm{H}^3$: the hyperbolic disk as the counterpart of the round sphere, and minimal Delaunay cylinders. We then define $n$-noids and open $n$-noids, motivated by the behavior of Delaunay cylinders at their ends. We prove the existence of open $n$-noids, and conjecture that those surfaces actually give rise to $n$-noids in the strict sense. In section~\ref{sec:ads3}, we explain how to modify the techniques of section~\ref{sec:H3} in order to obtain minimal surfaces in $\mathrm{AdS}_3$. In section~\ref{sec:g3noid} we study surfaces with three ends more explicitly via the generalized Weierstrass representation (GWR\xspace); these investigations have enabled us to perform computer experiments. Among others, we construct a family of equilateral trinoids in $\mathrm{H}^3$ with mean curvature $\abs{H}< 1$. The paper is supplemented by figures visualizing global properties of minimal surfaces in $\mathrm{H}^3$, $\mathrm{dS}_3$ and $\mathrm{AdS}_3$. \section*{Acknowledgements} The first author is partially supported by the DFG Collaborative Research Center TRR 109 {\em Discretization in Geometry and Dynamics}. The second author is supported by RTG 1670 {\em Mathematics inspired by string theory and quantum field theory} funded by the DFG. The third author is supported by the DFG Collaborative Research Center TRR 109 {\em Discretization in Geometry and Dynamics}.
\section{The loop group method for CMC surfaces in symmetric spaces} \label{sec:CMCDPW} \subsection{Unitary frames} \label{sec:unitary-frame} To construct constant mean curvature (CMC) surfaces in the symmetric spaces $\mathrm{S}^3$, $\mathrm{AdS}_3$, $\mathrm{H}^3\cup\mathrm{H}^3$ and $\mathrm{dS}_3$ we use the following matrix models in $\mathrm{SL}_2(\mathbb{C})$: \begin{equation} \label{eq:spaceform} \begin{array}{l|l|l} \text{space} & \text{matrix model} & \hphantom{-}\DOT{x}{y}\\ \hline \mathrm{S}^3 & \mathrm{SU}_2 & \hphantom{-}\tfrac{1}{2}\tr x\hat{y} \\ \mathrm{AdS}_3 & \mathrm{SU}_{11} & -\tfrac{1}{2} \tr x\hat{y} \\ \mathrm{H}^3\cup\mathrm{H}^3 & \{X \in \mathrm{SL}_2(\mathbb{C}) \mid {\transpose{\overline{X}}} = X\} & -\tfrac{1}{2}\tr x\hat{y} \\ \mathrm{dS}_3 & \{X \in \mathrm{SL}_2(\mathbb{C}) \mid {\transpose{\overline{X}}} = e_0 X e_0 ^{-1}\} & \hphantom{-}\tfrac{1}{2}\tr x\hat{y} \end{array} \end{equation} where $e_0 = \diag(\mathbbm{i},\,-\mathbbm{i})$. The third column of the table specifies the inner product on $\mathrm{M}_{2\times 2}(\mathbb{C})$ extending the metric on the symmetric space, with sign chosen so that the signature of the tangent space is $(\pm,\,+,\,+)$. Here $\hat y=\bigl[\begin{smallmatrix}d & -b\\-c & a\end{smallmatrix}\bigr]$ for $y=\bigl[\begin{smallmatrix}a & b\\c & d\end{smallmatrix}\bigr]$. To use integrable systems methods we introduce the loop group $\Lambda\mathrm{SL}_2(\mathbb{C})$ of real analytic maps $\mathrm{S}^1\to\mathrm{SL}_2(\mathbb{C})$, and the subgroup $\Lambda_+\mathrm{SL}_2(\mathbb{C})$ of loops which extend holomorphically to the interior of the unit disk. The four involutions of $\Lambda\mathrm{SL}_2(\mathbb{C})$ \begin{equation} \label{eq:real-form} \begin{array}{l|l} \mathrm{S}^3 & X^\ast(\lambda) = {\transpose{\overline{ X(1/\overline{\lambda}) }}}^{-1}\\ \mathrm{AdS}_3 & X^\ast(\lambda) = e_0 {\transpose{\overline{ X(1/\overline{\lambda}) }}}^{-1} e_0^{-1}\\ \mathrm{H}^3 & X^\ast(\lambda) = {\transpose{\overline{ X(-1/\overline{\lambda}) }}}^{-1}\\ \mathrm{dS}_3 & X^\ast(\lambda) = e_0 {\transpose{\overline{ X(-1/\overline{\lambda}) }}}^{-1}e_0 ^{-1} \end{array} \end{equation} determine four real forms $\{X\in\Lambda\mathrm{SL}_2(\mathbb{C}) \mid X^\ast = X\}\subset\Lambda\mathrm{SL}_2(\mathbb{C})$. By an abuse of terminology, we call elements of such a subgroup \emph{unitary}, or, emphasizing the involution, e.g.\ $\mathrm{H}^3$-unitary. We will also denote by $^\ast$ the corresponding involutions of the Lie algebra $\Lambda\mathrm{sl}_2(\mathbb{C})$. Given a choice of one of the real forms induced by~\eqref{eq:real-form}, a \emph{unitary connection} is a $\Lambda\mathrm{sl}_2(\mathbb{C})$-valued $1$-form $\eta$ on a Riemann surface $\Sigma$ with the following properties: \begin{itemize} \item $\eta$ is flat for all $\lambda$. \item $\eta = \eta^\ast$. \item $\eta$ has a simple pole at $\lambda=0$, $\eta_{-1} :=\res_{\lambda=0}\eta$ has no $(0,\,1)$ part, $\det\eta_{-1} = 0$, and $\DOT{\eta_{-1}}{\eta_{-1}^\ast} \ne 0$. \end{itemize} A \emph{unitary frame} $F$ is a unitary solution to the ODE $\mathrm{d} F = F\eta$.
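Note that each map in~\eqref{eq:real-form} is indeed an involution (we record the computation here for completeness): for the $\mathrm{H}^3$ real form, since $-1/\overline{(-1/\overline{\lambda})}=\lambda$, we have \begin{equation*} {(X^\ast)}^\ast(\lambda) = {\transpose{\overline{ X^\ast(-1/\overline{\lambda}) }}}^{-1} = {\transpose{\overline{ {\transpose{\overline{ X(\lambda) }}}^{-1} }}}^{-1} = X(\lambda), \end{equation*} and the computation for the other three real forms is analogous, the conjugation by the diagonal matrix $e_0$ being compatible with the transpose-conjugate-inverse operation.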
The \emph{evaluation\xspace formula} maps a unitary frame to the symmetric space: \begin{equation} \label{eq:sym} f = \left.F\right|_{\lambda_0}\left.F^{-1}\right|_{\lambda_1} \end{equation} where $\lambda_0,\,\lambda_1\in\mathbb{C}^\ast$ are the \emph{evaluation\xspace points} as follows: \begin{equation} \label{eq:sym-point} \begin{array}{l|l|l} \text{space} & \text{evaluation\xspace points} & \text{mean curvature $H$}\\ \hline \mathrm{S}^3\text{ and } \mathrm{AdS}_3& \lambda_0,\,\lambda_1 \in\mathrm{S}^1 \text{ distinct} & \frac{\mathbbm{i}(\lambda_1+\lambda_0)}{\lambda_1 - \lambda_0}\\ \mathrm{H}^3 \text{ and }\mathrm{dS}_3 & \lambda_0,\,\lambda_1\in\mathbb{C}^\ast \text{ with }\lambda_0\overline{\lambda_1} = -1 & \frac{\lambda_1+\lambda_0}{\lambda_1 - \lambda_0} \end{array} \end{equation} The third column of the table lists the mean curvature $H$ of the induced immersion $f$ up to sign, derived in theorem~\ref{thm:unitary-frame}. Note that $H$ satisfies $\abs{H}<1$ for the symmetric spaces $\mathrm{H}^3$ and $\mathrm{dS}_3$. Formula~\eqref{eq:sym} was first derived in \cite{Bobenko_1991} for CMC surfaces in $\mathrm{H}^3$ and $\mathrm{S}^3$. \begin{theorem} \label{thm:unitary-frame} Let $\eta$ be a unitary connection on a Riemann surface $\Sigma$ with respect to one of the four real forms~\eqref{eq:real-form}. Let $F$ be a unitary frame satisfying $\mathrm{d} F = F \eta$. Then the evaluation\xspace formula~\eqref{eq:sym} evaluated at evaluation\xspace points~\eqref{eq:sym-point} yields a spacelike conformal CMC immersion $f$ into the corresponding symmetric space with metric \begin{equation} \label{eq:metric} v^2\mathrm{d} z\otimes\mathrm{d}\bar{z},\quad v^2 = 2(\lambda_0^{-1}-\lambda_1^{-1})(\lambda_0-\lambda_1) \DOT{\alpha}{\alpha^\ast}, \end{equation} where $\alpha$ is determined by $\eta_{-1} = \alpha\,\mathrm{d} z$ (cf.~\eqref{eq:unitary-connection}), and constant mean curvature as in~\eqref{eq:sym-point}. \end{theorem} \begin{remark}\label{rem:assofami} The unitary connection $\eta$ is usually referred to as the associated family of flat connections of the surface $f$. \end{remark} \begin{proof} We prove the theorem for $\mathrm{AdS}_3$; the proof for the other symmetric spaces is similar with sign changes. The unitary connection decomposes as \begin{equation} \label{eq:unitary-connection} \eta = (\alpha\lambda^{-1} + \beta)\mathrm{d} z + (\beta^\ast + \alpha^\ast\lambda)\mathrm{d}\bar{z}. \end{equation} The flatness of $\eta$ is equivalent to \begin{equation} [\alpha,\,\beta^\ast] = \alpha_{\bar{z}} ,\quad [\alpha,\,\alpha^\ast] + [\beta,\,\beta^\ast] = \beta_{\bar{z}} - {(\beta^\ast)}_z ,\quad [\beta,\,\alpha^\ast] = {(\alpha^\ast)}_z. \end{equation} Let $F_0 = \left.F\right|_{\lambda_0}$ and $F_1 = \left.F\right|_{\lambda_1}$. To compute the metric we note \begin{equation} f_z = (\lambda_0^{-1}-\lambda_1^{-1})F_0 \alpha F_1^{-1},\quad f_{\bar{z}} = (\lambda_0-\lambda_1)F_0 \alpha^\ast F_1^{-1}. \end{equation} Since $\DOT{f_z}{f_z} = \DOT{f_{\bar{z}}}{f_{\bar{z}}} = 0$ the metric is $v^2 \mathrm{d} z\otimes \mathrm{d}\bar{z}$ with \begin{equation} v^2 = 2\DOT{f_z}{f_{\bar{z}}} = 2(\lambda_0^{-1} - \lambda_1^{-1}) (\lambda_0 - \lambda_1)\DOT{\alpha}{\alpha^\ast}. \end{equation} Since $\lambda_0\ne\lambda_1$ and $\DOT{\alpha}{\alpha^\ast}\ne 0$, the metric is nonzero, so the evaluation\xspace formula induces a conformal immersion. The normal is $N = F_0\gamma F_1^{-1}$ where \begin{equation} \gamma = -\frac{\mathbbm{i}}{2}\frac{[\alpha,\,\alpha^\ast]}{\DOT{\alpha}{\alpha^\ast}}.
\end{equation} Using flatness, \begin{equation} f_{z\bar{z}} = (\lambda_0^{-1}-\lambda_1^{-1}) F_0 (\lambda_0 \alpha^\ast\alpha -\lambda_1\alpha\alpha^\ast) F_1^{-1}. \end{equation} Using that $\DOT{\alpha\alpha^\ast + \alpha^\ast\alpha}{\gamma} = 0$, the mean curvature of $f$ is \begin{equation} H = 2 v^{-2}\DOT{f_{z\bar{z}}}{N} = -v^{-2}(\lambda_0^{-1}-\lambda_1^{-1})(\lambda_0 + \lambda_1) \DOT{[\alpha,\,\alpha^\ast]}{\gamma} = \mathbbm{i} \frac{\lambda_1 + \lambda_0}{\lambda_1 - \lambda_0}. \qedhere \end{equation} \end{proof} \begin{remark} With other choices of evaluation\xspace formulas and evaluation\xspace points, CMC surfaces can also be constructed from unitary connections in the symmetric spaces related by the Lawson correspondence: \begin{align} \text{corresponding to $\mathrm{S}^3$}&:\ \quad\text{$\mathrm{H}^3$ ($\abs{H}>1$) and $\mathbb{R}^3$}\\ \text{corresponding to $\mathrm{AdS}_3$}&: \quad\text{$\mathrm{dS}_3$ ($\abs{H}>1$) and $\mathbb{R}^{2,1}$}. \end{align} \end{remark} \begin{remark}\label{rem:sine} In the case that the Hopf differential is $\mathrm{d} z^2$, after a coordinate change the flatness of the unitary connection is the Gauss equation for the metric $v^2 = e^{2u}$: \begin{align} \label{eq:gauss-equation} \mathrm{S}^3 &:\ \Delta u + 2\sinh 2u = 0 & \mathrm{H}^3\ (\abs{H}<1)&:\ \Delta u - 2\cosh 2u = 0\\ \mathrm{AdS}_3 &:\ \Delta u - 2\sinh 2u = 0 & \mathrm{dS}_3\ (\abs{H}<1)&:\ \Delta u + 2\cosh 2u = 0. \end{align} \end{remark} \subsection{Holomorphic frames} \label{sec:gwr} To construct CMC immersions by theorem~\ref{thm:unitary-frame}, one is required to produce a flat unitary connection. The flatness is the Gauss equation, a partial differential equation on the metric. This section introduces the generalized Weierstrass representation (GWR\xspace)~\cite{Dorfmeister_Pedit_Wu_1998}. In this construction, the PDE is replaced by an ordinary differential equation together with an Iwasawa loop group factorization. For a more comprehensive treatment of real forms of loop groups see~\cite{Kobayashi_2011}. The case $\mathbb{R}^{2,1}$ was considered in~\cite{Brander_Rossman_Schmitt_2010}. A \emph{GWR\xspace potential} $\xi$ is a $\Lambda\mathrm{sl}_2(\mathbb{C})$-valued $(1,\,0)$-form on a Riemann surface $\Sigma$ satisfying the following condition: $\xi$ has a simple pole at $\lambda=0$, and $\xi_{-1}:=\res_{\lambda=0}\xi$ satisfies $\det\xi_{-1} = 0$ and is nowhere zero. A \emph{GWR\xspace frame} $\Phi$ for $\xi$ is a holomorphic map from the domain to $\Lambda\mathrm{SL}_2(\mathbb{C})$ satisfying $\mathrm{d}\Phi = \Phi\xi$. Choosing a real form, an \emph{Iwasawa factorization} of $\Phi$ is \begin{equation} \label{eq:iwasawa} \Phi = F B,\quad F^\ast = F\text{ and }B\in\Lambda_+\mathrm{SL}_2(\mathbb{C}). \end{equation} The Iwasawa factorization can be computed via the Birkhoff factorization as follows. When in the big cell, ${\Phi^\ast}^{-1}\Phi$ has a Birkhoff factorization \begin{equation} \label{eq:birkhoff} {\Phi^\ast}^{-1}\Phi = {X_+^\ast}^{-1} X_+,\quad X_+\in\Lambda_+\mathrm{SL}_2(\mathbb{C}). \end{equation} Then the desired Iwasawa factorization of $\Phi$ is \begin{equation} \Phi = F B,\quad F:=\Phi X_+^{-1},\quad B:= X_+ \end{equation} because $F^\ast = \Phi^\ast{X_+^\ast}^{-1} = \Phi X_+^{-1} = F$ by~\eqref{eq:birkhoff}. \begin{theorem} \label{thm:gwr} Let $\xi$ be a GWR\xspace potential, and $\Phi$ a corresponding GWR\xspace frame. If $\Phi$ has an Iwasawa factorization, then $F$ is a unitary frame. Hence by theorem~\ref{thm:unitary-frame} $F$ induces a conformal CMC immersion.
\end{theorem} \begin{proof} Since $\mathrm{d} \Phi = \Phi\xi$, $\mathrm{d} F = F\eta$, and $\Phi = F B$, we have \begin{equation} \eta = \xi . (B^{-1}) \end{equation} where the dot denotes the gauge action $\xi.g := g^{-1}\xi g + g^{-1}\mathrm{d} g$. Since $B\in\Lambda_+\mathrm{SL}_2(\mathbb{C})$ and $\xi$ has a simple pole in $\lambda$, $\eta$ has a simple pole in $\lambda$. Since $\xi$ has no $(0,\,1)$ part, $\eta_{-1} := \res_{\lambda=0}\eta$ has no $(0,\,1)$ part. Since $F^\ast = F$, we have $\eta^\ast=\eta$, hence $\eta$ is a unitary connection. \end{proof} \begin{remark} \label{rem:hopf} The Hopf differential of the CMC immersion induced by a GWR\xspace potential $\xi$ is of the form $c(\lambda)Q\mathrm{d} z^2$ where $c$ is $z$-independent and $Q$ is the leading term (coefficient of $\lambda^{-1}$) of $\det \xi$. \end{remark} \begin{remark} \label{rem:iwasawa} In the case of the spaceform $\mathrm{S}^3$, the GWR\xspace frame always has an Iwasawa factorization. For the other three real forms, the GWR\xspace frame may fail to have an Iwasawa factorization, generally on some real analytic subset of the domain. On this set the surface is singular, in many cases going to the ideal boundary. \end{remark} If the domain is not simply connected, $\Phi$ has monodromy and the induced CMC immersion on the universal cover does not generally \emph{close}, that is, descend to an immersion of the domain. A sufficient condition for closing is the following: \begin{remark} \label{rem:closing} The induced CMC immersion closes if the monodromy $M$ of $\Phi$ is unitary (intrinsic closing), and at the evaluation\xspace points $M(\lambda_0) = M(\lambda_1) \in\{\pm 1\}$ (extrinsic closing). \end{remark} \section{Minimal $n$-noids in hyperbolic 3-space} \label{sec:H3} We study minimal surfaces in hyperbolic 3-space $\mathrm{H}^3$ which intersect the boundary at infinity perpendicularly. A convenient setup from a geometric point of view is conformal surface geometry in the lightcone model of the 3-sphere. \subsection{The lightcone model for $\mathrm{H}^3$} The lightcone approach to conformal surface geometry is classical; for details we refer to \cite{Burstall_Pedit_Pinkall_2002,Quintino_2009} and the references therein. We consider Minkowski space $V=\mathbb{R}^{4,1}$ with its standard inner product $(\cdot,\,\cdot)$ inducing the quadratic form \begin{equation} q(x_0, \dots, x_4) = -x_0^2 + x_1^2 + \dots + x_4^2. \end{equation} The lightcone \begin{equation} \mathcal L=\{\mathbb{R} x\in PV\mid x\neq0,\; q(x)=0\} \end{equation} is diffeomorphic to the 3-sphere $\mathrm{S}^3$ via \begin{equation} (x_1, \dots ,x_4)\in \mathrm{S}^3\ \mapsto\ \mathbb{R}(1, x_1, \dots ,x_4)\in\mathcal L. \end{equation} Thus $\mathcal L$ inherits a conformal structure from the round metric on $\mathrm{S}^3$. It is well known that the group $\mathrm{SO}(4,1)$ acts on $\mathcal L$ by conformal transformations. In fact, \begin{equation} \mathrm{SO}_+(4,1)=\{g\in \mathrm{SO}(4,1)\mid (g(e_0),e_0)>0\} \end{equation} is the group of (orientation preserving) conformal diffeomorphisms of $\mathrm{S}^3$ (equipped with the round conformal structure). \subsubsection{Hyperbolic 3-space} \label{hyp3space} Taking the spacelike vector $v_\infty=e_4$ we obtain two copies of hyperbolic 3-space as \begin{equation} \mathcal L\setminus (\mathcal L\cap P (e_4^\perp)) = \{(x_0,x_1,x_2,x_3,-1)\mid -x_0^2+x_1^2+x_2^2+x_3^2=-1\}. \end{equation} This space is naturally equipped with the metric of constant curvature $-1$.
The subgroup $\mathrm{SO}_+(3,1)\subset \mathrm{SO}_+(4,1)$ (defined by fixing the vector $e_4$) realizes the isometry group of hyperbolic 3-space. \subsubsection{Surfaces in the lightcone model} We consider conformal immersions \begin{equation} f\colon \Sigma\to \mathrm{S}^3 \end{equation} from a Riemann surface $\Sigma$ into the conformal 3-sphere. The map $f$ is equivalent (in conformal geometry) to the line bundle of light-like vectors \begin{equation} \sigma^\ast\mathcal L\to\Sigma \end{equation} for $\sigma=(1,f)$. The fact that $f$ is an immersion means that for any (local) lift $\tilde \sigma=g \sigma$ and (local) pointwise independent vector fields $X,Y$ on $\Sigma$ \begin{equation} \text{span}(\tilde\sigma,\,X\cdot\tilde\sigma,\,Y\cdot\tilde\sigma) \end{equation} is a (real) 3-dimensional bundle (where $\cdot$ denotes the derivative). Conformality of $f$ means that for a local holomorphic chart $z$ on $\Sigma$ we have \begin{equation} (\tilde\sigma_z,\tilde\sigma_z)=0=(\tilde\sigma_{\bar z},\tilde\sigma_{\bar z}) \end{equation} where $g_z:=\frac{\partial}{\partial z}\cdot g$ and $g_{\bar z}:=\frac{\partial}{\partial \bar z }\cdot g$ for any (vector-valued) function $g$. A fundamental object in conformal surface theory is the mean curvature sphere congruence. The mean curvature sphere is defined locally by \begin{equation} \mathcal S=\text{span} (\tilde\sigma,\,\tilde\sigma_z,\,\tilde\sigma_{\bar z},\,\tilde\sigma_{z\bar z}). \end{equation} \begin{proposition} The surface $f$ considered as a surface in hyperbolic 3-space defined by $v_\infty=e_4$ is of constant mean curvature $H$ if and only if \begin{equation} H=(e_4^\perp,\,e_4^\perp) \end{equation} is constant, where $(e_4^\perp)_p$ is the projection of $e_4$ to $\mathcal S_p$. Consequently, $f$ is minimal if and only if $e_4$ is contained (as a constant section) in $\mathcal S$. \end{proposition} In the following, we are interested in conformally parametrized surfaces $f\colon\Sigma\to\mathrm{S}^3$ into the conformal 3-sphere whose intersection with \begin{equation} \mathrm{H}^3\cup\mathrm{H}^3 \ = \ \mathrm{S}^3\setminus \mathrm{S}^2 \ = \ \mathcal L\setminus (\mathcal L\cap P (e_4^\perp)) \end{equation} is a minimal surface. In order to apply the GWR\xspace approach we need an explicit isometry \begin{equation} \Psi\colon \mathcal L\setminus (\mathcal L\cap P (e_4^\perp))\to \{X \in \mathrm{SL}_2(\mathbb{C}) \mid {\transpose{\overline{X}}} = X\}. \end{equation} This is provided by \begin{equation} \label{eq:lightconematrix} \Psi([x_0,x_1,x_2,x_3,x_4])= \frac{1}{x_4} \begin{bmatrix}x_0+x_1& x_2+i x_3\\x_2-i x_3& x_0-x_1 \end{bmatrix}. \end{equation} Note that the two copies of $\Psi(\mathcal L\setminus (\mathcal L\cap P (e_4^\perp)))$ are given by the sets of positive definite and negative definite Hermitian $\mathrm{SL}_2$-matrices. \subsection{Basic examples} We first illustrate the GWR\xspace approach for some basic surfaces. Recall that for $\mathrm{H}^3$, the real involution on $\Lambda\mathrm{SL}_2(\mathbb{C})$ is given by \begin{equation} X^\ast(\lambda)={\transpose{\overline{X(-1/\bar\lambda)}}}^{-1} \end{equation} as in~\eqref{eq:real-form}, and unitary connections are flat connections of the form \begin{equation} d+\eta(\lambda)=d+\eta_0+\lambda^{-1}\eta_{-1}+\lambda\eta_1 \end{equation} with $\transpose{\overline{\eta_0}}=-\eta_0$ and $\transpose{\overline{\eta_{-1}}}=\eta_1$ where $\eta_{-1}$ is a nowhere vanishing $(1,0)$-form with values in the nilpotents.
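The isometry $\Psi$ is also convenient for numerical experiments; the following self-contained sketch (ours) checks that a lightcone representative with $x_4\neq 0$ is mapped to a Hermitian matrix of determinant one:
\begin{verbatim}
import numpy as np

def psi(x):
    # Eq. (lightconematrix): a lightcone representative [x0, ..., x4]
    # with x4 != 0 maps to a Hermitian SL(2, C) matrix.
    x0, x1, x2, x3, x4 = x
    return np.array([[x0 + x1, x2 + 1j * x3],
                     [x2 - 1j * x3, x0 - x1]]) / x4

# A point of H^3: -x0^2 + x1^2 + x2^2 + x3^2 = -1 with x4 = -1,
# so that q(x) = -x0^2 + x1^2 + x2^2 + x3^2 + x4^2 = 0.
x123 = np.array([0.3, -0.5, 0.7])
x = np.concatenate([[np.sqrt(1.0 + x123 @ x123)], x123, [-1.0]])

X = psi(x)
assert np.isclose(np.linalg.det(X), 1.0)  # X lies in SL(2, C)
assert np.allclose(X, X.conj().T)         # X is Hermitian (negative definite here)
\end{verbatim}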
\subsubsection{The sphere in $\mathrm{H}^3$} The simplest example of a GWR\xspace potential is given by \begin{equation} \xi(\lambda)=\begin{bmatrix}0 & \lambda^{-1} \\ 0 & 0\end{bmatrix}dz \end{equation} on the complex plane, with GWR\xspace frame \begin{equation} \Phi(\lambda)=\begin{bmatrix}1 & \lambda^{-1} z \\ 0 & 1\end{bmatrix}. \end{equation} The $\mathrm{H}^3$ Iwasawa factorization is given by \begin{equation} \Phi(\lambda)=F(\lambda)B(\lambda)=\frac{1}{\sqrt{1-z\bar z}} \begin{bmatrix}1 & \lambda^{-1} z \\ \lambda \bar z & 1\end{bmatrix}\frac{1}{\sqrt{1-z\bar z}} \begin{bmatrix}1 & 0 \\ -\lambda \bar z & 1-z\bar z\end{bmatrix} \end{equation} and taking $\lambda_0=1$ and $\lambda_{1}=-1$ we obtain \begin{equation} \label{eq:H3sphere} f=F(1)F(-1)^{-1}=\frac{1}{1-z\bar z} \begin{bmatrix} 1+ z\bar z& 2z \\ 2\bar z &1+ z\bar z\end{bmatrix}. \end{equation} Restricting to the unit disk $D\subset\mathbb{C}$, this is just a conformally parametrized totally geodesic hyperbolic disk inside hyperbolic 3-space, with induced metric \begin{equation} \frac{4}{(1-z\bar z)^2}\mathrm{d} z\otimes \mathrm{d}\bar z. \end{equation} Note that this example is rather special as we are able to write down both the GWR\xspace frame and its factorization in terms of elementary functions. \begin{remark} The surface $f$ has the same GWR\xspace potential as the round minimal 2-sphere in $\mathrm{S}^3$, and serves as the simplest example of a minimal surface in $\mathrm{H}^3$. On the other hand, as a map to $\mathrm{H}^3$, $f$ is not well-defined on the whole plane $\mathbb{C}$ or projective line, but crosses the ideal boundary at $\infty$ \begin{equation} \mathrm{S}^2_\infty=\mathcal L\cap P (e_4^\perp) \end{equation} along the unit circle $\mathrm{S}^1\subset\mathbb{C}$ where the Iwasawa decomposition breaks down. By~\eqref{eq:H3sphere} and~\eqref{eq:lightconematrix} (or by geometric reasoning) $f$ can be extended as a conformal surface into the conformal 3-sphere $\mathcal L$, a phenomenon which turns out to be typical in the examples below. In the case at hand, we obtain a conformally parametrized totally umbilic sphere \begin{equation} z\in\mathbb{C}\mathrm{P}^1\mapsto [1+z\bar z,\, 0,\, z+\bar z,\, \mathbbm{i} (\bar z-z),\, 1-z\bar z] \in\mathcal L. \end{equation} \end{remark} \subsubsection{Delaunay cylinders in $\mathrm{H}^3$} \begin{figure}[b] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{image-delaunay-h3.pdf} \caption{ \small Delaunay surface in $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$. Each of the two ends of this surface of revolution oscillates between the two copies of $\mathrm{H}^3$, crossing the ideal boundary infinitely often. } \label{fig:delaunay-h3} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{image-delaunay-ads3.pdf} \caption{\small Minimal Delaunay surface in $\mathrm{AdS}_3$. At the cone point (center of image), the profile curve of this surface of revolution crosses the revolution axis and the surface fails to be immersed. } \label{fig:delaunay-ads3} \end{subfigure} \caption{} \label{fig:delaunay} \end{figure} A less trivial class of surfaces is given by Delaunay surfaces in $\mathrm{H}^3$. Minimal Delaunay cylinders in $\mathrm{H}^3$ (figure~\ref{fig:delaunay-h3}) were first described in~\cite{Babich_Bobenko_1993} in terms of their elliptic spectral data. They come in a real 1-dimensional family of geometrically distinct surfaces.
On $\mathbb{C}/(2\pi i\mathbb{Z})$, the Hopf differential of a Delaunay cylinder is a constant multiple of $(\mathrm{d} w)^2$, and the conformal factor is a solution of the cosh-Gordon equation (remark~\ref{rem:sine}). The conformal factor can be given explicitly in terms of the Weierstrass $\wp$-function on a rectangular elliptic curve; see for example \cite{Babich_Bobenko_1993,Bakas_Pastras_2016}. The surface is rotationally symmetric, and the conformal factor only depends on one (real) variable and is periodic --- but it blows up once in each period, where the surface intersects the ideal boundary at $\infty$ (figure~\ref{fig:delaunay-h3}). We consider the Delaunay cylinders parametrized on the two-punctured sphere $\mathbb{C}^\ast$, whose Hopf differential is a constant multiple of $(\mathrm{d} z)^2/z^2$. To construct this 1-dimensional family on the domain $\mathbb{C}^\ast$ via the GWR\xspace approach, take evaluation\xspace points \begin{equation} \lambda_0=-\lambda_1=1 \end{equation} determined by the mean curvature $H=0$ in the last column of table~\eqref{eq:sym-point}. The GWR\xspace potential on the domain $\mathbb{C}^\ast$ is $\mathbbm{i} A\mathrm{d} z/z$ where $A$ is a $z$-independent loop with the following properties (remark~\ref{rem:closing}): \begin{itemize} \item $q\in\mathbb{R}^\ast$ is a branch-point of the spectral curve: $\det A(q) = 0$; \item intrinsic closing condition: $A = A^\ast$; see the third row of~\eqref{eq:real-form}; \item extrinsic closing condition: eigenvalues of $A(\lambda_0)$ are $\pm\mathbbm{i}/2$. \end{itemize} It follows that the eigenvalues of the frame monodromy around the puncture $z=0$ are $\exp(\pm 2\pi \nu)$ where \begin{equation} \nu= \tfrac{\mathbbm{i}}{2}\sqrt{ \tfrac{(\lambda-q)(-\lambda^{-1} - q)} {q^2-1}}. \end{equation} More explicitly, for $\mathrm{H}^3$ we may take $A$ to be \begin{equation} A = \tfrac{1}{2\sqrt{q^2-1}} \begin{bmatrix}0 & \lambda^{-1} + q\\ \lambda - q & 0\end{bmatrix}, \end{equation} constrained by the condition that the term under the square root is positive, i.e., $\abs{q}>1$. The GWR\xspace frame $\Phi$ is based at $z=1$, i.e., $\Phi(1) = \mathbbm{1}$. Hence $\Phi = \exp(\mathbbm{i} A \log z)$.
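The conditions on $A$ are easy to verify numerically; the following sketch (ours, using NumPy) checks the branch-point condition and the eigenvalue condition at the evaluation\xspace points for a sample value of $q$ (the unitarity $A = A^\ast$ can be checked in the same way):
\begin{verbatim}
import numpy as np

q = 1.5  # any real q with |q| > 1

def A(lam):
    # The Delaunay loop A(lambda) from the text.
    return np.array([[0.0, 1.0 / lam + q],
                     [lam - q, 0.0]],
                    dtype=complex) / (2.0 * np.sqrt(q**2 - 1.0))

# q is a branch point of the spectral curve: det A(q) = 0.
assert np.isclose(np.linalg.det(A(q)), 0.0)

# Extrinsic closing: the eigenvalues at lambda_0 = 1 and lambda_1 = -1
# are +-i/2, so that M(lambda_0) = M(lambda_1) = -1.
for lam in (1.0, -1.0):
    ev = np.sort_complex(np.linalg.eigvals(A(lam)))
    assert np.allclose(ev, [-0.5j, 0.5j])
\end{verbatim}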
\begin{theorem} The GWR\xspace construction applied to the above data gives Delaunay cylinders. \end{theorem} \begin{proof} The GWR\xspace frame for the potential $\xi$ is $\Phi = \exp(\mathbbm{i} A \log z)$. The monodromy of $\Phi$ around the puncture $z=0$ is $M = \exp(2\pi A)$, satisfying the intrinsic closing condition $M^\ast = M$ due to the symmetry of $A$, and the extrinsic closing condition (on the cylinder) $M(\lambda_0) = M(\lambda_1) = -\mathbbm{1}$ due to the fact that the eigenvalues of $A$ are $\pm\mathbbm{i}/2$ at each evaluation\xspace point. Hence the surface closes on $\mathbb{C}^\ast$. To show that the induced surface is a surface of revolution, we change coordinates $x+\mathbbm{i} y = \mathbbm{i} \log z$, so that $\Phi = \exp( (x+\mathbbm{i} y)A) = \exp(x A)\exp(\mathbbm{i} y A)$. Since $\exp(x A)$ is unitary, the unitary factor in the Iwasawa decomposition of $\Phi$ is $F(x,\,y) = \exp(x A) G(y)$, where $G$ is the unitary factor of $\exp(\mathbbm{i} y A)$. Hence $F$ is equivariant, and the surface $F_{\lambda_0}F^{-1}_{\lambda_1}$ is equivariant with the action on the profile curve $X = X(y) = G_{\lambda_0}G^{-1}_{\lambda_1}$ given by \begin{equation} X \mapsto \exp(x A(\lambda_0)) X \exp(-x A(\lambda_1)). \end{equation} This action has closed orbits with period $x\in[0,\,2\pi]$ because the eigenvalues of $A(\lambda_0)$ and $A(\lambda_1)$ are $\pm\mathbbm{i}/2$. \end{proof} \begin{remark} It is possible to compute the unitary factor $G$ of $\exp(\mathbbm{i} y A)$ explicitly in terms of elliptic functions. It turns out that $G$ is quasiperiodic in $y$ (i.e., it is equivariant, and the period depends on $q$), and that the Iwasawa decomposition fails twice in each period. On the other hand, using~\eqref{eq:lightconematrix} it is possible to extend the Delaunay surface to a conformal immersion of the cylinder $\mathbb{C}^\ast$ into $\mathrm{S}^3$; see also~\cite[$\S6$]{Babich_Bobenko_1993} and figure~\ref{fig:delaunay-h3}. \end{remark} \begin{remark} It is worth noting that the intersection of a Delaunay cylinder with the boundary at infinity is a disjoint union of circles. This follows from the fact that Delaunay cylinders are equivariant. If we restrict to one component of the Delaunay cylinder inside $\mathrm{H}^3$ we obtain exactly two boundary circles, which define a Riemann surface of annulus type. It would be interesting to work out in detail the relation between the free parameter $q$ and the modulus of the annulus. \end{remark} \subsection{$n$-noids} An \emph{$n$-noid} in $\mathbb{R}^3$ is a minimal immersion of an $n$-punctured Riemann sphere, each of whose end monodromies has Delaunay eigenvalues, that is, the same eigenvalues as those of a Delaunay cylinder. In the previous example we have seen that a Delaunay end in $\mathrm{H}^3$ cannot be defined on a punctured disk when we consider the surface lying only in $\mathrm{H}^3$. We therefore have to modify our definition: \begin{definition} \label{def:nnoid} An \emph{$n$-noid} in $\mathrm{H}^3$ is a conformal immersion \begin{equation} f\colon \mathbb{C}\mathrm{P}^1\setminus\{p_1, \dots, p_n\}\to \mathcal L\cong \mathrm{S}^3 \end{equation} such that \begin{enumerate} \item the intersection \begin{equation} \text{image}(f)\setminus(\mathcal L\cap P (e_4^\perp)) = \text{image}(f)\setminus \mathrm{S}^2=\text{image}(f)\cap(\mathrm{H}^3\cup\mathrm{H}^3) \end{equation} is a (not necessarily connected) minimal surface; \item the surface has Delaunay eigenvalues around each end $p_k$. \end{enumerate} \end{definition} \begin{remark} It is necessary to explain the second condition in more detail. In general, if the surface passes through the boundary at infinity, the associated family of flat connections (remark~\ref{rem:assofami}) does not exist on the $n$-punctured sphere $\mathbb{C}\mathrm{P}^1\setminus\{p_1, \dots, p_n\}$ and it is therefore not obvious in which sense one should test the second condition. For example, one should expect that the intersection of $\text{image}(f)\cap(\mathrm{H}^3\cup\mathrm{H}^3)$ around an end $p_k$ is defined on a nested union of disjoint topological annuli, and a priori it is unclear why the eigenvalues of the monodromy on all annuli are the same. On the other hand, using condition (1), the surface $f$ is a Willmore surface in $\mathrm{S}^3$ and has an associated family of flat $\mathrm{SL}(4,\mathbb{C})$-connections which reduces (in a $\lambda$-dependent way) to the associated family of rank 2 connections of the minimal surface in $\mathrm{H}^3$ on the corresponding subset. In this way, it can be shown that the monodromy representation up to conjugation is well-defined; for details see~\cite{Heller_Heller_Ndiaye_2018}.
On the other hand, for minimal surfaces constructed from GWR\xspace potentials the monodromies of the potential and the associated family agree up to conjugation, and the eigenvalue condition can therefore be checked directly on the potential. \end{remark} \begin{example} All minimal Delaunay cylinders are 2-noids. In fact, it follows from~\cite{Babich_Bobenko_1993} that these surfaces can be extended through the boundary at infinity to give a (Moebius-)periodic surface into $\mathrm{S}^3$ from the two-punctured sphere. \end{example} So far we only have numerical evidence of the existence of $n$-noids in $\mathrm{H}^3$ (figures~\ref{fig:noid-h3},~\ref{fig:noid-h3-iso} and~\ref{fig:trinoid-h3-halfspace}). We are therefore forced to give the following weaker definition. \begin{definition} \label{def:openn} An \emph{open $n$-noid} in $\mathrm{H}^3$ is a conformal minimal immersion of an $n$-holed sphere \begin{equation} f\colon \mathbb{C}\mathrm{P}^1\setminus (D_1\cup \cdots \cup D_n)\to\mathrm{H}^3 \end{equation} for non-intersecting topological disks $D_k\subset\mathbb{C}\mathrm{P}^1$, such that the monodromy eigenvalues around each hole $D_k$ are the monodromy eigenvalues of a Delaunay cylinder for some $q\in\mathbb{R}$ with $\abs{q}>1$. \end{definition} \begin{remark} Not every $n$-noid in $\mathrm{H}^3$ gives rise to an open $n$-noid in the above sense. For example, it might be the case that two or more {\em ends} of the $n$-noid start in each of the two $\mathrm{H}^3$ copies inside the conformal 3-sphere, while the two pieces are joined by a surface which is close to a part of a round sphere. In fact, such examples can be constructed by the methods below. On the other hand, we will construct open $n$-noids in theorem~\ref{thm:main}. We conjecture that those surfaces are $n$-noids in the sense of definition~\ref{def:nnoid} (figures~\ref{fig:noid-h3},~\ref{fig:noid-h3-iso} and~\ref{fig:trinoid-h3-halfspace}). \end{remark} \subsubsection{3-noids} A potential for trinoids is \begin{equation} \label{eq:trinoid} \begin{bmatrix}0 & \lambda^{-1}\\ f(\lambda) Q & 0\end{bmatrix}\mathrm{d} z, \end{equation} where $Q\mathrm{d} z^2$ is a holomorphic quadratic differential with three double poles and real quadratic residues, $\lambda_0,\,\lambda_0^{-1}$ are the evaluation\xspace points, and for the ambient space $\mathrm{H}^3$ \begin{equation} f = (\lambda -1)(\lambda + 1). \end{equation} Since at each of its poles the potential is gauge equivalent to a perturbation of a Delaunay potential, by the theory of regular singular points, each monodromy around a puncture has Delaunay eigenvalues. This potential constructs CMC trinoids if the closing conditions of remark~\ref{rem:closing} are satisfied, as shown by the following theorem. \begin{theorem} \label{thm:trinoidH} There exists a real 1-parameter family of GWR\xspace potentials on the 3-punctured sphere satisfying the intrinsic and extrinsic closing conditions for minimal surfaces in $\mathrm{H}^3$. \end{theorem} The technical proof of the theorem is given in section~\ref{sec:g3noid} below. \subsection{Existence of $n$-noid potentials with small necksize} \begin{figure}[b] \centering \includegraphics[width=0.49\textwidth]{image-noid3-h3-iso.pdf} \includegraphics[width=0.49\textwidth]{image-noid4-h3-iso.pdf} \caption{\small Trinoid and fournoid in $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$ with respective cyclic symmetries of orders $2$ and $3$ about the vertical axis.
Neither surface is equilateral.} \label{fig:noid-h3-iso} \end{figure} We adopt the techniques of Traizet~\cite{Traizet_2017} to prove the existence of open $n$-noids in $\mathrm{H}^3$. We conjecture that these surfaces are $n$-noids in the sense of definition~\ref{def:nnoid} as well. In~\cite{Traizet_2017}, Traizet showed the existence of GWR\xspace potentials for constant mean curvature $n$-noids in $\mathbb{R}^3$ by deforming the GWR\xspace potential of the round sphere. His method of solving the monodromy problem for the intrinsic and extrinsic closing conditions can be easily translated to our setup, with basically identical proofs up to minor changes. Additionally, one can deduce that the Iwasawa factorization works on a subset homeomorphic to an $n$-holed sphere, which actually yields examples of open $n$-noids in $\mathrm{H}^3$. A formally similar method of deforming surfaces in $\mathrm{S}^3$ and $\mathrm{H}^3$ has been introduced in~\cite{Heller_Heller_Schmitt_2018} and~\cite{Heller_Heller_2018} respectively. We start with a potential \begin{equation} \label{eq:H3potential} \xi_t:= \begin{bmatrix} 0 &\lambda^{-1} \mathrm{d} z\\ t (\lambda^2+1) \omega & 0\end{bmatrix} \end{equation} where \begin{equation} \omega= \sum_{k=1}^n \bigg(\frac{a_k}{(z-z_k)^2}+\frac{b_k}{z-z_k} \bigg)\mathrm{d} z. \end{equation} We call $\mathbf{x}=(a_1,\dots,z_n)$ the parameter of the potential. Note that in~\eqref{eq:H3potential} we take the evaluation\xspace points to be $\lambda_0=\mathbbm{i}$ and $\lambda_1=-\mathbbm{i}$ in order to have more natural reality conditions. As in~\cite{Traizet_2017}, we need to allow the coefficients $a_k,\, b_k,\, z_k$ to be holomorphic functions in $\lambda$, i.e., they are holomorphic on an open neighborhood of the closed unit disk in the $\lambda$-plane. They need to be adjusted for small $t>0$ such that the intrinsic closing condition is satisfied: we want to find, for small $t>0$, holomorphic functions $a_k,\, b_k,\, z_k$ which are close to constant functions (satisfying a constraint related to some balancing formula), such that the monodromy based at $z=0$ of the potential~\eqref{eq:H3potential} is in the unitary loop group determined by~\eqref{eq:real-form} corresponding to the symmetric space $\mathrm{H}^3$. As we suppose that the functions $z_k$ are close enough to constants, the potential is well-defined on an $n$-holed sphere for all $\lambda\in \{\lambda\in\mathbb{C}^\ast\mid 0<\abs{\lambda}<1+\epsilon\}$ for some $\epsilon>0$, and the monodromy is computed on this $n$-holed sphere. We call $\mathbf{x}=(t,a_1, \dots, z_1, \dots)$ the parameter of the potential, even in the case when $a_1, \dots, z_1,\dots$ depend on $\lambda$. For $\epsilon>0$, we denote by $\mathcal B^\epsilon$ the Banach space of holomorphic functions on \begin{equation} \{\lambda\in\mathbb{C}\mid \abs{\lambda}<1+\epsilon\}, \end{equation} equipped with the generalized Wiener norm as in~\cite[$\S$4]{Traizet_2017}. \begin{lemma}\label{lem:potential} For $k=1, \dots, n$ let $\tau_k\in\mathbb{R}\setminus\{0\}$ and $p_k\in \mathbb{C}\setminus(\{0\}\cup \mathrm{S}^1)$ such that \begin{equation} \label{eq:weight} 0=\sum_{k=1}^n \frac{2\tau_k \overline{p_k}}{1-\abs{p_k}^2}=\sum_{k=1}^n\tau_k \frac{1+\abs{p_k}^2}{1-\abs{p_k}^2}=\sum_{k=1}^n\frac{2\tau_k p_k}{1-\abs{p_k}^2}.
\end{equation} Then there exist $\epsilon$, $T>0$ and unique smooth maps \begin{equation} b_k, z_k\colon [0;T[\ \to\ \mathcal B^\epsilon,\quad k = 1, \dots, n \end{equation} with \begin{equation} b_k(0)=\frac{2\tau_k \overline{p_k}}{1-\abs{p_k}^2},\quad z_k(0)=p_k \end{equation} for $k=1, \dots, n$, and smooth functions $\tau_k\colon[0,T[\to\mathbb{R}$ with $\tau_k(0)=\tau_k,$ $k=1, \dots, n$, such that for $t\in[0;T[$ the potential~\eqref{eq:H3potential} with parameter \begin{equation} \mathbf{x}=(t,\tau_1(t), \dots, \tau_{n}(t),b_1(t), \dots, z_n(t)), \end{equation} that is with coefficients $a_k(t)=\tau_k(t)$, has $\mathrm{H}^3$-unitary monodromy at $z=0$. \end{lemma} \begin{proof} The proof is analogous to the proof of~\cite[Proposition 3]{Traizet_2017}, under the extra assumption that $\abs{p_k}\neq1$ for $k=1, \dots, n$, and using the adapted $\ast$-operator on functions, i.e.\ $f^\ast(\lambda)=\overline{f(-1/\bar\lambda)}$. The proof that the functions $\tau_k$ are independent of $\lambda$ and real-valued can be done similarly to the proof of~\cite[Proposition 4]{Traizet_2017} by looking at the eigenvalues of the local monodromies. \end{proof} As a corollary we obtain our main theorem: \begin{theorem} \label{thm:main} For every $n>1$ there exist open minimal $n$-noids in $\mathrm{H}^3$. \end{theorem} \begin{proof} For $n=2$ we have seen the existence of Delaunay cylinders. For $n\geq3$ it is easy to see the existence of $n$ pairwise distinct points $p_k\in\mathbb{C}^\ast$ with $0<\abs{p_l}<1$ for $l=1, \dots, n-1$ and $\abs{p_n}>1$ and $n$ positive real numbers $\tau_k$ satisfying the balancing formula~\eqref{eq:weight}. To construct an open minimal $n$-noid we consider for $T>t>0$ the potential provided by lemma~\ref{lem:potential}, and the solution $\Phi$ of \begin{equation} d\Phi=\Phi\xi_{t};\quad \Phi(0)=\mathbbm{1}. \end{equation} The Iwasawa decomposition (for the real involution corresponding to $\mathrm{H}^3$) exists on an open subset of the loop group $\Lambda\mathrm{SL}_2(\mathbb{C})$ containing the constant loop $\mathbbm{1}$. Since for $\delta>0$ small enough and $t$ small enough $\xi_{t}$ is arbitrarily close to $\xi_{0}$ on \begin{equation} \Sigma=\{z\in\mathbb{C}\mid \abs{z}<1-\delta;\ \abs{z-p_k}>\delta \text{ for } k=1, \dots, n-1\}, \end{equation} the loop $\Phi(z)$ admits an Iwasawa decomposition on the $n$-holed sphere $\Sigma$, and theorem~\ref{thm:gwr} for the evaluation\xspace points $\lambda_0=\mathbbm{i}$, $\lambda_1=-\mathbbm{i}$ provides a minimal surface \begin{equation} f\colon\Sigma\to\mathrm{H}^3. \end{equation} By construction (and for $\delta$ small enough), the monodromies around the holes have Delaunay eigenvalues since at each of its poles the potential is gauge equivalent to a perturbation of a Delaunay potential. \end{proof} Figure~\ref{fig:16noid-h3} presents an equilateral minimal 16-noid. This is a surface with dihedral symmetry. Our numerical experiments indicate that such surfaces stay embedded for an arbitrary number of ends. \begin{figure}[b] \centering \includegraphics[width=0.75\textwidth]{image-noid16-h3.pdf} \caption{\small An equilateral minimal 16-noid in $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$ with cyclic symmetry of order $16$.} \label{fig:16noid-h3} \end{figure} \begin{conjecture} There exist embedded minimal $n$-noids in $\mathrm{H}^3$ for any $n\in\mathbb N^{\geq2}$.
\end{conjecture} \section{Minimal $n$-noids in anti-de~Sitter space $\mathrm{AdS}_3$} \label{sec:ads3} \subsection{The lightcone model for $\mathrm{AdS}_3$} We set \begin{equation}\mathrm{AdS}_3= \mathrm{SU}_{11}\end{equation} equipped with the natural biinvariant Lorentzian metric associated to the quadratic form given by the trace. As for hyperbolic space, anti-de~Sitter space can be defined as the complement of the boundary at infinity \begin{equation} S_\infty=\{\mathbb{R} (x_0,\,x_1,\,x_2,\,x_3,\,0)\mid x_0^2+x_1^2-x_2^2-x_3^2=0\} \end{equation} of the lightcone \begin{equation} \mathcal L=\{\mathbb{R} (x_0,\,x_1,\,x_2,\,x_3,\,x_4)\mid x_0^2+x_1^2-x_2^2-x_3^2-x_4^2=0\} \end{equation} via \begin{equation} \mathbb{R}(x_0,\,x_1,\,x_2,\,x_3,\,x_4)\in\mathcal L\setminus S_\infty\mapsto \frac{1}{x_4} \begin{bmatrix}x_0+i x_1& x_2-i x_3\\ x_2+ ix_3& x_0-i x_1\end{bmatrix} \in \mathrm{SU}_{11}. \end{equation} For visualization, we make use of the {\em stereographic projection} \begin{equation} [x_0,\,x_1,\,x_2,\,x_3,\,x_4]\in\mathcal L^0\mapsto \frac{1}{x_0+x_4} (x_1,\,x_2 ,\,x_3)\in\mathbb{R}^{1,2} \end{equation} defined on \begin{equation} \mathcal L^0=\mathcal L\setminus \{ \mathbb{R}(x_0,\,x_1,\,x_2,\,x_3,\,-x_0)\mid x_1^2-x_2^2-x_3^2=0\}. \end{equation} This map is a conformal diffeomorphism onto an open subset of Minkowski space $\mathbb{R}^{1,2}$. \subsection{Basic examples} We first describe some simple surfaces in terms of the GWR\xspace approach. \subsubsection{The sphere in $\mathrm{AdS}_3$} As in the case of $\mathrm{S}^3$ and $\mathrm{H}^3$ the easiest example of a GWR\xspace potential is given by \begin{equation} \xi(\lambda)=\begin{bmatrix}0 & \lambda^{-1} \\ 0 & 0\end{bmatrix}dz \end{equation} on the complex plane, with GWR\xspace frame \begin{equation} \Phi(\lambda)=\begin{bmatrix}1 & \lambda^{-1} z \\ 0 & 1\end{bmatrix}. \end{equation} The factorization for $\mathrm{AdS}_3$ is given by \begin{equation} \Phi(\lambda)=F(\lambda)B(\lambda)=\frac{1}{\sqrt{1-z\bar z}} \begin{bmatrix}1 & \lambda^{-1} z \\ \lambda \bar z & 1\end{bmatrix}\frac{1}{\sqrt{1-z\bar z}}\begin{bmatrix}1 & 0 \\ -\lambda \bar z & 1-z\bar z\end{bmatrix} \end{equation} and taking evaluation\xspace points $\lambda_0=1$ and $\lambda_{1}=-1$ we obtain \begin{equation} \label{eq:ads3sphere} f=F(1)F(-1)^{-1}=\frac{1}{1-z\bar z} \begin{bmatrix} 1+ z\bar z& 2z \\ 2\bar z &1+ z\bar z\end{bmatrix}. \end{equation} Restricted to the unit disk $D\subset\mathbb{C}$ this is just a conformally parametrized totally geodesic hyperbolic disk inside $\mathrm{SU}_{11}=\mathrm{AdS}_3$ with induced metric \begin{equation} \frac{1}{1-z\bar z}\mathrm{d} z\otimes \mathrm{d}\bar z. \end{equation} \begin{remark} The surface $f$ is not well-defined on the whole plane $\mathbb{C}$ or projective line, but crosses the ideal boundary at infinity $S_\infty$ along the unit circle $\mathrm{S}^1\subset\mathbb{C}$. By~\eqref{eq:ads3sphere} $f$ can be continued to the complement of the closed unit disk as a map into $\mathrm{SU}_{11}$. Again, this phenomenon turns out to be typical in the examples below (figures~\ref{fig:delaunay-ads3} and~\ref{fig:trinoid}). \end{remark} \subsubsection{Minimal planes} Besides the trivial example of the hyperbolic disk, there is an interesting class of minimal surfaces with trivial topology in $\mathrm{AdS}_3$ which have been investigated in detail in~\cite{Alday_Maldacena_2009}, and are parametrized by null polygonal boundaries.
In the special case of regular polygons the Hopf differential is given by a homogeneous polynomial on the complex plane. The metric is rotationally invariant and given by a global solution of the Painlev\'e III equation. Analogous to the GWR\xspace description \cite{Bobenko_Its_1995} of \emph{Smyth surfaces} in $\mathbb{R}^3$ with rotationally symmetric metric~\cite{Smyth_1993}, these minimal planes can be constructed via the GWR\xspace potential of the form \begin{equation} \label{eq:smyth} \begin{bmatrix} 0 & \lambda^{-1} \\ cz^n & 0\end{bmatrix}\mathrm{d} z,\quad n\in\mathbb{N},\quad c\in\mathbb{R} \end{equation} for appropriate $c \in\mathbb{R}\setminus\{0\}$. The surface (with initial condition at $z=0$ given by $\mathbbm{1}$) has discrete ambient cyclic symmetry of order $n+2$. The Iwasawa decomposition is global only for one special value $\hat c$ (depending on $n$); see for example the detailed investigations in~\cite{Guest_Its_Lin_2018}. In figure~\ref{fig:smyth-ads3} a surface for $n=1$ with $c$ numerically close to $\hat c$ is shown. A systematic investigation of this boundary behavior of solutions of the underlying Gauss equation, i.e., the generalized sinh-Gordon equation, on punctured Riemann surfaces with Hopf differentials having higher order poles is carried out in~\cite{Gupta_2018}, under the name {\em crowned Riemann surfaces}. \subsubsection{Delaunay cylinders in $\mathrm{AdS}_3$} \begin{figure}[b] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{image-smyth-ads3.pdf} \caption{ \small Minimal Smyth surface in $\mathrm{AdS}_3$ with cyclic symmetry of order three.} \label{fig:smyth-ads3} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{image-swallowtail-ads3.pdf} \caption{\small Swallowtail singularity on a minimal surface in $\mathrm{AdS}_3$. } \label{fig:swallowtail-ads3} \end{subfigure} \caption{} \label{fig:smyth} \end{figure} Minimal Delaunay planes in $\mathrm{AdS}_3$ have been described in \cite{Bakas_Pastras_2016} in terms of their elliptic spectral data. By imposing the extrinsic closing condition along the equivariant direction they form a real 1-dimensional family of geometrically distinct surfaces. The conformal factor with respect to the coordinate $\mathrm{d} w$ on $\mathbb{C}/(2\pi \mathbbm{i}\mathbb{Z})$ is a solution of the sinh-Gordon equation with the opposite sign to that for CMC surfaces in $\mathrm{S}^3$~\eqref{eq:gauss-equation}, and the Hopf differential is a constant multiple of $(\mathrm{d} w)^2$. The conformal factor can be given explicitly in terms of the Weierstrass $\wp$-function on a rectangular elliptic curve; see~\cite{Bakas_Pastras_2016}. The surface is rotationally symmetric, and the conformal factor depends only on one (real) variable and is periodic, but blows up once in each (intrinsic) period, where the surface intersects the ideal boundary at infinity. Moreover, the conformal factor $v^2=e^{2u}$ vanishes to second order once in each (intrinsic) period. Geometrically this means that one tangent direction is mapped to a lightlike vector, while the orthogonal tangent direction (with respect to the Riemann surface structure) spans the kernel of the differential of the CMC surface. In the equivariant case, at the singular points, the kernel of the differential of the CMC surface coincides with the kernel of the differential of the conformal factor $v^2$, giving us a cone point (figure~\ref{fig:delaunay-h3}).
In the non-equivariant case this phenomenon does not occur; instead, a swallowtail-like singularity is expected (figure~\ref{fig:swallowtail-ads3}). In the following, we consider the Delaunay cylinders parametrized on the 2-punctured sphere $\mathbb{C}^\ast$, where the Hopf differential is a constant multiple of $(\mathrm{d} z)^2/z^2$. These can be constructed via theorem~\ref{thm:gwr} analogously to Delaunay cylinders in $\mathrm{H}^3$. A family of GWR\xspace potentials inducing Delaunay surfaces on $\mathbb{C}^\ast$ with initial condition $\Phi(1)=\mathbbm{1}$ is \begin{equation} \label{eq:delaunay} A\frac{\mathbbm{i}\mathrm{d} z}{z},\quad A = B + B^\ast,\quad B = \begin{bmatrix}\tfrac{\mathbbm{i} c}{2} & a\lambda^{-1}\\ b & -\tfrac{\mathbbm{i} c}{2}\end{bmatrix},\quad a\in\mathbb{R}^\ast,\quad b,\,c\in\mathbb{R}. \end{equation} Imposing the extrinsic closing condition with $\lambda_0=\mathbbm{i}$, a $1$-parameter family of potentials is given by \begin{equation} B = \frac{1}{2\sqrt{q^2+1}} \begin{bmatrix}\mathbbm{i} \sqrt{2(q^2+1)} & \lambda^{-1} + q\\ \lambda + q & -\mathbbm{i} \sqrt{2(q^2+1)}\end{bmatrix}\\ \end{equation} for $q\in\mathbb{R}$. The meromorphic frame $\Phi$ is based at $z=1$ with $\Phi(1) = \mathbbm{1}$. As in the previous section we obtain: \begin{theorem} The GWR\xspace construction applied to the above data gives Delaunay cylinders in $\mathrm{AdS}_3$. \end{theorem} An example is shown in figure~\ref{fig:delaunay-ads3}. \subsection{$n$-noids in $\mathrm{AdS}_3$} We may define $n$-noids and open $n$-noids in $\mathrm{AdS}_3$ in an analogous way as for surfaces in $\mathrm{H}^3$. We can also adopt Traizet's construction~\cite{Traizet_2017} as done for the case of minimal surfaces in $\mathrm{H}^3$ above to prove the existence of open $n$-noids in $\mathrm{AdS}_3$. It would be nice to prove that these examples are actually $n$-noids in the strong sense; a proof of this conjecture might build on the techniques developed in \cite{Raujouan_2018}. \begin{theorem} For every $n>1$ there exist open minimal $n$-noids in $\mathrm{AdS}_3$. \end{theorem} The proof is analogous to the proof of theorem~\ref{thm:main}, taking the appropriate $\ast$-operator on $\mathrm{SL}_2(\mathbb{C})$ and on holomorphic functions. Rather than repeating the details of the arguments we give in the next section a general proof of existence of 3-noid potentials for surfaces in $\mathrm{S}^3$, $\mathrm{H}^3$, $\mathrm{AdS}_3$ and $\mathrm{dS}_3$. \section{$3$-noids in symmetric spaces} \label{sec:g3noid} In this last section we slightly extend the class of surfaces and consider CMC surfaces in the symmetric spaces $\mathrm{S}^3$, $\mathrm{H}^3$, $\mathrm{AdS}_3$ and $\mathrm{dS}_3$. Recall from section~\ref{sec:CMCDPW} that these can also be described by the GWR\xspace approach. We present a general proof of existence for GWR\xspace potentials on a 3-punctured sphere, denoted as trinoids in the following. The main motivation for giving an explicit proof of existence for trinoids is that the surfaces so constructed can be numerically computed and visualized directly (figures~\ref{fig:noid-h3},~\ref{fig:noid-h3-iso},~\ref{fig:trinoid-h3-halfspace} and~\ref{fig:trinoid}). In particular, the proof might help the reader to perform computer experiments via the GWR\xspace method. Also, in the case of a 3-punctured sphere, the range of end weights $q$ (determined by the quadratic residues at the ends) is more tractable here than in the implicit function theorem proof of theorem~\ref{thm:main}.
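To illustrate how such computer experiments may be set up, the following Python sketch (our own illustration, not part of the construction; all concrete parameter values are ad hoc) numerically integrates $\mathrm{d}\Phi=\Phi\xi$ once around the pole at $z=1$ of the isosceles trinoid potential introduced below, and compares the computed half-trace of the monodromy with the regular-singular-point prediction $t=\cos(2\pi\nu)$, $\nu=\tfrac12-\tfrac12\sqrt{1+q\kappa}$, $\kappa=4\lambda^{-1}f(\lambda)$, which depends only on the quadratic residue $q$ at the pole:
\begin{verbatim}
import numpy as np

# Ad hoc choices: H^3-type f = (lam - lam0)(lam + 1/lam0) with real lam0,
# a sample lambda on the unit circle, and small residues a (at z = +-1)
# and b (at infinity) for Q = (4a + b(z^2-1))/(z^2-1)^2.
lam0, lam = 0.5, np.exp(0.7j)
a, b = 0.02, 0.01

f = (lam - lam0) * (lam + 1.0 / lam0)
Q = lambda z: (4*a + b*(z**2 - 1)) / (z**2 - 1)**2
xi = lambda z: np.array([[0.0, 1.0/lam], [f*Q(z), 0.0]])

def monodromy(center, radius, n=4000):
    """Fixed-step RK4 integration of Phi' = Phi*xi once around a circle."""
    Phi = np.eye(2, dtype=complex)
    ts = np.linspace(0.0, 2*np.pi, n + 1)
    rhs = lambda t, P: P @ (xi(center + radius*np.exp(1j*t))
                            * 1j*radius*np.exp(1j*t))   # dz/dt factor
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = rhs(t0, Phi)
        k2 = rhs(t0 + h/2, Phi + h/2*k1)
        k3 = rhs(t0 + h/2, Phi + h/2*k2)
        k4 = rhs(t1, Phi + h*k3)
        Phi = Phi + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return Phi

t_num = 0.5*np.trace(monodromy(1.0, 0.5))
kappa = 4*f/lam
t_pred = np.cos(2*np.pi*(0.5 - 0.5*np.sqrt(1 + a*kappa)))
print(abs(t_num - t_pred))   # should be small, up to integrator error
\end{verbatim}
The trace of the monodromy is conjugation invariant, so basing $\Phi$ on the contour rather than at $z=0$ does not affect the comparison.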
The construction of trinoids is particularly simple because the conjugacy class of the monodromy group in this case is determined by the individual conjugacy classes of the generators. In the case of the involution $\lambda\mapsto 1/\overline{\lambda}$ the closing conditions can be read off from the monodromy eigenvalues, which, by the theory of regular singular points, can in turn be read off from the potential. Each pole of our potential will be a Fuchsian singularity which is GWR\xspace gauge equivalent to a perturbation of a Delaunay potential. By the theory of regular singular points, each monodromy around a puncture has Delaunay eigenvalues. This potential constructs CMC trinoids if the closing conditions of remark~\ref{rem:closing} are satisfied. The following theorem constructs trinoids in $\mathrm{AdS}_3$, $\mathrm{H}^3$ with $\abs{H}<1$, and $\mathrm{dS}_3$ with $\abs{H}<1$, in analogy to the construction of trinoids in $\mathrm{S}^3$ in~\cite{Schmitt_Kilian_Kobayashi_Rossman_2007}. To show that the trinoid closes it is necessary to compute a unitarizer $X$ of the monodromy. Unlike the case of $\mathrm{S}^3$, for the other spaceforms the unitarizer could fail to have an Iwasawa factorization. Then the frame with unitary monodromy $X\Phi$ likewise fails to have an Iwasawa factorization, and hence does not construct a trinoid. Hence it is necessary to find a unitarizer which has an Iwasawa factorization, or equivalently, a unitarizer in $\Lambda_+\mathrm{SL}_2(\mathbb{C})$. To solve the technical problem of the existence of a monodromy unitarizer in $\Lambda_+\mathrm{SL}_2(\mathbb{C})$ we make the simplifying restriction to \emph{isosceles} trinoids, for which two of the three end parameters are equal. As shown in theorem~\ref{thm:trinoid}, under this restriction, a diagonal monodromy unitarizer can be computed explicitly via a scalar Birkhoff factorization. \subsection{Real forms} \begin{figure}[b] \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{image-trinoid-ads3.pdf} \caption{\small Minimal trinoid in $\mathrm{AdS}_3$ } \label{fig:trinoid-ads3} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth]{image-trinoid-ds3.pdf} \caption{\small Minimal trinoid in $\mathrm{dS}_3$ } \label{fig:trinoid-ds3} \end{subfigure} \caption{\small Minimal trinoids in $\mathrm{AdS}_3$ and $\mathrm{dS}_3$: three Delaunay half-cylinders glued to a minimal two-sphere (horizontal plane). A swallowtail singularity (figure~\ref{fig:swallowtail-ads3}) appears on each Delaunay end. } \label{fig:trinoid} \end{figure} The four involutions~\eqref{eq:real-form} on loops $X:\mathrm{S}^1\to\mathrm{SL}_2\mathbb{C}$ can be indexed by $\delta\in\{\pm 1\}$ and $\epsilon\in\{\pm 1\}$: \begin{equation} \label{eq:unitary} X^\ast(\lambda) = {\transpose{\overline{ \eta X(\delta/\overline{\lambda}) \eta^{-1} }}}^{-1},\quad \eta = \begin{cases} \mathbbm{1} & \text{if $\epsilon=1$}\\ \diag(\mathbbm{i},\,-\mathbbm{i}) & \text{if $\epsilon=-1$}. \end{cases} \end{equation} The subgroup of loops $X$ satisfying $X^\ast = X$ is the real form for the symmetric spaces as tabulated: \begin{equation} \label{eq:trinoid-table} \begin{array}{c|c|c} &\epsilon=1 & \epsilon=-1\\ \hline \delta=1 & \mathrm{S}^3 & \mathrm{AdS}_3\\ \delta=-1 & \mathrm{H}^3\ (\abs{H}<1) & \mathrm{dS}_3\ (\abs{H}<1) \end{array} \end{equation} A loop $M:\mathrm{S}^1\to\mathrm{SL}_2\mathbb{C}$ is \emph{unitary} if $M^\ast = M$.
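As a concrete check of the table, the following short numerical sketch (our own illustration, not from the text) verifies that a constant loop in $\mathrm{SU}_{11}$, i.e.\ a matrix preserving $\diag(1,-1)$, is fixed by the involution~\eqref{eq:unitary} with $(\delta,\epsilon)=(1,-1)$, the $\mathrm{AdS}_3$ case:
\begin{verbatim}
import numpy as np

eta = np.diag([1j, -1j])            # the epsilon = -1 case
inv = np.linalg.inv

def star(Xfun, lam, delta=1):
    """X^*(lam) = (transpose conj(eta X(delta/conj(lam)) eta^{-1}))^{-1}."""
    Y = eta @ Xfun(delta/np.conj(lam)) @ inv(eta)
    return inv(Y.conj().T)

# a constant loop in SU_{11}: X diag(1,-1) X^dagger = diag(1,-1), det X = 1
s = 0.8
X0 = np.array([[np.cosh(s), np.sinh(s)], [np.sinh(s), np.cosh(s)]])
Xfun = lambda lam: X0               # constant loop

lam = np.exp(0.3j)                  # any lambda on S^1
assert np.allclose(star(Xfun, lam), X0)   # X^* = X: AdS_3 real form
\end{verbatim}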
A monodromy group is \emph{unitarizable} if there exists a map $X:\mathcal{D}_+\to\mathrm{SL}_2\mathbb{C}$ such that for each $M$ in the group $XMX^{-1}$ extends holomorphically to $\mathrm{S}^1$ and is unitary. For scalar loops define \begin{equation} \label{eq:scalar-star} f^\ast(\lambda) = \overline{ f(\delta/\overline{\lambda}) }. \end{equation} \subsection{Trinoid potentials} A potential for trinoids is \begin{equation} \label{eq:trinoid-potential} \begin{bmatrix}0 & \lambda^{-1}\\ f(\lambda) Q & 0\end{bmatrix}\mathrm{d} z, \end{equation} where $Q\,\mathrm{d} z^2$ is a meromorphic quadratic differential with three double poles and real quadratic residues, $f$ satisfying $(f/\lambda)^\ast = f/\lambda$ is \begin{subequations} \label{eq:trinoid-f} \begin{align} \label{eq:trinoid-f1} \mathrm{S}^3\text{ and }\mathrm{AdS}_3&:\quad f = (\lambda - \lambda_0)(\lambda - \lambda_0^{-1}),\quad \lambda_0\in\mathrm{S}^1\setminus\{\pm 1\}\\ \label{eq:trinoid-f2} \mathrm{H}^3\text{ and }\mathrm{dS}_3&:\quad f = (\lambda - \lambda_0)(\lambda + \lambda_0^{-1}),\quad \lambda_0\in\mathbb{R}\setminus\{0\}, \end{align} \end{subequations} and the evaluation\xspace points $\lambda_0,\,\lambda_1$ are determined by \begin{equation} \label{eq:delaunay-sym-points} \mathrm{S}^3 \text{ and }\mathrm{AdS}_3:\ \lambda_1 = 1/\lambda_0\in\mathrm{S}^1\setminus\{\pm 1\}; \quad\quad \mathrm{H}^3\text{ and }\mathrm{dS}_3:\ \lambda_1 = -1/\lambda_0\in\mathbb{R}^\ast. \end{equation} For isosceles trinoids choose ends $z=1,\,-1,\,\infty$ and \begin{equation} \label{eq:trinoid-hopf} Q = \frac{4a + b(z^2-1)}{ {(z^2-1)}^2 } \end{equation} satisfying $\qres_{z= \pm 1}Q\,\mathrm{d} z^2 = a$ and $\qres_{z=\infty}Q\,\mathrm{d} z^2 = b$. \subsection{Unitarization} The isosceles trinoid potential~\eqref{eq:trinoid-potential}--\eqref{eq:trinoid-hopf} has symmetries \begin{subequations} \label{eq:trinoid-potential-symmetry} \begin{gather} \sigma^\ast\xi(\lambda) = \xi(\lambda) . g_1,\quad \sigma(z)=-z,\quad g_1=\diag(\mathbbm{i},\,-\mathbbm{i})\\ \overline{\tau^\ast\xi(\delta/\overline{\lambda})} = \xi(\lambda) . g_2,\quad \tau(z) =\overline{z},\quad g_2=\diag(\sqrt{\delta}/\lambda,\,\lambda/\sqrt{\delta}). \end{gather} \end{subequations} Let $M_0$ and $M_1$ be the monodromies around $z=1$ and $z=-1$ respectively, with basepoint $\Phi(0)=\mathbbm{1}$. The symmetries of the potential imply \begin{equation} \label{eq:trinoid-monodromy} M_0 = \begin{bmatrix}r & p\lambda\\ -q\lambda^{-1} & r^\ast\end{bmatrix}, \quad M_1 = \begin{bmatrix}r & -p\lambda\\q\lambda^{-1} & r^\ast\end{bmatrix} \end{equation} for some holomorphic functions $p,\,q,\,r:\mathbb{C}^\ast\to\mathbb{C}$ satisfying $rr^\ast + p q = 1$ and \begin{equation} p^\ast = \delta p\quad\text{and}\quad q^\ast = \delta q. \end{equation} \begin{lemma} \label{lem:trinoid-monodromy} If $q/p^\ast$ in~\eqref{eq:trinoid-monodromy} has a Birkhoff factorization \begin{equation} q/p^\ast = \delta\epsilon x_+^\ast x_+,\quad x_+:\mathcal{D}_+\to\mathbb{C}^\ast \end{equation} then $M_0$ and $M_1$ are unitarizable in the sense of~\eqref{eq:unitary}. \end{lemma} \begin{proof} If $q/p^\ast = \delta\epsilon x_+^\ast x_+$, then $x_+:\mathcal{D}_+\to\mathbb{C}^\ast$ has a single-valued square root $\mathcal{D}_+\to\mathbb{C}^\ast$, so $X := \diag(\sqrt{x_+},\,1/\sqrt{x_+})$ is a single-valued map $\mathcal{D}_+\to\mathrm{SL}_2(\mathbb{C})$.
Then \begin{equation} P_0:=XM_0X^{-1} = \begin{bmatrix}r & s\\-\epsilon s^\ast & r^\ast\end{bmatrix}, \quad P_1:=XM_1X^{-1} = \begin{bmatrix}r & -s\\\epsilon s^\ast & r^\ast\end{bmatrix}, \quad s := px_+\lambda \end{equation} extend holomorphically to $\mathrm{S}^1$ and satisfy $P_0=P_0^\ast$ and $P_1=P_1^\ast$ in the sense of~\eqref{eq:unitary}. \end{proof} \subsection{Scalar Birkhoff factorization} \label{sec:birkhoff} To unitarize the trinoid monodromy via lemma~\ref{lem:trinoid-monodromy} we need a more detailed analysis of the scalar Birkhoff factorization. With $\mathrm{P}^1 = \mathcal{D}_+ \sqcup \mathrm{S}^1 \sqcup \mathcal{D}_-$, \begin{equation} \mathcal{D}_+:=\{\lambda\in\mathbb{C}\mid\abs{\lambda}<1\},\quad \mathcal{D}_-:=\{\lambda\in\mathbb{C}\mid\abs{\lambda}>1\}\cup\{\infty\}, \end{equation} let \begin{subequations} \begin{align} \Lambda &= \{\text{real analytic loops on $\mathrm{S}^1\to\mathbb{C}^\ast$}\}\\ \Lambda_+ &= \{f\in\Lambda\mid \text{$f$ is the boundary of a holomorphic map $\mathcal{D}_+\to\mathbb{C}^\ast$}\}\\ \Lambda_- &= \{f\in\Lambda\mid \text{$f$ is the boundary of a holomorphic map $\mathcal{D}_-\to\mathbb{C}^\ast$}\}. \end{align} \end{subequations} 1. By~\cite{Pressley_Segal_1986}, every loop $f\in\Lambda$ has a unique Birkhoff factorization \begin{subequations} \begin{gather} f = c \lambda^n f_- f_+,\quad n\in\mathbb{Z},\quad c\in\mathbb{C}^\ast,\quad\\ f_+\in\Lambda_+,\quad f_-\in\Lambda_-,\quad f_+(0)=1,\quad f_-(\infty)=1. \end{gather} \end{subequations} Let the star operator be as in~\eqref{eq:scalar-star} for either choice of $\delta\in\{\pm 1\}$. 2. If $f\in\Lambda$ satisfies $f=f^\ast$, then $f$ has a unique Birkhoff factorization \begin{equation} f = \epsilon f_+^\ast f_+,\quad \epsilon\in\{\pm 1\},\quad f_+\in\Lambda_+,\quad f_+(0)\in\mathbb{R}_+. \end{equation} This defines a homomorphism \begin{equation} \label{eq:sign0} \sign:\{f\in\Lambda\mid f=f^\ast\}\to\{\pm 1\},\quad f\mapsto \epsilon. \end{equation} A \emph{meromorphic loop} on $\mathrm{S}^1$ is meromorphic in a neighborhood of $\mathrm{S}^1$. Let $\Lambda^\ast$ be the subgroup of meromorphic loops on $\mathrm{S}^1$ \begin{equation} \Lambda^\ast = \{\text{$f\mid f^\ast = f$ and $\mathrm{div}_{\Sone} f$ even}\}, \end{equation} where $\mathrm{div}_{\Sone} f$ even means that the order of each zero and pole of $f$ on $\mathrm{S}^1$ is even. 3. Every $f\in\Lambda^\ast$ has a unique Birkhoff factorization \begin{equation} f = \epsilon f_+^\ast f_+,\quad \epsilon\in\{\pm 1\},\quad \mathrm{div}_{\Sone} f_+ = \tfrac{1}{2}\mathrm{div}_{\Sone} f \end{equation} where $f_+$ is the boundary of a holomorphic map $\mathcal{D}_+\to\mathbb{C}^\ast$. This extends the homomorphism~\eqref{eq:sign0} to a homomorphism \begin{equation} \sign:\Lambda^\ast\to\{\pm 1\},\quad f\mapsto \epsilon. \end{equation} 4. For every meromorphic loop $f$ on $\mathrm{S}^1$ with $f=f^\ast$ \begin{equation} \label{eq:sign-square} \sign[f^2] = 1. \end{equation} \subsection{The trace polynomial} Given a monodromy representation $M_0,\,M_1,\,M_2$ with $M_0M_1M_2=\mathbbm{1}$ on the three-punctured sphere, define the polynomial~\cite{Goldman_1988} \begin{equation} \label{eq:phi} \varphi = 1 - t_0^2 - t_1^2 - t_2^2 + 2 t_0 t_1 t_2,\quad t_k = \tfrac{1}{2}\tr(M_k),\quad k\in\{0,\,1,\,2\}. \end{equation} The trace polynomial vanishes precisely where the monodromy representation is reducible.
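The sign of $\varphi$ moreover detects in which real form an irreducible representation can be unitarized, as recalled at the beginning of the next paragraph. As a quick Monte-Carlo sanity check of the $\mathrm{SU}_2$ direction (our own illustration, not part of the text), random pairs of $\mathrm{SU}_2$ matrices always produce $\varphi\geq 0$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Random SU(2) matrix built from a uniform unit quaternion."""
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

worst = 1.0
for _ in range(10000):
    M0, M1 = random_su2(), random_su2()
    M2 = np.linalg.inv(M0 @ M1)
    t0, t1, t2 = [np.trace(M).real/2 for M in (M0, M1, M2)]
    worst = min(worst, 1 - t0**2 - t1**2 - t2**2 + 2*t0*t1*t2)
print(worst)   # >= 0 up to rounding; phi = 0 only for reducible pairs
\end{verbatim}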
In the case of the involution $\lambda\mapsto 1/\overline{\lambda}$, if the halftraces are in $(-1,\,1)$, then the monodromy is $\mathrm{SU}_2$-unitarizable if $\varphi > 0$ and $\mathrm{SU}_{11}$-unitarizable if $\varphi < 0$. For the trinoid potential~\eqref{eq:trinoid-potential} with quadratic residues $(q_0,\,q_1,\,q_2)\in\mathbb{R}^3$ of $Q$, and $f$ as in~\eqref{eq:trinoid-f}, \begin{equation} \label{eq:t} t_k = \cos(2\pi \nu_k),\quad \nu_k = \tfrac{1}{2} - \tfrac{1}{2}\sqrt{1 + q_k \kappa},\quad \kappa = 4\lambda^{-1}f(\lambda),\quad k\in\{0,\,1,\,2\}. \end{equation} The function $\kappa:\mathrm{S}^1\to\mathbb{C}$ satisfies $\kappa^\ast =\kappa$. The trace polynomial $\varphi:\mathrm{S}^1\to\mathbb{C}$ for the monodromies of the trinoid potential satisfies $\varphi^\ast = \varphi$. Putting together~\eqref{eq:phi} and~\eqref{eq:t}, its series expansion in $\kappa$ at $\kappa=0$ is \begin{equation} \label{eq:phi-series} \varphi(\kappa) = c \kappa^4 + \mathrm{O}(\kappa^5),\quad c = \frac{\pi^4}{64}(q_0+q_1+q_2)(-q_0+q_1+q_2)(q_0-q_1+q_2)(q_0+q_1-q_2). \end{equation} \begin{lemma} \label{lem:phi} Choose $(q_0,\,q_1,\,q_2)\in\mathbb{R}^3$ with $c\ne 0$. For $s$ in a small enough interval $(0,\,s_0)$, the trace polynomial $\varphi_{sq}$ for the scaled residues $(sq_0,\,sq_1,\,sq_2)$ satisfies $\varphi_{sq}\in\Lambda^\ast$ and $\sign[\varphi_{sq}]=\sign(c)$. \end{lemma} \begin{proof} View $\varphi_q$ as a function of $\kappa$; by~\eqref{eq:phi} and~\eqref{eq:t} the residues and $\kappa$ enter only through the products $q_k\kappa$, so that $\varphi_{sq}(\kappa)=\varphi_{q}(s\kappa)$. From the series expansion~\eqref{eq:phi-series}, there exists $\kappa_0>0$ such that \begin{equation} \abs{\varphi_q(\kappa)/\kappa^4-c} < \abs{c/2} \quad\text{for all}\quad \abs{\kappa} < \kappa_0. \end{equation} Let $m = 1+\max_{\lambda\in\mathrm{S}^1}\abs{\kappa(\lambda)}$. Then for all $\abs{\kappa}<m$ and $s\in(0,\,\kappa_0/m)$, we have $\abs{s\kappa} < \kappa_0$, so \begin{equation} \abs{\varphi_{sq}(\kappa)/\kappa^4 - s^4 c} = s^4\abs{\varphi_{q}(s\kappa)/(s\kappa)^4-c} < s^4\abs{c/2}. \end{equation} Hence $\varphi_{sq}/\kappa^4$ maps $\mathrm{S}^1$ into the disk centered at $s^4 c$ with radius $s^4\abs{c/2}$. This implies that $\varphi_{sq}$ has no zeros on $\mathrm{S}^1$, except at the zeros of $\kappa$ in the case $\kappa$ has zeros on $\mathrm{S}^1$. Since these zeros are of order $4$, $\varphi_{sq}\in\Lambda^\ast$. Since $\sign(c)\varphi_{sq}/\kappa^4$ takes values in the open right halfplane, it has a single-valued square root $f:\mathrm{S}^1\to\mathbb{C}^\ast$ satisfying $f^\ast = f$. By~\eqref{eq:sign-square}, $\sign[\varphi_{sq}]=\sign(c)\sign[\kappa^4 f^2] = \sign(c)$. \end{proof} In the case of isosceles trinoids, for which $(q_0,\,q_1,\,q_2) = (a,\,a,\,b)$ with $a$, $b$ as in~\eqref{eq:trinoid-hopf}, we have $c = \frac{\pi^4}{64}b^2(4a^2-b^2)$, so $\sign[\varphi]=+1$ in a subregion of $\{\abs{b} < 2\abs{a}\}$, and $\sign[\varphi]=-1$ in a subregion of $\{\abs{b} > 2\abs{a}\}$. \subsection{Trinoids} \begin{theorem} \label{thm:trinoid} For each of the four symmetric spaces tabulated in~\eqref{eq:trinoid-table} there exists a real two-parameter family of isosceles trinoid potentials satisfying the closing conditions for conformal CMC immersions with prescribed mean curvature $H$. \end{theorem} \begin{proof} Choose a symmetric space, $\epsilon\in\{\pm 1\}$ and $\delta\in\{\pm 1\}$ as tabulated in~\eqref{eq:trinoid-table}, and an isosceles trinoid potential~\eqref{eq:trinoid-potential}--\eqref{eq:trinoid-hopf}. By lemma~\ref{lem:phi}, there exists a subregion of $\{(a,\,b)\in\mathbb{R}^2\}$ with $(a,\,b)$ as in~\eqref{eq:trinoid-hopf}, in which $\varphi\in \Lambda^\ast$ and $\sign[\varphi]=\delta\epsilon$. Let $M_0$ and $M_1$ be the monodromies of the meromorphic frame $\Phi$ based at $\Phi(0)=\mathbbm{1}$ as in~\eqref{eq:trinoid-monodromy}.
The respective halftraces $t_0,\,t_1,\,t_2$ of $M_0,\,M_1,\,M_2 = M_1^{-1}M_0^{-1}$ are \begin{equation} t_0 = t_1 = \tfrac{1}{2}(r+r^\ast) \quad\text{and}\quad t_2 = \tfrac{1}{2}(r^2 + 2pq + {r^\ast}^2) \end{equation} so the trace polynomial~\eqref{eq:phi} is \begin{equation} \label{eq:phi-in-terms-of-monodromy} \varphi = {(\mathbbm{i}(r-r^\ast))}^2pq. \end{equation} Since $\varphi\in \Lambda^\ast$ and ${(\mathbbm{i}(r-r^\ast))}^2\in \Lambda^\ast$, then $pq\in \Lambda^\ast$. Since $p^\ast p\in \Lambda^\ast$ then $q/p^\ast = (pq)/(p^\ast p)\in \Lambda^\ast$. By~\eqref{eq:sign-square} $\sign[{(\mathbbm{i}(r-r^\ast))}^2]=1$ and $\sign[p^\ast p] = 1$, so \begin{subequations} \begin{align} \sign[q/p^\ast] &= \sign[(pq)/(p^\ast p)] = \sign[pq]\sign[p^\ast p] = \sign[pq]\\ &= \sign[{(\mathbbm{i}(r-r^\ast))}^2]\sign[pq] = \sign[\varphi] = \delta\epsilon. \end{align} \end{subequations} By the definition of $\sign$, $q/p^\ast$ has a factorization $q/p^\ast = \delta\epsilon x_+^\ast x_+$, where $x_+:\mathcal{D}_+\to\mathbb{C}^\ast$. By lemma~\ref{lem:trinoid-monodromy} there exists a unitarizer $X$ of the monodromy in the sense of~\eqref{eq:unitary}. By theorem~\ref{thm:gwr} the immersion induced by $X\Phi$ via $r$-Iwasawa factorization is conformal and CMC, and by remark~\ref{rem:closing} it closes around each of its three Delaunay ends. \end{proof} \begin{remark} \mbox{} 1. In the case of $\mathrm{S}^3$ theorem~\ref{thm:trinoid} reconstructs a two-dimensional subfamily of the three-dimensional family of trinoids constructed in~\cite{Schmitt_Kilian_Kobayashi_Rossman_2007} without the isosceles constraint. 2. In the other three symmetric spaces, where the Iwasawa factorization can leave the big cell, the domain of the trinoids constructed by theorem~\ref{thm:trinoid} is not known, but numerical experiments indicate that in general the whole punctured Riemann sphere is mapped into the light cone, intersecting the ideal boundary along curves (figures~\ref{fig:noid-h3},~\ref{fig:noid-h3-iso},~\ref{fig:trinoid-h3-halfspace} and~\ref{fig:trinoid}). 3. Noids with more than three ends can be constructed using a cyclic branched cover of the Riemann sphere (figures~\ref{fig:noid-h3} and~\ref{fig:noid-h3-iso}). \end{remark} \subsection{Dressing} \begin{figure}[b] \centering \includegraphics[width=0.45\textwidth]{image-trinoid-h3-halfspace1.pdf} \includegraphics[width=0.45\textwidth]{image-trinoid-h3-halfspace2.pdf} \caption{\small Two views of an equilateral minimal trinoid in $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$ stereographically projected to $\mathbb{R}^3$ via the Poincar\'e halfspace model. The cutaway view (right) shows the intersection of the surface with the ideal boundary along nested topological circles.} \label{fig:trinoid-h3-halfspace} \end{figure} By theorem~\ref{thm:trinoid} we can construct equilateral trinoids in $\mathrm{dS}_3$ (figure~\ref{fig:trinoid-ds3}), and isosceles non-equilateral trinoids in $\mathrm{H}^3$ (figure~\ref{fig:noid-h3-iso}). In this section we construct equilateral trinoids in $\mathrm{H}^3$ (figure~\ref{fig:noid-h3}) by dressing equilateral trinoids in $\mathrm{dS}_3$. The dressing action interchanges the real forms for $\mathrm{dS}_3$ and $\mathrm{H}^3$. More explicitly, the \emph{dressing action} of a loop $g$ (defined on a circle of radius $r\in(0,\,1]$) on a frame $F$, written $g\dress F$, is by definition the $r$-unitary factor of the $r$-Iwasawa factorization of $gF$.
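Before turning to simple factors, we note that the trace identity~\eqref{eq:phi-in-terms-of-monodromy} used in the proof above (including the sign of the $2pq$ term in the halftrace $t_2$) can be verified symbolically; a minimal sympy check (our own, treating $r^\ast$ as an independent quantity eliminated through $rr^\ast+pq=1$):
\begin{verbatim}
import sympy as sp

r, p, q = sp.symbols('r p q', nonzero=True)
rs = (1 - p*q)/r               # r^* from the relation r r^* + p q = 1

t0 = (r + rs)/2                # t0 = t1
t2 = (r**2 + 2*p*q + rs**2)/2  # half-trace of M2 = (M0 M1)^{-1}
phi = 1 - 2*t0**2 - t2**2 + 2*t0**2*t2

assert sp.cancel(phi - (sp.I*(r - rs))**2 * p * q) == 0
\end{verbatim}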
In analogy to~\cite{Terng_Uhlenbeck_2000} we take $g$ to be a diagonal \emph{simple factor} \begin{equation} \label{eq:simple-factor} g = \diag( p^{1/2},\, p^{-1/2} ),\quad p = \frac{\lambda - \mu}{\overline{\mu}\lambda + 1},\quad p^\ast = -\frac{1}{p} \end{equation} on an $r$-circle, $r < \abs{\mu}$. \begin{lemma} \label{lem:simple-factor-dressing-swap} Diagonal simple factor dressing $F\mapsto g\dress F$ interchanges the real form for $\mathrm{dS}_3$ with the real form for $\mathrm{H}^3$. \end{lemma} \begin{proof} The dressing action of the simple factor $g$ on a frame $F$ in the real form for $\mathrm{dS}_3$ is computed explicitly and algebraically as \begin{equation} g\dress F = g F k^{-1} g^{-1},\quad k = \tfrac{1}{\sqrt{{\abs{u}}^2 - {\abs{v}}^2}} \big[\begin{smallmatrix}u & \overline{v}\\v & \overline{u}\end{smallmatrix}\big],\quad \big[\begin{smallmatrix}u\\v\end{smallmatrix}\big] = F^{-1}(\mu)\ell,\quad\ \ell = \big[\begin{smallmatrix}1\\0\end{smallmatrix}\big]. \end{equation} We have \begin{equation} Fk^{-1} = \begin{bmatrix}s & t\\t^\ast & s^\ast\end{bmatrix} \quad\Longrightarrow\quad g\dress F = g F k^{-1} g^{-1} = \begin{bmatrix}s & t p\\t^\ast p^\ast & s^\ast\end{bmatrix}. \end{equation} Hence $g\dress F$ is in the real form for $\mathrm{H}^3$. The proof for the other direction is the same except for a sign change. \end{proof} \begin{theorem} \label{thm:trinoid-h3} There exists a real $1$-parameter family of equilateral trinoids in $\mathrm{H}^3$ for each mean curvature $\abs{H}<1$ (figure~\ref{fig:noid-h3}). \end{theorem} \begin{proof} Let $F$ be a unitary frame for a trinoid in the $1$-parameter family of equilateral trinoids in $\mathrm{dS}_3$ with specified mean curvature $\abs{H}<1$ (theorem~\ref{thm:trinoid}). The dressed frame $g\dress F$ is in the real form for $\mathrm{H}^3$ (lemma~\ref{lem:simple-factor-dressing-swap}) so it satisfies the intrinsic closing condition for $\mathrm{H}^3$. To satisfy the extrinsic closing condition we choose the simple factor $g$ as follows. The subset of the $\lambda$ plane along which the monodromy group of $F$ is reducible is the zero set of the trace polynomial $\varphi$~\eqref{eq:phi}. With $t$ the halftrace of the monodromy of $F$ around each end, we have $\varphi= {(1-t)}^2(1+2t)$ with zero set \begin{equation} \big\{ \mu\in\mathbb{C}^\ast\mid \tfrac{1}{2} -\tfrac{1}{2} \sqrt{ 1 + 4 q \mu^{-1}f(\mu)} \in\mathbb{Z}/3 \big\},\quad \text{$f$ as in~\eqref{eq:trinoid-f2}}, \end{equation} a discrete subset of $\mathbb{C}^\ast$ which accumulates at $0$ and $\infty$. Using~\eqref{eq:trinoid-monodromy} and~\eqref{eq:phi-in-terms-of-monodromy}, $\mu$ can be found in this zero set away from the evaluation\xspace points such that $\ell$ is a common eigenvector of the monodromy group. This choice of $\mu$ implies that the monodromy of $g\dress F$ is $gMg^{-1}$, where $M$ is a monodromy of $F$ (in analogy to~\cite{Kilian_Schmitt_Sterling_2004} for noids in $\mathbb{R}^3$). Hence $g\dress F$ satisfies the extrinsic closing conditions for $\mathrm{H}^3$, so it is the frame for an equilateral trinoid in $\mathrm{H}^3$. \end{proof} \begin{remark} \label{rem:kobayashi} \mbox{} The GWR\xspace construction of trinoids (theorem~\ref{thm:trinoid}) is as follows: \begin{enumerate} \item In terms of a potential~\eqref{eq:trinoid-potential} on the three-punctured sphere, compute a dressing $C$ which unitarizes the monodromy group of the corresponding holomorphic frame $\Phi$ based at $\Phi(z_0)=\mathbbm{1}$.
\item Show that the dressed holomorphic frame $C\Phi$ is in the big cell of the Iwasawa decomposition in some subregion of the domain. \item Construct a CMC immersion of that subregion into $\mathrm{H}^3$ via the evaluation\xspace formula~\eqref{eq:sym} applied to the unitary Iwasawa factor of $C\Phi$. \end{enumerate} Following this program with gauge equivalent trinoid potentials, Kobayashi in \cite[Theorem 4.1]{Kobayashi_2010} gives an incomplete construction of equilateral trinoids in $\mathrm{H}^3$ with mean curvature $\abs{H}< 1$, failing to address step (2). Our theorem~\ref{thm:trinoid-h3} fills this gap by constructing a dressing $C$ in the loop group $\Lambda_+\mathrm{SL}_2(\mathbb{C})$. Thus $C\Phi$ is in the big cell of the Iwasawa decomposition at, and hence in a neighborhood of, the basepoint $z_0$. Moreover, it follows from theorem~\ref{thm:main} that for small necksizes $C\Phi$ is in the big cell of the Iwasawa decomposition on the 3-holed sphere obtained by removing 3 disks around the punctures of the 3-punctured sphere. Dorfmeister, Inoguchi and Kobayashi further state without proof in~\cite[\S 10.5]{Dorfmeister_Inoguchi_Kobayashi_2014} that the theorem of Kobayashi mentioned above constructs CMC immersions of the three-punctured sphere into $\mathrm{H}^3$. On the contrary, in light of the numerical evidence of figures~\ref{fig:noid-h3},~\ref{fig:noid-h3-iso},~\ref{fig:trinoid-h3-halfspace} and~\ref{fig:trinoid}, we conjecture that in analogy to 2-noids (Delaunay surfaces), the incompletely constructed trinoids in \cite{Kobayashi_2010} map the three-punctured sphere not into $\mathrm{H}^3$ but into $\mathrm{H}^3\cup\mathrm{S}^2\cup\mathrm{H}^3$. \end{remark}
\section*{Acknowledgments} This work was supported by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} Spatial random fields (SRF's) have applications in hydrology \cite{kitan,rubin}, oil reservoir engineering \cite{hohn}, environmental pollutant mapping and risk assessment \cite{christ}, mining exploration and reserves estimation \cite{goov}, as well as environmental health studies \cite{ch98}. SRF's model spatial correlations in variables such as mineral concentrations, dispersion of environmental pollutants, soil and rock permeability, and flow fields in oil reservoirs. Knowledge of spatial correlations enables (i) generating predictive iso-level contour maps, (ii) estimating the uncertainty of predictions, and (iii) developing simulations that partially reconstruct the process of interest. Geostatistics provides mathematical tools for these tasks. The classical approach is based on Gaussian SRF's (GSRF's) and various generalizations for non-Gaussian distributions \cite{lantu,wack}. For GSRF's the spatial structure is determined from the covariance matrix, which is estimated from the distribution of the data in space. An SRF state (realization) can be decomposed into a {\it deterministic trend} $m_{\rm x} ({\bf s})$, a {\it correlated fluctuation} ${X}_{\lambda}({\bf s})$, and an independent random noise term $ \epsilon({\bf s}) $, so that $ X({\bf s})=m_{\rm x} ({\bf s})+{X}_{\lambda}({\bf s})+\epsilon({\bf s}).$ The trend represents large-scale variations of the field, which can be obtained in principle by ensemble averaging, i.e. $m_{\rm x} ({\bf s})=E[X({\bf s})]$. In practice, the trend is often determined from a single available realization. The fluctuation term corresponds to `fast variations' that reveal structure at small scales, which nonetheless exceed a cut-off $\lambda$. The random noise represents non-resolved inherent variability due to resolution limits, purely random additive noise, or non-systematic measurement errors. It is typically assumed that the fluctuation is a {\it second-order stationary SRF}, or an {\it intrinsic SRF} with second-order stationary increments \cite{yaglom}. The {\it observed SRF} after detrending is a zero-mean fluctuation: $ X^{*}({\bf s})=X_{\lambda}({\bf s})+\epsilon({\bf s}).$ In statistical physics the probability density function (pdf) of a fluctuation field $x({\bf s})$ governed by an energy functional $H[x({\bf s})]$ is expressed as $f_{\rm x} [x({\bf s})] = Z^{- 1} \exp \left\{ { - H[x ({\bf s})]} \right\},$ where $ Z $ is the partition function. Using this representation, the Gaussian joint pdf in classical geostatistics is expressed in terms of the functional: \begin{equation} \label{covenergy} H[x({\bf s})] = \frac{1}{2} \int {d{\bf s}} \int {d{\bf s'}} x ({\bf s})\, [G_{x}]^{ - 1} ({\bf s}-{\bf s'})\, x ({\bf s'}). \end{equation} \noindent In Eq.~(\ref{covenergy}), $[G_{x}]^{ - 1} ({\bf s}-{\bf s'})$ is the inverse of the covariance function $ G_{\rm x} ({\bf s}-{\bf s'}) $, which determines the {\it spatial disorder}. While statistical physics plays an increasingly important role in understanding the behavior of complex geophysical systems \cite{sornette}, its applications in geostatistical analysis have not yet been explored. Spartan Spatial Random Fields (SSRF's) model spatial correlations in terms of `interactions', in the spirit of Markov SRF's \cite{winkler}.
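For orientation, a realization of a zero-mean GSRF with a given covariance can be generated by factoring the covariance matrix on a sampling grid. The following minimal Python sketch (our own illustration, using an exponential covariance model chosen for concreteness, not one from the text) does this on a one-dimensional grid:
\begin{verbatim}
import numpy as np

# Illustrative exponential covariance G(s-s') = exp(-|s-s'|/xi)
s = np.linspace(0.0, 10.0, 200)
xi = 2.0                                         # correlation length
G = np.exp(-np.abs(s[:, None] - s[None, :]) / xi)

# One realization: x = L w with G = L L^T and w ~ N(0, I)
L = np.linalg.cholesky(G + 1e-10*np.eye(s.size))  # jitter for stability
x = L @ np.random.default_rng(0).standard_normal(s.size)
\end{verbatim}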
In \cite{dth03} general properties and permissibility conditions were derived for the fluctuation-gradient-curvature (FGC) SSRF model, with the following energy functional: \begin{equation} \label{fgc} H_{\rm fgc} [X_\lambda ] = \frac{1}{{2\eta _0 \xi ^d }}\int {d{\bf s}} \, \left\{ \left[ {X_\lambda ({\bf s})} \right]^2 + \eta _1 \,\xi ^2 \left[ {\nabla X_\lambda ({\bf s})} \right]^2 + \xi ^4 \left[ {\nabla ^2 X_\lambda ({\bf s})} \right]^2 \right\}. \end{equation} For this model, a moment-based method for parameter estimation was proposed and tested with simulated data; methods for SSRF non-constrained simulation were presented in \cite{dth03b}; systematic reduction of anisotropic disorder, based on the covariance tensor identity, was investigated in \cite{dth02,dth04}. The FGC model \cite{dth03} has three main parameters: the scale coefficient $\eta_0$, the covariance-shape coefficient $\eta_1$, and the correlation length $\xi$. {\it Bochner's theorem} \cite{christ} for the covariance function requires $\eta _1 > -2$. A coarse-graining kernel is used to cut off the fluctuations at $k_c \propto \lambda ^{-1} $ \cite{dth03,dth03b}, leading to a band-limited covariance spectral density and differentiable field configurations (in the mean square sense) \cite{dth03b}. \section{Operator Notation} \label{hami-not} Let $\Omega \subset {\mathbb R}^{d}$ denote the area of interest and $A(\Omega)$ its boundary. Consider an SSRF defined over this area with parameters $\eta_{0}, \eta_{1}, \xi$, and with finite variance $\sigma^{2}_{\rm x}$. Let us assume that it is possible to normalize the SSRF to unit variance by simply dividing the states by the standard deviation. Next, it is possible to express the pseudo-energy functional in operator notation as follows: \begin{equation} \label{Hop} H [X_\lambda ] \equiv \langle X_\lambda ({\bf s}) | \, {\cal H} \, | X_\lambda ({\bf s}) \, \rangle +S(A) \equiv \int_{\Omega} d{\bf s} \, X_\lambda ({\bf s}) \, {\cal H} \left[ X_\lambda ({\bf s}) \right] + S(A), \end{equation} \noindent where ${\cal H}$ is a `pseudo-hamiltonian' operator and $S(A)$ is a surface term. Assuming that the surface term is negligible, the eigenvalue equation becomes: \begin{equation} \label{eigv1} {\cal H} \, | \psi_{E}({\bf s};{\bf b}) \rangle =E \, \psi_{E}({\bf s};{\bf b}) , \end{equation} \noindent where $\psi_{E}({\bf s};{\bf b})$ is an eigenfunction, $E$ is the corresponding energy and ${\bf b}$ a degeneracy vector index, which may include both discrete and continuous components. Since the SSRF has been normalized to unit variance, the eigenfunctions $\psi_{E} ({\bf s};{\bf b})$ can also be assumed normalized, i.e., $\int_{\Omega} d{\bf s} \, \psi_{E}^{2}({\bf s};{\bf b})=1$, and then $H [X_\lambda ]= E$. If Eq.~(\ref{eigv1}) admits solutions for non-zero $E$, one can construct eigenfunctions that correspond to positive excitation energies $E$. Realizations corresponding to low-lying excitations have high probability. Hence, the main idea is to consider the observed state, or the union of the observations and the predictions, as being locally represented by an excited state.
This approach can be used for both parameter estimation and prediction (spatial estimation). \subsection{Eigenfunctions for FGC case} \label{fgc_sol} For the FGC functional of Eq.~(\ref{fgc}), integrating the square-gradient term by parts leads to the following equation: \begin{equation} \label{h1} \int_{\Omega} d{\bf s}\, \left[ \nabla \psi_{E}({\bf s};{\bf b})\right]^2 = -\int_{\Omega} d{\bf s}\, \psi_{E}({\bf s};{\bf b}) \, \nabla^2 \psi_{E}({\bf s};{\bf b}) + \int_{A(\Omega)} d{\bf a} \cdot \nabla \psi_{E}({\bf s};{\bf b}) \, \psi_{E}({\bf s};{\bf b}). \end{equation} \noindent In Eq.~(\ref{h1}) $\int _{A(\Omega)} d{\bf a}$ denotes the surface integral on the boundary of the area of interest. Similarly, using Green's theorem on the square-curvature term one obtains \begin{eqnarray} \label{h2} \int_{\Omega} d{\bf s}\, \left[ \nabla^{2} \psi_{E}({\bf s};{\bf b}) \right]^2 & = & \int_{\Omega} d{\bf s}\, \psi_{E}({\bf s};{\bf b}) \nabla^{4} \psi_{E}({\bf s};{\bf b}) + \int_{A(\Omega)} d{\bf a}\, \cdot \nabla \psi_{E}({\bf s};{\bf b}) \nabla^{2} \psi_{E}({\bf s};{\bf b}) \nonumber \\ & - & \int_{A(\Omega)} d{\bf a}\, \cdot \nabla \left[\nabla^{2} \psi_{E}({\bf s};{\bf b}) \right] \psi_{E}({\bf s};{\bf b}) . \end{eqnarray} \noindent Hence, in operator notation the FGC functional is expressed as follows: \begin{equation} \label{Hop2} {\cal H}_{\rm fgc}= \frac{1}{2\eta _0 \xi ^d } \left[ 1 -\eta_{1} \, \xi^2 \, \nabla^2 + \xi^4 \, \nabla^{4} \right], \end{equation} \noindent and the surface term is given by: \begin{eqnarray} \label{surface} S(A)& = & \frac{ 1}{2\, \eta_{0}\, \xi^{d}} \left[ \eta_{1} \, \xi^2 \, \int_{A(\Omega)} d{\bf a} \cdot \nabla \psi_{E}({\bf s};{\bf b}) \, \psi_{E}({\bf s};{\bf b}) \right. \nonumber \\ & + & \xi^4 \, \int_{A(\Omega)} d{\bf a}\, \cdot \nabla \psi_{E}({\bf s};{\bf b}) \nabla^{2} \psi_{E}({\bf s};{\bf b}) \nonumber \\ & - & \left. \xi^4 \, \int_{A(\Omega)} d{\bf a}\, \cdot \nabla \left[\nabla^{2} \psi_{E}({\bf s};{\bf b}) \right] \psi_{E}({\bf s};{\bf b}) \right]. \end{eqnarray} \noindent If the units are chosen so that $2\eta _0 \xi ^d=1$ and the surface term is ignored, the eigenvalue equation is given by the following partial differential equation (pde): \begin{equation} \label{eigv2} \psi_{E}({\bf s};{\bf b}) -\eta_{1} \, \xi^2 \, \nabla^2 \, \psi_{E}({\bf s};{\bf b}) + \xi^4 \, \nabla^{4} \, \psi_{E}({\bf s};{\bf b}) =E \, \psi_{E}({\bf s};{\bf b}) . \end{equation} \noindent The eigenfunctions $\psi_{E}({\bf s};{\bf b})$ of Eq.~(\ref{eigv2}) are given by the following four plane waves: \begin{equation} \label{eigens} \psi_{E}({\bf s};{\bf b})= \,e^{{\bf k}_j \cdot {\bf s}}, \quad {\bf k}_j =k_j \, {\hat{\bm \theta}}, \end{equation} \noindent where ${\hat{\bm \theta}}$ represents the unit direction vector, and $k_j $ the magnitudes of the \emph{characteristic wave-vectors} that are given by the roots of the fourth-order \emph{characteristic polynomial}: \begin{equation} \label{wavvec:char} \Pi_{\rm fgc} (k \xi) = (1-E)\, - \,\eta _1 \,\xi ^2\,k^2\,+\xi ^4\,\,k^4=0.
\end{equation} \noindent Thus, the characteristic wavevectors are given by the following expressions: \begin{eqnarray} \label{k1} k_1(\eta_{1},\xi,E) & = & \frac{1}{\sqrt{2}\xi }\sqrt {\eta _1 + \sqrt {\eta _1^2 -4 (1-E)} } \\ \label{k2} k_2(\eta_{1},\xi,E)& = & - \frac{1}{\sqrt{2}\xi }\sqrt {\eta _1 + \sqrt {\eta _1^2 -4 (1-E)} } \\ \label{k3} k_3(\eta_{1},\xi,E) & = & \frac{1}{\sqrt{2}\xi }\sqrt {\eta _1 - \sqrt {\eta _1^2 -4 (1-E)} } \\ \label{k4} k_4(\eta_{1},\xi,E) & = & - \frac{1}{\sqrt{2}\xi }\sqrt {\eta _1 - \sqrt {\eta _1^2 -4 (1 - E)} }. \end{eqnarray} \noindent Note that only the magnitude of the wave-vectors is determined from the pde~(\ref{eigv2}). This is due to the fact that isotropic spatial dependence was assumed in the SSRF model. (a) If $\eta_{1} >0 \wedge 1-\eta_{1}^2/4 <E <1$ all the roots are real. (b) If $\eta_{1} >0 \wedge E>1$, then $k_{1}, k_{2}$ are real, while $k_{3}, k_{4}$ are purely imaginary. (c) If $\eta_{1} >0 \wedge 1-\eta_{1}^2/4 >E$, then all the roots are complex. (d) If $\eta_{1} <0 \wedge 1-\eta_{1}^2/4 <E <1$, then all the roots are imaginary. (e) If $\eta_{1} <0 \wedge E > 1$, then $k_{1}, k_{2}$ are real, while $k_{3}, k_{4}$ are imaginary. (f) If $\eta_{1} <0 \wedge 0< E < 1-\eta_{1}^2/4$ all the roots are complex. In general, an excited state formed by the linear superposition of degenerate eigenstates of energy $E$ is given by the expression: \begin{equation} \label{state_E} Z_{E}({\bf s}; c_{\bf b}) = \sum_{j=1}^{4} u\left( k_{c}- \| k_{j}\| \right) \, \int d\hat{\bm \theta} \, c_{j}(\hat{\bm \theta}) \, \exp \left( k_{j} \, {\hat{\bm \theta}} \cdot {\bf s} \right), \end{equation} \noindent where $c_{j}(\hat{\bm \theta})$ is a direction-dependent (possibly complex-valued) function, $\| k_{j} \| $ is the modulus of the characteristic wavevector, and $u(.)$ is the unit step function, used to guarantee that the fluctuations in the excited state do not exceed the cutoff `frequency'. For the estimation of real-valued processes, the coefficients $c_{j}(\hat{\bm \theta})$ are constrained to give real values for the excited state $Z_{E}({\bf s}; c_{\bf b})$. If $c_{j}(\hat{\bm \theta})=c_{j}$, an {\it isotropic excited state} is obtained, which can be expressed as $Z_{E}({\bf s}; c_{1},\ldots,c_{4})=\sum_{j=1}^{4} c_{j} \, u\left( k_{c}- \| k_{j}\| \right) \psi_{E}({\bf s};j)$, where $\psi_{E}({\bf s};j)=\int d\hat{\bm \theta} \exp \left( k_{j} \, {\hat{\bm \theta}} \cdot {\bf s} \right)$. \subsection{Eigenstates in $d=1$} We examine in more detail the real-valued eigenstates that are trigonometric or hyperbolic functions in the one-dimensional domain $[0,\, L] \subset {\mathbb R}$. \subsubsection{Exponential Eigenstates} For characteristic wave-vectors $k$ that are real numbers, the {\em normalized eigenfunctions} and the corresponding energies of Eq.~(\ref{eigv2}) are given by \begin{eqnarray} \label{expo1d} X(s) & = & e^{-k\,s} \, \sqrt{\frac{2\, k}{1-e^{-2k\,L}}}, \\ \label{Eexpo1d} E & = & 1 - \eta_{1} (k\, \xi)^{2} + (k\, \xi)^{4}. \end{eqnarray} \noindent However, if the exponential function is inserted in Eq.~(\ref{fgc}), the resulting energy is given by \begin{equation} \label{Hexpo1d} H[X(s)]=1 + \eta_{1} (k\, \xi)^{2} + (k\, \xi)^{4}. \end{equation} \noindent The difference between the energy given by Eq.~(\ref{Eexpo1d}) and the correct energy, given by Eq.~(\ref{Hexpo1d}), is due to the fact that the boundary term cannot be ignored for the localized exponential excitation.
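The four roots in Eqs.~(\ref{k1})--(\ref{k4}) and the case distinctions (a)--(f) are easily explored numerically. A minimal Python sketch (our own illustration; parameter values are arbitrary):
\begin{verbatim}
import numpy as np

def characteristic_wavevectors(eta1, xi, E):
    """Roots of (1 - E) - eta1 xi^2 k^2 + xi^4 k^4 = 0, Eqs. (k1)-(k4)."""
    disc = np.sqrt(complex(eta1**2 - 4.0*(1.0 - E)))
    k1 = np.sqrt((eta1 + disc)/2.0) / xi
    k3 = np.sqrt((eta1 - disc)/2.0) / xi
    return np.array([k1, -k1, k3, -k3])

# case (a): eta1 > 0 and 1 - eta1^2/4 < E < 1  ->  four real roots
print(characteristic_wavevectors(eta1=2.0, xi=1.0, E=0.5))
\end{verbatim}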
\subsubsection{Trigonometric Eigenstates} If $k$ is an imaginary number, the eigenfunctions are trigonometric functions. A normalized cosine eigenfunction and the corresponding energy are given by: \begin{eqnarray} \label{trig1dX} X(s) & = & \cos (k\,s) \, \sqrt{\frac{2}{L \, \left[1 +{\rm sinc}(2k\,L) \right]}}, \\ \label{trig1dE} E& = & 1 + \eta_{1} (k\, \xi)^{2} \, \frac{1 -{\rm sinc}(2k\,L)}{1 +{\rm sinc}(2k\,L)} + (k\, \xi)^{4}. \end{eqnarray} \noindent For large domains, $k\, L\gg1$, Eq.~(\ref{trig1dE}) is practically equivalent to Eq.~(\ref{Hexpo1d}). As expected, in the case of an extended eigenstate (as the cosine) the boundary term can be ignored. \section{Spatial Estimation with SSRF's} \label{spatest} Assume $S_{\rm m}=({\bf s}_1, \ldots {\bf s}_N)$ is a set of data points with the respective vector of measurements denoted by ${\bf X}^{*}=(X^{*}_{1},\ldots,X^{*}_{N})$; let ${\bf s}_{0} \notin S_{\rm m}$ be the estimation point and $\hat{X}_{\lambda}({\bf s}_{0})$ the estimate (spatial prediction). The local neighborhood of ${\bf s}_{0}$ is the set $S_{0} \equiv B({\bf s}_{0}; r_c)$ of all the data points ${\bf s}_{j}, j=1,...,M$ inside a `sphere' of radius equal to one correlation range from ${\bf s}_{0}$. In geostatistics, $\hat{X}({\bf s}_{0})$ is determined by optimal linear filters (kriging estimators) \cite{kitan,wack}, which form the estimate as a superposition of the data values inside the local neighborhood, and there is no explicit resolution scale. The coefficients of the superposition are selected to make the estimate unbiased and to minimize the mean square error. Kriging is an exact interpolator, meaning that for any ${\bf s}_{i} \in S_{\rm m}, \hat{X}({\bf s}_{i}) = X^{*} ({\bf s}_{i})$. Exactitude is not always desirable, since it ignores measurement errors and leads to excessive smoothing of the fluctuations. Hence, different estimation methods are useful. The SSRF models can be used in kriging algorithms to provide new, differentiable covariance functions. In addition, within the SSRF framework it is possible to define a new type of estimator. \subsection{Low Local Energy Estimators} \label{llee} The central idea is that a `good' estimate should correspond to a state with significant probability of realization. If the energy functional is non-negative, as in Eq.~(\ref{fgc}), the highest probability is associated with the uniform state $X_\lambda({\bf s})=0$, which is not physically interesting. Other states with high probability correspond to low-energy excitations. Let us superimpose the degenerate eigenstates with energy $E$ to form a {\em mixed state} $Z_{E}({\bf s};{\bf c})= \sum_{i=1}^{D} c_{i}\, \psi_{E}({\bf s};b_{i})$; ${\bm c}=(c_{1},\ldots,c_{D}) $ is a $D$-dimensional vector of linear coefficients that correspond to the degeneracy indices. In principle $D$ can be infinite since the directional dependence given by Eq.~(\ref{state_E}) is continuous. However, in practice it may be simplest to restrict the search to one `optimal' direction. The energy $H [Z_{E}({\bf s}; {\bf c})]$ of the mixed state is not necessarily equal to $E$. In fact, for orthonormal eigenstates $H [Z_{E}({\bf s}; {\bf c})]= \mu \, E$, where $\mu= \sum_{i=1}^{D} c_{i}^{2}$. This reflects the fact that the `energy level' of the observed process is set by the measurements (i.e., the coefficients $c_{i}$). Since the scale coefficient $\eta_{0}$ is inversely proportional to the magnitude of the fluctuations, it follows that $\mu^{-1} \propto \eta_{0}$.
It should also be noted that if two mixed states $({\bf c}_{1},E_{1})$ and $({\bf c}_{2},E_{2})$ are energetically equivalent, i.e., $\mu_{1} \,E_{1}=\mu_{2} \,E_{2}$, they are not in general linearly related, since according to Eqs.~(\ref{eigens}), (\ref{k1})-(\ref{k4}) and~(\ref{state_E}), the dependence of $Z_{E}({\bf s};{\bf c})$ on $E$ is nonlinear. We propose that the observations for ${\bf s}_{j} \in B({\bf s}_{0}; r_c)$ be expressed as $X^{*}({\bf s}_j)=Z_{E}({\bf s}_{j};{\bf c}_{0}) +\varepsilon({\bf s}_j)$, where $Z_{E}({\bf s};{\bf c}_{0})$ is a `local' excitation and $\varepsilon({\bf s}_j)$ is the {\it local excitation residual}. Local dependence stems from the fact that the coefficients ${\bf c}_{0}$ depend on ${\bf s}_{0}$, in contrast with the solution of Eq.~(\ref{state_E}), in which the coefficient vector is global. The {\em LLEE estimator} is then given by $\hat{X}_{\lambda}({\bf s}_{0})=Z_{E}({\bf s}_{0};{\bf c}_{0})$. Since $Z_{E}({\bf s};{\bf c}_{0})$ is an {\em estimate} of the underlying process $X_\lambda({\bf s})$, the excitation residual $\varepsilon({\bf s}_j)$ is not in general the same as the noise $\epsilon ({\bf s})$. The coefficients ${\bf c}_{0}$ follow from minimizing the mean square excitation residual inside $B({\bf s}_{0}; r_c)$, i.e., \begin{equation} \label{coeff} {\bf c}_{0}=\underbrace{{\rm arg} \: {\rm min}}_{\bf c} \sum_{j=1}^{M} \left[ X^{*}({\bf s}_j) - Z_{E}({\bf s}_{j};{\bf c}) \right] ^{2}. \end{equation} \noindent The above is a typical problem of multiple linear regression, where the regressors are the functions ${\psi}_{E}({\bf s}_{i};b_{j})$. If we define the $M \times D$ matrix $\psi_{E,ij} \equiv {\psi}_{E}({\bf s}_{i};b_{j})$, the solutions for $c_{0,i}$ and the LLEE are given by: \begin{eqnarray} \label{coefsol} \alpha_{ik} & = & \sum_{j=1}^{M} \psi_{E,ji} \,\psi_{E,jk}, \quad i,k=1,\ldots, D, \\ c_{0,i} & = & \sum_{k=1}^{D} \left[ \alpha \right]^{-1}_{ik} \, \sum_{l=1}^{M} \psi_{E,lk}\, X^{*}_{l}, \quad i=1,\ldots, D, \end{eqnarray} \begin{equation} \label{estims0} \hat X_{\lambda}({\bf s}_{0}) = {\bf w}_{0} \, \cdot {\bf X}^{*} , \end{equation} \noindent where ${\bf w}_{0} $ is a {\it weight vector} given by: \begin{equation} \label{weight} w_{0,i} = \sum_{k=1}^{D}\psi_{E,0k} \, \sum_{j=1}^{D} \left[ \alpha \right]^{-1}_{kj} \, \psi_{E,ij}, \quad i=1,\ldots, M. \end{equation} The uncertainty of the LLEE estimate is determined from the ensemble variance of the local excitation residual $ \sigma_\varepsilon ^2 ({\bf s}_0 ) = E\left[ {X^{*} ({\bf s}_0 ) - \hat X ({\bf s}_0 )} \right]^2 $, i.e.: \begin{equation} \label{errestim} \sigma _\varepsilon ^2 ({\bf s}_0 ) = \sigma_{\rm x^{*}}^2 + \sum_{i=1}^{M} \sum_{j=1}^{M} w_{0,i}\, w_{0,j} \, G_{{\rm x}^{*},ij} - 2 \, \sum_{i=1}^{M} w_{0,i} \, G_{{\rm x}^{*},0i} \quad , \end{equation} \noindent where $G_{{\rm x}^{*},ij} = E\left[ X^{*}_{i} \, X^{*}_{j} \right]$ is the covariance matrix at the observation points, $G_{{\rm x}^{*},0i}= E\left[ X^{*}_{i} \, X^{*}_{0} \right]$ is the covariance vector of the fluctuations between the observation points and the estimation point ${\bf s}_{0}$, and $\sigma_{\rm x^{*}}^2= E\left[ X^{*}_{i} \,X^{*}_{i} \right]$ is the variance of the observed process. \subsection{Properties of the LLEE} It follows from Eqs.~(\ref{estims0}) and~(\ref{weight}) that the LLEE is linear in the fluctuations. Hence, the estimates are unbiased and follow the Gaussian law (if the observations are normally distributed).
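In matrix form, Eqs.~(\ref{coefsol})--(\ref{weight}) amount to an ordinary least-squares solve. A minimal numpy sketch (our own illustration, assuming the matrix of eigenfunction values has already been evaluated):
\begin{verbatim}
import numpy as np

def llee(psi_obs, psi0, x_obs):
    """Low local energy estimate at s_0.

    psi_obs : (M, D) array, psi_E(s_i; b_j) at the M neighbors of s_0
    psi0    : (D,)   array, psi_E(s_0; b_j) at the estimation point
    x_obs   : (M,)   array, detrended observations X*(s_i)
    """
    alpha = psi_obs.T @ psi_obs                     # Eq. (coefsol), D x D
    c0 = np.linalg.solve(alpha, psi_obs.T @ x_obs)  # regression coefficients
    w0 = psi_obs @ np.linalg.solve(alpha, psi0)     # weight vector, Eq. (weight)
    return psi0 @ c0, w0                            # estimate equals w0 . x_obs
\end{verbatim}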
Kriging methods are based on minimization of the (ensemble) mean square error, which is a global optimality criterion. In contrast, the LLEE criterion is local (i.e., the minimum of the average squared excitation residual in the neighborhood of the estimation point). Another difference from kriging is that low local energy estimates do not exactly match the measurements at the observation points. The property of non-exactitude is maintained even when the noise can be ignored. Finally, unlike kriging predictions, the LLEE provides multiple estimates, since different energy levels lead to different excited states. In this respect the LLEE is similar to a simulation method, although, unlike simulations, it does not involve the generation of random numbers. It should also be noted that the energy of the local excitations is not necessarily the energy of the estimated state, because the locality of the coefficient vector ${\bf c}_{0}$ means that the operators $\nabla$ and $\nabla^{2}$ contribute to the overall energy when they act on the coefficients of the mixed state in Eq.~(\ref{fgc}). \section{Conclusions} A spatial estimation method for applications in the geosciences is presented. The method is based on the use of `pseudo-energy' functionals, motivated by explicit constraints or heuristic physical arguments, to capture the spatial heterogeneity of the observed process. Estimates of the process at unmeasured points (predictions) are based on local interpolating functions that represent low-energy excitations of the pseudo-energy. Multiple estimates of the process can be generated by considering local interpolating functions that correspond to different excitation energies.
\section{\label{sec:introduction}Introduction} In-situ characterization of detailed 3D views of defects and interfaces and their evolution at the mesoscale (few nm - hundreds of $\mu$m) is required to develop microstructure-aware physics-based models and to design advanced materials with tailored properties \cite{ref:mesoscale2,ref:mesoscale1}. Coherent Diffraction Imaging (CDI) is a non-destructive X-ray imaging technique providing 3D measurements of sample electron density at nm resolution, from which sub-nm atomic displacement estimates can be calculated to understand deformation in $\mu$m-sized non-crystalline specimens \cite{ref:CDI_noncrystal,ref:Au_Robinson,ref:CDI_deformation,ref:CDI_Robinson,ref:CDI1,ref:CDI_strain}. CDI has now been applied in a wide range of scientific studies including biology, physics and engineering \cite{ref:CDI_review}. CDI has been used to measure the 3D structures of individual viruses \cite{ref:CDI_virus} and bacteria \cite{ref:CDI_bacteria}, for imaging quantum dots \cite{ref:CDI_quantum}, to image red blood cells infected with malaria \cite{ref:CDI_blood}, for 3D imaging of human chromosomes \cite{ref:CDI_chromosome}, for imaging the 3D electron density of large ZnO crystals \cite{ref:CDI_ZnO}, using only partially coherent light \cite{ref:CDI_PC}, for measuring 3D lattice distortions due to defect structures in ion-implanted nano-crystals \cite{ref:BCDI:Felix1}, and for measuring dislocations in polycrystalline samples \cite{ref:CDI_RP}. The CDI technique records only the intensity of the complex diffraction pattern originating from the illuminated sample volume, so all phase information is lost. If the phase information in the diffraction signal could be measured, then a simple inverse Fourier transform would provide the 3D electron density which generated the diffraction pattern. Many iterative numerical methods for achieving phase retrieval have been developed which map measured diffraction patterns to electron density \cite{ref:CDI_phase_1,ref:CDI_phase_2,ref:CDI_phase_3,ref:CDI_phase_4,ref:CDI_phase_5,ref:CDI_phase_6,ref:CDI_phase_7,ref:CDI_phase_8,ref:CDI_phase_9,ref:CDI_mematic,ref:CDI_raytrace,ref:BCDI:Sid2,ref:BCDI:Sid3}. Existing CDI phase reconstruction methods are often lengthy, brute-force, and computationally expensive procedures that require extensive fine tuning by expert users, relying on a wide range of experience and expertise in various fields. Standard methods are sensitive to small variations in diffraction signals, and different users may produce inconsistent reconstructions depending on their experience and their choice of initial parameter guesses. Expert users are able to combine various conventional algorithms, such as error reduction, difference map, shrinkwrap and hybrid input-output, to exploit the available frequency information in a CDI measurement while utilizing robust treatments of measurement signal noise \cite{ref:CDInoise}. The challenges of iterative phase retrieval make it a good candidate for utilizing machine learning methods. Although ML methods cannot substitute for traditional algorithms, they do have the potential to speed up reconstructions by providing an initial guess which is then fine-tuned by traditional methods to achieve accurate results.
Machine learning (ML) tools, such as deep neural networks, have recently grown in popularity due to their ability to learn input-output relationships of large complex systems. Neural networks have been used to speed up lattice quantum Monte Carlo simulations \cite{ref:ML_montecarlo}, for studying complicated many body systems \cite{ref:ML_manybody}, for depth prediction in digital holography \cite{ref:ML_hologram}, and combined with model-independent feedback for adaptively controlling particle accelerator beams \cite{ref:ESML_AS}. Recently 2D convolutional neural networks (CNN) have been utilized for speeding up diffraction-based reconstructions. CNNs have been developed to directly map 2D diffraction amplitude measurements to the amplitudes and phases of the 2D objects from which they originated, presenting an approach for orders of magnitude faster 2D amplitude and phase reconstructions for CDI \cite{ref:realtimeCDI}. CNNs have also been recently developed for orders of magnitude faster mapping of electron backscatter diffraction (EBSD) patterns to crystal orientations \cite{ref:EBSD_RP}. In this work we utilize TensorFlow and its automatic differentiation capabilities; a recent application of automatic differentiation to the problem of phase retrieval is given in \cite{ref:autoDiff}. \section{\label{sec:results}Summary of Main Results} A graphical summary of the method proposed in this work is shown in Figure \ref{fig:ESML}. We present an adaptive ML approach to the reconstruction of 3D objects with uniform electron densities from synthetic diffraction patterns. Our adaptive ML framework combines a 3D convolutional neural network with an ensemble of model-independent adaptive feedback agents to reconstruct 3D volumes based only on CDI diffraction measurements. The algorithm uses 3D diffracted intensities as inputs and provides outputs in the form of spherical harmonics which describe the surfaces of the 3D objects with uniform densities that generated the diffracted intensities. \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure1.pdf} \caption{\label{fig:ESML} The 3D CNN's output is used as the initial condition for ES tuning.} \end{figure*} \section{\label{sec:math}Mathematical Background} Ideally, the goal of the CDI measurement would be to record the complex diffracted scalar wavefield $\psi(\mathbf{w}) = \left | \psi(\mathbf{w}) \right | \exp \left [ i \phi(\mathbf{w}) \right ]$, which is related to the Fourier transform of the electron density, $\rho(\mathbf{r})$, of the sample, where $\mathbf{r} = (x,y,z)$ and $\mathbf{w} = (w_x,w_y,w_z)$ are the sample-space and reciprocal-space coordinates, respectively. If such a measurement could be made, the 3D electron density could be reconstructed by simply performing an inverse Fourier transform.
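This point, and the difficulty discussed next, can be demonstrated with a short numerical sketch (illustrative Python, not part of the analysis pipeline): inverting the measured amplitude alone does not recover the object, while inverting with the true phases does.
\begin{verbatim}
import numpy as np

# Toy 1D "electron density": an off-center, flat-topped object
rho = np.zeros(256)
rho[40:90] = 1.0

psi = np.fft.fft(rho)              # complex wavefield (not measurable)
intensity = np.abs(psi) ** 2       # what the detector records

# Inverse transform of the amplitude alone fails to recover rho
rho_naive = np.fft.ifft(np.sqrt(intensity)).real
print(np.allclose(rho_naive, rho, atol=1e-6))   # False

# With the true phases restored, the inversion is exact
rho_exact = np.fft.ifft(np.sqrt(intensity) * np.exp(1j * np.angle(psi))).real
print(np.allclose(rho_exact, rho, atol=1e-6))   # True
\end{verbatim}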
Unfortunately, when a coherent X-ray passes through a material with electron density $\rho(\mathbf{r})$, what is recorded on a detector is the intensity of the diffracted light, given by \begin{eqnarray} I(\mathbf{w}) &=& \iint \rho(\mathbf{r}_1)\rho^\star(\mathbf{r}_2)\exp\left [ i \mathbf{w} \cdot \left ( \mathbf{r}_1 - \mathbf{r}_2 \right ) \right ] d\mathbf{r}_1d\mathbf{r}_2 \nonumber \\ &=& \psi(\mathbf{w})\psi^\star(\mathbf{w}) \nonumber \\ &=& \left | \psi(\mathbf{w}) \right |^2 \exp \left [ i \phi(\mathbf{w}) \right ] \exp \left [ -i \phi(\mathbf{w}) \right ] \nonumber \\ &=& \left | \psi(\mathbf{w}) \right |^2, \label{FFT} \end{eqnarray} with all of the phase information lost \cite{ref:CDI_Robinson,ref:Au_Robinson}. Reconstructing $\hat{\psi}(\mathbf{w})$ therefore requires lengthy phase retrieval algorithms, which are typically carried out after the experiments by expert users. \subsection{Spherical harmonics shape descriptors} Our approach is to represent the unknown electron density inside a 3D object by a collection of basis vectors in the form of spherical harmonics, which describe the surface that encloses the volume of material of interest. Spherical harmonics are a generalization of the 1D Fourier series for representing functions defined on the unit sphere. For any $D>0$, the Hilbert space of real square-integrable functions over the interval $x \in [0,D]$ is defined as \begin{equation} L^2[0,D] = \left \{ f(x) \ : \ \int_{0}^{D} \left |f(x) \right|^2dx < \infty \right \} \end{equation} with the inner product of any $f,g \in L^2[0,D]$ defined as \begin{equation} \left < f(x), g(x) \right > = \int_{0}^{D}f(x)g(x)dx. \label{inner} \end{equation} The distance between functions in $L^2[0,D]$ is defined by the metric \begin{equation} \left \| f - g \right \|_2^2 = \left < f-g,f-g \right > = \int_{0}^{D}\left | f(x)-g(x)\right |^2dx. \end{equation} It is well known from Fourier analysis that any function $f(x) \in L^2[0,D]$ can be approximated arbitrarily closely by a linear combination of the basis functions \begin{equation} \varphi_{c,n}(x) = \cos\left ( \frac{2\pi n x}{D} \right ), \quad \varphi_{s,n}(x) = \sin\left ( \frac{2\pi n x}{D} \right ), \quad n \in \mathbb{N}. \end{equation} If a sequence of functions $f_N$ is defined as \begin{eqnarray} f_N(x) &=& c_0 + \sum_{n=1}^{N}\left [ c_n\varphi_{c,n}(x) + s_n\varphi_{s,n}(x) \right ], \\ c_0 &=& \frac{1}{D}\left < f(x),1 \right > = \frac{1}{D}\int_{0}^{D}f(x)dx, \\ \quad c_{n>0} &=& \frac{2}{D}\left < f(x),\varphi_{c,n}(x) \right > = \frac{2}{D}\int_{0}^{D}f(x)\varphi_{c,n}(x)dx, \\ s_{n} &=& \frac{2}{D}\left < f(x),\varphi_{s,n}(x) \right > = \frac{2}{D}\int_{0}^{D}f(x)\varphi_{s,n}(x)dx, \end{eqnarray} then \begin{equation} \lim_{N\rightarrow \infty} \left \| f - f_N \right \|_2 = 0.
\end{equation} \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure2.pdf} \caption{\label{fig:3D_CNN} Overview of the 3D CNN, directly using the intensity of the Fourier transform as input, with a final output of dimension 28 containing the coefficients of the even spherical harmonics $Y_{l}^{m}$ for $l \leq 6$: $\mathbf{y} = \left ( y_1, \dots , y_{28} \right ) = \left ( a_{00}, a_{-22}, \dots, a_{22}, \dots, a_{66} \right )$.} \end{figure*} For any function $s(\theta,\phi)$ defined on the surface of the sphere, where $\theta \in [0,\pi]$ and $\phi \in [0,2\pi]$ are the spherical coordinates, the function can be approximated arbitrarily accurately with a representation of the form \begin{equation} s_N(\theta,\phi) = \sum_{l=0}^{N}\sum_{m=-l}^{l}a_{ml}Y^m_l(\theta,\phi), \label{rhat} \end{equation} where the coefficients are found by the inner product \begin{eqnarray} a_{ml} &=& \left < s(\theta,\phi), Y_{l}^{m}(\theta,\phi) \right> \nonumber \\ &=& \int_{0}^{2\pi} \int_{0}^{\pi} s(\theta,\phi){Y_{l}^{m}}^{*}(\theta,\phi) \sin(\theta)\, d\theta\, d\phi. \end{eqnarray} By approximating in terms of the basis of spherical harmonics, we assume that we can find a star-convex approximation of a surface $s(\theta,\phi)$. \section{\label{sec:ES}Adaptive Machine Learning for Phase Retrieval} To determine the unknown electron density $\rho(\mathbf{r},\theta,\phi)$, we make the assumption that the electron density is non-zero only within some compact set and that the density is uniform within some bounding surface $\partial \rho(\mathbf{r},\theta,\phi) = s(\theta,\phi)$ of the form \begin{equation} \rho(\mathbf{r},\theta,\phi) = \begin{cases} d, & |\mathbf{r}| \leq s(\theta,\phi) \\ 0, & |\mathbf{r}| > s(\theta,\phi) \end{cases}. \label{case} \end{equation} Note that in this proof-of-concept work, we are considering solid objects of uniform density $d$ without internal structures. Our 3D reconstruction approach is to find a set of coefficients $\hat{a}_{ml}$ up to order $l=N$, $\mathbf{y}=\left (\hat{a}_{00},\dots,\hat{a}_{ml},\dots,\hat{a}_{NN} \right )$, which define a surface that approximates $s(\theta,\phi)$ by constructing \begin{equation} \hat{s}(\theta,\phi) = \sum_{l=0}^{N}\sum_{m=-l}^{l}\hat{a}_{ml}Y^m_l(\theta,\phi), \label{hatshat} \end{equation} which in turn defines an electron density \begin{equation} \hat{\rho}(\mathbf{r},\theta,\phi) = \begin{cases} d, & |\mathbf{r}| \leq \hat{s}(\theta,\phi) \\ 0, & |\mathbf{r}| > \hat{s}(\theta,\phi) \end{cases}, \label{case_2} \end{equation} that approximates $\rho(\mathbf{r},\theta,\phi)$. In order to find the appropriate spherical harmonics, we calculate the amplitude of the 3D Fourier transform $\left | \mathcal{F}\left (\hat{\rho}(\mathbf{r},\theta,\phi) \right ) \right |$, which represents the amplitude of a complex scalar diffracted wavefield $\left | \hat{\psi}(\mathbf{w}) \right |$, and then compare it to the ground-truth synthetic 3D diffraction pattern. \subsection{3D Convolutional Neural Network} Our approach combines a 3D convolutional neural network with model-independent adaptive feedback. Convolutional neural networks are very powerful tools that can learn relationships between parameters in complex systems; in our case the 3D CNN directly utilizes spatial information to learn 3D features from the 3D amplitude of the Fourier transform. The architecture of the 3D CNN developed for our problem is shown in Fig.~\ref{fig:3D_CNN}.
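To make this construction concrete, the following sketch (illustrative Python; the real-harmonic helper built from scipy's complex $Y_l^m$, the grid size, and the coefficient values are our own choices, not taken from the original implementation) voxelizes a density of the form of Eq.~(\ref{case}) and computes the Fourier amplitude that serves as the network input:
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def real_Ylm(m, l, theta, phi):
    # Real spherical harmonics assembled from scipy's complex Y_l^m.
    # theta is the polar angle in [0, pi], phi the azimuth in [0, 2 pi];
    # scipy's sph_harm takes (m, l, azimuth, polar).
    if m > 0:
        return np.sqrt(2.0) * (-1.0) ** m * sph_harm(m, l, phi, theta).real
    if m < 0:
        return np.sqrt(2.0) * (-1.0) ** m * sph_harm(-m, l, phi, theta).imag
    return sph_harm(0, l, phi, theta).real

def density_from_coeffs(a, n=64, d=1.0):
    # Voxelize rho = d inside the surface s(theta, phi) = sum a_lm Y_lm.
    ax = np.linspace(-2.0, 2.0, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    s = sum(c * real_Ylm(m, l, theta, phi) for (l, m), c in a.items())
    return np.where(r <= s, d, 0.0)

# Example: a slightly perturbed sphere and the corresponding CNN input
a = {(0, 0): 1.5, (2, 0): 0.15, (3, 1): 0.10}
rho = density_from_coeffs(a)
cnn_input = np.abs(np.fft.fftshift(np.fft.fftn(rho)))  # phase discarded
\end{verbatim}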
We point out that in mapping 3D Fourier transform intensities to spherical harmonic coefficients, a CNN is only able to predict the even $l$-valued harmonics $Y_{0}^{m},Y_{2}^{m},Y_{4}^{m},\dots$ for the following reason. \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure3.png} \caption{\label{fig:3D_CNN_xy} Test vs. prediction values shown for the even-valued $a_{lm}$ coefficients for 200 test structures along with the standard deviation of the error for each coefficient.} \end{figure*} Consider two volumes described by surfaces which are perturbations of a sphere, of the form \begin{equation} s_{\pm}(\theta,\phi) = Y_{0}^{0}(\theta,\phi) \pm \epsilon Y_{l}^{m}(\theta,\phi), \label{s_odd} \end{equation} where $l$ is odd. The odd $l$-valued harmonics are themselves odd functions because all real spherical harmonics satisfy: \begin{eqnarray} (-1)^lY_{l}^{m}(\theta,\phi) &=& Y_{l}^{m}(\pi - \theta,\pi + \phi) \nonumber \\ &\Longrightarrow& -Y_{l_{\mathrm{odd}}}^{m}(\theta,\phi) = Y_{l_{\mathrm{odd}}}^{m}(\pi - \theta,\pi + \phi). \nonumber \end{eqnarray} Therefore the two surfaces in (\ref{s_odd}) can be rewritten as \begin{eqnarray} s_{+}(\theta,\phi) &=& Y_{0}^{0}(\theta,\phi) + \epsilon Y_{l}^{m}(\theta,\phi), \nonumber \\ s_{-}(\theta,\phi) &=& Y_{0}^{0}(\theta,\phi) + \epsilon Y_{l}^{m}(\pi - \theta,\pi + \phi), \label{s_odd2} \end{eqnarray} which are related by a point reflection, and so the intensities of the Fourier transforms of their volumes $\rho_{\pm}$ are indistinguishable because of the lost phase information. When teaching a neural network to map diffraction patterns to coefficients, we end up giving it inputs generated from two different surfaces and volumes, with their corresponding spherical harmonic coefficients as the correct outputs: \begin{eqnarray} s_{+} &\Longrightarrow& \rho_+ \Longrightarrow \left | \mathcal{F}\left (\rho_+ \right ) \right | = \left | \mathcal{F}\left (\rho_\pm \right ) \right | \Longrightarrow \mathrm{CNN} \Longrightarrow \{1,\epsilon\}, \nonumber \\ s_{-} &\Longrightarrow& \rho_- \Longrightarrow \left | \mathcal{F}\left (\rho_- \right ) \right | = \left | \mathcal{F}\left (\rho_\pm \right ) \right | \Longrightarrow \mathrm{CNN} \Longrightarrow \{1,-\epsilon\}. \nonumber \end{eqnarray} Because the Fourier intensities in this case are exactly the same, after training over thousands of random data sets the network receives contradictory targets and at best can predict only the average value of $0$ for all of the odd spherical harmonic coefficients. This problem can be confirmed numerically in three ways. 1) If only positive values of odd harmonics are used to generate volumes, then the CNN learns how to map diffraction patterns to both even and odd harmonics, but this limits its applicability because realistic objects have shapes that are composed of both odd and even harmonic components. 2) If the network is tasked with identifying only the magnitudes of the odd harmonics, it is able to learn the relationship, but the resulting structure predictions must then iterate through all of the possible $\pm$ combinations to find the one which best matches the given Fourier transform intensity; even then, the created object's orientation will not necessarily match that of the target. 3)
Finally, if the CNN is given the actual 3D electron densities or the 3D Fourier transforms (not just intensities), which contain the phase information, as inputs, then it learns to map all spherical harmonics correctly, but this is not helpful for our problem, where the goal is to make predictions solely based on diffracted intensities. This limitation of a neural network approach was also documented in \cite{ref:realtimeCDI}, where it was found that the neural network would sometimes predict objects that "are twin images of each other, and that they can be obtained from each other through a centrosymmetric inversion and complex conjugate operation. Both images are equivalent solutions to the input diffraction pattern." Because of the limitations described above, the CNN was trained to map 3D diffracted intensities to only the even-valued spherical harmonic coefficients that describe the boundaries of the volumes whose Fourier transforms generated those intensities. Data for training the 3D CNN was generated by sampling coefficients from uniform distributions with ranges: \begin{equation} \hat{a}_{ml} \in \left[-c_{ml}, c_{ml} \right ], \qquad c_{ml} = \frac{0.25}{1+l+|m|}, \nonumber \end{equation} and generating training volumes based on surfaces of the form \begin{equation} \hat{s}(\theta,\phi) = \frac{3}{2}Y_{0}^{0}(\theta,\phi) + \sum_{l=0}^{N=6}\sum_{m=-l}^{l}\hat{a}_{ml}Y^m_l(\theta,\phi), \end{equation} where the large $Y_0^0(\theta,\phi)$ value ensured that we are working with a well-defined surface perturbed by higher order components, similar to naturally occurring complex grain shapes. We generated 500,000 training sets of 49 coefficients, for $l=0,\dots,N=6$, each of which was used to generate a surface and a volume bound by that surface to perform a 3D Fourier transform. The input to the CNN was the intensity of the 3D Fourier transform and the output of the CNN was a 28 dimensional vector which was compared to the 28 even spherical harmonic coefficients $Y_{l}^{m}$ for $l \leq 6$: $\mathbf{y} = \left ( y_1, \dots , y_{28} \right ) = \left ( \hat{a}_{00}, \hat{a}_{-22}, \dots, \hat{a}_{22}, \dots, \hat{a}_{66} \right )$ via the cost function: \begin{equation} C_{\mathrm{CNN}}(\mathbf{y}) = \sum_{j=1}^{28} \left | y_j - a_{ml} \right | = \sum_{l_{\mathrm{even}}=0}^{6}\sum_{m=-l}^l \left |\hat{a}_{ml} - a_{ml} \right |. \end{equation} CNN performance is illustrated in Figures \ref{fig:3D_CNN_xy} and \ref{fig:3D_CNN_end}, where the predictive accuracy for 200 unseen test sets is shown along with the lowest and highest prediction accuracy shapes. \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure4.png} \caption{\label{fig:3D_CNN_end} Detailed view of the best and worst performers out of 200 test structures that had not been seen during training of the CNN.} \end{figure*} In this setup the CNN's predictions are accurate, but only the even spherical harmonic coefficients can be predicted. Furthermore, for use in experiments, the accuracy of a learned model-based approach such as a CNN may suffer when the experimental setup changes and may require very lengthy experiment-specific retraining. In order to recover the odd spherical harmonics as well, and to make these results more robust to a wide range of experimental conditions, the next step of this approach is to use a model-independent algorithm that adjusts all spherical harmonic coefficients directly based on matching individual 3D Fourier transforms, as sketched below.
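Before turning to the model-independent step, we note that the training-data sampling described above can be condensed into a few lines (illustrative Python; it reuses the hypothetical \texttt{density\_from\_coeffs} helper from the earlier sketch and is not the original training pipeline):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def sample_coeffs(N=6):
    # Draw a_lm uniformly from [-c_lm, c_lm], c_lm = 0.25/(1 + l + |m|),
    # on top of the fixed (3/2) Y_0^0 base term.
    a = {}
    for l in range(N + 1):
        for m in range(-l, l + 1):
            c = 0.25 / (1 + l + abs(m))
            a[(l, m)] = rng.uniform(-c, c)
    a[(0, 0)] += 1.5
    return a

def cnn_cost(y_pred, a_true):
    # L1 cost over the 28 even-l coefficients (l = 0, 2, 4, 6)
    even = [(l, m) for l in (0, 2, 4, 6) for m in range(-l, l + 1)]
    return sum(abs(y_pred[k] - a_true[k]) for k in even)

a = sample_coeffs()                  # one training sample (49 coefficients)
# rho = density_from_coeffs(a)       # voxelize as in the earlier sketch
# x   = np.abs(np.fft.fftn(rho))     # CNN input; targets: the even-l a_lm
\end{verbatim}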
The model-independent approach is also capable of handling very large numbers of coefficients, as shown below when tuning up to 225 spherical harmonics simultaneously. \subsection{Model-independent tuning} For the adaptive part of this work, we utilize a model-independent extremum seeking (ES) algorithm which was originally developed for control and optimization of uncertain and time-varying systems by simultaneously tuning large numbers of coupled parameters based only on noisy measurements \cite{ref:ES_Bounded}. This bounded form of ES has been analytically studied with convergence proofs for general non-differentiable dithers \cite{ref:ES_nonC2}, has been proven to converge to optimal controllers for unknown systems \cite{ref:ES_opt}, and has been applied to automatically control charged particle beams in particle accelerators \cite{ref:ESML_AS}. The ES method is applicable to an $n$-dimensional dynamic system of the form \begin{eqnarray} \frac{d\mathbf{y}}{dt} &=& f(\mathbf{y},\mathbf{p},t), \label{dynamic} \\ \hat{C} &=& C(\mathbf{y},t) + n(t), \label{yofx} \end{eqnarray} where $\mathbf{y}=(y_1,\dots,y_n)$ are physical quantities of interest, such as diffraction patterns of electron densities. The $\mathbf{p}=(p_1,\dots,p_m)$ are controlled parameters, such as the spherical harmonics that define the surface of a volume, and $t$ is time. The function $f$ may be an unknown function governing the system's dynamics. $\hat{C}$ is a measurement of an analytically unknown function $C(\mathbf{y},t)$ that is noise-corrupted by an unknown function of time, $n(t)$, and depends both on the parameter values $\mathbf{y}$ and on time due to a time-varying system environment. In our approach, we compare the intensities of the measured and generated diffraction wavefields and quantify the difference with the numerical cost function whose minimization is our goal: \begin{equation} C(\mathbf{y}) = \frac{1}{\mu(V_\mathbf{w})}\iiint_{V_\mathbf{w}} \bigl\lvert \left | \psi(\mathbf{w}) \right | - \left | \hat{\psi}(\mathbf{w}) \right | \bigr\rvert dV_\mathbf{w}, \label{cost} \end{equation} where integration is performed over a volume in reciprocal space $V_\mathbf{w}$ of measure $\mu(V_\mathbf{w}) = (w_{\mathrm{max}} - w_{\mathrm{min}})^3$. In experimental applications of such a method, uncertainty would come from the unknown electron densities that we are trying to find and from uncertainties (such as misalignment of components and drifts in X-ray coherence volume, wavelength, and flux) in the experimental setup. The parameters that we tuned were the $Y_l^m$ coefficients \begin{equation} \mathbf{y}= \left (y_1,\dots,y_j,\dots,y_{(1+N)^2} \right ) = \left (\hat{a}_{00},\dots,\hat{a}_{ml},\dots,\hat{a}_{NN} \right ), \label{y_aml} \end{equation} which define the boundary surface of an unknown volume as in Eq.~(\ref{rhat}), and the function that we are minimizing is $C(\mathbf{y})$ as defined in (\ref{cost}). The ES algorithm perturbs parameters according to the dynamics \begin{equation} \frac{dy_j}{dt} = \sqrt{2\alpha\omega_j}\cos\left (\omega_j t + k \hat{C}(\mathbf{y},t) \right ), \label{ES} \end{equation} where $\omega_j = \omega r_j$ and $r_i \neq r_j$ for $i\neq j$. In (\ref{ES}), $\alpha$ is a dithering amplitude which can be increased to escape local minima. Once the dynamics have settled near an equilibrium point of (\ref{ES}), which may be a local minimum of $C$, each parameter will continue to oscillate about its local optimal value with a magnitude of $\sqrt{2\alpha/\omega_j}$.
The term $k>0$ is a feedback gain. For $\omega \gg 1$ the dynamics (\ref{ES}) are on average approximated by \begin{equation} \frac{d\mathbf{y}}{dt} = -k\alpha\nabla_{\mathbf{y}}C(\mathbf{y},t), \label{ES_ave} \end{equation} which tracks the time-varying minimum of the unknown function $C(\mathbf{y},t)$ with respect to $\mathbf{y}(t)$ while using only its noise-corrupted measurement $\hat{C}$ as input. The reason behind convergence is that the evolution of the coupled parameters $y_j$ is decoupled and made orthogonal relative to the inner product in the $L^2[0,t]$ Hilbert space as defined in (\ref{inner}), \begin{equation} \lim_{\omega_i,\omega_j \rightarrow \infty}\left < \cos(\omega_i t),\cos(\omega_j t) \right > = 0, \quad \omega_i \neq \omega_j. \end{equation} Details and analytical proofs are available in \cite{ref:ES_Bounded,ref:ES_opt,ref:ES_nonC2}. \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure5.png} \caption{\label{fig:ESspeed} The top images of the first four columns show the convergence of the cost function $C$ as defined in (\ref{cost}) together with convergence of the quantities defined in (\ref{Cp1})-(\ref{Cp3}). The bottom images of the first four columns show histograms of converged values after the final iteration. The top image of the last column shows the errors of the converged $\hat{a}_{ml}$ coefficients with the green band highlighting the even coefficients relative to which there is no ambiguity in the intensity of the Fourier transform. The bottom image of the last column shows the $a_{ml}$ errors together with mean and standard deviation as well as the bounds of the random distributions from which they were generated (blue/dashed).} \end{figure*} For iterative optimization as done in this work, we replace the continuous-time dynamics (\ref{ES}) with their discrete-time approximation and make iterative updates according to \begin{equation} y_j(n+1) = y_j(n) + \Delta_t \sqrt{2\alpha\omega_j}\cos \left (\omega_j \Delta_t n + k \hat{C}(n) \right ), \label{ES_it} \end{equation} which is a finite difference approximation of (\ref{ES}) for $\Delta_t \ll 1$. In order to test the convergence properties of the ES algorithm, we simultaneously measured the following three quantities during convergence for 100 random volumes: \begin{eqnarray} C_{\mathcal{F}(\rho)} &=& \frac{100}{\mu(V_\mathbf{w})} \times \iiint_{V_\mathbf{w}} \bigl\lvert \left | \psi(\mathbf{w}) \right | - \left | \hat{\psi}(\mathbf{w}) \right | \bigr\rvert dV_\mathbf{w}, \label{Cp1} \\ C_{\rho} &=& \frac{100}{\mu(\rho)} \times\iiint_{V} \left |\hat{\rho}(\mathbf{r},\theta,\phi) - \rho(\mathbf{r},\theta,\phi) \right |dV , \label{Cp2} \\ C_{\Delta \rho} &=& 100 \times \left |\mu(\hat{\rho}) - \mu(\rho) \right |, \label{Cp3} \\ \mu(V_\mathbf{w}) &=& \iiint_{V_\mathbf{w}}\left | \psi(\mathbf{w}) \right |dV_\mathbf{w}, \quad V_\mathbf{w} = \left [ w_{\mathrm{min}},w_{\mathrm{max}}\right ]^3, \label{spec_measure} \\ \mu(\rho) &=& \iiint_{V}\rho(\mathbf{r},\theta,\phi)dV. \label{vol_measure} \end{eqnarray} The quantity $C_{\mathcal{F}(\rho)}$ is a measure of the percent difference between the intensity of the target and reconstructed Fourier transforms. Convergence would mean we have matched the intensity of the Fourier transforms. However, it does not guarantee the correct shape due to missing phase information, and the same 3D object could be rotated or reflected.
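The discrete update of Eq.~(\ref{ES_it}) can be condensed into a short routine (an illustrative sketch with arbitrary gains, frequencies, and a toy quadratic cost, not the tuned values used in this work):
\begin{verbatim}
import numpy as np

def es_minimize(cost, y0, n_iter=20000, k=2.0, alpha=1e-3, dt=0.1):
    # Discrete-time extremum seeking, Eq. (ES_it):
    # y_j(n+1) = y_j(n) + dt sqrt(2 alpha w_j) cos(w_j dt n + k C(n)).
    # Distinct dithering frequencies w_j decouple the parameters.
    y = np.asarray(y0, dtype=float).copy()
    w = 10.0 * (1.0 + np.arange(y.size) / (2.0 * y.size))  # w_i != w_j
    for n in range(n_iter):
        C = cost(y)   # only a (possibly noisy) scalar measurement is used
        y += dt * np.sqrt(2.0 * alpha * w) * np.cos(w * dt * n + k * C)
    return y

# Toy usage: recover a hidden parameter vector from the scalar cost alone
target = np.array([0.3, -0.7, 1.1])
sol = es_minimize(lambda y: np.sum((y - target) ** 2), np.zeros(3))
print(np.round(sol, 2))  # oscillates near `target` within ~sqrt(2 alpha/w_j)
\end{verbatim}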
The quantity $C_{\rho}$ is a measure of the mismatch between volumes, which is non-zero when the objects have the same shape but different orientations. Finally, the quantity $C_{\Delta \rho}$ is a measure of shape convergence which subtracts the total volumes occupied by the two shapes and therefore will converge to zero when the two shapes are the same even if they have different orientations. We created 100 random 3D shapes, generated their 3D Fourier transforms, and fed the intensities of those transforms into the 3D CNN. The predictions of the CNN were then used as the starting point for the ES algorithm. Results of ES convergence for 100 random 3D shapes are shown in Figures \ref{fig:ESspeed} and \ref{fig:ESconvergence}. Looking at the top images in the second and third columns of Figure \ref{fig:ESspeed}, it is clear that the CNN-based objects had Fourier transform intensity errors, $C_{\mathcal{F}(\rho)}$, of 40$\%$ relative to their full spectrum measures as defined in (\ref{spec_measure}) and volume errors, $C_{\Delta\rho}$, of approximately 3$\%$. The bottom images of the second and third columns show that by the end of convergence the average intensity error was 8.4$\%$ and the volumetric error was 0.19$\%$. In Figure \ref{fig:ESspeed}, it is evident that on average all of the quantities $C$, $C_{\mathcal{F}(\rho)}$, $C_{\Delta \rho}$, and $C_{\rho}$ converge towards zero; however, $C_{\rho}$ has several large outliers that never converge, which implies that the densities being created have the correct shape but the wrong orientation. The last column of Figure \ref{fig:ESspeed} shows the errors between predicted $\hat{a}_{ml}$ and correct $a_{ml}$ values. The green background in Figure \ref{fig:ESspeed} highlights the even-valued coefficients, which we expect to match exactly, while the odd components are expected to sometimes not converge due to the ambiguity introduced by the lack of phase information in the Fourier transform's intensity, as discussed above. Overall, the results of Figure \ref{fig:ESspeed} confirm that the ES approach is very robust and is able to find the correct object shape, with the possibility of an incorrect orientation in space, as expected. Figure \ref{fig:ESconvergence} shows three examples of exact agreement between 3D test objects and their ES-based reconstructions. \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure6.png} \caption{\label{fig:ESconvergence} Convergence of the ES algorithm for 3 different structures A, B, and C. In this demonstration the algorithm always started with only $Y_{00} \neq 0$, which created an initial spherical shape as shown on the left. The reconstructed volumes perfectly matched the targets in our $50\times50\times50$ volume representations, as seen by the error $r(\theta,\phi)$ showing the initial and final differences between the volume bounding surfaces mapped onto the sphere. The last column shows the convergence of all parameter errors as the algorithm is able to find the correct values for all 36 $Y_{lm}$ settings in each case.} \end{figure*} \subsection{\label{sec:ES_CNN}Adaptive ML for experimental data} In order to further demonstrate the robustness of the adaptive ML approach, we applied it to an experimentally measured 3D crystal volume that was obtained using high-energy diffraction microscopy (HEDM) \cite{ref:HEDM_4}.
HEDM is used for non-destructive measurements of spatially resolved orientation ($\sim$ 1.5 $\mu$m and 0.01$^\circ$), grain-resolved orientation, and the elastic strain tensor ($\sim$10$^{-3}$) from representative volume elements with hundreds of bulk grains in the measured microstructure (mm$^3$) \cite{ref:HEDM_1}. HEDM measurements at multiple states of a sample's evolution can be used as inputs to inform and validate crystal plasticity models \cite{ref:HEDM_2, ref:HEDM_3}. For a broad overview of HEDM and its many applications the reader is referred to \cite{ref:HEDM_1} and the multiple references within. To test the robustness of our adaptive ML approach on a structure that had been seen by neither the CNN nor the ES algorithm during their tuning and design, we picked out a single 3D grain from a polycrystalline copper sample which was measured with the HEDM technique at the Advanced Photon Source (APS) \cite{ref:HEDM_4}. The intensity of the 3D Fourier transform of this volume was fed into the CNN, which provided an estimate of the first 28 even $a_{ml}$ coefficients. These were then fed as initial guesses into the ES adaptive feedback algorithm, which had the freedom to tune all 225 coefficients of the $l \in \{ 0,\dots,14 \}$ $Y^m_l(\theta,\phi)$ spherical harmonics in order to match the generated and measured diffraction patterns. The 3D shape of the reconstructed particle and 2D slices of the amplitude and phase after convergence are shown in Figures \ref{fig:g4s1_3D} and \ref{fig:g4s1_FFT}. The HEDM grain is relatively large, with $\sim$60 $\mu$m diameter, and is therefore too large to be imaged with existing CDI techniques due to photon energy and coherence length limitations of existing light sources. Nevertheless, for testing the proposed method, the morphology of the HEDM crystal was interesting in its complexity and was similar to what has been measured by Bragg CDI techniques as applied to quantum dot nanoparticles \cite{ref:CDI_review}. Furthermore, advanced light sources such as the planned Linac Coherent Light Source II (LCLS-II) free electron laser (FEL) and the APS Upgrade (APSU) are expected to have increased transverse and longitudinal coherence lengths with techniques such as self-seeding \citep{ref:LCLS,ref:APSU}, making it possible to image crystals larger than 1~$\mu$m in diameter using high-energy CDI combined with HEDM \citep{ref:BCDI:Sid1}. \begin{figure*} \includegraphics[width=0.9\textwidth]{Figure7.pdf} \caption{\label{fig:g4s1_3D} Result of using the 3D CNN output as the initial guess for the first 49 $a_{lm}$ coefficients $(a_{00},a_{-11},a_{01},a_{11},\dots,a_{66})$, followed by ES fine tuning of all 225 coefficients $(a_{00},\dots,a_{1414})$. The top row (A) shows the first measured state of the HEDM structure from various views. The second row (B) shows the CNN-ES convergence results. The third row (C) shows the same as (B) with shading for easier 3D visualization.} \end{figure*} \begin{figure*} \includegraphics[width=1.0\textwidth]{Figure8.pdf} \caption{\label{fig:g4s1_FFT} Orthogonal slices through the 3D amplitude and 3D phase of the FT are shown. The top left row (A) shows the amplitudes of the HEDM measurement. The middle left row (B) shows the amplitudes of the CNN-ES reconstruction. The bottom left row (C) shows the difference between (A) and (B); note the reduced color scale range.
The rows (D), (E), and (F) show the same for the FT phase.} \end{figure*} \section{\label{sec:conclusions}Conclusions} In this proof-of-concept work, we demonstrate reconstructions of arbitrary 3D shapes, while assuming no contribution of internal lattice distortions to the diffraction signal. One immediate limitation of this method is that by parameterizing surfaces as single-valued functions over the 2D domain $(\theta,\phi) \in [0,\pi]\times[0,2\pi]$, we are limiting ourselves to producing only star-convex shapes. Star-convex shapes are those in which a line can be drawn from the center point to any point of the outer edge without intersecting the boundary elsewhere; they therefore exclude more complex surfaces which are not simply connected, such as a donut-like shape with holes. Generalization of this approach to a larger class of shapes will be the subject of future work, utilizing a surface parameterization which decomposes a 3D particle surface onto three orthogonal directions \cite{ref:Sph_1,ref:Sph_2,ref:Sph_3}. Furthermore, the method presented here can be readily extended to reconstruct additional phases for crystals with internal structures due to inherent defects and dislocations by several methods, including the use of generative convolutional neural networks or extending the adaptive model-independent process to include more degrees of freedom. Although the CNN model is trained only on synthetic diffraction data, the adaptive framework will readily account for noise in the experimental data for robust reconstruction of 3D crystals with internal structures, which is a topic of future work. \begin{acknowledgments} This work was supported by the US Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). We acknowledge the support provided by the Institute of Materials Science Rapid Response project RR2020-R\&D-1. \end{acknowledgments} \section{Data Availability} The data that support the findings of this study are available from the corresponding authors upon reasonable request. \nocite{*}
\section{Introduction} \label{sec:intro} \begin{figure*} \centering \includegraphics[width=175mm]{fig1-heat-white-cont.pdf} \caption{Top panel: velocity integrated intensity maps in the range of $-$25 to 30~km~s$^{-1}$, from left to right, of the \hbox{{\rm [C {\scriptsize II}]}}\ $^2$P$_{3/2}$ $\to$ $^2$P$_{1/2}$ fine-structure line and the $J$ = 3 $\to$ 2 transitions of $^{12}$CO and $^{13}$CO. The maps are at their original resolution, with the beam size and receiver array geometry shown in the bottom left of each panel. Bottom panel: Emission images, from left to right, of the 8~$\mu$m GLIMPSE, 70~$\mu$m PACS, 870~$\mu$m ATLASGAL and ACIS data toward RCW~49. The center of the Wd2 cluster at R.A.($\alpha$, J2000) = 10$^{h}$24$^{m}$11$^{s}$.57 and Dec.\,($\delta$, J2000) = $-$57$\degree$46$\arcmin$42.5$\arcsec$ is marked with a white asterisk. The O5V star and the WR20b star are marked with yellow and pink asterisks, respectively. For the \hbox{{\rm [C {\scriptsize II}]}}\ emission, the map is smoothed to a pixel size of 15$\arcsec$ and the contour levels are 20\% to 100\% in steps of 20\% of the corresponding peak emission. For the presented maps, the peak emission values for \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO are 650~K~km~s$^{-1}$, 300~K~km~s$^{-1}$ and 80~K~km~s$^{-1}$, respectively. For $^{12}$CO emission, the contour levels are 10\% to 100\% in steps of 20\% of the corresponding peak emission. For $^{13}$CO and 870 $\mu$m emission, the contour levels are 15\% to 100\% in steps of 20\% of the corresponding peak emission. For 8~$\mu$m emission, the contour levels are 15\% to 100\% in steps of 30\% of the corresponding peak emission. For 70~$\mu$m, the contour levels are 20\% to 100\% in steps of 30\% of the corresponding peak emission. For 0.5--7~keV emission, the contour levels are 10\% to 100\% in steps of 30\% of the corresponding peak emission. \label{vel-int-maps}} \end{figure*} One of the most important problems in modern astrophysics is to understand the role of massive stars in driving various physical and chemical processes in the interstellar medium (ISM). Massive stars inject an immense amount of mechanical and radiative energy into their immediate vicinity. Stellar winds are responsible for the mechanical energy input, which can push the gas into shell-like structures (as in the Rosette Nebula, \citealt{2018MNRAS.475.3598W} and in the Orion Nebula, \citealt{2019Natur.565..618P}). The radiative energy input comes from the heating of gas through stellar extreme-ultraviolet (EUV, $h\nu$ $>$ 13.6~eV) and far-UV (FUV, 6 $<$ $h\nu$ $<$ 13.6~eV) photons that can ionize atoms, dissociate molecules and heat the gas, giving rise to \hbox{{\rm H {\scriptsize II}}}\ regions and photodissociation regions (PDRs). These stellar feedback mechanisms power the expansion of \hbox{{\rm H {\scriptsize II}}}\ regions and shock fronts, causing morphological features that appear as shells or bubbles in the ISM. Observational studies at mid-infrared (IR) wavelengths led \citet{2006ApJ...649..759C} to report that these features are ubiquitous in our Galaxy and the authors coined the term ``bubbles'' by stating ``We postulate that the rings are projections of three-dimensional shells and henceforth refer to them as bubbles''.
Processes that cause these shells can disrupt molecular clouds, thereby halting star formation, or can compress the gas at the edges of the \hbox{{\rm H {\scriptsize II}}}\ regions, triggering star formation (\citealt{1977ApJ...214..725E}, \citealt{1997ApJ...476..166W} and \citealt{2010A&A...518L.101Z}). Thus, shells are ideal laboratories to study positive and negative feedback generated by massive star formation. Furthermore, in order to trace these shells, we can use the 1.9~THz fine-structure line of ionized carbon, C$^+$ (\hbox{{\rm [C {\scriptsize II}]}}\,), which is one of the major coolants of the ISM and also among the brightest lines in PDRs (\citealt{1972ARA&A..10..375D}, \citealt{1991ApJ...373..423S}, \citealt{1994ApJ...434..587B}). As the ionization potential (11.3~eV) of carbon (C) is less than that of hydrogen (H) (13.6~eV), \hbox{{\rm [C {\scriptsize II}]}}\ traces the transition from H$^+$ to H and H$_2$ \citep{1999RvMP...71..173H}. While the \hbox{{\rm [C {\scriptsize II}]}}\ layer probes warm atomic gas from the surface of clouds at low visual extinction ($A_{\rm v}$ $\lessapprox$ 4 magnitudes), the rotational transitions of CO probe cooler molecular gas at larger $A_{\rm v}$ \citep{1999RvMP...71..173H} deeper into the cloud clumps. \\ RCW~49 is among the most luminous and massive star-forming regions of the southern Galaxy, located close to the tangent of the Carina arm at $l$ = 284.3$\degree$, $b$ = -0.3$\degree$. Earlier heliocentric distance measurements of RCW~49 varied from 2 to 8~kpc, as discussed in \citet{2004ApJS..154..322C}, \citet{Rauw2007}, and \citet{2009ApJ...696L.115F}. \citet{Drew2018Wd2}, in their discussion of the distance, note that photometric studies of the stars powering RCW~49 have more recently tended towards 4--6~kpc rather than the 2--8~kpc span that is usually quoted, although \citet{Rauw2007, 2011A&A...535A..40R} maintain that the distance must be 8~kpc. Three of the most recent works determine a distance of about 4.2~kpc \citep{VA2013, 2015AJ....150...78Z, 2018A&A...618A..93C}. \citet{VA2013} and \citet{2015AJ....150...78Z} in particular conducted photometric studies with two independent sets of observations and agreed on a $\sim$4~kpc distance. This is consistent with the 4.2~kpc \textit{Gaia} parallax distance reported by \citet{2018A&A...618A..93C}, though \citet{Drew2018Wd2} point out that the uncertainties on these small parallax values are considerable. In accordance with the photometric study of \citet{VA2013} and the consistent measurement by \citet{2015AJ....150...78Z}, we adopt a distance of 4.16~kpc. RCW~49 contains a bright \hbox{{\rm H {\scriptsize II}}}\ region ionized by a compact stellar cluster, Westerlund~2 (Wd2), comprising 37 OB stars and $\sim$ 30 early-type OB star candidates around it \citep{TFT2007, Ascenso2007, 2011A&A...535A..40R, VPHAS_Wd2_2015, 2015AJ....150...78Z}. There is a binary Wolf-Rayet star (WR20a) associated with the central Wd2 cluster, suggested to be one of the most massive binaries in the Galaxy \citep{Rauw2005}. RCW~49 also hosts an O5V star and another Wolf-Rayet star (WR20b), both a few arcminutes away from the geometrical cluster center. These and a handful of other massive stars in the cluster periphery may have been ejected from Wd2 \citep{Drew2018Wd2}. Age estimates generally suggest that the cluster is not much older than 2~Myr \citep{Ascenso2007, 2015AJ....150...78Z}.
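For convenient reference in the following sections, we note the small-angle conversion at the adopted distance: an angular scale of $1\arcsec$ corresponds to $4160~{\rm pc}/206265 \simeq 0.02$~pc, i.e., $1\arcmin \simeq 1.2$~pc.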
\\ \begin{figure*} \centering \includegraphics[width=160mm]{fig-2-all-new.pdf} \caption{RGB images of \hbox{{\rm [C {\scriptsize II}]}}\ (red), GLIMPSE 8~$\mu$m (green) and ATLASGAL 870~$\mu$m (blue) emission toward RCW~49. The Wd2 cluster's center, the O5V and the WR20b stars are marked with white, yellow and pink asterisks, respectively. The ridge and the shell are marked in the left panel, while the inner dust ring (white dashed circle) and the transition boundary (white dotted line) are marked similar to \citet[Fig.~1]{2004ApJS..154..322C} in the right panel. The $^{12}$CO clouds as shown in \citet[Fig.~1~(a) and (c)]{2009ApJ...696L.115F} are also outlined. In yellow are the northern and southern blobs of the cloud within the velocity range of 1 to 9~km~s$^{-1}$ and in magenta is the cloud within the velocity range of 11 to 21~km~s$^{-1}$. These clouds are discussed further in Sects.~3.2, 3.3 and 3.4.} \label{rgb-cii-8-870} \end{figure*} In this paper, we report one of the first results of the Stratospheric Observatory For Infrared Astronomy (SOFIA, \citealt{2012ApJ...749L..17Y}) legacy program FEEDBACK\footnote{https://feedback.astro.umd.edu} \citep{2020PASP..132j4301S} performed with SOFIA and the Atacama Pathfinder Experiment (APEX\footnote{APEX, the Atacama Pathfinder Experiment is a collaboration between the Max-Planck-Institut f\"{u}r Radioastronomie, Onsala Space Observatory (OSO), and the European Southern Observatory (ESO).}, \citealt{2006A&A...454L..13G}). The FEEDBACK Legacy Program was initiated to quantify the mechanical and radiative feedback of massive stars on their environment. A wide range of sources was selected to allow a systematic survey of the effects of different feedback mechanisms as a function of star formation activity (a single O-type star, groups of O-type stars, compact clusters, mini starbursts, etc.), the morphology of the environment, and the evolutionary stage of star formation. In particular, RCW~49 was selected to study the feedback of the compact stellar cluster Wd2 and the Wolf-Rayet stars on their surrounding molecular clouds. We use the fine-structure line of \hbox{{\rm [C {\scriptsize II}]}}\ to probe the shell associated with RCW~49 and disentangle its dense gas component using the CO observations. We quantify the stellar wind feedback responsible for the evolution of the shell of RCW~49 and describe the shell's morphology. In Sect.~2, we describe the observations. The qualitative and quantitative analysis of the data is reported in Sect.~3. Both small- and large-scale effects of the stellar feedback in RCW~49 are discussed in Sect.~4 and the results of this study are summarised in Sect.~5. \section{Observations} \subsection{SOFIA Observations} The \hbox{{\rm [C {\scriptsize II}]}}\ line at 1.9~THz was observed during three flights from Christchurch, New Zealand on the 7th, 10th, and 11th of June 2019, using upGREAT\footnote{German Receiver for Astronomy at Terahertz. (up)GREAT is a development by the MPI f\"ur Radioastronomie and the KOSMA/Universit\"at zu K\"oln, in cooperation with the DLR Institut f\"ur Optische Sensorsysteme.} \citep{2018JAI.....740014R}. upGREAT consists of a 2 $\times$ 7 pixel low-frequency array (LFA) that was tuned to the \hbox{{\rm [C {\scriptsize II}]}}\ line, and a seven-pixel high-frequency array (HFA) that was tuned to the \hbox{{\rm [O {\scriptsize I}]}}\ 63 $\mu$m line. Both arrays observe in parallel, but here we only present the \hbox{{\rm [C {\scriptsize II}]}}\ data.
The half-power beam widths are 14.1$\arcsec$ (1.9~THz) and 6.3$\arcsec$ (4.7~THz), determined by the instrument and telescope optics, and confirmed by observations of planets. The final pixel size of the \hbox{{\rm [C {\scriptsize II}]}}\ map is 7.5$\arcsec$. The observation region was split into 12 individual `tiles', each covering an area of (7.26~arcmin)$^{2}$ = 52.7~arcmin$^2$. During the three flights, eight tiles were observed ($\sim$66\% of the planned area). Each tile was covered four times; the tiles were tilted 40$\degree$ against the R.A. axis (counterclockwise from North) for horizontal scans and perpendicular to that for the corresponding vertical scans. As a consequence of its hexagonal geometry, the array was rotated by 19$\degree$ against the tile scan direction to achieve equal spacing between the on-the-fly scan lines. The gaps between the pixels are approximately two beam widths (31.7$\arcsec$ for the LFA and 13.8$\arcsec$ for the HFA), which results in a projected pixel spacing of 10.4$\arcsec$ for the LFA and 4.6$\arcsec$ for the HFA after rotation (for more details, see the SOFIA Science Center's Planning observations webpage\footnote{https://www.sofia.usra.edu/science/proposing-and-observing/observers-handbook-cycle-9/6-great/62-planning-observations}). The second two coverages are then shifted by 36$\arcsec$ to achieve the best possible coverage for the \hbox{{\rm [O {\scriptsize I}]}}\ line. All observations were carried out in the array-on-the-fly mapping mode. The map center was at 10$^h$24$^m$11$^s$.57, -57$\degree$46$\arcmin$42$\arcsec$.5 (J2000), and the reference position at 10$^h$27$^m$17$^s$.42 -57$\degree$13$\arcmin$42$\arcsec$.60. For more observational and technical details, see \citet{2020PASP..132j4301S}. \\ As backend, a Fast Fourier Transform Spectrometer (FFTS) with 4~GHz instantaneous bandwidth and a frequency resolution of 0.244~MHz was used \citep{2012A&A...542L...3K}. The \hbox{{\rm [C {\scriptsize II}]}}\ data thus have a native velocity resolution of 0.04~km~s$^{-1}$. Here we use data re-binned to a resolution of 0.2~km~s$^{-1}$. Spectra are presented on a main beam brightness temperature scale $T_{\rm mb}$ with an average main beam efficiency of 0.65. The forward efficiency is $\eta_{\rm f}$ = 0.97. From the spectra, a first-order baseline was removed, and the data quality was improved by identifying and correcting systematic baseline features with a novel method for data reduction that makes use of a Principal Component Analysis (PCA) of reference spectra, as described in Appendix~\ref{sec:pca_appen}. \begin{figure*}[htp] \includegraphics[width=180mm]{cii-chan-2vn.pdf} \caption{Velocity channel maps of \hbox{{\rm [C {\scriptsize II}]}}\ emission toward RCW~49 with a channel width of 2~km~s$^{-1}$. The velocity (in km~s$^{-1}$) of each channel is shown in the top left of each panel. The Wd2 cluster's center, the O5V and the WR20b stars are marked with white, yellow and pink asterisks, respectively.} \label{cii-chan} \end{figure*} \begin{figure*}[htp] \includegraphics[width=180mm]{co-chan-2vn-white.pdf} \caption{Velocity channel maps of $^{12}$CO (3 $\to$ 2) emission toward RCW~49 with a channel width of 2~km~s$^{-1}$. The velocity (in km~s$^{-1}$) of each channel is shown in the top left of each panel.
The Wd2 cluster's center, the O5V and the WR20b stars are marked with white, yellow and pink asterisks, respectively.} \label{co-chan} \end{figure*} \begin{figure*}[htp] \includegraphics[width=180mm]{13co-chan-2vn-white.pdf} \caption{Velocity channel maps of $^{13}$CO (3 $\to$ 2) emission toward RCW~49 with a channel width of 2~km~s$^{-1}$. The velocity (in km~s$^{-1}$) of each channel is shown in the top left of each panel. The Wd2 cluster's center, the O5V and the WR20b stars are marked with white, yellow and pink asterisks, respectively.} \label{13co-chan} \end{figure*} \subsection{APEX Observations} RCW~49 was mapped on September 25--26, 2019, in good weather conditions (precipitable water vapor, pwv = 0.5 to 1~mm) in the $^{13}$CO (3 $\to$ 2) and $^{12}$CO (3 $\to$ 2) transitions using the LAsMA array on the APEX telescope \citep{2006A&A...454L..13G}. LAsMA is a 7-pixel single-polarization heterodyne array that allows simultaneous observations of the two isotopomers in the upper ($^{12}$CO) and lower ($^{13}$CO) sideband of the receiver, respectively. The array is arranged in a hexagonal configuration around a central pixel with a spacing of about two beam widths ($\theta_{\rm mb}$ = 18.2$\arcsec$ at 345.8~GHz) between the pixels. It uses a K mirror as a de-rotator. The backends are advanced FFTS \citep{2012A&A...542L...3K} with a bandwidth of 2 $\times$ 4~GHz and a native spectral resolution of 61~kHz. The mapping was done in total power on-the-fly mode, with the scanning directions against N at -40$\degree$ and -130$\degree$ for the orthogonal scans, respectively. The map center and the reference position were the same as for the SOFIA observations. The latter was verified to be free of CO emission at a level of $<$ 0.1~K. A total area of 570 arcmin$^2$ was observed, split into 4 tiles. Each tile was scanned with 6$\arcsec$ spacing (oversampled) in the scanning direction, with a spacing of 9$\arcsec$ between rows, resulting in uniformly sampled maps with high fidelity. All spectra are calibrated in $T_{\rm mb}$ (main-beam efficiency $\eta_{\rm mb}$ = 0.68 at 345.8~GHz). A linear baseline was removed, and all data were resampled into 0.2~km~s$^{-1}$ spectral bins. The final data cubes are constructed with a pixel size of 9.5$\arcsec$ (the beam after gridding is 20$\arcsec$). \begin{figure}[htp] \includegraphics[width=85mm]{av_spec_shell_ridge_nscl.pdf} \caption{Average spectra of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO toward the whole mapped region of RCW~49. To highlight the structures seen in the velocity channel maps, we mark the boundaries of the shell's expansion (in green), the northern and southern clouds (in orange) and the ridge (in pink).
The velocity integrated intensity maps of these regions are shown in Fig.~\ref{shell_nscl_ridge}.} \label{av_spectra} \end{figure} \begin{figure*}[htp] \centering \includegraphics[width=45mm]{cii_-12_0.pdf}\quad \includegraphics[width=45mm]{co_-12_0.pdf}\quad \includegraphics[width=45mm]{13co_-12_0.pdf} \includegraphics[width=45mm]{cii_2_8.pdf}\quad \includegraphics[width=45mm]{co_2_8.pdf}\quad \includegraphics[width=45mm]{13co_2_8.pdf} \includegraphics[width=45mm]{cii_16_22.pdf}\quad \includegraphics[width=45mm]{co_16_22.pdf}\quad \includegraphics[width=45mm]{13co_16_22.pdf} \caption{Left to right: Velocity integrated intensity maps of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO of the shell's expansion within -12 to 0~km~s$^{-1}$ (top row), the northern and southern clouds within 2 to 8~km~s$^{-1}$ (middle row) and the ridge within 16 to 22~km~s$^{-1}$ (bottom row). The Wd2 cluster's center, the O5V and the WR20b stars are marked with white, yellow and pink asterisks, respectively.} \label{shell_nscl_ridge} \end{figure*} \subsection{Ancillary data} We present the images of the most relevant ancillary data toward RCW~49 in the lower panel of Fig.~\ref{vel-int-maps}. We started with the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE, \citealt{2003PASP..115..953B}) 8~$\mu$m data observed with the Spitzer Space Telescope. The 8~$\mu$m emission arises from the PDR surface of dense molecular clouds where large hydrocarbon molecules, the Polycyclic Aromatic Hydrocarbons (PAHs), are excited by strong UV radiation resulting in fluorescent IR emission \citep{2008ARA&A..46..289T}. Next, we used the 70~$\mu$m data from the Herschel Space Archive (HSA), obtained within the Hi-GAL Galactic plane survey \citep{higal2010} observed with the Photodetector Array Camera and Spectrometer (PACS, \citealt{2010A&A...518L...2P}) aboard the Herschel Space Observatory \citep{2010A&A...518L...1P}. The 70~$\mu$m emission traces the warm interstellar dust exposed to FUV radiation from the stars. Further, we obtained the 870~$\mu$m dust continuum data from the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL, \citealt{2009A&A...504..415S}), performed with the Large APEX BOlometer CAmera (LABOCA) instrument of the 12~m APEX telescope. The 870~$\mu$m emission traces the cold and dense clumps shielded from FUV radiation by large amounts of dust extinction. Hence, this map provides a good probe of the earliest star formation sites. Lastly, we used the archival data from the Chandra X-ray Observatory, obtained with its primary camera, the Advanced CCD Imaging Spectrometer (ACIS) \citep{2003SPIE.4851...28G}. Diffuse X-ray (0.5--7~keV) structures surrounding massive star-forming regions have been shown to trace hot plasmas from massive star feedback (e.g. \citealt{2019ApJS..244...28T}). \section{Results} \subsection{Multi-wavelength overview of RCW~49} In order to investigate the relation between the atomic and molecular components of the gas, we show the emission toward RCW~49 in different transitions and continuum wavelengths in Fig.~\ref{vel-int-maps}. A study of the large-scale $^{12}$CO (2 $\to$ 1) distribution by \citet{2009ApJ...696L.115F} identified two molecular clouds ($-$11 to 9 and 11 to 21~km~s$^{-1}$) in RCW~49 and suggested their collision ($\sim$ 4~Myr ago) to be responsible for triggering the formation of Wd2.
The cloud within $-$11 to 9~km~s$^{-1}$ has a mass of (8.1 $\pm$ 3.7) $\times$ 10$^4$~\(\textup{M}_\odot\) and extends over a range of $\sim$ 26.1~pc in the north-south direction and $\sim$ 21.6~pc in the east-west direction. The cloud within 11 to 21~km~s$^{-1}$ has a mass of (9.1 $\pm$ 4.1) $\times$ 10$^4$~\(\textup{M}_\odot\) and extends over a range of $\sim$ 18.3~pc in the north-south direction and $\sim$ 21.6~pc in the east-west direction \citep{2009ApJ...696L.115F}. The $-$11 to 9~km~s$^{-1}$ cloud further includes two seemingly different clouds: $-$11 to 0~km~s$^{-1}$ and 1 to 9~km~s$^{-1}$ \citep[Fig.~1~(c) and (d)]{2009ApJ...696L.115F}, but they were analysed as one. This is perhaps due to the unavailability of high spatial resolution in the large-scale $^{12}$CO data. We manually outline the 1 to 9~km~s$^{-1}$ and 11 to 21~km~s$^{-1}$ clouds in the right panel of Fig.~\ref{rgb-cii-8-870}. The $-$11 to 0~km~s$^{-1}$ cloud will be discussed in detail in the next sections of this paper. \citet{2004ApJS..154..322C} studied RCW~49 at mid-IR wavelengths to investigate its dust emission morphology and identified distinct regions as a function of the angular radius with respect to Wd2. It can be seen in Fig.~\ref{vel-int-maps} that all tracers, except the X-ray emission, are devoid of any emission in the immediate surroundings of Wd2. This emission-free region is filled by hot plasma (temperature of $\sim$ 3 $\times$ 10$^6$~K and density of $\sim$ 0.7~cm$^{-3}$, see Sect.~3.5.2), as evident from its very bright X-ray emission (as seen in Fig.~\ref{vel-int-maps}). Moving away from Wd2, the 8 and 70~$\mu$m emission becomes brighter and marks the so-called ``transition boundary'' (at $\sim$ 5~pc from Wd2, shown in Fig.~\ref{rgb-cii-8-870}, right panel), a ring-like structure opened to the west \citep[Fig.~1]{2004ApJS..154..322C}. Dense cores are traced by ATLASGAL 870~$\mu$m. A dense ridge structure (indicated in Fig.~\ref{rgb-cii-8-870}, left panel), running from south to east of Wd2, is particularly prominent in the ATLASGAL emission map. This ridge is also visible in the 8~$\mu$m PAH and 70~$\mu$m dust continuum emission but not very prominent in the CO (3 $\to$ 2) emission maps. It is important to verify the spatial and spectral information given by $^{12}$CO observations with $^{13}$CO because, in general, $^{12}$CO is optically thick and can be affected by opacity effects such as self-absorption. Thus, the optically thin $^{13}$CO is used to confirm the results obtained from $^{12}$CO observations. In addition, the combination of $^{12}$CO and $^{13}$CO can be used to determine the molecular gas mass. In contrast, the dense cores to the west and north are also recognizable in CO (3 $\to$ 2) emission maps but not so prominent in 8~$\mu$m and 70~$\mu$m emission. Our $^{12}$CO (3 $\to$ 2) map follows the larger-scale $^{12}$CO (2 $\to$ 1) emission distribution as reported in \citet{2009ApJ...696L.115F}. In addition to the above-mentioned bright emission, both have a slightly fainter emission toward the west of Wd2, which is absent in the ATLASGAL emission. The \hbox{{\rm [C {\scriptsize II}]}}\ emission map shows similarities to these structures. It reveals a lack of emission immediately surrounding Wd2.
One of the most prominent structures in the \hbox{{\rm [C {\scriptsize II}]}}\ map is a bright and wide arc (labelled ``shell'' in Fig.~\ref{rgb-cii-8-870}, left panel) of emission running from the east to the south, which is quite prominent in the channel maps from $-$14 to $-$6~km~s$^{-1}$ (see Sect.~3.2). At higher velocities the \hbox{{\rm [C {\scriptsize II}]}}\ emission is increasingly dominated by the ridge southeast of Wd2 and a separate, dense structure to the north. The brighter \hbox{{\rm [C {\scriptsize II}]}}\ peaks in these morphological structures have counterparts in the CO maps. We notice that the ridge structure is much less bright in the \hbox{{\rm [C {\scriptsize II}]}}\ line than in the 8, 70, and 870~$\mu$m maps. This can also be seen in the red-green-blue (RGB) image of \hbox{{\rm [C {\scriptsize II}]}}\,, 8~$\mu$m and 870~$\mu$m emission shown in Fig.~\ref{rgb-cii-8-870}. The difference in the behavior of these structures in the various maps reveals the complex mechanical and radiative interaction of the Wd2 cluster with the surrounding molecular gas. In this paper, we will use the kinematic information in the \hbox{{\rm [C {\scriptsize II}]}}\ and CO emission maps to focus on the kinematics and energetics of the arc-like emission structure. In a future paper, we will examine the other emission components present in these data.\\ \subsection{Velocity channel maps} A spatial comparison of the \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO emission in different velocity channels can be seen in Figs.~\ref{cii-chan}, \ref{co-chan} and \ref{13co-chan}. In the \hbox{{\rm [C {\scriptsize II}]}}\ channel maps (Fig.~\ref{cii-chan}), we can see that a shell-like structure starts to develop from $\sim$ $-$24~km~s$^{-1}$ and appears to be expanding in the velocity range of $\sim$ $-$12 to 0~km~s$^{-1}$. A red-blue image depicting this expansion is shown in Appendix~\ref{sec:shell_exp_appen}, Fig.~\ref{rb_-12_2}. The eastern arc of the shell is well defined compared to the western arc, which appears to be broken. In the velocity range of 2 to 12~km~s$^{-1}$, we see \hbox{{\rm [C {\scriptsize II}]}}\ emission confined to the north and south. The ridge, discussed in Sect.~3.1, is seen in the velocity range of 16 to 22~km~s$^{-1}$. Channel maps of $^{12}$CO and $^{13}$CO (Figs.~\ref{co-chan} and \ref{13co-chan}) show similar velocity structures. The shell seen in the \hbox{{\rm [C {\scriptsize II}]}}\ channel maps is also outlined by fragmented emission from both $^{12}$CO and $^{13}$CO, starting from about $-$12 to 0~km~s$^{-1}$. Similar to the \hbox{{\rm [C {\scriptsize II}]}}\ emission, the eastern side of the shell is more apparent than its western counterpart. The northern and southern structures are spread out over a smaller velocity range (2 to 8~km~s$^{-1}$) compared to the \hbox{{\rm [C {\scriptsize II}]}}\ emission. Though the ridge is not very distinct in CO emission, it appears to be associated with the molecular cloud traced by $^{12}$CO for velocities greater than 16~km~s$^{-1}$. \subsection{Different structures in RCW~49} Figure~\ref{av_spectra} shows the average spectra of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO toward all of the mapped regions shown in Fig.~\ref{vel-int-maps}. We mark the boundaries of three main structures that we identify in the channel maps (Figs.~\ref{cii-chan}, \ref{co-chan} and \ref{13co-chan}): the shell's expansion, the northern and southern clouds and the ridge.
Figure~\ref{shell_nscl_ridge} shows these structures as seen in velocity integrated intensity maps of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO. The top row of Fig.~\ref{shell_nscl_ridge} shows the integrated intensity of the velocity range within which the expansion of the shell is clearly visible (in Fig.~\ref{cii-chan}) and we discuss this shell in detail in the further sections. The northern and southern clouds and the ridge are shown in the middle and bottom rows. These two structures are similar to the clouds reported by \citet{2009ApJ...696L.115F} and shown in Fig.~\ref{rgb-cii-8-870}. \subsection{Spectra toward different offsets} Figure~\ref{spectra} (left and middle panels) shows examples of observed spectra of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO toward different offsets along the horizontal and vertical cuts as shown in the right panel. These cuts were chosen to visualize the spectral line profiles of the observed species toward the shell, which we see in the velocity channel maps (in Figs.~\ref{cii-chan}, \ref{co-chan} and \ref{13co-chan}). There are multiple velocity components toward any position, but by closely examining the line profiles of the blue-shifted velocity component, we can follow the shell's progression. As we move along the horizontal cut (at $\Delta\delta$ = $-$100$\arcsec$) shown in the right panel of Fig.~\ref{spectra}, we see that the blue-shifted velocity component of the \hbox{{\rm [C {\scriptsize II}]}}\ line sequentially shifts its peak from $\sim$ 0~km~s$^{-1}$ (at $\Delta\alpha$ = 300$\arcsec$) to $\sim$ $-$11~km~s$^{-1}$ (at $\Delta\alpha$ = 100$\arcsec$) and back to $\sim$ $-$4~km~s$^{-1}$ (at $\Delta\alpha$ = $-$200$\arcsec$), thus tracing the shell's expansion. For the $^{12}$CO emission, the blue-shifted velocity component roughly follows the \hbox{{\rm [C {\scriptsize II}]}}\ line profiles. It starts to show up at $\Delta\alpha$ = 200$\arcsec$, sequentially shifting its peak to $\sim$ $-$14~km~s$^{-1}$ (at $\Delta\alpha$ = $-$100$\arcsec$) and back to $\sim$ $-$4~km~s$^{-1}$ (at $\Delta\alpha$ = $-$200$\arcsec$), similar to the \hbox{{\rm [C {\scriptsize II}]}}\ line. As mentioned earlier (in Sect.~3.1), $^{12}$CO is usually optically thick; thus, we need to examine the $^{13}$CO spectral line profiles to confirm the presence of the different velocity components seen in the $^{12}$CO spectra. The blue-shifted velocity component of the $^{13}$CO line is weak and not detected at our S/N. However, it can be seen that toward the lines-of-sight where the red-shifted velocity component of $^{12}$CO is relatively brighter, $^{13}$CO is detectable and follows the $^{12}$CO line profile. Thus, we expect the $^{13}$CO line to have a similar profile for its blue-shifted velocity component as well. A similar trend is observed in the spectra along the vertical cut (at $\Delta\alpha$ = 100$\arcsec$). A shift in the peak of the blue-shifted velocity component of \hbox{{\rm [C {\scriptsize II}]}}\ can be seen starting from $\sim$ 0~km~s$^{-1}$ (at $\Delta\delta$ = $-$300$\arcsec$) to $\sim$ $-$13~km~s$^{-1}$ (at $\Delta\delta$ = 200$\arcsec$) and then back to $\sim$ $-$5~km~s$^{-1}$ (at $\Delta\delta$ = 400$\arcsec$). At $\Delta\delta$ = 100$\arcsec$, we see no emission from the shell ($<$ 0 km~s$^{-1}$) as evident from both the spectrum and the shell's spatial distribution described in Sect.~3.5. The $^{12}$CO and $^{13}$CO emission also follow a similar trend as the spectra along the horizontal cut.
In addition to the spectra shown in Fig.~\ref{spectra}, we selected a few more positions, specifically along the shell, to examine the spectral line profiles of the observed species. Figure~\ref{rolf-spec} shows the spectra of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO at different offsets marked on the shell as seen in the velocity ($-$25 to 0~km~s$^{-1}$) integrated intensity map of \hbox{{\rm [C {\scriptsize II}]}}\,. Most of the spectra show a prominent blue-shifted ($<$ 0~km~s$^{-1}$) velocity component, which traces the shell. In contrast to the blue-shifted velocity component, the red-shifted velocity component ($>$ 15~km~s$^{-1}$) of the observed spectra does not follow any obvious trend. It corresponds to the ridge as discussed in Sect.~3.3 and shown in Fig.~\ref{shell_nscl_ridge}, bottom row. The velocity components of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO that lie within 2 to 12~km~s$^{-1}$ are part of the northern and southern clouds (discussed in Sect.~3.3 and shown in Fig.~\ref{shell_nscl_ridge}, middle row). The spectra shown in Fig.~\ref{rolf-spec} at $\Delta\alpha$ = 50$''$, $\Delta\delta$ = $-$300$''$ show $^{12}$CO and $^{13}$CO line profiles peaking between the two peaks of the \hbox{{\rm [C {\scriptsize II}]}}\ line, which is indicative of self-absorption at velocities $>$ 0~km~s$^{-1}$. This spectral behaviour can also be seen at an offset of $\Delta\alpha$ = 100$''$, $\Delta\delta$ = $-$250$''$. This suggests that the northern and southern clouds probably lie in front of the gas constituting the shell. \begin{figure*}[htp] \includegraphics[width=100mm]{spectra_h_v_-100_0_lines.pdf}\qquad \includegraphics[width=73mm]{cii-for-spec-h-v_-100_0.pdf} \caption{Spectra (left and middle panels) of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO toward different offsets along the horizontal and vertical lines shown on the \hbox{{\rm [C {\scriptsize II}]}}\ velocity (-25 to 30~km~s$^{-1}$) integrated intensity map in the right panel. The presented spectra are smoothed to a velocity resolution of 1~km~s$^{-1}$. The sequential shift in the peak of the blue-shifted component is also marked with vertical dashed lines in some panels. The blue-ward shift of the expanding shell is quite apparent when comparing these spectra (see Sects.~3.2, 3.3 and 3.4).} \label{spectra} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=150mm]{rolf-spec.pdf} \caption{Spectra of \hbox{{\rm [C {\scriptsize II}]}}\, (black), $^{12}$CO (red) and $^{13}$CO (blue) toward different offsets along the shell, shown around the velocity integrated (from -25 to 0~km~s$^{-1}$) intensity map of \hbox{{\rm [C {\scriptsize II}]}}\,.} \label{rolf-spec} \end{figure*} \subsection{Expanding shell of RCW~49} \label{sec:shell_description} Owing to the high spectral and spatial resolution of our \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO data, we were able to disentangle different components of gas along the same line-of-sight. The \hbox{{\rm [C {\scriptsize II}]}}\ channel maps (Fig.~\ref{cii-chan}) and the spectra displayed in Fig.~\ref{spectra} reveal a blue-ward expanding shell to the east of Wd2. The two (in red and blue) \hbox{{\rm [C {\scriptsize II}]}}\ velocity channel maps displayed in Fig.~\ref{bub-exp}~b outline this arc structure particularly well. The west side somewhat mirrors this arc-like structure but is clearly less coherent.
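The position-velocity analysis in the following paragraphs rests on a simple geometric expectation: a thin, isotropically expanding shell projects onto an ellipse in position-velocity space. As an illustration only (not part of our analysis pipeline), a minimal sketch in Python, using the shell radius, expansion velocity and systemic velocity that are derived below in this section:
\begin{verbatim}
import numpy as np

# Assumed values, matching those derived in this section:
R = 6.0        # mean shell radius (pc)
v_exp = 13.0   # expansion velocity (km/s)
v_sys = 1.0    # systemic velocity (km/s)

# Projected offsets along a cut through the shell center
x = np.linspace(-R, R, 7)   # pc

# A thin, isotropically expanding shell traces an ellipse in pv
# space: v(x) = v_sys -/+ v_exp * sqrt(1 - (x/R)^2) for the near
# (blue-shifted) and far (red-shifted) surfaces, respectively.
v_near = v_sys - v_exp * np.sqrt(1.0 - (x / R) ** 2)
v_far  = v_sys + v_exp * np.sqrt(1.0 - (x / R) ** 2)

for xi, vn, vf in zip(x, v_near, v_far):
    print(f"x = {xi:+5.1f} pc : v = {vn:+6.1f} / {vf:+6.1f} km/s")
\end{verbatim}
At the shell center ($x = 0$) this gives line-of-sight velocities of $-$12 and $+$14~km~s$^{-1}$, bracketing the blue-shifted velocity range over which the expansion is seen in the channel maps.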
The blue-shifted channel maps of the $^{12}$CO and $^{13}$CO emission reveal a highly fragmented clumpy distribution coincident with this limb-brightened shell. The position-velocity (pv) diagrams (Figs.~\ref{bub-exp}~e, \ref{fig:shell_pv} and \ref{pv-ver}) also clearly reveal the eastern arc as well as the fragmented western arc. The red-shifted (velocities $>$ 1~km~s$^{-1}$) counterparts of the east and west arcs are not very prominent in the channel maps (Figs.~\ref{cii-chan}, \ref{co-chan}, \ref{13co-chan} and \ref{bub-exp}~d), but there is evidence for a large scale, though highly fragmented, arc-like structure in the red-shifted velocity pv diagrams as well. Perusing the pv diagrams (Fig.~\ref{bub-exp}~e), we discern a spheroidal shell structure as outlined by large dots in Fig.~\ref{bub-exp}~b. Assuming that the shell is expanding isotropically, its projection on position-velocity space will be an ellipse. The maximum observed velocity and radius of the shell depend on the cosine of the angle between the center of the shell and a given cut along which a pv diagram is considered (for details see \citealt{2018PhDT.......111B}). The horizontal dashed line in the pv diagrams represents the systemic velocity of $\sim$ 1~km~s$^{-1}$, estimated by examining the velocity profiles of the \hbox{{\rm [C {\scriptsize II}]}}\ emission toward various positions. The systemic velocity can vary by 1--2~km~s$^{-1}$ when looking at the spectra along different offsets. This is also consistent with the $^{12}$CO data. The maximum observed velocity of the shell is $\sim$ 13-14~km~s$^{-1}$, estimated from the observed spectra along different horizontal and vertical cuts through the shell. The center of the expanding shell is estimated from the intersection of the longest horizontal and vertical cuts at $\sim$ $\Delta\alpha$ = 0$\arcsec$, $\Delta\delta$ = 50$\arcsec$, which is $\sim$ 100$\arcsec$ east of Wd2. The resulting predicted ellipse is shown (solid curve in Fig.~\ref{bub-exp}~e) for our observed shell. Comparing the horizontal and vertical cuts and their corresponding pv diagrams (in Fig.~\ref{pv-ver}), we estimate a vertical (north-south) radius of $\sim$ 7.5~pc and a horizontal (east-west) radius of $\sim$ 4~pc, giving a geometric mean radius of $\sim$ 5.5~pc, and a shell thickness of $\sim$ 1~pc. Thus, the shell has expanded more in the north-south direction than in the east-west direction. In column~e of Fig.~\ref{bub-exp}, we flipped the ellipse of our blue-shifted shell (shown in dashed curves) and found that the actual structures at these higher velocities are not traced well by the ellipses. Instead, the red-shifted part of the gas comprises dense clumps that are moving at higher speeds of $\sim$ 20~km~s$^{-1}$. The red-shifted gas component, which we link to the northern and southern cloud structures and the ridge, will be discussed in a future paper. \subsection{Dynamics of the shell} \subsubsection{Mass of the shell} \label{sec:mass} We have used several independent methods to estimate the mass of the expanding shell. Each of these methods relies on calculating the mass of the thin, limb-brightened arc of emission as identified on the \hbox{{\rm [C {\scriptsize II}]}}\ channel maps. We then use a geometric model to estimate the total mass of the shell. \\ \begin{figure*}[htp] \centering \includegraphics[width=180mm]{fig3-ellipse.pdf} \caption{The top, middle and bottom rows are for \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO, respectively.
The maps shown here are smaller cutouts of the ones shown in Fig.~\ref{vel-int-maps}. Columns~a and c show the velocity integrated intensity maps of the observed species in the velocity ranges of $-$25 to 0~km~s$^{-1}$ and 0 to 30~km~s$^{-1}$, respectively. Column~b shows red-blue (RB) velocity channel maps of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO in the velocity ranges $-$12 to $-$8~km~s$^{-1}$ (blue) and $-$8 to $-$4~km~s$^{-1}$ (red). The shell is marked with white dots in the three rows of column b. Similarly, column~d shows RB velocity channel maps of \hbox{{\rm [C {\scriptsize II}]}}\,, $^{12}$CO and $^{13}$CO in the velocity ranges 0 to 4~km~s$^{-1}$ (blue) and 4 to 8~km~s$^{-1}$ (red). Column~e shows the pv diagrams along the vertical white dashed line cuts in columns~a and c. The predicted ellipse is shown on the pv diagrams for the blue-shifted part (solid curve), and its flipped counterpart (dashed curve) is overlaid on the red-shifted velocity structures.} \label{bub-exp} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=\textwidth]{dust_mask.png} \caption{Map of temperatures \textit{(left)} and log-10 column densities \textit{(right)} derived pixel-by-pixel from the \textit{Herschel}\ 70 and 160~$\mu$m data as described in Section~\ref{sec:mass}. The inset images in the upper right corners zoom in on the central region and show the half-ellipse mask used to estimate the shell mass from dust, \hbox{{\rm [C {\scriptsize II}]}}, and CO(3$-$2).} \label{fig:dust_mask} \end{figure*} In the first method, we used 70 and 160~$\mu$m\ data from \textit{Herschel}\ PACS to estimate gas column densities via the dust column. Illuminated dust throughout high-mass star forming regions is heated by the FUV radiation and re-emits this energy in the far infrared (FIR) \citep{1999RvMP...71..173H}. The thermal FIR emission spectrum can be modeled as a modified black-body spectrum with spectral index $\beta$ in order to parametrize the emission in terms of dust (effective) temperature and optical depth. With predictions from dust grain models, such as those of \citet{Draine2003a}, the derived optical depth can be converted to a hydrogen nucleus column density. We are ultimately interested in deriving the mass in the shell, which is lined with PDRs and thus contains warmer dust. The 70 and 160~$\mu$m PACS bands are more sensitive to warmer ($T > 20$~K) dust than the longer-wavelength SPIRE bands (250, 350, and 500~$\mu$m), so we elect to use only the 70 and 160~$\mu$m\ bands in our dust emission spectrum analysis. \citet{Castellanos2014}, in their Section~4.3, make a comparable analysis of these FIR observations of RCW~49 using these two PACS bands. Following the general technique of \citet{Lombardi2014}, we zero-point calibrated the PACS images by predicting the intensity in the PACS 70 and 160~$\mu$m\ bands at 5$'$ resolution using the \textit{Planck}\ GNILC foreground dust model \citep{PlanckGNILC}. We compare the \textit{Planck}-predicted emission to the observed \textit{Herschel}\ emission in order to determine the image-wide correction for each band. Because we are making a significant extrapolation in predicting the shorter-wavelength PACS intensities using the \textit{Planck}\ dust model, which is based on longer-wavelength \textit{Planck}\ observations, we make this comparison under a mask excluding the warm central region.
This exclusion limits the comparison to lines of sight with low temperature variation compared to the central region and still includes a large area of $\sim 25$~K dust (see Figure~\ref{fig:dust_mask}) which is reasonably bright from the \textit{Planck}\ wavelengths up to PACS 70~$\mu$m, maintaining the validity of our extrapolation. A Gaussian curve is fitted to the distribution of differences between the predicted and observed intensity for each of the two PACS bands, and the fitted mean is assigned as the required zero-point correction. The correction, a single number for each band, is added to each image. The 70 and 160~$\mu$m\ intensities along the limb-brightened shell are of the order $\sim$ 0.5 to 2 $\times$ 10$^{4}$~MJy~sr$^{-1}$, so our applied zero-point corrections of 80 and 370~MJy~sr$^{-1}$, respectively, are no more than 10$\%$ of the total intensity in either band. After zero-point correcting the two PACS images, we derive dust effective temperature and optical depth. We can model the emission with a modified black-body spectrum using Equations 1--3 from \citet{Lombardi2014}. In order to compare model emission with the PACS intensity measurements, we use Equation~8 from \citet{Lombardi2014} to integrate the model spectrum over the relative spectral response functions available for both the 70 and 160~$\mu$m\ PACS bands. In the present work, all source intensities and response functions which we discuss refer to the extended emission versions (as opposed to point source emission) where applicable. The PACS photometry is expressed as intensity (MJy/sr), already accounting for beam areas. Following their Equation~8, we can write the mean intensity $\overline{I}_{i}$ measured in band $i$ as \begin{equation} \label{eq:dust_bandpass} \overline{I}_{i} = \frac{\int I_{\nu}\, R_{\nu}\, \text{d}\nu}{\int (\nu_{i}/\nu)\, R_{\nu}\, \text{d}\nu }. \end{equation} Expressing the above function of $I_{\nu}$ as a ``bandpass function'' $BP_{i}$ of the incident source intensity $I_{\nu}$ and combining with their Equation~1, we rewrite $\overline{I}_{i}$ as \begin{equation} \label{eq:dust_intensity} \overline{I}_{i} = BP_{i}\Big[ I_{\nu} \Big] = BP_{i}\Big[ B_{\nu}(T) (1 - e^{-\tau_{\nu}}) \Big] \end{equation} where $B_{\nu}(T)$ is the Planck function in Equation~2 by \citet{Lombardi2014} and $\tau_{\nu}$ is modeled as a power law with spectral index $\beta$, as given in their Equation~3. \begin{equation} \label{eq:dust_taudefinition} \tau_{\nu} = \tau_{0} \big(\nu / \nu_{0} \big)^{\beta}. \end{equation} We adopt $\nu_{0} = 1874$~GHz, corresponding to 160~$\mu$m, so that $\tau_{0}$ is the optical depth at 160~$\mu$m\ ($\tau_{160}$). We use a fixed spectral index $\beta = 2$, consistent with the grain models of \citet{Draine2003a}. Since we have two observations and two unknowns, we are able to derive a unique solution for effective temperature $T$ and optical depth $\tau_{0}$. The simplest solution can be found by making the optically thin ($\tau_{\nu} \ll 1$) approximation $(1 - e^{-\tau_{\nu}}) \approx \tau_{\nu}$. 
Applying this approximation within our Equation~\ref{eq:dust_intensity}, measured intensity $\overline{I}_{i}$ in either band can be expressed \begin{equation} \label{eq:dust_optthin} \begin{aligned} \overline{I}_{i} = BP_{i}\Big[ B_{\nu}(T)\, \tau_{\nu} \Big] = BP_{i}\Big[ B_{\nu}(T)\, \tau_{0} \big(\nu / \nu_{0} \big)^{\beta} \Big] \\ = BP_{i}\Big[ B_{\nu}(T)\, \big(\nu / \nu_{0} \big)^{\beta} \Big]\, \tau_{0} \end{aligned} \end{equation} Recalling that the bandpass function $BP_{i}$ is primarily an integral over frequency, the constant-in-frequency $\tau_{0}$ can be pulled outside of the function. The ratio of the intensities in the two bands excludes the parameter $\tau_{0}$ entirely. \begin{equation} \label{eq:dust_intensityratio} \frac{\overline{I}_{70}}{\overline{I}_{160}} = \frac{BP_{70}\Big[ B_{\nu}(T)\, \big(\nu / \nu_{0} \big)^{\beta} \Big]}{BP_{160}\Big[ B_{\nu}(T)\, \big(\nu / \nu_{0} \big)^{\beta} \Big]} \end{equation} This expression for the ratio of the measured intensities depends only on one parameter, the effective temperature $T$. The expression is easily evaluated for a range of $T$, producing a series of modeled intensity ratio values. Using this numerical grid, we interpolate from the observed intensity ratio values to temperatures. With derived effective temperatures in hand, we rearrange our Equation~\ref{eq:dust_optthin} and evaluate the expression for $\tau_{0}$ using the measured intensities in one of the bands. \begin{equation} \label{eq:dust_solvefortau} \tau_{0} = \frac{\overline{I}_{i}}{BP_{i}\Big[ B_{\nu}(T)\, \big(\nu / \nu_{0} \big)^{\beta} \Big]} \end{equation} We have numerically evaluated the above two expressions to derive the temperature and dust optical depth over the map. We have converted the calculated 160~$\mu$m optical depth into H-nuclei column densities (described below) and the results are shown in Fig.~\ref{fig:dust_mask}. In principle, there is no need to make the optically thin approximation. We can work directly from Equation~\ref{eq:dust_intensity} and write the intensity ratio of the two bands without any approximation or cancelling of terms. We can then use the calculated optical depth in the optically thin approximation to derive, from the rewritten intensity ratio, an improved temperature, and use that to numerically derive a new optical depth from Equation~\ref{eq:dust_intensity} and continue this iteration until convergence is achieved. As the 160~$\mu$m optical depth in the shell is rather small, this procedure converges rapidly. Tests on a few points in the shell demonstrate that the iteration produces only small ($\sim$5\%) changes in the optical depth. As this is comparable to the calibration uncertainty, we have elected to continue the analysis with the optically thin approximation. In order to convert the 160~$\mu$m\ optical depth to hydrogen nucleus column density, $N(H)$ = $N$(HI) + 2$N$(H$_2$), we use the \citet{Draine2003a} R$_{V} = 3.1$ value of the dust extinction cross section per hydrogen nucleus at 160~$\mu$m, $C_{\text{ext},160}/\text{H} = 1.9 \times 10^{-25}~\text{cm}^{2}/\text{H}$, and solve Equation~\ref{eq:dust_column} for $N(H)$. The maps of dust temperature and $N(H)$ are presented in Figure~\ref{fig:dust_mask}. \begin{equation} \label{eq:dust_column} \tau_{160} = (C_{\text{ext},160}/\text{H}) \times N(H) \end{equation} We assume a half-elliptical shell, with the major and minor axes lengths from Section~\ref{sec:shell_description}, and create a mask tracing the limb-brightened Eastern edge. 
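For concreteness, the inversion from the two band intensities to temperature, optical depth and column density can be sketched numerically. The sketch below (Python) replaces the band-integrated bandpass functions with monochromatic values at the 70 and 160~$\mu$m band centers, a simplification of the full treatment described above, so its numbers are illustrative only:
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def planck(nu, T):
    """Planck function B_nu(T) (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# Monochromatic stand-ins for the band-integrated intensities
nu70, nu160 = c / 70e-4, c / 160e-4   # band centers (Hz)
nu0, beta = 1874e9, 2.0               # tau normalized at 160 um

def model_ratio(T):
    """Optically thin 70/160 intensity ratio as a function of T."""
    return (planck(nu70, T) * (nu70 / nu0)**beta
            / (planck(nu160, T) * (nu160 / nu0)**beta))

# Numerical grid: interpolate the observed ratio to a temperature
T_grid = np.linspace(10.0, 100.0, 2000)
ratio_grid = model_ratio(T_grid)      # monotonically increasing in T

def solve_T_tau_NH(I70, I160):
    """(T, tau_160, N_H) from zero-point-corrected intensities
    in MJy/sr, in the optically thin approximation."""
    T = np.interp(I70 / I160, ratio_grid, T_grid)
    # 1 MJy/sr = 1e-17 erg s^-1 cm^-2 Hz^-1 sr^-1
    tau160 = I160 * 1e-17 / (planck(nu160, T) * (nu160 / nu0)**beta)
    N_H = tau160 / 1.9e-25   # C_ext,160/H from Draine (2003)
    return T, tau160, N_H

# Intensities of the order observed along the shell:
print(solve_T_tau_NH(1.5e4, 1.0e4))   # T ~ 32 K, N_H ~ 9e22 cm^-2
\end{verbatim}
The recovered column density is within the 0.3 to 1.3 $\times$ 10$^{23}$~cm$^{-2}$ range we derive along the shell with the full bandpass treatment.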
From the H nucleus column densities, which range from 0.3 to 1.3$\times 10^{23}$~cm$^{-2}$ along the shell, we calculate a total gas mass (excluding He) of this region, finding a value of 8.5$\times10^{3}$~\(\textup{M}_\odot\). We consider this mass estimate an upper limit due to line-of-sight contributions, as this measurement picks up other components of gas that are not part of the shell. One of the brightest lines-of-sight in $^{12}$CO (east of Wd2) is a superposition of the shell and ridge components, which can be disentangled in the CO emission but not in the dust. We find that the masses (estimated from $^{13}$CO using the same technique as explained in the later paragraphs of this section) along the brightest lines-of-sight in CO split roughly 60/40, shell/ridge. The mass corresponding to the ridge part comes out to be 297~\(\textup{M}_\odot\), adding an uncertainty of $<$ 5\% to the shell mass. Our gas mass estimate from the dust column also includes more diffuse foreground and background contributions, as the observed FIR intensity, and consequently the calculated column density, is non-zero even a few arcminutes away from the shell. This is distinct from the non-shell components discussed above in that this diffuse contribution is larger scale, extending past the central RCW~49 region in Figure~\ref{fig:dust_mask}, and is probably physically distant from and thus completely unrelated to the shell. We make a rough estimate of this combined ``background'' by sampling a column density of 1.3$\times 10^{22}$~cm$^{-2}$ a few arcminutes away from the shell. If we subtract this background from the map and then carry through our previous calculation, we find that it may account for up to 30\% of the mass in our upper limit estimate. Rather than make this rough background estimate and subtraction, we simply reiterate our interpretation of the mass measurement as an upper limit. Finally, the half-ellipse model captures most of the visible \hbox{{\rm [C {\scriptsize II}]}}\ shell, but the shell extends slightly past the northern edge of the mask by about $20\degree$ in azimuth, and past the southern edge by about $10\degree$. We make a rough correction for this by multiplying the mass estimate by $7/6$, assuming a constant linear density along the edge of the shell. It is possible to make a more detailed shell mask for a more precise measurement, but this would necessitate more complex assumptions. \\ The second method to estimate the gas mass is by determining the C$^+$ column density $N$(C$^+$). As detailed in Appendix~\ref{sec:cii_opacity_appen}, the \hbox{{\rm [$^{13}$C {\scriptsize II}]}}\ line shows that optical depth effects are small over most of the \hbox{{\rm [C {\scriptsize II}]}}\ arc, except for the brightest emission spot. Hence, we have calculated the C$^+$ column density in the optically thin limit following \citet[equation~A.1]{2018A&A...615A.158T}. We assume an excitation temperature of $\sim$ 100~K (lower limit), which is a reasonable kinetic gas temperature in the \hbox{{\rm [C {\scriptsize II}]}}\ emitting layer of a PDR to excite C$^+$ from $^2P_{1/2}$ $\to$ $^2P_{3/2}$. We calculated the column density (upper limit) of the observed limb-brightened part for a region similar to the mask used in the dust mass estimation and for the emission within $-$25 to 0~km~s$^{-1}$. We found an average $N$(C$^+$) $\sim$ 7.2 $\times$ 10$^{18}$~cm$^{-2}$. The column density $N$(C$^+$) is not very sensitive to the choice of $T_{ex}$.
For instance, assuming $T_{ex} = 200$~K instead of 100~K decreases the calculated $N$(C$^+$) by 19\%. A pixel-by-pixel sum of $N$(C$^+$) over the masked region allowed us to estimate the H gas mass. Using the abundance ratio of C/H = 1.6 $\times$ 10$^{-4}$ \citep{2004ApJ...605..272S}, we get the corresponding H gas mass\footnote{We note that our analysis assumes a purely atomic hydrogen column density but the C$^+$ arises in both the warm atomic and molecular gas. Since the collisional excitation rate of C$^+$ by H$_2$ is $\sim$ 0.7 times the rate of excitation by atomic hydrogen \citep{2014ApJ...780..183W}, we expect that a purely H$_2$ column would have $\sim$ 1.5 times the mass.} of $\sim$ 4.6 $\times$ 10$^3$~\(\textup{M}_\odot\), which is very similar to the mass calculated from the dust emission. \\ Finally, we can estimate the H$_{2}$ gas mass from $N$($^{13}$CO), which requires determination of its excitation temperature, $T_{\rm ex}$. Again, these estimates were made for the same masked region as described above and for the emission within $-$25 to 0~km~s$^{-1}$. As detailed in Appendix~\ref{sec:co_tex_colden_appen}, we get an average $T_{\rm ex}$ $\sim$ 14.5 $\pm$ 1~K and an average $N$($^{13}$CO) $\sim$ (8.8 $\pm$ 0.1) $\times$ 10$^{15}$~cm$^{-2}$, such that the average $N$($^{12}$CO) $\sim$ 4.6 $\times$ 10$^{17}$~cm$^{-2}$, using $^{12}$CO/$^{13}$CO = 52 \citep{2005ApJ...634.1126M}. Similar to the calculation of the H gas mass, by summing $N$($^{13}$CO) pixel-by-pixel and using $^{12}$CO/H$_{2}$ = 8.5 $\times$ 10$^{-5}$ \citep{2010pcim.book.....T}, we get an H$_2$ gas mass of $\sim$ 1.5 $\times$ 10$^{3}$~\(\textup{M}_\odot\). As expected from the more fragmented CO emission, this mass estimate is somewhat less than that derived from the dust or the \hbox{{\rm [C {\scriptsize II}]}}\ emission. \\ In summary, our mass estimate using the dust includes both the atomic and molecular gas. The estimate using \hbox{{\rm [C {\scriptsize II}]}}\ emission (CO dark gas) depends on the molecular fraction but differs by a factor of only 1.5 between pure atomic and pure molecular columns. The estimate from $^{13}$CO is for the molecular gas alone. Thus the shell mass estimated from dust (8.5 $\times$ 10$^{3}$~\(\textup{M}_\odot\)) is comparable to that estimated from \hbox{{\rm [C {\scriptsize II}]}}\ and $^{13}$CO together (6.1 $\times$ 10$^{3}$~\(\textup{M}_\odot\)); the mass estimates obtained through the different methods agree to within a factor of about 1.4. \\ In order to estimate the entire shell's mass, we assume that the shell occupies the space between two concentric prolate spheroidal shells with north-south semimajor axes $a=$ 6.5 and 7.5~pc and semiminor axes $b=$ 3.5 and 4.5~pc, such that the shell thickness is about 1~pc. We also assume that the limb-brightened region is represented by the volume of a prolate spheroid of $a=$ 7.5~pc and $b=$ 4.5~pc from which the volume of an elliptic cylinder of $a=$ 6.5~pc and $b=$ 3.5~pc is subtracted, which we call a ``cored spheroid''. The conversion from the volume of a cored spheroid to that of a spheroidal shell gives us a geometric correction factor of 2.5, which we determined numerically. By a simple argument of symmetry, this correction is valid for our quarter-spheroid shell assumption. Applying that factor to our limb-brightened part's mass (derived from dust), and including the corrective factor of $7/6$ explained earlier, we obtain a corrected shell mass estimate of $\sim2.5\times10^{4}$~\(\textup{M}_\odot\).
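The geometric correction factor of 2.5 quoted above is straightforward to reproduce with a Monte Carlo volume integration; a minimal sketch (Python) under the spheroid and cylinder dimensions stated above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

a_out, b_out = 7.5, 4.5   # outer spheroid semi-axes (pc)
a_in,  b_in  = 6.5, 3.5   # inner spheroid / cylinder semi-axes (pc)

# Uniform samples in the bounding box of the outer spheroid;
# x: north-south (major axis), y: east-west, z: line of sight.
x = rng.uniform(-a_out, a_out, n)
y = rng.uniform(-b_out, b_out, n)
z = rng.uniform(-b_out, b_out, n)

in_outer = (x/a_out)**2 + (y/b_out)**2 + (z/b_out)**2 <= 1
in_inner = (x/a_in)**2 + (y/b_in)**2 + (z/b_in)**2 <= 1
in_cyl   = (x/a_in)**2 + (y/b_in)**2 <= 1  # elliptic cylinder along z

shell = in_outer & ~in_inner   # full spheroidal shell
cored = in_outer & ~in_cyl     # limb-brightened "cored spheroid"

print(shell.sum() / cored.sum())   # ~2.5
\end{verbatim}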
\\ \begin{table*} \centering \begin{threeparttable} \caption{Densities, temperatures and pressures calculated for the shell of RCW~49.} \begin{tabular}{l c c c c c} \hline\hline \noalign{\smallskip} Region & $n$ (cm$^{-3}$) & $T$ (K) & $p_{\rm th}/k$ (cm$^{-3}$~K) & $p_{\rm rad}/k$ (cm$^{-3}$~K) & $p_{\rm turb}/k$ (cm$^{-3}$~K) \\ \hline \noalign{\smallskip} Plasma\tnote{a} & 0.71 & 3.13 $\times$ 10$^6$ & 4.9 $\times$ 10$^{6}$ & - & - \\ Ionized gas\tnote{b} & 317 & 7.7 $\times$ 10$^3$ & 4.9 $\times$ 10$^6$ & - & -\\ PDR/\hbox{{\rm [C {\scriptsize II}]}}\ layer\tnote{c} & 4 $\times$ 10$^{3}$ & 300 & 1.2 $\times$ 10$^6$ & 2.6 $\times$ 10$^6$ & 5.9 $\times$ 10$^6$ \\ \hline \end{tabular} Notes: Columns from left to right are region, H density, temperature, and thermal, radiation, and turbulent pressures. \begin{tablenotes} \item[a]\small $n$ is the e$^-$ density from ionization of H and $p_{\rm th}/k$ = $2.2nT$, where the factor 2.2 accounts for ionization of H and singly ionized He. \item[b]\small $n$ is the e$^-$ density from ionization of H and $p_{\rm th}/k$ = $2nT$, where the factor 2 accounts for the electrons and ions of ionized H (He is neutral). \item[c]\small $n$ is the H density and $p_{\rm th}/k$ = $nT$. The radiation pressure $p_{\rm rad} = L_{\rm bol}/4{\pi}kR^2c$, where $R$ is the radius of the shell. The turbulent pressure $p_{\rm turb} = \mu mn ~\Delta v_{\rm turb}^2/8\ln{2}~k$, where $\mu =1.3$ is the mean molecular weight, $m$ is the hydrogen mass and $\Delta v_{\rm turb}^2 = \Delta v_{\rm FWHM}^2 - (8\ln{2}~kT/m_{\rm c})$, where $m_{\rm c}$ is the carbon mass. \end{tablenotes} \label{pressures} \end{threeparttable} \end{table*} We can check our assumption of the shell mass against the observed extinction through the foreground gas and shell. If we assume the gas distribution is uniform throughout the shell, then we can divide the total shell mass by the surface area of a quarter of a spheroidal shell at its mean semimajor and semiminor axes $a=$ 7~pc and $b=$ 4~pc and then convert the surface mass density to $A_{V}$ using the factor $N(H)/A_{V} = 1.9\times 10^{21}~\text{cm}^{-2}$ \citep{Bohlin_NH_to_Av} for an $R_{V} = 3.1$ reddening law. Including the geometric factor of 2.5 (but excluding the $7/6$ factor so that we match our surface area assumption), the mass estimate from far-infrared dust emission results in an extinction of $A_{V,\,shell} \sim 18$ through the shell. \citet{VA2013} and \citet{VPHAS_Wd2_2015} both measured an average $A_{V} \sim 6.5$ towards Wd2 cluster members, each finding the reddening to be $R_{V} \sim 3.8$. \citet{Hur2015} reported an abnormally high reddening law, $R_{V} = 4.14$, towards early-type cluster members and a more typical law, $R_{V} = 3.33$, towards foreground stars in the same field. They report a total $E(B-V) \approx$ 1.7 to 1.75 for most cluster members, and find a foreground $E(B-V)_{fg} \approx 1.05$, which suggests $A_{V,\,fg} \sim 3.5$. \citet{2015AJ....150...78Z} report $E(B-V) \approx$~1.8 to 1.9 based on reddening of line emission from ionized gas. All four of these measurements are consistent with $A_{V} \sim 6.5$ towards Wd2 cluster members, implying $A_{V,\,shell} \sim 3$ towards the cluster after accounting for the foreground extinction measured by \citet{Hur2015}. This suggests a significantly thinner shell than the $A_{V,\,shell} \sim 18$ we predict from our shell mass estimate, assuming a uniform shell.
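The $A_{V,\,shell} \sim 18$ prediction follows from straightforward unit bookkeeping; a minimal numerical sketch (Python) using the quarter-spheroid surface area and conversion factor adopted above:
\begin{verbatim}
import numpy as np

Msun, mH, pc = 1.989e33, 1.674e-24, 3.086e18   # cgs

# Limb-brightened dust mass times the geometric factor of 2.5
# (the 7/6 azimuth factor is excluded to match the quarter-shell
# surface-area assumption):
M_shell = 8.5e3 * 2.5 * Msun   # g

# Surface area of a quarter prolate spheroid, mean semi-axes:
a, b = 7.0 * pc, 4.0 * pc
e = np.sqrt(1.0 - (b / a)**2)
area = 0.25 * 2 * np.pi * b**2 * (1 + (a / (b * e)) * np.arcsin(e))

N_H = M_shell / (mH * area)    # H nuclei cm^-2 (mass excludes He)
A_V = N_H / 1.9e21             # Bohlin et al. conversion

print(f"N(H) ~ {N_H:.1e} cm^-2, A_V ~ {A_V:.0f}")   # A_V ~ 18
\end{verbatim}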
We explain this discrepancy by proposing that the thickness of the shell varies significantly across its surface, a claim further supported by the fragmented CO distribution we observe across the shell and the higher $A_{V} \sim 15$ fitted by \citet{2004ApJS..154..315W} to two young stellar objects embedded in the western shell. If we picture an optical path originating from the cluster and extending east, passing through the bright eastern shell, this path experiences $A_{V} \sim 18$ extinction according to our mass calculations. But according to the optical extinction measurements, the thickness of the shell drops dramatically as the optical path sweeps towards us. In fact, we do not detect any bright shell component along the line of sight towards the cluster in the \hbox{{\rm [C {\scriptsize II}]}}\ and CO spectra and pv diagrams. We conclude that the extinction associated with the cluster probes a much thinner section of the shell than the limb-brightened eastern shell from which we extrapolate our mass estimate, and so our geometrically extrapolated mass measurement must be considered an upper limit. One could imagine combining our limb-brightened shell mass estimate with an optical extinction map like those shown in Figure~14 in the paper by \citet{Hur2015} or Figures 8, 9, and 25 in the paper by \citet{2015AJ....150...78Z} in order to understand the variation in thickness across the shell. High confidence in the 3-dimensional positions of the cluster members and the geometry of the shell would be required in order for such a measurement to have any meaning, and this is beyond the scope of the present paper. \subsubsection{Energetics} \label{sec:energetics} Using the total mass (excluding He) of the shell $\sim$ 2.5 $\times$ 10$^4$~\(\textup{M}_\odot\) and its expansion velocity $\sim$ 13~km~s$^{-1}$, we calculated its kinetic energy, $E_{\rm kin}$ $\sim$ 4 $\times$ 10$^{49}$~ergs. \\ To assess the contribution from stellar winds in driving the shell, we need to estimate the stellar wind energy of Wd2. The early-type cluster members are catalogued along with their established or estimated spectral types by \cite{TFT2007}, \cite{VA2013}, and \cite{VPHAS_Wd2_2015}. We used the theoretical calibrations of \cite{Martins2005} to estimate effective temperature $T_{\text{eff}}$, surface gravity log$~g$, and luminosity $L$ from the spectral type. For WR20b (WN6ha; \citealt{vanderHucht2001}) and each component of the WR20a binary (WN6ha+WN6ha), we assume parameters fitted to WR20a by \cite{Rauw2005}. See Appendix~\ref{sec:wd2_appendix} for additional details about the synthesized catalog, the measurements derived from the catalog, and the uncertainties on those measurements. The combined mass loss rate, mechanical energy injection ($1/2~\dot M v_{\infty}^{2}$), and momentum transfer rate ($\dot M v_{\infty}$) of the O and B stars within $3'$ of the cluster center, at the peak of the X-ray emission \citep{2019ApJS..244...28T}, are $(3.1 \pm 0.2) \times 10^{-5}$~\(\textup{M}_\odot\)~yr$^{-1}$, $(8.3 \pm 0.5) \times 10^{37}$~ergs~s$^{-1}$, and $(5.6 \pm 0.4) \times 10^{29}$~dyn, respectively \citep{Leitherer2010}. The WR binary WR20a contributes an additional $1.9^{+0.25}_{-0.20}\times 10^{-5}$~\(\textup{M}_\odot\)~yr$^{-1}$, $3.6^{+1.3}_{-1.1} \times 10^{37}$~ergs~s$^{-1}$, and $3.0^{+0.70}_{-0.60} \times 10^{29}$~dyn.
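An order-of-magnitude bookkeeping of these numbers, assuming (simplistically) that the present-day injection rates have held constant over the cluster lifetime, is sketched below (Python); it recovers the kinetic energy quoted above and anticipates the cumulative wind energy discussed next:
\begin{verbatim}
Msun, yr = 1.989e33, 3.156e7   # cgs

# Kinetic energy of the expanding shell
M_shell = 2.5e4 * Msun         # g
v_exp = 13.0e5                 # cm/s
E_kin = 0.5 * M_shell * v_exp**2
print(f"E_kin      ~ {E_kin:.1e} erg")   # ~4e49 erg

# Cumulative OB wind energy over ~2 Myr at the present-day
# mechanical luminosity (a rough constant-rate assumption):
L_OB = 8.3e37                  # erg/s, OB stars within 3'
E_OB = L_OB * 2e6 * yr
print(f"E_wind(OB) ~ {E_OB:.1e} erg")    # ~5e51 erg
\end{verbatim}
This constant-rate estimate is close to the $\sim 6 \times 10^{51}$~ergs obtained from the full \texttt{Starburst99} treatment below.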
Using the evolutionary spectral synthesis software \texttt{Starburst99} \citep[see description in Appendix~\ref{sec:sb99_appendix}]{Leitherer2014Starburst99}, we estimate that over the lifetime of the cluster ($\sim$2~Myr), the OB stars have injected $\sim 6 \times 10^{51}$~ergs via their winds (see Fig.~\ref{fig:sb99} and Appendix~\ref{sec:sb99_appendix} for additional detail). WR stars represent a rather short-lived phase in the lifetimes of very massive stars, forming after $\sim$2 to 3~Myr, depending on their mass, and lasting a few hundred thousand years if we use the results of \texttt{Starburst99} as a guide. The components of WR20a are estimated by \citet{Rauw2005} to be $\sim80~\text{M}_{\odot}$ each, the most massive observed stars in the cluster; if this is close to their initial masses, then, neglecting binary effects on their evolution, \texttt{Starburst99} would suggest that they formed after $\sim3$~Myr, creating some tension with most of the age estimates of the cluster. If these Wolf-Rayet stars originated as much more massive objects, like the $\sim126~\text{M}_{\odot}$ predicted by \citet{Ascenso2007} to be the most massive star in the cluster based on their assumed IMF, then \texttt{Starburst99} would suggest that WR stars could have formed after $\sim2$~Myr, which agrees better with independent age estimates. \citet{Rauw2005} observe evidence of enhanced surface hydrogen abundance in the components of WR20a, indicating that they (and WR20b, if we assume the components are identical) are still in the core hydrogen-burning phase and, based on their position in the Hertzsprung-Russell diagram, are only $\sim1.5$~Myr old and have present-day masses very similar to their initial masses. In any case, over the lifetime of the cluster, the OB stars will have dominated the kinetic feedback; but over the last 2 to 3$\times 10^{5}$ years, the winds of the WR binary alone will have contributed $\sim4 \times 10^{49}$~ergs. \begin{figure*}[htp] \centering \includegraphics[width=0.95\textwidth]{sb99.png} \caption{\label{fig:sb99} \texttt{Starburst99} predictions of the mechanical luminosity (left panel) and momentum transfer rate (right panel) due to the stellar winds. The \texttt{Starburst99} simulation configurations are described in Appendix~\ref{sec:sb99_appendix}. The solid blue and dashed red lines, and associated solid shaded blue and hatched red regions, give the values for OB and WR stars, respectively. The horizontal lines, solid and dashed, and their associated shaded regions mark the values calculated from the observed OB and WR stars as described in Section~\ref{sec:energetics} as well as in Appendix~\ref{sec:wd2_appendix}. The darker-shaded regions reflect uncertainty in the total cluster mass, as described in Appendix~\ref{sec:sb99_appendix}. The lighter-shaded regions reflect total cluster mass uncertainty as well as uncertainty in the maximum stellar mass; the upper limit uses a [1, 120]~$M_{\odot}$ range, and the lower limit uses a [1, 80]~$M_{\odot}$ range. Note the effect of the maximum stellar mass on the age at which WR stars appear in these models.} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=130mm]{2d-3d-cartoon.pdf} \caption{2D (upper panel) and 3D (lower panel) representations of RCW~49's shell as seen by the observer. The transition boundary from \citet[Fig.~1]{2004ApJS..154..322C} is marked and labelled with `C04'. The plasma (in blue), ionized gas (in green) and the PDR (in red) are shown.
The limb-brightened part of the shell in the 2D illustration is actually the expanding \hbox{{\rm [C {\scriptsize II}]}}\ shell seen in the 3D diagram. The transition boundary overhangs from the \hbox{{\rm [C {\scriptsize II}]}}\ shell into the ionized gas structure behind it.} \label{geometry} \end{figure*} Given the similarity in spectral characteristics, the contribution by WR20b is likely similar to that of a single component of WR20a (approximately half the values given above). However, WR20b is significantly offset ($>3' \sim 3.5$~pc) from the center of the X-ray emission tracing the $\sim3$~pc radius plasma bubble, so it is probably not playing as direct a role as WR20a in powering this bubble. \\ The importance of the plasma's thermal energy in expanding the shell can be assessed by comparing the thermal pressures, $p_{\rm th}$, in the hot plasma, ionized gas and PDR of RCW~49. To characterize the hot plasma, we use the RCW~49 \textit{Chandra}/ACIS observations reported by \citet{2019ApJS..244...28T}. These authors fit X-ray spectra towards RCW~49 with plasma models using the spectral fitting software Xspec \citep{1996ASPC..101...17A}. The plane-parallel, constant temperature shocked plasma model labeled \textit{pshock2} represents the diffuse plasma towards the center of Wd2, so we take from this model the fitted temperature and surface emission measure listed in Table~5 of \citet{2019ApJS..244...28T}. They independently fit spectra from the inner region towards Wd2 and the outer region out to $\sim$3 arcminutes away (see their text for details), and label these independent fits ``Wd2~inner'' and ``Wd2~outer''. We take the \textit{pshock2} parameters of the ``Wd2~outer'' region and assume the plasma fills a sphere whose circular cross-section equals the observed areas of ``Wd2~outer'' and ``Wd2~inner'' combined, yielding a radius of 2.7~pc. Following the calculations of \citet{Townsley2003}, we obtain the electron density (assuming a line-of-sight distance $2r$ through the plasma) and temperature, which are listed in Table~\ref{pressures}. Finally, assuming the temperature and pressure are constant throughout the spherical bubble, the total thermal energy of the plasma is about $2.4 \times 10^{48}$~ergs. The thermal energy of the hot plasma is thus clearly lower than both the mechanical energy injected over the lifetime of the stellar cluster and the kinetic energy of the expanding shell. We will revisit this in Sect.~\ref{sec:morph}. We followed the work by \cite{2015ApJ...813...24P} to estimate the temperature and density in the ionized gas in the \hbox{{\rm H {\scriptsize II}}}\ region using their observed H109$\alpha$ line properties. We excluded from the calculation their ``region~B'' due to its outlying line-of-sight velocity, which indicates it may be from the blue-shifted foreground ridge component we detect in \hbox{{\rm [C {\scriptsize II}]}}. We assumed a hollow spherical geometry with an outer radius of 5~pc, bordering the PDR, and an inner radius of 2.7~pc, bordering the X-ray emitting plasma, and calculated the electron density and temperature (listed in Table~\ref{pressures}). Lastly, for the PDR, we compared our observations with existing PDR models (PDR Toolbox; \citealt{2006ApJ...644..283K}, \citealt{2008ASPC..394..654P}). To use these, we needed an estimation of the FUV flux incident on the PDR.
We used the $T_{\rm eff}$ and log $g$ (or appropriate WR parameters from \citealt{Rauw2005}) for each catalogued star to select models from the PoWR stellar atmosphere grids \citep{PoWRCode3}. From the synthetic spectra provided by the PoWR models, we integrated the total flux between 6 and 13.6~eV. Using the stellar coordinates and these FUV fluxes from all stars within 12\arcmin\ (including WR20a and WR20b, though these contribute only a few percent of the total FUV flux), we estimated the integrated FUV flux between 6 and 13.6~eV, often expressed as $G_{0}$ in terms of the Habing field, to be $\sim$ 2--3 $\times~10^{3}$ in Habing units at the limb-brightened shell radius of $\sim$ 5--6 pc from Wd2. This value should be considered an upper limit, as the extinction between the illuminating cluster and the PDR due to dust in the \hbox{{\rm H {\scriptsize II}}}\ region has not been accounted for. For the average intensities (within the masked region shown in Fig.~\ref{fig:dust_mask}) of \hbox{{\rm [C {\scriptsize II}]}}\ ($\sim$ 112~K~km~s$^{-1}$) and $^{12}$CO ($\sim$ 41~K~km~s$^{-1}$), and a FUV radiation field $G_{\rm 0}$ $\sim$ 10$^{3}$ in Habing units, we determined the PDR's H density and temperature using the models from the PDR toolbox\footnote{http://dustem.astro.umd.edu/} \citep{2006ApJ...644..283K,2008ASPC..394..654P}. Comparing our observed line ratio of \hbox{{\rm [C {\scriptsize II}]}}\,/$^{12}$CO(3-2) (after conversion from K~km~s$^{-1}$ to erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$) with the modeled line ratio\footnote{http://dustem.astro.umd.edu/models/wk2006/ciico32web.html} as a function of the cloud density and $G_{\rm 0}$ allowed us to estimate the density. Further, using this density and the $G_{\rm 0}$, we constrained the PDR temperature using the modeled PDR surface temperature map\footnote{http://dustem.astro.umd.edu/models/wk2006/tsweb.html}. The temperature is not strongly dependent on $G_{\rm 0}$ in the derived density range. A list of the derived densities, temperatures and thermal pressures is presented in Table~\ref{pressures}. Using the sum of the bolometric luminosities, $L_{\rm Bol}$, of the OB stars and the WR20a binary, we can estimate the radiation pressure (as given in Table~\ref{pressures}). The bolometric luminosities of the OB stars within 3$\arcmin$ were taken from the spectral type calibrations of \citet{Martins2005} as described above, and \citet{Rauw2005} provide the bolometric luminosities of the components of WR20a based on their fit to its spectrum. We can also estimate the turbulent pressure, $p_{\rm turb}$ (in Table~\ref{pressures}), from the full width at half maximum of the observed \hbox{{\rm [C {\scriptsize II}]}}\ emission line profile. These results reveal a rough equipartition between the thermal, turbulent and radiation pressures in the PDR. Examining Table~\ref{pressures}, we conclude that the hot plasma, the ionized gas and the PDR are in approximate pressure equilibrium, as expected for a stellar wind shell driven by mechanical energy input from the central star cluster \citep{1977ApJ...218..377W}. \section{Discussion} \subsection{Morphology of the shell and the role of WR20a in its expansion} \label{sec:morph} As discussed in Sect.~3.6.2, the kinetic energy of the expanding shell is much higher than the thermal energy of the plasma. We emphasize that the mechanical luminosity of the stellar cluster well exceeds the requirements for driving the shell.
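This mismatch can be made concrete with a few lines of arithmetic; a minimal sketch (Python) recovering the plasma thermal energy of Sect.~3.6.2 from the values in Table~\ref{pressures} and comparing it to the shell's kinetic energy:
\begin{verbatim}
import numpy as np

pc, k_B = 3.086e18, 1.381e-16   # cgs

# Hot plasma properties from the X-ray fits (Table 1):
n_e, T = 0.71, 3.13e6           # cm^-3, K
r = 2.7 * pc                    # assumed plasma sphere radius
V = 4.0 / 3.0 * np.pi * r**3

# Thermal energy; the factor 2.2 matches the pressure definition
# in Table 1 (ionized H plus singly ionized He):
E_th = 1.5 * 2.2 * n_e * k_B * T * V
print(f"E_th(plasma) ~ {E_th:.1e} erg")      # ~2.4e48 erg

E_kin = 4e49                    # shell kinetic energy (Sect. 3.6.2)
print(f"E_kin/E_th   ~ {E_kin / E_th:.0f}")  # ~16
\end{verbatim}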
Therefore, we surmise that much of the thermal energy was lost once the shell broke open to the west and the hot gas expanded freely into the environment (as shown in Fig.~\ref{geometry}). Besides the adiabatic cooling associated with this ``free'' expansion, evaporation of entrained cold gas into the hot plasma due to electron conduction (\citealt{1977ApJ...218..377W}; \citealt{1977ApJ...211..135C}) may have led to regions that are dense and cool enough to allow rapid cooling, resulting in a rapid loss of thermal energy. We also recognize that there is a timescale issue. The shell radius of 6~pc and the expansion velocity of 13~km~s$^{-1}$ imply an expansion timescale of $\sim$ 0.5~Myr. For an enclosed bubble driven by adiabatic expansion of the hot plasma created by a continuous input of mechanical energy by stellar winds, the expansion timescale is only 0.27~Myr (using equations~51 and 52 of \citealt{1977ApJ...218..377W}). In contrast, the age of the Wd2 cluster is $\sim$2~Myr according to most age estimates. It should be noted that a different choice of the heliocentric distance of RCW~49 (say up to 8~kpc) would increase the estimated expansion timescale of the shell to 0.52~Myr, which is still inconsistent with the age of Wd2. Hence, the average expansion velocity over most of the cluster lifetime must have been $\lessapprox$ 2~km~s$^{-1}$, and only very recently ($\lessapprox$ 0.2~Myr) has the shell been accelerated to 13~km~s$^{-1}$. We infer that feedback from OB stars initially drove shell formation and expansion but that this bubble quickly burst, releasing the hot plasma. At that point, expansion would rapidly slow down due to continued sweeping up of the cold gas in the environment. The recent re-acceleration of the shell might be connected to the evolution of the most massive stars (WR20a and 20b) to the Wolf-Rayet phase. If we assume the bubble has burst, expansion must be driven by momentum transfer. The difficulty is that the Wolf-Rayet stars do not seem to inject significantly more momentum than the ensemble of OB stars; this holds both for the momentum transfer rates calculated directly from the observed WR stars and their properties and for those more generally predicted by \texttt{Starburst99} simulations. We do not propose a solution to this conundrum at present; it requires further detailed analyses of the member stars, especially WR20a and 20b, as well as of the expanding shell. \subsection{Previous studies and larger scale structure} Our picture of a single shell at the center of RCW~49 differs from the two-shell scenario (with the shells separated by the ridge) presented by \citet{1997A&A...317..563W} and \citet{2013A&A...559A..31B}. Owing to the kinematic information provided by the high spectral resolution of the \hbox{{\rm [C {\scriptsize II}]}}\, data, which the radio data lack, we were able to decouple the central ``ridge'' from the shell. The ``radio ring B'' surrounding WR20b appears to be a superposition of filamentary structures, rather than a coherent ring. These structures generally extend toward the main Wd2 cluster, suggesting that Wd2, rather than WR20b, dominantly influences their morphology. We find that the ridge, too, is a superposition of hot gas and dust components well-separated in velocity space, which may explain the variation in H137$\beta$ and H109$\alpha$ radio recombination line velocities observed by \citet{2013A&A...559A..31B} and \citet{2015ApJ...813...24P}, respectively.
Some red-shifted sections of the ridge seem to be connected in velocity space to a larger $+$16~km~s$^{-1}$ molecular cloud \citep{2009ApJ...696L.115F}, which may indicate that these sections lie beyond the cluster and the limb-brightened shell. While we argue that the expansion of the shell is driven by the stellar winds of WR20a, on a larger scale the observed velocity structure (blue and red-shifted components of gas) in RCW~49 could be guided by the dynamics of several individual molecular clouds which predate Wd2. \citet{2009ApJ...696L.115F} suggest that a collision between two of these clouds may have contributed to the formation of Wd2 and, consequently, RCW~49. Additionally, \citet{2019ApJS..244...28T} observed diffuse hard X-ray emission far-west of Wd2 toward a pulsar wind nebula that is indicative of a cavity supernova remnant, suggesting an earlier generation of massive star formation in RCW~49. Perhaps this earlier generation of star formation is responsible for the large scale velocity dispersion observed by \citet{2009ApJ...696L.115F}, while the local shell expansion that we present in this work is driven by the stellar winds of WR20a. \citet{2004ApJS..154..315W} studied star formation in different regions of RCW~49 using the GLIMPSE survey and found that most of the star formation is occurring within a 5~pc radius of the Wd2 cluster (similar to the transition boundary of \citealt{2004ApJS..154..322C}). At larger distances, a second generation of star formation, perhaps triggered by Wd2, is also suggested, based on the massive (B2--3) young stellar objects (YSOs) detected. In the context of our findings, this implies that star formation is occurring mainly in the ridge of RCW~49, while a second (younger) generation of star formation is probably triggered in the shell. While the former may reflect the cloud-cloud collision event highlighted by \citet{2009ApJ...696L.115F}, the latter is likely triggered by feedback from the Wd2 cluster. However, \citet{Hur2015} suggest triggered star formation from the radiative feedback of Wd2 as a possible explanation for the enhanced abundance of pre-main sequence (PMS) candidates observed in X-rays in the ridge by \citet{Naze2008_Xray}. We suggest that both the cloud-cloud collision event and the compression from Wd2's radiative feedback could contribute to the triggered star formation in the ridge. \citet{2004ApJS..154..315W} reported a total of $\sim$ 7000 YSOs in RCW~49 with a total mass of 4500~\(\textup{M}_\odot\), and we infer that a fraction of these constitute triggered star formation in the shell. Follow-up studies in X-ray and IR wavelengths can shed more light on the triggered star formation efficiency of the shell. The stellar mass of the Wd2 cluster is $\sim$ 3.1 $\times$ 10$^4$~\(\textup{M}_\odot\) for stars with masses $<$ 0.65~\(\textup{M}_\odot\) and $\sim$ 4 $\times$ 10$^4$~\(\textup{M}_\odot\) for stars with masses $>$ 0.65~\(\textup{M}_\odot\) \citep{Zeidler2017}. This means that the stellar mass due to triggered star formation in the shell is smaller than the mass of the Wd2 cluster. Furthermore, from their modeling results that calculate emission from envelopes, disks, and outflows surrounding stars, \citet{2004ApJS..154..315W} found that the most massive YSO in RCW~49 is $<$ 5.9~\(\textup{M}_\odot\), suggesting that the new generation of stars will be of relatively lower mass compared to the stars in Wd2, and that feedback from this next generation of stars is expected to be limited.
\subsection{Comparison with the shell of Orion} The Orion Molecular Cloud (OMC) is the closest massive star-forming region and has been studied extensively in a wide range of wavelengths. OMC~1 is its most massive core, associated with the well-known \hbox{{\rm H {\scriptsize II}}}\ region M42 (the Orion Nebula). \citet{2019Natur.565..618P} reported an expanding shell driven by the stellar winds of the O7V star $\theta^1$ Ori C in the Orion Nebula. The velocity of the expanding Orion veil shell is similar to that of the shell of RCW~49, i.e. 13~km~s$^{-1}$, but its mass of $\sim$ 2600~\(\textup{M}_\odot\) is about 9 times lower than the mass (and also the kinetic energy) of the shell of RCW~49. The difference between the kinetic energies reflects the fact that the Orion veil shell is the result of the mechanical energy input from one O7V star, while the shell in RCW~49 is the effect of a rich stellar cluster. Furthermore, Orion, with an age of 0.2~Myr, is considerably younger than RCW~49, which has an age of at least 2~Myr. Despite being created by a larger mechanical input, the shell of RCW~49 is moving at a velocity similar to that of the Orion veil. Perhaps this is because the shell of RCW~49 is broken toward the west and is venting out plasma, while the Orion veil seems to be a complete shell. It is likely, however, that the veil will burst soon as well, releasing the hot plasma and hence the driving force of the expansion. The O7V star $\theta^1$ Ori C lies at the front side of OMC~1 and the shell expansion toward the rear is stopped by the dense core. In contrast, in RCW~49 the back side of the bubble seems to have broken open, and the large-scale molecular cloud in the velocity range of 11 to 21~km~s$^{-1}$ (as reported by \citealt{2009ApJ...696L.115F}) partially blocks the expansion of the red-shifted gas. Another interesting difference between the two shells is that we observe CO emission along the lines-of-sight toward RCW~49's shell, and its spatial distribution, though fragmented, outlines the shell (as in Figs.~\ref{co-chan} and \ref{13co-chan}), while the Orion veil shell lacks CO emission \citep{2020A&A...639A...2P}. The non-detection of CO in the Orion veil shell is attributed to the rather limited column density, $A_{V}$ $\sim$ 2~mag, which corresponds to a gas column of $N$(H) = 4 $\times$ 10$^{21}$~cm$^{-2}$ \citep{2020A&A...639A...2P}, while we derived a maximum $A_{V}$ $\sim$ 18 for RCW~49, which corresponds to a gas column of $N$(H) = 3 $\times$ 10$^{22}$~cm$^{-2}$. The existence of larger-scale (and perhaps older) molecular clouds in RCW~49 has been established \citep{2009ApJ...696L.115F}. The greater age of RCW~49 (2~Myr) compared to Orion (0.2~Myr) places it at a more advanced stage of evolution, where stellar feedback has had more pronounced effects in shaping the environment, sweeping dense molecular gas into the clumps seen toward the shell. Moreover, the denser gas column environment of RCW~49 could be another reason why, despite the larger mechanical input, its shell's expansion velocity is similar to that of the Orion veil. The larger statistical study of the effects of stellar feedback in Galactic star forming regions initiated by the SOFIA FEEDBACK legacy program can help illuminate whether swept-up shells typically resemble Orion or RCW~49. \subsection{Our understanding of stellar feedback} This study of the expanding shell and molecular clouds in RCW~49 contributes to our understanding of stellar feedback in our Galaxy.
\subsection{Our understanding of stellar feedback} This study of the expanding shell and molecular clouds in RCW~49 contributes to our understanding of stellar feedback in our Galaxy. In Sect.~3.6.2 we find that the evolution of the hot plasma, the \hbox{{\rm H {\scriptsize II}}}\ region and the PDR appears to be dominated by the energy injection of the stellar winds of massive stars. Following our discussion of the next generation of star formation in Sect.~4.2, the total mass available for (triggered) star formation in the swept-up shell is $\sim$ 10$^4$~\(\textup{M}_\odot\), which can be compared to the molecular clouds (of total mass $\sim$ 2 $\times$ 10$^5$~\(\textup{M}_\odot\)) from which Wd2 was formed \citep{2009ApJ...696L.115F}. As the star formation efficiency is less than unity even in a dense core, the total mass of the triggered cluster will be considerably less than that of Wd2. Moreover, the mass of the most massive star in a cluster scales with the mass of the cluster \citep{1997ApJ...476..144M}, and feedback can be expected to scale with the mechanical luminosity injected by the (most massive) star. Therefore, each successive triggered star cluster will have a lower mass than the previous one, resulting in weaker feedback, and the triggered star formation process will gradually die down. Furthermore, comparing RCW~49 with Orion, we surmise that the effects of the stellar winds are limited to the earliest phases of the expansion and that, once the swept-up shell breaks open and the hot gas is vented into the surroundings, the expansion stalls; if the stars are massive enough to enter the Wolf-Rayet phase, however, the expansion can be rejuvenated.

\section{Conclusions} We presented, for the first time, large scale velocity-integrated intensity maps of the $^2$P$_{3/2}$ $\to$ $^2$P$_{1/2}$ transition of \hbox{{\rm [C {\scriptsize II}]}}\ and the $J$ = 3 $\to$ 2 transitions of $^{12}$CO and $^{13}$CO toward RCW~49. By analyzing the observed data in different velocity ranges, we successfully decoupled an expanding shell associated with RCW~49 from the entire gas complex. With access to data of better resolution than available to earlier studies of RCW~49 (as discussed in Sect.~4.2), we justified the presence of a single shell instead of the two suggested previously and characterised it for the first time. We find that the shell, expanding toward us at $\sim$ 13~km~s$^{-1}$, is $\sim$ 1~pc thick and has a radius of $\sim$ 6~pc. We used dust SEDs and the column densities of \hbox{{\rm [C {\scriptsize II}]}}\ and $^{13}$CO to estimate the mass of the shell, $\sim$ 2.5 $\times$ 10$^4$~\(\textup{M}_\odot\). We quantified and discussed the effects of the stellar wind feedback, which mechanically powers the expansion of the shell of RCW~49. We constrained the physical conditions of the hot plasma and the ionised gas using previous X-ray and radio wavelength studies, while using our new \hbox{{\rm [C {\scriptsize II}]}}\ and CO observations to determine the PDR parameters. Building on the geometry of RCW~49 derived from dust emission studies, we put forward a 3D representation of the shell as seen by the observer, in which the \hbox{{\rm [C {\scriptsize II}]}}\ shell overhangs the transition boundary between the ionised gas and the PDR. Based on the energy and time scale estimates, we suggest that the shell, initially powered by Wd2, broke open in the west, releasing the hot plasma, and that its observed re-acceleration is mainly driven by the Wolf-Rayet star WR20a. Besides the qualitative and quantitative analysis of the shell, we spectrally resolve and present the spatially distinct gas structures in RCW~49: the ridge and the northern and southern clouds.
Comparing our findings with the existing literature, we conclude that a second generation of star formation has been triggered in the shell, but the new generation of stars being formed is relatively lower in mass than the stars existing in Wd2. \\ \acknowledgments We thank the anonymous referee for bringing important issues to our attention and for helping to clarify the paper. This work is based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. Financial support for the SOFIA Legacy Program, FEEDBACK, at the University of Maryland was provided by NASA through award SOF070077 issued by USRA. The FEEDBACK project is supported by the Federal Ministry of Economics and Energy (BMWI) via DLR, Projekt Number 50 OR 1916 (FEEDBACK) and Projekt Number 50 OR 1714 (MOBS - MOdellierung von Beobachtungsdaten SOFIA). This work was also supported by the Agence National de Recherche (ANR/France) and the Deutsche Forschungsgemeinschaft (DFG/Germany) through the project ``GENESIS'' (ANR-16-CE92-0035-01/DFG1591/2-1).
\section{Introduction} \label{sec:intro} The rich excitation spectrum of the nucleon mirrors its complicated multi--quark inner dynamics. Therefore, baryon spectroscopy is expected to provide benchmark data for any model of the nucleon, e.g. quark models in their variety \cite{CR00,Loering01} or, increasingly in the near future, Lattice QCD as an approximation of full Quantum Chromodynamics \cite{KLW05}. However, in many cases the widths and density of states prohibit a clean identification, i.e. an unambiguous assignment of quantum numbers within a partial wave analysis. The analyses are mostly based on pion and kaon induced reactions. Since some excited states are suspected to have a strongly disfavoured $\pi N$ coupling \cite{CR94}, photoinduced reactions offer complementary access to the nucleon spectrum, in particular in non-pionic final states. This provided the motivation to search for expected (within quark models) but yet unobserved ``missing'' resonances in $\eta$ photoproduction off the proton \cite{Crede05,SL02,Chen03}. The $\eta$ channel provides a great simplification of the complex spectrum: due to its isospin $I=0$, it only connects $N^*$ states ($I=1/2$) to the nucleon ground state, but no $\Delta$ states ($I=3/2$). Nevertheless, an unambiguous extraction of all contributing partial waves still requires a complete experiment with respect to the reaction amplitudes. Pseudoscalar meson photoproduction is determined by 4 complex amplitudes. However, due to the inherent nonlinearities it is not sufficient to measure $8-1(\text{overall phase})=7$ independent quantities, as could be naively expected. Instead, it can be shown that a minimum of 8 observables needs to be measured \cite{CT97}. Besides the differential cross section, those include 3 single-spin and 4 double-spin observables. The combination of double-spin observables can be appropriately chosen, but the cross section, target asymmetry, $T$, recoil polarisation, $P$, and beam asymmetry, $\Sigma$, are required in any case (for a definition of the observables see e.g. ref.~\cite{KDT95}). Once a linearly polarised photon beam is provided, the photon-beam asymmetry is already accessible without a polarised target or recoil polarimetry. For this case the cross section of pseudoscalar meson photoproduction off a nucleon can be cast into the form \cite{KDT95} \begin{equation} \frac{d\sigma}{d\Omega} = \frac{d\sigma_0}{d\Omega}\: \left( 1 + P_\gamma\,\Sigma\,\cos 2\Phi \right), \label{eq:xsec} \end{equation} where $\sigma_0$ denotes the polarisation independent cross section, $P_\gamma$ the degree of linear polarisation of the incident photon beam, and $\Phi$ the azimuthal orientation of the reaction plane with respect to the plane of linear polarisation. While in principle it suffices to determine $d\sigma/d\Omega$ around $\Phi=0$ and $\Phi=90$\,degrees, it is more favourable to extract the beam asymmetry from the modulation of the cross section over the full azimuthal circle, since systematic effects are then better under control. Thus, a cylindrically symmetric detector such as \texttt{Crystal Barrel} \cite{CBarrel} is particularly suited to measure $\Sigma$ in $\eta$ photoproduction. Most previous experiments investigated differential cross sections \cite{Crede05,Krusche95,Renard02,Dugger02}, but there are also a few measurements of single polarisation observables. Heusch et al. \cite{Heusch70} determined the recoil proton polarisation in $\eta$ photoproduction between $0.8$ GeV and $1.1$ GeV in a spark chamber experiment.
The target asymmetry was measured at the Bonn synchrotron \cite{Bock81}. A first measurement of the photon beam asymmetry using linearly polarised photon beams was accomplished at the laser backscattering facility GRAAL at the ESRF Grenoble \cite{Ajaka98}. The GRAAL experiments were later extended to higher energies, and preliminary results have been presented at conferences \cite{Kouznetsov02}. Large asymmetries, $\Sigma \simeq 0.5$, were obtained in the near-threshold region. In contrast, in $\eta$ electroproduction the $TT$ interference cross section, which is related to $\Sigma$, was found to be consistent with zero over almost the entire range $Q^2=0.25$ --- $1.5$ (GeV/c)$^2$ in the threshold region \cite{Thompson01}. In order to clarify this situation and to extend the energy range in $\eta$ photoproduction, we carried out experiments with linearly polarised tagged photon beams at the electron accelerator ELSA \cite{Hillert06} of the University of Bonn. The following section is first devoted to the experimental setup. In addition to the basic analysis steps, section\,\ref{sec:analysis} then describes the method of extracting $\Sigma$. The results are discussed in section \ref{sec:results} and, after a brief summary, tabulated in the appendix. \section{Experimental Setup} \label{sec:1} \begin{figure} \resizebox{0.97\columnwidth}{!}{% \includegraphics{1350_single_4Pub.eps} } \caption{The measured coherent bremsstrahlung intensity normalised to an incoherent spectrum (histogram, see text) in comparison to an improved version \cite{Elsner06} of the ANB calculation \cite{ANB} (full curve). The diamond radiator was set for an intensity maximum at $E_\gamma=1305$\,MeV. The numbered blocks indicate the ranges covered by the 14 timing scintillators of the tagging detector.} \label{fig:coh_spect} \end{figure} Electron beams of $E_0 = 3.2$ GeV were used to produce coherent bremsstrahlung from a $500\,\mu$m thick diamond crystal. Electrons which radiated a photon are momentum analysed using a magnetic dipole (tagging) spectrometer. Its detection system consists of 14 plastic scintillators providing fast timing and additional hodoscopes to achieve the required energy resolution: the range of low electron energies, corresponding to $E_\gamma = 0.8 ... 0.92\,E_0$, is covered by a multi-wire proportional chamber, while a 480 channel double-layer scintillating fibre detector covers the range $0.18 ... 0.8\,E_0$. At the nominal setting of $E_0 = 3.2$\,GeV the energy resolution varies between 2\,MeV for the high photon energies and 25\,MeV for the low energies. Since the photon beam remained virtually uncollimated, the measured electron spectrum directly reflects the photon spectrum. \begin{figure} \resizebox{0.97\columnwidth}{!}{% \includegraphics{LinPolSkalierung_4Pub_2.eps} } \caption{Calculation of the relative bremsstrahlung intensity (top curve) and the corresponding degree of linear polarisation (bottom curve) using an improved version \cite{Elsner06} of the ANB bremsstrahlung code \cite{ANB} with scaled incoherent contribution (see text). A scaling factor of $1.35$ is used to achieve best agreement with the measured spectra (cf. Fig.\,\protect{\ref{fig:coh_spect}}).} \label{fig:photpol} \end{figure} Fig.\,\ref{fig:coh_spect} shows the photon energy distribution obtained from the diamond radiator, measured through the detection of the corresponding electrons in the tagging system. This spectrum is normalised to the spectrum of an amorphous copper radiator.
Hence, a flat curve corresponds to the ordinary $\sim 1/E_\gamma$ dependence of the bremsstrahlung process. This representation accentuates the coherence effect, which manifests itself in clear peaks. Within the range of the coherent peaks the bremsstrahlung recoil is transferred to the whole crystal, as opposed to individual nuclei in the incoherent process, thus fixing the plane of electron deflection very tightly relative to the orientation of the crystal lattice. Consequently, the emitted photons are linearly polarised \cite{Timm69}. The maximum achievable degree of polarisation decreases with increasing photon energy; $P_\gamma \simeq 0.4$ is obtained at $E_\gamma=E_0/2$. The orientation of the linear polarisation and the position of the maximum in the photon energy spectrum can be deliberately chosen through appropriate alignment of the crystal relative to the electron beam direction. We used a crystal setting to obtain the polarisation maximum at 1305 MeV. A vertical orientation of the polarisation vector was chosen, since the vertical emittance of the electron beam is about an order of magnitude better than the horizontal one. A dedicated commercial 5-axis goniometer\footnote{{\it Newport} company} enabled the accurate crystal alignment with typical angular uncertainties of $\delta < 170\,\mu$rad. The curve in Fig.\,\ref{fig:coh_spect} represents a calculation of the spectrum using an improved version \cite{Elsner06} of the original ANB (``analytic bremsstrahlung calculation'') software \cite{ANB} from T\"ubingen University. It nicely describes the measured spectrum. This level of agreement can only be obtained if the {\it incoherent} part of the ANB calculation is scaled by a factor of $1.35$. This was traced back to an inaccurate inclusion of multiple scattering and an uncertainty in the atomic form factors \cite{Elsner06}. Using the form factor parametrisation after Schiff \cite{Sch51} instead of that of Hubbell \cite{Hub59} improves the agreement significantly. The relative strengths of the coherent and incoherent contributions determine the absolute value of the linear polarisation. It can be obtained from {\it any} fit of the spectrum as long as there is no overlap of different reciprocal lattice vectors --- which can correspond to different orientations of the resulting polarisation vector --- within a given energy interval. This condition is certainly fulfilled if adjacent peak regions do not overlap. In this respect the mentioned re-scaling of the incoherent contributions introduces no significant error. As can be seen from Fig.\,\ref{fig:coh_spect}, in our particular case there is only a tiny overlap between the adjacent peaks. Furthermore, both of them even result in the same orientation of the polarisation vector. Fig.\,\ref{fig:photpol} shows the ANB-calculated relative photon intensity spectrum in conjunction with the calculated photon polarisation. The maximum polarisation of $P_\gamma = 0.49$ is obtained at $E_\gamma = 1305$ MeV, as expected. An absolute error of $\delta P_\gamma < 0.02$ is estimated. The total photon flux was up to $2 \times 10^7$ s$^{-1}$. The detector setup of the experiment is depicted in Fig.\,\ref{fig:setup}. The linearly polarised photon beam was incident on a $5.3$ cm long liquid hydrogen target with 80\,$\mu$m Kapton windows \cite{Kop02}. A three layer scintillating fibre detector \cite{Suft05} surrounded the target within the polar angular range from 15 to 165 degrees. It determined a piercing point for charged particles.
Both charged particles and photons were detected in the \texttt{Crystal Barrel} detector \cite{CBarrel}. It was cylindrically arranged around the target with 1290 individual CsI(Tl) crystals in 23 rings, covering a polar angular range of 30 --- 168 degrees. The crystals of 16 radiation lengths guaranteed nearly full longitudinal shower containment. In the transverse direction, electromagnetic showers extended over up to 30 modules. For photons an energy resolution of $\sigma_{E_\gamma}/E_\gamma = 2.5\,\%/\sqrt[4]{E_\gamma/\text{GeV}}$ and an angular resolution of $\sigma_{\Theta,\Phi} \simeq 1.1$\,degree were obtained. \begin{figure} \begin{center} \resizebox{0.97\columnwidth}{!}{% \includegraphics{cb-taps-saphir_004_4Pub.eps} } \end{center} \caption{Setup of the detector system as described in the text. The photon beam enters from the left.} \label{fig:setup} \end{figure} The $5.8$ --- 30 degree forward cone was covered by the \texttt{TAPS} detector \cite{TAPS}, set up as one hexagonally shaped wall of 528 BaF$_2$ modules. For photons between 45 and 790 MeV the energy resolution is $\sigma_{E_\gamma}/E_\gamma = \left(0.59/\sqrt{E_\gamma/\text{GeV}}+1.9\right)\%$ \cite{Gabler94}. The position of photon incidence could be resolved within 20\,mm. For charged particle recognition each \texttt{TAPS} module has a 5\,mm plastic scintillator in front of it. \begin{figure*} \begin{center} \resizebox{0.8\textwidth}{!}{% \includegraphics{MassMesonCut_3.eps} } \end{center} \caption{Invariant mass distributions after standard kinematic analysis cuts. Left: two photon invariant mass distribution for the 3--cluster data set; signal widths of $\sigma_{\pi^0} = 10$ MeV and $\sigma_{\eta} = 22$ MeV are obtained. Right: 6 photon invariant mass distribution for the 7--cluster data set with $\sigma_{\eta} = 25$ MeV. Note the logarithmic scale.} \label{fig:inv_mass} \end{figure*} In contrast to \texttt{Crystal Barrel}, the fast \texttt{TAPS} detectors are individually equipped with photomultiplier readout. Thus, the first level trigger was derived from \texttt{TAPS}, requiring either $\geq 2$ hits above a low threshold ($A$) or, alternatively, $\geq 1$ hit above a high threshold ($B$). Using, within $\simeq 10\,\mu$s, a fast cluster recognition \cite{Flemming00} for the \texttt{Crystal Barrel} as second level trigger ($C$), the total trigger condition required $[A \lor (B \land C)]$, with 2 clusters identified at the second level. \section{Event reconstruction and data analysis} \label{sec:analysis} To enrich the $\eta\,p$ final state, the occurrence of, in total, either three or seven detector hits was required during the offline analysis, corresponding to two or six photons plus the proton. Photon hits usually fire a cluster of adjacent crystals, over which the energy is summed. After the basic detector calibrations from the data itself, the $\eta$ meson is identified in either of its major decay modes into two photons or $3\pi^0$. Fig.\,\ref{fig:inv_mass} shows the respective invariant mass distributions, obtained after only basic kinematic cuts have been applied in order to ensure consistency of the azimuthal angles (i.e. coplanarity) and polar angles involved. No cuts were applied on the energy of the respective hit of the proton candidate. The signal widths in Fig.\,\ref{fig:inv_mass} are $\sigma_{\eta\rightarrow\gamma\gamma} = 22$\,MeV and $\sigma_{\eta\rightarrow 3\pi^0} = 25$\,MeV, respectively.
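For the two-photon channel, the invariant mass follows from the measured cluster energies and the opening angle as $m_{\gamma\gamma}=\sqrt{2E_1E_2(1-\cos\theta_{12})}$. The following Python sketch illustrates this reconstruction step and a $3\sigma$ mass window of the kind used below; it is a minimal stand-alone example with hypothetical input values, not the actual analysis code.

\begin{verbatim}
import math

M_ETA = 547.86     # eta mass in MeV
SIGMA = 22.0       # measured 2-gamma signal width in MeV

def inv_mass(e1, e2, theta12):
    """Two-photon invariant mass: m^2 = 2 E1 E2 (1 - cos theta_12)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta12)))

def in_eta_window(m, n_sigma=3.0):
    """n-sigma mass window around the eta."""
    return abs(m - M_ETA) < n_sigma * SIGMA

# Hypothetical photon pair: energies in MeV, opening angle in rad
m = inv_mass(600.0, 400.0, 1.19)
print(f"m_gg = {m:.1f} MeV, accepted: {in_eta_window(m)}")  # ~549 MeV, True
\end{verbatim}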
To avoid any possible bias from detector inefficiencies on the azimuthal distributions, the proton was {\it not} positively identified by using the signals of the inner scintillating fibre detector of the barrel or the veto detectors of \texttt{TAPS}. Instead, all combinatorial possibilities were processed, i.e. 3 for the 3--cluster events and 21 for the 7--cluster events. A cut on the missing mass applied to the proton candidates subsequently yielded a clean separation. No kinematic fit was used to improve the separation, nor to increase the resolution. As can be seen from Fig.\,\ref{fig:inv_mass}, the background below the $\eta$ peaks is very small (note the logarithmic scale). It varies with photon energy and thus was determined in each bin of $E_\gamma$. Two different fits, linear and gaussian, were used to interpolate the background between the edges of the signal. From their difference the possible systematic error due to the background subtraction scheme was estimated. \subsection{Beam asymmetry} \label{sec:asymmetry} Cuts of $3 \sigma$ width around the $\eta$ mass in the invariant mass spectra (Fig.\,\ref{fig:inv_mass}) yielded a clean event sample. To extract the photon beam asymmetry according to Eq.\,\ref{eq:xsec}, a fit of the azimuthal event distribution was performed: \begin{equation} f(\Phi) = A + B\,\cos (2\Phi). \label{eq:fit} \end{equation} \begin{figure} \resizebox{0.97\columnwidth}{!}{% \includegraphics{PhiAsym_1240_1350_Theta_66_92_v2.eps} } \caption{Example of a measured $\Phi$ distribution in the bin $E_\gamma = 1240$ --- $1350$ MeV and $\Theta_\text{cm} = 66$ --- 92 degrees for the $\eta \rightarrow 2\gamma$ decay channel. The event-weighted average polarisation was $P_\gamma = 47.3\,\%$. } \label{fig:Phi_distr} \end{figure} An example for one bin in $E_\gamma$ and $\Theta^{cm}$ is shown in Fig.\,\ref{fig:Phi_distr}. The ratio $B/A$ of the fit determines the product of beam asymmetry and photon polarisation, $P_\gamma \Sigma$, of Eq.\,\ref{eq:xsec} (see the sketch below). Since there is a strict relation between the photon energy and the photon polarisation (cf. Fig.\,\ref{fig:photpol}), and the appropriate photon energy can be assigned to each single event, it is possible to determine the event-weighted average polarisation in each bin of photon energy. The photon asymmetries extracted from the $\eta \rightarrow 2\gamma$ and the $\eta \rightarrow 3\pi^0$ decay channels agree very well. This is illustrated in Fig.\,\ref{fig:Sigma_2g-3pi} (top) where, as examples, the two photon energy bins $1150 \pm 50$ MeV (left) and $1250 \pm 50$ MeV (right) are shown.
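To make this extraction step explicit, the sketch below fits Eq.\,\ref{eq:fit} to a binned $\Phi$ histogram and converts the fitted ratio $B/A$ into $\Sigma$ via the event-weighted average polarisation. It is a minimal toy example with simulated counts (requiring NumPy and SciPy), not the analysis code used for this work; the quoted error neglects the correlation between $A$ and $B$.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f(phi, A, B):
    """Eq. (2): f(Phi) = A + B cos(2 Phi), with Phi in radians."""
    return A + B * np.cos(2.0 * phi)

rng = np.random.default_rng(1)
P_gamma, Sigma_true = 0.473, 0.6                   # toy inputs
centers = np.radians(np.arange(7.5, 360.0, 15.0))  # 24 bins of 15 deg
counts = rng.poisson(1000.0 * f(centers, 1.0, P_gamma * Sigma_true))

popt, pcov = curve_fit(f, centers, counts,
                       sigma=np.sqrt(np.maximum(counts, 1.0)),
                       absolute_sigma=True)
A, B = popt
Sigma = B / (A * P_gamma)                          # Sigma = (B/A)/P_gamma
# approximate error, neglecting the A-B correlation
dSigma = abs(Sigma) * np.sqrt(pcov[0, 0] / A**2 + pcov[1, 1] / B**2)
print(f"Sigma = {Sigma:.3f} +/- {dSigma:.3f} (input: {Sigma_true})")
\end{verbatim}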
\begin{figure*} \begin{center} \epsfig{file=Sigma_2g-3pi_1150_3pi_4Pub-2.eps,width=7.3cm,angle=-90} \hspace{0.8cm} \epsfig{file=Sigma_2g-3pi_1250_2g_4Pub-2.eps,width=7.3cm,angle=-90} \end{center} \caption{Measured photon asymmetry, $\Sigma$, as extracted from the decay channels $\eta \rightarrow 2\,\gamma$ (dots) and $\eta \rightarrow 3\,\pi^0$ (triangles) for the two photon energy bins 1150 MeV (left) and 1250 MeV (right). The bar charts indicate the total fluctuation ({\em no} 1--$\sigma$ errors) of $\Sigma$ if extracted separately from the $\Phi$ ranges 0 --- 180 degrees (light) and 180 --- 360 degrees (dark), instead of using the full range. Bottom left is for the $\eta \rightarrow 3\pi^0$ channel in the 1150 MeV bin, bottom right for the $\eta \rightarrow 2\gamma$ mode in the 1250 MeV bin.} \label{fig:Sigma_2g-3pi} \end{figure*} In order to detect possible false detector asymmetries, the uniformity of the event distribution of the laboratory angles $\Theta$ versus $\Phi$ was routinely inspected \cite{Elsner06}. Most detected problems could be removed in the offline analysis. Other sources of false asymmetries were identified but could not be completely remedied, e.g. trigger inefficiencies within certain angular regions. In such bins the corresponding $\Phi$ regions were excluded from the fit of Eq.\,\ref{eq:fit}. The remaining systematic error is estimated through the difference of separate fits of the 0 --- 180 and 180 --- 360 degree azimuthal regions relative to the full fit. The differences are shown as the bar graphs at the bottom of Fig.\,\ref{fig:Sigma_2g-3pi}, left for the $3\pi^0$ and right for the $2\gamma$ decay of the $\eta$ meson. Note that these estimates are correlated with the statistical errors. It turned out that the angle dependent inefficiencies provide by far the major contribution to the systematic error of this experiment. In contrast, the remaining uncertainty of the beam polarisation affects the final result much less, and the effect of the background subtraction is almost negligible. The total error remains, however, still dominated by statistics, as can also be seen from the table of results in the appendix. \section{Results and discussion} \label{sec:results} The combined results of the $\eta \rightarrow 2\gamma$ and $\eta \rightarrow 3\pi^0$ data sets are presented in Fig.\,\ref{fig:Eta_all}. Statistical errors are directly attached to the data points. Since they are determined from the $\chi^2$ of the fit of Eq.\,\ref{eq:fit}, these statistical errors may still carry some correlation with the systematics. The estimated total systematic uncertainty is indicated by the bars. Nice agreement is found with the published GRAAL data of Ajaka et al. \cite{Ajaka98}. This provides confidence that the analysis chain is well under control at the level of the presented errors, in particular the determination of the degree of linear polarisation and the extraction of the azimuthal asymmetries, the latter despite the fact that, due to the unfavourable horizontal beam emittance, no data were taken with the polarisation plane rotated by 90 degrees, as was done by the GRAAL collaboration. \begin{figure*} \begin{center} \resizebox{0.8\textwidth}{!}{% \includegraphics{Eta_Ajaka_4Pub.eps} } \end{center} \caption{Photon asymmetry from the combined $\eta$ decay modes (filled circles) with statistical errors. The systematic error is indicated by the bar chart. Our results are compared to the published data (boxes) of the GRAAL collaboration \cite{Ajaka98} (see also text). The curves represent calculations of \texttt{eta-MAID} \cite{CYTD02} (full) and the Bonn--Gatchina partial wave analysis \texttt{BnGa} \cite{Anisovich05} (dashed).} \label{fig:Eta_all} \end{figure*} More recent, but yet preliminary (and hence not shown here), data of the GRAAL collaboration, extended in energy up to 1445 MeV \cite{Kouznetsov02}, also agree nicely with our data. In Fig.\,\ref{fig:Eta_all} our new data are compared to two standard calculations, the Mainz isobar model \texttt{eta-MAID} \cite{CYTD02} and the Bonn--Gatchina partial wave analysis \texttt{BnGa} \cite{Anisovich05}.
\begin{figure*} \epsfig{file=Sigma-MAID_1250_v2.eps,width=7.5cm,angle=0} \hspace{1.5cm} \epsfig{file=Sigma-PWA_1250_v2.eps,width=7.5cm,angle=0} \caption{Sensitivity of the \texttt{eta-MAID} and \texttt{BnGa} calculations to different resonance contributions in the energy bin $E_\gamma = (1250 \pm 50)$ MeV. Data points are the same as in Fig.\,\ref{fig:Eta_all}. Left the \texttt{eta-MAID} result \cite{CYTD02} is shown, right the \texttt{BnGa} analysis \cite{Anisovich05}. The full lines represent the respective full calculations. The broken curves illustrate the impact of ``turning off'' individual resonances: long dashed without $P_{13}(1720)$, long dashed-dotted without $P_{11}(1710)$ (no difference to the full calculation in the \texttt{BnGa} analysis), short dashed without $D_{13}(1520)$, and short dashed-dotted without $D_{15}(1675)$.} \label{fig:Sigma-models} \end{figure*} In contrast to \texttt{eta-MAID}, the Bonn--Gatchina analysis takes, in addition to $\eta N$, also the $\pi N$, $K \Lambda$ and $K \Sigma$ coupled channels into account. To calculate the photon asymmetry, the preliminary high energy GRAAL data \cite{Kouznetsov02} have already been used in the \texttt{BnGa} fit. This might be the reason for the slightly better description of our data. The overall agreement between the data and both models seems very satisfactory at first glance. Closer examination reveals distinct inconsistencies, however. While the full model results agree, the individual resonance contributions differ substantially, as illustrated in Fig.\,\ref{fig:Sigma-models}. Within the energy range considered, the tail of the $S_{11}(1535)$ state provides an important contribution to the cross section in both models. In \texttt{eta-MAID} the $P_{11}(1710)$ is required as well to describe the cross section, whereas the \texttt{BnGa} PWA prefers a strong $P_{13}(1720)$ partial wave. This also shows up in the photon asymmetry. The $P_{11}(1710)$ (long dashed-dotted) affects $\Sigma$ in \texttt{eta-MAID}, albeit weakly; no impact at all is found in the \texttt{BnGa} PWA. In contrast, the influence of the $P_{13}(1720)$ (long dashed) on the photon asymmetry is pronounced only in the \texttt{BnGa} model; within \texttt{eta-MAID}, turning off the $P_{13}(1720)$ leaves the photon asymmetry almost unaffected. Both the $D_{13}(1520)$ (short dashed) and $D_{15}(1675)$ (short dashed-dotted) states have a strong influence on $\Sigma$ within \texttt{eta-MAID}. By contrast, the $D_{15}(1675)$ remains negligible in the \texttt{BnGa} calculation; the $D_{13}(1520)$ has a weak impact but, compared to \texttt{eta-MAID}, in the opposite direction (cf. Fig.\,\ref{fig:Sigma-models}). This unsatisfactory situation cannot be resolved by measurements of the photon asymmetry alone. Yet, such data provide the necessary basis to be extended with double polarisation observables in order to get closer to, or even accomplish, the complete experiment in terms of the introductory discussion. \section{Summary and conclusions} In summary, we have presented data on the photon beam asymmetry, $\Sigma$, in the reaction $\vec\gamma + p \rightarrow p + \eta$. The continuous $3.2$ GeV ELSA electron beam was used to produce a linearly polarised tagged photon beam by means of coherent bremsstrahlung off a diamond crystal, covering a photon energy range $E_\gamma = 800 ... 1400$\,MeV with polarisation degrees up to 49\,\%.
A combined setup of the \texttt{Crystal Barrel} and \texttt{TAPS} detectors enabled high-resolution detection of multiple photons, which is important for the clean identification of the $2\gamma$ and $3\pi^0$ decays of the $\eta$ meson. We obtained photon asymmetries in excess of 50\,\% in some angular and energy bins. The results are in agreement with a previous measurement by the GRAAL collaboration in the overlapping energy intervals. The \texttt{eta-MAID} model and the Bonn--Gatchina partial wave analysis provide a satisfactory overall description of our data. In detail, however, there are marked differences with regard to the role of individual resonance contributions. To resolve this problem, further double-polarisation experiments are indispensable. They will be tackled at several laboratories, at ELSA within the Collaborative Research Project SFB/TR-16 using the Bonn polarised solid-state target. \begin{acknowledgement} We are happy to acknowledge the continuous efforts of the accelerator crew and operators to provide stable beam conditions. K. Livingston from Glasgow University deserves a big share of credit for his invaluable help in setting up the Stonehenge technique for the crystal alignment. This work was financially supported by the federal state of {\em North Rhine-Westphalia} and the {\em Deutsche Forschungsgemeinschaft} within the SFB/TR-16. The Basel group acknowledges support from the {\em Schweizerischer Nationalfonds}, the KVI group from the {\em Stichting voor Fundamenteel Onderzoek der Materie} (FOM) and the {\em Nederlandse Organisatie voor Wetenschappelijk Onderzoek} (NWO). \end{acknowledgement} \section*{Appendix} The detailed results for the photon asymmetries, $\Sigma$, of the reaction $\vec\gamma \,p \rightarrow \eta\,p$ are summarised in Table\,\ref{tab:results}. Each value of the photon asymmetry is given with the corresponding 1-$\sigma$ statistical error and a 1-$\sigma$ estimate of the total systematic error.
\begin{table*} \begin{center} \begin{tabular}[t]{|c|c|c|c|c||c|c|c|c|c|} \multicolumn{5}{l}{energy bin {\bf 850} MeV} &\multicolumn{5}{l}{energy bin {\bf 950} MeV}\\ \hline E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$ & E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$\\ \hline 843.4 & 72.9 & 0.237 & 0.145 & 0.036 & 942.8 & 70.9 & 0.517 & 0.131 & 0.033 \\ 842.6 & 91.2 & 0.382 & 0.087 & 0.020 & 939.7 & 91.9 & 0.546 & 0.095 & 0.045 \\ 846.4 & 109.9 & 0.278 & 0.071 & 0.015 & 941.7 & 110.3 & 0.465 & 0.065 & 0.013 \\ 847.5 & 129.2 & 0.240 & 0.079 & 0.012 & 943.6 & 129.4 & 0.380 & 0.072 & 0.049 \\ 850.5 & 147.9 & 0.114 & 0.116 & 0.031 & 943.8 & 148.2 & 0.184 & 0.095 & 0.023 \\ \hline \multicolumn{10}{l}{}\\ \multicolumn{5}{l}{energy bin {\bf 1050} MeV} &\multicolumn{5}{l}{energy bin {\bf 1150} MeV}\\ \hline E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$ & E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$\\ \hline 1054.0 & 53.3 & 0.734 & 0.172 & 0.077 & 1154.4 & 52.9 & 0.692 & 0.090 & 0.013 \\ 1051.2 & 70.1 & 0.583 & 0.111 & 0.035 & 1151.5 & 69.8 & 0.731 & 0.072 & 0.019 \\ 1046.0 & 91.4 & 0.559 & 0.108 & 0.032 & 1150.2 & 90.1 & 0.593 & 0.084 & 0.036 \\ 1045.9 & 110.6 & 0.283 & 0.074 & 0.023 & 1151.4 & 110.7 & 0.366 & 0.058 & 0.037 \\ 1043.0 & 129.4 & 0.316 & 0.080 & 0.010 & 1150.0 & 129.0 & 0.165 & 0.064 & 0.014 \\ 1043.3 & 148.4 & 0.261 & 0.107 & 0.019 & 1148.5 & 148.2 & 0.041 & 0.095 & 0.020 \\ \hline \multicolumn{10}{l}{}\\ \multicolumn{5}{l}{energy bin {\bf 1250} MeV} &\multicolumn{5}{l}{energy bin {\bf 1350} MeV}\\ \hline E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$ & E$_{\gamma}$/MeV & $\theta^{cm}_{\eta}$ & $\Sigma$ & $\sigma(\Sigma)_{stat}$ & $\sigma(\Sigma)_{sys}$\\ \hline 1251.4 & 51.5 & 0.674 & 0.068 & 0.016 & 1344.7 & 50.7 & 0.687 & 0.083 & 0.027 \\ 1249.3 & 69.4 & 0.758 & 0.060 & 0.017 & 1343.5 & 69.4 & 0.774 & 0.075 & 0.069 \\ 1249.0 & 89.8 & 0.638 & 0.073 & 0.012 & 1342.4 & 89.4 & 0.680 & 0.102 & 0.043 \\ 1249.5 & 110.7 & 0.508 & 0.053 & 0.021 & 1342.6 & 111.3 & 0.479 & 0.070 & 0.026 \\ 1249.5 & 129.0 & 0.338 & 0.058 & 0.010 & 1343.4 & 129.0 & 0.290 & 0.075 & 0.021 \\ 1250.0 & 148.2 & 0.095 & 0.085 & 0.076 & 1343.1 & 147.9 & 0.149 & 0.119 & 0.089 \\ \hline \end{tabular} \caption{Photon asymmetries for the reaction $\vec{\gamma}p\rightarrow \eta p$. Angles are given in degrees. Energy-bin widths are $\pm 50$ MeV.} \label{tab:results} \end{center} \end{table*} \newpage
\section{Introduction} The observed fermion masses and mixing angles are well parametrized by the Higgs Yukawa couplings in the Standard Model (SM). However, the neutrino masses and mixing angles call for the addition of right-handed (RH) neutrinos or other physics beyond the SM and, moreover, the flavor structures of quarks and leptons are not yet understood. As there is no flavor changing neutral current at tree level in the SM due to the GIM mechanism, the observation of flavor violation is an important probe of new physics up to very high energy scales, and it can be complementary to direct searches at the LHC\@. In particular, the violation of lepton flavor universality would be a strong hint of new physics. Recently, there have been interesting reports on anomalies in rare semileptonic $B$-meson decays at LHCb, such as $R_K$~\cite{RK}, $R_{K^*}$~\cite{RKs} and $P'_5$~\cite{P5}. The reported value of $R_K={\cal B}(B\rightarrow K\mu^+\mu^-)/{\cal B}(B\rightarrow Ke^+e^-)$ is \begin{equation} R_K=0.745^{+0.097}_{-0.082}, \quad 1\,{\rm GeV}^2<q^2 <6\,{\rm GeV}^2, \end{equation} which deviates from the SM prediction by $2.6\sigma$. On the other hand, for vector $B$-mesons, $R_{K^*}={\cal B}(B\rightarrow K^*\mu^+\mu^-)/{\cal B}(B\rightarrow K^*e^+e^-)$ is \begin{align} R_{K^*}&= 0.66^{+0.11}_{-0.07}(\rm stat)\pm 0.03({\rm syst}), \quad 0.045\,{\rm GeV}^2<q^2 <1.1\,{\rm GeV}^2,\nonumber\\ R_{K^*}&= 0.69^{+0.11}_{-0.07}({\rm stat})\pm 0.05({\rm syst}), \quad 1.1\,{\rm GeV}^2<q^2 <6.0\,{\rm GeV}^2, \end{align} which again differs from the SM prediction by 2.1--$2.3\sigma$ and 2.4--$2.5\sigma$, depending on the energy bin. Explaining the $B$-meson anomalies would require new physics violating lepton flavor universality at scales from a few hundred GeV up to a few tens of TeV, depending on the coupling strength of the new particles to the SM\@. We also note that there have been interesting anomalies in $B\rightarrow D^{(*)}\tau\nu$ decays, the so-called $R_{D^{(*)}}={\cal B}(B\rightarrow D^{(*)}\tau\nu)/{\cal B}(B\rightarrow D^{(*)} \ell \nu)$ with $\ell=e$, $\mu$, whose experimental values deviate from the SM values by more than $2\sigma$~\cite{RDexp,RDexp2,RDsexp,RDsexp2}. Motivated by the $B$-anomalies $R_{K^{(*)}}$, some of the authors recently proposed a simple extension of the SM with an extra $U(1)'$ gauge symmetry with flavor-dependent couplings~\cite{Bian:2017rpg}. The $U(1)'$ symmetry is taken as a linear combination of ${U(1)}_{L_\mu-L_\tau}$ and ${U(1)}_{B_3-L_3}$, which might be a good symmetry at low energy, originating from enhanced gauge symmetries such as in the $U(1)$ clockwork framework~\cite{u1cw}. In this model, the quark mixings and neutrino masses/mixings require an extended Higgs sector, which has one extra Higgs doublet and multiple singlet scalars beyond the SM\@. As a result, nonzero off-diagonal components of the quark mass matrices are obtained from the vacuum expectation value (VEV) of the extra Higgs doublet, and correct electroweak symmetry breaking is ensured by the VEV of one of the singlet scalars. In this paper, we study the phenomenology of the heavy Higgs bosons in the flavored $U(1)'$ model mentioned above. We first show that the correct flavor structure of the SM is well reproduced in the presence of the VEV of the extra Higgs doublet.
In particular, in the case of a small VEV of the extra Higgs doublet, i.e. small $\tan\beta$, we find that the heavy Higgs bosons have sizable flavor-violating couplings to the bottom quark and reduced flavor-conserving Yukawa couplings to the top quark, such that LHC searches for heavy Higgs bosons can be affected by extra or modified production and decay channels. We also briefly mention the implications of our extended Higgs sector for the $R_{D^{(*)}}$ anomalies. We discuss various constraints on the extended Higgs sector from Higgs and electroweak precision data and from flavor data such as $B$-meson mixings and decays, as well as unitarity and stability bounds. For certain benchmark points that evade such bounds, we study the production and decays of the heavy Higgs bosons at the LHC and show distinct features of the model with flavor-violating interactions in the Higgs sector. This paper is organized as follows. We begin with a summary of the $U(1)'$ model with the extended Higgs sector and new interactions. The Higgs spectrum and Yukawa couplings of the heavy Higgs bosons are presented in Sec.~\ref{sec:higgs_spectrum}. Various theoretical and phenomenological constraints on the Higgs sector are then discussed in Sec.~\ref{sec:constraints}, and collider signatures of the heavy Higgs bosons at the LHC are studied in Sec.~\ref{sec:higgs_lhc}. Finally, conclusions are drawn. There are four appendices dealing with the extended Higgs sector, unitarity bounds, quark Yukawa couplings, and the $U(1)'$ interactions. \section{Flavored \boldmath{$U(1)'$} model} We consider a simple extension of the SM with a $U(1)'$ under which the new gauge boson $Z^\prime$ couples specifically to heavy flavors. It is taken as a linear combination of ${U(1)}_{L_\mu-L_\tau}$ and ${U(1)}_{B_3-L_3}$ with \begin{equation*} Q' \equiv y(L_\mu-L_\tau)+x(B_3-L_3) \end{equation*} for real parameters $x$ and $y$~\cite{Bian:2017rpg}.\footnote{We note that we can take the two independent parameters for the $Z'$ couplings to be either $(x g_{Z'}, \, y g_{Z'})$ or $(x/y, \, g_{Z'})$ by absorbing $y$ into $g_{Z'}$. Our following discussion does not depend on this choice.} Introducing two Higgs doublets $H_{1,2}$ is necessary to obtain the right quark masses and mixings. We add one complex singlet scalar $S$ to obtain a correct vacuum that breaks the electroweak and $U(1)'$ symmetries. Moreover, in order to cancel the anomalies, the fermion sector is required to include at least two RH neutrinos $\nu_{iR}$ ($i = 2$, 3). One more RH neutrino $\nu_{1R}$ with zero $U(1)'$ charge, as well as extra singlet scalars, $\Phi_a$ $(a=1,\,2,\,3)$, with $U(1)'$ charges of $-y$, $x+y$, $x$, respectively, are also necessary for neutrino masses and mixings. As $L_\mu-L_\tau$ is extended to the RH neutrinos, $L_\mu-L_\tau$ and $L_2-L_3$ can be used interchangeably in our model. The $U(1)'$ charge assignments are given in Table~\ref{modelA}. \begin{table}[hbt!]
\begin{center} \begin{tabular}{c|ccccccccc} \hline\hline &&&&&&&&&\\[-2mm] & $q_{3L}$ & $u_{3R}$ & $d_{3R}$ & $\ell_{2L}$ & $e_{2R}$ & $\nu_{2R}$ & $\ell_{3L}$ & $e_{3R}$ & $\nu_{3R}$\\[2mm] \hline &&&&&&&&&\\[-2mm] $Q'$ & $\frac{1}{3}x$ & $\frac{1}{3}x$ & $\frac{1}{3}x$ & $y$ & $y$ & $y$ & $-x-y$ & $-x-y$ & $-x-y$\\[2mm] \hline\hline \end{tabular}\\[2mm] \begin{tabular}{c|cccccc} \hline\hline &&&&&&\\[-2mm] & $S$ & $H_1$ & $H_2$ & $\Phi_1$ & $\Phi_2$ & $\Phi_3$\\[2mm] \hline &&&&&&\\[-2mm] $Q'$ & $\frac{1}{3}x$ & $0$ & $-\frac{1}{3}x$ & $-y$ & $x+y$ & $x$\\[2mm] \hline\hline \end{tabular} \end{center} \caption{$U(1)'$ charges of fermions and scalars.\label{modelA}} \end{table} The Lagrangian of the model is given as \begin{equation} {\cal L}=-\frac{1}{4}Z'_{\mu\nu} Z^{\prime\mu\nu}-\frac{1}{2} \sin\xi \, Z'_{\mu\nu} B^{\mu\nu}+ {\cal L}_S + {\cal L}_Y \end{equation} with \begin{equation} {\cal L}_S= |D_\mu H_1|^2+|D_\mu H_2|^2+|D_\mu S|^2+\sum_{a=1}^3|D_\mu\Phi_a|^2 - V(\phi_i), \end{equation} where $Z'_{\mu\nu}=\partial_\mu Z'_\nu -\partial_\nu Z'_\mu$ is the field strength of the $U(1)'$ gauge boson, $\sin\xi$ is the gauge kinetic mixing between $U(1)'$ and the SM hypercharge, and $D_\mu \phi_i=(\partial_\mu -ig_{Z'}Q'_{\phi_i} Z'_\mu)\phi_i$ are the covariant derivatives. Here, $Q'_{\phi_i}$ is the $U(1)'$ charge of $\phi_i$ and $g_{Z'}$ is the $U(1)'$ gauge coupling. The scalar potential $V(\phi_i)$ is given by $V=V_1+V_2$ with \begin{align} V_1= &~\mu^2_1 |H_1|^2 + \mu^2_2 |H_2|^2- \left( \mu S H^\dagger_1 H_2 + \mathrm{h.c.}\right) \nonumber \\ &+\lambda_1 |H_1|^4+\lambda_2 |H_2|^4 + 2\lambda_3 |H_1|^2|H_2|^2+2\lambda_4 (H^\dagger_1 H_2)(H^\dagger_2 H_1) \nonumber \\ &+ 2 |S|^2(\kappa_1 |H_1|^2 +\kappa_2 |H_2|^2)+m^2_{S}|S|^2+\lambda_{S}|S|^4,\label{eq:scalar_potential_1}\\ V_2= &~\sum_{a=1}^3\Big( \mu^2_{\Phi_a} |\Phi_a|^2 +\lambda_{\Phi_a}|\Phi_a|^4 \Big)+ \left(\lambda_{S3} S^3 \Phi^\dagger_3 +\mu_4 \Phi_1 \Phi_2 \Phi^\dagger_3 + \mathrm{h.c.} \right) \nonumber \\ &+ 2\sum_{a=1}^3 |\Phi_a|^2(\beta_{a1} |H_1|^2 +\beta_{a2} |H_2|^2+ \beta_{a3} |S|^2)+2 \sum_{a<b}\lambda_{ab} |\Phi_a|^2 |\Phi_b|^2. \end{align} The extended Higgs sector is presented in the next section and studied in more detail in Appendix~\ref{app:higgs_sector}. For the set of quartic couplings of $S$ and $H_{1,2}$ that are relevant for electroweak symmetry and $U(1)'$ breaking, we have collected unitarity bounds in Appendix~\ref{app:unitarity_bounds}, which are used to constrain the parameter space of the Higgs sector in Sec.~\ref{sec:constraints}. The Yukawa Lagrangian for quarks and leptons is given by \begin{align} -{\cal L}_Y= &~{\bar q}_i ( y^u_{ij}{\tilde H}_1+ h^u_{ij}{\tilde H}_2 ) u_j +{\bar q}_i ( y^d_{ij} {H}_1+h^d_{ij} {H}_2 ) d_j \nonumber \\ &+y^\ell_{ij} {\bar \ell}_i {H}_1 e_j + y^\nu_{ij} {\bar \ell}_i {\tilde H}_1 \nu_{jR} + \overline {({\nu_{iR}})^c} ( M_{ij}+\Phi_a z^{(a)}_{ij} )\nu_{jR} + \mathrm{h.c.} \end{align} with ${\tilde H}_{1,2}\equiv i\sigma_2 H^*_{1,2}$.
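Because each fermion species in Table~\ref{modelA} carries the same $U(1)'$ charge for its left- and right-handed components, the $U(1)'$ anomalies cancel among the listed fields. As a hedged illustration, the following sympy sketch checks the gravitational--$U(1)'$, $[U(1)']^3$ and $[SU(2)]^2$--$U(1)'$ anomaly traces symbolically in $x$ and $y$; it is merely a consistency check of the charge assignment, not part of the model construction.

\begin{verbatim}
# Symbolic check that the U(1)' charges of Table 1 are anomaly-free.
# Traces run over left-handed Weyl fermions; right-handed fields
# enter with opposite sign (chirality chi = -1).
import sympy as sp

x, y = sp.symbols('x y', real=True)

# (charge, multiplicity, chirality): quarks carry 3 colors,
# SU(2) doublets carry 2 components.
fields = [
    (x/3,   6, +1), (x/3,  3, -1), (x/3,  3, -1),   # q3L, u3R, d3R
    (y,     2, +1), (y,    1, -1), (y,    1, -1),   # l2L, e2R, nu2R
    (-x-y,  2, +1), (-x-y, 1, -1), (-x-y, 1, -1),   # l3L, e3R, nu3R
]

grav  = sum(chi * n * q    for q, n, chi in fields)  # grav-U(1)'
cubic = sum(chi * n * q**3 for q, n, chi in fields)  # [U(1)']^3
# [SU(2)]^2-U(1)': only the doublets contribute, weighted by color
su2 = 3 * (x/3) + y + (-x - y)                       # q3L, l2L, l3L

for name, tr in [("grav", grav), ("cubic", cubic), ("SU(2)^2", su2)]:
    print(name, sp.simplify(sp.expand(tr)))          # all print 0
\end{verbatim}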
After electroweak symmetry and $U(1)'$ are broken by the VEVs of the scalar fields, $\langle H_{1,2}\rangle=v_{1,2} / \sqrt{2}$ with $v^2_1+v^2_2=v^2=(246\,{\rm GeV})^2$, $\langle S\rangle=v_s / \sqrt{2}$ and $\langle\Phi_a\rangle=\omega_a / \sqrt{2}$, the quark and lepton mass terms are given as \begin{equation} {\cal L}_Y=- {\bar u} M_u u-{\bar d} M_d d - {\bar \ell} M_\ell \ell - {\bar \ell} M_D \nu_R - \overline{({\nu_{R}})^c}M_R \nu_R + \mathrm{h.c.} \end{equation} with the following flavor structure: \begin{align} M_u &= \begin{pmatrix} y^u_{11}\langle {\tilde H}_1\rangle & y^u_{12}\langle {\tilde H}_1\rangle & 0 \\ y^u_{21} \langle {\tilde H}_1\rangle & y^u_{22} \langle {\tilde H}_1 \rangle & 0 \\ h^u_{31} \langle {\tilde H}_2 \rangle & h^u_{32}\langle {\tilde H}_2\rangle & y^u_{33} \langle {\tilde H}_1 \rangle \end{pmatrix},\label{qmass1} \\ M_d &= \begin{pmatrix} y^d_{11}\langle { H}_1\rangle & y^d_{12}\langle { H}_1\rangle & h^d_{13} \langle {H}_2\rangle \\ y^d_{21} \langle { H}_1 \rangle & y^d_{22} \langle {H}_1\rangle & h^d_{23}\langle {H}_2\rangle \\ 0 & 0 & y^d_{33} \langle { H}_1 \rangle \end{pmatrix},\label{qmass2} \\ M_\ell &= \begin{pmatrix} y^\ell_{11} \langle { H}_1 \rangle & 0 & 0 \\ 0 & y^\ell_{22} \langle { H}_1 \rangle & 0 \\ 0 & 0 & y^\ell_{33} \langle { H}_1\rangle \end{pmatrix},\label{cleptonmass} \\ M_D &= \begin{pmatrix} y^\nu_{11} \langle {\tilde H}_1 \rangle & 0 & 0 \\ 0 & y^\nu_{22}\langle {\tilde H}_1 \rangle & 0 \\ 0 & 0 & y^\nu_{33} \langle {\tilde H}_1 \rangle \end{pmatrix},\\ M_R &= \begin{pmatrix} M_{11} & z^{(1)}_{12} \langle \Phi_1\rangle & z^{(2)}_{13} \langle\Phi_2\rangle \\ z^{(1)}_{21}\langle\Phi_1\rangle & 0 & z^{(3)}_{23}\langle\Phi_3\rangle \\ z^{(2)}_{31}\langle \Phi_2\rangle & z^{(3)}_{32} \langle\Phi_3\rangle & 0 \end{pmatrix}.\label{RHmass} \end{align} Since the mass matrix for charged leptons is already diagonal, the lepton mixings come from the mass matrix of the RH neutrinos. There are four other categories of neutrino mixing matrices~\cite{neutrinomix} which are compatible with neutrino data. In all the cases, we need at least three complex scalar fields with different $U(1)'$ charges, similarly to the case given in~(\ref{RHmass}). The quark Yukawa couplings to the Higgs bosons are summarized in Appendix~\ref{app:quark_yukawa}. We find the $Z$-like ($Z_1$) and $Z'$-like ($Z_2$) masses as \begin{equation} m^2_{Z_{1,2}}= \frac{1}{2} \Big(m^2_Z+m^2_{22}\mp \sqrt{(m^2_Z-m^2_{22})^2+4 m^4_{12}} \Big), \end{equation} where $m^2_Z\equiv (g^2+g^2_Y) v^2/4$, and \begin{align} m^2_{22} &\equiv m^2_Z s^2_W t^2_\xi + m^2_{Z'}/c^2_\xi - c^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 t_\xi/c_\xi, \nonumber\\ m^2_{12} &\equiv m^2_Z s_W t_\xi - \frac{1}{2} c^{-1}_W s^{-1}_W e g_{Z'} Q'_{H_2} v^2_2/c_\xi \end{align} with \begin{equation} m^2_{Z'}=g^2_{Z'} \left(\frac{1}{9}x^2 v^2_s+ y^2 \omega^2_1+(x+y)^2 \omega^2_2+ x^2 \omega^2_3 \right). \end{equation} Here $s_{\varphi} \equiv \sin\varphi$, $c_{\varphi} \equiv \cos\varphi$, and $t_{\varphi} \equiv \tan\varphi$. The modified $Z$ boson mass can receive constraints from electroweak precision data, as studied in Sec.~\ref{sec:constraints}. We note that for a small mass mixing, the $Z'$-like mass is approximately given by $m^2_{Z_2}\approx m^2_{Z'}$, and we can treat $m_{Z'}$ and $g_{Z'}$ as independent parameters due to the presence of nonzero $\omega_i$'s. The $U(1)'$ interactions are collected in Appendix~\ref{app:u1_ints}.
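As a numerical illustration of the mass formulas above, the sketch below diagonalises the $2\times 2$ neutral gauge boson mass-squared matrix with entries $(m^2_Z,\, m^2_{12};\, m^2_{12},\, m^2_{22})$ and checks it against the closed-form eigenvalues $m^2_{Z_{1,2}}$; the values chosen for $m^2_{12}$ and $m^2_{22}$ are arbitrary illustrative inputs, not fitted ones.

\begin{verbatim}
import numpy as np

mZ2  = 91.1876**2     # unmixed m_Z^2 in GeV^2
m222 = 500.0**2       # illustrative m_22^2 in GeV^2
m122 = 30.0**2        # illustrative mixing entry m_12^2 in GeV^2

M2 = np.array([[mZ2, m122], [m122, m222]])
eig = np.sort(np.linalg.eigvalsh(M2))

# Closed-form eigenvalues quoted in the text
disc = np.sqrt((mZ2 - m222)**2 + 4.0 * m122**2)
closed = np.array([0.5 * (mZ2 + m222 - disc), 0.5 * (mZ2 + m222 + disc)])

print("m_Z1, m_Z2 [GeV] =", np.sqrt(eig))
print("matches closed form:", np.allclose(eig, closed))  # True
\end{verbatim}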
\section{Higgs spectrum and Yukawa couplings\label{sec:higgs_spectrum}} We here specify the Higgs spectrum of our model and identify the quark and lepton Yukawa couplings of the neutral and charged Higgs bosons for the studies in the next sections. The expressions are based on the results in Appendices~\ref{app:higgs_sector} and~\ref{app:quark_yukawa}. \subsection{The Higgs spectrum} The Higgs sector of our model has two Higgs doublets, which are expressed in components as \begin{equation} H_j = \begin{pmatrix} \phi^+_j \\ (v_j+\rho_j+i\eta_j)/\sqrt{2} \end{pmatrix} \quad (j = 1, \, 2), \end{equation} and the complex singlet scalar is decomposed as $S=\left(v_s+S_R+i S_I\right)/\sqrt{2}$. In the limit of negligible mixing with the $CP$-even singlet scalar, the mass eigenstates of the $CP$-even neutral Higgs scalars, $h$ and $H$, are given by \begin{align} h &= - \sin\alpha \, \rho_1 + \cos\alpha \, \rho_2, \nonumber\\ H &= \cos\alpha \, \rho_1 + \sin\alpha \, \rho_2. \label{base-h} \end{align} The general case where the $CP$-even part of the singlet scalar $S$ mixes with its Higgs counterparts is considered in Appendix~\ref{app:higgs_sector}. The mass eigenvalues of the $CP$-even neutral Higgs scalars are denoted as $m_{h_{1,2,3}}$ with $m_{h_1}<m_{h_2}<m_{h_3}$ or, alternatively, $m_h\equiv m_{h_1}$, $m_H\equiv m_{h_2}$ and $m_s\equiv m_{h_3}$, and there are three mixing angles, $\alpha_{1,2,3}$: $\alpha_1=\alpha$ in the limit of a decoupled $CP$-even singlet scalar, while $\alpha_2$ and $\alpha_3$ are the mixing angles between $\rho_{1,2}$ and $S_R$, respectively. For $ 2\kappa_1 v_1 v_s\approx \mu v_2 / \sqrt{2}$ and $2\kappa_2 v_2 v_s\approx \mu v_1 / \sqrt{2}$, the mixing between $\rho_{1,2}$ and $S_R$ can be neglected. In the following discussion, we focus mainly on this case. The $CP$-odd parts of the singlet scalars, $S$ and $\Phi_a$, can mix with their Higgs counterpart due to the nonzero $U(1)'$ charge of the second Higgs $H_2$, but for a small $x$ and a small VEV of $H_2$, the mixing effect is negligible. In this case, the neutral Goldstone boson $G^0$ and the $CP$-odd Higgs scalar $A^0$ turn out to be \begin{align} G^0 &= \cos\beta \, \eta_1 + \sin\beta \, \eta_2 , \nonumber\\ A^0 &= \sin\beta \, \eta_1 - \cos\beta \, \eta_2 \label{base-A} \end{align} with $\tan\beta\equiv v_2/v_1$. The massless combination of $\eta_1$ and $\eta_2$ is eaten by the $Z$ boson, while a linear combination of $S_I$ and the other pseudoscalars of $\Phi_a$ is eaten by the $Z'$ boson if the $Z'$ mass is determined dominantly by the VEV of $S$. The other combination of the $CP$-odd scalars from the two Higgs doublets has the mass \begin{equation} m_A^2 = \frac{\mu \sin\beta \cos\beta}{\sqrt{2} v_s} \left( v^2 + \frac{v_s^2}{\sin^2\beta \cos^2\beta} \right). \label{A0mass} \end{equation} On the other hand, the charged Goldstone boson $G^+$ and the charged Higgs scalar $H^+$ are identified as \begin{align} G^+ &= \cos\beta \, \phi_1^+ + \sin\beta \, \phi_2^+ ,\nonumber\\ H^+ &= \sin\beta \, \phi_1^+ - \cos\beta \, \phi_2^+ \label{base-chargedH} \end{align} with the nonzero mass eigenvalue given by \begin{equation} m_{H^+}^2 = m_A^2 - \left ( \frac{\mu\sin\beta \cos\beta}{\sqrt{2} v_s} +\lambda_4 \right ) v^2. \label{H+mass} \end{equation} We remark that in the limit of $\mu v_s\gg v^2$, the heavy scalars in the Higgs doublets become almost degenerate, with $m^2_A\approx m^2_H\approx m^2_{H^+}\approx \mu v_s/(\sqrt{2}\sin\beta\cos\beta)$ and $m^2_s\approx 2\lambda_S v^2_s$ from Eqs.~(\ref{A0mass}), (\ref{H+mass}) and (\ref{h0s}), as illustrated by the sketch below.
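As a quick numerical illustration of Eqs.~(\ref{A0mass}) and (\ref{H+mass}), the sketch below evaluates $m_A$ and $m_{H^+}$ for sample values of $\mu$, $v_s$, $\tan\beta$ and $\lambda_4$; the chosen numbers are illustrative inputs only, not benchmark points of our analysis.

\begin{verbatim}
import math

v = 246.0     # electroweak VEV in GeV

def m_A2(mu, vs, tanb):
    """m_A^2 = mu sb cb/(sqrt(2) vs) * (v^2 + vs^2/(sb cb)^2)."""
    b = math.atan(tanb)
    sb, cb = math.sin(b), math.cos(b)
    return mu * sb * cb / (math.sqrt(2.0) * vs) * (v**2 + vs**2 / (sb * cb)**2)

def m_Hp2(mu, vs, tanb, lam4):
    """m_H+^2 = m_A^2 - (mu sb cb/(sqrt(2) vs) + lam4) v^2."""
    b = math.atan(tanb)
    sb, cb = math.sin(b), math.cos(b)
    return m_A2(mu, vs, tanb) - (mu * sb * cb / (math.sqrt(2.0) * vs) + lam4) * v**2

mu, vs, tanb, lam4 = 150.0, 1000.0, 1.0, -0.5   # illustrative inputs
print(f"m_A  = {math.sqrt(m_A2(mu, vs, tanb)):.0f} GeV")          # ~464 GeV
print(f"m_H+ = {math.sqrt(m_Hp2(mu, vs, tanb, lam4)):.0f} GeV")   # ~492 GeV
\end{verbatim}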
In this degenerate limit, the mixing angles between the SM-like Higgs and the extra scalars can be negligibly small, and the resulting Higgs spectrum is consistent with Higgs data and electroweak precision tests (EWPT), as will be discussed in Subsec.~\ref{sec:higgs_ewpd}. However, since $\mu v_s$ is constrained by perturbativity and unitarity bounds on the quartic couplings through Eq.~(\ref{lams0}) or (\ref{lams}), as will be discussed in Sec.~\ref{sec:constraints}, the extra scalars in our model remain non-decoupled. Since it is sufficient for EWPT to take almost degenerate masses for two of $m_A$, $m_H$, and $m_{H^+}$, we henceforth consider more general scalar masses, but with small mixings between the SM-like Higgs and the extra neutral scalars. \subsection{Quark mass matrices} We now consider the quark mass matrices and their diagonalization. After the two Higgs doublets develop VEVs, we obtain the quark mass matrices from Eqs.~(\ref{qmass1}) and~(\ref{qmass2}) as \begin{align} {(M_u)}_{ij} &= \frac{1}{\sqrt{2}} v\cos\beta \begin{pmatrix} y^u_{11} & y^u_{12} & 0\\ y^u_{21} & y^u_{22} & 0 \\ 0 & 0 & y^u_{33} \end{pmatrix} + \frac{1}{\sqrt{2}} v\sin\beta \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0 \\ h^u_{31} & h^u_{32} & 0 \end{pmatrix}, \nonumber\\ {(M_d)}_{ij} &= \frac{1}{\sqrt{2}} v\cos\beta \begin{pmatrix} y^d_{11} & y^d_{12} & 0\\ y^d_{21} & y^d_{22} & 0 \\ 0 & 0 & y^d_{33} \end{pmatrix} + \frac{1}{\sqrt{2}} v\sin\beta \begin{pmatrix} 0 & 0 & h^d_{13}\\ 0 & 0 & h^d_{23} \\ 0 & 0 & 0 \end{pmatrix}. \end{align} The quark mass matrices can be diagonalized by \begin{equation} U^\dagger_L M_u U_R = M^D_u= \begin{pmatrix} m_u & 0 & 0\\ 0 & m_c & 0 \\ 0 & 0 & m_t \end{pmatrix}, \quad D^\dagger_L M_d D_R = M^D_d= \begin{pmatrix} m_d & 0 & 0\\ 0 & m_s & 0 \\ 0 & 0 & m_b \end{pmatrix}, \end{equation} and thus the CKM matrix is given as $V_\text{CKM}= U^\dagger_L D_L$. We note that the Yukawa couplings of the second Higgs doublet are sources of flavor violation, which could be important in meson decays/mixings and in collider searches for flavor-violating top decays and/or heavy Higgs bosons~\cite{flavor,FChiggs,crivellin2017}. The detailed derivation of the flavor-violating Higgs couplings is presented in the next subsection. Since $h^u_{31}$ and $h^u_{32}$ correspond to rotations of the right-handed up-type quarks, we can take $U_L=1$, so that $V_\text{CKM}=D_L$. In this case, we have an approximate relation for the down-type quark mass matrix, $M_d\approx V_\text{CKM} M^D_d$, up to $m_{d,s}/m_b$ corrections. Then the Yukawa couplings between the third and the first two generations are given by \begin{equation} h^d_{13} =\frac{\sqrt{2} m_b}{v\sin\beta}\, V_{ub},\quad h^d_{23} =\frac{\sqrt{2} m_b}{v\sin\beta}\, V_{cb}.\label{hd} \end{equation} For $V_{ub}\simeq 0.004\ll V_{cb}\simeq 0.04$, we have $h^d_{13}\ll h^d_{23}$. The down-type Yukawa couplings are determined as \begin{align} y^d_{11}&= \frac{\sqrt{2} m_d}{v\cos\beta}\, V_{ud}, \quad y^d_{12}= \frac{\sqrt{2} m_s}{v\cos\beta}\, V_{us}, \nonumber\\ y^d_{21}&= \frac{\sqrt{2} m_d}{v\cos\beta}\, V_{cd}, \quad y^d_{22}= \frac{\sqrt{2} m_s}{v\cos\beta}\, V_{cs}, \quad y^d_{33}= \frac{\sqrt{2} m_b}{v\cos\beta}\, V_{tb}. \end{align} On the other hand, taking $U_L=1$ as above, we find another approximate relation for the up-type quark mass matrix: $M_u=M^D_u U^\dagger_R$.
Then the rotation matrix for the right-handed up-type quarks becomes $U^\dagger_R={(M^D_u)}^{-1} M_u$, which is given as \begin{equation} U^\dagger_R= \frac{1}{\sqrt{2}} \begin{pmatrix} \frac{v}{m_u}\cos\beta\, y^u_{11} & \frac{v}{m_u}\cos\beta \, y^u_{12} & 0\\ \frac{v}{m_c}\cos\beta \, y^u_{21} & \frac{v}{m_c}\cos\beta \, y^u_{22} & 0 \\ \frac{v}{m_t}\sin\beta\, h^u_{31} & \frac{v}{m_t}\sin\beta\, h^u_{32} & \frac{v}{m_t}\cos\beta\, y^u_{33} \end{pmatrix}. \end{equation} From the unitarity condition on $U_R$ we further find the following constraints on the up-type quark Yukawa couplings: \begin{align} |y^u_{11}|^2 + |y^u_{12}|^2 &= \frac{2m^2_u}{v^2\cos^2\beta}, \label{UR1} \\ |y^u_{21}|^2+ |y^u_{22}|^2 &= \frac{2m^2_c}{v^2 \cos^2\beta}, \\ |y^u_{33}|^2+ \tan^2\beta (|h^u_{31}|^2+|h^u_{32}|^2 )&= \frac{2m^2_t}{v^2\cos^2\beta}, \label{UR2} \\ y^u_{11}(y^u_{21})^* + y^u_{12} (y^u_{22})^* &= 0, \label{UR3} \\ y^u_{21} (h^u_{31})^*+ y^u_{22} (h^u_{32})^* &= 0, \label{UR4} \\ y^u_{11} (h^u_{31})^*+ y^u_{12} (h^u_{32})^* &= 0. \label{UR5} \end{align} \subsection{Quark Yukawa couplings} Using the results in Appendix~\ref{app:quark_yukawa}, we obtain the Yukawa interactions of the SM-like Higgs boson $h$ and the heavy neutral Higgs bosons $H$, $A$ as \begin{align} -\mathcal{L}_{Y}^{h/H/A} = &~\frac{\cos(\alpha - \beta)}{\sqrt{2} \cos\beta} \bar b_R \left( \tilde h_{13}^{d \ast} d_L + \tilde h_{23}^{d \ast} s_L \right) h + \frac{\lambda_b^h}{\sqrt{2}} \bar b_R b_L h+ \frac{\lambda_t^h}{\sqrt{2}} \bar t_R t_L h \nonumber\\ & + \frac{\sin(\alpha - \beta)}{\sqrt{2} \cos\beta} \bar b_R \left( \tilde h_{13}^{d \ast} d_L + \tilde h_{23}^{d \ast} s_L \right) H + \frac{\lambda_b^H}{\sqrt{2}} \bar b_R b_L H+ \frac{\lambda_t^H}{\sqrt{2}} \bar t_R t_L H \nonumber\\ & - \frac{i}{\sqrt{2} \cos\beta} \bar b_R \left( \tilde h_{13}^{d \ast} d_L + \tilde h_{23}^{d \ast} s_L \right) A + \frac{i \lambda_b^A}{\sqrt{2}} \bar b_R b_L A - \frac{i \lambda_t^A}{\sqrt{2}} \bar t_R t_L A + \mathrm{h.c.}, \label{FCYukawas} \end{align} where \begin{align} \lambda_b^h &= -\frac{\sqrt{2} m_b \sin\alpha}{v \cos\beta} + \frac{\tilde h_{33}^d \cos(\alpha - \beta)}{\cos\beta}, \\ \lambda_t^h &= -\frac{\sqrt{2} m_t \sin\alpha}{v \cos\beta} + \frac{\tilde h_{33}^u \cos(\alpha - \beta)}{\cos\beta} , \\ \lambda_b^H &= \frac{\sqrt{2} m_b \cos\alpha}{v \cos\beta} + \frac{\tilde h_{33}^d \sin(\alpha - \beta)}{\cos\beta}, \label{eq:lambda_b} \\ \lambda_t^H &= \frac{\sqrt{2} m_t \cos\alpha}{v \cos\beta} + \frac{\tilde h_{33}^u \sin(\alpha - \beta)}{\cos\beta} , \label{eq:lambda_t} \\ \lambda_b^A &=\frac{\sqrt{2} m_b \tan\beta}{v} - \frac{\tilde h_{33}^d}{\cos\beta}, \\ \lambda_t^A &= \frac{\sqrt{2} m_t \tan\beta}{v} - \frac{\tilde h_{33}^u}{\cos\beta}. \end{align} We note that ${\tilde h}^d\equiv D^\dagger_L h^d D_R$ and ${\tilde h}^u\equiv U^\dagger_L h^u U_R$. Thus, by taking $U_L=1$ we get ${\tilde h}^u=h^u U_R$ and ${\tilde h}^d=V^\dagger_{\rm CKM} h^d$. In this case, as compared to the two-Higgs-doublet model of type I, the extra Yukawa couplings are given by \begin{align} {\tilde h}^u_{33} &=\frac{\sqrt{2} m_t}{v\sin\beta}\Big(1-\frac{v^2\cos^2\beta}{2m^2_t}\,|y^u_{33}|^2 \Big), \label{FV3} \\ {\tilde h}^d_{13} &= 1.80\times 10^{-2}\Big(\frac{m_b}{v\sin\beta}\Big), \label{ht13} \\ {\tilde h}^d_{23} &=5.77\times 10^{-2}\Big(\frac{m_b}{v\sin\beta}\Big), \label{ht23} \\ {\tilde h}^d_{33} &= 2.41\times 10^{-3}\Big(\frac{m_b}{v\sin\beta}\Big). \label{ht33} \end{align}
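To illustrate where Eqs.~(\ref{ht13})--(\ref{ht33}) come from, note that inserting Eq.~(\ref{hd}) into ${\tilde h}^d=V^\dagger_{\rm CKM} h^d$ and using CKM unitarity gives ${\tilde h}^d_{13}\propto V^*_{td}V_{tb}$, ${\tilde h}^d_{23}\propto V^*_{ts}V_{tb}$ and ${\tilde h}^d_{33}\propto |V_{ub}|^2+|V_{cb}|^2$. The sketch below evaluates these combinations with illustrative central CKM magnitudes (phases neglected); the 23 and 33 entries reproduce the quoted sizes well, while the 13 entry is the most sensitive to the chosen CKM inputs.

\begin{verbatim}
# Size of the flavor-violating Yukawas h~d = V_CKM^dagger h^d,
# in units of m_b/(v sin(beta)).  From CKM unitarity:
#   |h~d_13| = sqrt(2) |V_td V_tb|,  |h~d_23| = sqrt(2) |V_ts V_tb|,
#    h~d_33  = sqrt(2) (|V_ub|^2 + |V_cb|^2).
# Illustrative CKM magnitudes; phases are neglected here.
import math

V_ub, V_cb = 0.0036, 0.0412
V_td, V_ts, V_tb = 0.0087, 0.0405, 0.999

print(f"|h~d_13| ~ {math.sqrt(2) * V_td * V_tb:.2e}")          # ~1.2e-2
print(f"|h~d_23| ~ {math.sqrt(2) * V_ts * V_tb:.2e}")          # ~5.7e-2
print(f" h~d_33  ~ {math.sqrt(2) * (V_ub**2 + V_cb**2):.2e}")  # ~2.4e-3
\end{verbatim}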
\label{ht33} \end{align} We find that the flavor-violating couplings for light up-type quarks vanish, while the top quark Yukawa can have a sizable modification due to nonzero ${\tilde h}^u_{33}$. On the other hand, the flavor-violating couplings for down-type quarks can be large if $\tan\beta$ is small, even though the couplings carry suppression factors from the CKM mixing and the smallness of the bottom quark mass. The couplings can be constrained by bounds from $B$-meson mixings and decays as discussed in the next section. We note that the flavor-violating interactions of the SM-like Higgs boson are turned off in the alignment limit where $\alpha=\beta-\pi/2$. The Yukawa terms of the charged Higgs boson are given as \begin{equation} -\mathcal{L}_{Y}^{H^-} = {\bar b}(\lambda_{t_L}^{H^-} P_L + \lambda_{t_R}^{H^-} P_R) t H^- + {\bar b}(\lambda_{c_L}^{H^-} P_L + \lambda_{c_R}^{H^-} P_R ) c H^-+ \lambda_{u_L}^{H^-} {\bar b}P_L u H^- + \mathrm{h.c.}, \end{equation} where \begin{align} \lambda_{t_L}^{H^-} &= \frac{\sqrt{2}m_b \tan\beta}{v}\, V^*_{tb} -\frac{(V_{\rm CKM} {\tilde h}^d)^*_{33}}{\cos\beta}, \label{lamtL} \\ \lambda_{t_R}^{H^-} &= -\left( \frac{\sqrt{2} m_t \tan\beta}{v} - \frac{\tilde h_{33}^u}{\cos\beta} \right)V^*_{tb}, \label{lamtR} \\ \lambda_{c_L}^{H^-} &= \frac{\sqrt{2}m_b \tan\beta}{v}\, V^*_{cb} -\frac{(V_{\rm CKM}{\tilde h}^d)^*_{23}}{\cos\beta}, \label{lamcL} \\ \lambda_{c_R}^{H^-} &= -\frac{\sqrt{2} m_c\tan\beta}{v}\, V^*_{cb}, \label{lamcR} \\ \lambda_{u_L}^{H^-} &= \frac{\sqrt{2}m_b \tan\beta}{v}\, V^*_{ub} -\frac{(V_{\rm CKM}{\tilde h}^d)^*_{13}}{\cos\beta} \end{align} with \begin{equation} V_{\rm CKM}{\tilde h}^d= \begin{pmatrix} 0 & 0 & V_{ud}{\tilde h}^d_{13}+ V_{us}{\tilde h}^d_{23}+V_{ub}{\tilde h}^d_{33} \\ 0 & 0 & V_{cd}{\tilde h}^d_{13}+ V_{cs}{\tilde h}^d_{23}+V_{cb}{\tilde h}^d_{33} \\ 0 & 0 & V_{td}{\tilde h}^d_{13}+ V_{ts}{\tilde h}^d_{23}+V_{tb}{\tilde h}^d_{33} \end{pmatrix}. \end{equation} If $y_{33}^u = y_t^\text{SM} = \sqrt{2} m_t / v$, the heavy Higgs coupling to the top quark becomes \begin{equation} \lambda_t^H = y_t^\text{SM} \cos (\alpha-\beta) , \label{eq:lambda_t_SM} \end{equation} and $ \lambda_t^A = \lambda_{t_R}^{H^-} = 0$. \subsection{Lepton Yukawa couplings} As seen in~(\ref{cleptonmass}), the mass matrix for charged leptons $e_j$ is already diagonal due to the $U(1)'$ symmetry. Thus, the lepton Yukawa couplings are in a flavor-diagonal form given by \begin{align} -{\cal L}_{Y}^\ell = &-\frac{m_{e_j}\sin\alpha }{v\cos\beta}\, {\bar e}_j\, e_j \,h + \frac{m_{e_j}\cos\alpha }{v\cos\beta} \,{\bar e}_j\, e_j \,H + \frac{i m_{e_j}\tan\beta}{v}\, {\bar e}_j \gamma^5 e_j \,A^0 \nonumber\\ &+\frac{\sqrt{2}m_{e_j}\tan\beta}{v}\, \left({\bar \nu}_j\,P_R\, e_j \,H^+ + \mathrm{h.c.}\right). \end{align} \section{Constraints on the Higgs sector} \label{sec:constraints} In this section we consider various phenomenological constraints on the model coming from $B$-meson mixings and decays as well as Higgs and electroweak precision data on top of unitarity and stability bounds on the Higgs sector. We also show how to explain the deficits in $R_K$ and $R_{K^*}$ in the $B$-meson decays at LHCb in our model, and discuss the predictions for $R_D$ and $R_{D^*}$ through the charged Higgs exchange. \subsection{Unitarity and stability bounds} \label{sec:unitarity_bounds} Before turning to the phenomenological constraints, we first consider unitarity and stability bounds for the Higgs sector.
As derived in Appendix~\ref{app:unitarity_bounds}, the conditions for perturbativity and unitarity are \begin{align} |\lambda_{1,2,3,S}|\leq 4\pi ,\qquad |\kappa_{1,2}| \leq 4\pi ,\nonumber\\ |\lambda_3\pm\lambda_4| \leq 4\pi ,\quad |\lambda_3+2\lambda_4| \leq 4\pi ,\quad \sqrt{\lambda_3(\lambda_3+2\lambda_4)}\leq 4 \pi ,\nonumber\\ |\lambda_1+\lambda_2\pm\sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2}| \leq 8 \pi ,\nonumber\\ a_{1,2,3} \leq 8 \pi, \end{align} where $a_{1,2,3}$ are the solutions to Eq.~(\ref{cubic}). The vacuum stability conditions of the scalar potential can be obtained by requiring the potential to be bounded from below along the directions of large field values for the Higgs doublets and the singlet scalar. Following Refs.~\cite{ElKaffas:2006gdt,Grzadkowski:2009bt,Drozd:2014yla}, we obtain the stability conditions as follows: \begin{align} &\lambda_{1,2,S}>0,\nonumber\\ &\sqrt{\lambda_1 \lambda_2}+\lambda_3+\lambda_4>0,\nonumber\\ &\sqrt{\lambda_1 \lambda_2}+\lambda_3>0,\nonumber\\ &\sqrt{\lambda_1 \lambda_S}+\kappa_1>0,\nonumber\\ &\sqrt{\lambda_2 \lambda_S}+\kappa_2>0,\nonumber\\ &\sqrt{(\kappa_1^2-\lambda_1\lambda_S)(\kappa_2^2-\lambda_2\lambda_S)}+\lambda_3\lambda_S>\kappa_1\kappa_2,\nonumber\\ &\sqrt{(\kappa_1^2-\lambda_1\lambda_S)(\kappa_2^2-\lambda_2\lambda_S)}+(\lambda_3+\lambda_4)\lambda_S>\kappa_1\kappa_2. \end{align} The stability conditions along the other scalar fields $\Phi_a$ can be obtained in a similar way, but they are not relevant for our study because $\Phi_a$'s do not couple directly to Higgs doublets as long as the extra quartic couplings for $\Phi_a$ are positive and large enough. \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{fig1a.pdf} \includegraphics[height=0.45\textwidth]{fig1b.pdf} \end{center} \caption{Parameter space in terms of $m_{h_2}$ and $\tan\beta$. The gray regions are excluded by unitarity and stability bounds. $v_s=2m_{h_3}=1$~TeV and $\cos(\alpha-\beta)=0.05$ with $m_{h_2}=m_{A}$ and $m_{H^\pm}=500$~GeV in the left, and $m_{h_2}=m_{H^\pm}$ and $m_A=140$~GeV in the right panel. The mixing between heavy $CP$-even scalars is taken to be zero.\label{fig:unitarity1}} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{fig2c.pdf} \includegraphics[height=0.45\textwidth]{fig2d.pdf} \end{center} \caption{Parameter space in terms of $v_s$ and $\mu$ for $m_{h_3}=m_{H^\pm}=m_{h_2}=0.5$~TeV and $\cos(\alpha-\beta)=0.05$. The gray regions are excluded by unitarity and stability bounds. $\tan\beta=1$ (0.5) in the left (right) panel. The mixing between heavy $CP$-even scalars is taken to be zero.\label{fig:unitarity2}} \end{figure} The unitarity and stability bounds are depicted in Figs.~\ref{fig:unitarity1} and~\ref{fig:unitarity2} for the parameter space in terms of $m_{h_2}$ and $\tan\beta$, or $v_s$ and $\mu$, assuming near-alignment with $\cos(\alpha - \beta) = 0.05$ and zero mixing between heavy $CP$-even scalars. In each figure, the gray region corresponds to the parameter space excluded by the unitarity and stability conditions. In Fig.~\ref{fig:unitarity1}, we have taken different choices of Higgs masses: $m_{h_2} = m_A$ and $m_{H^\pm} = 500$~GeV in the left, while $m_{h_2} = m_{H^\pm}$ and $m_A = 140$~GeV in the right panel. On the other hand, the parameter space in terms of $v_s$ and $\mu$ is shown in Fig.~\ref{fig:unitarity2}, setting $m_{h_3}=m_{H^\pm}=m_{h_2}=0.5$~TeV but taking different values of $\tan\beta$.
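For concreteness, these conditions can be checked numerically before scanning the mass parameters; a minimal sketch (illustrative couplings; the two stability conditions involving $\kappa_i^2-\lambda_i\lambda_S$ are omitted for brevity):
\begin{verbatim}
import numpy as np

# Illustrative benchmark couplings (not a fit)
l1, l2, l3, l4, lS, k1, k2 = 0.5, 0.5, 0.3, -0.2, 0.4, 0.1, 0.1

ok  = all(abs(c) <= 4*np.pi for c in (l1, l2, l3, lS, k1, k2))
ok &= abs(l3 + l4) <= 4*np.pi and abs(l3 - l4) <= 4*np.pi
ok &= abs(l3 + 2*l4) <= 4*np.pi
ok &= (l3*(l3 + 2*l4) < 0) or (np.sqrt(l3*(l3 + 2*l4)) <= 4*np.pi)
ok &= all(abs(l1 + l2 + s*np.sqrt((l1 - l2)**2 + 4*l4**2)) <= 8*np.pi
          for s in (+1, -1))

# a_{1,2,3}: roots of the cubic quoted in Eq. (cubic) of the appendix
cubic = [1.0,
         -2*(3*l1 + 3*l2 + 2*lS),
         -4*(2*k1**2 + 2*k2**2 - 9*l1*l2 - 6*l1*lS - 6*l2*lS
             + 4*l3**2 + 4*l3*l4 + l4**2),
         16*(3*k1**2*l2 - 2*k1*k2*(2*l3 + l4) + 3*k2**2*l1
             + lS*((2*l3 + l4)**2 - 9*l1*l2))]
ok &= all(a.real <= 8*np.pi for a in np.roots(cubic))

# Stability (bounded-from-below) conditions
ok &= l1 > 0 and l2 > 0 and lS > 0
ok &= np.sqrt(l1*l2) + l3 + l4 > 0 and np.sqrt(l1*l2) + l3 > 0
ok &= np.sqrt(l1*lS) + k1 > 0 and np.sqrt(l2*lS) + k2 > 0
print("benchmark allowed:", bool(ok))
\end{verbatim}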
We note that the unitarity and stability bounds are sensitive to the choice of $\tan\beta$, while insensitive to the mixing angle of heavy $CP$-even scalars, in constraining the mass parameters. The allowed parameter space for mass parameters becomes narrower as $\tan\beta$ becomes smaller. \subsection{Higgs and electroweak precision data\label{sec:higgs_ewpd}} Provided that the Higgs mixings with the singlet scalar are small, the mixing angle $\alpha$ between $CP$-even Higgs scalars is constrained by Higgs precision data~\cite{WW,ZZ,gammagamma,bb,tautau}. The parameter space for $\sin\alpha$ and $\tan\beta$ allowed by the Higgs data is shown in Fig.~\ref{fig:Higgsfit}. We take the $(33)$ component of the up-type Higgs Yukawa coupling to be $y^u_{33}=y^{\rm SM}_t$ in the left, and $y^u_{33}=y^{\rm SM}_t/\cos\beta$ in the right panel. For illustration, we have also imposed unitarity and stability bounds discussed in the previous subsection for $m_{h_2}=m_{H^\pm}=450$ GeV, $m_A=140$~GeV and $v_s=1$~TeV. As a result, we find a wide parameter space close to the alignment line, $\alpha=\beta- \pi/2$, that is consistent with both the Higgs data and unitarity/stability bounds for $\tan\beta\gtrsim 0.1$. Thus, henceforth, for the phenomenology of the extra Higgs scalars, we focus on the parameter space near the alignment limit, $\cos(\alpha-\beta)\sim 0$. \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{fig5b.pdf} \includegraphics[height=0.45\textwidth]{fig5a.pdf} \end{center} \caption{Parameter space for $\sin\alpha$ and $\tan\beta$ allowed by Higgs data within $1\sigma$ (green), $2\sigma$ (yellow), and $3\sigma$ (dark gray). The gray regions correspond to the unitarity and stability bounds. $y^u_{33}=y^{\rm SM}_t$ in the left and $y^u_{33}=y^{\rm SM}_t/\cos\beta$ in the right panel. $m_{h_2}=m_{H^\pm}=450$~GeV, $m_A=140$~GeV and $v_s=1$~TeV have been taken in both panels.\label{fig:Higgsfit}} \end{figure} To see bounds from electroweak precision data, we obtain the effective Lagrangian after integrating out the $W$ and $Z$ bosons as follows~\cite{dumitru,kennedy}: \begin{equation} {\cal L}_{\rm eff} = -\frac{4G_F}{\sqrt{2} g^2 \sec^2\theta_W}\, \Big(\sec^2\theta_W J^\mu_{W^+} J_{W^-,\mu}+\rho J^\mu_Z J_{Z,\mu} +2 a J^\mu_Z J_{Z',\mu}+ b J^\mu_{Z'} J_{Z',\mu}\Big)+\cdots , \end{equation} where $J^\mu_Z=J^\mu_{3}-\sin^2\theta_* J^\mu_{\rm EM}$ with $\theta_*$ being the modified Weinberg angle. Here the non-oblique terms, $a$ and $b$, are determined at tree level as \begin{equation} a= \frac{\rho \sin\zeta \sec\xi}{\cos\zeta+\sin\theta_W \tan\xi\sin\zeta}, \quad b= \frac{a^2}{\rho}. \end{equation} From the mass of the $Z$-like boson given in Eq.~(\ref{Zmass}) and the $Z$--$Z'$ mixing angle in Eq.~(\ref{Zmix}), we find the correction to the $\rho$ parameter as \begin{align} \Delta\rho &= \frac{m^2_W}{m^2_{Z_1} \cos^2\theta_W}\, (\cos\zeta+\sin\theta_W \tan\xi \sin\zeta)^2-1 \nonumber \\ &\simeq \frac{\sin^2\theta_W}{\cos^2\xi} \frac{m^2_Z}{m^2_{Z'}} \left[\Big(2 Q'_{H_2}\frac{g_{Z'}}{g_Y} \Big)^2\sin^4\beta-\sin^2\xi \right] , \end{align} where we assumed that $\tan 2\zeta\simeq 2m^2_{12} / m^2_{Z_2} \ll 1$. Taking the limit of zero gauge kinetic mixing, {\em i.e.,} $\sin\xi=0$, we have \begin{align} \Delta\rho &=\frac{m^2_W}{m^2_{Z}\cos^2\theta_W}-1 \nonumber \\ &\simeq 10^{-4} \left(\frac{x}{0.05}\right)^2 g^2_{Z'} \sin^4\beta \left(\frac{400\,{\rm GeV}}{m_{Z'}}\right)^2, \end{align} which is consistent with the result in Ref.~\cite{Bian:2017rpg}.
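As a quick numerical illustration of this estimate (a sketch; the quoted scaling is coded directly):
\begin{verbatim}
import math

def delta_rho(x, g_zp, tan_beta, m_zp):
    """Quoted scaling: 1e-4 (x/0.05)^2 g_Z'^2 sin^4(beta) (400 GeV/m_Z')^2."""
    sin_b = tan_beta / math.sqrt(1.0 + tan_beta**2)
    return 1e-4 * (x / 0.05)**2 * g_zp**2 * sin_b**4 * (400.0 / m_zp)**2

# For tan(beta)=1, g_Z'=1, x=0.05, m_Z'=400 GeV: ~2.5e-5,
# well below electroweak-precision sensitivity to the rho parameter
print(delta_rho(0.05, 1.0, 1.0, 400.0))
\end{verbatim}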
Therefore, for $\tan\beta\simeq 1$, $g_{Z'}\simeq 1$, and $x\simeq 0.05$, $Z'$ with mass $m_{Z'}\gtrsim 400\,{\rm GeV}$ is consistent with electroweak precision data. The mass splittings between extra Higgs scalars can also be constrained by the electroweak precision data, but they are easily satisfied if we take $m_{h_2}=m_{H^\pm}$ or $m_{h_2}=m_A$, and a small mixing between $CP$-even scalars. \subsection{{\boldmath$B$}-meson anomalies from {\boldmath$Z'$}} Before considering constraints from $B$-meson mixings and decays, we show how to explain the $B$-meson anomalies in our model and identify the relevant parameter space. This subsection is based on the detailed results on $U(1)'$ interactions presented in Appendix~\ref{app:u1_ints} and phenomenological findings in Ref.~\cite{Bian:2017rpg}. From the relevant $Z'$ interactions for $B$-meson anomalies and the $Z'$ mass term, \begin{equation} {\cal L}'_{Z'}= g_{Z'} Z'_\mu \Big(\frac{1}{3}x\,V^*_{ts} V_{tb}\,{\bar s}\gamma^\mu P_L b+{\rm h.c.}+y {\bar \mu}\gamma^\mu \mu \Big)+ \frac{1}{2} m^2_{Z'} Z^{\prime 2}_\mu, \label{zpint} \end{equation} we get the classical equation of motion for $Z'$ as \begin{equation} Z'_\mu= -\frac{g_{Z'}}{m^2_{Z'}} \Big(\frac{1}{3}x\,V^*_{ts} V_{tb}\,{\bar s}\gamma_\mu P_L b+{\rm h.c.}+y {\bar \mu}\gamma_\mu \mu \Big). \label{zpeq} \end{equation} Then, by integrating out the $Z'$ gauge boson, we obtain the effective four-fermion interaction for ${\bar b}\rightarrow {\bar s}\mu^+ \mu^-$ as follows: \begin{equation} {\cal L}_{{\rm eff},{\bar b}\rightarrow {\bar s}\mu^+ \mu^-}= -\frac{xy g^2_{Z'}}{3 m^2_{Z'}}\, V^*_{ts} V_{tb}\, ({\bar s}\gamma^\mu P_L b) ({\bar \mu}\gamma_\mu \mu)+{\rm h.c.} \end{equation} Consequently, as compared to the effective Hamiltonian with the SM normalization, \begin{equation} \Delta {\cal H}_{{\rm eff},{\bar b}\rightarrow {\bar s}\mu^+ \mu^-} = -\frac{4G_F}{\sqrt{2}} \,V^*_{ts} V_{tb}\,\frac{\alpha_\text{em}}{4\pi}\, C^{\mu,{\rm NP}}_9 {\cal O}^\mu_9 \end{equation} with $ {\cal O}^\mu_9 \equiv ({\bar s}\gamma^\mu P_L b) ({\bar \mu}\gamma_\mu \mu)$ and $\alpha_{\rm em}$ being the electromagnetic coupling, we obtain the new physics contribution to the Wilson coefficient, \begin{equation} C^{\mu, {\rm NP}}_9= -\frac{8 xy \pi^2\alpha_{Z'}}{3\alpha_{\rm em}}\, \left(\frac{v}{m_{Z'}}\right)^2 \end{equation} with $\alpha_{Z'}\equiv g^2_{Z'}/(4\pi)$, and vanishing contributions to other operators, $C^{\mu,{\rm NP}}_{10}=C^{\prime\mu, {\rm NP}}_9=C^{\prime\mu, {\rm NP}}_{10}=0$. We note that $xy>0$ is chosen for a negative sign of $C^\mu_9$, consistent with the $B$-meson anomalies. Requiring the best-fit value $C^{\mu, \, {\rm NP}}_9=-1.10$~\cite{muon} (with $[-1.27,-0.92]$ and $[-1.43,-0.74]$ being the $1\sigma$ and $2\sigma$ ranges) to explain the $B$-meson anomalies yields \begin{equation} m_{Z'}= 1.2~\text{TeV} \times \left(xy\, \frac{\alpha_{Z'}}{\alpha_{\rm em}} \right)^{1/2} . \end{equation} Therefore, $m_{Z'} \simeq 1\,{\rm TeV}$ for $xy\simeq 1$ and $\alpha_{Z'}\simeq \alpha_{\rm em}$. For values of $xy$ less than unity or $\alpha_{Z'}\lesssim \alpha_{\rm em}$, $Z'$ can be even lighter.
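The quoted mass scale follows from inverting the expression for $C^{\mu,{\rm NP}}_9$ above; a minimal numerical check (illustrative inputs, $\alpha_{\rm em}\simeq 1/128$):
\begin{verbatim}
import math

v, alpha_em = 246.0, 1.0 / 128.0   # GeV; alpha_em near the Z pole (approx.)
C9_best = -1.10                    # best-fit value quoted above

def m_zprime(xy, alpha_zp):
    """Invert C9^NP = -(8 xy pi^2 alpha_Z'/(3 alpha_em)) (v/m_Z')^2."""
    return v * math.sqrt(8.0 * math.pi**2 * xy * alpha_zp
                         / (3.0 * abs(C9_best) * alpha_em))

print(m_zprime(1.0, alpha_em) / 1000.0)   # ~1.2 TeV, as quoted
\end{verbatim}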
Various phenomenological constraints on the $Z'$ interactions coming from dimuon resonance searches, other meson decays and mixing, tau lepton decays and neutrino scattering have been studied in Ref.~\cite{Bian:2017rpg}, leading to the conclusion that the region of $x g_{Z'}\lesssim 0.05$ for $y g_{Z'}\simeq 1$ and $m_{Z'}\lesssim 1$~TeV is consistent with the parameter space for which the $B$-meson anomalies can be explained. \subsection{Bounds from {\boldmath$B$}-meson mixings and decays} We now consider the bounds from $B$-meson mixings and decays. After integrating out the heavy Higgs bosons, the effective Lagrangian for $B_{s(d)}\rightarrow \mu^+\mu^-$ from the flavor-violating Yukawa interactions in (\ref{FCYukawas}) is \begin{align} \Delta {\cal L}_{{\rm eff},B_{s(d)}\rightarrow \mu^+ \mu^-}= &-\frac{\sqrt{2} m_\mu \sin(\alpha-\beta)\cos\alpha}{2m^2_H v \cos^2\beta} \Big(({\tilde h}^d_{23})^* {\bar b}_R s_L +({\tilde h}^d_{13})^* {\bar b}_R d_L+{\rm h.c.} \Big)({\bar \mu} \mu) \nonumber \\ &-\frac{\sqrt{2} m_\mu \tan\beta}{2m^2_A v \cos\beta} \Big(({\tilde h}^d_{23})^* {\bar b}_R s_L +({\tilde h}^d_{13})^* {\bar b}_R d_L+{\rm h.c.} \Big)({\bar \mu}\gamma^5 \mu). \end{align} The extra contributions to the effective Hamiltonian for $B_{s}\rightarrow \mu^+\mu^-$ are thus \begin{equation} \Delta {\cal H}_{{\rm eff},B_{s}\rightarrow \mu^+ \mu^-}=-\frac{G^2_F m^2_W}{\pi} \Big[C^{\rm BSM}_S ({\bar b}P_L s)({\bar \mu}\mu) + C^{\rm BSM}_P({\bar b}P_L s)({\bar \mu}\gamma^5\mu)\Big] \end{equation} with \begin{align} C^{\rm BSM}_S &= -\frac{\pi}{G^2_F m^2_W}\, \frac{\sqrt{2}m_\mu\sin(\alpha-\beta)\cos\alpha}{2m^2_H v\cos^2\beta}\,\cdot ({\tilde h}^d_{23})^*, \nonumber\\ C^{\rm BSM}_P&= -\frac{\pi}{G^2_F m^2_W}\, \frac{\sqrt{2}m_\mu\tan\beta}{2m^2_A v\cos\beta}\,\cdot ({\tilde h}^d_{23})^*. \end{align} In the alignment limit with $\alpha=\beta-\pi/2$ and $m_A\simeq m_H$, the Wilson coefficients become equal in magnitude and suppressed for small $\tan\beta$. The effective Hamiltonian in the above leads to corrections to the branching ratio of $B_{s}\rightarrow \mu^+\mu^-$ as follows~\cite{crivellin2}: \begin{align} {\cal B}(B_{s}\rightarrow \mu^+\mu^-) =&~\frac{G^4_F m^4_W}{8\pi^5}\left(1-\frac{4m^2_\mu}{m^2_{B_s}}\right)^{1/2} m_{B_s} f^2_{B_s} m^2_\mu \, \tau_{B_s} \nonumber \\ &\times \left[ \left|\frac{m^2_{B_s}(C_P-C'_P)}{2(m_b+m_s)m_\mu}-(C_A-C'_A) \right|^2+ \left|\frac{m^2_{B_s}(C_S-C'_S)}{2(m_b+m_s)m_\mu} \right|^2 \left(1-\frac{4m^2_\mu}{m^2_{B_s}}\right)\right], \nonumber \\ \end{align} where $m_{B_s}$, $f_{B_s}$, and $\tau_{B_s}$ are the mass, decay constant, and lifetime of the $B_s$-meson, respectively. $C^{(')}_A ,C^{(')}_S, C^{(')}_P$ are Wilson coefficients of the effective operators, ${\cal O}^{(')}_A=[{\bar b}\gamma_\mu P_{L(R)}s][{\bar \mu}\gamma^\mu \gamma^5 \mu]$, ${\cal O}^{(')}_S=[{\bar b}P_{L(R)}s][{\bar \mu} \mu]$ and ${\cal O}^{(')}_P=[{\bar b}P_{L(R)}s][{\bar \mu}\gamma^5 \mu]$, respectively. We note that there is no contribution from $Z'$ interactions to $B_s\rightarrow \mu^+\mu^-$ since the muon couplings to $Z'$ are vector-like.
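For orientation, the size of these coefficients can be evaluated at a benchmark point (a sketch; PDG-like reference values, alignment limit with $m_H\simeq m_A$):
\begin{verbatim}
import math

# PDG-like reference inputs in GeV units (illustrative)
GF, mW, v, m_mu, m_b = 1.1664e-5, 80.4, 246.0, 0.10566, 4.18
tan_b, mA = 1.0, 500.0
cos_b = 1.0 / math.sqrt(1.0 + tan_b**2)
sin_b = tan_b * cos_b

h23 = 5.77e-2 * m_b / (v * sin_b)          # Eq. (ht23)

# Alignment limit with m_H ~ m_A: |C_S^BSM| = |C_P^BSM|
CP = -(math.pi / (GF**2 * mW**2)) * math.sqrt(2.0) * m_mu * tan_b * h23 \
     / (2.0 * mA**2 * v * cos_b)
print(f"C_P^BSM = {CP:.1e}")               # O(1e-5): a small correction
\end{verbatim}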
On the other hand, in the alignment limit the bounds obtained from $B_{s,d}\rightarrow \mu^+\mu^-$ in Ref.~\cite{crivellin2} can be translated to our case as \begin{align} \left\vert{\tilde h}^d_{23}\right\vert &< 3.4\times 10^{-2} \left(\frac{\cos\beta}{\tan\beta}\right){\left(\frac{m_{H,A}}{500\,{\rm GeV}} \right)}^2, \nonumber \\ \left\vert{\tilde h}^d_{13}\right\vert &< 1.7\times 10^{-2} \left(\frac{\cos\beta}{\tan\beta}\right){\left(\frac{m_{H,A}}{500\,{\rm GeV}} \right)}^2. \label{eq:Bmumu} \end{align} From Eqs.~(\ref{ht13}) and (\ref{ht23}), we find that the flavor constraints are satisfied as long as \begin{equation} \sin\beta< \sqrt{1-0.033{\left(\frac{500~{\rm GeV}}{m_{H,A}} \right)}^2}. \end{equation} This leads to $\tan\beta<5.4$ for $m_{H,A}=500\,{\rm GeV}$. The flavor-violating Yukawa couplings of heavy Higgs bosons as well as $Z'$ interactions~\cite{Bian:2017rpg} can modify the $B_s$--${\bar B}_s$ mixing. The additional effective Hamiltonian relevant for the mixing is given by \begin{equation} \Delta{\cal H}_{\text{eff},B_s-{\bar B}_s}=C'_2 (\bar s_\alpha P_R b_\alpha)({\bar s}_\beta P_R b_\beta)+ \frac{G^2_F m^2_W}{16\pi^2}\, (V^*_{ts} V_{tb})^2\,C^{\rm NP}_{VLL}\, ({\bar s}_\alpha\gamma^\mu P_L b_\alpha)({\bar s}_\beta\gamma_\mu P_L b_\beta), \end{equation} with \begin{align} C'_2=&~ \frac{({\tilde h}^d_{23})^2}{4\cos^2\beta\, m^2_H} \left( \frac{m^2_H}{m^2_A}-\sin^2(\alpha-\beta) -\frac{m^2_H \cos^2(\alpha-\beta)}{m^2_h}\right), \label{C2p} \\ C^{\rm NP}_{VLL}=&~ \frac{16\pi^2}{9} \, \frac{(x g_{Z'})^2 v^4}{m^2_{Z'} m^2_W} \nonumber \\ =&~ 0.27 \Big(\frac{x g_{Z'}}{0.05}\Big)^2 \left(\frac{300\,{\rm GeV}}{m_{Z'}}\right)^2. \label{CVLL} \end{align} The mass difference in the $B_s$ system becomes \begin{equation} \Delta M_{B_s}= \frac{2}{3}m_{B_s} f^2_{B_s} B^s_{123}(\mu) \left[ \frac{G^2_F m^2_W}{16\pi^2}\, (V^*_{ts} V_{tb})^2\,\Big(C^{\rm SM}_{VLL}+C^{\rm NP}_{VLL}\Big)+ |C'_2| \right], \label{DMs} \end{equation} where $B^s_{123}(\mu)$ is a combination of bag parameters~\cite{BsBsbar} and $C^{\rm SM}_{VLL}\simeq 4.95$~\cite{MsSM}. The SM prediction and the experimental values of $\Delta M_{B_s}$ are given by $(\Delta M_{B_s})^{\rm SM}=(17.4\pm 2.6)\,{\rm ps}^{-1}$~\cite{MsSM} and $(\Delta M_{B_s})^{\rm exp}=(17.757\pm 0.021)\,{\rm ps}^{-1}$~\cite{Msexp}, respectively. Then, taking into account the SM uncertainties, we obtain the bounds on $\Delta M_{B_s}$ as $16\,(13)\,{\rm ps}^{-1}<\Delta M_{B_s}<21\,(23)\,{\rm ps}^{-1}$ or $(\Delta M_{B_s})^{\rm BSM}<3.0\,(5.6)\,{\rm ps}^{-1}$ at $1\sigma$ ($2\sigma$) level for new physics. We also note that the most recent lattice calculations show considerably larger values for the bag parameters, leading to $(\Delta M_{B_s})^{\rm SM}=(20.01\pm 1.25)\,{\rm ps}^{-1}$~\cite{MsSM2}. This needs independent confirmation, but if confirmed, the new physics contributions coming from the heavy Higgs bosons and $Z'$ would be more tightly constrained. Taking the SM prediction as $(\Delta M_{B_s})^{\rm SM}=(17.4\pm 2.6)\,{\rm ps}^{-1}$~\cite{MsSM}, from Eq.~(\ref{DMs}) with Eqs.~(\ref{C2p}) and (\ref{CVLL}), we get the bound on the flavor-violating Yukawa coupling in the alignment limit of heavy Higgs bosons as \begin{equation} \frac{|{\tilde h}^d_{23}|}{\cos\beta} \bigg|\frac{m^2_H}{m^2_A}-1\bigg|^{1/2}\bigg(\frac{500\,{\rm GeV}}{m_H}\bigg)<4.6(6.4)\times 10^{-3}\sqrt{1-0.1(0.06)\Big(\frac{x g_{Z'}}{0.05}\Big)^2 \left(\frac{300\,{\rm GeV}}{m_{Z'}}\right)^2}.
\label{hd23} \end{equation} Here, since we need to choose $x g_{Z'}\lesssim 0.05$ for $m_{Z'}\lesssim 1\,{\rm TeV}$ to satisfy the $B$-meson anomalies and the LHC dimuon bounds at the same time as discussed in the previous section, we can safely ignore the contribution of $Z'$ interactions to the $B_s$--${\bar B}_s$ mixing on the right-hand side of Eq.~(\ref{hd23}). Furthermore, with the $Z'$ contribution ignored, the $B_d$--${\bar B}_d$ mixing leads to a similar bound~\cite{BsBsbar}: \begin{equation} \left\vert{\tilde h}^d_{13}\right\vert<0.91(1.3)\times 10^{-3} \cos\beta\, \bigg|\frac{m^2_H}{m^2_A}-1\bigg|^{-1/2}\bigg(\frac{m_H}{500\,{\rm GeV}}\bigg). \end{equation} Comparing to the bounds from $B_s\rightarrow \mu^+\mu^-$ in~(\ref{eq:Bmumu}), the $B$--${\bar B}$ mixings could lead to tighter constraints on the flavor-violating Yukawa couplings for down-type quarks unless $m_H$ and $m_A$ are almost degenerate. The upper frames of Fig.~\ref{fig:unitarity3} show that a wide range of heavy Higgs masses up to 600--700~GeV is allowed for $m_{h_2} = m_A$ and $\tan\beta = \mathcal{O}(1)$. On the other hand, for $\tan\beta=0.5$, the neutral Higgs boson can be as heavy as 400~GeV, but the charged Higgs mass is constrained as $240~{\rm GeV} \lesssim m_{H^\pm} \lesssim 650$~GeV. For illustration, the case with $m_{h_2} = m_{H^\pm}$ has also been shown in the lower frames of Fig.~\ref{fig:unitarity3}, where the narrower region is allowed as compared with the case with $m_{h_2} = m_A$. \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{fig3c.pdf} \includegraphics[height=0.45\textwidth]{fig3d.pdf}\\[0.2cm] \includegraphics[height=0.45\textwidth]{fig3a.pdf} \includegraphics[height=0.45\textwidth]{fig3b.pdf} \end{center} \caption{Parameter space in terms of $m_{h_2}$ and $m_{H^\pm}$ (upper frames), and $m_A$ (lower frames). $\tan\beta = 1$ in the left and 0.5 in the right panels. We have chosen $v_s=2m_{h_3}=1$~TeV, $\cos(\alpha-\beta)=0.05$, and $y_{33}^u=y_t^\text{SM}$ in all frames. The mixing between heavy $CP$-even scalars is taken to be zero. The gray regions are excluded by unitarity and stability bounds. The magenta regions are excluded by $B\rightarrow X_s \gamma$, and the cyan region by $B_s\rightarrow \mu^+\mu^-$. The yellow and orange regions are excluded by $B_s$ and $B_d$ mixings, respectively.\label{fig:unitarity3}} \end{figure} Another important bound comes from the inclusive radiative decay, $B \to X_s\gamma$. The effective Hamiltonian relevant for the $b\rightarrow s\gamma$ transition is \begin{equation} {\cal H}_{{\rm eff},b\rightarrow s\gamma }= -\frac{4G_F}{\sqrt{2}} \, V_{tb} V^*_{ts} \left(C_7 {\cal O}_7+ C_8 {\cal O}_8 \right) \end{equation} with \begin{equation} {\cal O}_7 = \frac{e}{16\pi^2}\, m_b\, {\bar s} \sigma^{\mu\nu} P_R b\, F_{\mu\nu}, \quad {\cal O}_8 = \frac{g_s}{16\pi^2}\, m_b\, {\bar s} \sigma^{\mu\nu} P_R T^a b\, G^a_{\mu\nu}.
\end{equation} The charged Higgs contributions to the Wilson coefficients are given by~\cite{crivellin2017,ko2} \begin{align} C^{\rm BSM}_7&= \frac{v^2}{2m^2_t}\frac{(\lambda^{H^-}_{t_R})^*\lambda^{H^-}_{t_R}}{V_{tb }V^*_{ts}}\, C^{(1)}_7(x_t) + \frac{v^2}{2m_t m_b}\frac{(\lambda^{H^-}_{t_L})^* \lambda^{H^-}_{t_R}}{V_{tb}V^*_{ts}}\, C^{(2)}_7(x_t) , \nonumber\\ C^{\rm BSM}_8&=\frac{v^2}{2m^2_t}\frac{(\lambda^{H^-}_{t_R})^*\lambda^{H^-}_{t_R}}{V_{tb }V^*_{ts}}\, C^{(1)}_8(x_t) + \frac{v^2}{2m_t m_b}\frac{(\lambda^{H^-}_{t_L})^* \lambda^{H^-}_{t_R}}{V_{tb}V^*_{ts}}\, C^{(2)}_8(x_t) \end{align} with $x_t\equiv (m_t/m_{H^\pm})^2$, and \begin{align} C^{(1)}_7(x) &= \frac{x}{72} \bigg\{\frac{-8x^3+3x^2+12x-7+(18x^2-12)\ln x}{(x-1)^4} \bigg\}, \nonumber\\ C^{(2)}_7(x) &= \frac{x}{12}\bigg\{\frac{-5x^2+8x-3+(6x-4)\ln x}{(x-1)^3} \bigg\}, \nonumber\\ C^{(1)}_8(x) &= \frac{x}{24} \bigg\{\frac{-x^3+6x^2-3x-2-6x\ln x}{(x-1)^4} \bigg\}, \nonumber\\ C^{(2)}_8(x) &= \frac{x}{4} \bigg\{\frac{-x^2+4x-3-2\ln x}{(x-1)^3} \bigg\}. \end{align} Here $\lambda^{H^-}_{t_{L,R}}$ are given by Eqs.~(\ref{lamtL}) and~(\ref{lamtR}). The Wilson coefficients in the SM at one loop are given by $C^{\rm SM}_7=3 C^{(1)}_7(m^2_t/m^2_W)$ and $C^{\rm SM}_8=3 C^{(1)}_8(m^2_t/m^2_W)$. $C^{\rm BSM}_8$ mixes into $C^{\rm BSM}_7$ at the scale $\mu_b=m_b$ through the renormalization group equations and contributes to ${\cal B}(B\rightarrow X_s \gamma)$~\cite{Borzumati:1998tg}. The next-to-next-to-leading-order SM prediction for ${\cal B}(B\rightarrow X_s \gamma)$ is~\cite{bsg-th} \begin{equation} {\cal B}(B\rightarrow X_s \gamma) = (3.36\pm 0.23)\times 10^{-4}, \end{equation} whereas the experimentally measured value of ${\cal B}(B\rightarrow X_s \gamma)$ from HFAG is~\cite{Msexp} \begin{equation} {\cal B}(B\rightarrow X_s \gamma) =(3.43\pm 0.21\pm 0.07)\times 10^{-4}. \end{equation} As a result, the SM prediction for $B\rightarrow X_s \gamma$ is consistent with experiments, so we obtain the bounds on the modified Wilson coefficients as $-0.032<C^{\rm BSM}_7(\mu_b)<0.027$ at $2\sigma$ level~\cite{Paul:2016urs}. This constrains $\tan\beta$ in terms of the charged Higgs mass as shown in Fig.~\ref{fig:Bsgamma}, where unitarity and stability bounds are displayed as well. \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{pic156.pdf} \includegraphics[height=0.45\textwidth]{pic350.pdf} \end{center} \caption{Parameter space for $m_{H^\pm}$ and $\tan\beta$ excluded by $B\rightarrow X_s \gamma$ within $2\sigma$ (red) and unitarity bounds (gray) with $y_{33}^u=y_t^\text{SM}$ for $m_A=m_{h_2}=160$~GeV (left panel) and $m_A=m_{h_2}=350$~GeV (right panel).\label{fig:Bsgamma}} \end{figure} We also find that the case with $y_{33}^u = y_t^\text{SM} / \cos\beta$ is excluded by $B \to X_s\gamma$; hence the case with $y_{33}^u = y_t^\text{SM}$ is considered in Figs.~\ref{fig:unitarity3} and~\ref{fig:Bsgamma} and collider studies in the next section. \subsection{Predictions for {\boldmath$R_D$} and {\boldmath$R_{D^*}$}} We briefly discuss the implications of flavor-violating couplings with the charged Higgs on $R_D$ and $R_{D^*}$.
The effective Hamiltonian relevant for $B\rightarrow D^{(*)}\tau \nu$ in our model is given as follows: \begin{equation} {\cal H}_{\rm eff}= C^{cb}_{\rm SM} ({\bar c}_L \gamma_\mu b_L) ({\bar \tau}_L \gamma^\mu \nu_L) + C^{cb}_R ({\bar c}_L b_R) ({\bar\tau}_R \nu_L) +C^{cb}_L ({\bar c}_R b_L) ({\bar\tau}_R \nu_L), \end{equation} where the Wilson coefficient in the SM is $C^{cb}_{\rm SM}=2V_{cb}/v^2$, and the new Wilson coefficients generated by charged Higgs exchanges are \begin{align} C^{cb}_R = -\frac{\sqrt{2}m_\tau \tan\beta}{v \,m^2_{H^\pm}} \, {(\lambda^{H^-}_{c_L})}^*, \quad C^{cb}_L =-\frac{\sqrt{2}m_\tau \tan\beta}{v\, m^2_{H^\pm}} \, {(\lambda^{H^-}_{c_R})}^*. \end{align} See Eqs.~(\ref{lamcL}) and~(\ref{lamcR}) for $\lambda^{H^-}_{c_{L,R}}$. The ratios of the branching ratios for $B\rightarrow D^{(*)}\tau\nu$ to $B\rightarrow D^{(*)}\ell \nu$ with $\ell=e$, $\mu$ are defined by \begin{equation} R_{D^{(*)}}= \frac{ \mathcal{B} (B\to D^{(*)} \tau \nu)}{ \mathcal{B} (B\to D^{(*)} \ell \nu)}. \end{equation} The SM expectations are $R_D=0.300\pm 0.008$ and $R_{D^*}=0.252\pm 0.003$~\cite{RDs}, but the experimental results for $R_{D^{(*)}}$ deviate from the SM values by more than $2\sigma$~\cite{RDexp,RDexp2,RDsexp,RDsexp2}. Including the additional contributions from charged Higgs exchanges, we find the simplified forms for $R_D$ and $R_{D^*}$ as follows~\cite{ko2,crivellin2012}: \begin{align} R_D &= R_{D,{\rm SM}} \left[1 +1.5\, {\rm Re} \left(\frac{C^{cb}_R+C^{cb}_L}{C^{cb}_{\rm SM}}\right)+ \left\vert\frac{C^{cb}_R+C^{cb}_L}{C^{cb}_{\rm SM}} \right\vert^2\right], \nonumber\\ R_{D^*} &= R_{D^*,{\rm SM}} \left[1 +0.12 \, {\rm Re} \left( \frac{C^{cb}_R-C^{cb}_L}{C^{cb}_{\rm SM}}\right)+0.05 \Big|\frac{C^{cb}_R-C^{cb}_L}{C^{cb}_{\rm SM}} \Big|^2\right]. \end{align} As can be seen in Fig.~\ref{fig:RD}, a light charged Higgs is necessary to have large deviations of $R_D$ and $R_{D^*}$. However, it is excluded by $B\rightarrow X_s\gamma$. [See Fig.~\ref{fig:Bsgamma}.] Therefore, our model cannot explain the experimental results for $R_{D^{(*)}}$ simultaneously with the other bounds. \begin{figure}[t!] \begin{center} \includegraphics[height=0.45\textwidth]{RDRDstar.pdf} \end{center} \caption{The ratios of $R_D/R_{D,\text{SM}}$ and $R_{D^\ast}/R_{D^\ast,\text{SM}}$ as functions of the charged Higgs mass for given $\tan\beta$.\label{fig:RD}} \end{figure} \section{Productions and decays of heavy Higgs bosons at the LHC\label{sec:higgs_lhc}} We investigate the main production channels for heavy Higgs bosons at the LHC, including the contributions from flavor-violating interactions of quarks. The decay modes of the heavy Higgs bosons for some benchmark points are also studied, and we discuss smoking-gun signals for heavy Higgs searches at the LHC\@. In this section, mixings with the singlet scalar are neglected, and the heavy neutral Higgs boson $H$ denotes $h_2$. $h \equiv h_1$ is the SM-like Higgs with $m_h = 125$~GeV\@. \subsection{Heavy neutral Higgs boson} The main channels for neutral Higgs production are the gluon fusion $gg \to H$, bottom-quark fusion $b \bar b \to H$, and additional productions through the flavor-violating interactions for the bottom quark, $b \bar d_i \to H$ and $d_i \bar b \to H$, where $d_i$ denotes light down-type quarks, $d_i = d,\,s$. There are bottom quark associated productions, $b g \to b H$ and $d_i g \to b H$, as well.
The leading-order cross section for the gluon fusion process at parton level is \begin{equation} \hat\sigma (gg \to H) = \frac{\alpha_s^2 m_H^2 }{576 \pi v^2} \left\vert \frac{3}{4} \sum_q \left( \frac{\cos\alpha}{\cos\beta} + \frac{v \sin(\alpha - \beta)}{\sqrt{2} m_q \cos\beta} \tilde h_{33}^q \right) A_{1/2}^H (\tau_q) \right\vert^2 \delta (\hat s - m_H^2) , \end{equation} where $\tau_q = m_H^2 / (4m_q^2)$. The loop function $A_{1/2}^H (\tau)$ is given in Ref.~\cite{Djouadi:2005gi}. $\hat s$ is the partonic center-of-mass energy. Here only the contributions of the top and bottom quarks have been taken into account. Note that the top quark contribution vanishes if one takes $y_{33}^u = y_t^\text{SM}$ and the alignment limit, as can be seen in Eq.~(\ref{eq:lambda_t_SM}). The parton-level cross section for bottom-quark fusion $b \bar b \to H$ is \begin{equation} \hat\sigma (b\bar b \to H) = \frac{\pi m_b^2}{18 v^2} \left( \frac{\cos\alpha}{\cos\beta} + \frac{v \sin(\alpha - \beta)}{\sqrt{2} m_b \cos\beta} \tilde h_{33}^d \right)^2 \left( 1 - \frac{4m_b^2}{m_H^2} \right)^{1/2} \delta (\hat s - m_H^2) . \end{equation} There are other single Higgs production channels through the flavor-violating interactions, $b \bar d_i \to H$ and $d_i \bar b \to H$. The corresponding cross section is given by \begin{equation} \hat\sigma (d_i \bar b \to H) = \frac{\pi |\tilde h_{i3}^d|^2 \sin^2 (\alpha - \beta)}{72 \cos^2 \beta} \delta (\hat s - m_H^2) , \end{equation} and $\hat\sigma (b \bar d_i \to H) = \hat\sigma (d_i \bar b \to H)$ at parton level. The bottom quark associated production of the Higgs boson can occur from initial states with a bottom quark, that is, $b g \to b H$, through the flavor-conserving interactions, or from initial states with a light down-type quark, $d_i g \to b H$, via the flavor-violating interactions. The former is nonvanishing even if all the components of $\tilde h^d$ are zero. The diagrams of the bottom quark associated production are shown in Fig.~\ref{fig:qgqH_diag}. \begin{figure}[bt!] \begin{center} \includegraphics[width=0.48\textwidth]{qgqH_s.pdf} \includegraphics[width=0.48\textwidth]{qgqH_t.pdf} \end{center} \caption{Diagrams of the bottom quark associated productions of neutral Higgs bosons.\label{fig:qgqH_diag}} \end{figure} The differential cross section for $b g \to b H$ at parton level is \begin{equation} \frac{d \hat\sigma}{d\hat t} (b g \to b H) = \frac{\alpha_s (\lambda_b^H)^2}{96 (\hat s - m_b^2)^2} \left[ \frac{2 F_1 - F_2^2 - 2 G_1 G_2}{(\hat s - m_b^2) (\hat t - m_b^2)} + 2m_b^2 \left( \frac{G_1}{(\hat s - m_b^2)^2} + \frac{G_2}{(\hat t - m_b^2)^2} \right) \right] , \end{equation} where \begin{equation*} F_1 = \hat s \hat t - m_b^4, \quad F_2 = \hat s + \hat t - 2m_b^2 , \quad G_1 = m_H^2 - m_b^2 - \hat s, \quad G_2 = m_H^2 - m_b^2 - \hat t, \end{equation*} and $\lambda_b^H$ is given in (\ref{eq:lambda_b}). For the $d_i g \to b H$ process, it is \begin{align} \frac{d\hat\sigma}{d\hat t} (d_i g \to b H) = \frac{\alpha_s |\tilde h_{i3}^d|^2}{96 \hat s^2 (\hat t - m_b^2)} \frac{\sin^2 (\alpha - \beta)}{\cos^2\beta} \left[ \frac{2F_1 - F_2^2 - 2G_1 G_2}{\hat s} + \frac{2 m_b^2 G_2}{\hat t - m_b^2} \right] \end{align} with \begin{equation*} F_1 = \hat s \hat t, \quad F_2 = \hat s + \hat t - m_b^2, \quad G_1 = m_H^2 - m_b^2 - \hat s, \quad G_2 = m_H^2 - \hat t . \end{equation*} Again, $\hat\sigma (\bar d_i g \to \bar b H) = \hat\sigma (d_i g \to b H)$ at parton level.
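Schematically, the hadronic cross sections for the $2\to1$ channels follow from convoluting the $\delta$-function partonic cross sections with PDFs; a minimal sketch (assuming the LHAPDF~6 Python bindings with an installed NNPDF2.3 member whose name is illustrative here, and setting the Yukawa bracket to unity and neglecting the $b$-mass threshold factor for illustration):
\begin{verbatim}
import math
import lhapdf   # LHAPDF 6 Python bindings (assumed installed)

pdf = lhapdf.mkPDF("NNPDF23_lo_as_0130", 0)   # illustrative set name
mH, sqrt_s = 200.0, 14000.0                   # GeV
s = sqrt_s**2
tau = mH**2 / s

def lum(pid1, pid2, n=5000):
    """Parton luminosity integral: int_tau^1 dx f1(x) f2(tau/x) / x."""
    acc = 0.0
    for i in range(n):
        x = tau**(1.0 - (i + 0.5) / n)        # log-spaced x in [tau, 1]
        acc += pdf.xfxQ(pid1, x, mH) * pdf.xfxQ(pid2, tau / x, mH)
    return acc * (-math.log(tau)) / (n * tau)

# For sigma_hat = A * delta(shat - mH^2): sigma(pp -> H) = (A/s) * lum.
# Example: b bbar -> H with the Yukawa bracket set to unity
A_bb = math.pi * 4.7**2 / (18.0 * 246.0**2)
sigma = (A_bb / s) * (lum(5, -5) + lum(-5, 5))
print(sigma * 3.894e8)                        # convert GeV^-2 to pb
\end{verbatim}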
We perform the integration using the Monte Carlo method to obtain the production cross sections in proton-proton collisions at 14~TeV, employing the NNPDF2.3 parton distribution function (PDF) set~\cite{Ball:2012cx} via the LHAPDF 6 library~\cite{Buckley:2014ana}. The renormalization and factorization scales are set to $m_H$, and $m_b = 4.7$~GeV. The resulting production cross sections as a function of $m_H$ are shown in Fig.~\ref{fig:xsec_neutral}. In all frames we set $\cos (\alpha - \beta) = 0.05$, close to the alignment limit, and $y_{33}^u = y_t^\text{SM}$. A constant $K$-factor of 2.5 has been applied to the gluon fusion production cross section, while the leading-order expressions have been used for the other production channels. \begin{figure}[bt!] \begin{center} \includegraphics[width=0.45\textwidth]{xsec_higgs_neutral_y33u_sm_1.pdf} \includegraphics[width=0.45\textwidth]{xsec_higgs_neutral_y33u_sm_2.pdf} \end{center} \caption{Production cross sections of the heavy neutral Higgs $H$ at 14~TeV proton-proton collisions. We have chosen $\tan\beta = 1$ (left panel) and $\tan\beta = 0.5$ (right panel) with $\cos(\alpha - \beta) = 0.05$ and $y_{33}^u = y_t^\text{SM}$.\label{fig:xsec_neutral}} \end{figure} In the alignment limit, the neutral Higgs coupling to the top quark $\lambda_t^H$ is vanishing as can be seen in Eq.~(\ref{eq:lambda_t_SM}). In this case, the single Higgs production through the gluon fusion process is suppressed compared to the SM case, though nonvanishing due to the bottom quarks in the loop. Still, the gluon fusion production convoluted with PDF is the most dominant channel for the single Higgs production, and $b \bar b \to H$ is the subdominant one for $\tan\beta \gtrsim \mathcal{O}(0.1)$. On the other hand, for smaller $\tan\beta$, the flavor-violating Higgs couplings to light quarks become larger and the contributions from initial states with light down-type quarks, $d_i \bar b \to H$, are subdominant, becoming even the dominant channel in the case of very small $\tan\beta = \mathcal{O}(0.01)$. However, since we find that such scenarios with very small $\tan\beta$ are excluded by bounds from the experimental results on $B$-meson mixings and decays, particularly by $B \to X_s \gamma$ as seen in the previous section, we have chosen $\tan\beta = 1$ and 0.5 as benchmarks for this study. For $m_H = 200$~GeV and $\tan\beta = 1$ (0.5), $\sigma_{pp \to H} \simeq 225.2$ (110.5)~fb, and $(\sigma_{b \bar d_i \to H} + \sigma_{d_i \bar b \to H}) / \sigma_{g g \to H} = 0.62$\% (1.6\%), while $(\sigma_{b \bar d_i \to H} + \sigma_{d_i \bar b \to H})/ \sigma_{b \bar b \to H} \simeq 1.6$\% (10.9\%) at the LHC\@. As the neutral Higgs gets heavier, the production cross sections decrease rapidly. For $m_H = 400$~GeV and $\tan\beta = 1$ (0.5), $\sigma_{pp \to H} \simeq 38.4$ (31.7)~fb. As can be seen in Fig.~\ref{fig:xsec_neutral}, the production cross section of the bottom quark associated process increases as $\tan\beta$ becomes smaller since the effect of the flavor-violating couplings becomes larger. In particular, if $m_H \lesssim 200$~GeV the production cross section is $\mathcal{O}(10)$~fb, so it can serve as a good search channel at the LHC\@. Meanwhile, if $m_H \gtrsim 2m_t$, the cross section decreases down to $\lesssim\mathcal{O}(1)$~fb. We now turn to the decay widths of the neutral Higgs bosons and obtain their branching ratios.
Ignoring the mixing between the SM-like Higgs and the singlet scalar, the partial decay widths to quarks are \begin{align} \Gamma (H \to b \bar d_i) = \Gamma (H \to d_i \bar b) &= \frac{3 |\tilde h_{i3}^d|^2 \sin^2 (\alpha - \beta)}{32 \pi \cos^2\beta} m_H {\left( 1 - \frac{m_b^2}{m_H^2} \right)}^2, \nonumber\\ \Gamma (H \to q \bar q) &= \frac{3 {(\lambda_q^H)}^2}{16\pi} m_H {\left( 1 - \frac{4 m_q^2}{m_H^2} \right)^{3/2}}, \end{align} where $q = t$, $b$, $c$. $\lambda_b^H$ and $\lambda_t^H$ are given in (\ref{eq:lambda_b}) and (\ref{eq:lambda_t}), and \begin{equation} \lambda_c^H = \frac{\sqrt{2} m_c \cos\alpha}{v \cos\beta} . \end{equation} On the other hand, the Higgs interactions with the charged leptons are flavor-conserving and the corresponding decay width is given as \begin{equation} \Gamma (H \to \tau^+ \tau^-) = \frac{m_\tau^2 \cos^2 \alpha}{8\pi v^2 \cos^2\beta} m_H \left( 1 - \frac{4 m_\tau^2}{m_H^2} \right)^{3/2} . \end{equation} The partial widths to electroweak gauge bosons $V = W$, $Z$ are given as \begin{equation} \Gamma (H \to VV) = \frac{\delta_V m_H^3 \cos^2 (\alpha - \beta)}{32\pi v^2} \left( 1 - \frac{4 m_V^2}{m_H^2} \right)^{1/2} \left( 1 - \frac{4 m_V^2}{m_H^2} + \frac{12 m_V^4}{m_H^4} \right) , \end{equation} where $\delta_W = 2$ and $\delta_Z = 1$. These partial widths are vanishing in the alignment limit. If $m_H > 2 m_{Z^\prime}$, the decay mode $H \to Z^\prime Z^\prime$ opens. Ignoring the small mixing with the $Z$ boson, the decay width is \begin{equation} \Gamma (H \to Z^\prime Z^\prime) = \frac{g_{Z^\prime}^4 x^4 m_H^3 v^2 \sin^2 \beta \sin^2 \alpha}{2592 \pi m_Z^4} \left( 1 - \frac{4 m_{Z^\prime}^2}{m_H^2} \right)^{1/2} \left( 1 - \frac{4 m_{Z^\prime}^2}{m_H^2} + \frac{12 m_{Z^\prime}^4}{m_H^4} \right) . \end{equation} However, we find that this decay mode is almost negligible for small $g_{Z^\prime} x \simeq \mathcal{O}(0.05)$ and $m_{Z'} \gtrsim 400$~GeV, which would be necessary to evade constraints from the $Z^\prime$ searches at the LHC\@. The neutral Higgs boson can also decay into $\gamma \gamma$ and $gg$ through fermion or gauge boson loops. At leading order, the decay widths are given as \begin{align} \Gamma (H \to \gamma\gamma) &= \frac{\alpha^2 m_H^3}{256 \pi^3 v^2} \Bigg\vert \sum_{q = t, \, b} 3 Q_q^2 \frac{\lambda_q^H v}{\sqrt{2} m_q} A_{1/2}^H (\tau_q) + \frac{\cos\alpha}{\cos\beta} A_{1/2}^H (\tau_\tau) + \cos (\alpha - \beta) A_1^H (\tau_W) \Bigg\vert^2, \nonumber\\ \Gamma (H \to gg) &= \frac{\alpha_s^2 m_H^3}{72 \pi^3 v^2} \left\vert \frac{3}{4} \sum_{q = t, \, b} \frac{\lambda_q^H v}{\sqrt{2} m_q} A_{1/2}^H (\tau_q) \right\vert^2, \end{align} where $Q_q$ is the electric charge of the quark and $\tau_i \equiv m_H^2 / (4m_i^2)$. The loop functions $A_{1/2}^H$ and $A_1^H$ can be found in Ref.~\cite{Djouadi:2005gi}. If $m_H > 2m_h$, the heavy neutral Higgs can decay into a pair of SM-like Higgs bosons.\footnote{If the singlet scalar $h_3 = S$ is light enough, additional decay modes such as $H \to S h$ can occur and become important channels~\cite{vonBuddenbrock:2016rmr}.
Here we assume that $S$ is heavy, $m_S \gtrsim 0.5$--1~TeV, and the mixings with doublet Higgs bosons are negligible.} The triple interaction comes from the scalar potential in (\ref{eq:scalar_potential_1}), \begin{equation} V_1 \supset \frac{g_{Hhh} v}{2} H h h, \end{equation} where \begin{align} g_{H h h} = &~3 ( \lambda_1 \sin\alpha \cos\beta + \lambda_2 \cos\alpha \sin\beta ) \sin(2\alpha) \nonumber\\ & + (\lambda_3 + \lambda_4) \left[ 3 \cos(\alpha + \beta) \cos(2\alpha) - \cos(\alpha - \beta) \right]. \label{eq:ghhh} \end{align} The decay width for the $H \to hh$ process is given as \begin{equation} \Gamma(H \to hh) = \frac{g_{Hhh}^2 v^2}{32 \pi m_H} {\left( 1 - \frac{4m_h^2}{m_H^2} \right)}^{1/2}. \end{equation} The quartic couplings in the Higgs potential can be evaluated by choosing values of $\mu v_s$, $\tan\beta$, $\sin\alpha$, and $m_H$ if mixing with the singlet scalar is negligible, $\alpha_2 \simeq \alpha_3 \simeq 0$. See Appendix~\ref{app:higgs_sector}. By combining all the decay widths, we obtain the branching ratio of each decay mode. Fig.~\ref{fig:br_neutral} shows the branching ratios of the neutral Higgs boson $H$ for $\cos (\alpha - \beta) = 0.05$ and $v_s = 1$~TeV, but with different values of $\mu$ to satisfy the unitarity and stability bounds studied in Subsec.~\ref{sec:unitarity_bounds}. \begin{figure}[bt!] \begin{center} \includegraphics[width=0.48\textwidth]{br_higgs_neutral_y33u_sm_1.pdf} \includegraphics[width=0.48\textwidth]{br_higgs_neutral_y33u_sm_2.pdf} \end{center} \caption{Branching ratios of the heavy neutral Higgs $H$. $\tan\beta =1$ and $\mu = 200$~GeV (left panel), and $\tan\beta = 0.5$ and $\mu = 50$~GeV (right panel) have been taken. $v_s = 1$~TeV and $\cos (\alpha - \beta) = 0.05$ for both panels.\label{fig:br_neutral}} \end{figure} We observe that $H \to b \bar d_i$/$d_i \bar b$ is the predominant decay mode if $m_H < 2m_h$, whereas the di-Higgs mode $H \to hh$ becomes the most important if the mode is kinematically allowed, irrespective of $\tan\beta$. In practice, the branching ratio of the di-Higgs mode $\mathcal{B}(H \to hh)$ depends on the choice of $\mu v_s$. If we take a smaller $\mu v_s$ value, for instance, $\mu = 200$~GeV and $v_s = 500$~GeV with $\tan\beta = 1$, we find that $H \to b \bar d_i$/$d_i \bar b$ is always the most dominant decay mode. The dip near $m_H = 580$~GeV in the left panel of Fig.~\ref{fig:br_neutral} is due to the accidental cancellation in the Higgs triple coupling (\ref{eq:ghhh}). The position of the dip also depends on the value of $\mu v_s$ for given $\tan\beta$ and $\cos(\alpha - \beta)$. On the other hand, the $b{\bar b}$ mode and diboson modes such as $WW/ZZ$ are subdominant. From these observations, we expect that the search strategies would be different depending on the mass of the heavy Higgs boson. For $m_H < 2m_h$, $p p \to H \to b \bar d_i$/$d_i \bar b$, {\em i.e.}, dijet final states containing one $b$ jet are the most important, but for $m_H > 2m_h$, the di-Higgs channel, and possibly in conjunction with the dijet channel with one $b$ jet, is important to search for the heavy neutral Higgs boson at the LHC\@. Thus, the neutral Higgs boson with $m_H < 250$~GeV can receive constraints from dijet searches~\cite{LHC-dijet}. Although the dijet channel has typically been used to search for heavy resonances at the few-TeV scale, it can probe lower scales if it is associated with a hard photon or jet from initial-state radiation.
The ATLAS collaboration has searched for light resonances with dijet invariant masses down to 200~GeV in final states of a dijet in association with a photon~\cite{ATLAS:2016jcu}. In our case, gluon fusion production is the most dominant channel and it is not associated with a hard photon. It can have a hard jet from the gluons in the initial states, but the mass region below 250~GeV has not yet been searched in final states of a dijet in association with a hard jet. For $m_H > 250$~GeV, bounds from di-Higgs searches can be imposed, but we find that they do not have enough sensitivity to the heavy neutral Higgs bosons in our model yet~\cite{LHC-hh}. \subsection{Heavy charged Higgs boson} One of the conventional search channels for the heavy charged Higgs with $m_{H^\pm} > m_t$ at hadron colliders is the top quark associated production, $b g \to t H^-$, through diagrams similar to those for $b g \to b H$. Since the charged Higgs boson can have enhanced couplings with the light up-type quarks due to nonzero components of $\tilde h^d$, we can also have a sizable production cross section of the bottom quark associated process from the initial states with light up-type quarks, $u_i g \to b H^+$ where $u_i = u$, $c$.\footnote{ We note that there have been collider studies on the production of heavy Higgs bosons due to flavor-violating interactions for up-type quarks. See, for instance, Ref.~\cite{FChiggs}. } The differential cross section for $b g \to t H^-$ at parton level is \begin{align} \frac{d \hat\sigma}{d\hat t} = &~\frac{\alpha_s}{48 (\hat s - m_b^2)^2} \left[ \left( | \lambda_{t_L}^{H^-} |^2 + | \lambda_{t_R}^{H^-} |^2 \right) \left( \frac{2 F_1 - F_2^2 - 2G_1 G_2}{(\hat s - m_b^2) (\hat t - m_t^2)} + \frac{2 m_b^2 G_1}{(\hat s - m_b^2)^2} + \frac{2 m_t^2 G_2}{(\hat t - m_t^2)^2} \right) \right. \nonumber\\ &\left . + \left( \lambda_{t_L}^{H^-} (\lambda_{t_R}^{H^-})^\ast + \lambda_{t_R}^{H^-} (\lambda_{t_L}^{H^-})^\ast \right) \frac{4 m_b m_t m_{H^\pm}^2}{(\hat s - m_b^2) (\hat t - m_t^2)} \left( 1 - \frac{F_1 F_2}{m_{H^\pm}^2 (\hat s - m_b^2) (\hat t - m_t^2)} \right) \right], \end{align} where \begin{align} F_1 &= \hat s \hat t - m_b^2 m_t^2, \quad F_2 = \hat s + \hat t - m_b^2 - m_t^2, \nonumber\\ G_1 &= m_{H^\pm}^2 - m_t^2 - \hat s , \quad G_2 = m_{H^\pm}^2 - m_b^2 - \hat t. \end{align} Since the diagrams contributing to the bottom quark associated processes have the same Lorentz structure as those for $b g \to t H^-$, we can obtain their parton-level cross sections by replacing $\lambda_{t_{L,R}}^{H^-}$ with $\lambda_{u_{iL,R}}^{H^-}$, $m_b$ with $m_{u_i} \simeq 0$, and $m_t$ with $m_b$. They are given as \begin{equation} \frac{d \hat\sigma}{d\hat t} (u_i g \to b H^+) = \frac{\alpha_s (| \lambda_{u_{iL}}^{H^-} |^2 + | \lambda_{u_{iR}}^{H^-} |^2)}{48 \hat s^2 (\hat t - m_b^2)} \left[ \frac{2 F_1 - F_2^2 - 2G_1 G_2}{\hat s} + \frac{2 m_b^2 G_2}{\hat t - m_b^2} \right] \end{equation} with \begin{equation} F_1 = \hat s \hat t, \quad F_2 = \hat s + \hat t - m_b^2, \quad G_1 = m_{H^\pm}^2 - m_b^2 - \hat s , \quad G_2 = m_{H^\pm}^2 - \hat t. \end{equation} The leading-order cross sections evaluated by convoluting the partonic cross section with the PDFs in proton-proton collisions at 14~TeV are shown in Fig.~\ref{fig:xsec_charged}. In each figure, $\sigma(pp \to H^\pm q) = \sigma(pp \to H^+ q) + \sigma(pp \to H^- q)$. \begin{figure}[tb!]
\begin{center} \includegraphics[width=0.45\textwidth]{xsec_higgs_charged_y33u_sm_1.pdf} \includegraphics[width=0.45\textwidth]{xsec_higgs_charged_y33u_sm_2.pdf} \end{center} \caption{Production cross sections of the heavy charged Higgs $H^\pm$ at 14~TeV proton-proton collisions. We have chosen $\tan\beta = 1$ (left panel) and $\tan\beta = 0.5$ (right panel) with $y_{33}^u = y_t^\text{SM}$.\label{fig:xsec_charged}} \end{figure} The production cross sections are quite sensitive to $\tan\beta$. For $\tan\beta = 1$, the top quark associated production, $p p \to H^\pm t$, is the dominant channel, while the bottom quark associated production, $p p \to H^\pm b$, which is the characteristic channel of our model, can also serve as a good channel to search for the charged Higgs boson at the LHC\@. On the other hand, for smaller $\tan\beta$, the bottom quark associated production becomes the dominant channel due to the enhanced charged-Higgs couplings with light up-type quarks. The suppression of the top quark associated production is also due to the partial cancellation of the two terms in $\lambda_{t_L}^{H^-}$. Concerning the decays of the charged Higgs, the most important fermionic decay mode is $H^+ \to t \bar b$. The decay width is \begin{align} \Gamma(H^+ \to t \bar b) =&~\Gamma(H^- \to b \bar t) \nonumber\\ =&~\frac{3}{16\pi} m_{H^\pm} \left[ \left( 1 - \frac{(m_t + m_b)^2}{m_{H^\pm}^2} \right) \left( 1 - \frac{(m_t - m_b)^2}{m_{H^\pm}^2} \right) \right]^{1/2} \nonumber\\ & \times \left[ \left( | \lambda_{t_L}^{H^-} |^2 + | \lambda_{t_R}^{H^-} |^2 \right) \left( 1 - \frac{m_t^2 + m_b^2}{m_{H^\pm}^2} \right) \right . \nonumber\\ & \qquad \left . - 2 \left( \lambda_{t_L}^{H^-} (\lambda_{t_R}^{H^-})^\ast + \lambda_{t_R}^{H^-} (\lambda_{t_L}^{H^-})^\ast \right) \frac{m_t m_b}{m_{H^\pm}^2} \right] . \end{align} By replacing $m_t$ with $m_c$ or $m_u$ and $\lambda_{t_{L,R}}^{H^-}$ with $\lambda_{c_{L,R}}^{H^-}$ or $\lambda_{u_{L,R}}^{H^-}$, one can obtain the decay widths of $H^+ \to c \bar b$ and $H^+ \to u \bar b$. The other fermionic decay modes are $H^+ \to c \bar s$ and $c \bar d$, whose decay widths are proportional to $\tan^2\beta |V_{cs}|^2$ and $\tan^2\beta |V_{cd}|^2$, respectively. The decay widths of the leptonic decay modes are given as \begin{equation} \Gamma (H^+ \to \ell^+ \nu) = \Gamma (H^- \to \ell^- \bar\nu) =\frac{m_\ell^2 \tan^2 \beta}{8\pi v^2} m_{H^\pm} \left( 1 - \frac{m_\ell^2}{m_{H^\pm}^2} \right)^2 . \end{equation} Meanwhile, if $H^+ \to W^+ A$ and $W^+ H$ are kinematically forbidden, the only non-fermionic decay mode is $H^+ \to W^+ h$. The decay width is \begin{align} \Gamma(H^+ \to W^+ h) &= \Gamma(H^- \to W^- h) \nonumber\\ &= \frac{g^2 \cos^2 (\alpha-\beta) m_{H^\pm}^3}{64\pi m_W^2} \left[ \left( 1 - \frac{m_W^2}{m_{H^\pm}^2} - \frac{m_h^2}{m_{H^\pm}^2}\right)^2 - \frac{4 m_W^2 m_h^2}{m_{H^\pm}^4}\right]^{3/2} . \end{align} By combining all the decay modes above, we obtain the branching ratios of the heavy charged Higgs, which are shown in Fig.~\ref{fig:br_charged}. \begin{figure}[tb!] \begin{center} \includegraphics[width=0.45\textwidth]{br_higgs_charged_y33u_sm_1.pdf} \includegraphics[width=0.45\textwidth]{br_higgs_charged_y33u_sm_2.pdf} \end{center} \caption{Branching ratios of the heavy charged Higgs $H^\pm$. $\tan\beta = 1$ (left panel) and $\tan\beta = 0.5$ (right panel) have been taken.
$\cos (\alpha - \beta) = 0.05$ and $y_{33}^u = y_t^\text{SM}$ for both panels.\label{fig:br_charged}} \end{figure} Interestingly, the dominant decay mode of the charged Higgs boson is $H^+ \to W^+ h$ if it is kinematically allowed, although we have taken the alignment limit. $H^+ \to t \bar b$ is subdominant. Together with the production, we expect that $p p \to H^\pm b \to W^\pm h + b$ can serve as an important process for probing the charged Higgs boson at the LHC and future hadron colliders. Most LHC searches for $W^+ h$ have been dedicated to heavy resonances~\cite{LHC-cH1} that decay directly into $W^+ h$, so our model is not constrained by $W^+ h$ searches at the moment. On the other hand, the $t{\bar b}$ mode is next-to-dominant and it is not constrained by the current LHC data~\cite{LHC-cH2}, because the production cross section for the heavy charged Higgs in our model is less than 10~fb in most of the parameter space. \section{Conclusions} We have considered an extra local $U(1)$ with flavor-dependent couplings as a linear combination of $B_3-L_3$ and $L_\mu-L_\tau$, which has been recently proposed to explain the $B$-meson anomalies. In our model, we have reproduced the correct flavor structure of the quark sector due to the VEV of the second Higgs doublet, at the expense of new flavor-violating couplings for quarks and the violation of lepton universality. The extra gauge boson leads to flavor-violating interactions for down-type quarks appropriate for explaining $B$-meson anomalies in $R_{K^{(*)}}$, whereas the heavy Higgs bosons give up-type quarks modified flavor-conserving Yukawa couplings and down-type quarks flavor-violating Yukawa couplings. We also found that the $B$-meson anomalies in $R_{D^{(*)}}$ cannot be explained by the charged Higgs boson in our model, due to small flavor-violating couplings. We showed how the extended Higgs sector can be constrained by unitarity and stability, Higgs and electroweak precision data, and $B$-meson decays/mixings. Taking the alignment limit of heavy Higgs bosons from Higgs precision data, we also investigated the production of heavy Higgs bosons at the LHC\@. We found that there are reductions in the cross sections of the usual production channels in 2HDM, such as $pp\rightarrow H$ and $pp\rightarrow H^\pm t$ at the LHC\@. In addition, new production channels such as $pp\rightarrow Hb$ and $pp\rightarrow H^\pm b$ become important for $\tan\beta\lesssim 1$. Decay products of heavy Higgs bosons lead to interesting collider signatures due to large branching fractions of the $bd+bs$ modes for neutral Higgs bosons and the $W^\pm h$ mode for the charged Higgs boson if kinematically allowed, thus calling for more dedicated analyses at the LHC\@. \section*{Acknowledgments} The work is supported in part by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2016R1A2B4008759). The work of LGB is partially supported by the National Natural Science Foundation of China (under Grant No. 11605016), Korea Research Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017H1D3A1A01014046). The work of CBP is supported by IBS under the project code, IBS-R018-D1.
\appendices% \section{The extended Higgs sector\label{app:higgs_sector}} By using the minimization condition of the Higgs potential given by \begin{align} \mu_1^2 &=\frac{\sqrt{2} \mu v_2 v_s-2 \lambda_1 v_1^3-2 \lambda_3 v_1 v_2^2-2 \lambda_4 v_1 v_2^2-2 \kappa_1 v_1 v_s^2}{2 v_1}, \nonumber\\ \mu_2^2 &= \frac{\sqrt{2} \mu v_1 v_s -2 \lambda_3 v_1^2 v_2-2\lambda_4 v_1^2 v_2-2\lambda_2 v_2^3-2 \kappa_2 v_2 v_s^2}{2 v_2}, \nonumber\\ m_s^2 &= \frac{\sqrt{2} \mu v_1 v_2 -2 \kappa_1 v_1^2 v_s-2 \kappa_2 v_2^2 v_s -2 \lambda_S v_s^3}{2 v_s}, \end{align} the mass matrix for $CP$-even scalars can be written as \begin{equation} M_S = \begin{pmatrix} 2 \lambda_1 v_1^2+\frac{\mu v_2 v_s}{\sqrt{2} v_1}&2 v_1 v_2 (\lambda_3+\lambda_4)-\frac{\mu v_s}{\sqrt{2}} & 2\kappa_1 v_1 v_s-\frac{\mu v_2}{\sqrt{2}} \\ 2 v_1 v_2 (\lambda_3+\lambda_4)-\frac{\mu v_s}{\sqrt{2}} & 2 \lambda_2 v_2^2+\frac{\mu v_1 v_s}{\sqrt{2}v_2} & 2\kappa_2 v_2 v_s-\frac{\mu v_1}{\sqrt{2}} \\ 2 \kappa_1 v_1 v_s-\frac{\mu v_2}{\sqrt{2}} & 2 \kappa_2 v_2 v_s-\frac{\mu v_1}{\sqrt{2}} & 2\lambda_S v_s^2+\frac{\mu v_1 v_2}{\sqrt{2} v_s} \end{pmatrix}. \label{h0matrix} \end{equation} We introduce a rotation matrix $R$ to change the interaction basis $(\rho_1, \rho_2, S_R)$ into the physical mass eigenstates $h_1$, $h_2$, and $h_3$ as \begin{equation*} \begin{pmatrix} h_1 \\ h_2 \\ h_3 \end{pmatrix} = R \begin{pmatrix} \rho_1 \\ \rho_2 \\ S_R \end{pmatrix}. \end{equation*} The mass matrix $M_S$ can then be diagonalized as \begin{equation} R M_S R^\mathsf{T} = \mbox{diag}(m_{h_1}^2,m_{h_2}^2,m_{h_3}^2). \end{equation} We use a convention such that the mass eigenstates are ordered as $m_{h_1} < m_{h_2} < m_{h_3}$. Here, the orthogonal matrix $R$ is parametrized in terms of the mixing angles $\alpha_1$ to $\alpha_3$ as \begin{equation} R = \begin{pmatrix} c_{\alpha_1} c_{\alpha_2} & s_{\alpha_1} c_{\alpha_2} & s_{\alpha_2}\\ -(c_{\alpha_1} s_{\alpha_2} s_{\alpha_3} + s_{\alpha_1} c_{\alpha_3}) & c_{\alpha_1} c_{\alpha_3} - s_{\alpha_1} s_{\alpha_2} s_{\alpha_3} & c_{\alpha_2} s_{\alpha_3} \\ - c_{\alpha_1} s_{\alpha_2} c_{\alpha_3} + s_{\alpha_1} s_{\alpha_3} & -(c_{\alpha_1} s_{\alpha_3} + s_{\alpha_1} s_{\alpha_2} c_{\alpha_3}) & c_{\alpha_2} c_{\alpha_3} \end{pmatrix}, \end{equation} where $s_{\alpha_i} \equiv \sin\alpha_i$ and $c_{\alpha_i} \equiv \cos\alpha_i$. Without loss of generality the angles can be chosen in the range of \begin{equation*} - \frac{\pi}{2} \le \alpha_{1,2,3} < \frac{\pi}{2}. \end{equation*} In the text we focus mainly on the situation where mixings between $\rho_{1,2}$ and $S_R$ are small. The mass eigenvalues of $CP$-even neutral scalars are given by \begin{align} m_{h_1}^2&=\frac{1}{2} (a+b -\sqrt{D})\equiv m_h^2, \nonumber \\ m_{h_2}^2&=\frac{1}{2} (a+b+\sqrt{D})\equiv m_H^2, \nonumber \\ m_{h_3}^2&=2\lambda_S v_s^2+\frac{\mu v_1 v_2}{\sqrt{2} v_s}\equiv m_s^2 , \label{h0s} \end{align} where \begin{equation} a\equiv 2 \lambda_1 v_1^2+\frac{\mu v_2 v_s}{\sqrt{2} v_1} ,\quad b\equiv 2 \lambda_2 v_2^2+\frac{\mu v_1 v_s}{\sqrt{2}v_2}, \quad D\equiv (a-b)^2+4 d^2 \end{equation} with $d\equiv 2 v_1 v_2 (\lambda_3+\lambda_4)-\mu v_s / \sqrt{2}$. We can trade the quartic couplings $\lambda_{1,2}$, $\lambda_3+\lambda_4$, $\lambda_S$, and $\kappa_{1,2}$ for the mixing angles and Higgs masses:
\begin{align} \lambda_1&=\frac{2 \sum_i m_{h_i}^2 R_{i1}^2-\sqrt{2} \mu v_s \tan\beta}{4 v^2 \cos^2 \beta} ,\nonumber\\ \lambda_2&=\frac{2 \sum_i m_{h_i}^2 R_{i2}^2-\sqrt{2} \mu v_s \cot\beta}{4 v^2 \sin^2 \beta} ,\nonumber\\ \lambda_3+\lambda_4&=\frac{\sqrt{2} \mu v_s+2 \sum_i m_{h_i}^2 R_{i1} R_{i2}}{4 v^2 \sin 2\beta} ,\nonumber\\ \lambda_S&=\frac{2 v_s \sum_i m_{h_i}^2 R_{i3}^2-\sqrt{2} \mu v^2 \sin\beta \cos\beta}{4 v_s^3} ,\nonumber\\ \kappa_1&=\frac{\sqrt{2}\mu v \sin\beta+2 \sum_i m_{h_i}^2 R_{i1}R_{i3}}{4 v v_s \cos\beta}, \nonumber\\ \kappa_2&=\frac{\sqrt{2}\mu v \cos\beta+2 \sum_i m_{h_i}^2 R_{i2}R_{i3}}{4 v v_s \sin\beta}. \label{lams0} \end{align} In the case when the Higgs mixings with the singlet scalar are negligible, $\alpha_2 \simeq \alpha_3 \simeq 0$, the rotation matrix can be simplified as \begin{equation} R \approx \begin{pmatrix} \cos\alpha & \sin\alpha & 0\\ -\sin\alpha & \cos\alpha & 0\\ 0 & 0 & 1 \end{pmatrix}, \end{equation} where $\alpha = \alpha_1$. Then the Higgs quartic couplings are given by \begin{eqnarray} \lambda_1 & \approx & \frac{2 (m_h^2\cos^2\alpha + m_H^2 \sin^2\alpha ) - \sqrt{2} \mu v_s \tan\beta}{4 v^2 \cos^2 \beta} , \nonumber \\ \lambda_2 & \approx & \frac{2 (m_h^2\sin^2\alpha + m_H^2 \cos^2\alpha ) - \sqrt{2} \mu v_s \cot\beta}{4 v^2 \sin^2 \beta} , \nonumber \\ \lambda_3 + \lambda_4 &\approx & \frac{(m_h^2 - m_H^2) \sin 2\alpha + \sqrt{2} \mu v_s}{4 v^2 \sin 2\beta}. \label{lams} \end{eqnarray} Here $h = h_1$ with $m_h = 125$~GeV and $H = h_2$. These relations show that the values of the quartic couplings can be evaluated solely from $m_H$ once a benchmark point is chosen in terms of $\mu v_s$, $\tan\beta$, and $\sin\alpha$. \section{Unitarity bounds} \label{app:unitarity_bounds} The initial scattering states can be classified by hypercharges and isospins~\cite{Akeroyd:2000wc,Ginzburg:2005dt,Kanemura:2015ska}. In the basis of $(\phi_1^+ \phi_1^-$, $\phi_2^+ \phi_2^-$, $\eta_1\eta_1/\sqrt{2}$, $\rho_1\rho_1/\sqrt{2}$, $\eta_2\eta_2/\sqrt{2}$, $\rho_2\rho_2/\sqrt{2}$, $S_R S_R/\sqrt{2}$, $S_I S_I/\sqrt{2})$, the scattering amplitude matrix is \begin{equation} \mathcal{M}_1= \begin{pmatrix} 4 \lambda_1 & 2 (\lambda_3+\lambda_4) & \sqrt{2}\lambda_1 & \sqrt{2}\lambda_1 & \sqrt{2} \lambda_3 & \sqrt{2} \lambda_3 & \sqrt{2} \kappa_1 & \sqrt{2} \kappa_1 \\ 2 (\lambda_3+\lambda_4) & 4 \lambda_2 & \sqrt{2} \lambda_3 & \sqrt{2} \lambda_3 & \sqrt{2} \lambda_2 & \sqrt{2}\lambda_2 & \sqrt{2}\kappa_2 & \sqrt{2} \kappa_2 \\ \sqrt{2}\lambda_1 & \sqrt{2}\lambda_3 & 3 \lambda_1 & \lambda_1 & \lambda_3+\lambda_4 & \lambda_3+\lambda_4 & \kappa_1 &\kappa_1 \\ \sqrt{2} \lambda_1 & \sqrt{2} \lambda_3 & \lambda_1 & 3 \lambda_1 & \lambda_3+\lambda_4 & \lambda_3+\lambda_4 & \kappa_1 & \kappa_1 \\ \sqrt{2} \lambda_3& \sqrt{2} \lambda_2& \lambda_3+\lambda_4 &\lambda_3+\lambda_4& 3\lambda_2 & \lambda_2 &\kappa_2 & \kappa_2 \\ \sqrt{2}\lambda_3 & \sqrt{2}\lambda_2 & \lambda_3+\lambda_4 & \lambda_3+\lambda_4 &\lambda_2 & 3 \lambda_2 & \kappa_2 &\kappa_2 \\ \sqrt{2} \kappa_1 & \sqrt{2} \kappa_2 & \kappa_1 & \kappa_1 & \kappa_2 & \kappa_2 & 3 \lambda_S & \lambda_S \\ \sqrt{2} \kappa_1 & \sqrt{2} \kappa_2 & \kappa_1 &\kappa_1 & \kappa_2 &\kappa_2 &\lambda_S & 3 \lambda_S \end{pmatrix}, \end{equation} whose eigenvalues are $2\lambda_1$, $2\lambda_2$, $\lambda_1+\lambda_2\pm\sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2}$, and $2\lambda_S$, together with the three roots $a_{1,2,3}$ of Eq.~(\ref{cubic}) below.
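As a cross-check, the quoted closed-form eigenvalues can be compared against a numerical diagonalization of $\mathcal{M}_1$ (a sketch with illustrative couplings):
\begin{verbatim}
import numpy as np

l1, l2, l3, l4, lS, k1, k2 = 0.5, 0.4, 0.3, -0.2, 0.6, 0.1, 0.2
r = np.sqrt(2.0)
M1 = np.array([
  [4*l1,      2*(l3+l4), r*l1,  r*l1,  r*l3,  r*l3,  r*k1, r*k1],
  [2*(l3+l4), 4*l2,      r*l3,  r*l3,  r*l2,  r*l2,  r*k2, r*k2],
  [r*l1,      r*l3,      3*l1,  l1,    l3+l4, l3+l4, k1,   k1  ],
  [r*l1,      r*l3,      l1,    3*l1,  l3+l4, l3+l4, k1,   k1  ],
  [r*l3,      r*l2,      l3+l4, l3+l4, 3*l2,  l2,    k2,   k2  ],
  [r*l3,      r*l2,      l3+l4, l3+l4, l2,    3*l2,  k2,   k2  ],
  [r*k1,      r*k2,      k1,    k1,    k2,    k2,    3*lS, lS  ],
  [r*k1,      r*k2,      k1,    k1,    k2,    k2,    lS,   3*lS],
])
print(np.sort(np.linalg.eigvalsh(M1)))
# compare with 2*l1, 2*l2, 2*lS, l1+l2 +- sqrt((l1-l2)**2 + 4*l4**2),
# and the roots a_{1,2,3} of the cubic in Eq. (cubic)
\end{verbatim}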
In the basis of $(\phi_1^+ S_R$, $\phi_2^+ S_R$, $\phi_1^+ S_I$, $\phi_2^+ S_I)$, the submatrix is given by \begin{equation} \mathcal{M}_2= \begin{pmatrix} 2 \kappa_1 & 0 & 0 & 0 \\ 0 & 2\kappa_2& 0 & 0 \\ 0 & 0 & 2 \kappa_1 & 0 \\ 0 & 0 & 0 & 2 \kappa_2 \end{pmatrix} \end{equation} with eigenvalues being $2\kappa_{1,2}$. In the basis of $(\rho_1 \eta_1$, $\rho_2\eta_2$, $S_R S_I)$, the matrix is \begin{equation} \mathcal{M}_3= \begin{pmatrix} 2 \lambda_1 & 0 & 0 \\ 0 & 2\lambda_2 & 0 \\ 0 & 0 & 2 \lambda_S \end{pmatrix} \end{equation} with eigenvalues being $2\lambda_{1,2,S}$. In the basis of $(\phi_1^+\phi_2^-$, $\phi_2^+\phi_1^-$, $\rho_1\eta_2$, $\rho_2\eta_1$, $\eta_1\eta_2$, $\rho_1\rho_2)$, we have \begin{equation} \mathcal{M}_4= \begin{pmatrix} 0 & 2 \lambda_3+2 \lambda_4 & i \lambda_4 & -i \lambda_4 &\lambda_4& \lambda_4 \\ 2\lambda_3+2 \lambda_4 & 0 & -i \lambda_4 & i \lambda_4 & \lambda_4& \lambda_4 \\ i \lambda_4& -i \lambda_4 & 2 \lambda_3+2 \lambda_4& 0 & 0 & 0 \\ -i \lambda_4 & i \lambda_4 & 0 &2 \lambda_3+2 \lambda_4 & 0 & 0 \\ \lambda_4 &\lambda_4 & 0 & 0 & 2 \lambda_3+2 \lambda_4 & 0 \\ \lambda_4 & \lambda_4 & 0 & 0 & 0 & 2 \lambda_3+2 \lambda_4 \end{pmatrix} \end{equation} with eigenvalues being $2 \lambda _3$, $2 (\lambda_3+\lambda_4)$, $2 (\lambda_3+2 \lambda_4)$, and $\pm 2 \sqrt{\lambda_3 (\lambda_3+2 \lambda_4)}$. Finally, in the basis of $(\rho_1 \phi_1^+$, $\rho_2\phi_1^+$, $\eta_1\phi_1^+$, $\eta_2\phi_1^+$, $\rho_1\phi_2^+$, $\rho_2\phi_2^+$, $\eta_1\phi_2^+$, $\eta_2\phi_2^+)$, we obtain the matrix \begin{equation} \mathcal{M}_5= \begin{pmatrix} 2 \lambda_1 & 0 & 0 & 0 & 0 & \lambda_4 & 0 & i \lambda_4 \\ 0 & 2 \lambda_3 & 0 & 0 &\lambda_4 & 0 & -i \lambda_4 & 0 \\ 0 & 0 & 2 \lambda_1 & 0 & 0 & -i \lambda_4 & 0 & \lambda_4 \\ 0 & 0 & 0 & 2 \lambda_3 & i \lambda_4 & 0 & \lambda_4 & 0 \\ 0 & \lambda_4 & 0 & -i \lambda_4 & 2\lambda_3 & 0 & 0 & 0 \\ \lambda_4 & 0 & i \lambda_4 & 0 & 0 & 2 \lambda_2 & 0 & 0 \\ 0 & i \lambda_4 & 0 & \lambda_4 & 0 & 0 & 2\lambda_3 & 0 \\ -i \lambda_4 & 0 & \lambda_4 & 0 & 0 & 0 & 0 & 2 \lambda_2 \end{pmatrix}, \end{equation} with eigenvalues being $2 \lambda_1$, $2 \lambda_2$, $2 \lambda_3$, $2 (\lambda_3\pm\lambda_4)$, and $\lambda_1+\lambda_2\pm\sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2}$. The eigenvalues obtained in the above are constrained by unitarity as \begin{align} |2\lambda_{1,2,3,S}|\leq8\pi ,\quad |2\kappa_{1,2}| \leq 8\pi ,& \nonumber\\ |2(\lambda_3\pm\lambda_4)| \leq 8\pi ,\quad |2(\lambda_3+2\lambda_4)| \leq 8\pi ,\quad |2\sqrt{\lambda_3(\lambda_3+2\lambda_4)}| \leq 8 \pi ,& \nonumber\\ |\lambda_1+\lambda_2\pm\sqrt{(\lambda_1-\lambda_2)^2+4\lambda_4^2}| \leq 8 \pi, &\nonumber\\ |a_{1,2,3}| \leq 8 \pi. & \end{align} Here $a_{1,2,3}$ are the three remaining eigenvalues of $\mathcal{M}_1$, given by the roots of the following cubic equation: \begin{align} & x^3-2 x^2 (3 \lambda_1+3 \lambda_2+2 \lambda_S)-4 x \left(2 \kappa_1^2+2 \kappa_2^2-9 \lambda_1 \lambda_2-6 \lambda_1 \lambda_S-6 \lambda_2 \lambda_S+4 \lambda_3^2+4 \lambda_3 \lambda_4+ \lambda_4^2\right)\nonumber\\ & +16 \left(3 \kappa_1^2\lambda_2-2 \kappa_1\kappa_2 (2\lambda_3+ \lambda_4)+3 \kappa_2^2 \lambda_1+\lambda_S \left((2 \lambda_3+\lambda_4)^2-9 \lambda_1 \lambda_2\right)\right)=0.
\label{cubic} \end{align} \section{The quark Yukawa couplings} \label{app:quark_yukawa} The quark Yukawa couplings in the interaction basis are given by \begin{align} -{\cal L}_{Y}^q = &~ \frac{1}{\sqrt{2}} {\bar u}_L \bigg( (\rho_1-i\eta_1)y^u+ (\rho_2-i\eta_2)h^u \bigg) u_R \nonumber \\ &+ \frac{1}{\sqrt{2}} {\bar d}_L \bigg( (\rho_1+i\eta_1)y^d+ (\rho_2+i\eta_2)h^d \bigg) d_R \nonumber\\ &- {\bar d}_L \Big( y^u (\phi^+_1)^*+h^u (\phi^+_2)^*\Big) u_R + {\bar u}_L \Big(y^d\phi^+_1+h^d \phi^+_2 \Big) d_R +{\rm h.c.} \end{align} In the basis of mass eigenstates the quark Yukawa interactions of the $CP$-even neutral scalars are \begin{equation} -{\cal L}_{Y}^q = ({\bar u}'_L Y^u_{H_i} u'_R + {\bar d}'_L Y^d_{H_i} d'_R) H_i +{\rm h.c.}, \end{equation} where primed fields are mass eigenstates, and \begin{align} Y^u_{H_1} &=-\frac{R_{11}}{v\cos\beta}\, M^D_u +\frac{R_{11}\tan\beta-R_{12}}{\sqrt{2}}\, {\tilde h}^u, \nonumber\\ Y^d_{H_1} &=-\frac{R_{11}}{v\cos\beta}\, M^D_d+\frac{R_{11}\tan\beta-R_{12}}{\sqrt{2}}\, {\tilde h}^d, \nonumber\\ Y^u_{H_2} &=- \frac{R_{21}}{v\cos\beta}\, M^D_u +\frac{R_{21}\tan\beta-R_{22}}{\sqrt{2}}\, {\tilde h}^u, \nonumber\\ Y^d_{H_2} &=-\frac{R_{21}}{v\cos\beta}\, M^D_d+\frac{R_{21}\tan\beta-R_{22}}{\sqrt{2}}\, {\tilde h}^d, \nonumber\\ Y^u_{H_3} &=- \frac{R_{31}}{v\cos\beta}\, M^D_u +\frac{R_{31}\tan\beta-R_{32}}{\sqrt{2}}\, {\tilde h}^u, \nonumber\\ Y^d_{H_3} &=-\frac{R_{31}}{v\cos\beta}\, M^D_d+\frac{R_{31}\tan\beta-R_{32}}{\sqrt{2}}\, {\tilde h}^d. \end{align} Assuming the singlet scalars are decoupled and using Eqs.~(\ref{base-h}) to~(\ref{base-chargedH}), the above quark Yukawa interactions become \begin{align} -{\cal L}_{q,Y} =&~({\bar u}'_L Y^{u}_h u'_R + {\bar d}'_L Y^{d}_h d'_R) h +({\bar u}'_L Y^u_H u'_R + {\bar d}'_L Y^d_H d'_R) H\nonumber \\ &+i( {\bar u}'_L Y^u_A u'_R+ {\bar d}'_L Y^d_A d'_R )A^0 \nonumber \\ & +{\bar u}'(Y_{2,H^+}P_R+Y_{1,H^+}P_L) d' H^+ +{\rm h.c.} , \end{align} where \begin{align} Y^u_h &=-\frac{\sin\alpha}{v\cos\beta}\, M^D_u +\frac{\cos(\alpha-\beta)}{\sqrt{2}\cos\beta}\, {\tilde h}^u, \nonumber\\ Y^d_h &=-\frac{\sin\alpha}{v\cos\beta}\, M^D_d+\frac{\cos(\alpha-\beta)}{\sqrt{2}\cos\beta}\, {\tilde h}^d, \nonumber\\ Y^u_H &= \frac{\cos\alpha}{v\cos\beta}\, M^D_u +\frac{\sin(\alpha-\beta)}{\sqrt{2}\cos\beta}\, {\tilde h}^u, \nonumber\\ Y^d_H &=\frac{\cos\alpha}{v\cos\beta}\, M^D_d+\frac{\sin(\alpha-\beta)}{\sqrt{2}\cos\beta}\, {\tilde h}^d, \nonumber\\ Y^u_A &=-\frac{\tan\beta}{v} M^D_u+\frac{1}{\sqrt{2}\cos\beta}\, {\tilde h}^u, \nonumber\\ Y^d_A &=\frac{\tan\beta}{v} M^D_d-\frac{1}{\sqrt{2}\cos\beta}\, {\tilde h}^d, \nonumber\\ Y_{1,H^+} &= -\bigg(\frac{\sqrt{2} \tan\beta}{v} M^D_u-\frac{1}{\cos\beta}\, ({\tilde h}^u)^\dagger \bigg)V_{\rm CKM}, \nonumber\\ Y_{2,H^+} &=V_{\rm CKM} \bigg(\frac{\sqrt{2} \tan\beta}{v} M^D_d-\frac{1}{\cos\beta}\, {\tilde h}^d \bigg) \end{align} with \begin{equation} {\tilde h}^u \equiv U^\dagger_L h^u U_R , \quad {\tilde h}^d \equiv D^\dagger_L h^d D_R. \end{equation} For $U_L=1$, we have ${\tilde h}^u=h^u U_R$.
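The alignment behaviour of these couplings can be checked directly. The following sketch (ours, with arbitrary test inputs) evaluates $Y^u_h$ as defined above at $\alpha=\beta-\pi/2$ and confirms that it reduces to the SM-like form $M^D_u/v$, with the flavour-violating ${\tilde h}^u$ piece dropping out:
\begin{verbatim}
import numpy as np

v, beta = 246.0, np.arctan(2.0)          # test point with tan(beta) = 2
alpha = beta - np.pi / 2                 # the alignment limit
Mu = np.diag([0.002, 1.27, 173.0])       # up-type masses in GeV (test inputs)
ht = np.zeros((3, 3)); ht[2, 2] = 0.7    # some nonzero htilde^u entry
Yu_h = (-np.sin(alpha) / (v * np.cos(beta)) * Mu
        + np.cos(alpha - beta) / (np.sqrt(2) * np.cos(beta)) * ht)
assert np.allclose(Yu_h, Mu / v)         # SM-like coupling, no FCNC piece
\end{verbatim}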
It then follows that \begin{align} {\tilde h}^u_{31}&= \frac{1}{\sqrt{2}} \frac{v\cos\beta}{m_u} \Big(h^u_{31} (y^u_{11})^*+ h^u_{32} (y^u_{12})^* \Big)=0, \nonumber\\ {\tilde h}^u_{32}&= \frac{1}{\sqrt{2}} \frac{v\cos\beta}{m_c} \Big(h^u_{31} (y^u_{21})^*+h^u_{32} (y^u_{22})^* \Big)=0, \nonumber\\ {\tilde h}^u_{33}&= \frac{1}{\sqrt{2}} \frac{v\sin\beta}{m_t} \Big( |h^u_{31}|^2+|h^u_{32}|^2 \Big) \nonumber\\ &=\frac{\sqrt{2} m_t}{v\sin\beta}\Big(1-\frac{v^2\cos^2\beta}{2m^2_t}\,|y^u_{33}|^2 \Big), \end{align} where use is made of Eqs.~(\ref{UR2}), (\ref{UR4}), and (\ref{UR5}). Other components of ${\tilde h}^u$ vanish. Moreover, with ${\tilde h}^d=V^\dagger_{\rm CKM} h^d$ and using Eq.~(\ref{hd}) for $h^d_{13}$ and $h^d_{23}$, we obtain the nonzero components of ${\tilde h}^d$ as \begin{align} {\tilde h}^d_{13} &= V^*_{ud} h^d_{13}+ V^*_{cd} h^d_{23}=\frac{\sqrt{2}m_b}{v\sin\beta}\Big(V^*_{ud}V_{ub} +V^*_{cd}V_{cb}\Big)=1.80\times 10^{-2}\Big(\frac{m_b}{v\sin\beta}\Big), \nonumber\\ {\tilde h}^d_{23}&= V^*_{us} h^d_{13}+ V^*_{cs} h^d_{23}= \frac{\sqrt{2}m_b}{v\sin\beta}\Big(V^*_{us} V_{ub}+V^*_{cs} V_{cb}\Big)=5.77\times 10^{-2}\Big(\frac{m_b}{v\sin\beta}\Big), \nonumber\\ {\tilde h}^d_{33}&= V^*_{ub}h^d_{13}+ V^*_{cb} h^d_{23}=\frac{\sqrt{2}m_b}{v\sin\beta}\Big( V^*_{ub} V_{ub}+ V^*_{cb}V_{cb}\Big)=2.41\times 10^{-3}\Big(\frac{m_b}{v\sin\beta}\Big). \end{align} \section{{\boldmath$U(1)'$} interactions} \label{app:u1_ints} The gauge kinetic terms and mass terms for $U(1)'$ and $U(1)_Y$ are \begin{align} {\cal L}_{\rm g. kin}= & -\frac{1}{4} B_{\mu\nu} B^{\mu\nu}-\frac{1}{4} Z'_{\mu\nu} Z^{\prime\mu\nu}-\frac{1}{2} \sin\xi Z'_{\mu\nu} B^{\mu\nu} \nonumber \\ &-\frac{1}{2} V^T_\mu M^2_V V^\mu , \end{align} where $V_\mu= (B_\mu, W^3_\mu, Z'_\mu)^\mathsf{T}$, and \begin{equation} M^2_V= \begin{pmatrix} m^2_Z s^2_W & -m^2_Z c_W s_W & \frac{1}{2}c^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 \\ -m^2_Z c_W s_W & m^2_Z c^2_W & -\frac{1}{2}s^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 \\ \frac{1}{2}c^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 & -\frac{1}{2}s^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 & m^2_{Z'} \end{pmatrix}. \end{equation} After diagonalizing the terms simultaneously with \begin{align} \begin{pmatrix} B_\mu \\ W_\mu^3 \\ Z_\mu^\prime \end{pmatrix} &= \begin{pmatrix} c_W & -s_W & -t_\xi \\ s_W & c_W & 0 \\ 0 & 0 & 1/c_\xi \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_\zeta & s_\zeta \\ 0 & -s_\zeta & c_\zeta \end{pmatrix} \begin{pmatrix} A_\mu \\ Z_{1\mu} \\ Z_{2\mu} \end{pmatrix} \nonumber\\ &= \begin{pmatrix} c_W & -s_W c_\zeta+t_\xi s_\zeta & -s_W s_\zeta-t_\xi c_\zeta \\ s_W & c_W c_\zeta & c_W s_\zeta \\ 0 & -s_\zeta/c_\xi & c_\zeta /c_\xi \end{pmatrix} \begin{pmatrix} A_\mu \\ Z_{1\mu} \\ Z_{2\mu} \end{pmatrix} , \end{align} where $\zeta$ is the mass mixing angle and $s_W\equiv \sin\theta_W, c_W\equiv \cos\theta_W$, etc., we obtain the mass eigenvalues for the massive gauge bosons: \begin{equation} m^2_{Z_{1,2}}= \frac{1}{2} \Big(m^2_Z+m^2_{22}\mp \sqrt{(m^2_Z-m^2_{22})^2+4 m^4_{12}} \Big). \end{equation} Here $m^2_Z\equiv (g^2+g^2_Y) v^2/4$ and \begin{align} m^2_{22}&\equiv m^2_Z s^2_W t^2_\xi + m^2_{Z'}/c^2_\xi - c^{-1}_W e g_{Z'} Q'_{H_2} v^2_2 t_\xi/c_\xi, \nonumber\\ m^2_{12}&\equiv m^2_Z s_W t_\xi - \frac{1}{2} c^{-1}_W s^{-1}_W e g_{Z'} Q'_{H_2} v^2_2/c_\xi.
\end{align} We can rewrite the $Z$-boson-like mass in terms of the heavy $Z'$ mass and the mixing angle $\zeta$ as \begin{equation} m^2_{Z_1}=\frac{2m^2_Z \sec2\zeta+ m^2_{Z_2}(1-\sec 2\zeta)}{1+\sec2\zeta}, \label{Zmass} \end{equation} and the mixing angle as \begin{equation} \tan2\zeta = \frac{2m^2_{12} (m^2_{Z_2}-m^2_Z)}{(m^2_{Z_2}-m^2_Z)^2-m^4_{12}}. \label{Zmix} \end{equation} We note that the modified $Z$-boson mass is constrained by electroweak precision data, in particular the $\Delta\rho$ or $T$ parameter. The current interactions including $Z'$ are given by \begin{align} \mathcal{L}_g = &~B_\mu J_B^\mu + W_\mu^3 J_3^\mu + Z_\mu^\prime J_{Z^\prime}^\mu \nonumber\\ =&~A_\mu J_\text{EM}^\mu + Z_{1\mu} \Big( t_\xi s_\zeta c_W J_\text{EM}^\mu + ( c_\zeta - t_\xi s_\zeta s_W) J_Z^\mu - s_\zeta J_{Z^\prime}^\mu / c_\xi \Big) \nonumber\\ & + Z_{2\mu} \Big( -t_\xi c_\zeta c_W J_\text{EM}^\mu + (s_\zeta - t_\xi c_\zeta s_W) J_Z^\mu + c_\zeta J_{Z^\prime}^\mu / c_\xi \Big) \end{align} with \begin{align} J^\mu_{\rm EM} &= e {\bar f}\gamma^\mu Q_f f, \nonumber\\ J^\mu_Z &= \frac{e}{2c_W s_W} {\bar f} \gamma^\mu (\sigma^3-2s^2_W Q_f) f, \nonumber\\ J^\mu_{Z'} &= g_{Z'} {\bar f} \gamma^\mu Q_f' f. \end{align} Here $Q_f$ is the electric charge and $Q_f'$ is the $U(1)'$ charge of the fermion $f$. For a small gauge kinetic mixing and/or mass mixing $\zeta$, the $Z'$-like gauge boson $Z_{2\mu}$ couples to the electromagnetic current with the overall coefficient $\varepsilon = t_\xi c_\zeta c_W$. Ignoring the $Z$--$Z'$ mixing, the $Z'$ interaction terms are \begin{align} {\cal L}_{Z'} = &~g_{Z'} Z'_\mu \Big( \frac{1}{3}x\, {\bar t}\gamma^\mu t+\frac{1}{3}x\, {\bar b}\gamma^\mu b+y {\bar\mu}\gamma^\mu \mu+y\, {\bar \nu}_\mu \gamma^\mu P_L \nu_\mu-(x+y)\,{\bar \tau}\gamma^\mu \tau \nonumber \\ &-(x+y)\,{\bar \nu}_\tau \gamma^\mu P_L \nu_\tau+y\, {\bar \nu}_{2R}\gamma^\mu P_R \nu_{2R}-(x+y)\,{\bar \nu}_{3R} \gamma^\mu P_R \nu_{3R}\Big). \end{align} Now we change the basis into the one with mass eigenstates by $d_R=D_R d'_R$, $u_R=U_R u'_R$, $d_L=D_L d'_L$ and $u_L=U_L u'_L$ such that $V_{\rm CKM}=U^\dagger_L D_L$. Taking $D_R=U_L=1$ and $D_L=V_{\rm CKM}$, the above $Z'$ interactions become \begin{align} {\cal L}_{Z'}= &~g_{Z'} Z'_\mu \Big( \frac{1}{3}x\, {\bar t}'\gamma^\mu P_L t'+ \frac{1}{3}x\,\frac{v^2\cos^2\beta |y^u_{33}|^2}{2m^2_t} {\bar t}'\gamma^\mu P_R t'+\frac{1}{3}x\, {\bar d}'_i \gamma^\mu \Gamma^{dL}_{ij}P_L d'_j+\frac{1}{3}x\, {\bar b}'\gamma^\mu P_R b' \nonumber \\ &\qquad\quad +y {\bar\mu}\gamma^\mu \mu-(x+y)\,{\bar \tau}\gamma^\mu \tau+ y\, {\bar \nu}_\mu \gamma^\mu P_L \nu_\mu-(x+y)\,{\bar \nu}_\tau \gamma^\mu P_L \nu_\tau \nonumber \\ &\qquad\quad +y\, {\bar \nu}_{2R}\gamma^\mu P_R \nu_{2R}-(x+y)\,{\bar \nu}_{3R} \gamma^\mu P_R \nu_{3R}\Big),\end{align} where \begin{align} \Gamma^{dL} &\equiv V^\dagger_{\rm CKM} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} V_{\rm CKM} \nonumber \\ &= \begin{pmatrix} |V_{td}|^2 & V^*_{td} V_{ts} & V^*_{td} V_{tb} \\ V^*_{ts} V_{td} & |V_{ts}|^2 & V^*_{ts} V_{tb} \\ V^*_{tb} V_{td} & V^*_{tb} V_{ts} & |V_{tb}|^2 \end{pmatrix}.
\end{align} Considering the general mixing of $CP$-even scalars while ignoring the $Z$--$Z'$ mixing, we obtain the interactions of the $CP$-even scalars with the massive electroweak gauge bosons ($V=W$, $Z$) as \begin{align} {\cal L}_{V}^{h_i} = &~\frac{2m^2_W}{v}\left[ (\cos\beta R_{i1}+\sin\beta R_{i2})h_i+ \frac{1}{2v}\,h^2_i \right]W_\mu W^\mu \nonumber \\ &+ \frac{m^2_Z}{v} \left[(\cos\beta R_{i1}+\sin\beta R_{i2})h_i+ \frac{1}{2v}\, h^2_i\right] Z_\mu Z^\mu. \end{align} For a negligible mixing with the singlet scalar, the above couplings become \begin{align} {\cal L}_{V}^{h/H/A^0} = &~\frac{2m^2_W}{v} \left[-\sin(\alpha-\beta)h+\cos(\alpha-\beta)H+ \frac{1}{2v}( h^2+H^2+(A^0)^2) \right] W_\mu W^\mu \nonumber \\ &+ \frac{m^2_Z}{v} \left[-\sin(\alpha-\beta)h+\cos(\alpha-\beta)H +\frac{1}{2v}(h^2+H^2+(A^0)^2) \right] Z_\mu Z^\mu. \end{align} One can see that in the alignment limit with $\alpha=\beta-\pi/2$ the gauge interactions of $h$ are the same as for the SM Higgs, while the triple couplings of the heavy Higgs bosons to gauge bosons vanish.
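The $Z$--$Z'$ mass mixing relations of Eqs.~(\ref{Zmass}) and (\ref{Zmix}) can also be verified numerically. The sketch below (ours; the mass inputs are arbitrary test values in GeV$^2$) diagonalizes the $2\times2$ neutral mass matrix directly and checks both identities:
\begin{verbatim}
import numpy as np

mZsq, m22sq, m12sq = 91.19**2, 700.0**2, 50.0**2   # test inputs in GeV^2
mZ1sq, mZ2sq = np.linalg.eigvalsh(np.array([[mZsq, m12sq],
                                            [m12sq, m22sq]]))
zeta = 0.5 * np.arctan2(2 * m12sq, m22sq - mZsq)   # mass mixing angle
assert np.isclose(np.tan(2 * zeta),
                  2 * m12sq * (mZ2sq - mZsq)
                  / ((mZ2sq - mZsq)**2 - m12sq**2))   # eq. (Zmix)
sec2z = 1.0 / np.cos(2 * zeta)
assert np.isclose(mZ1sq,
                  (2 * mZsq * sec2z + mZ2sq * (1 - sec2z)) / (1 + sec2z))
# the second assertion reproduces eq. (Zmass) for the light eigenvalue
\end{verbatim}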
\section{Hindman's theorem} Hindman's theorem is the following statement. As the name suggests, it was established by Neil Hindman; see \cite{nH74}. \begin{theorem}[Hindman's theorem, \lp{HT}] If the natural numbers are colored with finitely many colors then there is an infinite set $S\subseteq \Nat$ such that the non-repeating, finite sums of $S$ \[ \textup{FS}(S) := \left\{\, \sum_{i\in I} s_i\sizeMid I \in \mathcal{P}_\textnormal{fin}(\Nat) \setminus \{ \emptyset \}\, \right\} \quad\text{where $(s_i)$ is the enumeration of $S$} \] are colored with only one color. \end{theorem} In the context of reverse mathematics \lp{HT} was first investigated by Blass, Hirst, and Simpson in \cite{BHS87}. There it was shown that it follows from \ls{ACA_0^+}, that is, \ls{ACA_0} plus the statement that for all $X$ the $\omega$-jump $X^{(\omega)}$ exists. The best known lower bound is \ls{ACA_0}, see also \cite{BHS87}. It is one of the big open questions of reverse mathematics what the exact strength of \lp{HT} is and whether it is equivalent to \ls{ACA_0}. There has been some partial progress on this question; however, no definite answer has been given. See \cite{jH04,aB05,hT12} and \cite[Section 2.3]{aM11}. As already said, Gowers' pigeonhole principle is a generalization of \lp{HT}. Below we will see that it is not provable in \ls{ACA_0}. To understand the statement of Gowers' pigeonhole principle we will first look at the following finite unions variant of Hindman's theorem. It is not too difficult to see that it is equivalent (relative to \ls{RCA_0}) to \lp{HT}, see \cite{BHS87}. \begin{theorem}[Hindman's theorem, finite unions variant] If the finite subsets of the natural numbers $\mathcal{P}_\textnormal{fin}(\Nat)$ are colored with finitely many colors there exists an infinite set $S\subseteq \mathcal{P}_\textnormal{fin}(\Nat)$ consisting of pairwise disjoint sets and such that the non-empty finite unions of $S$ \begin{equation}\label{eq:nu} \textup{NU}(S) := \{ s_1 \cup \dots \cup s_n \mid n\in\Nat\setminus \{0\}, s_i\in S, \max s_i < \min s_{i+1} \} \end{equation} are colored with only one color. \end{theorem} \section{Gowers' \fin theorem} Before we can formulate Gowers' \fin theorem we have to introduce some notation. The following definitions will be made in \ls{RCA_0}. Let $k\in \Nat$ and let $p\colon \Nat \longto [0,k]$. We call the set \[ \textup{supp}(p):= \{ n\in \Nat \mid p(n) \neq 0 \} \] the \emph{support} of $p$. The space \fin is defined as follows: \begin{align*} \fin &:= \left\{\, p\in {[0,k]}^\Nat \sizeMid \abs{\textup{supp}(p)} < \omega, \Exists{n} p(n)=k \,\right\} \\ & \phantom{:}= \left\{\, p\in {[0,k]}^{<\Nat} \sizeMid \Exists{n< \lth(p)} (p(n)=k) \,\right\} \end{align*} This space will play the role of $\mathcal{P}_\textnormal{fin}(\Nat)$ in Hindman's theorem. For $k=2$ it is actually isomorphic to $\mathcal{P}_\textnormal{fin}(\Nat)$. On \fin we define the following order \[ p < q \quad\text{if{f}}\quad \max \textup{supp}(p) < \min \textup{supp}(q) \] and the following partial addition for comparable elements \[ p + q \quad \text{for }p<q ,\] which is equal to the usual pointwise addition (defined when $p<q$). On \fin we will make use of the following, so-called ``tetris'' operation $T$: \[ T\colon \fin \longto \fin[k-1] \qquad T(p)(n) = p(n) \dotminus 1 .\] A \emph{block sequence} $B$ is an infinite increasing sequence $B=(b_n)_{n\in\Nat}$ in \fin.
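For intuition, the notions above can be rendered executably; the following toy sketch is ours (the representation of elements as dictionaries listing only the support is our choice, not part of the development):
\begin{verbatim}
# elements of FIN_k are represented as dicts {n: p(n)} over the support
def support(p):
    return sorted(p)

def in_fin(p, k):
    # p must take values in [1, k] on its support and attain k somewhere
    return all(1 <= v <= k for v in p.values()) and k in p.values()

def tetris(p):
    # T(p)(n) = p(n) - 1 truncated at 0; maps FIN_k into FIN_{k-1}
    return {n: v - 1 for n, v in p.items() if v > 1}

def less(p, q):
    # p < q iff max supp(p) < min supp(q)
    return max(support(p)) < min(support(q))

def add(p, q):
    # partial addition, defined only for p < q; pointwise sum
    assert less(p, q)
    return {**p, **q}

p, q = {0: 1, 2: 2}, {5: 2, 7: 1}
assert in_fin(p, 2) and in_fin(q, 2)
assert add(p, q) == {0: 1, 2: 2, 5: 2, 7: 1}
assert tetris(p) == {2: 1} and in_fin(tetris(p), 1)
\end{verbatim}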
The \emph{combinatorial space} $\langle B \rangle$ generated by $B$ is the smallest subset of \fin containing $B$ and closed under addition and tetris, i.e., \[ \langle B \rangle := \left\{ \sum_{n\in \Nat} T^{k-f(n)}(b_n) \mid f\in \fin \right\} .\] (Note that the above sum is finite since the support of $f$ is finite.) We can now formulate Gowers' \fin theorem. \begin{theorem}[Gowers' \fin theorem, \lp{FIN_{<\infty}}] For any $k\in\Nat$ and any finite coloring of $\fin$ there exists a combinatorial subspace colored by only one color. \end{theorem} We will denote the full version of Gowers' \fin theorem by \lp{FIN_{<\infty}} and the restriction to a particular $k$ by \lp{FIN_\mathnormal{k}}. It is clear that for $k=2$ this theorem is the same as Hindman's theorem since a combinatorial subspace in \fin[2] is just the set given in \eqref{eq:nu}. So in other words \[ \ls{RCA_0} \vdash \lp{HT} \IFF \lp{FIN_\mathnormal{2}} .\] Moreover, it is clear that $\lp{FIN_\mathnormal{k}} \IMPL \lp{FIN_\mathnormal{l}}$ if $k>l$ since \fin[l] can be embedded into \fin via the mapping $i(p)(n) = p(n) + k-l$ for $n\in\textup{supp}(p)$ and $i(p)(n)=0$ otherwise. \begin{remark}[\ls{RCA_0}] A combinatorial subspace $\langle B \rangle \subseteq \fin$ is isomorphic to \fin via the isomorphism $\Theta_B(f) := \sum_{n\in \Nat} T^{k-f(n)}(b_n)$. Thus, \lp{FIN_\mathnormal{k}} remains true if one colors only a combinatorial subspace instead of \fin. This variant is equivalent to \lp{FIN_\mathnormal{k}}. \end{remark} \section{A lower bound on \lp{FIN_\mathnormal{k}}} \begin{theorem}\label{thm:low} There exists a recursive coloring of \fin[k+1] with $2^k$ many colors, such that each monochromatic combinatorial subspace computes $\emptyset^{(k)}$. \end{theorem} Before we come to the proof we fix some notation and state a proposition. We will need computable approximations of the $n$-fold Turing jump. For this we shall write \[ \emptyset^{(n)}_{s_n,s_{n-1},\dots,s_1} \] for \[ (\cdots (\emptyset'_{s_1})'_{s_2} \dots )'_{s_n} .\] (In other words, the $s_n$-step approximation of the Turing jump of the $s_{n-1}$-step approximation of the Turing jump \dots\ of the $s_1$-step approximation of the unrelativized Turing jump.) If the tuple $(s_n,s_{n-1},\dots,s_m)$ is shorter than $n$, we shall take the true Turing jump for the missing indices. \begin{proposition}\label{pro:itlim} For each $n,m$ there exists a finite sequence $(m_1,\dots,m_n)$ such that \[ \emptyset^{(n)} \cap [0;m] = \emptyset^{(n)}_{m_n,\dots,m_1} \cap [0;m] .\] Moreover, we may assume that $(m_1,\dots,m_n)$ is such that, taking $m_{n+1}:=m$, \begin{equation}\label{eq:lim} \emptyset^{(i)} \cap [0;m_{i+1}] = \big(\emptyset^{(i-1)}\big)'_{s} \cap [0;m_{i+1}] \quad\text{for $s>m_i$} \end{equation} where $i\in [1;n]$. \end{proposition} \begin{proof} Applying the limit lemma---relative to $\emptyset^{(n-1)}$---we obtain an $m_n$ such that \[ \emptyset^{(n)} \cap [0; m_{n+1}] = \big(\emptyset^{(n-1)}\big)'_s \cap [0; m_{n+1}] \quad \text{for all $s>m_n$} .\] Iterating this process we obtain $(m_1,\dots,m_n)$ satisfying \eqref{eq:lim}. This sequence automatically satisfies the other statement of the proposition. \end{proof} For an $f\in \fin[k]$ with $k\ge 2$ we shall write $\mu_i(f):= \max \{ n \mid f(n) =i\}$ and $\lambda_i(f):= \min \{ n \mid f(n) =i\}$ and $\mu(f):= \max\{ n \mid f(n) \neq 0 \}$ and $\lambda(f) :=\min \{ n \mid f(n) \neq 0\}$. Note that $\mu_i,\lambda_i$ are undefined if $i$ is not in the image of $f$. However, $\mu,\lambda$ are by definition of $\fin[k]$ always defined.
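The generator $\Theta_B$ from the remark above is also easy to make concrete; the following sketch is ours, building on the representation of the previous sketch:
\begin{verbatim}
# Theta_B: given an increasing block sequence B in FIN_k and f in FIN_k
# whose support indexes into B, produce sum_n T^{k-f(n)}(b_n) in <B>.
def tetris(p):
    return {n: v - 1 for n, v in p.items() if v > 1}

def theta(B, f, k):
    out = {}
    for n, v in f.items():
        p = B[n]
        for _ in range(k - v):      # apply the tetris operation k - f(n) times
            p = tetris(p)
        out.update(p)               # supports are disjoint, so union = sum
    return out

B = [{0: 2}, {1: 1, 2: 2}, {4: 2}]  # an increasing block sequence in FIN_2
assert theta(B, {0: 1, 2: 2}, 2) == {0: 1, 4: 2}
\end{verbatim}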
\begin{proof}[Proof of \prettyref{thm:low}] This proof is inspired by Theorem~2.2 of \cite{BHS87}. Let $f\in \bigcup_{k'\le k}\fin[k'+1]$. Fix an $i\ge 1$ and let $(n_0,\dots,n_l)$ be the indices (in ascending order) where $f(n_j)=i$. We call $(n_j,n_{j+1})$ a \emph{short gap$_i$} if \[ \emptyset^{(i)} \cap [0;n_j]\; \neq\; \big(\emptyset^{(i-1)}\big)'_{n_{j+1}} \cap [0;n_j] .\] We will write $\textsf{SG}_{\mathnormal{i}}(f)$ for the number of short gaps$_{i}$ in $f$. Note that in general $\textsf{SG}_\mathnormal{i}(f)$ is not computable. We will now construct computable approximations of short gaps. Let $i$ and $(n_0,\dots,n_l)$ be as above. We call $(n_j,n_{j+1})$ a \emph{very short gap$_i$} if \begin{equation}\label{eq:vsg} \emptyset^{(i)}_{\mu_i(f),\mu_{i-1}(f),\dots,\mu_1(f)} \cap [0;n_j]\; \neq\; \emptyset^{(i)}_{\mu_i(f),\mu_{i-1}(f),\dots,\mu_{2}(f),n_{j+1}} \cap [0;n_j]. \end{equation} (We treat $\mu_i(f)$ as if it were $0$ if it is undefined.) E.g.~for $i=1$ we call $(n_j,n_{j+1})$ a very short gap$_{1}$ if \[ \emptyset'_{\mu_1(f)} \cap [0;n_j]\; \neq\; \emptyset'_{n_{j+1}}\cap [0;n_j] .\] We will write $\textsf{VSG}_\mathnormal{i}(f)$ for the number of very short gaps$_i$ in $f$. Note that $\textsf{VSG}_i(f)$ is computable. We color $\fin[k+1]$ with the following coloring: \[ c(f) := \sum_{i=1}^{k} 2^{i-1} \cdot (\textsf{VSG}_i(f) \bmod 2). \] By \lp{FIN_{k+1}} there is a homogeneous combinatorial subspace $B$. We will write $\llangle B \rrangle$ for $\bigcup_{i=0}^{k} T^i \langle B \rangle$. \begin{claim}{1} For each $f\in \llangle B\rrangle$ there exists $g\in \langle B \rangle$ with $f<g$ such that every short gap$_i$ in $f$ is a very short gap$_i$ in $f+g$, and such that the gap$_i$ between $f$ and $g$ is neither a short gap$_i$ nor a very short gap$_i$. \end{claim} \begin{claimproof}{1} Recursively build a sequence $(g_i)_{i \le k}\subseteq \langle B \rangle$ with \begin{enumerate} \item $g_0 > f$ and $g_{i+1} > g_{i}$, \item\label{enum:1:3} $\emptyset^{(k-i')} \cap [0,\mu(g_i)] = (\emptyset^{(k-i'-1)})'_{s}\cap [0,\mu(g_i)]$ \\ for $i' \le i< k$ and $s \ge \lambda(g_{i+1})$. \end{enumerate} The proof proceeds in the same way as the proof of \prettyref{pro:itlim}. Suppose we have chosen $g_{0},\dots,g_{i-1}$; then by the limit lemma there exists an $m$ such that the equation in \ref{enum:1:3} is true for all $s \ge m$. Choose $g_i$ such that $g_i>g_{i-1}$ and such that $\lambda(g_i) > m$. Setting \[ m_{k-i} := \mu_{k+1}(g_i) \qquad\text{for } i < k ,\] we recover a sequence $(m_1,\dots,m_k)$ as in \prettyref{pro:itlim}. From this we get in the same way that \[ \emptyset^{(i)}_{m_{i},\dots,m_{1}} \cap [0,m_{i+1}] = \emptyset^{(i)} \cap [0,m_{i+1}] .\] Now consider \[ g:= \sum_{i=0}^k T^i(g_i) .\] By definition we have that $\mu_{k-i}(g) = \mu_{k-i}(f+g) = m_{k-i}$. Therefore, the right hand side of \eqref{eq:vsg} for $f+g$ is equal to $\emptyset^{(i)} \cap [0;n_j]$ for $[0;n_j] \subseteq [0;\mu(f)]$. With this the claim is satisfied. \end{claimproof} \begin{claim}{2} $\textsf{SG}_i(f)$ is even for each $f\in \llangle B \rrangle$. \end{claim} \begin{claimproof}{2} Assume that $f\in \llangle B \rrangle$ and take again $g$ as in Claim~1. We get \begin{equation*} \textsf{VSG}_i(f+g) = \textsf{SG}_i(f) + \textsf{VSG}_i(g) . \end{equation*} Since $f+g,g\in \langle B \rangle$, the parity of $\textsf{VSG}_i(f+g)$ and $\textsf{VSG}_i(g)$ is the same by the homogeneity of $B$. Therefore, $\textsf{SG}_i(f)$ must be even. \end{claimproof} We now show by induction that one can compute $\emptyset^{(i)}$ for $i\le k$ from $B$. Assume that we already have an algorithm which computes $\emptyset^{(i-1)}$.
To compute whether $x$ is contained in $\emptyset^{(i)}$ or not, search for $f,g\in \langle B \rangle$ with \begin{enumerate} \item $f<g$, \item $x <\lambda(f)$, and \item\label{enum:2:3} the image of $f,g$ contains $i$ (this can always be achieved by searching for $f_1<f_2\in \langle B \rangle$ and taking $f:= f_1 + T^{k+1-i}(f_2)$). \end{enumerate} By Claim~2, we know that $\textsf{SG}_i(f)$, $\textsf{SG}_i(g)$, and $\textsf{SG}_i(f+g)$ are even. Thus, $(\mu_i(f),\lambda_i(g))$ is not a short gap$_i$. (For this argument we use \ref{enum:2:3} and the consequence that $\mu_i(f),\lambda_i(g)$ are defined.) Therefore, \[ x\in \emptyset^{(i)} \quad\text{if{f}}\quad x\in {\big(\emptyset^{(i-1)}\big)}'_{\lambda_i(g)} .\] By the induction hypothesis $\emptyset^{(i-1)}$ is computable relative to $B$. Therefore $x\in \emptyset^{(i)}$ is computable in $B$, too. \end{proof} \begin{remark}\label{rem:corl} The above proof formalizes in $I\Sigma^0_{\mathnormal{k+2}}$. The critical steps where induction is used are Claim~1 and the verification of the algorithm. In Claim~1, $B\Sigma^0_2$ relative to $\emptyset^{(i)}$ for $i<k$ is used to find the $m$. The analysis of this is the same as for the limit lemma, and it is equivalent to $B\Sigma_2$. In the verification of the algorithm we have to show that, writing $e_i$ for the algorithm computing the $i$-th Turing jump, \[ \Forall{x} \left(\Phi_{e_i}^B(x)= 1 \IFF x\in \emptyset^{(i)}\right) \] for all $i\le k$. Since this statement is $\Pi^0_{k+2}$, $I\Sigma^0_{k+2}$ is sufficient. \end{remark} \begin{corollary}\mbox{} \begin{enumerate} \item\label{enum:corl:1} For all $k$ we have that $\ls{RCA_0} + \lp{FIN_\mathnormal{k}} + \lp[\Sigma_\mathnormal{k+2}]{IND}$ proves that $\emptyset^{(k)}$ exists. \item\label{enum:corl:2} $\ls{ACA_0} + \lp[\Delta^1_1]{IND} + \lp{FIN_{<\infty}} \vdash \Forall{X}\Forall{k} X^{(k)} \text{ exists}$. In other words, the above theory proves \ls{ACA_0'}. \item\label{enum:corl:3} $\ls{ACA_0} \nvdash \lp{FIN_{<\infty}}$. \end{enumerate} \end{corollary} \begin{proof} \ref{enum:corl:1} is just a reformulation of \prettyref{rem:corl}. \ref{enum:corl:2} follows from \ref{enum:corl:1} by noting that \lp[\Delta^1_1]{IND} implies \lp[\Sigma_\mathnormal{n}]{IND} uniformly for each $n$ using Skolemization. \noindent \ref{enum:corl:3} follows from the following. First, $\Forall{X,k} X^{(k)} \text{ exists}$ can be written as a $\Pi^1_2$\nobreakdash-\hspace{0pt}statement, i.e., \[ \Forall{X,k} \Exists{Y} \left( Y_0 = X \AND \Forall{i< k} Y_{i+1} = \text{TJ}(Y_{i})\right) .\] Now if $\lp{FIN_{<\infty}}$ were provable in \ls{ACA_0}, then the above statement would be provable already in $\ls{ACA_0} + \lp[\Delta^1_1]{IND}$ and a fortiori in $\ls{ACA_0} + \lp[\Delta^1_1]{CA}$. However, this theory is $\Pi^1_2$-conservative over \ls{ACA_0}, see \cite[IX.4.4]{sS09}, which leads to the contradiction $\ls{ACA_0} \vdash \ls{ACA_0'}$. \end{proof} \section{Conclusion} We have shown that Gowers' $\fin$ theorem ($\lp{FIN_{<\infty}}$), a generalization of Hindman's theorem (\lp{HT}), is stronger than the best known lower bound for \lp{HT}. It remains open to find a matching upper bound for $\lp{FIN_{<\infty}}$. This seems to be very difficult in general since, to the knowledge of the author, all known proofs of $\lp{FIN_{<\infty}}$ make use of special ultrafilters (or similar objects). Of course, by Shoenfield absoluteness $\lp{FIN_{<\infty}}$ must be provable without the axiom of choice. \bibliographystyle{amsplain}
\section{Introduction} In a series of recent papers, Navarro, Frenk \& White (1995, 1996, 1997, hereafter NFW) have claimed that dark halos formed by dissipationless hierarchical clustering {}from gaussian initial conditions can be fitted by a universal density profile of the form $$\rho(r)=\delta {R^3\over r(R+r)^2},\eqno(\nextenum)$$ \na\nfw where $R$ is a characteristic length scale and $\delta$ a characteristic density. This function fits the numerical data presented by NFW over a radius range of about two orders of magnitude. Equally good fits are obtained for high mass (rich galaxy cluster) and for low mass (dwarf galaxy) halos, and in cosmological models with a wide range of initial power spectra $P(k)$, density parameters $\Omega$, and cosmological constants $\Lambda$. In any given cosmology the simulated halos show a strong correlation between $R$ and $\delta$; low mass halos are denser than high mass halos. NFW interpret this as reflecting the fact that low mass halos typically form earlier. In independent work Cole \& Lacey (1996) came to similar conclusions based on studies of a series of cosmological simulations with $P\propto k^n$ and $\Omega=1$. We shall refer to the profile of equation (\nfw) as the NFW profile. It will prove useful below to consider a broader family of density profiles: we will adopt the family defined by $$\rho(r) = \delta {R^{\beta}\over r^\alpha(R^{1/\gamma}+r^{1/\gamma})^{\gamma(\beta-\alpha)}}.\eqno(\nextenum)$$ \na\rhofit This family has been studied extensively by Zhao (1996); subfamilies were studied by Dehnen (1995) and by Tremaine {\it et\thinspace al\/}.{} (1994). The NFW profile is of this form: an inner cusp with logarithmic slope $\alpha=1$, an outer envelope with logarithmic slope $\beta=3$, and a `turn over exponent' $\gamma=1$. Another well-known example is the Hernquist (1990) profile: $(\alpha,\beta,\gamma)=(1,4,1)$. Both NFW and Cole \& Lacey (1996) compared the Hernquist profile to their numerical data. Although it fits high mass halos almost as well as the NFW profile, its more rapid fall-off does not fit low mass halos adequately. Detailed comparisons of both profiles with high resolution simulations of galaxy clusters are also described by Tormen, Bouchet \& White (1996). In this paper we argue that a universal profile arises as a result of repeated mergers. Violent relaxation of a finite mass {\it isolated} system leads to a profile with $\rho\propto r^{-4}$ as $r$ tends to infinity (Aguilar \& White 1986, Jaffe 1987, Merritt, Tremaine \& Johnstone 1989, Barnes \& Hernquist 1991). Sufficiently strong violent relaxation, either through repeated merging or through cold inhomogeneous collapse, produces $\rho\propto r^{-3}$ over the regions which contain most of the mass (White 1979, Villumsen 1982, Duncan, Farouki \& Shapiro 1983, McGlynn 1984). There are, however, good reasons why neither of these results should apply to halo formation through hierarchical clustering. Cosmological mergers are not isolated; they occur continually and {}from bound orbits, and they usually involve objects of very different mass. These properties are reflected in the other simple paradigm for halo formation, the spherical infall model (Gunn \& Gott 1972, Fillmore \& Goldreich 1984, Hoffman \& Shaham 1985, White \& Zaritsky 1992). This model predicts relatively shallow density profiles, $\rho\sim r^{-2}$ for the virialised regions of halos.
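For reference, the family of equation (\rhofit) and its named special cases are trivial to tabulate; the following sketch is ours (with $\delta=R=1$ for illustration), and checks that $(\alpha,\beta,\gamma)=(1,3,1)$ reproduces equation (\nfw):
\begin{verbatim}
import numpy as np

def rho(r, alpha, beta, gamma, delta=1.0, R=1.0):
    # generalized double power law, equation (rhofit)
    return (delta * R**beta
            / (r**alpha * (R**(1/gamma) + r**(1/gamma))**(gamma*(beta-alpha))))

r = np.logspace(-2, 2, 5)
nfw       = rho(r, alpha=1, beta=3, gamma=1)   # the NFW profile
hernquist = rho(r, alpha=1, beta=4, gamma=1)   # the Hernquist (1990) profile
assert np.allclose(nfw, 1.0 / (r * (1.0 + r)**2))
\end{verbatim}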
The next section uses simple arguments to show why merging in a cosmological context might give rise to a characteristic density profile with a central cusp. The third section then carries out some numerical experiments to test these arguments. A final section discusses our results. \section{Mergers and tidal disruption} Hierarchical clustering proceeds through a continual series of unequal mergers. Small objects typically form at earlier times and so have higher characteristic densities. The results of Navarro, Frenk \& White (1996) for a CDM universe can be fit by a power law $$\delta\propto M^{-\nu}\eqno(\nextenum)$$ \na\gamdef with $\nu\simeq0.33$. The characteristic radius is given by $$R\propto M^{(1+\nu)/3}.\eqno(\nextenum)$$ \na\rmeq Simple scaling arguments using linear theory (Kaiser 1986) predict that if $P(k)\sim k^n$ then $$\nu=(3+n)/2.\eqno(\nextenum)$$ \na\scal $\nu=0.33$ corresponds to $n=-2.3$ as expected for a CDM universe on the relevant scales. Consider a large parent halo merging with a smaller satellite system. Two important dynamical processes are responsible for the evolution of the system. Dynamical friction causes the satellite's orbit to decay, bringing it down into the central regions. Tidal stripping removes material {}from the satellite and adds it to the diffuse mass of the parent. Our claim is that the combination of these processes leads to $\alpha\simeq 1$ in the inner cusp. This structure is a stable fixed point in the process of repeated mergers, in a sense to be defined below. \beginfigure{1} \fig[prof,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The superimposed density profiles of a parent halo of mass $M$ and of homologous satellite halos of mass $m<M$. The logarithmic slope of the central cusp is $-\alpha$. Two satellites are shown, one with $\rho_M(R_m)<\delta_m$ (lower solid curve) and one with $\rho_M(R_m)>\delta_m$ (upper solid curve). The former has $3\nu/(1+\nu)< \alpha$ and the latter $3\nu/(1+\nu) > \alpha$. \naf\prof } \endfigure A crude but useful way of understanding tidal stripping, going back at least to the work of van H\"orner (1957), is as follows. Consider a satellite of mass $m$ orbiting a parent of mass $M$ at mean distance $r$. The frequency $\Omega$ of the orbit is given roughly by $\Omega^{2}\sim\bar\rho_M(r)$, where $\bar\rho_M(r)$ is the average density of the parent interior to $r$. Next consider a particle Q orbiting within the satellite at a mean distance $\xi$ {}from its centre. The frequency $\omega$ of this orbit is given by $\omega^{2}\sim\bar\rho_m(\xi)$, where $\bar\rho_m(\xi)$ is the average density of the satellite interior to $\xi$. If $\Omega=\omega$ there is a resonance between the force the satellite exerts on Q and the tidal force exerted by the parent. This results in a secular transfer of energy to Q, which then escapes {}from the satellite. Thus, crudely speaking, the satellite is stripped of material down to radius $\xi$ defined by $\omega(\xi)=\Omega(r)$. The newly stripped material will orbit throughout the parent but will, on average, remain near the orbit of the satellite at the instant that it was stripped. Weinberg (1994abc, 1996) has shown that material can actually be stripped off the satellite substantially within the primary resonance $\Omega=\omega$. The strongest effect is nevertheless at this resonance, and since the timescale of real cosmological mergers is not very different {}from $\Omega^{-1}$, there is little time for weaker effects to be felt. For simplicity we stick to the standard assumption of stripping at the primary resonance, and we will use $N$-body experiments to check our principal results.
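For the special case of pure power-law mean-density profiles, $\bar\rho(x)=\delta(R/x)^\alpha$, the stripping criterion has a closed form; the following is our schematic rendering (real profiles require a numerical solve, and the example values are arbitrary):
\begin{verbatim}
def strip_radius(r, alpha, delta_M, R_M, delta_m, R_m):
    # resonance condition omega(xi) = Omega(r): equate the mean density
    # of the satellite inside xi to that of the parent inside r
    rhobar_parent = delta_M * (R_M / r)**alpha
    return R_m * (delta_m / rhobar_parent)**(1.0 / alpha)

# a satellite 5 times denser at a 10 times smaller scale radius,
# orbiting at r = 1, is stripped down to half the orbital radius:
print(strip_radius(1.0, 1.0, 1.0, 1.0, 5.0, 0.1))  # -> 0.5
\end{verbatim}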
Since clustering is hierarchical we expect the satellite to have a higher characteristic density and a smaller characteristic radius than the parent. As a result the density of the parent at the characteristic radius of the satellite $\rho_M(R_m)$ can be either larger or smaller than the density of the satellite at the same radius $\rho_m(R_m)=\delta_m$ (see Figure~\prof{}). (For $\alpha<3$ the average density is also proportional to $r^{-\alpha}$.) Thus, when dynamical friction has brought the satellite right to the centre of the parent, it will either have been entirely disrupted, or will have survived largely intact; the result depends on whether $\rho_M(R_m)$ is less than or greater than $\delta_m$. The case of equality, where the satellite is marginally disrupted by the parent as it reaches the centre, is given by $$\rho_M(R_m) = \delta_M \lr{R_M\over R_m}^\alpha = \delta_m,\eqno(\nextenum)$$ \na\margl which can be written in terms of the masses using equations (\rmeq) and (\gamdef) as follows $$1 = \lr{M\over m}^{\nu - \alpha(1+\nu)/3},\eqno(\nextenum)$$ \na\alpre whence $$\alpha = {3\nu\over 1+\nu}.\eqno(\nextenum)$$ \na\aleq In conjunction with (\scal) this predicts $$\alpha=3\lr{3+n\over 5+n}.\eqno(\nextenum)$$ \na\aln It is interesting that this exponent is the same as that predicted in a completely different context, but for similar reasons, for the small scale behaviour of the mass autocorrelation function in hierarchical clustering on the stable clustering hypothesis (Davis \& Peebles 1977). These equations constitute our prediction for the logarithmic slope of the inner cusp in cosmological halos. For the Navarro, Frenk \& White (1996) halos, $\nu\simeq0.33$ implying $\alpha\simeq0.75$. This is probably consistent with NFW's fits to these halos since their experiments do not resolve the central cusp to $r\ll R$. In fact $\alpha$ does not vary strongly with $n$ for cosmologically interesting values of $-1$ to $-2.5$. The simulations of Cole \& Lacey (1996) include the cases $n=\{0,-1,-2\}$ corresponding to $\alpha=\{1.8,1.5,1\}$. They do not resolve the central cusps well, but there is a hint in their Figure~9 that $n=0$ has steeper halos than $n=-2$. A further argument is needed to show that halos will actually prefer the special value $\alpha=3\nu/(1+\nu)$. Suppose $\alpha$ in a parent halo were smaller than this value, and it merged with a homologous satellite. In this case $\rho_M(R_m) < \delta_m$, so the satellite survives intact right to the centre. It thus boosts the density of the parent cusp in the region $r<R_m$, and steepens the effective logarithmic slope in $r<R_M$; i.e.\ $\alpha$ increases. Now suppose $\alpha$ in the parent were larger than $3\nu/(1+\nu)$. In this case $\rho_M(R_m) > \delta_m$, so the satellite will be disrupted at $r>R_m$, spreading its mass out over a region which is larger than its own characteristic radius. It thus boosts the density of the cusp in the region $r>R_m$, and softens the effective logarithmic slope in $r<R_M$. Thus hierarchical halo formation by repeated mergers should lead to the density profile taking on a fixed point form, which depends on the mass-density relation of merging subunits.
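The numerical content of these relations is easily verified; the following quick check is ours:
\begin{verbatim}
# check of equations (aleq) and (aln), with nu = (3+n)/2 from (scal)
def alpha_of_nu(nu):
    return 3.0 * nu / (1.0 + nu)

def alpha_of_n(n):
    return alpha_of_nu((3.0 + n) / 2.0)

assert abs(alpha_of_nu(0.33) - 0.75) < 0.01   # CDM halos: alpha ~ 0.75
for n, a in [(0, 1.8), (-1, 1.5), (-2, 1.0)]: # the Cole & Lacey cases
    assert abs(alpha_of_n(n) - a) < 1e-12
\end{verbatim}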
\beginfigure{2} \fig[calpha,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The logarithmic slope of the cusp $\alpha$ as a function of the number of mergers in the experiments of Section~3.1. {}From bottom to top at the right $\nu=(0.33,0.5,0.75)$. \naf\calpha } \endfigure \beginfigure{3} \fig[ccomp,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The parent halo profile {}from the last iteration in the experiment with $(\nu,\eta,\beta)=(0.33,0,4)$ superimposed on a homologous satellite with mass ratio $f=0.22$. Compare Figure~\prof{}. \naf\ccomp } \endfigure \begintable{1} \caption{{\bf Table 1.} The results of the experiments of Section~3.1. The input parameters are $(\beta,\eta,\nu)$, and the output fitted profiles are specified by $\alpha$ and $\gamma$. The errors in the last two decimal places are given in brackets (1$\sigma$ estimated {}from the last 10 iterations in each case). $\alpha_{\rm eff}/\alpha$ is a measure of how well we resolve the inner cusp (see text). \null } { {\settabs 6 \columns \+ \hfill $\beta$ \hfill& \hfill $\eta$ \hfill& \hfill $\nu$ \hfill& \hfill $\alpha$ \hfill& \hfill $\gamma$ \hfill& \hfill $\alpha_{\rm eff}/\alpha-1$ (\%) \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.05$ \hfill& \hfill $0.46(10)$ \hfill& \hfill $0.57(08)$ \hfill& \hfill $0.4$ \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.125$ \hfill& \hfill $0.62(08)$ \hfill& \hfill $0.64(12)$ \hfill& \hfill $0.7$ \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.2$ \hfill& \hfill $0.76(08)$ \hfill& \hfill $0.70(13)$ \hfill& \hfill $0.9$ \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.33$ \hfill& \hfill $1.00(13)$ \hfill& \hfill $0.75(12)$ \hfill& \hfill $0.9$ \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.5$ \hfill& \hfill $1.22(14)$ \hfill& \hfill $0.80(16)$ \hfill& \hfill $1.0$ \hfill& \cr \+ \hfill $3$ \hfill& \hfill $0$ \hfill& \hfill $0.75$ \hfill& \hfill $1.47(14)$ \hfill& \hfill $0.79(20)$ \hfill& \hfill $0.9$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0$ \hfill& \hfill $0.05$ \hfill& \hfill $0.59(09)$ \hfill& \hfill $0.54(03)$ \hfill& \hfill $0.1$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0$ \hfill& \hfill $0.125$ \hfill& \hfill $0.81(10)$ \hfill& \hfill $0.65(05)$ \hfill& \hfill $0.3$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0$ \hfill& \hfill $0.33$ \hfill& \hfill $1.06(10)$ \hfill& \hfill $0.83(13)$ \hfill& \hfill $1.4$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0$ \hfill& \hfill $0.5$ \hfill& \hfill $1.27(10)$ \hfill& \hfill $0.88(16)$ \hfill& \hfill $1.5$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0$ \hfill& \hfill $0.75$ \hfill& \hfill $1.52(11)$ \hfill& \hfill $0.98(16)$ \hfill& \hfill $1.7$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $-0.8$ \hfill& \hfill $0.33$ \hfill& \hfill $1.08(07)$ \hfill& \hfill $0.73(07)$ \hfill& \hfill $0.6$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $-0.7$ \hfill& \hfill $0.33$ \hfill& \hfill $1.14(08)$ \hfill& \hfill $0.75(05)$ \hfill& \hfill $0.6$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $-0.5$ \hfill& \hfill $0.33$ \hfill& \hfill $1.10(09)$ \hfill& \hfill $0.87(06)$ \hfill& \hfill $1.3$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $0.5$ \hfill& \hfill $0.33$ \hfill& \hfill $1.00(09)$ \hfill& \hfill $0.95(15)$ \hfill& \hfill $2.9$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $1.0$
\hfill& \hfill $0.33$ \hfill& \hfill $1.10(06)$ \hfill& \hfill $0.76(09)$ \hfill& \hfill $0.7$ \hfill& \cr \+ \hfill $4$ \hfill& \hfill $2.0$ \hfill& \hfill $0.33$ \hfill& \hfill $1.12(08)$ \hfill& \hfill $0.68(09)$ \hfill& \hfill $0.4$ \hfill& \cr } } \endtable \beginfigure{4} \fig[ga,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The relationship between $\nu$ and $\alpha$ as measured (with 1$\sigma$ error bars). Times signs are for $\beta=4$, squares are for $\beta=3$ (both with $\eta=0$) and no symbol for $\beta=4$ and different values of $\eta$ (see Table~1). The solid line is the prediction of equation (\aleq). \naf\gafig } \endfigure \beginfigure{5} \fig[ac,.6\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The correlation between $\gamma$ and $\alpha$ (with 1$\sigma$ error bars). Symbols as in Figure~\gafig{}. \naf\acfig } \endfigure \section{Numerical experiments} We now test the assertion that there is a fixed point in the process of repeated mergers. In outline our procedure is to merge a halo repeatedly with a small satellite that is created by scaling the parent according to equation (\gamdef). We emphasise that the satellite is always a copy of the {\it current\/} halo, i.e.\ the latest merger product. First we take a parent halo with mass $M$ but with an initial density profile quite different {}from the NFW profile: $$\rho_0(r) = \delta {R^\beta\over(R^2+r^2)^{\beta/2}},\eqno(\nextenum)$$ \na\notnfw with $\delta=1$, $R=1$ and $\beta$ an input parameter. The following steps are then carried out repeatedly: \item{1} Choose a mass ratio for the next merger, $f=m/M$, by taking a random value {}from a distribution between $f_{\rm min}=10^{-3/(1+\nu)}$ and $f_{\rm max}=0.4$ in which the number of daughters with mass ratios in $(f,f+\td{f})$ is specified by $$\td{N}=f^\eta \td{f}.\eqno(\nextenum)$$\na\etaeq Thus $\eta=0$ gives a uniform distribution of daughter masses, and $\eta<0$ produces a larger number of low mass daughters. \item{2} Create a satellite of mass $f M$ by cloning the parent homologously: $\rho\to \rho f^{-\nu}$, $r\to r f^{(1+\nu)/3}$. \item{3} Merge the satellite with the parent. Section~3.1 describes a semi-analytic model of the merging process based on the arguments in Section~2. Section~3.2 describes an $N$-body implementation of the merging process. \subsection{Semi-analytic models} First we implement our simplified picture of tidal stripping as described in Section~2. On a logarithmic grid in $r$ between $r=0.01$ and $r=100$, a merger is simulated as follows. Work inwards {}from the outer edge of the parent, removing all the mass {}from the satellite where $\rho_m(\xi)=\rho_M(r)$ and placing it in the radial bin at $r$. If the profiles cross ($\rho_M(R_m) < \delta_m$) add the satellite bin by bin to the parent, as if it survived intact to the centre. \noindent Figure~\calpha{} shows the evolution of the profile as the iterations proceed for runs with $(\eta,\beta)=(0,4)$ and $\nu=(0.33,0.5,0.75)$. Clearly the homogeneous core is steepened and settles down into a cusp, as it was argued it should in the previous section. A steady state is reached, which, as predicted, automatically flattens itself if it gets too steep. The final value of $\alpha$, averaged over 10 iterations, is summarised in Table~1 and in Figure~\gafig{}. As shown by Figure~\gafig{}, the value of $\alpha$ is systematically larger than predicted by equation (\aleq).
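For concreteness, one way to realise this stripping scheme is sketched below. The grid limits, the initial profile and the cloning rule follow the text; the mass-per-bin bookkeeping, the parent density being held fixed during a single merger, and the number of iterations are our own simplifications rather than the actual code used here:
\begin{verbatim}
import numpy as np

N = 240
r = np.logspace(-2, 2, N)                     # grid in r between 0.01 and 100
vol = 4 * np.pi * r**3 * np.log(r[1] / r[0])  # shell volumes, log-uniform bins

def initial_mass(beta=4.0):
    return vol * (1.0 + r**2)**(-beta / 2.0)  # equation (notnfw), delta = R = 1

def merge(m, f, nu):
    scale = f**((1.0 + nu) / 3.0)             # r -> r f^{(1+nu)/3}
    m_sat = f * np.interp(np.log(r / scale), np.log(r), m, left=0.0, right=0.0)
    rho_p, rho_s = m / vol, m_sat / vol       # clone obeys rho -> rho f^{-nu}
    out, j = m.copy(), N - 1
    for i in range(N - 1, -1, -1):            # work inwards through the parent
        while j >= 0 and rho_s[j] <= rho_p[i]:
            out[i] += m_sat[j]                # strip where rho_m(xi) = rho_M(r)
            j -= 1
    out[:j + 1] += m_sat[:j + 1]              # denser remnant survives to centre
    return out

rng = np.random.default_rng(0)
m, nu = initial_mass(), 0.33
for _ in range(60):                           # eta = 0: uniform mass ratios
    m = merge(m, rng.uniform(10**(-3 / (1 + nu)), 0.4), nu)
inner = slice(10, 80)
alpha = -np.polyfit(np.log(r[inner]), np.log(m[inner] / vol[inner]), 1)[0]
print(alpha)                                  # should settle near Table 1 values
\end{verbatim}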
Figure~\gafig{} shows the results of experiments with $\beta=3$ and $\beta=4$. The lower value of $\beta$ tends to produce slightly lower values of $\alpha$, but still consistent with the $\beta=4$ results at the 1$\sigma$ level. Figure~\ccomp{} shows the profile of a satellite superimposed on its parent, so it is the equivalent of Figure~\prof{}. Since $\alpha$ is slightly greater than predicted by equation (\aleq), the satellite is less dense than its parent at small radii. Note that the inner cusp is very close to a power law. However, Figure~\acfig{} shows that the slope $\alpha$ is correlated with the turn over exponent $\gamma$. This could mean that we are not resolving the power law in the cusp. To test this we compare the measured value of $\alpha$ with the {\it actual\/} logarithmic slope of the profile at the inner grid point: $$\alpha_{\rm eff}= -\d{\ln\rho}{\ln r} = {\alpha+\beta r^{1/\gamma}\over1+r^{1/\gamma}}. \eqno(\nextenum)$$ \na\aeff The last column of Table~1 shows that the difference between $\alpha$ and $\alpha_{\rm eff}$ is small; thus we have adequately resolved the central power law. Calculations were performed with $(\beta,\nu)=(4,0.33)$ and $f$ drawn {}from different distributions. We adjust the relative numbers of large and small mergers by changing the value of $\eta$ in equation (\etaeq). The results are summarised in Table~1, and also appear in Figures~\acfig{} and \gafig{}. Generally the resulting values of $\alpha$ are similar, show no trend with $\eta$, and are statistically consistent with each other. There is nothing in the argument leading to equation (\aleq) which depends on the mass ratio, so this is expected. The measured $\gamma$ also shows no distinct trend with $\eta$, although, as can be seen {}from Figure~\acfig{}, varying $\eta$ can lead to large scatter in the value of $\gamma$. All these values are, however, consistent with each other at the 1$\sigma$ level. \beginfigure{6} \fig[rquants,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} {}From bottom to top, the ratio of the $10$--$50$ and $80$, $70$, $60$\% mass radii to the half-mass radius of the halo in simulation A as a function of the number of mergers. Each quantity is normalised to its initial value. \naf\rquantsfig } \endfigure \beginfigure{7} \fig[nfit,.9\hsize,.6\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The average profile of the last 7 merger products of simulation A (crosses), B (plus signs) and C1 (stars) with a fit to the inner parts overlaid. The dashed line is a fit to the original profile measured in the same way (the fit parameters match a Plummer model well). The triangles close to the dotted line are a sanity check on the $N$-body code (see text). \naf\nfitfig } \endfigure \beginfigure{8} \fig[rhs,.9\hsize,.9\hsize,2\baselineskip] \caption{{\bf Figure \nextfignum.} The ratio of $r_{1/2}$ in the parent halo to that of the disrupted satellite particles {\it after\/} the merger, as a function of the number of mergers, in simulations A (pluses), B (stars) and C1--C3 (dots). \naf\rhsfig } \endfigure \subsection{$N$-body experiments} The code we used was the publicly available adaptive P$^3$M code of Couchman, Pearce and Thomas (1996) (configured to simulate isolated, collisionless systems). Compared with these $N$-body experiments, the simulations of Section~3.1 have better resolution and they are much cheaper to perform.
An initial model of a Plummer sphere ($\beta=5$ in equation \notnfw) is set up with $8000$ particles. The satellite is placed at the $90\%$ mass radius of the parent. Its orbit is such that it is at apocentre, and it has an angular momentum $\kappa$ times that of the circular orbit with the same energy. For $\kappa=0.3$, the apocentre (pericentre) of the satellite orbit is thus approximately $4R$ $(0.1R)$, where $R$ is the scale radius of the parent Plummer sphere. The satellite mass is drawn randomly according to equation (\etaeq) with $(f_{\rm min},f_{\rm max},\eta)=(0.05,0.2,0)$. The different values of $\nu$ and $\kappa$ used are summarised in Table~2. The satellite is produced by randomly eliminating a fraction $(1-f)$ of the particles of the parent and then scaling the positions and velocities of the remaining particles. The smoothing length used in the $N$-body code is a fixed fraction of the simulation box, which is not a fixed fraction of the halo scale length. The largest value it took in any of the simulations was one third of the radius of the innermost point at which the density was measured. In simulation A it was at most an eighth of this radius. Simulation A was also re-run with the smoothing length 10 times larger---the result was not significantly different. \begintable{2} \caption{{\bf Table 2.} The parameters of the $N$-body simulations in Section~3.2. The input parameters are $(\nu,\kappa)$, and the output fitted profiles are specified by $\alpha$. The last column is the value of $\alpha$ predicted by equation (\aleq). \null } { {\settabs 5 \columns \+ \hfill Model \hfill& \hfill $\nu$ \hfill& \hfill $\kappa$ \hfill& \hfill $\alpha$ \hfill& \hfill $3\nu/(1+\nu)$& \cr \+ \hfill A \hfill& \hfill $0.33$ \hfill& \hfill $0.3$ \hfill& \hfill $0.66$ \hfill& \hfill $0.75$ \hfill& \cr \+ \hfill B \hfill& \hfill $0.75$ \hfill& \hfill $0.3$ \hfill& \hfill $1.14$ \hfill& \hfill $1.2$ \hfill& \cr \+ \hfill C1 \hfill& \hfill $1.5$ \hfill& \hfill $0.3$ \hfill& \hfill $1.75$ \hfill& \hfill $1.8$ \hfill& \cr \+ \hfill C2 \hfill& \hfill $1.5$ \hfill& \hfill $0.6$ \hfill& \hfill $1.63$ \hfill& \hfill $1.8$ \hfill& \cr \+ \hfill C3 \hfill& \hfill $1.5$ \hfill& \hfill $0.05$ \hfill& \hfill $1.79$ \hfill& \hfill $1.8$ \hfill& \cr } } \endtable After each merger another satellite is generated in the same way, and then merged. We measure the profile of the halo by starting at the potential centre and binning outwards in radius. The 10\%, 20\%,... radii are determined at the same time. Figure~\rquantsfig{} shows the evolution of the ratios of these decile radii to the half-mass radius $r_{1/2}$. This shows that the structure converges after 7 or 8 mergers. Figure~\nfitfig{} shows the average profile of the last 7 mergers for models A, B and C1. Also shown is the initial Plummer sphere, and a sanity check: the profile of the initial model, evolved in isolation for the same time as all the mergers added together in simulation A. When evolved in isolation the profile changes a little due to numerical effects, but much less than when mergers are included. The cusp slope $\alpha$ was measured in these models by fitting a power law to the profile at $r<0.3r_{1/2}$. The results are given in Table~2 and appear as solid lines in Figure~\nfitfig{}. The general prediction that larger $\nu$ produces larger $\alpha$ is supported, and the actual values of the measured slopes are quite close to the prediction of equation~(\aleq).
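Two of the bookkeeping steps just described are easy to sketch; the following is ours, not the actual pipeline. The position rescaling follows the text, while the velocity factor $f^{(2-\nu)/6}$ is our own inference {}from $v^2\sim M/R$, and the bin edges and inner cut-off in the slope fit are our choices:
\begin{verbatim}
import numpy as np

def clone_satellite(pos, vel, f, nu, rng):
    # keep each parent particle with probability f (total mass -> f M),
    # then rescale homologously: r -> r f^{(1+nu)/3}; the velocity
    # factor f^{(2-nu)/6} is inferred from v^2 ~ M/R, not stated in text
    keep = rng.random(len(pos)) < f
    return pos[keep] * f**((1.0 + nu) / 3.0), vel[keep] * f**((2.0 - nu) / 6.0)

def cusp_slope(radii, n_bins=15):
    # fit a power law to the binned density at r < 0.3 r_half; the bin
    # edges and the inner cut-off at 0.02 r_half are our choices
    radii = np.sort(np.asarray(radii))
    r_half = radii[len(radii) // 2]
    edges = np.logspace(np.log10(0.02 * r_half),
                        np.log10(0.3 * r_half), n_bins)
    counts, _ = np.histogram(radii, bins=edges)
    mid = np.sqrt(edges[1:] * edges[:-1])
    shells = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    ok = counts > 0
    return -np.polyfit(np.log(mid[ok]), np.log(counts[ok] / shells[ok]), 1)[0]
\end{verbatim}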
A comparison of the three models with $\nu=1.5$ (C1, C2, C3) shows that the profiles do not differ significantly. Thus the final profile is not sensitive to the value of $\kappa$. Cosmological halos, unlike our simulated halos, are not isolated, and they are not in equilibrium in their outer parts. Thus we cannot make direct inferences about the outer profiles of cosmological halos on the basis of our simulations. Isolated merger products are expected to have an outer profile with $\beta=4$ (Jaffe 1987, Merritt, Tremaine \& Johnstone 1989). Thus in Figure~\nfitfig{} we see a slow roll-over {}from the inner cusp to an outer envelope with $\beta\simeq4$, significantly less steep than the initial Plummer model. In cosmological halos, $\beta\simeq3$ is observed near the virial radius for a wide range of models. It is not easy to identify the equivalent of the virial radius in our simulations, but the effective $\beta$ of our profiles is about 3 in the range $(0.5,2)r_{1/2}$. The 80\% radius is at about $2r_{1/2}$, and the 90\% radius (the apocentre of the satellite orbits) is at about $2.5r_{1/2}$. For comparison, in cosmological infall models the turnround radius is at about three times the virial radius. We also test the `sinking satellite' assumption of Section~2, which was used in the calculations of Section~3.1. If dense satellites really sink to the centre of the parent then the particles which originated in the satellite should be concentrated towards the centre of the merger product. We test this by measuring the half-mass radius of the merger product and comparing it with that of the disrupted satellite particles. The result is shown in Figure~\rhsfig{}. The degree of concentration of the disrupted satellite particles varies as the profile changes. The effect is most pronounced when the halos have a homogeneous core, and the satellite is much denser than the parent in its inner parts. As the cusp develops the satellite is disrupted at a progressively larger radius, as we argued should happen in Section~2, and the concentration settles down at a value of around 2 for all the simulations.
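The sinking-satellite diagnostic plotted in Figure~\rhsfig{} can be stated precisely; the following sketch is ours (using the particle median as the half-mass radius is our choice):
\begin{verbatim}
import numpy as np

def half_mass_radius(radii):
    return np.median(radii)       # radius enclosing half of the particles

def concentration(product_radii, satellite_radii):
    # ratio of r_{1/2} of the whole merger product to r_{1/2} of the
    # disrupted satellite particles; values around 2 correspond to the
    # late-time behaviour seen in Figure (rhs)
    return half_mass_radius(product_radii) / half_mass_radius(satellite_radii)
\end{verbatim}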
Navarro, Frenk and White (1997), in agreement with the lower resolution results of Cole and Lacey (1996), find that the NFW density profile continues to be a good fit to halos from power-law initial fluctuation spectra. The expected trend in the mass-density relation of halos is clearly visible in their fits: smaller effective $\nu$ for more negative $n$. According to the arguments of the present work, we would expect to see a corresponding trend in profile properties between $n=0$ and $n=-1.5$. Nevertheless Navarro, Frenk and White (1997) are able to use a single fitting formula for all their halos. The apparent contradiction may be accounted for in a number of ways. Cosmological halos always show a degree of scatter in their measured properties because of substructure---which is absent by design in our experiments. More data on the profiles of similar mass halos would give better statistical confidence in statements about trends in halo properties. in addition, it is difficult to find a consistent and dynamically meaningful scale for $\rho$ and $r$ on which to base a comparison of halos. In Figure~\nfitfig{} the trend between the profile and $\nu$ is obvious because all the halos line up so well in their outer parts. This in turn is a consequence of the fact that they are dynamically isolated, and hence the half-mass radius and density can be unambiguously defined. Since cosmological halos are not isolated, the same comparison cannot be made in this case. Next, the mass-density relation of halos in Navarro, Frenk and White (1997) has some scatter in it. If this is a consequence of a real scatter in the underlying relation it should produce a scatter in halo profile properties in addition to the effects of substructure. Finally, our experiments do not properly account for the expected systematic dependence on the power spectrum of the distributions of satellite mass and orbital parameters. Perhaps these could compensate in some way for the difference in core structure produced by the different mass-density relation. Since in our experiments we find no dependence of the profile on mass-distribution or orbital eccentricity, and since to exactly counter the effects of the mass-density relation would require fine tuning, we do not find this explanation compelling. One thing is clear {}from our experiments though: it takes only a few mergers to establish the cusp profile. This means that the cusp can adjust to local conditions quite quickly and so should not depend in a complicated way on the {\ifdim\fontdimen1\font>\z@ \rm\else\it\fi full} merger history of a halo. \@ifstar{\@ssection}{\@section}*{References} \beginrefs \bibitem Aguilar, L., Ostriker, J.P., \& Hut, P., 1988, \apj,335,720. \bibitem Barnes, J.E., \& Hernquist, L., 1991, \apj,370,L65. \bibitem Cole, S., \& Lacey, C., 1996, \mnras,in press,astro-ph/9510147. \bibitem Couchman, H.M.P., Pearce, F.R., \& Thomas, P.A., 1996, astro-ph/9603116 \bibitem Davis, M., \& Peebles, P.J.E., 1977, \apjsupp,34,425. \bibitem Dehnen, W., 1995, \mnras,274,919. \bibitem Dubinski, J., \& Carlberg, R.G., 1991, \apj,378,496. \bibitem Duncan, M.J., Farouki, R.T., \& Shapiro, S.L., 1983, \apj,271,22. \bibitem Fillmore, J.A., \& Goldreich, P., 1984, \apj,281,1. \bibitem Gunn, J.E., \& Gott, J.R. III, 1972, \apj,176,1. \bibitem Hernquist, L., 1990, \apj,356,359. \bibitem Hoffman, Y., \& Shaham, J., 1985, \apj,297,16. \bibitem Jaffe, W., 1987, {\it Structure and Evolution of Elliptical Galaxies}, p.511, ed. de Zeeuw, T., Riedel, Dordecht. 
\bibitem Kaiser, N., 1986, \mnras,222,323. \bibitem McGlynn, T., 1984, \apj,281,13. \bibitem Merritt, D., Tremaine, S., \& Johnstone, D., 1989, \mnras,236,829. \bibitem Navarro, J.F., Frenk, C.S., \& White, S.D.M., 1995, \mnras,275,270. \bibitem Navarro, J.F., Frenk, C.S., \& White, S.D.M., 1996, \apj,462,563. \bibitem Navarro, J.F., Frenk, C.S., \& White, S.D.M., 1997, in preparation. \bibitem Tormen, G., Bouchet, F., \& White, S.D.M., 1996, \mnras,submitted,astro-ph/9603132. \bibitem Tremaine, S., {\it et\thinspace al\/}, 1994, \aj,107,634. \bibitem von Hoerner, S., 1957, \apj,125,451. \bibitem Villumsen, J.V., 1982, \mnras,199,493. \bibitem Weinberg, M.D., 1994a, \aj,108,1398. \bibitem Weinberg, M.D., 1994b, \aj,108,1403. \bibitem Weinberg, M.D., 1994c, \aj,108,1414. \bibitem Weinberg, M.D., 1996, astro-ph/9607099. \bibitem White, S.D.M., 1979, \mnras,189,831. \bibitem White, S.D.M., \& Zaritsky, D., 1992, \apj,394,1. \bibitem Zhao, H.S., 1996, \mnras,278,488. \par\egroup
\section{Introduction} Let $C$ be a smooth projective curve of genus $g \geq 2$ over an algebraically closed field $k$. For simplicity we assume the characteristic of $k$ to be 0, although some of the results are valid in greater generality. To every \'etale double covering $f: \tilde{C} \rightarrow C$ one can associate a principally polarized abelian variety of dimension $g-1$, the Prym variety of $f$. The corresponding morphism $$ Pr: \mathcal{R}_g \longrightarrow \mathcal{A}_{g-1} $$ from the moduli space of \'etale double coverings of curves of genus $g$ to the moduli space of principally polarized abelian varieties of dimension $g-1$ is called the Prym map. Its properties have been extensively studied. For example it is well known that for $g\geq 6$ it is generically injective (see \cite{fs}, \cite{k}) and not injective (see \cite{d}). Moreover its infinitesimal properties have been studied as well (see \cite{Be}, \cite{ls}). In this paper we look at the analogue of the Prym map in the case of cyclic coverings of arbitrary degree. To any cyclic covering $f: \tilde{C} \rightarrow C$ of degree $n$, branched along a reduced divisor $B$ on $C$ of degree $r$, one can associate an abelian variety $P = P(f)$ of dimension $p$, called the {\it Prym variety of the covering} $f$ and defined as the connected component containing $0$ of the kernel of the norm map $J\tilde{C} \rightarrow JC$. The canonical principal polarization of $J\tilde{C}$ induces a polarization on $P$ whose type $D$ depends only on the topological structure of the covering $f$. Giving such a covering is equivalent to giving the data $(C, B, \eta)$, where $\eta$ is a line bundle over $C$ satisfying $\eta^{\otimes n} \simeq \mathcal{O}_C(B)$. We denote by $\mathcal{R}_g(n,r)$ the corresponding coarse moduli space. If $\mathcal{A}_{p,D}$ denotes the moduli space of polarized abelian varieties of dimension $p$ with a polarization of type $D$, the morphism $$ Pr_g(n,r): \mathcal{R}_g(n,r) \longrightarrow \mathcal{A}_{p,D} $$ which associates to an $n$-cyclic covering its polarized Prym variety is called the {\it Prym map of type} $(g,n,r)$. In this paper we investigate the differential of the Prym map at general $n$-cyclic branched coverings. We will show that in most cases it is injective (see Propositions \ref{prop5.1}, \ref{prop5.2}, \ref{prop5.6} and \ref{prop5.7}). As a consequence we get that the Prym map itself is generically finite in most cases (see Proposition \ref{prop5.5} and Corollary \ref{cor5.8}). From this we can compute the dimension of the image of the Prym map. In Section 2 we collect some results concerning the ampleness of the twisted canonical line bundle $\omega_C \otimes \eta$ which will be used throughout the article. Section 3 is dedicated to the study of the injectivity of the Abel-Prym map. In Section 4 we prove that the codifferential of the Prym map can be identified with a certain multiplication map of sections, which gives us a necessary condition for the codifferential map to be surjective at a point $(C,B, \eta)$ of $\mathcal{R}_g(n,r)$. Finally, the consequences of this result are worked out in Section 5. \section{Twisted canonical line bundles} Let $C$ be a smooth projective curve of genus $g \geq 2$ over an algebraically closed field $k$ (of characteristic 0) and let $\eta$ be a line bundle of degree $s \geq 0$. In this section we collect some well-known results concerning the line bundle $$ L = \omega_C \otimes \eta, $$ which will be applied later.
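We first record the underlying dimension count, an immediate consequence of Riemann--Roch and Serre duality: since $\deg L = 2g-2+s$ and $h^1(C,L) = h^0(C,\eta^{-1})$, we have $h^0(C,L) = g-1+s$ for $s \geq 1$, while $h^0(C,L) = g-1$ for $s=0$ and $\eta \not\simeq \mathcal{O}_C$.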
\begin{lem} \label{lem2.1} {\em (1)} $L$ is globally generated if and only if either \begin{itemize} \item $s \geq 2$ or \item $s = 1$ and $\eta \neq \mathcal{O}_C(x)$ for some point $x$ of $C$ or \item $s = 0$ and $\eta \neq \mathcal{O}_C(x - y)$ for points $x \neq y$ of $C$. \end{itemize} {\em (2)} $L$ is very ample if and only if either \begin{itemize} \item $s \geq 3$ or \item $s = 2$ and $\eta \neq \mathcal{O}_C(x + y)$ for points $x$ and $y$ of $C$ or \item $s=1$ and $\eta \neq \mathcal{O}_C(x+y - u)$ for points $x, y$ and $u$ of $C$ or \item $s=0$ and $\eta \neq \mathcal{O}_C(x+y - u - v)$ for points $x, y, u$ and $v$ of $C$ with $x+y \neq u+v$. \end{itemize} \end{lem} \begin{proof} This is a consequence of Riemann-Roch. \end{proof} We need the following immediate corollary of Lemma \ref{lem2.1} (2) in Section 5. \begin{cor} \label{cor2.2} If $\deg \eta = 0$, then $\omega_C \otimes \eta$ is very ample if and only if $\omega_C \otimes \eta^{-1}$ is very ample. \end{cor} Recall that the Clifford index of the curve $C$ is defined by $$ \Cliff (C) = \Min \{\deg M - 2(h^0(M) -1)\;|\; M \in \Pic(C), h^i(M) \geq 2\; \mbox{for} \; i = 0,1 \}. $$ \begin{cor} \label{cor2.3} Suppose $$ \eta^n = \mathcal{O}_C(B) $$ with $n \geq 2$ and a reduced effective divisor $B$ of degree $r = ns$ on $C$. Then\\ {\em (1)} Suppose $s = 0$ or $1$. If $g \geq n+1$ and $\Cliff(C) \geq n-1$, then $L$ is globally generated.\\ {\em (2)} Suppose $s = 0, 1$ or $2$. If $g \geq 2n+1$ and $\Cliff(C) \geq 2n-1$, then $L$ is very ample. \end{cor} The Clifford index of a general curve equals $[\frac{g-1}{2}]$. Hence for a general curve $C$ of genus $g \geq 2n-1$, respectively $\geq 4n-1$, every line bundle $L$ of this type is globally generated, respectively very ample. \begin{proof} (1): Let $s = 0$. If $L$ is not globally generated, then $\eta = \mathcal{O}(x-y)$ for points $x \neq y$ of $C$ according to Lemma \ref{lem2.1} (1). So $nx \sim ny$, implying that $C$ admits a $g_n^1$. The assumption on the genus implies that it contributes to the Clifford index, which means $\Cliff(C) \leq n-2$. The proof in the case $s =1$ is the same.\\ (2): Let $s = 0$. If $L$ is not very ample, then $\eta = \mathcal{O}(x+y-u-v)$ for points $x,y,u,v$ of $C$ with $x+y \neq u+v$ according to Lemma \ref{lem2.1} (2). So $nx + ny \sim nu + nv$, implying that $C$ admits a $g_{2n}^1$. Again the assumption on the genus implies that it contributes to the Clifford index, which means that $\Cliff(C) \leq 2n-2$. The proof of the other cases is the same. \end{proof} \begin{cor} \label{cor2.4} Let $\eta \in \Pic^0(C)$ with $\eta^n = \mathcal{O}_C$ as above. If $\Cliff(C) \geq 2n-1$, then the canonical map $$ H^0(\omega_C \otimes \eta) \otimes H^0(\omega_C \otimes \eta^{-1}) \longrightarrow H^0(\omega_C^2) $$ is surjective. \end{cor} \begin{proof} According to a classical theorem of Meis, $\Cliff(C) \leq \frac{g-1}{2}$. So the assumption on the Clifford index implies $g \geq 4n-1$, and $\omega_C \otimes \eta$ and $\omega_C \otimes \eta^{-1}$ are very ample by Corollary \ref{cor2.3}. Hence the assumptions of \cite[Theorem 1]{b} are satisfied, which gives the assertion. \end{proof} \section{The Abel-Prym map} Let $C$ be a smooth projective curve of genus $g \geq 1$ and $$ f: \tilde{C} \rightarrow C $$ denote a cyclic covering of degree $n$ branched over a reduced divisor $B = x_1 + \cdots + x_r$ on $C$ of degree $r \geq 0$.
So $\tilde{C}$ is given by a line bundle $\eta$ of degree $s = \frac{r}{n}$ on $C$ satisfying $$ \eta^n = \mathcal{O}_C(B) $$ and we have $$ f_*\mathcal{O}_{\tilde{C}} = \bigoplus_{i=0}^{n-1} \eta^{-i}, \qquad \qquad \omega_{\tilde{C}} = f^*(\omega_C \otimes \eta^{n-1}). $$ For such a covering $f$ let $$ \sigma : \tilde{C} \rightarrow \tilde{C} $$ denote a map generating the cyclic group of covering maps. The automorphism $\sigma$ induces an automorphism on the Jacobian $J\tilde{C}$ which we denote by $\sigma$ as well. The Prym variety $P = P(f)$ of the covering $f$ is defined as $$ P = \Ima(1-\sigma) = \Ker (1 + \sigma + \cdots + \sigma^{n-1})^0. $$ Fix a base point $c \in \tilde{C}$ and let $\alpha_c: \tilde{C} \rightarrow J(\tilde{C})$ be the Abel map with respect to $c$, i.e. $\alpha_c(p) = \mathcal{O}_{\tilde{C}}(p - c)$. The {\it Abel-Prym map} of the covering $f$ is defined as the composition $$ \pi = \pi_c: \tilde{C} \stackrel{\alpha_c}{\longrightarrow} J(\tilde{C}) \stackrel{1 - \sigma}{\longrightarrow} P. $$ \begin{prop} \label{prop3.1} {\em (1)} Suppose $\tilde{C}$ is not hyperelliptic or $\tilde{C}$ is hyperelliptic and $n \geq 3$. Then, for distinct points $p,q \in \tilde{C}$, $\pi(p)=\pi(q)$ if and only if $f$ is ramified in $p$ and $q$ and all ramification points have the same image. In particular, $\pi: \tilde{C} \rightarrow P$ is injective if $f$ is \'etale.\\ {\em (2)} If $\tilde{C}$ is hyperelliptic and $n=2$, then $\pi: \tilde{C} \rightarrow P$ is of degree 2 onto its image. All ramification points of $f$ have the same image under $\pi$. \end{prop} \begin{proof} By definition of the Abel-Prym map, $\pi(p)=\pi(q)$ if and only if $(1-\sigma)(p-c) \sim (1-\sigma)(q-c)$. This is the case if and only if \begin{equation} \label{eq3.1} p + \sigma (q) \sim q+ \sigma(p). \end{equation} If $\tilde{C}$ is not hyperelliptic this means that $p + \sigma (q) = q +\sigma(p)$. Since $p\neq q$, this is the case if and only if $p$ and $q$ are ramification points of $f$. \\ Let $\tilde{C}$ be a hyperelliptic curve, $\iota$ the hyperelliptic involution, and $p,q \in \tilde{C}$ distinct points satisfying \eqref{eq3.1}. Again, all ramification points have the same image. If $p\neq q$ are not ramification points, \eqref{eq3.1} implies $\sigma(q) = \iota (p)$ and $\sigma(p)= \iota (q)$. Then $$ \sigma^2(q) = \sigma \iota(p) = \iota \sigma (p) =\iota^2(q) = q, $$ since $\sigma$ commutes with the hyperelliptic involution. Similarly, we get $\sigma^2(p)=p$. Then $f$ ramifies in $p$ and $q$ for $n \geq 3$, a contradiction. The proof in the case $n=2$ is the same as for \cite[Proposition 12.5.2]{bl} where it was proved for $s \leq 1$. The proof of the fact that also in the hyperelliptic case all ramification points of $f$ have the same image under $\pi$ is the same as above. \end{proof} In order to describe the differential of the Abel-Prym map, let $\chi$ denote the generating character of the Galois group $G = \langle \sigma \rangle$ of $f$, i.e. $\chi(\sigma) = \zeta_n$, a primitive $n$-th root of unity. The decomposition of $H^0(\omega_{\tilde{C}})$ into eigenspaces is \begin{equation} \label{dec} H^0(\tilde{C}, \omega_{\tilde{C}}) = \bigoplus_{i=0}^{n-1} H^0(C, \omega_C \otimes \eta^{n-i}), \end{equation} where $H^0(\omega_C \otimes \eta^{n-i})$ is the eigenspace of $\chi^i$. Recall that $$ J(\tilde{C}) = H^0(\tilde{C}, \omega_{\tilde{C}})^*/H_1(\tilde{C},\mathbb Z).
$$ In these terms the induced action of $\sigma$ on $H^0(\tilde{C}, \omega_{\tilde{C}})^*$ and $H_1(\tilde{C},\mathbb Z)$ is just the analytic, respectively rational, representation of $\sigma$. We denote \begin{equation} \label{eq3.2} H^0(\tilde{C}, \omega_{\tilde{C}})^+ = H^0(C, \omega_C(B)) \quad \mbox{and} \quad H^0(\tilde{C}, \omega_{\tilde{C}})^- = \bigoplus_{i=1}^{n-1} H^0(C, \omega_C \otimes \eta^{n-i}). \end{equation} Notice that $H^0(\tilde{C}, \omega_{\tilde{C}})^+$ is the eigenspace with eigenvalue $1$ of the action of $G$ on $H^0(\tilde{C}, \omega_{\tilde{C}})$ and $H^0(\tilde{C}, \omega_{\tilde{C}})^-$ its complement, that is the sum of the other eigenspaces. If we define similarly $H_1(\tilde{C},\mathbb Z)^+$ and $H_1(\tilde{C},\mathbb Z)^-$, we have \begin{equation} \label{eq3.3} P = (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^*/H_1(\tilde{C},\mathbb Z)^-. \end{equation} Hence the tangent bundle of $P$ is the trivial bundle $$ \mathcal{T}_P = P \times (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^*. $$ The projectivized differential of the Abel-Prym map $\pi_c$ is by definition the projectivization of the composed map $$ \mathcal{T}_{\tilde{C}} \stackrel{d\pi_c}{\longrightarrow} \mathcal{T}_P = P \times (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^* \longrightarrow (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^*. $$ It is a priori a rational map $\tilde{C}=P(\mathcal{T}_{\tilde{C}}) \dasharrow P((H^0(\tilde{C}, \omega_{\tilde{C}})^-)^*)$. Let $q_i: (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^* \rightarrow H^0(C, \omega_C \otimes \eta^i)^*$ denote the natural projection and $P(q_i)$ its projectivization. \begin{lem} \label{lem3.2} The composition of the projectivized differential of the Abel-Prym map $\pi_c$ with the map $P(q_i)$ for $ 1 \leq i \leq n-1$ is the composed map $$ \varphi_{\omega_C \otimes \eta^i} \circ f: \tilde{C} \rightarrow C \rightarrow P(H^0(C, \omega_C \otimes \eta^i)^*) $$ where $\varphi_{\omega_C \otimes \eta^i}$ is the map given by the linear system $|\omega_C \otimes \eta^i|$. \end{lem} The proof is a slight modification of the proof of \cite[Proposition 12.5.3]{bl}, which we omit. In particular, for $n=2$ the composition $\varphi_{\omega_C \otimes \eta} \circ f$ coincides with the projectivized differential of $\pi$. This is used in \cite[Corollary 12.5.5]{bl} to find conditions for the differential $d\pi_p$ to be injective at a point $p \in \tilde{C}$ in the case of $s \leq 1$. In any case, we have as a direct consequence of Lemma \ref{lem3.2}, \begin{prop} The differential of the Abel-Prym map $\pi_c$ is injective at a point $p \in \tilde{C}$, if the point $f(p)$ is not a base-point of the linear system $|\omega_C \otimes \eta^i|$ for some $i, \; 1 \leq i \leq n-1$. \end{prop} \begin{cor} \label{cor3.4} Suppose $n \geq 3$ and either $s \geq 1$ or $s= 0$ and $\Cliff (C) \geq 2$, or $n=2$ and $s \geq 2$. Then the differential $d\pi_p$ of the Abel-Prym map is injective at any point $p \in \tilde{C}$. \end{cor} \begin{proof} Suppose first $n \geq 3$. For $s \geq 1$ the line bundle $\omega_C \otimes \eta^2$ is globally generated according to Lemma \ref{lem2.1}. For $s=0$, Lemma \ref{lem2.1} gives that $\omega_C \otimes \eta$ is globally generated, unless $\eta = \mathcal{O}_C(x-y)$ with $x \neq y \in C$. But in the last case $\omega_C \otimes \eta^2$ is globally generated, since $\eta^2 = \mathcal{O}_C(2x - 2y)$ and $2x - 2y \sim u - v$ would imply $\Cliff (C) \leq 1$. 
For $n=2$ and $s \geq 2$ the line bundle $\omega_C \otimes \eta$ is globally generated according to Lemma \ref{lem2.1}. \end{proof} Combining Proposition \ref{prop3.1} and Corollary \ref{cor3.4} we get the main result of this section. \begin{thm} {\em(1)} Suppose $n \geq 3$ and either $s \geq 1$ or $s=0$ and $\Cliff (C) \geq 2$ or $n=2$, $s \geq 2$ and $\tilde{C}$ not hyperelliptic. Then the Abel-Prym map $\pi$ is an embedding of $\tilde{C} \setminus f^{-1}(B)$ and maps the ramification divisor $f^{-1}(B)$ to an ordinary $r$-fold point.\\ {\em (2)} Suppose $n=2, \; s \geq 2$ and $\tilde{C}$ hyperelliptic. Then the Abel-Prym map $\pi$ is a double covering mapping the ramification divisor $f^{-1}(B)$ to an ordinary $r$-fold point. \end{thm} In the missing case $n=2$, $s \leq 1$ and $\tilde{C}$ hyperelliptic, $\pi: \tilde{C} \rightarrow D \subset P$ is a double covering onto a smooth curve $D$ such that the Prym variety of $f$ is the Jacobian of $D$ (see \cite[Corollary 12.5.7]{bl}). \section{The differential of the Prym map} Let $\mathcal{R}_{g}(n,r)$ denote the coarse moduli space of triples $(C, B, \eta)$, where $C$ is a smooth projective curve of genus $g$, $B$ an effective reduced divisor of degree $r \geq 0$ on $C$ and $\eta$ a line bundle on $C$ satisfying $\eta^{\otimes n} \simeq\mathcal{O}_C(B)$. We assume that for $r = 0$, $\eta$ is a proper $n$-division point of the Jacobian. Equivalently, $\mathcal{R}_{g}(n,r)$ is the moduli space of $n$-fold cyclic coverings $f: \tilde{C} \rightarrow C$ with smooth irreducible projective curves $C$ of genus $g$ and $\tilde{C}$ of genus $$ g(\tilde{C}) = ng -n + 1 + \frac{r(n-1)}{2}. $$ Note that $f$ is totally ramified over every point of the divisor $B$. The Prym variety $P = \im (1 - \sigma)$ of the covering $f: \tilde{C} \rightarrow C$ is an abelian subvariety of the Jacobian $J\tilde{C}$ of dimension $$ p=(n-1)(g-1) + \frac{r(n-1)}{2}. $$ According to \cite[Corollary 12.1.4 and Lemma 12.3.1]{bl} the canonical polarization of $J\tilde{C}$ induces a polarization of type $$ D = (1, \ldots ,1,n, \ldots ,n), $$ where $1$ occurs $p - (g-1)$ times and $n$ occurs $(g-1)$ times if $r=0$, and $1$ occurs $p - g$ times and $n$ occurs $g$ times, if $r > 0$.\\ Let $\mathcal{A}_{p, D}$ denote the moduli space of polarized abelian varieties of dimension $p$ with polarization of type $D$. The construction associating to every $n$-fold covering its polarized Prym variety defines a map $$ Pr = Pr_g(n,r): \mathcal{R}_{g}(n,r) \longrightarrow \mathcal{A}_{p, D}, $$ called the {\it Prym map} (of type $(g,n,r)$). It is a morphism, since the construction works also for families of the objects involved. In this section we describe the differential of the Prym map at a point $(C,B,\eta) = [f: \tilde{C} \rightarrow C] \in \mathcal{R}_{g}(n,r)$. It turns out that it is easier to describe its dual, the codifferential of $Pr$. \\ Let the notation be as in Section 3. By Serre duality we have $(H^0(\tilde{C}, \omega_{\tilde C}))^* = H^1(\tilde{C}, \mathcal{O}_{\tilde C})$. Hence the decomposition \eqref{dec} of $H^0(\tilde{C}, \omega_{\tilde C})$ induces a decomposition $$ H^1(\tilde{C}, \mathcal{O}_{\tilde C}) = \bigoplus_{i=0}^{n-1} H^1(C, \eta^{i-n}) $$ into eigenspaces of the action of $G$.
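As a consistency check, \eqref{eq3.3} gives $\dim P = \dim H^0(\tilde{C}, \omega_{\tilde{C}})^- = \sum_{i=1}^{n-1} h^0(C, \omega_C \otimes \eta^{n-i})$. For $s \geq 1$, Riemann--Roch yields $h^0(C, \omega_C \otimes \eta^{n-i}) = g-1+s(n-i)$, so that $$ \dim P = (n-1)(g-1) + s\,\frac{n(n-1)}{2} = (n-1)(g-1) + \frac{r(n-1)}{2} = p, $$ in agreement with the formula above; for $r=0$ each summand equals $g-1$, since $\eta$ has order exactly $n$, and the sum is again $p$.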
Analogously to \eqref{eq3.2} we denote by $$ H^1(\tilde{C}, \mathcal{O}_{\tilde C})^+ = H^1(C, \mathcal{O}_{C}(-B)) \quad \mbox{and} \quad H^1(\tilde{C}, \mathcal{O}_{\tilde C})^- = \bigoplus_{i=1}^{n-1} H^1(C, \eta^{i-n}) $$ the eigenspace with eigenvalue 1 and its complement, the sum of the other eigenspaces. Equation \eqref{eq3.3} implies that the tangent space $t_P$ of $P$ at the origin is \begin{equation} \label{tang} t_P= (t_{J\tilde{C}} )^- = (H^0(\tilde{C}, \omega_{\tilde C})^-)^* = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-, \end{equation} which gives for the cotangent space $t_P^*$ of $P$ at the origin, $$ t_P^*= H^0(\tilde{C}, \omega_{\tilde{C}})^- = \bigoplus_{i=1}^{n-1} H^0(C, \omega_C \otimes \eta^{n-i}). $$ Let $\mathcal{M}_{g}(r)$ denote the coarse moduli space of $r$-pointed smooth projective curves of genus $g$. Since the forgetful map $\mathcal{R}_{g}(n,r) \rightarrow \mathcal{M}_{g}(r)$, \; $(C, B, \eta) \mapsto (C,B)$ is \'etale, the tangent space to $\mathcal{R}_{g}(n,r)$ at $(C, B,\eta)$ is isomorphic to the tangent space to $\mathcal{M}_{g}(r)$ at $(C,B)$. According to \cite[Example 3.4.19]{s} or \cite[p. 94]{hm} this space is isomorphic to $H^1(C, T_C(-B) )$ and its dual is isomorphic to $H^0(C, \omega_C^{\otimes 2}(B) )$. On the other hand, the cotangent space to $\mathcal{A}_{p,D}$ at the point $P$ can be identified with the second symmetric product $S^2(H^0(\tilde{C}, \omega_{\tilde{C}})^-)$. From \eqref{eq3.2} we deduce an isomorphism $$ S^2 (H^0( \omega_{\tilde{C}})^-) \simeq \bigoplus_{i=1}^{n-1} S^2 H^0( \omega_C \otimes \eta^{n-i} ) \oplus \bigoplus_{1\leq j < k \leq n-1} H^0(\omega_C\otimes \eta^{n -j}) \otimes H^0(\omega_C \otimes \eta^{n -k}) . $$ The eigenspace of $S^2(H^0( \omega_{\tilde{C}})^-)$ corresponding to the character $\chi^{\nu}$ of the group generated by $\sigma$ is \begin{equation*} \bigoplus_{\stackrel{1\leq i \leq n-1}{2i \equiv \nu \textrm{ mod } n} } S^2 H^0( \omega_C\otimes \eta^{n-i} ) \oplus \bigoplus_{ \stackrel{1 \leq j < k \leq n-1}{j+k \equiv \nu \textrm{ mod } n }} H^0(\omega_C\otimes \eta^{n -j}) \otimes H^0( \omega_C \otimes \eta^{n-k}). \end{equation*} We obtain the following commutative diagram compatible with the action of $G =\langle \sigma \rangle$: \begin{equation} \label{equivariant} \hspace{4cm} S^2 (H^0( \omega_{\tilde{C}})^-) \hspace{1.2cm} \longrightarrow \hspace{1.2cm} \bigoplus_{\nu=2}^{2n-2} H^0(\omega_C^2 \otimes \eta^{2n-\nu} ) \end{equation} $ \hspace{6cm} \downarrow \ p \hspace{6.4cm} \downarrow \ p^+ $ $$ [S^2 H^0( \omega_C \otimes \eta^{\frac{n}{2}})] \oplus \bigoplus_{ j=1}^{[\frac{n-1}{2}]} H^0(\omega_C \otimes \eta^{n -j}) \otimes H^0( \omega_C \otimes \eta^{j}) \hspace{.4cm} \stackrel{\mu} {\longrightarrow} \hspace{.4cm} H^0( \omega_C^2(B) ) $$ where the factor in square brackets in the bottom row does not occur if $n$ is odd. The vertical arrows are the projection on the $\sigma$-invariant part and $\mu$ is the multiplication of sections on every factor of the direct sum. \begin{prop} \label{main} With the identifications above, the codifferential of the Prym map at the point $(C,B,\eta) = [f: \tilde{C} \rightarrow C] \in \mathcal{R}_{g}(n,r)$ can be identified with the canonical map $$ \varphi_{C,\eta} : S^2(H^0(\tilde{C}, \omega_{\tilde{C}})^-) \longrightarrow H^0(C, \omega_C^{\otimes 2}(B)), $$ where $\varphi_{C,\eta}$ is the composed map $\mu \circ p$ of diagram \eqref{equivariant}.
\end{prop} In the case $n = 2, \; r = 0$ this was proved in \cite[Proposition 7.5]{Be} and stated for arbitrary $n$ and $r = 0$ in \cite[Proposition 4.6]{t}. Beauville's proof generalizes directly to this more general case, but seems not to generalize to $r > 0$. For $n = r = 2$ the proposition was given in \cite[p. 123]{bcv}. In order to prove the Proposition we shall need the following lemmas. For any variety $X$ we denote by $T_X$ its tangent sheaf. \begin{lem} \label{mult} Let $X$ be a smooth projective curve and $A= V / \Lambda$ an abelian variety. Assume there is a non-constant morphism $\pi : X \rightarrow A$ whose image generates $A$ as an abelian variety. Then the dual of the map $$ H^1(d\pi) : H^1(X, T_X) \longrightarrow H^1(X, \pi^*T_A) $$ coincides with the multiplication of sections map $$ V^* \otimes H^0(X, \omega_X) \subset H^0(X, \omega_X) \otimes H^0(X, \omega_X) \longrightarrow H^0(X, \omega^2_X) $$ with respect to the identifications $$ H^1(X, \pi^*T_A)^* \simeq V^* \otimes H^0(X, \omega_X) \quad \mbox{and} \quad H^1(X, T_X) ^* \simeq H^0(X, \omega_X^2). $$ \end{lem} \begin{proof} Consider a non-constant morphism $\pi : X \rightarrow A= V / \Lambda$. The differential of $\pi$ gives an inclusion of sheaves $d\pi: T_X \hookrightarrow \pi^*T_A = V \otimes \mathcal{O}_X$; its dual map \begin{equation} \label{map} V^* \otimes \mathcal{O}_X \longrightarrow \omega_X \end{equation} has rank 1. Since the image of $X$ generates $A$ as an abelian variety, $V^*$ is a subspace of $H^0(X, \omega_X )$ and this map is then the evaluation of sections. Tensoring (\ref{map}) by $\omega_X$ and taking global sections, we get the multiplication of sections $$ V^* \otimes H^0(X, \omega_X) \rightarrow H^0(X, \omega_X^2), $$ which is the dual map of $H^1(d\pi)$. \end{proof} According to the definition in Section 3 the Abel-Prym map $\pi: \tilde{C} \rightarrow P$ factorizes via the Abel-Jacobi map $\alpha$: \begin{eqnarray}\label{abel} \xymatrix{ \tilde{C} \ar[rr]^{\pi} \ar[dr]_{\alpha}&& P\\ & J\tilde{C} \ar[ur] _{1-\sigma}& } \end{eqnarray} The polarization $\Xi$ on $P$ induces an isogeny $\varphi_{\Xi} : P \rightarrow \widehat{P}$ onto the dual abelian variety $\widehat{P}$ and thus an isomorphism $t_P \rightarrow t_{\widehat{P}}$. According to \eqref{tang} this permits us to identify $t_{\widehat{P}}$ with $H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-$. \begin{rem}\label{rmk4.3} This identification is also given by the differential of the dual $\widehat{\pi}$ of the map $\pi$ at the origin: $$ d\widehat{\pi}: t_{\widehat{P}} \longrightarrow t_{J\tilde{C}}=H^1(\tilde{C}, \mathcal{O}_{\tilde{C}}), $$ which maps $t_{\widehat{P}}$ isomorphically onto $H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-$. \end{rem} The isomorphism $T_{J\tilde{C}} \simeq t_{J\tilde{C}} \otimes \mathcal{O}_{J\tilde{C}}$ induces an isomorphism \begin{equation} \label{eq4.4} H^1(J\tilde{C}, T_{J\tilde{C}} ) = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}}) \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}} ). \end{equation} On the other hand, since $H^1(P, \mathcal{O}_P)$, like $H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-$, can be considered as the tangent space of $P$ at the origin, we have an equality $$ H^1(P,\mathcal{O}_P) = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-.
$$ Hence the isomorphism $T_P \simeq t_{\widehat{P}} \otimes \mathcal{O}_P$ implies that we can identify \begin{equation} \label{eq4.5} H^1(P,T_P) = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \otimes H^1(P, \mathcal{O}_P) = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \end{equation} as well as \begin{equation} \label{eq4.6} H^1(\tilde{C}, \pi^*T_P ) = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}} ). \end{equation} \begin{rem}\label{rmk4.4} The dual of this last identification is provided by the differential of the Abel-Prym map $\pi$: $$ H^0(\tilde{C}, \pi^*\Omega_P \otimes \omega_{\tilde{C}}) = H^0(\tilde{C}, \pi^*\Omega_P) \otimes H^0(\tilde{C}, \omega_{\tilde{C}}) \stackrel{d\pi \otimes id}{\longrightarrow} H^0(\tilde{C}, \omega_{\tilde{C}})^- \otimes H^0(\tilde{C}, \omega_{\tilde{C}} ), $$ where $d\pi$ induces an isomorphism $H^0(\tilde{C}, \pi^*\Omega_P) \simeq H^0(\tilde{C}, \omega_{\tilde{C}})^-$. \end{rem} \begin{lem} \label{lemma4.3} The diagram \begin{eqnarray} \label{diagram} \xymatrix{ H^1(\tilde{C}, T_{\tilde{C}} ) \ar[d]_{H^1(d\alpha)} \ar[r]^{H^1(d\pi)} & H^1(\tilde{C} , \pi^* T_P)\\ H^1(J\tilde{C}, T_{J\tilde{C}}) \ar[r]^{p^- \otimes p^-} & H^1(P, T_P)\ar[u]^{\pi^*} } \end{eqnarray} commutes, where $p^-$ is the projection onto the subspace $H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-$ using the identifications above. \end{lem} \begin{proof} Note first that $H^1(d\alpha)$ is a map with target $H^1(\tilde{C}, \alpha^*T_{J\tilde{C}})$. Since $T_{J\tilde{C}}$ is a trivial bundle we may identify $H^1(\tilde{C}, \alpha^*T_{J\tilde{C}}) = H^1(J\tilde{C},T_{J\tilde{C}})$, which gives the map of the left hand vertical arrow of diagram \eqref{diagram}. By Remark \ref{rmk4.3}, \eqref{eq4.5} and \eqref{eq4.6} the map $\pi^*$ corresponds to the map $$ d\widehat{\pi} \otimes id : H^1(P,\mathcal{O}_P)\otimes t_P \longrightarrow H^1(\tilde{C} , \mathcal{O}_{\tilde{C}}) \otimes t_P, $$ whose image is $H^1(\tilde{C} , \mathcal{O}_{\tilde{C}})^- \otimes t_P$. Therefore the commutativity of diagram \eqref{diagram} follows from the commutativity of \eqref{abel} and the identifications \eqref{eq4.4} and \eqref{eq4.5}. \end{proof} \begin{proof} \emph{of Proposition \ref{main}}. As noted above, the tangent space of $\mathcal{R}_g(n,r)$ at the point $(C,B,\eta)$ can be identified with $H^1(C,T_C(-B))$. It coincides with the versal deformation space of the corresponding covering $f: \tilde{C} \rightarrow C$. The identification $t_P = t_{\widehat{P}}$ gives an involution $j$ on $t_P \otimes t_{\widehat{P}}$ interchanging the factors. The deformation space of the abelian variety $P$ as a complex manifold is $H^1(P,T_P) = t_P \otimes t_{\widehat{P}}$. The subspace of infinitesimal deformations of $P$ which preserve the polarization $\Xi$ is $$ (t_{P} \otimes t_{\widehat{P}} )^{j} = S^2(t_P) = S^2(H^1(\tilde{C},\mathcal{O}_{\tilde{C}})^-). $$ This means that the tangent space of $\mathcal{A}_{p,D}$ at the point $P$ is $S^2(H^1(\tilde{C},\mathcal{O}_{\tilde{C}})^-)$. Now the natural cup product map $H^1(\tilde{C}, T_{\tilde{C}}) \times H^0(\tilde{C}, \omega_{\tilde{C}}) \rightarrow H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})$ is compatible with the action induced by the automorphism $\sigma$ and hence induces a map $$ H^1(\tilde{C}, T_{\tilde{C}})^+ \times H^0(\tilde{C}, \omega_{\tilde{C}})^- \longrightarrow H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-.
$$ The image of the corresponding map \begin{eqnarray*} H^1(\tilde{C}, T_{\tilde{C}})^+ \rightarrow &\Hom(H^0(\tilde{C}, \omega_{\tilde{C}})^-,H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^-) \\ & = (H^0(\tilde{C}, \omega_{\tilde{C}})^-)^* \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \\ & = H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \end{eqnarray*} is invariant under the involution $j$. Hence it induces a map \begin{equation*} H^1(C,T_C(-B)) = H^1(\tilde{C},T_{\tilde{C}})^+ \longrightarrow S^2(H^1(\tilde{C},\mathcal{O}_{\tilde{C}})^-), \end{equation*} which is the differential of the Prym map $Pr: \mathcal{R}_g(n,r) \longrightarrow \mathcal{A}_{p,D}$ at the point $(C,B,\eta)$. From Lemma \ref{lemma4.3} we conclude that this map can be considered as a map $$ H^1(\tilde{C},T_{\tilde{C}})^+ \longrightarrow H^1(\tilde{C}, \pi^*T_P), $$ whose image is $S^2 (H^1(\tilde{C},\mathcal{O}_{\tilde{C}})^-) \subset H^1(\tilde{C}, \mathcal{O}_{\tilde{C}})^- \otimes H^1(\tilde{C}, \mathcal{O}_{\tilde{C}} ) = H^1(\tilde{C}, \pi^*T_P )$ (see \eqref{eq4.6}). Hence according to Lemma \ref{mult} the codifferential of $Pr$ at the point $(C,B,\eta)$ is the canonical map $$ S^2(H^0(\tilde{C},\omega_{\tilde{C}})^-) \longrightarrow H^0(C,\omega_C^2(B)), $$ given by multiplication of sections. \end{proof} \section{Injectivity of the differential of the Prym map} Proposition \ref{main} implies that in order to show that the differential of the Prym map $Pr : \mathcal{R}_{g}(n,r) \longrightarrow \mathcal{A}_{p, D}$ at a point $(C,B,\eta)$ is injective, it suffices to show that there is an $i,\; 1 \leq i \leq n-1$, such that the canonical map $$ \mu: H^0(\omega_C \otimes \eta^i) \otimes H^0(\omega_C \otimes \eta^{n-i}) \longrightarrow H^0(\omega_C^2(B)) $$ is surjective. First we consider the case of an \'etale covering, i.e. $r=0$. We denote the moduli space of $n$-fold cyclic \'etale coverings of curves of genus $g$ by $\mathcal{R}_{g}[n]$. In the case of $n$ even we deduce: \begin{prop} \label{prop5.1} Suppose $\Cliff(C) \geq 3$. Then for $n$ even the differential of the Prym map $Pr : \mathcal{R}_g[n] \longrightarrow \mathcal{A}_{p, D}$ is injective at the point $[f: \tilde{C} \rightarrow C]$. \end{prop} \begin{proof} For $n=2m$ one of the factors of $S^2(H^0(\tilde{C}, \omega_{\tilde{C}})^-)$ is $S^2H^0(C,\omega_C\otimes \eta^m)$. Since $\eta^m$ is a 2-torsion point, the assertion is a consequence of Corollary \ref{cor2.4} and Proposition \ref{main}. \end{proof} For arbitrary $n$ we only have: \begin{prop} \label{prop5.2} Suppose $\Cliff(C) \geq 2n-1$. Then the differential of the Prym map $Pr : \mathcal{R}_g[n] \longrightarrow \mathcal{A}_{p, D}$ is injective at the point $[f: \tilde{C} \rightarrow C]$. \end{prop} \begin{proof} Again this is a consequence of the remarks before Proposition \ref{prop5.1}, Corollary \ref{cor2.4} and Proposition \ref{main}. \end{proof} \begin{rem} As we saw in the proof of Corollary \ref{cor2.4}, the assumptions on the Clifford index imply $g \geq 8$ in Proposition \ref{prop5.1} and $g \geq 4n-1$ in Proposition \ref{prop5.2}. Hence the bound for the Clifford index is not sharp in Proposition \ref{prop5.1}, since it is also valid for $g=6, n=2$ (see \cite[Corollaire 7.11]{Be}). The bounds in Proposition \ref{prop5.2} are certainly not sharp if $n$ is a composite number. If $p$ is a proper prime divisor of $n$, then $\eta^{\frac{n}{p}}$ is a $p$-division point and the method of Proposition \ref{prop5.1} gives a better result.
We omit this, since it is easy to work out. \end{rem} For a general covering in $\mathcal{R}_g[n]$ there is a better bound which relies on the following lemma. \begin{lem} \label{lem5.4} For $g\geq 5$ there exists a curve $C$ of genus $g$ admitting a proper $n$-division point $\eta \in JC [n]$ such that the line bundle $\omega_C \otimes \eta$ is very ample for any $n \geq 2$. \end{lem} \begin{proof} Consider the locus $$ \mathcal{V}_n := \{([C], \eta) \in \mathcal{R}_g[n] \mid \ \omega_C \otimes \eta \textrm{ is not very ample}\}, $$ which is a Zariski-closed subset of $\mathcal{R}_g[n]$. According to Lemma \ref{lem2.1}, a curve $C$ such that $([C], \eta) \in \mathcal{V}_n$ admits points $x,y,u,v$ with $x+y \neq u + v$ such that $nx + ny \sim nu + nv$. According to the Riemann-Hurwitz formula the corresponding $2n:1$ covering $C \rightarrow \mathbb P^1$ is branched at most over $$ b = 4n + 2g -2 -4(n-1) +2 = 2g + 4 $$ points of $\mathbb P^1$. Hence, the dimension of $\mathcal{V}_n$ is at most the dimension of the Hurwitz scheme $$ \mathcal{H}_{2n,b} = \{ \pi: C \stackrel{2n:1}{\longrightarrow} \mathbb{P}^1 \textrm{ branched over $b$ points } \}. $$ It is known that the Hurwitz schemes have at most the expected dimension $b$, the number of branch points of the cover. Therefore, the dimension of the locus in $\mathcal{M}_g$ of curves $C$ admitting a $g^1_{2n}$ of the form $| n x + n y|$ is at most $$ \dim \mathcal{H}_{2n,b} - \dim \Aut (\mathbb{P}^1) = 2g+1, $$ which is less than $\dim \mathcal{R}_g[n] = \dim \mathcal{M}_g =3g-3$ for $g \geq 5$. This completes the proof. \end{proof} Using this, we can conclude: \begin{prop} \label{prop5.5} For any $n\geq 2$ and $g \geq 7$ the Prym map $Pr : \mathcal{R}_g[n] \longrightarrow \mathcal{A}_{p,D}$ is generically finite. In particular the image of $Pr$ is of dimension $3g-3$. \end{prop} \begin{proof} It suffices to show that there exists an element $([C], \eta) \in \mathcal{R}_g[n]$ such that the differential of the Prym map is injective at $([C], \eta)$. For this it suffices to show, according to Proposition \ref{main}, that there exists a $([C], \eta) \in \mathcal{R}_g[n]$ such that the canonical map $H^0(C, \omega_C \otimes \eta) \otimes H^0(C, \omega_C \otimes \eta^{-1}) \rightarrow H^0(C, \omega_C^2)$ is surjective. This follows from \cite[Theorem 1]{b} using Lemma \ref{lem5.4} and the fact that a general curve of genus $g \geq 7$ is of Clifford index $\geq 3$. \end{proof} Finally, we consider the case of ramified $n$-fold cyclic coverings, i.e. $r > 0$. Here we have the following consequences of Proposition \ref{main}: \begin{prop} \label{prop5.6} Suppose $g\geq 2$ and $r \geq 6$ if $n$ is even, respectively $r \geq 7$ if $n$ is odd. Then the differential of the Prym map $Pr : \mathcal{R}_g(n,r) \longrightarrow \mathcal{A}_{p, D}$ is injective at any point $(C,B,\eta) \in \mathcal{R}_g(n,r)$. \end{prop} \begin{proof} As we have said at the beginning of this section, it suffices to show that the multiplication map $$ H^0(C, \omega_C \otimes \eta^{[\frac{n}{2}]}) \otimes H^0(C, \omega_C \otimes \eta^{n-[\frac{n}{2}]}) \longrightarrow H^0(C, \omega_C^2 (B)) $$ is surjective. According to \cite[Theorem 1]{b} this map is surjective if the line bundles $\omega_C \otimes \eta^{[\frac{n}{2}]}$ and $\omega_C \otimes \eta^{n-[\frac{n}{2}]}$ are very ample. Since $\deg (\omega_C ) =2g-2$, the very ampleness condition is satisfied if $\deg \eta^{[\frac{n}{2}]}$ (resp. $\deg \eta^{n-[\frac{n}{2}]}$) is at least 3.
For $n=2m$ even, $[\frac{n}{2}] = n - [\frac{n}{2}] = m$, and then $\deg \eta^m= \frac{mr}{n} = \frac{r}{2} \geq 3$ if and only if $r \geq 6$. On the other hand, if $n= 2m+1$, $m\geq 1$, the line bundles $\omega_C \otimes \eta^{m}$ and $\omega_C \otimes \eta^{m+1}$ are very ample as soon as $\frac{r}{n} (\frac{n-1}{2}) \geq 3$. This is the case for $r \geq 7$, since $r$ is a multiple of $n$ according to the Hurwitz formula. \end{proof} Since $n$ divides $r$, we are left with 6 cases for the pair $(n,r)$, for which we have a similar result. It is given in the following proposition; the proof is again an application of Lemma \ref{lem2.1} and \cite[Theorem 1]{b}. We omit the details. \begin{prop} \label{prop5.7} The differential of the Prym map $Pr : \mathcal{R}_g(n,r) \longrightarrow \mathcal{A}_{p, D}$ is injective at the point $(C,B,\eta) \in \mathcal{R}_g(n,r)$ in the following cases: \begin{itemize} \item for $n=2, r=2$ i.e. $\deg \eta =1$, if $\eta \neq \mathcal{O}_C(x+y-u)$ and $\Cliff(C) \geq 2$; \item for $n=2, r=4$ i.e. $\deg \eta =2$, if $\eta \neq \mathcal{O}_C(x+y)$ and $\Cliff(C) \geq 1$; \item for $n=3, r=3$ i.e. $\deg \eta =1$, if $\eta \neq \mathcal{O}_C(x+y-u)$ and $\Cliff(C) \geq 2$; \item for $n=3, r=6$ i.e. $\deg \eta =2$, if $\eta \neq \mathcal{O}_C(x+y)$ and $\Cliff(C) \geq 1$; \item for $n=4, r=4$ i.e. $\deg \eta =1$, if $\eta^2 \neq \mathcal{O}_C(x+y)$ and $\Cliff(C) \geq 1$; \item for $n=5, r=5$ i.e. $\deg \eta =1$, if $\eta^2 \neq \mathcal{O}_C(x+y)$ and $\Cliff(C) \geq 1$. \end{itemize} \end{prop} Since for $g \geq 3$, respectively $g \geq 5$, there are curves of Clifford index $\geq 1$, respectively $\geq 2$, we get as an immediate consequence of Propositions \ref{prop5.6} and \ref{prop5.7}: \begin{cor} \label{cor5.8} Suppose one of the following conditions is satisfied: \begin{itemize} \item $g \geq 2$ and $r \geq 6$ for $n$ even or $r\geq 7$ for $n$ odd, \item $g \geq 3$ and $n=r = 4$ or $5$ or $(n,r) = (2,4)$ or $(3,6)$, \item $g \geq 5$ and $n=r = 2$ or 3. \end{itemize} Then the Prym map $Pr : \mathcal{R}_g(n,r) \longrightarrow \mathcal{A}_{p,D}$ is generically finite. In particular the image of $Pr$ is of dimension $3g-3 + r$. \end{cor}
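Note that the dimension statement is consistent with a direct parameter count: the data $(C,B,\eta)$ depend on the $3g-3$ moduli of $C$, the $r$ branch points of $B$, and finitely many choices of $\eta$ with $\eta^{\otimes n} \simeq \mathcal{O}_C(B)$, so that $\dim \mathcal{R}_g(n,r) = 3g-3+r$, and generic finiteness of $Pr$ yields an image of exactly this dimension.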
\section{Introduction and review of previous analyses} Charged-current (CC) deep-inelastic scattering (DIS) of neutrinos off nuclei has long been recognized to have a significant impact on global analyses of proton~\cite{Hou:2019efy,Accardi:2016qay,Bailey:2020ooq,Abramowicz:2015mha,Ball:2017nwa,Alekhin:2017kpj} and nuclear~\cite{deFlorian:2011fp,Kovarik:2015cma,Kusina:2016fxy,Kusina:2020lyz,Duwentaster:2021ioo,Eskola:2016oht,AbdulKhalek:2020yuc,Walt:2019slu,Khanpour:2020zyu} parton distribution functions (PDFs), mainly due to its discriminating power in separating quark flavors~\cite{Kovarik:2019xvh,Ethier:2020way}. A good theoretical understanding of neutrino DIS is also an important ingredient for determinations of the weak mixing angle and for searches for physics beyond the Standard Model~\cite{Zyla:2020zbs}. Apart from inclusive neutrino DIS, the semi-inclusive charm dimuon production $\nu N \to \mu D +X$ with $D \to \mu+X'$ plays a crucial role in determining the strange quark content of the nucleon \cite{Goncharov:2001qe, Kusina:2012vh, Faura:2020oom}. Due to the weak nature of the neutrino-nucleus interaction, heavy nuclei such as iron or lead have been usually used as targets in neutrino scattering experiments in order to obtain data with sufficiently high statistics. Therefore, if one were to use the neutrino DIS data in an analysis of the structure of the proton, a nuclear correction factor would be required. Indeed it is much more natural to analyze neutrino DIS in the framework of nuclear PDFs (nPDFs). Out of all available up-to-date global analyses of nPDFs, most include a small selection of neutrino inclusive or semi-inclusive DIS data. The reason why nPDF analyses do not include the totality of neutrino DIS data can be traced back to concerns about possible tensions between neutrino DIS data and the charged-lepton data fitted in nPDF frameworks. In the past decades, there have been several dedicated analyses of neutrino DIS data using the framework of nPDFs. They started with Ref.~\cite{Schienbein:2007fs}, where it was shown by conducting an analysis of neutrino DIS cross-section data from NuTeV~\cite{Tzanov:2005kr} and dimuon data from NuTeV and CCFR~\cite{Goncharov:2001qe} that the extracted iron PDFs in the nCTEQ framework led to a nuclear ratio of the charged-current structure function $F_2$ that is flatter and significantly different from the similar ratio extracted directly from the charged-lepton DIS data, as described, e.g., by the Kulagin-Petti model \cite{Kulagin:2004ie} or the SLAC/NMC parametrization \cite{Abramowicz:1991xz}. In particular, the lack of shadowing of the charged-current structure function ratio in the low-$x$ ($x\leq 0.1$) region is quite atypical. Another peculiarity can also be observed: the typical antishadowing which is present in the neutral current data at moderate $x$ ($0.06< x<0.3$) is shifted to much smaller $x$. The stark difference in the nuclear correction factor triggered a follow-up study \cite{Kovarik:2010uv}, where a global analysis that included charged-lepton and Drell-Yan (DY) data as well as neutrino DIS from NuTeV~\cite{Tzanov:2005kr} and Chorus~\cite{Onengut:2005kv} was performed. It concluded that the neutrino DIS data is incompatible with the charged-lepton data citing the high precision of the NuTeV cross-section data and especially the correlated systematic uncertainties as the main reason for the conclusion. 
Some time later, two related studies \cite{Paukkunen:2010hb,Paukkunen:2013grz} were carried out in the EPS nPDF framework. The authors found only a mild tension between the neutrino DIS data and the charged-lepton DIS data. They further suggested~\cite{Paukkunen:2013grz} that data normalization might be the reason of the apparent incompatibility. By normalizing cross-section data with the integrated cross-section in each energy bin and using a Hessian reweighting analysis based on linearization of theory predictions near the minimum, it was shown that the neutrino DIS data, in particular those from NuTeV, could be included in a global analysis with charged-lepton DIS data without causing significant tensions. It is worth noting that the NuTeV data used in Ref.~\cite{Paukkunen:2013grz} were used without point-to-point correlations, which, as was also shown in the previous nCTEQ analysis \cite{Kovarik:2010uv}, makes a large difference. With uncorrelated systematic errors the NuTeV data can be described with a very good $\chi^2$ even in Ref.~\cite{Kovarik:2010uv}. Nevertheless, even if the NuTeV data are described well, some charged-lepton DIS data, especially those taken on a nucleus close to iron in the mass number, have $\chi^2$/pt significantly larger than unity. Furthermore, without a proper global analysis, the linearization method employed in Ref.~\cite{Paukkunen:2013grz} might not be sufficient to capture the true minimum, considering the fact that there are almost four times as many neutrino DIS data points as there are charged-lepton DIS and DY data points. Another intriguing study aiming at comparing the neutrino DIS data with the rest of the data was performed by Kalantarians \textit{et al.}~\cite{Kalantarians:2017mkj}. There, $F_2^{\mathrm{Fe}}/F_2^{\mathrm{D}}$ data from BCDMS and NMC were transformed into $F_2^{\mathrm{Fe}}$ by multiplying the data with $F_2^{\mathrm{D}}$ from the NMC parametrization \cite{Abramowicz:1991xz}. This neutral current $F_2^{\mathrm{Fe}}$ data was then compared with charged current $F_2^{\mathrm{Fe}}$ data from the NuTeV, CCFR and CDHSW experiments, after correcting them using the well-known ``18/5-rule'' (in the quark--parton model, $F_2^{\nu N} \approx \frac{18}{5} F_2^{\ell N}$ for an isoscalar target, reflecting the mean squared quark charge of $5/18$). Agreement in the valence region ($x>0.3$) could be shown, but discrepancies of around 15\% at \hbox{$x<0.15$} were still visible. These could still be explained by a proper NLO treatment including heavy-quark effects, which leads to differences of similar size in the same kinematic region. Apart from the aforementioned dedicated analyses, the neutrino DIS data have been used in numerous global analyses of nPDFs. In the past, analyses such as Ref.~\cite{deFlorian:2011fp} included $F_2$ and $F_3$ neutrino data from CDHSW, NuTeV, and Chorus. The downside of using the structure function data is that these data are not as precise and therefore much less sensitive to any tension. Currently, all global analyses that use neutrino DIS data to aid in flavor decomposition, e.g. Refs.~\cite{AbdulKhalek:2020yuc,Walt:2019slu,Khanpour:2020zyu,Eskola:2021nhw,Khalek:2022zqe}, prefer to avoid the NuTeV cross-section data.\footnote{One should also mention that the HKN group observed similar incompatibilities in the nuclear modifications extracted from charged-lepton and neutrino DIS data. However, these results are still preliminary~\cite{Nakamura:2016cnn}.} It is important to emphasize that the nuclear effects determined in global nPDF analyses are relatively small and that there is insufficient data to constrain all parton densities in the nuclear environment.
The notion of compatibility, or lack thereof, of the neutrino DIS cross-section data depends on the specific nPDF fitting framework: the parametrization, the choice of free parameters, the data selection, or even the proton PDF baseline. Moreover, compatibility criteria differ from analysis to analysis. In this paper, we study the compatibility of the neutrino data by performing global analyses that include both charged-lepton data and neutrino DIS data. To extend the previous analyses, we include data sets that were not used in Ref.~\cite{Kovarik:2010uv}. Specifically, in addition to the charged-lepton DIS, DY, and neutrino DIS data from NuTeV and Chorus, we now include the $W$ and $Z$ boson production data from the LHC~\cite{AtlasWpPb,Aad:2015gta,Khachatryan:2015hha,Khachatryan:2015pzs,Sirunyan:2019dox,ALICE:2016rzo,Aaij:2014pvu}, single inclusive hadron production data from both RHIC~\cite{Adler:2006wg,PHENIX:2013kod,Abelev:2009hx,STAR:2006xud} and the LHC \cite{ALICE:2016dei,ALICE:2018vhm,ALICE:2021est}, charm-dimuon data from NuTeV and CCFR~\cite{Goncharov:2001qe}, and neutrino DIS data from CDHSW~\cite{Berge:1989hr} and CCFR~\cite{CCFRNuTeV:2000qwc,Yang:2001rm}. Furthermore, we improve on the treatment of the deuteron corrections which are applied to $F_2$ theory predictions. We also improve the treatment of normalization uncertainties by fitting their fluctuations to the data. To have maximal discriminatory power from highly correlated data like NuTeV and Chorus, we take into account their correlated systematic uncertainties in all fits. We also allow the strange quark PDF parameters to vary, in contrast to our previous analysis \cite{Kovarik:2010uv}, where they were fixed by requiring $s+\bar{s}=\kappa (\bar{u}+\bar{d})$. As a result of all the aforementioned improvements and additions, the analysis presented in this paper is the most comprehensive analysis of the neutrino DIS data available so far. As a result of our compatibility study, we also identify several approaches by which neutrino DIS data can be used together with the charged-lepton DIS data in global nPDF analyses while avoiding much of the tension. We also present the best approach, which will be used in our future global release of nCTEQ nPDFs with neutrino data. In the meantime, we also publish the nPDFs obtained in the current analysis, which constitute our most complete set of nPDFs to date. The remaining part of the paper is organized as follows. The analysis framework that serves as the basis for this work is briefly reviewed in Sec.~\ref{sec:framework}. Section~\ref{sec:data} is dedicated to the neutrino data new to this analysis. This section also contains some preliminary checks of the internal consistency of the neutrino data among themselves. Section~\ref{sec:nuglobal} is the core of this paper and introduces the compatibility criteria used in reaching the conclusions. The main point is the discussion of the compatibility between the charged-lepton and neutrino data. We investigate the impact of the data selection, the treatment of errors, and the kinematic cuts in Sec.~\ref{sec:nufinal}. The details of the combined fit with neutrino and other data are given in Section~\ref{sec:ncteqnu}. The whole study is then summarized in Section~\ref{sec:conclusion}, which also provides an outlook and a possible interpretation of the results.
In addition, we list the explicit results of all fits performed in the course of this analysis in Appendix~\ref{sec:fitresults} and we discuss normalization issues and our method to handle the d'Agostini bias in Appendix~\ref{sec:app_norm_unc}. \section{Analysis Framework} \label{sec:framework} \subsection{nPDF fitting framework} The extraction of nuclear PDFs in this analysis is performed using the same framework already employed in the nCTEQ15 analysis~\cite{Kovarik:2015cma} and all our subsequent analyses~\cite{Kusina:2020lyz,Duwentaster:2021ioo}. Specifically, for a nucleus with mass number $A$ the full nPDF, $f_i^A$, is expressed in terms of effective bound-nucleon distributions: \begin{equation}\label{fiA} f_i^A(x, Q) = \frac{Z}{A} f_i^{p/A}(x, Q) + \frac{N}{A} f_i^{n/A}(x, Q), \end{equation} where $i$ is a parton flavor, $Q$ is the factorization/evolution scale, $x$ is the fractional momentum of the parton with respect to the average momentum of the nucleons, $Z$ and $N=(A-Z)$ are respectively the number of protons and neutrons inside the nucleus, while $f_i^{p/A}$ and $f_i^{n/A}$ are the effective bound proton and neutron PDFs, respectively. The momentum fraction $x$ in this case takes in principle the values $0\leq x\leq A$. However, we assume that $f_i^A(x, Q)=0$ for $x>1$, which is reasonable as long as we neglect the motion of bound nucleons inside the nucleus~\cite{Segarra:2020gtj}. The bound neutron PDFs can be obtained from the bound proton ones by assuming isospin symmetry. The bound proton PDFs are parametrized at the input scale $Q_0=1.3$ GeV using the following parametrization \cite{Kovarik:2015cma}: \begin{align} xf_i^{p/A}(x, Q_0) = c_0 x^{c_1}(1-x)^{c_2}e^{c_3 x}\left(1+e^{c_4} x\right)^{c_5},\\ \frac{\bar{d}(x, Q_0)}{\bar{u}(x, Q_0)} = c_0 x^{c_1} (1-x)^{c_2}+ (1+c_3)(1-x)^{c_4}, \end{align} where the flavor index $i$ runs over $i=u_v, d_v, g, \bar{u}+\bar{d}, s+\bar{s}, s-\bar{s}$. Here $u_v$ and $d_v$ are the up and down quark valence distributions, and $g,\bar{u},\bar{d}, s, \bar{s}$ are the gluon, anti-up, anti-down, strange, and anti-strange quark distributions, respectively. The free coefficients $c_i$ are assumed to be $A$-dependent and the general form of this dependence is given by \begin{equation}\label{ck} c_i(A,Z) = p_i+ a_i(1-A^{-b_i})\,. \end{equation} Here, $p_i$ are the free-proton PDF parameters obtained in a dedicated proton PDF analysis of Ref.~\cite{Owens:2007kp}, which are close in value to the CTEQ6.1M parameters \cite{Stump:2003yu}. We have chosen these free-proton PDF parameters in order to avoid possible inconsistencies arising when proton PDF analyses use data taken on nuclei. The analysis of Ref.~\cite{Owens:2007kp} excludes all nuclear data such as the CCFR $F_2$ and $F_3$ neutrino DIS data~\cite{CCFRNuTeV:2000qwc}. The nPDFs for different nuclei are obtained by fitting the nuclear parameters $a_i$ and $b_i$ to the experimental data. In total, there are about 40 parameters of each type, $a_i$ and $b_i$. Some of these parameters are constrained by the usual sum rules, but the rest remains to be constrained by the data. Given that in the case of nuclear PDFs the data are not as numerous and precise as in the proton case, many of the free parameters need to be fixed in any nPDF analysis. Comparing two different nPDF extractions can be made difficult if the analyses in question use vastly different numbers of free parameters. In such a case, parametrization bias becomes an issue which is difficult to overcome.
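To make the structure of Eqs.~(\ref{fiA})--(\ref{ck}) concrete, the following minimal Python sketch (illustrative only; the function and parameter names are ours and it is not part of the released nCTEQ code) evaluates the parametrization for a single flavor at the input scale:
\begin{verbatim}
import numpy as np

def coeff(A, p_i, a_i, b_i):
    # A-dependence of a shape parameter:
    # c_i(A) = p_i + a_i * (1 - A^(-b_i)),
    # where p_i is the free-proton value.
    return p_i + a_i * (1.0 - A ** (-b_i))

def xf_bound_proton(x, c0, c1, c2, c3, c4, c5):
    # Input-scale shape at Q0 = 1.3 GeV for one flavor:
    # x f(x) = c0 x^c1 (1-x)^c2 exp(c3 x) (1 + exp(c4) x)^c5
    return (c0 * x**c1 * (1.0 - x)**c2
            * np.exp(c3 * x) * (1.0 + np.exp(c4) * x)**c5)

def xf_nucleus(x, Z, A, xf_p, xf_n):
    # Full nuclear PDF as the Z/A- and N/A-weighted sum of
    # bound-proton and bound-neutron distributions; xf_n follows
    # from xf_p by isospin symmetry (u <-> d).
    return (Z / A) * xf_p(x) + ((A - Z) / A) * xf_n(x)
\end{verbatim}
A full analysis additionally imposes the sum rules on these input distributions and evolves them in $Q^2$.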
In this analysis, we have succeeded in performing every relevant fit containing a sufficient number of data points with the same large number of free parameters. Only for special fits to a very small subset of data were we forced to use a smaller number of free parameters in order to reliably estimate the uncertainties of these analyses within the Hessian approach. In general, even though the $A$-dependence of the parton distribution functions given in Eq.~(\ref{ck}) allows for great flexibility, there is insufficient data to constrain the whole functional form. Therefore, we opt to fix most of the $b_i$ coefficients and let them vary only in cases where we expect that precise data taken on multiple nuclei can constrain them. \subsection{nCTEQ15WZSIHdeut}\label{sec:nCTEQ15} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figs/pcj_F2N.pdf} \caption{The ratio $F_2^{\mathrm{D}}/F_2^N$ of deuteron to isoscalar structure functions at $Q^2=8$ GeV$^2$, where $F_2^{\mathrm{D}}$ is computed using Eq.~\eqref{F2D}.} \label{fig:F2D/F2N} \end{figure} \begin{table*}[h!] \renewcommand{\arraystretch}{1.2} \caption{Comparison of the $\chi^2$/pt for the nCTEQ15, nCTEQ15WZSIH and nCTEQ15WZSIHdeut analyses for selected data sets. Numbers appearing inside brackets show the $\chi^2$/pt values for data sets that are not used in the corresponding fits.\label{tab:Chi}} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{ATLAS Run I} & \multicolumn{3}{c|}{CMS Run I} & \multicolumn{2}{c|}{CMS Run II} & \multicolumn{2}{c|}{ALICE} & LHCb & & DIS & DY & SIH & $W$,$Z$ & {\bf~Total~} \tabularnewline \cline{1-12} & $W^{-}$ & $W^{+}$ & $Z$ & $W^{-}$ & $W^{+}$ & $Z$ & $W^{-}$ & $W^{+}$ & $W^{-}$ & $W^{+}$ & $Z$ & & & & & LHC & \tabularnewline \hline \hline nCTEQ15 & (1.38) & (0.71) & (2.88) & (6.13) & (6.38) & (0.05) & (9.65) & (13.20) & (2.30) & (1.46) & (0.70) & & 0.91 & 0.73 & (0.25) & (6.20) & {\bf 1.66} \tabularnewline \hline \hline nCTEQ15WZSIH & 0.64 & 0.26 & 1.76 & 1.31 & 1.16 & 0.11 & 0.74 & 1.14 & 0.76 & 0.04 & 0.56 & & 0.91 & 0.78 & 0.41 & 0.91 & {\bf 0.83} \tabularnewline \hline \hline nCTEQ15WZSIHdeut & 0.56 & 0.37 & 1.33 & 1.01 & 1.13 & 0.13 & 0.70 & 0.90 & 0.75 & 0.05 & 0.63 & & 0.85 & 0.79 & 0.45 & 0.77 & {\bf 0.78} \tabularnewline \hline \end{tabular} \end{table*} \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{figs/chi2dof_nCTEQ15WZSIHdeut.pdf} \caption{Values of $\chi^2$/pt for the nCTEQ15WZSIHdeut fit for individual experiments.\footnote{We find the DIS experiment 5108 (Sn/D EMC-1998) to be an outlier; our result is consistent with other results from the literature.} The IDs of the experiments can be found in Tabs.~I-IV of Ref.~\cite{Kovarik:2015cma}, Tab.~II of Ref.~\cite{Kusina:2020lyz} and Tab.~I of Ref.~\cite{Duwentaster:2021ioo}.} \label{fig:ncteq15wzchi2} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{figs/WZSIH.pdf} \caption{The ratio of nuclear parton distribution functions of the nCTEQ15WZSIH and nCTEQ15WZSIHdeut analyses with respect to the nCTEQ15 analysis for lead at the scale $Q^2=4\ {\rm GeV}^2$.} \label{fig:pdfncteq15wz} \end{figure*} Before discussing the neutrino data, we need to carefully specify the nPDFs we will compare our results against.
The global analysis that we use as a reference here is based on the recent nCTEQ15WZSIH analysis~\cite{Duwentaster:2021ioo}, which uses charged lepton DIS, DY, LHC $W$ and $Z$ boson production data and single inclusive hadron production data from both RHIC and LHC to determine the nPDFs. However, we improve upon the nCTEQ15WZSIH analysis in several respects. First, to improve the up- and down-quark PDF separation, we remove the isoscalar corrections that were applied when the data were published, using the same method as in Ref.~\cite{Segarra:2020gtj}. Moreover, in order to take into account the nuclear corrections in deuteron data, we correct the deuteron $F_2$ structure function predictions using the method discussed in Ref.~\cite{Segarra:2020gtj}. Specifically, the deuteron $F_2^{\mathrm{D}}$ is computed as
\begin{equation}\label{F2D} F_2^{\mathrm{D}} = F_2^{p, nCTEQ15} \times \frac{F_2^{\mathrm{D}, CJ}}{F_2^{p,CJ}} \end{equation}
where $F_2^{\mathrm{D}, CJ}$ and $F_2^{p, CJ}$ are the fitted deuteron and proton structure functions from the CJ15 analysis~\cite{Accardi:2016qay}, and $F_2^{p, nCTEQ15}$ is the proton structure function computed using our base proton PDFs. Without this method, the deuteron $F_2$ is traditionally computed as a simple isoscalar combination, $F^N_2 \equiv F^p_2+F_2^n$ \cite{Kovarik:2015cma, Eskola:2016oht}. In Fig.~\ref{fig:F2D/F2N}, we show the ratio $F_2^{\mathrm{D}}/F_2^N$ at $Q^2=8$ GeV$^2$. We can see that our treatment of the deuteron structure function modifies $F_2^N$ by $\sim 1\%$ at $x\leq 0.1$ and $\sim 3.5\%$ at $x\approx 0.65$. The different treatment of the deuteron structure function influences the description of all the charged-lepton DIS data which are published as ratios $F_2^A/F_2^{\mathrm{D}}$. This set of data includes data taken on a wide range of nuclear targets and constitutes about half of the data in the nCTEQ15WZSIH analysis. For DIS data, we apply our standard kinematic cuts, namely, we only keep data with $Q^2>4$ GeV$^2$ and $W^2= M_p^2+Q^2(1-x)/x >12.25$ GeV$^2$, where $M_p$ is the nucleon mass.\footnote{We refrain from using less restrictive kinematic cuts like the ones in our recent analysis of JLab data~\cite{Segarra:2020gtj}, as we want to stay in the purely perturbative regime and do not want to complicate the picture with additional effects such as higher twist or target mass corrections.} As in \cite{Duwentaster:2021ioo}, we use the same strict $p_T\geq 3$ GeV cut for all single inclusive hadron data (compared to $p_T\geq 1.7$ GeV in nCTEQ15 and EPPS16). We have repeated the nCTEQ15WZSIH analysis with all corrections and cuts mentioned above and enlarged the set of free parameters from 19 to 27. Specifically, we fit:
\begin{align*} & a_1^{u_v},\; a_2^{u_v},\; a_4^{u_v},\; a_5^{u_v},\; b_1^{u_v},\; b_2^{u_v}, \\ & a_1^{d_v},\; a_2^{d_v},\; a_4^{d_v},\; a_5^{d_v},\; b_1^{d_v},\; b_2^{d_v}, \\ & a_1^{\bar{u}+\bar{d}},\; a_2^{\bar{u}+\bar{d}},\; a_5^{\bar{u}+\bar{d}}, \\ & a_1^{g},\; a_4^{g},\; a_5^{g},\; b_0^{g},\; b_1^{g},\; b_4^{g},\; b_5^{g}, \\ & a_0^{s+\bar{s}},\; a_1^{s+\bar{s}},\; a_2^{s+\bar{s}},\; b_0^{s+\bar{s}},\; b_2^{s+\bar{s}} \; . \end{align*}
On top of these free parameters, there are 10 additional free normalisation parameters which are also determined in the fit using the approach highlighted in App.~\ref{sec:app_norm_unc}.
Similar to the analysis presented in \cite{Duwentaster:2021ioo}, 7 normalisation parameters are used to describe the single inclusive hadron experimental data and 3 normalisations are used for the description of the $W$- and $Z$-boson production measurements from the LHC. After fitting 940 data points from the same experiments that were also used in the nCTEQ15WZSIH analysis \cite{Duwentaster:2021ioo}, we obtain $\chi^2=735$, corresponding to $\chi^2$/pt = 0.782. The list of values of all parameters obtained in this analysis is given in App.~\ref{sec:fitresults}. In the following text we refer to this new analysis as nCTEQ15WZSIHdeut. For completeness, in Tab.~\ref{tab:Chi}, we compare the quality of the new nCTEQ15WZSIHdeut fit with the previous nCTEQ15WZSIH and nCTEQ15 analyses. The values of $\chi^2$/pt for each experiment are displayed in Fig.~\ref{fig:ncteq15wzchi2}. The resulting PDFs are then compared for all relevant flavours at the scale $Q^2= 4\ {\rm GeV}^2$ in Fig.~\ref{fig:pdfncteq15wz}. For comparison, we use the same $\Delta\chi^2 =45$ tolerance to define the uncertainties for all three analyses. There are several differences which can be observed between the original nCTEQ15WZSIH and the nCTEQ15WZSIHdeut analyses. For all parton flavors, we observe larger uncertainties compared to the nCTEQ15WZSIH analysis. This is connected to the enlarged number of free parameters, which now allows for a more realistic estimate of the true uncertainty. The differences in the central values of the up- and down-quark parton distributions are the expected consequences of removing the isoscalar corrections and of the different treatment of deuterium in the DIS data, together with a slightly larger number of free parameters. The differences seen in the gluon distribution can be attributed to the different free parameters used to describe the gluon PDF, as well as to secondary effects on the gluon from altered scaling violations coming from the modified deuteron data. In the case of the strange quark, the only constraint comes from the $W$ and $Z$ boson data from the LHC as well as the sum rules linking all PDFs together. Given the lack of data constraining the strange quark, we conclude that what is displayed in Fig.~\ref{fig:pdfncteq15wz} is just parametrization bias: even our parametrization with a large number of free parameters cannot reproduce the true uncertainty in the determination of the strange quark PDF, which should be regarded as much wider than the plotted bands in Fig.~\ref{fig:pdfncteq15wz}. It is here that neutrino DIS could play a major role in a global PDF analysis, providing additional sensitivity to the strange quark PDF.
\section{Neutrino DIS data} \label{sec:data}
\subsection{Neutrino data and observables}
As in any global analysis, data selection is an important factor which, as previous analyses of neutrino data show, can largely influence the obtained results. Given that we investigate the compatibility of neutrino DIS data with the rest of the nuclear data, we aim to include all available neutrino DIS data. The experimental collaborations usually publish their results for different observables as differential cross-sections or structure functions. Given that the structure functions are extracted from the cross-section data and that this extraction often requires certain assumptions or input from theory, we prefer to use the differential cross-section data whenever possible. There are two kinds of neutrino data included in the current analysis.
All the new data sets, with a breakdown of the number of neutrino and anti-neutrino DIS cross-section data points that satisfy the kinematic cuts $Q^2>4$ GeV$^2$ and $W^2>12.25$ GeV$^2$ applied in our analysis, are listed in Tab.~\ref{tab:nudata}. We also give the range of (anti-)neutrino energy bins for each data set. The largest and most important contribution comes from the measurements of the inclusive double-differential cross section for the scattering of neutrinos and anti-neutrinos on iron or lead nuclei. The data taken on iron targets come from the CDHSW \cite{Berge:1989hr}, CCFR \cite{CCFRNuTeV:2000qwc,Yang:2001rm} and NuTeV \cite{Tzanov:2005kr} collaborations, whereas the Chorus \cite{Onengut:2005kv} data are taken on lead. For the Chorus, CCFR and NuTeV data, the electroweak corrections were applied directly to the experimental data. The Chorus and NuTeV data provide point-by-point correlated systematic uncertainties, which we include in our analysis.\footnote{The correlated systematic uncertainties for the NuTeV data have been used but not given explicitly in the official publication \cite{Tzanov:2005kr}. They can be found in the supplemental material of the corresponding arXiv submission.} There is one issue that needs to be mentioned here. Given that the NuTeV experiment was conceived as a follow-up to the older CCFR experiment, and given that in \cite{Tzanov:2005kr} it was claimed that the CCFR experiment had issues, such as with the mapping of the magnetic field, affecting the measurements at large $x$, we apply a cut excluding all CCFR data with $x>0.4$. Apart from the data mentioned before, there have been measurements of neutrino DIS reported by the NOMAD \cite{NOMAD:2007krq,Petti:2006tu}, IceCube \cite{IceCube:2017roe} and MINER$\nu$A \cite{PhysRevD.93.071101} collaborations, which we do not consider in this analysis for various reasons. The NOMAD cross-section data would be the most promising, given the high statistics and given that the data were taken on multiple nuclear targets. Unfortunately, the inclusive differential cross-section data have never been publicly released. The IceCube data are measured at extremely small $x\sim 10^{-6}$, where a different theoretical treatment might be required, and they come with large uncertainties. Finally, the latest results come from the MINER$\nu$A neutrino scattering experiment on polystyrene, graphite, iron and lead targets. The collaboration published the ratio of the neutrino scattering single-differential cross section, $d\sigma/dx$, as a function of $x$ and neutrino energy $E_\nu$. Unfortunately, the average virtuality $\langle Q^2\rangle$ is below the $Q^2=4$ GeV$^2$ threshold, and so the data are excluded from the analysis by our kinematic cuts. The second class of data we consider is the semi-inclusive production of di-muons in (anti-)neutrino DIS measured by the NuTeV and CCFR experiments \cite{Goncharov:2001qe}. There are numerous additional data from the CDHS \cite{Abramowicz:1982zr}, Chorus \cite{CHORUS:2008vjb} and NOMAD \cite{NOMAD:2013hbk} collaborations which we do not include in our analysis. The older data from the CDHS and Chorus experiments provide no additional constraint compared to the di-muon data we include. The NOMAD data are more precise, but due to technical difficulties we were unable to make use of them in this analysis.
However, at the end of this paper, we compare the results of our analysis against the NOMAD data and show that the theoretical prediction from our final result correctly describes the data. Still, the precision of the NOMAD data suggests that further studies of their PDF constraints could be valuable.
\begin{table}[tb] \caption{New neutrino data sets used in this analysis.}\label{tab:nudata} \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{|lccccc|} \hline Data set & Nucleus & $E_{\nu/\bar{\nu}}$(GeV) & \#pts & Corr.sys. & Ref.\\ \hline\hline CDHSW $\nu$ & \multirow{2}{*}{Fe} & \multirow{2}{*}{23 - 188} & 465 & \multirow{2}{*}{No} & \multirow{2}{*}{\cite{Berge:1989hr}}\\ CDHSW $\bar{\nu}$ & & & 464 & &\\ \hline CCFR $\nu$ & \multirow{2}{*}{Fe} & \multirow{2}{*}{35 - 340} & 1109 & \multirow{2}{*}{No} & \multirow{2}{*}{\cite{Yang:2001rm}}\\ CCFR $\bar{\nu}$ & & & 1098 & &\\ \hline NuTeV $\nu$ & \multirow{2}{*}{Fe} & \multirow{2}{*}{35 - 340} & 1170 & \multirow{2}{*}{Yes} & \multirow{2}{*}{\cite{Tzanov:2005kr}}\\ NuTeV $\bar{\nu}$ & & & 966 & &\\ \hline Chorus $\nu$ & \multirow{2}{*}{Pb} & \multirow{2}{*}{25 - 170} & 412 & \multirow{2}{*}{Yes} & \multirow{2}{*}{\cite{Onengut:2005kv}}\\ Chorus $\bar{\nu}$ & & & 412 & &\\ \hline CCFR dimuon $\nu$ & \multirow{2}{*}{Fe} & 110 - 333 & 40 & \multirow{2}{*}{No} & \multirow{2}{*}{\cite{Goncharov:2001qe}}\\ CCFR dimuon $\bar{\nu}$ & & 87 - 266 & 38 & &\\ \hline NuTeV dimuon $\nu$ & \multirow{2}{*}{Fe} & 90 - 245 & 38 & \multirow{2}{*}{No} & \multirow{2}{*}{\cite{Goncharov:2001qe}}\\ NuTeV dimuon $\bar{\nu}$ & & 79 - 222 & 34 & &\\ \hline \end{tabular}} \end{center} \end{table}
It is not a simple task to compare the precision of different experimental measurements if the measurements extend over different kinematic regions or include correlated systematic uncertainties. However, we show the results of a simplified comparison of the measurements of inclusive (anti-)neutrino DIS double-differential cross-sections in Tab.~\ref{tab:dataunc}. We choose an incoming neutrino energy $E_\nu\sim 85$ GeV, which is common to and typical of all the experiments, and average over the uncertainties (statistical and systematic errors added in quadrature) of the corresponding data at the given neutrino beam energy. Due to the oversimplifications contained in this comparison we cannot draw very detailed conclusions, but we clearly see a general trend. The neutrino data are much more precise than their anti-neutrino counterparts. This conclusion holds also for the remaining data not considered in Tab.~\ref{tab:dataunc}. For the neutrino data, we see that at this energy the NuTeV and CCFR data are the most precise, followed by the data from Chorus and CDHSW. For the anti-neutrino data, the order is somewhat different: NuTeV and CDHSW are comparable in precision, followed by CCFR and Chorus. This conclusion has to be taken with a grain of salt: the averaging procedure and, most importantly, discarding the correlations might change this simple picture. We will perform much more detailed studies in the following.
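The simplified precision comparison behind Tab.~\ref{tab:dataunc} can be sketched as follows; the numbers are synthetic placeholders standing in for the published cross-section tables of one experiment at $E_\nu\sim 85$ GeV:
\begin{verbatim}
import numpy as np

xsec = np.array([1.20, 0.95, 0.63, 0.31])  # d2sigma/dx/dy (arb. units)
stat = np.array([0.05, 0.04, 0.03, 0.03])  # statistical uncertainties
syst = np.array([0.04, 0.03, 0.03, 0.02])  # systematic uncertainties

total = np.sqrt(stat**2 + syst**2)         # added in quadrature
print("average relative error: %.2f %%"
      % (100.0 * np.mean(total / xsec)))
\end{verbatim}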
\begin{table}[tb] \caption{Relative experimental uncertainties (in percent) of various data sets at $E_\nu \sim 85$ GeV where all the data sets overlap.} \label{tab:dataunc} \centering \begin{tabular}{|lcc|} \hline Experiment & \#pts & Relative error ($\%$) \\ \hline \hline CDHSW $\nu$ & 59 & 8.36\\ CDHSW $\bar{\nu}$ & 59 & 10.75\\ \hline CCFR $\nu$ & 54 & 6.01 \\ CCFR $\bar{\nu}$ & 54 & 16.90\\ \hline NuTeV $\nu$ & 55 & 5.88\\ NuTeV $\bar{\nu}$ & 54 & 10.29\\ \hline Chorus $\nu$ & 65 & 7.70\\ Chorus $\bar{\nu}$ & 65 & 18.32\\ \hline \end{tabular} \end{table}
\subsection{Nuclear corrections from neutrino cross-section data}\label{sec:weightedaverage}
\begin{figure*}[t] \centering \includegraphics[width=0.4\textwidth]{figs/Rnunubarsep.pdf}\hfil \includegraphics[width=0.4\textwidth]{figs/Rnunubarsep_ct18nonua.pdf} \caption{The weighted average of the cross-section ratios for $Q^2>4$ GeV$^2$ and $W^2>12.25$ GeV$^2$ from CDHSW, CCFR, NuTeV, and Chorus data. The denominator ($\sigma_{\rm free}$) is computed using the nCTEQ15 proton baseline (left) and the CT18 (no nu A) NLO proton PDFs, fitted without neutrino data, of Ref.~\cite{Accardi:2021ysh} (right).} \label{fig:Rnunubar} \end{figure*}
Before we perform a global analysis including the neutrino data in our nPDF framework, it is instructive to attempt to quantify a nuclear correction factor extracted from these data alone. Given that the neutrino double-differential cross-section data are reported as a function of the usual DIS variables $x$, $y$, and $E_{\nu}$, while the nuclear ratio is typically given only as a function of $x$, assuming that the variation with changing $Q^2$ is small, an averaging procedure is necessary. We define the nuclear ratio of the cross-section and its uncertainty for each data point as
\begin{eqnarray}\label{Rsigma} R_i^\sigma(x) &=& \frac{\sigma(x, y_i, E_i)}{\sigma_{\rm free}(x, y_i, E_i)}\,,\\ \label{DRsigma} \Delta R_i^\sigma(x) &=& \frac{\Delta\sigma(x, y_i, E_i)}{\sigma_{\rm free}(x, y_i, E_i)}\,, \end{eqnarray}
where $\sigma_{\rm free}$ is the predicted differential cross section using ``free'' iron or lead PDFs, $f_i^{A,{\rm free}}$, defined by
\begin{equation}\label{ffree} f_i^{A,{\rm free}} = \frac{Z}{A}f_i^{p}+ \frac{A-Z}{A}f_i^{n} \; . \end{equation}
Here, $f_i^{p (n)}$ are the free proton (neutron) PDFs, which in our case are taken from our proton baseline. The quantity $\Delta\sigma(x, y_i, E_i)$ is the sum in quadrature of the statistical and systematic uncertainties of the data points, excluding the normalization uncertainty. We construct a weighted average of the nuclear ratios, such that for a given $x$ the weighted-average ratio and its uncertainty are:
\begin{eqnarray} \mathcal{R}(x) &=& \sum_{i} w_i R^\sigma_i,\\ \Delta\mathcal{R}(x) &=& \left(\sum_{i} w_i^2(\Delta R^\sigma_i)^2\right)^{1/2} \; . \end{eqnarray}
The weight $w_i$ is defined as
\begin{equation} w_i = \left( \sum_{j} \frac{1}{(\Delta R^\sigma_j)^2}\right)^{-1} \frac{1}{(\Delta R^\sigma_i)^2}\,, \end{equation}
where the sum runs over data points with the same $x$. This averaging procedure is similar to the one used in Ref.~\cite{Paukkunen:2013grz}, although there are differences in the definition of the weight $w_i$ and of the uncertainty $\Delta\mathcal{R}(x)$. In such a procedure the dependence on the remaining variables is averaged out. This, of course, is only reasonable if the nuclear correction factor depends only mildly on the remaining variables.
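A minimal sketch of this weighted-average procedure, assuming that the arrays passed to the function contain all points belonging to the same $x$ bin (the function and variable names are ours):
\begin{verbatim}
import numpy as np

def weighted_ratio(R, dR):
    # Inverse-variance weights w_i, normalised to sum to one.
    w = (1.0 / dR**2) / np.sum(1.0 / dR**2)
    Rbar = np.sum(w * R)                   # weighted average R(x)
    dRbar = np.sqrt(np.sum(w**2 * dR**2))  # its uncertainty Delta R(x)
    return Rbar, dRbar

# Three measurements at the same x with different precision;
# the most precise point dominates the average.
R = np.array([0.97, 1.02, 0.99])
dR = np.array([0.05, 0.02, 0.08])
print(weighted_ratio(R, dR))
\end{verbatim}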
We have checked that this assumption is reasonably valid for a wide range of $Q^2$ and $y$ within the kinematic range allowed by our cuts. Some deviations from this assumption can be observed below $x=0.015$ and above $x=0.75$, where $R$ can be spread quite widely around unity. Therefore, any inference based on this averaging procedure in these regions should be made with caution. In Fig.~\ref{fig:Rnunubar}, we show the nuclear correction factors $\mathcal{R}^\nu(x)$ and $\mathcal{R}^{\bar{\nu}}(x)$ obtained from the inclusive neutrino and anti-neutrino cross-section data from CDHSW, CCFR, NuTeV and Chorus. To better compare the shape of the nuclear corrections from different data sets, we also show an interpolation (solid lines), obtained from fits with the parametrization of the ratio~\cite{Tzanov:2005kr}
\begin{equation} \mathcal{R}(x) = a_1+ a_2x+a_3e^{a_4x}+a_5x^{a_6}. \end{equation}
For comparison, we also include the SLAC/NMC nuclear correction factor \cite{Abramowicz:1991xz}, which approximately describes the nuclear effects in the charged lepton data. In the left panels of Fig.~\ref{fig:Rnunubar}, we show the shape of the cross-section ratios where $\sigma_{\rm free}$ is computed using our proton baseline PDFs. We observe that the CCFR and NuTeV ratios generally agree at low $x$, but the NuTeV ratio is consistently above the CCFR one for $x>0.4$. This is consistent with the observation in Ref.~\cite{Tzanov:2005kr}, where issues with the CCFR experiment that account for this discrepancy were cited. In the following we will therefore apply a cut $x<0.4$ to the CCFR data. Overall, for the iron neutrino data (CDHSW, CCFR and NuTeV), there is no obvious shadowing, i.e. no appearance of $R<1$, at low $x$ ($x\leq 0.1$) as one would expect from the SLAC/NMC model. This is even more pronounced for the CDHSW data. However, the bin center correction was not applied for the CDHSW data, which largely affects the low- and high-$x$ data~\cite{Tzanov:2005kr}. In contrast to the data on iron, the nuclear ratio obtained from the Chorus data shows a shape more similar to the traditional SLAC/NMC ratio.
\begin{table*}[t!] \caption{$\chi^2$/pt values for each data set from the DimuNeu fit.} \centering \begin{tabular}{|c|c||c|c|c|c||c|c|c|c||c|c|c|c||c|c|c|c||c|c|} \hline \multicolumn{2}{|c||}{Dimuon} & \multicolumn{2}{c|}{NuTeV $\nu$} & \multicolumn{2}{c||}{NuTeV $\bar{\nu}$} & \multicolumn{2}{c|}{CCFR $\nu$} & \multicolumn{2}{c||}{CCFR $\bar{\nu}$} & \multicolumn{2}{c|}{Chorus $\nu$} & \multicolumn{2}{c||}{Chorus $\bar{\nu}$} & \multicolumn{2}{c|}{CDHSW $\nu$} & \multicolumn{2}{c||}{CDHSW $\bar{\nu}$} & \multicolumn{2}{c|}{Total} \\ \hline $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts & $\chi^2\!$/pt & \#pts \\ \hline 1.06 & 150 & 1.51 & 1170 & 1.25 & 966 & 1.00 & 824 & 1.00 & 826 & 1.21 & 412 & 1.09 & 412 & 0.68 & 465 & 0.72 & 464 & 1.12 & 5689 \\ \hline \end{tabular} \label{tab:chi2dimuneu} \end{table*}
The nuclear ratio defined above obviously depends on the underlying proton PDFs used for the free proton cross-section in the denominator of Eq.~(\ref{Rsigma}). This dependence can be seen when we compare the left and the right panels in Fig.~\ref{fig:Rnunubar}. The right panels show the same nuclear ratios as the ones on the left, but constructed using the more recent CT18 NLO PDFs.
Here, to avoid inconsistencies, we have used a dedicated CT18 fit which does not include any neutrino data \cite{Accardi:2021ysh}. Comparing the nuclear ratios coming from different underlying proton PDFs, we can clearly see differences in the $x$-shape of these ratios. The largest difference is apparent at low $x$. The ratios constructed from the CT18 NLO PDFs show signs of shadowing at $x\leq 0.1$, in contrast to the ones where the nCTEQ15 proton baseline PDFs were used. This should serve as a warning against drawing conclusions about the existence of shadowing in neutrino data from observables which are not purely data-driven and depend on assumptions such as the proton parton distributions.
\subsection{Neutrino DIS Data Fit}\label{sec:neutrino}
\begin{figure*}[t!] \centering \includegraphics[width=0.95\textwidth]{figs/fratio-full_iron.pdf} \includegraphics[width=0.95\textwidth]{figs/fratio-full_lead.pdf} \caption{The ratio of nuclear parton distribution functions for the full nuclei -- iron $(A=56,Z=26)$ (top) and lead $(A=208,Z=82)$ (bottom) -- to the nPDFs of full nuclei made up of free protons and neutrons, both at the scale $Q^2=5\,{\rm GeV}^2$.} \label{fig:dimuneu_pdfs} \end{figure*}
\begin{figure*}[t!] \centering \includegraphics[width=0.98\columnwidth]{figs/RF2_NC_5_CTEQ6M_dimuneu.pdf} \hfil \includegraphics[width=0.94\columnwidth]{figs/RF2_CC_5_CTEQ6M_dimuneu.pdf} \caption{The structure function ratio predictions from the DimuNeu and nCTEQ15WZSIHdeut fits. The grey bands on the left and on the right highlight the regions without any data points passing the kinematic cuts.} \label{RF2_CC_dimuneu} \end{figure*}
In the previous section, we have investigated the nuclear effects using just the data, constructing the weighted average of cross-section ratios. We have observed in Fig.~\ref{fig:Rnunubar} that the resulting $x$-dependence varies between neutrino experiments and is different from the expected SLAC/NMC result. Here we will go one step further and perform a neutrino analysis using the nPDF framework detailed in Sec.~\ref{sec:framework}. In this analysis, which we will refer to as ``DimuNeu'', we include {\bf only} the inclusive and semi-inclusive neutrino data listed in Tab.~\ref{tab:nudata}. Compared to our previous analyses, we improve on the treatment of correlated errors and normalisation uncertainties. The details of this treatment are given in App.~\ref{sec:app_norm_unc}. Before going further, we note that extracting a reliable set of nPDFs from neutrino data alone is not possible without making some assumptions, given that the neutrino data alone cannot constrain all possible parton distributions. In this global neutrino analysis, we set the gluon PDF parameters to be the same as those in the nCTEQ15WZSIHdeut fit. Furthermore, we set the $\bar{d}/\bar{u}$ ratio to be the same as in the free proton case, as we assume that the nuclear corrections to $\bar{u}$ and $\bar{d}$ are similar and cancel in the ratio~\cite{Schienbein:2007fs}. This fit therefore uses 20 free parameters. In addition, the normalisations of all data sets are also determined from the fit, which introduces 10 additional free parameters. The uncertainties of the parameters are determined using the Hessian method (for details see \cite{Kovarik:2015cma}) with the same $\Delta\chi^2 = 45$ tolerance criterion as the one used in the nCTEQ15WZSIHdeut analysis. The results of the DimuNeu analysis are threefold.
First, the list of final values of all parameters after the DimuNeu analysis can be found in App.~\ref{sec:fitresults}. Next, the $\chi^2$ values for all data and for each data set separately are given in Tab.~\ref{tab:chi2dimuneu}. Lastly, in Fig.~\ref{fig:dimuneu_pdfs} we show the ratio of the nuclear PDFs for the whole nucleus to the PDFs of the same nucleus built from free protons and neutrons. We compare the nuclear parton distribution functions extracted from the neutrino data to the ones extracted in the nCTEQ15WZSIHdeut analysis in Sec.~\ref{sec:nCTEQ15}. We observe that the results from the DimuNeu and nCTEQ15WZSIHdeut analyses are distinctly different for the valence quark PDFs as well as for the non-valence quark PDFs. The shapes are different even if we consider the PDF errors of both analyses. The strange quark nPDF also differs between the two analyses. In the case of the iron PDFs, the changes in the strange quark PDF are still within the uncertainties, but for lead the strange quark PDF is distinctly different. The gluon PDF parameters were fixed, and so the gluon PDF is the same in both analyses.
\begin{figure*}[t!] \centering \includegraphics[width=0.90\columnwidth]{figs/CMSwminus.pdf} \hfil \includegraphics[width=0.90\columnwidth]{figs/CMSwplus.pdf} \caption{Comparison between the CMS $W^\pm$ boson production cross section data and the theory predictions from our fits. The green (red) bands show the theory uncertainties from the nCTEQ15WZSIHdeut (DimuNeu) error PDFs. All theory predictions have been shifted by their respective fitted normalization shift.} \label{CMSpred} \end{figure*}
It is instructive to see how the resulting nPDFs from the DimuNeu analysis describe the experimental data. In Fig.~\ref{RF2_CC_dimuneu} we compare the DimuNeu predictions for the nuclear correction factor, constructed from the $F_2$ structure functions of neutral- or charged-current deep inelastic scattering, to the corresponding structure function data. There is a subtlety one has to take into account. In the case of the neutral current DIS (see the left panel of Fig.~\ref{RF2_CC_dimuneu}), the data are presented as ratios $F_2^{A}/F_2^D$, where the denominator comes from a measurement on deuterium targets. In the charged current case with neutrino beams (see the right panel of Fig.~\ref{RF2_CC_dimuneu}), deuterium targets are not heavy enough to generate sufficient statistics. Therefore, one uses a nuclear correction factor constructed as
\begin{equation} R[F_2^{CC}] = \frac{F_2^{CC}[f_i^A]}{F_2^{CC}[f_{i}^{A,{\rm free}}]}, \label{rf2cc} \end{equation}
where the charged current structure function $F_2^{CC}$ is defined as the average $F_2^{CC}= (F_2^{\nu A}+F_2^{\bar{\nu} A})/2$. In the case of the theoretical predictions, the numerator is calculated using the nuclear PDFs, $f_i^{A}$, for the corresponding nucleus $A$, and in the denominator the combination of free proton and neutron PDFs, $f_i^{A,{\rm free}}$, is used instead. In Fig.~\ref{RF2_CC_dimuneu}, the experimental points are obtained by dividing the data on $F_2^{CC}$ by the same ``free'' PDF denominator as for the theoretical prediction. In Fig.~\ref{CMSpred} we also show predictions from the DimuNeu analysis for $W^{\pm}$ production at the LHC as a function of the rapidity of the charged lepton $y^{\pm}$. Based on the total $\chi^2$ in Tab.~\ref{tab:chi2dimuneu}, we see that the DimuNeu result can decently describe all neutrino data. We see, however, that not all data sets are described equally well.
On the one hand, both neutrino and anti-neutrino data from the CDHSW and CCFR experiments are very well compatible with the DimuNeu prediction. On the other hand, all di-muon data and all Chorus data, as well as the anti-neutrino data from NuTeV, show a mild tension, with $\chi^2/{\rm pt}\sim 1.2$. The neutrino data from the NuTeV collaboration are the most precise and show the largest tension with the DimuNeu analysis. As was stated in previous analyses and verified also in the course of this analysis, the NuTeV neutrino data cannot be adequately described in this nPDF framework even if the data are fitted alone. In the right panel of Fig.~\ref{RF2_CC_dimuneu}, we see that the predicted nuclear correction factor, coming from the global neutrino DimuNeu analysis, describes the data from NuTeV and CDHSW within their uncertainties. This can be compared to the nuclear correction factor from the nCTEQ15WZSIHdeut analysis, whose $x$-shape is completely different and cannot describe the neutrino data at all. We also observe in the left panel of Fig.~\ref{RF2_CC_dimuneu} that the inverse is true for the neutral current data, where the nuclear correction factor which describes the neutrino data fails to describe the neutral current data. This is true for almost any $x$, but the largest deviation can be seen for $x<0.07$. Even at mid-$x$, where the shape of the DimuNeu nuclear correction factor would be consistent with the data, it consistently undershoots all data points. Here the situation is reversed, and the nuclear correction factor from nCTEQ15WZSIHdeut describes the data well. This apparent inconsistency of the nuclear correction factor determined from neutrino data with the rest of the neutral current data is what prompted the series of studies starting with \cite{Schienbein:2007fs}. In Fig.~\ref{CMSpred} we show that not all observables disagree. In the case of $W^{\pm}$ production at the LHC, we see a nice agreement between the results from the nCTEQ15WZSIHdeut and DimuNeu analyses. This should come as no surprise, given that $W^{\pm}$ production is quite sensitive to the gluon PDF\footnote{Actually, in the case of an nPDF fit without jet data, the $W/Z$ LHC data provide the most stringent constraints on the gluon.}, which remains fixed and is the same in both analyses. Above, we have verified that the prediction from the DimuNeu analysis correctly describes the experimental data on the $F_2^{CC}$ structure function by comparing the nuclear correction factor $R[F_2^{CC}]$. Given that we have not used the structure function data in our analysis, it is also instructive to see how well the cross-section data are described, analogously to the results and discussion of Fig.~\ref{fig:Rnunubar}. For that purpose, we return to the weighted average introduced in Sec.~\ref{sec:weightedaverage} and use it in Fig.~\ref{Rratio} to check how well the DimuNeu analysis fits the data. Even though all data considered in Fig.~\ref{Rratio} correspond to the same observable, the result of the averaging procedure depends on which data set is used in the averaging, as different experiments have different $Q^2$ ranges that are being averaged over. Therefore, separate theoretical predictions for the weighted average of each experiment, with the corresponding uncertainties, are shown.
In constructing the theoretical prediction for the weighted average, we have replaced $R_i^\sigma$ and $\Delta R_i^\sigma$ in Eqs.~(\ref{Rsigma}) and (\ref{DRsigma}) by the predicted central value and the theoretical uncertainty stemming from the PDF uncertainty, respectively. We have retained the weights $w_i$ calculated from the corresponding experimental data to ensure that the same weighting procedure is used for both the data and the theory predictions. We see that, in general, the theoretical prediction from the DimuNeu analysis fits the cross-section data as well as it did the structure function data. There is a good agreement between the data and the DimuNeu prediction for all experiments in the intermediate Bjorken-$x$ region. In the large-$x$ region, the DimuNeu result is a compromise between the diverging experimental data, where the NuTeV measurement starkly differs from the others. For small Bjorken $x$ the fit is also a compromise, given that the CDHSW, CCFR and NuTeV data show no distinct shadowing in this region, whereas the Chorus data display a shadowing behavior similar to the neutral current DIS data. Given the noticeable difference between the neutrino data taken on iron and the data taken on lead in Fig.~\ref{Rratio}, one might conclude at first glance that these data are incompatible with each other. However, we see that the DimuNeu analysis can describe both the neutrino data on iron and those on lead quite successfully within one unified nPDF framework. To investigate the matter a little further, we have performed two separate fits, which we label ``DimuNeuIron'' and ``ChorusW''.
\begin{figure}[t!] \centering \includegraphics[width=0.961\columnwidth]{figs/RDimuNeu.pdf} \caption{The weighted average of the cross-section ratio for the individual neutrino and anti-neutrino cross-section data from NuTeV, Chorus, CCFR and CDHSW. The solid bands show the prediction from the DimuNeu fit. Note that the plotted points match those presented in Fig.~\ref{fig:Rnunubar}.} \label{Rratio} \end{figure}
\begin{figure}[t!] \centering \includegraphics[width = 0.98\columnwidth]{figs/ironvslead_dimuneu.pdf} \caption{A comparison of predictions from the DimuNeu, DimuNeuIron and ChorusW analyses for the charged-current structure function ratios $R[F_2^{\rm CC}]$ for iron and lead.} \label{fig:ironlead_dimuneu} \end{figure}
Both fits use only 14 free parameters; compared to the free parameters of the nCTEQ15WZSIHdeut fit listed in Sec.~\ref{sec:nCTEQ15}, all parameters $b_i^x$ corresponding to the $A$-dependence were held fixed. The reason for fixing these parameters is that both fits include data taken on only one nucleus. In the case of the DimuNeuIron analysis, only the neutrino data from CDHSW, CCFR and NuTeV taken on iron were included, and in the case of the ChorusW analysis, only the Chorus neutrino data and the LHC data on $W$-boson production, both taken on lead, were used. In Fig.~\ref{fig:ironlead_dimuneu} we compare the predictions for the charged-current structure function ratios for iron (red) and for lead (blue) from these specialized fits (dashed lines) with the predictions from the global DimuNeu neutrino analysis (solid lines). We see that, in general, the predictions from the specialized fits agree well with the ones from the global DimuNeu analysis, with the sole exception of the large-$x$ region, where the precise NuTeV data dominate the global analysis. The difference in the nuclear correction factor for iron and for lead can come from two sources.
The main effect usually comes from the different proton and neutron content of the iron and lead nuclei. The large excess of neutrons in a lead nucleus leads to noticeable differences in predicted observables, even though the underlying effective bound proton and bound neutron PDFs are the same as for other elements. The second possible source of the difference is the dependence of the underlying bound-nucleon PDFs on the mass number $A$. The second effect is typically subleading. We can see the impact of the large neutron excess if we compare the predictions for lead in Fig.~\ref{Rratio}, where in accordance with the experimental data it was assumed that $A=208$ and $Z=82$, with the predictions shown in Fig.~\ref{fig:ironlead_dimuneu}, where $A=208$ and $Z=104$ were used, given that the structure function data from Chorus are isoscalar corrected. We can therefore conclude that the neutrino data from all experiments, irrespective of whether they are taken on iron or lead, show similar behaviour everywhere except at large $x>0.5$.
\section{Neutrino Data Compatibility} \label{sec:nuglobal}
In this section we introduce a combined global nuclear PDF analysis including all data from the reference nCTEQ15WZSIHdeut fit (see Sec.~\ref{sec:framework}) and all neutrino data discussed in Sec.~\ref{sec:data}. Extending an existing PDF analysis by including new data is a standard and frequent occurrence. Usually, one includes new data in a PDF analysis in order to improve on the precision or on the $x$-$Q^2$ coverage of previously used data, or to constrain the PDFs of partons which were previously left unconstrained. In order for the new data to provide all that, it has to be possible to consistently describe them in the underlying theoretical framework based on the factorisation theorem, perturbative QCD and the $x$-parametrization of the PDFs at the input scale. Schematically, if the new data cannot be consistently described in a combined analysis, it can mean one of two things: either the theoretical framework needs to be extended, for example by including small-$x$ resummation effects or target mass corrections, or there was a problem with the data acquisition, e.g. the experimental errors were underestimated. Based on the preliminary analysis we have performed on the neutrino deep inelastic scattering data in the previous section, we expect possible large tensions between the neutrino data and the rest of the nuclear scattering data. Therefore, we will investigate the compatibility of the neutrino DIS data with the bulk of the nuclear scattering data in detail. We will take a closer look at the compatibility of the results of each neutrino DIS experiment separately. We will also look into the possibility that all neutrino DIS data show significant tensions, which, in one interpretation, may indicate incompleteness of the theoretical framework used to describe neutrino scattering in the nPDF analysis.
\subsection{Compatibility Criteria}\label{sec:criteria}
Before we dive into the details of the compatibility discussion, we need to clearly specify the criteria for compatibility which we will be using. In general, we will be discussing the compatibility of two data sets $S$ and $\bar{S}$ in a global fit which includes both of the sets, $Z\equiv S\cup \bar{S}$. In our case, the set $S$ will always be the set of data used in the reference fit nCTEQ15WZSIHdeut and the set $\bar{S}$ will be some subset of the newly considered neutrino data. In what follows, we will be using three different criteria.
\begin{table*}[htb] \caption{Statistical information such as the total $\chi^2$ and the number of data points for all analyses discussed here. Moreover, the $\chi^2$-percentiles with respect to the reference fit nCTEQ15WZSIHdeut (denoted $S$) and to the neutrino-only DimuNeu analysis (denoted $\bar{S}$) are also given.} \label{tab:statperc} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Analysis name & $\chi^2_S/N$ & $\chi^2_{\bar{S}}/N $ & $\Delta\chi^2_S$ & $\Delta\chi^2_{\bar{S}}$ & $p_S/p_{\bar{S}}$ \\[1mm] \hline \hline nCTEQ15WZSIHdeut & 735/940 & - & 0 & - & 0.500 / - \\ DimuNeu & - & 6383/5689 & - & 0 & - / 0.500\\ BaseDimuNeu & 866/940 & 6666/5689 & 131 & 283 & 0.99987/0.990\\ \hline \end{tabular} \end{table*}
\newline\newline
{\bfseries $\Delta\chi^2_S\,$-compatibility}
This first criterion for assessing the compatibility of two data sets $S$ and $\bar{S}$ uses the $\chi^2$ of the global analyses of the data sets $S$ and $Z$. We use the $\chi^2$ to assess whether the nPDFs extracted from the fit to the combined data set $Z$ are within the error bands of the nPDFs from a fit to the baseline data set $S$. It can be shown that, in the Hessian error formalism, this happens if and only if the increase of the $\chi^2$ of $S$ before and after including $\bar{S}$ is less than the tolerance $\Delta\chi^2_S$, hence the name of this criterion. To apply this criterion, we have to define a proper tolerance $\Delta\chi^2_S$ of the global reference fit to the data $S$, which in our case is the nCTEQ15WZSIHdeut analysis discussed in Sec.~\ref{sec:framework}. In the nCTEQ15 analysis, we used $\Delta\chi^2=35$ with $N=740$ data points. However, the nCTEQ15WZSIHdeut analysis contains significantly more data ($N=940$), so an adjustment of $\Delta\chi^2_S$ is required. We will make use of the $\chi^2$-distribution for $N$ degrees of freedom
\begin{equation} P(\chi^{2},N)=\frac{(\chi^{2})^{N/2-1}e^{-\chi^{2}/2}}{2^{N/2}\Gamma(N/2)}\,, \label{eq:chi2dist} \end{equation}
to define $\Delta\chi^2_S$. The $\chi^2$-distribution allows us to define the percentiles, $\xi_{p}$, via
\begin{equation}\label{perc} \int_{0}^{\xi_{p}}P(\chi^{2},N)\,d\chi^{2}=\frac{p}{100}\quad\;{\rm where}\quad p=\{50,90,99\}\,. \end{equation}
$\xi_{50}$ serves as an estimate of the mean of the $\chi^2$-distribution, and we expect the $\chi^2$ of a good fit to be close to $\xi_{50}$. In the case of the nCTEQ15WZSIHdeut analysis, where $\chi_0^2=735 < \xi_{50} = 939$, the fit was better than expected. Due to the large discrepancy between $\chi_0^2$ and $\xi_{50}=939$, we have decided to rescale all percentiles by the factor $\gamma_S = \chi_0^2/\xi_{50}$. The rescaled 90\% percentile then becomes $\chi^2_{90} = \gamma_S\, \xi_{90} = 779$. We can finally define $\Delta\chi^2_S$ as
\begin{equation} \Delta\chi^2_S = \chi^2_{90} - \chi^2_{0} = 45\,. \end{equation}
This is the tolerance we use to define the error PDFs for the nCTEQ15WZSIHdeut analysis. Assessing compatibility using the $\Delta\chi^2_S\,$-criterion has one obvious drawback. If the reference analysis of data $S$ contains a parameter (or a combination of parameters) which cannot be sufficiently constrained, the uncertainty connected to this parameter is often underestimated. This is due to the fact that, in the Hessian approach, the unconstrained parameters are connected to very small eigenvalues of the Hessian matrix, and the diagonalization of a large matrix whose eigenvalues span multiple orders of magnitude is numerically unstable.
If the global analysis of the extended data set $Z\equiv S\cup \bar{S}$ constrains the previously unconstrained combination of parameters, the resulting PDF is often outside of the underestimated error band of the previous analysis. In this case the criterion signals incompatibility even though there is none. Therefore, no matter how useful this criterion is, we cannot rely on it alone.
\newline\newline
{\bfseries $\chi^2_S\,$-compatibility}
The second criterion approaches the problem of compatibility slightly differently. Using this criterion, we assess whether the data sets $S$ and $\bar{S}$ are described acceptably well in a combined fit to $Z\equiv S\cup \bar{S}$, comparing the quality of the description of the data sets in the combined fit to that in fits to the data sets alone. We will consider the data sets $S$ and $\bar{S}$ to be $\chi^2_S\,$-compatible if both of their $\chi^2$ values in a combined fit are within at most the 90\% percentile defined in Eq.~(\ref{perc}) of their expected values. To account for cases where a data set cannot be optimally described even in a fit to the data set alone, we define the rescaled percentile $\chi^2_{90} = \gamma_S\, \xi_{90}$ exactly as we did for the $\Delta\chi^2_S\,$-compatibility criterion above. Similar to the first criterion, the $\chi^2_S\,$-compatibility criterion also has its issues. In order to properly use this criterion, it has to be possible to fit the data set alone. This limits its usefulness to data sets which are sufficiently large to be fit on their own.
\newline\newline
{\bfseries $S_E$-compatibility}
The last criterion used in our analysis is yet another alternative for investigating the compatibility of data sets in a combined global analysis. Here we consider only the global analysis of the combined data sets $Z\equiv S\cup \bar{S}$ and investigate the quality of the description of each experiment $E$ in this analysis. The comparison of the quality between two different experiments is made difficult by the fact that the $\chi^2$-distribution $P(\chi^2,N)$ (see Eq.~(\ref{eq:chi2dist})) depends heavily on the number of data points $N$ of the experiment. Therefore, instead of the $\chi^2$-distribution $P(\chi^2,N)$, we use the variable
\begin{equation}\label{SEdef} S(\chi^2,N) = \sqrt{2\chi^2} - \sqrt{2N-1}\,, \end{equation}
which is no longer strongly sensitive to the number of data points. Moreover, the variable $S$ is distributed according to the normal distribution with zero mean and unit variance \cite{Kovarik:2019xvh}. We can evaluate $S_E=S(\chi^2_E,N_E)$ for each experiment, using the number of data points $N=N_E$ and $\chi^2=\chi^2_E$, and check whether the variable for all experiments is distributed according to the normal distribution with the expected mean and variance. This happens if the $\chi^2$ values of all experiments involved in the global analysis are distributed according to the corresponding $\chi^2$-distributions. On top of checking whether $S_E$ for the totality of experiments is distributed as expected, we can also identify experiments which are not compatible with this distribution and quantify to what degree, using the standard confidence levels of the normal distribution.
\subsection{Global analysis with neutrino data}
\begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/Base-BaseDimuNeu_FE} \caption{The full iron PDFs at $Q^2=4\ {\rm GeV}^2$.
All uncertainty bands are computed using the Hessian method with $\Delta\chi^2=45$.} \label{fig.pdf} \end{figure*}
\begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/RBase-BaseDimuNeu_FE} \caption{Ratio of the full iron PDFs to the corresponding PDFs from the nCTEQ15WZSIHdeut fit at $Q^2=4\ {\rm GeV}^2$. All uncertainty bands are obtained using the Hessian method with $\Delta\chi^2=45$.} \label{fig.pdfratio} \end{figure*}
\begin{figure*}[htb] \centering \includegraphics[width=0.80\textwidth]{figs/BaseDimuNeu_scan.pdf} \caption{Scans of the $\chi^2$ function along the PDF parameter directions, varying one free parameter at a time while the other parameters were kept fixed at the global minimum of the BaseDimuNeu analysis. The breakdown into $\chi^2$ for classes of experimental data is also shown.} \label{fig.chi2.scan} \end{figure*}
We start our analysis of the compatibility of the neutrino DIS data with the rest of the nuclear scattering data used so far in the nCTEQ analyses by considering a global analysis which adds all available neutrino data to the rest of the nCTEQ data mentioned in Sec.~\ref{sec:nCTEQ15}. The fit BaseDimuNeu contains all the data from the reference nCTEQ15WZSIHdeut analysis and all inclusive (anti-)neutrino DIS data from the CDHSW, Chorus, CCFR and NuTeV experiments, as well as the semi-inclusive di-muon data from CCFR and NuTeV. We have to emphasize that there is a large disparity between the number of data points present in the original nCTEQ15WZSIHdeut analysis ($N=940$) and the number of new neutrino DIS data points added ($N=5689$). Therefore, the neutrino data will dominate the global analysis, and we expect that, if there is any tension, it will show up in a changed description of the original data of the nCTEQ15WZSIHdeut analysis. The global analysis BaseDimuNeu uses the same framework discussed in Sec.~\ref{sec:framework} with the same 27 free parameters to determine the nuclear PDFs by fitting 6629 data points. We obtain $\chi^2=7532$, or alternatively $\chi^2$/pt = 1.14. Given that all neutrino data alone could be described with $\chi^2$/pt = 1.12, and that the added nCTEQ15WZSIHdeut data on their own were described with $\chi^2$/pt = 0.78, the result of the global analysis can be considered a first signal that there may be some tension among the data within the analysis.
\begin{figure*}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/RF2_NC_8_CTEQ6M_basedimuneu.pdf}\qquad \quad \includegraphics[width=0.45\textwidth]{figs/RF2_CC_8_CTEQ6M_basedimuneu.pdf} \caption{Neutral current nuclear ratio $F_2^{\rm Fe}/F_2^{\rm D}$ (left) and charged current nuclear ratio $R[F_2^{\rm CC}]$ as defined in Eq.~(\ref{rf2cc}) (right) using the fitted nPDFs. Note that we have applied a nuclear correction for the neutral current deuteron structure function $F_2^{\rm D}$ but not for the charged current one.} \label{fig.F2.prediction.comp} \end{figure*}
\begin{figure*}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/F3nutev_basedimuneu.pdf} \caption{Charged current nuclear ratio $R[F_3^{\rm CC}]$ defined analogously to $R[F_2^{\rm CC}]$ using the fitted nPDFs.} \label{fig.F3.prediction.comp} \end{figure*}
Specifically, when we compare the description of the subset of the data common to both the nCTEQ15WZSIHdeut and BaseDimuNeu analyses, we notice a distinct rise from $\chi^2$ = 735 to $\chi^2$ = 866.
This is an increase of 131, almost three times the $\Delta\chi^2=45$ which was used to generate the error PDFs of the nCTEQ15WZSIHdeut result. According to the $\Delta\chi^2_S$ compatibility criterion introduced above, this signals that the newly added data are incompatible with the original data of the nCTEQ15WZSIHdeut analysis. All relevant $\chi^2$ values are summarised in Tab.~\ref{tab:statperc}. As we have stated previously, violating the $\Delta\chi^2_S$ compatibility criterion is also related to large differences in the extracted PDFs. In Figs.~\ref{fig.pdf} and \ref{fig.pdfratio} we show the nuclear PDFs for iron resulting from the BaseDimuNeu analysis and compare them to the nPDFs of the nCTEQ15WZSIHdeut fit, including the uncertainties. The comparison of both analyses is best seen in Fig.~\ref{fig.pdfratio}, where the ratio of the BaseDimuNeu and nCTEQ15WZSIHdeut nPDFs is shown. We can clearly see that the up- and down-quark valence distributions as well as the strange-quark nuclear PDF from the global analysis including all neutrino data lie outside or at the edge of the error band of the reference nCTEQ15WZSIHdeut analysis. To exclude the possibility that the newly added neutrino data just constrain previously unconstrained PDF parameters, we also investigate the $\chi^2$ profiles obtained by varying the free parameters (see Fig.~\ref{fig.chi2.scan}). In Fig.~\ref{fig.chi2.scan} we see that for many quark parameters the result of the BaseDimuNeu analysis is a compromise between the neutral current DIS data already present in the nCTEQ15WZSIHdeut analysis (labeled DIS in Fig.~\ref{fig.chi2.scan}) and the newly added inclusive neutrino DIS data (labeled DISNEU). The final minima of the $\chi^2$ function frequently lie between the minima preferred by the two DIS subsets. The DIS and DISNEU subsets show clear sensitivity to the quark valence parameters $a_1^{u_v}$, $a_2^{u_v}$, $a_4^{u_v}$, $a_5^{u_v}$, $a_1^{d_v}$, $a_4^{d_v}$, $a_5^{d_v}$, based on their respective $\chi^2$ growth profiles, but with widely separated preferred values for those parameters. This is a clear sign of tension between these subsets. On the other hand, the situation is slightly different in the case of the strange quark. There, the minima preferred by the same subsets are also distinct, but we can also observe that the neutrino DIS data are much more sensitive to the strange quark parameter variations than the neutral current DIS data sets. This leads us to conclude that, in the case of the strange quark, the neutrino DIS data provide the first strong constraint on the strange PDF parameters, and hence the discrepancy is not a sign of tension here. However, there is a small caveat. The neutrino differential cross-section data prefer a different strange quark PDF compared to the di-muon neutrino data. Moreover, the di-muon data and the neutral current DIS data prefer a similar strange quark PDF. This tension can be seen later in Tab.~\ref{tab:chi2nudata}, where the listed $\chi^2$/pt values of the di-muon data signify that they are described much worse than in the neutrino-only DimuNeu analysis. The difference between the PDFs extracted in the BaseDimuNeu and nCTEQ15WZSIHdeut analyses translates into different predictions for observables such as the ratios of the structure functions $F_2$ and $F_3$ shown in Figs.~\ref{fig.F2.prediction.comp} and \ref{fig.F3.prediction.comp}, respectively.
Here, a similar interpretation is possible: we can clearly see that the results of the BaseDimuNeu analysis are a compromise between the nCTEQ15WZSIHdeut results and the results of the DimuNeu analysis, which included only the neutrino data. The compromise predictions of the BaseDimuNeu analysis for the neutral-current nuclear ratio are compatible at the 1-$\sigma$ level with the nCTEQ15WZSIHdeut prediction, given that the central value lies within the error band of the nCTEQ15WZSIHdeut analysis. In the case of the other observables, the tension is larger. In the case of the charged-current nuclear ratio, the results of the BaseDimuNeu analysis are incompatible with the nCTEQ15WZSIHdeut result at $x\sim 0.025$, as the difference between the central predictions of the two analyses is larger than the error estimate of either analysis. The same is true if we compare the predictions from the BaseDimuNeu and DimuNeu analyses. The case of the ratio of the structure function $F_3$ is a little different. First of all, there was almost no direct experimental information on the structure function $F_3^{\rm NC}$ from the neutral-current DIS data. Furthermore, the data on the charged-current structure function $F_3^{\rm CC}$ have larger errors compared to the structure function $F_2$. Even with larger errors, the $F_3$ data from the NuTeV experiment (see Fig.~\ref{fig.F3.prediction.comp}) are not described particularly well by any of the analyses. Moreover, similar to the case of the structure function $F_2$, the predictions of the nCTEQ15WZSIHdeut and DimuNeu analyses are incompatible with each other. This time the largest tension is found in the interval $0.1 < x < 0.4$. The central predictions of the global BaseDimuNeu analysis are in turn outside of the error band of the nCTEQ15WZSIHdeut analysis for $0.15 < x < 0.3$. We conclude that the tension which can be observed at the level of extracted PDFs in Figs.~\ref{fig.pdf} and \ref{fig.pdfratio} translates also to the ratios of the charged-current structure functions. To reach a conclusive picture of the compatibility of the neutrino DIS data with the remaining scattering data, we will use the other two criteria introduced in the previous section. The $\chi^2$ values of the neutrino subset and of the rest of the scattering data in the combined analysis are $\chi^2 = 6666$ and $\chi^2 = 866$, respectively (see Tab.~\ref{tab:statperc}). Using the rescaled percentiles as defined previously, we see that the description of both subsets of data is outside of the 90\% percentile (and even outside of the 99\% percentile in the case of the nCTEQ15WZSIHdeut data), making the data sets incompatible according to the $\chi^2_S$-compatibility criterion.
\begin{figure*}[htb] \centering \includegraphics[width=0.32\textwidth]{figs/SEncteq15WZ-n.pdf}\quad \includegraphics[width=0.32\textwidth]{figs/SEbase-n.pdf}\quad \includegraphics[width=0.32\textwidth]{figs/SEbase-ncteq15WZ-n.pdf} \caption{Distribution of the variable $S_E$ for all experiments in the nCTEQ15WZSIHdeut analysis (left) and for all experiments in the BaseDimuNeu analysis (middle). The right panel shows the distribution of the variable $S_E$ from the BaseDimuNeu analysis for the experiments in nCTEQ15WZSIHdeut. All panels show a Gaussian fitted to the actual $S_E$ distribution (blue) compared to the ideal Gaussian $S_E$ distribution with $\mu=0$ and $\sigma=1$ (red).
Note that some of the $S_E$ values lie outside the plot range.} \label{fig.SE.comp} \end{figure*}
Lastly, we will look into the details of how well all experiments are described in the combined global analysis with all neutrino data. In contrast to using the rescaled percentile to account for an imperfect description of the data, we use the distribution of the $S(\chi^2,N)$ variable for all the experiments in the combined analysis. Considering the whole distribution allows for the possibility that some experiments in the global analysis are not described well, leading to $S_E > 0$, and that some are overfitted ($S_E<0$). Before we investigate the $S_E$ distribution of the combined analysis, we review the same distribution for the reference nCTEQ15WZSIHdeut analysis, which is shown in the left panel of Fig.~\ref{fig.SE.comp}. After analyzing the distribution and determining the mean ($\mu = -0.74$) and the standard deviation ($\sigma = 1.12$), we can see that the nPDF framework with 27 free parameters describes the data too well on average, but the spread is still compatible with the ideal distribution of the $S(\chi^2,N)$ variable. The distribution of $S_E$ in the case of the BaseDimuNeu analysis is shown in the middle panel of Fig.~\ref{fig.SE.comp}, and from the characteristics of the distribution it is clear that on average the experiments are still described well ($\mu = 0.08$). However, this time the standard deviation $\sigma = 2.54$ signifies that there are more outlier experiments. Our interest is twofold. First, we would like to compare the description of the experiments contained in nCTEQ15WZSIHdeut with that of the subset of the same experiments in the combined BaseDimuNeu analysis. We show the distribution of the nCTEQ15WZSIHdeut experiments in the BaseDimuNeu analysis in the right panel of Fig.~\ref{fig.SE.comp}. Comparing how these two analyses describe the same set of experiments clearly points to the BaseDimuNeu analysis being a compromise, given that the description of this subset of experimental data is worse than in the reference analysis ($\mu = -0.26$ and $\sigma = 1.44$). As expected, the worse description can be traced back to the neutral current DIS experimental data, which are very sensitive to the up- and down-quark PDFs, which are among the PDFs most shifted in the combined analysis. Second, the reason why the previous two compatibility criteria signal a problem is hidden in the description of the neutrino data. The large standard deviation is mostly caused by the NuTeV neutrino and anti-neutrino cross-section data having extremely large $|S_E|$ values, $S_E=13.05$ for the neutrino data (not shown in the plot) and $S_E=5.5$ for the anti-neutrino data. The other contribution to the large standard deviation comes from the di-muon data from both the CCFR and NuTeV experiments and from the overfitted CDHSW neutrino cross-section data.
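As a quick numerical cross-check of Eq.~(\ref{SEdef}), the following sketch reproduces the $S_E$ values quoted above from the rounded $\chi^2$/pt values listed in Tab.~\ref{tab:chi2nudata}; the small residual differences stem from this rounding:
\begin{verbatim}
import math

def S_E(chi2, npts):
    # Eq. (SEdef): S = sqrt(2*chi2) - sqrt(2*N - 1); approximately
    # standard normal if chi2 follows a chi^2 distribution with N dof.
    return math.sqrt(2.0 * chi2) - math.sqrt(2.0 * npts - 1.0)

print(S_E(1.61 * 1170, 1170))  # NuTeV nu in BaseDimuNeu: ~13.0
print(S_E(0.59 * 465, 465))    # CDHSW nu in BaseDimuNeu: ~ -7.1
\end{verbatim}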
\begin{table}[tb] \caption{Statistical information on the description of the neutrino data sets used in different analyses.}\label{tab:chi2nudata} \begin{center} \begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{Data set} & \multirow{2}{*}{\#pts} & $\chi^2$/pt ($S_E$) & $\chi^2$/pt ($S_E$)\\ & & {\rm DimuNeu} & BaseDimuNeu \\ \hline\hline CDHSW $\nu$ & 465 & 0.68 (-5.29) & 0.59 (-7.01) \\ CDHSW $\bar{\nu}$ & 464 & 0.73 (-4.47) & 0.69 (-5.22)\\ \hline CCFR $\nu$ & \ 824\ & 0.99 (-0.09) & 1.03 (0.56)\\ CCFR $\bar{\nu}$ & 826 & 1.00 (0.07) & 1.02 (0.45)\\ \hline NuTeV $\nu$ & 1170 & 1.51 (11.12) & 1.61 (13.05)\\ NuTeV $\bar{\nu}$ & 966 & 1.25 (5.16) & 1.27 (5.50)\\ \hline Chorus $\nu$ & 412 & 1.21 (2.85) & 1.25 (3.40)\\ Chorus $\bar{\nu}$ & 412 & 1.09 (1.26) & 1.25 (3.35)\\ \hline CCFR dimuon $\nu$ & 40 & 1.70 (2.79) & 2.52 (5.32)\\ CCFR dimuon $\bar{\nu}$ & 38 & 0.79 (-0.89) & 0.64 (-1.68)\\ \hline NuTeV dimuon $\nu$ & 38 & 0.98 (-0.06) & 2.11 (4.01)\\ NuTeV dimuon $\bar{\nu}$ & 34 & 0.73 (-1.16) & 1.16 (0.70)\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tb] \caption{Statistical information on the description of the selected neutral current DIS data sets used in the reference nCTEQ15WZSIHdeut and BaseDimuNeu analyses.}\label{tab:chi2data} \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{2}{*}{Experiment} & \multirow{2}{*}{Target} & \multirow{2}{*}{ID} & \multirow{2}{*}{\#pts} & $\chi^2$/pt ($S_E$) & $\chi^2$/pt ($S_E$)\\ & & & & {\rm Reference} & BaseDimuNeu \\ \hline\hline NMC-95 & C/D & 5113 & 12 & 0.88 (-0.20) & 1.70 (1.59) \\ NMC-95,re & C/D & 5114 & 12 & 1.18 (0.53) & 2.16 (2.40)\\ \hline NMC-95 & Ca/D & 5121 & \ 12\ & 1.15 (0.46) & 2.98 (3.66)\\ \hline BCDMS & Fe/D & 5101 & 10 & 0.63 (-0.81) & 2.00 (1.97)\\ BCDMS & Fe/D & 5102 & 6 & 0.48 (-0.93) & 1.62 (1.09)\\ \hline \end{tabular} \end{center} \end{table} Comparing the statistical results for the nCTEQ15WZSIHdeut and DimuNeu analyses with the combined analysis BaseDimuNeu (see Fig.~\ref{fig.SE.comp} and Tab.~\ref{tab:chi2nudata}), we can identify the origin of the inconsistencies signaled by the first two compatibility criteria. From the $\chi^2$/pt and $S_E$ values for all neutrino experiments shown in Tab.~\ref{tab:chi2nudata}, we see that the description of the NuTeV cross-section data, the Chorus cross-section data and, above all, the di-muon data in the compromise fit of the BaseDimuNeu analysis is much worse than in the neutrino-only DimuNeu analysis. Moreover, examining more closely the shifts in the description of the experiments of the reference nCTEQ15WZSIHdeut analysis seen in Fig.~\ref{fig.SE.comp}, we discover large shifts in $\chi^2$/pt, or alternatively in the $S_E$ variable, especially for precise DIS experiments (for details see Tab.~\ref{tab:chi2data}). Taken together, these facts lead us to conclude that the inconsistency signaled by the other criteria is genuine and that there is indeed a large tension between the neutrino data and the rest of the scattering data. The crucial question which we will address in the final part of this paper is whether there is a way to include the neutrino DIS data in a combined analysis while at the same time avoiding large tensions and incompatibilities. \section{Consistent global nPDF analysis with neutrino data} \label{sec:nufinal} In the previous section, we have shown that incorporating neutrino data into the nCTEQ framework can produce significant tensions among key data sets.
Moreover, we have observed in Sec.~\ref{sec:data} that these include tensions among different neutrino scattering measurements, most notably between the ones taken on iron by the CDHSW, CCFR and NuTeV collaborations and those taken on lead by the Chorus collaboration.\footnote{The inconsistency between the CCFR and NuTeV data at large Bjorken $x$ was resolved by accepting the reasoning in Ref.~\cite{Tzanov:2005kr} and not including any CCFR data in the region of $x>0.4$.} To complicate matters even more, the neutrino inclusive DIS data and the neutrino di-muon data each prefer a different strange quark PDF, leading to substantial tensions as well. The goal of this section is to explore ways to include neutrino data in a global analysis such that these large tensions can be avoided or mitigated. Before we consider a global analysis, we introduce a series of fits in which, on top of all data from the reference nCTEQ15WZSIHdeut analysis, we include neutrino and anti-neutrino data from a single experiment. In this way we can explore the tension of the neutrino data from every single neutrino experiment with the reference analysis, without considering any tensions among the neutrino data themselves. We show the statistical results of the four analyses (BaseChorus, BaseCDHSW, BaseCCFR and BaseNuTeV) in Tab.~\ref{tab:statperc2}. The results show that, apart from the data from the Chorus experiment, adding any of the other neutrino data sets causes tension with the neutral current scattering data. This should come as no surprise in light of the nuclear correction factors extracted from the neutrino and anti-neutrino data shown in Fig.~\ref{fig:Rnunubar}, where only the nuclear correction factor from the Chorus neutrino and anti-neutrino data has a shape similar to the one preferred by the neutral current scattering data. Given the results shown in Fig.~\ref{fig:Rnunubar}, we clearly expect the tensions for the other experiments to come from neutrino data in the low-$x$ and/or high-$x$ kinematic regions. We will use this information in the following. Aiming for a global analysis without large tensions among data sets, there are several possible approaches one can take: \begin{enumerate} \item If the tensions can be attributed to a specific kinematic region, they can be removed by imposing a kinematic cut on the neutrino data. \item Large tensions can often be caused by very precise experimental data, and a compromise can be reached if it is believed that the experimental errors are underestimated. In such a case, the errors might be artificially enlarged. \item The last option is to identify experiments which are still consistent with the bulk of the original data and include only those in our analysis. \end{enumerate} We will investigate all of these approaches in the following. \begin{table*}[htb] \caption{Statistical information such as the total $\chi^2$ and the number of data points is presented for all analyses discussed here.
Moreover, the $\chi^2$-percentiles with respect to the default data sets of the reference fit nCTEQ15WZSIHdeut (denoted $S$) and to the DimuChorus analysis (denoted $\bar{S}$) are also given if applicable.} \label{tab:statperc2} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Analysis name & $\chi^2_S/N$ & $\chi^2_S/pt$ & $\chi^2_{\bar{S}}/N $ & $\chi^2_{\bar{S}}/pt $ & $\Delta\chi^2_S$ & $\Delta\chi^2_{\bar{S}}$ & $p_S/p_{\bar{S}}$ \\[1mm] \hline \hline nCTEQ15WZSIHdeut & 735/940 & 0.78 & - & - & 0 & - & 0.500 / - \\ DimuChorus & - & - & 1059/974 & 1.09 & - & 0 & - / 0.500 \\ \hline BaseChorus & 737/940 & 0.78 & 969/824 & 1.18 & 2 & - & 0.530 / -\\ BaseCDHSW & 778/940 & 0.83 & 584/929 & 0.63 & 43 & - & 0.895 / -\\ BaseCCFR & 815/940 & 0.87 & 2119/2207 & 0.96 & 80 & - & 0.989 / -\\ BaseNuTeV & 807/940 & 0.86 & 3049/2136 & 1.43 & 72 & - & 0.981 / -\\ BaseNuTeVU & 787/940 & 0.84 & 1984/2136 & 0.93 & 52 & - & 0.933 / -\\ \hline BaseDimuNeuU & 861/940 & 0.92 & 5569/5689 & 0.98 & 126 & - & 0.99978 / - \\ BaseDimuNeuX & 781/940 & 0.83 & 5032/4644 & 1.08 & 46 & - & 0.908 / - \\ BaseDimuChorus & 740/940 & 0.79 & 1117/974 & 1.15 & 5 & 58 & 0.559 / 0.885 \\ \hline \end{tabular} \end{table*} \begin{figure*}[htb] \centering \includegraphics[width=0.32\textwidth]{figs/SEx-n.pdf}\quad \includegraphics[width=0.32\textwidth]{figs/SEunc-n.pdf}\quad \includegraphics[width=0.32\textwidth]{figs/SEchor-n.pdf} \caption{Distribution of the variable $S_E$ for all experiments in the BaseDimuNeuX analysis (left) and for all experiments in the BaseDimuNeuU analysis (middle). The right panel shows the distribution of the variable $S_E$ from the BaseDimuChorus analysis. All panels show the Gaussian distribution fitted to the actual $S_E$ distribution (blue) compared to the ideal Gaussian $S_E$ distribution with $\mu=0$ and $\sigma=1$ (red). Note that in the case of the BaseDimuNeuX analysis we do not show a bin with $S_E=9.72$, which corresponds to the NuTeV neutrino data.} \label{fig.SE.comp2} \end{figure*} \subsection{Neutrino DIS data with $x>0.1$} \label{sec:xcut} \begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/Base-BaseDimuNeuX_FE.pdf} \caption{The full iron PDFs at $Q^2=4\ {\rm GeV}^2$. All uncertainty bands are computed using the Hessian method with $\Delta \chi^2=45$.} \label{fig.pdfbasedimuneux} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/RBase-BaseDimuNeuX_FE.pdf} \caption{The fitted iron PDF ratio to nCTEQ15WZSIHdeut. All uncertainty bands are obtained using the Hessian method with $\Delta \chi^2=45$.} \label{fig.rpdfbasedimuneux} \end{figure*} The large tensions and incompatibilities observed in the previous section were not completely surprising, considering the ratio we have extracted from the cross-section data in Sec.~\ref{sec:weightedaverage}, which, as shown in Fig.~\ref{fig:Rnunubar}, displays a markedly different shape of the nuclear correction factor, especially in the small-$x$ and very large-$x$ regions. Given that our conservative kinematic cuts on $Q^2$ and $W^2$ already effectively restrict the large-$x$ region, the only way to resolve the tension using a kinematic cut is to exclude the low-$x$ neutrino data. Using arbitrary cuts to remove the data which cause the largest tensions in each experiment is not in line with the philosophy of a global analysis, because it introduces a bias which such an analysis tries to avoid.
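As an aside, the rescaled $\chi^2$-percentiles $p_S$ quoted in Tab.~\ref{tab:statperc2} can be reproduced to good accuracy with a few lines of code. The sketch below is not the code used in our fits; it assumes that the percentile is obtained by rescaling the $\chi^2_N$ distribution so that the $\chi^2$ of the reference fit sits at its median.
\begin{verbatim}
# Sketch of the rescaled chi^2-percentile criterion (assumed definition).
from scipy.stats import chi2

def rescaled_percentile(chi2_ref, chi2_new, npts):
    xi50 = chi2.ppf(0.5, npts)         # median of the chi^2_N distribution
    return chi2.cdf(chi2_new * xi50 / chi2_ref, npts)

# N = 940 points, reference chi^2 = 735 (nCTEQ15WZSIHdeut);
# BaseCDHSW has chi^2 = 735 + 43 = 778.
print(rescaled_percentile(735.0, 778.0, 940))  # ~0.90, cf. p_S = 0.895
\end{verbatim}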
One possible motivation for using a cut to remove data could be signs that the theoretical description of the data in a specific region is inadequate. In this section, we will assume that the large tensions in the low-$x$ region may be due to, e.g., a different mechanism for nuclear shadowing in charged current DIS \cite{Kopeliovich:2012kw} which is not properly included in our theoretical framework. A different justification for a kinematic cut could be an internal tension among the neutrino data themselves in this region. We can see in Fig.~\ref{fig:Rnunubar} that there is indeed such a tension in the low-$x$ region, especially between the NuTeV and CCFR data on one side and the Chorus data on the other. Citing either of these reasons, we employ an arbitrary constraint, $x>0.1$, which the charged current DIS data have to fulfill. This applies to all inclusive DIS and di-muon data. To show the impact of such a cut, we have performed an analysis similar to the global BaseDimuNeu analysis, requiring that all neutrino data satisfy the constraint $x>0.1$. This analysis, called BaseDimuNeuX in the following, uses the same number of free parameters and, similarly to the previous analysis, also fits the normalization of all neutrino experiments. The kinematic cut removes 1045 data points from the low-$x$ region of the neutrino scattering data. The resulting fit has $\chi^2$/pt = 1.04. Further details and the breakdown of the $\chi^2$ for the usual data subsets are listed in Tab.~\ref{tab:statperc2}. Analyzing the statistical properties, we see that with $\Delta\chi^2_S = 46$, which corresponds approximately to the 91st percentile, the analysis BaseDimuNeuX is barely consistent with the original data of the nCTEQ15WZSIHdeut analysis. A closer look at Fig.~\ref{fig.SE.comp2} reveals that most experiments are fitted well, with only a few outliers. The tensions are experienced by the NuTeV neutrino cross-section data ($S_E = 9.72$, the largest value, not shown) and by the NuTeV anti-neutrino data ($S_E = 3.37$). Without these data the $S_E$ distribution would be very similar to the one of the reference analysis shown in Fig.~\ref{fig.SE.comp}. In Figs.~\ref{fig.pdfbasedimuneux} and \ref{fig.rpdfbasedimuneux} we compare the extracted nuclear PDFs to those of nCTEQ15WZSIHdeut. If we first focus on the central values of the nuclear PDFs extracted in the BaseDimuNeuX analysis, we observe that, except for the strange quark PDF, the central values are within the error bands of the reference analysis. This nicely highlights the usefulness of the $\Delta\chi^2$-criterion. Comparing the results with those of the BaseDimuNeu analysis shown in Fig.~\ref{fig.pdfratio}, we see that the shapes of the central values are very similar. This indicates that the tensions from the original global analysis are not completely removed but only reduced in size. The unexpected part of the result, which can be seen in Fig.~\ref{fig.pdfbasedimuneux}, concerns the uncertainty bands of the strange quark and gluon PDFs. The large uncertainty of the strange quark is a result of two competing preferences for the strange quark PDF from the di-muon and the neutrino inclusive cross-section data. As a result, the uncertainty is enlarged to account for this tension. The reduced gluon PDF uncertainty is due to adding a large number of precise DIS data, which constrain the gluon via the NLO sensitivity of the DIS process to the gluon PDF.
The predictions for the nuclear correction factors from the neutral and charged current DIS are shown in Fig.~\ref{fig.F2.prediction.comp-2} and compared with those of the reference analysis. There we can see that, as expected, the markedly different behavior of the theoretical prediction in the low-$x$ region, which was present in Fig.~\ref{fig.F2.prediction.comp} for the BaseDimuNeu analysis, is largely gone, and the prediction for the charged current nuclear correction factor has a larger uncertainty band. Moreover, excluding the neutrino data with $x<0.1$ from the analysis significantly affects the prediction of the nuclear correction factor in other regions of $x$ as well. In Fig.~\ref{fig.F2.prediction.comp-2} we see that the structure function data from NuTeV and CDHSW are not described correctly even in the intermediate-$x$ region, and the large-$x$ behavior, driven by the NuTeV data, remains very different from the predictions of the reference analysis. Overall, we see that employing the cut $x>0.1$ on all neutrino data reduces the tensions just enough for this fit to be considered consistent. However, some problems still remain. The tension in the previously well determined valence quark PDFs is still present, and the NuTeV cross-section data are still badly described. Moreover, all this has been achieved only after removing the small-$x$ and large-$x$ data where the tensions are the largest. It needs to be stressed once more that this analysis can be considered the final result only if a plausible explanation for the additional kinematic cut is put forward. \subsection{NuTeV with uncorrelated systematic errors} \label{sec:uncorr} The second possible approach we consider to lessen the tensions is to enlarge the errors of the experimental data causing the tension. Enlarging the errors of all data points of a data set is equivalent to introducing a weight for this data set in the calculation of the $\chi^2$-function. We investigated this option in our previous analysis \cite{Kovarik:2010uv} and found no acceptable way to include the neutrino DIS data in a global analysis. In a similar spirit, previous analyses \cite{Kovarik:2010uv, Paukkunen:2010hb} enlarged the errors of the NuTeV cross-section data by not considering the correlated systematic errors. Let us therefore explore the effect of neglecting these correlations on the combined analysis. First, we have performed a fit with the data from the nCTEQ15WZSIHdeut analysis and only the data from the NuTeV experiment, using uncorrelated systematic errors. The analysis BaseNuTeVU clearly shows that with uncorrelated systematic errors the framework we use to fit the experimental data can, for the first time, describe the NuTeV data well, with $\chi^2$/pt=0.93. Moreover, compared to the BaseNuTeV analysis, which used correlated systematic errors, we see that the tension with the neutral current data is reduced but still present (for details see Tab.~\ref{tab:statperc2}). This shows that the inconsistencies cannot be attributed solely to the use of correlated systematic errors. For completeness, we have also performed a global analysis much like BaseDimuNeu but without correlations in the case of the NuTeV data (called BaseDimuNeuU). Here a similar picture emerges. The neutrino data are described much better ($\chi^2$/pt=0.98), but the tension with the neutral current data is unchanged.
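To make these options concrete, we sketch them in a generic notation; the exact $\chi^2$ definition used in our fits is specified in the references cited. With uncorrelated errors $\alpha_i$, correlated systematic errors $\beta_{ki}$ and nuisance parameters $r_k$, a standard correlated $\chi^2$ for an experiment $E$ and its uncorrelated counterpart read
\[ \chi^2_E=\sum_i\frac{\left(D_i-T_i-\sum_k r_k\beta_{ki}\right)^2}{\alpha_i^2}+\sum_k r_k^2 \qquad\longrightarrow\qquad \chi^2_E=\sum_i\frac{(D_i-T_i)^2}{\alpha_i^2+\sum_k\beta_{ki}^2}\,, \]
so that neglecting the correlations amounts to adding the systematic shifts in quadrature; analogously, weighting a data set by $w_E$ in the total $\chi^2$ is equivalent to enlarging all of its errors by a factor $1/\sqrt{w_E}$.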
Some details of the tensions are again visible in the $S_E$-distribution shown in Fig.~\ref{fig.SE.comp2}, where the standard deviation of the distribution is much larger than unity ($\sigma$ = 1.89). Large $S_E$ contributions can be traced back to the neutrino di-muon data from both CCFR ($S_E$=4.77) and NuTeV ($S_E$=3.19) which, as we have seen before, prefer a different strange quark PDF than the inclusive neutrino data. The tensions with the neutral current DIS data have not improved either, but rather worsened compared to the BaseDimuNeu analysis (see Tab.~\ref{tab:chi2data}). The largest $S_E$ contributions still come from the Ca/D and C/D data from the NMC collaboration ($S_E$=3.91 and $S_E$=2.45, respectively). Therefore, we conclude that the use of correlated systematic errors for the NuTeV data has no decisive effect on the compatibility of the neutrino data with the rest of the scattering data: neglecting the correlations does not reduce the tensions, even though the neutrino data themselves then seem to be described well overall. \begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/Base-BaseDimuChorus_lead.pdf} \caption{The full lead PDFs at $Q^2=4\ {\rm GeV}^2$. All uncertainty bands are computed using the Hessian method with $\Delta \chi^2=45$.} \label{fig.pdfchorus} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figs/RBase-BaseDimuChorus_lead.pdf} \caption{The fitted lead PDF ratio to nCTEQ15WZSIHdeut. All uncertainty bands are obtained using the Hessian method with $\Delta \chi^2=45$.} \label{fig.pdfratiochorus} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.80\textwidth]{figs/BaseDimuChorus_scan.pdf} \caption{Scans of the $\chi^2$ function along the PDF parameter directions, varying one free parameter at a time while the other parameters were kept fixed at the global minimum of the BaseDimuChorus analysis. The breakdown into $\chi^2$ for classes of experimental data is also shown. We note that in this case ``DISNEU'' refers to the Chorus data, which are the only inclusive neutrino data used in this fit.} \label{fig.chi2.scan.chorus} \end{figure*} \subsection{Global analysis with Chorus and di-muon data} \label{sec:ncteqnu} \begin{figure*}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/RF2_NC_8_CTEQ6M_basedimuchorus.pdf}\qquad \quad \includegraphics[width=0.45\textwidth]{figs/RF2_CC_8_CTEQ6M_basedimuchorus.pdf} \caption{Neutral current nuclear ratio $F_2^{\rm Fe}/F_2^{\rm D}$ (left) and charged current nuclear ratio $R[F_2^{\rm CC}]$ as defined in Eq.~(\ref{rf2cc}) (right) using the fitted nPDFs.
Note that we have applied nuclear corrections for the neutral current deuterium structure function $F_2^D$, but not for the charged current one.} \label{fig.F2.prediction.comp-2} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.5\textwidth]{figs/F3nutev_basedimuchorus.pdf}\qquad \caption{Charged current nuclear ratio $R[F_3^{\rm CC}]$ defined analogously to $R[F_2^{\rm CC}]$ using the fitted nPDFs.} \label{fig.F3.prediction.comp-2} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{figs/nomad_dimuon.pdf}\qquad \quad \caption{Comparison between the data from the NOMAD experiment \cite{NOMAD:2013hbk} and our theory predictions, using our fitted PDFs, for the ratio of the di-muon production and the total charged current DIS cross-sections.} \label{fig.dimuon.prediction.comp} \end{figure} As we have shown in Sec.~\ref{sec:nuglobal}, the global analysis of all available data, in which all neutrino data are also included, leads to large tensions. Furthermore, we have shown that these cannot be sufficiently removed by introducing a kinematic cut or by neglecting the correlations of the systematic errors of the neutrino experiment where the tensions are the largest. One option which we have not yet explored is to identify a subset of the neutrino data which shows little or no tension. Based on what we have observed in the previous analyses, we add all di-muon data and both the Chorus neutrino and anti-neutrino scattering data to our global analysis and disregard all other (anti-)neutrino data. We refer to this global analysis as BaseDimuChorus. The statistical results of this analysis are also given in Tab.~\ref{tab:statperc2}; the total $\chi^2$/pt = 0.97. As can be seen from the details in Tab.~\ref{tab:statperc2}, in this combined analysis all data from the reference nCTEQ15WZSIHdeut analysis as well as all included neutrino data are described well. We have performed a dedicated analysis of only the di-muon and Chorus data (the DimuChorus analysis) so that we can assess how well these data are described in the combined analysis. Using the rescaled percentiles defined above, we see that the descriptions of the nCTEQ15WZSIHdeut data and of the di-muon and Chorus data are both within the 90th percentile of the $\chi^2$-distribution. In Fig.~\ref{fig.SE.comp2} we also show the $S_E$ distribution and clearly see that on average the data are overfitted ($\mu$ = -0.54) and that the standard deviation of the distribution is larger than the one for nCTEQ15WZSIHdeut ($\sigma$ = 1.28). The latter is due to the new neutrino data, given that the neutrino cross-section data from Chorus are fitted to $\chi^2$/pt = 1.27 ($S_E$ = 3.61) and the di-muon data from CCFR to $\chi^2$/pt = 1.68 ($S_E$ = 2.70). However, these data were not described much better in any other analysis and, given that none of the other criteria signals an inconsistency, we can regard these results as statistical fluctuations. In Figs.~\ref{fig.pdfchorus} and \ref{fig.pdfratiochorus} we show the nuclear PDFs extracted from this analysis and compare them to those extracted from the reference nCTEQ15WZSIHdeut analysis. We can see that the central values are almost identical for all but the strange quark PDF, whose central value is shifted by the addition of the new neutrino data. Moreover, the neutrino data are also more sensitive to the strange quark, which is reflected in the noticeably reduced uncertainty.
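The strong sensitivity of the di-muon data to strangeness is easy to understand: at leading order, and only schematically (this expression is illustrative and is not the NLO formula used in our fits), the di-muon cross section is driven by charged current charm production,
\[ \sigma\left(\nu_\mu N \to \mu^-\mu^+ X\right)\;\propto\;\left[\,\vert V_{cs}\vert^2\, s(x,\mu^2)+\vert V_{cd}\vert^2\, d(x,\mu^2)\right]B_\mu\,, \]
where $B_\mu$ denotes the average semileptonic branching fraction of the produced charmed hadrons and the Cabibbo-favored $W^+s\to c$ channel dominates.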
The effect of the better flavor separation of the quark PDFs, thanks to the addition of the (anti-)neutrino cross-section data from Chorus, can also be observed in the reduced uncertainties of the valence quark PDFs. From the scans of the $\chi^2$ function along the free parameters and the breakdown into separate contributions to the global $\chi^2$ stemming from different experiment classes, shown in Fig.~\ref{fig.chi2.scan.chorus}, we can read off which subset of experiments is responsible for constraining specific parameters. We can infer from Fig.~\ref{fig.chi2.scan.chorus} that the valence quark and the anti-quark parameters are mainly constrained by the neutral current DIS experiments, while the gluon parameters are constrained by the vector boson production processes at the LHC and by the single inclusive hadron production processes. Most importantly for this analysis, we see that the strange quark parameters are constrained by the di-muon data and the Chorus inclusive data alike. The predictions for the nuclear correction factors for the neutral and charged current DIS are shown in Figs.~\ref{fig.F2.prediction.comp-2} and \ref{fig.F3.prediction.comp-2}. The predictions from the BaseDimuChorus and the reference nCTEQ15WZSIHdeut analyses are almost identical, and we observe a reduction in the uncertainties after adding the Chorus and di-muon data. In the case of the charged current nuclear correction factor for the structure function $F_2$, we see that the theoretical prediction from the BaseDimuChorus analysis does not describe the structure function data from NuTeV or CDHSW well. This is to be expected, as we have omitted the corresponding NuTeV, CCFR and CDHSW cross-section data from the fit because they were the source of inconsistencies. In the case of the structure function $F_3$, neither the predictions from the BaseDimuChorus nor from the BaseDimuNeuX analysis can describe the $F_3$ data from NuTeV well. We should note that even though the normalization of the cross-section data from NuTeV (and also from the other collaborations) was allowed to vary as a part of the fitting procedure, no shift was applied to the structure function data shown in Figs.~\ref{fig.F2.prediction.comp-2} and \ref{fig.F3.prediction.comp-2}. Shifting the NuTeV data by the normalization of 3.6\% determined in the BaseDimuNeuX analysis would reduce the tensions between the data and the theoretical prediction for both structure functions from this analysis. Finally, in Fig.~\ref{fig.dimuon.prediction.comp} we also compare the theoretical predictions for the ratio of the di-muon and charged current total cross-sections measured by the NOMAD collaboration as a function of the incoming neutrino energy. We see that the prediction from the BaseDimuChorus analysis, where the strange quark PDF is largely determined by the CCFR and NuTeV di-muon data, describes the NOMAD di-muon data very well for all incoming neutrino energies. We also observe that the uncertainty on the prediction is much larger than the experimental errors, indicating that including these data in a future analysis could lead to a substantially more precise extraction of the strange quark PDF. Given the large uncertainties on all theoretical predictions shown in Fig.~\ref{fig.dimuon.prediction.comp}, we can consider the NOMAD data to be described well enough even by the nCTEQ15WZSIHdeut and BaseDimuNeuX analyses. This is an indication of a realistic estimation of the uncertainty of the strange quark PDF in these analyses.
Out of all possible approaches listed at the beginning of Sec.~\ref{sec:nufinal}, only the last one presented here led to a combined analysis compatible with the reference analysis nCTEQ15WZSIHdeut. Moreover, the neutrino data included in this analysis provided a much improved description of the strange quark PDF. \section{Conclusions and outlook} \label{sec:conclusion} The aim of this analysis was to take a second look at the (anti-)neutrino deep inelastic scattering data and see if, after all the developments of recent years, a conclusion different from the one presented in our analysis \cite{Kovarik:2010uv} can be reached. As our previous study of the neutrino data predates the nCTEQ15 analysis and any updates thereafter, one could have imagined a shift in the outcome. Moreover, compared to our previous analysis, we were now in a position to use different tools to analyze the compatibility of the neutrino DIS data. We have also added other neutrino data sets to make the current analysis much more comprehensive. The analysis presented in this paper starts by collecting all relevant updates to the nCTEQ15 analysis to form the reference fit used in assessing the compatibility of the neutrino data. This is followed by a review of the neutrino data and the extraction of effective nuclear correction factors from the cross-section data. On top of that, a fit to all neutrino data is performed and the results are compared with the reference analysis. In the main part of this analysis, in Sec.~\ref{sec:nuglobal}, we have performed a global fit (BaseDimuNeu) where we have added all neutrino data to the extended nCTEQ15 analysis. We have observed large tensions in the previously well determined valence quark PDFs and, even in the strange quark PDF determination, tension among the neutrino data themselves is visible. Therefore, the first important conclusion of this analysis is that, due to the large tensions, the bulk of the neutrino data must be considered incompatible with the data of the baseline analysis, or even with each other. In an effort to recover at least a subset of the neutrino data for use in a global analysis, we have proposed three strategies to alleviate the tensions between the neutrino DIS data and the data in the reference analysis. We have analyzed the possibility of neglecting the correlations in the systematic errors of the NuTeV experiment, which are responsible for a substantial part of the tensions in the neutrino data themselves. This yielded a much better description of the neutrino data, but the tensions with the original data of the nCTEQ15WZSIHdeut analysis remained. Since the neutrino data introducing tension at high Bjorken $x$ had already been removed by the initial global kinematic cuts, the next possibility we investigated was the introduction of an arbitrary kinematic cut to remove the remaining problematic neutrino data in the region of low Bjorken $x$ (BaseDimuNeuX). As expected, after the removal of these data, which cause most of the remaining tension, the description of all remaining data improved, and this can in principle be considered a viable way forward. However, for this possibility to be justified, a reason for introducing such a cut has to be provided. It has been hypothesized (e.g., in \cite{Kopeliovich:2012kw}) that shadowing in neutrino scattering on nuclei works differently than in neutral-current DIS, which is the cornerstone of the reference PDF analysis. If this were indeed the case, one would have to modify the theoretical predictions for neutrino scattering.
Alternatively, before doing so, a cut might be introduced to remove the data which do not have a proper theoretical description. In such a case, the results of the BaseDimuNeuX analysis might be considered the final result of this study. Given, however, that an alternative mechanism for shadowing in neutrino-nucleus interactions is not yet completely established and is merely hypothesized, we have put forward a different final result of our compatibility analysis. We have identified a subset of the neutrino data which has no tension with the data in the reference analysis and can therefore be safely included in a combined global analysis. Unfortunately, the majority of the neutrino DIS cross-section data have been left unused in the process. Including just the scattering data from Chorus and the di-muon data from NuTeV and CCFR, we have performed the analysis called BaseDimuChorus. The result of including the new data is a much improved description of the strange quark PDF. Even though we have found a way to include some neutrino data in our analysis in order to improve the determination of the strange quark PDF, the fact remains that the bulk of the neutrino DIS cross-section data is incompatible with the neutral current DIS data. Without new experimental data on neutrino-nucleus interactions in the DIS regime, there is no way to decide if this inconsistency is due to a different mechanism for the neutrino-nucleus interaction or simply a sign of problems in the acquisition of the current neutrino experimental data. The resolution could have come from the high-statistics NOMAD experiment, but even after more than 20 years only the results of the di-muon analysis have been publicly released. Unfortunately, after plans for a new neutrino scattering experiment were not followed up on \cite{NuSOnG:2009rcm}, no new high-energy neutrino scattering experiment is currently being planned. Nevertheless, there is potential to obtain new crucial data from novel ideas or experiments such as the proposed Forward Physics Facility \cite{Anchordoqui:2021ghd} at the LHC or from precise measurements of charged current DIS processes at the future Electron-Ion-Collider \cite{Accardi:2012qut,AbdulKhalek:2021gbh}. \section*{Acknowledgments} We are pleased to thank Un-ki Yang for providing us the CCFR differential cross section data. We are also grateful to Alberto Accardi, Chlo\'e L\'eger, and Peter Risse for useful discussions. The work of P.D., T.J., M.K. and K.K. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- project-id 273811115 -- SFB 1225. P.D., T.J., K.F.M., M.K. and K.K. also acknowledge support of the DFG through the Research Training Group GRK 2149. \\ This manuscript has been authored by Fermi Research Alliance, LLC under Contract No.~DEAC02-07CH11359 with the U.S.~Department of Energy, Office of Science, Office of High Energy Physics. \\ F.O. was supported by the U.S. Department of Energy Grant No. DE-SC0010129. \\ A.K. and R.R. acknowledge the support of Narodowe Centrum Nauki under Sonata Bis Grant No. 2019/34/E/ST2/00186. \\ R.R. acknowledges the support of the Polska Akademia Nauk (grant agreement PAN.BFD.S.BDN.613.022.2021 -- PASIFIC 1, POPSICLE). This work has received funding from the European Union's Horizon 2020 research and innovation program under the Sk{\l}odowska-Curie grant agreement No. 847639 and from the Polish Ministry of Education and Science. \\ The work of I. S.
was supported in part by the French National Centre for Scientific Research CNRS through IN2P3 Project GLUE@NLO. \clearpage
\section{Introduction} Studies of elastic scattering of high energy protons have led to several unexpected results, reviewed, e.g., in Refs.~\cite{ijmp, ufn17}. Among them, the increase of the share of elastic scattering by a factor of about 1.5 from ISR energies to the LHC is the most surprising and as yet unexplained phenomenon. Up to now it is unclear why inelastic processes are losing the competition with elastic scattering. With the help of the unitarity condition, this feature can be formulated in terms of the darkness of the spatial profile of inelastic interactions, which also increases. One of the ways toward understanding these results lies in the detailed analysis of the intriguing shape of the elastic differential cross section with respect to the transferred momentum. Its characteristic fast exponential decrease at comparatively small transferred momenta in the so-called diffraction cone and the subsequent (dip/bump + slower decrease) structure at higher momenta have been carefully studied by experimentalists in a wide energy interval and, especially, at the LHC \cite{totem1, totem2, totem3, totem4, atlas1, atlas2}. Many phenomenological models have been proposed (see, e.g., the recent papers \cite{kmr, glm, bdh, ncc, fgp, dr15, anis, sel, kfk, kfk1, fms, bsw} and references therein) to explain these peculiarities. Among them, the kfk-model \cite{kfk, kfk1} must be especially highlighted. The analytical expressions for the imaginary and real parts of the elastic scattering amplitude, based on some QCD arguments, are presented there as functions both of the transferred momentum measured in experiment and of the impact parameter relevant to the spatial view of the process. The model successfully describes experimental data in a wide energy interval from 20 GeV to 8 TeV in the center-of-mass system by choosing the energy dependence of the adjustable parameters. In total, there are 8 such parameters, each of which contains an energy-independent term and terms increasing with energy $s$ as $\log \sqrt s$ and $\log ^2\sqrt s$ (see Eqs.~(29)-(36) in \cite{kfk}). Thus 8 coefficients must be determined from comparison with experiment at a given energy, and 24 for the description of the energy dependence in a chosen interval. Besides, there is another constant $a_0$ (see Eqs.~(8)-(10) in \cite{kfk}) which is claimed to be fixed within a factor of 1.5 by theoretical arguments about the correlation length of the gluon vacuum expectation value. All this concerns the nuclear part of the amplitude for transferred momenta $0.05<\vert t\vert <2$ GeV$^2$. Additional parameters have to be introduced for the outer intervals. They are related to the Coulomb-nuclear interference region at very small transferred momenta and to the three-gluon exchange term assumed to be relevant at $\vert t\vert >2$ GeV$^2$. The other models mentioned above use a comparable number of adjustable parameters. Heavy computational work is required to reveal the otherwise hidden impact of a particular coefficient on the quality of the fit. That is why it is desirable to have a simplified model with direct analytical estimates of this impact and a smaller number of parameters. \section{The model} Here, such a model, aimed at a rather accurate qualitative description of experimental results, is presented. Its main outcome contains a single parameter only. The proposed model is strongly inspired by the phenomenological QCD-motivated kfk-model \cite{kfk, kfk1}, which describes experimental data quantitatively in a wide energy interval.
That is why we first review the main findings of the kfk-model; they lie at the foundation of the simplified model. The crucial assumption of the proposed model is the complete neglect of the real part of the elastic scattering amplitude $f_R$ at high energies. Such an assumption can be guessed from the results of the kfk-model. It is well known from the dispersion relations \cite{dnaz, blo, bloh} that the real part at high energies is much smaller than the imaginary part $f_I$ for forward scattering: $f_R(s,t=0)=(0.1 - 0.14)f_I(s,t=0)$. This is confirmed by LHC results and satisfied within the kfk-model. Moreover, the real part in the kfk-model becomes even much smaller than the imaginary part at low transferred momenta within the diffraction cone (see Fig. 3 of \cite{kfk}). It possesses a zero there, in accordance with the theoretical claims of Refs.~\cite{mar, mar1}. The integral contribution of the real part of the amplitude to the elastic cross section amounts to less than 1.5$\%$, since the diffraction cone dominates, as demonstrated in Table II of \cite{kfk1}. The role of the real part becomes noticeable for the differential cross section only at its dip, where the imaginary part vanishes, as seen from Fig. 4 of \cite{kfk}. However, the integral contribution from this region is negligible because all the values at the dip are very low. The position of the dip $t_{dip}$ practically coincides with the position of the zero of the imaginary part $t_0$ (see Fig. 4 in \cite{kfk1}) because the cross section at the dip is much smaller than its values in the diffraction peak. Namely, $d\sigma /dt\vert _{dip}/d\sigma /dt\vert _{t=0}\approx 3\cdot 10^{-5}$ at 7 TeV; since the cross section at the dip is then saturated by $f_R^2$ alone, the real part at the dip $f_R(t=t_{dip})$ is less than $\sqrt{3\cdot 10^{-5}}\approx 5\cdot 10^{-3}$ of the imaginary part in the diffraction peak $f_I(t=0)$. These findings validate the neglect of the real part of the amplitude $f_R$ in the simplified approach. One can approximate the differential cross section by the following expression \begin{equation} \frac {d\sigma }{dt}\approx f_I^2 \label{dsdt1} \end{equation} neglecting $f^2_R$ compared to $f^2_I$. The two most typical features of the imaginary part of the amplitude in the kfk-model are its steep exponential decrease at low transferred momenta in the so-called diffraction cone and its single zero at some transferred momentum. These features can be accounted for by the following expression for the imaginary part $f_I$ of the nuclear amplitude of the elastic scattering of high energy protons, used in our simplified model: \begin{equation} f_I(s,t)= \frac {\sigma _{tot}(s)}{4\sqrt {\pi}}(1-(t/t_0(s))^2)e^{B(s)t/2}. \label{fi} \end{equation} The variables $s$ and $t$ are the squared energy and transferred momentum of the colliding protons in the center-of-mass system, $s=4E^2=4(p^2+m^2)$, $-t=2p^2(1-\cos \theta)$ at the scattering angle $\theta $. The amplitude is normalized according to the optical theorem. Its two typical features are the exponential factor with the slope $B$, which mainly governs the behavior of the diffraction cone measured at low transferred momenta $t$, and the zero at $t=t_0$, which is crucial for the description of the dip/bump region. Thus there are only two energy-dependent parameters in the model, $t_0$ and $B$. We consider the total cross section as fixed by the optical theorem at the normalization point $t=0$. The formula (\ref{fi}) fits the kfk-graph for $T_I$ in Fig. 4 of \cite{kfk} quite well in the interval $0.05<\vert t\vert <2$ GeV$^2$.
The negative term in the brackets steepens the shape of the diffraction cone, which is often approximated by another exponential with a larger slope at the end of the cone. Thus the differential cross section is given by the following expression \begin{equation} \frac {d\sigma }{dt}\approx f_I^2=\frac {\sigma ^2 _{tot}(s)}{16\pi} (1-(t/t_0(s))^2)^2e^{B(s)t}. \label{dsdt} \end{equation} It practically coincides with $d\sigma ^I/dt$ shown in Fig. 4 of \cite{kfk} for $\vert t_0\vert =0.4757$ GeV$^2$ and $B\approx B^I=19.90$ GeV$^{-2}$ at 7 TeV, as given in Tables I and II of \cite{kfk1}. Therefore we do not reproduce their almost identical shapes here. Of course, our assumption leads to a zero of the differential cross section at $t=t_0$ (as in Figs. 3, 4 of \cite {kfk, kfk1} for $f_I$ and $d\sigma ^I/dt$) instead of a dip, but it rather accurately reproduces the behavior in other $t$-regions, which are more important for the integral contributions. The elastic cross section is \begin{equation} \sigma _{el}=\frac {\sigma ^2 _{tot}(s)}{16\pi B}\left (1-\frac {4}{(B t_0)^2} +\frac {24}{(Bt_0)^4}\right ). \label{el} \end{equation} The structure of the obtained expression is very transparent. The main normalization factor $\sigma ^2 _{tot}(s)/16\pi B$ is determined by the height of the diffraction peak, set by $\sigma _{tot}$ via the unitarity condition at $t=0$, and by its width $B$. The terms in the brackets describe the suppression of the diffraction peak at its end. Were the real part of the amplitude taken into account, this expression would be multiplied by the factor $\approx 1+(f_R(s,0)/f_I(s,0))^2\approx 1.02$. To be more precise, the integral contributions of the real and imaginary parts inside the diffraction cone should be evaluated; this shifts the above factor even closer to 1 in the kfk-model, because $f_R$ decreases there faster than $f_I$. According to experimental measurements, the ratio of the elastic to the total cross section increases from ISR to LHC up to a value of about 1/4. In what follows we use the ratio \begin{equation} r=\frac {4\sigma _{el}}{\sigma _{tot}}=\frac {\sigma _{tot}(s)}{4\pi B} \left (1-\frac {4}{(Bt_0)^2}+\frac {24}{(Bt_0)^4}\right ). \label{rs} \end{equation} It is close to 1 at the LHC, with both factors near 1. The energy dependence of the first factor is especially important for $r$ in view of the smallness of the correction terms in the brackets. These terms depend on the single dimensionless product $B\vert t_0\vert $. They show how deeply the zero position $t_0$ penetrates inside the diffraction cone at a given energy. This motion of the zero is often approximated in exponential fits of experimental data on the differential cross section by a steeper falling exponential at the end of the diffraction cone. In the present model it is taken into account and mimicked by the negative contribution of the terms in the brackets. They are small at LHC energies, because $B\vert t_0\vert \approx 10$ there. However, the position of the dip (close to $t_0$) seems to move to smaller values with increasing energy faster than $B$ grows, even in the range of LHC energies. This is confirmed by the kfk-model (see Table I and Table II of \cite{kfk1}). The corrections can become larger at higher energies. Then the shape of the diffraction cone is modified and cannot be treated as exponential for small enough values of $\vert t_0\vert $. The same factor determines the dip/bump structure of the differential cross section beyond the diffraction cone.
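For completeness, Eq.~(\ref{el}) follows from Eq.~(\ref{dsdt}) by expanding the bracket, $(1-(t/t_0)^2)^2=1-2t^2/t_0^2+t^4/t_0^4$, and using the elementary integrals
\[ \int _0^{\infty}u^{n}e^{-Bu}\,du=\frac{n!}{B^{n+1}},\qquad n=0,2,4, \]
with $u=\vert t\vert $; these produce the three terms in the brackets of Eq.~(\ref{el}).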
The bump position is defined by the zero of the derivative of $f_I$. The shift of the bump position $\vert t_b\vert $ relative to the dip is given by \begin{equation} \frac {\vert t_b\vert - \vert t_0\vert}{\vert t_0\vert}\approx \frac {2}{B\vert t_0\vert }. \label{tbt0} \end{equation} This ratio depends on the same product $B\vert t_0\vert $. It is about 0.2 at the LHC, or $\vert t_b\vert - \vert t_0\vert \approx 0.1$ GeV$^2$. The estimate is a qualitative one. The distance between the dip and the bump can be somewhat larger, because the $t$-dependence of the real part is important in this interval of the transferred momenta. \section{The spatial inelastic profile} The ratio $r$ is very close to another important characteristic of elastic processes, \begin{equation} \zeta (s)=\frac {1}{2\sqrt {\pi }}\int _0^{\infty}d\vert t\vert f_I(s,t). \label{zeta1} \end{equation} It can be called the darkness factor, because it determines the attenuation in the spatial profiles of the interaction regions for elastic and inelastic processes in the impact-parameter ($b$) representation\footnote{Let us note that the integral from 0 to $\vert t_0\vert $ is positive and that from $\vert t_0\vert $ to $\infty $ is negative, because $f_I$ changes sign at $t_0$.}. The impact parameter $b$ is defined as the transverse distance between the trajectories of the centers of the colliding protons. The knowledge of the attenuation in inelastic processes at different impact parameters is gained from the unitarity condition, which connects the elastic and inelastic channels of the reaction. Here we consider only the strength of the attenuation in central ($b=0$) inelastic collisions, referring the reader to the reviews cited above for a detailed description of the unitarity condition and the full $b$-dependence. The unitarity condition for central head-on collisions with $b=0$ reads \cite{ijmp, ufn17} \begin{equation} G(s, b=0)=\zeta (s)(2-\zeta (s)), \label{unit} \end{equation} where $G(s,b)$ is the $b$-profile of inelastic collisions. The darkness of central inelastic collisions is complete ($G(s,b=0)=1$) at $\zeta =1$. Both values are critical ones, because any slight decline of $\zeta $ from 1 by $\pm \delta $ results in a much smaller and always negative decline of $G(s,0)$ from 1 by $-\delta ^2$. The attenuation is complete at $\zeta =1$ and becomes weaker for any value of $\zeta $ that differs from 1. Thus the energy behavior of $\zeta $ determines the deformation of the inelastic profile with energy. For the proposed simple model one gets \begin{equation} \zeta =\frac {\sigma _{tot}(s)}{4\pi B}\left( 1-\frac {8}{(Bt_0)^2}\right ). \label{zmod} \end{equation} Here, the correction term in the brackets is somewhat different from those in the ratio $r$. However, all of them are small at the LHC, where $B\vert t_0\vert \approx 10$. Our analytical estimates show how severe the requirements on the accuracy of experimental measurements are in the vicinity of the critical values of $r$ and $\zeta $ close to 1 observed at the LHC. The factor $1+(f_R(s,0)/f_I(s,0))^2\approx 1.02$ should again be kept in mind if the real part is taken into account. The ratio $r$ is always larger than $\zeta $: \begin{equation} \frac {r}{\zeta }=\frac {1-\frac {4}{(Bt_0)^2}+\frac {24}{(Bt_0)^4}} {1-\frac {8}{(Bt_0)^2}}\approx 1+\frac {4}{(Bt_0)^2}+\frac {56}{(Bt_0)^4}>1. \label{r/z} \end{equation} In fact, this relation is the main result of our model. The energy behavior of the ratio $r/\zeta $ determines the evolution of their relative values.
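To illustrate how directly these expressions map measured quantities onto $r$ and $\zeta $, consider the following minimal numerical sketch; the input values below are assumptions indicative of 7 TeV data, not fit results.
\begin{verbatim}
# Sketch: evaluate Eqs. (rs), (zmod) and their ratio for LHC-like inputs.
import math

GEV2_PER_MB = 1.0 / 0.3894        # 1 mb = 1/0.3894 GeV^-2
sigma_tot = 98.0 * GEV2_PER_MB    # assumed total cross section (7 TeV)
B = 20.0                          # assumed cone slope, GeV^-2
t0 = 0.5                          # assumed |t_0| ~ |t_dip|, GeV^2
x = B * t0                        # the single product B|t_0| ~ 10

r    = sigma_tot / (4 * math.pi * B) * (1 - 4/x**2 + 24/x**4)
zeta = sigma_tot / (4 * math.pi * B) * (1 - 8/x**2)
print(r, zeta, r / zeta)          # ~0.96, ~0.92, ~1.05
\end{verbatim}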
The ratio $r/\zeta $ depends only on the single variable $B\vert t_0\vert $, not on $B$ and $t_0$ separately. It is about 10 at the LHC, with $B\approx 20$ GeV$^{-2}$ and $\vert t_0\vert \approx \vert t_{dip}\vert \approx 0.5$ GeV$^2$. Thus, $r$ exceeds $\zeta $ by less than 5$\%$. Both of them are close to 1. However, the precision of the measurements of $r$ is still not high enough. The single variable $B\vert t_0\vert $ can be obtained from experimental results in which an exponential fit of the low-$t$ region is performed and the position of the dip $t_{dip}\approx t_0$ is determined. Using formula (\ref{r/z}), one can easily estimate the measurement accuracy required to obtain sufficiently precise values. It can be used to get the value of $\zeta $ once the ratio of the elastic and total cross sections $r$ is measured precisely enough and the parameter $B\vert t_0\vert $ is determined. The increase of the ratio $r$ from 0.67 at ISR energies to about 1 at the LHC is directly related to the increase of $\zeta $. Therefore their precise values in that energy interval are very important. The need for better accuracy of the experimental results is evident. The above discussion of the dependence of the attenuation on the darkness factor shows that the values of $r$ near 1 obtained from experimental data at the LHC can be considered critical ones. The accuracy of the experimental data at the LHC is still not sufficient to pin down the variables $r$ and $\zeta $ near 1 with high enough precision. The required accuracy is easily estimated with the help of formula (\ref{r/z}). The further behavior of these variables at higher energies is especially crucial. It is reasonable to assume that the values of $r$ will increase, following the (yet unexplained!) trend at lower energies. The tendencies of $\zeta $ to saturate at 1 or to increase above 1 at higher energies would lead to different predictions about the inelastic profiles, with complete darkness or decreased attenuation at the center, respectively. In the kfk-model the asymptotic values of $r$ and $\zeta $ are equal to 1.416 and 1, respectively. The parameter $B\vert t_0\vert $ should become at least twice smaller there, as follows from Eq. (\ref{r/z}). Thus one predicts that the dip must move deeper inside the cone at higher energies. That is a qualitative feature observable in experiment. The region of the diffraction cone contributes most to both $\zeta $ and $r$, because they are defined as integrals of $f_I$ and $f_I^2$. To be more definite, the role of the region beyond the cone (at transferred momenta larger than the dip position) is estimated by integration from $\vert t_0\vert $ to infinity. Its contribution $\Delta \zeta $ to $\zeta $ happens to be negligibly small and negative: \begin{equation} \Delta \zeta \frac {4\pi B}{\sigma _{tot}}=-\frac {4}{B\vert t_0\vert }\left(1+ \frac {2}{B\vert t_0\vert }\right)e^{-B\vert t_0\vert/2}, \label{t0inf} \end{equation} which turns out to be about $-3\cdot 10^{-3}$ at the LHC (see the explicit integral below). Therefore the role of the tail of the differential cross section is very mild. The unitarity condition imposes the limit $\zeta \leq 2$, required by the positivity of the inelastic profile (\ref{unit}). At this limit there are no inelastic processes in central collisions ($G(s,0)=0$ according to Eq. (\ref{unit})). It is strange that this limit has been called the ``black disk''. We prefer to call it TEH -- the Toroidal Elastic Hollow \cite{ijmp, ufn17}. The inelastic profile acquires a toroidal shape with a hole at the very center, which allows only elastic scattering there.
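For reference, the explicit integral behind Eq.~(\ref{t0inf}) reads
\[ \int _{\vert t_0\vert}^{\infty}\left(1-\frac{u^2}{t_0^2}\right)e^{-Bu/2}\,du= -\frac{8}{B^2\vert t_0\vert}\left(1+\frac{2}{B\vert t_0\vert}\right)e^{-B\vert t_0\vert/2}, \]
which, multiplied by $\sigma _{tot}/8\pi $ and normalized by $4\pi B/\sigma _{tot}$, yields the quoted expression.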
In principle, such a toroidal regime is not excluded asymptotically, but it is not realized in the kfk-model, where $\zeta $ saturates at 1. It requires the relation \begin{equation} \sigma _{el}=\sigma _{inel}=\sigma _{tot}/2, \end{equation} which is not fulfilled at present energies. The height of the profile of elastic collisions, $\zeta ^2$, at the center $b=0$ completely dominates and saturates the total profile $2\zeta $ for $\zeta =2$. \section{Discussion and conclusions} If taken into account, the neglected real part of the elastic amplitude would require many new parameters to be introduced. We have restricted the considered range of the transferred momenta to those which provide the main contribution to the integral characteristics $r$ and $\zeta $ described above. The integral contribution of the real part can safely be omitted there. A special problem to be discussed is the energy behavior of $\zeta $. It is important that the value of $\zeta $ in the kfk-model starts increasing at ISR and approaches 1 at LHC energies, remaining below 1 by only a few percent. At the same time, if the steady increase of the share of elastic processes from ISR to LHC persists at higher energies, with $r$ becoming larger than 1, one should consider the intriguing possibility that $\zeta $ will also exceed 1. Surely, that can happen only if the ratio of the total cross section to $4\pi B$ becomes noticeably larger than 1. It increased from about 0.67 at ISR energies to 1.02$\pm$0.04 at the LHC. In fact, the experimental values of $r$ at LHC energies range from 1.01$\pm$0.06 \cite{totem1} to 1.06$\pm$0.06 \cite{totem4}. The factor in brackets in Eq.~(\ref{rs}) is about 0.96. Thus the situation at LHC energies is critical in the sense that all estimates of $r$ and $\zeta $ are near 1 within the accuracy of the experimental data. Further insight into proton collisions can be gained once more precise data on elastic scattering at the LHC are obtained. The accuracy of the measurements of $\sigma _{tot}$, $B$ and $t_{dip}$ is decisive. In the case of a further noticeable increase of $\sigma _{tot}/4\pi B$ at higher energies, the attenuation for central inelastic collisions could become incomplete ($G(s,b=0)<1$) after passing through completeness at the LHC. However, if the value of $\zeta $ tends to 1 asymptotically, no such peculiar behavior will be observed: $G(s,b=0)$ will tend to 1. The latter possibility looks quite probable both from our intuitive expectations and from the conclusions of the kfk-model. In conclusion, we have proposed a simple model of elastic scattering of high energy protons which admits analytic calculations and easy estimates with the help of a single parameter related to experimentally measurable characteristics. The accuracy of the experimental measurements is directly translated into the precision of the estimates of its value. The analytic expressions allow experimentalists to determine the demands on the measurement accuracy directly from an estimate of the product of the cone slope $B$ and the dip position $t_{dip}\approx t_0$, and thereby to provide accurate values of $r$ and $\zeta $. That is especially important in the LHC energy range, where both $r$ and $\zeta $ reach their critical values close to 1. Concerning the further perspective, one can state that there is as yet no consensus about the possible energy behavior of the share of elastic processes $r$ and of $\zeta $ at higher energies.
Their asymptotic approach to 1, or an increase above 1, would tell us not only about elastic scattering but would also reveal interesting features of inelastic collisions. \medskip {\bf Acknowledgments} \medskip I am grateful for the support of the RAS-CERN program and the Competitiveness Program of NRNU ``MEPhI'' (M.H.U.).
\section{Introduction} The income fluctuation problem refers to the broad class of decision problems that characterize the optimal consumption-saving behavior of agents facing stochastic income streams. In most cases, agents are subject to idiosyncratic shocks and borrowing constraints. Markets are incomplete, so idiosyncratic risks cannot be fully diversified or hedged. The model represents one of the fundamental workhorses of modern macroeconomics and has been adopted to study a large variety of important topics, ranging from asset pricing, life-cycle choice, fiscal policy, and social security to income and wealth inequality, among many others. See, for example, \cite{schechtman1976income}, \cite{deaton1992behaviour}, \cite{huggett1993risk}, \cite{aiyagari1994uninsured}, \cite{carroll1997buffer}, \cite{chamberlain2000optimal}, \cite{cagetti2008wealth}, \cite{de2010elderly}, \cite{guner2011taxation}, \cite{guvenen2011macroeconomics}, \cite{meghir2011earnings}, \cite{meyer2013consumption}, \cite{guvenen2014inferring} and \cite{heathcote2014consumption}. In recent years, researchers have come to investigate an important mechanism in the income fluctuation framework: the dispersion in rates of return to wealth, referred to below as capital income risk. Early studies are provided by \cite{angeletos2005incomplete} and \cite{angeletos2007uninsured}. These works highlight that the macroeconomic effects of idiosyncratic capital income risk can be both qualitatively distinct from those of idiosyncratic labor income risk and quantitatively significant. An especially important set of applications concerns wealth inequality. As is well known in the literature, the classic income fluctuation frameworks of \cite{huggett1993risk} and \cite{aiyagari1994uninsured}, in which returns to wealth are homogeneous across agents, fail to reproduce the high inequality and the fat upper tail of wealth distributions in many economies. This empirical failure has prompted researchers to investigate models with uninsured capital income risk. Entrepreneurial risk, a representative example of capital income risk, is studied by \cite{quadrini2000entrepreneurship} and \cite{cagetti2006entrepreneurship}. By introducing heterogeneity across agents in their work and entrepreneurial ability, these studies successfully generate skewed wealth distributions that are more similar to those observed in the U.S. data. Moreover, in an OLG economy with intergenerational transmission of wealth, \cite{benhabib2011distribution} show that capital income risk is the driving force behind the heavy-tail properties of the stationary wealth distribution. In a Blanchard-Yaari style economy, \cite{benhabib2016distribution} show that idiosyncratic investment risk has a large impact on generating a double Pareto stationary wealth distribution. In another important contribution, \cite{gabaix2016dynamics} point out that a positive correlation of returns with wealth (``scale dependence''), in addition to persistent heterogeneity in returns (``type dependence''), can well explain the speed of changes in the tail inequality observed in the data. An important work that is highly pertinent to the present paper is \cite{benhabib2015wealth}. In a stylized infinite horizon income fluctuation problem with capital income risk, the authors prove that there exists a unique stationary wealth distribution that displays a fat tail.
On the empirical side, using twelve years of population data from Norway's administrative tax records, \cite{fagereng2016heterogeneity, fagereng2016heterogeneityb} document that individuals earn markedly different average returns to both their financial assets (a standard deviation of $14\%$) and net worth (a standard deviation of $8\%$). Wealth returns are heterogeneous both within and across asset classes. Returns are positively correlated with the wealth level and highly persistent over time. In addition, wealth returns are (mildly) correlated across generations. Although theoretical, empirical and quantitative studies all reveal the significant economic impact of capital income risk, existing models of capital income risk in the income fluctuation framework are highly stylized. For example, the assumptions of an {\sc iid} labor income process, an {\sc iid} wealth return process and their mutual independence made by \cite{benhabib2015wealth} are rejected by the empirical data in several economies (see, e.g., \cite{kaplan2010much}, \cite{guvenen2010inferring} and \cite{fagereng2016heterogeneity, fagereng2016heterogeneityb}). As \cite{benhabib2015wealth} point out, adding positive correlations in labor earnings and wealth returns enriches model dynamics in that it captures economic environments with limited social mobility. To the best of our knowledge, a general theory of capital income risk in the income fluctuation framework has been missing in the literature. This raises concerns about whether or not existing views on the economic impact of capital income risk hold in general, as well as whether or not modeling capital income risk in more generic and realistic settings is technically achievable. To be specific, several important questions are: \begin{itemize} \item Do correlations in the wealth return process (e.g., those caused by mean persistence or stochastic volatility of wealth returns) enhance or dampen the macroeconomic impact of capital income risk? \item What if, in addition to serial correlation, the wealth return process and the labor earnings process are mutually correlated? \item Does an optimal policy always exist in these generalized settings? If it does, is it unique? \item Does the stochastic law of motion for optimal wealth accumulation yield a stationary distribution of wealth? \item If it does, is the model economy globally stable, in the sense that the stationary distribution is unique and can be approached by the distributional path from any starting point? \item How do we compute the optimal policy and the stationary wealth distribution in practice? \end{itemize} These questions are highly significant, in the sense that a negative answer to any of them would pose a threat to the existing findings concerning capital income risk. However, due to technical limitations, these questions have not been investigated in a general income fluctuation framework. In this paper, we attempt to fill this gap. To this end, we extend the standard income fluctuation problem by characterizing the following essential features. \begin{itemize} \item Agents face an idiosyncratic rate of return to wealth $\{ R_t\}$ (capital income risk) and idiosyncratic labor earnings $\{Y_t\}$ (labor income risk), both of which are affected by a generic, exogenous Markov process $\{ z_t \}$. \item Supports of $\{ R_t\}$ and $\{ Y_t\}$ are bounded or unbounded, and, in either case, allowed to contain zero.
\item The reward (utility) function is bounded or unbounded, and no specific structure is imposed beyond differentiability, concavity and the usual slope conditions. \end{itemize} As can be seen, general $\{ R_t\}$ and $\{Y_t\}$ processes that are serially correlated and mutually dependent are covered by our framework. Moreover, consumption can become either arbitrarily small or arbitrarily large, so that agents are allowed to borrow up to the highest sustainable level of debt, creating rich and substantial model dynamics reflecting agents' borrowing activity.\footnote{See the discussion of \cite{rabault2002borrowing}.} We make several tightly connected contributions on optimality, stochastic stability and computation of this generalized income fluctuation problem. First, we prove that the Coleman operator adapted to this framework is indeed an ``$n$-step'' contraction mapping in a complete metric space of candidate consumption policies, even when rewards are unbounded. The unique fixed point is shown to be the optimal policy (also unique in the candidate space), and several important properties (e.g., continuity and monotonicity) are derived. To tackle unboundedness, we draw on and extend \cite{li2014solving} by adding capital income risk and constructing a metric that evaluates consumption differences in terms of marginal utility. To obtain contractions under a minimal level of restriction, we focus our key assumption on bounding the \textit{long-run growth rate} of wealth returns. We show that this assumption is indeed equivalent to bounding the spectral radius of an expected wealth return operator (a bounded linear operator) by $1 / \beta$. As a result, it is similar to the assumptions made by the recent literature on the operator-theoretic method, which have been proven both necessary and sufficient for the existence and uniqueness of solutions in a variety of models (see, e.g., \cite{hansen2009long, hansen2012recursive}, \cite{borovivcka2017necessary} and \cite{toda2018wealth}). Our assumption is easy to verify numerically. For example, when the state space for the exogenous Markov process $\{z_t\}$ is finite, verifying this assumption is as convenient as finding the largest modulus of the set of eigenvalues for a given matrix. Second, as our most significant contribution, we show that the model economy is globally stable, even in the presence of capital income risk. Specifically, there exists a unique stationary distribution for the state process (including wealth and the exogenous Markov state), and, given any initial state, the distributional path of the state process generated via optimal consumption and wealth accumulation converges to the stationary distribution as time iterates forward. The idea of the proof goes as follows. Based on the optimality results established in the previous step, the existence of a stationary distribution is guaranteed under some further restrictions on agents' level of patience, plus some mild assumptions on the stochastic properties of the exogenous state and the labor income processes. The key is to show that the wealth process is bounded in probability. The proof of global stability is trickier and is separated into two scenarios. (Scenario \rom{1}) When the exogenous state process $\{z_t\}$ is independent and identically distributed, so are $\{R_t\}$ and $\{ Y_t\}$, and wealth is the only state variable remaining.
We show that, with some additional concavity structure imposed, the model economy is monotone, allowing us to use some new results in the field of stochastic stability (due to \cite{kamihigashi2014stochastic, kamihigashi2016seeking}). Based on these results, both global stability and the Law of Large Numbers are established. In this case, the distributional path converges to the stationary distribution in the sense of weak convergence. Moreover, the added concavity assumption holds for standard utilities such as CRRA or logarithmic utility. Notably, even in the current case, our theory extends the stability theory of \cite{benhabib2015wealth}, since we allow $\{R_t\}$ and $\{ Y_t\}$ to be dependent on each other (a more detailed comparison is given below). (Scenario \rom{2}) When the exogenous state process $\{z_t\}$ is Markovian, $\{ R_t\}$ and $\{Y_t\}$ are in general autocorrelated and mutually dependent, and the structure of a monotone economy is lost due to the added exogenous state. As a result, the order theoretic approach used in the previous case is no longer applicable. In response, we exploit the traditional theory of stochastic stability (see, e.g., \cite{meyn2009markov}). Specifically, we provide sufficient conditions for the state process to be $\psi$-irreducible, strongly aperiodic and a positive Harris chain, which in turn guarantee global stability and the Law of Large Numbers. Convergence here is in total variation norm distance, which is stronger than weak convergence. Our sufficient conditions are easy to verify in applications, and are centered on the existence of density representations for the exogenous state process and the labor earnings process. We only require that the supports of the two densities contain respectively a nontrivial compact subset and a certain ``small'' interval. Importantly, no further concavity structure is required. Moreover, we show in this scenario that if we add the same concavity structure as we do in scenario \rom{1} and some other mild assumptions (e.g., existence of densities for the wealth return process and a geometric drift property of the labor earnings process), then the model economy is indeed $V$-geometrically ergodic. As a result, convergence to the stationary distribution occurs at a geometric speed. Since an {\sc iid} process is a special case of a Markov process, as a byproduct, the theory in scenario \rom{2} serves as an alternative stability theory when the exogenous state process is {\sc iid}. As can be seen from the discussion above, neither of the two theories is ``stronger'' than the other in this circumstance. On the one hand, global stability in scenario \rom{1} is established under an additional concavity assumption, which is not required for global stability in scenario \rom{2}. On the other hand, we make no assumptions on the density structure of the key stochastic processes in scenario \rom{1} as we do in scenario \rom{2}. Based on the established stability and ergodicity results, the unique stationary distribution can be approximated via tracking a single state process simulated according to the optimal consumption and wealth accumulation rules, which is highly efficient. The real caveat is that, in the presence of capital income risk, there can be very large realized values of wealth (and consumption), causing serious problems for the numerical computation of the optimal policy. However, this problem is alleviated in our setting.
We show that, under our maintained assumptions, the optimal policy is concave and asymptotically linear with respect to the wealth level. Hence, at large levels of wealth, the optimal consumption rule can be well approximated via linear extrapolation. We provide several important applications. First, we illustrate how our theory can be applied to modeling capital income risk in different scenarios. Then, we provide a numerical example in which we explore the quantitative effect of stochastic volatility and mean persistence of the wealth return process on wealth inequality. In the calibrated economy, our quantitative analysis shows that both factors lead to lower tail exponents of the stationary wealth distribution and higher Gini coefficients, and thus a higher level of wealth inequality. In terms of connections to the existing literature, the most closely related results are those found in the recent paper \cite{benhabib2015wealth}. Like us, the authors study capital income risk in an income fluctuation framework. On the one hand, their paper proves an important theoretical result---the stationary wealth distribution has a fat tail, a topic not treated by the present paper (we study tail properties only numerically). On the other hand, our theory of optimality and stochastic stability is considerably sharper and covers a much broader range of applications. Specifically, to avoid technical complication, \cite{benhabib2015wealth} assume that $\{R_t\}$ and $\{ Y_t\}$ are {\sc iid}, mutually independent, supported on bounded closed intervals with strictly positive lower bounds, and that their distributions are represented by densities. Albeit helpful for simplifying analysis and deriving tail properties, these assumptions rule out important features observed in the real economy (e.g., mean persistence and stochastic volatility in the empirical labor earnings and wealth return processes, as discussed). Moreover, the strictly positive lower bound for $\{ Y_t\}$ prevents agents from borrowing up to the highest sustainable level of debt, hiding substantial model dynamics.\footnote{As discussed in \cite{rabault2002borrowing}, in this case, agents are guaranteed a strictly positive minimum level of consumption, so the marginal value of consumption is bounded, and the problem can be easily solved by constructing supremum norm contractions. However, relaxing this assumption allows agents to systematically avoid exhausting their borrowing capacity. } As described above, all these assumptions are relaxed in our framework. Regarding earlier literature, specific types of capital income risk are modeled by \cite{quadrini2000entrepreneurship}, \cite{angeletos2005incomplete}, \cite{cagetti2006entrepreneurship} and \cite{angeletos2007uninsured} in general equilibrium frameworks. In comparison, the present paper focuses on constructing a ``more general'' one-sector framework and deriving sharper theoretical results, which, of course, could potentially benefit ``more general'' general equilibrium analysis. Moreover, since we tackle unbounded rewards and the associated technical complications, our paper is also related to \cite{rabault2002borrowing}, \cite{carroll2004theoretical}, \cite{kuhn2013recursive} and \cite{li2014solving}. These works develop different methods to handle the issue of unboundedness in standard income fluctuation problems (ones without capital income risk).
While \cite{carroll2004theoretical} constructs a weighted supremum norm contraction and works with the Bellman operator, the other three works focus on the Coleman operator. In particular, \cite{rabault2002borrowing} exploits the monotonicity structure, \cite{kuhn2013recursive} applies a version of Tarski's fixed point theorem, while \cite{li2014solving} constructs a contraction mapping based on a metric that evaluates consumption differences in marginal values. As discussed above, the present paper draws on and extends \cite{li2014solving} by incorporating capital income risk. The rest of this paper is structured as follows. Section \ref{s:setup} formulates the problem. Section \ref{s:opt_results} establishes optimality results. Sufficient conditions for the existence and uniqueness of optimal policies are discussed. Section \ref{s:sto_stability} focuses on stochastic stability. Global stability and some further properties are studied. Section \ref{s:app} provides a set of applications. All proofs are deferred to the appendix. \section{Set up} \label{s:setup} This section sets up the income fluctuation problem to be studied. As a first step, we introduce some mathematical techniques and notation used in this paper. \subsection{Preliminaries} \label{ss:prelm} Let $\mathbbm N$, $\mathbbm R$ and $\mathbbm R_+$ be the natural, real and nonnegative real numbers respectively. Given a topological space $\SS$, let $\mathscr B(\SS)$ be the Borel $\sigma$-algebra and let $\mathscr P(\SS)$ be the set of probability measures on $\mathscr B(\SS)$. A \emph{stochastic kernel} $Q$ on $\SS$ is a map $Q \colon \SS \times \mathscr B(\SS) \to [0, 1]$ such that \begin{itemize} \item $x \mapsto Q(x, B)$ is $\mathscr B(\SS)$-measurable for each $B \in \mathscr B(\SS)$ and \item $B \mapsto Q(x, B)$ is a probability measure on $\mathscr B(\SS)$ for each $x \in \SS$. \end{itemize} Let $bc \SS$ be the set of bounded continuous functions on $\SS$. A stochastic kernel $Q$ is called \emph{Feller} if $x \mapsto \int h(y) Q(x, \diff y)$ is in $bc\SS$ whenever $h \in bc\SS$. For all $t \in \mathbbm N$, we define the \emph{$t$-th order kernel} as \begin{equation*} Q^1 := Q, \quad Q^t (x, B) := \int Q^{t-1}(y, B) Q (x, \diff y) \quad (x \in \SS, B \in \mathscr B(\SS)). \end{equation*} The value $Q^t (x,B)$ represents the probability of transitioning from $x$ to $B$ in $t$ steps. Furthermore, for all $\mu \in \mathscr P(\SS)$, we define $\mu Q^t \in \mathscr P(\SS)$ as \begin{equation*} (\mu Q^t)(B) := \int Q^t (x, B) \mu(\diff x) \qquad (B \in \mathscr B(\SS)). \end{equation*} A sequence $\{ \mu_n \} \subset \mathscr P(\SS)$ is called \emph{tight} if, for all $\epsilon>0$, there exists a compact $K \subset \SS$ such that $\mu_n(\SS \backslash K) \leq \epsilon$ for all $n$. We say that \emph{$\mu_n$ converges to $\mu$ weakly} and write $\mu_n \stackrel{w}{\rightarrow} \mu$ if $\mu \in \mathscr P(\SS)$ and $\int h \diff \mu_n \rightarrow \int h \diff \mu$ for all bounded continuous $h \colon \SS \rightarrow \mathbbm R$. A stochastic kernel $Q$ is called \emph{bounded in probability} if the sequence $\{ Q^t(x, \cdot)\}_{t \geq 0}$ is tight for all $x \in \SS$. We call $\psi \in \mathscr P(\SS)$ \emph{stationary} for $Q$ if $\psi Q = \psi$. We say that $Q$ is \emph{globally stable} if there exists a unique stationary distribution $\psi$ in $\mathscr P(\SS)$ and $\psi_0 Q^t \stackrel{w}{\to} \psi$ for all $\psi_0 \in \mathscr P(\SS)$.
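When $\SS$ is finite, these objects reduce to familiar matrix operations: $Q$ is a transition matrix, the $t$-th order kernel is the matrix power $Q^t$, $\mu Q^t$ is a row-vector/matrix product, and a stationary $\psi$ is a left eigenvector of $Q$ associated with the unit eigenvalue. The following minimal sketch (in Python, with a hypothetical three-state kernel chosen purely for illustration) makes these definitions concrete.

\begin{verbatim}
# Sketch: t-th order kernels and stationarity on a finite state space.
# The kernel Q below is hypothetical and serves only as an illustration.
import numpy as np

Q = np.array([[0.6, 0.3, 0.1],   # Q[i, j] = probability of moving from
              [0.2, 0.5, 0.3],   # state i to state j in one step
              [0.1, 0.4, 0.5]])

def kernel_power(Q, t):
    """t-th order kernel Q^t: i -> j transition probabilities in t steps."""
    return np.linalg.matrix_power(Q, t)

mu0 = np.array([1.0, 0.0, 0.0])  # initial distribution (a point mass)
for t in (1, 5, 50):             # the distributional path mu0 Q^t
    print(t, mu0 @ kernel_power(Q, t))

# A stationary psi solves psi Q = psi: a left eigenvector of Q with
# eigenvalue 1, normalized to sum to one.
vals, vecs = np.linalg.eig(Q.T)
psi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
psi /= psi.sum()
print("stationary:", psi)        # mu0 Q^t approaches psi as t grows
\end{verbatim}

Global stability in this finite setting amounts to $\mu_0 Q^t$ approaching the same $\psi$ regardless of the initial $\mu_0$.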
Let $K$ be a bounded linear operator from $bc \SS$ to itself and $\| \cdot \|$ be the supremum norm on $bc\SS$. The \textit{operator norm} and \textit{spectral radius} of $K$ are defined by \begin{equation*} \| K \| := \sup \{ \|Kg\|: g \in bc\SS, \; \|g \| \leq 1 \} \quad \text{and} \quad r(K) := \lim_{m \to \infty} \|K^m \|^{1 / m}. \end{equation*} In particular, when $\SS$ is finite, $K$ becomes a square matrix, and the spectral radius $r(K)$ reduces to $\max_\lambda |\lambda|$, where $\lambda$ ranges over the set of eigenvalues of $K$. (See, e.g., page~663 of \cite{guide2006infinite}). In what follows, $(\Omega, \mathscr F, \mathbbm P)$ is a fixed probability space on which all random variables are defined, while $\mathbbm E \,$ denotes expectation with respect to $\mathbbm P$. \subsection{The income fluctuation problem} We introduce capital income risk and consider a generalized income fluctuation problem as follows: \begin{align} \label{eq:trans_at} & \max \, \mathbbm E \, \left\{ \sum_{t \geq 0} \beta^t u(c_t) \right\} \nonumber \\ \text{s.t.} \quad & a_{t+1} = R_{t+1} (a_t - c_t) + Y_{t+1}, \\ & 0 \leq \; c_t \leq a_t, \quad (a_0, z_0)=(a,z) \text{ given}, \nonumber \end{align} where $\beta \in [0,1)$ is a state-independent discount factor, $u$ is the utility function, the control process $\{ c_t\}_{t \geq 0}$ is consumption, $\{R_{t}\}_{t \geq 1}$ is a gross rate of return on wealth and $\{Y_{t} \}_{t \geq 1}$ is labor income. The return and income processes obey \begin{align} \label{eq:RY_func} R_{t} &= R \left( z_{t}, \zeta_{t} \right), \quad \left\{ \zeta_{t} \right\}_{t \geq 1} \stackrel {\textrm{ {\sc iid }}} {\sim} \nu, \nonumber \\ Y_{t} &= Y \left( z_{t}, \eta_{t} \right), \quad \left\{ \eta_{t} \right\}_{t \geq 1} \stackrel {\textrm{ {\sc iid }}} {\sim} \mu, \end{align} where $R$ and $Y$ are nonnegative real-valued measurable functions, $\{ \zeta_t\}$ and $\{ \eta_t\}$ are innovations, and $\{z_t\}_{t \geq 0}$ is a time-homogeneous $\mathsf Z$-valued Markov process with Feller stochastic kernel $P$, where $\mathsf Z$ is a Borel subset of $\mathbbm R^m$ paired with the usual relative topology. Throughout we make the following assumption on the agent's utility. \begin{assumption} \label{a:utility} The utility function $u \colon \mathbbm R_+ \rightarrow \{ - \infty \} \cup \mathbbm R$ is twice differentiable on $(0, \infty)$ and satisfies % \begin{enumerate} \item $u' > 0$ and $u'' < 0$ everywhere on $(0, \infty)$, and \item $u'(c) \rightarrow \infty$ as $c \rightarrow 0$ and $u'(c) \rightarrow 0$ as $c \rightarrow \infty$. \end{enumerate} % \end{assumption} \begin{example} A typical example that meets assumption \ref{a:utility} is the CRRA utility \begin{equation} \label{eq:crra_utils} u(c) = c^{1 - \gamma} / (1 - \gamma) \quad \text{if } \gamma > 0, \, \gamma \neq 1 \quad \text{and} \quad u(c) = \log c \quad \text{if } \gamma = 1, \end{equation} where $\gamma > 0$ is the coefficient of relative risk aversion. \end{example} \subsection{Further notation} \label{ss:nota} We use $x$ and $\hat{x}$ to denote respectively the current and next-period values of a random variable. In addition, \begin{equation} \label{eq:nota1} \mathbbm E \,_{a,z} := \mathbbm E \, \left[ \,\cdot \, \big| \, (a_0,z_0) = (a, z) \right] \quad \text{and} \quad \mathbbm E \,_z := \mathbbm E \, \left[ \, \cdot \, \big| \, z_0 = z \right].
\end{equation} In particular, for any integrable function $f$, \begin{equation} \label{eq:nota2} \mathbbm E \,_z \, f (\hat{z}, \hat{R}, \hat{Y}) = \int f \left[ \hat{z}, R ( \hat{z}, \hat{\zeta} ), Y \left( \hat{z}, \hat{\eta} \right) \right] P(z, \diff \hat{z}) \nu(\diff \hat{\zeta}) \mu(\diff \hat{\eta}). \end{equation} \section{Optimality Results} \label{s:opt_results} In this section, we show that, with bounded or unbounded rewards, the Coleman operator adapted to the income fluctuation problem above is an $n$-step contraction mapping on a complete metric space of candidate policies, and that the unique fixed point is the optimal policy. To that end, we make the following assumptions. \begin{assumption} \label{a:ctra_coef} There exists $n \in \mathbbm N$ such that $\theta := \beta^n \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \cdots R_n < 1$. \end{assumption} \begin{assumption} \label{a:Y_sum} For all $z \in \mathsf Z$, we have $\sum_{t=1}^{\infty} \beta^t \mathbbm E \,_z Y_t < \infty$. \end{assumption} \begin{assumption} \label{a:bd_sup_ereuprm} $\sup_{z \in \mathsf Z} \mathbbm E \,_z \, \hat{R} < \infty$, \, $\sup_{z \in \mathsf Z} \mathbbm E \,_z u'(\hat{Y}) < \infty$ and $\sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} u' (\hat{Y}) < \infty$. \end{assumption} \begin{assumption} \label{a:conti_ereuprm} The functions $z \mapsto R(z, \zeta)$, $z \mapsto Y(z, \eta)$, $z \mapsto \mathbbm E \,_z \hat{R}$ and $z \mapsto \mathbbm E \,_z \hat{R} \, u'(\hat{Y})$ are continuous. \end{assumption} \begin{example} \label{ex:spec_rad_ctra} For any bounded continuous function $f$ on $\mathsf Z$, define % \begin{equation*} K f (z) := \mathbbm E \,_z \hat{R} f(\hat{z}), \quad z \in \mathsf Z. \end{equation*} % Then $K$ is a bounded linear operator by assumption \ref{a:bd_sup_ereuprm}. Let $r(K)$ be the spectral radius of $K$. Then assumption \ref{a:ctra_coef} holds if and only if $\beta r(K) < 1$. We prove this result in the appendix. \end{example} \begin{example} \label{ex:spec_rad_matrix} Let $\{z_t\}$ be a finite-state Markov chain on $\mathsf Z := \{ i_1, \cdots, i_N \}$ with transition matrix $\Pi$ (a ``discrete'' stochastic kernel). Let $\text{diag} (\cdot)$ denote the diagonal matrix generated by the elements in brackets, and, with slight abuse of notation, let % \begin{equation*} \mathbbm E \, R(z, \zeta) := \int R(z, \zeta) \nu (\diff \zeta) \quad \text{and} \quad D := \text{diag} \left( \mathbbm E \, R(i_1, \zeta), \cdots , \mathbbm E \, R(i_N, \zeta) \right). \end{equation*} % In this case, the operator $K$ in example \ref{ex:spec_rad_ctra} reduces to the matrix $K = \Pi D$. Therefore, assumption \ref{a:ctra_coef} holds if and only if $r(\Pi D) < 1 / \beta$. In particular, $r(\Pi D)$ equals the largest modulus of all the eigenvalues of $\Pi D$. \end{example} \begin{example} \label{ex:CS_suff} Based on H{\"o}lder's inequality, to show assumption \ref{a:bd_sup_ereuprm}, it suffices to find some $p, q \in [1, \infty]$ such that $1/p + 1/q = 1$ and % \begin{equation*} \sup_{z \in \mathsf Z} \mathbbm E \,_z \, \hat{R}^p < \infty \quad \text{and} \quad \sup_{z \in \mathsf Z} \mathbbm E \,_z u' ( \hat{Y} )^q < \infty. \end{equation*} % \end{example}
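As an aside, the finite-state check described in example \ref{ex:spec_rad_matrix} takes only a few lines in practice. The following sketch, in Python, uses hypothetical values for $\Pi$, the conditional mean returns and $\beta$; it illustrates the verification procedure and is not part of the formal argument.

\begin{verbatim}
# Sketch: verifying assumption (ctra_coef) via example (spec_rad_matrix).
# Pi, mean_R and beta are hypothetical placeholder values.
import numpy as np

Pi = np.array([[0.9, 0.1],       # transition matrix of the exogenous chain
               [0.2, 0.8]])
mean_R = np.array([1.01, 1.05])  # E R(i_k, zeta) in each exogenous state
beta = 0.96

D = np.diag(mean_R)              # D = diag(E R(i_1,.), ..., E R(i_N,.))
K = Pi @ D                       # the expected return operator as a matrix

r = np.max(np.abs(np.linalg.eigvals(K)))  # spectral radius r(Pi D)
print("assumption (ctra_coef) holds:", r < 1.0 / beta)
\end{verbatim}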
To establish the required results, we (temporarily) assume $a_0>0$ and set the asset space as $(0, \infty)$. The state space for the state process $\{(a_t, z_t) \}_{t \geq 0}$ is then\footnote{Note that the second condition of assumption \ref{a:utility} and assumption \ref{a:bd_sup_ereuprm} imply that $\mathbbm P \{ Y_t > 0 \}=1$ for all $t \geq 1$ (although $Y_t$ is allowed to be arbitrarily close to zero). Hence, $\mathbbm P \{ a_t > 0 \} = 1$ for all $t \geq 1$ by the law of motion \eqref{eq:trans_at}. It thus makes no difference to optimality to exclude zero from the asset space. Doing this simplifies analysis since $u$ and $u'$ are finite away from zero. It actually allows us to propose a useful metric and apply the contraction approach, as will be shown later.} \begin{equation*} \label{eq:S0} \SS_0:= (0, \infty) \times \mathsf Z \ni (a,z). \end{equation*} Consider the maximal asset path $\{ \tilde{a}_t \}$ defined by \begin{equation} \label{eq:max_path} \tilde{a}_{t+1} = R_{t+1} \, \tilde{a}_t + Y_{t+1} \quad \text{and} \quad (\tilde{a}_0, \tilde{z}_0) = (a,z) \; \text{given}. \end{equation} \begin{lemma} \label{lm:max_path} If assumptions \ref{a:ctra_coef}--\ref{a:Y_sum} hold, then $\sum_{t \geq 0} \beta^t \mathbbm E \,_{a,z} \, \tilde{a}_t$ is finite for all $(a,z) \in \SS_0$. \end{lemma} A \emph{feasible policy} is a Borel measurable function $c \colon \SS_0 \rightarrow \mathbbm R$ with $0 \leq c(a,z) \leq a$ for all $(a,z) \in \SS_0$. Given any feasible policy $c$ and initial condition $(a,z) \in \SS_0$, the \emph{asset path} generated by $(c, (a,z))$ is the sequence $\{ a_t\}_{t \geq 0}$ in \eqref{eq:trans_at} when $c_t = c (a_t, z_t)$ and $(a_0, z_0) = (a,z)$. The \emph{lifetime value} of any feasible policy $c$ is the function $V_c \colon \SS_0 \rightarrow \{ - \infty \} \cup \mathbbm R$ defined by \begin{equation*} V_c (a,z) = \mathbbm E \,_{a,z} \left\{ \sum_{t \geq 0} \beta^t u \left[ c (a_t, z_t) \right] \right\}, \end{equation*} where $\{ a_t\}$ is the asset path generated by $(c,(a,z))$. Notice that $V_c(a,z) < \infty$ for any feasible $c$ and any $(a,z) \in \SS_0$. This is because, by assumption \ref{a:utility}, there exists a constant $L$ such that $u(c) \leq c + L$, and hence \begin{equation*} V_c(a,z) \leq \mathbbm E \,_{a,z} \sum_{t \geq 0} \beta^t u(a_t) \leq \mathbbm E \,_{a,z} \sum_{t \geq 0} \beta^t u(\tilde{a}_t) \leq \sum_{t \geq 0} \beta^t \mathbbm E \,_{a,z} \, \tilde{a}_t + \frac{L}{1 - \beta}. \end{equation*} The last expression is finite by lemma \ref{lm:max_path}. A feasible policy $c^*$ is called \emph{optimal} if $V_c \leq V_{c^*}$ on $\SS_0$ for any feasible policy $c$. In the present setting, the finiteness of $V_c$ for each feasible policy, the strict concavity of $u$, and the convexity of the set of feasible policies from each $(a,z) \in \SS_0$ imply that for each given parameterization, at most one optimal policy exists. A feasible policy is said to satisfy the \textit{first order optimality conditions} if \begin{equation} \left( u'\circ c \right)(a,z) \geq \beta \, \mathbbm E \,_{z} \, \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - c(a,z) \right] + \hat{Y}, \, \hat{z} \right) \end{equation} for all $(a,z) \in \SS_0$, and equality holds when $c(a,z) < a$. Moreover, a feasible policy is said to satisfy the \textit{transversality condition} if, for all $(a, z) \in \SS_0$, \begin{equation} \label{eq:tvc} \lim_{t \rightarrow \infty} \beta^t \mathbbm E \,_{a,z} \left[ \left(u' \circ c \right)(a_t, z_t) \, a_t \right] = 0.
\end{equation} \begin{theorem} \label{t:opt_result} If assumptions \ref{a:utility} and \ref{a:ctra_coef}--\ref{a:Y_sum} hold, and $c$ is a feasible policy that satisfies both the first order optimality conditions and the transversality condition, then $c$ is an optimal policy. \end{theorem} When does an optimal policy exist, and how can we compute it? To answer these questions, following \cite{li2014solving}, we use a contraction argument, where the underlying function space is set to $\mathscr C$, the set of functions $c \colon \SS_0 \rightarrow \mathbbm R$ such that \begin{enumerate} \item $c$ is continuous, \item $c$ is increasing in the first argument, \item $0 < c(a,z) \leq a$ for all $(a,z) \in \SS_0$, and \item $\sup_{(a,z) \in \SS_0} \left| (u' \circ c)(a,z) - u'(a) \right| < \infty$. \end{enumerate} To compare two policies, we pair $\mathscr C$ with the distance \begin{equation} \label{eq:rho_metric} \rho(c,d) := \left\| u' \circ c - u' \circ d \right\| := \sup_{(a,z) \in \SS_0} \left| \left(u' \circ c \right)(a,z) - \left(u' \circ d \right)(a,z) \right| \end{equation} that evaluates the maximal difference in terms of marginal utility. Note that \begin{equation} \label{eq:bd_uprime} c \in \mathscr C \Longrightarrow \exists K \in \mathbbm R_+ \text{ s.t. } u'(a) \leq (u' \circ c)(a,z) \leq u'(a) + K, \, \forall (a,z) \in \SS_0. \end{equation} Moreover, while elements of $\mathscr C$ are not generally bounded, one can show that $\rho$ is a valid metric on $\mathscr C$. In particular, $\rho$ is finite on $\mathscr C$ since $\rho(c,d) \leq \left\| u' \circ c - u' \right\| + \left\| u' \circ d - u' \right\|$, and the last two terms are finite by the definition of $\mathscr C$. \begin{proposition} \label{pr:complete} $(\mathscr C, \rho)$ is a complete metric space. \end{proposition} \begin{proposition} \label{pr:suff_optpol} If assumptions \ref{a:utility} and \ref{a:ctra_coef}--\ref{a:bd_sup_ereuprm} hold, $c \in \mathscr C$, and, for all $(a,z) \in \SS_0$, \begin{equation} \label{eq:foc} \left( u' \circ c \right)(a,z) = \max \left\{ \beta \, \mathbbm E \,_z \, \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - c(a,z)\right] + \hat{Y}, \, \hat{z} \right), u'(a) \right\}, \end{equation} then $c$ satisfies both the first order optimality conditions and the transversality condition. In particular, $c$ is an optimal policy. \end{proposition} Inspired by proposition \ref{pr:suff_optpol}, we aim to characterize the optimal policy as the fixed point of the \emph{Coleman operator} $T$ defined as follows: for fixed $c \in \mathscr C$ and $(a,z) \in \SS_0$, the value of the image $Tc$ at $(a,z)$ is defined as the $\xi \in (0,a]$ that solves \begin{equation} \label{eq:T_opr} u'(\xi) = \psi_c(\xi, a, z), \end{equation} where $\psi_c$ is the function on \begin{equation} \label{eq:dom_T_opr} G := \left\{ (\xi, a, z) \in \mathbbm R_+ \times (0, \infty) \times \mathsf Z \colon 0 < \xi \leq a \right\} \end{equation} defined by \begin{equation} \label{eq:keypart_T_opr} \psi_c(\xi,a,z) := \max \left\{ \beta \mathbbm E \,_{z} \hat{R} (u' \circ c)[\hat{R}(a - \xi) + \hat{Y}, \, \hat{z}], \, u'(a) \right\}. \end{equation} The following propositions show that the Coleman operator $T$ is a well-defined map from the candidate space $(\mathscr C, \rho)$ into itself.
\begin{proposition} \label{pr:welldef_T} If assumptions \ref{a:utility} and \ref{a:ctra_coef}--\ref{a:bd_sup_ereuprm} hold, then for each $c \in \mathscr C$ and $(a,z) \in \SS_0$, there exists a unique $\xi \in (0,a]$ that solves \eqref{eq:T_opr}. \end{proposition} \begin{proposition} \label{pr:self_map} If assumptions \ref{a:utility} and \ref{a:ctra_coef}--\ref{a:conti_ereuprm} hold, then $Tc \in \mathscr C$ for all $c \in \mathscr C$. \end{proposition} Recall $n$ and $\theta$ defined in assumption \ref{a:ctra_coef}. We now provide our key optimality result. \begin{theorem} \label{t:ctra_T} If assumptions \ref{a:utility} and \ref{a:ctra_coef}--\ref{a:conti_ereuprm} hold, then $T^n$ is a contraction mapping on $(\mathscr C, \rho)$ with modulus $\theta$. In particular, \begin{enumerate} \item $T$ has a unique fixed point $c^* \in \mathscr C$. \item The fixed point $c^*$ is the unique optimal policy in $\mathscr C$. \item For all $c \in \mathscr C$ and $k \in \mathbbm N$, we have $\rho(T^{nk} c, c^*) \leq \theta^k \rho(c, c^*)$. \end{enumerate} \end{theorem} \section{Stochastic Stability} \label{s:sto_stability} This section focuses on stochastic stability of the generalized income fluctuation problem. We first provide sufficient conditions for the existence of a stationary distribution and then explore conditions for uniqueness and ergodicity. Now we add zero back into the asset space, and consider a larger state space for the state process $\{ (a_t, z_t)\}_{t \geq 0}$, denoted by \begin{equation*} \label{eq:S} \SS := [0, \infty) \times \mathsf Z \ni (a,z). \end{equation*} We extend $c^*$ to $\SS$ by setting $c^* (0,z) = 0 $ for all $z \in \mathsf Z$. Together, $c^*$ and the transition functions for $\{ a_t\}$, $\{ R_t\}$ and $\{ Y_t\}$ determine a Markov process with state vector $s_t := (a_t, z_t)$ taking values in the state space $\SS$. Let $Q$ denote the corresponding stochastic kernel. The law of motion of $\{ s_t\}$ is \begin{align} \label{eq:dyn_sys} a_{t+1} &= R \left( z_{t+1}, \zeta_{t+1} \right) \left[ a_t - c^* \left(a_t, z_t \right) \right] + Y \left( z_{t+1}, \eta_{t+1} \right), \nonumber \\ z_{t+1} &\sim P \left( z_t, \, \cdot \, \right). \end{align} \subsection{Existence of a stationary distribution} \label{ss:exist_stat} To obtain the existence of a stationary distribution, we make the following assumptions. \begin{assumption} \label{a:suff_bd_in_prob} There exists $\alpha \in (0,1)$ such that % \begin{enumerate} \item $\beta \, \mathbbm E \,_z \hat{R} \, u'[ \hat{R} \left(1-\alpha \right)a ] \leq u'(a)$ for all $(a,z) \in \SS_0$,\footnote{Here we adopt the convention that $0 \cdot \infty = 0$ so that assumption \ref{a:suff_bd_in_prob} does not rule out the case $\mathbbm P \{R_t =0 \mid z_{t-1} = z\} > 0$. Indeed, as shown in the proofs, all the conclusions of this paper still hold if we replace this condition by the weaker alternative: $\beta \, \mathbbm E \,_z \hat{R} \, u'[ \hat{R} \left(1-\alpha \right)a + \alpha \hat{Y}] \leq u'(a)$ for all $(a,z) \in \SS_0$, while maintaining the second part of assumption \ref{a:suff_bd_in_prob}.} and \item there exists $n \in \mathbbm N$ such that $(1 - \alpha)^n \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \cdots R_n < 1$. \end{enumerate} % \end{assumption} \begin{assumption} \label{a:bd_in_prob_Yt} $\sup_{t \geq 1} \mathbbm E \,_z \, Y_t < \infty$ for all $z \in \mathsf Z$. \end{assumption} \begin{assumption} \label{a:z_bdd_in_prob} The stochastic kernel $P$ is bounded in probability.
\end{assumption} \begin{example} \label{ex:homog} For homogeneous utility functions (e.g., CRRA), if the first condition of assumption \ref{a:suff_bd_in_prob} holds for some $a \in (0, \infty)$, then it must hold for all $a \in (0, \infty)$. To see this, let $k$ be the degree of homogeneity. Then we have % \begin{equation*} \beta \mathbbm E \,_z \hat{R} u'[\hat{R} (1 - \alpha) a] / u'(a) = \beta \mathbbm E \,_z \hat{R}^{1+k} (1 - \alpha)^k \quad \text{for all } a \in (0, \infty). \end{equation*} % The right hand side is constant in $a$. \end{example} \begin{example} \label{ex:bdd_in_prob_matrix} Recall example \ref{ex:spec_rad_matrix}, where $\{ z_t \}$ is a finite-state Markov chain. Consider the CRRA utility defined in \eqref{eq:crra_utils}. Define further the column vector \begin{equation*} V := \left( \mathbbm E \, R(i_1, \zeta)^{1 - \gamma}, \cdots, \mathbbm E \, R(i_N, \zeta)^{1 - \gamma} \right)'. \end{equation*} Then, assumption \ref{a:suff_bd_in_prob} holds whenever \begin{equation} \label{eq:bdd_in_prob_matrix} \max \{ r(\Pi D), 1 \} < \left( \beta \| \Pi V \| \right)^{-1 / \gamma}. \end{equation} To see this, the first condition of assumption \ref{a:suff_bd_in_prob} holds if there exists $\alpha \in (0,1)$ such that $(1 - \alpha)^{-\gamma} \beta \mathbbm E \,_z \hat{R}^{1 - \gamma} \leq 1$ for all $z \in \mathsf Z$. Since $\mathsf Z$ is finite, this is equivalent to the existence of an $\alpha \in (0,1)$ such that $(1 - \alpha)^{-\gamma} \beta \| \Pi V \| \leq 1$. Similar to example \ref{ex:spec_rad_matrix}, the second condition of assumption \ref{a:suff_bd_in_prob} holds if $r(\Pi D) < 1 / (1 - \alpha)$ for the same $\alpha$. Together, these requirements are equivalent to \eqref{eq:bdd_in_prob_matrix}. \end{example} \begin{example} \cite{benhabib2015wealth} consider the CRRA utility and assume that $\left\{ R_{t} \right\}$ and $\left\{ Y_{t} \right\}$ are {\sc iid}, mutually independent, supported on bounded closed intervals of strictly positive real numbers with their distributions represented by densities, and that $\beta \mathbbm E \, R_t^{1 - \gamma} < 1$ and $( \beta \mathbbm E \, R_t^{1 - \gamma} )^{\frac{1}{\gamma}} \mathbbm E \, R_t < 1$. Under these conditions, assumptions \ref{a:bd_in_prob_Yt}--\ref{a:z_bdd_in_prob} obviously hold. Assumption \ref{a:suff_bd_in_prob} is satisfied by letting $\alpha := 1 - ( \beta \mathbbm E \, R_t^{1 - \gamma} )^{\frac{1}{\gamma}}$ and $n := 1$. The first condition of assumption \ref{a:suff_bd_in_prob} holds since $\alpha \in (0,1)$ and % \begin{align*} \beta \mathbbm E \,_z \hat{R} \, u' [ \hat{R} (1 - \alpha) a ] \big/ u'(a) = (1 - \alpha)^{-\gamma} \beta \mathbbm E \, R_t^{1 - \gamma} = \left( \beta \mathbbm E \, R_t^{1 - \gamma} \right)^{-1} \beta \mathbbm E \, R_t^{1-\gamma} = 1, \end{align*} % while the second condition holds for $n=1$ since $(1 - \alpha) \mathbbm E \, R_t = ( \beta \mathbbm E \, R_t^{1 - \gamma} )^{\frac{1}{\gamma}} \mathbbm E \, R_t < 1.$ \end{example} Let $c^*$ be the unique optimal policy obtained from theorem \ref{t:ctra_T} and $\alpha$ be defined as in assumption \ref{a:suff_bd_in_prob}. The next proposition establishes a strictly positive lower bound on the optimal consumption rate. \begin{proposition} \label{pr:opt_pol_bd_frac} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm} and \ref{a:suff_bd_in_prob} hold, then $c^*(a,z) \geq \alpha a$ for all $(a,z) \in \SS$. \end{proposition} From this result the existence of a stationary distribution is not difficult to verify. 
\begin{theorem} \label{t:sta_exist} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm} and \ref{a:suff_bd_in_prob}--\ref{a:z_bdd_in_prob} hold, then $Q$ is bounded in probability and admits at least one stationary distribution. \end{theorem} \subsection{Further Optimality Properties} \label{ss:further_prop} Digressing slightly from our main topic, we show that the optimal policy satisfies several other important properties under the following assumption. \begin{assumption} \label{a:concave} The map $ s \mapsto (u')^{-1} \left[ \beta \mathbbm E \,_z \hat{R} \left( u' \circ c \right) (\hat{R} s + \hat{Y}, \, \hat{z} ) \right]$ is concave on $\mathbbm R_+$ for each fixed $z \in \mathsf Z$ and $c \in \mathscr C$ that is concave in its first argument. \end{assumption} \begin{example} \label{eg:concave} Assumption \ref{a:concave} imposes some concavity structure on the utility function. It holds for CRRA and logarithmic utilities, as shown in appendix B. \end{example} The next proposition implies that, with this added concavity structure, the optimal policy is concave and asymptotically linear with respect to the wealth level. \begin{proposition} \label{pr:optpol_concave} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm}, \ref{a:suff_bd_in_prob} and \ref{a:concave} hold, then \begin{enumerate} \item $a \mapsto c^*(a,z)$ is concave for all $z \in \mathsf Z$, and \item for all $z \in \mathsf Z$, there exists $\alpha' \in [\alpha,1)$ such that $\lim_{a \to \infty} [c^*(a,z) / a] = \alpha'$.\footnote{\label{fn:abar<inf} Here we rule out the trivial situation $\mathbbm P \{R_t = 0 \mid z_{t-1} = z \} = 1$, in which case $\alpha' = 1$.} \end{enumerate} % \end{proposition} By proposition \ref{pr:optpol_concave}, as $a$ gets large, $c^*(a, z) \approx \alpha' a + b(z)$ for some function $b$, which is helpful for numerical computation. In the presence of capital income risk, there can be large realized values of wealth and consumption. This proposition then provides a justification for the linear extrapolation technique adopted when computing the optimal policy at large wealth levels. \subsection{Global stability} \label{ss:glb_stb} We start with the case of an {\sc iid} $\{z_t \}$ process, which allows us to exploit the monotonicity structure of the stochastic kernel $Q$. We then discuss general Markov $\{ z_t\}$ processes. Since $Q$ is not generally monotone in these settings,\footnote{Since the optimal policy $c^*(a,z)$ is not generally monotone in $z$, we cannot conclude from \eqref{eq:dyn_sys} that $a_{t+1}$ is monotone in $z_t$. Hence, $(a_{t+1}, z_{t+1})$ is not necessarily increasing in $(a_t, z_t)$ and monotonicity might fail.} global stability is established via a different approach. \subsubsection{Case I: {\sc iid} $\{z_t\}_{t \geq 0}$ process} \label{ss:gs_iid} In this case, both $\{ R_t\}$ and $\{ Y_t\}$ are {\sc iid} processes, though dependence between $\{ R_t\}$ and $\{ Y_t\}$ is allowed. The optimal policy is then a function of assets only, and the transition function \eqref{eq:dyn_sys} reduces to \begin{equation} \label{eq:dyn_sys_iid} a_{t+1} = R_{t+1} \left[ a_t - c^*(a_t) \right] + Y_{t+1}. \end{equation} In particular, we have a Markov process $\{ a_t \}_{t \geq 0}$ taking values in $\mathbbm R_+$. The next result extends theorem~3 of \cite{benhabib2015wealth}.
\begin{theorem} \label{t:gs_iid} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm}, \ref{a:suff_bd_in_prob}--\ref{a:bd_in_prob_Yt} and \ref{a:concave} hold, then $Q$ is globally stable.\footnote{Since $\{ z_t\}$ is {\sc iid}, conditional expectations reduce to unconditional ones. Hence, to verify assumptions \ref{a:ctra_coef}--\ref{a:conti_ereuprm} and \ref{a:bd_in_prob_Yt}, it suffices to show: $\mathbbm E \, R_t^2 < \infty$, $\beta \mathbbm E \, R_t < 1$, $\mathbbm E \, Y_t < \infty$ and $\mathbbm E \, [u'(Y_t)]^2 < \infty$.} \end{theorem} Let $\psi^*$ be the unique stationary distribution of $Q$, obtained in theorem \ref{t:gs_iid}. Let $\mathscr L$ be the linear span of the set of increasing $\psi^*$-integrable functions $h \colon \mathbbm R_+ \to \mathbbm R$.\footnote{In other words, $\mathscr L$ is the set of all $h \colon \mathbbm R_+ \to \mathbbm R$ such that $h = \alpha_1 h_1 + \cdots + \alpha_k h_k$ for some scalars $\{ \alpha_i\}_{i=1}^k$ and increasing measurable $\{h_i \}_{i=1}^k$ with $\int |h_i| \diff \psi^* < \infty$.} Recall that $bc \mathbbm R_+$ is the set of continuous bounded functions $h \colon \mathbbm R_+ \to \mathbbm R$. The following theorem shows that the Law of Large Numbers holds in this framework. \begin{theorem} \label{t:LLN_iid} If the assumptions of theorem \ref{t:gs_iid} hold, then the following statements hold: \begin{enumerate} \item For all $\mu \in \mathscr P(\mathbbm R_+)$ and $h \in \mathscr L$, we have \begin{equation*} \mathbbm P_{\mu} \left\{ \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T h(a_t) = \int h \diff \psi^* \right\} = 1. \end{equation*} \item For all $\mu \in \mathscr P(\mathbbm R_+)$, we have \begin{equation*} \mathbbm P_{\mu} \left\{ \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T h(a_t) = \int h \diff \psi^* \; \text{ for all } h \in bc \mathbbm R_+ \right\} = 1. \end{equation*} \end{enumerate} \end{theorem} \subsubsection{Case II: Markovian $\{ z_t\}_{t \geq 0}$ process} \label{ss:gs_general} In this case, $\{ R_t\}$ and $\{ Y_t\}$ are in general non-{\sc iid} and mutually dependent processes.\footnote{Since this framework incorporates the {\sc iid} $\{z_t\}$ structure as a special case, this section provides an alternative ergodic theory for the {\sc iid} framework as a byproduct. By comparing the assumptions of theorem \ref{t:gs_iid} and those of theorem \ref{t:gs_gnl_ergo_LLN} below, we see that the latter holds without assumption \ref{a:concave}, so neither of the two theories is more powerful than the other. } We assume that the stochastic processes $\{z_t\}$ and $\{Y_t\}$ admit density representations denoted respectively by $p \left(z' \mid z \right)$ and $f_L \left( Y \mid z \right)$. Specifically, there exists a nontrivial measure $\vartheta$ on $\mathscr B(\mathsf Z)$ such that \begin{equation*} P(z, A) = \int_A p(z'\mid z) \vartheta (\diff z'), \qquad \left( A \in \mathscr B(\mathsf Z), z \in \mathsf Z \right), \end{equation*} and for $\diff Y := \lambda (\diff Y)$, where $\lambda$ is the Lebesgue measure, \begin{equation*} \mathbbm P \{ Y_t \in A \mid z_t = z \} = \int_A f_L (Y \mid z) \diff Y, \qquad \left( A \in \mathscr B(\mathbbm R_+), z \in \mathsf Z \right).
\end{equation*} \begin{assumption} \label{a:pos_dens} The following conditions hold: % \begin{enumerate} \item the support of $\vartheta$ contains a compact subset $\mathsf C$ that has nonempty interior,\footnote{The \textit{support} of the measure $\vartheta$ is defined as the set of points $z \in \mathsf Z$ for which every open neighborhood of $z$ has positive $\vartheta$ measure.} \item $p \left( z' \mid z \right)$ is strictly positive on $\mathsf C \times \mathsf Z$ and continuous in $z$, and \item there exists $\delta_Y > 0$ such that $f_L \left( Y \mid z \right)$ is strictly positive on $(0, \delta_Y) \times \mathsf C$. \end{enumerate} % \end{assumption} Assumption \ref{a:pos_dens} is easy to verify in applications. The following examples are some simple illustrations, while more complicated applications are treated in section \ref{s:app}. \begin{example} \label{ex:suff_pos_dens_ctb} If $\mathsf Z$ is a countable subset of $\mathbbm R^m$, then $\{ z_t \}$ is a countable state Markov chain, in which case $\vartheta$ is the counting measure and $p(z'\mid z)$ reduces to a transition matrix $\Pi$. In particular, each single point in $\mathsf Z$ is a compact subset in the support of $\vartheta$ that has nonempty interior (itself), and $p$ is continuous in $z$ by definition. Hence, conditions~(1)--(2) of assumption \ref{a:pos_dens} hold as long as at least one column of $\Pi$ is strictly positive (i.e., each element of that column is positive). \end{example} \begin{example} \label{ex:suff_pos_dens_Leb} Since $\mathsf Z$ is a Borel subset of $\mathbbm R^m$, if $\vartheta$ can be chosen as the Lebesgue measure, then condition~(1) of assumption \ref{a:pos_dens} holds trivially. Indeed, since $P(z, \mathsf Z) =1$, the support of $\vartheta$ must contain a nonempty open box (i.e., sets of the form $\Pi_{i=1}^m (a_i, b_i)$ with $a_i < b_i$, $i = 1, \cdots, m$), inside which a compact subset with nonempty interior can be found. \end{example} For any measurable map $f \colon \SS \to [1, \infty)$ and any $\mu \in \mathscr P(\SS)$, we define \begin{equation*} \left\| \mu \right\|_f := \sup_{g: |g| \leq f} \left| \int g \diff \mu \right|. \end{equation*} We say that the stochastic kernel $Q$ corresponding to $\{ (a_t,z_t)\}_{t \geq 0}$ is \textit{$f$-ergodic} if \begin{enumerate} \item[(a)] there exists a unique stationary distribution $\psi^* \in \mathscr P(\SS)$ such that $\psi^* Q = \psi^*$, \item[(b)] $f \geq 1$, $\int f \diff \psi^* < \infty$, and, for all $(a,z) \in \SS$, % \begin{equation*} \lim_{t \to \infty} \| Q^t \left( (a,z), \cdot \right) - \psi^* \|_f = 0. \end{equation*} % \end{enumerate} We say that $Q$ is \emph{$f$-geometrically ergodic} if, in addition, there exist constants $r > 1$ and $M \in \mathbbm R_+$ such that, for all $(a,z) \in \SS$, \begin{equation*} \sum_{t \geq 0} r^t \left\| Q^t ((a,z), \cdot) - \psi^* \right\|_f \leq M f(a,z). \end{equation*} In particular, if $f \equiv 1$, then $Q$ is called \textit{ergodic}/\textit{geometrically ergodic}. The following theorem establishes ergodicity and the Law of Large Numbers. Notably, assumption \ref{a:concave} is {\sc not} required for these results.
\begin{theorem} \label{t:gs_gnl_ergo_LLN} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm}, \ref{a:suff_bd_in_prob}--\ref{a:z_bdd_in_prob} and \ref{a:pos_dens} hold, then % \begin{enumerate} \item $Q$ is ergodic, in particular, % \begin{equation*} \sup_{A \in \mathscr B(\SS)} \left|Q^t \left( (a,z), A \right) - \psi^* (A) \right| \to 0 \quad \text{as } \, t \to \infty. \end{equation*} % \item For all $\mu \in \mathscr P(\SS)$ and any map $h \colon \SS \to \mathbbm R$ with $\int |h| \diff \psi^* < \infty$, % \begin{equation*} \mathbbm P_{\mu} \left\{ \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T h(a_t, z_t) = \int h \diff \psi^* \right\} = 1. \end{equation*} \end{enumerate} \end{theorem} We next show that geometric ergodicity is guaranteed under some further assumptions. Suppose $\{R_t\}$ admits a density representation $f_C(R \mid z)$, in other words, \begin{equation*} \mathbbm P \{ R_t \in A \mid z_t = z \} = \int_A f_C (R \mid z) \diff R, \qquad \left( A \in \mathscr B (\mathbbm R), \; z \in \mathsf Z \right), \end{equation*} where $\diff R := \lambda (\diff R)$. Recall the {\sc iid} innovations $\{ \zeta_t\}$ and $\{ \eta_t \}$ defined by \eqref{eq:RY_func} and the compact subset $\mathsf C \subset \mathsf Z$ defined by assumption \ref{a:pos_dens}. \begin{assumption} \label{a:geo_drift_Yt} The following conditions hold: % \begin{enumerate} \item there exists $\delta_R > 0$ such that $f_C (R \mid z)$ is strictly positive on $(0, \delta_R) \times \mathsf C$, \item there exist $q \in [0,1)$ and $q' \in \mathbbm R_+$ such that $\mathbbm E \,_z Y_2 \leq q \mathbbm E \,_z Y_{1} + q'$ for all $z \in \mathsf Z$, \item the innovations $\{ \zeta_t \}$ and $\{ \eta_t\}$ are mutually independent. \end{enumerate} % \end{assumption} \begin{example} \label{ex:suff_geodrift_Y} If either $\{Y_t\}$ is a bounded process or $\mathsf Z$ is a finite set, then the second condition of assumption \ref{a:geo_drift_Yt} holds trivially. In particular, if $\mathsf Z$ is finite, then we can let $q$ be an arbitrary number in $[0,1)$ and let $q':= \sup_{z \in \mathsf Z} \mathbbm E \,_z Y_2$, which is finite by assumption \ref{a:Y_sum}. More general examples are discussed in the next section. \end{example} Let the measurable map $V \colon \SS \to [1, \infty)$ be defined by \begin{equation} \label{eq:V_func} V(a,z) := a + m \, \mathbbm E \,_z \hat{Y} + 1, \end{equation} where $m$ is a sufficiently large constant defined in the proof of theorem \ref{t:gs_gnl} below. \begin{theorem} \label{t:gs_gnl} If assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm} and \ref{a:suff_bd_in_prob}--\ref{a:geo_drift_Yt} hold, then $Q$ is $V$-geometrically ergodic. \end{theorem} \section{Applications} \label{s:app} We now turn to several substantial applications of the theory described above. We first illustrate how our theory can be applied to modeling capital income risk in different situations. We then provide a numerical example and study the quantitative effect of stochastic volatility and mean persistence of the wealth return process on wealth inequality. Throughout this section, we work with the CRRA utility function defined by \eqref{eq:crra_utils}. Recall that $\gamma > 0$ is the coefficient of relative risk aversion.
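Before turning to the specific models, we briefly note how the ergodicity results are used computationally. By theorems \ref{t:LLN_iid} and \ref{t:gs_gnl_ergo_LLN}, integrals under $\psi^*$ can be approximated by time averages along a single simulated path. The following minimal sketch illustrates this in the {\sc iid} case \eqref{eq:dyn_sys_iid}; the lognormal return and income laws and the linear consumption rule are hypothetical stand-ins for a computed $c^*$ (which, by proposition \ref{pr:optpol_concave}, is concave and only asymptotically linear).

\begin{verbatim}
# Sketch: approximating the stationary wealth distribution by simulating
# a single path of a' = R [a - c(a)] + Y (the iid case).
# The return/income laws and the consumption rule are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                      # consumption rate; E[R](1 - alpha) < 1
c = lambda a: alpha * a          # hypothetical stand-in for c*

T, burn = 200_000, 10_000
a, draws = 1.0, np.empty(T)
for t in range(T):
    R = rng.lognormal(-0.02, 0.10)   # hypothetical iid gross return
    Y = rng.lognormal(0.00, 0.20)    # hypothetical iid labor income
    a = R * (a - c(a)) + Y
    draws[t] = a

sample = draws[burn:]            # discard the burn-in period
# By the LLN, time averages approximate integrals under psi*:
print("mean wealth under psi* (approx.):", sample.mean())
print("P{a > 20} under psi* (approx.)  :", (sample > 20).mean())
\end{verbatim}

In practice, $c^*$ would first be computed by iterating the Coleman operator of section \ref{s:opt_results}, with linear extrapolation at large wealth levels as justified by proposition \ref{pr:optpol_concave}.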
\subsection{Modeling Capital Income Risk} \label{ss:app_cir_modeling} Suppose the income process contains both persistent and transient components (see, e.g., \cite{blundell2008consumption}, \cite{browning2010modelling}, \cite{heathcote2010macroeconomic}, \cite{kaplan2010much}, \cite{kaplan2012inequality}, \cite{debacker2013rising}, and \cite{carroll2017distribution}). In particular, we consider \begin{equation*} \label{eq:app_Yt} \log Y_{t} = \chi_{t} + \eta_{t}, \end{equation*} where the persistent component $\{\chi_t \}_{t \geq 0}$ is a finite-state Markov chain with transition matrix $\Pi_\chi$, and the transient component $\left\{ \eta_{t} \right\}_{t \geq 1}$ is an {\sc iid} sequence with $\mathbbm E \, \mathrm{e}^{\eta_{t}} < \infty$ and $\mathbbm E \, \mathrm{e}^{-2 \gamma \eta_{t}} < \infty$. Moreover, $\{ \chi_t\}$ and $\{ \eta_t\}$ are mutually independent. As a natural extension of the {\sc iid} financial return process assumed by \cite{benhabib2015wealth}, we consider $\{ R_{t}\}_{t \geq 1}$ taking the form \begin{equation*} \label{eq:app_Rt} \log R_{t} = \mu_t + \sigma_t \zeta_{t}, \end{equation*} where $\{ \zeta_t\}_{t \geq 1} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0,1)$, $\{\mu_t\}_{t \geq 0}$ and $\{\sigma_t\}_{t \geq 0}$ are respectively finite-state Markov chains with transition matrices $\Pi_\mu$ and $\Pi_\sigma$, $\{\sigma_t \}$ is positive, and $\{\mu_t \}$, $\{\sigma_t \}$ and $\{\zeta_t \}$ are mutually independent.\footnote{Note that $\{Y_t\}$ and $\{R_t \}$ are allowed to be dependent on each other since, for example, we allow $\{ \chi_t \}$ and $\{\mu_t\}$ to be mutually dependent, as we do for $\{\eta_t \}$ and $\{ \sigma_t \}$, etc. } Such a setup allows us to capture both mean persistence and stochastic volatility. The state spaces of $\{\chi_t \}, \{ \mu_t \}$ and $\{ \sigma_t \}$ are respectively (sorted in increasing order) \begin{equation*} \mathsf Z_\chi := \{ \ell_1, \cdots, \ell_K \}, \quad \mathsf Z_{\mu} := \{i_1, \cdots, i_M \} \quad \text{and} \quad \mathsf Z_{\sigma} := \{j_1, \cdots, j_N \}. \end{equation*} Let $\text{diag }(\cdot)$ be the diagonal matrix generated by the elements in brackets, and let \begin{equation*} D_{\mu} := \text{diag} \left( \mathrm{e}^{i_1}, \cdots, \mathrm{e}^{i_M} \right) \quad \text{and} \quad D_\sigma := \text{diag} \left( \mathrm{e}^{j_1^2 / 2}, \cdots, \mathrm{e}^{j_N^2 / 2} \right). \end{equation*} Furthermore, we define the column vectors \begin{equation*} V_{\mu} := \left( \mathrm{e}^{(1-\gamma) i_1}, \cdots, \mathrm{e}^{(1 - \gamma) i_M} \right)' \quad \text{and} \quad V_{\sigma} := \left( \mathrm{e}^{(1 - \gamma)^2 j_1^2 / 2}, \cdots, \mathrm{e}^{(1 - \gamma)^2 j_N^2 / 2} \right)'. \end{equation*} For any square matrix $A$, let $r(A)$ be its spectral radius. We assume that \begin{equation} \label{eq:app1_srad_bet} r(\Pi_{\mu} D_\mu) \cdot r(\Pi_\sigma D_\sigma) < 1 / \beta \quad \; \text{and} \end{equation} \begin{equation} \label{eq:app1_patience} \max \left\{ r(\Pi_\mu D_\mu) \cdot r(\Pi_\sigma D_\sigma) , \; 1 \right\} < \left( \beta \| \Pi_{\mu} V_{\mu} \| \cdot \| \Pi_{\sigma} V_{\sigma} \| \right)^{-1 / \gamma}. \end{equation} This problem can be placed in our framework by setting \begin{equation*} z_t := \left( \chi_t, \mu_t, \sigma_t \right) \quad \text{and} \quad \mathsf Z := \mathsf Z_{\chi} \times \mathsf Z_{\mu} \times \mathsf Z_{\sigma}. \end{equation*} To simplify notation, we denote $z := z_0$ and $(\chi, \mu, \sigma) := (\chi_0, \mu_0, \sigma_0)$.
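Conditions \eqref{eq:app1_srad_bet}--\eqref{eq:app1_patience} involve only elementary matrix computations and are straightforward to check numerically. The following sketch does so for hypothetical two-state chains governing $\{\mu_t\}$ and $\{\sigma_t\}$, taking $\| \cdot \|$ to be the supremum norm, as in example \ref{ex:bdd_in_prob_matrix}.

\begin{verbatim}
# Sketch: checking conditions (app1_srad_bet) and (app1_patience).
# All parameter values below are hypothetical placeholders.
import numpy as np

beta, gamma = 0.94, 2.0
Pi_mu = np.array([[0.9, 0.1], [0.1, 0.9]])   # chain for mu_t
mu_grid = np.array([-0.02, 0.03])
Pi_sig = np.array([[0.8, 0.2], [0.3, 0.7]])  # chain for sigma_t
sig_grid = np.array([0.05, 0.10])

def srad(A):                     # spectral radius of a square matrix
    return np.max(np.abs(np.linalg.eigvals(A)))

D_mu  = np.diag(np.exp(mu_grid))             # diag(e^{i_1}, ..., e^{i_M})
D_sig = np.diag(np.exp(sig_grid**2 / 2))     # diag(e^{j_1^2/2}, ...)
V_mu  = np.exp((1 - gamma) * mu_grid)        # (e^{(1-gamma) i_k})
V_sig = np.exp((1 - gamma)**2 * sig_grid**2 / 2)

r = srad(Pi_mu @ D_mu) * srad(Pi_sig @ D_sig)
patience = (beta * np.max(Pi_mu @ V_mu) * np.max(Pi_sig @ V_sig)) ** (-1 / gamma)

print("(app1_srad_bet):", r < 1 / beta)
print("(app1_patience):", max(r, 1.0) < patience)
\end{verbatim}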
\subsubsection{Optimality Results} Since $\{ \zeta_t \} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0,1)$, by the Fubini theorem, \begin{equation*} \beta^n \mathbbm E \,_z R_1 \cdots R_n = \beta^n \mathbbm E \,_z \mathrm{e}^{\mu_1 + \sigma_1 \zeta_1} \cdots \mathrm{e}^{\mu_n + \sigma_n \zeta_n} = \beta^n (\mathbbm E \,_\mu \mathrm{e}^{\mu_1} \cdots \mathrm{e}^{\mu_n} ) (\mathbbm E \,_\sigma \mathrm{e}^{\sigma_1^2 / 2} \cdots \mathrm{e}^{\sigma_n^2 / 2}). \end{equation*} For all bounded functions $f$ on $\mathsf Z_\mu$ and $h$ on $\mathsf Z_\sigma$, we define \begin{equation*} K_1 f(\mu) := \mathbbm E \,_\mu \mathrm{e}^{\mu_1} f(\mu_1) \quad \text{and} \quad K_2 h(\sigma) := \mathbbm E \,_\sigma \mathrm{e}^{\sigma_1^2 / 2} h(\sigma_1). \end{equation*} Similar to example \ref{ex:spec_rad_ctra}, $\beta^n \sup_{z} \mathbbm E \,_z R_1 \cdots R_n < 1$ for some $n \in \mathbbm N$ if and only if $\beta r(K_1) r(K_2) < 1$.\footnote{As in example \ref{ex:spec_rad_ctra}, we have $\| K_1^n \| = \sup_\mu \mathbbm E \,_\mu \mathrm{e}^{\mu_1} \cdots \mathrm{e}^{\mu_n}$ and $\| K_2^n \| = \sup_\sigma \mathbbm E \,_\sigma \mathrm{e}^{\sigma_1^2 / 2} \cdots \mathrm{e}^{\sigma_n^2 / 2}$. Then $\beta r(K_1) r(K_2) < 1$ iff $\beta \|K_1^n \|^{1/n} \|K_2^n \|^{1/n} < 1$ for some $n \in \mathbbm N$ iff $\beta^n \|K_1^n \| \|K_2^n \| < 1$ for some $n \in \mathbbm N$ iff $\sup_z \beta^n \mathbbm E \,_z R_1 \cdots R_n < 1$ for some $n \in \mathbbm N$.} The latter obviously holds since \eqref{eq:app1_srad_bet} holds, and, similar to example \ref{ex:spec_rad_matrix}, $r(K_1) = r(\Pi_\mu D_\mu)$ and $r(K_2) = r(\Pi_\sigma D_\sigma)$. Assumption \ref{a:ctra_coef} is verified. Using the fact that $\mathsf Z$ is a finite space, we have \begin{equation} \label{eq:app1_bd_chi} \sup_{t \geq 0} \sup_z \mathbbm E \,_z \, \mathrm{e}^{\chi_t} = \sup_{t \geq 0} \sup_\chi \mathbbm E \,_\chi \, \mathrm{e}^{\chi_t} \leq \sup_{t \geq 0} \sup_\chi \mathbbm E \,_\chi \mathrm{e}^{\ell_K} = \mathrm{e}^{\ell_K} < \infty. \end{equation} Since in addition $\{\eta_t \}$ is {\sc iid} with $\mathbbm E \, \mathrm{e}^{\eta_t} < \infty$, we have \begin{equation*} \sup_{t \geq 0} \sup_z \mathbbm E \,_z Y_t = \sup_{t \geq 0} \sup_z \mathbbm E \,_z \mathrm{e}^{\chi_t + \eta_t} = \left(\sup_{t \geq 0} \sup_z \mathbbm E \,_z \mathrm{e}^{\chi_t} \right) \mathbbm E \, \mathrm{e}^{\eta_1} < \infty. \end{equation*} Hence, assumption \ref{a:Y_sum} holds. As a byproduct, we have also verified assumptions \ref{a:bd_in_prob_Yt} and \ref{a:geo_drift_Yt}-(2) (recall example \ref{ex:suff_geodrift_Y}). Similarly, since $\sup_z \mathbbm E \,_z \mathrm{e}^{-2 \gamma \chi_1} \leq \mathrm{e}^{-2 \gamma \ell_1} < \infty$ and $\mathbbm E \, \mathrm{e}^{-2 \gamma \eta_t} < \infty$, we have \begin{equation} \label{eq:bd_uy2} \sup_z \mathbbm E \,_z \left[ u' \left( Y_1 \right) \right]^2 = \sup_z \mathbbm E \,_z \mathrm{e}^{-2 \gamma (\chi_1 + \eta_1)} = \left(\sup_z \mathbbm E \,_z \mathrm{e}^{-2 \gamma \chi_1} \right) \mathbbm E \, \mathrm{e}^{-2 \gamma \eta_1}< \infty. \end{equation} Moreover, for all $z \in \mathsf Z$, based on the Fubini theorem, \begin{align*} \mathbbm E \,_z \hat{R}^2 = \mathbbm E \,_z \mathrm{e}^{ 2 \mu_1 + 2 \sigma_1 \zeta_1} = \mathbbm E \,_\mu \mathrm{e}^{2 \mu_1} \mathbbm E \,_\sigma \mathrm{e}^{2 \sigma_1 \zeta_1} = \mathbbm E \,_\mu \mathrm{e}^{2 \mu_1} \mathbbm E \,_\sigma \mathrm{e}^{2 \sigma_1^2} \leq \mathrm{e}^{2 i_M + 2 j_N^2 } < \infty. \end{align*} Hence, assumption \ref{a:bd_sup_ereuprm} holds (see example \ref{ex:CS_suff}). 
Since $\mathsf Z$ is a finite space, every function on $\mathsf Z$ is continuous; in particular, $z \mapsto \mathbbm E \,_z \hat{R} u' (\hat{Y})$ is continuous, so assumption \ref{a:conti_ereuprm} holds. In summary, we have verified all the assumptions of section \ref{s:opt_results}. All the related optimality results have been established. \subsubsection{Existence of Stationary Distributions} Similar to examples \ref{ex:homog}--\ref{ex:bdd_in_prob_matrix}, assumption \ref{a:suff_bd_in_prob}-(1) holds if $(1 - \alpha)^{-\gamma} \beta \mathbbm E \,_z \hat{R}^{1 - \gamma} \leq 1$ for all $z$. Since \begin{align*} \mathbbm E \,_z \hat{R}^{1 - \gamma} &= \mathbbm E \,_{z} \mathrm{e}^{ (1-\gamma) (\mu_1 + \sigma_1 \zeta_1) } = \mathbbm E \,_{\mu} \mathrm{e}^{ (1-\gamma) \mu_1} \mathbbm E \,_{\sigma} \mathrm{e}^{ (1 - \gamma) \sigma_1 \zeta_1} \\ &= \mathbbm E \,_{\mu} \mathrm{e}^{ (1-\gamma) \mu_1} \mathbbm E \,_{\sigma} \mathrm{e}^{ (1 - \gamma)^2 \sigma_1^2 / 2} \leq \| \Pi_{\mu} V_{\mu} \| \cdot \| \Pi_{\sigma} V_\sigma \|, \end{align*} it suffices to show that $\beta \| \Pi_{\mu} V_{\mu} \| \cdot \| \Pi_{\sigma} V_\sigma \| \leq (1 - \alpha)^\gamma$. Moreover, similar to verifying assumption \ref{a:ctra_coef}, assumption \ref{a:suff_bd_in_prob}-(2) holds as long as $(1 - \alpha) r(\Pi_\mu D_\mu) r(\Pi_\sigma D_\sigma) < 1$. In summary, assumption \ref{a:suff_bd_in_prob} holds whenever there exists $\alpha \in (0,1)$ that satisfies \begin{equation*} r(\Pi_\mu D_\mu) \cdot r(\Pi_\sigma D_\sigma) < 1 / (1 - \alpha) \leq \left( \beta \| \Pi_{\mu} V_{\mu} \| \cdot \| \Pi_{\sigma} V_\sigma \| \right)^{-1 / \gamma}. \end{equation*} This is guaranteed by \eqref{eq:app1_patience}. Moreover, assumption \ref{a:bd_in_prob_Yt} has been verified in the previous section, assumption \ref{a:z_bdd_in_prob} is trivial since $\mathsf Z$ is finite, and assumption \ref{a:concave} has been verified in example \ref{eg:concave}. At this point, all the assumptions up to section \ref{ss:exist_stat} have been verified. As a result, all the conclusions of propositions \ref{pr:opt_pol_bd_frac}--\ref{pr:optpol_concave} and theorem \ref{t:sta_exist} hold. \subsubsection{Global Stability} Regarding ergodicity and the Law of Large Numbers (theorem \ref{t:gs_gnl_ergo_LLN}), it remains to verify assumption \ref{a:pos_dens}. This is true if we assume further that \begin{itemize} \item there are strictly positive columns in each of the matrices $\Pi_\chi$, $\Pi_{\mu}$ and $\Pi_{\sigma}$ (recall example \ref{ex:suff_pos_dens_ctb}), and \item $\{\eta_t\}$ has a density that is strictly positive on $(-\infty, \delta)$ for some $\delta \in \mathbbm R$. \end{itemize} Regarding geometric ergodicity (theorem \ref{t:gs_gnl}), it remains to verify assumption \ref{a:geo_drift_Yt}. Condition (1) is trivial since $\{ \zeta_t\} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0,1)$. Condition (2) has been verified in previous sections. Hence, the model is $V$-geometrically ergodic as long as the innovations $\{ \eta_t\}$ and $\{\zeta_t\}$ are mutually independent. \subsection{Modeling Generic Stochastic Returns} Indeed, our theory works for more general setups.
To illustrate, consider the following labor income process\footnote{Similar extensions can be made to the $\{R_t\}$ process.} \begin{align} \label{eq:app2_Yt} Y_{t} = \chi_{t} \, \varphi_{t} + \nu_{t} \quad \text{and} \quad \ln \chi_{t+1} = \rho \ln \chi_{t} + \epsilon_{t+1}, \end{align} where $\chi_0 \in (0, \infty)$ and $\rho \in (0,1)$ are given, $\left\{ \epsilon_t \right\}_{t \geq 1} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \delta^2)$, $\{ \nu_t \}_{t \geq 1}$ and $\{\varphi_t \}_{t \geq 1}$ are positive {\sc iid} sequences with finite second moments, and $\mathbbm E \, \nu_{t}^{-2 \gamma} < \infty$. Moreover, $\{\chi_t \}$, $\{\varphi_t \}$ and $\{\nu_t \}$ are mutually independent. Similar setups appear in much of the applied literature. See, for example, \cite{heathcote2010macroeconomic}, \cite{kaplan2010much}, \cite{huggett2011sources}, \cite{kaplan2012inequality} and \cite{debacker2013rising}. This setup can be placed in our framework by setting $\eta_t := (\varphi_{t}, \nu_{t})$. Next, we aim to verify all the assumptions related to $\{ Y_t \}$. Based on \eqref{eq:app2_Yt}, for all $t \geq 0$, the distribution of $\chi_t$ given $\chi_0$ follows \begin{equation*} \left( \chi_t \mid \chi_0 \right) \sim LN \left( \rho^t \ln \chi_0, \; \delta^2 \sum_{k=0}^{t-1} \rho^{2k} \right). \end{equation*} We denote $\chi := \chi_0$ for simplicity. Then for all $t \geq 0$ and $s \in \mathbbm R$, we have\footnote{Recall that for $X \sim LN(\mu, \sigma^2)$ and $s \in \mathbbm R$, we have $\mathbbm E \, (X^s) = \exp \left( s \mu + s^2 \sigma^2 / 2 \right)$.} \begin{align*} \mathbbm E \,_{\chi} \chi_t^s = \exp \left[ s \rho^t \ln \chi + \frac{s^2 \delta^2 (1 - \rho^{2t})}{2 (1 - \rho^2)} \right]. \end{align*} In particular, since $\rho \in (0,1)$, this implies that $\sup_{t \geq 0} \mathbbm E \,_\chi \chi_t^s < \infty$ for all $s \in \mathbbm R$ and $\chi \in (0, \infty)$. Hence, \begin{equation*} \sup_{t \geq 0} \mathbbm E \,_\chi Y_t = \sup_{t \geq 0} \mathbbm E \,_\chi \chi_t \varphi_{t} + \mathbbm E \, \nu_{t} \leq \left(\sup_{t \geq 0} \mathbbm E \,_\chi \chi_t \right) \mathbbm E \, \varphi_{t} + \mathbbm E \, \nu_{t} < \infty \end{equation*} for all $\chi \in (0, \infty)$, and assumptions \ref{a:Y_sum} and \ref{a:bd_in_prob_Yt} hold. Moreover, since $Y_t \geq \nu_{t}$, \begin{equation*} \sup_\chi \mathbbm E \,_\chi \left[ u' \left( Y_t \right) \right]^2 \leq \mathbbm E \, \left[ u' \left( \nu_{t} \right) \right]^2 = \mathbbm E \, \nu_{t}^{-2 \gamma}< \infty, \end{equation*} and the second part of assumption \ref{a:bd_sup_ereuprm} holds. Regarding assumption \ref{a:geo_drift_Yt}-(2), since $\rho \in (0,1)$, we can choose $\bar{\chi} > 0$ such that \begin{equation*} q := \mathrm{e}^{\delta^2 \rho^2 / 2} \bar{\chi}^{\rho (\rho - 1)} < 1. \end{equation*} Then for $\chi \leq \bar{\chi}$, we have $\mathbbm E \,_\chi \chi_2 \leq \mathrm{e}^{ \rho^2 \ln \bar{\chi} + \delta^2(1+ \rho^2)/2 } =: d$, and for $\chi > \bar{\chi}$, we have \begin{align*} \mathbbm E \,_\chi \chi_2 &= \mathrm{e}^{ \delta^2(1+ \rho^2)/2} \chi^{\rho^2} = \frac{ \mathrm{e}^{ \delta^2(1+ \rho^2)/2} \chi^{\rho^2} }{\mathrm{e}^{\delta^2 / 2 } \chi^\rho} \cdot \mathrm{e}^{\delta^2 / 2 } \chi^\rho \\ &= \mathrm{e}^{ \delta^2 \rho^2 / 2} \chi^{\rho (\rho - 1)} \cdot \mathbbm E \,_\chi \chi_1 \leq \mathrm{e}^{ \delta^2 \rho^2 / 2} \bar{\chi}^{\rho (\rho - 1)} \cdot \mathbbm E \,_\chi \chi_1 = q \, \mathbbm E \,_\chi \chi_1.
\end{align*} Hence, $\mathbbm E \,_\chi \chi_2 \leq q \, \mathbbm E \,_\chi \chi_1 + d$ for all $\chi$. Since in addition $\mathbbm E \, \varphi_t < \infty$, $\mathbbm E \, \nu_t < \infty$ and \begin{equation*} \mathbbm E \,_\chi Y_2 = \mathbbm E \,_\chi \chi_2 \, \mathbbm E \, \varphi_2 + \mathbbm E \, \nu_2, \end{equation*} assumption \ref{a:geo_drift_Yt}-(2) follows immediately. Finally, assumption \ref{a:pos_dens}-(3) holds as long as the distributions of $\{ \varphi_t\}$ and $\{ \nu_t\}$ have densities that are strictly positive on $(0, \bar{\delta})$ for some $\bar{\delta} > 0$. \subsection{Numerical Example} \label{ss:app_numerical} What are the ``wealth inequality effects'' of mean persistence and stochastic volatility in the rate of return to wealth? This is an important question that is rarely explored by the existing literature. In what follows we attempt to provide an answer via simulation. In doing this, we will also explore the generality of our theory by testing the stability properties of the economy for a broad range of parameters. Our study is based on the model of section \ref{ss:app_cir_modeling}. Regarding the finite-state Markov chains $\{ \chi_t \}$, $\{ \mu_t\}$ and $\{ \sigma_t\}$, we use the method of \cite{tauchen1991quadrature} and discretize the following AR(1) processes \begin{align*} \chi_t &= \rho_\chi \chi_{t-1} + \epsilon_t^{\chi}, \qquad \{ \epsilon_t^\chi \} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \delta_\chi^2), \\ \mu_t &= (1 - \rho_\mu) \bar{\mu} + \rho_\mu \mu_{t-1} + \epsilon_t^\mu , \qquad \{ \epsilon_t^\mu \} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0, \delta_\mu^2), \\ \log \sigma_t &= (1- \rho_\sigma) \bar{\sigma} + \rho_\sigma \log \sigma_{t-1} + \epsilon_t^\sigma, \qquad \{ \epsilon_t^\sigma \} \stackrel {\textrm{ {\sc iid }}} {\sim} N(0,\delta_\sigma^2 ), \end{align*} into $N_\chi $, $N_\mu$ and $N_\sigma$ states, respectively (see the sketch below). Regarding the parameters of the $\{Y_t\}$ process, we set $\eta_t$ to be normally distributed with mean $0$ and variance $\delta_\eta^2 = 0.075$. In addition, we set $\rho_\chi = 0.9770$ and $\delta_\chi^2 = 0.02$. These values are chosen broadly in line with the existing literature. See, for example, \cite{heathcote2010macroeconomic}, \cite{kaplan2010much}, and \cite{debacker2013rising}. Our calibration of the $\{R_t\}$ process is based on \cite{fagereng2016heterogeneity}, in which the authors report the average and standard deviation of financial returns in Norway over 1993--2013.\footnote{This is the only data source we can find that has a full record of financial returns. Although our calibration is based on this dataset, we have conducted sensitivity analysis for different groups of parameters. The results show that their qualitative effects are broadly the same, although their quantitative effects vary, as one would expect.} We transform the two series to match our model and run first-order autoregressions, which yield $\bar{\mu} = 0.0281$, $\rho_\mu = 0.5722$, $\delta_\mu = 0.0067$, $\bar{\sigma}=-3.2556$, $\rho_\sigma=0.2895$ and $\delta_\sigma=0.1896$. Based on this parameterization, the stationary mean and standard deviation of the $\{R_t\}$ process are approximately $1.03$ and $4\%$, respectively. However, to distinguish the distinct effects of stochastic volatility and mean persistence, as well as to mitigate the computational burden caused by high state dimensionality, we consider two subsidiary model economies.
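As an implementation aside before the two models are specified, the discretization step described above is easy to sketch. The listing below is a minimal Tauchen-type discretization on an evenly spaced grid; it is illustrative only (our computations use the quadrature method of \cite{tauchen1991quadrature}), and the truncation half-width \texttt{m} is an assumed tuning choice.
\begin{verbatim}
using Distributions

# Minimal Tauchen-type discretization of x' = (1 - rho) * xbar + rho * x + e,
# e ~ N(0, delta^2), into N states (illustrative; not the quadrature method).
function tauchen(N, rho, delta, xbar = 0.0, m = 3.0)
    sx   = delta / sqrt(1 - rho^2)          # unconditional std deviation
    grid = collect(range(xbar - m * sx, xbar + m * sx, length = N))
    step = grid[2] - grid[1]
    P    = zeros(N, N)
    for k in 1:N
        cond = Normal((1 - rho) * xbar + rho * grid[k], delta)
        P[k, 1] = cdf(cond, grid[1] + step / 2)
        P[k, N] = 1 - cdf(cond, grid[N] - step / 2)
        for l in 2:N-1
            P[k, l] = cdf(cond, grid[l] + step / 2) - cdf(cond, grid[l] - step / 2)
        end
    end
    return grid, P
end

# e.g., a 5-state chain for {mu_t} at the calibrated values:
mugrid, Pmu = tauchen(5, 0.5722, 0.0067, 0.0281)
\end{verbatim}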
The first model reduces $\{ \mu_t\}$ to its stationary mean $\bar{\mu}$, while the second model reduces $\{ \sigma_t \}$ to its stationary mean $\hat{\sigma} := \exp \left(\bar{\sigma} + \delta_\sigma^2/ [2(1 - \rho_\sigma^2)] \right)$. In summary, $\{R_t\}$ satisfies \begin{align*} &\log R_t = \bar{\mu} + \sigma_t \zeta_t \qquad (\text{Model \rom{1}}) \\ &\log R_t = \mu_t + \hat{\sigma} \zeta_t \qquad (\text{Model \rom{2}}) \end{align*} To test the stability properties of the economy, we set $\beta=0.95$, $N_\chi = 5$ and consider respectively $\gamma=1$ and $\gamma=2$. Furthermore, in model \rom{1}, we set $N_\sigma=5$ and consider a broad neighborhood of the calibrated $(\rho_\sigma, \delta_\sigma)$ pair, and in model \rom{2}, we set $N_\mu=5$ and consider a large neighborhood around the calibrated $(\rho_\mu, \delta_\mu)$ values. In each scenario, we hold the rest of the parameters as in the benchmark. The results are shown in figure \ref{fig:m1} and figure \ref{fig:m2}. Since the dots (the calibrated parameter values) lie in the stable region in all cases, both calibrated models are globally stable, and stationary wealth distributions can be computed via the established ergodic theorems (theorem \ref{t:gs_gnl_ergo_LLN} and theorem \ref{t:gs_gnl}). Moreover, the broad stability range indicates that our theory can handle a wide range of parameter setups, including highly persistent and volatile $\{R_t\}$ processes. \begin{figure} \centering \begin{subfigure}[a]{0.75\textwidth} \includegraphics[width=1\linewidth]{m1_gam1} \caption*{(a) Model \rom{1} : $\beta = 0.95$, \, $\gamma = 1$, \, $\bar{\mu} = 0.0281$} \label{fig:m1_gam1} \end{subfigure} \vspace{0cm} \begin{subfigure}[b]{0.75\textwidth} \includegraphics[width=1\linewidth]{m1_gam2} \caption*{(b) Model \rom{1} : $\beta = 0.95$, \, $\gamma = 2$, \, $\bar{\mu}=0.0281$} \label{fig:m1_gam2} \end{subfigure} \caption[]{Stability Range and Threshold of Model \rom{1}} \label{fig:m1} \end{figure} \begin{figure} \centering \begin{subfigure}[a]{0.8\textwidth} \includegraphics[width=1\linewidth]{m2_gam1} \caption*{(a) Model \rom{2} : $\beta = 0.95$, \, $\gamma = 1$, \, $\hat{\sigma} = 0.0393$} \label{fig:m2_gam1} \end{subfigure} \vspace{0cm} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=1\linewidth]{m2_gam2} \caption*{(b) Model \rom{2} : $\beta = 0.95$, \, $\gamma = 2$, \, $\hat{\sigma}=0.0393$} \label{fig:m2_gam2} \end{subfigure} \caption[]{Stability Range and Threshold of Model \rom{2}} \label{fig:m2} \end{figure} Our next goal is to explore the quantitative impact of capital income risk on wealth inequality. As a first step, we compute the optimal policy. This can be realized by iterating the Coleman operator and evaluating the distance between successive iterates via the metric $\rho$. The algorithm is guaranteed to converge based on theorem \ref{t:ctra_T}. Specifically, we assign $100$ equally spaced grid points for wealth on $[10^{-4}, 50]$. Expectations with respect to the {\sc iid} innovations are evaluated via Monte Carlo with $1000$ draws. Moreover, in all cases, we use piecewise linear interpolation to approximate policies. Policy function evaluation outside of the grid range is via linear extrapolation, as is justified by proposition \ref{pr:optpol_concave}. Once the optimal policy is obtained, we then simulate a single time series of $5 \times 10^7$ agents in each case and compute the stationary distribution based on our ergodic theorems \ref{t:gs_gnl_ergo_LLN}--\ref{t:gs_gnl}.
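To make the procedure concrete, the following is a minimal Julia sketch of one application of the Coleman operator on the wealth grid. All names here (\texttt{coleman\_step}, \texttt{Rdraw}, \texttt{Ydraw}, etc.) are hypothetical, and the setup is a simplification of ours: the exogenous state is an index $1, \dots, nz$ with transition matrix \texttt{P}, the arrays \texttt{Rdraw[zp, s]} and \texttt{Ydraw[zp, s]} hold $S$ Monte Carlo draws of $(R_{t+1}, Y_{t+1})$ given the next exogenous state, and $u'(c) = c^{-\gamma}$.
\begin{verbatim}
# One application of the Coleman operator T on a wealth grid (hypothetical
# names; a sketch of the method in the text, not our exact implementation).

# Piecewise linear interpolation of a policy column, with c(0) = 0 below
# the grid and linear extrapolation above it (cf. proposition optpol_concave).
function interp_lin(agrid, vals, x)
    x <= agrid[1] && return vals[1] * x / agrid[1]
    k = min(searchsortedlast(agrid, x), length(agrid) - 1)
    w = (x - agrid[k]) / (agrid[k+1] - agrid[k])
    return (1 - w) * vals[k] + w * vals[k+1]
end

function coleman_step(c, agrid, P, Rdraw, Ydraw, beta, gamma)
    na, nz = size(c)
    S = size(Rdraw, 2)
    Tc = similar(c)
    for iz in 1:nz, ia in 1:na
        a = agrid[ia]
        # g(xi) = beta * E_z[ R' u'(c(R'(a - xi) + Y', z')) ], by Monte Carlo
        g = xi -> beta * sum(P[iz, izp] * sum(
                Rdraw[izp, s] * interp_lin(agrid, view(c, :, izp),
                    Rdraw[izp, s] * (a - xi) + Ydraw[izp, s])^(-gamma)
                for s in 1:S) / S for izp in 1:nz)
        if a^(-gamma) >= g(a)
            Tc[ia, iz] = a            # borrowing constraint binds (lemma binding)
        else
            lo, hi = 1e-10, a         # bisect on xi -> xi^(-gamma) - g(xi)
            for _ in 1:60
                mid = (lo + hi) / 2
                mid^(-gamma) > g(mid) ? (lo = mid) : (hi = mid)
            end
            Tc[ia, iz] = (lo + hi) / 2
        end
    end
    return Tc
end
\end{verbatim}
Iterating this map and stopping once $\max |(Tc)^{-\gamma} - c^{-\gamma}|$ falls below a tolerance implements the convergence check in the metric $\rho$; convergence of the iteration is guaranteed by theorem \ref{t:ctra_T}.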
As a final step, we compare the key properties of the stationary wealth distributions across the different model economies. In particular, we estimate the tail exponent based on the wealth level of the top $5\%$ and top $10\%$ of the simulated agents.\footnote{Recall that a random variable $X$ is said to have a \textit{heavy upper tail} if there exist constants $A, \alpha > 0$ such that $\mathbbm P \{ X > x\} \geq Ax^{-\alpha}$ for large enough $x$, where $\alpha$ is referred to as the \textit{tail exponent}. The smaller the tail exponent, the fatter the upper tail, and thus the higher the level of inequality. It is common in the literature to estimate the tail exponent via linearly regressing the $\log$-ranks on the $\log$-wealth levels of the top $5\%$ and top $10\%$ wealthiest agents. } Moreover, we estimate the Gini coefficient and provide a detailed analysis of the wealth share in each case. All simulations are processed in a standard Julia environment on a laptop with a 2.9 GHz Intel Core i7 and 32GB RAM. \begin{table}[h] \caption{Tail Exponent and Gini Coefficient} \label{tb:te} \vspace*{-0.45cm} \noindent \begin{center} \begin{threeparttable} {\small \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Model Economy} & Model I & Model II & IID $\{R_{t}\}$ & Constant $\{R_{t}\}$\tabularnewline \hline \hline \multirow{2}{*}{Tail Exponent} & Top 5\% & 3.0 & 2.9 & 4.4 & 4.4\tabularnewline \cline{2-6} & Top 10\% & 2.6 & 2.5 & 3.7 & 3.7\tabularnewline \hline \multicolumn{2}{|c|}{Gini Coefficient} & 0.47 & 0.45 & 0.34 & 0.33\tabularnewline \hline \end{tabular} } \begin{tablenotes} \fontsize{9pt}{9pt}\selectfont \item Parameters: $\beta=0.95$, $\gamma=2$, $\bar{\mu}=0.0281$, $\bar{\sigma}=-3.2556$, $\rho_\sigma=0.2895$, $\delta_\sigma=0.1896$, $\rho_\mu = 0.5722$ and $\delta_\mu = 0.0067$. \end{tablenotes} \end{threeparttable} \par\end{center} \end{table} \begin{table}[h] \caption{Wealth Share (in percentage)} \label{tb:wealth_share} \vspace*{-0.45cm} \noindent \begin{center} \begin{threeparttable} {\small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Poorest agents (\%) & 5\% & 10\% & 15\% & 20\% & 25\% & 30\% & 35\% & 40\% & 45\% & 50\%\tabularnewline \hline \hline Model I & 0.8 & 1.8 & 3.1 & 4.6 & 6.2 & 8.2 & 10.4 & 12.9 & 15.7 & 18.7\tabularnewline \hline Model II & 1.1 & 2.4 & 3.9 & 5.7 & 7.6 & 9.7 & 12.1 & 14.7 & 17.5 & 20.6\tabularnewline \hline IID $\{R_{t}\}$ & 1.5 & 3.4 & 5.6 & 8.0 & 10.6 & 13.4 & 16.5 & 19.8 & 23.4 & 27.3\tabularnewline \hline Constant $\{R_{t}\}$ & 1.6 & 3.5 & 5.6 & 8.0 & 10.7 & 13.5 & 16.6 & 20.0 & 23.6 & 27.5\tabularnewline \hline \hline Poorest agents (\%) & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 100\%\tabularnewline \hline \hline Model I & 22.1 & 25.9 & 30.0 & 34.7 & 40.3 & 47.0 & 55.1 & 64.8 & 77.0 & 100\tabularnewline \hline Model II & 24.1 & 27.8 & 31.9 & 36.6 & 42.0 & 48.5 & 56.3 & 65.7 & 77.5 & 100\tabularnewline \hline IID $\{R_{t}\}$ & 31.4 & 35.9 & 40.7 & 46.0 & 51.8 & 58.4 & 65.7 & 74.2 & 84.3 & 100\tabularnewline \hline Constant $\{R_{t}\}$ & 31.6 & 36.1 & 41.0 & 46.3 & 52.0 & 58.5 & 65.9 & 74.3 & 84.4 & 100\tabularnewline \hline \end{tabular} } \begin{tablenotes} \fontsize{9pt}{9pt}\selectfont \item Parameters: same as table \ref{tb:te}. In the first and sixth rows, $N\%$ denotes the $N\%$ of agents with lowest levels of wealth.
\end{tablenotes} \end{threeparttable} \par\end{center} \end{table} \begin{figure} \centering \includegraphics[width=1\linewidth]{zipf_plot.png} \caption{The Zipf Plot} \label{fig:zipf} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{lorenz_curve_1.png} \caption{The Lorenz Curve} \label{fig:lorenz} \end{figure} We compare our models with two other models, in which $\{ R_t \}$ is respectively an {\sc iid} process and a constant.\footnote{In the former case, we set $N_\sigma=1$ in model \rom{1} (so that $\sigma_t$ reduces to its stationary mean) or $N_\mu=1$ in model \rom{2} (so that $\mu_t$ reduces to its stationary mean). In the latter case, we reduce $\{R_t\}$ to its stationary mean.} The difference between the results of model \rom{1} and model \rom{2} and the results of the other two models reflects the role of stochastic volatility and mean persistence of the wealth return process. Parameter setups and results are reported in tables \ref{tb:te}--\ref{tb:wealth_share}.\footnote{Since the standard Bewley--Aiyagari--Huggett model does not generate fat-tailed wealth distributions (see, e.g., \cite{stachurski2018impossibility}), calculating the tail exponent of the stationary wealth distribution when $\{R_t\}$ is a constant is less standard. However, doing this allows us to reveal the effect of capital income risk on the tail thickness of the stationary wealth distribution.} As can be seen in table \ref{tb:te}, the tail exponents of model \rom{1} and model \rom{2} are smaller than the tail exponents when $\{R_t\}$ is {\sc iid} or constant. In other words, both stochastic volatility and mean persistence in wealth returns lead to a higher degree of wealth inequality. Moreover, mean persistence results in slightly lower tail exponents than stochastic volatility does. Similarly, the Gini coefficients generated by model \rom{1} and model \rom{2} are much higher than those generated by the other two models, illustrating from another perspective that stochastic volatility and mean persistence of wealth returns generate more wealth inequality. However, unlike in the previous case, stochastic volatility now has the larger impact on wealth inequality: it generates a Gini index of $0.47$, compared with $0.45$ under mean persistence. Moreover, at least in the current models, {\sc iid} wealth returns do not have an obvious effect on wealth inequality, in terms of either the tail exponent or the Gini coefficient. These findings are further illustrated in table \ref{tb:wealth_share} and figures \ref{fig:zipf}--\ref{fig:lorenz}. In particular, in table \ref{tb:wealth_share} we calculate the wealth share held by a given fraction of the poorest agents. Notably, the top $10\%$ richest agents hold respectively $35.2\%$, $34.3\%$, $25.8\%$ and $25.7\%$ of the total wealth, while the poorest $10\%$ agents hold respectively $1.8\%$, $2.4\%$, $3.4\%$ and $3.5\%$ of the total wealth in the four model economies. In figure \ref{fig:zipf} we create the Zipf plot (i.e., plotting $\log$ wealth vs.\ $\log$ rank). The plot clearly shows that model \rom{1} and model \rom{2} generate stationary wealth distributions with fatter upper tails than the other models do, and that the stationary wealth distribution of model \rom{2} has the fattest upper tail. In figure \ref{fig:lorenz} we plot the Lorenz curve, which can be viewed as a generalized graphical representation of table \ref{tb:wealth_share}.
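For reference, both sets of statistics are simple functionals of the simulated cross-section. The following is a minimal sketch of the two estimators used above (the log-rank regression for the tail exponent, and the Lorenz/Gini computations); \texttt{wealth} stands for the simulated cross-section and all names are hypothetical.
\begin{verbatim}
# Tail exponent via OLS of log rank on log wealth among the top fraction q
# (the regression described in the footnote to table tb:te).
function tail_exponent(wealth, q)
    w = sort(wealth, rev = true)
    k = ceil(Int, q * length(w))
    x = log.(w[1:k])                    # log wealth of the k richest agents
    y = log.(1:k)                       # log rank
    mx, my = sum(x) / k, sum(y) / k
    return -sum((x .- mx) .* (y .- my)) / sum((x .- mx) .^ 2)
end

# Lorenz curve: population share vs. share of total wealth (sorted ascending).
function lorenz(wealth)
    w = sort(wealth)
    return (1:length(w)) ./ length(w), cumsum(w) ./ sum(w)
end

# Gini coefficient as 1 minus twice the (trapezoidal) area under the Lorenz curve.
function gini(wealth)
    _, L = lorenz(wealth)
    n = length(L)
    return 1 - 2 * sum(L) / n + 1 / n
end

# e.g., tail_exponent(wealth, 0.05), tail_exponent(wealth, 0.10), gini(wealth)
\end{verbatim}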
Finally, sensitivity analysis with respect to model parameters and a more detailed quantitative analysis can be found in the online appendix of this paper. \newpage \section{Appendix A: Proof of Section \ref{s:opt_results} Results} \label{Appendix_A} Throughout the proofs, we let $\{\mathscr F_t\}_{t \geq 0}$ be the natural filtration, where $\mathscr F_t := \sigma(s_0, \cdots, s_t)$ with $s_t := (a_t, z_t)$ for all $t$. We start by proving the results of section \ref{s:opt_results}. \begin{proof}[Proof of example \ref{ex:spec_rad_ctra}] Note that for fixed $n \in \mathbbm N$, \begin{equation*} \| K^n \| = \sup_{ \|f\| \leq 1 } \| K^n f \| = \sup_{ \|f\| \leq 1 } \sup_{z \in \mathsf Z} \left| \mathbbm E \,_z R_1 \cdots R_n f(z_n) \right| = \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \cdots R_n. \end{equation*} Suppose assumption \ref{a:ctra_coef} holds. Note that every $t \in \mathbbm N$ can be written as $t = kn + \ell$ where $k \in \mathbbm N \cup \{0\}$ and $\ell \in \{0, \cdots, n-1 \}$. Since $\| K^t \| = \| K^{kn + \ell} \| \leq \|K^n\|^k \|K^\ell \|$, \begin{equation*} \| K^t \|^{1/t} = \| K^{kn + \ell} \|^{1/t} \leq \|K^n\|^{k/t} \|K^\ell \|^{1/t} = \|K^n \|^{\frac{1}{n + \ell / k}} \| K^\ell \|^{1/t}. \end{equation*} Since $\beta^n \|K^n \| < 1$ by assumption \ref{a:ctra_coef} and $\|K^\ell \| \leq \| K\|^\ell <\infty$, letting $t \to \infty$ (and thus $k \to \infty$) yields \begin{equation*} \beta r(K) = \beta \lim_{t \to \infty} \|K^t \|^{1/t} \leq \beta \lim_{k \to \infty} \|K^n \|^{\frac{1}{n} \frac{n}{n + \ell / k}} \| K^\ell \|^{\frac{1}{kn + \ell}} = \beta \|K^n \|^{1/n} < 1. \end{equation*} On the other hand, suppose $\beta r(K) < 1$. Then by the definition of $r$ there exists $n \in \mathbbm N$ such that $\beta \| K^n \|^{1/n} < 1$. Thus $\beta^n \|K^n \| <1$ and assumption \ref{a:ctra_coef} is verified. \end{proof} For the rest of this section, we let $n$ and $\theta$ be defined as in assumption \ref{a:ctra_coef}. \begin{proof}[Proof of lemma \ref{lm:max_path}] Iterating backward on the maximal path \eqref{eq:max_path}, we can show that \begin{equation*} \tilde{a}_t = \left( \prod_{i=1}^t R_i \right) a + \sum_{j=1}^t \left( Y_j \, \prod_{i=j+1}^t R_i \right). \end{equation*} Taking discounted expectations yields \begin{align*} \beta^t \mathbbm E \,_{a,z} \tilde{a}_t &= \left[ \mathbbm E \,_{z} \left( \beta^t \prod_{i=1}^t R_i \right) \right] a + \sum_{j=1}^t \mathbbm E \,_{z} \left[ \left( \beta^{t-j} \prod_{i=j+1}^t R_i \right) \left( \beta^j Y_j \right) \right]. \end{align*} Let $M(a,z) := \sum_{t \geq 0} \beta^t \mathbbm E \,_{a,z} \tilde{a}_t$. Then the monotone convergence theorem and the Markov property imply that \begin{align*} M(a,z) &= \sum_{t=0}^{\infty} \mathbbm E \,_z \left( \beta^t \prod_{i=1}^t R_i \right) a + \sum_{t=0}^{\infty} \sum_{j=1}^t \mathbbm E \,_{z} \left[ \left( \beta^{t-j} \prod_{i=j+1}^t R_i \right) \left( \beta^j Y_j \right) \right] \\ &= \mathbbm E \,_z \left( \sum_{t=0}^{\infty} \beta^t \prod_{i=1}^t R_i \right) a + \sum_{j=1}^{\infty} \mathbbm E \,_z \mathbbm E \,_z \left[ (\beta^j Y_j) \left( \sum_{i = 0}^{\infty} \beta^i \prod_{k=1}^{i} R_{j+k} \right) \Big{ | } \mathscr F_{j} \right] \\ &= \mathbbm E \,_z \left( \sum_{t=0}^{\infty} \beta^t \prod_{i=1}^t R_i \right) a + \sum_{j=1}^{\infty} \mathbbm E \,_z \left[ (\beta^j Y_j) \, \mathbbm E \,_{z_j} \left( \sum_{i = 0}^{\infty} \beta^i \prod_{k=1}^{i} R_{k} \right) \right].
\end{align*} By the Markov property and assumption \ref{a:ctra_coef}, for all $k \in \mathbbm N$ and $z \in \mathsf Z$, we have \begin{align*} \mathbbm E \,_z \beta^{kn} R_1 \cdots R_{kn} &= \mathbbm E \,_z \mathbbm E \,_z [\beta^{(k-1)n} R_1 \cdots R_{(k-1)n} \beta^n R_{(k-1)n+1} \cdots R_{kn} \mid \mathscr F_{(k-1)n}] \\ &= \mathbbm E \,_z \beta^{(k-1)n} R_1 \cdots R_{(k-1)n} \mathbbm E \,_{z_{(k-1)n}} (\beta^n R_{1} \cdots R_{n}) \\ & \leq \theta \mathbbm E \,_z \beta^{(k-1)n} R_1 \cdots R_{(k-1)n} \leq \cdots \leq \theta^k. \end{align*} Taking supremum on both sides yields \begin{equation} \label{eq:n-step-ctra} \beta^{kn} \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \cdots R_{kn} \leq \theta^k. \end{equation} Moreover, assumption \ref{a:bd_sup_ereuprm} implies that $K_0 := \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} < \infty$. Hence, \begin{align*} \mathbbm E \,_{z} \left( \sum_{i=0}^{n-1} \beta^{i} R_{1} \cdots R_{i} \right) &= \sum_{i=0}^{n-1} \beta^i \mathbbm E \,_z R_1 \cdots R_i = \sum_{i=0}^{n-1} \beta^i \mathbbm E \,_z R_1 \cdots R_{i-1} \mathbbm E \,_{z_{i-1}} R_1 \\ &\leq \sum_{i=0}^{n-1} \beta^i \mathbbm E \,_z R_1 \cdots R_{i-1} K_0 \leq \cdots \leq \sum_{i=0}^{n-1} \beta^i K_0^i =: K_1 < \infty \end{align*} for all $z \in \mathsf Z$. Taking supremum on both sides yields \begin{equation} \label{eq:n-step-bd} \sup_{z \in \mathsf Z} \mathbbm E \,_{z} \left( \sum_{i=0}^{n-1} \beta^{i} R_{1} \cdots R_{i} \right) \leq K_1 < \infty. \end{equation} Based on \eqref{eq:n-step-ctra} and \eqref{eq:n-step-bd}, we have \begin{align*} \mathbbm E \,_{z} \left( \sum_{i = 0}^{\infty} \beta^i \prod_{k=1}^{i} R_{k} \right) &= \sum_{k=0}^{\infty} \mathbbm E \,_z \left( \sum_{i=0}^{n-1} \beta^{kn+i} R_1 \cdots R_{kn + i} \right) \\ &= \sum_{k=0}^{\infty} \mathbbm E \,_z \left[ \beta^{kn} R_1 \cdots R_{kn} \left( \sum_{i=0}^{n-1} \beta^{i} R_{kn+1} \cdots R_{kn + i} \right) \right] \\ &= \sum_{k=0}^{\infty} \mathbbm E \,_z \left[ \beta^{kn} R_1 \cdots R_{kn} \mathbbm E \,_{z_{kn}} \left( \sum_{i=0}^{n-1} \beta^{i} R_{1} \cdots R_{i} \right) \right] \\ &\leq \sum_{k=0}^{\infty} \mathbbm E \,_z \beta^{kn} R_1 \cdots R_{kn} K_1 \leq \sum_{k=0}^{\infty} \theta^k K_1 =: K_2 < \infty \end{align*} for all $z \in \mathsf Z$. Hence, \begin{equation*} \sup_{z \in \mathsf Z} \mathbbm E \,_{z} \left( \sum_{i = 0}^{\infty} \beta^i \prod_{k=1}^{i} R_{k} \right) \leq K_2 < \infty. \end{equation*} Finally, assumption \ref{a:Y_sum} implies that \begin{align*} M(a,z) \leq K_2 a + K_2 \sum_{t=1}^{\infty} \beta^t \mathbbm E \,_z Y_t < \infty \end{align*} for all $(a,z) \in \SS_0$. This concludes the proof. \end{proof} \begin{proof}[Proof of theorem \ref{t:opt_result}] This result extends theorem~1 of \cite{benhabib2015wealth} and theorem~3.1 of \cite{li2014solving}. While the assumptions are weaker in our setting, the proof is similar and hence omitted. \end{proof} Next, we aim to prove proposition \ref{pr:complete}. To that end, we define $\mathscr H$ to be the set of functions $h \colon \SS_0 \rightarrow \mathbbm R$ that satisfy \begin{enumerate} \item $h$ is continuous, \item $h$ is decreasing in the first argument, and \item $\exists K \in \mathbbm R$ such that $u'(a) \leq h(a,z) \leq u'(a) + K$ for all $(a,z) \in \SS_0$. \end{enumerate} On $\mathscr H$ we impose the distance \begin{equation} \label{eq:dinf_metric} d_{\infty}(h,g) := \left\| h - g \right\| := \sup_{(a,z) \in \SS_0} \left| h(a,z) - g(a,z) \right|.
\end{equation} While the elements of $\mathscr H$ are not bounded, the function $d_{\infty}$ is a valid metric. Moreover, a standard argument shows that $(\mathscr H, d_{\infty})$ is a complete metric space. \begin{proof}[Proof of proposition \ref{pr:complete}] A standard argument shows that $\rho$ is a valid metric. To show completeness of $(\mathscr C, \rho)$, it suffices to show that $(\mathscr C, \rho)$ and $(\mathscr H, d_{\infty})$ are isometrically isomorphic. To see that this is so, let $H$ be the map on $\mathscr C$ defined by $Hc = u' \circ c$. It is easy to show that $H: \mathscr C \rightarrow \mathscr H$ and that it is a bijection. Moreover, for all $c,d \in \mathscr C$, \begin{equation*} d_{\infty}(Hc, Hd) = \left\| Hc - Hd \right\| = \left\| u' \circ c - u' \circ d \right\| = \rho(c,d). \end{equation*} Hence, $H$ is an isometry. The space $(\mathscr C, \rho)$ is then complete, as claimed. \end{proof} \begin{proof}[Proof of proposition \ref{pr:suff_optpol}] Let $c$ be a policy in $\mathscr C$ satisfying \eqref{eq:foc}. That $c$ satisfies the first order optimality conditions is immediate by definition. It remains to show that any asset path generated by $c$ satisfies the transversality condition \eqref{eq:tvc}. To see that this is so, observe that, by \eqref{eq:bd_uprime}, \begin{equation} \label{eq:ineq_betu'ca} \mathbbm E \,_{a,z} \beta^t (u' \circ c) (a_t, z_t) a_t \leq \beta^t \mathbbm E \,_{a, z} u'(a_t) a_t + \beta^t K \mathbbm E \,_{a, z} a_t. \end{equation} Regarding the first term on the right hand side of \eqref{eq:ineq_betu'ca}, fix $L > 0$ and observe that \begin{align*} \mathbbm E \,_{a,z} u'(a_t) a_t &= \mathbbm E \,_{a,z} u'(a_t) a_t \mathbbm 1 \{a_t \leq L\} + \mathbbm E \,_{a,z} u'(a_t) a_t \mathbbm 1 \{a_t > L\} \\ & \leq L \mathbbm E \,_{a,z} u'(a_t) + u'(L) \mathbbm E \,_{a,z} a_t \leq L \mathbbm E \,_{z} u'(Y_t) + u'(L) \mathbbm E \,_{a,z} \tilde{a}_t, \end{align*} where $\tilde{a}_t$ is the maximal path defined in \eqref{eq:max_path}. We then have \begin{equation} \label{eq:ineq_betEu'a} \beta^t \mathbbm E \,_{a,z} u'(a_t) a_t \leq L \beta^t \mathbbm E \,_{z} u'(Y_t) + u'(L) \beta^t \mathbbm E \,_{a,z} \tilde{a}_t. \end{equation} Since $M := \sup_{z \in \mathsf Z} \mathbbm E \,_z u'(\hat{Y}) < \infty$ by assumption \ref{a:bd_sup_ereuprm}, the Markov property then implies that for all $z \in \mathsf Z$ and $t \geq 1$, \begin{equation*} \mathbbm E \,_z u'\left( Y_t \right) = \mathbbm E \,_z \mathbbm E \,_z \left[ u' \left( Y_t \right) \big| \mathscr F_{t-1} \right] = \mathbbm E \,_z \mathbbm E \,_{z_{t-1}} u' (\hat{Y}) \leq \mathbbm E \,_z M = M. \end{equation*} Hence, $\lim_{t \to \infty} \beta^t \mathbbm E \,_z u' \left( Y_t \right) = 0$. Since in addition $\lim_{t \to \infty} \beta^t \mathbbm E \,_{a,z} \tilde{a}_t = 0$ by lemma \ref{lm:max_path}, \eqref{eq:ineq_betEu'a} then implies that $\lim_{t \to \infty} \beta^t \mathbbm E \,_{a,z} u'(a_t) a_t = 0$. Moreover, the second term on the right hand side of \eqref{eq:ineq_betu'ca} is dominated by $\beta^t K \mathbbm E \,_{a,z} \tilde{a}_t$, and converges to zero by lemma \ref{lm:max_path}. We have thus shown that the right hand side of \eqref{eq:ineq_betu'ca} converges to zero. Hence, the transversality condition holds. \end{proof} \begin{proof}[Proof of proposition \ref{pr:welldef_T}] Fix $c \in \mathscr C$ and $(a,z) \in \SS_0$. Because $c \in \mathscr C$, the map $\xi \mapsto \psi_c(\xi, a, z)$ is increasing.
Since $\xi \mapsto u'(\xi)$ is strictly decreasing, the equation \eqref{eq:T_opr} can have at most one solution. Hence uniqueness holds. Existence follows from the intermediate value theorem provided we can show that \begin{enumerate} \item[(a)] $\xi \mapsto \psi_c(\xi, a, z)$ is a continuous function, \item[(b)] $\exists \xi \in (0,a]$ such that $u'(\xi) \geq \psi_c(\xi, a, z)$, and \item[(c)] $\exists \xi \in (0,a]$ such that $u'(\xi) \leq \psi_c(\xi, a, z)$. \end{enumerate} For part (a), it suffices to show that $g(\xi) := \mathbbm E \,_{z} \hat{R} \left(u' \circ c \right) \left[ \hat{R}(a - \xi) + \hat{Y}, \hat{z} \right]$ is continuous on $(0,a]$. To this end, fix $\xi \in (0,a]$ and $\xi_n \rightarrow \xi$. By \eqref{eq:bd_uprime} we have \begin{align} \label{eq:uppbd_ruprmc} \hat{R} \left( u' \circ c \right) \left[ \hat{R} \left( a - \xi \right) + \hat{Y}, \hat{z} \right] \leq \hat{R} \left( u' \circ c \right) ( \hat{Y}, \hat{z} ) \leq \hat{R} u'( \hat{Y}) + \hat{R} K. \end{align} The last term is integrable by assumption \ref{a:bd_sup_ereuprm}. Hence the dominated convergence theorem applies. From this fact and the continuity of $c$, we obtain $g(\xi_n) \rightarrow g(\xi)$. Hence, $\xi \mapsto \psi_c(\xi, a, z)$ is continuous. Part (b) clearly holds, since $u'(\xi) \rightarrow \infty$ as $\xi \rightarrow 0$ and $\xi \mapsto \psi_c(\xi, a, z)$ is increasing and always finite (since it is continuous, as shown in the previous paragraph). Part (c) is also trivial (just set $\xi = a$). \end{proof} \begin{proof}[Proof of proposition \ref{pr:self_map}] Fix $c \in \mathscr C$. With slight abuse of notation, we denote \begin{equation*} g \left( \xi, a, z \right) := \mathbbm E \,_{z} \hat{R} \left( u' \circ c \right) \left[ \hat{R} \left( a - \xi \right) + \hat{Y}, \, \hat{z} \right]. \end{equation*} \textbf{Step~1.} We show that $Tc$ is continuous. To apply a standard fixed point parametric continuity result such as theorem~B.1.4 of \cite{stachurski2009economic}, we first show that $\psi_c$ is jointly continuous on the set $G$ defined in \eqref{eq:dom_T_opr}. This will be true if $g$ is jointly continuous on $G$. For any $\{ (\xi_n, a_n, z_n) \}$ and $(\xi, a, z)$ in $G$ with $(\xi_n, a_n, z_n) \rightarrow (\xi, a, z)$, we need to show that $g(\xi_n, a_n, z_n) \rightarrow g(\xi, a, z)$. To that end, we define \begin{align*} h_1 ( \xi, a, \hat{z}, \hat{\zeta}, \hat{\eta} ), \, h_2 ( \xi, a, \hat{z}, \hat{\zeta}, \hat{\eta} ) := \hat{R} [ u' (\hat{Y}) + K ] \pm \hat{R} \left( u' \circ c \right) [ \hat{R} \left( a - \xi \right) + \hat{Y}, \hat{z} ], \end{align*} where $\hat{R} := R ( \hat{z}, \hat{\zeta} )$ and $\hat{Y} := Y ( \hat{z}, \hat{\eta} )$ as defined in \eqref{eq:RY_func}. Then $h_1$ and $h_2$ are continuous in $(\xi, a, \hat{z})$ by the continuity of $c$ and assumption \ref{a:conti_ereuprm}, and they are nonnegative since \eqref{eq:uppbd_ruprmc} implies that $0 \leq \hat{R} \left( u' \circ c \right) [ \hat{R} \left( a - \xi \right) + \hat{Y}, \hat{z} ] \leq \hat{R} [ u' (\hat{Y}) + K ]$. Moreover, since the stochastic kernel $P$ is Feller, the product measure satisfies\footnote{Here $\stackrel{w}{\to}$ denotes weak convergence, i.e., for all bounded continuous functions $f$, we have \begin{equation*} \int f(\hat{z}, \hat{\zeta}, \hat{\eta}) P(z_n, \diff \hat{z}) \nu(\diff \hat{\zeta}) \mu (\diff \hat{\eta}) \to \int f(\hat{z}, \hat{\zeta}, \hat{\eta}) P(z, \diff \hat{z}) \nu(\diff \hat{\zeta}) \mu (\diff \hat{\eta}).
\end{equation*} The formal definition of weak convergence is provided in section \ref{ss:gs_iid}. } \begin{equation*} P(z_n, \cdot) \otimes \nu \otimes \mu \stackrel{w}{\longrightarrow} P(z, \cdot) \otimes \nu \otimes \mu. \end{equation*} Based on the generalized Fatou's lemma of \cite{feinberg2014fatou} (theorem~1.1), \begin{align*} \liminf_{n \rightarrow \infty} & \int h_i ( \xi_n, a_n, \hat{z}, \hat{\zeta}, \hat{\eta} ) P( z_n, \diff \hat{z}) \nu(\diff \hat{\zeta}) \mu(\diff \hat{\eta}) \\ &\geq \int h_i ( \xi, a, \hat{z}, \hat{\zeta}, \hat{\eta} ) P( z, \diff \hat{z}) \nu(\diff \hat{\zeta}) \mu(\diff \hat{\eta}). \end{align*} Since $z \mapsto \mathbbm E \,_z \hat{R} \, [u'(\hat{Y}) + K ]$ is continuous by assumption \ref{a:conti_ereuprm}, this implies that \begin{align*} \liminf_{n \rightarrow \infty} \left( \pm \mathbbm E \,_{z_n} \hat{R} \left( u' \circ c \right) \left[ \hat{R} \left(a_n - \xi_n \right) + \hat{Y}, \hat{z} \right] \right) \geq \left( \pm \mathbbm E \,_{z} \hat{R} \left(u' \circ c \right) \left[ \hat{R} \left(a - \xi \right) + \hat{Y}, \hat{z} \right] \right). \end{align*} The function $g$ is then continuous since the above inequality is equivalent to \begin{align*} \liminf_{n \rightarrow \infty} g(\xi_n, a_n, z_n) \geq g(\xi, a, z) \geq \limsup_{n \rightarrow \infty} g(\xi_n, a_n, z_n). \end{align*} Hence, $\psi_c$ is continuous on $G$, as was to be shown. Moreover, since $\xi \mapsto \psi_c(\xi, a, z)$ takes values in the closed interval \begin{equation*} I(a,z) := \left[ u'(a), u'(a) + \mathbbm E \,_z \hat{R} \left( u'(\hat{Y}) + K \right) \right], \end{equation*} the correspondence $(a, z) \mapsto I(a,z)$ is nonempty, compact-valued and continuous. By theorem~B.1.4 of \cite{stachurski2009economic}, $(a,z) \mapsto [u' \circ (Tc)] (a,z)$ is continuous. $Tc$ is then continuous on $\SS_0$ since $u'$ is continuous. \textbf{Step 2.} We show that $Tc$ is increasing in $a$. Suppose that for some $z \in \mathsf Z$ and $a_1, a_2 \in (0, \infty)$ with $a_1 < a_2$, we have $\xi_1 := Tc (a_1,z) > Tc (a_2,z) =: \xi_2$. Since $c$ is increasing in $a$ by assumption, $\psi_c$ is increasing in $\xi$ and decreasing in $a$. Then $u'(\xi_1) < u'(\xi_2) = \psi_c(\xi_2, a_2, z) \leq \psi_c(\xi_1, a_1, z) = u'(\xi_1)$. This is a contradiction. \textbf{Step 3.} We have shown in proposition \ref{pr:welldef_T} that $Tc(a,z) \in (0,a]$ for all $(a,z) \in \SS_0$. \textbf{Step 4.} We show that $\| u' \circ (Tc) - u' \| < \infty$. Since $u'[Tc(a,z)] \geq u'(a)$, we have \begin{align*} &\left| u'[Tc(a,z)] - u'(a) \right| = u'[Tc(a,z)] - u'(a) \\ & \leq \mathbbm E \,_{z} \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - Tc(a,z) \right] + \hat{Y}, \, \hat{z} \right) \leq \mathbbm E \,_{z} \hat{R} \left[ u'(\hat{Y}) + K \right] \end{align*} for all $(a,z) \in \SS_0$. Assumption \ref{a:bd_sup_ereuprm} then implies that \begin{align*} \left\| u' \circ (Tc) - u' \right\| \leq \sup_{z \in \mathsf Z} \mathbbm E \,_{z} \hat{R} u'(\hat{Y}) + K \left( \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} \right) < \infty. \end{align*} This concludes the proof. \end{proof} In the rest of this section, we aim to prove theorem \ref{t:ctra_T}. Recall $\mathscr H$ defined above. Given $h \in \mathscr H$, let $\tilde{T} h$ be the function mapping $(a,z) \in \SS_0$ into the $\kappa$ that solves \begin{equation} \label{eq:T_hat} \kappa = \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h \left( \hat{R} \left[ a - \left(u' \right)^{-1}(\kappa) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\}.
\end{equation} The next lemma implies that $\tilde{T}$ is a well-defined self-map on $\mathscr H$, as well as topologically conjugate to $T$ under the bijection $H: \mathscr C \rightarrow \mathscr H$ defined by $Hc := u' \circ c$. \begin{lemma} \label{lm:conjug} The operator $\tilde{T}$ maps $\mathscr H$ into itself and satisfies $\tilde{T} H = H T$ on $\mathscr C$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:conjug}] Pick any $c \in \mathscr C$ and $(a,z) \in \SS_0$. Let $\xi := Tc(a,z)$; then $\xi$ solves \begin{equation} \label{eq:Tc_eq} u'(\xi) = \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \left(u' \circ c \right) \left[ \hat{R} \left(a - \xi \right) + \hat{Y}, \, \hat{z} \right], \, u'(a) \right\}. \end{equation} We need to show that $HTc$ and $\tilde{T} Hc$ evaluate to the same number at $(a,z)$. In other words, we need to show that $u'(\xi)$ is the solution to \begin{equation*} \kappa = \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - \left(u' \right)^{-1} (\kappa) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\}. \end{equation*} But this is immediate from \eqref{eq:Tc_eq}. Hence, we have shown that $\tilde{T} H = H T$ on $\mathscr C$. Since $H \colon \mathscr C \to \mathscr H$ is a bijection, we have $\tilde{T} = HT H^{-1}$. Since in addition $T \colon \mathscr C \to \mathscr C$ by proposition \ref{pr:self_map}, we have $\tilde{T} \colon \mathscr H \to \mathscr H$. This concludes the proof. \end{proof} \begin{lemma} \label{lm:monot} $\tilde{T}$ is order preserving on $\mathscr H$. That is, $\tilde{T} h_1 \leq \tilde{T} h_2$ for all $h_1, h_2 \in \mathscr H$ with $h_1 \leq h_2$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:monot}] Let $h_1, h_2$ be functions in $\mathscr H$ with $h_1 \leq h_2$. Suppose to the contrary that there exists $(a,z) \in \SS_0$ such that $\kappa_1 := \tilde{T} h_1 (a,z) > \tilde{T} h_2 (a,z) =: \kappa_2$. Since functions in $\mathscr H$ are decreasing in the first argument, we have \begin{align*} \kappa_1 &= \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h_1 \left( \hat{R} \left[ a - (u')^{-1}(\kappa_1) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\} \\ & \leq \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h_2 \left( \hat{R} \left[ a - (u')^{-1}(\kappa_1) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\} \\ & \leq \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h_2 \left( \hat{R} \left[ a - (u')^{-1}(\kappa_2) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\} = \kappa_2. \end{align*} This is a contradiction. Hence, $\tilde{T}$ is order preserving. \end{proof} \begin{lemma} \label{lm:ctra_That} $\tilde{T}^n$ is a contraction mapping on $(\mathscr H, d_{\infty})$ with modulus $\theta$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:ctra_That}] Since $\tilde{T}$ is order preserving and $\mathscr H$ is closed under the addition of nonnegative constants, based on \cite{blackwell1965discounted}, it remains to verify that, for $n$ and $\theta$ given by assumption \ref{a:ctra_coef}, \begin{equation*} \tilde{T}^n (h+\gamma) \leq \tilde{T}^n h + \theta \gamma \; \text{ for all } h \in \mathscr H \text{ and } \gamma \geq 0. \end{equation*} To that end, by assumption \ref{a:ctra_coef}, it suffices to show that for all $k \in \mathbbm N$ and $(a,z) \in \SS_0$, \begin{equation} \label{eq:That_k} \tilde{T}^k (h+ \gamma) (a,z) \leq \tilde{T}^k h(a,z) + \gamma \beta^k \mathbbm E \,_z R_1 \cdots R_k.
\end{equation} Fix $h \in \mathscr H$, $\gamma \geq 0$, and let $h_{\gamma} (a,z) := h(a, z) + \gamma$. By the definition of $\tilde{T}$, we have \begin{align*} \tilde{T} h_\gamma (a,z) &= \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h_{\gamma} \left( \hat{R} \left[ a - (u')^{-1}(\tilde{T} h_{\gamma})(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} \\ & \leq \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h \left( \hat{R} \left[ a - (u')^{-1} (\tilde{T} h_{\gamma})(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} + \gamma \beta \mathbbm E \,_z R_1 \\ & \leq \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, h \left( \hat{R} \left[ a - (u')^{-1} (\tilde{T} h)(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} + \gamma \beta \mathbbm E \,_z R_1. \end{align*} Here, the first inequality is elementary and the second is due to the fact that $h \leq h_\gamma$ and $\tilde{T}$ is order preserving. Hence, $\tilde{T} (h+ \gamma) (a,z ) \leq \tilde{T} h (a,z) + \gamma \beta \mathbbm E \,_z R_1$ and \eqref{eq:That_k} holds for $k = 1$. Suppose that \eqref{eq:That_k} holds for arbitrary $k$. It remains to show that \eqref{eq:That_k} holds for $k+1$. Define \begin{equation*} f(z) := \gamma \beta^k \mathbbm E \,_z R_1 \cdots R_k. \end{equation*} By the induction hypothesis, the monotonicity of $\tilde{T}$ and the Markov property, \begin{align*} \tilde{T}^{k+1} h_\gamma (a,z) &= \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \, (\tilde{T}^k h_{\gamma}) \left( \hat{R} \left[ a - (u')^{-1}(\tilde{T}^{k+1} h_{\gamma})(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} \\ & \leq \max \left\{ \beta \mathbbm E \,_z \hat{R} \left(\tilde{T}^k h + f \right) \left( \hat{R} \left[ a - (u')^{-1}(\tilde{T}^{k+1} h_{\gamma})(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} \\ & \leq \max \left\{ \beta \mathbbm E \,_z \hat{R} (\tilde{T}^k h) \left( \hat{R} \left[ a - (u')^{-1}(\tilde{T}^{k+1} h_{\gamma})(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} \\ & \quad + \beta \mathbbm E \,_{z} R_1 f(z_1) \\ & \leq \max \left\{ \beta \mathbbm E \,_z \hat{R} (\tilde{T}^k h) \left( \hat{R} \left[ a - (u')^{-1}(\tilde{T}^{k+1} h)(a,z) \right] + \hat{Y}, \hat{z} \right), u'(a) \right\} \\ &\quad + \gamma \beta^{k+1} \mathbbm E \,_{z} R_1 \mathbbm E \,_{z_1} R_1 \cdots R_{k} \\ &= \tilde{T}^{k+1} h(a,z) + \gamma \beta^{k+1} \mathbbm E \,_z R_1 \cdots R_{k+1}. \end{align*} Hence, \eqref{eq:That_k} is verified by induction. This concludes the proof. \end{proof} With the results established above, we are now ready to prove theorem \ref{t:ctra_T}. \begin{proof}[Proof of theorem \ref{t:ctra_T}] In view of propositions \ref{pr:complete} and \ref{pr:suff_optpol}, to establish all the claims in theorem \ref{t:ctra_T}, we need only show that \begin{equation*} \rho(T^n c, T^n d) \leq \theta \rho(c,d) \quad \text{for all } \, c,d \in \mathscr C. \end{equation*} To this end, pick any $c,d \in \mathscr C$. Note that the topological conjugacy result established in lemma \ref{lm:conjug} implies that $\tilde{T} = H T H^{-1}$. Hence, \begin{equation*} \tilde{T}^n = (H T H^{-1}) \cdots (H T H^{-1}) = H T^n H^{-1} \quad \text{and} \quad \tilde{T}^n H = H T^n. \end{equation*} By the definition of $\rho$ and the contraction property established in lemma \ref{lm:ctra_That}, \begin{equation*} \rho(T^n c, T^n d) = d_{\infty}(H T^n c, H T^n d) = d_{\infty}(\tilde{T}^n Hc, \tilde{T}^n Hd) \leq \theta d_{\infty}(Hc, Hd). \end{equation*} The right hand side is just $\theta \rho(c,d)$, which completes the proof. 
\end{proof} \section{Appendix B: Proof of Section \ref{s:sto_stability} Results} \label{Appendix_B} Before turning to the results of each subsection, we prove a general lemma that is frequently used in later sections. Recall that, for all $c \in \mathscr C$, the value $\xi(a,z) := Tc(a,z)$ solves \begin{equation} \label{eq:T_opr_general} \left( u' \circ \xi \right)(a,z) = \max \left\{ \beta \mathbbm E \,_{z} \hat{R} \left(u' \circ c \right) \left( \hat{R} \left[a - \xi(a,z) \right] + \hat{Y}, \, \hat{z} \right), \, u'(a) \right\}. \end{equation} Let $c^* \in \mathscr C$ denote the optimal policy. For each $z \in \mathsf Z$ and $c \in \mathscr C$, define \begin{equation} \label{eq:a_bar} \bar{a}_c (z) := \left(u' \right)^{-1} \left[ \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) ( \hat{Y}, \, \hat{z} ) \right] \quad \text{and} \quad \bar{a}(z) := \bar{a}_{c^*} (z). \end{equation} The next result implies that the borrowing constraint binds if and only if wealth is below a certain threshold level. \begin{lemma} \label{lm:binding} For all $c \in \mathscr C$, $Tc(a,z) = a$ if and only if $a \leq \bar{a}_c (z)$. In particular, $c^*(a,z) = a$ if and only if $a \leq \bar{a}(z)$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:binding}] Let $a \leq \bar{a}_c (z)$. We claim that $\xi(a,z) = a$. Suppose to the contrary that $\xi(a,z) < a$. Then $(u' \circ \xi)(a,z) > u'(a)$. In view of \eqref{eq:T_opr_general}, we have \begin{align*} u'(a) < \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left( \hat{R} \left[a - \xi(a,z) \right] + \hat{Y}, \, \hat{z} \right) \leq \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) (\hat{Y}, \hat{z}) = u'[\bar{a}_c (z)]. \end{align*} From this we get $a > \bar{a}_c (z)$, which is a contradiction. Hence, $\xi(a,z) = a$. On the other hand, if $\xi(a,z) = a$, then $(u' \circ \xi)(a,z) = u'(a)$. By \eqref{eq:T_opr_general}, we have \begin{align*} u'(a) \geq \beta \mathbbm E \,_z \hat{R} \left( u' \circ c \right) (\hat{Y}, \, \hat{z}) = u'[\bar{a}_c (z)]. \end{align*} Hence, $a \leq \bar{a}_c (z)$. The first claim is verified. The second claim follows immediately from the first claim and the fact that $c^*$ is the unique fixed point of $T$ in $\mathscr C$. \end{proof} Given $c \in \mathscr C$, lemma \ref{lm:binding} implies that $\xi(a,z) := Tc (a,z) = a$ for $a \leq \bar{a}_c (z)$, and that for $a > \bar{a}_c (z)$, $\xi(a,z)$ solves \begin{equation*} (u' \circ \xi)(a,z) = \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left( \hat{R} \left[a - \xi(a,z) \right] + \hat{Y}, \, \hat{z} \right). \end{equation*} \subsection{Proof of section \ref{ss:exist_stat} results} Our first goal is to prove proposition \ref{pr:opt_pol_bd_frac}. To that end, recall $\alpha$ given by assumption \ref{a:suff_bd_in_prob}, and define the subspace $\mathscr C_1$ as \begin{equation} \label{eq::cC1} \mathscr C_1 := \left\{ c \in \mathscr C: \frac{c(a,z)}{a} \geq \alpha \quad \text{for all } (a,z) \in \SS_0 \right\}. \end{equation} \begin{lemma} \label{lm:cC1} $\mathscr C_1$ is a closed subset of $\mathscr C$, and $T c \in \mathscr C_1$ for all $c \in \mathscr C_1$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:cC1}] To see that $\mathscr C_1$ is closed, for a given sequence $\{ c_n \}$ in $\mathscr C_1$ and $c \in \mathscr C$ with $\rho( c_n , c) \rightarrow 0$, we need to verify that $c \in \mathscr C_1$.
This obviously holds since $c_n(a,z) /a \geq \alpha$ for all $n$ and $(a,z) \in \SS_0$, and, on the other hand, $\rho( c_n , c) \rightarrow 0$ implies that $c_n (a,z) \to c(a,z)$ for all $(a,z) \in \SS_0$. We next show that $T$ is a self-map on $\mathscr C_1$. Fix $c \in \mathscr C_1$. We have $Tc \in \mathscr C$ since $T$ is a self-map on $\mathscr C$. It remains to show that $\xi := Tc$ satisfies $\xi (a,z) \geq \alpha a$ for all $(a,z) \in \SS_0$. Suppose to the contrary that $\xi(a,z) < \alpha a$ for some $(a,z) \in \SS_0$. Then \begin{equation*} u'(\alpha a) < (u' \circ \xi)(a, z) = \max \left\{ \beta \mathbbm E \,_z \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - \xi(a,z) \right] + \hat{Y}, \, \hat{z} \right), u'(a) \right\}. \end{equation*} Since $u'(\alpha a) > u'(a)$ and $c \in \mathscr C_1$, this implies that \begin{align*} u'(\alpha a ) &< \beta \mathbbm E \,_z \hat{R} \left( u' \circ c \right) \left( \hat{R} \left[ a - \xi(a,z) \right] + \hat{Y}, \, \hat{z} \right) \\ &\leq \beta \mathbbm E \,_z \hat{R} u' \left( \alpha \hat{R} \left[ a - \xi(a,z) \right] + \alpha \hat{Y} \right) \\ & \leq \beta \mathbbm E \,_z \hat{R} u' \left[ \alpha \hat{R} (1 - \alpha) a + \alpha \hat{Y} \right] \leq \beta \mathbbm E \,_z \hat{R} \, u' \left[ \hat{R} (1 - \alpha) (\alpha a) \right]. \end{align*} This contradicts condition (1) of assumption \ref{a:suff_bd_in_prob}, since $(\alpha a, z) \in \SS_0$. Hence, $\xi(a,z) / a \geq \alpha$ for all $(a,z) \in \SS_0$ and we conclude that $Tc \in \mathscr C_1$. \end{proof} With this result, we are now ready to prove proposition \ref{pr:opt_pol_bd_frac}. \begin{proof}[Proof of proposition \ref{pr:opt_pol_bd_frac}] Since the claim obviously holds when $a=0$, it remains to verify that it holds on $\SS_0$. We have shown in theorem \ref{t:ctra_T} that $T$ is a contraction mapping on the complete metric space $(\mathscr C, \rho)$, with unique fixed point $c^*$. Since in addition $\mathscr C_1$ is a closed subset of $\mathscr C$ and $T \mathscr C_1 \subset \mathscr C_1$ by lemma \ref{lm:cC1}, we know that $c^* \in \mathscr C_1$. In summary, we have $c^*(a, z) \geq \alpha a$ for all $(a,z) \in \SS$. \end{proof} Our next goal is to prove theorem \ref{t:sta_exist}. To that end, recall the integer $n$ given by the second condition of assumption \ref{a:suff_bd_in_prob}. \begin{lemma} \label{lm:bd_in_prob_at} $\sup_{t \geq 0} \mathbbm E \,_{a,z} \, a_t < \infty$ for all $(a,z) \in \SS$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:bd_in_prob_at}] Since $c^*(0,z) = 0$, proposition \ref{pr:opt_pol_bd_frac} implies that $c^*(a,z) \geq \alpha a$ for all $(a,z) \in \SS$. For all $t \geq 1$, we can write $t = kn + j$, where $k \in \{0\} \cup \mathbbm N$ and $j \in \{0,1, \cdots, n-1\}$. Using these facts and \eqref{eq:trans_at}, we have \begin{align*} a_t &= R_t(a_{t-1} - c_{t-1}) + Y_t \leq (1 - \alpha) R_t a_{t-1} + Y_t \leq \cdots \\ & \leq (1 - \alpha)^t R_t \cdots R_1 a + (1 - \alpha)^{t-1} R_t \cdots R_2 Y_1 + \cdots + (1 - \alpha) R_t Y_{t-1} + Y_t \\ &= (1 - \alpha)^{kn+j} R_{kn +j} \cdots R_1 a + \sum_{\ell=1}^{j} (1 - \alpha)^{kn + j - \ell} R_{kn+j} \cdots R_{\ell+1} Y_{\ell} \\ & \quad + \sum_{m=1}^{k} \sum_{\ell=1}^{n} (1 - \alpha)^{mn - \ell} R_{kn + j} \cdots R_{(k-m)n + j + \ell + 1} Y_{(k-m)n + j + \ell} \end{align*} with probability one.
Hence, \begin{align*} \mathbbm E \,_{a,z} a_t &\leq (1 - \alpha)^t \mathbbm E \,_z R_t \cdots R_1 a + \sum_{\ell = 1}^{t} (1 - \alpha)^{t-\ell} \mathbbm E \,_z R_t \cdots R_{\ell+1} Y_{\ell} \\ &= (1 - \alpha)^{kn+j} \mathbbm E \,_z R_{kn +j} \cdots R_1 a + \sum_{\ell=1}^{j} (1 - \alpha)^{kn + j - \ell} \mathbbm E \,_z R_{kn+j} \cdots R_{\ell+1} Y_{\ell} \\ & \quad + \sum_{m=1}^{k} \sum_{\ell=1}^{n} (1 - \alpha)^{mn - \ell} \mathbbm E \,_z R_{kn + j} \cdots R_{(k-m)n + j + \ell + 1} Y_{(k-m)n + j + \ell} \end{align*} for all $(a,z) \in \SS$. Define \begin{equation*} \gamma := (1 - \alpha)^n \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \cdots R_n \quad \text{and} \quad M:= \max_{1 \leq \ell \leq n} \left[ (1 - \alpha)^\ell \sup_{z \in \mathsf Z} \mathbbm E \,_z R_\ell \cdots R_1 \right]. \end{equation*} Note that $\gamma < 1$ by assumption \ref{a:suff_bd_in_prob}-(2) and $M < \infty$ by assumption \ref{a:bd_sup_ereuprm} and the Markov property. Moreover, $M' := \sup_{t \geq 1} \mathbbm E \,_z Y_t < \infty$ by assumption \ref{a:bd_in_prob_Yt}. The Markov property then implies that for all $(a,z) \in \SS$ and $t \geq 0$, \begin{align*} \mathbbm E \,_{a,z} a_t &\leq \gamma^k (1 - \alpha)^{j} \mathbbm E \,_z R_{j} \cdots R_1 a + \gamma^k \sum_{\ell=1}^{j} (1 - \alpha)^{j - \ell} \mathbbm E \,_z R_{j} \cdots R_{\ell+1} Y_{\ell} \\ & \quad + \sum_{m=0}^{k-1} \gamma^m \sum_{\ell=1}^{n} (1 - \alpha)^{n - \ell} \mathbbm E \,_z R_{(k-m)n + j} \cdots R_{(k-m-1)n + j + \ell + 1} Y_{(k-m)n + j + \ell} \\ &\leq \gamma^k M a + \gamma^k M \sum_{\ell=1}^{j} \mathbbm E \,_z Y_\ell + \sum_{m=0}^{k-1} \gamma^m M \sum_{\ell=1}^{n} \mathbbm E \,_z Y_{(k-m-1)n + j + \ell} \\ &\leq Ma + MM'n + \sum_{m=0}^{\infty} \gamma^m MM' n < \infty. \end{align*} Hence, $\sup_{t \geq 0} \mathbbm E \,_{a,z} \, a_t < \infty$ for all $(a,z) \in \SS$, as was claimed. \end{proof} A function $w^* \colon \SS \to \mathbbm R_+$ is called \emph{norm-like} if all its sublevel sets (i.e., sets of the form $\{s \in \SS \colon w^*(s) \leq b \}$, $b \in \mathbbm R_+$) are precompact in $\SS$ (i.e., any sequence in a given sublevel set has a subsequence that converges to a point of $\SS$). \begin{proof}[Proof of theorem \ref{t:sta_exist}] Based on lemma~D.5.3 of \cite{meyn2009markov}, a stochastic kernel $Q$ is bounded in probability if and only if for all $s \in \SS$, there exists a norm-like function $w_s^* \colon \SS \to \mathbbm R_+$ such that the $(Q,s)$-Markov process $\{s_t\}_{t \geq 0}$ satisfies $\limsup_{t \to \infty} \mathbbm E \,_s \left[ w_s^*(s_t) \right] < \infty$. Fix $(a,z) \in \SS$. Since $P$ is bounded in probability by assumption \ref{a:z_bdd_in_prob}, there exists a norm-like function $w \colon \mathsf Z \to \mathbbm R_+$ such that $\limsup_{t \to \infty} \mathbbm E \,_z w (z_t) < \infty$. Then $w^* \colon \SS \rightarrow \mathbbm R_+$ defined by $w^*(a_0,z_0) := a_0 + w (z_0)$ is a norm-like function on $\SS$. The stochastic kernel $Q$ is then bounded in probability since lemma \ref{lm:bd_in_prob_at} implies that \begin{equation*} \limsup_{t \to \infty} \mathbbm E \,_{a,z} \, w^*(a_t, z_t) \leq \sup_{t \geq 0} \mathbbm E \,_{a,z} \, a_t + \limsup_{t \to \infty} \mathbbm E \,_z \, w(z_t) < \infty.
\end{equation*} Regarding the existence of a stationary distribution, since $c^*$ is continuous and assumption \ref{a:conti_ereuprm} holds, and we have shown in the proof of proposition \ref{pr:self_map} that \begin{equation*} P(z_n, \cdot) \otimes \nu \otimes \mu \stackrel{w}{\longrightarrow} P(z, \cdot) \otimes \nu \otimes \mu \end{equation*} whenever $z_n \to z$, a simple application of the generalized Fatou's lemma of \cite{feinberg2014fatou} (theorem~1.1), as in the proof of proposition \ref{pr:self_map}, shows that the stochastic kernel $Q$ is Feller. Since in addition $Q$ is bounded in probability, based on the Krylov-Bogolubov theorem (see, e.g., \cite{meyn2009markov}, proposition~12.1.3 and lemma~D.5.3), $Q$ admits at least one stationary distribution. \end{proof} \subsection{Proof of section \ref{ss:further_prop} results} We start by proving example \ref{eg:concave}. \begin{proof}[Proof of example \ref{eg:concave}] For each $c$ in $\mathscr C$ that is concave in the first argument, let $h_c (x, \hat{\omega}) := c ( \hat{R} x + \hat{Y}, \hat{z} )$, where $\hat{\omega} := ( \hat{R}, \hat{Y}, \hat{z} )$. Then $x \mapsto h_c (x, \hat{\omega})$ is concave. Since $u'(c) = c^{-\gamma}$, we have \begin{align*} &\left[ \beta \mathbbm E \,_z \hat{R} \, h_c(\alpha x_1 + (1 - \alpha) x_2, \hat{\omega})^{-\gamma} \right]^{-\frac{1}{\gamma}} \geq \left[ \beta \mathbbm E \,_z \hat{R} \left[ \alpha h_c (x_1, \hat{\omega}) + (1 - \alpha) h_c (x_2, \hat{\omega}) \right]^{-\gamma} \right]^{-\frac{1}{\gamma}} \\ &= \beta^{-\frac{1}{\gamma}} \left( \mathbbm E \,_z \left[ \alpha \hat{R}^{-\frac{1}{\gamma}} h_c(x_1, \hat{\omega}) + (1 - \alpha) \hat{R}^{-\frac{1}{\gamma}} h_c(x_2, \hat{\omega}) \right]^{-\gamma} \right)^{-\frac{1}{\gamma}} \\ &\geq \beta^{-\frac{1}{\gamma}} \left[ \left( \mathbbm E \,_z \left[ \alpha \hat{R}^{-\frac{1}{\gamma}} h_c(x_1, \hat{\omega}) \right]^{-\gamma} \right)^{-\frac{1}{\gamma}} + \left( \mathbbm E \,_z \left[ (1 - \alpha) \hat{R}^{-\frac{1}{\gamma}} h_c(x_2, \hat{\omega}) \right]^{-\gamma} \right)^{-\frac{1}{\gamma}} \right] \\ &= \alpha \left[ \beta \mathbbm E \,_z \hat{R} \, h_c(x_1, \hat{\omega})^{-\gamma} \right]^{-\frac{1}{\gamma}} + (1 - \alpha) \left[ \beta \mathbbm E \,_z \hat{R} \, h_c(x_2, \hat{\omega})^{-\gamma} \right]^{-\frac{1}{\gamma}}, \end{align*} where the second inequality is due to the generalized Minkowski inequality (see, e.g., \cite{hardy1952inequalities}, page~146, theorem~198). Hence, assumption \ref{a:concave} holds. \end{proof} Next, we aim to prove proposition \ref{pr:optpol_concave}. Recall $\mathscr C_1$ given by \eqref{eq::cC1}. Consider a further subspace $\mathscr C_2$ defined by \begin{equation} \label{eq:cC2} \mathscr C_2 := \left\{ c \in \mathscr C_1 \colon a \mapsto c(a,z) \text{ is concave for all } z \in \mathsf Z \right\}. \end{equation} \begin{lemma} \label{lm:self_map_cC2} $\mathscr C_2$ is a closed subset of the metric space $(\mathscr C, \rho)$, and $T c \in \mathscr C_2$ for all $c \in \mathscr C_2$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:self_map_cC2}] The proof of the first claim is straightforward and thus omitted. We now prove the second claim. Fix $c \in \mathscr C_2$. By lemma \ref{lm:cC1} we have $Tc \in \mathscr C_1$. It remains to show that $a \mapsto \xi(a, z) := Tc (a,z)$ is concave for all $z \in \mathsf Z$. Given $z \in \mathsf Z$, lemma \ref{lm:binding} implies that $\xi(a,z) = a$ for $a \leq \bar{a}_c(z)$ and that $\xi (a,z) < a$ for $a > \bar{a}_c(z)$.
Since in addition $a \mapsto \xi(a,z)$ is continuous and increasing, to show the concavity of $\xi$ with respect to $a$, it suffices to show that $a \mapsto \xi (a,z)$ is concave on $(\bar{a}_c(z), \infty)$. Suppose to the contrary that there exist some $z \in \mathsf Z$, $\alpha \in (0,1)$, and $a_1, a_2 \in (\bar{a}_c (z), \infty)$ such that
\begin{equation} \label{eq:as_ctdt}
\xi \left( \alpha a_1 + (1 - \alpha) a_2, \, z \right) < \alpha \xi(a_1, z) + (1 - \alpha) \xi(a_2, z).
\end{equation}
Let $h(a, z, \hat{\omega}):= \hat{R} \left[a - \xi(a, z) \right] + \hat{Y}$, where $\hat{\omega} := (\hat{R}, \hat{Y})$. Then by lemma \ref{lm:binding} (and the analysis that follows immediately after that lemma), we have
\begin{align*}
(u' \circ \xi) \left( \alpha a_1 + (1 - \alpha) a_2, \, z \right) &= \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left\{ h [\alpha a_1 + (1 - \alpha) a_2, \, z, \, \hat{\omega}], \, \hat{z} \right\} \\
&\leq \beta \mathbbm E \,_z \hat{R} \left( u' \circ c \right) \left[ \alpha h(a_1, z, \hat{\omega}) + (1 - \alpha) h(a_2, z, \hat{\omega}), \, \hat{z} \right].
\end{align*}
Using assumption \ref{a:concave} then yields
\begin{align*}
\xi (\alpha a_1 + (1 - \alpha) a_2, z) & \geq (u')^{-1} \left\{ \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left[ \alpha h(a_1, z, \hat{\omega}) + (1 - \alpha) h(a_2, z, \hat{\omega}), \, \hat{z} \right] \right\} \\
& \geq \alpha \left(u' \right)^{-1} \left\{ \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left[ h(a_1, z, \hat{\omega}), \, \hat{z} \right] \right\} + \\
& \quad \; (1 -\alpha) \left(u' \right)^{-1} \left\{ \beta \mathbbm E \,_z \hat{R} \left(u' \circ c \right) \left[ h(a_2, z, \hat{\omega}), \, \hat{z} \right] \right\} \\
& = \alpha \left(u' \right)^{-1} \left\{ \left( u' \circ \xi \right) (a_1, z) \right\} + (1 - \alpha) \left(u' \right)^{-1} \left\{ \left(u' \circ \xi \right) (a_2, z) \right\} \\
& = \alpha \, \xi(a_1, z) + (1 - \alpha) \, \xi(a_2, z).
\end{align*}
This contradicts our assumption in \eqref{eq:as_ctdt}. Hence, $a \mapsto \xi(a,z)$ is concave for all $z \in \mathsf Z$. This concludes the proof.
\end{proof}

Now we are ready to prove proposition \ref{pr:optpol_concave}.

\begin{proof}[Proof of proposition \ref{pr:optpol_concave}]
By theorem \ref{t:ctra_T}, we know that $T \colon \mathscr C \rightarrow \mathscr C$ is a contraction mapping with unique fixed point $c^*$. Since $\mathscr C_2$ is a closed subset of $\mathscr C$ and $T: \mathscr C_2 \rightarrow \mathscr C_2$ by lemma \ref{lm:self_map_cC2}, we know that $c^* \in \mathscr C_2$. The first claim is verified. Regarding the second claim, note that $c^* \in \mathscr C_2$ implies that $a \mapsto c^*(a,z)$ is increasing and concave for all $z \in \mathsf Z$. Hence, $a \mapsto \frac{c^*(a,z)}{a}$ is a decreasing function for all $z \in \mathsf Z$. Since in addition $c^*(a,z) \geq \alpha a$ for all $(a,z) \in \SS_0$ by proposition \ref{pr:opt_pol_bd_frac}, we know that $\alpha' := \lim_{a \rightarrow \infty} \frac{c^*(a,z)}{a}$ is well-defined and $\alpha' \geq \alpha$. Finally, $\alpha' < 1$ by lemma \ref{lm:binding} and the fact that $\bar{a}(z) < \infty$ (see footnote \ref{fn:abar<inf}). Hence, the second claim holds.
\end{proof}

\subsection{Proof of section \ref{ss:glb_stb} results}

We first prove the general result that the borrowing constraint binds in finite time with positive probability.
\begin{lemma} \label{lm:bind_fntime}
For all $(a,z) \in \SS$, we have $\mathbbm P_{a,z} \left( \cup_{t \geq 0} \{ c_t = a_t \} \right) > 0$.
\end{lemma}

\begin{proof}[Proof of lemma \ref{lm:bind_fntime}]
The claim holds trivially when $a=0$. Suppose the claim does not hold on $\SS_0$ (recall that $\SS_0 = \SS \backslash \{0\}$); then $\mathbbm P_{a,z} \left( \cap_{t \geq 0} \{c_t < a_t\} \right) = 1$ for some $(a,z) \in \SS_0$, i.e., the borrowing constraint never binds with probability one. Hence,
\begin{equation*}
\mathbbm P_{a,z} \left\{ (u' \circ c)(a_t, z_t) = \beta \mathbbm E \, \left[ R_{t+1} (u' \circ c) (a_{t+1}, z_{t+1}) \big| \mathscr F_{t} \right] \right\} = 1
\end{equation*}
for all $t \geq 0$, where $\mathscr F_t := \sigma(s_0, \cdots, s_t)$ with $s_t := (a_t, z_t)$. Then we have
\begin{align} \label{eq:u'c_ineq}
\left( u' \circ c \right)(a,z) &= \beta^t \mathbbm E \,_{a,z} \, R_1 \cdots R_t \left(u' \circ c \right)(a_t, z_t) \nonumber \\
& \leq \beta^t \mathbbm E \,_{a,z} \, R_1 \cdots R_t \left[ u'(a_t) + K \right] \nonumber \\
& \leq \beta^t \mathbbm E \,_{z} \, R_1 \cdots R_t \left[ u'(Y_t) + K \right]
\end{align}
for all $t \geq 1$. Let $t= kn +1$, where $n$ is the integer defined by assumption \ref{a:ctra_coef}. Based on assumption \ref{a:bd_sup_ereuprm} and the Markov property,
\begin{align*}
\beta^t \mathbbm E \,_z R_1 \cdots R_t &= \beta^t \mathbbm E \,_z R_1 \cdots R_{t-1} \mathbbm E \,_z (R_t \mid \mathscr F_{t-1}) = \beta^{t-1} \mathbbm E \,_z R_1 \cdots R_{t-1} \beta \mathbbm E \,_{z_{t-1}} R_1 \\
&\leq \left(\beta \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \right) (\beta^{nk} \mathbbm E \,_z R_1 \cdots R_{nk}) \leq \left(\beta \sup_{z \in \mathsf Z} \mathbbm E \,_z R_1 \right) \theta^k \to 0
\end{align*}
as $t \to \infty$, where $\theta \in [0,1)$ is given by assumption \ref{a:ctra_coef}. Similarly,
\begin{align*}
\beta^t \mathbbm E \,_z R_1 \cdots R_t u'(Y_t) &= \beta^t \mathbbm E \,_z R_1 \cdots R_{t-1} \mathbbm E \,_z \left[ R_t u'(Y_t) \mid \mathscr F_{t-1} \right] \\
&\leq \beta^{t} \mathbbm E \,_z R_1 \cdots R_{t-1} \mathbbm E \,_{z_{t-1}} \left[R_1 u'(Y_1) \right] \\
&\leq \left( \beta \sup_{z \in \mathsf Z} \mathbbm E \,_z [\hat{R} u'(\hat{Y})] \right) \beta^{nk} \mathbbm E \,_z R_1 \cdots R_{nk} \\
&\leq \left( \beta \sup_{z \in \mathsf Z} \mathbbm E \,_z [\hat{R} u'(\hat{Y})] \right) \theta^k \to 0
\end{align*}
as $t \to \infty$. Letting $t \to \infty$, \eqref{eq:u'c_ineq} implies that $\left(u' \circ c \right)(a,z) \leq 0$, which contradicts the fact that $u'>0$. Thus, we must have $\mathbbm P_{a,z} \left( \cup_{t \geq 0} \{ c_t = a_t\} \right) > 0$ for all $(a,z) \in \SS$.
\end{proof}

\subsubsection{Proof of section \ref{ss:gs_iid} results}

The next few results establish global stability and the law of large numbers for the case of an {\sc iid} $\{ z_t \}$ process. We say that a stochastic kernel $Q$ is \emph{increasing} if $s \mapsto \int h(s') Q(s, \diff s')$ is bounded and increasing whenever $h \colon \SS \to \mathbbm R$ is.

\begin{proof}[Proof of theorem \ref{t:gs_iid}]
Obviously, assumptions \ref{a:utility}, \ref{a:ctra_coef}--\ref{a:conti_ereuprm} and \ref{a:suff_bd_in_prob}--\ref{a:concave} hold under the stated assumptions of theorem \ref{t:gs_iid}. Based on proposition \ref{pr:optpol_concave}, we have $c^* \in \mathscr C_2$. In particular, $a \mapsto c^*(a)$ is continuous, and $a \mapsto \frac{c^*(a)}{a}$ is decreasing on $(0, \infty)$. Hence, $a_{t+1}$ is continuous and increasing in $a_t$ (see equation \eqref{eq:dyn_sys_iid}).
The stochastic kernel $Q$ is then Feller and increasing. Moreover, $Q$ is bounded in probability by lemma \ref{lm:bd_in_prob_at}. Fix $a_0$ and $a_0'$ in $\mathbbm R_+$ with $a_0' \leq a_0$. Let $\{a_t\}$ and $\{a_t'\}$ be two independent Markov processes generated by \eqref{eq:dyn_sys_iid}, starting at $a_0$ and $a_0'$ respectively. Let $\{ c_t\}$ and $\{c_t'\}$ be the corresponding optimal consumption paths. By lemma \ref{lm:bind_fntime}, $\mathbbm P_{a_0} (\cup_{t \geq 0} \{c_t = a_t \}) > 0$, i.e., the borrowing constraint binds in finite time with positive probability. Hence, with positive probability, $a_{t+1} = Y_{t+1} \leq R_{t+1} (a_t' - c_t') + Y_{t+1} = a_{t+1}'$. In other words, $\mathbbm P \{ a_{t+1} \leq a_{t+1}'\} > 0$ and $Q$ is order reversing. Since $Q$ is increasing, Feller, order reversing, and bounded in probability, based on theorem~3.2 of \cite{kamihigashi2014stochastic}, $Q$ is globally stable.
\end{proof}

\begin{proof}[Proof of theorem \ref{t:LLN_iid}]
We have shown in the proof of theorem \ref{t:gs_iid} that the stochastic kernel $Q$ is increasing, bounded in probability, and order reversing. Hence, $Q$ is monotone ergodic by proposition~4.1 of \cite{kamihigashi2016seeking}. The two claims of theorem \ref{t:LLN_iid} then follow from theorem \ref{t:gs_iid} (of this paper), and corollary~3.1 and theorem~3.2 of \cite{kamihigashi2016seeking}. In particular, if we pair $\SS$ with its usual pointwise order $\leq$, then assumption~3.1 of \cite{kamihigashi2016seeking} obviously holds.
\end{proof}

\subsubsection{Proof of section \ref{ss:gs_general} results}

Our next goal is to prove theorems \ref{t:gs_gnl_ergo_LLN}--\ref{t:gs_gnl}. In the proofs we apply the theory of \cite{meyn2009markov}. Important definitions (with their locations in \cite{meyn2009markov}) include: $\psi$-irreducibility (section~4.2), small set (page~102), strong aperiodicity (page~114), petite set (page~117), Harris chain (page~199), and positivity (page~230). Note that since $\mathbbm R^m$ paired with its Euclidean topology is a second countable topological space (i.e., its topology has a countable base), and $\mathbbm R_+$ and $\mathsf Z$ are respectively Borel subsets of $\mathbbm R$ and $\mathbbm R^m$ paired with the relative topologies, $\mathbbm R_+$ and $\mathsf Z$ are also second countable. As a result, for $\SS := \mathbbm R_+ \times \mathsf Z$, it always holds that (see, e.g., page~149, theorem~4.44 of \cite{guide2006infinite})
\begin{equation*}
\mathscr B(\SS) = \mathscr B (\mathbbm R_+) \otimes \mathscr B(\mathsf Z).
\end{equation*}
Recall the Lebesgue measure $\lambda$ on $\mathscr B(\mathbbm R_+)$ and the measure $\vartheta$ on $\mathscr B(\mathsf Z)$ defined in section \ref{ss:gs_general}. Let $\lambda \times \vartheta$ be the product measure on $\mathscr B (\SS)$.

\begin{lemma} \label{lm:inf_abar}
Let the function $\bar{a}$ be defined as in \eqref{eq:a_bar}. Then $\inf_{z \in \mathsf Z} \bar{a}(z) > 0$.
\end{lemma}

\begin{proof}[Proof of lemma \ref{lm:inf_abar}]
Since $c^* \in \mathscr C$, there exists a constant $K > 0$ such that
%
\begin{equation*}
0 < (u' \circ c^*) (a, z) \leq u'(a) + K \quad \text{for all } (a,z) \in \SS_0.
\end{equation*}
%
Assumption \ref{a:bd_sup_ereuprm} then implies that
%
\begin{equation*}
\sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} (u' \circ c^*) (\hat{Y}, \hat{z}) \leq \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} u' (\hat{Y}) + K \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} < \infty.
\end{equation*} % Then, by the definition of $\bar{a}$ and the properties of $u$, % \begin{equation*} \inf_{z \in \mathsf Z} \bar{a} (z) = (u')^{-1} \left[ \beta \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} (u' \circ c^*) (\hat{Y}, \hat{z}) \right] > 0, \end{equation*} % as claimed. \end{proof} Recall the compact subset $\mathsf C \subset \mathsf Z$ and $\delta_Y > 0$ given by assumption \ref{a:pos_dens}. Let \begin{equation} \label{eq:C'D} \mathsf C' := \left[ 0, \, \min \left\{ \delta_Y, \, \inf_{z \in \mathsf Z} \bar{a}(z) \right\} \right] \quad \text{and} \quad \mathsf D := \mathsf C' \times \mathsf C \in \mathscr B(\SS). \end{equation} \begin{lemma} \label{lm:psi_irr} The Markov process $\{ (a_t,z_t) \}_{t \geq 0}$ is $\psi$-irreducible. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:psi_irr}] We define the measure $\varphi$ on $\mathscr B(\SS)$ by % \begin{equation*} \varphi(A) := (\lambda \times \vartheta) (A \cap \mathsf D) \quad \text{for } A \in \mathscr B (\SS). \end{equation*} % Then $\varphi$ is a nontrivial measure. In particular, $\varphi (\SS) = (\lambda \times \vartheta) (\mathsf D) = \lambda(\mathsf C') \vartheta(\mathsf C) > 0$ since $\lambda (\mathsf C') > 0$ by lemma \ref{lm:inf_abar} and $\vartheta (\mathsf C) > 0$ by assumption \ref{a:pos_dens}. For fixed $(a,z) \in \SS$ and $A \in \mathscr B(\SS)$ with $\varphi (A) > 0$, by lemma \ref{lm:binding}, % \begin{align} \label{eq:bind_ineq} \mathbbm P_{(a,z)} \{ (a_{t+1}, z_{t+1}) \in A \} &\geq \mathbbm P_{(a,z)} \{ (a_{t+1}, z_{t+1}) \in A, \, a_t \leq \bar{a}(z_t) \} \nonumber \\ &= \mathbbm P_{(a,z)} \{ (a_{t+1}, z_{t+1}) \in A \mid c_t = a_t \} \, \mathbbm P_{(a,z)} \{ c_t = a_t \} \nonumber \\ &=\mathbbm P_{(a,z)} \{ (Y_{t+1}, z_{t+1}) \in A \mid c_t = a_t \} \, \mathbbm P_{(a,z)} \{ c_t = a_t \} \nonumber \\ &=\mathbbm P_{(a,z)} \{ (Y_{t+1}, z_{t+1}) \in A, \, a_t \leq \bar{a}(z_t) \}. \end{align} % Note that for all $z' \in \mathsf Z$, by assumption \ref{a:pos_dens}, $f_L(Y'' \mid z'') p(z'' \mid z') > 0$ whenever $(Y'', z'') \in \mathsf D$. Since in addition $\varphi (A) = (\lambda \times \vartheta)(A \cap \mathsf D) > 0$, we have % \begin{equation*} \int_A f_L(Y'' \mid z'') p(z'' \mid z') (\lambda \times \vartheta) [\diff (Y'', z'')] > 0 \quad \text{for all } z' \in \mathsf Z. \end{equation*} % Let $\triangle := \mathbbm P_{(a,z)} \{ (a_{t+1}, z_{t+1}) \in A \}$ and $E := \{ (a',z') \in \SS \colon a' \leq \bar{a}(z') \}$. Notice that by lemma \ref{lm:binding} and lemma \ref{lm:bind_fntime}, there exists $t \in \mathbbm N$ such that % \begin{equation*} Q^t \left( (a,z), E \right) = \mathbbm P_{(a,z)} \{ a_t \leq \bar{a}(z_t) \} > 0. \end{equation*} % Hence, \eqref{eq:bind_ineq} implies that % \begin{align*} \triangle &\geq \int_E \left\{ \int_A f_L (Y'' \mid z'') p(z'' \mid z') (\lambda \times \vartheta) [\diff (Y'', z'')] \right\} Q^t \left( (a,z), \diff (a', z') \right) > 0. \end{align*} % Therefore, we have shown that any measurable subset with positive $\varphi$ measure can be reached in finite time with positive probability, i.e., $\{ (a_t,z_t)\}$ is $\varphi$-irreducible. Based on proposition~4.2.2 of \cite{meyn2009markov}, there exists a maximal (in the sense of absolute continuity) probability measure $\psi$ on $\mathscr B(\SS)$ such that $\{ (a_t,z_t)\}$ is $\psi$-irreducible. \end{proof} \begin{lemma} \label{lm:str_aperi} The Markov process $\{ (a_t, z_t)\}_{t \geq 0}$ is strongly aperiodic. 
\end{lemma}

\begin{proof}[Proof of lemma \ref{lm:str_aperi}]
By the definition of strong aperiodicity, we need to show that there exists a $v_1$-small set $D$ with $v_1 (D) > 0$, i.e., there exists a nontrivial measure $v_1$ on $\mathscr B (\SS)$ and a subset $D \in \mathscr B (\SS)$ such that $v_1 (D) > 0$ and
%
\begin{equation} \label{eq:small}
\inf_{(a,z) \in D} Q \left((a,z), A \right) \geq v_1 \left( A \right) \quad \text{for all } A \in \mathscr B (\SS).
\end{equation}
%
Let $\mathsf D$ be defined as in \eqref{eq:C'D}. We show that $\mathsf D$ satisfies the above conditions. Let
%
\begin{equation*}
r(a', z') := f_L (a' \mid z') \, \inf_{z \in \mathsf C} p(z' \mid z), \qquad (a',z') \in \SS.
\end{equation*}
%
Since by assumption \ref{a:pos_dens}, $p(z' \mid z)$ is strictly positive on $\mathsf C \times \mathsf Z$ and continuous in $z$, and $f_L(Y' \mid z')$ is strictly positive on $(0, \delta_Y) \times \mathsf C$, the definition of $\mathsf D$ implies that $r(a',z')$ is strictly positive whenever $(a',z') \in \mathsf D$. Define the measure $v_1$ on $\mathscr B(\SS)$ by
%
\begin{equation*}
v_1 (A) := \int_A r(a',z') (\lambda \times \vartheta) [\diff (a',z')] \quad \text{for } A \in \mathscr B(\SS).
\end{equation*}
%
Since $(\lambda \times \vartheta) (\mathsf D) > 0$ as shown in the proof of lemma \ref{lm:psi_irr} and $r(a',z') >0$ on $\mathsf D$, we have $v_1 (\mathsf D) > 0$, which also implies that $v_1$ is a nontrivial measure. Let $g[(a',z') \mid (a,z)]$ denote the density representation of the stochastic kernel $Q$ when $(a,z) \in \mathsf D$. Lemma \ref{lm:binding} implies that
%
\begin{equation*}
g[(a',z') \mid (a,z)] = f_L (a' \mid z') p(z'\mid z), \qquad (a,z) \in \mathsf D.
\end{equation*}
%
Hence, for all $(a,z) \in \mathsf D$ and $A \in \mathscr B(\SS)$,
%
\begin{align*}
Q \left((a,z), A \right) &= \int_A g[(a',z') \mid (a,z)] (\lambda \times \vartheta) [\diff (a',z')] \\
&\geq \int_A r(a',z') (\lambda \times \vartheta) [\diff (a',z')] = v_1 (A).
\end{align*}
%
This implies that condition \eqref{eq:small} holds. Hence, $\{(a_t, z_t)\}_{t \geq 0}$ is strongly aperiodic.
\end{proof}

\begin{proof}[Proof of theorem \ref{t:gs_gnl_ergo_LLN}]
We first show that $\{(a_t, z_t)\}$ is a positive Harris chain. Positivity has been established in theorem~\ref{t:sta_exist}. To show Harris recurrence, by lemma~6.1.4, theorem~6.2.9 and theorem~18.3.2 of \cite{meyn2009markov}, it suffices to verify
%
\begin{enumerate}
\item[(a)] $Q$ is Feller and bounded in probability, and
\item[(b)] $\{ (a_t, z_t)\}$ is $\psi$-irreducible, and the support of $\psi$ has non-empty interior.
\end{enumerate}
%
Claim~(a) is already proved in theorem \ref{t:sta_exist}. Regarding claim~(b), in lemma~\ref{lm:psi_irr} we have shown that $\{(a_t,z_t) \}$ is $\varphi$-irreducible and thus $\psi$-irreducible, where $\psi$ is maximal in the sense that $\psi(A) = 0$ implies $\varphi (A)=0$ for all $A \in \mathscr B(\SS)$. This also implies that $\psi (A) > 0$ whenever $\varphi (A) > 0$. Recall that $\varphi (A) := (\lambda \times \vartheta) (A \cap \mathsf D)$, where $\mathsf D := \mathsf C' \times \mathsf C$ is defined by \eqref{eq:C'D}. Since by assumption \ref{a:pos_dens}, the support of $\vartheta$ contains $\mathsf C$, which has nonempty interior, and the support of $\lambda$ (the Lebesgue measure) contains the interval $\mathsf C'$ (of positive $\lambda$ measure), the support of $\varphi$ contains $\mathsf D = \mathsf C' \times \mathsf C$, which has nonempty interior.
As a result, the support of $\psi$ contains $\mathsf D$ and thus has nonempty interior. Claim~(b) is verified. Therefore, $\{(a_t, z_t)\}$ is a positive Harris chain. Since in addition we have shown in lemmas \ref{lm:psi_irr}--\ref{lm:str_aperi} that $\{(a_t, z_t)\}$ is $\psi$-irreducible and strongly aperiodic, based on theorem~13.0.1 and theorem~17.1.7 of \cite{meyn2009markov}, the stated claims of our theorem hold. This concludes the proof. % \end{proof} Our next goal is to prove theorem \ref{t:gs_gnl}. We start by proving several lemmas. \begin{lemma} \label{lm:v2small} The set $B:= [0, d] \times \{z\}$ is a petite set for all $d \in (0, \infty)$ and $z \in \mathsf Z$. \end{lemma} \begin{proof}[Proof of lemma \ref{lm:v2small}] Since any small set is petite, it suffices to show that $B$ is a $v_2$-small set, i.e., there exists a nontrivial measure $v_{2}$ on $\mathscr B(\SS)$ such that % \begin{equation} \label{eq:v2small} \inf_{(a,z) \in B} Q^2((a,z), A) \geq v_{2} (A) \quad \text{for all } A \in \mathscr B(\SS). \end{equation} % Without loss of generality, we assume that $d$ is large enough. For $a \neq c^*(a,z)$, let % \begin{equation} \label{eq:f_dens} f \left(a' \mid a,z,z' \right) := \frac{1}{a - c^*(a,z)} \int_{[0,a']} f_C \left( \frac{a' - Y'}{a - c^*(a,z)} \, \Big| \, z' \right) f_L \left( Y' \mid z' \right) \diff Y', \end{equation} % while $f \left( \cdot \mid a,z,z' \right) := f_L (\cdot \mid z')$ for $a = c^*(a,z)$. Let $g \left[ (a',z') \mid (a,z) \right]$ be the density corresponding to the stochastic kernel $Q$. Since $\{ \zeta_t\}$ and $\{ \eta_t\}$ are mutually independent by assumption \ref{a:geo_drift_Yt}, $g$ satisfies % \begin{equation*} \label{eq:g_dens} g \left[ \left( a',z' \right) \mid (a,z) \right] = f \left( a' \mid a,z, z' \right) p \left( z' \mid z \right). \end{equation*} Recall that we have shown in the proof of proposition \ref{pr:optpol_concave} that $a \mapsto c^*(a,z) / a$ is decreasing for all $z \in \mathsf Z$. This implies that, for the dynamical system \eqref{eq:dyn_sys}, $a_{t+1}$ is increasing in $a_t$ with probability one. Since in addition $c^*(a,z) = a$ if and only if $a \leq \bar{a}(z)$ by lemma \ref{lm:binding}, we have % \begin{align*} Q^2((a,z), A) &= \mathbbm P_{a,z} \left\{ (a_2, z_2) \in A \right\} \geq \mathbbm P_{a,z} \left\{ (a_2, z_2) \in A, \, a_1 \leq \bar{a}(z_1) \right\} \\ &= \mathbbm P_{a,z} \left\{ (a_2, z_2) \in A \mid a_1 \leq \bar{a}(z_1) \right\} \mathbbm P_{a,z} \left\{ a_1 \leq \bar{a}(z_1) \right\} \\ &= \mathbbm P_{a,z} \left\{ (Y_2, z_2) \in A \mid a_1 \leq \bar{a}(z_1) \right\} \mathbbm P_{a,z} \left\{ a_1 \leq \bar{a}(z_1) \right\} \\ &= \mathbbm P \left\{ (Y_2, z_2) \in A, \, a_1 \leq \bar{a}(z_1) \mid (a_0,z_0) = (a,z) \right\} \\ &\geq \mathbbm P \left\{ (Y_2, z_2) \in A, \, a_1 \leq \bar{a}(z_1) \mid (a_0,z_0) = (d,z) \right\} =: v_2(A) \end{align*} % for all $(a,z) \in B$, where the last inequality follows from the fact that $a_{t+1}$ is increasing in $a_t$ (shown above), which indicates that for all fixed $(a, z) \in B$ and $z_1 \in \mathsf Z$, % \begin{equation*} \int f(a_1\mid a, z, z_1) \mathbbm 1 \{ a_1 \leq \bar{a}(z_1) \} \diff a_1 \geq \int f(a_1\mid d, z, z_1) \mathbbm 1 \{ a_1 \leq \bar{a}(z_1) \} \diff a_1 > 0. \end{equation*} % We now show that $v_2$ defined this way is a nontrivial measure on $\mathscr B(\SS)$. Obviously, $v_2$ is a measure. Moreover, for fixed $z \in \mathsf Z$, $c^*(a,z) / a$ is decreasing in $a$, strictly less than one as $a$ gets large, and bounded below by $\alpha \in (0,1)$. 
Hence, there exists $\alpha' \in (0,1)$ such that $c^*(a,z) / a \leq \alpha'$ as $a$ gets large. Hence, $a - c^*(a,z) \geq (1 - \alpha') a$, which implies that $a - c^*(a,z) \to \infty$ as $a \to \infty$. Using lemma \ref{lm:binding} again shows that $f(a' \mid a,z,z')$ satisfies \eqref{eq:f_dens} as $a$ gets large. Let $\underline{a} := \inf_{z \in \mathsf Z} \bar{a}(z)$. Then $\underline{a} > 0$ by lemma \ref{lm:inf_abar}. Recall $\delta_R > 0$, $\delta_Y > 0$ and the compact subset $\mathsf C \subset \mathsf Z$ defined by assumption \ref{a:pos_dens}. Then % \begin{equation*} 0 < \frac{\underline{a}}{d - c^*(d,z)} < \delta_R \quad \text{as $d$ gets large.} \end{equation*} % Since in addition $f_L (Y \mid z)$ is strictly positive on $(0, \delta_Y) \times \mathsf C$ and $f_C (R \mid z)$ is strictly positive on $(0, \delta_R) \times \mathsf C$ by assumptions \ref{a:pos_dens}--\ref{a:geo_drift_Yt}, for $d$ that is large enough, $f(a' \mid d,z,z')$ is defined by \eqref{eq:f_dens} and it is strictly positive for all $(a',z') \in (0, \underline{a}) \times \mathsf C$. Moreover, since $p(z' \mid z)$ is strictly positive on $\mathsf C \times \mathsf Z$ and $\vartheta (\mathsf C) > 0$ by assumption \ref{a:pos_dens}, % \begin{align*} v_2 (\SS) &= \mathbbm P_{(d,z)} \{ a_1 \leq \bar{a} (z_1) \} \geq \mathbbm P_{(d,z)} \left\{ a_1 \leq \underline{a} \right\} \\ &= \int_{\mathsf Z} \left[ \int_{[0, \underline{a}]} f(a' \mid d, z, z') \diff a' \right] p(z'\mid z) \vartheta (\diff z') > 0. \end{align*} % Hence, $v_2$ is a nontrivial measure on $\mathscr B (\SS)$. Since in addition $z$ is the only element of $\mathsf Z$ that appears in the analytical form of $B$, \eqref{eq:v2small} holds and thus $B$ is petite. \end{proof} In the following, we let $\alpha \in [0,1)$ and $n \in \mathbbm N$ be defined as in assumption \ref{a:suff_bd_in_prob}. \begin{lemma} \label{lm:geo_drift} There exist a petite set $B$, constants $b < \infty$, $\rho > 0$ and a measurable map $V \colon \SS \rightarrow [1, \infty)$ such that, for all $(a,z) \in \SS$, % \begin{equation*} \mathbbm E \,_{a,z} V(a_n, z_n) - V(a,z) \leq - \rho V(a,z) + b \mathbbm 1 \{(a,z) \in B \}. \end{equation*} % \end{lemma} \begin{proof}[Proof of lemma \ref{lm:geo_drift}] By assumption \ref{a:geo_drift_Yt}, there exists $q'' \in \mathbbm R_+$ such that % \begin{equation*} \mathbbm E \,_z Y_t \leq q^{t-1} \mathbbm E \,_z Y_1 + q'' \quad \text{for all } t \in \mathbbm N \text{ and } z \in \mathsf Z. \end{equation*} % Since $c^*(a,z) \geq \alpha a$ for all $(a,z) \in \SS$ by proposition \ref{pr:opt_pol_bd_frac}, $M:= \sup_{z \in \mathsf Z} \mathbbm E \,_z \hat{R} < \infty$ by assumption \ref{a:bd_sup_ereuprm}, and $\gamma := (1 - \alpha)^n \sup_{z \in \mathsf Z} \mathbbm E \,_z R_n \cdots R_1 < 1$ by assumption \ref{a:suff_bd_in_prob}, we have % \begin{align*} \mathbbm E \,_{a,z} a_n &\leq (1 - \alpha)^n \mathbbm E \,_z R_n \cdots R_1 a + \sum_{t = 1}^{n} (1 - \alpha)^{n-t} \mathbbm E \,_z R_n \cdots R_{t+1} Y_t \\ &\leq \gamma a + \sum_{t=1}^{n} (1 - \alpha)^{n-t} M^{n-t} \mathbbm E \,_z Y_t \leq \gamma a + \sum_{t=1}^{n} (1 - \alpha)^{n-t} M^{n-t} (q^{t-1} \mathbbm E \,_z Y_1 + q''). \end{align*} % Define $L := \sum_{t=1}^{n} (1 - \alpha)^{n-t} M^{n-t}$ and $\tilde{L} := q'' L$. Then $L, \tilde{L} \in \mathbbm R_+$ and the above inequality implies that % \begin{align*} \mathbbm E \,_{a,z} a_n \leq \gamma a + L \mathbbm E \,_z Y_1 + \tilde{L} \quad \text{for all } (a,z) \in \SS. 
\end{align*}
%
Choose $m \in \mathbbm R_+$ such that $1 - q^n - L / m > 0$ (such an $m$ is available since $q \in [0,1)$ by assumption \ref{a:geo_drift_Yt}). Let $V$ be defined as in \eqref{eq:V_func}, i.e., $V(a,z) = a + m \mathbbm E \,_z Y_1 + 1$. Then the above results imply that
%
\begin{align*}
\mathbbm E \,_{a,z} V(a_n, z_n) &= \mathbbm E \,_{a,z} a_n + m \, \mathbbm E \,_z \mathbbm E \,_{z_n} Y_1 + 1 = \mathbbm E \,_{a,z} a_n + m \, \mathbbm E \,_z Y_{n+1} + 1 \\
& \leq \gamma a + L \mathbbm E \,_z Y_1 + \tilde{L} + m (q^n \mathbbm E \,_z Y_1 + q'') + 1 \\
&= \gamma a + (L/m + q^n) m \mathbbm E \,_z Y_1 + \tilde{L} + m q'' + 1.
\end{align*}
%
Let $\tilde{\rho} := \min \left\{ 1 - \gamma, \, 1 - q^n - L/m \right\}$. Then $\tilde{\rho} > 0$ by assumption \ref{a:suff_bd_in_prob} and the construction of $m$. Thus,
%
\begin{align} \label{eq:dri_ineq1}
&\mathbbm E \,_{a,z} V(a_n, z_n) - V(a, z) \nonumber \\
&\leq - (1 - \gamma) a - \left( 1 - q^n - L / m \right) m \, \mathbbm E \,_z Y_1 + \tilde{L} + m q'' \nonumber \\
&\leq -\tilde{\rho} \left( a + m \, \mathbbm E \,_z Y_1 \right) + \tilde{L} + m q'' = -\tilde{\rho} V(a,z) + \tilde{\rho} + \tilde{L} + m q''.
\end{align}
%
Choose $\rho \in (0, \tilde{\rho})$ and $d \in \mathbbm R_+$ such that $(\tilde{\rho} - \rho) d > \tilde{\rho} + \tilde{L} + m q''$. Fix $z_0 \in \mathsf Z$ and let $B:= [0,d] \times \{z_0\}$. Lemma \ref{lm:v2small} implies that $B$ is a petite set. Notice that
%
\begin{equation*}
V(a,z) = a + m \, \mathbbm E \,_z Y_1 + 1 > d \quad \text{ for all } (a,z) \notin B.
\end{equation*}
%
Hence, \eqref{eq:dri_ineq1} implies that for all $(a,z) \notin B$, we have
%
\begin{align} \label{eq:dri_ineq2}
&\mathbbm E \,_{a,z} V(a_n, z_n) - V(a, z) \leq -\tilde{\rho} V(a,z) + \tilde{\rho} + \tilde{L} + m q'' \nonumber \\
&= -\rho V(a,z) - (\tilde{\rho} - \rho) V(a,z) + \tilde{\rho} + \tilde{L} + m q'' \nonumber \\
&< -\rho V(a,z) - (\tilde{\rho} - \rho) d + \tilde{\rho} + \tilde{L} + m q'' < -\rho V(a,z).
\end{align}
%
Let $b:= \tilde{\rho} + \tilde{L} + m q''$. Then by \eqref{eq:dri_ineq1}--\eqref{eq:dri_ineq2}, we have
%
\begin{equation*}
\mathbbm E \,_{a,z} V(a_n, z_n) - V(a, z) \leq -\rho V(a,z) + b \mathbbm 1 \{ (a,z) \in B \}
\end{equation*}
%
for all $(a,z) \in \SS$. This concludes the proof.
\end{proof}

\begin{proof}[Proof of theorem \ref{t:gs_gnl}]
That $Q$ is $V$-geometrically ergodic can be proved by applying theorem~19.1.3 (or proposition~5.4.5 and theorem~15.0.1) of \cite{meyn2009markov}. All the required conditions in those theorems have been established in our lemmas \ref{lm:psi_irr}--\ref{lm:geo_drift} above.
\end{proof}

\bibliographystyle{ecta}
\begin{abstract}
The political discourse in Western European countries such as Germany has recently seen a resurgence of the topic of refugees, fueled by an influx of refugees from various Middle Eastern and African countries. Even though the topic of refugees evidently plays a large role in the online and offline politics of the affected countries, the fact that protests against refugees stem from the right-wing political spectrum has led to the corresponding media being shared in a decentralized fashion, making an analysis of the underlying social and mediatic networks difficult. In order to contribute to the analysis of these processes, we present a quantitative study of the social media activities of a contemporary nationwide protest movement against local refugee housing in Germany, which organizes itself via dedicated Facebook pages per city. We analyse data from 136 such protest pages in 2015, containing more than 46,000 posts and more than one million interactions by more than 200,000 users. In order to learn about the patterns of communication and interaction among users of far-right social media sites and pages, we investigate the temporal characteristics of the social media activities of this protest movement, as well as the connectedness of the interactions of its participants. We find several activity metrics such as the number of posts issued, discussion volume about crime and housing costs, negative polarity in comments, and user engagement to peak in late 2015, coinciding with chancellor Angela Merkel's much criticized decision of September 2015 to temporarily admit the entry of Syrian refugees to Germany. Furthermore, our evidence suggests a low degree of direct connectedness of participants in this movement (indicated, i.a., by a lack of geographical collaboration patterns), yet we encounter a strong affiliation of the pages' user base with far-right political parties.
\end{abstract}

\section{Introduction}

In recent years, Europe has experienced a massive influx of refugees from Middle Eastern and African regions, mainly due to civil wars and economic stagnation in these areas. In Germany, this influx peaked in 2015 with 890,000 people seeking asylum~\cite{BMI2016}, an order of magnitude more than the average number in the preceding ten years; in early September of 2015, chancellor Angela Merkel decided to admit the entry of Syrian refugees stuck in South-East European countries. These developments have been accompanied by a steep rise in popularity of German right-wing organizations, especially in the form of the political party \emph{AfD -- Alternative für Deutschland} (``Alternative for Germany'')~\cite{Arzheimer2016}, which has managed to enter the European parliament as well as multiple German state parliaments since its inception in 2012. The AfD and other right-wing organizations successfully leverage social media to communicate with their followers; recent research shows that their gains in popularity are highly correlated with growing interaction rates and user engagement on Facebook~\cite{Schelter2016}.

The refugees in Germany have been registered and temporarily placed in hastily erected shelters distributed all over Germany. As a reaction to these refugee shelters (or mere plans to set up refugee housing), a large number of local protest movements have formed against their placement in the corresponding towns.
Also, refugee shelters and refugees have become targets of a series of more than a thousand crimes in 2015~\cite{AmadeuAntonio2015}, including arson attacks on buildings designated for refugees as well as attacks with explosives against inhabited shelters.

Reports indicate that the communication within anti-refugee housing movements happens via dedicated Facebook pages, often titled ``Nein zum Heim'' (``No to the shelter'') or ``\emph{X} wehrt sich'' (``\emph{X} fights back'', where \emph{X} usually stands for the name of a city). Many of these pages promote racist, xenophobic and islamophobic views. Moreover, setting up such Facebook pages to organize protests is explicitly recommended in guidelines published by radical right-wing organizations~\cite{DerDritteWeg2015}, and some of the pages openly advertise such organisations.\footnote{e.g., \url{https://facebook.com/nzh.koepenick/}, \url{https://facebook.com/Kein-Asylheim-in-der-Reinhardt-Kaserne-826300594067680/}, \url{https://facebook.com/Kein-Asylanten-Containerdorf-in-Falkenberg-1497894543825007/}}

To date, we are not aware of a comprehensive quantitative study of this phenomenon, even though the topic remains at the forefront of German politics as of 2017. To fill this gap, this paper presents a limited quantitative study on the scale of 136 protest pages in 2015, whose contents we have crawled, including more than one million interactions of more than 200,000 users with these pages. Given this data, we focus our efforts on two research questions:
\begin{itemize}
\item[\textbf{RQ~1:}] What are the temporal characteristics of the social media activities of this protest movement?
\item[\textbf{RQ~2:}] What is the degree of connectedness and cooperation in this protest movement?
\end{itemize}
The goal of RQ~1 is to obtain insights into general activity patterns of this protest movement; we therefore analyze summary statistics of the activity on the social media pages. Our main aim is to gain insight into the topics and the general tenor of the content posted on these pages, and to find hints on how these activities relate to external events. Our discovered topics closely resemble the topics found in a recent study of German right-wing activism on Twitter~\cite{Puschmann2016}. We observe that several activity metrics such as the number of posts issued, the negative polarity in comments as well as the attraction of new users peak in late 2015. This peak coincides with chancellor Merkel's much criticized decision from early September to temporarily admit the entry of Syrian refugees to Germany~\cite{Walker2015}.

For RQ~2, we focus on the connectedness of users of the protest pages in order to investigate the nature of cooperation between the participants in this protest movement. We investigate patterns of collaboration and connectedness of the users of protest pages on the level of direct interactions as well as on the level of interactions with the Facebook pages of political parties, which serve as an indicator for indirect connections between the users. Our data points to a low degree of direct connectedness: no regional collaboration patterns between users are apparent and their co-interaction network is highly disconnected. However, our data suggests that the user base of the protest pages is connected on a higher level, as we encounter a strong affiliation with far-right political parties.
Interestingly, the affiliation pattern is very similar for both the extremist-right NPD party and the (self-proclaimed) non-extremist conservative AfD party.

The paper is structured as follows. We first describe the data acquisition methods and the dataset used in Section~\ref{sec:acquisition}. In Section~\ref{sec:temporal}, we investigate the temporal characteristics of the movement, and in Section~\ref{sec:network}, we analyse its connectedness and cooperation patterns. Related work is reviewed in Section~\ref{sec:related}, and we conclude in Section~\ref{sec:conclusion}. An extended abstract covering parts of this work has been previously published by the authors~\cite{Schelter2017}.

\section{Data Acquisition} \label{sec:acquisition}

\begin{wrapfigure}{R}{0.35\textwidth} \vspace{-0.9cm} \includegraphics[scale=0.35]{figures/geo} \caption{ Geographical mapping of the pages. The dot size is logarithmically proportional to the number of users of a page. A clustering of pages is apparent in the eastern part of Germany. } \label{fig:geo} \vspace{-0.8cm} \end{wrapfigure}

In order to gather a large number of protest pages for analysis, we consulted online articles listing such pages and conducted several exhaustive searches on Facebook using the queries \emph{``Nein zum Heim''} and \emph{``wehrt sich''}, and manually inspected all search results. For each protest page found, we crawled all publicly available posts with their corresponding likes and comments, restricted to the year 2015. Furthermore, we tagged each \nzhpage\ with the geographical coordinates of the city it refers to. Thereby, we obtained 136 such pages, as depicted in Fig.~\ref{fig:geo}. Our dataset comprises 46,880 posts and more than one million interactions (196,661 comments on posts, 791,072 likes on posts and 339,604 likes on comments) by 209,822 users. We do not have access to shared posts, page-level likes or profile information of users. Note that we only report aggregate statistics of the collected data, and neither conduct analyses on the level of individual persons nor redistribute the data.

A first look at the data reveals some interesting patterns: as shown in Fig.~\ref{fig:geo}, the vast majority (77\%) of protest pages are located in the eastern part of Germany. This matches the fact that, since the re-unification of Germany in 1990, far-right parties have regularly achieved higher election results in this part of Germany compared to the western regions~\cite{Wikipedia}. We find the distribution of per-page activity metrics (number of posts, users and comments) to be heavily skewed. For instance, the median number of users interacting with a page is 384, while the top 10\% of pages attract more than 5,076 users each, up to a maximum of 30,346. Similarly, the median number of posts per page is 126, while the top 10\% of pages issued more than 780 posts each, with a maximum of 5,643. The same also holds for comments, where the median is 213 and the top 10\% of pages see more than 3,476 comments each, up to a maximum of 24,866.
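Such per-page skew summaries can be computed along the following lines (a minimal sketch in Python; the layout of \texttt{per\_page} is an assumed data structure, and all identifiers are illustrative):
\begin{verbatim}
# Sketch of the per-page skew statistics reported above; per_page is
# assumed to map a page id to {"posts": ..., "users": ..., "comments": ...}.
import numpy as np

def skew_summary(per_page, metric):
    values = np.array([counts[metric] for counts in per_page.values()])
    return {
        "median": float(np.median(values)),
        "top10_threshold": float(np.percentile(values, 90)),
        "max": float(values.max()),
    }

# Example: skew_summary(per_page, "users")
\end{verbatim}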
\section{RQ 1: Temporal Characteristics} \label{sec:temporal}

Like any political movement, the anti-refugee protest movement is subject to temporal fluctuations, due to both internal and external changes. In order to understand such fluctuations in the community, we investigate a wide range of activities, such as posts and news articles issued by pages, the topics in these news articles, as well as the sentiment in user comments and general user engagement. We intend to gain insights into the general tenor of the content posted on the protest pages, and to find hints on how these activities relate to external events. Specifically, we investigate four aspects: (1)~the general evolution over time of the volume of activity, (2)~the change in the topics discussed and shared, (3)~changes in the polarity of interactions, i.e., in positive and negative discourse, and (4)~the ability of individual pages to attract a new audience over time.

\begin{figure}[b!] \centering \includegraphics[scale=0.4]{figures/posts-with-entities} \caption{ Weekly aggregated time course of the number of posts and posted news articles on all protest pages in 2015. The peak in September/October follows the announcement of admission of Syrian refugees into Germany. } \label{fig:posts-with-entities} \end{figure}

\subsection{Time Course of Page Posts}

We analyze the time course of the number of published posts to gain insight into a general activity pattern of the pages. Fig.~\ref{fig:posts-with-entities} shows the weekly number of posts issued by the pages and the corresponding number of posted news articles over time. We observe a peak at the end of the third quarter of 2015, which coincides with the aforementioned admission of Syrian refugees into Germany in September~\cite{Walker2015}. The same phenomenon has also been recognized in previous studies of far-right engagement on social media~\cite{Schelter2016}.

\subsection{Topics over Time in Posted News Articles}

Next, we analyze the contents of the posted news articles to gain insights into the conversation on the pages, as well as its time course. Therefore, we crawl the contents of the news articles linked to in the posts on the protest pages. We clean the resulting textual data, and represent every news article as a bag-of-words of its nouns and named entity terms, extracted via a part-of-speech tagger~\cite{Toutanova2003}. The resulting dataset comprises 6,760 articles, 7,548,572 word tokens, and 17,402 distinct terms. In order to investigate the topics prominent in these articles, we conduct topic analysis via a variant of Latent Dirichlet Allocation (LDA) called \emph{Topics over Time}~\cite{Wang2006} that captures not only the low-dimensional structure of post contents, but also how this structure changes over time. We select the number of topics via manual inspection of the resulting topic clusters, to maximize the interpretability of the model. Finally, we fit a model with ten topics, employing the default hyperparameter settings of the implementation, which the authors report to be robust across several datasets.

\begin{table*} \centering \caption{ Selection of topics from a time-sensitive topic model of news articles in posts on the protest pages. For each selected topic, we provide a manually chosen label, illustrate its time course via a histogram of the estimated document-topic matrix (binned weekly by posting dates of the articles), and list the five most likely terms, as well as headlines of a set of strongly associated news articles. Terms and headlines are translated from German.
}
\begin{tabular}{p{6.1cm} p{6.1cm}}
\textbf{Housing and Cost} & \textbf{Sexual Crime} \\ \midrule
\includegraphics[scale=0.4]{figures/topic4} & \includegraphics[scale=0.4]{figures/topic9}\\[-0.1cm] \midrule
\begin{minipage}{.33\textwidth} \begin{tabular}{p{4.5cm} p{.6cm}} {\em refugees} & 0.01251 \\ {\em asylum seeker} & 0.00941 \\ {\em refugee} & 0.00736 \\ {\em city} & 0.00728 \\ {\em housing} & 0.00648 \\ \end{tabular} \end{minipage} &
\begin{minipage}{.33\textwidth} \begin{tabular}{p{4.5cm} p{.6cm}} {\em man} & 0.01146 \\ {\em perpetrator} & 0.00881 \\ {\em years} & 0.00698 \\ {\em years old} & 0.00691 \\ {\em woman} & 0.00651 \\ \end{tabular} \end{minipage} \\ \midrule
\begin{minipage}{.33\textwidth} \begin{tabular}{p{4.3cm} p{.9cm}} {\em ``Dresden invests 47.7 million euros for asylum seekers''} & $\;\;\;\;\;\;\;$0.94\\ {\em ``County builds new shelters for refugees''} & $\;\;\;\;\;\;\;$0.93 \\ {\em ``Welcome to New-Aleppo, the refugee city''} & $\;\;\;\;\;\;\;$0.91 \\ \end{tabular} \end{minipage} &
\begin{minipage}{.22\textwidth} \begin{tabular}{p{4.3cm} p{.9cm}} {\em ``29-year old woman sexually harassed by unknown man''} & $\;\;\;\;\;\;\;$0.97\\ {\em ``Call for witnesses after rape in Dresden-Plauen''} & $\;\;\;\;\;\;\;$0.95 \\ {\em ``Another sexual attack on a young woman in Dresden''} & $\;\;\;\;\;\;\;$0.94 \\ \end{tabular} \end{minipage}\\[1cm]
& \\
\textbf{Europe} & \textbf{Conflict with the Left} \\ \midrule
\includegraphics[scale=0.4]{figures/topic6} & \includegraphics[scale=0.4]{figures/topic5}\\[-0.1cm] \midrule
\begin{minipage}{.33\textwidth} \begin{tabular}{p{4.5cm} p{.6cm}} {\em refugees} & 0.00917 \\ {\em germany} & 0.00669 \\ {\em hungary} & 0.00604 \\ {\em eu} & 0.00572 \\ {\em humans} & 0.00556 \\ \end{tabular} \end{minipage} &
\begin{minipage}{.33\textwidth} \begin{tabular}{p{4.5cm} p{.6cm}} {\em article} & 0.00857 \\ {\em donation} & 0.00718 \\ {\em support} & 0.00535 \\ {\em humans} & 0.00523 \\ {\em place} & 0.00501 \\ \end{tabular} \end{minipage}\\ \midrule
\begin{minipage}{.22\textwidth} \begin{tabular}{p{4.3cm} p{.9cm}} {\em ``The ruins of asylum policy''} & $\;\;\;\;\;\;\;$0.95 \\ {\em ``Hungary closes route over the Balkans''} & $\;\;\;\;\;\;\;$0.77 \\ {\em ``Germany is Europe's refugee shelter''} & $\;\;\;\;\;\;\;$0.73 \\ \end{tabular} \end{minipage} &
\begin{minipage}{.22\textwidth} \begin{tabular}{p{4.3cm} p{.9cm}} {\em ``Federal police chief speculates about leftist perpetrators after arson in Tröglitz''} & $\;\;\;\;\;\;\;$0.90\\ {\em ``Antifascists scare off Saxony's minister of the interior''} & $\;\;\;\;\;\;\;$0.84 \\ \end{tabular} \end{minipage}\\ \midrule
&\\
\end{tabular}
\label{tab:topics}
\end{table*}

For each topic, we report the five terms (translated from German) with the highest likelihood of occurring in the topic, as well as the (translated) headlines of a set of news articles strongly associated with the topic. Analogously to~\cite{Wang2006}, we illustrate the time course of the topics via a histogram of the document-topic assignment matrix learned by Latent Dirichlet Allocation, where the binning is based on the weeks of the articles' post dates. Finally, we manually choose a label for each topic by inspecting the corresponding most likely terms and documents. We list data for four topics in Table~\ref{tab:topics}.
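To make the pipeline concrete, the following minimal sketch approximates it with a standard LDA implementation (gensim) in place of the Topics-over-Time variant used above, binning the inferred document-topic weights by posting week analogously to the histograms in Table~\ref{tab:topics}; all identifiers are illustrative:
\begin{verbatim}
# Rough sketch of the topic-analysis pipeline, substituting standard LDA
# (gensim) for the Topics-over-Time variant used in the paper. `articles`
# is assumed to be a list of (posting_date, tokens) pairs, where the
# tokens are the extracted nouns and named-entity terms.
from gensim import corpora, models

def fit_topics(articles, num_topics=10):
    texts = [tokens for _, tokens in articles]
    dictionary = corpora.Dictionary(texts)
    bows = [dictionary.doc2bow(tokens) for tokens in texts]
    lda = models.LdaModel(bows, id2word=dictionary, num_topics=num_topics)
    # Bin the inferred document-topic weights by posting week, mirroring
    # the per-topic histograms shown in the table.
    weekly_weights = {}
    for (date, _), bow in zip(articles, bows):
        week = date.isocalendar()[1]
        for topic, weight in lda.get_document_topics(bow):
            key = (topic, week)
            weekly_weights[key] = weekly_weights.get(key, 0.0) + weight
    return lda, weekly_weights
\end{verbatim}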
The first topic we identify comprises discussions about housing capacities and costs, indicated by terms such as \textit{city} and \textit{housing}, and strongly associated news articles that talk about the rising costs for cities that set up housing for refugees. A second topic is concerned with sexual crimes, indicated by its most likely terms such as \textit{man}, \textit{perpetrator}, and \textit{woman}, and by the headlines of strongly associated news articles concerning sexual violence offenses. These two topics seem to be general motifs of the right-wing agenda, as both grow roughly in proportion to the volume of posted articles over time. A topic that peaks in September 2015 comprises discussions about the fact that the majority of European countries did not follow Germany's example in admitting the entry of large numbers of refugees, despite requests from the German government. This topic is indicated by terms such as \textit{germany}, \textit{hungary} and \textit{eu}, and news articles that discuss that countries such as Hungary opposed German political demands for taking in refugees by closing their intra-European borders to Croatia in October 2015. Lastly, we encounter a topic that reflects the conflicts with leftist activists, indicated by terms such as \textit{donation} or \textit{support} and news articles that talk about militant action attributed to antifascist groups. The latter topics appear to be niche topics, which peak at special events, such as Hungary closing its borders or standoffs between right-wing and leftist activists in August 2015.

\begin{figure}[b] \centering \includegraphics[scale=0.6]{figures/polarity} \caption{ Time course of normalized negative and positive polarity in user comments per week in 2015, computed from a sentiment dictionary. The vertical line marks chancellor Angela Merkel's decision to admit Syrian refugees. } \label{fig:polarity} \end{figure}

\subsection{Polarity in User Comments}

Next, we place our focus on the users interacting with the pages and investigate the time course of the overall sentiment in the user comments. For that, we employ a dictionary denoting the negative sentiment $\phi^{-}(t)$ and positive sentiment $\phi^{+}(t)$ of a German-language term $t$ when used in certain parts of speech~\cite{Waltinger2010a}. We apply part-of-speech tagging to all comments in a given week $w$, which gives us all contained terms $T_w$. Next, we sum the contained polarity using the weights of the aforementioned dictionary and normalize the results by the number of terms in the comments per week to compute the normalized negative polarity $p_{w^-}$ and normalized positive polarity $p_{w^+}$ per week
\begin{align*}
p_{w^{\pm}} = \frac{1}{|T_w|} \sum_{t \in T_w} \phi^{\pm}(t).
\end{align*}
Fig.~\ref{fig:polarity} shows the resulting time course of these polarities for all weeks in 2015. We observe that negative speech dominates the comments throughout the whole year. Furthermore, we encounter a peak in negative polarity, which again coincides with chancellor Merkel's decision from early September 2015 to admit the entry of Syrian refugees to Germany~\cite{Walker2015}. This finding confirms the general perception that Merkel's decision provoked widespread anger in the far-right political spectrum.
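A minimal sketch of this computation, assuming the dictionary has been loaded into a mapping \texttt{sentiws} from POS-tagged terms to pairs of negative and positive weights (all identifiers are illustrative):
\begin{verbatim}
# Sketch of the weekly polarity computation. `tagged_by_week` maps a
# week to the list of its POS-tagged comment terms, and `sentiws` maps a
# tagged term to a (negative_weight, positive_weight) pair; both layouts
# are assumptions.
def weekly_polarity(tagged_by_week, sentiws):
    polarity = {}
    for week, terms in tagged_by_week.items():
        neg = sum(sentiws.get(t, (0.0, 0.0))[0] for t in terms)
        pos = sum(sentiws.get(t, (0.0, 0.0))[1] for t in terms)
        n = max(len(terms), 1)  # normalize by the number of terms per week
        polarity[week] = (neg / n, pos / n)
    return polarity
\end{verbatim}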
\subsection{User Attraction}

Finally, we analyze the pages' ability to attract users over time. For that, we compute the set of active users $U_w$ for every week $w$ (i.e., users who comment on or like a post on at least one of the protest pages during that week). For every week $w$, we split these active users into two groups: \emph{new} users
\begin{align*}
U_{w_{\mathrm{new}}} = \{ u \mid u \in U_w \wedge u \notin U_v \; \forall v \in \{0, \dots, w-1\} \},
\end{align*}
whom we encounter for the first time, and \emph{continuing} users $U_{w_{\mathrm{cont}}} = U_w \setminus U_{w_{\mathrm{new}}}$, whom we have already seen previously. The corresponding sizes of these user sets for all weeks in 2015 are shown in Fig.~\ref{fig:new-and-continuing-users}. We see a slight increase in both new and continuing users in the late second half of 2015. However, this increase starts to diminish again towards the end of the year. We find that the time series of continuing users is very strongly correlated with the time series of posts~(\corr{0.91})\footnote{$\;$ \corr{} indicates significance at the $p <0.001$ level} and comments~(\corr{0.87}) in the corresponding week. We observe a similarly directed but much weaker correlation between the time series of new users and the time series of posts~(\corr{0.55}) and comments~(\corr{0.56}). Furthermore, we note that the mean number of weekly active users~(9,935) is very small compared to the overall number of users. We encounter a strong correlation~(\corr{0.87}) of the number of continuing users with the week index, but cannot determine a similar significant correlation for new users. These findings suggest that the protest pages maintain a low, constant growth of users, but fail to accelerate this growth.

\begin{figure}[b!] \centering \includegraphics[scale=0.6]{figures/continuing-and-new-users} \caption{Stacked bar plot of the time course of weekly active users on the protest pages in 2015. We distinguish between new users (active for the first time) and continuing users.} \label{fig:new-and-continuing-users} \end{figure}

\section{RQ 2: Connectedness and Cooperation} \label{sec:network}

The German anti-refugee movement is inherently decentralized -- there is no dominating ``no to refugees'' or similar page that attracts the majority of likes, as is the case for a large number of non-controversial topics on Facebook. As a result, the likes are spread among a much larger number of pages, which ostensibly follow a geographical pattern, i.e., most such pages are, at least by name, specific to a single city or small region. In order to investigate to what extent the underlying social network itself exhibits collaboration patterns, we therefore shift our focus from the contents and time course of activities to the relationships between the participating users. In particular, we investigate patterns of collaboration and connectedness of the protest page users on two levels: we first investigate direct co-interactions between users on the social networking platform itself, and in a second step, we investigate their affiliation with the Facebook pages of political parties. This affiliation serves as an indicator for indirect connections between the users, in particular since individual friendship links are not made crawlable by Facebook. These investigations can act as input to future research on the longevity of the currently observed rise of far-right movements.
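As a concrete illustration of the co-interaction measures examined in the following subsections, consider the following minimal sketch (using the \texttt{networkx} library; the input mappings are assumed data layouts, and all identifiers are illustrative):
\begin{verbatim}
# Sketch of the two co-interaction measures used below. `page_users` maps
# a page to the set of users interacting with it, and `post_likers` maps
# a post to the set of users that liked it; both layouts are assumptions.
from itertools import combinations
import networkx as nx

def jaccard(a, b):
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def pairwise_page_similarities(page_users):
    return {(p, q): jaccard(page_users[p], page_users[q])
            for p, q in combinations(sorted(page_users), 2)}

def largest_colike_component(post_likers):
    g = nx.Graph()
    for likers in post_likers.values():
        # introduce co-like edges between all users liking the same post
        g.add_edges_from(combinations(likers, 2))
    return max(nx.connected_components(g), key=len)
\end{verbatim}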
In the following, we investigate (1)~whether the geographical distance between two pages is reflected in the shared user base between the pages, (2)~the presence of a giant connected component in the co-like network, and (3)~the degree of affiliation with the Facebook pages of right-wing political parties.

\subsection{Low Correlation between Geographical Distance and Amount of Shared Users}

In order to investigate geographical aspects of the data, we compute the geographical distance between the corresponding cities for each pair of pages, and compare this to the Jaccard similarity between their sets of users. We would expect to see a strong negative correlation if geographical closeness implied cooperating user bases. However, the maximum Jaccard similarity is only 0.1428, and 4,727 page pairs exhibit non-zero similarity, leaving 3,916 page pairs with zero shared users. Even for pairs of pages with non-zero similarity, the correlation is low (\corr{$-$0.19}), which speaks against a geographical collaboration pattern.

\subsection{Absence of a Giant Connected Component in the User Co-like Network}

Next, we construct the \emph{user co-like network} as follows: users form the vertices of this network, and for every post, we introduce edges between all users that liked the post. The resulting network has 95,639,173 edges (co-likes among users). We study the connectivity of this network by computing the size of its largest connected component. This size amounts to 89,094 users, which account for only 57.5\% of the overall user base. This is very atypical for real-world social networks, which typically exhibit a giant connected component containing nearly all users,\footnote{See e.g.\ \url{http://konect.uni-koblenz.de/statistics/coco}} and gives a hint that the social media activities of the users on the pages might be highly separated. This is surprising in light of the fact that, as found in the previous experiment, the users do not cluster into regions, hinting that another, non-geographic clustering of users is present in the data. The present dataset, however, does not allow us to identify the nature of this clustering.

\subsection{Strong Affiliation with Far-right Organizations}

As we could not find evidence for collaboration patterns in the direct interactions between users, we investigate whether the users of the pages are connected on a higher level. Therefore, we analyse the affiliations of the users of these protest pages with political parties in Germany, with the aim of seeing whether these users are connected in that way. For that, we employ additional data about likes of posts on the parties' Facebook pages from our previous work~\cite{Schelter2016}. Next, we compute the affiliation $\text{aff}_{p,o}$ between a page $p$ and a political party $o$ as the fraction of users~$U$ interacting with the page that also liked posts on the party's page:
\begin{align*}
\text{aff}_{p,o} = |U_{\text{interact-with}(p)} \cap U_{\text{like-post}(o)}| \; / \; |U_{\text{interact-with}(p)}|
\end{align*}
In the resulting distributions, we observe that the median affiliation with the right-wing parties \emph{AfD} (0.45) and \emph{NPD} (0.41) is about one order of magnitude higher than the affiliation with parties from the remaining political spectrum, such as the Christian-conservative \emph{CDU}~(0.04), the social-democratic \emph{SPD}~(0.04), the green party \emph{Die Grünen}~(0.03), and the socialist-left party \emph{Die Linke}~(0.02).
While it is expected to see a strong affiliation with the \emph{NPD}, which is commonly considered to be the voice of the extreme right and has repeatedly been the target of party-ban trials by the German state, it is surprising to see an even stronger affiliation with the \emph{AfD}, as the latter party claims to locate itself in the conservative spectrum rather than the extremist-right spectrum.

\section{Related Work} \label{sec:related}

The social media usage of political movements is of interest to many studies, e.g., with a focus on the \emph{Black Lives Matter} movement in the United States~\cite{Choudhury2016,Olteanu2016}. However, the German far-right has seen little attention so far, with current research mostly focusing on exploratory analysis of the social media activities of the AfD party on Facebook~\cite{Schelter2016} and the topics discussed by the local anti-immigrant movement \emph{Pegida}~\cite{Puschmann2016} on Twitter, as well as their corresponding news sources~\cite{Puschmann2016info}. The concentration on social concerns such as crime and housing costs, and the focus on European policies in the topic clusters discovered by Puschmann et al.\ on Twitter, closely resemble the topics we discovered in Section~\ref{sec:temporal}.

The rise of populist radical right parties in Europe has been extensively researched. While other countries, such as France, Denmark, Belgium, and Austria, have experienced high levels of voting for populist radical right parties, for a long time Germany seemed to have been an exception with regard to the radical right in Western Europe~\cite{Arzheimer2016}. With the rise of the AfD party, Germany now also becomes part of the ``pathological normalcy''~\cite{Mudde2010}. The term denotes that populist right-wing ideology is a radicalization of mainstream values, such as ethnic nationalism, anti-immigrant sentiment and authoritarian values. These attitudes have been present within a segment of the population already before the rise of the AfD, but had not been represented within mainstream party politics. Scholarship on the conditions of radical right success has associated it with a convergence of the mainstream parties on the left and right, leaving a representational gap for the radical right to move in~\cite{Arzheimer2006,Kitschelt1997,Van2005}, while some are sceptical of this ``convergence of the middle'' thesis~\cite{Norris2005}.

\section{Conclusion} \label{sec:conclusion}

We studied the Facebook activities of a contemporary nationwide protest movement against refugee housing in Germany. We analysed data from 136 public Facebook pages, containing more than one~million interactions by more than 200,000 users. We encountered peaks in several activity metrics that coincide with chancellor Merkel's decision to temporarily admit the entry of Syrian refugees to Germany, which suggests that this political move caused anger and outrage in far-right circles. However, despite the presumed mobilization effects stemming from Merkel's policies in 2015, our evidence suggests a low degree of user growth, connectedness and cooperation in this protest movement. From all German political parties, the {\em AfD} exhibited the strongest affiliation among the user base of the studied protest pages, which contradicts previous classifications of the {\em AfD} as not belonging to the far-right political spectrum~\cite{Arzheimer2016}.
Furthermore, we can confirm that the Facebook presence of the anti-refugee movement in Germany is split into many small pages by geography, with no apparent regional collaboration patterns. A noteworthy limitation of our work is the lack of data on comparable movements on Facebook, which would allow for stronger conclusions about the specificity of the observed trends for the German radical right. In future work, we plan to conduct a deeper analysis of textual contents of user comments, in order to be able to measure controversy on the level of user interactions.

\section*{Acknowledgments} This research was partly funded by the European Regional Development Fund (ERDF/FEDER -- IDEES). \bibliographystyle{splncs03}
\section{Introduction} Quantum chromodynamics (QCD), the fundamental theory of strong interactions, can be perturbatively solved only in the region of asymptotic freedom, i.e., for large momenta of quarks and gluons \cite{Pol}. At low momenta, quarks and gluons interact strongly and are confined inside hadrons. In this case, the expansion parameter of perturbation theory, the running coupling constant $\alpha _s,$ is of the order of unity, so that perturbative methods are not applicable and one has to use non-perturbative methods. One such method is the construction of effective theories to describe the behaviour of hadronic matter, such as the $\sigma -\omega $ model \cite{Val}. However, one of the most successful non-perturbative methods is the use of lattice gauge calculations \cite{Cre}, which are especially suitable for studying perturbative as well as non-perturbative effects in QCD. The presently available lattice data mostly concern simulations of $SU(N)$ pure gauge theory \cite{B,D}, since the treatment of dynamical fermions on a lattice is difficult \cite{Cre}. Moreover, lattice artefacts are believed to be well under control only in the pure gauge case \cite{EE}.

A striking feature of lattice simulations of $SU(2)$ and $SU(3)$ pure gauge theory is a phase transition (of apparently first order for $SU(3)$ \cite{B,D,Ben} and second order for $SU(2)$ theory \cite{Fig}) from a phase of confined gluons (``glueballs'') to one of deconfined gluons (``gluon plasma''), leading to a sharp rise in the energy density of a gluon gas as a function of temperature at a phase transition temperature $T_c$ \cite{B,D} (Figs. 1,2). In the $\sigma-\omega $ model, a similar phase transition (or, at least, a rapid increase of the energy density in a small temperature interval) appears at small net baryon densities \cite{The} (Fig. 3). This is due to a strong enhancement of the scalar meson interaction at $T_c\simeq 200$ MeV, leading to a transition from a phase of massive baryons to a phase of massless baryons. Apparently, a phase transition to a weakly interacting phase seems to be a fundamental feature of strongly interacting matter\footnote{It should be, however, mentioned that a phase transition in the $\sigma -\omega $ model may be an artefact of the approximations made in the calculation of the thermodynamic quantities in Fig. 3 (in the mean-field approximation) \cite{HE}.} \cite{Ri}.

To understand the lattice data from a simple physical point of view, Rischke {\it et al.} \cite{Ri,Ri1} have constructed a phenomenological model for the gluon plasma, in which gluons with large momenta are considered as an ideal gas with perturbative corrections of order $O(\alpha _s),$ while gluons with low momenta are subject to confining interactions and do not contribute to the energy spectrum of free gluons. The equation of state for this model, although it quantitatively reproduces the lattice data for the thermodynamic functions of $SU(3)$ pure gauge theory above the deconfinement transition temperature (Fig. 2), has a rather complicated form and is therefore not suitable for practical use, e.g., in astrophysics (for the description of stellar structure), or in cosmology (for the treatment of a hadron-plasma phase transition in the early universe).
In this paper, we suggest another equation of state, which reflects the main properties of strongly interacting matter (i.e., a phase transition to a weakly interacting phase) and agrees qualitatively with the lattice data for $SU(2)$ pure gauge theory $both$ above and below the transition temperature, and has a much simpler form, as compared to that of ref. \cite{Ri}. First, let us note that in strongly interacting matter, particles undergoing continual mutual interaction are necessarily off-shell. Therefore, the effect of strong interaction in such a system may be represented by the off-shellness of its particles. The equilibrium state of such a system should be characterized by a well-defined relativistic mass distribution around the on-shell value. Thus, instead of dealing with interaction explicitly, we reduce the problem to the description of the relativistic off-shell ensemble. The role of interaction then consists in determining the effective thermodynamic parameters governing the mass distribution in a strongly interacting system. The physical framework for the description of a relativistic off-shell ensemble has been established by Horwitz and Piron \cite{HP} as a manifestly covariant relativistic dynamics whose consistent formulation is based on the ideas of Fock \cite{Fock} and Stueckelberg \cite{Stu}, in which the four components of energy-momentum are considered as independent degrees of freedom, permitting fluctuations from the mass shell. In this framework, the dynamical evolution of a system of $N$ particles, for the classical case, is governed by equations of motion that are of the form of Hamilton equations for the motion of $N$ $events$ which generate the space-time trajectories (particle world lines) as functions of a continuous Poincar\'{e}-invariant parameter $\tau $ \cite{HP,Stu}. These events are characterized by their positions $q^\mu = (t,{\bf q})$ and energy-momenta $p^\mu =(E,{\bf p})$ in an $8N$-dimensional phase-space. For the quantum case, the system is characterized by the wave function $\psi _\tau (q_1,q_2,\ldots ,q_N)\in L^2(R^{4N}),$ with the measure $d^4q_1d^4q_2\cdots d^4q_N\equiv d^{4N}q,$ $(q_i\equiv q_i^\mu ;\;\;\mu =0,1,2,3;\;\;i=1,2,\ldots ,N),$ describing the distribution of events, which evolves with a generalized Schr\"{o}dinger equation \cite{HP}. The collection of events (called ``concatenation'' \cite{AHL}) along each world line corresponds to a {\it particle,} and hence, the evolution of the state of the $N$-event system describes, {\it a posteriori,} the history in space and time of an $N$-particle system. For a system of $N$ interacting events (and hence, particles) one takes \cite{HP} (we use the system of units in which $\hbar =c=k_B=1;$ we also use the metric $g^{\mu \nu }=(-,+,+,+))$ \beq K=\sum _i\frac{p_i^\mu p_{i\mu }}{2M}+V(q_1,q_2,\ldots ,q_N), \eeq where $M$ is a given fixed parameter (an intrinsic property of the particles), with the dimension of mass, taken to be the same for all the particles of the system. The Hamilton equations are $$\frac{dq_i^\mu }{d\tau }=\frac{\partial K}{\partial p_{i\mu }}=\frac{p_ i^\mu }{M},$$ \beq \frac{dp_i^\mu }{d\tau }=-\frac{\partial K}{\partial q_{i\mu }}=-\frac{ \partial V}{\partial q_{i\mu }}. 
\eeq In the quantum theory, the generalized Schr\"{o}dinger equation \beq i\frac{\partial }{\partial \tau }\psi _\tau (q_1,q_2,\ldots ,q_N)=K \psi _\tau (q_1,q_2,\ldots ,q_N) \eeq describes the evolution of the $N$-body wave function $\psi _\tau (q_1,q_2,\ldots ,q_N).$ In the present paper we restrict ourselves to a relativistic Bose gas, in order to compare the results with data from pure gauge theory lattice simulations. We show that our results agree with those for $SU(2)$ pure gauge theory. It should be, however, noted that since the underlying theory is basically different from QCD, a comparison with the $SU(2)$ lattice data can only be qualitative. From this point of view, the similarity is remarkable. \section{Ideal relativistic Bose gas} Gibbs ensembles in a manifestly covariant relativistic classical and quantum mechanics were derived by Horwitz, Schieve and Piron \cite{HSP}. To describe an ideal gas of events obeying Bose-Einstein statistics in the grand canonical ensemble, we use the expression for the number of events found in \cite{HSP} (for our present purposes we assume no degeneracy), \beq N=V^{(4)}\sum _{k^\mu }n_{k^\mu }= V^{(4)}\sum _{k^\mu }\frac{1}{e^{(E-\mu -\mu _K\frac{m^2}{2M})/T}-1}, \eeq where $V^{(4)}$ is the system's four-volume and $m^2\equiv -k^2=-k^\mu k_\mu $ is the variable dynamical mass. Here, in addition to the usual chemical potential $\mu ,$ there is the mass potential $\mu _K$ corresponding to the Lorentz scalar function $K(p,q)$ (Eq. (1.1)), here taken in the ideal gas limit, on the $N$-event relativistic phase space; in order to simplify subsequent considerations, we shall take it to be a fixed parameter (which determines an upper bound of the mass distribution in the ensemble we are studying, as we shall see below). To ensure a positive-definite value for $n_{k^\mu },$ the number of bosons with four-momentum $k^\mu ,$ we require that \beq m-\mu -\mu _K\frac{m^2}{2M}\geq 0. \eeq The discriminant of Eq. (2.2) must be nonnegative, which gives \beq \mu \leq \frac{M}{2\mu _K}. \eeq For such $\mu ,$ (2.2) has the solution \beq \frac{M}{\mu _K}\left( 1-\sqrt{1-\frac{2\mu \mu _K}{M}}\right) \leq m\leq \frac{M}{\mu _K}\left( 1+\sqrt{1-\frac{2\mu \mu _K}{M}}\right) . \eeq For small $\mu \mu _K/M,$ the region (2.4) may be approximated by \beq \mu \leq m\leq \frac{2M}{\mu _K}. \eeq One sees that $\mu _K$ plays a fundamental role in determining an upper bound of the mass spectrum, in addition to the usual lower bound $m\geq \mu .$ Replacing the sum over $k^\mu $ in (2.1) by an integral, one obtains for the density of events per unit space-time volume $n\equiv N/V^{(4)}$ \cite{ind}, \beq n=\frac{1}{4\pi ^3}\int _{m_1}^{m_2}\frac{m^3\;dm\;\sinh ^2\beta \;d \beta }{e^{(m\cosh \beta -\mu -\mu _K\frac{m^2}{2M})/T}-1}, \eeq where $m_1$ and $m_2$ are defined in Eq. (2.4), and we have used the parametrization \cite{HSS} $$\begin{array}{lcl} p^0 & = & m\cosh \beta , \\ p^1 & = & m\sinh \beta \sin \theta \cos \phi , \\ p^2 & = & m\sinh \beta \sin \theta \sin \phi , \\ p^3 & = & m\sinh \beta \cos \theta , \end{array} $$ $$0\leq \theta <\pi ,\;\;\;0\leq \phi <2\pi ,\;\;\;-\infty <\beta <\infty .$$ In what follows we shall take $\mu \simeq 0$ (as for the case of the ensemble of gauge bosons). The integral (2.6) is calculated in ref.
\cite{BHS} (in the high-temperature Boltzmann approximation for the integrand): \beq n=\frac{T^4}{4\pi ^3}\left[ 2-x^2K_2(x)\right] ,\;\;\;\;\;x=\frac{ 2M}{T\mu _K}\equiv \frac{2m_c}{T}, \eeq where $K_\nu (z)$ is the Bessel function of the third kind (imaginary argument), $m_c=M/\mu _K$ is the central value around which the masses of the particles are distributed, in view of (2.4), and $2m_c$ is the upper limit of the mass distribution in our case of small $\mu ,$ in view of (2.5). One calculates the characteristic averages \cite{BHS} to be \bqry \langle E\rangle & = & T\;\frac{8-x^3K_3(x)}{2-x^2K_2(x)}, \\ \langle E^2\rangle & = & T^2\;\frac{40-x^4K_4(x)+x^3K_3(x)}{ 2-x^2K_2(x)}, \\ \langle m^2\rangle & = & T^2\;\frac{16-x^4K_4(x)+4x^3K_3(x)}{ 2-x^2K_2(x)}, \\ \langle {\bf p}^2\rangle & = & 3T^2\;\frac{8-x^3K_3(x)}{2-x^2K_2(x)}\;= \;3T\langle E\rangle , \eqry and obtains the following thermodynamic functions (the particle number density, pressure and energy density) \footnote{If we had not used the Boltzmann limit for the integrand in (2.6), we would obtain the factors $\zeta (3)\approx 1.202$ in Eq. (2.12) and $\zeta (4)=\pi ^4/90\approx 1.082$ in Eqs. (2.13),(2.14).}: \bqry N_0 & = & \langle J^0\rangle \;=\;\frac{T^3}{\pi ^2}\;\frac{8-x^3K_3( x)}{x^2}, \\ p & = & \frac{1}{3}\langle T^{ii}\rangle g_{ii}\;=\;\frac{T^4}{\pi ^2}\; \frac{8-x^3K_3(x)}{x^2}\;=\;N_0T, \\ \rho & = & \langle T^{00}\rangle \;=\;\frac{T^4}{\pi ^2}\;\frac{ 40-x^4K_4(x)+x^3K_3(x)}{x^2}, \eqry where $\langle J^\mu \rangle $ and $\langle T^{\mu \nu }\rangle $ are the average $particle$ four-current and energy-momentum tensor, respectively, given by \cite{HSS}: \beq \langle J^\mu \rangle =\frac{T_{\triangle V}}{M}n\langle p^\mu \rangle , \;\;\;\langle T^{\mu \nu }\rangle = \frac{T_{\triangle V}}{M}n\langle p^\mu p^\nu \rangle . \eeq In (2.15), $T_{\triangle V}$ is the average passage interval in $\tau $ for the events which pass through the small (minimal typical) four-volume $\triangle V$ in the neighborhood of the $R^4$-point; it is related to a width of the mass distribution around the central value, $\triangle m,$ as follows \cite{BH}: \beq T_{\triangle V}\triangle m=2\pi . \eeq The expressions (2.13),(2.14) for $p$ and $\rho $ are obtained from Eq. (2.15) for $\langle T^{\mu \nu }\rangle .$ They are, moreover, thermodynamically consistent. One may easily verify that the relation \beq \rho =T\frac{dp}{dT}-p \eeq is satisfied \cite{BHS}. For low $T,$ Eqs. (2.13),(2.14) reduce, through the asymptotic formula \cite{AS1} \beq K_\nu (z)\sim \sqrt{\frac{\pi }{2z}}e^{-z}\left( 1+\frac{4\nu ^2-1}{8z}+ \ldots \right) ,\;\;\;z>>1, \eeq to \beq p=\frac{2T^6}{\pi ^2m_c^2},\;\;\;\rho =\frac{10T^6}{\pi ^2m_c^2}=5p, \eeq consistent with the ``realistic'' equation of state suggested by Shuryak for strongly interacting hadronic matter \cite{Shu}. For high $T,$ we use another asymptotic formula \cite{AS2}, \beq K_\nu (z)\sim \frac{1}{2}\Gamma (\nu )\left( \frac{z}{2}\right) ^{-\nu } \left[ 1-\frac{z^2}{4(\nu -1)}+\ldots \right] ,\;\;\;z<<1, \eeq and obtain \bqry p & = & \frac{T^4}{\pi ^2}\left( 1-\frac{m_c^2}{2T^2}\right) \;=\; p_{{\rm SB}}\left( 1-\frac{m_c^2}{2T^2}\right) , \\ \rho & = & \frac{T^4}{\pi ^2}\left( 3-\frac{m_c^2}{2T^2}\right) \;=\; \rho _{{\rm SB}}\left( 1-\frac{m_c^2}{6T^2}\right) , \eqry where $p_{{\rm SB}}=T^4/\pi ^2$ and $\rho _{{\rm SB}}=3p_{{\rm SB}}$ are the pressure and energy density of an ideal ultrarelativistic Stefan-Boltzmann gas.
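For reference, the ratios $p/p_{{\rm SB}}$ and $\rho /\rho _{{\rm SB}}$ following from Eqs. (2.13),(2.14) are straightforward to evaluate numerically; the following short Python sketch (an illustration, not part of the derivation) uses the Bessel function $K_\nu $ from {\tt scipy}:

\begin{verbatim}
# Sketch: p/p_SB and rho/rho_SB from Eqs. (2.13),(2.14), as functions
# of x = 2 m_c / T (small x corresponds to high temperature).
import numpy as np
from scipy.special import kv   # Bessel function K_nu

def p_ratio(x):
    # p/p_SB = (8 - x^3 K_3(x)) / x^2, from Eq. (2.13)
    return (8.0 - x**3 * kv(3, x)) / x**2

def rho_ratio(x):
    # rho/rho_SB = (40 - x^4 K_4(x) + x^3 K_3(x)) / (3 x^2), from Eq. (2.14)
    return (40.0 - x**4 * kv(4, x) + x**3 * kv(3, x)) / (3.0 * x**2)

x = np.array([0.1, 1.0, 4.0, 10.0])
print(p_ratio(x))    # -> 1 as x -> 0, cf. (2.21); -> 8/x^2 at large x, cf. (2.19)
print(rho_ratio(x))  # -> 1 as x -> 0, cf. (2.22); -> 40/(3x^2) at large x, cf. (2.19)
\end{verbatim}

One may also check the consistency relation (2.17) directly in the low-temperature regime: with $p=2T^6/(\pi ^2m_c^2)$ from (2.19), $T\,dp/dT-p=(12-2)T^6/(\pi ^2m_c^2)= 10T^6/(\pi ^2m_c^2)=\rho ,$ as required.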
Therefore, as $T\rightarrow \infty ,$ the thermodynamic functions of the relativistic Bose gas we are considering become asymptotically those of a Stefan-Boltzmann gas, implying a phase transition to an effectively weakly interacting phase, from the phase of strong interactions described by Eq. (2.19). It follows from Eq. (2.10), via the asymptotic formulas (2.18),(2.20), that \beq \frac{\langle m^2\rangle }{T^2}=\left\{ \begin{array}{ll} 8, & T<<2m_c, \\ 2m_c^2/T^2, & T>>2m_c. \end{array} \right. \eeq The dependence of $p/p_{{\rm SB}},$ $\rho /\rho _{{\rm SB}}$ and $\langle m^2\rangle /T^2$ on temperature is shown in Figs. 4,5. At $T\simeq 0.2\; m_c$ (corresponding to $z\simeq 0.1$ in Figs. 4,5), there is a smooth phase transition to a weakly interacting phase described by Eqs. (2.21),(2.22).

\section{Concluding remarks} The manifestly covariant framework discussed in the present paper can be an effective tool in dealing with realistic physical systems. The equation of state (2.12)-(2.14) obtained in our work reflects the main properties of strongly interacting matter (i.e., a phase transition to a weakly interacting phase), and agrees qualitatively with the lattice data for $SU(2)$ pure gauge theory. The question naturally arises of why the $SU(2)$ lattice data \cite{Fig} appear to contain a second order phase transition but the $SU(3)$ data \cite{B,D} appear to contain a first order one. There is no {\it a priori} difference between $SU(2)$ and $SU(3)$ pure gauge theories from a statistical mechanical point of view. As shown in ref. \cite{B}, $SU(3)$ pure gauge theory simulations on $24^3\times N_T$ lattices indicate a rapid rise in $\rho +p$ as a function of temperature which takes place in the case of $N_T=4,$ reflecting a sharp first order phase transition. This rise is broadened considerably in the case of $N_T=6,$ with the slope of the curve diminished by almost a factor of 3, indicating a smoother transition in this case. As remarked by the authors, this softening of the structure of the transition as $N_T$ is increased may well continue as the continuum limit is approached. Thus, in ref. \cite{B} the apparent first order nature of the transition in the case of $SU(3)$ pure gauge theory has been called into question. Moreover, there are indications from lattice QCD calculations that when fermions are included, the phase transition may be of second or higher order \cite{Ben1}. Altogether, these observations suggest that a realistic equation of state of strongly interacting hadronic matter should be expected to contain a second or higher order phase transition, as reflected by the equation of state obtained in our work. As remarked by Ornik and Weiner \cite{OW}, a $single$ equation of state which, at high temperature, describes a quark-gluon phase and, at low temperature, a hadronic phase and which contains a phase transition of either first or higher order provides a more satisfactory theoretical description than one in which each phase is described by a different equation of state. In this way, the equation of state that we obtained can be considered as a candidate for such a realistic equation of state of strongly interacting matter, in contrast to the equations of state of, e.g., Rischke {\it et al.} \cite{Ri} and Shuryak \cite{Shu}, each of which describes just one of the phases (above and below the transition, respectively).
The introduction of the quark degrees of freedom in this equation of state, as well as taking into account an effective interaction potential in an explicit form in the strongly interacting phase, and perturbative corrections in the weakly interacting phase, should enable one to derive an equation of state which we expect to be more accurate for the description of the phenomena taking place in strongly interacting hadronic matter. The derivation of such an equation of state and its possible implications in astrophysics and cosmology are now being worked out by the authors. \section*{Acknowledgements} We wish to thank J. Eisenberg for very valuable discussions, and E. Eisenberg for his help in drawing the pictures for the present paper. \newpage
\section{Introduction}\label{introduction}} Given a pointset \(\Lambda_N := \{ X_1, \ldots, X_N \} \subset \mathbb{R}^{D}\) that is assumed to lie on a submanifold \(\Lambda \supset \Lambda_N\), a natural inverse problem asks to approximate the \emph{intrinsic} geometry of \(\Lambda\), using only this data. Implicitly, there is an isometric embedding \(\iota : \mathcal{M} \to \Lambda \subset \mathbb{R}^{D}\) of an abstract manifold \(\mathcal{M}\), both of which are unknown. Granted sufficient smoothness, such a reconstruction may consist of: an atlas that is prescribed by intrinsic data, \emph{e.g.}, a system of normal coordinates, or operators that are intimately tied to the intrinsic geometry, as in the Laplace-Beltrami\footnote{In the following, we define the Laplace-Beltrami operator to be signed so it is \emph{positive semi-definite}: that is, in local coordinates, \(\Delta_{\mathcal{M}} := -\frac{1}{\sqrt{|g|}} \sum_{j,\ell} \partial_j \sqrt{|g|} g^{j,\ell} \partial_{\ell}\), wherein \(g\) is the Riemannian metric on \(\mathcal{M}\) and \(g^{j,\ell}\) are the \((j,\ell)\) components of the inverse, \(g^{-1}\).} operator \(\Delta_{\mathcal{M}}\), to name a couple of examples related to the study in this paper. Further, the restriction of functions on \(\Lambda\) to the pointset may be modeled by identifying \(\Lambda_N\) with the canonical basis in \(\mathbb{C}^N\), which gives a complex vector space \(\mathcal{H}_N \cong \mathbb{C}^N\). Then, for example, a vector in \(\mathcal{H}_N\) may be seen as the restriction to \(\Lambda_N\) of a linear combination of \(N\) bump functions \(u_1, \ldots, u_N\) with \(u_j(X_j) = 1\) or \(0\) and pairwise disjoint supports, \(\operatorname{supp} u_j \cap \operatorname{supp} u_{\ell} = \emptyset\) for all \(j \neq \ell\). Hence, the question arises as to how the dynamics of functions on \(\mathcal{M}\) can be approximated by dynamics in \(\mathcal{H}_N\). In many ways the geometry of manifolds is intimately tied with dynamics on them and so these two kinds of problems are interconnected. Over the last two decades, the theory of graph Laplacians has advanced, giving convergence rates for the recovery of \(\Delta_{\mathcal{M}}\) from samples \(\Lambda_N\) drawn from a probability distribution \(P\) supported on \(\Lambda\), as \(N \to \infty\), under various assumptions on \(\mathcal{M}\), \(\Lambda\), \(P\) and the kernels used to construct the graph Laplacians \citep{hein2005graphs, singer2006, von2008consistency, calder2019improved}. A common schema is described in \citep{nadler2006diffusion} as follows: using a positive, \emph{essentially} locally supported\footnote{This means that if \(k \in C^m(\mathbb{R}_+)\), then there are \(R_1, \ldots, R_m > 0\) such that \(\partial^j k\) decays exponentially fast outside of \([0, R_j]\).} smooth function \(k : \mathbb{R}_+ \to \mathbb{R}\), an \emph{averaging} matrix \((\mathscr{A}_{\epsilon,N})_{j,\ell} := k(||X_j - X_{\ell}||_{\mathbb{R}^{D}}^2/\epsilon)\) is defined that after row normalisation represents a Markov chain process in discrete time and space. This normalized matrix \(A_{\epsilon,N}\) converges in such settings, with high probability as \(N \to \infty\), to an operator \(A_{\epsilon}\) that represents a Markov process in continuous space \(\mathcal{M}\) and discrete time with steps of size \(\epsilon\); its transition probability from \(x \in \mathcal{M}\) to \(y \in \mathcal{M}\) is proportional to \(k_{\epsilon}(x, y) := k(||\iota(x) - \iota(y)||_{\mathbb{R}^{D}}^2/\epsilon)\). 
The infinitesimal generator of this Markov process is \(\Delta_{\mathcal{M}} + O(\partial^1)\) with \(O(\partial^1)\) denoting terms with differential operators of order at most one. Further, in the limit \(\epsilon \to 0\), this Markov process converges to a diffusion process with the same infinitesimal generator. The (\emph{random walk}) \emph{discrete} and \emph{continuum} graph Laplacians here are equal, up to a constant factor, to \((I - A_{\epsilon,N})/\epsilon\) and \((I - A_{\epsilon})/\epsilon\), respectively, and they give the concrete approximations to the infinitesimal generator.

In this work, we study for smooth, compact, boundaryless manifolds \(\mathcal{M}\), with some basic assumptions on \(\Lambda\) and \(P\), the inclusion of graph Laplacians in the framework of semiclassical analysis and switch perspective from diffusion processes to quantum dynamics. The geometric data we are interested in recovering are the geodesics on \(\mathcal{M}\), which can be cast as projections of the intrinsic \emph{(co)-geodesic} flow \(\Gamma^t : T^*\mathcal{M} \to T^*\mathcal{M}\) governed by the Hamiltonian \(|\xi|_{g_x} := \langle g_x^{-1} \xi, \xi \rangle^{\frac{1}{2}}\), wherein \(g_x\) is the Riemannian metric at \(x \in \mathcal{M}\). This Hamiltonian flow is \emph{quantized} by \(e^{i t \sqrt{\Delta_{\mathcal{M}}}}\), in that for times \(|t|\) smaller than the injectivity radius at \(x_0\), it moves the support of wavepackets localized at \((x_0, \xi_0)\) in \emph{phase space} \(T^*\mathcal{M}\) to those localized at \(\Gamma^t(x_0, \xi_0)\), which ultimately leads to a linear PDE approach to accessing geodesics. We use the framework of semiclassical pseudodifferential operators (\({\Psi\text{DO}}\)s) to recover this behaviour with continuum graph Laplacians in place of \(\Delta_{\mathcal{M}}\) and a manifold generalization of coherent states for the localized wavepackets. Then, we find probabilistic convergence rates so geodesics can be recovered with matrix dynamics on \(\mathcal{H}_N\) using discrete graph Laplacians in place of their continuum counterparts and the restrictions of coherent states to the sample points \(\Lambda_N\). Along the way, we establish consistency between solutions to discrete wave equations on \(\Lambda_N\) and solutions to wave equations on \(\mathcal{M}\). The dichotomy in the continuum and discrete settings requires completely different considerations. In \protect\hyperlink{from-graph-laplacians-to-geodesic-flows}{Section \ref{from-graph-laplacians-to-geodesic-flows}}, we realise continuum graph Laplacians, denoted \(\Delta_{\lambda,\epsilon}\) with \(\lambda \geq 0\) a normalization parameter (such that \(\lambda = 0\) gives the random walk continuum graph Laplacian) --- following \citep{hein2005graphs}, we define these in \protect\hyperlink{laplacians-from-graphs-to-manifolds}{Section \ref{laplacians-from-graphs-to-manifolds}} --- as semiclassical pseudodifferential operators (\({\Psi\text{DO}}\)s).
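Before turning to the semiclassical machinery, it may help to fix the basic discrete object in code. The following minimal Python sketch builds the random walk (\(\lambda = 0\)) discrete graph Laplacian from a point cloud, assuming a Gaussian kernel for illustration; dimensional prefactors of the kernel cancel under row normalisation and the moment constant of \protect\hyperlink{laplacians-from-graphs-to-manifolds}{Section \ref{laplacians-from-graphs-to-manifolds}} is omitted:

\begin{verbatim}
# Sketch: random walk discrete graph Laplacian (I - A_{eps,N}) / eps
# from samples X, an (N, D) array; k(t) = exp(-t) is illustrative.
import numpy as np

def random_walk_graph_laplacian(X, eps):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # |X_j - X_l|^2
    K = np.exp(-sq / eps)                  # averaging matrix entries
    A = K / K.sum(axis=1, keepdims=True)   # row normalisation: Markov matrix
    return (np.eye(X.shape[0]) - A) / eps
\end{verbatim}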
Then, the quantum dynamical perspective is driven by Egorov's theorem, which tells more generally that if for \(h \in [0, h_0)\) with \(h_0 > 0\), \(Q\) is an order one \({\Psi\text{DO}}\) on \(\mathcal{M}\) with real-valued principal symbol \(q \in C^{\infty}(T^*\mathcal{M} \times [0, h_0)_h)\) that generates a Hamiltonian flow \(\Theta_q^t : T^*\mathcal{M} \to T^*\mathcal{M}\) on \(T^*\mathcal{M}\) and \(\hat{O}\) is a \({\Psi\text{DO}}\) with symbol \(a \in C^{\infty}(T^*\mathcal{M} \times [0, h_0)_h)\), then the operator dynamics \(\hat{O}_t := e^{-\frac{i}{h} t Q} \hat{O} e^{\frac{i}{h} t Q}\) remains pseudodifferential and is, up to lower order, the quantization of \(a \circ \Theta_q^t\). In this sense, we say that \(e^{\frac{i}{h} t Q}\) \emph{quantizes} the flow \(\Theta_q^t\). To retrieve the flow of symbols from their quantized operators, we make use of \emph{coherent states}, which are wavepackets of the form \(\tilde{\psi}_h := e^{\frac{i}{h} \phi(\cdot ; x_0, \xi_0)}\) localized at \((x_0, \xi_0) \in T^*\mathcal{M}\) with a complex phase \(\phi\) satisfying certain \emph{admissibility conditions} as defined in \protect\hyperlink{fbi-transform}{Section \ref{fbi-transform}} that say roughly that \(\tilde{\psi}_h\) locally resembles \(e^{-\frac{||\cdot - x_0||^2}{2h}} e^{\frac{i}{h} \langle (\cdot - x_0), \xi_0 \rangle}\). The coherent states are essentially Schwartz kernels of an FBI transform defined on manifolds \citep{wunsch2001fbi} and since the latter are well-known to essentially \emph{diagonalize} \({\Psi\text{DO}}\)s over \(T^*\mathcal{M}\), we can recover the flow of symbols through the diagonal matrix elements of \({\Psi\text{DO}}\)s in the basis of coherent states. That is, if \(\psi_h := \tilde{\psi}_h / ||\tilde{\psi}_h||_{L^2}\) is a normalized coherent state, then the flow is approximated by \(\langle \hat{O}_t \psi_h, \psi_h\rangle_{L^2}\) in the sense that \(|\langle \hat{O}_t \psi_h, \psi_h\rangle_{L^2} - a \circ \Theta_q^t(x_0, \xi_0)| \leq C h\) for some constant \(C > 0\) and \(h \in [0, h_0)\) (see \protect\hyperlink{semi-classical-measures-of-coherent-states}{Section \ref{semi-classical-measures-of-coherent-states}}). This relationship between operator dynamics in the space of \({\Psi\text{DO}}\)s and Hamiltonian mechanics on \emph{phase space} \(T^*\mathcal{M}\) is an instance of \emph{quantum-classical correspondence}. The order one \({\Psi\text{DO}}\) \(h \sqrt{\Delta}_{\mathcal{M}}\) quantizes \(|\xi|_{g_x}\) and therefore, the geodesic flow is quantized by \(U^t := e^{i t \sqrt{\Delta_{\mathcal{M}}}}\), which can be defined by spectral theory. While previous works (\emph{e.g.}, \citep{hein2005graphs}) have shown that \(\Delta_{\lambda,\epsilon} = \Delta_{\mathcal{M}} + O(\partial^1) + O(\epsilon)\) and it is certainly true that Egorov's theorem is agnostic to the \(O(\partial^1)\) terms since the semiclassical symbol of \(h \sqrt{\Delta_{\mathcal{M}} + O(\partial^1)}\) is just \(|\xi|_{g_x}\), the method of approximation goes through a Taylor series expansion so that the \(O(\epsilon)\) term consists of higher order operators. 
This indeed raises an issue both in including \(h^2 \Delta_{\lambda,\epsilon}\) in the class of \({\Psi\text{DO}}\)s and in quantizing the geodesic flow through it, which can briefly be put as follows: the semiclassical parameter \(h > 0\) sets the wavelength at which the pseudodifferential calculus resolves functions, so if the diffusion scale \(\sqrt{\epsilon}\) is greater than the semiclassical wavelength \(h\), then the corresponding symbol is concentrated \emph{below} this wavelength and hence, its content goes undetected by the quantization procedure. This perspective is expounded in more detail and drives the analysis in \protect\hyperlink{from-graph-laplacians-to-geodesic-flows}{Section \ref{from-graph-laplacians-to-geodesic-flows}}. A perhaps more direct way to see the issue in approximating \(U^t\) with \(U_{\lambda,\epsilon}^t := e^{i t \sqrt{\Delta}_{\lambda,\epsilon}}\) is the following: it is well-known from microlocal analysis, or indeed discernible from Egorov's theorem, that \(U^t\) propagates wavepackets localized in phase space along geodesics. That is, if the FBI transform (or physically, \emph{Husimi function}) of \(u_h \in C^{\infty}(\mathcal{M})\) is localized to a \(\sqrt{h}\) neighbourhood of \((x_0, \xi_0) \in T^*\mathcal{M}\), then for \(t\) smaller than the injectivity radius at \(x_0\), the Husimi function of \(U^t[u_h]\) is localized to roughly a \(\sqrt{h}\) neighbourhood of \(\Gamma^t(x_0, \xi_0)\). On the other hand, by functional calculus, we find that \begin{equation} \label{eq:prop-plwp-glap-decomp} U_{\lambda,\epsilon}^t[u_h] = f(0) u_h + A_{\lambda,\epsilon}[Df(A_{\lambda,\epsilon})[u_h]], \end{equation} wherein \(f(z) := e^{i t c \sqrt{(1 - z)/\epsilon}}\) for a constant \(c > 0\) and \(Df(z) := (f(z) - f(0))/z\). Recall that for \(\lambda = 0\), \(A_{0,\epsilon} = A_{\epsilon}\) represents a Markov process with transition probabilities being close to one in roughly \(\sqrt{\epsilon}\)-balls and decaying exponentially outside of them; in fact, this is true for all \(\lambda \geq 0\). Therefore, the second term of the right-hand side of \(\eqref{eq:prop-plwp-glap-decomp}\) will be diffused to at least a \(\sqrt{\epsilon}\)-ball and we can calculate that the Husimi function of the Schwartz kernel for \(A_{\lambda,\epsilon} : u \mapsto \int_{\mathcal{M}} k_{\lambda,\epsilon}(x,y) u(y) ~ p(y) d\nu_g(y)\) (we assume \(P\) has smooth density \(p\) with respect to the volume form \(\nu_g\) on \(\mathcal{M}\)) at a fixed \(y \in \mathcal{M}\) is localized in phase space to balls of radius roughly \(h/\sqrt{\epsilon}\) about \(0 \in T_{\alpha_x}^*\mathcal{M}\) for all \(\alpha_x\) in a roughly \(\sqrt{\epsilon}\)-ball about \(y\). Hence, if \(\epsilon \gtrsim h^2\), then for \(h\) sufficiently small, the Husimi function for \(U_{\lambda,\epsilon}^t[u_h]\) will retain the support of its initial condition at all times, which is in stark contrast to the behaviour of \(U^t[u_h]\). On the other hand, if \(\epsilon \ll h^2\), then the Husimi functions of the two terms on the right-hand side of \(\eqref{eq:prop-plwp-glap-decomp}\) can interact and there is a chance for cancellations to occur. In \protect\hyperlink{from-graph-laplacians-to-geodesic-flows}{Section \ref{from-graph-laplacians-to-geodesic-flows}}, we find that essentially \(\epsilon \ll h^2\) is also sufficient for \(h^2 \Delta_{\lambda,\epsilon}\) to be a \({\Psi\text{DO}}\).
Its symbol approximates \(|\xi|_{g_x}^2\) within roughly a ball of radius \(h/\sqrt{\epsilon}\) about the zero section in \(T^*\mathcal{M}\) and outside of this, it is of order zero for any fixed \(h\). Still, as a \emph{semiclassical} \({\Psi\text{DO}}\), this makes the graph Laplacian an order \emph{two} \({\Psi\text{DO}}\), but clearly not elliptic. We find that on application to coherent states \(\psi_h\), due to their localizing of \({\Psi\text{DO}}\)s to \(\sqrt{h}\)-balls in phase space, we can approximately quantize \(\Gamma^t\) with \(U_{\lambda,\epsilon}^t\). Indeed, in \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}} we show under the assumptions \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(\mathcal{M}\) is a compact, boundaryless \(C^{\infty}\) manifold and \item \(P\) is a probability distribution on \(\Lambda\) so that with respect to the volume form \(\nu_g\) on \(\mathcal{M}\), there is a positive density \(p := dP \circ \iota/d\nu_g \in C^{\infty}\) \end{enumerate} that the following quantum-classical correspondence holds: \begin{theorem*}[{\emph{Quantum-Classical Correspondence} for the graph Laplacian}] Let \(\lambda \geq 0\) and \(h \in (0, 1]\). Then, for all \(\alpha \geq 1\), \(h^2 \Delta_{\lambda,h^{2 + \alpha}}\) is a semiclassical \({\Psi\text{DO}}\) in \(h^0 \Psi^2\) as defined in \protect\hyperlink{quantization-and-symbol-classes}{Section \ref{quantization-and-symbol-classes}}, whose symbol in a fixed neighbourhood of \(0 \in T^*\mathcal{M}\) is \(|\xi|_{g_x}^2 + O(h)\). Furthermore, let \(\psi_h\) be an \(L^2\) normalized coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M} \setminus 0\) and denote by \(\Gamma^t : T^*\mathcal{M} \to T^*\mathcal{M}\) the (co-)geodesic flow. Then, there exists \(h_0 > 0\) such that given \(a \in C^{\infty}(T^*\mathcal{M})\) belonging to the symbol class \(h^0 S^0\) with its \emph{quantization} \(A := \operatorname{Op}_h(a) \in h^0 \Psi^0\) as per \protect\hyperlink{quantization-and-symbol-classes}{Section \ref{quantization-and-symbol-classes}} and \(|t|\) smaller than the injectivity radius at \(x_0\), we have for all \(h \in (0, h_0]\), \begin{equation}\begin{aligned} \langle U_{\lambda,h^{2 + \alpha}}^{-t} A U_{\lambda,h^{2 + \alpha}}^t \psi_h(\cdot ; x_0, \xi_0), \psi_h(\cdot ; x_0, \xi_0) \rangle_{L^2(\mathcal{M})} = a \circ \Gamma^t(x_0, \xi_0) + O(h). \end{aligned} \nonumber \end{equation} \end{theorem*} As an immediate application of this, we study in \protect\hyperlink{propagation-of-coherent-states}{Section \ref{propagation-of-coherent-states}} the localization properties of the propagation \(U_{\lambda,\epsilon}^t[\psi_h]\) of a coherent state. Namely, we let \(A\) be the multiplication operator by a smooth approximation to a point mass at \(x \in \mathcal{M}\), say \(\rho_{x,\varepsilon}\) and consider the \emph{density} \(|U_{\lambda,\epsilon}^t[\psi_h]|^2(x) = \lim_{\varepsilon \to 0} \langle \rho_{x,\varepsilon} U_{\lambda,\epsilon}^t \psi_h, U_{\lambda,\epsilon}^t \psi_h \rangle\). By appealing to FBI transforms, we arrive at a characterization in \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}} that gives a rather explicit form of this function, up to a term that is \(L^{\infty}\) bounded by \(Ch\) for some \(C > 0\). It further tells that the propagated state is localized to roughly a \(\sqrt{h}\)-ball about \(x_t\), the projection onto \(\mathcal{M}\) of \(\Gamma^t(x_0, \xi_0)\). 
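In computations, this propagation admits a direct matrix realization. The following Python sketch (illustrative only; all names are assumptions of the sketch) applies \(e^{i t \sqrt{\Delta}}\) to a discretized model wavepacket via an eigendecomposition of the symmetrized Laplacian, using the conjugation identity recorded later in \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}; the wavepacket is the flat model phase of \protect\hyperlink{fbi-transform}{Section \ref{fbi-transform}}, evaluated in ambient coordinates and restricted to the samples:

\begin{verbatim}
# Sketch: apply U^t = exp(i t sqrt(Delta)) to a vector psi on the samples.
# Delta: discrete graph Laplacian; p: the vector whose square root
# symmetrizes it (cf. Lemma lem:lap-symm).  Illustrative only.
import numpy as np

def half_wave_propagate(Delta, p, psi, t):
    s = np.sqrt(p)
    Dsym = Delta * s[:, None] / s[None, :]   # symmetrized Laplacian
    w, V = np.linalg.eigh(Dsym)              # real spectrum
    w = np.clip(w, 0.0, None)                # guard tiny negative eigenvalues
    phases = np.exp(1j * t * np.sqrt(w))
    return (V @ (phases * (V.conj().T @ (s * psi)))) / s

def model_coherent_state(X, x0, xi0, h):
    # flat model e^{-|x - x0|^2/(2h)} e^{i <x - x0, xi0>/h} on the samples
    d = X - x0
    psi = np.exp(-np.sum(d ** 2, axis=1) / (2.0 * h) + 1j * (d @ xi0) / h)
    return psi / np.linalg.norm(psi)
\end{verbatim}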
These properties are put to use in \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}}, wherein we study the convergence of the discrete counterparts to the preceding constructions in the continuum setting. Namely, the propagator we use is \(U_{\lambda,\epsilon,N}^t := e^{i t \sqrt{\Delta}_{\lambda,\epsilon,N}}\), which is defined by functional calculus and acts on \(\mathcal{H}_N\), equipped with the inner product \(\langle u, v \rangle_N := \frac{1}{N} \sum_{j=1}^N u(x_j) \overline{v(x_j)}\) that gives the norm \(||\cdot||_N := \langle \cdot, \cdot \rangle_N^{\frac{1}{2}}\). This vector space structure limits to \(L^2(\mathcal{M}, p d\nu_g)\) as \(N \to \infty\), so it becomes important to normalize vectors to reduce the effect of the sampling density \(p\). The matrix \(U_{\lambda,\epsilon,N}^t\) is not unitary, so the initial state is a \emph{time-dependent} normalization of the coherent state \(\psi_h\), that is, we set \(\psi_{h,t,N} := \psi_h / ||U_{\lambda,\epsilon,N}^t [\psi_h]||_N\). Then, under the additional assumption that \(\Lambda_N\) is a set of \emph{i.i.d.} random vectors on \(\mathbb{R}^{D}\) with law \(P\) and \(\iota\) has a bounded second fundamental form, in \protect\hyperlink{thm:mean-prop-geoflow-consistency}{Theorem \ref{thm:mean-prop-geoflow-consistency}} we arrive at the following consistency result: \begin{theorem*}[{Quantum-Classical Correspondence for the discrete graph Laplacian}] Let \(\lambda \geq 0\), \(\alpha \geq 1\), \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(|t|\) be smaller than the injectivity radius at \(x_0\). Then, for \(\psi_h\) a coherent state localized about \((x_0, \xi_0)\) and given \(u \in C^{\infty}\), there are constants \(h_0, C > 0\) such that we have for all \(h \in (0, h_0]\) and \(\epsilon := h^{2 + \alpha}\), \begin{equation}\begin{aligned} \Pr [| \langle |U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}](x_j)|^2, u\rangle_N - u(x_t) | > h ] \leq e^{-\frac{N h^{2(n + 2)} \epsilon^{\frac{5}{2}n + 4}}{C}} . \end{aligned} \nonumber \end{equation} \end{theorem*} As an application of this, we recover in \(\mathcal{X}_N := \iota^{-1}[\Lambda_N]\) the geodesic flow on \(\mathcal{M}\). That is, using \emph{coordinate functions} with either local coordinates or the extrinsic coordinates \(\iota_1, \ldots, \iota_{D}\), we find in \protect\hyperlink{prop:mean-geodesic-recover-consistency}{Proposition \ref{prop:mean-geodesic-recover-consistency}} that, \begin{proposition*}[{Observing Geodesics: the \(\operatorname{mean}\) case}] Let \(\psi_h\) be a coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(|t|\) less than the injectivity radius at \(x_0\). \emph{Extrinsic case}. Define the \emph{global extrinsic sample mean} \(\bar{x}_{N,\iota,t}\) to be the closest point in \(\Lambda_N\) to \begin{align*} &&\bar{\iota}^t_N(x_0, \xi_0) &:= (\bar{\iota}_{N,1}^t(x_0, \xi_0), \ldots, \bar{\iota}_{N,D}^t(x_0, \xi_0)), \\ \quad\quad \text{with} && \bar{\iota}_{N,j}^t(x_0, \xi_0) &:= \langle \, |U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2 \,, \iota_j \rangle_N . \end{align*} That is, \(\bar{x}_{N,\iota,t} := \arg\min_{X \in \Lambda_N} ||X - \bar{\iota}^t_N(x_0, \xi_0)||_{\mathbb{R}^{D}}\). Then, given \(\alpha \geq 1\), there are constants \(h_{\iota,\max} > 0\) and \(C \geq 1\) such that for all \(h \in (0, h_{\iota, \max}]\), \begin{equation}\begin{aligned} \Pr[d_g(\iota^{-1}(\bar{x}_{N,\iota,t}),x_t) > h] \leq e^{-\frac{N h^{2(n + 2)} \epsilon^{\frac{5}{2}n + 4}}{C}} .
\end{aligned} \nonumber \end{equation} \emph{Local case}. Let \(\mathscr{O}_t \subset \mathcal{M}\) be an open neighbourhood about a maximizer \(\hat{x}_{N,t}\) of \(|U_{\lambda,\epsilon,N}^t[\psi_h]|^2\) over \(\mathcal{X}_N\) and \(u : \mathscr{O}_t \to V_t \subset \mathbb{R}^{n}\) its \(C^{\infty}\) diffeomorphic coordinate mapping. Given a smooth cut-off \(\chi \in C_c^{\infty}(\mathbb{R}^{n}, [0, 1])\) with \(\operatorname{supp} \chi \subset V_t\), define the \emph{local sample mean} \(\bar{x}_{N,u,t}\) to be the closest point in \(\mathscr{V}_N := u[\mathcal{X}_N \cap \mathscr{O}_t]\) to \begin{align*} && \bar{u}^t_N(x_0, \xi_0) &:= (\bar{u}_{N,1}^t(x_0, \xi_0), \ldots, \bar{u}_{N,n}^t(x_0, \xi_0)), \\ \quad\quad\text{with} && \bar{u}_{N,j}^t(x_0, \xi_0) &:= \langle \,|U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2 , (\chi \circ u) \, u_j \, \rangle_N . \end{align*} That is, \(\bar{x}_{N,u,t} := \arg\min_{X \in \mathscr{V}_N} ||X - \bar{u}_N^t(x_0, \xi_0)||_{\mathbb{R}^{n}}\). Then, there are constants \(h_{u,\max} > 0\) and \(C \geq 1\) such that for all \(h \in (0, h_{u,\max}]\), if \(\overline{\mathscr{B}}_t := \{ v : ||u(\hat{x}_{N,t}) - v||_{\mathbb{R}^{n}} \leq \sqrt{h} \} \subset V_t\), then for any smooth cut-off \(\chi \in C_c^{\infty}(\mathbb{R}^{n}, [0, 1])\) with \(\operatorname{supp} \chi \subset V_t\) and \(\chi \equiv 1\) on \(\overline{\mathscr{B}}_t\), we have \begin{equation}\begin{aligned} \Pr[d_g(u^{-1}(\bar{x}_{N,u,t}),x_t) > h] \leq e^{-\frac{N h^{2(n + 2)} \epsilon^{\frac{5}{2}n + 4}}{C}} . \end{aligned} \nonumber \end{equation} \end{proposition*} On the way to these consistency results, we have studied in \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}} the consistency between certain matrix dynamics on \(\mathcal{H}_N\) and operator dynamics on \(C^{\infty}(\mathcal{M})\). An intermediary result of independent interest is that for \(u \in C^{\infty}\), a natural \emph{extension} of \(U_{\lambda,\epsilon,N}^t[u]\) to \(\mathcal{M}\) is, with high probability, close to \(U^t_{\lambda,\epsilon}[u]\) in the \(L^{\infty}\) norm. The extension is realized by a discrete version of \(\eqref{eq:prop-plwp-glap-decomp}\), namely, for any \(x \in \mathcal{M}\) we have, \begin{equation} \label{eq:prop-plwp-glapN-decomp} U_{\lambda,\epsilon,N}^t[u](x) = f(0) u(x) + A_{\lambda,\epsilon,N}[Df(A_{\lambda,\epsilon,N})[u]](x). \end{equation} Here, \(Df(A_{\lambda,\epsilon,N})\) is an \(N \times N\) matrix over \(\mathcal{H}_N\) as defined by spectral calculus, while \(A_{\lambda,\epsilon,N}\) naturally extends to a finite-rank operator on \(L^2(\mathcal{M})\), for example in the \(\lambda = 0\) case by the fact that \(A_{\epsilon,N}[v](x) = \langle v, k_{\epsilon}(x, \cdot) \rangle_N/\langle k_{\epsilon}(x, \cdot), 1 \rangle_N\). Thus, \(\eqref{eq:prop-plwp-glapN-decomp}\) is essentially a Nyström extension to the manifold. With this formalism at hand, we find in \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} that, \begin{theorem*}[{Consistency of Half-Wave Propagations}] Let \(\lambda \geq 0\) and \(\epsilon \in (0, 1]\).
Then, we have for any \(t \in \mathbb{R}\) and \(u \in L^{\infty}(\mathcal{M})\) that if there is \(K_u > 0\) such that for all \(|s| \leq |t|\), \(||U_{\lambda,\epsilon}^s[u]||_{\infty} \leq K_u\), then there are constants \(C, C_0 \geq 1\) such that for all \(\frac{K_u^{\frac{1}{2}} \epsilon^{-(\frac{5}{8} n + 1)}}{C_0 N^{\frac{1}{4}}} \leq \delta < C_0\), \begin{align*} \Pr&[||U_{\lambda,\epsilon,N}^t[u] - U_{\lambda,\epsilon}^t[u]||_{\infty} > \delta] \leq \exp{\left( -\frac{N \delta^4 \epsilon^{{\frac{5}{2}n + 4}}}{C K_u^2 \, |t|^8} \right)} . \end{align*} \end{theorem*} \noindent In connection with the semiclassical nature of \(\Delta_{\lambda,\epsilon}\) for \(\epsilon \ll h^2\), it may be possible to obtain a generic bound \(K_u\) by appealing to semiclassical Strichartz-type estimates to arrive at an \(L^1 \to L^{\infty}\) bound on \(U^t_{\lambda,\epsilon}\), as used for example in the proof of \citep[Theorem 10.8]{zworski2012}. Alternatively, a bound is attainable directly from inspection of \(\eqref{eq:prop-plwp-glap-decomp}\) combined with spectral theory and we give that in the Corollary to \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}}. The discrete results we present justify the use of spectral manipulations of the discrete graph Laplacian for approximating solutions to wave equations and, ultimately, geodesic flows on \(\mathcal{M}\) through linear computations on the dataset \(\Lambda_N\). In \protect\hyperlink{observing-geodesics-on-graphs}{Section \ref{observing-geodesics-on-graphs}} we indicate an algorithm of this sort for approximating geodesics of \(\mathcal{M}\) in \(\Lambda_N\). More careful constructions, especially with regards to the coherent states, are deployed in \citep{qml}, where several examples are given, both for the model cases of a sphere and a torus and for real-world datasets, and convergence rates are shown. The consistency analysis we study is independent of arguments from the spectral convergence of graph Laplacians to \(\Delta_{\mathcal{M}}\), which is currently a very active topic, both in the probabilistic \citep{trillos2020error, trillos2018variational, wormell2021spectral, cheng2020spectral} and deterministic \citep{burago2015graph} settings; an interesting direction would be to apply the consistency of half-wave propagations in \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} to the study of these spectral problems. \textbf{Outline}. We proceed as follows: in \protect\hyperlink{preliminaries}{Section \ref{preliminaries}} we state the assumptions on \(\mathcal{M}, \iota, P\) and the \emph{kernel function} \(k\) used to construct the graph Laplacians. Then, we state some basic geometric lemmas that will be used later and come to the definitions of the graph Laplacians. We give some of their basic spectral properties and in \protect\hyperlink{consistency-bounds-redux}{Section \ref{consistency-bounds-redux}} we re-prove a well-known result on the pointwise consistency of the application of discrete averaging operators \(A_{\lambda,\epsilon,N}\) to \(L^{\infty}\) functions, following \citep{hein2005geometrical}, but give more explicit details on the dependence on the \(L^{\infty}\) norm of the function. This allows further convergence results to be stated in terms of these norms, which may depend on \(\epsilon\), as is the case for example with the coherent states we use later.
Then, we provide some background on \({\Psi\text{DO}}\)s, mainly to state the forms of quantization we consider and set some notation and in \protect\hyperlink{fbi-transform}{Section \ref{fbi-transform}} we recall the notion of an FBI transform on a smooth, compact manifold as explicated in \citep{wunsch2001fbi}. In \protect\hyperlink{semi-classical-measures-of-coherent-states}{Section \ref{semi-classical-measures-of-coherent-states}} we give an explicit form to the Husimi function of a coherent state; this simply follows from the Schwartz kernel of the operator that projects functions in \(L^2(T^*\mathcal{M})\) onto the range of the FBI transform and indeed, we follow the discussion in \citep{wunsch2001fbi}. The results in this section are likely well-known, but to the best of the author's knowledge, not explicitly written down in the literature and perhaps not directly accessible to a wider audience interested in the topics of this study, so we provide them in some detail. We briefly discuss in \protect\hyperlink{state-preparation}{Section \ref{state-preparation}} some practical considerations in the construction of coherent states from the data, \(\Lambda_N\); this program is carried out in more detail in \citep{qml}. We study the connection between semiclassical analysis and graph Laplacians in \protect\hyperlink{from-graph-laplacians-to-geodesic-flows}{Section \ref{from-graph-laplacians-to-geodesic-flows}}. Then, we study the propagation of coherent states in \protect\hyperlink{propagation-of-coherent-states}{Section \ref{propagation-of-coherent-states}}; the main discussion follows general considerations of properties of \(|e^{\frac{i}{h} t Q} [\psi_h]|^2\) with \(Q\) a \({\Psi\text{DO}}\) whose symbol locally approximates \(|\xi|_{g_x}\). This is again a topic that is likely part of folklore, but to the author's knowledge, not explicated in the literature. Since by the discussion in \protect\hyperlink{from-graph-laplacians-to-geodesic-flows}{Section \ref{from-graph-laplacians-to-geodesic-flows}}, the propagation \(|U_{\lambda,\epsilon}^t[\psi_h]|^2\) is essentially reduced to this situation, we quickly conclude the localization properties of this propagation. In \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}} we study the discrete problems over \(\Lambda_N\) and establish probabilistic convergence rates to bridge the gap to the continuum, \(N \to \infty\) setting. \hypertarget{preliminaries}{% \section{Preliminaries}\label{preliminaries}} We will discuss the necessary results for manifold learning and semiclassical analysis. To set the stage, there is an \emph{unknown manifold} \(\mathcal{M}\), on which we wish to implement operators and dynamics and whose geometry we wish to learn; we always operate under the following assumptions: \begin{assumptions} \hypertarget{assumptions}{\label{assumptions}} The manifold \(\mathcal{M}\) is \(C^{\infty}\), compact, boundaryless, of dimension \(n := \dim \mathcal{M}\) with Riemannian metric \(g\) giving the geodesic distance function \(d_g : \mathcal{M} \times \mathcal{M} \to \mathbb{R}_+\) and embedded as \(\Lambda \subset \mathbb{R}^{D}\) through a smooth isometry \(\iota : \mathcal{M} \stackrel{\sim}{\smash{\longrightarrow}\rule{0pt}{0.4ex}} \Lambda \subset \mathbb{R}^D\) with \(D \geq n+1\).
The second fundamental form of \(\Lambda\) (see \citep[Definition 2.11]{hein2005geometrical}) has bounded norm and \begin{equation}\begin{aligned} \kappa := \inf_{x \in \mathcal{M}} \inf_{y \in \mathcal{M} \setminus B_{\pi \rho}(x,g)} |\iota(x) - \iota(y)| > 0, \end{aligned} \nonumber \end{equation} wherein \(\rho := \inf \{\, |\partial_t^2 [\iota \circ \gamma]|^{-1} ~|~ \gamma \text{ unit-speed geodesic in $\mathcal{M}$} \,\}\), termed the \emph{minimum radius of curvature}, is positive due to \(\Lambda\) having bounded second fundamental form (see \citep[\(\S 2.2.1\)]{hein2005geometrical}). Samples \(X_1, \ldots, X_N\) are drawn \emph{i.i.d.} from a fixed probability distribution \(P\) concentrated on \(\Lambda\) such that \(P \circ \iota^{-1}\) is absolutely continuous with respect to the volume measure on \(\mathcal{M}\) and has smooth, positive density \(p \in C^{\infty}(\mathcal{M})\). \end{assumptions} \begin{remark} The conditions on \(\Lambda\) give control on its extrinsic geometry through the intrinsic geometry of \(\mathcal{M}\): in particular, \citep[Lemma 2.22]{hein2005geometrical} tells that for all \(x, y \in \mathcal{M}\) such that \(|\iota(x) - \iota(y)| \leq \kappa/2\), we have \(d_g(x,y)/2 \leq |\iota(x) - \iota(y)| \leq d_g(x,y) \leq \kappa\). \end{remark} A common way to think of the samples \(X_1, \ldots, X_N\) is through a \emph{graph structure} with vertices \(\mathcal{X}_N := \{ x_1, \ldots, x_N \}\), wherein \(x_j := \iota^{-1}(X_j)\) for each \(j \in [N]\). In this setting, the graph connectivity is given by an adjacency matrix that uses the embedding \(\iota\) and is defined as one of the \emph{discrete averaging operators} \(A_{\lambda,\epsilon,N}\) defined in \protect\hyperlink{laplacians-from-graphs-to-manifolds}{Section \ref{laplacians-from-graphs-to-manifolds}}. We can also view \(\mathcal{X}_N\) as a set of basis elements giving rise to a vector space structure and endow it with an inner product in which these are orthogonal. Then, the restriction of \(C^{\infty}\) functions to \(\mathcal{X}_N\) can be seen as vectors in this space and with appropriate weighting schema, the discrete structures that emerge begin to parallel certain weighted \(L^2\) spaces. Such representations form the basis from which discrete constructions --- particularly functions of averaging operators that encode certain dynamics --- are extended to the full, \emph{continuum} space \(\mathcal{M}\). The basic principles of this point of view are treated in \citep{hein2005geometrical} and briefly in \citep{hein2005graphs}; we will utilise it primarily in \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}}. The general notation used throughout the following sections is summarized in the Appendix and more specific notation is listed there for quick reference; when some notation is introduced for the first time, it will be defined in context. \hypertarget{geometric-properties}{% \subsection{Geometric properties}\label{geometric-properties}} We will often need to switch to normal coordinates and expand them in Taylor series about a point of interest. Effecting this, we have a classical theorem of Riemann: \begin{lemma}[{Expansion of Normal Coordinates}] \hypertarget{lemma:expansion-normal-coords}{\label{lemma:expansion-normal-coords}} Let \(x \in \mathcal{M}\) and \(V_x \subset \mathbb{R}^{n}\) be a neighbourhood about the origin providing normal coordinates about \(x\).
Then, for \(\exp_x : V_x \to \mathcal{M}\) defined via the identification \(T_x\mathcal{M} \cong \mathbb{R}^{n}\), we have \(D \exp_x(v) = I_n + O(|v|^2)\) and \((g \circ \exp_x)(v) = I_n + O(|v|^2)\). \qed \end{lemma} In practice, direct access to normal coordinates is seldom achievable. However, projection onto the tangent space at a point on \(\Lambda\) also provides local coordinates and for \emph{shrinking} neighbourhoods, this approximates normal coordinates reasonably well: \begin{lemma}[{Normal to Projection Coordinates}] \hypertarget{lem:proj-normal-coords}{\label{lem:proj-normal-coords}} Let \(\mathcal{U} \subset \mathcal{M}\) be a neighbourhood and \(V_x \subset \mathbb{R}^{n} \cong T_{\iota(x)}\Lambda\) for some \(x \in \mathcal{U}\). If \(\gamma : \mathcal{U} \to V_x\) are local coordinates provided by the orthogonal projection \(\iota(\mathcal{U}) \to V_x\) and \(s_x(y) := \exp^{-1}_x(y)\), then \(s_x \circ \gamma^{-1}(v) = v + O(|v|^3)\) and \(D[s_x \circ \gamma^{-1}](v) = I + O(|v|^3)\). \end{lemma} \begin{proof} This is \citep[Lemma 5]{lafon2004thesis}. \end{proof} Another option is to work directly with extrinsic coordinates provided by the embedding map \(\iota : \mathcal{M} \to \Lambda\). To this end, we have the following expansion, useful for integration formulae: \begin{lemma}[{Extrinsic to Normal Coordinates}] \hypertarget{lem:ext-normal-coords}{\label{lem:ext-normal-coords}} Let \(\mathcal{U} \subset \mathcal{M}\) be a neighbourhood about \(x \in \mathcal{M}\) contained inside the normal neighbourhood of \(x\) and \(s_x(y) := \exp^{-1}_x(y)\). Then, \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(D[\iota \circ s_x^{-1}](0) = D[\iota](x)\) and \item for all \(y \in \mathcal{U}\), \(|\iota(x) - \iota(y)|^2 = d_g(x,y)^2 + O(d_g(x,y)^4)\). \end{enumerate} \end{lemma} \begin{proof} These are both shown in \citep[Proof of Lemma 2.1]{antil2018fractional}. \end{proof} We will also need the following information on the growth of covering numbers on compact manifolds using balls of decaying radii (to avoid clashing with the minimum radius of curvature \(\rho\) above, we denote the ball radius by \(r\)): \begin{lemma}[{Bishop-Günther inequality}] \hypertarget{lem:covering-number}{\label{lem:covering-number}} Let \(r > 0\) be fixed and define \(\mathcal{N}(r)\) to be the minimal number of (geodesic) balls of radius \(r\) that cover \(\mathcal{M}\). Then, there exist positive constants \(C_{\mathcal{M}}\) and \(C'_{\mathcal{M}}\) depending only on the dimension, volume, sectional curvature and injectivity radius of \(\mathcal{M}\) such that if \(0 < r \leq C'_{\mathcal{M}}\), then \(\mathcal{N}(r) \leq C_{\mathcal{M}} r^{-n}\). \end{lemma} \begin{proof} The volume of \(B(x, r)\) for \(x \in \mathcal{M}\) is \(V(x) \geq r^{n}/C''_{\mathcal{M}}\) when \(r \leq C'_{\mathcal{M}}\) for some constants \(C'_{\mathcal{M}}, C''_{\mathcal{M}} > 0\) independent of \(x\) that are provided by the Bishop-Günther inequality and depend only on the sectional curvature, injectivity radius and dimension of \(\mathcal{M}\). Therefore, \(\mathcal{N}(r) \leq \operatorname{vol}(\mathcal{M})/(\inf V) \leq C_{\mathcal{M}} r^{-n}\) with \(C_{\mathcal{M}} := \operatorname{vol}(\mathcal{M}) C''_{\mathcal{M}}\).
\end{proof} \hypertarget{laplacians-from-graphs-to-manifolds}{% \subsection{Laplacians: from graphs to manifolds}\label{laplacians-from-graphs-to-manifolds}} As alluded to in the introduction, there are now well-established schema for approximating Laplace-Beltrami operators, modulo lower-order terms, from the Euclidean distances of samples \(X_1, \ldots, X_N\) of a smooth isometric embedding of \(\mathcal{M}\) satisfying the \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}}. Namely, the objects of study are adjacency matrices of weighted graphs on these samples and taking on a certain form, which we call \emph{averaging operators}. We now fix the terminology: \begin{definition}[{Kernel functions}] A \emph{kernel function} is a monotonically decreasing function \(k: \mathbb{R}_+ \to \mathbb{R}_+\) that is smooth on \((0,\infty)\) with all derivatives having exponential decay: that is, for each \(m \geq 0\), there are constants \(R_{k,m}, A_{k,m} > 0\) so that for all \(t > R_{k,m}\), \begin{equation}\begin{aligned} |\partial_t^m k(t)| \leq e^{-A_{k,m} t} \end{aligned} \nonumber \end{equation} and that satisfies \(k(t) > ||k||_{\infty}/2\) on \([0, r_k)\) for some \(r_k > 0\). We denote \(R_k := R_{k,0}\). \end{definition} Kernel functions are the basic building blocks for the graph structures leading to the operators of interest. Using them, we can make, \begin{definition}[{Averaging operators}] Let there be \(N > 0\) random vectors \(X_1, \ldots, X_N \subset \Lambda\) with law \(P\) as per the \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}} and for each \(j \in [N]\), denote \(x_j := \iota^{-1}(X_j)\). Then, fixing \(k : \mathbb{R}_+ \to \mathbb{R}_+\) a kernel function and letting \(k_{\epsilon} : \mathcal{M} \times \mathcal{M} \to \mathbb{R}\) be given by \(k_{\epsilon} : (x,y) \mapsto \epsilon^{-\frac{n}{2}} k(|\iota(x) - \iota(y)|^2/\epsilon)\) for \(\epsilon > 0\), we define the \emph{discrete} and \emph{continuum averaging operators} as \begin{equation} \label{def:averaging-op} (\mathscr{A}_{\epsilon,N})_{j,j'} := \frac{1}{N} k_{\epsilon}(x_j,x_{j'}), \quad \mathscr{A}_{\epsilon} : u \mapsto \int_{\mathcal{M}} k_{\epsilon}(x,y) u(y) p(y) ~ d\vol{y} , \end{equation} respectively. Further, with \(p_{\epsilon,N}(x) := \mathscr{A}_{\epsilon,N}[1](x)\) and \(p_{\epsilon}(x) := \mathscr{A}_{\epsilon}[1](x)\) we define \begin{equation}\begin{aligned} A_{\epsilon,N} := \operatorname{diag}(p_{\epsilon,N}^{-1}) \mathscr{A}_{\epsilon,N}, \quad A_{\epsilon} := \frac{1}{p_{\epsilon}} \mathscr{A}_{\epsilon}, \end{aligned} \nonumber \end{equation} called the \emph{discrete} and \emph{continuum} \emph{renormalized averaging operators}, respectively. More generally, let \(\lambda \geq 0\) and define \(k_{\lambda,\epsilon,N}(x,y) := k_{\epsilon}(x,y)/[p_{\epsilon,N}(x) p_{\epsilon,N}(y)]^{\lambda}\), \(k_{\lambda,\epsilon}(x,y) := k_{\epsilon}(x,y)/[p_{\epsilon}(x) p_{\epsilon}(y)]^{\lambda}\).
With this, \begin{equation}\begin{aligned} (\mathscr{A}_{\lambda,\epsilon,N})_{j,j'} := \frac{1}{N} k_{\lambda,\epsilon,N}(x_j,x_{j'}), \quad \mathscr{A}_{\lambda,\epsilon} : u \mapsto \int_{\mathcal{M}} k_{\lambda,\epsilon}(x,y) u(y) ~ p(y)d\vol{y} \end{aligned} \nonumber \end{equation} are the \emph{discrete} and \emph{continuum} \(\lambda\)-\emph{averaging operators}, respectively, and \begin{equation}\begin{aligned} A_{\lambda,\epsilon,N} := \operatorname{diag}(\mathscr{A}_{\lambda,\epsilon,N}[1]^{-1}) \, \mathscr{A}_{\lambda,\epsilon,N}, \quad A_{\lambda,\epsilon} : u \mapsto (\mathscr{A}_{\lambda,\epsilon}[1])^{-1} \mathscr{A}_{\lambda,\epsilon}[u] \end{aligned} \nonumber \end{equation} are the \emph{discrete} and \emph{continuum} \(\lambda\)-\emph{renormalized averaging operators}, respectively. \end{definition} \begin{notation} We will denote for every \(\lambda \geq 0\), \(p_{\lambda,\epsilon} := \mathscr{A}_{\lambda,\epsilon}[1]\). \end{notation} The main use of these operators is to approximate the Laplace-Beltrami operator on the manifold. Therefore, we also make the \begin{definition}[{Graph Laplacians}] Let \(c_2\) and \(c_0\) be the second and zeroth order moments of \(k(||\cdot||^2)\) on \(\mathbb{R}^{n}\). We define \begin{equation}\begin{aligned} \Delta_{\lambda,\epsilon,N} := \frac{2 c_0}{c_2} \frac{I - A_{\lambda,\epsilon,N}}{\epsilon}, \quad\quad \Delta_{\lambda,\epsilon} := \frac{2 c_0}{c_2} \frac{I - A_{\lambda,\epsilon}}{\epsilon}, \end{aligned} \nonumber \end{equation} which we call the \emph{discrete} and \emph{continuum} \(\lambda\)-\emph{renormalized} \emph{graph Laplacians}, respectively. Note that when \(\lambda = 0\), \(A_{\lambda,\epsilon,N} = A_{\epsilon,N}\) and \(A_{\lambda,\epsilon} = A_{\epsilon}\). \end{definition} \begin{remark} In using the above terminologies, we will often omit the \emph{discrete} and \emph{continuum} specifications as well as \emph{\(\lambda\)-renormalized} when the symbols and context make the regime sufficiently clear. The symbolic expressions are adapted from \citep{hein2005graphs}, while the terminology is non-standard. \end{remark} We will use the spectral properties of \(A_{\lambda,\epsilon,N}\) and \(A_{\lambda,\epsilon}\) and the simplest way to access those is to see that they are symmetrized by conjugation via \(\sqrt{p_{\lambda,\epsilon,N}}\) and \(\sqrt{p_{\lambda,\epsilon}}\) respectively. That is, we have, \begin{lemma} \hypertarget{lem:lap-symm}{\label{lem:lap-symm}} For all \(\lambda \geq 0\) and \(\epsilon > 0\), let \begin{equation}\begin{aligned} A^{(s)}_{\lambda,\epsilon,N} := \sqrt{p_{\lambda,\epsilon,N}} A_{\lambda,\epsilon,N} \frac{1}{\sqrt{p_{\lambda,\epsilon,N}}}, \quad A^{(s)}_{\lambda,\epsilon} := \sqrt{p_{\lambda,\epsilon}} A_{\lambda,\epsilon} \frac{1}{\sqrt{p_{\lambda,\epsilon}}} . 
\end{aligned} \nonumber \end{equation} Then, \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item \(A_{\lambda,\epsilon,N}^{(s)}\) and \(A_{\lambda,\epsilon}^{(s)}\) are symmetric and their spectra coincide with those of \(A_{\lambda,\epsilon,N}\) and \(A_{\lambda,\epsilon}\), respectively, \item the spectra of \(A_{\lambda,\epsilon,N}\) and \(A_{\lambda,\epsilon}\) are contained in \([-1,1]\) with \(1\) a simple eigenvalue and all other eigenvalues lying strictly in \((-1, 1)\) and more informatively, \item \(A_{\lambda,\epsilon,N}^{(s)}\) and \(A_{\lambda,\epsilon}^{(s)}\) fix \(\sqrt{p_{\lambda,\epsilon,N}}\) and \(\sqrt{p_{\lambda,\epsilon}}\), respectively and \item for any connected open subset \(D \subset \mathbb{C}\) with \([-1, 1] \subset \bar{D}\) and \(f : \bar{D} \to \mathbb{C}\) analytic on \(D\) with an absolutely convergent Taylor series on \([-1,1]\), \end{enumerate} \begin{equation}\begin{aligned} f(A_{\lambda,\epsilon,N}) = \frac{1}{\sqrt{p_{\lambda,\epsilon,N}}} f(A^{(s)}_{\lambda,\epsilon,N}) \sqrt{p_{\lambda,\epsilon,N}}, \quad f(A_{\lambda,\epsilon}) = \frac{1}{\sqrt{p_{\lambda,\epsilon}}} f(A^{(s)}_{\lambda,\epsilon}) \sqrt{p_{\lambda,\epsilon}} . \end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} The properties (1) and (3) follow immediately from the definitions of \(A_{\lambda,\epsilon,N}^{(s)}\) and \(A_{\lambda,\epsilon}^{(s)}\). The classical Perron-Frobenius theorem applied to row-stochastic matrices gives (2) for \(A_{\lambda,\epsilon,N}\) and the corresponding generalization for compact operators is the Krein-Rutman theorem, from which the property follows for \(A_{\lambda,\epsilon}\); together with the holomorphic functional calculus, these give the property (4). \end{proof} A summary of the notation introduced here is provided in \protect\hyperlink{notation-related-to-graph-laplacians}{Appendix}. \hypertarget{consistency-bounds-redux}{% \subsubsection{Consistency bounds redux}\label{consistency-bounds-redux}} The relationship among the discrete and continuum operators is that the discrete averaging operators are naturally accessed from samples of \(\Lambda\) and these are shown to converge, in the \(N \to \infty\) limit, to the continuum averaging operators; the residual coming from the difference of the two operators is deemed the \emph{variance term}. Then, a sequence of Taylor expansions shows that the (\(\lambda\)-renormalized) continuum graph Laplacians, derived from the continuum averaging operators, converge to \(\Delta_{\mathcal{M}} +\) (lower order differential terms) in the \(\epsilon \to 0\) limit and the corresponding residuals are deemed the \emph{bias terms}. A general treatment of the convergence of both terms, simultaneously, is given in \citep{hein2005graphs}. We will draw upon these results throughout the forthcoming analyses, so we record the precise forms of the particular theorems here. In regards to the bias term, our requirements are modest: for the most part, we will need a semiclassical expansion of the operator, which in this case is similar to the Taylor series type expansions given, for example in \citep{hein2005graphs}, but requires an essentially different approach. These expansions will be carried out in \protect\hyperlink{symbol-of-a-graph-laplacian}{Section \ref{symbol-of-a-graph-laplacian}}. 
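Before turning to those, it may help to fix the preceding definitions with a small computational illustration. The following is a minimal numerical sketch (our own, purely illustrative), assuming the Gaussian kernel \(k(t) = e^{-t}\) and the per-coordinate moment convention \(c_2 = \int_{\mathbb{R}^{n}} v_1^2 \, k(|v|^2) ~ dv\), under which \(2 c_0 / c_2 = 4\); the sample size, bandwidth and the spectral sanity check on the circle are arbitrary choices. \begin{verbatim}
import numpy as np

def graph_laplacian(X, eps, lam=0.0, n=None):
    # Discrete lambda-renormalized graph Laplacian for the Gaussian
    # kernel k(t) = exp(-t); 2*c0/c2 = 4 under the per-coordinate
    # moment convention stated above.
    N = X.shape[0]
    n = X.shape[1] if n is None else n                 # intrinsic dimension
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # |X_j - X_j'|^2
    K = eps**(-n/2) * np.exp(-sq/eps) / N              # (A_{eps,N})_{j,j'}
    p = K.sum(axis=1)                                  # p_{eps,N} = A_{eps,N}[1]
    K_lam = K / np.outer(p, p)**lam                    # (1/N) k_{lam,eps,N}
    A = K_lam / K_lam.sum(axis=1, keepdims=True)       # A_{lam,eps,N}, row-stochastic
    return 4.0 * (np.eye(N) - A) / eps                 # Delta_{lam,eps,N}

# Illustrative check: uniform samples of the unit circle (n = 1), where
# the Laplace-Beltrami spectrum is 0, 1, 1, 4, 4, ...
rng = np.random.default_rng(0)
theta = 2*np.pi * rng.random(2000)
X = np.c_[np.cos(theta), np.sin(theta)]
L = graph_laplacian(X, eps=0.01, lam=1.0, n=1)
print(np.sort(np.linalg.eigvals(L).real)[:5])          # approx [0, 1, 1, 4, 4]
\end{verbatim} With \(\lambda = 1\), the renormalization is designed to suppress the influence of the sampling density \(p\); the degree functions, discussed next, are the mechanism through which this influence is controlled.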
Nevertheless, the \emph{degree functions} \(p_{\lambda,\epsilon}\) have an important role in making the approximation procedures agnostic to \emph{a priori} knowledge of the intrinsic dimensionality \(n\) of the manifold and controlling the effect of lower-order error terms in the approximation to the Laplace-Beltrami operator, primarily insofar as the effect of the non-uniform sampling density \(p\) is concerned. This is carried out essentially through their Taylor expansions about a given point \(x \in \mathcal{M}\); the following is recorded here for use in later sections: \begin{lemma} \hypertarget{lem:taylor-expand-deg-func}{\label{lem:taylor-expand-deg-func}} Let \(\lambda \geq 0\) and \(\epsilon > 0\). Then, there exists \(q_{\lambda} \in C^{\infty}(\mathcal{M})\) depending only on \(k, p, \lambda\) and \(\mathcal{M}\) such that for all \(x \in \mathcal{M}\), \begin{equation}\begin{aligned} p_{\lambda,\epsilon}(x) = (c_0 p(x))^{1 - 2\lambda} + \epsilon \frac{c_2}{2} p(x)^{1 - 2\lambda} q_{\lambda}(x) + O(\epsilon^2) . \end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} The proof of this expansion is contained in the proof of \citep[Proposition 2.33]{hein2005geometrical}. \end{proof} As for the variance term, we now recount some of the prior results, with small modifications that we will use for the convergence of quantum dynamics from graphs to manifolds. In particular, prior works have left out the explicit dependence on \(||u||_{\infty}\) in the probabilistic convergence bounds for consistency. Since we ultimately wish to drive localized, \(L^2\) bounded states according to Schrödinger-type dynamics, their \(L^{\infty}\) norms depend on \(\epsilon\), hence it is useful for the convergence theory to make this dependence explicit. Therefore, we proceed with the essential convergence results from the thesis \citep{hein2005geometrical}, while making the small adjustments necessary to \emph{unpack} the uniform norm dependence from the constants. The primary cases that concern us are for the averaging operators. Since constants bounding the kernel, probability and geometric quantities appear abundantly in these considerations, we record the useful \begin{notation} \emph{Bounds on degree functions} (see \citep[Lemma 2.32]{hein2005geometrical} for more detailed bounds): \begin{itemize} \tightlist \item \(\underline{C}_p := \inf p_{\epsilon} \gtrsim ||k||_{\infty} (\inf p) \, \epsilon^{-\frac{n}{2}} \inf_x \operatorname{vol}(B(x, \epsilon^{\frac{1}{2}} R_k))\) \item \(\overline{C}_p := \sup p_{\epsilon} \lesssim_{\mathcal{M}} R_k^{n} ||k||_{\infty} ||p||_{\infty}\) \item \(\underline{C}_{p,\lambda} := \inf p_{\lambda,\epsilon} \geq \underline{C}_p/\overline{C}_p^{2\lambda}\), \(\overline{C}_{p,\lambda} := ||p_{\lambda,\epsilon}||_{\infty} \leq \overline{C}_p/\underline{C}_p^{2\lambda}\) \item \(\underline{C}_{k,p} := ||k||_{\infty}/(\inf p)\). \end{itemize} \end{notation} Let the subscripted variables \(X_1, \ldots, X_j, \ldots, X_N\) denote \emph{i.i.d.} random vectors in \(\mathbb{R}^{D}\) with law \(P\) that has positive \(C^{\infty}(\Lambda)\) density \(\tilde{p}\) with respect to the volume element \(dV\) on \(\Lambda = \iota(\mathcal{M}) \subset \mathbb{R}^{D}\); we denote by \(x_j := \iota^{-1}(X_j)\) the pull-back of \(X_j\) and by \(p\) the pull-back of \(\tilde{p}\) to \(\mathcal{M}\), which gives a density with respect to \(d\nu_g\). 
For simplicity, we assume \(k\) has compact support on \([0, R_k^2]\) (but this can be generalized to \emph{essential support} so that \(t^N k(t) < C_N\) for all \(t > R_k^2, N \geq 0\)). Then, Hein has shown, \begin{lemma}[{\cite[Lemma 2.30]{hein2005geometrical}}] \hypertarget{lem:kern-consistency}{\label{lem:kern-consistency}} There exists a constant \(K_{\mathcal{M}, k, p}\) depending on the curvature and injectivity radius of \(\mathcal{M}\), \(||p||_{\infty}\), \(||k||_{\infty}\) and \(R_k\) such that for all \(u \in L^{\infty}(\mathcal{M})\) and \(x \in \mathcal{M}\), given any \(\delta > 0\), \begin{align*} \Pr\left[ \left|\frac{1}{N} \sum_{j=1}^N k_{\epsilon}(x,x_j) u(x_j) - \int_{\mathcal{M}} k_{\epsilon}(x,y) u(y) p(y) ~ d\nu_g(y) \right| > \delta \right] \\ \leq 4 \exp\left(- \frac{N \epsilon^{\frac{n}{2}} \delta^2}{2||u||_{\infty}(K_{\mathcal{M},k,p} ||u||_{\infty} + ||k||_{\infty}\delta/3)} \right). \end{align*} \end{lemma} \begin{proof} The proof is exactly as in \citep[Lemma 2.30]{hein2005geometrical}, with the only change that we allow \(u\) to be complex-valued, which gives another factor of \(2\) for the probability, due to either applying the real-valued version in two parts, or the two-dimensional Matrix Bernstein inequality via the matrix representation of complex numbers. \end{proof} On extending this to the consistency of normalized averaging operators, one must account for the factors of \(p_{\epsilon,N}, p_{\epsilon}\) appearing in \(A_{\epsilon,N}[u], A_{\epsilon}[u]\) respectively. As random variables, the \(k_{\epsilon}(x,x_j)/p_{\epsilon,N}(x)\) are now coupled, so the application of Bernstein's inequality requires a decoupling step. This comes in the form of the assumption that \(p_{\epsilon,N}(x_j) - p_{\epsilon}(x_j)\) is small for every \(j \in [N]\), which upon taking a union bound introduces a factor of \(N\) in the probability. Since this gives the same bound for all \(\lambda \geq 0\), Hein has shown altogether a Bernstein-type bound for the probabilistic consistency between \(A_{\lambda,\epsilon,N}[u](x)\) and \(A_{\lambda,\epsilon}[u](x)\) for all \(\lambda \geq 0\), \(u \in L^{\infty}\) and \(x \in \mathcal{M}\). However, the bounds have implicitly rolled up the dependence on \(||u||_{\infty}\) into constants, while in \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}} we will use this dependence explicitly so that we have a clear view of the convergence rates when applied to localized states. Therefore, we reproduce the proof of consistency while pulling out the relevant pieces of the constants and record it as, \begin{lemma}[{\cite[Proposition 2.38]{hein2005geometrical}}] \hypertarget{lem:avgop-consistent}{\label{lem:avgop-consistent}} Given \(\lambda \geq 0\), there is a constant \(K_{\lambda} > 1\) depending only on \(||k||_{\infty}\), \(\underline{C}_p\), \(\overline{C}_p\) and \(\lambda\) such that given \(u \in L^{\infty}\) and \(x \in \mathcal{M}\), we have for all \(\epsilon > 0\), and \(2 ||k||_{\infty} \epsilon^{-\frac{n}{2}}/N < \delta \leq \underline{C}_p/2\), \begin{align} \begin{split} \label{eq:thm-avgop-consistent-bound} \Pr&[|A_{\lambda,\epsilon,N}[u](x) - A_{\lambda,\epsilon}[u](x)| > \delta] \\ &\quad\leq (4 + 2N) \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^2}{2 K_{\lambda} \underline{C}_p^{-\lambda} ||u||_{\infty}(K_{\mathcal{M},k,p} K_{\lambda} ||u||_{\infty} \underline{C}_p^{-\lambda} + ||k||_{\infty}\delta/3)} \right). 
\end{split} \end{align} \end{lemma} \begin{remark} The choice of \(\delta < \underline{C}_p / 2\) was made for simplicity; a different choice of \(\delta < \underline{C}_p\) changes only \(K_{\lambda}\) (in the proof, \(K_{\lambda,k,p}\)) as a rational function of \(\delta\). For states that are localized in tandem with the operators --- as will be our case --- only the first bound is useful, since \(||u||_{\infty}\) will grow as \(\epsilon\) decreases and the condition would then have \(\delta\) growing as well. \end{remark} \begin{proof} We follow the proof of \citep[Proposition 2.38]{hein2005geometrical}, paying close attention to the dependency of the constants on \(||u||_{\infty}\). As shown there, using that for all \(\lambda \geq 0\) and \(a, b \geq \beta \geq 0\), \(|1/a^{\lambda} - 1/b^{\lambda}| \leq \lambda |a - b|/\beta^{\lambda + 1}\) gives, \begin{align*} |\mathscr{A}_{\epsilon,N}&[u/p_{\epsilon,N}^{\lambda}](x) - \mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}](x)| \\ &\leq |\mathscr{A}_{\epsilon,N}[u(1/p_{\epsilon,N}^{\lambda} - 1/p_{\epsilon}^{\lambda})]| + |\mathscr{A}_{\epsilon,N}[u/p_{\epsilon}^{\lambda}] - \mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]| \\ &\leq||u||_{\infty} \lambda \, \mathscr{A}_{\epsilon,N}\left[ \frac{|p_{\epsilon} - p_{\epsilon,N}|}{(\min\{\inf p_{\epsilon,N}, \inf p_{\epsilon} \})^{\lambda + 1}} \right] + |(\mathscr{A}_{\epsilon} - \mathscr{A}_{\epsilon,N})[u/p_{\epsilon}^{\lambda}]| \\ &=: \mathcal{E}_{\lambda,1}[u] . \end{align*} and using that for all \(a, b\), \(|a^{\lambda} - b^{\lambda}| \leq \lambda \max \{ |a|, |b| \}^{\lambda - 1} |a - b|\), we have, \begin{align*} \left| \frac{\mathscr{A}_{\epsilon,N}[u/p_{\epsilon,N}^{\lambda}]}{p_{\epsilon,N}^{\lambda}} - \frac{\mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]}{p_{\epsilon}^{\lambda}} \right| &\leq \frac{|\mathscr{A}_{\epsilon,N}[u/p_{\epsilon,N}^{\lambda}] - \mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]|}{p_{\epsilon,N}^{\lambda}} + |\mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]|\frac{|p_{\epsilon,N}^{\lambda} - p_{\epsilon}^{\lambda}|}{p_{\epsilon}^{\lambda} p_{\epsilon,N}^{\lambda}} \\ &\leq \mathcal{E}_{\lambda,1}[u]/p_{\epsilon,N}^{\lambda} + \\ &\quad + ||u||_{\infty} \lambda \max \{ \sup p_{\epsilon,N}, \sup p_{\epsilon} \}^{\lambda - 1} \frac{\overline{C}_p}{\underline{C}_p^{\lambda}} \frac{|p_{\epsilon,N} - p_{\epsilon}|}{(p_{\epsilon} p_{\epsilon,N})^{\lambda}} \\ &=: \mathcal{E}_{\lambda,2}[u]. \end{align*} Taking the normalizations by the \(\lambda\)-degree functions into account, we have \begin{align*} |A_{\lambda,\epsilon,N}[u] - A_{\lambda,\epsilon}[u]| &= \left| \frac{\mathcal{B}_N}{p_{\lambda,\epsilon,N}} - \frac{\mathcal{B}}{p_{\lambda,\epsilon}} \right| \\ &\leq \frac{|\mathcal{B}_N - \mathcal{B}|}{p_{\lambda,\epsilon,N}} + |\mathcal{B}|\frac{|p_{\lambda,\epsilon,N} - p_{\lambda,\epsilon}|}{p_{\lambda,\epsilon,N} \, p_{\lambda,\epsilon}} \\ &\leq \frac{\mathcal{E}_{\lambda,2}[u]}{p_{\lambda,\epsilon,N}} + |\mathcal{B}| \frac{\mathcal{E}_{\lambda,2}[1]}{p_{\lambda,\epsilon,N} \, p_{\lambda,\epsilon}} , \end{align*} with \begin{equation}\begin{aligned} \mathcal{B}_N := \frac{\mathscr{A}_{\epsilon,N}[u/p_{\epsilon,N}^{\lambda}]}{p_{\epsilon,N}^{\lambda}}, \quad \mathcal{B} := \frac{\mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]}{p_{\epsilon}^{\lambda}} . 
\end{aligned} \nonumber \end{equation} In the event that \(|p_{\epsilon,N}(y) - p_{\epsilon}(y)| \leq \delta\), for the deterministic choice \(y = x\) along with all random variables \(y = x_j\) for all \(j \in [N]\), we have that \(\underline{C}_p - \delta < p_{\epsilon,N}(y) < \overline{C}_p + \delta\). Call this event \(\mathcal{A}_1\) and call \(\mathcal{A}_2\) the event that \(|\mathscr{A}_{\epsilon,N}[u/p_{\epsilon}^{\lambda}] - \mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]| \leq \delta ||u||_{\infty}\). Then, \begin{align*} &\mathcal{E}_{\lambda,1}[u] \leq \delta ||u||_{\infty}\{ \lambda (\overline{C}_p + \delta)/(\underline{C}_p - \delta)^{\lambda + 1} + 1 \}, \\ &\begin{aligned}[t] (\underline{C}_p - \delta)/(\overline{C}_p + \delta)^{2\lambda} &\leq p_{\lambda,\epsilon,N} = \mathscr{A}_{\epsilon,N}[1/p_{\epsilon,N}^{\lambda}]/p_{\epsilon,N}^{\lambda} \\ &\leq (\overline{C}_p + \delta)/(\underline{C}_p - \delta)^{2\lambda} , \end{aligned} \\ &|\mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}]| \leq ||u||_{\infty} \overline{C}_p/\underline{C}_p^{\lambda} , \\ &\underline{C}_p /\overline{C}_p^{2\lambda} \leq p_{\lambda,\epsilon} \leq \overline{C}_p/\underline{C}_p^{2\lambda} , \end{align*} wherein we've used the occurrence of the joint event \(\mathcal{A} := \mathcal{A}_1 \wedge \mathcal{A}_2\) for the first inequality and that of \(\mathcal{A}_1\) for the second inequality. The event \(\mathcal{A}_1\) further supplies, \begin{equation}\begin{aligned} \mathcal{E}_{\lambda,2}[u] \leq \frac{\mathcal{E}_{\lambda,1}[u]}{(\underline{C}_p - \delta)^{\lambda}} + \delta\lambda ||u||_{\infty} \frac{(\overline{C}_p + \delta)^{\lambda - 1} \overline{C}_p}{\underline{C}_p^{2\lambda} (\underline{C}_p - \delta)^{\lambda}} . \end{aligned} \nonumber \end{equation} Now, working in event \(\mathcal{A}\) together with the assumption that \(\delta \leq \underline{C}_p/2\) we find, \begin{align*} |A_{\lambda,\epsilon,N}[u](x) - A_{\lambda,\epsilon}[u](x)| &\leq \mathcal{E}_{\lambda,2}[u] \frac{(\overline{C}_p + \delta)^{2\lambda}}{\underline{C}_p - \delta} + ||u||_{\infty} \mathcal{E}_{\lambda,2}[1]\frac{\overline{C}_p^{2\lambda + 1}}{\underline{C}_p^{2\lambda + 1}} \frac{(\overline{C}_p + \delta)^{2\lambda}}{\underline{C}_p - \delta} \\ &\leq \delta ||u||_{\infty}[(\overline{C}_p + \delta)^{2\lambda}(\underline{C}_p - \delta)^{-\lambda-1} + \tilde{C}_{\lambda,k,p}(\delta) + \lambda C_{k,p}(\lambda,\delta)] \\ &\leq 2(9/2)^{\lambda}\delta ||u||_{\infty}(\overline{C}_p^{2\lambda}\underline{C}_p^{-\lambda-1} + 2^{-\lambda}\overline{C}_p^{4\lambda + 1}\underline{C}_p^{-2(\lambda + 1)} + \lambda C_{\lambda,k,p}), \\ &=: K_{\lambda,k,p} ||u||_{\infty} \delta , \end{align*} wherein \(\tilde{C}_{\lambda,k,p}(\delta) := \overline{C}_p^{2\lambda + 1} (\overline{C}_p + \delta)^{2\lambda} \underline{C}_p^{-2\lambda - 1} (\underline{C}_p - \delta)^{-1}\) and \(C_{k,p}(\lambda,\delta) > 0\) is a function, monotonically increasing in \(\delta\) and \(\lambda \geq 0\), that is rational and bounded in \(\delta < \underline{C}_p\) and has coefficients that depend only on \(\underline{C}_{p}, \overline{C}_p\). The final inequality follows upon assuming that \(\delta \leq \underline{C}_p/2\) and setting \(C_{\lambda,k,p} := C_{k,p}(\lambda,\underline{C}_p/2)\). 
Concerning the probabilities governing the event \(\mathcal{A}\), since \(p_{\epsilon} = \mathscr{A}_{\epsilon}[1]\) and \(p_{\epsilon,N} = \mathscr{A}_{\epsilon,N}[1]\), an application of \protect\hyperlink{lem:kern-consistency}{Lemma \ref{lem:kern-consistency}} along with a union bound gives that whenever \(\delta > 2 ||k||_{\infty} \epsilon^{-\frac{n}{2}}/N\), we have that \begin{align*} & \Pr[(\forall) j \in [N], |p_{\epsilon,N}(x_j) - p_{\epsilon}(x_j)| \leq \delta] \\ &\quad\quad > 1 - 2N \exp\left( -\frac{(N - 1)\epsilon^{\frac{n}{2}} \delta^2}{4(2 K_{\mathcal{M},k,p} + ||k||_{\infty}\delta/3)} \right). \end{align*} Another application of \protect\hyperlink{lem:kern-consistency}{Lemma \ref{lem:kern-consistency}} gives, \begin{align*} &\Pr[|\mathscr{A}_{\epsilon,N}[u/p_{\epsilon}^{\lambda}](x) - \mathscr{A}_{\epsilon}[u/p_{\epsilon}^{\lambda}](x)| \leq ||u||_{\infty} \delta] \\ &\quad\quad > 1 - 4 \exp\left( -\frac{N \epsilon^{\frac{n}{2}}\delta^2}{2 \underline{C}_p^{-\lambda}(K_{\mathcal{M},k,p} \underline{C}_p^{-\lambda} + ||k||_{\infty}\delta/3)} \right). \end{align*} Therefore, a union bound gives the bound in the statement of the Lemma. \end{proof} On the other hand, Hein has given a simpler argument for the \(\lambda = 0\) case that is related to the \emph{random walk} graph Laplacian: \begin{lemma}[{\cite[Theorem 2.37]{hein2005geometrical}}] \hypertarget{lem:rwlap-conv}{\label{lem:rwlap-conv}} For all \(u \in C^{\infty}\) and \(x \in \mathcal{M}\), given any \(\delta > 0\), \begin{equation} \label{eq:thm-rwlap-conv-bound} \Pr[|A_{\epsilon,N}[u](x) - A_{\epsilon}[u](x)| > \delta] \leq 6 \exp\left( -\frac{N \epsilon^{\frac{n}{2}}\delta^2}{8||u||_{\infty} (K_{\mathcal{M},k,p} ||u||_{\infty} + ||k||_{\infty} \delta/6)} \right) . \end{equation} \end{lemma} \begin{proof} The proof in \citep[Theorem 2.37]{hein2005geometrical} and \citep[Theorem 2]{hein2005graphs} is based on a rewriting of \(A_{\epsilon,N}[u]\) following \citep{greblicki1984distribution} as, \begin{align*} A_{\epsilon,N}[u](x) &= \frac{A_{\epsilon}[u](x) - \mathcal{B}_{1,N}}{1 + \mathcal{B}_{2,N}}, \\ \mathcal{B}_{1,N} &:= \frac{1}{p_{\epsilon}(x)}\mathscr{A}_{\epsilon,N}[u](x) - A_{\epsilon}[u](x), \\ \mathcal{B}_{2,N} &:= \frac{p_{\epsilon,N}(x)}{p_{\epsilon}(x)} - 1. \end{align*} On applying \protect\hyperlink{lem:kern-consistency}{Lemma \ref{lem:kern-consistency}} twice while noting \(p_{\epsilon}(x) = \mathscr{A}_{\epsilon}[1](x)\) and \(p_{\epsilon,N}(x) = \mathscr{A}_{\epsilon,N}[1](x)\), we have that \begin{equation}\begin{aligned} \Pr\left[ \mathcal{B}_{1,N} > \delta/2 \right] \leq 4 \, \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^2}{8||u||_{\infty}(K_{\mathcal{M},k,p} ||u||_{\infty} + ||k||_{\infty} \delta/6)} \right) , \\ \Pr\left[ \mathcal{B}_{2,N} > \frac{\delta}{2||u||_{\infty}} \right] \leq 2 \, \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^2}{8 ||u||_{\infty} (K_{\mathcal{M},k,p} ||u||_{\infty} + ||k||_{\infty} \delta/6)} \right) . \end{aligned} \nonumber \end{equation} We also see, \begin{equation}\begin{aligned} |A_{\epsilon,N}[u](x) - A_{\epsilon}[u](x)| \leq \left( \left| \frac{\mathcal{B}_{1,N}}{1 + \mathcal{B}_{2,N}} \right| + ||u||_{\infty} \left| \frac{\mathcal{B}_{2,N}}{1 + \mathcal{B}_{2,N}} \right| \right). 
\end{aligned} \nonumber \end{equation} Thus taking a union bound gives, \begin{equation}\begin{aligned} \Pr\left[ |A_{\epsilon,N}[u](x) - A_{\epsilon}[u](x)| > \delta \right] \leq 6 \exp\left( -\frac{N \epsilon^{\frac{n}{2}}\delta^2}{8||u||_{\infty} (K_{\mathcal{M},k,p} ||u||_{\infty} + ||k||_{\infty} \delta/6)} \right). \end{aligned} \nonumber \end{equation} \end{proof} \begin{remark} In our application, when \(u = \psi_h\) is a state localized to an \(O(\sqrt{h})\)-ball with unit \(L^2\) norm, we will have that \(||\psi_h||_{\infty} \geq h^{-\frac{n}{2}}\), so this will give a convergence rate of \(e^{-\Omega(N \epsilon^{\frac{n}{2}} h^{\frac{n}{2} + 2} \delta^2)}\) for \(A_{\epsilon,N}[\psi_h](x) \to A_{\epsilon}[\psi_h](x)\) as \(N \to \infty\) and \(\epsilon, h \to 0\). \end{remark} \hypertarget{quantization-and-symbol-classes}{% \subsection{Quantization and Symbol classes}\label{quantization-and-symbol-classes}} The study of pseudodifferential calculus is rooted in the relationships between certain classes of smooth functions on phase space and linear operators acting on smooth functions on configuration space, called \emph{pseudodifferential operators} (\({\Psi\text{DO}}\)s) and in relation to the phase space functions, often called their \emph{quantizations}. The classes of functions under study are called \emph{symbols} and they are typically bounded on configuration space with well-behaved (\emph{e.g.}, polynomially bounded) momentum space asymptotics. Then, of interest are the relationships between \({\Psi\text{DO}}\)s and the symbols they quantize. An additional \emph{small parameter} \(h \in (0, 1]\) enters the picture when we work in the \emph{semiclassical} regime and serves essentially to rescale the unit length in momentum space, so the asymptotics we are concerned with happen simultaneously at scales of vanishing \(h\) and increasing momentum variable. The formal classes of functions that we quantize, as well as the exact quantization procedures, are standard and described, for example, in \citep[\(\S 14.2\)]{zworski2012}. These procedures amount to locally pushing the symbols onto Euclidean space by charts and seeing these local pieces as quantizations on open balls in \(\mathbb{R}^{n}\), then gluing them together through an atlas. Hence, the key definitions are specified on \(\mathbb{R}^{n}\). To summarize the idea briefly, we make \begin{definition}[{Symbols \& quantization on \(\mathbb{R}^{n}\) \cite[$\S 4$]{zworski2012}}] Let \(h_0 \in (0, 1]\). A smooth function \(a \in C^{\infty}(\mathbb{R}^{2n} \times (0, h_0])\) belongs to the \(h\)-\emph{symbol class of order} \((\ell, m) \in \mathbb{N}_0 \times \mathbb{Z}\) on \(\mathbb{R}^{n}\), \(h^{\ell} S^m(\mathbb{R}^{2n})\) if for all \(\alpha, \beta \in \mathbb{N}_0^{n}\), \(|\partial_x^{\alpha} \partial_{\xi}^{\beta} a(x,\xi ; h)| \lesssim_{\alpha,\beta} h^{\ell} \langle \xi \rangle^{m - |\beta|}\). Then, a \emph{quantization} of \(a \in h^{\ell} S^m\) is an operator acting on \(u \in \mathscr{S}(\mathbb{R}^{n})\) given by, \begin{equation}\begin{aligned} \operatorname{Op}_h^t(a)[u](x) := \frac{1}{(2 \pi h)^{n}} \int_{\mathbb{R}^{2n}} e^{\frac{i}{h} \langle x - y, \xi \rangle} a(tx + (1 - t)y, \xi) u(y) ~ d\xi dy , \end{aligned} \nonumber \end{equation} for any \(t \in [0, 1]\). 
We call the quantization with \(t = 1\) the \emph{adjoint} quantization, which we will primarily use and hence we denote it simply as \(\operatorname{Op}_h := \operatorname{Op}_h^1\), while the case \(t = 0\) is called the \emph{Kohn-Nirenberg} quantization and we will denote it \(\operatorname{Op}_h^{KN} := \operatorname{Op}_h^0\). A widely used case is \(t = 1/2\), called the \emph{Weyl} quantization; while it is very useful in proving various properties of pseudodifferential operators, we will not make explicit use of it. An operator of the form \(\operatorname{Op}_h^t(a)\) is called a \emph{(semiclassical) pseudodifferential operator of order} \((\ell,m)\) and sometimes (to evoke a physical interpretation) a \emph{quantum observable}, while its symbol \(a\) is sometimes called a \emph{classical observable}. We also say that \(a \in h^{\ell} S^m\) is a \emph{symbol of order} \((\ell,m)\). \end{definition} A key property in the definition of the symbol class is that under a \emph{symplectic} change of variables \(\tilde{\gamma} := (\gamma, [D \gamma^{-1}|_{\gamma}]^T)\) on \(\mathbb{R}^{2n}\) for any diffeomorphism \(\gamma : \mathbb{R}^{n} \to \mathbb{R}^{n}\), the symbol class \(h^{\ell} S^m\) is invariant \emph{to top order}, meaning that if \(a \in h^{\ell} S^m\) then \(a \circ \tilde{\gamma} \in h^{\ell} S^m\) and \(u \mapsto \operatorname{Op}^t_h(a)[ u \circ \gamma] \circ \gamma^{-1} = \operatorname{Op}^t_h(a_{\gamma})[u]\) with \(a_{\gamma}(\gamma(x), \xi) = a(x, [D \gamma|_x]^T \xi) + b(x,\xi)\) for \(b \in h^{\ell + 1} S^{m-1}\) (see \citep[\(\S 9\)]{zworski2012}). While the asymptotic behaviour required in \(\xi\) is simple to understand, the guiding intuition for the joint asymptotics with \(h\) is that a symbol should have, asymptotically, at most polynomial variation on cubes of side-length \(h\) in momentum space. The basic calculus of pseudodifferential operators and symbols on \(\mathbb{R}^{n}\) is exposed in detail in \citep{zworski2012} and \citep{dimassi1999spectral}. 
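For orientation, a standard worked example: if \(a(x,\xi) = |\xi|^2 \in h^0 S^2(\mathbb{R}^{2n})\), then \(a\) has no dependence on the spatial variable, so all choices of \(t \in [0,1]\) coincide and Fourier inversion gives \begin{equation}\begin{aligned} \operatorname{Op}_h^t(|\xi|^2)[u](x) = \frac{1}{(2 \pi h)^{n}} \int_{\mathbb{R}^{2n}} e^{\frac{i}{h} \langle x - y, \xi \rangle} |\xi|^2 u(y) ~ d\xi dy = -h^2 \Delta u(x) . \end{aligned} \nonumber \end{equation} More generally, \(\operatorname{Op}_h^t(\xi^{\alpha}) = (hD)^{\alpha}\) with \(D := -i\partial\) and, for symbols with coefficients, \(\operatorname{Op}_h^1(\sum_{|\alpha| \leq m} a_{\alpha}(x) \xi^{\alpha}) = \sum_{|\alpha| \leq m} a_{\alpha}(x) (hD)^{\alpha}\), the other choices of \(t\) differing by terms one order lower in both \(h\) and \(\xi\); thus semiclassical differential operators are exactly the quantizations of the symbols polynomial in \(\xi\).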
Now using this, we can give meaning to quantization on a compact manifold \(\mathcal{M}\), following \citep{zworski2012}: \begin{definition}[{Symbols \& quantization on \(\mathcal{M}\) \cite[$\S 14.2.2$]{zworski2012}}] A linear operator \(A : C^{\infty}(\mathcal{M}) \to C^{\infty}(\mathcal{M})\) is a \emph{pseudodifferential operator} on \(\mathcal{M}\) if \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item given an atlas \(\{ (\gamma : \mathcal{M} \supset U_{\gamma} \to V_{\gamma} \subset \mathbb{R}^{n}, U_{\gamma}) ~|~ \gamma \in \mathcal{F} \}\), there is \((\ell,m) \in \mathbb{N}_0 \times \mathbb{Z}\) and \(t \in \mathbb{R}\) such that for each \(\gamma \in \mathcal{F}\), there is \(a_{\gamma} \in h^{\ell} S^m\) so that we may write, given any \(\chi_1, \chi_2 \in C_c^{\infty}(U_{\gamma})\) and \(u \in C^{\infty}(\mathcal{M})\), \begin{equation}\begin{aligned} \chi_1 A[\chi_2 u] = \chi_1 \gamma^* \operatorname{Op}_h^t[a_{\gamma}] \circ (\gamma^{-1})^*[\chi_2 u] , \end{aligned} \nonumber \end{equation} wherein \((\gamma^{-1})^* : C^{\infty}(U_{\gamma}) \to C^{\infty}(V_{\gamma})\) denotes the \emph{pullback}, \((\gamma^{-1})^* : u \mapsto u \circ \gamma^{-1}\) and similarly \(\gamma^*\) the pullback from \(V_{\gamma}\) to \(U_{\gamma}\) and \item \(A\) is \emph{pseudolocal}, meaning: given any \(\chi_1, \chi_2 \in C^{\infty}(\mathcal{M})\) with \(\operatorname{supp} \chi_1 \cap \operatorname{supp} \chi_2 = \emptyset\) and any \(N \in \mathbb{N}\), we have \(||\chi_1 A \chi_2||_{H^{-N}(\mathcal{M}) \to H^N(\mathcal{M})} = O(h^{\infty})\), wherein \(H^{\pm N}(\mathcal{M})\) are Sobolev spaces as defined in \citep[\(\S 14.2.1\)]{zworski2012}. \end{enumerate} We say then that the pseudodifferential operator \(A\) is of \emph{order} \((\ell,m)\) and belongs to \(h^{\ell} \Psi^m\) and for brevity, we call it a \({\Psi\text{DO}}\). A function \(a \in C^{\infty}(T^*\mathcal{M})\) belongs to the \(h\)-\emph{symbol class of order} \((\ell,m) \in \mathbb{N}_0 \times \mathbb{Z}\) on \(\mathcal{M}\), \(h^{\ell} S^m(T^*\mathcal{M})\) if given an atlas \(\mathcal{F}\), for each \(\gamma \in \mathcal{F}\) and \(\chi \in C_c^{\infty}(U_{\gamma})\) the pullback \(a_{\gamma} \in C^{\infty}(V_{\gamma} \times \mathbb{R}^{n})\) of \(a\chi\) under the induced change of coordinates \(V_{\gamma} \times \mathbb{R}^n \to T^*(U_{\gamma})\) belongs to \(h^{\ell} S^m(\mathbb{R}^{2n})\). In short, we say that \(a\) is a \emph{symbol of order} \((\ell, m)\). \end{definition} The fact that \(h^{\ell} \Psi^{m}\) forms a bi-filtered (in the order \((\ell,m)\)) algebra respecting the properties of a \emph{quantization map} and moreover, that quantized operators extend to Sobolev spaces, mapping \(A : H_h^s \to H_h^{s - m}\) continuously whenever \(A \in h^0 \Psi^m\) and \(A : L^2 \to L^2\) whenever \(A \in h^0 \Psi^0\) are standard results from pseudodifferential calculus \citep{zworski2012, dimassi1999spectral}. 
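In the simplest flat model, the quantization of a \(\xi\)-only symbol is a Fourier multiplier, which makes such operators directly computable. The following minimal numerical sketch (our own; the grid size and value of \(h\) are arbitrary illustrative choices) checks \(\operatorname{Op}_h(|\xi|^2) = -h^2 \Delta\) on the flat torus via the discrete Fourier transform, the momentum variable being the rescaled frequency \(\xi = h k\): \begin{verbatim}
import numpy as np

# Check Op_h(|xi|^2) = -h^2 d^2/dx^2 on [0, 2*pi) via the DFT.
N, h = 256, 0.05
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u = np.exp(np.sin(x))
k = np.fft.fftfreq(N, d=1.0/N)                  # integer torus frequencies
xi = h * k                                      # semiclassical momentum
Op_u = np.fft.ifft(xi**2 * np.fft.fft(u)).real  # Op_h(|xi|^2) u
exact = -h**2 * (np.cos(x)**2 - np.sin(x)) * u  # -h^2 u'' for u = e^{sin x}
print(np.abs(Op_u - exact).max())               # spectrally small error
\end{verbatim}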
We will make liberal use of the above boundedness and algebra properties along with the following basic \emph{symbol calculus}: \begin{fact*}[{\cite[Theorem 14.1]{zworski2012}}] There are linear maps \begin{equation}\begin{aligned} \operatorname{Sym} : h^{\ell} \Psi^m \to h^{\ell} S^m(T^*\mathcal{M}) / h^{\ell + 1} S^{m - 1}(T^*\mathcal{M}) \end{aligned} \nonumber \end{equation} and for all \(t \in [0, 1]\), \begin{equation}\begin{aligned} \operatorname{Op}_h^t : h^{\ell} S^m \to h^{\ell} \Psi^m \end{aligned} \nonumber \end{equation} such that \begin{equation}\begin{aligned} \operatorname{Sym}(A_1 A_2) = \operatorname{Sym}(A_1) \operatorname{Sym}(A_2) \end{aligned} \nonumber \end{equation} and \begin{equation}\begin{aligned} \operatorname{Sym}[\operatorname{Op}^t_h(a)] = [a] \in h^{\ell} S^m(T^*\mathcal{M}) / h^{\ell + 1} S^{m - 1}(T^* \mathcal{M}), \end{aligned} \nonumber \end{equation} wherein \([a]\) denotes the equivalence class of \(a\) in the quotient space \(h^{\ell} S^m / h^{\ell + 1} S^{m - 1}\). \end{fact*} We often call \(\operatorname{Sym}(A)\) the \emph{principal symbol} of \(A\) and note that by the invariance properties of \(h^{\ell} S^m\) under changes of coordinates as discussed above, this equivalence class is defined invariantly of the choice of coordinates. In fact, this is independent of the particular quantization \(t \in [0, 1]\) as well (see \citep[Theorem 9.10]{zworski2012}). A useful sub-class of \(h^{\ell} S^m\) is the class of semiclassical \emph{polyhomogeneous} symbols. The quantizations of these symbols are more direct generalizations of partial differential operators, whose symbols can be seen to behave like polynomials in the momentum \(\xi\) variable. The polyhomogeneous symbols enjoy many nice properties through their asymptotic expansions, including that we can specify an exact principal symbol. We restrict to this class primarily when dealing with the phases and amplitudes appearing in the FBI transform, following \citep{wunsch2001fbi} (discussed below, in \protect\hyperlink{semi-classical-measures-of-coherent-states}{Section \ref{semi-classical-measures-of-coherent-states}}). In particular, we will use such symbols in the phases defining coherent states, for then we have a good interaction between them and the FBI transform. Assuming the definitions of \emph{classical} polyhomogeneous symbols, as given in \citep{hormander_vol_III}, the definitions of their semiclassical counterparts are as follows: \begin{definition}[{following \cite{wunsch2001fbi}}] \hypertarget{def:phg-psidos}{\label{def:phg-psidos}} A smooth function \(a \in C^\infty(T^*\mathcal{M} \times \mathcal{M} \times [0,h_0))\) for some \(h_0 > 0\) belongs to the \(h\)-\emph{polyhomogeneous symbol class of order} \((\ell,m)\), \(h^{\ell} S^m_{\text{phg}}(T^*\mathcal{M} \times \mathcal{M}) \subset C^\infty(T^*\mathcal{M} \times \mathcal{M} \times [0, h_0))\), if for every \(j \geq 0\), there exist polyhomogeneous symbols \(a_{m-j}(x,\xi,y)\) of degree \(m-j\) in \(\xi\) and a constant \(C_j > 0\) such that for all \(|\xi| > 1\) and \(h \in [0, h_0)\), the following asymptotic expansion holds: \begin{equation} \label{eq:asymp-h-symbol-phg} |a(x,\xi,y;h) - h^{\ell}(a_m + h\, a_{m-1} + \cdots + h^{j}\, a_{m-j})| \leq C_j h^{\ell+j+1} |\xi|^{m-j-1} . \end{equation} We abbreviate this as, \begin{equation}\begin{aligned} a(x, \xi, y ; h) \sim h^{\ell}(a_m(x,\xi,y) + h \, a_{m-1}(x,\xi,y) + \cdots ) \end{aligned} \nonumber \end{equation} and call \(a_m\) the \emph{principal part} of \(a\). 
Similarly, we define the \(h\)-symbol class \(h^{\ell} S^m_{\text{phg}}(T^*\mathcal{M})\) as the subset of the above class without dependence on the \(y\) variable. When ambiguity is not an issue, we drop the explicit dependence on the space and use simply the notation \(h^{\ell} S^m_{\text{phg}}\) to denote either of the two classes of symbols. The \emph{quantization} of \(a \in h^{\ell} S^m_{\text{phg}}(T^* \mathcal{M} \times \mathcal{M})\) is the operator \(\operatorname{Op}_h(a) : C^\infty(\mathcal{M}) \to C^\infty(\mathcal{M})\) given by \begin{equation} \label{def:op-quantization} \operatorname{Op}_h(a)[u](x) := \frac{1}{(2\pi h)^n} \int_{T^*\mathcal{M}} e^{\frac{i}{h} \langle \exp_y^{-1}(x), \xi \rangle_{g_y}} a(y,\xi,x;h) \chi(x,y) u(y) ~ dy d\xi, \end{equation} wherein \(\chi\) is a smooth cut-off near \(x = y\). We denote by \(h^{\ell} \Psi_{\text{phg}}^m(\mathcal{M})\) the algebra of all operators quantizing symbols \emph{modulo} \(h^\infty\), viz., \(A \in h^{\ell} \Psi_{\text{phg}}^m(\mathcal{M})\) iff. \(A = \operatorname{Op}_h(a) + R\) for some \(a \in h^{\ell} S^m_{\text{phg}}(T^*\mathcal{M} \times \mathcal{M})\) and \(R : C^{-\infty}(\mathcal{M}) \to C^\infty(\mathcal{M})\) of order \(O(h^{\infty})\). When the polyhomogeneous and space contexts are clear, we will shorten the notation to just \(h^{\ell} \Psi_{\text{phg}}^m\). The \emph{symbol map} of order \((\ell,m)\) is the linear operator \(\operatorname{Sym}_{\ell,m} : h^{\ell} \Psi_{\text{phg}}^m \to h^{\ell} S^m_{\text{phg}} / h^{\ell+1} S^{m-1}_{\text{phg}}\) such that the \emph{principal symbol} of an operator \(A = \operatorname{Op}_h(a) \in h^{\ell} \Psi_{\text{phg}}^m\) is given by \(\operatorname{Sym}_{\ell,m}(A) = [a]\). This gives a short exact sequence, \begin{equation}\begin{aligned} 0 \to h^{\ell+1}\Psi_{\text{phg}}^{m-1} \hookrightarrow h^{\ell} \Psi_{\text{phg}}^m \xrightarrow{\operatorname{Sym}_{\ell,m}} h^{\ell} S^m_{\text{phg}}/h^{\ell+1}S^{m-1}_{\text{phg}} \to 0. \end{aligned} \nonumber \end{equation} An \emph{elliptic symbol} is a symbol \(a \in h^{\ell} S^m_{\text{phg}}\) whose principal part satisfies \(|a_m| \sim \langle \xi \rangle^m\) uniformly in all other variables. \end{definition} Symbols that are functions of both the integration variable and the output variable with respect to quantization, such as \(a \in h^{\ell} S_{\text{phg}}^m(T^*\mathcal{M} \times \mathcal{M})\), are sometimes called \emph{amplitudes} (see \emph{e.g.}, \citep{taylor81}). The phases appearing in coherent states are naturally of this sort. We aim to bring graph Laplacians into the context of pseudodifferential operators and by their basic characteristics, we find that they must belong to a more general class than the polyhomogeneous. Indeed, the polyhomogeneous class is too restrictive to support symbols localized in phase-space \(T^*\mathcal{M}\), which is a stepping-stone since we go through the diffusive averaging operators \(\mathscr{A}_{\lambda,\epsilon}\); nevertheless, we will have solace in the class \(h^{\ell} S^m\), which is able to support this while remaining invariant to coordinate transformations. \hypertarget{fbi-transform}{% \subsection{FBI Transform}\label{fbi-transform}} We follow \citep{wunsch2001fbi} to state the fundamental results about FBI transforms on smooth, compact manifolds. 
They define the following: \begin{definition}[{Admissible Phase \& FBI Transform}] \hypertarget{def:FBI-adm-phase}{\label{def:FBI-adm-phase}} A smooth function \(\phi : T^* \mathcal{M} \times \mathcal{M} \to \mathbb{C}\) is an \emph{admissible phase function} if it satisfies all of the properties: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(\phi(x,\xi,y)\) is an elliptic polyhomogeneous symbol of order one in \(\xi\), \item \(\Im(\phi) \geq 0\), \item \(D_y \phi|_{(x,\xi,x)} = -\xi \, dy\), \item \(D_y^2 \Im(\phi)|_{(x,\xi,x)} \sim \langle \xi \rangle\), \item and \(\phi|_{(x,\xi,x)} = 0\); \end{enumerate} herein we have denoted by \(\varphi|_{(x,\xi,x)}\) the restriction of \(\varphi:T^*\mathcal{M} \times \mathcal{M} \to \mathbb{C}\) to the set \(\{(x,\xi,y) ~|~ x = y \}\). An \emph{FBI transform} is a map \(T_h[\cdot ; \phi,a] : C^\infty(\mathcal{M}) \to C^\infty(T^* \mathcal{M})\) given by, \begin{equation}\begin{aligned} T_h[u;\phi,a](x,\xi) := \int_{\mathcal{M}} e^{\frac{i}{h}\phi(x,\xi,y)} a(x,\xi,y;h) \chi(x,\xi,y) u(y) \;dy, \end{aligned} \nonumber \end{equation} with \(\phi\) a fixed admissible phase function, \(a \in h^{-\frac{3n}{4}} S^{\frac{n}{4}}_{\text{phg}}\) elliptic and \(\chi\) a cut-off function near \(\{(x,\xi,y) ~|~ x = y \}\). \end{definition} We will need the following basic properties of the FBI transform, as it relates to quantization: \begin{theorem}[{\cite{wunsch2001fbi}}] \hypertarget{thm:FBI-basic}{\label{thm:FBI-basic}} Let \(q \in h^{\ell} S^m(T^*\mathcal{M})\) and fix both, an admissible phase function \(\phi\) and an elliptic symbol \(a \in h^{-\frac{3n}{4}} S^{\frac{n}{4}}_{\text{phg}}(T^* \mathcal{M} \times \mathcal{M})\). Denote \(T_h := T_h[\cdot ; \phi, a]\). Then, there exists \(b_0 \in h^{\frac{3n}{2}} S_{\text{phg}}^{-\frac{n}{2}}(T^*\mathcal{M} \times \mathcal{M})\) positive, elliptic and depending only on \(\phi\), such that \(T_h^* q T_h \in h^{\ell} \Psi^m\) and \begin{equation}\begin{aligned} T_h^* q T_h - \operatorname{Op}_h(|a|^2 b_0 \, q) \in h^{\ell+1} \Psi^{m-1}. \end{aligned} \nonumber \end{equation} Furthermore, specializing to \(a = b_0^{-\frac{1}{2}}\), there exists an elliptic \(b \in h^{\frac{3n}{2}} S_{\text{phg}}^{-\frac{n}{2}}\) depending only on \(\phi\) with a positive principal symbol such that setting \(T_h := T_h[\cdot; \phi, b^{-\frac{1}{2}}]\) gives \begin{equation}\begin{aligned} T_h^* q T_h - \operatorname{Op}_h(q) \in h^{\ell+1}\Psi^{m-1}, \\ T_h^*T_h - I \in h^{\infty}\Psi_{\text{phg}}^{-\infty} . \end{aligned} \nonumber \end{equation} Whenever \(q\) is polyhomogeneous, then so is the symbol being quantized by the FBI transform, above. The operator \(T_h : L^2(\mathcal{M}) \to L^2(T^*\mathcal{M})\) is bounded for all \(h \in [0,h_0)\). \end{theorem} \begin{remark} The proof given in \citep[Proposition 3.1]{wunsch2001fbi} goes through for \(q\) taken to be a symbol belonging to the more general class we consider here, \(S^m\). \end{remark} \hypertarget{semi-classical-measures-of-coherent-states}{% \section{Semi-classical measures of coherent states}\label{semi-classical-measures-of-coherent-states}} The basic properties of FBI transforms, as in \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} show that much like the Fourier transform in flat space, they give a \emph{diagonal} representation of pseudodifferential operators through their principal symbols on phase space. 
On the other hand, as a map, \(T_h\) also \emph{lifts} an \(L^2(\mathcal{M})\) element to \(L^2(T^*\mathcal{M})\). These lifts allow us to access certain \emph{statistics} on phase space through inner products of the original (configuration-space) functions with quantized operators. To give some concreteness, let \(u_h \in L^2(\mathcal{M})\) for every \(h \in (0, h_0)\). Then, the \emph{Husimi function} for \(u_h\) is \(|T_h[u_h]|^2 \in L^2(T^*\mathcal{M})\) and clearly, \(\mu_h[u_h] := |T_h[u_h]|^2 \; dxd\xi\) defines a sequence of measures on the cotangent bundle in the parameter \(h \in (0, h_0)\). If \(\sup_{0 < h < h_0} ||u_h||_{L^2}^2 < \infty\), then also \(\sup_{0 < h < h_0}||T_h[u_h]||_{L^2(T^*\mathcal{M})} < \infty\) so by a diagonal argument, given any sequence \(h_j \to 0\), there exists a subsequence \(h_{j_k} \to 0\) and a non-negative finite Borel measure \(\mu\) on \(T^*\mathcal{M}\) such that \(\mu_{h_{j_k}}[u_{h_{j_k}}] \to \mu\) weakly; this is often called a \emph{semi-classical (defect) measure} \citep{zworski2012}. Therefore, \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} allows us to take expectations of symbols with respect to Husimi functions in a simple way: \begin{align} \langle u_h |\operatorname{Op}_h(q)|u_h \rangle_{L^2(\mathcal{M})} &= \langle u_h | T_h^*[\cdot;\phi,b^{-\frac{1}{2}}] \, q \, T_h[\cdot;\phi,b^{-\frac{1}{2}}] | u_h \rangle + O(h) \nonumber \\ &= \langle T_h[u_h] | \;q\; | T_h[u_h] \rangle_{L^2(T^*\mathcal{M})} + O(h) \nonumber \\ &= \int_{T^*\mathcal{M}} q(x,\xi) \; |T_h[u_h]|^2(x,\xi) ~ dxd\xi + O(h), \label{eq:expectation-FBI} \end{align} hence, \begin{equation}\begin{aligned} \langle u_{h_{j_k}} |\operatorname{Op}_h(q)|u_{h_{j_k}} \rangle_{L^2(\mathcal{M})} \xrightarrow[h_{j_k} \to \; 0]{} \int_{T^*\mathcal{M}} q(x,\xi) ~d\mu(x,\xi). \end{aligned} \nonumber \end{equation} Now, of particular interest as a step towards \emph{quantum-classical correspondence} is if a state \(u_h\) could have a Dirac mass as semi-classical measure, for then we could \emph{access} the \emph{classical} space of symbols by probing pseudodifferential operators via inner products. There are in fact many such candidate states and we can construct and understand them through stationary phase analysis of the FBI transform. Indeed, a particularly simple case is essentially to use the kernel itself: that is, to consider the interaction between the state \begin{equation}\begin{aligned} \psi_h(x;x_0,\xi_0) := e^{-\frac{i}{h} \bar{\phi}(x_0,\xi_0,x)} \end{aligned} \nonumber \end{equation} with a fixed admissible phase \(\phi\) and \emph{localization} \((x_0, \xi_0) \in T^*\mathcal{M}\) and the FBI transform \(T_h := T_h[\cdot ; \phi, a]\) with the same phase and some fixed elliptic \(a \in h^{-\frac{3n}{4}} S^{\frac{n}{4}}\). Then, we would like to know the Husimi function for \(\psi_h\) and the \emph{ansatz} from looking at the flat case is that this is indeed a Dirac mass at \((x_0, \xi_0)\), weighted by an order zero symbol. 
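As a toy illustration of this ansatz, the localization can be seen numerically in the flat one-dimensional model, taking the admissible phase \(\phi(x,\xi,y) = (x - y)\xi + \frac{i}{2}\langle \xi \rangle |x - y|^2\), a constant amplitude in place of the elliptic \(b^{-\frac{1}{2}}\), and arbitrary illustrative values of \(h, x_0, \xi_0\) (the following sketch is our own and not part of the development): \begin{verbatim}
import numpy as np

h, x0, xi0 = 0.01, 0.3, 2.0
y = np.linspace(-3, 3, 4000); dy = y[1] - y[0]
jb = lambda t: np.sqrt(1 + t**2)                 # Japanese bracket <xi>

# Coherent state psi_h = exp(-(i/h) * conj(phi)(x0, xi0, y)) on the line
psi = np.exp(1j*(y - x0)*xi0/h - jb(xi0)*(y - x0)**2/(2*h))

# FBI transform with the same phase and constant amplitude a = 1
xs, xis = np.linspace(0.0, 0.6, 61), np.linspace(1.0, 3.0, 81)
H = np.zeros((xs.size, xis.size))
for i, xx in enumerate(xs):
    for j, xi in enumerate(xis):
        phase = (xx - y)*xi + 0.5j*jb(xi)*(xx - y)**2
        H[i, j] = abs((np.exp(1j*phase/h) * psi).sum() * dy)**2

i, j = np.unravel_index(H.argmax(), H.shape)
print(xs[i], xis[j])                              # peaks near (0.3, 2.0)
\end{verbatim} The computed (un-normalized) Husimi function indeed peaks at \((x_0, \xi_0)\), with \(O(\sqrt{h})\) spread in each variable.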
In fact, the Husimi function is just the modulus square of the Schwartz kernel for the projection operator \(T_h T_h^*\), up to an amplitude factor: indeed, \begin{gather*} T_h T_h^*(x,\xi, x',\eta) ~ dx'd\eta = \\ \int_{\mathcal{M}}e^{\frac{i}{h}(\phi(x,\xi;y) - \bar{\phi}(x',\eta;y))} a(x,\xi,y;h) \bar{a}(x',\eta,y;h) \chi(x,\xi,y) \bar{\chi}(x',\eta,y) ~ dy ~ dx'd\eta , \end{gather*} while \begin{equation}\begin{aligned} T_h[\psi_h(\cdot ; x_0,\xi_0)](x,\xi) = \int_{\mathcal{M}} e^{\frac{i}{h}(\phi(x,\xi; y) - \bar{\phi}(x_0,\xi_0; y))} a(x,\xi,y;h) \chi(x,\xi,y) ~ dy. \end{aligned} \nonumber \end{equation} The computation of the Schwartz kernel for \(T_h T_h^*\) is carried out in \citep[Theorem 4.4]{wunsch2001fbi} and the corresponding stationary phase analysis yields also \(T_h[\psi_h]\). The proof of \citep[Theorem 4.4]{wunsch2001fbi} goes on to elucidate the real (oscillatory) part of the resulting phase, which is more than is necessary for the Husimi function \(|T_h[\psi_h]|^2\); therefore, although the changes are straight-forward, for completeness we will recount their proof, with the necessary modifications, stopping just at the relevant outcome for our context and record the resulting characterization as, \begin{lemma}[{\cite[Theorem 4.4.]{wunsch2001fbi}}] \hypertarget{thm:wunsch-zworski}{\label{thm:wunsch-zworski}} Let \(\phi\) be an admissible phase function and \(a \in h^{-\frac{3n}{4}} S_{\text{phg}}^{\frac{n}{4}}\) be an elliptic amplitude such that \(h^{\frac{3n}{4}} a(x_0,\xi_0,x_0;h)|_{h=0} \neq 0\). Then, there exist \(\Phi \in C^\infty(T^*\mathcal{M} \times T^* \mathcal{M}_{(x_0,\xi_0)})\) and \(c \in C^\infty(T^*\mathcal{M} \times T^* \mathcal{M}_{(x_0,\xi_0)} \times [0,h_0))\) such that \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item \(c(x,\xi;x_0,\xi_0;h)\) is order zero in \(h\), supported in a neighbourhood of \((x_0,\xi_0)\) in the first variable and \(c(x_0,\xi_0;x_0,\xi_0;0) \neq 0\), \item there is a positive-definite matrix \(H\) depending only on \(x_0,\xi_0\) such that \end{enumerate} \begin{equation}\begin{aligned} \Im \Phi = \frac{1}{2}\langle (x - x_0, \xi - \xi_0) | H | (x - x_0, \xi - \xi_0)\rangle +O(|x - x_0|^4 + |\xi - \xi_0|^4) \end{aligned} \nonumber \end{equation} and \begin{equation}\begin{aligned} T_h[\psi_h](x,\xi) = h^{-\frac{n}{4}} c(x,\xi;x_0,\xi_0;h) e^{\frac{i}{h}\Phi(x,\xi ; x_0,\xi_0)} + O(h^{\infty}). \end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} We follow closely the proof of \citep[Theorem 4.4]{wunsch2001fbi}. To this end, we switch to the phase, \(\varphi(x,\xi;y) := -\bar{\phi}(x,\xi;y)\) so that we are to compute, \begin{equation}\begin{aligned} T_h[\psi_h(\cdot;x_0,\xi_0)](x,\xi) = \int_{\mathcal{M}} e^{\frac{i}{h} [\varphi(x_0,\xi_0;y) - \bar{\varphi}(x,\xi;y)]} a(x,\xi,y;h) ~ dy, \end{aligned} \nonumber \end{equation} wherein we have absorbed the cut-off \(\chi\) into the symbol \(a\). The conditions of an admissible phase allow us to write locally, by Taylor expansion in \(y\) near \(\{(x, \xi, y) \in T^*\mathcal{M} \times \mathcal{M} ~|~ x = y \}\), \begin{equation}\begin{aligned} \varphi(x,\xi;y) = \xi \cdot (x-y) + \frac{1}{2} \langle Q(y;x,\xi) (x-y), x-y\rangle + O(|x-y|^3), \end{aligned} \nonumber \end{equation} wherein \(Q\) is a complex-symmetric matrix symbol of degree one in \(\xi\) such that \(\Im Q|_{x = y} \sim \langle\xi\rangle I\) and \(\langle \cdot , \cdot \rangle\) denotes the \emph{real} inner product. 
Since \(T_h\) localizes to a neighbourhood of \(x\), we can therefore write (up to \(O(|x-y|^3)\) terms), \begin{alignat*}{4} \varphi_0(x,\xi,x_0,\xi_0;y) &:= \varphi(x_0,\xi_0;y) - \bar{\varphi}(x,\xi;y) \\ &= (x_0-y)\cdot\xi_0 - (x-y)\cdot\xi + \\ &\quad + \frac{1}{2}\langle Q(y;x_0,\xi_0) (x_0-y),(x_0-y) \rangle - \frac{1}{2} \langle \bar{Q}(y;x,\xi)(x-y),(x-y)\rangle . \end{alignat*} The integrand is localized about \(x = x_0 = y\), and since the integral decays rapidly as \(O(h^{\infty})\) away from the neighbourhood of points where the phase is real and stationary, we also reduce to localization about \(\xi = \xi_0\). Hence, we switch to the coordinates, \begin{equation}\begin{aligned} \vartheta := x_0 - y, \quad \omega := x_0 - x, \\ \zeta:= \xi_0 - \xi , \end{aligned} \nonumber \end{equation} giving \begin{alignat*}{4} \tilde{\varphi}_0(\vartheta,\omega,\zeta; x_0,\xi_0) & := && \; \varphi_0(x_0 - \omega, \xi_0 - \zeta,x_0, \xi_0;x_0 - \vartheta) \\ &= && \; \vartheta \cdot \xi_0 - (\vartheta - \omega)\cdot(\xi_0 - \zeta) + \\ &\; && + \frac{1}{2}\langle Q(x_0 - \vartheta;x_0,\xi_0)\vartheta,\vartheta\rangle - \\ & &&- \frac{1}{2}\langle \bar{Q}(x_0 - \vartheta;x_0 - \omega,\xi_0 - \zeta)(\vartheta - \omega),\vartheta - \omega \rangle . \end{alignat*} We wish to apply the method of complex stationary phase, as per \citep[Theorem 7.7.12]{hormander_vol_I}. To see that this can be done, just note that the change of variables \(y \mapsto \vartheta\) along with the admissibility conditions on \(\varphi\) and \(\bar{\varphi}\) imply that \(\tilde{\varphi}_0\) satisfies the hypotheses of that theorem around \(\{\vartheta = \omega = \zeta = 0\}\). The result is determined up to the ideal \(I_{\tilde{\varphi}_0} := \langle \partial_{\vartheta_1} \tilde{\varphi}_0, \ldots, \partial_{\vartheta_n} \tilde{\varphi}_0 \rangle\) --- the generators of which, if we expand in Taylor series about \(\vartheta = \omega = \zeta = 0\), are given by the components of the vector, \begin{equation}\begin{aligned} D_{\vartheta} \tilde{\varphi}_0 = \zeta + Q(x_0;x_0,\xi_0) \cdot \vartheta - \bar{Q}(x_0;x_0,\xi_0) \cdot (\vartheta - \omega) + O(|\vartheta|^2 + |\omega|^2). \end{aligned} \nonumber \end{equation} Thus, we can proceed to apply the Malgrange preparation theorem, which allows us to divide \(\partial_{\vartheta_1} \tilde{\varphi}_0, \ldots, \partial_{\vartheta_n} \tilde{\varphi}_0\) by each of the coordinates \(\vartheta_1, \ldots, \vartheta_n\) to yield (complex-valued) smooth remainder functions \(X_{\vartheta_1}(\omega,\zeta;x_0,\xi_0), \ldots, X_{\vartheta_n}(\omega,\zeta;x_0,\xi_0)\) in a neighbourhood of \(\omega = \zeta = 0\); \emph{viz}., \begin{equation}\begin{aligned} \vartheta \equiv X_{\vartheta} \pmod{I_{\tilde{\varphi}_0}} . \end{aligned} \nonumber \end{equation} We now work modulo the ideal \(I_{\tilde{\varphi}_0}\). We can resolve \(X_{\vartheta}\) via the Taylor expansion of \(D_{\vartheta} \tilde{\varphi}_0\) in a neighbourhood of \(\vartheta = \omega = \zeta = 0\): set \(Q_0 := Q(x_0;x_0,\xi_0)\), then \begin{align*} & && 0 \equiv D_{\vartheta}\tilde{\varphi}_0 \equiv \zeta + 2i \Im Q_0 \cdot X_\vartheta + \bar{Q}_0 \cdot \omega + O(|X_{\vartheta}|^2 + |\omega|^2), \\ \text{hence, } & && X_{\vartheta}(\omega,\zeta;x_0,\xi_0) \equiv \frac{i}{2}(\Im Q_0)^{-1}(\zeta + \bar{Q}_0 \cdot \omega + O(|\zeta|^2 + |\omega|^2)) \end{align*} (wherein, the \(O(|\zeta|^2 + |\omega|^2)\) term comes from recursively substituting the expression for \(X_{\vartheta}\) in the higher order term). 
Now, we may expand \(\tilde{\varphi}_0\) in Taylor series about \(\{\vartheta = \omega = \zeta = 0 \}\) and substitute \(X_{\vartheta}\) for \(\vartheta\), to find the phase resulting from the integration: \begin{align*} \Phi_0(\omega,\zeta;x_0,\xi_0) := &\; X_{\vartheta} \cdot \zeta + \omega \cdot (\xi_0 - \zeta) + \frac{1}{2}\langle Q_0 X_{\vartheta}, X_{\vartheta} \rangle - \frac{1}{2}\langle \bar{Q}_0 (X_{\vartheta} - \omega), X_{\vartheta} - \omega \rangle \\ &+ O(|\omega|^2 + |\zeta|^2)^2 \equiv \tilde{\varphi}_0 \pmod{I_{\tilde{\varphi}_0}} \end{align*} (wherein, \(|X_{\vartheta}|^2 \lesssim |\omega|^2 + |\zeta|^2\) gives the size of the higher order terms). We elucidate this phase a bit: note that \begin{equation}\begin{aligned} X_{\vartheta} - \omega = \frac{i}{2}(\Im Q_0)^{-1}(\zeta + \Re Q_0 \cdot \omega) - \frac{1}{2} \omega = -\bar{X}_{\vartheta} . \end{aligned} \nonumber \end{equation} Letting \(S := \Re Q_0\) and \(T := \Im Q_0\), this gives \begin{align*} \Im\Phi_0(\omega,\zeta;x_0,\xi_0) = &\; \frac{1}{2}\langle T^{-1}\zeta,\zeta \rangle + \frac{1}{2}\langle T^{-1}S\, \omega, \zeta\rangle \\ &- \frac{1}{4} \Im\langle T^{-1}Q_0 T^{-1}(\zeta + \bar{Q}_0 \omega), \zeta + \bar{Q}_0 \omega \rangle \\ &+ O(|\omega|^2 + |\zeta|^2)^2 \end{align*} and after expanding fully, we find that the Hessian of \(\Im \Phi_0\) at \((\omega,\zeta)=(0,0)\) is, \begin{equation}\begin{aligned} \frac{1}{4}\begin{pmatrix} T^{-1} & T^{-1} S \\ S T^{-1} & T + ST^{-1}S \end{pmatrix} = \frac{1}{4}\begin{pmatrix} I & 0 \\ S & I \end{pmatrix} \begin{pmatrix} T^{-1} & 0 \\ 0 & T \end{pmatrix} \begin{pmatrix} I & S \\ 0 & I \end{pmatrix} . \end{aligned} \nonumber \end{equation} By the admissible phase assumption, \(T \sim \langle \xi_0 \rangle I\) at \((\omega,\zeta) = (0,0)\), therefore this decomposition shows that this Hessian \(H_0\) of \(\Im \Phi_0\) is also positive definite, whence, there exists a constant \(C > 1\) such that \begin{equation}\begin{aligned} C^{-1} (|\omega|^2 + |\zeta|^2) \leq \Im \Phi_0 = \langle(\omega,\zeta) | H_0 |(\omega,\zeta)\rangle + O(|\omega|^2 + |\zeta|^2)^2 \leq C(|\omega|^2 + |\zeta|^2) . \end{aligned} \nonumber \end{equation} This proves part (2) with \(H := 2H_0\). By \citep[Theorem 7.7.12]{hormander_vol_I}, there are differential operators \(L_j\) of order \(2j\) acting in the \(\omega,\zeta\) variables, with coefficients that are dependent only on \(\tilde{\varphi}_0\) such that \begin{equation}\begin{aligned} \left|T_h[\psi_h] - e^{\frac{i}{h} \Phi_0(\omega,\zeta;x_0,\xi_0)} \sum_{j=0}^{N-1} \tilde{b}^0 (L_j[a ])^0(\omega,\zeta;x_0,\xi_0;h) h^j \right| \lesssim_N h^{N + n/2}. \end{aligned} \nonumber \end{equation} Here, \(\tilde{b} := (\det[\partial_{\vartheta}^2 \tilde{\varphi}_0/(2\pi i h)])^{-\frac{1}{2}}\) and (following Hörmander) we denote for a function \(G(x,\xi,y;x_0,\xi_0)\), its reduction modulo \(I_{\tilde{\varphi}_0}\) by \(G^0 \equiv G \pmod{I_{\tilde{\varphi}_0}}\), which can be determined by a change of variables to \(G(\vartheta,\omega,\zeta;x_0,\xi_0;h) := G(x_0 - \omega, \xi_0 - \zeta,x_0-\vartheta;x_0, \xi_0;h)\), followed by Taylor expansion about \((\vartheta,\omega,\zeta) = 0\) and substitution of \(X_{\vartheta}\) for \(\vartheta\). We note that \(\tilde{b}^0 \equiv (h/\pi)^{\frac{n}{2}} [(\det H_0)^{-\frac{1}{2}} + O(|\omega|^2 + |\zeta|^2)]\). 
Then, the series can be summed, upon taking \(N \to \infty\), to \(h^{-\frac{n}{4}}\) times a smooth function \(c_0(\omega,\zeta;x_0,\xi_0;h)\) of order zero in \(h\) in a neighbourhood of \((\omega,\zeta) = (0,0)\), which is non-vanishing at \(\{(\omega, \zeta,x,\xi, h) ~|~ \omega = \zeta = 0, h = 0 \}\) due to the non-vanishing of \(a\) and the Hessian of \(\tilde{\varphi}_0\) there. After re-labelling the variables to give \(\Phi(x,\xi;x_0,\xi_0) := \Phi_0(\omega,\zeta;x_0,\xi_0)\) and \(c(x,\xi,x_0,\xi_0;h) := c_0(\omega,\zeta;x_0,\xi_0;h)\), altogether this brings us to \begin{equation}\begin{aligned} T_h[\psi_h] = h^{-\frac{n}{4}} c(x,\xi;x_0,\xi_0;h) e^{\frac{i}{h}\Phi(x,\xi;x_0,\xi_0)} + O(h^{\infty}) . \end{aligned} \nonumber \end{equation} \end{proof} \begin{remark} The statement of the theorem expresses the phase and symbol of the result of \(T_h[\psi_h]\) in terms of \((x,\xi;x_0,\xi_0)\), but since we know that the functions are localized to a neighbourhood of \((x_0,\xi_0)\) in the variable \((x,\xi)\), we can just as well switch to local coordinates and write these as functions of \((\omega,\zeta;x_0,\xi_0)\) localized about \((\omega,\zeta) = (0,0)\) as in the proof. This will be useful at times in the forthcoming calculations. \end{remark} We now have quite precise information about the \emph{localization} of \(T_h[\psi_h]\), but we are left with the symbol \(c\) as a factor, which persists when trying to access the value of the symbol of a \({\Psi\text{DO}}\) at \((x_0, \xi_0) \in T^*\mathcal{M}\) through \(\eqref{eq:expectation-FBI}\); this would later be obstructive to locating geodesic points through operator expectations. To alleviate this, it is sufficient to use \(L^2\)-normalized states \(\psi_h/||\psi_h||\): by \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} and the Lemma just proved, using \(T_h := T_h[\cdot;\phi,b^{-\frac{1}{2}}]\) gives, for the un-normalized state \(\psi_h\) of the Lemma, \begin{equation} \label{eq:norm-coherent-FBI-xi-dep-2} \langle \psi_h | \psi_h \rangle = \langle\psi_h| T_h^* T_h | \psi_h \rangle + O(h^\infty) = h^{-\frac{n}{2}}||c e^{\frac{i}{h}\Phi}||^2_{(x,\xi)}(x_0,\xi_0;h) + O(h^\infty), \end{equation} where \(|| \cdot ||_{(x,\xi)}\) denotes the norm in \(L^2(T^*\mathcal{M})\) in the variables \((x,\xi)\) and the form on the right-hand side leads to the following, \begin{theorem}[{Symbol from C-S expectation}] \hypertarget{thm:sym-cs-psido}{\label{thm:sym-cs-psido}} Let \(\phi\) be an admissible phase, and \(\psi_h(x;x_0,\xi_0) := C_h(x_0,\xi_0) e^{\frac{i}{h} \phi(x_0,\xi_0 ; x)}\) be normalized to \(||\psi_h||_{L^2(\mathcal{M})} = 1\). Then, for all \(m \in \mathbb{Z}\) and \(a \in h^0 S^m(T^*\mathcal{M})\), \begin{equation}\begin{aligned} \langle \psi_h | \operatorname{Op}_h(a) | \psi_h\rangle = a(x_0,\xi_0;h) + O(h). \end{aligned} \nonumber \end{equation} \end{theorem} \begin{proof} The preceding theorem tells that on Taylor expanding in a neighbourhood of \(x = x_0\), \(\xi = \xi_0\), the Gaussian factor of \(|T_h[\psi_h]|^2 = C_h^2 h^{-\frac{n}{2}} |c|^2 e^{-\frac{2}{h} \Im \Phi}\) expands as \begin{align*} e^{-\frac{2}{h} \Im \Phi} &= e^{-\frac{1}{h}||(x - x_0,\xi - \xi_0)||_H^2}\Big(1 + \sum_{j=1}^{\infty}O(|x - x_0|^4 + |\xi - \xi_0|^4)^j/h^j \Big). 
\end{align*} This, together with \(\eqref{eq:norm-coherent-FBI-xi-dep-2}\), Taylor expansion of \(|c|^2\) about \((x,\xi) = (x_0,\xi_0)\) and \(d[{(s_{x_0}^{-1})}^* \nu_g](v) = \sqrt{|g_{s_{x_0}^{-1}(v)}|} dv\) (see \citep[Prop C.III.2]{berger1971spectre}) along with \(d\xi \, dx = (d\xi/\sqrt{|g_x|}) \, d\nu_g(x)\) gives, \begin{align*} h^{\frac{n}{2}} C_h(x_0,\xi_0)^{-2} &= ||ce^{\frac{i}{h}\Phi}||^2(x_0,\xi_0) \\ &= \int |c|^2(x,\xi;x_0,\xi_0;h) e^{-\frac{2}{h}\Im \Phi (x-x_0,\xi - \xi_0)} ~ dxd\xi \\ &= \int e^{-\frac{1}{h} ||(x - x_0,\xi-\xi_0)||_H^2}[1 + \sum_{j=1}^{\infty} O(|x - x_0|^4 + |\xi - \xi_0|^4)^j/h^j] \times \\ &\quad\quad \times [|c|^2(x_0,\xi_0;x_0,\xi_0;h) + O(|x - x_0| + |\xi - \xi_0|)] ~ dxd\xi \\ &= \int_{\mathbb{R}^n} \int_{V_{x_0}} e^{-\frac{1}{h} ||(v,\zeta)||_H^2}[1 + \sum_{j=1}^{\infty} O(|v|^4 + |\zeta|^4)^j/h^j] \times \\ &\quad\quad \times [|c|^2(x_0,\xi_0;x_0,\xi_0;h) + O(|v| + |\zeta|)] ~ dv d\zeta + O(h^{\infty}) \\ &= h^{n} [m_{\Phi}|c|^2(x_0,\xi_0;x_0,\xi_0;h) + O(h)], \\ m_{\Phi} &:= \int_{\mathbb{R}^{2n}} e^{-||(v,\zeta)||^2_H} dvd\zeta . \end{align*} Therefore, another application of the preceding theorem and the same method of Taylor expansions gives, for any \(a \in h^0 S^m\), \begin{align*} \langle \psi_h|T_h^* a T_h|\psi_h\rangle &= h^{-\frac{n}{2}}\int_{T^*\mathcal{M}} a(x,\xi) \, C_h(x_0,\xi_0)^2 |c|^2(x,\xi;x_0,\xi_0;h) e^{-\frac{2}{h} \Im \Phi(x-x_0,\xi-\xi_0)} ~ dxd\xi \\ &= h^{-\frac{n}{2}} \int a\; e^{-\frac{2}{h}\Im\Phi} \frac{|c|^2(x_0,\xi_0;x_0,\xi_0;h) + O(|x-x_0| + |\xi - \xi_0|)}{h^{\frac{n}{2}} [m_{\Phi}|c|^2(x_0,\xi_0;x_0,\xi_0;h) + O(h)]} ~ dxd\xi \\ &= h^{-n}\int a\; e^{-\frac{2}{h}\Im\Phi}[1/m_{\Phi} + O(|x - x_0| + |\xi - \xi_0|) + O(h)] ~ dxd\xi \\ &= h^{-n}\int e^{-\frac{1}{h}||(x - x_0, \xi - \xi_0)||_H^2}[a(x_0,\xi_0)/m_{\Phi} + O(|x - x_0| + |\xi - \xi_0|) + O(h)]\times \\ & \quad\quad \times \; [1 + \sum_{j=1}^{\infty} O(|x - x_0|^4 + |\xi - \xi_0|^4)^j/h^j] ~ dx d\xi \\ &= h^{-n}\int_{\mathbb{R}^{2n}} [a(x_0,\xi_0)/m_{\Phi} + O(|v| + |\zeta|)] \times \\ &\quad\quad \times \; e^{-\frac{1}{h}||(v,\zeta)||_H^2}[1 + \sum_{j=1}^{\infty} h^{-j} O(|v|^4 + |\zeta|^4)^j][1 + O(|v| + |\zeta|) + O(h)] \, dv d\zeta \\ &= a(x_0,\xi_0) + O(h). \end{align*} The domains of integration have been localized and expanded (as in the fourth equality) at will, due to the fact that a symbol \(a \in h^0 S^m\) has at most polynomial growth in the cotangent fibres, so \(|T_h[\psi_h]|^2\) localizes the integrand to an \(O(\sqrt{h})\) ball about \((x_0, \xi_0) \in T^*\mathcal{M}\). On application of \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}}, we have the statement of the Theorem. \end{proof} Having seen the utility of the function \(\psi_h\), we now dignify it with a definition of its own. \begin{definition}[{Coherent States}] \hypertarget{def:coherent-state}{\label{def:coherent-state}} With \(\phi\) an admissible phase and \(h > 0\), we call \(\psi_h(x ; x_0, \xi_0) := C_h(x_0,\xi_0) e^{\frac{i}{h} \phi(x_0,\xi_0 ; x)}\), with \(C_h := ||e^{\frac{i}{h} \phi}||_{L^2(\mathcal{M})}^{-1}\), a \emph{coherent state} \emph{localized at} \((x_0,\xi_0) \in T^*\mathcal{M}\), or simply a \emph{coherent state} for brevity. At times, we will also use the \emph{un-normalized coherent state}, \(\tilde{\psi}_h := e^{\frac{i}{h} \phi(x_0, \xi_0 ; x)}\). \end{definition} \hypertarget{state-preparation}{% \subsection{State preparation}\label{state-preparation}} We discuss briefly some considerations in practically constructing coherent states.
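As a first consideration, it is worth seeing \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}} at work numerically in the Euclidean model \(\mathcal{M} = \mathbb{R}\), where the phase \(\phi(x_0,\xi_0;x) := \langle \xi_0, x_0 - x \rangle + \frac{i}{2}|x - x_0|^2\) is admissible and the coherent state is an explicit Gaussian. The sketch below is purely illustrative and not part of the formal development: the observable \(a(x,\xi) = x + \xi^2\), the grid parameters and the Fourier-multiplier implementation of the quantization are our choices. For this \(a\), the expectation evaluates analytically to \(x_0 + \xi_0^2 + h/2\), so the \(O(h)\) correction in the Theorem is visible exactly.

```python
import numpy as np

# Numerical check of the coherent-state expectation in the Euclidean model M = R:
# psi_h(x) = (pi h)^(-1/4) exp( (i/h) xi0 (x0 - x) - (x - x0)^2 / (2h) ),
# and for a(x, xi) = x + xi^2 the quantization acts as x + (-i h d/dx)^2.
h = 1e-3
x0, xi0 = 0.3, -0.7
L, N = 8.0, 2 ** 14
x = np.linspace(x0 - L, x0 + L, N, endpoint=False)
dx = x[1] - x[0]
psi = (np.pi * h) ** -0.25 * np.exp(1j * xi0 * (x0 - x) / h - (x - x0) ** 2 / (2 * h))

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)            # h*k plays the role of xi
op_psi = x * psi + np.fft.ifft((h * k) ** 2 * np.fft.fft(psi))
expectation = (np.conj(psi) * op_psi).sum().real * dx
print(expectation, x0 + xi0 ** 2)                  # analytic gap: exactly h/2 = O(h)
```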
At the outset, when the isometry \(\iota : \mathcal{M} \to \mathbb{R}^{D}\) is available, a simple prescription for a phase is \(\phi_{\iota} := \langle \Pi_{\iota}^T(x) \cdot \xi, \iota(x) - \iota(y) \rangle_{\mathbb{R}^{D}} + \frac{i}{2} \frac{\langle \xi \rangle}{\langle \xi_0 \rangle} |\iota(y) - \iota(x)|_{\mathbb{R}^{D}}^2\), with \(\Pi_{\iota}(x) \cong D\iota|_x^T : \mathbb{R}^{D} \supset T^*_{\iota(x)}\Lambda \to T^*_x \mathcal{M} \cong \mathbb{R}^{n}\) and a fixed \((x_0, \xi_0) \in T^*\mathcal{M}\). Then, \begin{equation}\begin{aligned} D_y \phi_{\iota}|_{y = x} = -\xi, \quad\quad D_y^2 \Im(\phi_{\iota})|_{y = x} = \frac{\langle \xi \rangle}{\langle \xi_0 \rangle} D\iota|_x^T D\iota|_x \sim \langle \xi \rangle , \end{aligned} \nonumber \end{equation} so \(\phi_{\iota}\) is an admissible phase and \(\phi_{\iota}(x_0, \xi_0 ; y) = \langle \Pi_{\iota}^T(x_0) \cdot \xi_0, \iota(x_0) - \iota(y) \rangle_{\mathbb{R}^{D}} + \frac{i}{2} |\iota(y) - \iota(x_0)|^2_{\mathbb{R}^{D}}\). Hence, if \(\psi_{\iota,h}\) is a coherent state localized at \((x_0, \xi_0)\) with phase \(\phi_{\iota}\), then for any \(a \in h^0 S^{m}\) we have by \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}}, \begin{equation}\begin{aligned} \langle \psi_{\iota,h} | \operatorname{Op}_h(a) | \psi_{\iota,h} \rangle = a(x_0, \xi_0 ; h) + O(h). \end{aligned} \nonumber \end{equation} The vector \(\xi_{\iota,0} := \Pi_{\iota}^T(x_0) \cdot \xi_0\) is just the coordinate representation of \(\xi_0\) in the hyperplane tangent to \(\Lambda\) with center \(\iota(x_0)\). If normal coordinates are placed at \(x_0\) and \(x_*\) is in a normal neighbourhood of \(x_0\), then we understand that \(\xi_0 := s_{x_0}(x_*)/|s_{x_0}(x_*)|_{\mathbb{R}^{n}}\) is the direction in which a geodesic originating at \(x_0\) must emanate, to reach \(x_*\). Then, given \(\iota(x_*) \in \Lambda\) in a small neighbourhood of \(\iota(x_0)\), a reasonable approximation to \(\xi_{\iota,0}\) is given by \begin{align*} \xi_* := \frac{\iota(x_*) - \iota(x_0)}{|\iota(x_*) - \iota(x_0)|_{\mathbb{R}^{D}}} &= D\iota|_{x_0} \cdot \frac{s_{x_0}(x_*)}{|s_{x_0}(x_*)|_{\mathbb{R}^{n}}} + O(|\iota(x_*) - \iota(x_0)|_{\mathbb{R}^{D}}) \\ &= \Pi_{\iota}^T(x_0) \cdot \xi_0 + O(|\iota(x_*) - \iota(x_0)|_{\mathbb{R}^{D}}) , \end{align*} wherein the first equality follows from \protect\hyperlink{lem:ext-normal-coords}{Lemma \ref{lem:ext-normal-coords}} and Taylor's theorem upon expanding \(\iota \circ s_{x_0}^{-1}\) in a neighbourhood of the origin. Now, supposing \(|\iota(x_*) - \iota(x_0)|_{\mathbb{R}^{D}} \lesssim h^{\frac{n}{4} + 2}\) and setting \(\hat{\phi}_{\iota} := \langle \xi, \iota(x) - \iota(y) \rangle_{\mathbb{R}^{D}} + \frac{i}{2} |\iota(y) - \iota(x)|_{\mathbb{R}^{D}}^2\), another application of Taylor's theorem gives, \begin{align*} e^{\frac{i}{h} \hat{\phi}_{\iota}(x_0, \xi_* ; x)} &= e^{\frac{i}{h} [\Re(\hat{\phi}_{\iota})(x_0,\xi_{\iota,0} + O(h^2); x) + i\Im(\phi_{\iota})(x_0,\xi_0;x)]} \\ &= e^{\frac{i}{h} \phi_{\iota}(x_0, \xi_{\iota,0} ; x)} + O_{L^{\infty}}(h^{\frac{n}{4} + 1}) , \end{align*} wherein we've expanded in a neighbourhood of \(\xi_{\iota,0}\) in increments of the error term coming from the Taylor expansion of \(\xi_*\).
Then, \(\hat{\psi}_{\iota,h} := e^{\frac{i}{h} \hat{\phi}_{\iota}(x_0, \xi_* ; x)} / ||e^{\frac{i}{h} \hat{\phi}_{\iota}(x_0, \xi_* ; \cdot)}||_{L^2}\) satisfies \(||\hat{\psi}_{\iota,h} - \psi_{\iota,h}||_{\infty} = O(h)\), from which we recover, \begin{equation}\begin{aligned} \langle \hat{\psi}_{\iota,h} | \operatorname{Op}_h(a) | \hat{\psi}_{\iota,h} \rangle = a(x_0, \xi_0 ; h) + O(h) . \end{aligned} \nonumber \end{equation} This means that we can use points in a sufficiently small neighbourhood of \(\iota(x_0)\) to determine unit momentum vectors at which to recover, to order \(O(h)\), the value of the symbol of a given \({\Psi\text{DO}}\). The specifications of an admissible phase make coherent states particularly amenable to prescription in local coordinate patches. Let \(u : \mathcal{M} \supset \mathscr{O} \to V \subset \mathbb{R}^{n}\) be a diffeomorphism providing local coordinates in a neighbourhood \(\mathscr{O}\) about \(x_0 \in \mathcal{M}\). Then, for a fixed \(\xi_0 \in T^*_{x_0}\mathcal{M}\), a simple construction of a phase is of the sort, \begin{equation}\begin{aligned} \phi_u(x, \xi ; y) := \langle Du|_x^{-T}\xi, u(x) - u(y) \rangle_{\mathbb{R}^{n}} + \frac{i}{2} \frac{\langle \xi \rangle}{\langle \xi_0 \rangle}|u(y) - u(x)|^2_{\mathbb{R}^{n}} \end{aligned} \nonumber \end{equation} and we have, \begin{equation}\begin{aligned} D_y \phi_u|_{y = x} = -\xi, \quad\quad D_y^2 \Im(\phi_u)|_{y = x} = \frac{\langle \xi \rangle}{\langle \xi_0 \rangle} Du|_x^T Du|_x , \end{aligned} \nonumber \end{equation} so \(\phi_u\) is admissible in \(\mathscr{O}\). Also suppose \(\chi \in C^{\infty}\) is a cut-off with \(\operatorname{supp} \chi \subset \mathscr{O}\) such that \(\chi \equiv 1\) on \(\overline{\mathscr{O}}_0 \subset \mathscr{O}\) with \(\mathscr{O}_0\) an open neighbourhood of \(x_0\). Letting \(\psi_{u,h}\) be the coherent state localized at \((x_0, \xi_0)\) with phase \(\phi_u\), we find that \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}} holds also for \(T_h[\chi \psi_{u,h}]\) since \(\chi\) cuts off the integration domain, with a slightly different symbol \(c\) that still satisfies all of the properties stated in the Lemma. Further, \(||\psi_{u,h} \chi||_{L^2} = 1 + O(h^{\infty})\), so by \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}}, we have, \begin{equation} \label{eq:cs-local-coord-sym} \langle \psi_{u,h} \chi| \operatorname{Op}_h(a) | \chi \psi_{u,h} \rangle = a(x_0,\xi_0) + O(h). \end{equation} \hypertarget{from-graph-laplacians-to-geodesic-flows}{% \section{From graph Laplacians to geodesic flows}\label{from-graph-laplacians-to-geodesic-flows}} As discussed in the Introduction, there is now a wide collection of results on the convergence of graph Laplacians to Laplace-Beltrami operators (modulo lower order terms) on manifolds. These take the following form: starting from a collection of \(N\) samples, they approximate \(\Delta_{\lambda,\epsilon}\) in the \(N \to \infty\) limit and in turn, \(\Delta_{\lambda,\epsilon}\) approximates \(\Delta_{\mathcal{M}} + O(\partial^1)\) in the sense that for each \(u \in C^{\infty}\), \(||\Delta_{\lambda,\epsilon}u - [\Delta_{\mathcal{M}} + O(\partial^1)]u||_{\infty} = O(\epsilon)\) (we denote by \(O(\partial^1)\) a partial differential operator of order at most one).
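To make the displayed convergence concrete, the following sketch (illustrative only: the Gaussian kernel \(k(t) = e^{-t}\), the grid size and the bandwidth are our choices) discretizes the kernel integral operator on the unit circle, where the density is uniform so that the degree normalizations are constant and \(A_{\lambda,\epsilon}\) reduces, for every \(\lambda\), to the random-walk normalized averaging used below; it then checks that \(\Delta_{\lambda,\epsilon}u = (2c_0/c_2)(I - A_{\lambda,\epsilon})u/\epsilon\) applied to \(u(t) = \cos(3t)\) is close to \(-u'' = 9\cos(3t)\). In one dimension, \(c_0 = \sqrt{\pi}\) and \(c_2 = \sqrt{\pi}/2\), so \(2c_0/c_2 = 4\).

```python
import numpy as np

# Kernel graph Laplacian on S^1 with uniform density: Delta_eps u ~ -u'' + O(eps).
N, eps = 2000, 2e-3
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
d2 = 2.0 - 2.0 * np.cos(t[:, None] - t[None, :])   # |iota(x) - iota(y)|^2, iota : S^1 -> R^2
K = np.exp(-d2 / eps)                              # k(|iota(x) - iota(y)|^2 / eps)
A = K / K.sum(axis=1, keepdims=True)               # random-walk normalized averaging
u = np.cos(3 * t)
lap_u = 4.0 * (u - A @ u) / eps                    # (2 c0 / c2)(I - A)u / eps
print(np.max(np.abs(lap_u - 9 * np.cos(3 * t))))   # O(eps) bias + small quadrature error
```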
Results of this kind are along the lines of those displayed in \protect\hyperlink{laplacians-from-graphs-to-manifolds}{Section \ref{laplacians-from-graphs-to-manifolds}} and they are closely related to diffusion processes \citep{nadler2006diffusion}. We now wish to bring a \emph{quantum} perspective to graph Laplacians. Methods of semi-classical analysis enable us to instantiate a quantum-classical correspondence via semiclassical \(\Psi\)DOs. Namely, as discussed in \protect\hyperlink{quantization-and-symbol-classes}{Section \ref{quantization-and-symbol-classes}}, a \emph{quantization procedure} assigns a linear operator \(\operatorname{Op}_h(a) : C^{\infty}(\mathcal{M}) \to C^{\infty}(\mathcal{M})\) to a function \(a \in C^{\infty}(T^*\mathcal{M} \times (0, h_0])\) (for some \(h_0 < 1\)) belonging to a class of \emph{symbols} (\emph{classical observables}); these are, roughly speaking, functions that have controlled variation in regions of phase space \(T^*\mathcal{M}\) of unit volume, uniformly as the unit length in each fibre \(T_x^*\mathcal{M}\) is scaled by \(1/h\) for \(h \in (0, h_0]\). If a symbol \(q\) is real-valued, then we may treat it as a Hamiltonian that generates a flow \(\Theta_q^t\) with respect to time \(|t| \leq T\) on \(T^*\mathcal{M}\). By Egorov's theorem, we can relate, or \emph{quantize}, the Hamiltonian flow \(\Theta_q^t\) with respect to a real-valued symbol \(q\) to \emph{Heisenberg dynamics} \(A(t) := U_q^{-t} A U_q^t\) of \(A := \operatorname{Op}_h(a)\) for \(U_q^t := e^{\frac{i}{h} t \operatorname{Op}_h(q)}\), in the sense that the theorem gives conditions on \(q\) and \(a\) such that \(A(t)\) has principal symbol \(a \circ \Theta_q^t\). In particular, using \(q := |\xi|_x\) gives the quantization of the geodesic flow \(\Gamma^t\). Our first aim is to exhibit the conjugate flow on \(\Psi\)DOs by \(U_{\lambda,\epsilon}^t := e^{i t \sqrt{\Delta_{\lambda,\epsilon}}}\) as the quantization of a flow on symbols \(a \circ \Theta^t_q\) for an appropriate \(q\). To this end, we will find that when \(\epsilon = h^{2 + \alpha}\) with \(\alpha > 0\), we have \(h^2 \Delta_{\lambda,\epsilon} = \operatorname{Op}_h(q_{\lambda,\alpha})\) with \(q_{\lambda,\alpha}\) a symbol of \emph{order two} and to order\footnote{We write for \(a \in C^{\infty}\) that \(a = O_{\mathscr{S}}(h)\) whenever there is a constant \(h_0 > 0\) such that for each \(\beta\), there is a constant \(C_{\beta} > 0\) such that for \(h \in [0, h_0)\), \(|\partial^{\beta} a| \leq C_{\beta} h\).} \(O_{\mathscr{S}}(h)\), this agrees with \(|\xi|_x^2\) on compact neighbourhoods of the origin in \(T^*\mathcal{M}\) whenever \(\alpha > 1\). In terms of quantization, the scaling \(\epsilon = h^{2 + \alpha}\) has a simple interpretation: since the kernel \(k(x,y;\epsilon)\) is localized to \(\sqrt{\epsilon}\)-balls on \(\mathcal{M}\), the corresponding symbol must be localized to an \(h/\sqrt{\epsilon}\)-ball in phase space, which is above unit volume (\emph{viz.}, the uncertainty principle) when \(\epsilon \in o(h^2)\). The next task is to extract the geodesic flow: once we have quantized it in a region of phase space about a cosphere bundle \(S_r^*\mathcal{M}\) for \(r > 0\), we can use coherent states localized at \((x_0, \xi_0) \in S_r^*\mathcal{M}\) to retrieve \(a \circ \Gamma^t(x_0, \xi_0)\) up to \(O(h)\) error.
This works because the localization of a coherent state restricts the action of \(\operatorname{Op}_h(a)\) to that of another whose symbol agrees with \(a\) on an \(O(\sqrt{h})\) neighbourhood of \((x_0, \xi_0)\) and vanishes outside this. Since, as we will see, \(q_{\lambda,\alpha} = |\xi|_x^2 + O_{\mathscr{S}}(h)\) on a fixed size neighbourhood of \(S_r^*{\mathcal{M}}\), this allows us to reduce to \(U_{\lambda,\epsilon}^{-t} \operatorname{Op}_h(a) U_{\lambda,\epsilon}^t | \psi_h \rangle = U_{|\xi|\chi}^{-t} \operatorname{Op}_h(a) U_{|\xi|\chi}^t |\psi_h \rangle + O_{L^2}(h)\) with \(\chi\) an appropriate cut-off equal to one in part of the neighbourhood. Tying this together with Egorov's theorem and the results of \protect\hyperlink{semi-classical-measures-of-coherent-states}{Section \ref{semi-classical-measures-of-coherent-states}} leads to the desired propagation result: \(\langle \psi_h | U_{\lambda,\epsilon}^{-t} \operatorname{Op}_h(a) U_{\lambda,\epsilon}^t | \psi_h \rangle = a \circ \Gamma^t(x_0, \xi_0) + O(h)\). \hypertarget{symbol-of-a-graph-laplacian}{% \subsection{Symbol of a graph Laplacian}\label{symbol-of-a-graph-laplacian}} We now compute the symbol of a graph Laplacian operator. An important aspect of this is that in order to \emph{see} the symbol of a graph Laplacian, the quantization must be done at lengths \(h\) \emph{above} the \emph{sampling density} \(\sqrt{\epsilon}\). We proceed with an \emph{intrinsic} version of the averaging operator, which readily exposes this basic facet of the application of semiclassical analysis: namely, define \begin{equation} \label{eq:intrinsic-diffop} \mathscr{A}_{g,\epsilon} : C^{\infty} \ni u \mapsto \epsilon^{-\frac{n}{2}} \int_{\mathcal{M}} k(d_g(\cdot,y)^2/\epsilon) u(y) ~ p(y) d\nu_g(y) \in C^{\infty} . 
\end{equation} This operator takes on the form \(\eqref{def:op-quantization}\) in the following way, \begin{align*} \mathscr{A}_{g,\epsilon}&[u](x) \\ &\equiv \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} e^{-\frac{i}{h} v \cdot \xi} \mathcal{F}_h^{-1}[k(|z|^2/\epsilon)]_{z \to \xi}(\xi) \, (u \cdot p \cdot \chi_x) \circ s_x^{-1}(v) \sqrt{|g_{s_x^{-1}(v)}|} ~ d\xi dv \\ &\equiv \frac{1}{(2 \pi h)^{n}} \int_{\mathcal{M}} \int_{T_x^* \mathcal{M}} e^{-\frac{i}{h} \langle s_x(y), \xi \rangle_{g_x}} \left( \epsilon^{-\frac{n}{2}} \int_{T_x^*\mathcal{M}} e^{\frac{i}{h} \langle z, \xi \rangle_{g_x}} k(|z|^2_{g_x}/\epsilon) \frac{dz}{\sqrt{|g_x|}} p(y) \right) \\ &\quad\quad \times u(y) \chi_x(y) ~ \frac{d\xi}{\sqrt{|g_x|}} d\nu_g(y) \\ &\equiv \frac{1}{(2 \pi h)^{n}} \int_{T^*\mathcal{M}} e^{\frac{i}{h} \langle s_y(x), \eta \rangle_{g_y}} \left( \epsilon^{-\frac{n}{2}} \int_{T_y^*\mathcal{M}} e^{\frac{i}{h} \langle \zeta, \eta \rangle_{g_y}} k(|\zeta|^2_{g_y}/\epsilon) \frac{d\zeta}{\sqrt{|g_y|}} p(y) \right) u(y) \chi_x(y) dyd\eta \\ &\equiv \frac{1}{(2 \pi h)^{n}} \int_{T^*\mathcal{M}} e^{\frac{i}{h} \langle s_y(x), \eta \rangle_{g_y}} H_{g,h,\epsilon}(y,\eta) u(y) \chi_x(y) ~ d\eta dy \end{align*} with \begin{align} \label{eq:sym-intrinsic-diff-op} \begin{split} H_{g,h,\epsilon}(x,\xi) &:= \epsilon^{-\frac{n}{2}} \int_{T_x^*\mathcal{M}} e^{\frac{i}{h} \langle \zeta,\xi \rangle_{g_x}} k(|\zeta|^2_{g_x}/\epsilon) ~ \frac{d\zeta}{\sqrt{|g_x|}} ~ p(x) \\ & = \epsilon^{-\frac{n}{2}} \int_{\mathbb{R}^{n}} e^{\frac{i}{h}\langle \zeta, \xi \rangle} k(|g_x^{\frac{1}{2}} \zeta|^2/\epsilon) ~ d\zeta \sqrt{|g_x|} \, p(x) \end{split} \end{align} and wherein \(\equiv\) denotes equality up to a term of order \(O(\epsilon^{\infty}) \in C^{\infty}\), \(\chi_x(y)\) is a smooth cut-off within the normal neighbourhood of \(x\) such that \(\chi_x(y) = 1\) on a fixed, open neighbourhood of \(x\), \(\mathcal{F}_h^{-1}[\cdot(z)](\xi)\) is the inverse \(h\)-Fourier transform in \(\mathbb{R}^{n}\) with kernel \((2\pi h)^{-n} e^{\frac{i}{h} z \cdot \xi}\) and we have used the parallel transport operator \(\mathcal{T}_{y \to x} : T^*_y\mathcal{M} \to T^*_x\mathcal{M}\) along the unit speed geodesic from \(y\) to \(x\) to make the changes of variables \(\eta := \mathcal{T}_{y \to x} \xi\) and \(\zeta := \mathcal{T}_{y \to x} z\). The features of parallel transport used here are that it is an isometry with \(\mathcal{T}_{y \to x}^* = \mathcal{T}_{x \to y}\), \(\mathcal{T}_{y \to x} s_y(x) = -s_x(y)\) and \(d\mathcal{T}_{x \to y}(\cdot) = (|g_x|/|g_y|)^{\frac{1}{2}} d(\cdot)\). The fact that \(d[{(s_x^{-1})}^* \nu_g](v) = \sqrt{|g_{s_x^{-1}(v)}|} dv\) is another consequence of parallel transport we've used and its justification can be found in \citep[Prop C.III.2]{berger1971spectre}. 
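The closed form that \(H_{g,h,\epsilon}\) takes for a Gaussian kernel is recorded in the Example later in this subsection; as a quick sanity check of \(\eqref{eq:sym-intrinsic-diff-op}\) in one flat dimension (a sketch only, assuming \(g = 1\), \(p \equiv 1\) and \(k(t) = e^{-t}\), with quadrature parameters of our choosing), the oscillatory integral reproduces \(\sqrt{\pi}\, e^{-\epsilon \xi^2/(4 h^2)}\), a bump of width \(\sim h/\sqrt{\epsilon} = h^{-\alpha/2}\) in the fibre when \(\epsilon = h^{2+\alpha}\):

```python
import numpy as np

# Quadrature check of the symbol formula in one flat dimension with the Gaussian
# kernel: H(xi) = eps^(-1/2) * int exp(i zeta xi / h) exp(-zeta^2 / eps) dzeta,
# which should equal sqrt(pi) * exp(-eps * xi^2 / (4 h^2)).
h, alpha = 1e-2, 1.0
eps = h ** (2 + alpha)
zeta = np.linspace(-12 * np.sqrt(eps), 12 * np.sqrt(eps), 20001)  # kernel scale sqrt(eps)
dz = zeta[1] - zeta[0]
xi = np.linspace(-3 * h / np.sqrt(eps), 3 * h / np.sqrt(eps), 7)  # fibre scale h^{-alpha/2}
H_num = np.array([np.sum(np.exp(1j * s * zeta / h - zeta ** 2 / eps)) * dz
                  for s in xi]) / np.sqrt(eps)
H_exact = np.sqrt(np.pi) * np.exp(-eps * xi ** 2 / (4 * h ** 2))
print(np.max(np.abs(H_num.real - H_exact)))        # only quadrature error remains
```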
The formal symbol \(H_{g,h,\epsilon}\) has \begin{align*} \langle \xi \rangle^{|\beta|-m} & |\partial_x^{\gamma}\partial_{\xi}^{\beta} H_{g,h,\epsilon}| \\ &= \langle \xi \rangle^{|\beta|-m} \left|\partial_x^{\gamma} \; \int_{\mathbb{R}^{n}} (i \epsilon^{\frac{1}{2}}/h)^{|\beta|} z^{\beta} e^{i \frac{\sqrt{\epsilon}}{h}\langle z, \xi \rangle} k(|g_x^{\frac{1}{2}}z|^2) ~ dz ~ |g_x^{\frac{1}{2}}| p(x) \right| \\ &= \left( \frac{\epsilon}{h^2} \right)^{\frac{m}{2}}\left| \int_{\mathbb{R}^{n}} e^{i\frac{\sqrt{\epsilon}}{h}\langle z, \xi \rangle} (\epsilon/h^2 + \Delta_z)^{\frac{|\beta|-m}{2}}\left( z^{\beta} \, \partial_x^{\gamma}[k(|g_x^{\frac{1}{2}} z|^2) |g_x^{\frac{1}{2}}| p(x)] \right) dz \right|, \end{align*} which is uniformly bounded in \((0, 1]_h \times T^*\mathcal{M}\) for \(\epsilon = h^{2 + \alpha}\) with \(\alpha \geq 0\) and each \(m, |\gamma|, |\beta| \geq 0\), while taking \(\beta = 0\) with \(m < 0\) and evaluating the right-hand side at \(\xi = 0\) shows that this quantity grows as \(\Omega(h^{m\alpha/2})\) whenever \(\alpha > 0\) and is uniformly bounded when \(\alpha = 0\). Thus, \(H_{g,h^{\alpha}} := H_{g,h,h^{2 + \alpha}} \in h^0 S^0\) for \(\alpha > 0\) and \(H_{g,h^0} \in h^0 S^{-\infty}\). On the other hand if \(\alpha < 0\), then for any \(m > 0, \delta < 0\), taking \(|\beta| = m\) gives \begin{align*} \sup_{(x,\xi) \in T^*\mathcal{M}} & h^{\delta} |\partial_{\xi}^{\beta} H_{g,h,h^{2+\alpha}}| \gtrsim_p h^{\alpha \frac{m}{2} + \delta} \left|\int_{\mathbb{R}^n} z^{\beta} k(|g_x^{\frac{1}{2}} z|^2) ~ dz\right| \gtrsim_{p, \beta} h^{\alpha \frac{m}{2} + \delta} , \end{align*} hence \(H_{g,h,h^{2 + \alpha}} \not\in h^k S^m\) for any \(k, m \in \mathbb{Z}\). This shows, \begin{lemma} \hypertarget{lem:intrinsic-diffusion-is-psido}{\label{lem:intrinsic-diffusion-is-psido}} Let \(\epsilon, h \in (0,1]\), \(\mathscr{A}_{g,\epsilon}\) be given by \(\eqref{eq:intrinsic-diffop}\) and \(H_{g,h,\epsilon}\) be given by \(\eqref{eq:sym-intrinsic-diff-op}\). Then, \(\mathscr{A}_{g,\epsilon}\) with \(\epsilon = h^{2 + \alpha}\) for \(\alpha \in \mathbb{R}\) is a \(\Psi\text{DO}\) if and only if \(\alpha \geq 0\), in which case \(\mathscr{A}_{g,h^{2 + \alpha}} = \operatorname{Op}_h(H_{g,h^\alpha})\) with \(H_{g,h^{\alpha}} := H_{g,h,h^{2 + \alpha}} \in h^0 S^0\) for \(\alpha > 0\) and \(H_{g,h^{\alpha}} \in h^0 S^{-\infty}\) whenever \(\alpha = 0\). \qed \end{lemma} The \emph{extrinsically defined} averaging operator \(\mathscr{A}_{\epsilon}\) given by \(\eqref{def:averaging-op}\) has small deformations from an isotropic kernel in normal coordinates that must be accounted for in its description as a \({\Psi\text{DO}}\). Even so, the regime for \(\epsilon\) and \(h\) giving pseudodifferentiality is the same as in the intrinsic case and the extrinsic deviations only give lower-order terms in the symbol: this we now see in, \begin{lemma} \hypertarget{lem:averaging-op-is-psido}{\label{lem:averaging-op-is-psido}} Let \(\epsilon, h \in (0, 1]\). Then, \(\mathscr{A}_{\epsilon}\) with \(\epsilon = h^{2 + \alpha}\) for \(\alpha \in \mathbb{R}\) is a \({\Psi\text{DO}}\) if and only if \(\alpha \geq 0\), in which case \(\mathscr{A}_{h^{2 + \alpha}} \equiv \operatorname{Op}_h(H_{g,h^{\alpha}}) \pmod{h^{\ell} \Psi^{-m}}\) with \((\ell, m) = (2 + \alpha(1 - m/2), m)\). 
\end{lemma} \begin{remark} We note the following cases of the order of the sub-principal symbol of \(\mathscr{A}_{h^{2 + \alpha}}\): we have the extremes \((\ell, m) = (1, 2(1 + 1/\alpha))\) and \((\ell, m) = (2 + \alpha/2, 1)\) and the \emph{balanced case}, \((\ell, m) = (2, 2)\). \end{remark} \begin{proof} We begin with a series expansion of the kernel of \(\mathscr{A}_{\epsilon}\): by \protect\hyperlink{lem:ext-normal-coords}{Lemma \ref{lem:ext-normal-coords}} and Taylor's formula, \begin{gather*} |\iota(x) - \iota \circ s_x^{-1}(v)|^2 = |v|^2 + \sum_{|\beta|=4} v^{\beta} \tilde{E}_{\iota,\beta}(x,v) , \\ \tilde{E}_{\iota,\beta}(x,v) := \frac{1}{6} \int_0^1 (1 - t)^3 D_v^{\beta}[|\iota(x) - \iota \circ s_x^{-1}(\cdot)|^2]|_{\cdot = tv} ~ dt , \end{gather*} hence denoting \(k^{(1)}(t) := \partial_t k(t)\) and \(E_{\iota}(x,y) := \sum_{|\beta| = 4} s_x(y)^{\beta} \tilde{E}_{\iota,\beta}(x, s_x(y))\) we have, \begin{gather*} k(|\iota(x) - \iota(y)|^2/\epsilon) - k(d_g(x,y)^2/\epsilon) = k_{\iota,\epsilon}(x,y), \\ \quad\quad k_{\iota,\epsilon}(x,y) := \frac{E_{\iota}(x,y)}{\epsilon} \int_0^1 k^{(1)}(d_g(x,y)^2/\epsilon + t E_{\iota}(x,y)/\epsilon) ~ dt . \end{gather*} There is a constant \(C_{\mathcal{M},\iota} > 0\) depending only on the geometry of \(\mathcal{M}\) and embedding \(\iota\) so that for all pairs \((x,y) \in \mathcal{M}^2\) in a geodesically convex neighbourhood, \(|E_{\iota}(x,y)| \leq C_{\mathcal{M},\iota} d_g(x,y)^4\) and by the defining assumptions on \(k\), there is \(R_k > 0\) such that \(k\) and \(k^{(1)}\) decay exponentially outside of \([0,R_k]\). Therefore, if \(\chi : \mathbb{R} \to \mathbb{R}\) is an even, non-negative smooth cut-off supported in \([-2 R_k, 2 R_k]\) and such that \(\chi(t) = 1\) on \(|t| \leq R_k\), then \begin{equation}\begin{aligned} k(|\iota(x) - \iota(y)|^2/\epsilon) - k(d_g(x,y)^2/\epsilon) = \tilde{k}_{\iota,\epsilon}(x,y) + O_{\mathscr{S}}(\epsilon^{\infty}), \\ \tilde{k}_{\iota,\epsilon}(x,y) := k_{\iota,\epsilon}(x,y) \, \chi(d_g(x,y)^2/\epsilon - C_{\mathcal{M},\iota} d_g(x,y)^4/\epsilon) \end{aligned} \nonumber \end{equation} and there is a constant \(C > 0\) depending only on \(C_{\mathcal{M}, \iota}\) such that \(\operatorname{supp}[\tilde{k}_{\iota,\epsilon}(x, \cdot)] \subseteq B_{C \sqrt{\epsilon}}(x) =: \tilde{B}_{\sqrt{\epsilon}}(x) \subset \mathcal{M}\). Now consider the operator \begin{align*} \mathscr{A}_{\epsilon} & - \mathscr{A}_{g,\epsilon} \\ &\quad \equiv \mathscr{A}^{(1)}_{\epsilon} : C^{\infty} \ni u \mapsto \epsilon^{-\frac{n}{2}} \int_{\mathcal{M}} \tilde{k}_{\iota,\epsilon}(\cdot,y) \, u(y) ~ p(y) d\nu_g(y) \in C^{\infty} \pmod{h^{\infty}\Psi^{-\infty}} . \end{align*} We wish to see that this is a \({\Psi\text{DO}}\) with symbol belonging to \(h^{\ell} S^{-m}\) for some \(\ell, m \geq 1\), which would establish \(\mathscr{A}_{\epsilon}\) as a \({\Psi\text{DO}}\) with \(h\)-principal symbol \(H_{g,\epsilon}\). By the definition of \({\Psi\text{DO}}\)s on a manifold, we must thus establish that \(\mathscr{A}^{(1)}_{\epsilon}\) is \emph{pseudolocal} and locally resembles a \({\Psi\text{DO}}\) on an open subset of Euclidean space. We begin with the former property: suppose \(\chi_1, \chi_2 \in C^{\infty}\) with \(\operatorname{supp} \chi_1 \cap \operatorname{supp} \chi_2 = \emptyset\). 
Since \(\tilde{k}_{\iota,\epsilon}(x,\cdot) \chi_2(\cdot) = 0\) whenever \(\operatorname{supp} \chi_2 \cap \tilde{B}_{\sqrt{\epsilon}}(x) = \emptyset\), in order for \(\chi_1(x) \tilde{k}_{\iota,\epsilon}(x,\cdot) \chi_2(\cdot) \neq 0\) we must have \(\operatorname{supp} \chi_2 \cap \tilde{B}_{\sqrt{\epsilon}}(x) \neq \emptyset\) and \(x \in \operatorname{supp} \chi_1\); but the supports of \(\chi_1\) and \(\chi_2\) are a positive distance apart, so when \(\epsilon > 0\) is sufficiently small, \(\tilde{B}_{\sqrt{\epsilon}}(x) \cap \operatorname{supp} \chi_2 = \emptyset\) for every \(x \in \operatorname{supp} \chi_1\), whence uniformly over \((0, 1]_{\epsilon} \times \mathcal{M}^2\) we have \(|\chi_1(x) \tilde{k}_{\iota,\epsilon}(x,y) \chi_2(y)| \in O(\epsilon^{\infty})\). The same holds for all derivatives of \(\chi_1(x) \tilde{k}_{\iota,\epsilon}(x,y) \chi_2(y)\) since the support is stable under differentiation; furthermore, due to smoothness of this kernel, we have that \(\chi_1 \mathscr{A}^{(1)}_{\epsilon}[\chi_2 \cdot] \in h^{\infty} \Psi^{-\infty}\), \emph{viz}., \(\mathscr{A}_{\epsilon}^{(1)}\) is \emph{pseudolocal}. We wish to see now that \(\mathscr{A}_{\epsilon}^{(1)}\) gives, under a change of coordinates within a given patch, a \({\Psi\text{DO}}\) on the corresponding open set in \(\mathbb{R}^{n}\). So fix an atlas for \(\mathcal{M}\) with \(\gamma : \mathcal{M} \supset U \to V \subset \mathbb{R}^{n}\) providing local coordinates on the patch \(U\) and let \(\chi_1, \chi_2 \in C^{\infty}\) with \(\operatorname{supp} \chi_1, \operatorname{supp} \chi_2 \subset U\). Then, \begin{align*} \mathscr{A}^{(1)}_{\epsilon}[\chi_2 u](x) &\equiv \epsilon^{-\frac{n}{2}} \int_{V} \tilde{k}_{\iota,\epsilon}(x,\gamma^{-1}(v)) [(\chi_2 \cdot p \cdot u) \circ \gamma^{-1}](v) \, |\det D\gamma^{-1}(v)| \sqrt{|g_{\gamma^{-1}(v)}|} ~ dv \\ &\equiv \frac{1}{(2\pi h)^{n}} \int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}} e^{\frac{i}{h}\langle \gamma(x) - v, \xi \rangle} a_{\gamma}(\gamma(x),\xi) \, (\chi_2 \cdot u) \circ \gamma^{-1}(v) ~ dvd\xi , \end{align*} with \begin{gather*} a_{\gamma}(\tilde{w},\xi) := \epsilon^{-\frac{n}{2}} \int_{\mathbb{R}^{n}} e^{\frac{i}{h} \langle \xi, w - \tilde{w} \rangle} b_{\gamma,\epsilon}(\tilde{w}, w) ~ dw , \\ b_{\gamma,\epsilon}(\tilde{w},w) := \tilde{k}_{\iota,\epsilon}(\gamma^{-1}(\tilde{w}), \gamma^{-1}(w)) \, p \circ \gamma^{-1}(w) \, |\det D\gamma^{-1}(w)| \sqrt{|g_{\gamma^{-1}(w)}|} \end{gather*} and wherein \(\equiv\) denotes equality up to a term \(O_{\mathscr{S}}(\epsilon^{\infty})\) that is given by the action of a smoothing operator. The definition of \(a_{\gamma}\) is sensible because whenever \(x \in U\), with \(\epsilon\) sufficiently small we have \(\operatorname{supp}[\tilde{k}_{\iota,\epsilon}(x, \cdot)] \subset \tilde{B}_{\sqrt{\epsilon}}(x) \subset U\). We now have \(\chi_1 \mathscr{A}_{\epsilon}^{(1)}[\chi_2 u] = \chi_1 \gamma^* \operatorname{Op}_h^{KN}(a_{\gamma}) (\gamma^{-1})^* [\chi_2 u]\) in the formal sense of quantizing a smooth scalar-valued function; we wish to see that for a certain regime of \(\epsilon\) with respect to \(h\), \(a_{\gamma}\) indeed belongs to an appropriate symbol class. We first switch to normal coordinates, wherein homothetic rescalings apply cleanly so that contracting by \(\epsilon^{\frac{1}{2}}\) in these coordinates exposes the decay of \(a_{\gamma}\) in \(\epsilon\).
On a change of variables \(w = \gamma \circ s_x^{-1}(v)\) with \(x = \gamma^{-1}(\tilde{w})\), we have \begin{gather*} a_{\gamma}(\tilde{w}, \xi) = \epsilon^{-\frac{n}{2}} \int_{\mathbb{R}^{n}} e^{\frac{i}{h} \langle \xi, \gamma \circ s_x^{-1}(v) - \gamma \circ s_x^{-1}(0) \rangle} b_{\iota,\epsilon}(\tilde{w}, v) ~ dv , \\ b_{\iota,\epsilon}(\tilde{w},v) := \tilde{k}_{\iota,\epsilon}(s_x^{-1}(0),s_x^{-1}(v)) \, p \circ s_x^{-1}(v) \, |\det Ds_x^{-1}(v)| \sqrt{|g_{s_x^{-1}(v)}|} . \end{gather*} We may now apply the Kuranishi trick to the phase appearing in the representation of \(a_{\gamma}\) by way of the Taylor expansion \begin{gather*} \gamma \circ s_x^{-1}(v) - \gamma \circ s_x^{-1}(0) = D[\gamma \circ s_x^{-1}]|_{v = 0} \cdot v + E_{\gamma}(\tilde{w},v) , \\ E_{\gamma}(\tilde{w},v) := \sum_{|\beta| = 2} v^{\beta} \int_0^1 D_v^{\beta}[\gamma \circ s_x^{-1}](tv) ~ dt \end{gather*} with \(F(\tilde{w}) := D[\gamma \circ s_x^{-1}]|_{v = 0}\) a smooth matrix-valued function, invertible at all \(\tilde{w} \in V\). Since \begin{gather*} E_{\gamma}(\tilde{w}, \epsilon^{\frac{1}{2}}v) = \epsilon E_{\gamma,\epsilon}(\tilde{w}, v), \\ E_{\gamma,\epsilon}(\tilde{w},v) := \sum_{|\beta| = 2} v^{\beta} \int_0^1 D_v^{\beta}[\gamma \circ s_x^{-1}](t \epsilon^{\frac{1}{2}} v) ~ dt , \end{gather*} another Taylor expansion then gives, \begin{gather*} e^{\frac{i}{h} \langle \xi, F(\tilde{w}) \cdot \epsilon^{\frac{1}{2}} v + E_{\gamma}(\tilde{w}, \epsilon^{\frac{1}{2}} v) \rangle} = e^{i \frac{\sqrt{\epsilon}}{h} \langle \xi, F(\tilde{w}) \cdot v \rangle}\left( 1 + \epsilon^{\frac{1}{2}} \Theta_h(\tilde{w},v,\xi,\epsilon) \right), \\ \Theta_h(\tilde{w},v,\xi,\epsilon) := \sum_{j=1}^{\infty} \frac{\epsilon^{\frac{j-1}{2}}}{j!} \langle E_{\gamma,\epsilon}(\tilde{w},v), (\epsilon^{\frac{1}{2}}/h) \, i \xi \rangle^{j} . \end{gather*} We also have, \begin{gather*} \tilde{k}_{\iota,\epsilon}(x, s_x^{-1}(\epsilon^{\frac{1}{2}} v)) = \epsilon \tilde{b}_{\iota,\epsilon}(x,v) , \\ \tilde{b}_{\iota,\epsilon}(x,v) := \chi(|v|^2 - \epsilon C_{\mathcal{M},\iota} |v|^4) E_{\iota}(x,s_x^{-1}(v)) \int_0^1 k^{(1)}\left( |v|^2 + \epsilon t E_{\iota}(x, s_x^{-1}(v)) \right) dt .
\end{gather*} Therefore, \begin{align*} a_{\gamma}(\tilde{w},\xi) &= \epsilon \int_{\mathbb{R}^{n}} e^{i \frac{\sqrt{\epsilon}}{h}\langle \xi, v \rangle}\left( 1 + \epsilon^{\frac{1}{2}} \Theta_h(\tilde{w}, \tilde{v}, \xi, \epsilon) \right) \tilde{b}_{\iota,\epsilon}(\gamma^{-1}(\tilde{w}), \tilde{v}) \\ &\quad\quad \times p \circ s_x^{-1}(\epsilon^{\frac{1}{2}} \tilde{v}) \, |\det D s_x^{-1}(\epsilon^{\frac{1}{2}} \tilde{v})| \, |g_{s_x^{-1}(\epsilon^{\frac{1}{2}}\tilde{v})}|^{\frac{1}{2}} ~ d\tilde{v} , \\ \tilde{v} &:= F(\tilde{w})^{-1} \cdot v \end{align*} and an integration by parts gives, \begin{gather*} a_{\gamma}(\tilde{w},\xi) = \epsilon \int_{\mathbb{R}^{n}} e^{i \frac{\sqrt{\epsilon}}{h} \langle \xi, v \rangle} (B_{\iota,\epsilon}(\tilde{w}, \tilde{v}) + \epsilon^{\frac{1}{2}} \tilde{B}_{\iota,\epsilon}(\tilde{w}, \tilde{v})) ~ d\tilde{v} , \\ B_{\iota,\epsilon}(\tilde{w},\tilde{v}) := \tilde{b}_{\iota,\epsilon}(x,\tilde{v}) \, p \circ s_x^{-1}(\epsilon^{\frac{1}{2}} \tilde{v}) \, |\det D s_{x}^{-1}(\epsilon^{\frac{1}{2}} \tilde{v})| \, |g_{s_x^{-1}(\sqrt{\epsilon} \tilde{v})}|^{\frac{1}{2}} , \end{gather*} wherein \(\operatorname{supp} B_{\iota,\epsilon}(\tilde{w}, \cdot) \subset \{ |\tilde{v}| \leq C \}\) and \begin{equation}\begin{aligned} \tilde{B}_{\iota,\epsilon}(\tilde{w},\tilde{v}) := \sum_{j=1}^{\infty} \frac{\epsilon^{\frac{j-1}{2}}}{j!} \langle E_{\gamma,\epsilon}(\tilde{w}, \tilde{v}), \nabla_v \rangle^j[B_{\iota, \epsilon}(\tilde{w}, \cdot)](\tilde{v}) \end{aligned} \nonumber \end{equation} is a convergent Taylor series that defines a smooth function, \(\tilde{B}_{\iota,\epsilon}(\tilde{w}, \tilde{v}) \in C_c^{\infty}(V \times \{|\tilde{v}| \leq C \})\). Now we proceed as we've done before: \begin{align*} \langle \xi & \rangle^{|\beta|-m} |\partial_{\tilde{w}}^{\vartheta}\partial_{\xi}^{\beta} a_{\gamma}| \\ &= \epsilon \langle \xi \rangle^{|\beta|-m} \left|\partial_{\tilde{w}}^{\vartheta} \; \int_{\mathbb{R}^{n}} (i \epsilon^{\frac{1}{2}}/h)^{|\beta|} v^{\beta} e^{i \frac{\sqrt{\epsilon}}{h}\langle \xi, v \rangle} (B_{\iota,\epsilon}(\tilde{w}, \tilde{v}) + \epsilon^{\frac{1}{2}} \tilde{B}_{\iota,\epsilon}(\tilde{w},\tilde{v})) ~ d\tilde{v} \right| \\ &= \frac{\epsilon^{1 + \frac{m}{2}}}{h^m} \left| \int_{\mathbb{R}^{n}} e^{i\frac{\sqrt{\epsilon}}{h}\langle \xi, v \rangle} (\epsilon/h^2 + \Delta_v)^{\frac{|\beta|-m}{2}}\left( v^{\beta} \, \partial_{\tilde{w}}^{\vartheta} \left[ (B_{\iota,\epsilon}(\tilde{w}, \tilde{v}) + \epsilon^{\frac{1}{2}} \tilde{B}_{\iota,\epsilon}(\tilde{w},\tilde{v}))/|\det F(\tilde{w})| \right] \right) dv \right| \end{align*} so that using \(\epsilon = h^{2 + \alpha}\) gives \(\epsilon^{1 + \frac{m}{2}}/h^m = h^{2 + \alpha(1 + m/2)}\). The Fourier transform is of a smooth, compactly supported function, hence it is uniformly bounded in \((0, 1]_h \times T^*\mathcal{M}\). Thus, if \(\alpha > 0\) then \(a_{\gamma} \in h^{\ell} S^{-m}\) for \((\ell, m) = (2 + \alpha(1 - m/2), m)\) with \(m \in \mathbb{N}\), \(m \leq 2(1 + 1/\alpha)\) and we may change quantizations to the adjoint form by a transformation of the symbol \citep[Theorem 4.13]{zworski2012}, which retains its order. \end{proof} \begin{example*} The simplest case is to take for \(\mathscr{A}_{g,\epsilon}\) the operator with Schwartz kernel \(k_{g,\epsilon}(x,y) := \epsilon^{-\frac{n}{2}} e^{-d_g(x,y)^2/\epsilon}\) and uniform density \(p \equiv 1\). 
Then, an \(h\)-Fourier transformation shows that \(H_{g,h,\epsilon} = \pi^{\frac{n}{2}} e^{-\frac{\epsilon}{4 h^2} |\xi|_{g_x}^2}\). Recall that due to the uncertainty principle, a semiclassical \(\Psi\)DO is defined for symbols with small variation in regions of phase space with unit volume. This symbol \(H_{g,h,\epsilon}\) varies at most polynomially within a ball of radius \(O(h/\sqrt{\epsilon})\) in each cotangent fibre \(T^*_x\mathcal{M}\) and decays exponentially outside of this ball. When \(\epsilon = h^{2 + \alpha}\), this symbol has exponential decay within a ball of radius \(h^{-\alpha/2 - \delta}\) for \(\delta > 0\), so for \(\alpha < 0\) this amounts to large variation in sub-unit volume regions of phase space. Another (\emph{dual}) perspective is that in order for \(\mathscr{A}_{g,\epsilon}\) to be regarded as a semiclassical \(\Psi\)DO in quantization scale \(h\), by pseudolocality it must have Schwartz kernel \(k_{g,\epsilon}\) that decays, along with all derivatives, as \(O(h^{\infty} |x - y|^{-\infty})\) outside of the diagonal. The critical rate here is \(\sqrt{\epsilon}\), in the sense that for points with \(0 < d_g(x,y) < \sqrt{\epsilon}\), the Schwartz kernel \(k_{g,\epsilon}\) follows this decay rate if and only if \(h > \sqrt{\epsilon}\). Effectively, if \(h\) remains below this rate, then all of the \emph{content} of the symbol is lost and hence the quantum-classical correspondence breaks down: quantization forces such a symbol to be essentially zero (an impulse at the zero section of the cotangent bundle), while \(\mathscr{A}_{g,\epsilon}\) behaves as \(\epsilon \to 0\) like an operator with symbol having support in all of the cotangent bundle and moreover, going to a constant as \(h \to 0\). \end{example*} We may now employ symbol calculus to express graph Laplacians as \({\Psi\text{DO}}\)s: \begin{theorem} \hypertarget{thm:sym-renorm-graph-lap}{\label{thm:sym-renorm-graph-lap}} Let \(\lambda \geq 0\), \(\epsilon, h \in (0, 1]\) with \(\epsilon = h^{2 + \alpha}\) and \(\alpha > 0\). Then, \begin{equation}\begin{aligned} \mathcal{L}_{g,\lambda,h^{\alpha}} := \frac{2 c_0}{c_2} \frac{1 - H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon})}{h^{\alpha}} \in h^0 S^2 \end{aligned} \nonumber \end{equation} and \(h^2 \Delta_{\lambda,\epsilon} \in h^0 \Psi^2\) with \begin{equation}\begin{aligned} h^2 \Delta_{\lambda,\epsilon} \equiv \operatorname{Op}_h(\mathcal{L}_{g,\lambda,h^{\alpha}}) \pmod{h \Psi} . \end{aligned} \nonumber \end{equation} Moreover, if we fix \(0 \leq \delta \leq 1\) and \(C > 0\) and let \(\chi_{C h^{-\delta}} : \mathbb{R} \to \mathbb{R}\) be a smooth function with \(\operatorname{supp} \chi_{C h^{-\delta}} \subseteq [-C h^{-2\delta}, C h^{-2\delta}]\), then for all pairs \((m,\alpha) \in \mathbb{Z} \times [1, \infty)\) that satisfy \(\alpha \geq 1 + \delta(4 + m)\), we have \begin{equation}\begin{aligned} \operatorname{Op}_h(\mathcal{L}_{g,\lambda,h^{\alpha}} \chi_{C h^{-\delta}}(|\xi|_{g_x}^2)) \equiv \operatorname{Op}_h(|\xi|_{g_x}^2 \chi_{C h^{-\delta}}(|\xi|_{g_x}^2)) \pmod{h \Psi^{-m}} \end{aligned} \nonumber \end{equation} and in particular, when \(\delta = 0\), this holds with \(m = \infty\) for all \(\alpha \geq 1\).
\end{theorem} \begin{proof} Since \protect\hyperlink{lem:averaging-op-is-psido}{Lemma \ref{lem:averaging-op-is-psido}} gives \(\mathscr{A}_{\epsilon}\) as a \({\Psi\text{DO}}\) and the operators of multiplication by \(p_{\lambda,\epsilon}^{-1}, p_{\epsilon}^{-\lambda} \in C^{\infty}\) are readily seen to be \({\Psi\text{DO}}\)s with symbols \(p_{\lambda,\epsilon}^{-1}, p_{\epsilon}^{-\lambda} \in h^0 S^0\) respectively, we may employ the symbol calculus upon writing \(A_{\lambda,\epsilon}\) as a product of these operators to get: \begin{gather*} A^{(1)}_{\lambda,\epsilon} := A_{\lambda,\epsilon} - A_{g,\lambda,\epsilon} = p_{\lambda,\epsilon}^{-1} p_{\epsilon}^{-\lambda} \mathscr{A}^{(1)}_{\epsilon} p_{\epsilon}^{-\lambda} \in h^{\ell} \Psi^{-m} , \\ A_{g,\lambda,\epsilon}[\cdot] := p_{\lambda,\epsilon}^{-1} p_{\epsilon}^{-\lambda} \mathscr{A}_{g,\epsilon}[p_{\epsilon}^{-\lambda} \cdot] \in h^0 \Psi^0 \end{gather*} with \((\ell, m) \in \mathbb{R} \times \mathbb{Z}\) as in \protect\hyperlink{lem:averaging-op-is-psido}{Lemma \ref{lem:averaging-op-is-psido}} and \begin{equation}\begin{aligned} \tilde{\Delta}_{\lambda,h,\alpha} := h^2 \frac{c_2}{2 c_0} \Delta_{\lambda,\epsilon} = \frac{I - A_{\lambda,\epsilon}}{h^{\alpha}} = \frac{I - A_{g,\lambda,\epsilon}}{h^{\alpha}} - h^{-\alpha} A^{(1)}_{\lambda,\epsilon} . \end{aligned} \nonumber \end{equation} Since \(h^{-\alpha} A^{(1)}_{\lambda,\epsilon} \in h^{\ell'} \Psi^{-m}\) with \((\ell', m) = (2 - \alpha m/2, m)\) and \begin{equation}\begin{aligned} \operatorname{Op}_h(H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon})) \equiv \mathscr{A}_{g,\epsilon} \circ \frac{1}{p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}} \pmod{h^{\infty} \Psi^{-\infty}}, \end{aligned} \nonumber \end{equation} we have \begin{gather} \tilde{\Delta}_{\lambda,h,\alpha} \equiv (p_{\epsilon}^{\lambda} p_{\lambda,\epsilon})^{-1} \tilde{\Delta}_{g,\lambda,h^{\alpha}} \circ p_{\epsilon}^{\lambda} p_{\lambda,\epsilon} - h^{-\alpha} A^{(1)}_{\lambda,\epsilon} \pmod{h^{\infty} \Psi^{-\infty}}, \label{eq:glap-intrinsic-conj-renorm} \\ \tilde{\Delta}_{g,\lambda,h^{\alpha}} := \frac{I - \operatorname{Op}_h(H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}))}{h^{\alpha}} , \nonumber \end{gather} hence to see the first part of the Theorem, it suffices to show that \(L_{g,\lambda,h^{\alpha}} := (1 - H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}))/h^{\alpha} \in h^0 S^2\). A change of variables in the representation \(\eqref{eq:sym-intrinsic-diff-op}\) shows the form, \begin{equation}\begin{aligned} H_{g,h^{\alpha}}(x,\xi) = \int_{\mathbb{R}^{n}} e^{i h^{\frac{\alpha}{2}} \langle z, g_x^{-\frac{1}{2}} \xi \rangle} k(|z|^2) ~ dz ~ p(x) \end{aligned} \nonumber \end{equation} and combining this with the defining assumption that \(k\) decays at least as fast as an exponential implies that \(H_{g,h^{\alpha}}\) is analytic in \(\xi\). Thus, by a Taylor series at \(\xi = 0\) we have the expansion \begin{gather*} (H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}))(x,\xi) = \frac{p(x)}{p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}(x)}\sum_{|\kappa| \geq 0} (-1)^{|\kappa|} h^{\alpha |\kappa|} (g_x^{-\frac{1}{2}}\xi)^{2\kappa} c_{2\kappa}/(2\kappa)! , \\ c_{2\kappa} := \int z^{2\kappa} k(|z|^2) ~ dz . 
\end{gather*} By \protect\hyperlink{lem:taylor-expand-deg-func}{Lemma \ref{lem:taylor-expand-deg-func}}, there is a function \(q_{\lambda} \in C^{\infty}\) depending only on \(p\), \(\mathcal{M}\), \(c_0\) and \(\lambda\) such that \begin{equation}\begin{aligned} p_{\lambda,\epsilon} = (c_0 p)^{1 - 2\lambda} + \epsilon \, c_2 p^{1 - 2\lambda} q_{\lambda} + O(\epsilon^2). \end{aligned} \nonumber \end{equation} Applying this to \(p_{\epsilon} = p_{0, \epsilon}\) and taking a Taylor series shows \begin{gather*} p_{\epsilon}^{2\lambda} = (c_0 p)^{2\lambda} + \epsilon \, (c_0 p)^{2\lambda - 1} \tilde{q}_{\lambda} + O(\epsilon^2), \\ \tilde{q}_{\lambda} := 2\lambda c_2 p \, q_0 . \end{gather*} Therefore, \begin{gather*} p_{\lambda,\epsilon} p_{\epsilon}^{2\lambda} = c_0 p + \epsilon \tilde{q}_{\lambda,0} + O(\epsilon^2), \\ \tilde{q}_{\lambda,0} := \tilde{q}_{\lambda} + c_0^{2\lambda} c_2 p \, q_{\lambda} \end{gather*} and upon taking a geometric series expansion this leads to \begin{equation}\begin{aligned} p/(p_{\lambda,\epsilon} p_{\epsilon}^{2\lambda}) = c_0^{-1}[1 - \epsilon \, \tilde{q}_{\lambda,0}/(c_0 p) + O(\epsilon^2)]. \end{aligned} \nonumber \end{equation} Employing this expansion we now have, \begin{align} \label{eq:glap-intrinsic-sym-expansion} \begin{split} &\frac{1 - H_{g,h^{\alpha}}/(p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon})}{h^{\alpha}} \\ &\quad= h^{-\alpha} - h^{-\alpha} \frac{c_0 p}{p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon}} + \frac{p}{p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon}}\sum_{|\kappa| \geq 1} \frac{(-1)^{|\kappa|-1}}{(2\kappa)!} h^{\alpha(|\kappa| - 1)} (g_x^{-\frac{1}{2}} \xi)^{2\kappa} c_{2\kappa} \\ &\quad= \frac{c_2}{2}\frac{p}{p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon}}|\xi|^2_{g_x} + h^2 \tilde{q}_{\lambda,0}/(c_0 p) + O_x(h^{4 + \alpha}) \\ & \quad\quad + \frac{p}{p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon}}\sum_{|\kappa| \geq 2} (-1)^{|\kappa| - 1} h^{\alpha(|\kappa| - 1)} (g_x^{-\frac{1}{2}} \xi)^{2\kappa} c_{2\kappa}/(2\kappa)! \\ &\quad= \frac{c_2}{2 c_0} |\xi|^2_{g_x} + O_x(h^2) + \frac{p}{p_{\epsilon}^{2 \lambda} p_{\lambda,\epsilon}}\sum_{|\kappa| \geq 2} \frac{(-1)^{|\kappa| - 1}}{(2\kappa)!} h^{\alpha(|\kappa| - 1)} (g_x^{-\frac{1}{2}} \xi)^{2\kappa} c_{2\kappa} , \end{split} \end{align} wherein \(O_x(h^2)\) denotes a term that is a function only of \(x\) and \(h\) and is uniformly bounded (due to compactness of \(\mathcal{M}\)) by a constant multiple of \(h^2\). Recalling that \(L_{g,\lambda, h^{\alpha}} = (1 - H_{g,h^{\alpha}}/(p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}))/h^{\alpha}\), we see from this that for small \(|\xi| \leq h^{-\frac{\alpha}{2}}\), \(m \geq 2\), \(0 \leq |\beta| \leq m\) and all \(\gamma \geq 0\), \begin{align*} \langle \xi \rangle^{|\beta|-m} & |\partial_x^{\gamma} \partial_{\xi}^{\beta} L_{g,\lambda,h^{\alpha}}| \\ &\lesssim_{\gamma} \langle \xi \rangle^{2-m} + h^2 \langle \xi \rangle^{-m} + \langle \xi \rangle^{|\beta| - m}\sum_{\substack{|\kappa| \geq \\ \max\{ 2, |\beta|/2 \}}} h^{\alpha( |\kappa| - 1)} |\xi^{2\kappa - \beta}| \, c_{2\kappa}/(2\kappa - \beta)! \\ &\lesssim_{\gamma,\beta} (1 + h^{-\alpha})^{\frac{|\beta| - 2 - (m -2)}{2}} (h^\alpha)^{\frac{|\beta|}{2} - 1} \sum_{|\kappa| \geq 2} c_{2\kappa}/(2\kappa - \beta)!
\\ &\lesssim_{\gamma,\beta} (h^{\alpha} + 1)^{\frac{|\beta|}{2} - 1} (1 + h^{-\alpha})^{-\frac{m-2}{2}} \\ &\lesssim_{\gamma,\beta} 1, \end{align*} while for large \(|\xi| > h^{-\frac{\alpha}{2}}\), \begin{align*} \langle \xi \rangle^{|\beta|-m} &|\partial_x^{\gamma} \partial_{\xi}^{\beta} L_{g,\lambda,h^\alpha}| \\ &\lesssim (1 + h^{-\alpha})^{\frac{|\beta| - m}{2}} (h^{\alpha})^{\frac{|\beta|}{2} - 1} \left|\int_{\mathbb{R}^{n}} e^{i h^{\frac{\alpha}{2}}\langle z, \xi \rangle} z^{\beta} \partial_x^{\gamma} \left( \delta_0(z) - k(|g_x^{\frac{1}{2}} z|^2) |g_x|^{\frac{1}{2}} \frac{p}{p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}}(x) \right) ~ dz \right| \\ &\lesssim_{\gamma,\beta} (h^{\alpha} + 1)^{\frac{|\beta|}{2}-1} (1 + h^{-\alpha})^{-\frac{m-2}{2}} \end{align*} so together this implies \(\langle \xi \rangle^{|\beta|-m} |\partial_x^{\gamma} \partial_{\xi}^{\beta} L_{g,\lambda,h^\alpha}|\) is uniformly bounded in \((0, 1]_h \times T^*\mathcal{M}\). Further, for \(|\beta| \geq m\) we have \begin{align} \langle \xi &\rangle^{|\beta|-m} |\partial_x^{\gamma} \partial_{\xi}^{\beta}L_{g,\lambda,h^{\alpha}}| \nonumber\\ &= \left|\int e^{i h^{\frac{\alpha}{2}}\langle z, \xi \rangle}(h^{\alpha})^{\frac{m}{2} - 1} \; (h^{\alpha} + \Delta_z)^{\frac{|\beta|-m}{2}} \left(z^{\beta} \partial_x^{\gamma} \left[ k(|g_x^{\frac{1}{2}} z|^2) |g_x|^{\frac{1}{2}} \frac{p}{p_{\epsilon}^{2\lambda} p_{\lambda,\epsilon}}(x) \right] \right) ~ dz \right| \label{eq:glap-is-symbol}\\ &< \infty, \nonumber \end{align} so altogether we find that \(L_{g,\lambda,h^{\alpha}} \in h^0 S^2\). To see that \(L_{g,\lambda,h^{\alpha}}\) has order exactly \((0,2)\) when \(\alpha > 0\), simply evaluate the right-hand side of \(\eqref{eq:glap-is-symbol}\) at \(\xi = 0\) with \(|\beta| \geq 2 > m\). Now an application of symbol calculus to \(\eqref{eq:glap-intrinsic-conj-renorm}\) shows that for \(\alpha > 0\), \begin{equation}\begin{aligned} h^2 \Delta_{\lambda,\epsilon} \equiv \operatorname{Op}_h((2c_0/c_2) \, L_{g,\lambda,h^{\alpha}}) \pmod{h \Psi} . \end{aligned} \nonumber \end{equation} If we introduce a smooth cut-off \(\chi_{C h^{-\delta}} : \mathbb{R} \to \mathbb{R}\) with \(\operatorname{supp} \chi_{C h^{-\delta}} \subseteq [-C h^{-2\delta}, C h^{-2\delta}]\) for \(0 < \delta < 1\) and \(C > 0\), then from \(\eqref{eq:glap-intrinsic-sym-expansion}\) we see that for each \(m \in \mathbb{R}\), \begin{align*} h^{-1}\langle \xi \rangle^{|\beta| + m} & |\chi_{C h^{-\delta}}(|\xi|_{g_x}^2) \, \partial_x^{\gamma} \partial_{\xi}^{\beta}[L_{g,\lambda,h^{\alpha}} - (c_2/(2 c_0)) |\xi|_{g_x}^2]| \\ &\lesssim_{\gamma} h^{-1} (1 + C^2h^{-2\delta})^{\frac{|\beta| + m}{2}} \sum_{\substack{|\kappa| \geq \\ \max\{ 2, |\beta|/2 \}}} h^{\alpha(|\kappa| - 1) - \delta (2|\kappa| - |\beta|)} c_{2\kappa}/(2\kappa - \beta)! \\ &\lesssim_{\gamma,\beta} \sum_{\substack{|\kappa| \geq \\ \max\{ 2, |\beta|/2 \}}} h^{-\delta(m + |\beta|)} h^{-1} h^{\alpha(|\kappa| - 1) - \delta (2|\kappa| - |\beta|)} c_{2\kappa}/(2\kappa - \beta)! \\ &\lesssim_{\gamma,\beta} \sum_{|\kappa| \geq 2} h^{(|\kappa| - 1)\left( \alpha - \frac{\delta(2|\kappa| + m) + 1}{|\kappa| - 1} \right)} c_{2\kappa}/(2\kappa)! \\ &\lesssim_{\gamma,\beta} h^{\alpha - (1 + \delta(4 + m))} .
\end{align*} Since the partial derivatives of \(\chi_{C h^{-\delta}}(|\xi|^2_{g_x})\) are also cut-offs of \(|\xi|^2_{g_x}\) in the same region and the above bound holds with the factor \(\chi_{C h^{-\delta}}\) replaced by any such cut-off, we find that upon summing up via the triangle inequality, we have the same bound up to (new) constants depending on \(\gamma\) and \(\beta\). Therefore, \begin{equation}\begin{aligned} \operatorname{Op}_h((2 c_0/c_2) L_{g,\lambda,h^{\alpha}} \chi_{C h^{-\delta}}(|\xi|_{g_x}^2)) \equiv \operatorname{Op}_h(|\xi|_{g_x}^2 \chi_{C h^{-\delta}}(|\xi|_{g_x}^2)) \pmod{h \Psi^{-m}} \end{aligned} \nonumber \end{equation} whenever \(\alpha \geq 1 + \delta(4 + m)\) so in particular, we may take \(m = \infty\) when \(\delta = 0\). \end{proof} \begin{remark} The second part of the Theorem can be generalized to give a lower bound \(\rho + \delta(4 + m)\) on \(\alpha\) if instead of using the unit step size for orders of \(h\) in the pseudodifferential calculus, we take steps of size \(\rho > 0\). In any case, while \(\alpha = 0\) is applicable to the first part of the Theorem --- and then we actually have \(h^2 \Delta_{\lambda,\epsilon} \in h^0 \Psi^{-\infty}\) --- as the second part shows, this would not be able to extract the \emph{kinetic term} \(|\xi|_{g_x}^2\) (for any step size \(\rho > 0\)). \end{remark} \hypertarget{geodesic-flows-of-symbols}{% \subsection{Geodesic flows of symbols}\label{geodesic-flows-of-symbols}} We now extend the relationships between classical observables and their quantized counterparts via coherent states as displayed in \protect\hyperlink{semi-classical-measures-of-coherent-states}{Section \ref{semi-classical-measures-of-coherent-states}}, to Hamiltonian dynamics on the observables and operator dynamics on their quantizations. The basic idea is facilitated by Egorov's theorem, which states roughly that if \(Q\) is a \({\Psi\text{DO}}\) with principal symbol \(q_0\) then the \emph{operator dynamics} \(A(t) = e^{-\frac{i}{h} t Q} \operatorname{Op}_h(a) e^{\frac{i}{h} t Q}\) preserves pseudodifferentiality and order, meaning that up to a given time \(T\) constant in \(h\), \(A(0) \in \Psi^m \implies A(t) \in \Psi^m\) for all \(|t| \leq T\) and furthermore, \(A(t) \equiv \operatorname{Op}_h(a \circ \Phi_{q_0}^t) \mod h \Psi^{m-1}\). Therefore, combining with \protect\hyperlink{lem:coherent-localization}{Lemma \ref{lem:coherent-localization}} leads to \(\langle \psi_h | A(t) | \psi_h \rangle = a \circ \Phi_{q_0}^t(x_0, \xi_0) + O(h)\). As we have seen, the symbol of a graph Laplacian is locally equal to \(|\xi|_{g_x}^2\). So, for a fixed \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(h\) sufficiently small, we expect that the Hamiltonian flow given by the square root of (the principal part of) the symbol of a graph Laplacian coincides with the co-geodesic flow in a neighbourhood of this point. An issue, however, is that graph Laplacians are not elliptic, which obstructs directly utilising their square roots. Our way out goes back to the application of coherent states: since \(\psi_h\) is localized about \((x_0, \xi_0)\) in phase space, by going through the FBI transform as in \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}}, we can approximate it with the application of the quantization of a phase space localized symbol. Upon choosing an appropriate form of this symbol in concert with the spectral gaps in the graph Laplacian, we get approximately a spectral cut-off of the graph Laplacian.
Finally, we use this to bring a cut-off into the generator \(Q\), for which we have the following version of Egorov's theorem: \begin{theorem}[{Egorov}] \hypertarget{thm:egorov}{\label{thm:egorov}} Let \(q \in C_c^{\infty}(T^*\mathcal{M} \times [0, h_0)) \cap h^0 S^{-\infty}\) for some \(h_0 > 0\) have a real principal symbol \(q_0\) that generates the Hamiltonian flow \(\Phi_{q_0}^t\) on \(T^*\mathcal{M}\). Then, given \(a \in h^{\ell} S^{m}\) and \(T > 0\), for all \(|t| \leq T\), \(a \circ \Phi_{q_0}^t \in h^{\ell} S^{m}\) and \begin{equation}\begin{aligned} e^{-\frac{i}{h} t Q} \operatorname{Op}_h(a) e^{\frac{i}{h} t Q} \equiv \operatorname{Op}_h(a \circ \Phi_{q_0}^t) \pmod{h^{\ell + 1} \Psi^{-\infty}} \end{aligned} \nonumber \end{equation} with \(Q := \operatorname{Op}_h(q)\). \end{theorem} \begin{proof} The main differences from standard treatments are that we use the full symbol to generate the operator dynamics, while we give the correspondence with the flow generated by the principal part and we are using the \emph{standard} quantization as opposed to the Weyl quantization for \(Q\). Thus, while the propagator \(e^{\frac{i}{h} t Q}\) solves an operator equation of the same form, it need not be unitary. Another minor difference is that typically Egorov's theorem is given for polyhomogeneous symbols when the operator dynamics uses the full symbol and the correspondence is to a classical flow with only the principal symbol. Nevertheless, the statement holds following the arguments in the proof of \citep[\emph{Egorov's Theorem} in \(\S 1\) of Ch. 8]{taylor81} and importing them into the semiclassical calculus; we carry this out presently. Let \(A := \operatorname{Op}_h(a)\) and \(A(t) := U^{-t} \operatorname{Op}_h(a) U^t\) for \(U:= e^{\frac{i}{h} Q}\). Then, \(A(t)\) is a solution to the system, \begin{equation}\begin{aligned} \partial_t A(t) = \frac{i}{h} [Q, A(t)], \quad A(0) = A . \end{aligned} \nonumber \end{equation} We wish to construct a solution \(\tilde{A}(t)\) to this system with principal symbol \(a \circ \Phi^t_{q_0}\) such that \(\tilde{A}(t) - A(t) \in h^{\infty} \Psi^{-\infty}\). That is, we look for \(\tilde{A}(t)\) that satisfies \begin{equation}\begin{aligned} \partial_t \tilde{A}(t) = \frac{i}{h}[Q, \tilde{A}(t)] + R(t) , \quad \tilde{A}(0) = A \end{aligned} \nonumber \end{equation} with \(R(t) \in h^{\infty} \Psi^{-\infty}\). On the symbol side, we look for \(\tilde{a}(t, x, \xi ; h)\) with asymptotic expansion \begin{equation}\begin{aligned} \tilde{a}(t,x,\xi ; h) \sim \sum_{j=0}^{\infty} \tilde{a}_j(t, x, \xi ; h) \end{aligned} \nonumber \end{equation} such that \(\tilde{a}_j \in h^{\ell + j} S^{m - j}\). Then by \citep[Theorem 9.5]{zworski2012}, the symbol of \(\frac{i}{h}[Q, \tilde{A}(t)]\) must have an asymptotic expansion in \(h^{\ell} S^{m}\) given in local coordinates by, \begin{align*} \operatorname{Sym}\left[ \frac{i}{h}[Q, \tilde{A}(t)] \right] &\sim \sum_{|\beta| \geq 0} h^{|\beta|-1} \frac{i^{-|\beta|+1}}{\beta!} (\partial_{\xi}^{\beta} \tilde{a}(t,x,\xi ; h) \partial_x^{\beta} q(x,\xi ; h) - \partial_{\xi}^{\beta} q(x,\xi ; h) \partial_x^{\beta}\tilde{a}(t,x,\xi ; h)) \\ &= \{ q_0, \tilde{a} \} + \{ \tilde{q}_0, \tilde{a} \} + \sum_{|\beta| \geq 2} h^{|\beta| - 1} \frac{i^{-|\beta| + 1}}{\beta!} (\partial_{\xi}^{\beta} \tilde{a} \, \partial_x^{\beta} q - \partial_{\xi}^{\beta} q \, \partial_x^{\beta} \tilde{a}) \end{align*} with \(\tilde{q}_0 := q - q_0 \in h S^{-\infty}\). 
Since \(\partial_t \tilde{A}(t) = \operatorname{Op}_h(\partial_t \tilde{a})\), it follows that \(\operatorname{Sym}[i h^{-1}[Q, \tilde{A}(t)]] = \partial_t \tilde{a}\), hence the asymptotic expansions must coincide. So let \(\tilde{a}_0(t, x, \xi ; h)\) be defined by the transport equation \begin{equation}\begin{aligned} \partial_t \tilde{a}_0 = \{ q_0, \tilde{a}_0 \}, \quad\quad \tilde{a}_0(0, x, \xi) = a(x, \xi), \end{aligned} \nonumber \end{equation} which is sensible as it is solved by \(\tilde{a}_0 = a \circ \Phi_{q_0}^t\), which is a symbol of order \((\ell, m)\) since \(\Phi_{q_0}^t\) is smooth and equal to the identity outside a compact set. The operator \(\tilde{A}_0(t) := \operatorname{Op}_h(\tilde{a}_0)\) has \begin{align*} \operatorname{Sym}\left[ \frac{i}{h} [Q, \tilde{A}_0(t)] - \partial_t \tilde{A}_0(t) \right] &\sim \{ \tilde{q}_0, \tilde{a}_0 \} + \sum_{|\beta| \geq 2} h^{|\beta| - 1} \frac{i^{-|\beta| + 1}}{\beta!} (\partial_{\xi}^{\beta} \tilde{a}_0 \partial_x^{\beta} q - \partial_{\xi}^{\beta} q \partial_x^{\beta} \tilde{a}_0) \\ &=: r_0(t,x,\xi) \in h^{\ell + 1} S^{-\infty}. \end{align*} Now, \(\tilde{a}_1(t, x, \xi ; h)\) defined by the transport equation \begin{equation}\begin{aligned} \partial_t \tilde{a}_1 = \{ q_0, \tilde{a}_1 \} - r_0, \quad\quad \tilde{a}_1(0, x, \xi) = 0 \end{aligned} \nonumber \end{equation} is a symbol of order \((\ell + 1, -\infty)\) due to Duhamel's formula and the fact that \(r_0\) is smooth and compactly supported while \(\Phi_{q_0}^t\) is smooth. This leads to a recursive procedure to determine \(\tilde{a}_j\) for \(j \geq 1\) via transport equations: suppose that \(\tilde{a}_0 \in h^{\ell} S^{m}\) and that for all \(1 \leq j \leq J-1\) we have \(\tilde{a}_j \in h^{\ell + j} S^{-\infty}\) such that \(\tilde{A}_{J-1}(t) := \sum_{j=0}^{J-1} \operatorname{Op}_h(\tilde{a}_j)\) has \(\sum_{j=0}^{J-1} \tilde{a}_j(0,x,\xi) = a(x,\xi)\) and \begin{gather*} \operatorname{Sym}\left[ \frac{i}{h} [Q, \tilde{A}_{J-1}(t)] - \partial_t \tilde{A}_{J-1}(t) \right] \sim r_{J-1} , \\ r_{J-1} := \{ \tilde{q}_0, \tilde{a}_{J-1} \} + \sum_{|\beta| \geq 2} h^{|\beta| - 1} \frac{i^{-|\beta| + 1}}{\beta!} (\partial_{\xi}^{\beta} \tilde{a}_{J-1} \partial_x^{\beta} q - \partial_{\xi}^{\beta} q \partial_x^{\beta} \tilde{a}_{J-1}) \in h^{\ell + J} S^{-\infty}. \end{gather*} Then, let \(\tilde{a}_J(t,x,\xi ; h)\) be defined by the transport equation \begin{equation}\begin{aligned} \partial_t \tilde{a}_{J} = \{ q_0, \tilde{a}_J \} - r_{J-1}, \quad\quad \tilde{a}_J(0, x, \xi) = 0. \end{aligned} \nonumber \end{equation} Since \(r_{J-1}\) is smooth and compactly supported, by Duhamel's formula this is solved with \(\tilde{a}_J\) also of order \((\ell + J, -\infty)\). Moreover, \(\tilde{A}_J(t) := \tilde{A}_{J-1}(t) + \operatorname{Op}_h(\tilde{a}_{J})\) has \(\sum_{j=0}^J \tilde{a}_j(0,x,\xi) = a(x,\xi)\) and \begin{gather*} \operatorname{Sym}\left[ \frac{i}{h} [Q, \tilde{A}_{J}(t)] - \partial_t \tilde{A}_{J}(t) \right] \sim r_{J} , \\ r_{J} := \{ \tilde{q}_0, \tilde{a}_{J} \} + \sum_{|\beta| \geq 2} h^{|\beta| - 1} \frac{i^{-|\beta| + 1}}{\beta!} (\partial_{\xi}^{\beta} \tilde{a}_{J} \partial_x^{\beta} q - \partial_{\xi}^{\beta} q \partial_x^{\beta} \tilde{a}_{J}) \in h^{\ell + J + 1} S^{-\infty} .
\end{gather*} Continuing like this, we have for all \(j \geq 1\), \(\tilde{a}_j \in h^{\ell + j} S^{-\infty}\) so by Borel's theorem there is \(\tilde{a}^{(1)} \in h^{\ell + 1} S^{-\infty}\) such that \begin{equation}\begin{aligned} \tilde{a}^{(1)} \sim \sum_{j=1}^{\infty} \tilde{a}_j, \quad\quad \tilde{a} := \tilde{a}_0 + \tilde{a}^{(1)} \in h^{\ell} S^m \end{aligned} \nonumber \end{equation} and therefore, \(\tilde{A}(t) := \operatorname{Op}_h(\tilde{a}(t,x,\xi ; h))\) satisfies \begin{equation}\begin{aligned} \partial_t \tilde{A}(t) = \frac{i}{h} [Q, \tilde{A}] + R(t), \quad\quad \tilde{A}(0) = A , \end{aligned} \nonumber \end{equation} with \(R(t) \in h^{\infty} \Psi^{-\infty}\). Now we wish to see that \(\tilde{R}(t) := A(t) - \tilde{A}(t) \in h^{\infty} \Psi^{-\infty}\). Since \(U^{-t} : H_h^s \to H_h^s\) continuously for all \(s \in \mathbb{R}\), it suffices to show that \(U^{-t} A - \tilde{A}(t) U^{-t} = \tilde{R}(t) U^{-t} \in h^{\infty} \Psi^{-\infty}\). Let \(u \in H_h^{-N}\) for \(N \in \mathbb{N}\) and denote \(v(t, x) := \tilde{A}(t) U^{-t}[u]\). Then \(v\) satisfies \(v(0, x) = A[u]\) and \begin{align*} \partial_t v &= (\partial_t[\tilde{A}(t)] U^{-t} + \tilde{A}(t) \partial_t[U^{-t}])[u] \\ &= \left( \frac{i}{h} [Q, \tilde{A}(t)] + R(t) \right)U^{-t}[u] + \frac{i}{h} \tilde{A}(t) Q U^{-t}[u] \\ &= \frac{i}{h} Q \tilde{A}(t)U^{-t}[u] + R(t) U^{-t}[u] \\ &= \frac{i}{h} Q[v] + R(t) U^{-t}[u]. \end{align*} Thus, \(\tilde{v}(t, x) := v(t,x) - U^{-t} A[u](x)\) satisfies \begin{equation}\begin{aligned} \partial_t \tilde{v} = \frac{i}{h} Q[\tilde{v}] + R(t)U^{-t}[u], \quad\quad \tilde{v}(0, x) = 0. \end{aligned} \nonumber \end{equation} Since \(w(t) := R(t) U^{-t}[u] \in C^{\infty}\) with \(||w(t)||_{H_h^N} = O(h^{\infty})\) and \(U^s : C^{\infty} \to C^{\infty}\) for all \(|s| \leq T\), by Duhamel's formula it follows that \(\tilde{v} = \tilde{R}(t) U^{-t}[u] \in C^{\infty}\) with \(||\tilde{v}||_{H_h^N} = O(h^{\infty})\). Thus, \(||\tilde{R}(t) U^{-t}||_{H_h^{-N} \to H_h^N} = O(h^{\infty})\), whence \(\tilde{R}(t) U^{-t} \in h^{\infty} \Psi^{-\infty}\). \end{proof} The equivalence between the operator dynamics given by conjugation with \(e^{\frac{i}{h} t Q}\) and the dynamics on symbols given by composition with \(\Phi_{q_0}^t\) as guaranteed by Egorov's theorem is a precise form of the physical notion of \emph{quantum-classical correspondence}. This establishes an assurance of the correspondence for a Liouvillian flow on symbols when we use a smoothing generator \(Q\) on the operator side, whose real principal symbol serves as a local Hamiltonian for the symbol dynamics. On one hand, the second part of \protect\hyperlink{thm:sym-renorm-graph-lap}{Theorem \ref{thm:sym-renorm-graph-lap}} shows that smoothly cutting-off the principal symbol of \(h^2 \Delta_{\lambda,\epsilon}\) in a sufficiently small region of phase space gives a symbol whose Hamiltonian flow is geodesic for initial points in that region. On the other hand, the first part of that Theorem gives that this operator has order \((0,2)\). Thus, to apply Egorov's theorem we must maintain the former property and simultaneously localize the symbol of our generator to an appropriately small part of the phase space. We proceed as follows: we first see that when applied to a coherent state, the operator dynamics given by \(U_{\lambda,\epsilon}^t\) is, up to \(O(h)\) error, equal to the dynamics given by a semigroup that is generated by the square root of a phase space localized form of the graph Laplacian.
Then, we apply the second part of \protect\hyperlink{thm:sym-renorm-graph-lap}{Theorem \ref{thm:sym-renorm-graph-lap}} and semiclassical functional calculus to see that this generator is pseudodifferential and has a principal symbol, whose Hamiltonian flow is geodesic about the initial point \((x_0, \xi_0)\) where the coherent state is localized. This puts us in a setting to apply Egorov's theorem, which gives the equivalence to the quantization of a symbol propagated along this flow. Finally, taking an inner product with the same coherent state recovers, up to \(O(h)\) error, the symbol propagated along the geodesic, thus giving the desired result, namely: \begin{theorem} \hypertarget{thm:sym-cs-glap-psido}{\label{thm:sym-cs-glap-psido}} Let \(\lambda \geq 0\), \(\alpha \geq 1\), \((x_0, \xi_0) \in T^*\mathcal{M} \setminus 0\) and \(|t| \leq \operatorname{inj}(x_0)\). Then, there exists \(h_0 > 0\) such that given \(a \in h^0 S^0\), for all \(h \in (0, h_0]\) and with \(\epsilon := h^{2 + \alpha}\), \begin{equation}\begin{aligned} \langle \psi_h(\cdot ; x_0, \xi_0) | U_{\lambda,\epsilon}^{-t} \operatorname{Op}_h(a) U_{\lambda,\epsilon}^t | \psi_h(\cdot ; x_0, \xi_0) \rangle = a \circ \Gamma^t(x_0, \xi_0) + O(h). \end{aligned} \nonumber \end{equation} In fact, there is a cut-off \(\chi \in C_c^{\infty}(\mathbb{R}, [0, 1])\) with \(\chi \equiv 1\) on \([-r, r]\) for some \(r > 0\) such that for all \(h \in (0, h_0]\) and with \(\epsilon := h^{2 + \alpha}\), \begin{equation}\begin{aligned} U_{\lambda,\epsilon}^{-t} \operatorname{Op}_h(a) U_{\lambda,\epsilon}^t[\psi_h] = U_{\chi}^{-t} \operatorname{Op}_h(a) U_{\chi}^t[\psi_h] + O_{L^2}(h) . \end{aligned} \nonumber \end{equation} Here, \(U_{\chi} := e^{\frac{i}{h} t Q}\) with \(Q = \operatorname{Op}_h(q)\) and \(q \in C_c^{\infty}(T^*\mathcal{M} \times [0, h_0)) \cap h^0 S^{-\infty}\) such that \(\operatorname{Sym}[Q] (x,\xi)= |\xi|_{g_x} \, \chi(|\xi|_{g_x}^2 - r_0)\) for \(r_0 := |\xi_0|_{g_{x_0}}^2\). The above statements hold with \(U_{\lambda,\epsilon}^{\pm (\varepsilon) t} := e^{\pm i t (\Delta_{\lambda,\epsilon} + (2 c_0/c_2) (\varepsilon/\epsilon) I)^{\frac{1}{2}}}\) in place of \(U_{\lambda,\epsilon}^{\pm t}\), whenever \(\varepsilon \in O(h^{1 + \alpha})\). \end{theorem} \begin{proof} We start with the phase-space localization property of coherent states: using the special form of the FBI transform with the adapted phase coinciding with \(\psi_h\) and appropriate amplitude, we have by \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} and \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}} that there is \(C > 0\) depending only on \((x_0, \xi_0)\) and \(\mathcal{M}\) such that given any \(\gamma > 0\), if \(\chi \in C_c^{\infty}(\mathbb{R}, [0,1])\) with \(\chi \equiv 1\) on \(\mathcal{I}_{\chi} \supset \mathcal{I}_{\psi_h} := |\xi_0|_{g_{x_0}}^2 + h^{1 - \gamma}[-C^2, C^2]\), then \begin{align*} \psi_h &= T_h^* T_h[\psi_h] + O_{L^2}(h^{\infty}) \\ &= T_h^* \chi(|\xi|_{g_x}^2) T_h[\psi_h] + O_{L^2}(h^{\infty}) . 
\end{align*} On the other hand, by straightforward modifications to the proof of the first part of \protect\hyperlink{thm:sym-renorm-graph-lap}{Theorem \ref{thm:sym-renorm-graph-lap}} it follows that \(h^{\alpha} \mathcal{L}_{\lambda,\epsilon} \in h^0 S^0\), so we may apply the Helffer-Sjöstrand formula as in \citep[\(\S 8\)]{dimassi1999spectral} to find that upon contracting the support \(\chi_{h^{-\alpha}}(\cdot) := \chi(h^{-\alpha} \cdot)\), \(\Pi_{h,\alpha,\chi} := \chi_{h^{-\alpha}}(\epsilon \Delta_{\lambda,\epsilon}) = \operatorname{Op}_h(q_{\chi,\lambda,\epsilon})\) with \(q_{\chi,\lambda,\epsilon} \in h^0 S^{-\infty}\) and \(q_{\chi,\lambda,\epsilon} \equiv \chi \circ \mathcal{L}_{g,\lambda,h^{\alpha}} \pmod{h S^{-\infty}}\). An application of Taylor's theorem shows that \(\chi \circ \mathcal{L}_{g,\lambda,h^{\alpha}} \equiv \chi(|\xi|_{g_x}^2) \pmod{h^{\alpha} S^{-\infty}}\), so by \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}}, \(\Pi_{h,\alpha,\chi} \equiv T_h^* (\chi \circ |\xi|_{g_x}^2) T_h \pmod{h \Psi^{-\infty}}\). Therefore, \begin{equation} \label{eq:cs-phase-space-localization} \psi_h = \Pi_{h,\alpha,\chi}[\psi_h] + O_{L^2}(h). \end{equation} Now, as recorded in \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}, \(A_{\lambda,\epsilon}\) has a discrete spectrum contained in \((-1, 1]\), so the spectrum of \(h^2 \Delta_{\lambda,\epsilon}\) is also discrete and contained in \([0, 2\tilde{c}_{2,0} \, h^{-\alpha})\), wherein we set \(\tilde{c}_{2,0} := 2 c_0/c_2\), with all eigenvalues isolated, except for \(\tilde{c}_{2,0} h^{-\alpha}\) where there is an accumulation point. Hence, with \(0 < |\xi_0|_{g_{x_0}}^2 \leq \tilde{c}_{2,0} h^{-\alpha}\) and supposing \(\mathcal{I}_{\chi} \subset [0, \tilde{c}_{2,0} h^{-\alpha} + C^2 h^{1 - \gamma}]\), we have \(\underline{\sigma}_{\chi}, \overline{\sigma}_{\chi} \in \operatorname{Spec}(h^2 \Delta_{\lambda,\epsilon}) \cup \{ \tilde{c}_{2,0} h^{-\alpha} + C^2 h^{1 - \gamma} \}\) such that \(\underline{\sigma}_{\chi} = \sup\{ \sigma \in \operatorname{Spec}(h^2 \Delta_{\lambda,\epsilon}) ~|~ \sigma \leq \min \mathcal{I}_{\chi} \}\) and \(\overline{\sigma}_{\chi} = \inf\{ \sigma \in \operatorname{Spec}(h^2 \Delta_{\lambda,\epsilon}) \cup \{ \tilde{c}_{2,0} h^{-\alpha} + C^2 h^{1 - \gamma} \} ~|~ \sigma \geq \max \mathcal{I}_{\chi} \}\). After possibly perturbing \(\chi\) so that \(\underline{\sigma}_{\chi}, \overline{\sigma}_{\chi} \neq \tilde{c}_{2,0} h^{-\alpha}\), we can modify \(\chi\) while keeping \(\chi^{-1}\{ 1 \} = \mathcal{I}_{\chi}\) fixed such that \(\mathcal{I}_{\chi} \subsetneq \operatorname{supp} \chi \subset (\underline{\sigma}_{\chi}, \overline{\sigma}_{\chi})\). Then, denoting by \(\Pi_{a,b} : L^2 \to L^2\) the spectral projector onto the eigenspaces of \(h^2 \Delta_{\lambda,\epsilon}\) given by the eigenvalues in \([a,b]\), we have \begin{equation}\begin{aligned} \Pi_{h,\alpha,\chi} = \Pi_{\underline{\sigma}_{\chi}, \overline{\sigma}_{\chi}} .
\end{aligned} \nonumber \end{equation} Hence, with the short-hand \(\Pi := \Pi_{\underline{\sigma}_{\chi}, \overline{\sigma}_{\chi}}\) and \(A := \operatorname{Op}_h(a)\), we have by the calculus of \({\Psi\text{DO}}\)s (including that \(A \in h^0 \Psi^0\) implies \(A\) is bounded on \(L^2(\mathcal{M})\)) and by the foregoing considerations that \begin{align*} U_{\lambda,\epsilon}^{-t} A U_{\lambda,\epsilon}^t[\psi_h] &= U_{\lambda,\epsilon}^{-t} A \, \Pi^2 \, U_{\lambda,\epsilon}^{t}[\psi_h] + O_{L^2}(h) \\ &= U_{\lambda,\epsilon}^{-t} (\Pi \, A - [\Pi, A])\Pi \, U_{\lambda,\epsilon}^{t}[\psi_h] + O_{L^2}(h) \\ &= U_{\lambda,\epsilon}^{-t} \Pi \, A U_{\lambda,\epsilon}^{t} \Pi [\psi_h] + O_{L^2}(h) \\ &= e^{-i t(\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}} \Pi A \Pi \, e^{i t (\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}}[\psi_h] + O_{L^2}(h) \\ &= e^{-i t(\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}} (A\Pi - [A, \Pi])\Pi \, e^{i t (\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}}[\psi_h] + O_{L^2}(h) \\ &= e^{-i t(\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}} A \, e^{i t (\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}}[\psi_h] + O_{L^2}(h). \end{align*} Indeed, by \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}, \(||U_{\lambda,\epsilon}^{\pm t}||_{L^2 \to L^2} \leq \overline{C}_{p,\lambda}/\underline{C}_{p,\lambda}\) and \([\Pi, A]\Pi \in h \Psi^{-\infty}\), so \(||U_{\lambda,\epsilon}^{-t} [\Pi, A] \Pi U_{\lambda,\epsilon}^{t}[\psi_h]||_{L^2} = O(h)\), and the remaining equalities use the commutativity among spectral functions of \(\Delta_{\lambda,\epsilon}\) and idempotency of \(\Pi\) as a spectral projector. The ultimate equality is due firstly to \(U^t := e^{i t (\Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}}}\) having the bound \(||U^{\pm t}||_{L^2 \to L^2} \leq \overline{C}_{p,\lambda}/\underline{C}_{p,\lambda}\), again by \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}, together with \([A,\Pi]\Pi \in h \Psi^{-\infty}\), which gives that the corresponding term is \(O_{L^2}(h)\), and secondly to \(\eqref{eq:cs-phase-space-localization}\), which gives \(||U^{-t} A U^t(\Pi - I)[\psi_h]||_{L^2} = O(h)\). Since \(|\xi_0|_{g_{x_0}} > 0\), we can choose \(\mathcal{I}_{\chi}\) so that \(\min \mathcal{I}_{\chi} > 0\) and then there is \(h_0 > 0\) such that for all \(h \in [0, h_0]\), \(\mathcal{I}_{\psi_h} \subset \mathcal{I}_{\chi}\). On this interval the square root function is smooth and since \(\Pi \in h^0 \Psi^{-\infty}\), we have that \(h^2 \Delta_{\lambda,\epsilon} \Pi = \operatorname{Op}_h(q^2) \in h^0 \Psi^{-\infty}\), wherein due to the second part of \protect\hyperlink{thm:sym-renorm-graph-lap}{Theorem \ref{thm:sym-renorm-graph-lap}}, \(q^2 \equiv |\xi|_{g_x}^2 \chi(|\xi|_{g_x}^2) \pmod{h S^{-\infty}}\). Thus, another application of the Helffer-Sjöstrand formula gives, \begin{equation}\begin{aligned} (h^2 \Delta_{\lambda,\epsilon} \Pi)^{\frac{1}{2}} = \sqrt{h^2 \Delta_{\lambda,\epsilon}} \, \Pi \equiv \operatorname{Op}_h(|\xi|_{g_x} \, \chi\circ |\xi|_{g_x}^2) \pmod{h \Psi^{-\infty}} .
\end{aligned} \nonumber \end{equation} Therefore, combining the above considerations with Egorov's theorem up to time \(|t| < \operatorname{inj}(x_0)\) and \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}}, we have upon recalling \(\mathcal{I}_{\psi_h} \subset \mathcal{I}_{\chi}\) that \begin{equation}\begin{aligned} \langle \psi_h | U_{\lambda,\epsilon}^{-t} A U_{\lambda,\epsilon}^t | \psi_h \rangle = a \circ \Phi_{q_0}^t(x_0, \xi_0) + O(h), \\ q_0(x,\xi) = |\xi|_{g_x} \chi(|\xi|_{g_x}^2), \quad (x_t, \xi_t) := \Phi^t_{q_0}(x,\xi). \end{aligned} \nonumber \end{equation} Since for all \((x,\xi) \in T^*\mathcal{M}\) and all \(t \in \mathbb{R}\), \(q_0 \circ \Phi^t_{q_0}(x,\xi) = q_0(x,\xi)\), we find in particular that \(|\xi_t|_{g_{x_t}}^2 = |\xi|_{g_x}^2\) for all \((x,\xi) \in \mathcal{N}_{\xi_0} := \{|\xi|_{g_x}^2 \in \mathcal{I}_{\chi} \}\). Furthermore, \(q_0(x,\xi) = |\xi|_{g_x}\) in the neighbourhood \(\operatorname{int}(\mathcal{N}_{\xi_0})\) of \((x_0, \xi_0)\), whence we find upon integrating the Liouvillian flow that \(a \circ \Phi_{q_0}^t(x_0, \xi_0) = a \circ \Gamma^t(x_0, \xi_0)\). The preceding argument applies to \(U_{\lambda,\epsilon}^{(\varepsilon) t}\) as well, since with \(\varepsilon \in O(h^{1 + \alpha})\), \(h^2(\Delta_{\lambda,\epsilon} + (2 c_0/c_2) (\varepsilon/\epsilon) I) = h^2 \Delta_{\lambda,\epsilon} + (2 c_0/c_2) \varepsilon h^{-\alpha} I\) with \(\varepsilon h^{-\alpha} = O(h)\), so this operator has the same principal symbol as \(h^2 \Delta_{\lambda,\epsilon}\). This gives the second part of the statement of the Theorem. \end{proof} \begin{remark} The proof of the preceding Theorem shows that we may approximate the flow of classical observables along geodesics upon projecting onto any part of the spectrum of \(h^2 \Delta_{\lambda,\epsilon}\) that contains \(\{ \sigma \in \operatorname{Spec}(h^2 \Delta_{\lambda,\epsilon}) ~|~ |\sigma - |\xi_0|_{g_{x_0}}^2| \leq C^2 h^{1 - \gamma} \}\). As \(\epsilon \to 0\), we have spectral convergence of \(\Delta_{\lambda,\epsilon}\) to \(\Delta_{\mathcal{M}} + O(\partial^1)\), so by Weyl's law, roughly speaking, if we order eigenvalues increasingly and \(\sigma_{\epsilon,k} \in \operatorname{Spec}(\Delta_{\lambda,\epsilon})\) is the \(k\)-th eigenvalue from the bottom, then with \(\epsilon \to 0\) we have \(\sigma_{\epsilon,k + 1} - \sigma_{\epsilon,k} \sim k\). If \(h = C^{\frac{2}{\gamma - 1}} k^{-1}\) and \(|\xi_0|_{g_{x_0}}^2 = 1\), then this is asking for at least \(\operatorname{Spec}(\Delta_{\lambda,\epsilon}) \cap [C' k^2 - k^{1 + \gamma}, C' k^2 + k^{1 + \gamma}]\), which will span an increasing spectral band as \(h \to 0\).
\end{remark} \hypertarget{propagation-of-coherent-states}{% \section{Propagation of coherent states}\label{propagation-of-coherent-states}} We now discuss briefly the behaviour of a coherent state as it propagates according to \(U^t := e^{\frac{i}{h} t Q_h}\) for \(Q_h : C^{\infty} \to C^{\infty}\) satisfying the conditions of \protect\hyperlink{thm:egorov}{Theorem \ref{thm:egorov}}, with a generator that locally looks like \(\sqrt{\Delta_{\mathcal{M}}}\), namely: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(Q_h = \operatorname{Op}_h(\tilde{q}) \in h^0 \Psi^{-\infty}\) such that \item \(q = \operatorname{Sym}[Q_h]\) is real-valued and \(\tilde{q} \in C_c^{\infty}(T^*\mathcal{M} \times [0, h_0))\) for some \(h_0 > 0\), \item there are \(r > 0\) and an open neighbourhood \(\mathcal{U} \subset T^*\mathcal{M} \setminus 0\) of the cosphere bundle \(S_r^*\mathcal{M}\) such that \(\mathcal{U} \subset \operatorname{supp} \tilde{q}(\cdot ; h)\) for all \(h \in [0, h_0)\) and \item \(q(\cdot ; h) = |\xi|_{g_x}\) on \(\mathcal{U}\) for all \(h \in [0, h_0)\). \end{enumerate} \noindent The \emph{ansatz} is that when \(|\xi_0|_{g_{x_0}} = r\), the evolved state \(U^t |\psi_h(\cdot;x_0,\xi_0)\rangle\) to time \(|t| < \operatorname{inj}(x_0)\) is localized about \((x_t, \xi_t) := \Gamma^t(x_0,\xi_0)\) in phase space, and in particular, it is approximately another coherent state (with different amplitude), so a reasonable conjecture would be that it is roughly a Hermite distribution about \(x_t\). This is sensible\footnote{This is worked out explicitly in the case of the Schrödinger equation with \(Q = h^2 \Delta_{\mathcal{M}}\), in \citep{paul_uribe_semiclassical_measures}.}, since the evolution of coherent states under FIOs follows a more general consideration regarding the evolution of \emph{Lagrangian distributions} \citep{boutet_guillemin, paul_uribe_semiclassical_trace, zelditch2017eigenfunctions}. Such descriptions can be likened to the \emph{Schrödinger picture} of quantum dynamics, wherein one is concerned with the explicit description of propagated states \(| \psi_h(t) \rangle\) according to \(i h \partial_t | \psi_h \rangle = Q_h | \psi_h \rangle\). In the present context, we work in the \emph{quantum statistical} or \emph{Heisenberg} framework, using the correspondence between the evolution of symbols and the evolution of their corresponding quantized operators under conjugation by the dynamics \(U^t\), as driven by Egorov's theorem, in order to achieve a weaker description. Ultimately, we arrive at a middle-ground: the quantity \(|U^t[\psi_h]|^2(x)\) is physically interpreted, as per the \emph{Born rule}, as the probability density for finding the particle whose quantum state is described by \(|\psi_h(t)\rangle\) at the position \(x\) at time \(t\), while in the following we use quantum-classical correspondence in the Heisenberg picture to turn this quantity into a description of the location of \(x_t\). We will revisit this interpretation in \protect\hyperlink{observing-geodesics}{Section \ref{observing-geodesics}} when we use the \emph{mean} of this distribution as another way (with better error rate) to locate \(x_t\). Hence, presently the main concerns are that \(|U^t[\psi_h]|^2\) can be approximated by a local function and, for consistency with the formulation on graphs, that its \(L^{\infty}\) norm can be bounded. These can essentially be resolved using only symbol calculus.
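A concrete instance of such a generator is already at hand (the identification is ours, but every ingredient comes from \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}}): the operator \(Q = \operatorname{Op}_h(q)\) constructed there has \(q \in C_c^{\infty}(T^*\mathcal{M} \times [0, h_0)) \cap h^0 S^{-\infty}\) with real principal symbol \begin{equation}\begin{aligned} \operatorname{Sym}[Q](x, \xi) = |\xi|_{g_x} \, \chi(|\xi|_{g_x}^2 - r_0), \quad\quad r_0 := |\xi_0|_{g_{x_0}}^2 , \end{aligned} \nonumber \end{equation} so conditions (1) - (4) hold with the cosphere radius \(\sqrt{r_0}\) in the role of \(r\) and, say, \(\mathcal{U} := \{ (x, \xi) ~ : ~ | |\xi|_{g_x}^2 - r_0 | < r \}\), on which \(\chi(|\xi|_{g_x}^2 - r_0) \equiv 1\) and the symbol reduces to \(|\xi|_{g_x}\).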
\hypertarget{localization-property}{% \subsection{Localization property}\label{localization-property}} We begin with the quantum statistics picture. Start by approximating point-evaluation with integration against a localized function: let \(\varepsilon > 0\) and define \(\rho_{\varepsilon}(x;p) := \exp(-d_g(x,p)^2/\varepsilon)\) for some \(p \in \mathcal{M}\) fixed. Then, given \(\varphi \in C^{\infty}\) and setting \(s_p : x \mapsto \exp_p^{-1}(x)\), we have \begin{align} \langle \varphi | \rho_{\varepsilon}(\cdot;p) \varphi\rangle &= \int_\mathcal{M} |\varphi|^2(y) e^{-\frac{d_g(y,p)^2}{\varepsilon}} ~ d\vol{y} \nonumber \\ &= \int_{\mathbb{R}^n} (|\varphi|^2\circ s_p^{-1})(u) \; e^{-\frac{||u||_p^2}{\varepsilon}} \sqrt{|g_{s_p^{-1}(u)}|} ~ du + O(\varepsilon^{\infty}) \label{eq:approx-ident-inner-prod} \\ &= \varepsilon^{\frac{n}{2}} m_0 |\varphi|^2(p) + O(\varepsilon^{\frac{n}{2} + 1}) , \nonumber \end{align} with \(m_0 := \int_{\mathbb{R}^n} e^{-||u||^2} ~ du\). The second equality is due to \(d[{(s_x^{-1})}^* \nu_g](v) = \sqrt{|g_{s_x^{-1}(v)}|} dv\) (see \citep[Prop C.III.2]{berger1971spectre}) and the ultimate one is due to Taylor expansions and an application of \protect\hyperlink{lemma:expansion-normal-coords}{Lemma \ref{lemma:expansion-normal-coords}}. Now, we quantize \(\rho_{\varepsilon}(\cdot; p)\); this is sensible to do since for any \(u \in C^{\infty}(\mathcal{M})\), \(\partial_{\xi} u = 0\) and \(|u| = O(1)\) together imply that \(u \in h^0 S^{0}\). The quantization itself is straightforward: \begin{align} \operatorname{Op}_h(\rho_{\varepsilon}(\cdot;p))[\varphi](x) &= \frac{1}{(2\pi h)^n}\int_{T^*\mathcal{M}} e^{\frac{i}{h}s_x(y) \cdot \xi} \rho_{\varepsilon}(y;p) \varphi(y) \chi_x(y) ~ dy\,d\xi \nonumber\\ &= \frac{1}{(2\pi h)^n}\int_{\mathbb{R}^n} \int_{\mathbb{R}^n} e^{\frac{i}{h}v\cdot\xi} \rho_{\varepsilon}(s_x^{-1}(v);p)(\varphi \chi_x\circ s_x^{-1})(v) ~ (s_x^{-1})^*[ dy](v) ~ d\xi \label{eq:quantization-functions} \\ &= (2\pi)^{-n}\rho_{\varepsilon}(x;p) \varphi(x) , \nonumber \end{align} wherein \(\chi_x \in C^{\infty}\) is a cut-off supported in a normal neighbourhood of \(x\) and \(\chi_x \equiv 1\) in a smaller neighbourhood. The function \(\rho_{\varepsilon}(\cdot ; p)\) is a symbol of order \((0,0)\) for each \(\varepsilon > 0\) and upon defining \(\tilde{\rho}_{\varepsilon}(\cdot;p) := (2\pi)^{n} \rho_{\varepsilon}(\cdot;p)/||\rho_{\varepsilon}(\cdot;p)||_1\), if for each fixed \(h \geq 0\), we let \(\varepsilon \to 0^+\), then as a sequence of linear maps \(\mathscr{S} \to \mathscr{S}'\) from Schwartz-class functions to tempered distributions, \(\operatorname{Op}_h(\tilde{\rho}_{\varepsilon}(\cdot;p)) \to \hat{\delta}_p : \mathscr{S} \ni \phi \mapsto \delta_p \phi \in \mathscr{S}'\) in the weak-* sense. Therefore, we have that for each \(h \in [0, h_0)\), \begin{equation}\begin{aligned} \lim_{\varepsilon \to 0} \langle \varphi | \operatorname{Op}_h(\tilde{\rho}_\varepsilon(\cdot;p))|\varphi\rangle =|\varphi|^2(p) . \end{aligned} \nonumber \end{equation} This implies, in particular, that \begin{equation} \label{eq:prop-cs-pos-rep} \lim_{\varepsilon \to 0} \langle \psi_h(\cdot;x_0,\xi_0) | U^{-t} \operatorname{Op}_h(\tilde{\rho}_{\varepsilon}(\cdot;p))U^t | \psi_h(\cdot;x_0,\xi_0)\rangle = U^t[\psi_h](p) \, \overline{(U^{-t})^*[\psi_h](p)} \end{equation} and if \(U\) is unitary, then \begin{equation}\begin{aligned} \eqref{eq:prop-cs-pos-rep} = |U^t \psi_h|^2(p).
\end{aligned} \nonumber \end{equation} Now, Egorov's theorem (in the present setting, \protect\hyperlink{thm:egorov}{Theorem \ref{thm:egorov}}) tells us that conjugation by \(U^t\) gives a \(\Psi\)DO with principal symbol that flows along the classical Hamiltonian vector field governed by the principal symbol of the generator of \(U^t\). Let \(\Theta_q^t\) be this flow corresponding to \(q\). Then, upon invoking the FBI transform, we have, \begin{equation}\begin{aligned} \langle \psi_h | U^{-t} \operatorname{Op}_h(\tilde{\rho}_{\varepsilon}(\cdot;p)) U^t|\psi_h\rangle = \langle \psi_h | T_h^* \tilde{\rho}_{\varepsilon}(\Theta_q^t;p)T_h|\psi_h\rangle + O(h,1/\varepsilon) \end{aligned} \nonumber \end{equation} with \(O(h,1/\varepsilon)\) denoting an error term that is of order \(1\) in \(h\) and order \(-1\) in \(\varepsilon\), due to derivatives of \(\rho_{\varepsilon}\) being taken in the process of converting the left-hand side to the right-hand side. Now we specify to \(|t| < T = \operatorname{inj}(x_0)\) and assume that \(|\xi_0|_{g_{x_0}} = r > 0\). Then, to see that we can still take the limit \(\varepsilon \to 0\), we compute the first term on the right: by \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}} we have, \begin{align*} h^{\frac{n}{2}} \frac{\langle \psi_h | T_h^* \rho_{\varepsilon}(\Theta_q^t;p) T_h|\psi_h \rangle}{C_h(x_0,\xi_0)^2} &= \int_{T^*\mathcal{M}} e^{-d_g^2(\Theta_q^t(x,\xi),p)/\varepsilon} e^{-\frac{2}{h}\Im \Phi(s_{x_0}(x),\xi - \xi_0)} |c_h|^2 ~ dx d\xi \\ &=:(I), \end{align*} wherein the integrand is localized to a ball of radius \(O(h^{\frac{1}{2} - \gamma})\) for any \(\gamma > 0\) about \((x_0, \xi_0) \in T^*\mathcal{M}\) and by assumption, \(q(\cdot ; h) = |\xi|_{g_x}\) for all \(h \in [0, h_0)\), so locally \(\Theta_q^t = \Gamma^t\), hence \begin{equation}\begin{aligned} (I) = h^{\frac{n}{2}} \frac{\langle \psi_h | T_h^* \rho_{\varepsilon}(\Gamma^t;p) T_h|\psi_h \rangle}{C_h(x_0,\xi_0)^2} + O(h^{\infty}). \end{aligned} \nonumber \end{equation} Now, working with the first term of the right-hand side we have, \begin{align*} h^{\frac{n}{2}} \frac{\langle \psi_h | T_h^* \rho_{\varepsilon}(\Gamma^t;p) T_h|\psi_h \rangle}{C_h(x_0,\xi_0)^2} &= \int_{T^*\mathcal{M}} e^{-d_g^2(\Gamma^t(x,\xi),p)/\varepsilon} e^{-\frac{2}{h}\Im \Phi(s_{x_0}(x),\xi - \xi_0)} |c_h|^2 ~ dx d\xi \\ &= \int_{T^*\mathcal{M}} e^{-d_g^2(x,p)/\varepsilon} e^{-\frac{2}{h} \Im \Phi[s_{x_0}(\pi_{\mathcal{M}}\Gamma^{-t}(x,\xi)),\pi_{T_x^*\mathcal{M}}\Gamma^{-t}(x,\xi) - \xi_0]} |c_h^{-t}|^2 ~ dx d\xi \\ &= \int_{\mathbb{R}^n} \int_{V_p} e^{-||v||^2/\varepsilon} e^{-\frac{2}{h}\Im \Phi[s_{x_0} \circ x^{-t}(s_p^{-1}(v),\xi), \xi^{-t}(s_p^{-1}(v),\xi) - \xi_0]} ~ \times \\ & \quad\quad \times |c_h^{-t}|^2 dv d\xi + O(\varepsilon^{\infty}) \\ &= \int_{\mathbb{R}^n} \left(e^{-\frac{2}{h} \Im \Phi[s_{x_0} \circ x^{-t}(p,\xi), \xi^{-t}(p,\xi) - \xi_0]} \right) \times \\ & \quad\quad \times \int_{V_p} e^{-||v||^2/\varepsilon} [|c_h^{-t}|^2(p,\xi;x_0,\xi_0) + O(v)] (1 + O(v/h)) \; \times \\ & \quad\quad \quad\quad \times dv d\xi + O(\varepsilon^{\infty}) \\ & = \varepsilon^{\frac{n}{2}} \int_{\mathbb{R}^n} e^{-\frac{2}{h}\Im\Phi[s_{x_0} \circ x^{-t}(p,\xi),\xi^{-t}(p,\xi) - \xi_0]} \times \\ &\quad\quad \times [|c_h^{-t}|^2(p,\xi;x_0,\xi_0) + O(\varepsilon/h^2)] d\xi + O(\varepsilon^{\infty}), \\ \end{align*} wherein we have written \(x^{-t}(x,\xi) := \pi_{\mathcal{M}} \Gamma^{-t}(x,\xi)\) and \(\xi^{-t}(x,\xi) := \pi_{T_x^*\mathcal{M}} \Gamma^{-t}(x,\xi)\).
To pass from the first to the second line, we have used the invariance under the geodesic flow of the measure induced by the canonical symplectic form; then, we pass to normal coordinates about \(p\) and localize to its neighbourhood \(V_p \subset \mathbb{R}^n\) of size \(\omega(\sqrt{\varepsilon})\), since the integrand decays exponentially outside of that (the factors multiplying the Gaussian in \(\varepsilon\) are bounded, independently of \(\varepsilon\)), which gives the \(O(\varepsilon^{\infty})\) error term. Finally, in the penultimate equality we have Taylor expanded \(|T_h \psi_h|^2 \circ \Gamma^{-t}\) and \(|c_h^{-t}|^2\) at \(x = p\) (in coordinates, \(v = 0\)) and in the last equality we have integrated the \(v\) variables after rescaling them by \(\sqrt{\varepsilon}\) and expanding the Gaussian into a Taylor series. Since now we have only non-negative powers of \(\varepsilon\), this allows us to take the limit: \begin{align} |\psi_h^t|^2(p) &:= \lim_{\varepsilon \to 0} \langle \psi_h | T_h^* \tilde{\rho}_{\varepsilon}(\Gamma^t;p)T_h|\psi_h \rangle \nonumber \\ &= C_h(x_0,\xi_0)^2 \, h^{-\frac{n}{2}} \int_{\mathbb{R}^n} e^{-\frac{2}{h}\Im\Phi[\Gamma^{-t}(p,\xi) - (x_0,\xi_0)]} |c_h^{-t}|^2(p,\xi;x_0,\xi_0) ~ d\xi . \label{eq:psi_h-t-full} \end{align} On the left-hand side, we have simply introduced new (but suggestive) notation defining a function \(|\psi^t_h|^2\), and on the right-hand side we have written the integrand in local coordinates. The latter is justified since \(c_h^{-t}\) is a symbol of order zero and \(\Im\Phi(x,\xi) \leq C' \frac{1}{2} ||(x,\xi)||_H^2 \leq C (|x|^2 + |\xi|^2)\), which implies that the Gaussian localizes the integral to a \(C_0\sqrt{h}\)-ball \(\tilde{B}_p\) about \(\xi_t(p) := \arg\min_{\xi} ||\Gamma^{-t}(p,\xi) - (x_0,\xi_0)||_H^2\), with \(C_0 := C \sup_{x \in \mathcal{M}} \operatorname{vol} B[(x,0), H]\).
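As a sanity check on \(\eqref{eq:psi_h-t-full}\), it may help to record the flat model (a sketch only, under the simplifying assumptions, which are ours, that \(\Im \Phi(v, \eta) = \frac{1}{2}(|v|^2 + |\eta|^2)\), \(c_h^{-t} \equiv 1\) and \(C_h(x_0, \xi_0)^2 \asymp h^{-\frac{n}{2}}\)): there \(\Gamma^{-t}(p, \xi) = (p - t \xi/|\xi|, \xi)\), so \begin{equation}\begin{aligned} |\psi_h^t|^2(p) \asymp h^{-n} \int_{\mathbb{R}^n} e^{-\frac{1}{h}\left( |p - t \xi/|\xi| - x_0|^2 + |\xi - \xi_0|^2 \right)} ~ d\xi \asymp h^{-\frac{n}{2}} e^{-\frac{c}{h}|p - x_t|^2} \end{aligned} \nonumber \end{equation} for \(p\) near \(x_t = x_0 + t \xi_0/|\xi_0|\) and a direction-dependent constant \(c > 0\), by Laplace's method in \(\xi\) about \(\xi_0\). This exhibits directly the \(\Theta(h^{-\frac{n}{2}})\) peak at \(x_t\) and the localization to balls of radius \(O(h^{\frac{1}{2} - \gamma})\) recorded in parts (2)-(4) of the Lemma established next.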
Now, we can relate this to the right-hand side of \(\eqref{eq:prop-cs-pos-rep}\) and, in the case that \(U\) is unitary, to \(|U^t \psi_h|^2\) via the computations above and a sequence of approximations: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item \(|U^t \psi_h|^2(p) = \langle \psi_h | U^{-t} \operatorname{Op}_h(\tilde{\rho}_{\varepsilon}(\cdot;p)) U^t | \psi_h \rangle + O(\varepsilon)\) due to \(\eqref{eq:approx-ident-inner-prod}\) and \(\eqref{eq:quantization-functions}\); \item upon applying Egorov's theorem to the dominant term on the right-hand side of step (1), \begin{equation}\begin{aligned} |U^t\psi_h|^2(p) = \langle \psi_h | \operatorname{Op}_h(\tilde{\rho}_{\varepsilon}(\Theta_q^t ; p)) | \psi_h \rangle + O_{2,\varepsilon}(h) + O(\varepsilon), \end{aligned} \nonumber \end{equation} wherein \(O_{2,\varepsilon}(h)\) denotes a dependence of the error term on \(\varepsilon\) and tracks that the error comes from this step (2); \item upon applying \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} to the main term on the right-hand side of step (2), \begin{equation}\begin{aligned} |U^t\psi_h|^2(p) = \langle \psi_h | T_h^* \tilde{\rho}_{\varepsilon}(\Theta_q^t;p)T_h | \psi_h \rangle + O_{\varepsilon}(h) + O_{2,\varepsilon}(h) + O(\varepsilon), \end{aligned} \nonumber \end{equation} with the \(O_{\varepsilon}(h)\) error coming from the relation between a \(\Psi\)DO and its conjugation by the FBI transform; \item the computation of the main term on the right-hand side of step (3) was carried out above, so if we pass to the limit \(\varepsilon \to 0\), then we have, \begin{equation*} |U^t \psi_h|^2(p) - |\psi^t_h|^2(p) = \lim_{\varepsilon \to 0} [|U^t \psi_h|^2(p) - \langle \psi_h |T_h^* \tilde{\rho}_{\varepsilon}(\Gamma^t;p)T_h|\psi_h \rangle] = \lim_{\varepsilon \to 0}[O_{\varepsilon}(h) + O_{2,\varepsilon}(h)], \end{equation*} wherein the convergence of the left-hand side gives the existence of the limit of the error terms on the right-hand side and, since this only affects the coefficients of the terms dominated by \(h\), the overall error is still \(O(h)\). \end{enumerate} This proves the first part of: \begin{lemma} \hypertarget{lem:prop-coherent-state-localized}{\label{lem:prop-coherent-state-localized}} With the notation above, assuming that \(\psi_h\) is localized about \((x_0, \xi_0) \in T^*{\mathcal{M}} \setminus 0\) with \(|\xi_0|_{g_{x_0}} = r > 0\) and setting \(x_t := \pi_{\mathcal{M}}\Gamma^t(x_0,\xi_0)\), we have for all \(h \in (0, h_0]\) and \(|t| \leq \operatorname{inj}(x_0)\), \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(|U^t \psi_h|^2(p) = |\psi_h^t|^2(p) + O(h)\), \item there is \(h_1 \in (0, h_0]\) such that for all \(h \in (0, h_1]\), \(|U^t[\psi_h]|^2(x_t) \in \Theta(h^{-\frac{n}{2}})\), \item \(|\psi_h^t|^2(p)\) is localized to (meaning it is \(O(h^{\infty})\) outside of) a ball of radius \(O(h^{\frac{1}{2} - \gamma})\), for any \(\gamma > 0\), about \(x_t\), and \item there is an \(h_{\max} \in (0, h_1]\) such that for all \(h \in [0, h_{\max})\), \(|U^t[\psi_h]|^2\) achieves its maximum value in a ball of radius \(O(\sqrt{h})\) about \(x_t\). \end{enumerate} \end{lemma} \begin{proof} We clarify parts (2)-(4).
The exponential in the integral in \(\eqref{eq:psi_h-t-full}\) defining \(|\psi_h^t|^2\) is localized to an \(O(h^{\frac{1}{2} - \gamma})\) ball about \(x_t\) for any \(\gamma > 0\) and \(c_h\) is bounded, so \(|\psi_h^t|^2\) is itself localized to such a ball and \(|\psi_h^t|^2 \lesssim C_h(x_0,\xi_0)^2 ||c_h^{-t}||^2_{\infty} \lesssim h^{-\frac{n}{2}}\). Moreover, since we have used the symbol \(b^{-\frac{1}{2}}\) from \protect\hyperlink{thm:FBI-basic}{Theorem \ref{thm:FBI-basic}} in the FBI transform defining \(|\psi_h^t|^2\), it follows from \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}} that there are constants \(h_1 > 0\) and \(C' \geq 1\) such that for all \(h \in (0, h_1]\), \(C' \geq |c_h^{-t}|^2(x,\xi ; x_0,\xi_0) \geq C'^{-1}\) in a fixed (in \(h\)) neighbourhood \(\mathscr{O}_t\) of \((x_t, \xi_t)\). This, combined with the proof of \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}}, gives \(C_h(x_0, \xi_0)^2 \gtrsim h^{-\frac{n}{2}}\), so taking \(h_1 \leq h_0\) along with part (1) gives part (2). Now, to see part (4), we use that by \protect\hyperlink{thm:wunsch-zworski}{Lemma \ref{thm:wunsch-zworski}}, there is a constant \(C \geq 1\) such that \(C(|x - x_0|^2 + |\xi - \xi_0|^2) \geq \Im \Phi \geq C^{-1}(|x - x_0|^2 + |\xi - \xi_0|^2)\), which gives, after possibly shrinking \(h_1 \in (0, h_0]\), that for all \(h \in [0, h_1]\), \(|\psi_h^t|^2 = O(h^{\infty})\) in \(\mathscr{O}_t^c\) and \(B_0 := B(x_t, g ; (\tilde{C} h)^{\frac{1}{2}}) \subset \mathscr{O}_t\) with \(\tilde{C} := 2 C(\log(C') + 1)\). Then, given \(p_0 \in \overline{B} := \overline{B}(x_t, g; (h/C)^{\frac{1}{2}})\), \(|\psi_h^t|^2(p_0) \geq C^{\frac{n}{2}} C_{\mathscr{O}_t}/(C' e) + O(h^{\infty})\) for \(C_{\mathscr{O}_t} := \int_{\mathscr{O}_t} e^{-|\xi_t - \xi_0|^2} ~ d\xi\). Likewise, given \(p_1 \in \mathcal{M} \setminus B_0\), \(|\psi_h^t|^2(p_1) \leq C^{-\frac{n}{2}} C_{\mathscr{O}_t}/(C' e^2) + O(h^{\infty})\). Since \(\overline{B} \subset B_0\), there is \(0 < h_{\max} \leq h_1\) such that for all \(h \in (0, h_{\max}]\), \(|U^t[\psi_h]|^2\) achieves its maximum in \(B_0\). \end{proof} \hypertarget{graph-laplacian-as-hamiltonian}{% \subsubsection{Graph Laplacian as Hamiltonian}\label{graph-laplacian-as-hamiltonian}} Connecting back to quantum dynamics generated by the graph Laplacian, \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}} tells us that, when applied to a coherent state localized at \((x_0, \xi_0)\), there is \(h_0 > 0\) so that \((\Ueps{\tilde{\varepsilon}}{})^{-t} \operatorname{Op}_h(a) \Ueps{\tilde{\varepsilon}}{t}[\psi_h] = e^{-\frac{i}{h} t Q_h} \operatorname{Op}_h(a) e^{\frac{i}{h} t Q_h}[\psi_h] + O_{L^2}(h)\) whenever \(\epsilon = h^{2 + \alpha}\), \(\tilde{\varepsilon} \in O(h^{1 + \alpha})\) and \(\alpha \geq 1\), such that \(Q_h\) satisfies the conditions (1) - (4) from the beginning of \protect\hyperlink{propagation-of-coherent-states}{Section \ref{propagation-of-coherent-states}}.
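Before turning to the density, we record a minimal numerical sketch of these dynamics in Python/NumPy. The sketch is ours and is schematic: the manifold is the unit circle, the kernel-moment constants \(c_0, c_2\) are elided (so the propagation speed matches the geodesic speed only up to a kernel-dependent factor), and the coherent state is discretized naively. \begin{verbatim}
import numpy as np

# Sketch: propagate a discretized coherent state on the unit circle with
# U^t = exp(i t sqrt(L)), where L = (I - A)/eps and A is the row-normalized
# averaging operator of a Gaussian kernel on N samples.
rng = np.random.default_rng(0)
N, h = 2000, 0.05
eps = h**3                                  # eps = h^(2 + alpha), alpha = 1
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, N))
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-D2 / (2.0 * eps))

# A = D^{-1} K is similar to S = D^{-1/2} K D^{-1/2}; diagonalize S and
# conjugate back, so that functions of A come from the eigenpairs of S.
d = K.sum(axis=1)
S = K / np.sqrt(np.outer(d, d))
w, V = np.linalg.eigh(S)
w = np.clip(w, -1.0, 1.0)                   # guard round-off outside [-1, 1]
sqrtL = np.sqrt((1.0 - w) / eps)

def propagate(f, t):
    # Apply U^t = exp(i t sqrt(L)) in the eigenbasis of S.
    g = V.T @ (np.sqrt(d) * f)
    return (V @ (np.exp(1j * t * sqrtL) * g)) / np.sqrt(d)

# Discretized coherent state at (theta0, xi0): oscillation ~ xi0/h, width ~ sqrt(h).
theta0, xi0, t = 0.0, 1.0, 0.3
psi = np.exp(1j * xi0 * theta / h - (theta - theta0) ** 2 / (2.0 * h))
psi = psi / np.linalg.norm(psi)
density = np.abs(propagate(psi, t)) ** 2
print("density peaks near theta =", theta[np.argmax(density)])
\end{verbatim} As \(t\) varies, the printed peak travels along the circle at a constant speed, which is the discrete shadow of the localization properties established in Proposition \ref{prop:glap-prop-cs-localized} below.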
We wish to relate the foregoing conjugation identity to the density \(|\Ueps{\tilde{\varepsilon}}{t}[\psi_h]|^2\), for which we note that \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}} implies, \begin{equation} \label{eq:glap-prop-adjoint-identity} \left( \frac{1}{p_{\lambda,\epsilon}}(\Ueps{\tilde{\varepsilon}}{t})^* \circ p_{\lambda,\epsilon} \right) \Ueps{\tilde{\varepsilon}}{t} = I \end{equation} with the adjoint being taken on \(L^2(\mathcal{M}, p d\nu_g)\), so together with \(\eqref{eq:prop-cs-pos-rep}\), we have upon fixing \(x \in \mathcal{M}\), \begin{equation}\begin{aligned} |\Ueps{\tilde{\varepsilon}}{t}[\psi_h]|^2(x) = \lim_{\varepsilon \to 0} \langle p \,p_{\lambda,\epsilon} (\Ueps{\tilde{\varepsilon}}{})^{-t} \frac{\tilde{\rho}_{\varepsilon}(\cdot ; x)}{p\, p_{\lambda,\epsilon}} \Ueps{\tilde{\varepsilon}}{t}[\psi_h] |\psi_h \rangle . \end{aligned} \nonumber \end{equation} Denoting \(\tilde{p}_{\lambda,\epsilon} := p_{\lambda,\epsilon} p\), we can further write by \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}} and \protect\hyperlink{thm:egorov}{Theorem \ref{thm:egorov}}, for all \(\varepsilon > 0\), \begin{align*} \langle & \psi_h | \tilde{p}_{\lambda,\epsilon} \, (\Ueps{\tilde{\varepsilon}}{})^{-t} \frac{\tilde{\rho}_{\varepsilon}(\cdot ; x)}{\tilde{p}_{\lambda,\epsilon}} \Ueps{\tilde{\varepsilon}}{t} | \psi_h \rangle \\ &\quad\quad = \langle \psi_h | \tilde{p}_{\lambda,\epsilon} \, e^{-\frac{i}{h} t Q_h} \frac{\tilde{\rho}_{\varepsilon}(\cdot ; x)}{\tilde{p}_{\lambda,\epsilon}} e^{\frac{i}{h} t Q_h} | \psi_h \rangle + O(h) \\ &\quad\quad = \langle \psi_h | \operatorname{Op}_h[\tilde{\rho}_{\varepsilon}(\Gamma^t ; x) \tilde{p}_{\lambda,\epsilon} /\tilde{p}_{\lambda,\epsilon}(\Gamma^t)] | \psi_h \rangle + O(h) \\ &\quad\quad = \langle \psi_h | T_h^* \tilde{\rho}_{\varepsilon}(\Gamma^t ; x) \tilde{p}_{\lambda,\epsilon} /\tilde{p}_{\lambda,\epsilon}(\Gamma^t) T_h | \psi_h \rangle + O(h). \end{align*} Now we may proceed along the lines of the discussion leading up to \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}}, which upon absorbing \(\tilde{p}_{\lambda,\epsilon}/\tilde{p}_{\lambda,\epsilon}(\Gamma^t) \sim 1\) into the symbol \(|c_h|^2\), gives the corresponding \(|\psi_h^t|^2\) of \(\eqref{eq:psi_h-t-full}\) so that \(|\Ueps{\tilde{\varepsilon}}{t}[\psi_h]|^2(x) = |\psi_h^t|^2(x) + O(h)\). This leads to the localization properties for the propagation of a coherent state by the dynamics generated by a graph Laplacian as given by \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}}, with essentially the same reasoning as in its proof, which then gives us, \begin{proposition} \hypertarget{prop:glap-prop-cs-localized}{\label{prop:glap-prop-cs-localized}} Let \(\lambda \geq 0\), \(\alpha \geq 1\), \((x_0, \xi_0) \in T^*\mathcal{M} \setminus 0\) and \(|t| < \operatorname{inj}(x_0)\). Then, for \(\psi_h\) a coherent state localized at \((x_0, \xi_0)\) and \(\tilde{\varepsilon} \in O(h^{1 + \alpha})\), there are constants \(h_0 \geq h_1 \geq h_{\max} > 0\) such that the properties (1) - (4) of \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}} hold for the propagated coherent state \emph{density} \(|\Ueps{\tilde{\varepsilon}}{t}[\psi_h]|^2\).
\qed \end{proposition} \hypertarget{observing-geodesics}{% \subsection{Observing Geodesics}\label{observing-geodesics}} We've seen that the location of the maximum of a propagated coherent state \emph{observes} the geodesic flow in the sense that it locates the geodesic point \(x_t\) to within an \(O(\sqrt{h})\) ball in configuration space. Now, we borrow from quantum mechanics the \emph{position operator}, which in the basis of Dirac mass distributions represents position statistics of a particle. In particular, taking the expectation of the position operator with \(|\psi_h(t)\rangle\) gives the \emph{mean position} of a particle with quantum state \(|\psi_h(t)\rangle\). When \(U\) is unitary and a position operator is given by a vector of multiplication operators by coordinate functions, then recalling the interpretation of \(|U^t[\psi_h]|^2\) via the Born rule, we understand the mean position to be the mean of this probability density. On a submanifold of Euclidean space, the extrinsic coordinates can be used, but from a local perspective, in light of \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}}, the concentration of \(|U^t[\psi_h]|^2\) in an \(O(\sqrt{h})\) ball about \(x_t\) can be used to identify a position operator given by local coordinates of intrinsic dimension. In either case, through Egorov's theorem and quantization of \({\Psi\text{DO}}\)s by FBI transforms, the mean turns out to be located in an \(O(h)\) ball about \(x_t\). Thus, we have a quadratic improvement in the error term when going from the maximum to the mean. We now elaborate on this point of view. In principle, working in coordinates given by fixing a smooth, injective map \(u = (u_1, \ldots, u_N) : \mathcal{M} \to \mathbb{R}^N\) with \(u_j \in C^{\infty}(\mathcal{M})\), for any \(N \in \mathbb{N}\), we'd like to compute \begin{equation} \label{eq:prop-cs-mean-coord} \bar{u}_j^t(x_0,\xi_0) :=\int_{\mathcal{M}} u_j(x) |U^t \psi_h|^2(x) ~ d\vol{x} \end{equation} for each \(j = 1, \ldots, N\). If we quantize \(u_j\), then we have a multiplication operator in \(h^0 \Psi^0\) and therefore by Theorems \protect\hyperlink{thm:egorov}{\ref{thm:egorov}} and \protect\hyperlink{thm:sym-cs-psido}{\ref{thm:sym-cs-psido}}, \begin{equation} \label{eq:prop-cs-coord-qmean} \langle \psi_h | U^{-t} \operatorname{Op}_h(u_j) U^t | \psi_h \rangle = u_j\circ\Gamma^t(x_0,\xi_0) + O(h) = u_j(x_t) + O(h) \end{equation} and when \(U\) is unitary, this gives, \begin{equation} \label{eq:prop-cs-sym-coord} \bar{u}_j^t(x_0,\xi_0) = u_j(x_t) + O(h). \end{equation} Hence, for the \emph{mean coordinates} \(\bar{u}^t := (\bar{u}_1^t, \ldots, \bar{u}_N^t)\), we have in the unitary case, \(||\bar{u}^t(x_0, \xi_0) - u(x_t)||_{\mathbb{R}^N} \lesssim N^{\frac{1}{2}} h\). One goal is then to turn this into a bound on \(d_g(u^{-1}[\bar{u}^t(x_0, \xi_0)], x_t)\) so that we can be certain that taking the mean of \(|U^t[\psi_h]|^2\) brings us close to the geodesic neighbour of \(x_0\) at distance \(t\) in the direction \(\xi_0\), \emph{viz}., that we are \emph{observing} the geodesic flow. Another aspect is to see that this can work \emph{locally}: we've seen already that for sufficiently small \(h\), the geodesic neighbour \(x_t\) is within an \(O(\sqrt{h})\) ball about the maximum of \(|U^t[\psi_h]|^2\). Therefore, by \(\eqref{eq:prop-cs-sym-coord}\), we can localize the mean of each coordinate to this neighbourhood.
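For instance (a sketch of the shape of the conclusion, under the additional assumptions, which are ours, that \(u^{-1}\) is Lipschitz with constant \(L_u \geq 1\) and that \(\bar{u}^t(x_0, \xi_0)\) lies in the image of \(u\)), the coordinate bound converts directly into a distance bound: \begin{equation}\begin{aligned} d_g(u^{-1}[\bar{u}^t(x_0, \xi_0)], x_t) \leq L_u \, ||\bar{u}^t(x_0, \xi_0) - u(x_t)||_{\mathbb{R}^N} \lesssim L_u N^{\frac{1}{2}} h . \end{aligned} \nonumber \end{equation} The Proposition below implements precisely this scheme, producing the relevant constants from the coordinate maps and the localization of the density.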
Localizing the mean computation in this way is of practical importance because the computation, along with the coefficients in the error terms, can be reduced after setting up a local, dimensionally reduced coordinate system, as opposed to working in the extrinsic coordinates (whose dimension in some applications might grow inversely proportionally to \(h\)). Further, when the computation must be done in extrinsic coordinates, it is useful to compute the means within extrinsic balls about the maximum and then take a nearest embedded point. This is all feasible and the proof is now rather simple. It is facilitated by the following, \begin{definition} \hypertarget{def:mean-max-coords-density}{\label{def:mean-max-coords-density}} Let \(\psi_h\) be a coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(|t| \leq \operatorname{inj}(x_0)\). The \emph{intrinsic maximizer} of the propagated coherent state is, \begin{equation}\begin{aligned} \hat{x}_t := \arg\max_{\mathcal{M}} |U^t[\psi_h]|^2 . \end{aligned} \nonumber \end{equation} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item \emph{Extrinsic case}. The \emph{extrinsic maximizer} of the propagated coherent state is, \begin{equation}\begin{aligned} \hat{x}_{\iota,t} := \arg\max_{\Lambda} |U^t[\psi_h]|^2 \circ \iota^{-1} \end{aligned} \nonumber \end{equation} and we define the \emph{extrinsic mean} of the propagated coherent state, with respect to a cut-off \(\chi_{\iota} \in C_c^{\infty}(\mathbb{R}^{D})\), to be the point \(\bar{x}_{\chi_{\iota}, t} \in \Lambda\) closest on \(\Lambda\) to \begin{align*} &&\bar{\iota}^t(x_0, \xi_0) &:= (\bar{\iota}_1^t(x_0, \xi_0), \ldots, \bar{\iota}_{D}^t(x_0, \xi_0)), \\ \quad\quad \text{with} && \bar{\iota}_j^t(x_0, \xi_0) &:= \int_{\mathcal{M}} |U^t[\psi_h]|^2(x) \, (\chi_{\iota} \circ \iota)(x) \, \iota_j(x) ~ d\vol{x} . \end{align*} \item \emph{Local coordinates}. Let \(\mathscr{O}_t \subset \mathcal{M}\) be an open neighbourhood of \(\hat{x}_t\). Then, given a diffeomorphic coordinate mapping \(u : \mathscr{O}_t \to V_t \subset \mathbb{R}^{n}\), and \(\chi \in C_c^{\infty}(\mathbb{R}^{n})\) a cut-off with \(\operatorname{supp} \chi \subset V_t\), the \(u\)-\emph{maximizer}, or \emph{coordinate maximizer} of the propagated coherent state is, \begin{equation}\begin{aligned} \hat{x}_{u,t} := \arg\max_{\mathscr{O}_t} |U^t[\psi_h]|^2 \circ u^{-1} \end{aligned} \nonumber \end{equation} and we call \begin{align*} && \bar{x}_{u,\chi,t} &:= (\bar{u}_1^t(x_0, \xi_0), \ldots, \bar{u}_{n}^t(x_0, \xi_0)), \\ \quad\quad\text{with} && \bar{u}_j^t(x_0, \xi_0) &:= \int_{\mathcal{M}} |U^t[\psi_h]|^2(x) \, (\chi \circ u)(x) \, u_j(x) ~ d\vol{x} \end{align*} the \(u\)-\emph{mean with respect to} \(\chi\). \end{enumerate} \end{definition} It's clear that \(\iota(\hat{x}_t) = \hat{x}_{\iota,t}\) and \(u(\hat{x}_t) = \hat{x}_{u,t}\), so the notation is there just for brevity. With this at hand, assuming \(U\) is unitary, we may state, \begin{proposition} \hypertarget{prop:local-mean-geodesic-flow}{\label{prop:local-mean-geodesic-flow}} Let \(\psi_h\) be a coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M}\).
Then, for all \(|t| < \operatorname{inj}(x_0)\), given any open neighbourhood \(\mathscr{O}_t \subset \mathcal{M}\) about \(\hat{x}_t\) with its diffeomorphic coordinate mapping \(u : \mathscr{O}_t \to V_t \subset \mathbb{R}^{n}\), there are constants \(h_{u,\max}, C_{u,\max} > 0\) such that if \(h \in [0, h_{u,\max})\) and \(\overline{\mathscr{B}}_t := \overline{B}_{C_{u,\max} \sqrt{h}}(\hat{x}_{u,t}, ||\cdot||_{\mathbb{R}^{n}}) \subset V_t\), then for any smooth cut-off \(\chi\) with \(\operatorname{supp} \chi \subset V_t\) and \(\chi \equiv 1\) on \(\overline{\mathscr{B}}_t\), we have \begin{equation}\begin{aligned} d_g(u^{-1}(\bar{x}_{u,\chi,t}), x_t) \leq C_u h, \end{aligned} \nonumber \end{equation} for \(C_u > 0\) a constant. Likewise, there are constants \(h_{\iota,\max}, C_{\iota,\max} > 0\) such that for all \(h \in [0, h_{\iota,\max})\), given any cut-off \(\chi_{\iota} \in C_c^{\infty}(\mathbb{R}^{D})\) such that \(\chi_{\iota} \equiv 1\) on \(\overline{\mathscr{B}}_{\iota,t} := \overline{B}_{C_{\iota,\max}\sqrt{h}}(\hat{x}_{\iota,t}, ||\cdot||_{\mathbb{R}^{D}})\), we have \begin{equation}\begin{aligned} d_g(\iota^{-1}(\bar{x}_{\chi_\iota,t}),x_t) \leq C_{\iota} h, \end{aligned} \nonumber \end{equation} for \(C_{\iota} > 0\) a constant. \end{proposition} \begin{remark} In both cases, if the cut-off is supported on a sufficiently large region that is of size \(O(1)\) with respect to \(h\), then the \(O(h)\) proximity of the coordinate means to the geodesics holds for \(h \in [0, h_0)\) with \(h_0\) as in the first part of \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}}. \end{remark} \begin{proof} The localization of \(|U^t[\psi_h]|^2\) as observed in \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}} gives constants \(h_{\max}, C_{\max} > 0\) such that \(d_g(u^{-1}(\hat{x}_{u,t}), x_t) \leq C_{\max} \sqrt{h}\) for \(h \in [0, h_{\max})\). We now give the steps to deduce a basic geometric fact for smooth, compact manifolds: \begin{align} \begin{split} \label{eq:mani-coord-ball-mapping} &(\exists) h_{u,0} : (\forall)h \in [0,h_{u,0}], \\ & [(\exists)C_1 > 0 : x_t \in B_{C_1 \sqrt{h}}(u^{-1}(\hat{x}_{u,t}), g) \\ &\quad\quad\implies (\exists) C_2 > 0 : u(x_t) \in B_{C_2 \sqrt{h}}(\hat{x}_{u,t}, ||\cdot||_{\mathbb{R}^{n}})]. \end{split} \end{align} Since \(u\) is a diffeomorphism, there is a constant \(h_u > 0\) such that for all \(h \in [0, h_u)\), \begin{gather*} |\det Du(x_t)| \operatorname{vol}(\tilde{\mathscr{B}}_{t}) /2 \leq \operatorname{vol}(\overline{\mathscr{B}}_{g,t}), \\ \overline{\mathscr{B}}_{g,t} := \overline{B}_{C_{\max} \sqrt{h}}(u^{-1}(\hat{x}_{u,t}), g), \quad \tilde{\mathscr{B}}_{t} := u[\overline{\mathscr{B}}_{g,t}]. \end{gather*} Moreover, there are constants \(h_{\mathcal{M}} > 0\) and \(S_2 > 0\) such that for all \(h \in [0, h_{\mathcal{M}})\), \(\operatorname{vol}(\overline{\mathscr{B}}_{g,t}) \leq S_2 C_{\max}^{n} h^{\frac{n}{2}}\). Since \(\inf_{\mathcal{M}} |\det Du| > 0\), there is thus a constant \(C_u > 0\) such that for all \(h \in [0, \min\{ h_{\max}, h_u, h_{\mathcal{M}} \}]\), we have \(\operatorname{vol}(\tilde{\mathscr{B}}_t) \leq 2 C_u \operatorname{vol}(\overline{\mathscr{B}}_{g,t}) \leq 2 C_u S_2 C_{\max}^{n} h^{\frac{n}{2}}\).
Then, denoting \(C_{u,\max} := (2 C_u S_2)^{\frac{1}{n}} C_{n} C_{\max}\) for some constant \(C_{n} > 0\) and \(\mathscr{B}_{t} := B_{C_{u,\max} \sqrt{h}}(\hat{x}_{u,t}, || \cdot ||_{\mathbb{R}^{n}})\), we have \(\hat{x}_{u,t}, u(x_t) \in \mathscr{\tilde{B}}_t \subset \overline{\mathscr{B}}_{t}\). Now if \(\chi \in C_c^{\infty}(V_t)\) such that \(\chi \equiv 1\) on \(\overline{\mathscr{B}}_t\), then by \(\eqref{eq:prop-cs-sym-coord}\) we have, \begin{equation}\begin{aligned} \bar{u}_j^t(x_0, \xi_0) = u_j(x_t) \, (\chi \circ u)(x_t) + O(h) = u_j(x_t) + O(h). \end{aligned} \nonumber \end{equation} This means that \(||\bar{u}^t(x_0, \xi_0) - u(x_t)||_{\mathbb{R}^{n}} \leq C' h\) for some \(C' > 0\). Going through essentially the same arguments as we just saw for \(\eqref{eq:mani-coord-ball-mapping}\), we have that there are \(C, h_1 > 0\) such that for all \(h \in [0, h_1]\), \(d_g(u^{-1}(\bar{x}_{u,\chi,t}), x_t) \leq C h\), so with \(h_{u,\max} := \min\{ h_{\max}, h_u, h_{\mathcal{M}}, h_1 \}\), we have the first part of the statement of the Proposition. As for the second part, since \(\iota^{-1}(\hat{x}_{\iota,t}) = \hat{x}_t\), as in the previous discussion we have that \(d_g(\iota^{-1}(\hat{x}_{\iota,t}), x_t) \leq C_{\max} \sqrt{h}\) for \(h \in [0, h_{\max})\). Then, the \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}} imply (see the Remarks just following) that for \(0 \leq h < \min\{ h_{\max}, (\kappa/C_{\max})^2 \}\), \(||\hat{x}_{\iota,t} - \iota(x_t)||_{\mathbb{R}^{D}} \leq C_{\max} \sqrt{h}\) so that \(\iota(x_t) \in \overline{\mathscr{B}}_{\iota,t}\). If \(\chi_{\iota} \equiv 1\) on \(\overline{\mathscr{B}}_{\iota,t}\), then we again have from \(\eqref{eq:prop-cs-sym-coord}\) that \(\bar{\iota}_j^t(x_0, \xi_0) = \iota_j(x_t) + O(h)\). This gives \(||\bar{\iota}^t(x_0, \xi_0) - \iota(x_t)||_{\mathbb{R}^{D}} \leq C'_{\iota} D^{\frac{1}{2}} h\) for some \(C'_{\iota} > 0\). Now for any \(x_{\iota}^* \in B(\bar{\iota}^t(x_0, \xi_0), || \cdot ||_{\mathbb{R}^{D}} ; C'_{\iota} D^{\frac{1}{2}} h) \cap \Lambda\), we have \(||x_{\iota}^* - \iota(x_t)||_{\mathbb{R}^{D}} \leq 2 C'_{\iota} D^{\frac{1}{2}} h\). Thus, the \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}} imply that if \(0 \leq h < \min\{ h_{\max}, (\kappa/C_{\max})^2, \kappa/(2 C'_{\iota} D^{\frac{1}{2}}) \}\), then \(d_g(\iota^{-1}(x_{\iota}^*), x_t) \leq 4 C'_{\iota} D^{\frac{1}{2}} h\), which gives the second part of the statement of the Proposition. \end{proof} \hypertarget{from-graphs-to-manifolds}{% \section{From graphs to manifolds}\label{from-graphs-to-manifolds}} We wish to develop consistency between the discrete semigroup \(U_{\lambda,\epsilon,N}^t\) that solves a finite-dimensional linear differential equation and the propagator \(U_{\lambda,\epsilon}^t\). We proceed by first establishing this for short times, \(t \sim \epsilon\), and then extending by splitting the time into an integer number of smaller steps. For the former part, we need two constructions: first, the consistency of the analytic functional calculus follows almost immediately from the consistency between the averaging operators \(A_{\lambda,\epsilon,N}\) and \(A_{\lambda,\epsilon}\).
This supplies us with consistency for \begin{align} \mathcal{C}_N := f_e(1 - A_{\lambda,\epsilon,N} ; c_{2,0} t\sqrt{\epsilon}) & & \xrightarrow[N \to \infty]{} & & \mathcal{C} := f_e(1 - A_{\lambda,\epsilon} ; c_{2,0} t\sqrt{\epsilon}), \label{eq:wave-ops} \\ \mathcal{S}_N := f_o(1 - A_{\lambda,\epsilon,N} ; c_{2,0} t\sqrt{\epsilon}) & & \xrightarrow[N \to \infty]{} & & \mathcal{S} := f_o(1 - A_{\lambda,\epsilon} ; c_{2,0} t\sqrt{\epsilon})\; \nonumber \end{align} (in the probabilistic sense that we will discuss shortly), wherein we define \(c_{2,0} := \sqrt{2 c_0/c_2}\) and \begin{align*} f_e(1 - z; c_{2,0} t\sqrt{\epsilon}) &:= \cos\left( c_{2,0}t \sqrt{\frac{1 - z}{\epsilon}} \right), \\ f_o(1 - z; c_{2,0} t\sqrt{\epsilon}) &:= \sin\left( c_{2,0} t \sqrt{\frac{1 - z}{\epsilon}} \right)\bigg/\sqrt{1 - z} . \end{align*} Thereafter, we construct \(\frac{\sqrt{\epsilon}}{c_{2,0}} \sqrt{\Delta}_{\lambda,\epsilon,N} = \sqrt{I - A_{\lambda,\epsilon,N}}\) in order to recover \(U_{\lambda,\epsilon,N}^t = \mathcal{C}_N + i \frac{\sqrt{\epsilon}}{c_{2,0}} \sqrt{\Delta}_{\lambda,\epsilon,N} \mathcal{S}_N\). The square root needs more care due to the branch point of \(\sqrt{1 - z}\) at \(z = 1\), which poses an obstruction to the direct application of the consistency for the functional calculus. Since we are concerned with the functional calculus of averaging operators, we note that in the same way that \(A_{\lambda,\epsilon,N}\) extends to a finite-rank operator globally defined on \(L^2(\mathcal{M})\), we can also define functions of \(A_{\lambda,\epsilon,N}\) as operators on \(L^2(\mathcal{M})\) via essentially a form of \emph{Nyström extension}. The general situation is as follows: given \(f : \mathcal{D} \to \mathbb{C}\) analytic on a disk \(\mathcal{D} \subset \mathbb{C}\) with \([-1, 1] \subset \bar{\mathcal{D}}\) and having an absolutely convergent Taylor series on \(z \in [-1, 1]\), the \emph{derived function relative to} \(f\) given by \begin{equation}\begin{aligned} Df(z) := \frac{f(z) - f(0)}{z} , \quad\quad Df(0) := f'(0) , \end{aligned} \nonumber \end{equation} is also analytic on \(\mathcal{D}\) with absolutely convergent Taylor series on \([-1, 1]\), hence we have \(Df(A_{\lambda,\epsilon,N})[u] \in L^{\infty}\) and \begin{equation} \label{eq:derived-fun-calc-expansion} (\forall) x \in \mathcal{M}, \quad f(A_{\lambda,\epsilon,N})[u](x) = f(0)u(x) + A_{\lambda,\epsilon,N}[Df(A_{\lambda,\epsilon,N})[u]](x); \end{equation} the case for \(A_{\lambda,\epsilon,N}\) replaced with \(A_{\lambda,\epsilon}\) above is clear. Since \(A_{\lambda,\epsilon,N}, A_{\lambda,\epsilon} : L^{\infty} \to C^{\infty}\) are smoothing, this further implies that \((f(A_{\lambda,\epsilon}) - f(A_{\lambda,\epsilon,N})): L^{\infty} \to C^{\infty}\) and \(f(A_{\lambda,\epsilon,N}), f(A_{\lambda,\epsilon}): C^{\infty} \to C^{\infty}\).
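Unwinding \(\eqref{eq:derived-fun-calc-expansion}\) at an arbitrary \(x \in \mathcal{M}\) makes the Nyström extension explicit (a sketch of the kernel form, with \(k_{\lambda,\epsilon,N}\) and \(p_{\lambda,\epsilon,N}\) denoting the graph kernel and sampled degree function appearing in the bound below): \begin{equation}\begin{aligned} f(A_{\lambda,\epsilon,N})[u](x) = f(0) u(x) + \frac{1}{p_{\lambda,\epsilon,N}(x)} \frac{1}{N} \sum_{j=1}^N k_{\lambda,\epsilon,N}(x, x_j) \, Df(A_{\lambda,\epsilon,N})[u](x_j) , \end{aligned} \nonumber \end{equation} so the values of \(Df(A_{\lambda,\epsilon,N})[u]\) on the sample set determine \(f(A_{\lambda,\epsilon,N})[u]\) globally.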
A particularly useful consequence is that \begin{align} \label{eq:fun-calc-derived-bound} \begin{split} |f(A_{\lambda,\epsilon,N})[u](x)| &\leq |f(0)| |u(x)| + \frac{||k_{\lambda,\epsilon,N}(x,\cdot)||_{N,\infty}}{p_{\lambda,\epsilon,N}(x)} \frac{1}{N} \sum_{j=1}^N |Df(A_{\lambda,\epsilon,N})[u](x_j)| \\ &\leq |f(0)| |u(x)| \\ &\quad\quad + \frac{||k_{\lambda,\epsilon,N}(x,\cdot)||_{N,\infty}}{p_{\lambda,\epsilon,N}(x)} ||p_{\lambda,\epsilon,N}||_{N,\infty}^{\frac{1}{2}} ||p_{\lambda,\epsilon,N}^{-1}||_{N,2}^{\frac{1}{2}} \sup_{z \in [-1,1]}|Df(z)| \, ||u||_{N,2} , \end{split} \end{align} wherein \(||\cdot||_{N,q} := (\frac{1}{N} \langle |\cdot|^{\frac{q}{2}}, |\cdot|^{\frac{q}{2}} \rangle_{\mathcal{H}_N})^{\frac{1}{q}}\) is the generalized mean \(q\)-norm on the space \(\mathcal{H}_N := \{ u|_{\mathcal{X}_N} ~ : ~ u \in L^{\infty}(\mathcal{M}) \} \cong \mathbb{C}^{N}\) of restrictions of bounded functions on \(\mathcal{M}\) to the set of samples \(\mathcal{X}_N := \{ x_1, \ldots, x_N \}\). The second inequality follows from the symmetry and spectral relations given in \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}. In the following, we will be building on the basic probabilistic bounds from Lemmas \protect\hyperlink{lem:avgop-consistent}{\ref{lem:avgop-consistent}} and \protect\hyperlink{lem:rwlap-conv}{\ref{lem:rwlap-conv}} to develop bounds that give consistency of solutions to wave equations and, ultimately, a quantum-classical correspondence. Thus, along the way, we will have several other probabilistic bounds that will depend on previous ones and the notations will pack several functions of \(N, \epsilon, \lambda\), bounds of \(p_{\lambda,\epsilon}\), norms of \(u\), etc., so we record here for quick reference the primary bounds and where they are defined or make their first appearance: \begin{notation} \emph{Probability bounds}: \begin{itemize} \tightlist \item \(\gamma_N, \gamma_N^*, \gamma_{\lambda,N}, \gamma_{\lambda,N}^*\) are introduced in the beginning of \protect\hyperlink{pointwise-short-time-consistency}{Section \ref{pointwise-short-time-consistency}}, \item \(\tilde{\gamma}_N\) is defined in the statement of \protect\hyperlink{thm:bdd-analytic-calc-conv}{Theorem \ref{thm:bdd-analytic-calc-conv}}, \item \(\gamma_{\lambda,N,\upsilon}, \gamma^*_{\lambda,N,\upsilon}\) are defined in the Notation following \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}}, \item \(\omega_{\lambda,N}, \omega^*_{\lambda,N}\) are defined in the Notation following \protect\hyperlink{lem:prop-short-time-conv}{Lemma \ref{lem:prop-short-time-conv}}, \item \(\Omega_{\lambda,t,N}, \Omega^*_{\lambda,t,N}\) are defined in the statement of \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}}, \item \(\rho_N, \rho_{N,2}\) are defined in the statement of \protect\hyperlink{lem:l2-consistency}{Lemma \ref{lem:l2-consistency}}, \item \(\Xi_{\lambda,N}\) is defined in the statement of \protect\hyperlink{lem:prop-cs-norm-consistency}{Lemma \ref{lem:prop-cs-norm-consistency}} and \item \(\tilde{\Omega}_{\lambda,t,N}, \tilde{\Omega}^*_{\lambda,t,N}\) are defined in the statement of \protect\hyperlink{lem:prop-cs-funcexpect-consistency}{Lemma \ref{lem:prop-cs-funcexpect-consistency}}.
\end{itemize} \end{notation} \hypertarget{pointwise-short-time-consistency}{% \subsection{Pointwise, short-time consistency}\label{pointwise-short-time-consistency}} We have seen from Lemmas \protect\hyperlink{lem:avgop-consistent}{\ref{lem:avgop-consistent}} and \protect\hyperlink{lem:rwlap-conv}{\ref{lem:rwlap-conv}} that for each \(\lambda \geq 0\) there are \(C_0, C_1, s, \eta\) so that \begin{equation}\begin{aligned} \Pr[|(A_{\lambda,\epsilon,N} - A_{\lambda,\epsilon})[u](x)| > \delta] \leq \gamma_N(\delta,1;s,\eta;C_0,C_1;u) , \end{aligned} \nonumber \end{equation} wherein we define \begin{equation}\begin{aligned} \gamma_N(\delta,\sigma ; s, \eta ; C_0,C_1 ; u) := (C_0 + C_1 N) \exp\left( -\frac{(N - \eta) \epsilon^{\frac{n}{2}} \delta^2 \sigma}{2s||u||_{\infty}(K_{\mathcal{M},k,p} s ||u||_{\infty} + ||k||_{\infty} \delta/3)} \right) . \end{aligned} \nonumber \end{equation} We will often use a short-hand to denote this function with only the relevant arguments. The unspecified arguments will be understood to be set by the context, in view of the preceding theorems, and unless specified otherwise we set \(\sigma = s = 1\) by default. That is to say, in the following arguments we will use the probabilistic consistency rates of the averaging operators on graphs as a black-box, so for brevity we will use the following. \begin{notation} The symbol \(\gamma_{\lambda,N}(\delta ; u)\) denotes the minimal probability bound among those given in Lemmas \protect\hyperlink{lem:avgop-consistent}{\ref{lem:avgop-consistent}} and \protect\hyperlink{lem:rwlap-conv}{\ref{lem:rwlap-conv}} applicable to the data \((\mathcal{M}, k, p, \lambda, \epsilon, N, u, \delta)\): \emph{viz.}, this function gives the best bound when the parameters satisfy the statements of either Lemma and yields \(1\) otherwise. For brevity, when the dependence on \(u\) is clear from context, we will simply write \(\gamma_{\lambda,N}(\delta)\). Further, we denote by \(s_{\lambda}\), \(\eta_{\lambda}\), \(C_{0,\lambda}\) and \(C_{1,\lambda}\) the corresponding parameters such that \(\gamma_N(\delta, 1; s_{\lambda}, \eta_{\lambda}; C_{0,\lambda}, C_{1,\lambda}; u) = \gamma_{\lambda,N}(\delta ; u)\). To allow variability, we will denote \(\gamma_{\lambda,N}^*(\delta,\sigma ; u) := \gamma_N(\delta, \sigma; s_{\lambda}, \eta_{\lambda}; C_{0, \lambda}, C_{1,\lambda} ; u)\) and shorten this to \(\gamma_N^*(\delta,\sigma)\) when the dependence on \(u\) is clear from context. \end{notation} \begin{lemma} \hypertarget{lem:avop-power-conv}{\label{lem:avop-power-conv}} Given \(\lambda \geq 0\), \(\epsilon > 0\), \(u \in L^{\infty}\) and \(x \in \mathcal{M}\), for all \(m \in \mathbb{N}\) and \(\delta > 0\), \begin{equation}\begin{aligned} \Pr[|A_{\lambda,\epsilon,N}^m[u](x) - A_{\lambda,\epsilon}^m[u](x)| > m \delta] \leq m \gamma_{\lambda,N}(\delta ; u). \end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} Since \(||A_{\lambda,\epsilon}[u]||_{\infty} \leq ||u||_{\infty}\), we have the same probabilistic bound for the event that \(|(A_{\lambda,\epsilon,N} - A_{\lambda,\epsilon})[A_{\lambda,\epsilon} u](x)| \leq \delta\) as for the event \(|(A_{\lambda,\epsilon,N} - A_{\lambda,\epsilon})[u](x)| \leq \delta\). Moreover, \begin{align*} |A_{\lambda,\epsilon,N}^2[u](x) - A_{\lambda,\epsilon}^2[u](x)| &\leq |(A_{\lambda,\epsilon,N} - A_{\lambda,\epsilon})[A_{\lambda,\epsilon} u](x)| + \\ &\quad\quad + |A_{\lambda,\epsilon,N}[(A_{\lambda,\epsilon,N} - A_{\lambda,\epsilon})[u]](x)| .
\end{align*} Therefore, applying a union bound over both events, we have, \begin{equation}\begin{aligned} \Pr[|(A_{\lambda,\epsilon,N}^2 - A_{\lambda,\epsilon}^2)[u](x)| > 2\delta] \leq 2\gamma_{\lambda,N}(\delta ; u) . \end{aligned} \nonumber \end{equation} Now assume that for all \(2 \leq m \leq M-1\), \begin{equation}\begin{aligned} \Pr[|(A_{\lambda,\epsilon,N}^m - A_{\lambda,\epsilon}^m)[u](x)| \leq m\delta] > 1 - m\gamma_{\lambda,N}(\delta ; u). \end{aligned} \nonumber \end{equation} Similarly to before, write \begin{align*} |A_{\lambda,\epsilon,N}^M[u](x) - A_{\lambda,\epsilon}^M[u](x)| &\leq |(A_{\lambda,\epsilon,N}^{M-1} - A_{\lambda,\epsilon}^{M-1})[A_{\lambda,\epsilon} u](x)| + \\ &\quad\quad + |A_{\lambda,\epsilon,N}^{M-1}[A_{\lambda,\epsilon,N}[u] - A_{\lambda,\epsilon}[u]](x)| . \end{align*} Since for any integer \(m \geq 1\), \(A_{\lambda,\epsilon,N}^m\) has non-negative entries and \(A_{\lambda,\epsilon,N}^m[1] = 1\), we see that for any \(v \in L^{\infty}\), \(|A_{\lambda,\epsilon,N}^m[v](x)| \leq ||v||_{\infty}\). Therefore, after a union bound we have that \begin{equation}\begin{aligned} \Pr[|(A_{\lambda,\epsilon,N}^M - A_{\lambda,\epsilon}^M)[u](x)| \leq M\delta] > 1 - M\gamma_{\lambda,N}(\delta ; u), \end{aligned} \nonumber \end{equation} whence, having completed the induction step, we have the bound given in the statement of the Lemma. \end{proof} \begin{theorem} \hypertarget{thm:bdd-analytic-calc-conv}{\label{thm:bdd-analytic-calc-conv}} Let \(f : \mathbb{C} \to \mathbb{C}\) be an entire function such that for \(w \in \mathbb{C}\) there exists a constant \(K_w > 0\) such that for all \(m \geq 0\), \(|\partial_z^m f|_{z = w}| < K_w\). Then, given \(\lambda \geq 0\), \(\epsilon > 0\), \(u \in C^{\infty}\) and \(x \in \mathcal{M}\), for all \(\delta > 0\), \begin{align*} \Pr[|f(A_{\lambda,\epsilon,N} + wI)[u](x) - f(A_{\lambda,\epsilon} + wI)[u](x)| > \delta] &\leq \tilde{\gamma}_{\lambda,N}(\delta/(2 e K_w)), \\ \tilde{\gamma}_{\lambda,N} := \gamma_{\lambda,N}/(1 - \gamma_{\lambda,N})^2 . \end{align*} \end{theorem} \begin{proof} Let \(f : \mathbb{C} \to \mathbb{C}\) be an entire function with \(|\partial_z^m f|_{z = w}| < K_w\) for some constant \(K_w > 0\) and all \(m \geq 0\). By \protect\hyperlink{lem:avop-power-conv}{Lemma \ref{lem:avop-power-conv}} and a union bound, we have \begin{equation}\begin{aligned} \Pr[(\forall)m \in [M], |(A_{\lambda,\epsilon,N}^m - A_{\lambda,\epsilon}^m)[u](x)| \leq m^2 \delta] > 1 - \sum_{m=1}^M m\gamma_{\lambda,N}(m\delta ; u) \end{aligned} \nonumber \end{equation} and since for all \(m \geq 1\), \(\gamma_{\lambda,N}(m\delta) \leq \gamma^*_{\lambda,N}(\delta, m)\), we can take all powers at once to have, \begin{align*} \Pr&[(\exists) m \in \mathbb{N}, |(A_{\lambda,\epsilon,N}^m - A_{\lambda,\epsilon}^m)[u](x)| > m^2 \delta] \\ &\leq \sum_{m=1}^{\infty} m \gamma^*_{\lambda,N}(\delta, m) \leq \frac{\gamma_{\lambda,N}(\delta)}{(1 - \gamma_{\lambda,N}(\delta))^2} =: \tilde{\gamma}(\delta). \end{align*} On the complement of this event, upon taking a Taylor series expansion of \(f(z)\) at \(z = w\), \begin{equation}\begin{aligned} |f(A_{\lambda,\epsilon,N} + wI)[u](x) - f(A_{\lambda,\epsilon} + wI)[u](x)| \leq K_w \delta\sum_{m=1}^{\infty} \frac{m^2}{m!} = 2 e K_w\delta , \end{aligned} \nonumber \end{equation} hence, \begin{equation}\begin{aligned} \Pr[|f(A_{\lambda,\epsilon,N} + wI)[u](x) - f(A_{\lambda,\epsilon} + wI)[u](x)| > \delta] \leq \tilde{\gamma}(\delta/(2 e K_w)).
\end{aligned} \nonumber \end{equation} \end{proof} \begin{remark} The condition that all derivatives of \(f\) are bounded enforces that we take \(t \sim \epsilon\). While the class of \(f\) can be generalized by way of Hadamard's multiplication theorem, the condition on the short time-scales is not artificial: if we directly apply this to the approximation of \(\mathcal{C}(t/\sqrt{\epsilon})\) from \(\mathcal{C}_N(t/\sqrt{\epsilon})\), then the resulting error is \(O(e^{\frac{t}{\sqrt{\epsilon}}} t^2/\epsilon)\), which is only practical for asymptotically short times, \(t \sim \epsilon\), in any case. \end{remark} \begin{lemma} \hypertarget{lem:sqrt-perturb-eps-conv}{\label{lem:sqrt-perturb-eps-conv}} Given \(\lambda \geq 0\), \(\epsilon > 0\), \(u \in L^{\infty}\) and \(\delta > 0\), if \(N \in \mathbb{N}\) is sufficiently large that \(\beta_{\gamma} := \log[(C_{0,\lambda} + C_{1,\lambda} N)/\gamma_{\lambda,N}(\delta;u)] > 1\), then with \(\upsilon := 8 \log(\beta_{\gamma}/2)/\beta_{\gamma}\) we have that for all \(\varepsilon > 0\) and \(x \in \mathcal{M}\), \begin{equation} \label{eq:lem-sqrt-perturb-eps-conv-bound} \Pr[|B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| > \delta] \leq \gamma_{\lambda,N}(\delta \varepsilon^{(\frac{1}{2} + \upsilon)}/(2 \sqrt{2\pi}) ; u) , \end{equation} wherein \(B_{\lambda,\epsilon,N}^{(\varepsilon)} := \sqrt{(1 + \varepsilon)I - A_{\lambda,\epsilon,N}}\) and \(B_{\lambda,\epsilon}^{(\varepsilon)} := \sqrt{(1 + \varepsilon)I - A_{\lambda,\epsilon}}\). Thus, \begin{equation}\begin{aligned} \Pr[|B^{(\delta^2)}_{\lambda,\epsilon,N}[u](x) - B^{(\delta^2)}_{\lambda,\epsilon}[u](x)| > \delta] \leq \gamma_{\lambda,N}(\delta^{2(1 + \upsilon)}/(2 \sqrt{2 \pi}) ; u). \end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} Let \(\upsilon > 0\) be variable for the moment and proceed as in the proof of \protect\hyperlink{thm:bdd-analytic-calc-conv}{Theorem \ref{thm:bdd-analytic-calc-conv}}, with the change that now we take for each \(m \in \mathbb{N}\) an \(m^{\upsilon} \delta\) error in \(|A_{\lambda,\epsilon,N}[u](x) - A_{\lambda,\epsilon}[u](x)|\), so \emph{mutatis mutandis} we have, \begin{equation}\begin{aligned} \Pr[(\exists)m \in \mathbb{N}, |(A_{\lambda,\epsilon,N}^m - A_{\lambda,\epsilon}^m)[u](x)| > m^{1+\upsilon} \delta] \leq \sum_{m=1}^{\infty} m\gamma^*_{\lambda,N}(\delta, m^{\upsilon} ; u). \end{aligned} \nonumber \end{equation} Suppressing all coefficients independent of \(m\), the right-hand side has the form \begin{equation}\begin{aligned} \sum_{m=1}^{\infty} m \gamma^*_{\lambda,N}(\delta,m^{\upsilon} ; u) = \sum_{m=1}^{\infty} m (\alpha e^{-\beta m^{\upsilon}}) \end{aligned} \nonumber \end{equation} for some \(\alpha > 0\) and \(\beta = \beta_{\gamma} > 0\). We have, \begin{equation}\begin{aligned} \sum_{m=1}^{\infty} m e^{-\beta m^{\upsilon}} \leq \frac{1}{\upsilon} \int_1^{\infty} y^{\frac{2}{\upsilon}-1} e^{-\beta y} ~ dy . \end{aligned} \nonumber \end{equation} Letting \(r := \lceil \frac{1}{\upsilon} \rceil\), bounding the right-hand side by substituting \(r\) for \(1/\upsilon\) and applying successive integration by parts gives, \begin{align*} \sum_{m=1}^{\infty} m e^{-\beta m^{\upsilon}} &\leq \frac{e^{-\beta}}{\beta} r \sum_{j = 1}^{2(r - 1)} \frac{(2r - 1)!}{(2r - j)!} + r (2r)! \int_1^{\infty} y^{-1} e^{-\beta y} ~ dy \\ &\leq \frac{2r (2r)!}{\beta} e^{-\beta} . \end{align*} By Stirling's approximation \citep{robbins1955remark}, we have the bound \((2r)!
\leq e^{1 - 2r} (2r)^{2r + \frac{1}{2}}\) and therefore, \begin{equation}\begin{aligned} \sum_{m=1}^{\infty} m e^{-\beta m^{\upsilon}} \leq \frac{e (2r)^{\frac{3}{2}}}{\beta} e^{-\beta + 2r(\log(2r) - 1)} . \end{aligned} \nonumber \end{equation} Setting \(\upsilon(\beta) := 8 \log(\beta/2)/\beta\) and using that \(\beta \geq 1\) for sufficiently large \(N\) then gives, \begin{align*} 2r(\log(2r) - 1) &= 2 \left\lceil \frac{\beta}{8 \log(\beta/2)} \right\rceil \left( \log\left( 2 \left\lceil \frac{\beta}{8 \log(\beta/2)} \right \rceil \right) - 1 \right) \\ &\leq \frac{\beta}{2\log(\beta/2)} \left( \log\left( \frac{\beta/2}{ \log(\beta/2)} \right) - 1 \right), \end{align*} hence, \begin{align*} \sum_{m=1}^{\infty} m e^{-\beta m^{\upsilon(\beta)}} &\leq e \left( \frac{\beta}{2 \log(\beta/2)} \right)^{\frac{1}{2}} e^{-\frac{\beta}{2}\left(1 + \frac{1}{\log(\beta/2)} \right)} \\ &\leq e^{-\frac{\beta}{2}} . \end{align*} On recovering the original form of the probability upper bound, we have, \begin{equation}\begin{aligned} \Pr[(\exists)m \in \mathbb{N}, |(A_{\lambda,\epsilon,N}^m - A_{\lambda,\epsilon}^m)[u](x)| > m^{1 + \upsilon(\beta)} \delta] \leq \gamma_{\lambda,N}(\delta/\sqrt{2}; u). \end{aligned} \nonumber \end{equation} Next, given the event that for all \(m \in \mathbb{N}\), \(|(A^m_{\lambda,\epsilon,N} - A^m_{\lambda,\epsilon})[u](x)| \leq m^{1 + \upsilon(\beta)} \delta\), we compare the applications of the Taylor series expansion of \(f := \sqrt{z}\) at \(1 + \varepsilon\) to \((1 + \varepsilon)I - A_{\lambda,\epsilon,N}\) and \((1 + \varepsilon)I - A_{\lambda,\epsilon}\), for a \emph{perturbation parameter} \(\varepsilon > 0\), with this term-wise absolute error rate: let \(B_{\lambda,\epsilon,N}^{(\varepsilon)} := \sqrt{(1 + \varepsilon)I - A_{\lambda,\epsilon,N}}\) and \(B_{\lambda,\epsilon}^{(\varepsilon)} := \sqrt{(1 + \varepsilon)I - A_{\lambda,\epsilon}}\), then assuming \(N\) is sufficiently large that \(\upsilon(\beta) \leq 1\) we have, \begin{align*} |B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| &\leq \delta \sum_{m=1}^{\infty} \left| \binom{\frac{1}{2}}{m} \right| \frac{m^{1 + \upsilon(\beta)}}{(1 + \varepsilon)^{m - \frac{1}{2}}} \\ &\leq \delta \sum_{m=1}^{\infty} \frac{m^{\upsilon(\beta) - \frac{1}{2}}}{(1 + \varepsilon)^{m - \frac{1}{2}}} = \delta \sqrt{1 + \varepsilon} \, \operatorname{Li}\left( \frac{1}{2} - \upsilon(\beta), \frac{1}{1 + \varepsilon} \right) \\ &\leq \delta \Gamma\left[ \frac{1}{2} + \upsilon(\beta) \right] \varepsilon^{-(\frac{1}{2} + \upsilon(\beta))} \sqrt{1 + \varepsilon} \\ &\leq 2\sqrt{\pi} \, \varepsilon^{-(\frac{1}{2} + \upsilon(\beta))} \delta .
\end{align*} The second inequality comes from \(|\binom{1/2}{m}| \leq m^{-\frac{3}{2}}\) and the following one is due to the observation that \begin{align*} \varepsilon^{s + 1/2} \operatorname{Li}\left( \frac{1}{2} - s, \frac{1}{1 + \varepsilon} \right) &= \frac{1}{\Gamma\left( \frac{1}{2} - s \right)} \int_0^{\infty} \frac{\varepsilon^{s + 1/2} t^{-(s + 1/2)}}{e^t (1 + \varepsilon) - 1} ~ dt \\ &= \frac{1}{\Gamma\left( \frac{1}{2} - s \right)} \int_0^{\infty} \frac{\varepsilon}{e^{\varepsilon t}(1 + \varepsilon) - 1} \, t^{-(s + 1/2)} ~ dt \\ &\xrightarrow[\varepsilon \to 0]{} \frac{1}{\Gamma\left( \frac{1}{2} - s \right)} \int_0^{\infty} \frac{t^{-(s + 1/2)}}{1 + t} ~ dt \\ &= \frac{\operatorname{B}\left( \frac{1}{2} - s, \frac{1}{2} + s \right)}{\Gamma\left( \frac{1}{2} - s \right)} = \Gamma\left( \frac{1}{2} + s \right), \end{align*} (wherein \(\operatorname{B}(x,y) = \Gamma(x) \Gamma(y)/\Gamma(x + y)\) is the Beta function) together with the monotonicity of \(\operatorname{Li}\left( \frac{1}{2} - \upsilon(\beta), \frac{1}{1 + \varepsilon} \right)\) in \(\varepsilon\). The fourth (final) inequality uses that \(\Gamma\left( \frac{1}{2} + s \right)\) is maximized at \(s = 0\) when \(0 \leq s \leq 1\), \emph{viz.}, we employ the assumption that \(N\) is sufficiently large that \(\upsilon(\beta) \leq 1\). Altogether, \begin{equation}\begin{aligned} \Pr[|B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| > \delta] \leq \gamma_{\lambda,N}(\delta \varepsilon^{(\frac{1}{2} + \upsilon(\beta))}/(2 \sqrt{2\pi}) ; u) , \end{aligned} \nonumber \end{equation} which upon setting \(\varepsilon = \delta^2\) gives both of the probability bounds in the statement of the Lemma. \end{proof} \begin{notation} We will use the symbol \(\upsilon\) to denote the function given by \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}} and we define the functions \begin{equation}\begin{aligned} \gamma^*_{\lambda,N,\upsilon}(\delta, \varepsilon ; u) := \gamma_{\lambda,N}(\delta \varepsilon^{(\frac{1}{2} + \upsilon(\beta))}/(2 \sqrt{2\pi}) ; u) \end{aligned} \nonumber \end{equation} and \begin{equation}\begin{aligned} \gamma_{\lambda,N,\upsilon}(\delta ; u) := \gamma_{\lambda,N}(\delta^{2(1 + \upsilon)}/(2 \sqrt{2 \pi}); u) \end{aligned} \nonumber \end{equation} to denote the corresponding probability bounds for the consistency of the square root of the perturbed operators. \end{notation} An immediate application of the relations in \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}} is to transfer consistency between the pair \(\tilde{B}_{\lambda,\epsilon,N}, \tilde{B}_{\lambda,\epsilon}\) to that between the pair of square roots of the unperturbed operators: namely we have, \begin{theorem} \hypertarget{thm:sqrt-conv}{\label{thm:sqrt-conv}} Given \(\lambda \geq 0\), \(\epsilon > 0\), \(u \in L^{\infty}\) and \(x \in \mathcal{M}\), we have for all \(\delta > 0\), \begin{equation} \label{eq:thm-sqrt-conv-bound} \Pr[| B_{\lambda,\epsilon,N}[u](x) - B_{\lambda,\epsilon}[u](x)| > \delta] \leq \gamma_{\lambda,N,\upsilon}(\delta/3 ; u), \end{equation} with \(B_{\lambda,\epsilon,N} := \sqrt{I - A_{\lambda,\epsilon,N}}\) and \(B_{\lambda,\epsilon} := \sqrt{I - A_{\lambda,\epsilon}}\). \end{theorem} \begin{proof} Let \(\varepsilon \geq 0\) and set \(B^{(\varepsilon)}_{\lambda,\epsilon,N}\) and \(B^{(\varepsilon)}_{\lambda,\epsilon}\) as in the proof of \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}}.
The matrix \((1 + \varepsilon)I - A_{\lambda,\epsilon,N}\) is an M-matrix and (in modulus) the largest eigenvalue of \(A_{\lambda,\epsilon,N}\) is one, which is also simple. Therefore, it follows from \citep[Theorem 4]{alefeld1982mmatsqrt} that \(B^{(\varepsilon)}_{\lambda,\epsilon,N} = (1 + \varepsilon)^{\frac{1}{2}}(I - \mathcal{B}_{\lambda,\epsilon,N}^{(\varepsilon)})\) is also an M-matrix, with \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon,N}\) some matrix with non-negative entries and spectral radius \(r_{\varepsilon,N} < ((2 + \varepsilon)/(1 + \varepsilon))^{\frac{1}{2}} - 1 < 1\). Let \(L^2_{\mathbb{R}}(\mathcal{M}) := L^2(\mathcal{M} \to \mathbb{R})\) be the real Hilbert space of real-valued functions on \(\mathcal{M}\). Then, \((B^{(\varepsilon)}_{\lambda,\epsilon})^2 : L^2_{\mathbb{R}}(\mathcal{M}) \to L^2_{\mathbb{R}}(\mathcal{M})\) is an \emph{M-operator} (an infinite-dimensional generalization of M-matrices) with respect to the cone \(\mathscr{K}\) of non-negative functions and satisfies the conditions of \citep[Theorem 3]{marek1995mopsqrt}. Therefore, by that Theorem, \(B^{(\varepsilon)}_{\lambda,\epsilon} = (1 + \varepsilon)^{\frac{1}{2}}(I - \mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon})\) is also an M-operator for some \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon} : L^2_{\mathbb{R}} \to L^2_{\mathbb{R}}\) with spectral radius \(r_{\varepsilon} < 1\) (in the same way as for the discrete counterpart) and such that \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon}[\mathscr{K}] \subset \mathscr{K}\). As per \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}, \(B^{(\varepsilon)}_{\lambda,\epsilon,N}\) and \(B^{(\varepsilon)}_{\lambda,\epsilon}\) are symmetrized via conjugation by \(\sqrt{p_{\lambda,\epsilon,N}}\) and \(\sqrt{p_{\lambda,\epsilon}}\), respectively, hence \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon,N}\) and \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon}\) are as well, so it follows from spectral theory that the symmetrizations of each of the latter operators have spectral radius less than one. Therefore, the spectral radius of \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon,N} + \mathcal{B}^{(0)}_{\lambda,\epsilon,N}\) and \(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon} + \mathcal{B}^{(0)}_{\lambda,\epsilon}\) is less than two. We may write, \begin{equation}\begin{aligned} B^{(\varepsilon,0)}_{\lambda,\epsilon,N} := B_{\lambda,\epsilon,N}^{(\varepsilon)} + B_{\lambda,\epsilon,N}^{(0)} = 2(1 + \varepsilon)^{\frac{1}{2}}\left( I - \frac{1}{2}(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon,N} + \mathcal{B}^{(0)}_{\lambda,\epsilon,N}) \right), \\ B^{(\varepsilon,0)}_{\lambda,\epsilon} := B_{\lambda,\epsilon}^{(\varepsilon)} + B_{\lambda,\epsilon}^{(0)} = 2(1 + \varepsilon)^{\frac{1}{2}}\left( I - \frac{1}{2}(\mathcal{B}^{(\varepsilon)}_{\lambda,\epsilon} + \mathcal{B}^{(0)}_{\lambda,\epsilon}) \right), \end{aligned} \nonumber \end{equation} that is, each of these operators is in the form \(t(I - B)\) with \(t > 0\) and \(B\) having a spectral radius less than one and either having non-negative entries in the matrix case or preserving the non-negative cone \(\mathscr{K}\) in the infinite dimensional case. This means that the first sum is an M-matrix and the second is an M-operator. Moreover, whenever \(\varepsilon > 0\), both operators are non-singular. In the non-singular M-operator case, \citep[Theorem 1]{marek1995mopsqrt} shows that \((B^{(\varepsilon,0)}_{\lambda,\epsilon})^{-1}\) preserves \(\mathscr{K}\).
We may express \((B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1}\) as the spectral application of the function \(f(z) := (\sqrt{1 + \varepsilon - z} + \sqrt{1 - z})^{-1}\) to \(z = A_{\lambda,\epsilon}\). Then, the decomposition \(\eqref{eq:derived-fun-calc-expansion}\) applies since \(f\) is analytic on the unit disc and has absolutely convergent Taylor series on \([-1, 1]\). Using this, we see that \((B_{\lambda,\epsilon}^{(\varepsilon, 0)})^{-1}(x,y)\) is smooth for \(x \neq y\) and due to the preservation of \(\mathscr{K}\) we also have that given any \((x, y) \in \mathcal{M}^2\), for all \(\epsilon_0 > 0\), \begin{equation}\begin{aligned} A_{\epsilon_0}[(B^{(\varepsilon,0)}_{\lambda,\epsilon})^{-1}(\cdot,y)](x) \geq 0. \end{aligned} \nonumber \end{equation} Combining these properties with continuity off of the diagonal, on letting \(\epsilon_0 \to 0\) we find that for all \(x \neq y\), \((B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1}(x,y) \geq 0\). Thus by \(\eqref{eq:derived-fun-calc-expansion}\), \((B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1} - f(0) I\) has a smooth kernel that is non-negative off of the diagonal, hence by continuity the kernel is also non-negative on the diagonal. Moreover, by \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}, \(((B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1} - f(0)I)[1] = \varepsilon^{-\frac{1}{2}} - f(0) > 0\), so for any \(u \in C^{\infty}\) and \(x \in \mathcal{M}\), \begin{align*} |(B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1}[u](x)| &\leq f(0)|u(x)| + |((B_{\lambda,\epsilon}^{(\varepsilon,0)})^{-1} - f(0)I)[u](x)| \\ &\leq \varepsilon^{-\frac{1}{2}} ||u||_{\infty} . \end{align*} Further applications of \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}} give that \((B^{(\varepsilon,0)}_{\lambda,\epsilon,N})^{-1}\) also maps \(1 \mapsto \varepsilon^{-\frac{1}{2}}\) and due to commutativity under spectral mapping, we have, \begin{equation}\begin{aligned} B^{(\varepsilon)}_{\lambda,\epsilon,N} - B^{(0)}_{\lambda,\epsilon,N} = \varepsilon(B^{(\varepsilon,0)}_{\lambda,\epsilon,N})^{-1} , \\ B^{(\varepsilon)}_{\lambda,\epsilon} - B^{(0)}_{\lambda,\epsilon} = \varepsilon(B^{(\varepsilon,0)}_{\lambda,\epsilon})^{-1} . \end{aligned} \nonumber \end{equation} Therefore, in the event that \(|B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| \leq ||u||_{\infty} \delta\), we get \begin{align*} |B_{\lambda,\epsilon,N}[u](x) - B_{\lambda,\epsilon}[u](x)| &\leq |B_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x)| + |B^{(\varepsilon)}_{\lambda,\epsilon,N}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| \\ &\quad\quad + |B_{\lambda,\epsilon}[u](x) - B^{(\varepsilon)}_{\lambda,\epsilon}[u](x)| \\ &\leq \varepsilon \left( |(B^{(\varepsilon,0)}_{\lambda,\epsilon,N})^{-1}[u](x)| + |(B^{(\varepsilon,0)}_{\lambda,\epsilon})^{-1}[u](x)| \right) + ||u||_{\infty}\delta \\ &\leq 2 \varepsilon^{\frac{1}{2}} ||u||_{\infty} + ||u||_{\infty} \delta , \end{align*} whence upon setting \(\varepsilon = \delta^2\) and applying \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}}, we arrive at the probability bound in the statement of this Theorem. \end{proof} The consistency of the square root of the graph Laplacian and the quantum dynamics according to it are of interest in their own right, so we will carry out that theory below.
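Before carrying that out, we note as a computational aside that the perturbation device in the proof above is directly checkable in finite dimensions. The following is a minimal Python sketch, not the paper's pipeline: the Gaussian kernel, the sample on the circle and all names are illustrative assumptions, and we take the simplest weight (\(\lambda = 0\)). It forms \(B^{(\varepsilon)} = \sqrt{(1 + \varepsilon)I - A}\) by spectral calculus through the symmetric conjugate of \(A\) (playing the role of the symmetrization in \protect\hyperlink{lem:lap-symm}{Lemma \ref{lem:lap-symm}}) and verifies the identity \(B^{(\varepsilon)} - B^{(0)} = \varepsilon (B^{(\varepsilon,0)})^{-1}\) used above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, eps_bw = 200, 0.05                       # sample size, kernel bandwidth
theta = rng.uniform(0.0, 2.0 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # sample on the circle
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distances
K = np.exp(-D2 / eps_bw)                    # Gaussian kernel matrix
d = K.sum(1)
A = K / d[:, None]                          # row-stochastic: A @ ones = ones

# A is similar to the symmetric S = D^{-1/2} K D^{-1/2}, so its spectrum
# is real and lies in [-1, 1]; take square roots spectrally through S.
S = K / np.sqrt(np.outer(d, d))
mu, V = np.linalg.eigh(S)

def B(eps):
    """Spectral square root of (1 + eps)I - A, conjugated back from S."""
    sq = (V * np.sqrt(np.clip(1.0 + eps - mu, 0.0, None))) @ V.T
    return sq / np.sqrt(d)[:, None] * np.sqrt(d)[None, :]

eps = 1e-3
B_eps, B_0 = B(eps), B(0.0)
residual = B_eps - B_0 - eps * np.linalg.inv(B_eps + B_0)
print("identity residual:", np.abs(residual).max())   # ~ machine precision
\end{verbatim}
Since \(B^{(\varepsilon)}\) and \(B^{(0)}\) commute, \((B^{(\varepsilon)} - B^{(0)})(B^{(\varepsilon)} + B^{(0)}) = \varepsilon I\), which is the one-line reason the residual in the sketch vanishes; the content of the proof above is the additional fact that the inverse preserves the non-negative cone.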
Along with it, we will also state the consistency theorems for the dynamics given by the perturbed operator \(B_{\lambda,\epsilon,N}^{(\varepsilon)}\) because we have better probabilistic consistency bounds in terms of \(N\) for it than for its unperturbed counterpart, when \(\varepsilon\) is decoupled from \(\delta\). A second motivation for this is that for our applications, we see from \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}} that symbol information, and in particular the coordinates of geodesic propagations, can only be recovered up to order \(O(h)\) error. Furthermore, from \protect\hyperlink{symbol-of-a-graph-laplacian}{Section \ref{symbol-of-a-graph-laplacian}} we understand that the principal symbol of \(h^2 \Delta_{\lambda,\epsilon}\), and the propagations of coherent states that follow from it and form the key features for the applications, are unperturbed by order \(\varepsilon = O(h)\) perturbations to \(\Delta_{\lambda,\epsilon}\). Therefore, alongside our propagators we will also consider the perturbed ones, which we will write as, \begin{notation} \hypertarget{notation:perturb-glap}{\label{notation:perturb-glap}} Let \(\lambda \geq 0\) and \(\epsilon > 0\). Then, with \(\varepsilon \in [0, 1]\), we denote \begin{gather*} \Delta_{\lambda,\epsilon,N}^{(\varepsilon)} := \Delta_{\lambda,\epsilon,N} + c_{2,0} \varepsilon/\epsilon, \quad\quad \Delta_{\lambda,\epsilon}^{(\varepsilon)} := \Delta_{\lambda,\epsilon} + c_{2,0} \varepsilon/\epsilon , \\ \UepsN{\varepsilon}{t}:= \exp\left( i t (\Delta_{\lambda,\epsilon,N}^{(\varepsilon)})^{\frac{1}{2}} \right), \quad\quad \Ueps{\varepsilon}{t}:= \exp\left( i t (\Delta_{\lambda,\epsilon}^{(\varepsilon)})^{\frac{1}{2}} \right) , \end{gather*} which for \(\varepsilon > 0\) we call \(\Delta_{\lambda,\epsilon,N}^{(\varepsilon)}\) and \(\Delta_{\lambda,\epsilon}^{(\varepsilon)}\) the (\(\varepsilon\)-)\emph{perturbed} graph Laplacians and \(\UepsN{\varepsilon}{t}\) and \(\Ueps{\varepsilon}{t}\) the (\(\varepsilon\)-)\emph{perturbed propagators}, while for \(\varepsilon = 0\) we have the usual, \emph{unperturbed} graph Laplacians and propagators, respectively. \end{notation} Now we are ready to state the consistency of both the unperturbed and perturbed propagators for short times: \begin{lemma} \hypertarget{lem:prop-short-time-conv}{\label{lem:prop-short-time-conv}} Let \(\tau \in [-1/c_{2,0}, 1/c_{2,0}]\). Then, given \(\lambda \geq 0\), \(\epsilon > 0\), \(u \in L^{\infty}\) and \(x \in \mathcal{M}\), we have for all \(\delta > 0\), \begin{equation}\begin{aligned} \Pr[|U_{\lambda,\epsilon,N}^{\tau \sqrt{\epsilon}}[u](x) - U_{\lambda,\epsilon}^{\tau \sqrt{\epsilon}}[u](x)| > \delta] \leq \gamma_{\lambda,N,\upsilon}(\delta/(12e) ; u) + 2 \tilde{\gamma}_{\lambda,N}(\delta/(8e) ; u), \end{aligned} \nonumber \end{equation} wherein \(U_{\lambda,\epsilon,N}^{t} := e^{i t \sqrt{\Delta_{\lambda,\epsilon,N}}}\) and \(U^t_{\lambda,\epsilon} := e^{i t \sqrt{\Delta_{\lambda,\epsilon}}}\) and \(t \in \mathbb{R}\). Moreover, for \(\varepsilon > 0\) we have, \begin{equation}\begin{aligned} \Pr[|\UepsN{\varepsilon}{\tau \sqrt{\epsilon}}[u](x) - \Ueps{\varepsilon}{\tau \sqrt{\epsilon}}[u](x)| > \delta] \leq \gamma^*_{\lambda,N,\upsilon}(\delta/(4e), \varepsilon ; u) + 2 \tilde{\gamma}_{\lambda,N}(\delta/(8e) ; u) .
\end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} We start with the application of \protect\hyperlink{thm:bdd-analytic-calc-conv}{Theorem \ref{thm:bdd-analytic-calc-conv}} to the pairs \((\mathcal{C}_N, \mathcal{C})\) and \((\mathcal{S}_N, \mathcal{S})\) of \(\eqref{eq:wave-ops}\) at time \(t = \tau \sqrt{\epsilon}\). More generally, let \(f_{e,w}(z) := f_e(z - w)\) and \(f_{o,w}(z) := f_o(z - w)\), so we may write \begin{equation}\begin{aligned} f_{e,w}(z) = \sum_{j=0}^{\infty} \frac{(-1)^j}{(2j)!}\partial_z^{2j}[\cos(tc_{2,0}\epsilon^{-\frac{1}{2}} \cdot)]|_{z = 1 + w} z^j \end{aligned} \nonumber \end{equation} and similarly for \(f_{o,w}\). Hence, using the notation of that Theorem, we have that \(K_{1 + w} = 1\) for \(f_{e,w}\) and \(f_{o,w}\), so we can apply it to get that \begin{equation} \label{eq:wave-ops-shorttime-consistency} \Pr[(|\mathcal{C}_{w,N}[u](x) - \mathcal{C}_w[u](x)| > \delta) \vee (|\mathcal{S}_{w,N}[u](x) - \mathcal{S}_w[u](x)| > \delta)] \leq 2 \tilde{\gamma}_{\lambda,N}(\delta/(2 e);u) , \end{equation} for \(\mathcal{C}_w := f_{e,w}(A_{\lambda,\epsilon})\), \(\mathcal{C}_{w,N} := f_{e,w}(A_{\lambda,\epsilon,N})\), \(\mathcal{S}_w := f_{o,w}(A_{\lambda,\epsilon})\) and \(\mathcal{S}_{w,N} := f_{o,w}(A_{\lambda,\epsilon,N})\). Now suppose the event that \(|\mathcal{S}_N[u](x) - \mathcal{S}[u](x)| \leq \delta\) and \(|B_{\lambda,\epsilon,N}[\mathcal{S}[u]](x) - B_{\lambda,\epsilon}[\mathcal{S}[u]](x)| \leq \delta\) and note that through the spectral mapping properties we have \(\sin(t \sqrt{\Delta_{\lambda,\epsilon,N}}) = B_{\lambda,\epsilon,N} \mathcal{S}_N\) and \(\sin(t \sqrt{\Delta_{\lambda,\epsilon}}) = B_{\lambda,\epsilon} \mathcal{S}\). Further, recall from the proof of \protect\hyperlink{thm:sqrt-conv}{Theorem \ref{thm:sqrt-conv}} that \(B_{\lambda,\epsilon,N}\) is an M-matrix so that \(b_{\lambda,\epsilon,N} := I - B_{\lambda,\epsilon,N}\) has all entries non-negative and \(b_{\lambda,\epsilon,N}[1] = 1\). Then, \begin{align*} | & \sin(t \sqrt{\Delta_{\lambda,\epsilon,N}})[u](x) - \sin(t \sqrt{\Delta_{\lambda,\epsilon}})[u](x) | \\ &\leq |b_{\lambda,\epsilon,N}[(\mathcal{S}_N - \mathcal{S})[u]](x)| + |(\mathcal{S}_N - \mathcal{S})[u](x)| + |(B_{\lambda,\epsilon,N} - B_{\lambda,\epsilon})[\mathcal{S}[u]](x)| \\ &\leq 3\delta . \end{align*} On including the event that \(|\mathcal{C}_N[u](x) - \mathcal{C}[u](x)| \leq \delta\), we have \begin{equation}\begin{aligned} |U^t_{\lambda,\epsilon,N}[u](x) - U^t_{\lambda,\epsilon}[u](x)| \leq 4\delta . \end{aligned} \nonumber \end{equation} By a Taylor series expansion we have that for any \(u \in L^{\infty}\), \begin{equation}\begin{aligned} |\mathcal{S}[u](x)| \leq \sum_{m=0}^{\infty} \frac{|A_{\lambda,\epsilon}^m[u](x)|}{m!} \leq e||u||_{\infty}. \end{aligned} \nonumber \end{equation} Therefore, taking a union bound over all events and applying \protect\hyperlink{thm:sqrt-conv}{Theorem \ref{thm:sqrt-conv}} we arrive at, \begin{equation}\begin{aligned} \Pr[|U_{\lambda,\epsilon,N}^t[u](x) - U_{\lambda,\epsilon}^t[u](x)| > \delta] \leq \gamma_{\lambda,N,\upsilon}(\delta/(12e) ; u) + 2 \tilde{\gamma}_{\lambda,N}(\delta/(8 e) ; u) . \end{aligned} \nonumber \end{equation} For the second part of the statement of the Lemma, \(\eqref{eq:wave-ops-shorttime-consistency}\) already gives the probabilistic consistency bound for \(\mathcal{C}_{\varepsilon,N}\) and \(\mathcal{S}_{\varepsilon,N}\).
Then, the above statements go through essentially in the same way, modulo that in place of \(b_{\lambda,\epsilon,N}\) we use \(b_{\lambda,\epsilon,N}^{(\varepsilon)} := (1 + \varepsilon)^{\frac{1}{2}}I - B_{\lambda,\epsilon,N}^{(\varepsilon)}\), which also has all non-negative entries, since \(B_{\lambda,\epsilon,N}^{(\varepsilon)}\) is an M-matrix with spectral radius \((1 + \varepsilon)^{\frac{1}{2}}\). Then due to \(b_{\lambda,\epsilon,N}^{(\varepsilon)}[1] = \sqrt{1 + \varepsilon}\), we have under the analogous events, \begin{align*} &\left| \sin\left( t \sqrt{\Delta_{\lambda,\epsilon,N}^{(\varepsilon)}} \right)[u](x) - \sin\left( t \sqrt{\Delta_{\lambda,\epsilon}^{(\varepsilon)}} \right)[u](x) \right| \\ &\quad \leq |b^{(\varepsilon)}_{\lambda,\epsilon,N}[(\mathcal{S}_{\varepsilon,N} - \mathcal{S}_{\varepsilon})[u]](x)| + (1 + \varepsilon)^{\frac{1}{2}}|(\mathcal{S}_{\varepsilon,N} - \mathcal{S}_{\varepsilon})[u](x)| \\ &\quad\quad + |(B_{\lambda,\epsilon,N}^{(\varepsilon)} - B_{\lambda,\epsilon}^{(\varepsilon)})[\mathcal{S}[u]](x)| \\ &\quad \leq 4\delta . \end{align*} On invoking the probabilistic consistency bound from \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}} for the square root of the \(\varepsilon\)-perturbed graph Laplacian, we arrive at the bound in the second part of the Lemma. \end{proof} \begin{notation} We denote the pointwise convergence rate for the short-time solution to the half-wave equation by \begin{equation}\begin{aligned} \omega_{\lambda,N}(\delta,\upsilon;u) := \gamma_{\lambda,N,\upsilon}(\delta/(12e) ; u) + 2 \tilde{\gamma}_{\lambda,N}(\delta/(8e) ; u) \end{aligned} \nonumber \end{equation} and for the perturbed case by, \begin{equation}\begin{aligned} \omega^*_{\lambda,N}(\delta,\varepsilon,\upsilon;u) := \gamma^*_{\lambda,N,\upsilon}(\delta/(4e), \varepsilon ; u) + 2 \tilde{\gamma}_{\lambda,N}(\delta/(8e) ; u) . \end{aligned} \nonumber \end{equation} The explicit dependence on \(\upsilon\) will often be suppressed in the usage of this notation and we will write simply, \(\omega_{\lambda,N}(\delta ; u)\) and \(\omega^*_{\lambda,N}(\delta, \varepsilon ; u)\) with the exact meaning understood to be as given above. \end{notation} \hypertarget{uniform-consistency-at-longer-times}{% \subsection{Uniform consistency at longer times}\label{uniform-consistency-at-longer-times}} We wish to extend the pointwise consistency between the discretized solution \(u_N^t := U_{\lambda,\epsilon,N}^t[u]\) and the continuum solution \(u^t := U_{\lambda,\epsilon}^t[u]\) to longer times. For this purpose, we see that \(u_N^t\) is defined on \(\mathcal{M}\) through the form of \emph{Nyström extension} via \(A_{\lambda,\epsilon,N}\) as discussed in the beginning of \protect\hyperlink{from-graphs-to-manifolds}{Section \ref{from-graphs-to-manifolds}}.
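Concretely, the Nyström extension evaluates the averaged function at an arbitrary point of \(\mathcal{M}\) using only the sampled values. As a minimal Python sketch of this evaluation (with an illustrative Gaussian kernel, the \(\lambda = 0\) normalization and hypothetical names; this is not the paper's experimental setup):
\begin{verbatim}
import numpy as np

def nystrom_extend(x, X, u, eps):
    """Evaluate the graph averaging operator A_{eps,N}[u] at an off-sample
    point x: a kernel-weighted average of the sampled values of u."""
    w = np.exp(-((x - X) ** 2).sum(axis=-1) / eps)  # k_eps(x, x_j), j in [N]
    return w @ u / w.sum()                          # normalized by p_{eps,N}(x)

# Illustrative usage: extend a function sampled on the circle.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
u = np.cos(3.0 * theta)                             # samples of u on the graph
x = np.array([np.cos(0.5), np.sin(0.5)])            # a point not in the sample
print(nystrom_extend(x, X, u, eps=0.01), np.cos(1.5))
\end{verbatim}
Roughly speaking, it is this off-sample evaluation, applied with \(Df(A_{\lambda,\epsilon,N})[u]\) in place of \(u\), that the supremum bound \(\eqref{eq:discrete-prop-sup-bound}\) below controls.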
The inequality \(\eqref{eq:fun-calc-derived-bound}\) gives for all \(u \in L^{\infty}\) and \(t \in \mathbb{R}\), \begin{equation} \label{eq:discrete-prop-sup-bound} |U_{\lambda,\epsilon,N}^t[u](x)| \leq \left( 1 + c_{2,0} |t| \epsilon^{-\frac{1}{2}} \frac{||k_{\lambda,\epsilon,N}(x,\cdot)||_{N,\infty} ||p_{\lambda,\epsilon,N}||_{N,\infty}^{\frac{1}{2}}}{\min_{j \in [N]} p_{\lambda,\epsilon,N}^{\frac{3}{2}}(x_j)} \right) ||u||_{N,\infty} \end{equation} since in this case \(Df(z) = (\exp(i \tau\sqrt{1 - z}) - \exp(i \tau))/z\) for \(f = \exp(i \tau\sqrt{1 - z})\) and \(\tau := c_{2,0} t \epsilon^{-\frac{1}{2}}\). Setting \(w := 1 - \sqrt{1 - z}\) and taking a Taylor series expansion at \(w = 0\) of \(|Df|(w)^2 = 2(1 - \cos(\tau w))/(w(2 - w))^2\) shows that its maximum is achieved near \(w = \sqrt{48/(\tau^2 - 9)}/\tau\), the inverse of the second order coefficient; substituting this into the first order Taylor approximation then shows that the maximum of \(|Df|\) is less than \(\tau/2 + 1 < \tau\). Since \(\eqref{eq:derived-fun-calc-expansion}\) applies to the case with \(A_{\lambda,\epsilon}\) in place of \(A_{\lambda,\epsilon,N}\), we also have the bound for \(|f(A_{\lambda,\epsilon})[u](x)|\) given by the ultimate inequality in \(\eqref{eq:fun-calc-derived-bound}\) with \(||\cdot||_{q}\) in place of \(|| \cdot ||_{N,q}\). Hence, \begin{equation} \label{eq:cont-prop-sup-bound} |U_{\lambda,\epsilon}^t[u](x)| \leq ||u||_{\infty} + c_{2,0} |t| \epsilon^{-\frac{1}{2}} ||k_{\lambda,\epsilon}||_{\infty}\frac{||p_{\lambda,\epsilon}||_{\infty}^{\frac{1}{2}}}{\inf p_{\lambda,\epsilon}^{\frac{3}{2}}} ||u||_2 \end{equation} will be useful for bounding probabilities for pointwise consistency. The bounds \(\eqref{eq:discrete-prop-sup-bound}\) and \(\eqref{eq:cont-prop-sup-bound}\) hold also for \(\UepsN{\varepsilon}{t}[u]\) and \(\Ueps{\varepsilon}{t}[u]\), respectively. This, along with \protect\hyperlink{lem:prop-short-time-conv}{Lemma \ref{lem:prop-short-time-conv}}, is sufficient to establish consistency for the perturbed case through the same arguments as the \emph{unperturbed} \(\varepsilon = 0\) case. Indeed, in the following, we will use the short-time pointwise probabilistic consistency bounds given by \protect\hyperlink{lem:prop-short-time-conv}{Lemma \ref{lem:prop-short-time-conv}} as a black-box while extending to long-time, uniform probabilistic consistency. We will work through the unperturbed case, mainly to keep the notation simple, but since the arguments essentially rely on \protect\hyperlink{lem:prop-short-time-conv}{Lemma \ref{lem:prop-short-time-conv}} in a simple way, they apply directly to the perturbed case with \(\omega_{\lambda,N}^*\) in place of \(\omega_{\lambda,N}\) as well, with the only thing to keep track of being that the \(\delta\) (\emph{i.e.}, error) multipliers are decoupled from \(\varepsilon\); the notation makes this clear. A further application of \(\eqref{eq:derived-fun-calc-expansion}\) is that we can pass from pointwise consistency to uniform consistency, as in the following, \begin{lemma} \hypertarget{lem:prop-pre-uniform-bound}{\label{lem:prop-pre-uniform-bound}} Let \(t \in \mathbb{R}\), \(\lambda \geq 0\), \(\epsilon > 0\) and \(u \in L^{\infty}\).
Then, there exists a constant \(C > 0\) depending only on \(k, p, \lambda\) and the geometry of \(\mathcal{M}\) such that if \(\delta > 0\) and there is \(\omega_N(\delta,\epsilon ; \mathcal{M}, p, k, \lambda; u, t) > 0\) so that for all \(x \in \mathcal{M}\), \begin{equation}\begin{aligned} \Pr[|U_{\lambda,\epsilon,N}^t[u](x) - U_{\lambda,\epsilon}^t[u](x)| > \delta] \leq \omega_N , \end{aligned} \nonumber \end{equation} then with \(\underline{K}_{p,\lambda} := \min\{ \underline{C}_p, \underline{C}_{p,\lambda} \}/2\) and \(\varepsilon_{\mathcal{M}}(\epsilon,t,u) := \min\{ \operatorname{inj}(\mathcal{M})/\delta, \epsilon^{n + \frac{1}{2}}, \epsilon^{n + 1}/(|t| \, ||u||_{\infty}) \}\), \begin{align*} \Pr[ & ||U_{\lambda,\epsilon,N}^{t}[u] - U_{\lambda,\epsilon}^{t}[u]||_{\infty} > \delta] \\ &\quad \leq C \varepsilon_{\mathcal{M}}(\epsilon,t,u)^{-n} \delta^{-n}(\omega_N(\delta/2) + \gamma_{\lambda,N}(\underline{K}_{p,\lambda}/2;1) + \gamma_{0,N}(\underline{K}_{p,\lambda}/2;1)) \end{align*} and in particular when \(\epsilon \leq \min\{ 1, (\operatorname{inj}(\mathcal{M})/\delta)^{\frac{2}{2n + 1}} \}\) and \(|t| \, ||u||_{\infty} \geq 1\), \begin{align*} \Pr[ & ||U_{\lambda,\epsilon,N}^{t}[u] - U_{\lambda,\epsilon}^{t}[u]||_{\infty} > \delta] \\ &\quad \leq C (||u||_{\infty} |t|)^{n} \epsilon^{-n(n + 1)} \delta^{-n}(\omega_N(\delta/2) + \gamma_{\lambda,N}(\underline{K}_{p,\lambda}/2;1) + \gamma_{0,N}(\underline{K}_{p,\lambda}/2;1)) . \end{align*} All of these statements hold for \(\UepsN{\tilde{\varepsilon}}{t}\) and \(\Ueps{\tilde{\varepsilon}}{t}\), with \(\tilde{\varepsilon} > 0\), in place of \(U_{\lambda,\epsilon,N}^t\) and \(U_{\lambda,\epsilon}^t\), respectively. \end{lemma} \begin{proof} Let \(0 < \varepsilon < \operatorname{inj}(\mathcal{M})\) and take \(x_1^*, \ldots, x_M^* \in \mathcal{M}\) such that \(M = \mathcal{N}(\varepsilon)\) and \(\cup_{j=1}^M B(x_j^*, \varepsilon) = \mathcal{M}\). For each \(j \in [M]\), let \(\mathcal{U}_j \subset B_{\mathcal{M}}(x_j^*, \operatorname{inj}(x_j^*)) \subset \mathcal{M}\) be a geodesically convex neighbourhood of \(x_j^*\) and let \(s^{-1} : \mathcal{U}_j \to V_j \subset \mathbb{R}^{n}\) provide normal coordinates for this neighbourhood. Then, for any \(u \in L^{\infty}\) and \(x \in \mathcal{M}\), there is \(j \in [M]\) such that \(x \in B(x^*_j, \varepsilon)\), so Taylor's theorem, with the expansion centered at \(0 \in V_j\) and evaluated at \(s^{-1}(x) =: w \in V_j\), gives that \begin{equation} \label{eq:taylor-expand-normal-coords} |u(x)| \leq |u(x^*_j)| + \varepsilon \max_{|\alpha| = 1} \sup_{w \in \tilde{B}_j} |\partial^{\alpha}[u \circ s](w)|, \end{equation} with \(\tilde{B}_j := s^{-1}(B(x_j^*, \varepsilon))\).
We will use the following bound: \begin{align} \label{eq:k-first-deriv-bound} \begin{split} |\partial_{w^{(i)}} k_{\epsilon}(s(w),y)| &= 2\epsilon^{-\frac{n + 2}{2}} |\langle \iota \circ s(w) - \iota(y), \partial_{w^{(i)}}[\iota \circ s(\cdot) - \iota(y)](w) \rangle_{\mathbb{R}^{n}} \, k'_{\epsilon}(s(w),y)| \\ &\leq \epsilon^{-\frac{n + 2}{2}} ||\iota \circ s(w) - \iota(y)|| \, ||J_{\iota}[\partial_{w^{(i)}} s](w)|| \, |k'_{\epsilon}(s(w),y)| \\ &= \epsilon^{-\frac{n + 2}{2}} ||\iota \circ s(w) - \iota(y)|| \, ||\partial_{w^{(i)}} s(w)|| \, |k'_{\epsilon}(s(w),y)| \\ &\leq C_{\mathcal{M},k,1} \epsilon^{-\frac{n + 1}{2}}, \end{split} \end{align} wherein \(k' : r \mapsto \partial_r k\) is localized to \([0, R_k^2]\) and \(k'_{\epsilon}(x,y) := k'(||\iota(x) - \iota(y)||^2 / \epsilon)\), which gives that \(||\iota(x) - \iota(y)|| \leq R_k \sqrt{\epsilon}\) and \(C_{\mathcal{M},k,1} = C_{\mathcal{M},1} R_k ||k'||_{\infty}\) with \(C_{\mathcal{M},1} := \max_{i \in [n]} ||\partial_{w^{(i)}} s(w)||_{\infty}\). Hence also, \(|\partial_{w^{(i)}} [p_{\epsilon} \circ s](w)|, |\partial_{w^{(i)}} [p_{\epsilon,N} \circ s](w)| \leq C_{\mathcal{M},k,1} \epsilon^{-\frac{n+1}{2}}\). Now let \(\delta_1 > 0\) and assume we are in the event that for all \(j \in [M]\) and \(\lambda' \in \{0, \lambda \}\), \(|p_{\lambda',\epsilon,N}(x^*_j) - p_{\lambda',\epsilon}(x^*_j)| \leq \delta_1/2\). Then for any \(x \in \mathcal{M}\), there is \(j \in [M]\) such that \(d_g(x^*_j,x) \leq \varepsilon\), so that by \(\eqref{eq:taylor-expand-normal-coords}\) and \(\eqref{eq:k-first-deriv-bound}\), \begin{align*} |p_{\epsilon,N}(x) - p_{\epsilon}(x)| &\leq \frac{\delta_1}{2} + 2\varepsilon \max_{|\alpha| = 1} \sup_{(w,y) \in \tilde{B}_j \times \mathcal{M}} |\partial^{\alpha}[k_{\epsilon}(s)](w,y)| \\ &\leq \frac{\delta_1}{2} + 2 C_{\mathcal{M},k,1} \epsilon^{-\frac{n+1}{2}} \varepsilon . \end{align*} In the following, we will use the notation: \(\min_{\mathcal{M}} : \mathbb{R}^{S} \ni \vec{a} \mapsto \min\{ \operatorname{inj}(\mathcal{M}), a_1, \ldots, a_S \} \in \mathbb{R}\) for any \(S \geq 1\). Hence, with \(\varepsilon \leq \min\{ \operatorname{inj}(\mathcal{M}), \epsilon^{\frac{n + 1}{2}} \delta_1/(4 C_{\mathcal{M},k,1}) \} =: \min_{\mathcal{M}}(\epsilon^{\frac{n + 1}{2}} \delta_1/(4 C_{\mathcal{M},k,1}))\), we have that \(|p_{\epsilon,N}(x) - p_{\epsilon}(x)| \leq \delta_1\). Thus, assuming \(\epsilon \leq 1\) we have the bound, \begin{equation}\begin{aligned} |\partial_{w^{(i)}}[p_{\lambda,\epsilon,N} \circ s](w)| \leq \epsilon^{-n-\frac{1}{2}} C_{\mathcal{M},k,1} C_{p,\lambda,1,\delta_1} , \\ C_{p,\lambda,1,\delta_1} := \frac{\lambda (\overline{C}_p + \delta_1)^{2\lambda}}{(\underline{C}_{p} - \delta_1)^{4\lambda}}\left( \frac{1}{\lambda} + \frac{||k||_{\infty}}{\overline{C}_p + \delta_1} \right) \end{aligned} \nonumber \end{equation} and further assuming that \(\varepsilon \leq \min_{\mathcal{M}}\{ \delta_1(\epsilon^{\frac{n+1}{2}}, \epsilon^{n + \frac{1}{2}}/C_{p,\lambda,1,\delta_1})/(4C_{\mathcal{M},k,1}) \}\) gives on another application of \(\eqref{eq:taylor-expand-normal-coords}\) and \(\eqref{eq:k-first-deriv-bound}\), \begin{equation}\begin{aligned} |p_{\lambda,\epsilon,N}(x) - p_{\lambda,\epsilon}(x)| \leq \delta_1 . \end{aligned} \nonumber \end{equation} Now let \(x, y \in \mathcal{M}\) and \(s^{-1}\) be centered at \(x^*_j\) such that \(x \in B(x^*_j, \varepsilon)\). 
Then, combining the above bounds gives, \begin{align*} \left| \partial_{w^{(i)}}\left[ \frac{k_{\lambda,\epsilon,N}(s(\cdot), y)}{p_{\lambda,\epsilon,N} \circ s(\cdot)} \right] \right|_{\cdot = w} &= \left| \frac{(p_{\lambda,\epsilon,N} p_{\epsilon,N}^{\lambda})(x) \partial_{w^{(i)}}[k_{\epsilon}(s,y)](w) - k_{\epsilon}(s,y) \partial_{w^{(i)}}[p_{\lambda,\epsilon,N} \, p_{\epsilon,N}^{\lambda} \circ s](w)}{p_{\epsilon,N}(y)^{\lambda}(p_{\lambda,\epsilon,N}(x) \, p_{\epsilon,N}^{\lambda}(x))^2} \right| \\ &\leq \epsilon^{-(n + \frac{1}{2})} \frac{C_{\mathcal{M},k,1}}{(\underline{C}_{p,\lambda} - \delta_1)^3 (\underline{C}_p - \delta_1)^{3\lambda}} \\ &\quad\quad \times \left( \epsilon^{\frac{n}{2}} (\overline{C}_{p,\lambda} + \delta_1)(\overline{C}_p + \delta_1)^{\lambda} \right. \\ &\quad\quad\quad\quad \left. + ||k||_{\infty}(C_{p,\lambda,1,\delta_1} (\overline{C}_p + \delta_1)^{\lambda} + \lambda(\overline{C}_{p,\lambda} + \delta_1)(\overline{C}_p + \delta_1)^{\lambda - 1}) \right) \\ &\leq \epsilon^{-(n + \frac{1}{2})} C_{\mathcal{M},k,1} C_{k,p,\lambda,1,\delta_1} \end{align*} with \begin{align*} C_{k,p,\lambda,1,\delta_1} &:= ((\underline{C}_{p,\lambda} - \delta_1) (\underline{C}_p - \delta_1)^{\lambda})^{-3} \left( (\overline{C}_{p,\lambda} + \delta_1)(\overline{C}_p + \delta_1)^{\lambda} \right. \\ &\quad\quad \left. + \, ||k||_{\infty}(C_{p,\lambda,1,\delta_1} (\overline{C}_p + \delta_1)^{\lambda} + \lambda(\overline{C}_{p,\lambda} + \delta_1)(\overline{C}_p + \delta_1)^{\lambda - 1}) \right). \end{align*} In the continuum case, these bounds hold with \(\delta_1 = 0\), so letting \(\delta_1 = \min\{ \underline{C}_{p,\lambda}, \underline{C}_p \}/2\) we have \(C_{p,\lambda,1,\delta_1} \leq 36^{\lambda} C_{p,\lambda,1,0} =: C_{p,\lambda,1}\) and \begin{equation}\begin{aligned} \left| \partial_{w^{(i)}} \left[ \frac{k_{\lambda,\epsilon,N}(s(\cdot),y)}{p_{\lambda,\epsilon,N} \circ s(\cdot)} \right]_{\cdot = w} \right|, \left| \partial_{w^{(i)}} \left[ \frac{k_{\lambda,\epsilon}(s(\cdot),y)}{p_{\lambda,\epsilon} \circ s(\cdot)} \right]_{\cdot = w} \right| \leq \epsilon^{-(n + \frac{1}{2})} C_{\mathcal{M},k,1} C_{k,p,\lambda,1} , \\ C_{k,p,\lambda,1} := 12 (432^{\lambda} C_{k,p,\lambda,1,0}) . \end{aligned} \nonumber \end{equation} Therefore, if we are also in the event that \(|(U^t_{\lambda,\epsilon,N} - U^t_{\lambda,\epsilon})[u](x_j^*)| \leq \delta/2\) for every \(j \in [M]\) and \(||u||_{N,2} \leq ||u||_{\infty}\), then denoting \(F_N := U^t_{\lambda,\epsilon,N}[u]\) and \(F := U^t_{\lambda,\epsilon}[u]\) and applying \(\eqref{eq:discrete-prop-sup-bound}\) gives, \begin{align*} |(F_N - F)(x)| &\leq \delta/2 + \varepsilon \max_{|\alpha| = 1} ||\partial^{\alpha} (F_N - F)\circ s||_{\infty} \\ &\leq \delta/2 + \varepsilon \max_{|\alpha| = 1} (||\partial^{\alpha} A_{\lambda,\epsilon,N}[Df(A_{\lambda,\epsilon,N})[u]] \circ s||_{\infty} \\ &\quad\quad + ||\partial^{\alpha} A_{\lambda,\epsilon}[Df(A_{\lambda,\epsilon})[u]] \circ s||_{\infty}) \\ &\leq \delta/2 + 2 \epsilon^{-(n + 1)} |t| c_{2,0} C_{\mathcal{M},k,1} C_{k,p,\lambda,1} ||u||_{\infty} \varepsilon . \end{align*} Hence, with \(\varepsilon \leq \min_{\mathcal{M}}\{ \delta (\epsilon^{\frac{n+1}{2}}, \epsilon^{n + \frac{1}{2}}/C_{p,\lambda,1}, \epsilon^{n + 1}/(|t| \, ||u||_{\infty} c_{2,0} C_{k,p,\lambda,1}))/(4C_{\mathcal{M},k,1}) \}\), \begin{equation}\begin{aligned} |(F_N - F)(x)| \leq \delta .
\end{aligned} \nonumber \end{equation} The events \(|p_{\lambda',\epsilon,N}(x^*_j) - p_{\lambda',\epsilon}(x^*_j)| \leq \underline{K}_{p,\lambda}/2\) for each \(j \in [M]\) happen with probability at least \(1 - M \gamma_{\lambda',N}(\underline{K}_{p,\lambda}/2)\) and that \(|(F_N - F)(x^*_j)| \leq \delta/2\) with probability at least \(1 - M \omega_N(\delta/2)\). Thus, applying the Bishop-Günther inequality (\protect\hyperlink{lem:covering-number}{Lemma \ref{lem:covering-number}}) then taking a union bound gives the probabilistic bound as in the statement of the Lemma. \end{proof} This Lemma has a two-fold application: firstly, we can turn the short-time, pointwise consistency of \protect\hyperlink{lem:prop-short-time-conv}{Lemma \ref{lem:prop-short-time-conv}} into a uniform consistency, which then combined with the semigroup structure of \(U_{\lambda,\epsilon}^t\) leads to a kind of time-splitting argument that yields exponential probabilistic bounds for long-time, pointwise consistency. Then, since those bounds are independent of the given point, we can employ this Lemma again to get long-time, uniform consistency: this is, \begin{theorem} \hypertarget{thm:halfwave-soln}{\label{thm:halfwave-soln}} Let \(\lambda \geq 0\), \(\epsilon > 0\), \(t \in \mathbb{R}\) and \(u \in L^{\infty}\) such that there exists \(K_u > 0\) so that \(||U_{\lambda,\epsilon}^s[u]||_{\infty} \leq K_u\) for all \(|s| \leq |t|\). Then, there exist constants \(C, C' > 0\) depending only on \(k, p, \lambda\) and the geometry of \(\mathcal{M}\) such that for all \(\delta > 0\), \begin{align} \begin{split} \label{eq:thm-halfwave-soln-bound} \Pr&[||(U_{\lambda,\epsilon,N}^t - U_{\lambda,\epsilon}^t)[u]||_{\infty} > \delta] \\ &\quad\quad \leq C^2 \kappa \varepsilon_{\mathcal{M}}(\epsilon,t,K_u)^{-2n} \delta^{-2n}(\omega_{\lambda,N}(\delta \epsilon^{\frac{n}{2}}/(4 \kappa^2 \tilde{C}_{p,\lambda}) ; K_u) + C' \varepsilon_{\mathcal{M},\lambda}^{-n} r_{\lambda,N}) \\ &\quad\quad\quad\quad + C \varepsilon_{\mathcal{M}}(\epsilon,t,K_u)^{-n} \delta^{-n} (C' \varepsilon_{\mathcal{M},\lambda}^{-n} + 1)r_{\lambda,N} \\ &\quad\quad =: \Omega_{\lambda,t,N}(\delta,\epsilon,K_u), \end{split} \end{align} wherein \(\kappa := \lceil |t| c_{2,0}/\sqrt{\epsilon} \rceil\), \(\tilde{C}_{p,\lambda} > 0\) is a constant depending only on \(\lambda, \overline{C}_{p,\lambda}, \underline{C}_{p,\lambda}\) and \(\underline{C}_p\), \(\varepsilon_{\mathcal{M}}\) is as in \protect\hyperlink{lem:prop-pre-uniform-bound}{Lemma \ref{lem:prop-pre-uniform-bound}}, \(\varepsilon_{\mathcal{M},\lambda} := \min\{ \operatorname{inj}(\mathcal{M}), \underline{K}_{p,\lambda} \epsilon^{n + \frac{1}{2}} \}\) and \(r_{\lambda,N} := \gamma_{\lambda,N}(\underline{K}_{p,\lambda} ; 1) + \gamma_{0,N}(\underline{K}_{p,\lambda} ; 1)\) with \(\underline{K}_{p,\lambda} := \min\{ \underline{C}_p, \underline{C}_{p,\lambda} \}/2\). In the perturbed case, let \(\tilde{\varepsilon} > 0\).
Then, we have with a \(K_u > 0\) satisfying \(||\Ueps{\tilde{\varepsilon}}{s}[u]||_{\infty} \leq K_u\) for all \(|s| \leq |t|\), \begin{equation} \label{eq:thm-halfwave-soln-perturb-op-bound} \Pr[||(\UepsN{\tilde{\varepsilon}}{t} - \Ueps{\tilde{\varepsilon}}{t})[u]||_{\infty} > \delta] \leq \Omega^*_{\lambda,t,N}(\delta,\tilde{\varepsilon},\epsilon,K_u) \end{equation} with \(\Omega^*_{\lambda,t,N}\) defined in the same way as \(\Omega_{\lambda,t,N}\), except with the instance of \(\omega_{\lambda,N}(\cdot ; \cdot \cdot)\) replaced with \(\omega^*_{\lambda,N}(\cdot, \tilde{\varepsilon} ; \cdot \cdot)\). \end{theorem} \begin{proof} We set \(\kappa := \lceil |t| c_{2,0}/\sqrt{\epsilon} \rceil\) and \(\tau := t/\kappa\) so that \(|\tau| \leq \sqrt{\epsilon}/c_{2,0}\). Let \(\varepsilon, \delta_0 > 0\) and let us begin in the following events: \begin{alignat*}{4} & \mathcal{A}_{\lambda'}(\varepsilon) &:& \quad |p_{\lambda',\epsilon,N}(X) - p_{\lambda',\epsilon}(X)| \leq \varepsilon & & \quad (\forall)X \in \{ x_0 := x, x_1, \ldots, x_N \} \, \wedge \, \lambda' = 0,\lambda \\ & \mathcal{A}(\delta_0) &:& \quad |(U_{\lambda,\epsilon,N}^{\tau} - U_{\lambda,\epsilon}^{\tau}) U_{\lambda,\epsilon}^{m \tau}[u](X)| \leq \delta_0 & & \quad (\forall)X \in \{ x_0 := x, x_1, \ldots, x_N \}, (\forall) 0 \leq m \leq \kappa - 1 . \end{alignat*} Then we have due to \(\eqref{eq:discrete-prop-sup-bound}\), \begin{equation}\begin{aligned} |U_{\lambda,\epsilon,N}^{m\tau}[u](x)| \leq (1 + m C(\epsilon,\varepsilon)) ||u||_{N,\infty} \end{aligned} \nonumber \end{equation} with \begin{equation}\begin{aligned} C(\epsilon,\varepsilon) := \epsilon^{-\frac{n}{2}} \frac{||k||_{\infty}}{(\underline{C}_p - \varepsilon)^{2\lambda}}\frac{(\overline{C}_{p,\lambda} + \varepsilon)^{\frac{1}{2}}}{(\underline{C}_{p,\lambda} - \varepsilon)^{\frac{3}{2}}} . \end{aligned} \nonumber \end{equation} Define for each \(m \in \mathbb{N}\), \(v_m := U_{\lambda,\epsilon}^{m\tau}[u]\) so that in the current event we have for all \(\kappa \geq 2\) and \(0 \leq m \leq \kappa - 2\), \begin{align*} |(U_{\lambda,\epsilon,N}^{2\tau} - U_{\lambda,\epsilon}^{2\tau})[v_m](x)| &\leq |U_{\lambda,\epsilon,N}^{\tau}(U_{\lambda,\epsilon,N}^{\tau} - U_{\lambda,\epsilon}^{\tau})[v_m](x)| \\ &\quad\quad + |(U_{\lambda,\epsilon,N}^{\tau} - U_{\lambda,\epsilon}^{\tau}) [v_{m+1}](x)| \\ &\leq (2 + C(\epsilon,\varepsilon))\delta_0 . \end{align*} Supposing that for all \(2 \leq \kappa' \leq \kappa - 1\), each \(s \in [\kappa']\) and \(0 \leq m \leq \kappa' - s\), \begin{align*} |(U_{\lambda,\epsilon,N}^{s\tau} - U_{\lambda,\epsilon}^{s\tau})[v_m](x)| &\leq s\left( \frac{s-1}{2}C(\epsilon,\varepsilon) + 1 \right) \delta_0 \end{align*} leads to, \begin{align*} |(U_{\lambda,\epsilon,N}^{\kappa\tau} - U_{\lambda,\epsilon}^{\kappa\tau})[u](x)| &\leq |U_{\lambda,\epsilon,N}^{(\kappa - 1)\tau}(U_{\lambda,\epsilon,N}^{\tau} - U_{\lambda,\epsilon}^{\tau})[u](x)| \\ &\quad\quad + |(U_{\lambda,\epsilon,N}^{(\kappa - 1)\tau} - U_{\lambda,\epsilon}^{(\kappa - 1)\tau}) [v_1](x)| \\ &\leq \kappa \left( \frac{\kappa - 1}{2} C(\epsilon,\varepsilon) + 1 \right) \delta_0 .
\end{align*} Therefore, having satisfied the induction hypothesis, we have that for each \(\kappa \in \mathbb{N}\), if for all \(0 \leq m \leq \kappa - 1\) and \(0 \leq j \leq N\), \(|(U_{\lambda,\epsilon,N}^{\tau} - U_{\lambda,\epsilon}^{\tau})U_{\lambda,\epsilon}^{m \tau}[u](x_j)| \leq \delta_0\) then \begin{equation}\begin{aligned} |(U_{\lambda,\epsilon,N}^{\kappa \tau} - U_{\lambda,\epsilon}^{\kappa \tau})[u](x)| \leq \kappa^2 C(\epsilon,\varepsilon) \delta_0 . \end{aligned} \nonumber \end{equation} Now set \(\varepsilon = \min \{\underline{C}_p, \underline{C}_{p,\lambda} \}/2\) and \(\tilde{C}_{p,\lambda} := \sqrt{12} (2^{\lambda}) \overline{C}_{p,\lambda}^{\frac{1}{2}}/(\underline{C}_{p,\lambda}^{\frac{3}{2}} \underline{C}_p^{2\lambda})\) so that \(C(\epsilon,\varepsilon) \leq \epsilon^{-\frac{n}{2}} \tilde{C}_{p,\lambda}\). Moreover, as shown in the proof of \protect\hyperlink{lem:prop-pre-uniform-bound}{Lemma \ref{lem:prop-pre-uniform-bound}}, for each \(\lambda' \in \{ 0, \lambda \}\), there is a constant \(C_{\lambda'}\) depending only on \(k,p,\lambda'\) and \(\mathcal{M}\) such that the event \(\mathcal{A}_{\lambda'}(\varepsilon)\) happens with probability at least \(1 - C_{\lambda'} (\min\{ \operatorname{inj}(\mathcal{M}), \varepsilon \epsilon^{n + \frac{1}{2}} \})^{-n} \gamma_{\lambda',N}(\varepsilon ; 1)\). Upon setting \(\delta_0 = \delta \, (\epsilon^{\frac{n}{2}}/\kappa^2)/\tilde{C}_{p,\lambda}\) and denoting \(r_{\lambda,N}(\varepsilon) := \gamma_{\lambda,N}(\varepsilon ; 1) + \gamma_{0,N}(\varepsilon ; 1)\) and \(\varepsilon_{\mathcal{M}, \lambda} := (C_{\lambda} + C_0)^{-\frac{1}{n}} \min\{ \operatorname{inj}(\mathcal{M}), \varepsilon \epsilon^{n + \frac{1}{2}} \}\), an application of \protect\hyperlink{lem:prop-pre-uniform-bound}{Lemma \ref{lem:prop-pre-uniform-bound}} gives a lower bound for the probability that event \(\mathcal{A}(\delta_0)\) happens and upon taking a union bound, we have that for all \(x \in \mathcal{M}\), \begin{align*} \Pr[ & |(U_{\lambda,\epsilon,N}^{\kappa \tau} - U_{\lambda,\epsilon}^{\kappa \tau})[u](x)| > \delta] \leq \overline{\omega}_{\lambda,N}(\delta ; u, t) + \varepsilon_{\mathcal{M},\lambda}(\varepsilon,\epsilon)^{-n} r_{\lambda,N}(\varepsilon), \\ &\overline{\omega}_{\lambda,N}(\delta ; u, t) := C \kappa \varepsilon_{\mathcal{M}}(\epsilon,t,K_u)^{-n} \delta^{-n}[\omega_{\lambda,N}(\delta \epsilon^{\frac{n}{2}}/(2 \kappa^2 \tilde{C}_{p,\lambda}) ; K_u) + r_{\lambda,N}(\varepsilon)]. \end{align*} Since this probabilistic bound holds independently of \(x \in \mathcal{M}\), we may apply \protect\hyperlink{lem:prop-pre-uniform-bound}{Lemma \ref{lem:prop-pre-uniform-bound}} again at \(t = \kappa \tau\) and on doing so, we find that \begin{align*} \Pr&[||U_{\lambda,\epsilon,N}^{\kappa \tau}[u] - U_{\lambda,\epsilon}^{\kappa \tau}[u]||_{\infty} > \delta] \\ &\quad\quad \leq C \varepsilon_{\mathcal{M}}(\epsilon,t,u)^{-n} \delta^{-n} (\overline{\omega}_{\lambda,N}(\delta/2 ; u, t) + (\varepsilon_{\mathcal{M},\lambda}(\varepsilon,\epsilon)^{-n} + 1)r_{\lambda,N}(\varepsilon)). \end{align*} \end{proof} A generic bound to be used in place of \(K_u\) in the conditions satisfying \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} is given by \(\eqref{eq:cont-prop-sup-bound}\). We record this as, \begin{corollary} \hypertarget{cor:halfwave-soln-general-bound}{\label{cor:halfwave-soln-general-bound}} Let \(\lambda \geq 0\), \(\epsilon > 0\), \(t \in \mathbb{R}\) and \(u \in L^{\infty}\).
Then, there exists a constant \(C > 0\) depending only on \(k, p, \lambda\) and the geometry of \(\mathcal{M}\) such that for all \(\delta > 0\), \begin{align*} \Pr[||(U_{\lambda,\epsilon,N}^t - U_{\lambda,\epsilon}^t)[u]||_{\infty} > \delta] &\leq \Omega_{\lambda,t,N}(\delta,\epsilon,C_{\epsilon,t,u}), \end{align*} wherein \(C_{\epsilon,t,u} := ||u||_{\infty} + c_{2,0} |t| \epsilon^{-\frac{n + 1}{2}} ||k||_{\infty} \overline{C}_{p,\lambda}^{\frac{1}{2}} /(\underline{C}_p^{2 \lambda} \underline{C}_{p,\lambda}^{\frac{3}{2}}) ||u||_2\) and \(\Omega_{\lambda,t,N}\) as defined in \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}}. \end{corollary} The case for the application to coherent states is facilitated by \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}}, since it provides a supremum bound of the propagated coherent state for \(|t|\) below the injectivity radius at the point where the state is initially localized. In fact, it also gives the localization properties that allow us to place the geodesic in an \(O(\sqrt{h})\) radius ball about the maximum of the modulus of the propagated state. Therefore, we can readily approximate the geodesic flow using the \(\max\) observable; in doing so, we account for the fact that while the \emph{un-normalized} coherent state \(\tilde{\psi}_h := \psi_h/C_h(x_0,\xi_0)\) is readily interpolatable through the graph structure, its normalization is a matter of probabilistic \(L^2\) convergence and not necessary to handle the \(\max\) observable. For this reason, we use the un-normalized coherent state at present, to establish the consistency for this observation of the geodesic flow, in \begin{proposition} \hypertarget{prop:max-observable-flow-consistency}{\label{prop:max-observable-flow-consistency}} Let \(\lambda \geq 0\), \(\alpha \geq 1\), \(h \in (0, h_0]\) for \(h_0\) given by \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}} and \((x_0, \xi_0) \in T^*\mathcal{M}\). Then, for \(\tilde{\psi}_h := e^{\frac{i}{h} \phi(x_0, \xi_0; \cdot)}\) an \emph{un-normalized} coherent state localized at \((x_0, \xi_0)\) and \(|t| \leq \operatorname{inj}(x_0)\), we have with \(\epsilon := h^{2 + \alpha}\), for all \(\delta > 0\), \begin{equation}\begin{aligned} \Pr[|| \, |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2 - |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]|^2 \,||_{\infty} > \delta] \leq \Omega_{\lambda,t,N}(\delta/(2K_{\psi} + 1 + \delta), \epsilon, K_{\psi}) \end{aligned} \nonumber \end{equation} with a constant \(K_{\psi} > 0\) that satisfies \(||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty} \leq K_{\psi}\) for all \(|t| \leq \operatorname{inj}(x_0)\). Furthermore, denoting \(\hat{x}_{N,t} := \arg\max_{\mathcal{X}_N}|U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2\) with \(\mathcal{X}_N := \{ x_1, \ldots, x_N \}\), there exist constants \(h_{\max}', C_{\max}' > 0\) such that for all \(h \in (0, h_{\max}']\), \begin{equation} \label{eq:prop-max-observable-flow-consistency-bound} \Pr[d_g(\hat{x}_{N,t}, x_t) > C_{\max}' h^{\frac{1}{2}}] \leq \Omega_{\lambda,t,N}(C h^{\frac{n}{2}}, \epsilon, K_{\psi}) + e^{-2 N C' h^{n}} , \end{equation} with constants \(C, C' > 0\).
In the perturbed case, we define \(\hat{x}^{(\varepsilon)}_{N,t} := \arg\max_{\mathcal{X}_N}|\UepsN{\varepsilon}{t}[\tilde{\psi}_h]|^2\) and set \(\varepsilon \in O(h^{1 + \alpha})\) to get with new constants \(h'_{\max}, C'_{\max}, C, C' > 0\), and \(K^{(\varepsilon)}_{\psi} := \sup_{\{|t| \leq \operatorname{inj}(x_0) \}} \sup_{h \in (0, h'_{\max}]}||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_{\infty}\), \begin{equation} \label{eq:prop-max-observable-perturb-glap-flow-consistency-bound} \Pr[d_g(\hat{x}^{(\varepsilon)}_{N,t}, x_t) > C'_{\max} h^{\frac{1}{2}}] \leq \Omega^*_{\lambda,t,N}(C h^{\frac{n}{2}}, \varepsilon, \epsilon, K^{(\varepsilon)}_{\psi}) + e^{-2 N C' h^{n}} . \end{equation} \end{proposition} \begin{proof} We may apply \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}} (we give the arguments for the unperturbed case, but they follow in the same way for the perturbed case), so in effect we have a constant \(K_{\psi} > 0\) such that \(||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty} \leq K_{\psi}\) for all \(|t| < \operatorname{inj}(x_0)\) and \(h \in [0, h_0)\). Therefore, for all \(\delta_0 > 0\), \begin{equation} \label{eq:halfwave-soln-consistency-cs} \Pr[||U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h] - U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty} > \delta_0] \leq \Omega_{\lambda,t,N}(\delta_0,\epsilon,K_{\psi}). \end{equation} Now assume the event that \(||U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h] - U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty} \leq \delta/(2 K_{\psi} + 1 + \delta)\). Then, \begin{align*} | &\, |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2 - |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]|^2 \,| \\ &= | \, |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]| - |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]| \,| (|U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]| + |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]|) \\ &\leq \frac{\delta}{2 K_{\psi} + 1 + \delta} \left( 2 K_{\psi} + \frac{\delta}{2 K_{\psi} + 1 + \delta} \right) \leq \delta \frac{2K_{\psi} + 1}{2 K_{\psi} + 1 + \delta} \leq \delta . \end{align*} This establishes the first part of the statement of the Proposition. To see the second part, start by noting that by \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}}, we also have a constant \(C_{\psi} > 0\) such that \(| \, |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]|^2 - |\psi_h^t|^2/C_h(x_0, \xi_0)^2 \, | \leq C_{\psi} h^{\frac{n}{2} + 1}\). Thus, \(| \, |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2 - |\psi_h^t|^2/C_h(x_0, \xi_0)^2 \, | \leq \delta + C_{\psi} h^{\frac{n}{2} + 1}\). As per the proof of \protect\hyperlink{lem:prop-coherent-state-localized}{Lemma \ref{lem:prop-coherent-state-localized}} affecting that Proposition, there are constants \(h_{\max}, C_0, C_1, c_0, c_1 > 0\) such that for all \(h \in [0, h_{\max})\), given \(p_0 \in \overline{B}_0 := \overline{B}(x_t, g ; (C_0 h)^{\frac{1}{2}})\) and \(p_1 \in B_1^c := \mathcal{M} \setminus B(x_t, g ; (C_1 h)^{\frac{1}{2}})\), we have \(|\psi_h^t|^2(p_0) > c_0 > c_1 > |\psi_h^t|^2(p_1)\). Hence, \begin{equation}\begin{aligned} |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2(p_0) > c_0/C_h(x_0,\xi_0)^2 - \delta - C_{\psi} h^{\frac{n}{2} + 1} , \\ |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2(p_1) < c_1/C_h(x_0,\xi_0)^2 + \delta + C_{\psi} h^{\frac{n}{2} + 1} .
\end{aligned} \nonumber \end{equation} Now let \(c > 0\) and \(\tilde{C}_{\psi} > 0\) such that \(C_h(x_0, \xi_0)^2 \leq \tilde{C}_{\psi} h^{-\frac{n}{2}}\) and set \(\delta = c h^{\frac{n}{2}}\). Then, \(C_h(x_0, \xi_0)^2(\delta + C_{\psi} h^{\frac{n}{2} + 1}) \leq c \tilde{C}_{\psi} + \tilde{C}_{\psi} C_{\psi} h\). So with \(c = (c_0 - c_1)/(2\tilde{C}_{\psi})\), there is \(0 < h_0 \leq h_{\max}\) such that for all \(h \in (0, h_0]\), we have \(|U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2(p_0) > |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2(p_1)\) and since these constants are independent of \(p_0, p_1\), this inequality holds uniformly with respect to their respective regions, so that \(\inf _{\overline{B}_0}|U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2 > \sup_{B_1^c} |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2\). Therefore, to establish that \(d_g(\hat{x}_{N,t}, x_t) \leq (C_1 h)^{\frac{1}{2}}\), we need only find one point in \(\mathcal{X}_N \cap \overline{B}_0\): let \(P_0\) be the probability that a random vector \(x\) lies in \(B_0\), which with the aid of \protect\hyperlink{lem:covering-number}{Lemma \ref{lem:covering-number}} is bounded below by, \begin{equation}\begin{aligned} P_0 = \int_{B_0} p(y) ~ d\nu_g(y) \geq (\inf p) \operatorname{vol}(B_0) \geq C_{\mathcal{M}} (\inf p) (C_0 h)^{\frac{n}{2}} =: \underline{P}_0 . \end{aligned} \nonumber \end{equation} Then a \emph{sampler} \(Y_0 : \mathcal{M} \to \{ 0, 1 \}\) that maps \(Y_0[B_0] = 1\) and \(Y_0[B_0^c] = 0\) defines \(N\) \emph{i.i.d.} random variables governed by a Bernoulli process with probability \(P_0\), when applied to the random vectors \(x_1, \ldots, x_N\). This process has mean \(N P_0 \geq N \underline{P}_0\), so a Chernoff bound yields, \begin{equation}\begin{aligned} \Pr[\# (\mathcal{X}_N \cap B_0) \geq 1] \geq 1 - e^{-2 N \underline{P}_0^2} , \end{aligned} \nonumber \end{equation} hence upon taking a union bound with \(\eqref{eq:halfwave-soln-consistency-cs}\) using \(\delta_0 = c h^{\frac{n}{2}}/(2 K_{\psi} + 1 + ch^{\frac{n}{2}})\) and letting \(h \in (0, \min\{ h_0, c^{-\frac{2}{n}} \}]\), we have the second part of the statement of the Proposition. \end{proof} \hypertarget{interlude-l2-consistency}{% \subsection{\texorpdfstring{Interlude: \(L^2\) consistency}{Interlude: L\^{}2 consistency}}\label{interlude-l2-consistency}} We discuss now an intermediary step to recovering the geodesic flow through the graph structure using the mean position observables. These observables, as discussed in \protect\hyperlink{observing-geodesics}{Section \ref{observing-geodesics}}, can be expressed as \(L^2\) inner products, so to implement them we need a way to go from inner products on graphs to inner products on the manifold. The treatment in \citep[\(\S 2.3.5\)-\(6\)]{hein2005geometrical} shows that if a complex vector space over the graph structure is endowed with an inner product given by \(\langle \cdot, \cdot \rangle/N\) or \(\langle \cdot , p_{\lambda,\epsilon,N} \cdot \rangle/N\), then on the continuum side, these converge as \(N \to \infty\) and \(\epsilon \to 0\) to the inner products \(\langle \cdot, \cdot \rangle_p\) and \(\langle \cdot, c_2^{1 - 2\lambda} p^{2 - 2\lambda} \cdot \rangle\), respectively.
Hence, among these only the case \(\lambda = 1\) converges to \(\langle \cdot , \cdot \rangle_{L^2(\mathcal{M}, d\nu_g)}\), while by \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}}, for all other weights, the (classically) propagated continuum weight, namely \(c_2^{1 - 2\lambda} p^{2 - 2\lambda} \circ \Gamma^t\) will show up as a multiplier to top order in the expectation of a propagated quantum observable against a coherent state. Since this also means that the propagated weight must show up by itself when taking the norm of the propagated coherent state, it suggests that in order to recover the flow of symbols along geodesics through the coherent states, we must normalize their propagations. It so happens that we may do this with \emph{any} choice of the weights, so we proceed with the simplest choice that is sufficient for our needs: the corresponding picture on the continuum side is given by, \begin{lemma} \hypertarget{lem:prop-cs-mean-consistency}{\label{lem:prop-cs-mean-consistency}} Let \(\lambda \geq 0\), \(h \in (0, 1]\), \(\alpha \geq 1\) and \(\psi_h\) be a coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M}\). Then, for all \(|t| \leq \operatorname{inj}(x_0)\) and \(\varepsilon \in O(h^{1 + \alpha})\), with \(\epsilon := h^{2 + \alpha}\), we have, \begin{equation}\begin{aligned} ||\Ueps{\varepsilon}{t}[\psi_h]||_p^2 = c_0^{2\lambda - 1} p_{\lambda,\epsilon}(x_0) p(x_t)^{2\lambda} + O(h) , \end{aligned} \nonumber \end{equation} wherein \(x_t := \pi_{\mathcal{M}} \Gamma^t(x_0, \xi_0)\) and if \(v \in C^{\infty}\), then \begin{equation}\begin{aligned} \langle \CSeps{\varepsilon}{t} | (\Ueps{\varepsilon}{t})^* v \, \Ueps{\varepsilon}{t}|\CSeps{\varepsilon}{t} \rangle_p = v(x_t) + O(h) \end{aligned} \nonumber \end{equation} with \(\CSeps{\varepsilon}{t} := \psi_h/||\Ueps{\varepsilon}{t}[\psi_h]||_p\). \end{lemma} \begin{proof} This is a straightforward application of \protect\hyperlink{thm:sym-cs-glap-psido}{Theorem \ref{thm:sym-cs-glap-psido}}, which goes through for \(\Ueps{\varepsilon}{t}\) because \(h^2 \Delta_{\lambda,\epsilon}^{(\varepsilon)} = h^2 \Delta_{\lambda,\epsilon} + h^2 c_{2,0} \varepsilon/\epsilon I\) and \(h^2 \varepsilon/\epsilon = O(h)\) so \(\operatorname{Sym}[h^2 \Delta_{\lambda,\epsilon}^{(\varepsilon)}] = \operatorname{Sym}[h^2 \Delta_{\lambda,\epsilon}]\). Combined with \(\eqref{eq:glap-prop-adjoint-identity}\) and that the quantization of \(v p\) gives the multiplication operator by \(v p\), we have, \begin{align*} \langle \psi_h | (\Ueps{\varepsilon}{t})^* v p \, \Ueps{\varepsilon}{t} | \psi_h \rangle &= \langle \psi_h | p_{\lambda,\epsilon} (\Ueps{\varepsilon}{})^{-t} v \frac{p}{p_{\lambda,\epsilon}} \, \Ueps{\varepsilon}{t} | \psi_h \rangle \\ &= \langle \psi_h | p_{\lambda,\epsilon} \operatorname{Op}_h\left( v \frac{p}{p_{\lambda,\epsilon}} \circ \Gamma^t \right) | \psi_h \rangle + O(h) \\ &= p_{\lambda,\epsilon}(x_0) \, v(x_t) \frac{p}{p_{\lambda,\epsilon}}(x_t) + O(h) \\ &= c_0^{2\lambda - 1} p_{\lambda,\epsilon}(x_0) p(x_t)^{2\lambda} v(x_t) + O(h), \end{align*} wherein the ultimate equality follows from \(\eqref{lem:taylor-expand-deg-func}\). Thus, \begin{equation}\begin{aligned} \langle \CSeps{\varepsilon}{t} | (\Ueps{\varepsilon}{t})^* v p \, \Ueps{\varepsilon}{t} | \CSeps{\varepsilon}{t} \rangle = v(x_t) + O(h). 
\end{aligned} \nonumber \end{equation} \end{proof} We turn now to the consistency between the Hilbert space on the graph, given by the inner product \(\langle \cdot , \cdot \rangle_N := \langle \cdot , \cdot \rangle/N\) and the space \(L^2(\mathcal{M}, p \, d\nu_g)\). The primary step is to establish the error in \begin{gather*} |\langle \CSepsN{\varepsilon}{t} | (\UepsN{\varepsilon}{t})^* \, v \, \UepsN{\varepsilon}{t} | \CSepsN{\varepsilon}{t} \rangle_N - \langle \CSeps{\varepsilon}{t} | (\Ueps{\varepsilon}{t})^* \, v \, \Ueps{\varepsilon}{t} | \CSeps{\varepsilon}{t} \rangle_p| , \\ \CSepsN{\varepsilon}{t} := \psi_h/||\UepsN{\varepsilon}{t}[\psi_h]||_N , \end{gather*} whereafter we can employ \protect\hyperlink{lem:prop-cs-mean-consistency}{Lemma \ref{lem:prop-cs-mean-consistency}} to give the consistency of flows of position-space observables. In anticipation of specifying to coherent states, we state the \(L^2\) inner product consistency bounds in terms that respect a class of \emph{local} functions. The point of this is that when we apply a Bernstein bound for the consistency of the inner product between two functions, at least one of which is localized to roughly an \(O(h^{\frac{\varsigma}{n}})\) radius ball with respect to \(h \in (0, h_0]\) for some \(h_0 > 0\), then we can use the dependence of the bound on the variance to get a better convergence rate by a factor of roughly \(h^{\varsigma}\) in the exponential. This trick was used in \citep{hein2005geometrical} to give a quadratic improvement of the rate of convergence in the bias term (\emph{viz.}, the density parameter \(\epsilon\)) for the graph Laplacian and we simply state it here in a somewhat more general and convenient form. More precisely, we have, \begin{lemma} \hypertarget{lem:l2-consistency}{\label{lem:l2-consistency}} Let \(\varsigma \in \mathbb{R}\) and \(0 < h_0 \leq 1\) be fixed and let \(u_{h,\varsigma} \in L^{\infty}\) be a family of functions over \(h \in (0, h_0]\) satisfying: there are constants \(\overline{K}_u, K_{u,2} > 0\) so that uniformly in \(h\), \(h^{\varsigma} ||u_{h,\varsigma}||_{\infty} \leq \overline{K}_u\) and \(||u_{h,\varsigma}||_{L^2}^2 \leq K_{u,2}\). Then, given any \(v \in L^{\infty}\), for all \(\delta > 0\), we have \begin{align*} \Pr[|\langle u_{h,\varsigma}, v \rangle_N - \langle u_{h,\varsigma}, v \rangle_p| > \delta] &\leq 4 \exp\left( -\frac{N h^{\varsigma} \delta^2}{2 ||v||_{\infty}(h^{\varsigma} K_{u,2} ||v||_{\infty} ||p||_{\infty} + \overline{K}_u \delta/3)} \right) \\ &=: \rho_N(\delta,h^{\varsigma}, \overline{K}_u, K_{u,2}, ||v||_{\infty}). \end{align*} In case that \(v = \tilde{u}_{h,\varsigma}\) is another such family, then \begin{align*} \Pr[|\langle u_{h,\varsigma}, \tilde{u}_{h,\varsigma} \rangle_N - \langle u_{h,\varsigma}, \tilde{u}_{h,\varsigma} \rangle_p| > \delta] &\leq 4 \exp\left( -\frac{N h^{2\varsigma} \delta^2}{2 \overline{K}_{\tilde{u}}(K_{u,2} \overline{K}_{\tilde{u}} ||p||_{\infty} + \overline{K}_u \delta/3)} \right) \\ &= \rho_{N,2}(\delta,h^{2\varsigma}, \overline{K}_u, K_{u,2}, \overline{K}_{\tilde{u}}). 
\end{align*} \end{lemma} \begin{proof} We first compute the maximum and variance in order to apply Bernstein's inequality: \begin{align*} \max_{1 \leq j \leq N} & |u_{h,\varsigma}(x_j)\overline{v(x_j)}| \leq h^{-\varsigma} \overline{K}_u ||v||_{\infty} , \\ \operatorname{Var}(u_{h,\varsigma} \bar{v}) &\leq \int_{\mathcal{M}} |u_{h,\varsigma} \bar{v}|^2 p ~ d\nu_g \leq ||v||_{\infty}^2 ||p||_{\infty} \int_{\mathcal{M}} |u_{h,\varsigma}|^2 ~ d\nu_g \\ &\leq K_{u,2} ||v||_{\infty}^2 ||p||_{\infty} . \end{align*} The probabilistic bounds in the statement of the Lemma now follow from Bernstein's inequality. \end{proof} \begin{remark} The coefficient \(K_{u,2}\) can be elaborated into more precise constants regarding the geometry of small balls in \(\mathcal{M}\), if the localization of \(u_{h,\varsigma}\) were explicitly utilized. The simpler form is sufficient for the essential purposes of the following applications. \end{remark} From here onwards, we specialize to coherent states, simply to have explicit bounds and keep the discussion concrete. An immediate application is that we can recover the \(L^2(\mathcal{M}, p \, d\nu_g)\) norm of \(U_{\lambda,\epsilon}^t[\psi_h]\): \begin{lemma} \hypertarget{lem:prop-cs-norm-consistency}{\label{lem:prop-cs-norm-consistency}} Let \(\lambda \geq 0\), \(\epsilon, h \in (0, 1]\), \(t, \varepsilon \in \mathbb{R}\) and \((x_0, \xi_0) \in T^*\mathcal{M}\). Then, for \(\tilde{\psi}_h\) an \emph{un-normalized} coherent state localized about \((x_0, \xi_0)\), there are constants \(\overline{K}_{\psi}, K_{\psi,2} > 0\) depending on \(\psi_h\) and \(O(1)\) in \(h\) such that for all \(\delta > 0\), \begin{align*} \Pr&[|\, ||\UepsN{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 - ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_p^2 \, | > \delta] \\ &\leq \rho_{N,2}(\delta/2, 1, \overline{K}_{\psi}, K_{\psi,2}, \overline{K}_{\psi}) \\ &\quad\quad + \begin{cases} \Omega_{\lambda,t,N}(\delta/(4 \overline{K}_{\psi} + 2 + 2\delta), \epsilon, \overline{K}_{\psi}) & \text{if } \varepsilon = 0, \\ \Omega^*_{\lambda,t,N}(\delta/(4 \overline{K}_{\psi} + 2 + 2\delta), \varepsilon, \epsilon, \overline{K}_{\psi}) & \text{if } \varepsilon > 0 \end{cases} \\ &=: \Xi_{\lambda,N}(\delta, \psi_h) . \end{align*} \end{lemma} \begin{proof} Assume the events \(\mathcal{A}_1\) and \(\mathcal{A}_2\) that \(|| \, |\Ueps{\varepsilon}{t}[\tilde{\psi}_h]|^2 - |\UepsN{\varepsilon}{t}[\tilde{\psi}_h]|^2 \, ||_{\infty} \leq \delta/2\) and \(| \, ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 - ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_p^2 \,| \leq \delta/2\), respectively. Then, \begin{align*} |&\, ||\UepsN{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 - ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_p^2 \, | \\ &\leq |\, ||\UepsN{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 - ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 \, | + |\, ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_N^2 - ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_p^2 \, | \\ &\leq \delta . \end{align*} Now, we use \protect\hyperlink{lem:l2-consistency}{Lemma \ref{lem:l2-consistency}} to establish a lower bound on the probability that \(\mathcal{A}_2\) occurs. 
Since \(\Ueps{\varepsilon}{t}\) is similar to a unitary operator by the diagonal \(p_{\lambda,\epsilon}^{\frac{1}{2}}\), we have that \begin{equation} \label{eq:cs-prop-l2-bound} ||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_p^2 \leq \frac{||p_{\lambda,\epsilon}||_{\infty}}{\inf p_{\lambda,\epsilon}} ||\tilde{\psi}_h||_p^2 \leq h^{\frac{n}{2}} \overline{C}_{\psi,2} ||p||_{\infty} \frac{\overline{C}_{p,\lambda}}{\underline{C}_{p,\lambda}} \end{equation} for a constant \(\overline{C}_{\psi,2} > 0\). Likewise, there is a constant \(\overline{K}_{\psi} > 0\) such that \(||\Ueps{\varepsilon}{t}[\tilde{\psi}_h]||_{\infty} \leq \overline{K}_{\psi}\). Therefore, by \protect\hyperlink{lem:l2-consistency}{Lemma \ref{lem:l2-consistency}}, \begin{equation}\begin{aligned} \Pr[\mathcal{A}_2] \geq 1 - \rho_{N,2}(\delta/2, 1, \overline{K}_{\psi}, K_{\psi,2}, \overline{K}_{\psi}) \end{aligned} \nonumber \end{equation} with \(K_{\psi,2} := \overline{C}_{\psi,2} \overline{C}_{p,\lambda}/\underline{C}_{p,\lambda}\). As for the event \(\mathcal{A}_1\), \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} gives a lower bound for the probability that this occurs; then, a union bound gives the probability as in the statement of the Lemma. \end{proof} The consistency of propagations given in \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} applies readily to \(\psi_h\), as can be seen from a slight modification to account for the norm in the proof of \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}}. However, in practice, it is only reasonable to expect that we can construct \(\CSepsN{\varepsilon}{t} := \psi_h/||\UepsN{\varepsilon}{t}[\psi_h]||_N\). By combining \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} and \protect\hyperlink{lem:prop-cs-norm-consistency}{Lemma \ref{lem:prop-cs-norm-consistency}}, we have the consistency between \(\CSepsN{\varepsilon}{t}\) and \(\CSeps{\varepsilon}{t}\). Now we use this to see the probabilistic consistency between the graph and manifold versions of the inner product of (the modulus squared of) a propagated, normalized coherent state with an arbitrary smooth function, in: \begin{lemma} \hypertarget{lem:prop-cs-funcexpect-consistency}{\label{lem:prop-cs-funcexpect-consistency}} Let \(\lambda \geq 0\), \(h_0 \in (0, 1]\) be given by \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}}, \(\alpha \geq 1\), and \((x_0, \xi_0) \in T^*\mathcal{M}\). 
Then, for \(\psi_h\) a coherent state localized about \((x_0, \xi_0)\), \(|t| \leq \operatorname{inj}(x_0)\) and given \(u \in C^{\infty}\), there are constants \(C_{\psi,\lambda,p,u}, C'_{\psi,\lambda,p,u} > 0\) and \(\overline{K}_{\psi}, K_{\psi,2}, \underline{\tilde{K}}_{\psi,2}, \tilde{K}_{\psi,2}, \overline{\tilde{K}}_{\psi} > 0\) depending on \(\psi_h\) and independent of \(h\) with \(C_{\psi,\lambda,p,u}, C'_{\psi,\lambda,p,u} > 0\) additionally depending on \(u\) such that for all \(h \in (0, h_0]\) and \(\delta \in (0, 4||u||_{\infty}]\), \begin{align*} \Pr&[| \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p | > \delta] \\ &\quad\quad \leq \Omega_{\lambda,t,N}(C'_{\psi,\lambda,p,u} h^{\frac{n}{2}} \delta, \epsilon, \overline{\tilde{K}}_{\psi}) + \rho_N(\delta,h^{-\frac{n}{4}}, \overline{K}_{\psi}^{\frac{1}{2}}, K_{\psi,2}, h^{-\frac{n}{4}} \overline{K}_{\psi}^{\frac{1}{2}} ||u||_{\infty}) \\ &\quad\quad\quad\quad + \rho_{N,2}(h^{\frac{n}{2}} \tilde{\underline{K}}_{\psi,2}, 1, \overline{\tilde{K}}_{\psi}, h^{\frac{n}{2}} \tilde{K}_{\psi,2}, \overline{\tilde{K}}_{\psi}) + \Xi_{\lambda,N}(C_{\psi,\lambda,p,u} h^{\frac{n}{2}} \delta, \psi_h) \\ &\quad\quad =: \tilde{\Omega}_{\lambda,t,N}(\delta,\epsilon,\psi_h,u), \end{align*} wherein \begin{equation}\begin{aligned} \psi_{h,t,N} := \psi_h/||U_{\lambda,\epsilon,N}^t[\psi_h]||_N, \quad\quad \psi_{h,t} := \psi_h/||U_{\lambda,\epsilon}^t[\psi_h]||_p . \end{aligned} \nonumber \end{equation} This holds as well in the perturbed case, wherein we have for \(\varepsilon > 0\), \begin{align*} \Pr&[| \langle |\UepsN{\varepsilon}{t}[\CSepsN{\varepsilon}{t}]|^2 , u \rangle_N - \langle |\Ueps{\varepsilon}{t}[\CSeps{\varepsilon}{t}]|^2 , u \rangle_p | > \delta] \\ &\quad\quad \leq \Omega^*_{\lambda,t,N}(C'_{\psi,\lambda,p,u} h^{\frac{n}{2}} \delta,\varepsilon, \epsilon, \overline{\tilde{K}}_{\psi}) + \rho_N(\delta,h^{-\frac{n}{4}}, \overline{K}_{\psi}^{\frac{1}{2}}, K_{\psi,2}, h^{-\frac{n}{4}} \overline{K}_{\psi}^{\frac{1}{2}} ||u||_{\infty}) \\ &\quad\quad\quad\quad + \rho_{N,2}(h^{\frac{n}{2}} \tilde{\underline{K}}_{\psi,2}, 1, \overline{\tilde{K}}_{\psi}, h^{\frac{n}{2}} \tilde{K}_{\psi,2}, \overline{\tilde{K}}_{\psi}) + \Xi_{\lambda,N}(C_{\psi,\lambda,p,u} h^{\frac{n}{2}} \delta, \psi_h) \\ &\quad\quad =: \tilde{\Omega}^*_{\lambda,t,N}(\delta,\varepsilon,\epsilon,\psi_h,u) \end{align*} with new constants and \begin{equation}\begin{aligned} \CSepsN{\varepsilon}{t} := \psi_h/||\UepsN{\varepsilon}{t}[\psi_h]||_N, \quad\quad \CSeps{\varepsilon}{t} := \psi_h/||\Ueps{\varepsilon}{t}[\psi_h]||_p . 
\end{aligned} \nonumber \end{equation} \end{lemma} \begin{proof} We write, \begin{align*} | \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 & , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p | \\ &\quad \leq | \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_N | \\ &\quad\quad + | \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p | \\ &\quad =: I + II \end{align*} and recalling that \(\tilde{\psi}_h = e^{\frac{i}{h} \phi}\) is the \emph{un-normalized} coherent state, \begin{align*} I &= | \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 - |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_N | \\ &= \frac{| \langle c |U_{\lambda,\epsilon,N}^{t}[\tilde{\psi}_{h}]|^2 - (c + c_N - c) |U_{\lambda,\epsilon}^{t}[\tilde{\psi}_h]|^2 , u \rangle_N |}{c_N c} \\ &\leq \frac{c | \langle |U_{\lambda,\epsilon,N}^{t}[\tilde{\psi}_{h}]|^2 - |U_{\lambda,\epsilon}^{t}[\tilde{\psi}_h]|^2 , u \rangle_N | + |c_N - c| \, | \langle |U_{\lambda,\epsilon}^{t}[\tilde{\psi}_h]|^2 , u \rangle_N |}{c_N c} \\ &\leq ||u||_{\infty} \left( c_{N}^{-1} \, ||(U_{\lambda,\epsilon,N}^t - U_{\lambda,\epsilon}^t)[\tilde{\psi}_h]||_{\infty} \, ||(U_{\lambda,\epsilon,N}^t + U_{\lambda,\epsilon}^t)[\tilde{\psi}_h]||_{N} + \frac{|c_N - c|}{c_N c} ||U_{\lambda,\epsilon}^t [\tilde{\psi}_h]||_N^2 \right), \\ &\quad \quad c := ||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_p^2, \quad c_N := ||U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]||_N^2 . \end{align*} Suppose the occurrence of the following events: \begin{align*} &\mathcal{A}_1 : & & ||(U_{\lambda,\epsilon,N}^t - U_{\lambda,\epsilon}^t)[\tilde{\psi}_h]||_{\infty} \leq \delta_1, \\ &\mathcal{A}_2 : & & | \, c_N - c \, | \leq \delta_1 , \\ &\mathcal{A}_3 : & & | \, ||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_N^2 -c \, | \leq c' , \\ &\mathcal{A}_4 : & & |\langle | U_{\lambda,\epsilon}^t[\psi_{h,t}] |^2, u \rangle_N - \langle | U_{\lambda,\epsilon}^t[\psi_{h,t}] |^2, u \rangle_p| \leq \delta/2 \end{align*} for some \(0 < \delta_1 \leq c/2\) and \(0 < c' \leq c\). Then, \begin{equation}\begin{aligned} I \leq \delta_1 ||u||_{\infty} c^{-1} (4 + 6\sqrt{c}) . \end{aligned} \nonumber \end{equation} and \(II \leq \delta/2\). So letting \(\delta_1 = \delta c [2||u||_{\infty} (4 + 6 \sqrt{c})]^{-1}\) then gives for \(\delta \leq ||u||_{\infty}(4 + 6\sqrt{c}) \in 4 ||u||_{\infty} + O(h^{\frac{n}{4}})\), \begin{equation} \label{eq:mean-consistency-err-bound} I + II \leq \delta . \end{equation} We wish to apply the foregoing probabilistic bounds from the Lemmas of this section and from the first part of \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} to bound the probabilities of the events \(\mathcal{A}_1, \ldots, \mathcal{A}_4\). In light of \protect\hyperlink{lem:l2-consistency}{Lemma \ref{lem:l2-consistency}}, we will have probabilistic bounds on the events \(\mathcal{A}_3\) and \(\mathcal{A}_4\) if we have \(L^2\) and \(L^{\infty}\) bounds on \(U_{\lambda,\epsilon}^t[\tilde{\psi}_h]\) and \(U_{\lambda,\epsilon}^t[\psi_{h,t}]\). These are as follows: by \(\eqref{eq:cs-prop-l2-bound}\), \(c \leq h^{\frac{n}{2}} \overline{C}_{\psi,2} ||p||_{\infty} \overline{C}_{p,\lambda}/\underline{C}_{p,\lambda}\). 
Likewise, by the proof of \protect\hyperlink{thm:sym-cs-psido}{Theorem \ref{thm:sym-cs-psido}} we have a constant \(\underline{C}_{\psi,2} > 0\) also \(O(1)\) in \(h\) such that \(C_h(x_0,\xi_0)^{-2} \geq h^{\frac{n}{2}} \underline{C}_{\psi,2}\), so \begin{equation}\begin{aligned} c \geq \frac{\inf p_{\lambda,\epsilon}}{||p_{\lambda,\epsilon}||_{\infty}} ||\tilde{\psi}_h||_p^2 \geq h^{\frac{n}{2}} \underline{C}_{\psi,2} \underline{C}_p \frac{\underline{C}_{p,\lambda}}{\overline{C}_{p,\lambda}} . \end{aligned} \nonumber \end{equation} Since \(|U_{\lambda,\epsilon}^t[\psi_{h,t}]|^2 = |U_{\lambda,\epsilon}^t[\tilde{\psi}_h]|^2/c\) and there is a constant \(\overline{C}_{\psi} > 0\) such that \(||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||^2_{\infty} \leq \overline{C}_{\psi}\), we have, \begin{equation}\begin{aligned} ||U_{\lambda,\epsilon}^t[\psi_{h,t}]||_{\infty}^2 \leq ||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty}^2/c \leq h^{-\frac{n}{2}} \overline{C}_{\psi} \overline{C}_{p,\lambda}/(\underline{C}_{\psi,2} \underline{C}_{p,\lambda} \underline{C}_p) =: h^{-\frac{n}{2}} \overline{K}_{\psi} \end{aligned} \nonumber \end{equation} and there is a constant \(K_{\psi,2} > 0\) such that \begin{align*} ||U_{\lambda,\epsilon}^t[\psi_{h,t}]||_{L^2}^2 &= ||U_{\lambda,\epsilon}^t[\tilde{\psi}_{h}]||_{L^2}^2/c \\ &= \langle \tilde{\psi}_h | p \, p_{\lambda,\epsilon} U_{\lambda,\epsilon}^{-t} \frac{1}{p \, p_{\lambda,\epsilon}} U_{\lambda,\epsilon}^t | \tilde{\psi}_h \rangle/c \\ &\leq \overline{C}_{\psi,2}(1 + O(h))\overline{C}_{p,\lambda}/(\underline{C}_{\psi,2} \underline{C}_{p,\lambda} \underline{C}_p) \\ &\leq K_{\psi,2} , \end{align*} wherein the second equality follows from the fact that the adjoint of \(U_{\lambda,\epsilon}^t\) over \(L^2(\mathcal{M}, d\nu_g)\) is given by \(p \circ (U_{\lambda,\epsilon}^t)^* \circ 1/p\) with \((U_{\lambda,\epsilon}^t)^*\) the adjoint on \(L^2(\mathcal{M}, p d\nu_g)\). With this, we find on writing \begin{equation}\begin{aligned} \langle |U_{\lambda,\epsilon}^t[\psi_{h,t}]|^2, u \rangle_N - \langle |U_{\lambda,\epsilon}^t[\psi_{h,t}]|^2, u \rangle_p = \langle |U_{\lambda,\epsilon}^t[\psi_{h,t}]|, v \rangle_N - \langle |U_{\lambda,\epsilon}^t[\psi_{h,t}]|, v \rangle_p , \\ v := |U_{\lambda,\epsilon}^t[\psi_{h,t}]| u \end{aligned} \nonumber \end{equation} and applying \protect\hyperlink{lem:l2-consistency}{Lemma \ref{lem:l2-consistency}} that the event \(\mathcal{A}_4\) happens with probability at least \(1 - \rho_N(\delta/2,h^{-\frac{n}{4}}, \overline{K}_{\psi}^{\frac{1}{2}}, K_{\psi,2}, h^{-\frac{n}{4}} \overline{K}_{\psi}^{\frac{1}{2}} ||u||_{\infty})\) and the event \(\mathcal{A}_3\) with probability at least \(1 - \rho_{N,2}(h^{\frac{n}{2}} \tilde{\underline{K}}_{\psi,2}, 1, \overline{C}_{\psi}^{\frac{1}{2}}, h^{\frac{n}{2}} \tilde{K}_{\psi,2}, \overline{C}_{\psi}^{\frac{1}{2}})\) with \(\tilde{\underline{K}}_{\psi,2} := \underline{C}_{\psi,2} \underline{C}_p \underline{C}_{p,\lambda}/\overline{C}_{p,\lambda}\) and \(\tilde{K}_{\psi,2} := \overline{C}_{\psi,2} ||p||_{\infty} \overline{C}_{p,\lambda} /\underline{C}_{p,\lambda}\). 
As for event \(\mathcal{A}_2\), we may apply \protect\hyperlink{lem:prop-cs-norm-consistency}{Lemma \ref{lem:prop-cs-norm-consistency}} while noting that in that notation, \(C_{\psi} := \overline{C}_{\psi,2}\) and \begin{align*} \delta_1 &\geq \delta h^{\frac{n}{2}} \frac{\tilde{\underline{K}}_{\psi,2}}{2||u||_{\infty}(4 + 6 \sqrt{c})} \\ &\geq \delta h^{\frac{n}{2}} \frac{\tilde{\underline{K}}_{\psi,2}}{2||u||_{\infty}(4 + 6 \tilde{K}_{\psi,2}^{\frac{1}{2}})} =: C_{\psi,\lambda,p,u} h^{\frac{n}{2}} \delta . \end{align*} Hence, we have that this event happens with probability at least \(1 - \Xi_{\lambda,N}(h^{\frac{n}{2}} C_{\psi,\lambda,p,u} \delta, \psi_h)\). The proof of the first part of \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} shows that the event \(\mathcal{A}_1\) happens with probability at least \(1 - \Omega_{\lambda,t,N}(\delta_1/(2\overline{C}_{\psi} + 1 + \delta_1), \epsilon,\overline{C}_{\psi})\). Thus, after taking into account the condition that \(\delta \leq 4 ||u||_{\infty}\), which suffices to achieve \(\eqref{eq:mean-consistency-err-bound}\), we have a constant \(C'_{\psi,\lambda,p,u} > 0\) such that \begin{equation}\begin{aligned} \Pr[\mathcal{A}_1] \geq 1 - \Omega_{\lambda,t,N}(C_{\psi,\lambda,p,u}' h^{\frac{n}{2}} \delta, \epsilon, \overline{C}_{\psi}). \end{aligned} \nonumber \end{equation} Hence, the probabilistic bounds in the statement of the present Lemma follow from taking a union bound and noting that the analogous statements follow for the perturbed case. \end{proof} Combined with \protect\hyperlink{lem:prop-cs-mean-consistency}{Lemma \ref{lem:prop-cs-mean-consistency}}, this shows that we may observe the propagation of classical configuration-space observables along the geodesic flow of \(\mathcal{M}\) through their means with propagated coherent states on the graph approximating its structure: \begin{theorem} \hypertarget{thm:mean-prop-geoflow-consistency}{\label{thm:mean-prop-geoflow-consistency}} Let \(\lambda \geq 0\), \(\alpha \geq 1\), \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(|t| \leq \operatorname{inj}(x_0)\). Then, for \(\psi_h\) a coherent state localized about \((x_0, \xi_0)\) and given \(u \in C^{\infty}\), there is a constant \(C > 0\) such that we have for all \(h \in (0, \min\{ h_0, 4||u||_{\infty} \}]\) with \(h_0 \in (0, 1]\) from \protect\hyperlink{prop:glap-prop-cs-localized}{Proposition \ref{prop:glap-prop-cs-localized}} and \(\epsilon := h^{2 + \alpha}\), \begin{equation} \label{eq:thm-mean-prop-geoflow-consistency-bound} \Pr[|\langle |U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2, u \rangle_N - u(x_t)| > Ch] \leq \tilde{\Omega}_{\lambda,t,N}(h, \epsilon,\psi_h, u) . \end{equation} Letting \(\varepsilon \in O(h^{1 + \alpha})\), we also have \begin{equation} \label{eq:thm-mean-prop-perturbed-glap-geoflow-consistency-bound} \Pr[|\langle |\UepsN{\varepsilon}{t}[\psi_{h,t,N}]|^2, u \rangle_N - u(x_t)| > Ch] \leq \tilde{\Omega}^*_{\lambda,t,N}(h, \varepsilon, \epsilon,\psi_h, u) . \end{equation} \end{theorem} \begin{proof} Suppose we are in the event that \(| \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p | \leq h\). 
Then by \protect\hyperlink{lem:prop-cs-mean-consistency}{Lemma \ref{lem:prop-cs-mean-consistency}} there is a constant \(C' > 0\) such that, \begin{align*} | \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 , u \rangle_N - u(x_t)| &\leq | \langle |U_{\lambda,\epsilon,N}^{t}[\psi_{h,t,N}]|^2 , u \rangle_N - \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p| \\ &\quad\quad + | \langle |U_{\lambda,\epsilon}^{t}[\psi_{h,t}]|^2 , u \rangle_p - u(x_t)| \\ &\leq (1 + C')h . \end{align*} Now, \protect\hyperlink{lem:prop-cs-funcexpect-consistency}{Lemma \ref{lem:prop-cs-funcexpect-consistency}} gives a lower bound on the occurrence of this event, under the condition that \(h \in (0, \min\{ h_0, 4||u||_{\infty} \}]\), as \(1 - \tilde{\Omega}_{\lambda,t,N}(h,\epsilon,\psi_h,u)\). The perturbed case follows along the same lines upon applying Lemmas \protect\hyperlink{lem:prop-cs-mean-consistency}{\ref{lem:prop-cs-mean-consistency}} and \protect\hyperlink{lem:prop-cs-funcexpect-consistency}{\ref{lem:prop-cs-funcexpect-consistency}} with \(\varepsilon \in O(h^{1 + \alpha})\). \end{proof} \hypertarget{observing-geodesics-on-graphs}{% \subsection{Observing Geodesics on Graphs}\label{observing-geodesics-on-graphs}} We are now in the position to implement \protect\hyperlink{prop:local-mean-geodesic-flow}{Proposition \ref{prop:local-mean-geodesic-flow}} on the graph holding an approximative structure of \(\mathcal{M}\) and to give its probabilistic rate of consistency: to summarize the procedure, we give the example of an, \begin{quote} \textbf{Algorithm}. Having constructed \(\Delta_{\lambda,\epsilon,N}\) and \(\tilde{\psi}_h\) localized at \((x_0, \xi_0)\) on \(\Lambda_N\) with \(\epsilon = h^{2 + \alpha}\) and \(\alpha \in [1,2]\), \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item construct \(U_{\lambda,\epsilon,N}^t := e^{i t \Delta_{\lambda,\epsilon,N}^{\frac{1}{2}}}\) by spectral methods, \item identify \(\hat{x}_{N,t} = \arg\max_{\mathcal{X}_N} |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|\), \item compute \(||U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]||_N^2\) to form \(\psi_{h,t,N}\), set \(\overline{\varepsilon}_t \sim \sqrt{h}\) and compute \(\bar{\iota}^t_{N,j} := \langle |U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2, \chi_t(||\iota(\cdot) - \iota(\hat{x}_{N,t})||_{\mathbb{R}^{D}}) \iota_j \rangle_N\) for each \(j \in [D]\) with a smooth function \(\chi_t : \mathbb{R} \to \mathbb{R}\) satisfying: \(\operatorname{supp} \chi_t \subset [-R_t, R_t]\) such that \(R_t > \overline{\varepsilon}_t\) and \(\chi_t \equiv 1\) on \([-\overline{\varepsilon}_t, \overline{\varepsilon}_t]\), \item identify \(\bar{x}_{N,\iota,t} := \arg\min_{x \in \mathcal{X}_N} ||\iota(x) - \bar{\iota}^t_{N}||_{\mathbb{R}^{D}}\) with \(\bar{\iota}^t_N := (\bar{\iota}^t_{N,1}, \ldots, \bar{\iota}^t_{N,D})\). \end{enumerate} The algorithm outputs \(\bar{x}_{N,\iota,t} \in \Lambda_N\). \end{quote} \begin{remark} We can simply take \(\chi_t \equiv 1\) when working with extrinsic coordinates, since they are globally defined. This can be preferable in situations where using the cut-off \(\chi_t\) increases uncertainty without significant cost benefit. \end{remark} Anticipating applications such as \protect\hyperlink{alg:extrinsic-mean-geo-flow}{Algorithm}, we make the following, \begin{definition} Let \(\psi_h\) be a coherent state localized at \((x_0, \xi_0) \in T^*\mathcal{M}\) and consider its propagation with respect to \(U_{\lambda,\epsilon,N}^t\). 
The \emph{intrinsic sample maximizer} of the propagated coherent state is, \begin{equation}\begin{aligned} \hat{x}_{N,t} := \arg\max_{\mathcal{X}_N} |U_{\lambda,\epsilon,N}^t[\tilde{\psi}_h]|^2 . \end{aligned} \nonumber \end{equation} \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item \emph{Extrinsic case}. The \emph{extrinsic sample mean} of the propagated coherent state is \(\bar{x}_{N,\chi_{\iota}, t} \in \Lambda_N\), defined with respect to a cut-off \(\chi_{\iota} \in C_c^{\infty}(\mathbb{R}^{D})\), as the closest point on \(\Lambda_N\) to \begin{align*} &&\bar{\iota}^t_N(x_0, \xi_0) &:= (\bar{\iota}_{N,1}^t(x_0, \xi_0), \ldots, \bar{\iota}_{N,D}^t(x_0, \xi_0)), \\ \quad\quad \text{with} && \bar{\iota}_{N,j}^t(x_0, \xi_0) &:= \langle \, |U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2 \,, (\chi_{\iota} \circ \iota) \, \iota_j \rangle_N. \end{align*} That is, \(\bar{x}_{N,\chi_{\iota},t} := \arg\min_{X \in \Lambda_N} ||X - \bar{\iota}^t_N(x_0, \xi_0)||_{\mathbb{R}^{D}}\). \item \emph{Local coordinates}. Let \(\mathscr{O}_t \subset \mathcal{M}\) be an open neighbourhood of \(\hat{x}_{N,t}\). Then, given a diffeomorphic coordinate mapping \(u : \mathscr{O}_t \to V_t \subset \mathbb{R}^{n}\) and \(\chi \in C_c^{\infty}(\mathbb{R}^{n})\) a cut-off with \(\operatorname{supp} \chi \subset V_t\), the \(u\)-\emph{sample mean with respect to} \(\chi\) of the propagated coherent state is the point \(\bar{x}_{N,u,\chi,t} \in V_t\) that is the closest point on \(\mathscr{V}_N := u[\mathcal{X}_N \cap \mathscr{O}_t]\) to \begin{align*} && \bar{u}^t_N(x_0, \xi_0) &:= (\bar{u}_{N,1}^t(x_0, \xi_0), \ldots, \bar{u}_{N,n}^t(x_0, \xi_0)), \\ \quad\quad\text{with} && \bar{u}_{N,j}^t(x_0, \xi_0) &:= \langle \,|U_{\lambda,\epsilon,N}^t[\psi_{h,t,N}]|^2 , (\chi \circ u) \, u_j \, \rangle_N . \end{align*} That is, \(\bar{x}_{N,u,\chi,t} := \arg\min_{X \in \mathscr{V}_N} ||X - \bar{u}_N^t(x_0, \xi_0)||_{\mathbb{R}^{n}}\). \end{enumerate} \end{definition} That the \protect\hyperlink{alg:extrinsic-mean-geo-flow}{Algorithm} outputs, with high probability, a point within an \emph{intrinsic} distance of \(O(h)\) from \(x_t\) is an application of the following, \begin{proposition} \hypertarget{prop:mean-geodesic-recover-consistency}{\label{prop:mean-geodesic-recover-consistency}} Let \(\lambda \geq 0\), \(\alpha \geq 1\), \((x_0, \xi_0) \in T^*\mathcal{M}\) and \(|t| \leq \operatorname{inj}(x_0)\). 
Then, for \(\psi_h\) a coherent state localized about \((x_0, \xi_0)\), given any open neighbourhood \(\mathscr{O}_t \subset \mathcal{M}\) about \(\hat{x}_{N,t}\) with its diffeomorphic coordinate mapping \(u : \mathscr{O}_t \to V_t \subset \mathbb{R}^{n}\), there are constants \(h_{u,\max}, C_{u,\max} > 0\) such that if \(h \in (0, h_{u,\max})\) and \(\overline{\mathscr{B}}_t := \overline{B}_{C_{u,\max} \sqrt{h}}(u(\hat{x}_{N,t}), ||\cdot||_{\mathbb{R}^{n}}) \subset V_t\), then for any smooth cut-off \(\chi \in C_c^{\infty}(\mathbb{R}^{n}, [0, 1])\) with \(\operatorname{supp} \chi \subset V_t\) that is \(\chi \equiv 1\) on \(\overline{\mathscr{B}}_t\), we have \begin{align} \begin{split} \label{eq:prob-mean-local-geoflow} \Pr&[d_g(u^{-1}(\bar{x}_{N,u,\chi,t}), x_t) > C_u h] \\ &\quad\quad\leq n \, \tilde{\Omega}_{\lambda,t,N}(h,\epsilon,\psi_h,K_{\chi,u}) + \Omega_{\lambda,t,N}(C h^{\frac{n}{2}}, \epsilon, K_{\psi}) + e^{-2 N C'' h^{2 n}} + e^{-2 N C' h^{n}} , \end{split} \end{align} for \(C_u, C'' > 0\) constants, \(K_{\psi}, C' > 0\) constants as in \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} and \(K_{\chi,u} := \max_{j \in [n]} || 1 + (\chi \circ u) \, u_j ||_{\infty}\). Likewise, there are constants \(h_{\iota,\max}, C_{\iota,\max} > 0\) such that for all \(h \in [0, h_{\iota,\max})\), given any cut-off \(\chi_{\iota} \in C_c^{\infty}(\mathbb{R}^{D}, [0, 1])\) such that \(\chi_{\iota} \equiv 1\) on \(\overline{\mathscr{B}}_{\iota,t} := \overline{B}_{C_{\iota,\max}\sqrt{h}}(\iota(\hat{x}_{N,t}), ||\cdot||_{\mathbb{R}^{D}})\), we have \begin{align} \begin{split} \label{eq:prob-mean-extrinsic-geoflow} \Pr&[d_g(\iota^{-1}(\bar{x}_{N,\chi_{\iota},t}), x_t) > C_\iota h] \\ &\quad\quad\leq D \, \tilde{\Omega}_{\lambda,t,N}(h,\epsilon,\psi_h,K_{\chi,\iota}) + \Omega_{\lambda,t,N}(C h^{\frac{n}{2}}, \epsilon, K_{\psi}) + e^{-2 N C'' h^{2 n}} + e^{-2 N C' h^{n}} , \end{split} \end{align} for \(C_{\iota} > 0\) a constant and \(K_{\chi,\iota} := \max_{j \in [D]} || 1 + (\chi_{\iota} \circ \iota) \, \iota_j ||_{\infty}\). \end{proposition} \begin{proof} We have by \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} constants \(h_{\max}', C_{\max}' > 0\) such that for all \(h \in (0, h_{\max}')\), \(d_g(\hat{x}_{N,t}, x_t) \leq C_{\max}' \sqrt{h}\) with probability at least \(1 - \Omega_{\lambda,t,N}(C h^{\frac{n}{2}}, \epsilon, K_{\psi}) - e^{-2 N C' h^{n}}\) for some \(C, C' > 0\) and \(K_{\psi} := \sup_{\{|t| \leq \operatorname{inj}(x_0) \}} \sup_{h \in (0, h'_{\max}]}||U_{\lambda,\epsilon}^t[\tilde{\psi}_h]||_{\infty}\). Assume this event and call it \(\mathcal{A}_0\). \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item Following the arguments in the proof of the first part of \protect\hyperlink{prop:local-mean-geodesic-flow}{Proposition \ref{prop:local-mean-geodesic-flow}}, we have constants \(h_{u,0} \in (0, h'_{\max}]\) and \(C_{u,\max} > 0\) such that for all \(h \in (0, h_{u,0})\), \(||u(\hat{x}_{N,t}) - u(x_t)||_{\mathbb{R}^{n}} \leq C_{u,\max} \sqrt{h}\). 
\item By \protect\hyperlink{thm:mean-prop-geoflow-consistency}{Theorem \ref{thm:mean-prop-geoflow-consistency}} we have for each \(j \in [n]\), a constant \(C_j > 0\) such that for all \(h \in (0, \min\{ h_0, 4||u_{\chi,j}||_{\infty} \})\), with \(u_{\chi,j} := (\chi \circ u) \, u_j\), \(|\bar{u}^t_{N,j} - u_{\chi,j}(x_t)| \leq C_j h\) with probability at least \(1 - \tilde{\Omega}_{\lambda,t,N}(h, \epsilon, \psi_h, u_{\chi,j})\). Assuming also these events, say \(\mathcal{A}_1, \ldots, \mathcal{A}_{n}\) and that for each \(h \in (0, \min\{h_{u,0}, 4\min_{j \in [n]} ||u_{\chi,j}||_{\infty}\})\), \(\overline{\mathscr{B}} := B(u(\hat{x}_{N,t}), ||\cdot||_{\mathbb{R}^{n}} ; C_{u,\max} \sqrt{h}) \subset V_t\) and \(\chi \equiv 1\) on \(\overline{\mathscr{B}}\), we have that \(|\bar{u}_{N,j}^t - u_j(x_t)| \leq C_j h\) and hence, \(||\bar{u}^t_{N,\chi}(x_0, \xi_0) - u(x_t)||_{\mathbb{R}^{n}} \leq (C_1^2 + \cdots + C_{n}^2)^{\frac{1}{2}} h =: C_u' h\). \item Then, there are \(C_u, h_{u,1} \in (0, 1]\) such that for all \(h \in [0, h_{u,1}]\) and any \(x^*_{u,\chi,N} \in B_{C'_u h}(\bar{u}^t_{N,\chi}(x_0, \xi_0), || \cdot ||_{\mathbb{R}^{n}})\), \(d_g(u^{-1}(x^*_{u,\chi,N}), x_t) \leq (C_u/2) h\). Although the interval for \(h\) depends on \(\chi\), we may translate the coordinate mapping \(u\) to work with \(\tilde{u} := I_{n} + u : \mathscr{O}_t \to \tilde{V}_t\) and \(\tilde{\chi} := \chi(I_{n} - \cdot)\) so that \(\tilde{V}_t \subset \mathbb{R}^{n} \setminus B_1(0, ||\cdot||_{\mathbb{R}^{n}})\) and hence, \(||u_{\chi,1}||_{\infty} \geq 1\) in the events \(\mathcal{A}_j\). With this, we achieve the same result without affecting \(h_{u,0}, h_{u,1}\), \(C_{u,\max}\) and \(C_u\): \emph{viz.}, \(||u(\hat{x}_{N,t}) - u(x_t)||_{\mathbb{R}^{n}} = ||\tilde{u}(\hat{x}_{N,t}) - \tilde{u}(x_t)||_{\mathbb{R}^{n}} \leq C_{u,\max} \sqrt{h}\) and \(d_g(u^{-1}[\bar{u}_{N,\chi}^t(x_0, \xi_0)], x_t) = d_g(\tilde{u}^{-1}[\bar{\tilde{u}}_{N,\tilde{\chi}}^t(x_0, \xi_0)], x_t) \leq (C_u/2) h\) and these are now valid over \(h \in (0, \min\{ h_{u,0}, h_{u,1} \})\). \item \(\sloppy\) The probability that at least one sample from \(\{ x_1, \ldots, x_N \}\) belongs to \(B_{\frac{C_u}{2} h}(u^{-1}[\bar{u}^t_{N,\chi}(x_0, \xi_0)], g)\) can be lower bounded in the same way as in the proof of \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}}, \emph{viz.}, this is at least \(1 - e^{-2N C'' h^{2 n}}\) with \(C'' > 0\) a constant. Thus, possibly after translations and adjusting the probability for each \(\mathcal{A}_j\) to at least \(1 - \tilde{\Omega}_{\lambda,t,N}(h, \epsilon, \psi_h, \tilde{u}_{\tilde{\chi},j})\), then taking a union bound and \(h_{u,\max} := \min\{ h_{u,0}, h_{u,1}\}\), we have the probabilistic bound in the first part of the statement of the Proposition. \end{enumerate} The second part of the Proposition follows the same line of arguments. \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item By \protect\hyperlink{thm:mean-prop-geoflow-consistency}{Theorem \ref{thm:mean-prop-geoflow-consistency}} we have for each \(j \in [D]\), a constant \(C_j > 0\) such that for all \(h \in (0, \min\{ h_0, 4||\iota_{\chi,j}||_{\infty} \})\), with \(\iota_{\chi,j} := (\chi_{\iota} \circ \iota) \, \iota_j\), \(|\bar{\iota}^t_{N,j} - \iota_{\chi,j}(x_t)| \leq C_j h\) with probability at least \(1 - \tilde{\Omega}_{\lambda,t,N}(h, \epsilon, \psi_h, \iota_{\chi,j})\). 
\item Assuming these events, say \(\mathcal{A}_1, \ldots, \mathcal{A}_{D}\) along with the event \(\mathcal{A}_0\), due to the \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}} we have that if for each \(h \in (0, \min\{(\kappa/C_{\max}')^2, 4\min_{j \in [D]} ||\iota_{\chi,j}||_{\infty}\})\), \(\overline{\mathscr{B}} := B(\iota(\hat{x}_{N,t}), ||\cdot||_{\mathbb{R}^{D}} ; C_{\max}' \sqrt{h}) \subset V_t\) and \(\chi_{\iota} \equiv 1\) on \(\overline{\mathscr{B}}\), then \(|\bar{\iota}_{N,j}^t - \iota_j(x_t)| \leq C_j h\) and hence, \(||\bar{\iota}^t_N(x_0,\xi_0) - \iota(x_t)||_{\mathbb{R}^{D}} \leq (C_1^2 + \cdots + C_{D}^2)^{\frac{1}{2}} h =: C'_{\iota} h\). \item The \protect\hyperlink{assumptions}{Assumptions \ref{assumptions}} further imply that if \(h < \kappa/C'_{\iota}\) and \(x_{\iota,N}^* \in B(\bar{\iota}_N^t(x_0,\xi_0), || \cdot ||_{\mathbb{R}^{D}} ; C'_{\iota} h) \cap \Lambda\), then \(d_g(\iota^{-1}(x_{\iota,N}^*), x_t) \leq 2 C'_{\iota} h\). As before, we have that with probability at least \(1 - e^{-2N C'' h^{2 n}}\), there is \(j \in [N]\) such that \(d_g(x_j, \iota^{-1}(x^*_{\iota,N})) \leq h\), hence \(d_g(x_j, x_t) \leq (1 + 2 C'_{\iota}) h\). \item Now again we have the joint interval of \(h\) depending on \(\chi\), but as in the previous part, we can translate the embedding and \(\chi_{\iota}\), so that upon taking \(h_{\iota,\max} := \min\{ h_{\max}', (\kappa/C_{\max}')^2, \kappa/C'_{\iota} \}\) and a union bound (with respect to the shifted embedding), the probabilistic bound in the second part of the statement of the Proposition follows. \end{enumerate} \end{proof} \begin{remark} The preceding bounds hold also for the perturbed case with \(\varepsilon \in O(h^{1 + \alpha})\). \end{remark} The \protect\hyperlink{alg:extrinsic-mean-geo-flow}{Algorithm} outlined above is expanded in \citep{qml} and put into practice on model examples as well as real-world datasets. \hypertarget{summary-of-convergence-rates}{% \subsection{Summary of Convergence Rates}\label{summary-of-convergence-rates}} The foregoing discussions involve various inter-dependent probabilistic bounds. We now summarize them by giving their \emph{unwrapped} dominant terms as closed-form Bernstein-type exponential bounds: \begin{longtable}[]{@{} >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}}@{}} \toprule \begin{minipage}[b]{\linewidth}\raggedright Approx. 
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Unperturbed \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Perturbed \end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright Bound functions \end{minipage} \\ \midrule \endhead \(A_N \sim_{\delta} A\): Lemmas \protect\hyperlink{lem:avgop-consistent}{\ref{lem:avgop-consistent}} \& \protect\hyperlink{lem:rwlap-conv}{\ref{lem:rwlap-conv}} (pointwise) & \(\eqref{eq:thm-rwlap-conv-bound}, \eqref{eq:thm-avgop-consistent-bound}\) \(\leq CN \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^2}{C} \right)\) & & \(\gamma_{\lambda,N}(\delta ; u)\) \\ \(B_N \sim_{\delta} B\): \protect\hyperlink{thm:sqrt-conv}{Theorem \ref{thm:sqrt-conv}}, \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}} (pointwise) & \(\eqref{eq:thm-sqrt-conv-bound}\) \(\leq C N \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^{4(1 + \upsilon)}}{C} \right)\) & \(\eqref{eq:lem-sqrt-perturb-eps-conv-bound}\) \(\leq C N \exp\left( -\frac{N \epsilon^{\frac{n}{2}} \delta^2 \varepsilon^{(1 + 2\upsilon)}}{C} \right)\) & \begin{minipage}[t]{\linewidth}\raggedright \(\gamma_{\lambda,N,\upsilon}(\delta ; u) :=\) \(\gamma_{\lambda,N}(C_0 \delta^{2(1 + \upsilon(\beta))} ; u)\)\\ \(\gamma^*_{\lambda,N,\upsilon}(\delta, \varepsilon ; u) :=\) \(\gamma_{\lambda,N}(C_0 \delta \varepsilon^{(\frac{1}{2} + \upsilon(\beta))} ; u)\)\strut \end{minipage} \\ \(U_N^t \sim_{\delta} U^t\): \protect\hyperlink{thm:halfwave-soln}{Theorem \ref{thm:halfwave-soln}} (uniform) & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:thm-halfwave-soln-bound}\) \(\leq \frac{C K_u^{n} |t| N^{\frac{n + 2}{2}}}{\epsilon^{\frac{3}{4} n^2 + n - \frac{1}{2}}}\) \(e^{-\left( \frac{N \delta^4 \epsilon^{{\frac{5}{2}n + 4}}}{C K_u^2 \, |t|^8} \right)}\)\\ with \(\delta > \frac{K_u^{\frac{1}{2}} \epsilon^{-(\frac{5}{8} n + 1)}}{C_0 N^{\frac{1}{4}}}\), \(|t| \lesssim K_u^{\frac{1}{4}} \epsilon^{-\frac{n}{16}}\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:thm-halfwave-soln-perturb-op-bound}\) \(\leq e^{-\Omega\left( \frac{N \delta^2 \epsilon^{\frac{3}{2}n + 2} \varepsilon}{K_u^2 |t|^4} \right) \varepsilon^{\tilde{O}\left( \frac{|t|^4}{N^{\sigma}} \right)}}\)\\ with \(\delta > \frac{K_u \epsilon^{-(\frac{3}{4} n + 1)}}{C_0 N^{\frac{ 1 - \sigma}{2}}}\), \(0 < \sigma < 1\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\Omega_{\lambda,t,N}(\delta,\epsilon,K_u)\)\\ \(\Omega^*_{\lambda,t,N}(\delta,\varepsilon,\epsilon,K_u)\)\strut \end{minipage} \\ \(\mathbb{E}_t u \sim_h u(x_t)\): \protect\hyperlink{thm:mean-prop-geoflow-consistency}{Theorem \ref{thm:mean-prop-geoflow-consistency}} & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:thm-mean-prop-geoflow-consistency-bound}\) \(\leq e^{-\Omega(N h^{2(n + 2)} \epsilon^{\frac{5}{2}n + 4})}\)\\ with \(h \gtrsim N^{-\frac{1}{(2 + \alpha)(\frac{5}{2} n + 4) + 2(n+ 2)}}\), \(\epsilon := h^{2 + \alpha}\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:thm-mean-prop-perturbed-glap-geoflow-consistency-bound}\) \(\leq e^{-\Omega(N h^{n + 1 + \beta} \epsilon^{3(\frac{n}{2} + 1)})}\)\\ with \(h \gtrsim N^{-\frac{1}{n + 3(2 + \alpha)(\frac{n}{2} + 1)}}\), \(\epsilon := h^{2 + \alpha}\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\tilde{\Omega}_{\lambda,t,N}(h,\epsilon,\psi_h, u)\)\\ \(\tilde{\Omega}^*_{\lambda,t,N}(h,\varepsilon,\epsilon,\psi_h,u)\) with \(\varepsilon \in \Theta(h^{1 + \alpha + \beta})\), \(\beta \geq 
0\)\strut \end{minipage} \\ \(\hat{x}_{N,t} \sim_{\sqrt{h}} x_t\): \protect\hyperlink{prop:max-observable-flow-consistency}{Proposition \ref{prop:max-observable-flow-consistency}} & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:prop-max-observable-flow-consistency-bound}\) \(\leq e^{-\Omega(N h^{2 n} \epsilon^{\frac{5}{2}n + 4})}\)\\ with \(h \gtrsim N^{-\frac{1}{n(\frac{5\alpha}{2} + 7) + 4(2 + \alpha)}}\), \(\epsilon := h^{2 + \alpha}\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\eqref{eq:prop-max-observable-perturb-glap-flow-consistency-bound}\) \(\leq e^{-\Omega(N h^{n - 1 + \beta} \epsilon^{3(\frac{n}{2} + 1)})}\)\\ with \(h \gtrsim N^{-\frac{1}{n + (2 + \alpha)(\frac{3}{2}n + 2) + 1}}\), \(\epsilon := h^{2 + \alpha}\)\strut \end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright \(\Omega_{\lambda,t,N}(h^{\frac{n}{2}}, \epsilon, K_{\psi})\)\\ \(\Omega^*_{\lambda,t,N}(h^{\frac{n}{2}}, \varepsilon, \epsilon, K_{\psi}^{(\varepsilon)})\) with \(\varepsilon \in \Theta(h^{1 + \alpha + \beta})\), \(\beta \geq 0\)\strut \end{minipage} \\ \(\bar{x}_{N,t} \sim_h x_t\) & Same as \(\eqref{eq:thm-mean-prop-geoflow-consistency-bound}\) & Same as \(\eqref{eq:thm-mean-prop-perturbed-glap-geoflow-consistency-bound}\) & (See the previous two rows) \\ \bottomrule \end{longtable} The first column of the table indicates the approximation being considered, with a symbolic short-hand and the location of the precise statement. Here, \(A_N, A\) refer to the averaging operators, \(B_N, B\) the square roots, or \(\varepsilon\)-perturbed square roots of \(I - A_N, I - A\) resp., as in \protect\hyperlink{notation:perturb-glap}{Notation \ref{notation:perturb-glap}}, \(U_N^t, U^t\) refer to the corresponding propagators and \(\mathbb{E}_t u := \langle |U_N^t[\psi_{h,t,N}]|^2, u \rangle_N\) is short-hand for the \emph{discrete expectation} of \(u\) with the density of the sample propagation of the coherent state. The notation \(T_1 \sim_r T_2\) for \(T_1,T_2 : C^{\infty} \to C^{\infty}\) and \(r > 0\) means \(||(T_1 - T_2)[u]||_{\infty} \lesssim r\) for some \(u \in C^{\infty}\) when \emph{uniform} is specified and \(|(T_1 - T_2)[u](x)| \lesssim r\) for some \(x \in \mathcal{M}\) when \emph{pointwise} is specified. The meaning when \(T_1, T_2\) are scalars is clear and \(x \sim_r x_t\) for \(x \in \mathcal{M}\) and \(r > 0\) refers to the condition \(x \in B_r(x_t ; g)\). The second and third columns give bounds for the corresponding probabilities in the \(\varepsilon = 0\) (\emph{unperturbed}) and \(\varepsilon > 0\) (\emph{perturbed}) cases, respectively. The fourth column gives, for reference, the dominant functions in the probability bounds for the unperturbed and perturbed cases that are being further bounded in the second and third columns, respectively. 
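As a quick numerical illustration of how these rates trade the bandwidth \(h\) off against the sample size \(N\), the following Python sketch evaluates the unperturbed mean-consistency row of the table; all absolute constants hidden by the \(\Omega(\cdot)\) notation are set to \(1\), so only the scaling exponents are meaningful and the numbers produced are purely illustrative.

\begin{verbatim}
# Illustrative sketch: constants hidden in the Omega(.) notation are set to 1.
def mean_consistency_scaling(N, n, alpha):
    # Admissible bandwidth from the table:
    # h ~ N^(-1/((2 + alpha)(5n/2 + 4) + 2(n + 2))), with eps := h^(2 + alpha).
    exponent = (2 + alpha) * (2.5 * n + 4) + 2 * (n + 2)
    h = N ** (-1.0 / exponent)
    eps = h ** (2 + alpha)
    # Dominant decay exponent of the failure probability:
    # N * h^(2(n + 2)) * eps^(5n/2 + 4); it is O(1) exactly at the
    # threshold bandwidth and grows with N once h exceeds that threshold.
    decay = N * h ** (2 * (n + 2)) * eps ** (2.5 * n + 4)
    return h, eps, decay

# Example: a surface (n = 2) with alpha = 1 and N = 10**6 samples.
h, eps, decay = mean_consistency_scaling(10**6, 2, 1)
\end{verbatim}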
We have borrowed from complexity theory the following, \begin{notation} The following are asymptotic bounds of non-negative functions: \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \tightlist \item \(f_0 = O(f(\Upsilon))\) means there is \(\Upsilon^* > 0\) such that for all \(\Upsilon \geq \Upsilon^*\), \(f_0(\Upsilon) \lesssim f(\Upsilon)\); \item \(f_0 = \Omega(f(\Upsilon))\) means there is \(\Upsilon^* > 0\) such that for all \(\Upsilon \geq \Upsilon^*\), \(f(\Upsilon) \lesssim f_0(\Upsilon)\); \item \(f_0 = \Theta(f(\Upsilon))\) means \(f_0 = \Omega(f(\Upsilon))\) and \(f_0 = O(f(\Upsilon))\) and \item \(f_0 = \tilde{O}(f(\Upsilon))\) means there is \(\Upsilon^* > 0\) and a polynomial with non-negative coefficients \(q \in \mathbb{R}[\Upsilon]\) such that for all \(\Upsilon \geq \Upsilon^*\), \(f_0(\Upsilon) \lesssim q(\log \Upsilon) f(\Upsilon)\). \end{enumerate} \end{notation} \emph{Sketch of how these bounds come about}. \begin{enumerate} \def\arabic{enumi}.{\arabic{enumi}.} \item By its definition in \protect\hyperlink{lem:sqrt-perturb-eps-conv}{Lemma \ref{lem:sqrt-perturb-eps-conv}}, \(\upsilon \lesssim (\log N + N \delta^2 \epsilon^{\frac{n}{2}})^{-1} \log(N \delta^2 \epsilon^{\frac{n}{2}} + \log N)\), hence \(\upsilon = \tilde{O}(N^{-1} \delta^{-2} \epsilon^{-\frac{n}{2}})\). \item The dominant exponential factor in \(\eqref{eq:thm-halfwave-soln-bound}\) is \begin{align*} e^{-\left( \frac{N \delta^4 \epsilon^{{\frac{5}{2}n + 4}}}{C K_u^2 \, |t|^8} \right) \left( \frac{\delta^4 \epsilon^{2n + 4}}{|t|^8} \right)^{\tilde{O}\left( \frac{\epsilon^{\frac{n}{4}} |t|^4}{K_u\sqrt{N}} \right)}} &\leq e^{-\left( \frac{N \delta^4 \epsilon^{{\frac{5}{2}n + 4}}}{C K_u^2 \, |t|^8} \right) \left( \frac{K_u^2 \epsilon^{- \frac{n}{2}}}{N |t|^8} \right)^{\tilde{O}\left( \frac{\epsilon^{\frac{n}{4}} |t|^4}{K_u\sqrt{N}} \right)}} \\ &= e^{-\left( \frac{N \delta^4 \epsilon^{{\frac{5}{2}n + 4}}}{C K_u^2 \, |t|^8} \right) (Z/N)^{\tilde{O}\left( \frac{1}{\sqrt{N Z}} \right)}} \end{align*} with \(Z := K_u^2 \epsilon^{-\frac{n}{2}}/|t|^8\). So assuming \(|t| = O(K_u^{\frac{1}{4}} \epsilon^{-\frac{n}{16}})\) gives \(Z \gtrsim 1\) and \((Z/N)^{\tilde{O}(1/\sqrt{NZ})} \gtrsim 1\), hence that factor can be replaced by a constant in the exponential. \item Examining \(\Omega_{\lambda,t,N}(h^\frac{n}{2})\) and using that \(|t|\) is bounded gives for the dominating term in \(\eqref{eq:prop-max-observable-flow-consistency-bound} \leq e^{-\Omega(N h^{2 n} \epsilon^{\frac{5}{2}n + 4}) \, h^{\upsilon}}\) with \(\upsilon = \tilde{O}(N^{-1} h^{-n} \epsilon^{-(\frac{3}{2} n + 2)})\). Then, \(h \gtrsim N^{-\frac{1}{n(\frac{5\alpha}{2} + 7) + 4(2 + \alpha)}}\) implies \(\upsilon = \tilde{O}(N^{-\frac{2}{5}})\) and there is a \(c > 0\) such that \(h^{\tilde{O}(N^{-\frac{2}{5}})} > c > 0\), hence \(\eqref{eq:prop-max-observable-flow-consistency-bound} \leq C e^{-\Omega(N h^{2 n} \epsilon^{\frac{5}{2}n + 4})}\). \item The only significant difference in bounding \(\eqref{eq:thm-mean-prop-geoflow-consistency-bound}\) from bounding \(\eqref{eq:prop-max-observable-flow-consistency-bound}\) is that we have \(h^{\frac{n}{2} + 1}\) in place of \(h^{\frac{n}{2}}\) for the error rate argument to \(\Omega_{\lambda,t,N}\) and this gives the dominating term in \(\eqref{eq:thm-mean-prop-geoflow-consistency-bound} \leq C e^{-\Omega(N h^{2n + 4} \epsilon^{\frac{5}{2}n + 4}) h^{\upsilon}}\) with \(\upsilon = \tilde{O}(N^{-1} h^{-(n + 2)} \epsilon^{-(\frac{3}{2} n +2)})\). 
Then, the lower bound on \(h\) gives a lower bound, away from zero, for \(h^{\upsilon}\) in much the same way as for \(\eqref{eq:prop-max-observable-flow-consistency-bound}\) above. \item We can bound \(\tilde{\Omega}_{\lambda,t,N}(h,\varepsilon)\) to give the dominant term in \(\eqref{eq:thm-mean-prop-perturbed-glap-geoflow-consistency-bound} \leq e^{-\Omega(N h^{n + 2} \epsilon^{\frac{3}{2} n + 2} \varepsilon) \varepsilon^{2 \upsilon}}\) with \(\upsilon = \tilde{O}(N^{-1} h^{-(n + 2)} \epsilon^{-(\frac{3}{2}n + 2)})\). Then requiring \(\varepsilon \in \Theta(h^{1 + \alpha + \beta})\) with \(\beta \geq 0\) we in any case have \(h \gtrsim N^{-\frac{1}{n + 3(2 + \alpha)(\frac{n}{2} + 1)}}\), which gives \(\varepsilon^{2\upsilon} = \Theta(1)\), hence it follows that \(\eqref{eq:thm-mean-prop-perturbed-glap-geoflow-consistency-bound} \leq e^{-\Omega(N h^{n + 1 + \beta} \epsilon^{3(\frac{n}{2} + 1)})}\). \item The function \(\Omega_{\lambda,t,N}(h^{\frac{n}{2}},\varepsilon)\) can be bounded to give the dominant term in \(\eqref{eq:prop-max-observable-perturb-glap-flow-consistency-bound} \leq e^{-\Omega(N h^{n} \epsilon^{\frac{3}{2}n + 2} \varepsilon) \varepsilon^{2\upsilon}}\) with \(\upsilon = \tilde{O}(N^{-1} h^{-n} \epsilon^{-(\frac{3}{2}n + 2)})\). Then since \(\varepsilon \in \Theta(h^{1 + \alpha + \beta})\) with \(\beta \geq 0\), we require for exponential decay in \(N\) that \(h \gtrsim N^{-\frac{1}{n + (2 + \alpha)(\frac{3}{2}n + 2) + 1}}\) so \(\varepsilon^{2 \upsilon} = \Theta(1)\), hence \(\eqref{eq:prop-max-observable-perturb-glap-flow-consistency-bound} \leq e^{-\Omega(N h^{n - 1 + \beta} \epsilon^{3(\frac{n}{2} + 1)})}\). \end{enumerate} \bibliographystyle{amsalphaabbrv}
\section{Introduction} The quality of images of everyday scenes is greatly affected by particles suspended in the environment, such as dust, smoke, mist, fog, and smog. Bad weather also contributes to this. Besides significantly higher and non-uniform noise in the images, the usual effects are reduced visibility, reduced sharpness and contrast of the objects within the visible range, and obscuring of other objects. Performing computer vision tasks like object detection, object recognition, tracking, and segmentation therefore becomes complicated for such images, and the true potential of computer vision empowered automated and remote surveillance systems such as drones and robots cannot be realized under hazy conditions. Thus, it is of interest to enhance the quality of images taken under homogeneous and non-homogeneous hazy conditions and recover the details of the scene. Haze removal or dehazing algorithms address this problem. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{img/teaser_dehaze.png} \vspace{-3pt} \caption{A compact representation of our novel generator and its important features.} \label{img:UNet} \end{figure} There has been significant activity on the topic of dehazing in recent years. New algorithms ranging from physics-based solvers and image processing based algorithms to deep learning-based approaches are being proposed. Furthermore, newer challenges are being undertaken, including dehazing in the presence of dense haze, non-homogeneous haze, and using a single RGB image of a scene. It is now recognized that deep learning architectures outperform the other approaches for diverse and challenging dehazing scenarios if suitably designed large datasets are available. However, dehazing images through deep learning on a small dataset using a single RGB image is quite challenging and of significant practical interest. For example, in fire management or natural disaster management, a dehazing model characteristic of the situation needs to be learned quickly using a small number of images in haze and corresponding pre-disaster images. We propose a novel deep learning architecture that is amenable to reliable learning of a dehazing model using a small dataset. Our novel generative adversarial network (GAN) architecture includes iterative blocks of UNets to model haze features of different complexities and a pyramid convolution block to preserve and restore spatial features at different scales. The key contributions of this paper are as follows: \begin{itemize} \item A novel technique named pyramid convolution is introduced for dehazing to obtain spatial features carrying structural information at multiple scales. \item We have used an iterative UNet block for image dehazing to make the generator learn diverse and complex features of haze without losing local and global structural information and without making the network so deep that spatial features are lost. \item The model is end-to-end trainable with a hazy image as input and the haze-free image as the desired output. Therefore, the conventional practice of using the atmospheric scattering model is obviated, and the problems encountered in inverse reconstruction are circumvented. It also makes the approach more versatile and applicable to haze scenarios where the conventional simplified atmospheric model may not apply. 
\item Extensive experimentation is done on four contemporary challenging datasets, namely the I-Haze and O-Haze datasets of the NTIRE 2018 challenge, the Dense-Haze dataset of the NTIRE 2019 challenge, and the non-homogeneous dehazing dataset of the NTIRE 2020 challenge. \end{itemize} The outline of the paper is as follows. Section \ref{sec:related} presents related work, and Section \ref{sec:proposed} introduces our architecture and learning approach. Section \ref{sec:results} presents numerical experiments and results. Section \ref{sec:ablation} includes an ablation study on the proposed method. Section \ref{sec:conclusion} concludes the paper. \section{Related work} \label{sec:related} Since this paper's focus is single image dehazing, we exclude studies that require multiple images, for example those exploiting polarization, to perform dehazing. Single image dehazing is an ill-posed problem because the number of measurements is not sufficient for learning the haze model, and the non-linearity of the haze model implies higher sensitivity to noise. Single image based dehazing exploits the polarization-independent atmospheric scattering model proposed by Koschmieder \cite{koschmieder1925theorie} and its characteristics, such as dark channel, color attenuation, and haze-free priors. According to this model, the hazy image is specified by the atmospheric light (generally assumed uniform), the albedo of the objects in the scene, and the transmission map of the hazy medium. More details can be found in \cite{koschmieder1925theorie} and its subsequent citations, including recent ones \cite{chen2019multi,vazquez2020physical}. The task is to predict the unknown transmission map and the global atmospheric light. In the past, many methods have been proposed for this task. The methods can be divided into two categories, namely (i) traditional handcrafted prior based methods and (ii) learning based methods. \textbf{Traditional handcrafted prior based methods:} Fattal~\cite{fattal2008single} proposed a physically grounded method that estimates the albedo of the scene. Tan~\cite{tan2008visibility} proposed the use of a Markov random field to maximize the local contrast of the image. He et al.~\cite{he2010single} proposed a dark channel prior for the estimation of the transmission map. Fattal~\cite{fattal3dehazing} proposed a color-line method based on the observation that small image patches typically exhibit a one-dimensional distribution in the RGB color space. Traditional handcrafted prior methods give good results in certain cases but are not robust across all cases. \textbf{Learning based approaches:} In recent years, many learning based methods have been proposed for single image dehazing that leverage the success of deep learning in image processing tasks, the availability of large datasets, and better computation resources. Some examples are briefly mentioned here. Cai et al.~\cite{cai2016dehazenet} proposed an end-to-end CNN based deep architecture to estimate the transmission map. Ren et al.~\cite{ren2016single} proposed a multi-scale deep architecture, which also estimates the transmission map from the hazy image. Zhang et al.~\cite{zhang2018densely} proposed a deep network architecture that estimates the transmission map and the atmospheric light. These estimates are then used together with the atmospheric scattering model to generate the haze-free image. 
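For reference, the atmospheric scattering model discussed above is commonly written as \begin{equation} I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big), \qquad t(x) = e^{-\beta d(x)}, \end{equation} where $I$ is the observed hazy image, $J$ is the scene radiance (the haze-free image), $A$ is the global atmospheric light, $t$ is the transmission map, $\beta$ is the scattering coefficient of the medium, and $d(x)$ is the depth of the scene at pixel $x$. The methods above estimate $t$ and $A$ and then invert this relation to recover $J$. 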
\textbf{Our approach in context: }In contrast to these approaches, our approach is an end-to-end learning based approach in which the learnt model directly predicts the haze-free image without needing to reconstruct the transmission map and the atmospheric light, or using the atmospheric scattering model. It is therefore more versatile and can be trained for situations where the atmospheric scattering model of \cite{koschmieder1925theorie} may not apply or may be too simple. An example is non-uniform haze. It also circumvents the numerical errors and artifacts associated with the use of inverse approaches that reconstruct the haze-free image from the transmission map and atmospheric light. \section{Proposed method}\label{sec:proposed} In this section we present our model, namely the back projected pyramid network (BPPNet). The overall architecture is based on the generative adversarial network~\cite{goodfellow2014generative}, where a generator generates a haze-free image from a hazy image, and a discriminator tells whether the image provided to it is real or not. \subsection{Generator} The architecture of the generator is shown in Fig. \ref{img:generator}. It comprises two blocks in series, namely (i) an iterative UNet block and (ii) a pyramid convolution block, which we describe next. \begin{figure*}[t] \includegraphics[width=\linewidth]{img/Architecture4.png} \caption{The architecture of our generator.} \label{img:generator} \end{figure*} {\textbf{Iterative UNet block (IUB):}} This block consists of multiple UNet~\cite{ronneberger2015u} units connected in series, i.e., the output of one UNet (architecture in the supplementary) is fed as the input to the next UNet. In addition, the output of each UNet is passed to a concatenator, which concatenates the 3-channel outputs of all the UNets, providing an intermediate 12-channel feature map. The equations describing the working of the IUB are the following. \begin{equation} I_1 = {\rm UNET}_1( I_{\rm haze} ); \quad I_{i} = {\rm UNET}_{i}( I_{i-1}) \quad {\rm{for}} \quad i>1, \end{equation} \noindent where $I_i$ is the output of the $i$th UNet unit, $I_{\rm haze}$ is the input hazy image after being transformed to the YCbCr space, and the output $\hat{I}_{\rm IUB}$ of the IUB is given as \begin{equation} \hat{I}_{\rm IUB} = I_{1}\oplus I_{2}\oplus \ldots \oplus I_{M}, \end{equation} where $\oplus$ indicates the concatenation operator and $M$ is the number of UNet units. We have used $M=4$. An ablation study on the value of $M$ is presented later in section \ref{sec:ablation}. Here, we discuss the need for more than one UNet. In principle, a single UNet may be able to support dehazing to some extent. However, it may not be able to extract complex features and generate an output with fine details. One way to tackle this problem is to increase the number of layers in the encoder block so that more complex features can be learned. But the layers in the encoder block are arranged in a feed-forward fashion, and the height and width of the layers decrease upon moving further. This causes a loss of spatial information and reduces the possibility of extracting spatial features of high complexity. Therefore, we take the alternate approach of creating a sequence of multiple UNets. The sequence of UNets may be interpreted as a sequence of multiple encoder-decoder pairs with skip connections.
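To make the data flow of the IUB concrete, a minimal PyTorch-style sketch is given below. It assumes a factory function \texttt{make\_unet} that builds one UNet unit mapping a 3-channel input to a 3-channel output of the same spatial size; the class and argument names are illustrative and are not taken verbatim from our released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class IterativeUNetBlock(nn.Module):
    # Chain M UNets in series and concatenate their 3-channel outputs.
    def __init__(self, make_unet, M=4):
        super().__init__()
        self.unets = nn.ModuleList([make_unet() for _ in range(M)])

    def forward(self, x):          # x: (B, 3, H, W) hazy image (YCbCr)
        outputs = []
        for unet in self.unets:
            x = unet(x)            # I_i = UNET_i(I_{i-1})
            outputs.append(x)
        return torch.cat(outputs, dim=1)  # (B, 3M, H, W); 12 ch for M=4
\end{verbatim}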
The encoder in each UNet extracts features from the input tensor into downsampled feature maps, and the decoder uses those features and projects them into an upsampled latent space with the same height and width as the input tensor. In this way, each successive UNet helps in learning increasingly complex features of haze, while its decoder helps in retaining the spatial information in the image. Lastly, the concatenation step ensures that features of all complexity levels are available for the subsequent reconstruction. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{img/iedb2.png} \caption{The effect of the successive UNet units is illustrated. Images are histogram equalized for better visualization. The histograms of the channels become narrower after passing through more UNet units, indicating that adding more UNet units may cease to create more value after a certain limit.} \label{img:iedb} \end{figure*} We illustrate the effect of using multiple UNets in Fig. \ref{img:iedb}. The histogram-equalized 3-channel output of each UNet is shown as an RGB image. It is seen that the spatial context is preserved, and at the same time haze-induced blurs of different complexities are present in the outputs of the different UNets. The haze in the last UNet output is flatter across the image and shows large scale blurs, while the haze in the first UNet is local and introduces small scale blurs. Therefore, most dehazing is accomplished in UNet1, although the subsequent UNets pick up the dehazing components that the previous UNets did not. Fig. \ref{img:iedb} also explains our choice of only four UNet blocks even though more blocks could be used in principle. We explain our choice in two parts. First, there is a trade-off involved between accuracy and speed when choosing the number of UNet blocks. Second, as seen in the histograms in Fig. \ref{img:iedb}, the dynamic range of the channels decreases with every subsequent UNet block, thereby indicating a reduction in the usable information content. The standard deviation of the intensity values in the 3 channels after UNet4 is $\sim$12.2. Adding more blocks would further reduce this value, and therefore not provide significantly exploitable data for dehazing. \begin{figure*}[t] \centering \includegraphics[width=0.6\linewidth]{img/pyramid.jpg} \caption{Feature maps corresponding to one of the channels of the 3$\times$3, 17$\times$17, and 45$\times$45 convolution layers, respectively. The figure shows that a smaller kernel size generates smaller scale features such as edges, while a larger kernel size generates larger scale features such as big patches.} \label{img:newfig} \end{figure*} {\textbf{Pyramid convolution (PyCon) block:}} Although the iterative UNet block does provide global and local structural information, its output lacks the global and local structural information for objects of different sizes. An underlying reason is that the structural information from different scales is not directly used to generate the output. To overcome this issue, we have used a novel pyramid convolution technique. Earlier, pyramid pooling has been used in~\cite{ronneberger2015u} to leverage the global structural information. However, since the pooling layers are not learnable, we instead employ learnable convolution layers that can easily outperform the pooling layers in leveraging the information. We employ many convolution layers of different kernel sizes in parallel on the input map (the 12-channel output of the iterative UNet block).
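A minimal sketch of the PyCon block in the same style follows. It assumes the kernel sizes stated next, that each parallel convolution produces 16 channels (so that the 8 branches concatenate to the 128 channels described below), and that the final 3$\times$3 fusion layer outputs the 3-channel image; these channel counts are assumptions consistent with the text, not a verbatim excerpt of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PyramidConvolution(nn.Module):
    # Parallel convolutions with odd kernel sizes and zero ('same')
    # padding, concatenated and fused by a final 3x3 convolution.
    def __init__(self, in_ch=12, branch_ch=16,
                 kernel_sizes=(3, 5, 7, 11, 17, 25, 35, 45)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in kernel_sizes)
        self.fuse = nn.Conv2d(branch_ch * len(kernel_sizes), 3,
                              kernel_size=3, padding=1)

    def forward(self, x):                    # x: (B, 12, H, W)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)              # (B, 3, H, W) output image
\end{verbatim}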
Corresponding to the different kernel sizes used for convolution, different output maps are generated with structural information of different spatial scales. The kernel sizes are chosen as 3, 5, 7, 11, 17, 25, 35, and 45, as shown in Fig. \ref{img:generator}. Odd-sized kernels are used since the pixels in the intermediate feature map are assumed to be symmetric around the output pixel. We observed the introduction of distortions over the layers upon using even-sized kernels, indicating the importance of exploiting the symmetry of the features around the output pixel. Zero padding is used to ensure that the features at the edges are not lost. After the generation of the feature maps from the corresponding kernels, all the generated maps are concatenated to make an output feature map of 128 channels, which is subsequently used for the final image construction by applying a convolution layer of kernel size 3$\times$3 with zero padding. In this manner, the local to global information is directly used for the final image reconstruction. The effect of using pyramid convolution is shown in Fig. \ref{img:newfig}. In the zoom-ins shown in the middle panel, the arrows indicate some features of the size of the convolution filter used for generating that particular feature map. The illustrated 3 channels are superimposed as a hypothetical RGB image in the bottom left of Fig. \ref{img:newfig} to demonstrate the types of details present in a subset of the output feature map. Since we have used 8 parallel convolution layers that operate on the 12-channel input, we generate a total of 128 channels in the output feature map, with a large variety of spatial features of multiple scales learned and restored. Therefore, the resulting image shown in the bottom panel has spatial features closely matching the ground truth, resulting in a low difference map (shown in the bottom right). One may consider using the 12-channel output of the iterative UNet block for generating the dehazed image directly, without employing the PyCon block. To indicate the importance of including the PyCon block, we include an ablation study in section \ref{sec:ablation}. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{img/discriminator.png} \caption{Architecture of the discriminator.} \label{fig:discriminator} \end{figure} \subsection{Discriminator} We have used a patch discriminator to determine whether a particular patch is realistic or not.
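A minimal sketch of such a patch discriminator, consistent with the layer details described next (4$\times$4 convolutions, Leaky ReLU activations, and a sigmoid output giving a 62$\times$62 map for a 512$\times$512 input), is shown below; the channel widths and strides are pix2pix-style assumptions rather than our exact configuration.
\begin{verbatim}
import torch.nn as nn

def conv_block(in_ch, out_ch, stride):
    # 4x4 convolution followed by a Leaky ReLU activation
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4,
                  stride=stride, padding=1),
        nn.LeakyReLU(0.2, inplace=True))

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64, stride=2),   # 512 -> 256
            conv_block(64, 128, stride=2),     # 256 -> 128
            conv_block(128, 256, stride=2),    # 128 -> 64
            conv_block(256, 512, stride=1),    # 64  -> 63
            nn.Conv2d(512, 1, 4, stride=1, padding=1),  # 63 -> 62
            nn.Sigmoid())                      # patch realism scores

    def forward(self, x):
        return self.net(x)                     # (B, 1, 62, 62)
\end{verbatim}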
The patches overlap in order to eliminate the problem of low performance on the edges. We have used 4$\times$4 convolution layers in the discriminator. After every convolution layer, we have added an activation layer with a leaky rectified linear unit (Leaky ReLU), except the last layer, where the activation function is a sigmoid. The size of the convolution kernel used is 4$\times$4, and the output map size is 62$\times$62 for an input of size 512$\times$512. The architecture of our discriminator is shown in Fig. \ref{fig:discriminator}. \subsection{Loss functions}\label{sec:losses} Most dehazing models use the mean squared error (MSE) as the loss function \cite{zhang2018multi}. However, MSE is known to be only weakly correlated with human perception of image quality \cite{huang2014visibility}. Hence, we employ additional loss functions that are closer to human perception. To this end, we have used a combination of the MSE ($L_2$ loss), an adversarial loss $L_{\rm{adv}}$, a content loss $L_{\rm{con}}$, and a structural similarity loss $L_{\rm{SSIM}}$. We define the remaining loss functions below. The adversarial loss for the generator $L_{\rm{adv}}$ and the loss of the discriminator $L_{\rm dis}$ are defined as: \begin{equation} L_{\rm adv} = \langle \log(D(I_{\rm pred})) \rangle, \end{equation} \begin{equation} L_{\rm dis} = \langle \log(D(I_{\rm GT})) \rangle + \langle \log(1-D(I_{\rm pred})) \rangle, \end{equation} \noindent where $(I_{\rm haze}, I_{\rm GT})$ are a supervised pair of the hazy image and the corresponding ground truth. $D(I)$ is the discriminator's estimate of the probability that the data instance $I$ provided to it is indeed real. Similarly, $G(I)$ is the generator output for the input instance $I$. Further, $I_{\rm pred} = G (I_{\rm haze})$. The notation $\langle \cdot \rangle$ indicates the expected value over all the supervised pairs. The MSE, also referred to as the $L_2$ loss, is defined as the average squared $\ell_2$ distance between $I_{\rm{GT}}$ and $I_{\rm{pred}}$: \begin{equation} L_2= \big\langle \| I_{\rm{GT}}-I_{\rm{pred}} \|_2^2 \big\rangle. \label{eq:L2} \end{equation} Our content loss is the VGG based perceptual loss~\cite{johnson2016perceptual}, defined as: \begin{equation} L_{\rm{con}}= \big\langle\sum_i{\frac{1}{N_i}{\| \phi_i(I_{\rm GT})-\phi_i(I_{\rm pred})\|}}\big\rangle, \end{equation} \noindent where $\phi_i$ represents the $i^{\rm{th}}$ activation of VGG-19 and $N_i$ is the number of elements in it. The structural similarity loss $L_{\rm{SSIM}}$ over the reconstructed image $I_{\rm pred}$ and the ground truth $I_{\rm GT}$ is defined as: \begin{equation} L_{\rm{SSIM}}= 1 - \langle {{\rm SSIM}(I_{\rm GT},I_{\rm pred})}\rangle, \end{equation} \noindent where ${\rm SSIM}(I, I^\prime)$ is the SSIM \cite{wang2004image} between the images $I$ and $I^\prime$. We note that although the losses $L_2$ and $L_{\rm SSIM}$ directly compare the predicted and the ground truth images, the nature of the comparison is quite different across them. $L_2$ is insensitive to the structural details but retains the comparison of the general energy and dynamic range of the two images being compared. $L_{\rm SSIM}$, on the other hand, compares the structural content of the images with less sensitivity to the contrast. Therefore, including these two loss functions provides complementary aspects of comparison between the predicted and the ground truth images. The overall generator loss $L_{\rm{G}}$ and discriminator loss $L_{\rm{D}}$ are given as \begin{equation} {L_{\rm{G}}=A_1 L_{\rm{{adv}}}+A_2 L_{\rm{con}}+A_3 L_2 + A_4 L_{\rm{SSIM}}}, \end{equation} \begin{equation} {L_{\rm{D}}=B_1 L_{\rm dis}.} \end{equation} We have heuristically chosen the values of the constant weights in the above equations as $A_1 = 0.7$, $A_2 = 0.5$, $A_3 = 1.0$, $A_4 = 1.0$, and $B_1 = 1.0$. \section{Experimental results}\label{sec:results} \subsection{Datasets} We have trained and tested our model on the following four datasets: the NTIRE 2018 image dehazing indoor dataset (abbreviated as I-Haze), the NTIRE 2018 image dehazing outdoor dataset (O-Haze), the Dense-Haze dataset of NTIRE 2019, and the NTIRE 2020 dataset for the non-homogeneous dehazing challenge (NTIRE 2020). \textbf{\textit{I-Haze~\cite{DBLP:journals/corr/abs-1804-05091} and O-Haze~\cite{ancuti2018haze}:}} These datasets contain 25 and 35 hazy images (size 2833$\times$4657 pixels), respectively, for training. Both datasets contain 5 hazy images for validation along with their corresponding ground truth images.
For both of these datasets, training was done on the training data, and the validation images were used for testing because, although 5 hazy test images were given, their ground truths were not available for a quantitative comparison. \textbf{\textit{Dense-Haze~\cite{ancuti2019dense}:}} This dataset contains 45 hazy images (size 1200$\times$1600 pixels) for training, 5 hazy images for validation, and 5 more for testing, with their corresponding ground truth images. We trained on the training data and tested our model on the test data. \textbf{\textit{NTIRE 2020~\cite{NTIRE2020}:}} This dataset contains 45 hazy images (size 1200$\times$1600 pixels) for training with their corresponding ground truths. To our knowledge, it is the first dataset of non-homogeneous haze. As the ground truth for validation was not given, we used only 40 image pairs for training and calculated our results on the remaining 5 images. \begin{table}[t] \centering \caption{Quantitative comparison of various state of the art methods with our model on the I-Haze, O-Haze and Dense-Haze datasets. Our model does the dehazing task in real time at an average running time of 0.0311 s, i.e., 31.1 ms per image.} \label{tbl:indoor_outdoor_table} \begin{tabular}{|l||c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{I-Haze dataset}\\ \hline Metric & Input & CVPR'09 & TIP'15 & ECCV'16 & CVPR'16 & ICCV'17 & CVPRW'18 & Our\\ & & \cite{he2010single}& \cite{zhu2015fast}& \cite{ren2016single}& \cite{berman2016non}& \cite{li2017all}& \cite{zhang2018multi}& model\\ \hline SSIM & 0.7302 & 0.7516 & 0.6065 & 0.7545 & 0.6537 & 0.7323 & 0.8705 & \textbf{0.8994}\\ \hline PSNR & 13.80 & 14.43 & 12.24 & 15.22 & 14.12 & 13.98 & 22.53 & \textbf{22.56}\\ \hline \end{tabular} \begin{tabular}{|l||c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{O-Haze dataset}\\ \hline Metric & Input & CVPR'09 & TIP'15 & ECCV'16 & CVPR'16 & ICCV'17 & CVPRW'18 & Our\\ & & \cite{he2010single}& \cite{zhu2015fast}& \cite{ren2016single}& \cite{berman2016non}& \cite{li2017all}& \cite{zhang2018multi}& model\\ \hline SSIM & 0.5907 & 0.6532 & 0.5965 & 0.6495 & 0.5849 & 0.5385 & 0.7205 & \textbf{0.8919}\\ \hline PSNR & 13.56 & 16.78 & 16.08 & 17.56 & 15.98 & 15.03 & 24.24 & \textbf{24.27}\\ \hline \end{tabular} \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{Dense-Haze dataset}\\ \hline Metric & CVPR & Meng & Fattal & Cai & Ancuti & CVPR & ECCV & Morales & Our\\ & '09 \cite{he2010single} & et al.~\cite{meng2013efficient} & \cite{fattal3dehazing} & et al.~\cite{cai2016dehazenet} & et al.~\cite{ancuti2016night} & '16 \cite{berman2016non} & '16 \cite{ren2016single} & et
al.~\cite{morales2019feature} & model\\ \hline SSIM & 0.398 & 0.352 & 0.326 & 0.374 & 0.306 & 0.358 & 0.369 & 0.569 & \textbf{0.613}\\ \hline PSNR & 14.56 & 14.62 & 12.11 & 11.36 & 13.67 & 13.18 & 12.52 & 16.37 & \textbf{17.01}\\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/New_Indoor1.png} \caption{Qualitative comparison of various benchmarks with our model on the I-Haze dataset.} \label{img:indoor_res} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/NewOutdoor1.png} \caption{Qualitative comparison of various methods with our model on the O-Haze dataset.} \label{img:outdoor_res} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/Combined1.png} \caption{Qualitative results for the Dense-Haze and NTIRE 2020 datasets.} \label{img:dense} \end{figure} \subsection{Training details} The optimizer used for the training was Adam~\cite{kingma2014adam}, with an initial learning rate of 0.001 for both the generator and the discriminator. We randomly cropped large square patches from the training images. The crop size was 1024$\times$1024 for NTIRE 2020 and Dense-Haze. Leveraging the even larger sizes of the images in the I-Haze and O-Haze datasets, we created four crops each of two different sizes, 1024$\times$1024 and 2048$\times$2048. We then resized all the patches to 512$\times$512 using bicubic interpolation. The patches are randomly cropped anew for each epoch, i.e., they are not the same for every epoch. This creates an effectively larger dataset out of the small dataset available for training for each of the considered datasets. For the I-Haze, O-Haze and NTIRE 2020 datasets, we converted these randomly cropped and resized patches from the RGB space to the YCbCr space and then used them for training. For the Dense-Haze dataset, we directly used RGB patches for training. We decreased the learning rate of the generator whenever the loss became stable. We stopped training when the learning rate of the generator reached 0.00001 and the loss stabilized. We also tried decreasing the learning rate of the discriminator but found that doing so did not give the best results. \subsection{Results} Here, we present our results and their comparison with the results of other models available in the literature. We note that we converted the test input image size to 512$\times$512 for all the datasets for generating our results, in view of our hardware (GPU) memory constraints. Quantitative evaluation is performed using the SSIM metric and the peak signal to noise ratio (PSNR). The metrics are computed in the RGB space even when the training was done in the YCbCr space. The quantitative results are compared with the earlier state-of-the-art in Table \ref{tbl:indoor_outdoor_table}. The metrics for the other methods are reproduced from~\cite{zhang2018multi} for the I-Haze and O-Haze datasets. The benchmark for Dense-Haze was provided in~\cite{ancuti2019dense}. We further include the results of Morales et al.~\cite{morales2019feature} for comparison. \textbf{\textit{I-Haze:}} The average PSNR and SSIM of our method for this dataset on the validation data were 22.56 and 0.8994, respectively. It is evident from Table \ref{tbl:indoor_outdoor_table} that our model outperforms the state-of-the-art in both the SSIM and PSNR indices by a good margin. Qualitative comparison results on the test data are shown in Fig. \ref{img:indoor_res}.
It is evident that only CVPRW'18 \cite{zhang2018multi} competes with our method in the quality of the dehazed image and its match with the ground truth. \textbf{\textit{O-Haze:}} The average PSNR and SSIM for this dataset on the validation data were 24.27 and 0.8919, respectively; see Table \ref{tbl:indoor_outdoor_table}. Our model clearly outperforms all the state-of-the-art methods in both the PSNR and SSIM indices by a large margin. For SSIM, the closest performing method was CVPRW'18~\cite{zhang2018multi} with an SSIM of 0.7205, which is significantly lower than ours, i.e., 0.8919. It is notable from the results of all the methods that this dataset is more challenging than I-Haze. Nonetheless, our method provides comparable performance over both the I-Haze and O-Haze datasets. The qualitative comparison of results on the test data is shown in Fig. \ref{img:outdoor_res}. Similar to the I-Haze dataset, only CVPRW'18 \cite{zhang2018multi} and our method generate dehazed images of good quality. As compared to the I-Haze results in Fig. \ref{img:indoor_res}, it is more strongly evident in Fig. \ref{img:outdoor_res} that the color cast of our method is slightly mismatched with the ground truth, where CVPRW'18 performs better than our method. However, CVPRW'18 shows poorer structural continuity than our method, as evident more strongly in Fig. \ref{img:indoor_res}. \textbf{\textit{Dense-Haze:}} From Table \ref{tbl:indoor_outdoor_table}, it is evident that this dataset is significantly more challenging than the I-Haze and O-Haze datasets. All methods perform considerably worse on this dataset, as compared to the numbers reported for the I-Haze and O-Haze datasets. Even though the performance of our method is also poorer for this dataset as compared to the other datasets, its SSIM and PSNR values are significantly better than those of the other 8 methods whose results are included in Table \ref{tbl:indoor_outdoor_table} for this dataset. A qualitative comparison with select methods is shown in Fig. \ref{img:dense}. The results clearly illustrate the challenge of this dataset, as the features and details of the scene are almost imperceptible through the dense haze. Only our method is capable of dehazing the image effectively and bringing forth the details of the scene. Nonetheless, the color cast is non-uniformly compensated and differs from the ground truth in some regions. \textbf{\textit{NTIRE 2020:}} As the ground truth for the test data is not given, we randomly chose 5 images for testing and used the remaining 40 image pairs for training. The average SSIM and PSNR are 0.8726 and 19.40, respectively. This SSIM value is better than the best SSIM observed in the competition, as communicated to the participants in a personal email after the test phase. The qualitative results are shown in Fig. \ref{img:dense}. The observations are generally similar to those for the Dense-Haze dataset. First, our results are qualitatively quite close to the ground truth and show the ability of our method to recover the details lost in haze, despite the non-homogeneity of the haze. Second, we observe a slight mismatch in the color reproduction and inhomogeneity in the color cast, which need further work. We expect that the problem of color cast inhomogeneity may be related to the inhomogeneity in the haze itself, which may have been present in the Dense-Haze data as well but may not have been perceptible due to the generally high density of haze. \section{Ablation study}\label{sec:ablation} We conduct an ablation study using the I-Haze and O-Haze datasets.
We consider the ablation associated with the architectural elements in section \ref{sec:unet_experiment}, the loss components in section \ref{sec:losses_experiment}, and the image space used in training in section \ref{sec:rgb_experiment}. \subsection{Architecture ablation} \label{sec:unet_experiment} Here, we consider the ablation study relating to the number of UNet units used in the iterative UNet block and the absence or presence of the pyramid convolution block. The results are shown in Table \ref{tab:abl1}. It is evident that decreasing or increasing the number of UNet blocks degrades the performance and that the use of $M=4$ UNet blocks is optimal for the architecture. This is in agreement with the observations derived from Fig. \ref{img:iedb}. Similarly, dropping the PyCon block also degrades the performance. \subsection{Loss ablation} \label{sec:losses_experiment} We proposed in section \ref{sec:losses} to use four types of loss functions for the training of the generator. Here, we consider the effect of dropping one loss function at a time. The results are presented in the bottom panel of Table \ref{tab:abl1}. It is seen that dropping any of the loss functions results in a significant deterioration of performance. This indicates the different and complementary roles each of these loss functions plays. Based on our observation of the qualitative results discussed in section \ref{sec:results}, we might need to introduce another loss function related to the preservation of the color cast or color constancy. \begin{table}[t] \centering \caption{The results of the ablation study are presented here. The reference configuration uses 4 UNet blocks and includes the PyCon block with the layer configuration shown in Fig. \ref{img:generator}; all loss functions discussed in section \ref{sec:losses} are used, and the entire architecture uses the YCbCr space, as shown in Fig. \ref{img:generator}.} \label{tab:abl1} \begin{tabular}{|l|l||c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{\textbf{(a) Ablation study on architectural units}}\\ \hline {Dataset}& {Metric}&\textbf{Reference}&{1 UNet}&{2 UNets}&{3 UNets}&{5 UNets} &{No PyCon}\\ \hline \hline I-Haze & SSIM & \textbf{0.8994} & 0.8572 & 0.8679 & 0.8820 & {0.8932} & {0.8878}\\ \cline{2-8} & PSNR & \textbf{22.57} & 18.54 & 19.94 & 20.92 & {21.62} & 21.17 \\ \hline O-Haze & SSIM & \textbf{0.8927} & 0.8500 & 0.8639 & 0.8795 & {0.8639} & {0.8768}\\ \cline{2-8} & PSNR & \textbf{24.30} & 21.34 & 22.36 & 23.06 & {22.36} & 23.13 \\ \hline \hline \multicolumn{8}{|c|}{\textbf {(b) Ablation study on losses and the image space}}\\ \hline {}&{}&{Reference}&\multicolumn{4}{|c||}{The loss function dropped} & Direct use of RGB,\\ \cline{4-7} {}&{}&{}&{$L_{\rm {adv}}$}&{$L_{\rm con}$}&{$L_{2}$}&{$L_{\rm SSIM}$} &{not the YCbCr space}\\ \hline \hline I-Haze & SSIM & \textbf{0.8994} & {0.8620} & {0.8372} & {0.8648} & {0.8343} & {0.8944}\\ \cline{2-8} {} & PSNR & \textbf{22.57} & {19.52} & {18.99} & {20.02} & {19.58} & 20.94\\ \hline O-Haze & SSIM & \textbf{0.8927} & {0.8608} & {0.8271} & {0.8650} & {0.8568} & {0.8712}\\ \cline{2-8} {} & PSNR & \textbf{24.30} & {22.26} & {20.66} & {22.78} & {22.44} & 22.54\\ \hline \end{tabular} \end{table} \subsection{Use of RGB versus YCbCr space} \label{sec:rgb_experiment} If we use the RGB space instead of the YCbCr space for training, we observe degraded performance in terms of SSIM, as reported in Table \ref{tab:abl1}(b). However, we note that this observation is not consistent across all the datasets.
Specifically, we noted that for Dense-Haze, the YCbCr conversion gave slightly poorer results than RGB-based training. Hence, we have used RGB patches for training on the Dense-Haze dataset. \section{Conclusion}\label{sec:conclusion} The presented single image dehazing method is an end-to-end trainable architecture that is applicable in diverse situations such as indoor, outdoor, dense, and non-homogeneous haze, even though the training datasets used are small in each of these cases. It beats the state-of-the-art results in terms of SSIM and PSNR for all three datasets for which results are available. Qualitative results for indoor images indicate preservation of colors in the reconstructed image for the I-Haze dataset, while poorer color reconstruction is observed in the results for the other datasets. In the future, we will improve our model to inherently include color preservation and a seamless color cast as well. Source code, results, and the trained model are shared at our project page (https://github.com/ayu-22/BPPNet-Back-Projected-Pyramid-Network). \clearpage \bibliographystyle{splncs04}
Our novel generative adversarial network (GAN) architecture includes iterative blocks of UNets to model haze features of different complexities and a pyramid convolution block to preserve and restore spatial features of different scales. The key contributions of this paper are as follows: \begin{itemize} \item A novel technique named pyramid convolution is introduced for dehazing to obtain spatial features of multiple scales structural information. \item We have used iterative UNet block for image dehazing tasks to make the generator learn different and complex features of haze without the loss of local and global structural information or without making the network too deep to result into loss of spatial features. \item The model used is end-to-end trainable with hazy image as input and haze-free image as the desired output. Therefore the conventional practice of using the atmospheric scattering model is obviated, and the problems encountered in inverse reconstruction are circumvented. It also makes the approach more versatile and applicable to haze scenarios where the conventional simplified atmospheric model may not apply. \item Extensive experimentation is done on four contemporary challenging datasets, namely I-Haze and O-Haze datasets of NTIRE 2018 challenge, Dense-haze dataset of NTIRE 2019 challenge, and non-homogeneous dehazing dataset of NTIRE 2020 challenge. \end{itemize} The outline of the paper is as follows. Section \ref{sec:related} presents related work, and section \ref{sec:proposed} introduces our architecture and learning approach. Section \ref{sec:results} presents numerical experiments and results. Section \ref{sec:ablation} includes an ablation study on the proposed method. Section \ref{sec:conclusion} concludes the paper. \section{Related work} \label{sec:related} Since this paper's focus is single image dehazing, we exclude a discussion on studies that required multiple images, for example, exploiting polarization, to perform dehazing. Single image dehazing is an ill posed problem because the number of measurements is not sufficient for learning the haze model, and the non-linearity of the haze model implies higher sensitivity to noise. Single image based dehazing exploits polarization-independent atmospheric scattering model proposed by Koschmieder \cite{koschmieder1925theorie} and its characteristics such as dark channel, color attenuation and haze-free priors. According to this model, the hazy image is specified by the atmospheric light (generally assumed uniform), the albedo of the objects in the scene, and the transmission map of the hazy medium. More details can be found in \cite{koschmieder1925theorie} and its subsequent citations,including recent ones \cite{chen2019multi,vazquez2020physical}. We have to predict the unknown transmission map and global atmospheric light. In the past, many methods have been proposed for this task. The methods can be divided into two categories, namely (i) Traditional handcrafted prior based methods and (ii) Learning based methods. \textbf{Traditional handcrafted prior based methods:} Fattal~\cite{fattal2008single} proposed a physically grounded method by estimating the albedo of the scene. Tan ~\cite{tan2008visibility} proposed the use of the Markov random field to maximize the local contrast of the image. He et al.~\cite{he2010single} proposed a dark channel prior for the estimation of the transmission map. 
Fattal ~\cite{fattal3dehazing} proposed a color-line method based on the observation that small image patches typically exhibit a one-dimensional distribution in the RGB color space. Traditional handcrafted prior methods give good results for certain cases but are not robust for all the cases. \textbf{Learning based approaches:} In recent years, many learning based methods have been proposed for single image dehazing that encash the success of deep learning in image processing tasks, availability of large datasets, and better computation resources. Some examples are briefly mentioned here. Cai et al.~\cite{cai2016dehazenet} proposed an end-to-end CNN based deep architecture to estimate the transmission map. Ren et al.~\cite{ren2016single} proposed a multi-scale deep architecture, which also estimates the transmission map from the haze image. Zhang et.al. in~\cite{zhang2018densely} proposed a deep network architecture that estimates the transmission map and the atmospheric light. These estimates are then used together with the atmospheric scattering model to generate the haze-free image. \textbf{Our approach in context: }In contrast to these approaches, our approach is an end-to-end learning based approach in which the learnt model directly predicts the haze-free image without needing to reconstruct the transmission map and the atmospheric light, or using the atmospheric scattering model. It is therefore more versatile to be trained for situations where the atmospheric scattering model of \cite{koschmieder1925theorie} may not apply or may be too simple. Example includes non-uniform haze. It also circumvents the numerical errors and artifacts associated with the use of inverse approaches of reconstructing the haze-free image from the transmission map and atmospheric light. \section{Proposed method}\label{sec:proposed} In this section we present our model, namely back projected pyramid network (BPPNet). The overall architecture is based on generative adversarial network ~\cite{goodfellow2014generative}, where a generator generates a haze-free image from a hazy image, and a discriminator tells whether the image provided to it is real or not. \subsection{Generator} The architecture of the generator is shown in Fig. \ref{img:generator}. It comprises of two blocks in series, namely (i) iterative UNet block, (ii) pyramid convolution block, which we describe next. \begin{figure*}[t] \includegraphics[width=\linewidth]{img/Architecture4.png} \caption{The architecture of our generator.} \label{img:generator} \end{figure*} {\textbf{Iterative UNet block (IUB):}} This block consists of multiple UNet~\cite{ronneberger2015u} units connected in series i.e. the output of one UNet (architecture in the supplementary) is fed as the input to the next UNet. In addition, the output of each UNet is passed to a concatenator, which concatenates the 3 channel output of all the UNets, providing an intermediate 12 channel feature map. The equations describing the working of IUB are the following. \begin{equation} I_{_1} = {\rm UNET}_1( I_{\rm haze} ); I_{{i}} = {\rm UNET}_{i}( I_{{i-1}}) \quad {\rm{for}} \quad i>1, \end{equation} \noindent where $I_i$ is the output of $i$th UNet unit, $I_{\rm haze}$ is the input hazy image after being transformed to YCbCr space, and the output $\hat{I}_{\rm IUB}$ of IUB is given as \begin{equation} \hat{I}_{\rm IUB} = I_{1}\oplus I_{2}\oplus \ldots I_{M} \end{equation} where $\oplus$ indicates concatenation operator and $M$ is the number of UNet unit. We have used $M=4$. 
An ablation study on the value of $M$ is presented later in section \ref{sec:ablation}. Here, we discuss the need of more than one UNet. In principle, a single UNet may be able to support dehazing to some extent. However, it may not be able to extract complex features and generate an output with fine details. One way to tackle this problem is to increase the number of layers in the encoder block so that more complex features can be learned. But the layers in the encoder block are arranged in feed forward fashion, and the height and the width of layers decreases upon moving further. This causes loss in spatial information and reduces the possibility of extraction of spatial features of high complexity. Therefore, we take an alternate approach of creating sequence of the multiple UNets. The sequence of UNets may be interpreted as a sequence of multiple encoder-decoder pairs with skip connections. The encoder in each UNet extracts the features from input tensor in the downsample feature map and decoder uses those features and projects them into an upsample latent space with same height and width as input tensor. In this way, each generator helps in learning increasingly complex features of haze while the decoder helps in retaining the spatial information in the image. Lastly, the concatenation step ensures that complexity of all the levels are available for subsequent reconstruction. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{img/iedb2.png} \caption{The effect of the successive UNet units is illustrated. Images are histogram equalized for better visualization. The histograms of the channels becomes narrower after passing more number of UNet units, indicating that adding more UNet units may cease to create more value after a certain limit.} \label{img:iedb} \end{figure*} We illustrate the effect of using multiple UNets in Fig. \ref{img:iedb}. Histogram equalized 3-channel output of each UNet is shown as an RGB image. It is seen that the spatial context is preserved, and at the same time haze introduced blur of different complexities are present in the outputs of different images. The haze in the last UNet output is flatter across the image and shows large scale blurs while the haze in the first UNet is local and introduces small scale blurs. Therefore, most dehazing is accomplished in UNet1, although the subsequent UNets pick the dehazing components that the previous UNets did not. Fig. \ref{img:iedb} also explains our choice of only four UNet blocks even though more blocks could be used in principle. We explain our choice in two parts. First, there is a trade-off involved between accuracy and speed when choosing the number of UNet blocks. Second, as seen in the histograms in Fig. \ref{img:iedb}, the dynamic range of channels decreases with every subsequent UNet block, thereby indicating the reduction in the usable information content. The standard deviation of the intensity values in the 3 channels after UNet4 is $\sim$12.2. Adding more blocks would further reduce this value, and therefore not provide significantly exploitable data for dehazing. \begin{figure*}[t] \centering \includegraphics[width=0.6\linewidth]{img/pyramid.jpg} \caption{Feature maps corresponding to one of the channels of 3$\times$3, 17$\times$17, and 45$\times$45 convolution layer respectively. 
The figure shows that smaller kernel size generates smaller scale features such as edges while large kernel size generates large scale features such as big patches.} \label{img:newfig} \end{figure*} {\textbf{Pyramid convolution (PyCon) block:}} Although the iterative UNet block does provide global and local structural information, the output lacks the global and local structural information for different sized objects. An underlying reason is that the structural information from different scales are not directly used to generate the output. To overcome this issue, we have used a novel pyramid convolution technique. Earlier pyramid pooling has been used in~\cite{ronneberger2015u} to leverage the “global structural information”. However, since the pooling layers are not learnable, we instead employ learnable convolution layers that can easily outperform the pooling layers in leveraging the information. We employ many convolution layers of different kernel sizes in parallel on the input map (the 12 channel output of iterative UNet block). Corresponding to different kernel sizes used for convolution, different output maps are generated with structural information of different spatial scales. The kernel sizes are chosen as 3, 5, 7, 11, 17, 25, 35, and 45, as shown in Fig. \ref{img:generator}. Odd sized kernels are used since pixels in the intermediate feature map are assumed to be symmetrical around output pixel. We observed introduction of distortion over layers upon using even-sized kernels, indicating the importance of exploiting the symmetry of the features around the output pixel. Zero padding is used to ensure that the features at the edges are not lost. After the generation of feature map from corresponding kernels, all the generated maps are concatenated to make an output feature map of 128 channels, which is subsequently used for the final image construction by applying a convolution layer of kernel size 3$\times$3 with zero padding. In this manner, the local to global information is directly used for the final image reconstruction. The effect of using pyramid convolution is shown in Fig. \ref{img:newfig}. In the zoom-ins shown in the middle panel, the arrows indicate some features of the size of the convolution filter used for generating that particular feature map. The illustrated 3 channels are superimposed as a hypothetical RGB image in the bottom left of Fig. \ref{img:newfig} to demonstrate the types of details present in a subset of the output feature map. Since we have used 8 convolution filters that operate on 12 channel input, we generate a total of 128 channels in the output feature map with a large variety of spatial features of multiple scales learned and restored. Therefore, the result image shown in the bottom panel has spatial features closely matching the ground truth, resulting in a low difference map (shown in the bottom right). One may consider using the 12 channel output of the iterative UNet block for generating the dehazed image directly, without employing the PyCon block. To indicate the importance of including the PyCon block, we include an ablation study in section \ref{sec:ablation}. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{img/discriminator.png} \caption{Architecture of the discriminator.} \label{fig:discriminator} \end{figure} \subsection{Discriminator} We have used a patch discriminator to determine whether a particular patch is realistic or not. 
The patches overlap in order to eliminate the problem of low performance on the edges. We have used 4$\times$4 convolution layers in discriminator. After every convolution layer, we have added an activation layer with an activation leaky rectified linear unit (Leaky ReLu) except the last layer where the activation function is sigmoid. The size of the convolution kernel used is 4$\times$4, and the output map size is 62$\times$62 for an input of size 512$\times$512. The architecture of our discriminator is shown in Fig. \ref{fig:discriminator}. \subsection{Loss functions}\label{sec:losses} Most dehazing models use the mean squares error (MSE) as the loss function \cite{zhang2018multi}. However, MSE is known to be only weakly correlated with human perception of image quality \cite{huang2014visibility}. Hence, we employ additional loss functions that are closer to human perception. To this end, we have used a combination of MSE ($L_2$ loss), adversarial loss $L_{\rm{adv}}$, content loss $L_{\rm{con}}$, and structural similarity loss $L_{\rm{SSIM}}$. We define the remaining loss functions below. The adversarial loss for the generator $L_{\rm{adv}}$ and the discriminator $L_{\rm dis}$ is defined as: \begin{equation} L_{\rm adv} = \langle \log(D(I_{\rm pred})) \rangle, \end{equation} \begin{equation} L_{\rm dis} = \langle \log(D(I_{\rm GT})) \rangle + \langle \log(1-D(I_{\rm pred})) \rangle, \end{equation} \noindent where $(I_{\rm haze}, I_{\rm GT})$ are the supervised pair of the hazy image and the corresponding ground truth. $D(I)$ is the discriminator’s estimate of the probability that data instance $I$ provided to it is indeed real. Similarly, $G(I)$ is the generator output for the input instance $I$. Further, $I_{\rm pred} = G (I_{\rm haze})$. The notation $<>$ indicates the expected value over all the supervised pairs. The MSE, also referred to as the $L_2$ loss, is defined as the average norm 2 distance between $I_{\rm{GT}}$ and $I_{\rm{pred}}$: \begin{equation} L_2= \langle{I_{\rm{GT}}-I_{\rm{pred}}}\rangle \label{eq:L2} \end{equation} Our content loss is the VGG based perceptual loss~\cite{johnson2016perceptual}, defined as: \begin{equation} L_{\rm{con}}= \big\langle\sum_i{\frac{1}{N_i}{\mid\mid \phi_i(I_{\rm GT})-\phi_i(I_{\rm pred})\mid\mid}}\big\rangle, \end{equation} \noindent where $N_i$ is the number of elements in the $i^{th}$ activation of VGG-19 and $\phi_i$ represents $i^{\rm{th}}$ activation of VGG-19. The structural similarity loss $L_{\rm{SSIM}}$ over reconstructed image $I_{\rm pred}$ and ground truth $I_{\rm GT}$ is defined as: \begin{equation} L_{\rm{SSIM}}= 1 - \langle {{\rm SSIM}(I_{\rm GT},I_{\rm pred})}\rangle, \end{equation} \noindent where ${\rm SSIM}(I, I^\prime)$ is the SSIM \cite{wang2004image} between the images $I$ and $I^\prime$. We note that although the losses $L_2$ and $L_{\rm SSIM}$ directly compare the predicted and the ground truth images, the nature of comparison is quite different across them. $L_2$ is insensitive to the structural details but retains the comparison of the general energy and dynamic range of the two images being compared. $L_{\rm SSIM}$ on the other hand compared the structural content in the images with less sensitivity to the contrast. Therefore, including these two loss functions provide complementary aspects of comparison between the predicted and the ground truth images. 
The overall generator loss $L_{\rm{G}}$ and discriminator loss $L_{\rm{D}}$ are given as \begin{equation} {L_{\rm{G}}=A_1 L_{\rm{{adv}}}+A_2 L_{\rm{con}}+A_3 L_2 + A_4 L_{\rm{SSIM}}} \end{equation} \begin{equation} {L_{\rm{D}}=B_1 L_{\rm{D_{adv}}}.} \end{equation} We have heuristically chosen the values of the constant weights in the above equation as $A_1 = 0.7$, $A_2 = 0.5$, $A_3 = 1.0$, $A_4 = 1.0$, and $B_1 = 1.0$. \section{Experimental results}\label{sec:results} \subsection{Datasets} We have trained and tested our model on the following four datasets, namely NTIRE 2018 image dehazing indoor dataset (abbreviated as I-Haze), NTIRE 2018 image dehazing outdoor dataset (O-Haze), Dense-Haze dataset of NTIRE 2019, and NTIRE 2020 dataset for non-homogeneous dehazing challenge (NTIRE 2020). \textbf{\textit{I-Haze ~\cite{DBLP:journals/corr/abs-1804-05091} and O-Haze ~\cite{ancuti2018haze}:}} These datasets contains 25 and 35 hazy images (size 2833$\times$4657 pixels) respectively for training. Both datasets contain 5 hazy images for validation along with their corresponding ground truth images. For both of these datasets, the training was done on training data and validation images were used for testing because although 5 hazy images for testing were given but their ground truths were not available to make the quantitative comparison. \textbf{\textit{Dense-Haze ~\cite{ancuti2019dense}:}} This dataset contains 45 hazy images (size 1200$\times$1600 pixels) for training and 5 hazy images for validation and 5 more for testing with their corresponding ground truth images. We have done training on training data and tested our model with test data. \textbf{\textit{NTIRE 2020~\cite{NTIRE2020}:}} This dataset contains 45 hazy images (size 1200$\times$1600 pixels) for training with their corresponding ground truth. It is the first dataset of non-homogeneous haze in our knowledge. As ground truth for validation was not given, hence we used only 40 image pairs for training and calculated our results on the rest of the 5 images. \begin{table}[t] \centering \caption{Quantitative comparison of various state of the art methods with our model on I-Haze, O-Haze and Dense-Haze datasets.Our model does the dehazing task in real-time at an average running time of 0.0311 s i.e. 
31.1 ms per image.} \label{tbl:indoor_outdoor_table} \begin{tabular}{|l||c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{I-Haze dataset}\\ \hline Metric & Input & CVPR'09 & TIP'15 & ECCV'16 & CVPR'16 & ICCV'17 & CVPRW'18 & Our\\ & & \cite{he2010single}& \cite{zhu2015fast}& \cite{ren2016single}& \cite{berman2016non}& \cite{li2017all}& \cite{zhang2018multi}& model\\ \hline SSIM & 0.7302 & 0.7516 & 0.6065 & 0.7545 & 0.6537 & 0.7323 & 0.8705 & \textbf{0.8994}\\ \hline PSNR & 13.80 & 14.43 & 12.24 & 15.22 & 14.12 & 13.98 & 22.53 & \textbf{22.56}\\ \hline \end{tabular} \begin{tabular}{|l||c|c|c|c|c|c|c|c|} \hline \multicolumn{9}{|c|}{O-Haze dataset}\\ \hline Metric & Input & CVPR'09 & TIP'15 & ECCV'16 & CVPR'16 & ICCV'17 & CVPRW'18 & Our\\ & & \cite{he2010single}& \cite{zhu2015fast}& \cite{ren2016single}& \cite{berman2016non}& \cite{li2017all}& \cite{zhang2018multi}& model\\ \hline SSIM & 0.5907 & 0.6532 & 0.5965 & 0.6495 & 0.5849 & 0.5385 & 0.7205 & \textbf{0.8919}\\ \hline PSNR & 13.56 & 16.78 & 16.08 & 17.56 & 15.98 & 15.03 & 24.24 & \textbf{24.27}\\ \hline \end{tabular} \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{Dense-Haze dataset}\\ \hline Metric & CVPR & Meng & Fattal & Cai & Ancuti & CVPR & ECCV & Morales & Our\\ & '09 \cite{he2010single} & et. al \cite{meng2013efficient} & \cite{fattal3dehazing} & et. al \cite{cai2016dehazenet} & et. al \cite{ancuti2016night} & '16 \cite{berman2016non} & '16 \cite{ren2016single} & et. al \cite{morales2019feature} & model\\ \hline SSIM & 0.398 & 0.352 & 0.326 & 0.374 & 0.306 & 0.358 & 0.369 & 0.569 & \textbf{0.613}\\ \hline PSNR & 14.56 & 14.62 & 12.11 & 11.36 & 13.67 & 13.18 & 12.52 & 16.37 & \textbf{17.01}\\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/New_Indoor1.png} \caption{Qualitative comparison of various benchmark with our model on I-Haze dataset} \label{img:indoor_res} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/NewOutdoor1.png} \caption{Qualitative comparison of various methods with our model on O-Haze dataset.} \label{img:outdoor_res} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/Combined1.png} \caption{Qualitative results for Dense-Haze and NTIRE 2020 dataset.} \label{img:dense} \end{figure} \subsection{Training details} The optimizer used for the training was Adam~\cite{kingma2014adam} with the initial learning rate for 0.001 and 0.001 for generator and discriminator respectively. We have randomly cropped large square patches from the training images. The crop size was 1024$\times$1024 for NTIRE 2020 and Dense-Haze. Leveraging on the even large sizes of images in I-Haze and O-Haze datasets, we created four crops each of two different sizes, 1024$\times$1024 and 2048$\times$2048. We then resized all the patches to 512$\times$512 using bicubic interpolation. These patches are randomly cropped for each epoch i.e. these patches are not same for every epoch. This has created an effectively larger dataset out of the small dataset available for training for each of the considered datasets. For datasets named I-Haze, O-Haze and NTIRE 2020, we converted these randomly cropped resize patches from RGB space to YCbCr space and then used them for training. For Dense-Haze dataset we directly used RGB patches for training. We decreased the learning rate of the generator whenever the loss became stable. We stopped training when the learning rate of the generator reached 0.00001 and the loss stabilized. 
We also tried to decrease the learning rate of discriminator but found that doing so did not give the best results. \subsection{Results} Here, we present our results and their comparison with the results of other models available in the literature. We note that we converted the test input image size to 512$\times$512 for all the datasets for generating our results in view of our hardware (GPU) memory constraints. Quantitative evaluation is performed using the SSIM metric and the peak signal to noise ratio (PSNR). The metrics are computed in the RGB space even if the training was done in YCbCr space. The quantitative results are compared with earlier state-of-the-art in Table \ref{tbl:indoor_outdoor_table}. The metrics for the other methods are reproduced from ~\cite{zhang2018multi} for the I-Haze and O-Haze dataset. The benchmark for Dense-haze was provided in ~\cite{ancuti2019dense}. We further include the results of Morales at al.~\cite{morales2019feature} for comparison. \textbf{\textit{I-Haze:}} The average PSNR and SSIM of our method for this dataset on validation data were 22.56 and 0.8994 respectively. It is evident from Table \ref{tbl:indoor_outdoor_table} that our model outperforms the state-of-the-art in both SSIM and PSNR index by a good margin. Qualitative comparison results on the test data are shown in Fig. \ref{img:indoor_res}. It is evident that only CVPRW'18 \cite{zhang2018multi} competes with our method in the quality of dehazed image and match with the ground truth. \textbf{\textit{O-Haze:}} The average PSNR and SSIM for this dataset on validation data were 24.27 and 0.8919 respectively on validation data, see Table \ref{tbl:indoor_outdoor_table}. Our model clearly outperforms all the state-of-the-art in both PSNR and SSIM index by a large margin. For SSIM, the closest performing method was CVPRW'18~\cite{zhang2018multi} with SSIM of 0.7205, which is significantly lower than ours i.e 0.8919. It is notable from the results of all the methods that this dataset is more challenging than I-Haze. Nonetheless, our method provides comparable performance over both I-Haze and O-Haze datasets. The qualitative comparison of results on the test data are shown in Fig. \ref{img:outdoor_res}. Similar to the I-Haze dataset, only CVPRW'18 \cite{zhang2018multi} and our method generate dehazed images of good quality. As compared to I-Haze results in Fig. \ref{img:indoor_res}, it is more strongly evident in Fig. \ref{img:outdoor_res} that the color cast of our method is slightly mismatched with the ground truth, where CVPRW'18 performs better than our method. However, CVPRW'18 shows poorer structural continuity than our method, as evident more strongly in Fig. \ref{img:indoor_res}. \textbf{\textit{Dense-Haze:}} From Table \ref{tbl:indoor_outdoor_table}, it is evident that this dataset is significantly more challenging that the I-Haze and O-Haze datasets. All methods perform quite poorer for this dataset, as compared to the numbers reported for I-Haze and O-Haze dataset. Even though the performance of our method is also poorer for this dataset as compared to the other datasets, its SSIM and PSNR values are significantly better than the other 8 methods whose results are included in Table \ref{tbl:indoor_outdoor_table} for this dataset. Qualitative comparison with select methods is shown in Fig. \ref{img:dense}. The results clearly illustrate the challenge of this dataset as the features and details of the scene are almost imperceptible through the dense haze. 
Only our method is capable of dehazing the image effectively and bringing forth the details of the scene. Nonetheless, the color cast is non-uniformly compensated and different from the ground truth in regions. \textbf{\textit{NTIRE 2020:}} As the ground truth for test data is not given, we randomly chose 5 images for testing and used the rest of the 40 image pair for training. The average SSIM and PSNR are 0.8726 and 19.40 respectively. This SSIM value is better than the best SSIM observed in the competition and informed to the participants in a personal email after the test phase. The qualitative results are shown in Fig. \ref{img:dense}. The observations are generally similar to the observations for the Dense-Haze dataset. Our results are qualitatively quite close to the ground truth and show the ability of our method to recover the details lost in haze, despite the non-homogeneity of the haze. Second, we observe a little bit of mismatch in the color reproduction and in-homogeneity in the color cast, which needs further work. We expect that the problem of color cast inhomogeneity may be related to the inhomogeneity in the haze itself, which may have been present in the Dense-Haze data as well but may not have been perceptible due to the generally high density of haze. \section{Ablation study}\label{sec:ablation} We conduct ablation study using I-Haze and O-Haze datasets. We consider the ablation associated with the architectural elements in section \ref{sec:unet_experiment}, loss components in section \ref{sec:losses_experiment}, and the image space used in training in section \ref{sec:rgb_experiment}. \subsection{Architecture ablation} \label{sec:unet_experiment} Here, we consider ablation study relating to the number of UNet units used in the iterative UNet block and the absence or presence of the pyramid convolution block. The results are shown in Table \ref{tab:abl1}. It is evident that decreasing or increasing the number of UNet blocks degrades the performance and the use of $M=4$ UNet blocks is optimal for the architecture. This is in agreement in the observations derived from Fig. \ref{img:iedb}. Similarly, dropping the PyCon block also degrades the performance. \subsection{Loss ablation} \label{sec:losses_experiment} We proposed in section \ref{sec:losses} to use four types of loss functions for the training of the generator. Here, we consider the effect of dropping one loss function at a time. The results are presented in the bottom panel of Table \ref{tab:abl1}. It is seen than dropping any of the loss function results into significant deterioration of performance. This indicates the different and complementary roles each of these loss functions is playing. Our observation of the qualitative results, discussed in section \ref{sec:results}, we might need to introduce another loss function related to the preservation of the color cast or color constancy. \begin{table}[t] \centering \caption{The results of ablation study are presented here. The reference indicates the use of 4 UNet blocks, inclusion of PyCon block with layer configuration as shown in Fig. \ref{img:generator}. All loss functions discussed in section \ref{sec:losses} are used and the entire architecture uses YCbCr space, such as shown in Fig. 
\begin{table}[t] \centering \caption{The results of the ablation study are presented here. The reference indicates the use of 4 UNet blocks and the inclusion of the PyCon block with the layer configuration shown in Fig. \ref{img:generator}. All loss functions discussed in section \ref{sec:losses} are used and the entire architecture uses the YCbCr space, as shown in Fig. \ref{img:generator}.} \label{tab:abl1} \begin{tabular}{|l|l||c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{\textbf{(a) Ablation study on architectural units}}\\ \hline {Dataset}& {Metric}&\textbf{Reference}&{1 UNet}&{2 UNets}&{3 UNets}&{5 UNets} &{No PyCon}\\ \hline \hline I-Haze & SSIM & \textbf{0.8994} & 0.8572 & 0.8679 & 0.8820 & {0.8932} & {0.8878}\\ \cline{2-8} & PSNR & \textbf{22.57} & 18.54 & 19.94 & 20.92 & {21.62} & 21.17 \\ \hline O-Haze & SSIM & \textbf{0.8927} & 0.8500 & 0.8639 & 0.8795 & {0.8639} & {0.8768}\\ \cline{2-8} & PSNR & \textbf{24.30} & 21.34 & 22.36 & 23.06 & {22.36} & 23.13 \\ \hline \hline \multicolumn{8}{|c|}{\textbf {(b) Ablation study on losses and the image space}}\\ \hline {}&{}&{Reference}&\multicolumn{4}{|c||}{The loss function dropped} & Direct use of RGB,\\ \cline{4-7} {}&{}&{}&{$L_{\rm {adv}}$}&{$L_{\rm con}$}&{$L_{2}$}&{$L_{\rm SSIM}$} &{not the YCbCr space}\\ \hline \hline I-Haze & SSIM & \textbf{0.8994} & {0.8620} & {0.8372} & {0.8648} & {0.8343} & {0.8944}\\ \cline{2-8} {} & PSNR & \textbf{22.57} & {19.52} & {18.99} & {20.02} & {19.58} & 20.94\\ \hline O-Haze & SSIM & \textbf{0.8927} & {0.8608} & {0.8271} & {0.8650} & {0.8568} & {0.8712}\\ \cline{2-8} {} & PSNR & \textbf{24.30} & {22.26} & {20.66} & {22.78} & {22.44} & 22.54\\ \hline \end{tabular} \end{table} \subsection{Use of RGB versus YCbCr space} \label{sec:rgb_experiment} If we use the RGB space instead of the YCbCr space for training, we observe a degraded performance in terms of SSIM, as reported in Table \ref{tab:abl1}(b). However, we note that this observation is not consistent over all the datasets. Specifically, we noted that for Dense-Haze, the YCbCr conversion gave slightly poorer results than RGB-based training. Hence, we have used RGB patches for training on the Dense-Haze dataset. \section{Conclusion}\label{sec:conclusion} The presented single image dehazing method is an end-to-end trainable architecture that is applicable in diverse situations such as indoor, outdoor, dense, and non-homogeneous haze, even though the training datasets used are small in each of these cases. It beats the state-of-the-art results in terms of SSIM and PSNR for all three datasets whose results are available. Qualitative results for indoor images indicate preservation of colors in the reconstructed image in the I-Haze dataset, while a poorer color reconstruction is observed in the results of the other datasets. In the future, we will improve our model to inherently include color preservation and a seamless color cast as well. Source code, results, and the trained model are shared at our project page (https://github.com/ayu-22/BPPNet-Back-Projected-Pyramid-Network). \clearpage \bibliographystyle{splncs04}
\section{Method} A very active subfield of high energy physics in recent years is the study of hadrons with heavy-light quark content \cite{Neubert}. A major effort has been spent on calculating the Isgur-Wise function, which, once it is determined, can be widely used in calculations of heavy meson decay ($b \to c$) processes. After an initial exploration \cite{BSS1}, calculations of the Isgur-Wise function on the lattice \cite{BSS2,UKQCD,Kenway} have quickly obtained interesting results which can be directly compared with experimental data and can be used to determine one of the elements of the CKM matrix, $V_{cb}$, in the Standard Model. For calculating the Isgur-Wise function, $\xi(v\cdot v^\prime)$, on the lattice, we have proposed \cite{BSS1} to use the flavor symmetry of the heavy quark effective theory (HQEFT) \cite{Bjorken} and measure the $D \to D$ elastic scattering matrix element \begin{equation} <D_{v^\prime} | {\bar c}\gamma_\nu c|D_{v}> = m_D C_{cc}(\mu) \xi (v\cdot v^\prime; \mu) (v + v^\prime)_\nu ~, \label{eq:matrix} \end{equation} where $m_D$ is the D meson mass, and $v$ and $v^\prime$ are the four-velocities of the initial and final D mesons. The constant $C_{cc}(\mu)$ represents the QCD renormalization effect from the heavy quark scale to a light scale $\mu$. The calculation was performed in the quenched approximation using Wilson fermions. Both light and heavy quarks are treated as propagating. For details of the numerical simulation, refer to Refs. \cite{BSS1,BSS2}. From the lattice point of view, calculating the elastic scattering matrix element has significant advantages. In comparison to the $B \to D$ process, the elastic process on the lattice has much less noise and therefore has smaller statistical errors. Furthermore, because of the exactly known value \begin{equation} <D_{v} | {\bar c}\gamma_\nu c|D_{v}> = 2m_D, \label{eq:norm} \end{equation} at the ``zero recoil'' point $v^\prime = v$, the lattice artifacts that are independent of momentum can be removed without ambiguity using Eq.~(\ref{eq:norm}) as a normalization condition for the lattice data \cite{BSS1,BSS2}. A similar strategy for the $B \to D$ decay would have introduced an extra (unknown) $O(1/m_Q^2)$ correction. Therefore, not surprisingly, the most accurate data obtained so far on the lattice are from $D \to D$ elastic scattering \cite{BSS2,UKQCD}. However, inelastic processes, such as $B \to D$ and $B \to D^*$, can be valuable consistency checks \cite{UKQCD,Kenway}. \section{Systematic Errors} For the analysis of the systematic errors, let us consider the slope, $\rho^2$, of the Isgur-Wise function at the zero recoil point $y \equiv v\cdot v^\prime = 1$. A fit of the lattice data \cite{BSS2} to the relativistic harmonic oscillator model \cite{NeuRie} \begin{equation} \xi(y) = {2\over y+1} \exp\left[-(2\rho^2_{NR}-1){y-1\over y+1}\right]~, \label{eq:NR} \end{equation} gives $\rho_{NR}^2 = 1.41(19)$. For a model independent determination of $\rho^2$, one may choose to fit $\xi(y)$ near $y=1$ \begin{equation} \xi (y) = 1 - \rho^2 (y-1), \end{equation} and obtain \cite{BSS2} $\rho^2 = 1.24(26)$. All the fits have taken into account the correlations between data points using covariance matrices.
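As an aside, the fit of Eq.~(\ref{eq:NR}) is a standard one-parameter nonlinear least-squares problem. The following is a minimal sketch of such a fit, using SciPy and synthetic data points in place of the actual lattice data and their covariance matrix:
\begin{verbatim}
# Minimal sketch of fitting the relativistic harmonic oscillator model
# for rho^2_NR (synthetic data; the real fit uses the full covariance).
import numpy as np
from scipy.optimize import curve_fit

def xi_model(y, rho2):
    # Model of Eq. (3), normalized to xi(1) = 1.
    return 2.0 / (y + 1.0) * np.exp(-(2.0 * rho2 - 1.0) * (y - 1.0) / (y + 1.0))

# Hypothetical lattice points xi(y) with errors, for illustration only.
y = np.array([1.00, 1.05, 1.10, 1.15, 1.20])
xi = np.array([1.000, 0.935, 0.877, 0.825, 0.779])
err = np.array([0.001, 0.020, 0.030, 0.040, 0.050])

popt, pcov = curve_fit(xi_model, y, xi, p0=[1.0],
                       sigma=err, absolute_sigma=True)
print(f"rho^2_NR = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
\end{verbatim}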
There are several potential sources of systematic corrections: quenching, scaling violation, light quark mass $m_q$ dependence, finite volume effects, and heavy quark mass $m_Q$ dependence. {\it Quenching.} The error due to quenching is the most difficult to quantify. Although the effect is expected to be small if a scale such as $f_\pi$ is set to the physical value (we use Ref.~\cite{BLS} to set the scale with $f_\pi$), a systematic numerical study is still lacking. We will not give an assessment of the quenching effect here. {\it Scaling violation.} Since, by using the normalization condition Eq.~(\ref{eq:norm}), all the momentum independent lattice artifacts are removed, the remaining scaling violations are proportional to $y-1$. Therefore, we expect the residual scaling violations to be small. A fit to data at $\beta=6.3$ and $\beta=6.0$ found a difference of $13\%$ for $\rho_{NR}^2$. A direct check of Euclidean invariance on the lattice is to measure the ratio of the form factors $f_-/f_+$. This ratio was found to be small and consistent with zero within large errors \cite{BSS1,BSS2}. {\it Light quark mass ($m_q$) dependence.} Our lattice data for $\xi(y)$ are presented with $m_q$ set to the strange quark mass, $m_q=m_s$. These data are directly relevant to processes such as $B_s \to D_s$ and $B_s \to D_s^*$. For $B \to D$, they have to be extrapolated to the ``chiral limit'' $m_q = m_{u,d}$. An inspection shows that the linear size of the physical volume is in the range of $(100MeV)^{-1}$ \cite{BSS2}; therefore, at $m_q < m_s$ the finite size effect becomes important and contaminates the $m_q$ dependence. To estimate the $m_q$ dependence we therefore use data obtained on the largest physical volume (the $24^3\times39$ lattice at $\beta=6.0$). We first estimate the shift in $\rho_{NR}^2$ from $m_q$ to $m_{q^\prime}$ with both $m_q, m_{q^\prime}$ in the range of $m_s$. Then this shift in $\rho_{NR}^2$ is extrapolated to the chiral limit. Using this procedure, we find that $\rho_{NR}^2$ {\it decreases} by $12\%$ from $m_q = m_s$ to $m_q = m_{u,d}$ \cite{BSS2}. It is interesting to note that the sign of this shift is opposite to the chiral perturbation theory prediction \cite{Jenkins} and in agreement with the bag model calculation \cite{Sadzi}. It is important to confirm this trend in the future with improved statistics. {\it Finite volume effect.} To estimate the finite volume effect, we compare our data on the $16^3\times39$ and $24^3\times39$ lattices at $\beta=6.0, \kappa_q=.154$ ($m_q=m_s$). There is a shift of $15\%$ in $\rho_{NR}^2$. We expect that the finite size effect would be smaller at a heavier $m_q$. Indeed, the shift in $\rho_{NR}^2$ is reduced to $9\%$ at $\kappa_q=.152$. {\it Heavy quark mass ($m_Q$) dependence.} Recent lattice calculations indicated that the heavy quark symmetry begins to set in in the neighborhood of the charm mass. The leading $1/m_Q$ dependence agrees with the expectations of HQEFT. We refer to Ref. \cite{BSS2} for discussions of specific examples. Therefore, simulation results obtained in the charm mass range can be used and extrapolated to the heavy quark limit. For the Isgur-Wise function the leading order $1/m_Q$ correction should be $\sim (y-1)\Lambda_{QCD}/m_Q$ \cite{Luke}. It should be relatively small for current lattice calculations with $y-1 < 0.2$. Indeed, comparing $\rho_{NR}^2$ at $m_Q \sim 1.6$GeV and $2.3$GeV, we find a $15\%$ shift. {\it Summary.} Adding up the above items in quadrature, the total systematic correction becomes $29\%$. We have \begin{equation} \rho_{NR}^2 = 1.41 \pm .19 \pm .41, \label{eq:NRfit} \end{equation} where the first error is statistical and the second is systematic.
For the linear fit, we get \begin{equation} \rho^2 = 1.24 \pm .26 \pm .36. \label{eq:Lfit} \end{equation} We should point out that this $29\%$ systematic error is probably an overestimate. Our fits in Eqs.~(\ref{eq:NRfit}) and (\ref{eq:Lfit}) have been performed with data at all $\beta$, lattice size, and heavy quark mass values. Therefore, the combined systematic error is unlikely to be much larger than the statistical error ($.19$). Indeed, though we use it as an indication of the systematic errors, the shift in $\rho^2$ due to each item discussed above is not statistically significant. To get a better analysis of the systematic errors, one needs more data points and better statistics. At this point, our discussion of the systematic errors should be taken primarily as a discussion of the methodology; the estimates obtained are only qualitative. A comparison with continuum model calculations is given in Ref. \cite{BSS2}. Clearly, the lattice result for $\rho^2$ has reached a numerical accuracy similar to, if not better than, that of the continuum models. Our result for $\rho^2_{NR}$ is also consistent with a recent lattice calculation by the UKQCD Collaboration \cite{UKQCD,Kenway}. \section{Extracting $V_{cb}$} Although $\rho^2$ is useful for comparison with continuum model calculations, it is less useful for extracting $V_{cb}$ from the experimental data. Around $y =1$, the experimental data have the lowest statistical precision \cite{ARGUS}. On the lattice, a model independent determination of $\rho^2$ also tends to have a larger uncertainty because only a few data points close to $y=1$ can be used. \begin{figure}[htb] \vspace{9pt} \framebox[55mm]{\rule[-21mm]{0mm}{43mm}} \caption{Comparison of present lattice data (crosses) with experimental data (open circles) [7]. The solid line is a fit to the lattice data according to Eq.~(3).} \end{figure} In the ARGUS (and CLEO) experiments, what has been measured is $|V_{cb}|\xi(y)$, with the most accurate data obtained in the range $1.1 < y < 1.5$. In the lattice calculation, we have obtained $\xi(y)$ in the range $1 < y < 1.2 $. Therefore, at least in the range $ 1.1 < y < 1.2$ we can directly fit the experimental data with the lattice data with only one unknown parameter, $V_{cb}$. One such fit is shown in Fig. 1. We obtain \cite{KEK} \begin{equation} |V_{cb}|\sqrt{\tau_B\over 1.53ps} = 0.044 \pm .005 \pm .007, \end{equation} where the first error is due to the statistical and systematic errors in the lattice calculation and the second error is from the experimental uncertainties. The errors on the lattice data essentially reflect the spread of $\xi$ over different $\beta$, lattice size, and heavy quark mass values.
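The one-parameter fit described above amounts to a weighted linear least-squares determination of an overall scale factor. A minimal sketch, with hypothetical numbers in place of the actual lattice and experimental points, is:
\begin{verbatim}
# Minimal sketch of the one-parameter fit |V_cb| * xi(y) = experiment
# (hypothetical data in the overlap range 1.1 < y < 1.2).
import numpy as np

xi_lat = np.array([0.880, 0.860, 0.840])   # lattice xi(y) at common y values
vxi_exp = np.array([0.039, 0.038, 0.037])  # experimental |V_cb| * xi(y)
err_exp = np.array([0.004, 0.004, 0.005])  # experimental errors

# Weighted least squares for the single scale parameter |V_cb|:
# minimize sum_i [(vxi_exp_i - V * xi_lat_i) / err_i]^2.
w = 1.0 / err_exp**2
vcb = np.sum(w * vxi_exp * xi_lat) / np.sum(w * xi_lat**2)
vcb_err = 1.0 / np.sqrt(np.sum(w * xi_lat**2))
print(f"|V_cb| = {vcb:.4f} +/- {vcb_err:.4f}")
\end{verbatim}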
\section{INTRODUCTION} A main problem of mathematical statistics and simulation is connected with the insufficiency of primary statistical data. In this situation, the Bootstrap method can be used successfully (Efron, Tibshirani 1993; Davison, Hinkley 1997). If the dependence between the characteristics of interest and the input data is very complex and is described by a numerical algorithm, then simulation is usually applied. In this case the probability distributions of the input data are not estimated, because the given primary data set is of small size and such estimation would give a bias and a big variance. The Bootstrap method supposes that, during simulation, random variables are not generated by a random number generator in accordance with estimated distributions, but are extracted at random from the given primary data. Various problems of this approach were considered in previous papers (Andronov, Merkuryev 1996, 1998). We will consider a known function $\phi$ of m independent continuous random variables $X_1, X_2, \ldots, X_m: \phi (X_1, X_2, \ldots, X_m).$ It is assumed that the distributions of the random variables $\{X_i\}$ are unknown, but a sample population $H_i=\{X_{i1}, X_{i2}, \ldots, X_{in_i}\}$ is available for each $X_i, i=\overline{1,m}$. Here $n_i$ is the size of the sample $H_i$. The problem consists in the estimation of the mathematical expectation \begin{equation} \label{thetaform} \Theta=E\;\phi(X_1, X_2, \ldots, X_m). \end{equation} Use of the Bootstrap method supposes the organization of some number of realizations of the value $\phi(X_1, X_2, \ldots, X_m)$. In each realization the values of the arguments are extracted randomly from the corresponding sample populations $H_1, H_2, \ldots, H_m$. Let $j_i(l)$ be the number of the element which was extracted from the population $H_i$ in the $l$-th realization. We denote ${\bf X}(l)=(X_{1,j_1(l)}, X_{2,j_2(l)}, \ldots, X_{m,j_m(l)})$ and name it the $l$-th subsample. The estimator $\Theta^*$ of the mathematical expectation $\Theta$ is equal to the average value over all r realizations: \begin{equation} \label{thetaest} \Theta^*=\frac{1}{r}\sum_{l=1}^r\phi({\bf X}(l)). \end{equation} Our main aim is to calculate and to minimize the variance of this estimator. The variance depends upon two factors: 1) the calculation method of the function $\phi$; 2) the formation mode of the subsamples ${\bf X}(l)$. The next two Sections are dedicated to these questions. In Section 4 we will show how to decrease the variance $D\;\Theta^*$ using the dynamic programming method.
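For concreteness, the following is a minimal sketch of the basic estimator (\ref{thetaest}), assuming an illustrative function $\phi$ and small synthetic samples in place of real primary data:
\begin{verbatim}
# Minimal sketch of the bootstrap estimator Theta* of Eq. (2).
import numpy as np

rng = np.random.default_rng(0)

def phi(x1, x2, x3):
    # Illustrative function of m = 3 input variables.
    return x1 * x2 + x3

# Given primary samples H_1, H_2, H_3 (synthetic here).
H = [rng.normal(1.0, 0.2, size=10),
     rng.normal(2.0, 0.3, size=15),
     rng.normal(0.5, 0.1, size=12)]

r = 10_000  # number of bootstrap realizations
values = [phi(*(rng.choice(Hi) for Hi in H)) for _ in range(r)]
theta_star = np.mean(values)
print(f"Theta* = {theta_star:.4f}")
\end{verbatim}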
\section{THE HIERARCHICAL BOOTSTRAP METHOD} We suppose that the function $\phi$ is calculated by a calculation tree. The root of this tree corresponds to the computed function $\phi=\phi_k$; it is the vertex number k. The vertices numbered $1, 2, \ldots, m$ correspond to the input variables $X_1, X_2, \ldots, X_m$. The remaining vertices are intermediate ones. They correspond to the intermediate functions $\phi_{m+1}, \phi_{m+2}, \ldots, \phi_{k-1}$ (see Fig.1). \begin{figure}[h] \includegraphics{tree.eps} \bigskip \bigskip \bigskip \caption{Calculation tree} \label{f2} \end{figure} Only one arc $y_v$ comes out of each vertex $v$, $v=1,2,\ldots,k$. It corresponds to the value of the function $\phi_v$. We suppose that $y_v=X_v, v=1,2,\ldots,m; y_k=\phi=\phi_k$. We denote by $I_v$ the set of vertices from which arcs come into the vertex $v$, and by $J_v$ the corresponding set of variables (arcs): $i\in I_v\Leftrightarrow y_i\in J_v$. It is clear that $I_v=\oslash$ for $v=1,2,\ldots,m$; $y_v=f_v(J_v)$, $v=m+1, m+2, \ldots, k$. We suppose that the numbering of the vertices is correct: if $v<w$ then $w\notin I_v$. Now, the function value can be calculated by the sweep method. At the beginning, we extract separate elements $X_{1j_1(1)}, X_{2j_2(1)}, \ldots, X_{mj_m(1)}$ from the populations $H_1, H_2, \ldots, H_m$, then calculate the function values $\phi_{m+1}, \phi_{m+2}, \ldots$, $\phi_k=\phi$ successively. After r such runs the estimator $\Theta^*$ is computed according to formula (\ref{thetaest}). An analysis of this method was developed in the authors' previous papers. The Hierarchical Bootstrap method is based on the wave algorithm. Here all values of the function $\phi_v$ for each vertex $v=m+1,m+2,\ldots, k$ are calculated at once. They form the set $H_v=\{Y_{v1},Y_{v2},\ldots,Y_{vn_v}\}$, where $n_v$ is the number of realizations (the sample size). Getting one realization $Y_{vl}$ consists of choosing a value from each corresponding population $H_i,i\in I(v)$, and calculating the value of $\phi_v$. By this we suppose that random sampling with replacement is used, where each element from $H_i$ is chosen independently with probability $1/n_i$. Further on, this procedure is repeated for the next vertex. Finally we get $Y_{k1},Y_{k2},\ldots, Y_{kn_k}$ as the values of the population $H_k$. Their average gives the estimator $\Theta^*$ by analogy with formula (\ref{thetaest}).
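A minimal sketch of this wave algorithm on a small calculation tree (with an illustrative choice of intermediate functions and sample sizes) is:
\begin{verbatim}
# Minimal sketch of the Hierarchical Bootstrap (wave algorithm) on a toy
# tree: vertices 0,1,2 are the inputs X_1, X_2, X_3; vertex 3 computes
# phi_4 = X_1 * X_2; vertex 4 (the root) computes phi_5 = phi_4 + X_3.
import numpy as np

rng = np.random.default_rng(0)

children = {3: [0, 1], 4: [3, 2]}   # I_v for the intermediate vertices
funcs = {3: lambda a, b: a * b, 4: lambda a, b: a + b}
n = {3: 200, 4: 500}                # sample sizes n_v of the new populations

# Input populations H_1..H_3 (synthetic primary data).
H = {0: rng.normal(1.0, 0.2, 10),
     1: rng.normal(2.0, 0.3, 15),
     2: rng.normal(0.5, 0.1, 12)}

for v in sorted(children):          # correct numbering: children come first
    # Each realization Y_vl draws one value from every child population.
    H[v] = np.array([funcs[v](*(rng.choice(H[i]) for i in children[v]))
                     for _ in range(n[v])])

theta_star = H[4].mean()            # average of the root population H_k
print(f"Theta* = {theta_star:.4f}")
\end{verbatim}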
\section{EXPRESSIONS FOR VARIANCE} The aim of this section is to show how to calculate the variance $D\;\Theta^*$ of the estimator (\ref{thetaest}). It is easy to see that in the case of the Hierarchical Bootstrap the variance $D\;\Theta^*$ is a function of the sample sizes $n_1, n_2,\ldots,n_k$. In the authors' previous papers the variance $D\;\Theta^*$ was calculated using the notion of $\omega$-pairs (Andronov et al. 1996, 1998). Then, it was considered as a continuous function of the variables $n_1,n_2,\ldots,n_k$, and the reduced gradient method was used. But now we need another approach for the calculation of $D\;\Theta^*$. We use the Taylor decomposition of the function $\phi({\bf x})$ with respect to the mean ${\bf \mu}=(E\;X_1,E\;X_2,\ldots,E\;X_m)$: \begin{equation} \phi({\bf x})=\phi({\bf \mu})+\bigtriangledown^T\phi({\bf \mu})({\bf x}-{\bf \mu})+\frac{1}{2}({\bf x}-{\bf \mu})^T \bigtriangledown^2\phi({\bf \mu})({\bf x}-{\bf \mu})+O(||{\bf x}-{\bf \mu}||^3), \label{expr1} \end{equation} \begin{tabbing} where \= $\bigtriangledown^2\phi({\bf x})$ is the matrix of second derivatives of the function $\phi({\bf x})$,\\ \> $||{\bf v}||$ is the Euclidean norm of the vector ${\bf v}$. \end{tabbing} It gives the following decomposition: \begin{eqnarray} \phi({\bf x})\phi({\bf x'})=&\phi^2({\bf \mu})+\phi({\bf \mu})\bigtriangledown^T\phi({\bf \mu})({\bf x}-{\bf \mu})+ \phi({\bf \mu})\bigtriangledown^T\phi({\bf \mu})({\bf x'}-{\bf \mu})+\nonumber\\ &+\frac{1}{2}\phi({\bf \mu})({\bf x}-{\bf \mu})^T \bigtriangledown^2\phi({\bf \mu})({\bf x}-{\bf \mu})+\nonumber\\ &+\frac{1}{2}\phi({\bf \mu})({\bf x'}-{\bf \mu})^T\bigtriangledown^2 \phi({\bf \mu})({\bf x'}-{\bf \mu})+\\ &+\bigtriangledown^T\phi({\bf \mu})({\bf x}-{\bf \mu})({\bf x'}-{\bf \mu})^T\bigtriangledown\phi({\bf \mu})+\nonumber\\ &+O(||{\bf x'}-{\bf \mu}||^3 + ||{\bf x}-{\bf \mu}||^3).\nonumber \label{expr2} \end{eqnarray} If ${\bf X}=(X_1,X_2,\ldots,X_m)$ is a random vector with mutually independent components $\{X_i\}$, $E\;{\bf X}={\bf \mu}$, $D\;X_i=\sigma^2_i$, then \begin{equation} E\;\phi({\bf X})=\phi({\bf \mu})+\frac{1}{2}\sum_{i=1}^{m}\sigma^2_i\frac{\partial^2}{\partial x_i^2}\phi({\bf \mu})+ E(O(||{\bf X}-{\bf \mu}||^3)), \label{expr31} \end{equation} \begin{equation} (E\;\phi({\bf X}))^2=\phi^2({\bf \mu})+\phi({\bf \mu})\sum_{i=1}^{m}\sigma_i^2\frac{\partial^2}{\partial x_i^2}\phi({\bf \mu})+\ldots. \label{expr32} \end{equation} Now we suppose that $X_i$ and $X'_i$ are some values from the sample population $H_i=\{Y_{i1},Y_{i2},\ldots,Y_{in_i}\}$. Let $C_i$ denote the covariance of two elements $Y_{il}$ and $Y_{il'}$ with different numbers $l$ and $l'$: \begin{equation} C_i=Cov(Y_{il},Y_{il'}|l\ne l'). \label{expr4} \end{equation} Let $X_i$ and $X'_i$ correspond to the elements $Y_{il}$ and $Y_{il'}$, respectively. Because we extract $X_i$ and $X_i'$ from $H_i$ at random and with replacement, the event $\{l = l'\}$ occurs with probability $1/n_i$. Then \begin{equation} Cov(X_i,X_i')=\left\{ \begin{array}{ll} Cov(Y_{il},Y_{il})=D\;Y_i&\mbox{with the probability $1/n_i$},\\ Cov(Y_{il},Y_{il'})=C_i&\mbox{with the probability $1-1/n_i$}. \end{array} \right. \label{expr5} \end{equation} Therefore \begin{equation} Cov(X_i,X_i')=\frac{1}{n_i}D\;Y_i+\left(1-\frac{1}{n_i}\right)C_i. \label{expr6} \end{equation} If the values $\phi(X)$ and $\phi(X')$ correspond to the subfunction $y_v=\phi_v(\cdot)$ and the sample population $H_v$, then formulas (\ref{expr2}), (\ref{expr31}) and (\ref{expr32}) give \begin{equation} C_v=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 Cov(X_i,X_i')+\ldots. \label{expr7} \end{equation} Now we can get from (\ref{expr6}) \begin{equation} Cov(\phi_v(X),\phi_v(X'))=\frac{1}{n_v}\sigma^2_v+\left(1-\frac{1}{n_v}\right)C_v, \label{expr8} \end{equation} where the variance $\sigma^2_v=D\;Y_v$ can be determined from (\ref{expr7}) by setting $X_i=X_i'$: \begin{equation} \sigma^2_v=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2\sigma_i^2+\ldots. \label{expr9} \end{equation} Finally we have \begin{equation} Cov(\phi_v(X),\phi_v(X'))=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i} \phi_v(\mu_v)\right)^2\left[\frac{1}{n_v}\sigma_i^2+\left(1-\frac{1}{n_v}\right)Cov(X_i,X_i')\right]+\ldots, \label{expr10} \end{equation} or \begin{equation} Cov(\phi_v(X),\phi_v(X'))=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i} \phi_v(\mu_v)\right)^2\left[ \left(\frac{1}{n_v}+\left(1-\frac{1}{n_v}\right)\frac{1}{n_i}\right)\sigma^2_i +\left(1-\frac{1}{n_v}\right)\left(1-\frac{1}{n_i}\right)C_i\right]+\ldots.
\label{expr11} \end{equation} Here we suppose that the variances $\sigma_1^2,\sigma_2^2,\ldots, \sigma_m^2$ of the input random variables $X_1,X_2,\ldots,X_m$ are known. For example, it is possible to use estimators of these variances, calculated from the given sample populations $H_1,H_2,\ldots,H_m$. If the vertex $i$ belongs to the initial level of the calculation tree (that is, $i=1,2,\ldots,m$) then $\phi_i(x_i)=x_i$, $\sigma_i^2$ is a known value, and $C_i=0$. Therefore \begin{equation} Cov(\phi_i(X_i),\phi_i(X'_i))=\frac{1}{n_i}\sigma_i^2,\qquad i=1,2,\ldots,m. \label{expr12} \end{equation} The other covariances and variances are calculated recurrently in accordance with formulas (\ref{expr7}), (\ref{expr9}), (\ref{expr11}) and (\ref{expr12}), from vertices with smaller numbers to vertices with larger numbers. Finally we get the variance $D\;\Theta^*$ of interest as the variance for the root of the calculation tree: \begin{equation} D\;\Theta^*=\frac{1}{r}(\sigma_k^2+(r-1)C_k)+\ldots, \label{expr13} \end{equation} where $r=n_k$. A sketch of this recursion is given below.
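The following is a minimal sketch of this recurrent calculation on a toy tree, keeping only the leading terms of the formulas above; the derivatives at the mean and the input variances are illustrative numbers.
\begin{verbatim}
# Minimal sketch of the recurrent variance calculation (leading terms).
# Toy tree: leaves 0,1,2; vertex 3 with children {0,1}; root 4 = {3,2}.
children = {3: [0, 1], 4: [3, 2]}
deriv = {3: {0: 2.0, 1: 1.0}, 4: {3: 1.0, 2: 1.0}}  # d(phi_v)/dx_i at the mean
n = {0: 10, 1: 15, 2: 12, 3: 200, 4: 500}           # sample sizes n_v
sigma2 = {0: 0.04, 1: 0.09, 2: 0.01}                # known input variances

C = {i: 0.0 for i in (0, 1, 2)}                     # C_i = 0 at the leaves
cov_pair = {i: sigma2[i] / n[i] for i in (0, 1, 2)} # Cov(X_i, X_i') at leaves

for v in sorted(children):                          # smaller numbers first
    g2 = {i: deriv[v][i] ** 2 for i in children[v]}
    # sigma_v^2 = sum g_i^2 sigma_i^2 (leading term).
    sigma2[v] = sum(g2[i] * sigma2[i] for i in children[v])
    # C_v = sum g_i^2 Cov(X_i, X_i').
    C[v] = sum(g2[i] * cov_pair[i] for i in children[v])
    # Cov over pairs drawn from H_v: sigma_v^2/n_v + (1 - 1/n_v) C_v.
    cov_pair[v] = sigma2[v] / n[v] + (1 - 1 / n[v]) * C[v]

r = n[4]
D_theta = (sigma2[4] + (r - 1) * C[4]) / r          # D Theta*, leading term
print(f"D Theta* ~ {D_theta:.6f}")
\end{verbatim}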
As was just mentioned, we will consider the variance $D\;\Theta^*$ as a function of the sample sizes $n_1,n_2,\ldots,n_k$ and denote it $D(n_1,n_2,\ldots,n_k)$. Our aim is to minimize this function with respect to the variables $n_1,n_2,\ldots,n_k$ under a linear restriction, or, in other words, to solve the following optimization problem: \begin{equation} \mbox{minimize }D(n_1,n_2,\ldots,n_k) \label{probl} \end{equation} under the restriction \begin{equation} a_1n_1+a_2n_2+\ldots+a_kn_k\le b, \label{restric} \end{equation} where $b$, $\{a_i\}$ and $\{n_i\}$ are non-negative integers. Now we intend to apply the dynamic programming method (Minoux 1989). \section{THE DYNAMIC PROGRAMMING METHOD} Let us solve the optimization problem (\ref{probl}), (\ref{restric}). Our function of interest $\phi({\bf x})$ is calculated and simulated recurrently, using the calculation tree (see Section 2). In accordance with the dynamic programming technique, we have a "forward" and a "backward" procedure. During the "backward" procedure, we recurrently calculate the so-called Bellman functions $\Phi_v(\alpha,z)$, $v=1,2,\ldots,k$, $z=1,2,\ldots,b$, $0\le\alpha\le 1$. Let us consider the subfunction $\phi_v(\cdot)$ that corresponds to the vertex $v$. This subfunction directly depends on the variables $y_i$, $i\in I_v$, which correspond to the incoming arcs of the vertex $v$. Additionally, $\phi_v$ depends on the variables $\{x_i\}$ and $\{y_i\}$ from which there exists a path to the vertex $v$ in our calculation tree. Let $B_v$ denote the corresponding set of variables $\{x_i\}$ and $\{y_i\}$. Now we need to define some auxiliary functions. Let us introduce the following notation: \begin{equation} \psi_i(\alpha)=\alpha\sigma_i^2+(1-\alpha)Cov(X_i,X'_i),\qquad i=1,2,\ldots,k,\;0\le\alpha\le 1. \label{dyn1} \end{equation} Then we are able to write, in accordance with (\ref{expr10}): \begin{equation} Cov(\phi_v(X),\phi_v(X'))=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2\psi_i\left(\frac{1}{n_v}\right). \label{dyn2} \end{equation} Now we have from (\ref{dyn1}), (\ref{expr9}) and (\ref{expr10}) $$ \psi_v(\alpha)=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 \left\{\alpha\sigma_i^2+(1-\alpha)\left[\frac{1}{n_v}\sigma_i^2+ (1-\frac{1}{n_v})Cov(X_i,X_i')\right]\right\}= $$ $$ \sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 \left[\left(\alpha+\frac{1-\alpha}{n_v}\right)\sigma_i^2+(1-\alpha)(1-\frac{1}{n_v})Cov(X_i,X'_i)\right], $$ so \begin{equation} \psi_v(\alpha)=\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 \psi_i\left(\alpha+\frac{1-\alpha}{n_v}\right). \label{dyn3} \end{equation} Note that it follows from (\ref{expr8}) and (\ref{dyn1}) that our variance of interest (\ref{expr13}) is \begin{equation} D\;\Theta^*=\psi_k(0). \end{equation} The values $\psi_v(\alpha)$ depend on the sample sizes $n_i$ for all $i\in B_v$. We will mark this fact as $\psi_v(\alpha)=\psi_v(\alpha;n_i:i\in B_v)$. Now we are able to introduce the above mentioned Bellman functions: \begin{equation} \Phi_v(\alpha,z)=\min_{n_i}\psi_v(\alpha;n_i:i\in B_v), \label{dyn5} \end{equation} where the minimization is realized with respect to the non-negative integer variables $n_i$ that satisfy the linear restriction \begin{equation} \sum_{i\in B_v}a_in_i\le z. \label{dyn6} \end{equation} It is clear that the optimal value of the variance $D^*(\cdot)$ for the problem (\ref{probl}), (\ref{restric}) is equal to $\Phi_k(0,b)$. The Bellman functions $\Phi_v(\alpha,z)$ are calculated recurrently from $v=1$ to $v=k$ for $0\le\alpha\le 1$ and $z=1,2,\ldots,b$. The basic functional equation of dynamic programming has the following form: \begin{equation} \Phi_v(\alpha,z)=\min\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 \Phi_i\left(\alpha+\frac{1-\alpha}{n_v};z_i\right), \label{dyn7} \end{equation} where the minimization is realized with respect to the non-negative integer variables $n_v$ and $\{z_i:i\in I_v\}$ that satisfy the linear restriction \begin{equation} a_vn_v+\sum_{i\in I_v}z_i\le z. \label{dyn8} \end{equation} The initial values of $\Phi_v(\cdot)$ are determined at the tree leaves by formulas (\ref{expr12}), (\ref{dyn1}) and (\ref{dyn5}): \begin{equation} \Phi_v(\alpha,z)=\alpha\sigma^2_v+(1-\alpha)\frac{1}{[z/a_v]}\sigma_v^2= \sigma_v^2\left(\alpha+(1-\alpha)\frac{1}{[z/a_v]}\right),\qquad v=1,2,\ldots,m, \label{dyn9} \end{equation} where $[z/a_v]$ is the integer part of the number $z/a_v$. Thus the "backward" procedure is a recurrent calculation of the Bellman functions $\Phi_v(\alpha,z)$ for $v=1,2,\ldots,k$, $0\le\alpha\le 1$, $z=1,2,\ldots,b$ by using formulas (\ref{dyn9}) and (\ref{dyn7}). Finally we get the minimal variance \begin{equation} D^*\;\Theta^*=\Phi_k(0,b). \label{dyn10} \end{equation} A sketch of this procedure is given below.
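A minimal brute-force sketch of this "backward" procedure on a small toy tree is given next; the derivatives, variances, unit costs $a_i$ and budget $b$ are illustrative, and the enumeration is not an efficient implementation.
\begin{verbatim}
# Minimal brute-force sketch of the "backward" procedure: Phi_k(0, b).
from functools import lru_cache
import math

children = {3: [0, 1], 4: [3, 2]}                  # toy tree, root k = 4
deriv = {3: {0: 2.0, 1: 1.0}, 4: {3: 1.0, 2: 1.0}} # d(phi_v)/dx_i at the mean
sigma2 = {0: 0.04, 1: 0.09, 2: 0.01}               # leaf variances
a = {v: 1 for v in range(5)}                       # unit cost per sample
b = 60                                             # total budget

@lru_cache(maxsize=None)
def Phi(v, alpha, z):
    # Bellman function Phi_v(alpha, z): minimal psi_v over feasible sizes.
    if v not in children:                          # leaf: closed-form value
        m = z // a[v]
        return math.inf if m < 1 else sigma2[v] * (alpha + (1 - alpha) / m)
    best = math.inf
    for n_v in range(1, z // a[v] + 1):
        alpha2 = alpha + (1 - alpha) / n_v
        budget = z - a[v] * n_v
        # Allocate the remaining budget among the children (small DP).
        dp = {budget: 0.0}                         # residual -> partial cost
        for i in children[v]:
            w = deriv[v][i] ** 2
            nxt = {}
            for rem, cost in dp.items():
                for z_i in range(1, rem + 1):
                    c = cost + w * Phi(i, alpha2, z_i)
                    if c < nxt.get(rem - z_i, math.inf):
                        nxt[rem - z_i] = c
            dp = nxt
        if dp:
            best = min(best, min(dp.values()))
    return best

print(f"Minimal variance D* Theta* = Phi_k(0, b) = {Phi(4, 0.0, b):.6f}")
\end{verbatim}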
To calculate the optimal sample sizes $n^*_1,n^*_2,\ldots,n^*_k$ we should apply the "forward" procedure of the dynamic programming technique. At first, we find $n^*_k$ and $\{z^*_i:i\in I_k\}$ by solving the equation \begin{equation} \Phi_k(0,b)=\min\sum_{i\in I_k}\left(\frac{\partial}{\partial x_i}\phi_k(\mu_k)\right)^2 \Phi_i\left(\frac{1}{n^*_k},z_i^*\right), \label{dyn11} \end{equation} where the minimization is realized under the condition \begin{equation} a_kn_k^*+\sum_{i\in I_k}z_i^*\le b. \label{dyn12} \end{equation} Let $I^{-1}(v)=\{\omega:v\in I(\omega)\}$ and $\alpha_k=1/n_k^*$. Then we recurrently determine, by analogy, the remaining $n_v^*$ and $\{z^*_i:i\in I_v\}$ for $v=k-1,k-2,\ldots,m+1$: \begin{equation} \Phi_v(\alpha_{I^{-1}(v)},z^*_v)=\min\sum_{i\in I_v}\left(\frac{\partial}{\partial x_i}\phi_v(\mu_v)\right)^2 \Phi_i\left(\alpha_{I^{-1}(v)}+\frac{1-\alpha_{I^{-1}(v)}}{n_v^*},z_i^*\right) \label{dyn13} \end{equation} under the condition \begin{equation} a_vn_v^*+\sum_{i\in I_v}z_i^*\le z_v^*. \label{dyn14} \end{equation} Moreover, we put \begin{equation} \alpha_v=\alpha_{I^{-1}(v)}+(1-\alpha_{I^{-1}(v)})/n_v^*. \label{dyn15} \end{equation} Finally, the optimal sizes $n_i^*$ for $i=1,2,\ldots,m$ are determined in the following way: \begin{equation} n_i^*=[z_i^*/a_i]. \label{dyn16} \end{equation} \section*{References} \begin{description} \item Andronov,~A., Merkuryev,~Yu.~(1996) Optimization of Statistical Sizes in Simulation. In: {\sl Proceedings of the 2-nd St. Petersburg Workshop on Simulation}, St. Petersburg State University, St. Petersburg, 220-225. \item Andronov,~A., Merkuryev,~Yu.~(1998) Controlled Bootstrap Method and its Application in Simulation of Hierarchical Structures. In: {\sl Proceedings of the 3-rd St. Petersburg Workshop on Simulation}, St. Petersburg State University, St. Petersburg, 271-277. \item Davison,~A.C., Hinkley,~D.V.~(1997) {\sl Bootstrap Methods and their Application}. Cambridge University Press, Cambridge. \item Efron,~B., Tibshirani,~R.J.~(1993) {\sl An Introduction to the Bootstrap}. Chapman \& Hall, London. \item Minoux,~M.~(1989) {\sl Programmation Math\'ematique: Th\'eorie et Algorithmes}. Dunod. \end{description} \end{document}
\section{Introduction} \IEEEPARstart{T}{he} promise of network slicing is to enable a high level of customization of network services in future networks (5G and beyond), leveraged by virtualization and software defined networking techniques. These key enablers transform telecommunications networks into programmable platforms capable of offering virtual networks enriched by Virtual Network Functions (VNFs) and IT resources tailored to the specific needs of certain customers (e.g., companies) or vertical markets (automotive, e-health, etc.)\cite{3GPP,etsi}. From an optimization theory perspective, the Network Slice Placement problem can be viewed as a specific case of the Virtual Network Embedding (VNE) or VNF Forwarding Graph Embedding (VNF-FGE) problems \cite{survey_vnf_ra_2016,survey_vfnp}. It is then generally possible to formulate Integer Linear Programming (ILP) problems \cite{netsoft_2020}, which however turn out to be $\mathcal{NP}$-hard \cite{vne_np_hardness} with very long convergence times. With regard to network management, there are specific characteristics related to network slicing: slices are expected to share resources and coexist in a large and distributed infrastructure. Moreover, slices have a wide range of requirements in terms of resources, quality objectives and lifetime. In practice, these characteristics bring additional complexity, as the placement algorithms need to be highly scalable with low response time even under varying network conditions. As an alternative to optimization techniques and the development of heuristic methods, Deep Reinforcement Learning (DRL) has recently been used in the context of VNE and Network Slice Placement \cite{p1,p2, p5, p3, p4, p8}. DRL techniques are considered very promising since they allow, at least theoretically, the determination of optimal decision policies based only on experience \cite{sutton2018reinforcement}. However, from a practical point of view, especially in the context of non-stationary environments, ensuring that a DRL agent converges to an optimal policy is still challenging. As a matter of fact, when the environment is continually changing, the algorithm has trouble using the acquired knowledge to find optimal solutions. Using the DRL algorithm online can then become impractical. In fact, most existing works based on DRL to solve the Network Slice Placement or VNE problem assume a stationary environment, i.e., one with constant network load. However, traffic conditions in real networks are basically non-stationary, with daily and weekly variations, and are subject to drastic changes (e.g., a traffic storm due to an unpredictable event). To cope with traffic changes, this paper proposes a hybrid DRL-heuristic strategy called Heuristically Assisted DRL (HA-DRL)\cite{HA_DRL_TNSM}. In \cite{cnsm_2021} we applied this strategy in an online learning scenario with periodic network load variations to show how it can be used to accelerate and stabilize the convergence of DRL techniques in this type of non-stationary environment. As a follow-up of these two studies, we focus in the present paper on a different non-stationary scenario with stair-stepped network load changes. The goal of the paper is to evaluate and show the robustness of the proposed strategy in the case of sudden, stair-stepped traffic changes.
The contributions of the present paper are threefold: \begin{enumerate} \item We propose a network load model to describe network slice demand and adapt it to unpredictable network load changes; \item We propose a framework combining Advantage Actor Critic and a Graph Convolutional Network (GCN) for conceiving DRL-based algorithms adapted to the non-stationary case; \item We show how the use of a heuristic function can control the DRL learning and improve its robustness to unpredictable network load changes. \end{enumerate} The organization of this paper is as follows: In Section~\ref{sec:sota}, we review the related work. In Section~\ref{sec:network_model}, we describe the modeling of the Network Slice Placement problem. The learning framework for slice placement optimization is described in Section~\ref{sec:drl_proposal}. The adaptation of the pure DRL approaches and their control by using a heuristic is introduced in Section~\ref{sec:aidedDRL}. The experiments and evaluation results are presented in Section~\ref{sec:evaluation}, while conclusions and perspectives are presented in Section~\ref{sec:conclusion}. \section{Related Work Analysis \label{sec:sota}} We provide in Section \ref{sec:drl_based} a summarized review of the existing DRL-based approaches for network slice placement. The interested reader may refer to \cite{HA_DRL_TNSM,cnsm_2021} for a more detailed and comprehensive discussion. In Section \ref{sec:robustness} we discuss recent works on robust slice placement algorithms. \subsection{On DRL-based Approaches for Slice Placement \label{sec:drl_based}} DRL has recently been applied to solving network slice placement and VNE problems. We divide these works into two categories on the basis of their algorithmic aspects: 1) pure DRL approaches \cite{p1,p2, p5, p3, p4, p8}, in which only the knowledge acquired by the learning agent via training is used as a basis for taking placement decisions; and 2) hybrid DRL-heuristic approaches \cite{HA_DRL_TNSM,quang2019deep,rkhami2021learn}, in which the computation of placement decisions is assisted by a heuristic method. The use of heuristics aims at increasing the reliability of DRL algorithms. However, most of these works are based on the assumption that the network load is static, i.e., that slice arrivals occur at a constant rate. To the best of our knowledge, the work we proposed in \cite{cnsm_2021} is the first attempt to evaluate an online DRL-based approach in a non-stationary network load scenario, whereas \cite{new_1} only considers offline learning. In addition, in both \cite{cnsm_2021} and \cite{new_1} it is assumed that the network load has periodic fluctuations. In the present paper we study the behavior of the algorithms proposed in \cite{cnsm_2021} in the case of an unpredictable network load disruption. \subsection{On Robustness of Slice Placement Approaches \label{sec:robustness}} The term robustness has different meanings depending on the field of application. In Robust Optimization (RO), robustness is related to the decision/solution itself. It is the capability of the algorithm's solution to cope with the worst case without losing feasibility \cite{bertsimas2006robust}. In Machine Learning (ML), especially in Deep Learning (DL), robustness is related to the learned model. It is the property of the model (i.e., the Deep Neural Network (DNN)) that determines its integrity under varying operating conditions \cite{shafique2020robust}. The authors of \cite{al2021robustness} are the first to discuss robustness in the DRL context.
They propose to use a Genetic Algorithm to improve the robustness of a self-driving car application. Robustness is considered as the capacity to sustain a high accuracy on image classification even when the perceived images change, and it is measured by Neuron Coverage (NC), i.e., the ratio of activated neurons in the DNN. There are only a few recent works on the robustness of slice placement procedures, most of them on RO \cite{marotta2017fast,marotta2017energy,reddy2016robust,baumgartner2017}. These works answer a question different from the one we are investigating, as they evaluate the robustness of the decision, whereas we want to evaluate the robustness of the learning process. Despite their originality, the above approaches present some drawbacks, such as the lack of scalability of ILP, the sub-optimality of heuristic solutions, the fact that they consider offline optimization in which all slices to be placed are known in advance, and the fact that they are single objective optimization approaches, mainly focusing on energy consumption minimization. In this work, we propose to rely on a DRL-based approach in order to overcome the ILP and heuristic drawbacks and to consider multiple optimization objectives. To the best of our knowledge, paper~\cite{robust_1} is the only one to have proposed a DRL-based approach for slice placement and evaluated the learning robustness. However, the authors focus on evaluating the robustness of the DRL approach against random topology changes (e.g., node failures or the deployment of new nodes in the network topology). In this work, we focus on evaluating robustness against unpredictable network load variations. To the best of our knowledge, the present work is the first to perform such an evaluation. \section{Network Slice Placement Optimization Problem \label{sec:network_model}} We present in this section the various elements composing the model for slice placement. Slices are placed on a substrate network, referred to as the Physical Substrate Network (PSN), described in Section \ref{sec::psn_model}. Slices give rise to Network Slice Placement Requests (Section \ref{sec:nspr_model}), generating a network load defined in Section \ref{sec:network_load_modeling}. The optimization problem is formulated in Section \ref{sec:nsp_problem_statement}. \subsection{Physical Substrate Network Modeling \label{sec::psn_model}} The PSN is composed of the infrastructure resources, namely the IT resources (CPU, RAM, disk, etc.) needed for supporting the Virtual Network Functions (VNFs) of network slices, together with the transport network, in particular the Virtual Links (VLs) interconnecting the VNFs of slices. As depicted in Fig.~\ref{fig:sn_model}, the PSN is divided into three components: the Virtualized Infrastructure (VI) corresponding to IT resources, the Access Network (AN), and the Transport Network (TN). The Virtualized Infrastructure (VI) hosting IT resources is the set of Data Centers (DCs) interconnected by network elements (switches and routers). We assume that data centers are distributed in Points of Presence (PoP) or centralized (e.g., in a big cloud platform). As in \cite{slim2018close}, we define three types of DCs with different capacities: Edge Data Centers (EDCs), close to end users but with small resource capacities; Core Data Centers (CDCs), regional DCs with medium resource capacities; and Central Cloud Platforms (CCPs), national DCs with big resource capacities.
We consider that slices are rooted so as to take into account the location of the users of a slice. We thus introduce an Access Network (AN) representing User Access Points (UAPs), such as Wi-Fi APs, antennas of cellular networks, etc., and Access Links. Users access slices via one UAP, which may change during the lifetime of a communication by a user (e.g., because of mobility). The Transport Network (TN) is the set of routers and transmission links needed to interconnect the different DCs and the UAPs. The complete PSN is modeled as a weighted undirected graph $G_s = (N, L)$ with the parameters described in Table \ref{tab::physical_substrate_network}, where $N$ is the set of physical nodes in the PSN, and $L \subset \{(a, b) \in N \times N : a\neq b\}$ refers to the set of substrate links. Each node has a type in the set $\{$UAP, router, switch, server$\}$. The available CPU and RAM capacities on each node are defined as $cap^{cpu}_n \in \mathbb{R}$ and $cap^{ram}_n \in \mathbb{R}$ for all $n \in N$, respectively. The available bandwidth on the links is defined as $cap^{bw}_{(a,b)} \in \mathbb{R}, \forall (a,b) \in L$. \begin{table}[hbtp] \caption{PSN parameters \label{tab::physical_substrate_network}} \begin{tabular}{@{}cc@{}} \toprule \textit{\textbf{Parameter}} & \textit{\textbf{Description}} \\ \midrule $G_s = (N,L)$ & PSN graph \\ $N$ & Network nodes \\ $S \subset N$ & Set of servers \\ $DC$ & Set of data centers \\ $S_{dc} \subset S$, $\forall dc \in DC$ & Set of servers in data center $dc$ \\ $SW_{dc}, \ \forall dc \in DC$ & Switch of data center $dc$ \\ $L = \{(a,b) \in N \times N \wedge a \neq b\}$ & Set of physical links \\ $cap^{bw}_{(a,b)} \in \mathbb{R}, \forall (a,b) \in L$ & Bandwidth capacity of link $(a,b)$ \\ $cap^{cpu}_s \in \mathbb{R}, \forall s \in S$ & Available CPU capacity on server $s$ \\ $M^{cpu}_s \in \mathbb{R}, \forall s \in S$ & Maximum CPU capacity of server $s$ \\ $cap^{ram}_s \in \mathbb{R}, \forall s \in S$ & Available RAM capacity on server $s$ \\ $M^{ram}_s \in \mathbb{R}, \forall s \in S$ & Maximum RAM capacity of server $s$ \\ $M^{bw}_s \in \mathbb{R}, \forall s \in S$ & Maximum outgoing bandwidth from $s$ \\ \bottomrule \end{tabular} \end{table} \begin{figure}[hbtp] \centering \includegraphics[width=\linewidth]{images/png/sn_model.png} \caption{Physical Substrate Network example.} \label{fig:sn_model} \end{figure} \subsection{Network Slice Placement Requests Modeling \label{sec:nspr_model}} We consider that a slice is a chain of VNFs to be placed and connected over the PSN. The VNFs of a slice are grouped into a request, namely a Network Slice Placement Request (NSPR), which has to be placed on the PSN. An NSPR is represented as a weighted undirected graph $G_v = (V, E)$, with the parameters described in Table~\ref{tab::nspr_parameters}, where $V$ is the set of VNFs in the NSPR, and $ E \subset \{(\bar{a}, \bar{b}) \in V \times V \wedge \bar{a} \neq \bar{b}\}$ is the set of VLs interconnecting the VNFs of the slice. The CPU and RAM requirements of each VNF of an NSPR are defined as $req^{cpu}_{v} \in \mathbb{R}$ and $req^{ram}_{v} \in \mathbb{R}$ for all $v \in V$, respectively. The bandwidth required by each VL in an NSPR is given by $req_{(\bar{a},\bar{b})}^{bw} \in \mathbb{R}$ for all $(\bar{a},\bar{b}) \in E$. We consider the existence of different NSPR classes characterizing different levels of resource requirements, lifespans and arrival rates, as described in Section \ref{sec:network_load_modeling}.
\begin{table}[hbtp] \centering \caption{NSPR parameters \label{tab::nspr_parameters}} \begin{tabular}{@{}cc@{}} \toprule \textit{\textbf{Parameter}} & \textit{\textbf{Description}} \\ \midrule $G_v = (V,E)$ & NSPR graph \\ $V$ & Set of VNFs of the NSPR \\ $E=\{(\bar{a},\bar{b}) \in N \times N \wedge \bar{a} \neq \bar{b}\}$ & Set of VLs of the NSPR \\ $req^{cpu}_{v} \in \mathbb{R}$ & CPU requirement of VNF $v$ \\ $req^{ram}_{v} \in \mathbb{R}$ & RAM requirement of VNF $v$ \\ $req_{(\bar{a},\bar{b})}^{bw} \in \mathbb{R}$ & Bandwidth requirement of VL $ (\bar{a},\bar{b})$\\ \bottomrule \end{tabular} \end{table} \subsection{Network Load Modeling \label{sec:network_load_modeling}} The network load model allows us to control the percentage of the total network resource capacity being used at a specific instant. Let $J$ be the set of resources in the network (i.e., CPU, RAM, bandwidth). Let $\mathcal{K} \subset \mathbb{N}$ be the set of NSPR classes. We compute the load generated by arrivals of NSPRs of class $k \in \mathcal{K}$ for resource $j \in J$ as in \cite{farah2}: \begin{equation} \rho^{k}_{j} = \frac{1}{C_j}\frac{\lambda^{k}}{\mu^{k}}A^{k}_{j}, \label{eq:static} \end{equation} where $C_j$ is the total capacity of resource $j$, $A^k_j$ is the number of resource units requested by an NSPR of class $k$, $\lambda^{k}$ is the average arrival rate of NSPRs of class $k$, and $1/\mu^{k}$ is the average lifetime of an NSPR of class $k$. We define the global load $\rho_{j}$ for resource $j$ as the sum \begin{equation} \rho_{j} = \sum_{k \in \mathcal{K}} \rho^{k}_{j}. \label{eq:global} \end{equation} If $ 0 \leq \rho_j \leq 1$, the system is not overloaded for resource $j$; otherwise, the system is under overload conditions and the rejection rate of NSPRs may be high. \subsection{Network Slice Placement Optimization Problem Statement \label{sec:nsp_problem_statement}} The Network Slice Placement optimization problem is stated as follows: \begin{itemize} \item \textit{Given:} an NSPR graph $G_v = (V, E)$ and a PSN graph $G_s = (N, L)$, \item \textit{Find:} a mapping $G_v \to \bar{G}_s =(\bar{N},\bar{L})$, $\bar{N} \subset N$, $\bar{L} \subset L$, \item\textit{Subject to:} the VNF CPU requirements $req^{cpu}_v, \forall v \in V$, the VNF RAM requirements $req^{ram}_v, \forall v \in V$, the VL bandwidth requirements $req^{bw}_{(\bar{a},\bar{b})}, \forall (\bar{a},\bar{b}) \in E$, the server CPU available capacity $cap^{cpu}_s, \forall s \in S$, the server RAM available capacity $cap^{ram}_s, \forall s \in S$, the physical link bandwidth available capacity $cap^{bw}_{(a,b)}, \forall (a,b) \in L$. \item \textit{Objective: } maximize the network slice placement request acceptance ratio, minimize the total resource consumption and maximize load balancing. \end{itemize} A complete mathematical formulation of this problem can be found in \cite{HA_DRL_TNSM}.
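As an illustration of the load model of Section \ref{sec:network_load_modeling}, the following minimal sketch computes the per-class and global CPU loads of Eqs.~(\ref{eq:static}) and (\ref{eq:global}); the class parameters and the total capacity shown here are hypothetical.
\begin{verbatim}
# Minimal sketch of the network load model (hypothetical parameters).
C_cpu = 6300.0                       # total CPU capacity of the PSN (illustrative)

classes = {
    # class: (arrival rate lambda, mean lifetime 1/mu, CPU units A per NSPR)
    "volatile":  (0.50, 20.0,  5 * 25),
    "long_term": (0.02, 500.0, 10 * 25),
}

rho = {k: lam * life * A / C_cpu for k, (lam, life, A) in classes.items()}
rho_global = sum(rho.values())       # global load for the CPU resource
print(rho, f"global CPU load = {rho_global:.2f}")
\end{verbatim}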
\section{Learning framework for Network Slice Placement Optimization \label{sec:drl_proposal}} We describe in this section the DRL-based approach used to solve the optimization problem formulated in Section~\ref{sec:network_model}. As mentioned, we adopt the same approach as in \cite{HA_DRL_TNSM}, but we focus here on evaluating the performance when an unpredictable network load change occurs. \subsection{Learning framework} \label{sec:drl_policy} Fig.~\ref{fig::drl_framework_for_nsp} presents an overview of the DRL framework. The state contains the features of the PSN and of the NSPR to be placed. A valid action is, for a given NSPR graph $G_{v} = (V,E)$, a subgraph $\bar{G_s} \subset G_s = (N, L)$ of the PSN graph on which to place the NSPR that does not violate the problem constraints described in Section \ref{sec:nsp_problem_statement}. The reward evaluates how good the computed action is with respect to the optimization objectives described in Section \ref{sec:nsp_problem_statement}. DNNs are trained to calculate i) optimal actions for each state (i.e., placements with maximal rewards) and ii) the State-value function used in the learning process. In the following sections we describe each of the elements of this framework. \begin{figure}[hbtp] \centering \includegraphics[width=\linewidth]{images/png/drl_framework.png} \caption{DRL framework for Network Slice Placement Optimization} \label{fig::drl_framework_for_nsp} \end{figure} \subsubsection{Policy} We reuse the framework introduced in \cite{HA_DRL_TNSM}. We denote by $\mathcal{A}$ the set of possible actions (namely, placing VNFs on nodes) and by $\mathcal{S}$ the set of all states. We adopt a sequential placement strategy so that we choose a node $n \in N$ on which to place a specific VNF $v \in \{1,...,|V|\}$. The VNFs are sequentially placed, so that the placement starts with the VNF $v=1$ and ends with the VNF $v = |V|$. At each time step $t$, given a state $\sigma_t$, the learning agent selects an action $a$ with the probability given by the Softmax distribution \begin{equation} \pi_{\theta}(a_{t} = a|\sigma_t) = \frac{e^{Z_{\theta}(\sigma_t,a)}}{\sum_{b \in N}e^{Z_{\theta}(\sigma_t,b)}}, \label{eq::policy} \end{equation} where the function $Z_{\theta}: \sset \times \aset \rightarrow \mathbb{R}$ yields a real value for each state and action, calculated by a Deep Neural Network (DNN) as detailed in Section~\ref{sec::drl_learning}. The notation $\pi_{\theta}$ is used to indicate that the policy depends on $Z_{\theta}$. The control parameter $\theta$ represents the weights of the DNN. \subsubsection{State representation} As in \cite{HA_DRL_TNSM}, the \textbf{PSN state} is characterized by the available server resources: $cap^{cpu} = \{cap^{cpu}_{n}: n \in N\}$, $cap^{ram} = \{cap^{ram}_{n}: n \in N\}$ and $cap^{bw} = \{cap^{bw}_{n} = \sum_{(n,b) \in L}cap^{bw}_{(n,b)}: n \in N\}$. In addition, we keep track of the placement of the pending NSPR (i.e., the one being placed) via the vector $\chi = \{\chi_{n} \in \{0,..,|V|\} : n \in N \}$, where $\chi_{n}$ is the number of VNFs of the current NSPR placed on node $n$. The \textbf{NSPR state} is a view of the current placement and is composed of four characteristics, three of which are related to the resource requirements (see Table \ref{tab::nspr_parameters} for the notation) of the current VNF $v$ to be placed: $req^{cpu}_{v}$, $req^{ram}_{v}$ and $req^{bw}_{v} = \sum_{(v,\bar{b}) \in E}req^{bw}_{(v,\bar{b})}$, the fourth being $m_{v} = |V| - v + 1$, the number of VNFs of the outstanding NSPR still to be placed.
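For illustration, a minimal sketch of the Softmax policy of Eq.~(\ref{eq::policy}), computed from hypothetical $Z_{\theta}$ values, is:
\begin{verbatim}
# Minimal sketch of the Softmax policy over placement actions.
import numpy as np

rng = np.random.default_rng(0)

def policy(z_values):
    # Numerically stable Softmax of the Z_theta values over candidate nodes.
    z = z_values - z_values.max()
    p = np.exp(z)
    return p / p.sum()

z_theta = rng.normal(size=8)        # hypothetical Z_theta(sigma_t, .) for 8 nodes
pi = policy(z_theta)
action = rng.choice(len(pi), p=pi)  # sample the node where the current VNF goes
print(pi.round(3), "-> chosen node:", action)
\end{verbatim}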
\subsubsection{Reward function} We reuse the reward function introduced in \cite{HA_DRL_TNSM}. We precisely consider \begin{equation} \small r_{t+1} = \left\{\begin{array}{lr} 0, & \text{if $t < T$ and $a_{t}$ is successful}\\ \sum^{T}_{i=0} \delta^{a}_{i+1}\delta^{b}_{i+1}\delta^{c}_{i+1}, & \text{if $t = T$ and $a_{t}$ is successful}\\ \delta^{a}_{t+1}, & \text{otherwise} \end{array}\right. \label{eq::reward_function} \end{equation} where $T$ is the number of iterations of a training episode and where the rewards $\delta^{a}_{i+1}$, $\delta^{b}_{i+1}$, and $\delta^{c}_{i+1}$ are defined as follows: \begin{itemize} \item An action $a_t$ may lead to a successful or unsuccessful placement. We then define the Acceptance Reward value due to action $a_t$ as \begin{equation} \delta^{a}_{t+1} = \left\{\begin{array}{lr} 100, & \text{if $a_{t}$ is successful, }\\ -100, & \text{otherwise. } \end{array}\right. \label{eq::acceptance_signal} \end{equation} \item The Resource Consumption Reward value for the placement of VNF $v$ via action $a_t$ is defined by \begin{equation} \delta^{c}_{t+1}= \left\{\begin{array}{lr} \frac{req^{bw}_{(v-1,v)}}{req^{bw}_{(v-1,v)}|P|} = \frac{1}{|P|}, & \text{if $|P|>0$, }\\ 1, & \text{otherwise, } \end{array}\right. \label{eq::resource_consumption_signal} \end{equation} where $P$ is the path used to place the VL $(v-1,v)$. Note that the maximum $\delta^{c}_{t+1} = 1$ is given when $|P|=0$, that is, when VNFs $v-1$ and $v$ are placed on the same server. \item The Load Balancing Reward value for the placement of VNF $v$ via $a_t$ is \begin{equation} \delta^{b}_{t+1} = \frac{cap^{cpu}_{a_t}}{M^{cpu}_{a_{t}}} + \frac{cap^{ram}_{a_t}}{M^{ram}_{a_{t}}}. \label{eq::load_balancing_signal} \end{equation} \end{itemize}
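A minimal sketch of these reward signals for a single placement step (with illustrative capacity values, not the exact simulator code) is:
\begin{verbatim}
# Minimal sketch of the per-step reward signals (illustrative values).
def acceptance_reward(successful):
    # delta^a: +100 on a successful placement action, -100 otherwise.
    return 100.0 if successful else -100.0

def resource_consumption_reward(path_length):
    # delta^c: 1/|P| when the VL uses a path of |P| > 0 links,
    # and 1 when both VNFs share the same server (|P| = 0).
    return 1.0 / path_length if path_length > 0 else 1.0

def load_balancing_reward(cap_cpu, max_cpu, cap_ram, max_ram):
    # delta^b: favors servers keeping large residual CPU/RAM fractions.
    return cap_cpu / max_cpu + cap_ram / max_ram

step = (acceptance_reward(True),
        resource_consumption_reward(path_length=2),
        load_balancing_reward(cap_cpu=25, max_cpu=50,
                              cap_ram=150, max_ram=300))
print(step)  # (100.0, 0.5, 1.0)
\end{verbatim}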
\subsection{Adaptation of DRL and Introduction of a Heuristic Function \label{sec:aidedDRL}} \subsubsection{Proposed Deep Reinforcement Learning Algorithm \label{sec::drl_learning}} As in \cite{HA_DRL_TNSM}, we use a single-thread version of the A3C algorithm introduced in \cite{a3c}. This algorithm relies on two DNNs that are trained in parallel: i) the Actor Network with the parameter $\theta$, which is used to generate the policy $\pi_{\theta}$ at each time step, and ii) the Critic Network with the parameter $\theta_{v}$, which generates an estimate $\nu^{\pi_{\theta}}_{\theta_{v}}(\sigma_t)$ of the State-value function defined by $$\nu_{\pi}(t|\sigma)=\mathbb{E}_{\pi}\left[\sum^{T-t-1}_{k=0}\gamma^{k} r_{t+k+1} | \sigma_t = \sigma \right],$$ for some discount parameter $\gamma$. As depicted in Fig.~\ref{fig::ha_advantage_actor_critic_architecture}, both the Actor and Critic Networks have an almost identical structure. As in \cite{p1}, we use the GCN formulation proposed by \cite{kipf_gcn} to automatically extract advanced characteristics of the PSN. The characteristics produced by the GCN represent semantics of the PSN topology by encoding and accumulating characteristics of neighbouring nodes in the PSN graph. The size of the neighbourhood is defined by the order-index parameter $K$. \begin{figure}[hbtp] \centering \includegraphics[width=\linewidth]{images/png/ha_actor_critic_architecture.png} \caption{Reference framework for the proposed learning algorithms.} \label{fig::ha_advantage_actor_critic_architecture} \end{figure} As in \cite{p1}, we consider in the following $K=3$ and perform automatic extraction of 60 characteristics per PSN node. The NSPR state characteristics are separately transmitted to a fully connected layer with 4 units. The characteristics extracted by both layers and the GCN layer are combined into a single column vector of size $60|N| + 4$ and passed through a fully connected layer with $|N|$ units. In the Critic Network, the outputs are forwarded to a single neuron, which is used to calculate the state-value function estimation $\nu^{\pi_{\theta}}_{\theta_{v}}(\sigma_t)$. In the Actor Network, the outputs represent the values of the function $Z_{\theta}$ introduced in Section \ref{sec:drl_policy}. These values are injected into a Softmax layer that transforms them into a Softmax distribution corresponding to the policy $\pi_{\theta}$. During the training phase, at each time step $t$, the A3C algorithm uses the Actor Network to calculate the policy $\pi_{\theta}(.|\sigma_t)$. An action $a_t$ is sampled using the policy and performed on the environment. The Critic Network is used to calculate the state-value function approximation $\nu^{\pi_{\theta}}_{\theta_{v}}(\sigma_t)$. The learning agent then receives the reward $r_{t+1}$ and the next state $\sigma_{t+1}$ from the environment, and the placement process continues until a terminal state is reached, that is, until the Actor Network returns an unsuccessful action or until the current NSPR is fully placed. At the end of the training episode, the A3C algorithm updates the parameters $\theta$ and $\theta_{v}$ by using the same rules as in \cite{HA_DRL_TNSM}. \subsubsection{Introduction of a Heuristic Function} \label{heuristicfunc} To guide the learning process, we use, as in \cite{HA_DRL_TNSM}, the placement heuristic introduced in \cite{cnsm_2020}. This yields the HA-DRL algorithm. More precisely, starting from the reference framework shown in Fig.~\ref{fig::ha_advantage_actor_critic_architecture}, we propose to include in the Actor Network a Heuristic layer that calculates a Heuristic Function $H: \sset \times \aset \rightarrow \mathbb{R}$ based on external information provided by the heuristic method, referred to as HEU. Let $Z_{\theta}$ be the function computed by the fully connected layer of the Actor Network that maps each state and action to a real value, which is afterwards converted by the Softmax layer into the selection probability of the respective action (see Section \ref{sec:drl_policy}). Let $\bar{a}_{t} = \text{argmax}_{a \in \aset}\,Z_{\theta}(\sigma_t,a)$ be the action with the highest $Z_{\theta}$ value for state $\sigma_{t}$. Let $a^{*}_{t}=HEU(\sigma_t)$ be the action derived by the HEU method at time step $t$, which is the preferred action to be chosen. $H(\sigma_t,a^{*}_t)$ is shaped to allow the value of $Z_{\theta}(\sigma_t,a^{*}_t)$ to become closer to the value of $Z_{\theta}(\sigma_t,\bar{a}_t)$. The aim is to turn $a^{*}_t$ into one of the likeliest actions to be chosen by the policy. The Heuristic Function is then formulated as \begin{equation} H(\sigma_t,a_t) = \left\{\begin{array}{lr} Z_{\theta}(\sigma_t,\bar{a}_{t}) - Z_{\theta}(\sigma_t,a_t) + \eta, & \text{if $a_{t}=a^{*}_{t}$,}\\ 0, & \text{otherwise,} \end{array}\right. \label{eq::heuristic_function} \end{equation} where the parameter $\eta$ is a small real number. During the training process the Heuristic layer calculates $H(\sigma_t,.)$ and updates the $Z_{\theta}(\sigma_t,.)$ values by using the following equation: \begin{equation} Z_{\theta}(\sigma_t,.) = Z_{\theta}(\sigma_t,.) + \xi H(\sigma_t,.)^{\beta}. \label{eq:z_update} \end{equation} The Softmax layer then computes the policy using the modified $Z_{\theta}$. Note that the action $a^{*}_{t}$ returned by HEU will have a higher probability of being chosen. The $\xi$ and $\beta$ parameters are used to control how much HEU influences the policy.
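A minimal numerical sketch of Eqs.~(\ref{eq::heuristic_function}) and (\ref{eq:z_update}), with hypothetical $Z_{\theta}$ values and parameter settings, is:
\begin{verbatim}
# Minimal sketch of the heuristic-assisted logit update (hypothetical values).
import numpy as np

z = np.array([0.2, 1.5, -0.3, 0.9])   # Z_theta(sigma_t, .) over 4 candidate nodes
a_star = 3                            # action suggested by the HEU heuristic
eta, xi, beta = 0.1, 1.0, 2.0         # shaping parameters (eta small)

h = np.zeros_like(z)
h[a_star] = z.max() - z[a_star] + eta # H pushes Z(a*) towards the current best Z
z_updated = z + xi * h ** beta        # Z <- Z + xi * H^beta

p = np.exp(z_updated - z_updated.max())
p /= p.sum()                          # the probability of a* is boosted
print(z_updated.round(3), p.round(3))
\end{verbatim}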
With regard to the DNNs, we have implemented the Actor and Critic as two independent Neural Networks. Each neuron has a bias assigned. We have used the hyperbolic tangent (tanh) activation for the non-output layers of the Actor Network and the Rectified Linear Unit (ReLU) activation for all layers of the Critic Network. We have normalized positive global rewards to be in $[0,10]$. During the training phase, we have considered the policy as a Categorical distribution and used it to sample the actions randomly.
\section{Implementation and Evaluation Results \label{sec:evaluation}}
\subsection{Implementation Details \& Simulator Settings}
\subsubsection{Experimental setting} We developed a simulator in Python containing: i) the elements of the Network Slice Placement Optimization problem (i.e., PSN and NSPR); ii) the DRL and HA-DRL algorithms. We used the PyTorch framework to implement the DNNs. Experiments were run on a machine with 2$\times$6 cores @2.95~GHz and 96~GB of RAM.
\subsubsection{Physical Substrate Network Settings} \label{sec::substrate_network_settings} We consider a PSN that could reflect the infrastructure of an operator, as discussed in \cite{farah2}. In this network, three types of DCs are introduced as in Section~\ref{sec:network_model}. Each CDC is connected to three EDCs, which are 100 km away. CDCs are interconnected and connected to one CCP that is 300 km away. We consider 15 EDCs, each with 4 servers; 5 CDCs, each with 10 servers; and 1 CCP with 16 servers. The CPU and RAM capacities of each server are 50 and 300 units, respectively. A bandwidth capacity of 100 Gbps is given to intra-data-center links inside CDCs and the CCP, 10 Gbps being the bandwidth for intra-data-center links inside EDCs. Transport links connected to EDCs have 10 Gbps of bandwidth capacity. Transport links between CDCs have 100 Gbps of bandwidth capacity, as do the ones between CDCs and the CCP.
\subsubsection{Network Slice Placement Requests Settings \label{sec::network_slice_placement_requests_settings}} We consider NSPRs to have the Enhanced Mobile Broadband (eMBB) setting described in \cite{cnsm_2020}. Each NSPR is composed of 5 or 10 VNFs (see Section \ref{sec:network_loads}). Each VNF requires 25 units of CPU and 150 units of RAM. Each VL requires 2 Gbps of bandwidth.
\subsection{Algorithms \& Experimental Setup }\label{sec:algorithms_tested}
\subsubsection{Training Process \& Hyper-parameters} We consider a training process with a maximum duration of 6 hours for the considered algorithms. We perform seven independent runs of each algorithm to assess their average performance in terms of the metrics introduced below (see Section \ref{sec:ev_metrics}). After performing a hyper-parameter search, we set the learning rates for the Actor and Critic networks of the DRL and HA-DRL algorithms to $\alpha = 5 \times 10^{-5}$ and $\alpha' = 1.25 \times 10^{-3}$, respectively. We program four versions of HA-DRL agents, each with a different value of the $\beta$ parameter of the heuristic function formulation (see Section \ref{heuristicfunc}). We set in addition the parameters $\xi = 1$ and $\eta = 0$.
\subsubsection{Network load calculation}\label{sec:network_loads} Network loads are calculated using the CPU resource, but the analysis could easily be applied to RAM; we use the network load model introduced in Section \ref{sec:network_loads}. We consider two NSPR classes: i) a Volatile class and ii) a Long-term class.
The differences between the two classes are related to their resource requirements and their lifespans: Volatile requests have 5 VNFs and a lifespan of 20 simulation time units, whereas Long-term requests have 10 VNFs and a lifespan of 500 simulation time units.
\subsubsection{Network load change scenarios}\label{sec:network_load_disruption} We consider that the network runs in a standard regime under a network load equal to 40\% (i.e., $\rho=0.4$) and that the NSPRs of each class generate half of the total load. In each experiment, the learning agent is trained during approximately 4 hours for this network load regime. Then a stair-stepped network load change occurs. We simulated eight different network load change levels. Each network load change level is characterized by the addition of a certain amount of extra network load, ranging from 10\% to 80\% (the highest levels causing system overload).
\subsection{Evaluation Metrics \label{sec:ev_metrics}} To characterize the performance of the placement algorithms, we consider one performance metric called Acceptance Ratio per Training phase (TAR). This metric represents the Acceptance Ratio obtained in each training phase, i.e., each part of the training process corresponding to $500$ NSPR arrivals or $500$ episodes. It is calculated as follows: $\frac{\mathrm{\# accepted \; NSPRs}}{500}$. This metric allows us to better observe the evolution of algorithm performance over time, since it measures algorithm performance in independent parts (phases) of the training process without accumulating the performance of previous training phases. Based on this metric, we identify four other important metrics used in our results discussion: \begin{enumerate} \item \textbf{Rupture TAR:} the TAR obtained in the training phase where the network load change occurs, i.e., the rupture phase; \item \textbf{Last TAR:} the TAR obtained in the training phase that is prior to the rupture phase; \item \textbf{Average TAR:} the average of the TARs obtained in the 30 phases preceding the rupture phase; \item \textbf{TAR standard deviation:} the standard deviation of the TARs obtained in the 30 phases preceding the rupture phase. \end{enumerate}
\subsection{ Evaluation of the impact of network load change} Fig.~\ref{fig:disruption_levels_1}, \ref{fig:disruption_levels_2} and \ref{fig:disruption_levels_3} capture the impact of different network load change levels on the TARs obtained by the different evaluated algorithms. The rupture phase is identified by a blue vertical line in the various figures. We can observe in Fig.~\ref{fig:disruption_levels_1}, \ref{fig:disruption_levels_2} and \ref{fig:disruption_levels_3} that with the reduced training time of 6 hours the only algorithm that has near-optimal performance after 108 training phases is HA-DRL with $\beta=2.0$. This is due to the fact that the strong influence of the Heuristic Function helps the algorithm to become stable more quickly, as discussed in \cite{HA_DRL_TNSM} and \cite{cnsm_2021}.
\begin{figure}[hbtp] \centering \begin{subfloat}[Addition of 10\% of network load.] {\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=0.5_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_10}} \end{subfloat} \begin{subfloat}[Addition of 20\% of network load.]
{\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=0.6_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_20}} \end{subfloat} \begin{subfloat}[Addition of 30\% of network load.] {\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=0.7_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_30}} \end{subfloat} \caption{Evaluation of the impact of network load disruption on TAR: under-loaded scenarios \label{fig:disruption_levels_1}} \end{figure}
\begin{figure}[hbtp] \centering \begin{subfloat}[Addition of 40\% of network load.] {\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=0.8_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_40}} \end{subfloat} \begin{subfloat}[Addition of 50\% of network load.] {\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=0.9_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_50}} \end{subfloat} \begin{subfloat}[Addition of 60\% of network load.] {\includegraphics[width=.85\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=1.0_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_60}} \end{subfloat} \caption{Evaluation of the impact of network load disruption on TAR: critical scenarios\label{fig:disruption_levels_2}} \end{figure}
\begin{figure}[hbtp] \centering \begin{subfloat}[Addition of 70\% of network load.] {\includegraphics[width=.87\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=1.1_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_70}} \end{subfloat} \begin{subfloat}[Addition of 80\% of network load.] {\includegraphics[width=.87\linewidth]{images/eps/acceptance_ratio_comparison_ar_per_training_phase_avg_nwload=1.2_nb_req_per_tr_phase=500.pdf}\label{fig:disruption_80}} \end{subfloat} \caption{Evaluation of the impact of network load disruption on TAR: overloaded scenarios \label{fig:disruption_levels_3}} \end{figure}
We can also observe from the shape of the different curves in Fig.~\ref{fig:disruption_levels_1}, \ref{fig:disruption_levels_2}, and \ref{fig:disruption_levels_3} that, as expected, all the algorithms have some variability in their performance during the training phases. In addition, these figures show that the performance of all the algorithms is affected at various levels by the network load change and that, generally speaking, the higher the amount of extra network load added, the lower the TAR after the change. Finally, we can also see that the only algorithm to keep a near-optimal performance even in the overloaded scenarios shown in Fig.~\ref{fig:disruption_levels_3} is HA-DRL with $\beta=2.0$.
Tables \ref{tab:drl_results}, \ref{tab:ha-drl_0.1_results}, \ref{tab:ha-drl_0.5_results}, \ref{tab:ha-drl_1.0_results}, and \ref{tab:ha-drl_2.0_results} present other performance metrics related to the various evaluated algorithms. The columns "Rupture TAR - Avg. TAR" and "Rupture TAR - Last TAR" indicate how much the performance of the algorithms drops in the rupture phase when compared with the Average TAR and the Last TAR, respectively. The TAR Standard Deviation column reports the TAR Standard Deviation metric described in Section \ref{sec:ev_metrics}.
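For clarity, the four TAR-based metrics can be computed from the sequence of per-phase acceptance counts as in the following sketch (the data layout is an assumption):
\begin{verbatim}
import numpy as np

def tar_metrics(accepted_per_phase, rupture_phase, window=30):
    # accepted_per_phase: accepted NSPRs in each 500-arrival phase;
    # rupture_phase: index of the phase where the load change occurs.
    tar = np.asarray(accepted_per_phase) / 500.0
    history = tar[rupture_phase - window:rupture_phase]
    return {
        "rupture_tar": tar[rupture_phase],
        "last_tar": tar[rupture_phase - 1],
        "avg_tar": history.mean(),
        "tar_std": history.std(),
    }
\end{verbatim}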
\begin{table}[hbtp] \centering \caption{DRL algorithm results} \label{tab:drl_results} \begin{tabularx}{\linewidth}{@{}cLLL@{}} \toprule \begin{tabular}[c]{@{}c@{}}Network Load\\ Disruption Level (\%)\end{tabular} & Rupture TAR - Avg. TAR (\%) & Rupture TAR - Last TAR (\%) & TAR Standard Deviation (\%) \\ \midrule
+10 & -3.37 & -1.89 & 3.10 \\ +20 & -8.19 & -7.37 & 3.09 \\ +30 & -11.89 & -6.83 & 4.17 \\ +40 & -17.68 & -13.80 & 4.12 \\ +50 & -17.00 & -9.11 & 4.32 \\ +60 & -18.50 & -10.20 & 4.35 \\ +70 & -20.46 & -14.26 & 3.30 \\ +80 & -21.65 & -15.86 & 3.27 \\ \bottomrule \end{tabularx} \end{table}
\begin{table}[hbtp] \centering \caption{HA-DRL, $\beta=0.1$ algorithm results} \label{tab:ha-drl_0.1_results} \begin{tabularx}{\linewidth}{@{}cLLL@{}} \toprule \begin{tabular}[c]{@{}c@{}}Network Load\\ Disruption Level (\%)\end{tabular} & Rupture TAR - Avg. TAR (\%) & Rupture TAR - Last TAR (\%) & TAR Standard Deviation (\%) \\ \midrule
+10 & -4.13 & -2.60 & 4.07 \\ +20 & -11.02 & -8.91 & 3.51 \\ +30 & -16.00 & -10.54 & 4.50 \\ +40 & -16.28 & -9.83 & 4.13 \\ +50 & -18.66 & -11.14 & 5.05 \\ +60 & -17.02 & -9.80 & 3.99 \\ +70 & -25.13 & -18.20 & 4.93 \\ +80 & -29.41 & -21.31 & 4.85 \\ \bottomrule \end{tabularx} \end{table}
\begin{table}[ht] \centering \caption{HA-DRL, $\beta=0.5$ algorithm results} \label{tab:ha-drl_0.5_results} \begin{tabularx}{\linewidth}{@{}cLLL@{}} \toprule \begin{tabular}[c]{@{}c@{}}Network Load\\ Disruption Level (\%)\end{tabular} & Rupture TAR - Avg. TAR (\%) & Rupture TAR - Last TAR (\%) & TAR Standard Deviation (\%) \\ \midrule
+10 & -4.55 & -3.43 & 3.95 \\ +20 & -8.80 & -9.37 & 4.21 \\ +30 & -12.78 & -10.66 & 4.59 \\ +40 & -20.33 & -15.94 & 4.61 \\ +50 & -21.24 & -13.43 & 4.56 \\ +60 & -19.46 & -10.46 & 5.08 \\ +70 & -24.28 & -16.26 & 3.75 \\ +80 & -26.78 & -20.71 & 3.88 \\ \bottomrule \end{tabularx} \end{table}
\begin{table}[t] \centering \caption{HA-DRL, $\beta=1.0$ algorithm results} \label{tab:ha-drl_1.0_results} \begin{tabularx}{\linewidth}{@{}cLLL@{}} \toprule \begin{tabular}[c]{@{}c@{}}Network Load\\ Disruption Level (\%)\end{tabular} & Rupture TAR - Avg. TAR (\%) & Rupture TAR - Last TAR (\%) & TAR Standard Deviation (\%) \\ \midrule
+10 & -2.96 & -1.11 & 2.37 \\ +20 & -4.94 & -6.49 & 3.50 \\ +30 & -6.93 & -4.71 & 2.37 \\ +40 & -7.67 & -7.00 & 1.97 \\ +50 & -6.80 & -4.77 & 1.72 \\ +60 & -8.95 & -5.29 & 2.45 \\ +70 & -11.25 & -8.00 & 1.73 \\ +80 & -13.62 & -11.69 & 2.89 \\ \bottomrule \end{tabularx} \end{table}
\begin{table}[ht] \centering \caption{HA-DRL, $\beta=2.0$ algorithm results} \label{tab:ha-drl_2.0_results} \begin{tabularx}{\linewidth}{@{}cLLL@{}} \toprule \begin{tabular}[c]{@{}c@{}}Network Load\\ Disruption Level (\%)\end{tabular} & Rupture TAR - Avg. TAR (\%) & Rupture TAR - Last TAR (\%) & TAR Standard Deviation (\%) \\ \midrule
+10 & -2.04 & 0.09 & 2.37 \\ +20 & -7.01 & -5.09 & 3.50 \\ +30 & -7.15 & -2.31 & 2.37 \\ +40 & -7.90 & -4.69 & 1.97 \\ +50 & -12.13 & -5.86 & 1.72 \\ +60 & -10.24 & -4.94 & 2.45 \\ +70 & -18.83 & -12.34 & 1.73 \\ +80 & -17.79 & -11.69 & 2.89 \\ \bottomrule \end{tabularx} \end{table}
Those tables confirm that, in general, the performance gaps, i.e., the gaps between the Rupture TAR and the Average or Last TAR, grow with the level of disruption for all algorithms. For instance, at the disruption level "+10", the performance gaps are never higher than 5\%. But at the change level "+80" the performance gaps are never lower than 11\%.
In all the evaluated cases, the difference between the Rupture TAR and the Average TAR is higher than the TAR standard deviation. For instance, for the DRL algorithm at the network load disruption level of +50\%, the Rupture TAR is 17\% lower than the Average TAR, which is 3.94 times the TAR standard deviation. The algorithm with the lowest performance gaps is HA-DRL with $\beta=1.0$, as we can see in columns "Rupture TAR - Avg. TAR" and "Rupture TAR - Last TAR" of Table~\ref{tab:ha-drl_1.0_results}. We can state that this algorithm has significantly better robustness than all the others, as its performance gaps are significantly lower. However, HA-DRL with $\beta=1.0$ has the worst TAR performance, as shown in Fig.~\ref{fig:disruption_levels_1}, \ref{fig:disruption_levels_2} and \ref{fig:disruption_levels_3}, which reduces its applicability. HA-DRL with $\beta=2.0$ has the second-best robustness and DRL the third, as we can see in the "Rupture TAR - Avg. TAR" and "Rupture TAR - Last TAR" columns of Tables~\ref{tab:ha-drl_2.0_results} and \ref{tab:drl_results}, respectively.
Even if the usage of the Heuristic Function has helped HA-DRL with $\beta \in \{0.1, 0.5\}$ to achieve significantly better TARs than DRL, the influence of the Heuristic Function in these algorithms was not sufficient to improve the robustness of the DRL algorithm against unpredictable network load disruptions (see Tables~\ref{tab:ha-drl_0.1_results} and \ref{tab:ha-drl_0.5_results}, respectively). We can observe, however, that HA-DRL with $\beta=2.0$ has better robustness against unpredictable network load changes than DRL, as the performance gaps obtained with HA-DRL with $\beta=2.0$ are significantly lower than the ones obtained with DRL, as can be observed in columns "Rupture TAR - Avg. TAR" and "Rupture TAR - Last TAR" of Tables~\ref{tab:ha-drl_2.0_results} and \ref{tab:drl_results}, respectively. These results confirm that HA-DRL with $\beta=2.0$ is the algorithm among those evaluated that is the most adapted to be used in practice. Indeed, this algorithm presents not only the best TAR results and quick convergence but also a robust performance.
\section{Conclusion \label{sec:conclusion}} We have introduced two DRL-based algorithms and evaluated their performance in a non-stationary network load scenario with unpredictable changes. In line with the conclusions of \cite{HA_DRL_TNSM,cnsm_2021}, the numerical experiments performed in this paper show that coupling DRL with heuristic functions yields good and stable results even under non-stationary load conditions. Therefore, we believe that such an approach is relevant in real networks, which are subject to unpredictable network load changes.
As part of our future work, we plan to explore distribution and parallel computing techniques to solve the considered multi-objective optimization problem, using multi-agent or federated learning approaches to address slice placement in heterogeneous networks, in particular when the network is decomposed into several segments or technical domains where the network abstraction introduced in this paper is no longer valid. Indeed, each segment should have its own abstractions and data. It is then necessary to share information between the segments to take a global decision. Instead of exchanging complete network states, segments would exchange minimal information obtained via heuristics.
\section*{Acknowledgment} This work has been performed in the framework of the 5GPPP MON-B5G project (www.monb5g.eu).
The experiments were conducted using Grid'5000, a large scale testbed by Inria and Sorbonne University (www.grid5000.fr). \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} In \cite{huttner2018portfolio}, we can read: \begin{quote} ``To the best of our knowledge, there is no algorithm available for the generation of reasonably random [financial] correlation matrices with the Perron-Frobenius property. [...] Concerning the generation of [financial] correlation matrices whose MSTs [Minimum Spanning Trees] exhibit the scale-free property, to the best of our knowledge there is no algorithm available, and due to the generating mechanism of the MST we expect the task of finding such correlation matrices to be highly complex." \end{quote} In this paper, we propose a novel approach to solve the problem of generating realistic financial correlation matrices. Using Generative Adversarial Networks (GANs) to sample realistic financial correlation matrices has never been documented, to the best of our knowledge, despite the importance of the problem. Simulating financial data, and correlation matrices in particular, has many applications: testing the robustness of trading strategies, stress testing portfolios. Another major application could be the objective comparison of empirical methods (combination of signals and strategies, statistical filtering methods \cite{tumminello2007shrinkage}), which would otherwise be claimed superior based on a given arbitrarily chosen sample. This endemic problem in empirical finance prevents the field from becoming a science in the Popperian terminology: one cannot easily contradict such results \cite{lopez2019tactical}.
Generating multivariate financial time series is more general and difficult than focusing on their correlations: besides the dependence structure (relatively static in comparison), one has to correctly capture the univariate time series features (e.g., autocorrelation) and the distributional properties of the margins altogether. In this work, we only focus on generating empirical correlation matrices, which may already be an approximation of the dependence structure between several financial assets (cf. copula theory \cite{nelsen2007introduction}). Despite the importance of the problem, we can explain the lack of research (and results) by the fact that GANs, a recent class of generative modelling approaches (seminal paper in 2014 \cite{goodfellow2014generative}) which stemmed from the computer science community, are not yet part of the toolbox of econometricians and risk and quant analysts. This work can also be relevant for the signal processing community, since the robust estimation of large covariance matrices $\Sigma$ (from which a correlation matrix $C = {\rm diag}(\Sigma)^{-\frac{1}{2}} \Sigma~ {\rm diag}(\Sigma)^{-\frac{1}{2}}$ follows) is a common problem \cite{balaji2014information,aubry2017geometric}.
\subsection*{Contributions} \label{sec:contrib} The contributions of this article are: \begin{itemize}[noitemsep] \item sampling financial correlation matrices using GANs, and documenting results for the first time, \item showing that the samples generated look realistic, and verify the stylized facts known in the econophysics literature, \item using S\&P 500 stock returns, which are widely available, for reproducibility of the experiments. \end{itemize}{} We also showcase our results through a web application: CorrGAN.io (\url{www.corrgan.io}). Users are asked to determine whether a given correlation matrix was generated from the GAN model or estimated from real stock returns. We have obtained a balanced number of correct and wrong answers so far, which is consistent with random guessing.
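For reference, the covariance-to-correlation map above can be written in a few lines (a NumPy sketch):
\begin{verbatim}
import numpy as np

def cov_to_corr(sigma):
    # C = diag(Sigma)^(-1/2) Sigma diag(Sigma)^(-1/2)
    d = 1.0 / np.sqrt(np.diag(sigma))
    return sigma * np.outer(d, d)
\end{verbatim}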
\section{Related work} \label{sec:relwork} To the best of our knowledge, there is no previous attempt at generating realistic financial correlation matrices using GANs. No known model is able to capture, even approximately, all the known characteristics of financial correlation matrices \cite{huttner2018portfolio}. We briefly review in the following subsection typical applications of GANs, and we highlight the lack of published results concerning financial data. Then, we describe the stylized facts of financial correlation matrices, which will be useful to evaluate the samples generated by the different GAN-based approaches tested.
\subsection{Generative Adversarial Networks} \label{sec:gans} Generative Adversarial Networks (GANs) were introduced in \cite{goodfellow2014generative}. Two networks $G$ (the generative model) and $D$ (a discriminative model) are trained simultaneously: $G$ is trained to maximize the probability of $D$ making a mistake; $D$ is trained to estimate the probability that a sample comes from the training data rather than from $G$. These models are notoriously complex to train and evaluate. Their greatest success so far has been to generate realistic pictures. There are few successful applications published outside natural images, e.g., \cite{wu2016learning,binkowski2019high}; results produced by GANs are not competitive in natural language generation, for example. Related to finance, the authors are aware of \cite{henry2019generative}, which aims at simulating SABR (a stochastic volatility model) parameters, and \cite{koshiyama2019generative}, which generates univariate time series of asset returns using a conditional GAN. The latter work totally discards the dependence structure, e.g., correlations, existing between the time series of many assets. It may not matter when focusing on time series strategies (actively trading a single asset through time), but it is useless when considering cross-sectional strategies, or large portfolio and risk management. In other words, it does not model the multivariate joint distribution of the co-movements of many assets.
Unlike natural images, which lend themselves well to visual inspection, samples obtained from GANs are in general hard to evaluate. Researchers are for now limited to checking a few statistics, for example the degree distribution of the graph nodes in \cite{bojchevski2018netgan} or the partial auto-correlation function of the time series in \cite{koshiyama2019generative}. However, one needs to know which statistics are important to verify. Fortunately, financial correlation matrices have been extensively researched over the past two decades.
\subsection{Financial correlation matrices} \label{sec:fincorr} Financial correlation matrices have been extensively studied in econophysics, an empirical field applying statistical physics methods to economy and finance. Around 1999, Bouchaud \textit{et al.} \cite{laloux2000random} showed how Random Matrix Theory (RMT) can be used to better understand financial correlations, and they started a two-decade-long research program developing and refining methods using tools from RMT to clean large empirical correlation matrices \cite{bun2017cleaning}. About the same time, Mantegna, another econophysicist, discovered the hierarchical structure of financial correlations \cite{mantegna1999hierarchical}; this seminal and influential work sparked rich empirical research in financial networks and clustering. An extensive review of this literature can be found in \cite{marti2017review}.
This body of knowledge about financial correlation matrices can be summarized in a few stylized facts: \begin{itemize}[noitemsep] \item The distribution of pairwise correlations is significantly shifted to the positive, \item Eigenvalues follow the Marchenko-Pastur distribution \cite{laloux2000random}, but for \begin{itemize}[noitemsep] \item a very large first eigenvalue (the market), \item a couple of other large eigenvalues (industries), \end{itemize} \item The Perron-Frobenius property (the first eigenvector has positive entries), \item The hierarchical structure of correlations \cite{mantegna1999hierarchical}, \item The scale-free property of the corresponding Minimum Spanning Tree (MST) \cite{caldarelli2004emergence}. \end{itemize} It is possible that some stylized facts are still to be discovered. Exploring the latent space of GANs \cite{chen2016infogan} could help find unknown properties of financial correlations; however, generative adversarial networks, alongside deep learning in general, are not yet part of the toolkit in empirical finance. This paper is meant to show that they are a relevant tool, and that the problem of sampling financial correlation matrices using GANs deserves further exploration.
\section{The space of correlation matrices} \label{sec:elliptope} Let $C \in \mathbf{R}^{n \times n}$ be a correlation matrix, that is, $C = C^\top$; $\forall i \in \{1, \ldots, n\}$, $C_{ii} = 1$; $\forall x \in \mathbf{R}^{n}$, $x^\top C x \geq 0$. Let the elliptope of dimension $n(n-1)/2$ be the set of the $n(n-1)/2$ upper-triangular coefficients of valid $n \times n$ correlation matrices. More formally, \begin{equation*} \begin{aligned} & \mathcal{E}_{\frac{n(n-1)}{2}} = \{ \\ & \left(C_{12}, \ldots, C_{1n}, C_{23}, \ldots, C_{2n}, \ldots, C_{(n-1)n}\right) \in \mathbf{R}^{\frac{n(n-1)}{2}} ~|~ \\ & C = C^\top, \forall i \in \{1, \ldots, n\}, C_{ii} = 1, \forall x \in \mathbf{R}^{n}, x^\top C x \geq 0 \} \end{aligned} \end{equation*} A $n \times n$ correlation matrix can be viewed as a point in $\mathcal{E}_{\frac{n(n-1)}{2}}$.
\subsection{$3 \times 3$ case} \label{sec:3x3} To build intuition, let us first consider the $3 \times 3$ case. We can visually verify that a simple GAN is able to recover the whole space of empirical correlations. In Figure~\ref{fig:res}, 10,000 blue points are sampled uniformly (in the Lebesgue measure sense) from $\mathcal{E}_3$ using the onion method \cite{ghosh2003behavior}, where $\mathcal{E}_3$ is $$\mathcal{E}_3 = \left\{ \left(\rho_{12}, \rho_{13}, \rho_{23} \right) \in \mathbf{R}^3 ~\middle|~ \begin{pmatrix} 1 & \rho_{12} & \rho_{13} \\ \rho_{12} & 1 & \rho_{23} \\ \rho_{13} & \rho_{23} & 1 \end{pmatrix} \succeq 0 \right\}.$$ In orange, we display 10,000 $3 \times 3$ matrices obtained by randomly selecting (without replacement) 3 stocks among the 500 possible in the S\&P 500, and then estimating the correlations between their daily returns over one year (252 business days). We can notice that the orange set (empirical correlations) is a strict subset of the blue set (whole space of valid correlation matrices), concentrated around average to high positive values. A simple GAN is able to recover this distribution (green points): It generates only valid correlation matrices, with a support closely matching the empirical one, and a higher concentration around average to high positive values.
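In the $3 \times 3$ case, the uniform-sampling baseline is also easy to reproduce without the onion method, using simple rejection sampling over $[-1,1]^3$ (a NumPy sketch; the onion method of \cite{ghosh2003behavior} is more efficient in higher dimension):
\begin{verbatim}
import numpy as np

def sample_elliptope_3x3(n_samples, seed=0):
    # Draw (rho12, rho13, rho23) uniformly in [-1, 1]^3 and keep the
    # triplets whose correlation matrix is positive semi-definite;
    # accepted points are uniform on the elliptope E_3.
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n_samples:
        r12, r13, r23 = rng.uniform(-1.0, 1.0, size=3)
        C = np.array([[1.0, r12, r13],
                      [r12, 1.0, r23],
                      [r13, r23, 1.0]])
        if np.linalg.eigvalsh(C).min() >= 0.0:
            out.append((r12, r13, r23))
    return np.array(out)
\end{verbatim}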
\begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{CorrGan3D.png}} \centerline{The $3$D elliptope}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.0cm]{CorrGan3DEmpirical.png}} \centerline{Empirical matrices (orange)}\medskip \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{CorrGan3DEmpiricalGenerated.png}} \centerline{Sampling $3 \times 3$ correlations from a GAN (green points)}\medskip \end{minipage} \caption{We can visually inspect the results: A simple GAN is able to sample realistic $3 \times 3$ financial correlation matrices} \label{fig:res} \end{figure}
\subsection{$n \times n$ case} \label{sec:nxn} In financial applications, $n$ typically ranges from a few dozen to a couple of hundred, and a few thousand in the most extreme cases. The large $n$ case is more difficult for many reasons, from statistical to computational. For our concerns, it is (i) harder to assess the quality and coverage of the samples generated, and (ii) harder to train GANs, as standard neural networks are data-inefficient on correlation matrices because of their matrix equivalence property: when estimating a correlation matrix on a set of $n$ stock returns, the order of these stocks is arbitrary. There are $n!$ such possible orders, and therefore $n!$ different correlation matrices. But they all essentially describe the same correlation structure. We would like the output of a neural network (here, the GAN discriminator (or critic) decision: \textit{fake} or \textit{real}) to be invariant to permutations. To solve this problem, we need to enforce permutation invariance either in the network (some early attempts in the literature \cite{zaheer2017deep}) or in the representation of the correlation matrix. The latter is the approach chosen in this work, namely we choose a representative for the equivalence class. We propose to consider $R_{ij} = C_{\pi_H(i) \pi_H(j)}$, where $\pi_H$ is a permutation induced by a hierarchical clustering algorithm (inspired by one of the stylized facts, namely the hierarchical structure of financial correlations \cite{mantegna1999hierarchical}). We show in Figure~\ref{fig:repr} the result of applying $\pi_H$ to a given correlation matrix.
\begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{unsorted.png}} \centerline{Arbitrary $C$}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{pi_H_sorted.png}} \centerline{$R_{ij} = C_{\pi_H(i)\pi_H(j)}$}\medskip \end{minipage} \caption{Two equivalent correlation matrices. The one on the left has been obtained by estimation on returns of arbitrarily ordered stocks; the one on the right by applying $\pi_H$.} \label{fig:repr} \end{figure}
\section{Results and Evaluation} \label{sec:eval} We apply a deep convolutional generative adversarial network (DCGAN) \cite{radford2015unsupervised}, whose architecture is known to be able to learn a hierarchy of representations from object parts to scenes in natural images, on approximately 10,000 empirical correlation matrices estimated on S\&P 500 returns and sorted using $\pi_H$. Note that the matrices generated by the GAN models are not exactly correlation matrices: their diagonal is not exactly equal to 1 (coefficients obtained are around 0.998); the matrices look visually symmetric but are not; small negative eigenvalues can be found.
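For reproducibility, the permutation $\pi_H$ used to sort the training matrices can be obtained with standard tools; a minimal sketch follows (assuming SciPy; the average-linkage and $\sqrt{2(1-\rho)}$ distance choices are illustrative, not necessarily the exact variant used):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def sort_by_hierarchy(C):
    # Reorder a correlation matrix by the leaf order pi_H of a
    # hierarchical clustering, i.e., pick a representative of its
    # permutation-equivalence class.
    d = np.sqrt(np.clip(2.0 * (1.0 - C), 0.0, None))
    np.fill_diagonal(d, 0.0)
    perm = leaves_list(linkage(squareform(d, checks=False),
                               method='average'))
    return C[np.ix_(perm, perm)], perm
\end{verbatim}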
We post-process the results using an alternating projections method described in \cite{higham2002computing} to find the nearest correlation matrix with respect to the Frobenius norm. Results obtained are evaluated using the stylized facts described in Section~\ref{sec:fincorr}: Do we recover the main characteristics of financial correlation matrices? Essentially, yes. Tails of the distributions are not perfectly simulated though. Comparisons between empirical and synthetic samples are displayed in Figures~\ref{fig:coeffs},~\ref{fig:eigs},~\ref{fig:mats},~\ref{fig:powerlaw}.
Another experiment we did to assess the results was to ask people to determine whether a given matrix is \textit{real}, i.e., estimated from stock returns, or \textit{fake}, i.e., generated from the GAN model. Concretely, we built a web application: CorrGAN.io (\url{www.corrgan.io}), which displays samples from both classes with equal probability, and where users can input their guess. We have obtained a balanced number of correct and wrong answers so far: Samples generated by the GAN also seem realistic to the human eye.
\begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{correl_distrib_v2.png}} \centerline{Distribution of correlations}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{correl_log_distrib_v2.png}} \centerline{Log distribution}\medskip \end{minipage} \caption{The distributions of empirical and DCGAN-generated correlation coefficients match closely: They have approximately the same mean (0.36) and standard deviation (0.13). We can notice in the log-plot some discrepancies in the tails.} \label{fig:coeffs} \end{figure}
\begin{figure}[htb] \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{eigenvalues_v4.png}} \centerline{Distribution of eigenvalues}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{eigenvector_v4.png}} \centerline{First eigenvector entries}\medskip \end{minipage} \caption{(Left) We can notice that the synthetic eigenvalues distribution shares similar characteristics, i.e., a very large first eigenvalue, and a few ones outside the bulk of the distribution; (Right) All entries of the dominant eigenvector are positive.} \label{fig:eigs} \end{figure}
\begin{figure}[htb] \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{corr1.png}} \end{minipage} \hfill \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{corr2.png}} \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{corr3.png}} \end{minipage} \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{gen1_v4.png}} \end{minipage} \hfill \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{gen2_v4.png}} \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{gen3_v4.png}} \end{minipage} \caption{Top row: Three randomly selected empirical correlation matrices; Bottom row: Three DCGAN-generated correlation matrices.
We can notice the existence of hierarchical clusters in both sets of matrices.} \label{fig:mats} \end{figure}
\begin{figure}[htb] \includegraphics[width=\linewidth]{powerlaw_v3.png} \caption{Log-log plot of the distribution of node degrees in the MST. The DCGAN captures well the distribution of degrees (seemingly following a power law) except for the tail: A few nodes have very high degrees. Typically, General Electric is known to occupy a central position in the S\&P 500 MST.} \label{fig:powerlaw} \end{figure}
\section{Discussion} \label{sec:discussion} We have proposed a novel approach using GANs to generate realistic financial correlation matrices. The approach can be perfected, notably by spending more time and resources on experimental settings, but the results showcased in this work are convincing. With this new tool, we can, for example, revisit the results described in \cite{huttner2018portfolio}, quoted in our introduction, which compares portfolios based on graphs to Markowitz-optimal portfolios. It would be interesting to explore the use of Topological Data Analysis to compare the empirical data manifold to the synthetic data manifold, as proposed in \cite{khrulkov2018geometry}. In this paper, we verified that the generated samples are realistic, but do they span the whole subspace of realistic financial correlation matrices? We might only sample from a restricted part of the space, for example due to a mode collapse during the GAN training.
This work could be an important component in improving Monte Carlo backtesting \cite{lopez2019tactical}: Many paths can be sampled from a multivariate distribution parameterized by GAN-generated correlation matrices. Exploring conditional generation, for example conditioning on a market regime variable (risk-on or risk-off; quantitative easing or quantitative tightening; global crisis or not), could lead to new ways of stress testing portfolios. Finally, investigating the latent space of these models could lead to a better understanding of financial correlations, and maybe the discovery of unknown stylized facts. \vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction} The electronic properties of graphene can be tailored in a suitable way through the deposition of foreign atoms, not only allowing the development of electronic devices but also providing platforms for new physical phenomena~\cite{CastroNeto20091094, PhysRevB.82.033414, PhysRevB.81.115427}. In particular, the adsorption of transition metals (TMs) on the graphene sheet (metal/graphene) has been considered a promising route to modify the electronic properties of graphene, for instance, the control of spin-polarized currents in graphene by the deposition of TMs (Mn, Fe, Co and Ni)~\cite{Lima,PhysRevB.84.235110}. Recently, it was shown that it is possible to increase the Spin-Orbit Coupling (SOC) of graphene by depositing heavy atoms, such as indium and thallium~\cite{Weeks2011}. In this case, the absence of a net (local) magnetic moment preserves the time-reversal (TR) symmetry, giving rise to a Quantum Spin-Hall (QSH) state~\cite{Kane2004} on the metal/graphene sheet with a nontrivial bulk band gap about three orders of magnitude greater than the predicted gap in pristine graphene. On the other hand, the majority of TMs with partially filled $d$ orbitals, adsorbed on the graphene sheet, may promote a local net magnetic moment, and thus will suppress the QSH state. However, based on {\it ab initio} calculations, Hu {\it et al.} verified that such a magnetic moment can be quenched either by applying an external electric field or by a codoping process, thus recovering the QSH state~\cite{Hu2012}. They have considered ($n\times n$) periodic as well as random distributions of TM adatoms on the graphene sheet, and in both cases the QSH state was preserved. On the other hand, the appearance of a nontrivial energy bandgap in metal/graphene systems, due to the SOC and a nonzero magnetic moment (breaking the TR symmetry), gives rise to the so-called Quantum Anomalous Hall (QAH) effect~\cite{Qiao2010,Ding2011,Qiao2012,chang2013experimental}. Furthermore, in a random distribution of TM adatoms on the graphene sheet the SOC is not affected, and the intervalley $K$ and $K'$ scattering is somewhat suppressed\cite{PhysRevLett.109.116803}. That is, the nontrivial topological phase of metal/graphene was preserved. Moreover, the tuning of the QAH effect by the application of an external electric field has been proposed~\cite{Zhang2012a} for metal/graphene systems adsorbed with 5$d$ TMs. Those findings allow us to infer that the electronic properties as well as the topological phases of metal/graphene systems can be controlled by means of geometrical and chemical manipulation, as well as by the application of external fields.
In a recent experiment, Gomes {\it et al.}~\cite{Gomes2012} showed that it is possible to build up artificial graphene structures, so-called ``molecular graphene'', on solid surfaces by the manipulation of the surface potential. They have considered CO molecules forming a triangular lattice over the Cu(111) surface, CO/Cu(111). Such surface engineering allows a number of degrees of freedom for the creation of artificial lattices that exhibit a set of desirable electronic properties. For instance, the electronic properties of such ``molecular graphene'' can be tuned by changing the lateral distance between the CO molecules, by choosing other molecules instead of CO, or another surface instead of Cu(111).
Indeed, recent works discuss the possibility of tuning the electronic properties of molecular graphene, as well as the realization of the QSH state on CO/Cu(111) surfaces~\cite{Polini,PhysRevB.86.201406}. Thus, it is experimentally possible to manipulate atoms and molecules in order to form an ordered array on top of a substrate. In the same sense, the usage of graphene as a substrate for adatoms or foreign molecules may also be interesting. In this case, the adatoms or molecules will be embedded in a two-dimensional electron gas formed by the $\pi$ orbitals of graphene.
In this work we performed an investigation of the interplay between the electronic properties and the geometry of Ru arrays adsorbed on the graphene sheet (Ru/graphene). We show that by changing the concentration of Ru adatoms it is possible to cover multiple topological phases\cite{oh2013complete}. Thus, with the same transition metal atom and the same triangular lattice structure, one has in the lattice parameter (or TM separation) of this superstructure a dial that allows one to tune the topological properties of the material. The Ru adatoms, independently of their lattice geometry, strongly interact with graphene and locally modify the charge density at their neighboring carbon atoms. Due to the indirect Ru$\leftrightarrow$Ru interaction via graphene, the electronic structure of the Ru/graphene system becomes ruled by the Ru lattice geometry. We have considered Ru adatoms forming triangular lattices with ($n \times n$) periodicities, with respect to the graphene unitary cell, and according to the electronic structure we found three typical families of periodicities, shown in Fig.~\ref{Fig1}. For the ($3n\times 3n$) periodicity, the Dirac cones are suppressed due to the intervalley ($K$ and $K'$) scattering process, leading to a trivial bandgap, whereas for both the $((3n+1)\times (3n+1))$ and $((3n+2)\times(3n+2))$ periodicities there is a multiplicity of Dirac cones and the appearance of a QAH phase. For these two last families, to better understand how the QAH phase emerges, we sequentially include the effects of (i) an electrostatic potential, (ii) an exchange field, and (iii) the spin-orbit coupling. Considering only the electrostatic potential, two spin-degenerate Dirac cones occur (at $K$ and $K^\prime$) due to the presence of two overlapping hexagonal lattices, one composed of the C atoms of the graphene sheet, and the other formed by the surface potential induced on the graphene sheet by the triangular lattice of Ru adatoms, lying on the barycenters of the Ru triangles. Upon inclusion of the exchange field, due to the net magnetic moment of the Ru adatoms, there is a spin-splitting of all Dirac cones, leading to a remarkable crossing between bands with opposite spins very close to the Fermi level. Finally, by turning on the SOC, the Rashba spin-orbit interaction couples the opposite-spin states, leading to a nontrivial bandgap opening around the Fermi level. Thus, the sum of the electrostatic potential, the exchange field, and the SOC leads to nontrivial topological phases in Ru/graphene systems. We also show that the topological phase of the Ru/graphene systems changes for higher concentrations of Ru adatoms: Ru/graphene with the $(2\times2)$ periodicity presents a QSH phase, while for the $(4\times4)$ periodicity the system presents a metallic phase. The topological classification of the studied systems was made by the {\it ab initio} calculation of the Chern number.
\section{Methodology} All results presented in this work were obtained with first-principles calculations performed within the Density Functional Theory (DFT) framework\cite{capelle2006bird}, as implemented in the SIESTA code\cite{soler2002siesta}. The Local Density Approximation (LDA)\cite{perdew1981self} is used for the exchange-correlation functional. We used an energy cutoff of 410 Ry to define the grid in real space, and the supercell approximation with a k-point sampling for the reciprocal space integration equivalent to $20\times20\times1$ in the unitary cell. The 2D graphene sheets lie in the xy plane, and a vacuum of 20~\AA\ was used in the z-direction to avoid the undesirable interaction between the periodic images of the graphene sheets. All the configurations of the Ru/graphene systems were fully relaxed until the residual forces on the atoms were smaller than 0.01 eV/\AA.
In order to investigate the non-trivial topological phases in the Ru/graphene systems, we implemented the Spin-Orbit Coupling in the SIESTA code within the on-site approximation\cite{fernandez2006site}. Within this approach, the Kohn-Sham Hamiltonian $\boldsymbol{H}$ is a sum of the kinetic energy $\boldsymbol{T}$, the Hartree potential $\boldsymbol{V}^{H}$, the exchange and correlation potential $\boldsymbol{V}^{xc}$, the scalar relativistic ionic pseudopotential $\boldsymbol{V}^{sc}$, and the spin-orbit interaction $\boldsymbol{V}^{SOC}$. $\boldsymbol{H}$ can be written as a $2\times2$ matrix in spin space as: \begin{equation} \boldsymbol{H}=\boldsymbol{T}+\boldsymbol{V}^{H}+\boldsymbol{V}^{xc}+\boldsymbol{V}^{sc}+\boldsymbol{V}^{SOC} =\left[\begin{array}{cc}\boldsymbol{H}^{\uparrow\uparrow} & \boldsymbol{H}^{\uparrow\downarrow}\\ \boldsymbol{H}^{\downarrow\uparrow} & \boldsymbol{H}^{\downarrow\downarrow}\end{array}\right]. \end{equation} All terms contribute to the diagonal elements; however, only the $\boldsymbol{V}^{xc}$ and $\boldsymbol{V}^{SOC}$ potentials have off-diagonal coupling terms due to the non-collinear spin. The spin-orbit matrix elements, as implemented in this work, are written as: \begin{equation}\label{eqls1} V_{ij}^{SOC}=\frac{1}{2}V^{SOC}_{l_{i},n_{i},n_{j}} \langle l_{i},M_{i}|\boldsymbol{L}\cdot \boldsymbol{S}|l_{i},M_{j}\rangle\delta_{l_{i}l_{j}}, \end{equation} where $|l_{i},M_{j}\rangle$ are the real spherical harmonics\cite{Blanco199719}. The radial contributions $V^{SOC}_{l_{i},n_{i},n_{j}}= \langle R_{n_{i},l_{i}}|V_{l_{i}}^{SOC}|R_{n_{j},l_{i}}\rangle$ are calculated with the solution of the Dirac equation for each atom. The angular contribution $\boldsymbol{L}\cdot \boldsymbol{S}$, considering the spin operator in terms of the Pauli matrices, can be written as: \begin{equation}\label{eqls2} \boldsymbol{L}\cdot \boldsymbol{S} =\left[\begin{array}{cc} {L}_{z} & {L}_{-}\\ {L}_{+} & -{L}_{z}\end{array}\right]. \end{equation} The diagonal matrix elements of the SOC term, $V^{SOC,\sigma\sigma}_{ij}$ (with $\sigma=\uparrow$ or $\downarrow$), are proportional to $\langle l_{i},M_{i}|L_{z}|l_{i},M_{j}\rangle$, which are different from zero only for $M_{i}=\pm M_{j}$. Thus, these terms couple orbitals with the same spins and the same $|M|$. On the other hand, the off-diagonal matrix elements $V^{SOC,\sigma-\sigma}_{ij}$ are proportional to $\langle l_{i},M_{i}|L_{\pm}|l_{i},M_{j}\rangle$, and thus couple orbitals with different spins and $M_{i}=M_{j}\pm1$.
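To illustrate the structure of the angular contribution, the following sketch builds the $L_z$, $L_\pm$ blocks and the full $\boldsymbol{L}\cdot\boldsymbol{S}$ matrix of Eq.~(\ref{eqls2}) in the basis of complex spherical harmonics $|l,m\rangle$ (illustrative only; the actual implementation works in the real spherical harmonics basis $|l,M\rangle$):
\begin{verbatim}
import numpy as np

def ls_matrix(l):
    # L.S in the |l,m> basis, m = -l..l (units of hbar^2), in the
    # block form [[Lz, L-], [L+, -Lz]] of Eq. (3).
    dim = 2 * l + 1
    m = np.arange(-l, l + 1)
    Lz = np.diag(m.astype(float))
    Lp = np.zeros((dim, dim))   # <m+1| L+ |m> = sqrt(l(l+1)-m(m+1))
    for i, mi in enumerate(m[:-1]):
        Lp[i + 1, i] = np.sqrt(l * (l + 1) - mi * (mi + 1))
    Lm = Lp.T                   # L- is the adjoint of L+ (real here)
    return np.block([[Lz, Lm], [Lp, -Lz]])
\end{verbatim}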
These coupling terms can open band gaps or generate the inversion of states that are essential to the physics of topological insulators.
The band gaps were topologically characterized by calculating the Chern number ($\mathcal{C}$). This number is necessary to identify the topological class induced by the SOC in magnetic systems and is related to the non-trivial Hall conductivity. In two-dimensional systems the Chern number can be calculated within a non-Abelian formulation\cite{PhysRevB.85.115415} by the following expression: \begin{equation}\label{eq1} \mathcal{C}=\frac{1}{2\pi}\int_{BZ}\text{Tr}[\boldsymbol{B}(\boldsymbol{k})]d^2k, \end{equation} where the trace is a summation over the band index, and only the occupied bands are taken into account. The integration is done over the whole Brillouin Zone (BZ), and $\boldsymbol{B}(\boldsymbol{k})$ is a matrix representing the non-Abelian momentum-space Berry curvature, whose diagonal elements can be written as\cite{PhysRevB.85.115415}: \begin{multline} \boldsymbol{B}_{n}(\boldsymbol{k})=\underset{\Delta_{k_y}\rightarrow0}{\text{lim}}\,\underset{\Delta_{k_x}\rightarrow0}{\text{lim}}\frac{-i}{\Delta_{k_x}\Delta_{k_y}} \text{Im}\,\text{log}\big[\langle u_{n\boldsymbol{k}}\rvert u_{n\boldsymbol{k}+\Delta_{k_x}} \rangle\\ \times\langle u_{n\boldsymbol{k}+\Delta_{k_x}}\rvert u_{n\boldsymbol{k}+\Delta_{k_x}+\Delta_{k_y}}\rangle \langle u_{n\boldsymbol{k}+\Delta_{k_x}+\Delta_{k_y}}\rvert u_{n\boldsymbol{k}+\Delta_{k_y}}\rangle\\ \times\langle u_{n\boldsymbol{k}+\Delta_{k_y}}\rvert u_{n\boldsymbol{k}}\rangle\big], \end{multline} where $\Delta_{k_x}$ ($\Delta_{k_y}$) is the grid displacement in the $k_x$ ($k_y$) direction of the reciprocal space, $\rvert u_{n\boldsymbol{k}}\rangle$ is the cell-periodic Bloch function at the point $\boldsymbol{k}$ of the BZ, and $n$ indicates the band index. This expression is quite adequate to perform calculations in systems with band crossings, and was implemented using a discrete grid in the reciprocal space.
\section{Results}
\begin{figure*} \includegraphics[width = 18cm]{Fig1Final} \caption{(Color online) Schematic representations of Ru/graphene systems with (a) ($4\times4$), (b) ($5\times5$) and (c) ($6\times6$) periodicities, respectively. These are examples of the $((3n+1)\times(3n+1))$, $((3n+2)\times(3n+2))$ and ($3n\times3n$) Ru/graphene systems, respectively. Here, $n$ is an integer. The geometric centers of the triangles formed by the Ru adatoms (barycenters) form a honeycomb lattice, represented in the figure.} \label{Fig1} \end{figure*}
The energetic stability of Ru adatoms on the graphene sheet was examined through the calculation of the binding energy ($E^b$), written as $$ E^{b}=E[\mbox{graphene}]+E[\mbox{adatom}]-E[\mbox{Ru/graphene}], $$ where $E[\mbox{graphene}]$ and $E[\mbox{adatom}]$ represent the total energies of the separated systems, the graphene sheet and an isolated Ru atom, respectively, and $E[\mbox{Ru/graphene}]$ represents the total energy of the (final) Ru-adsorbed graphene sheet, Ru/graphene. We have considered Ru/graphene systems with ($n \times n$) periodicity, with $n$ ranging from 2 up to 12, thus covering a set of different Ru concentrations on the graphene sheet. For the energetically most stable configuration, the Ru adatom presents a $C_{6v}$ symmetry, sitting on the hollow site ($H$) of the graphene sheet. For the ($4\times 4$) periodicity, we obtained $E^b = 2.64$~eV at $H$, while the top site (above the C atom) is energetically less stable by 0.73~eV ($E^b = 1.91$~eV).
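As a methodological aside, the discretized evaluation of Eq.~(\ref{eq1}) described above can be implemented compactly with link variables built from overlaps of the occupied states on the k-grid. The following is an illustrative sketch (the determinant over occupied bands yields a gauge-invariant plaquette phase; the data layout is an assumption, not the actual SIESTA implementation):
\begin{verbatim}
import numpy as np

def chern_number(u):
    # u: complex array of shape (Nx, Ny, dim, n_occ) holding the
    # occupied cell-periodic Bloch vectors on a uniform k-grid
    # (periodic in both directions).
    Nx, Ny = u.shape[:2]
    total = 0.0
    for i in range(Nx):
        for j in range(Ny):
            u00 = u[i, j]
            u10 = u[(i + 1) % Nx, j]
            u11 = u[(i + 1) % Nx, (j + 1) % Ny]
            u01 = u[i, (j + 1) % Ny]
            # product of the four link matrices around the plaquette
            loop = (u00.conj().T @ u10) @ (u10.conj().T @ u11) \
                 @ (u11.conj().T @ u01) @ (u01.conj().T @ u00)
            total += np.angle(np.linalg.det(loop))
    return total / (2.0 * np.pi)
\end{verbatim}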
There is a negligible dependence of the calculated binding energies on the Ru concentration. We did not find any energetically stable configuration for Ru adatoms on the bridge site (on the C--C bond). It is noticeable that the Ru binding energy is larger when compared with that of most transition metals (TMs) adsorbed on graphene\cite{Ding2011,PhysRevB.77.195434,PhysRevB.77.235430}. At the equilibrium geometry the Ru adatom lies at 1.68~\AA\ from the graphene sheet (vertical distance $z$), which is smaller when compared with most of the other TMs on graphene\cite{PhysRevB.77.235430}. Those findings allow us to infer that there is a strong chemical interaction between the Ru adatoms and the graphene sheet. Indeed, our electronic structure calculations indicate that the Ru $4d$ orbitals $d_{x^{2}-y^{2}}$, $d_{xz}$, $d_{yz}$ and $d_{xy}$ are strongly hybridized with the carbon $\pi$ (host) orbitals, while $d_{z^2}$ behaves as a lone-pair orbital.
Initially we will examine the electronic properties of Ru/graphene systems with $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ periodicities, whose geometries are schematically represented in panels (a) and (b) of Fig.~\ref{Fig1}, respectively. The ($3n\times 3n$) systems [see Fig.~\ref{Fig1}(c)] will be discussed later on. Due to the Ru-induced electrostatic field on the graphene sheet, the (5$\times 5$) Ru/graphene system exhibits two spin-degenerate band intersections (Dirac cones), at the $K$ and $K^\prime$ points, separated by 0.78~eV [indicated as $\Delta_0$ in Fig.~\ref{Fig2}(a)]. Further inclusion of spin polarization gives rise to a sequence of four spin-split band intersections, $C1$--$C4$ in Fig.~\ref{Fig2}(b). The strength of the exchange field can be measured by the energetic separation $E_{x}$ at the $\Gamma$ point, of 0.66~eV, as shown in Fig.~\ref{Fig2}(b). In the same diagram, $\Delta_{C2-C3}$ indicates the energy separation between the highest occupied ($C3$) and lowest unoccupied ($C2$) Dirac cones. Notice that the linear energy dispersion (Dirac cones) has been preserved. Our calculated Projected Density of States (PDOS) [Fig.~\ref{Fig2}(c)] reveals that the Dirac cones are composed of similar contributions from the C $2p$ ($\pi$) orbitals and the Ru $4d$ orbitals.
On the other hand, reducing the Ru adatom concentration by increasing the ($n \times n$) periodicity, we find that (i) the electronic contribution of the Ru $4d$ orbitals to the Dirac cones $C2$ and $C3$ ($C1$ and $C4$) reduces (increases); (ii) in contrast, the contribution of the C $\pi$ orbitals to $C1$ and $C4$ ($C2$ and $C3$) reduces (increases); (iii) the energy dispersions of the electronic bands that form the Dirac cones $C1$ and $C4$ are reduced (the bands become flatter), so that the localized character of the Ru $4d$ orbitals is strengthened, in accordance with (i); and (iv) the electronic bands $C2$ and $C3$ retrieve the behavior of the pristine graphene sheet, in accordance with (ii). The role played by the Ru adatom becomes negligible, and $\Delta_{C2-C3} \rightarrow 0$, for larger ($n \times n$) periodicities, as shown in Fig.~\ref{Fig2}(d). In Fig.~\ref{Fig2}(e) we present the electronic band structure of (10$\times$10) Ru/graphene, where (iii) and (iv) described above can be verified, and in Fig.~\ref{Fig2}(f) we present the expected picture of the electronic band structure of ($n \times n$) Ru/graphene for $n\rightarrow\infty$.
Those findings confirm the strong electronic coupling between the Ru adatoms and the graphene sheet (Ru$\leftrightarrow$graphene), leading to a long-range interaction between the Ru adatoms via graphene (Ru$\leftrightarrow$Ru). The total magnetic moment per Ru atom is also found to depend on the Ru coverage. Apart from the $(2\times2)$ periodicity, which is non-magnetic, all studied structures present a finite magnetic moment. For all the $(3n\times3n)$ periodicities the magnetic moment is 2.0$\mu_B$, whereas for the $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ periodicities the magnetic moment increases with $n$ from 1.75 to 2.0$\mu_B$ in the limit of low coverage.
\begin{figure} \begin{center} \includegraphics[width = 8.5cm ]{Fig2Final} \end{center} \caption{(Color online) Evolution of the electronic band structure of Ru/graphene with a ($5\times5$) periodicity, considering successively the contributions of (a) the electrostatic potential generated by the Ru adatoms and (b) the exchange field. In (c) the PDOS of (b) is shown. Dark gray (blue) and black lines are associated with the down and up spins, respectively. (d) Variation of the energetic separation, $\Delta_{C2-C3}$, of the Dirac cones closer to the Fermi level ($C2$ and $C3$) with respect to the concentration of Ru adatoms. (e) Band structure of the (10$\times$10) Ru/graphene system. (f) Expected picture of the band structure of ($n \times n$) Ru/graphene systems with $n\rightarrow\infty$.} \label{Fig2} \end{figure}
In order to improve our understanding of the role played by the (long-range) Ru$\leftrightarrow$Ru electronic interaction in the presence of the graphene host, we turned off such Ru$\leftrightarrow$Ru interaction by examining the electronic structure of a Ru adatom adsorbed on the central hexagon of a coronene molecule ($C_{24}H_{12}$), Ru/coronene. This geometry is represented in the inset of Fig.~\ref{Fig3}(a). This is a hypothetical system, since the equilibrium geometry of the Ru adatom in Ru/coronene is kept the same as that obtained for the periodic Ru/graphene system. The calculated molecular spectrum, presented in Fig.~\ref{Fig3}(a), reveals that the HOMO and LUMO are both doubly degenerate states (mostly) composed of the $d_{xz}$ and $d_{yz}$ orbitals of the Ru adatom, the HOMO having spin-up ($\uparrow$) and the LUMO spin-down ($\downarrow$) character. The effect of the Ru$\leftrightarrow$Ru interaction, mediated by the graphene $\pi$ orbitals, can be observed by comparing panels (a) and (b) in Fig.~\ref{Fig3}. In Fig.~\ref{Fig3}(b), we present the electronic band structure of ($4\times4$) Ru/graphene, where it is noticeable that the HOMO and LUMO energies of Ru/coronene compare very well with those of ($4\times4$) Ru/graphene at the $\Gamma$ point. We also identify the other $4d$ states ($d_{z^2}$, $d_{x^2-y^2}$ and $d_{xy}$). In the Ru/graphene system, the Ru $4d$ orbitals (hybridized with the $\pi$ orbitals of the graphene sheet) exhibit a dispersive character along the $\Gamma$--M and $\Gamma$--K directions within the Brillouin zone. We find that the Dirac cones with spin-up (spin-down) above (below) the Fermi level, indicated as $C2$ ($C3$) in Fig.~\ref{Fig3}(b), are formed by dispersive states with contributions from the $d_{xz}$, $d_{yz}$, $d_{x^{2}-y^{2}}$, and $d_{xy}$ Ru orbitals. In other words, the states around the Fermi energy are composed of Ru orbitals with $l=2$ and $m=\pm1,\pm2$, hybridized with the carbon $p_z$ orbitals.
\begin{figure} \includegraphics[width = 8.5cm ]{Fig3Final} \caption{(Color online) Local effect of the Ru adatom. (a) Molecular spectrum of Ru adsorbed on the coronene molecule, as illustrated by the structure at the bottom left of the box. (b) Electronic band structure of the (4$\times$4) Ru/graphene system without spin-orbit coupling. The (red) boxes around $C2$ and $C3$ indicate the energy intervals used to calculate the LDOS. The structure of the unit cell used is represented at the bottom left, with the barycenters indicated by (red) balls. (c) LDOS around the LUMO (left) and HOMO (right) of the molecular spectrum, in arbitrary units. (d) LDOS with down and up spins around the $C3$ and $C2$ cones, respectively. We point out a C site associated with a barycenter, which is indicated in the structure in (b). } \label{Fig3} \end{figure} In Figs.~\ref{Fig3}(c) and \ref{Fig3}(d) we present the Local Density of States (LDOS) of the Ru/coronene and (4$\times$4) Ru/graphene systems, respectively. In those diagrams, the electronic states were projected onto a plane parallel to the Ru/coronene and Ru/graphene interfaces, 0.5~\AA\ above the molecule and the graphene sheet, respectively. We verify that the HOMO and LUMO [Fig.~\ref{Fig3}(c)] are localized on the nearest neighbor (NN) and the next nearest neighbor (NNN) C sites of the Ru adatom, respectively. It is noticeable that for the periodic Ru/graphene systems, regardless of the size and geometry of the Ru arrangement, we find a similar electronic distribution for the occupied and empty states at the $\Gamma$ point, {\it viz.}: the spin-up (spin-down) C $\pi$ orbitals, localized on the NN (NNN) sites of the Ru adatom, contribute to the formation of the highest occupied (lowest unoccupied) states. Thus, we can infer that the Ru adatom locally defines the charge density at both its first and second nearest neighbor carbon atoms. On the other hand, due to the energy dispersion of those states along the $\Gamma\rightarrow K$ direction, the electronic states of the Dirac cone $C2$ are mostly localized on the C atoms NN to the Ru adatoms [Fig.~\ref{Fig3}(d-right)], while the C atoms NNN to the Ru adatom [Fig.~\ref{Fig3}(d-left)] contribute to the formation of $C3$. In addition, as shown in Fig.~\ref{Fig3}(d), not only do the C atoms NN and NNN to the Ru adatom contribute to the formation of the Dirac cones, but there are also electronic contributions from the other C atoms of the graphene sheet. In particular, the LDOS of $C2$ exhibits a constructive wave function interference (LDOS peak) on the carbon atom lying at the geometric center of the triangular array of Ru adatoms, hereafter called the barycenter [indicated by an arrow in Fig.~\ref{Fig3}(d)], while the interference becomes destructive for the Dirac cone $C3$ [Fig.~\ref{Fig3}(d-left)]. Further LDOS calculations reveal that the other Dirac cone above $E_F$, $C1$ (spin-down), presents an electronic distribution similar to that of $C2$, whereas the LDOS of the Dirac cone $C4$ (spin-up), below $E_F$, is similar to that of $C3$. We find the same LDOS picture for the (7$\times$7) and (10$\times$10) Ru/graphene systems, namely, the Dirac cones above $E_F$ present LDOS peaks (i) on the NN C atoms to the Ru adatom and (ii) on the C atom localized at the barycenter of the triangular array of Ru adatoms, while the LDOS of the Dirac cones below $E_F$ are (iii) localized on the NNN C atoms to the Ru adatoms, and (iv) present a negligible electronic contribution from the barycenter C atom. 
Such an electronic picture, as described in (i)--(iv), is verified for the other $((3n+1)\times (3n+1))$ family of Ru/graphene systems, where the barycenter and the NN C sites to the Ru adatom belong to the same sublattice. Meanwhile, for the $((3n+2)\times (3n+2))$ Ru periodicities, such as the (5$\times$5), (8$\times$8), and (11$\times$11) Ru/graphene systems, the barycenter and the NNN C sites to the Ru adatom belong to the same sublattice. Such a difference gives rise to a distinct LDOS picture for the Dirac cones. In Figs.~\ref{Fig4}(a-left) and \ref{Fig4}(b-right) we present the LDOS for the (5$\times$5) Ru/graphene system, for the spin-up Dirac cone $C2$ and the spin-down Dirac cone $C3$, above and below $E_F$, respectively. Here, compared with its (4$\times$4) counterpart, in the (5$\times$5) Ru/graphene system the Dirac cones above $E_F$ obey (i) and (iv), whereas the Dirac cones below $E_F$ are characterized by (ii) and (iii). Thus, we can infer that the electronic states at the barycenter C atom contribute to the formation of the Dirac cones below $E_F$ for the $((3n+2)\times (3n+2))$ Ru periodicity, whereas in the $((3n+1)\times (3n+1))$ Ru/graphene systems the barycenter C atom contributes to the formation of the Dirac cones above $E_F$. In the region of integration used to calculate the LDOS around the $C2$ cone (formed by bands with up spin), there are states with opposite spin, which are associated with the formation of the $C3$ cone, as shown in Fig.~\ref{Fig3}(b). These states, although located around 0.5~eV above the vertex of the $C3$ cone, have a distribution of peaks in the LDOS [shown in Fig.~\ref{Fig4}(a-right)] similar to that presented at the vertex of the $C3$ cone [shown in Fig.~\ref{Fig4}(b-right)]. The same behavior occurs for the other Dirac cones ($C1$--$C4$). Thus, the peak-distribution pattern is not a characteristic of the cone vertices alone, but of the entire energy bands that form the cones. \begin{figure} \includegraphics[width = 8.5cm ]{Fig4Final} \caption{(Color online) LDOS for up and down spins around the energy levels at which the (a) $C2$ and (b) $C3$ cones are formed, for the (5$\times$5) Ru/graphene system. } \label{Fig4} \end{figure} In contrast, the band structures of the $(3n\times3n)$ Ru/graphene systems do not show Dirac cones. In this case, the $K$ and $K^\prime$ points are folded onto the $\Gamma$ point and, upon the presence of Ru adatoms in a $(3n\times3n)$ periodicity, those electronic states undergo an intervalley scattering process that suppresses the formation of the Dirac cones\cite{Ding2011}. Figure~\ref{Fig5}(a) presents the electronic band structure of a (6$\times$6) Ru/graphene system, where we find an energy gap of $0.11$~eV at the $\Gamma$ point. In this case, we find a quite different LDOS distribution [Fig.~\ref{Fig5}(b)] on the graphene sheet when compared with the $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ Ru/graphene systems. Namely, the highest occupied (spin-up) states, at the $\Gamma$ point, spread out somewhat homogeneously over the graphene sheet. Similar results (not shown) are found for the lowest unoccupied (spin-down) states, as well as for other $(3n\times3n)$ periodicities. We find no LDOS peaks on the hexagonal lattice of barycenters, which can be attributed to the absence of carbon atoms there, since for such Ru periodicities the barycenters lie at the hollow sites of the graphene sheet. 
\begin{figure} \includegraphics[width = 8.5cm ]{Fig5Final} \caption{(Color online) (a) Electronic band structure and (b) LDOS for the (6$\times$6) Ru/graphene system. The region of integration used to compute the LDOS is represented by the (red) box in (a).} \label{Fig5} \end{figure} In order to provide further support for such a subtle interplay between the arrangement of the Ru adatoms on the graphene sheet and the formation of the Dirac cones, we have examined two additional configurations of Ru adatoms on graphene, namely, Ru adatoms forming rectangular and hexagonal lattices. For those Ru/graphene systems, we find two (spin-polarized) Dirac cones, instead of the four present for the triangular deposition of Ru on graphene. Two of the cones disappear because the surface potential induced by the Ru adatoms no longer has hexagonal symmetry. Hence, we conclude that the presence of four Dirac cones is constrained by the triangular arrangement of the Ru adatoms on the graphene host. \begin{figure*} \centering \includegraphics[width = 17 cm ]{Fig7Final} \caption{(Color online) The SOC effects in Ru/graphene systems. (a) Electronic band structure for the (5$\times$5) periodicity with SOC. (b) Band structure of (a) near the Fermi level. (c) Energy band gaps for the $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ families with periodicities $5\times5$ or larger. Band structures for the (d) (4$\times$4), (e) (6$\times$6) and (f) (2$\times$2) periodicities. (g) Band gap variation of (f) with the SOC strength $\lambda_{SOC}$. The dashed (red) lines indicate the Fermi levels, whereas the black dotted lines are guides to the eye for the energy gaps. } \label{Fig7} \end{figure*} By turning on the SOC we have the ingredients necessary to look for topological phases in Ru/graphene systems. Since in pristine graphene the radial contribution of the SOC term is negligible, and both the spin and the orbital magnetic moments are quenched, the SOC properties of the Ru/graphene system are defined by the Ru $4d$ orbitals. In the energy range in which the Dirac cones $C1$--$C4$ are formed, there are similar contributions coming from the C $2p$ ($\pi$) and the Ru $4d$ orbitals, as already discussed above. This Ru contribution gives rise to the SOC effects. We analysed the wave functions around the Fermi energy and concluded that their coefficients present significant changes only close to band crossings. Thus, the SOC modifies neither the electronic configuration nor the effective population of the $4d$ orbitals when compared to the case where only the spin polarization is included. Likewise, with the SOC the total energy of the system decreases by only 0.48~meV, so that the change in binding energy is negligible. The most relevant band crossings occur at the Dirac cones above and below the Fermi energy, as well as right at the Fermi energy. The SOC opens gaps at these band crossings. In order to understand how the inclusion of this interaction contributes to the formation of energy gaps, we separately studied the diagonal and off-diagonal contributions of $\boldsymbol{L}\cdot\boldsymbol{S}$ to $\boldsymbol{V}^{SOC}$ [see Eqs. (\ref{eqls1}) and (\ref{eqls2})]. We find that the off-diagonal term breaks the degeneracies at the $K$ point, opening gaps at the Dirac cones $C1$--$C4$. Without the SOC, the Dirac cones are formed by intersections of bands with the same spin, where the states have a single non-null spinor component. 
As previously discussed, these bands have contributions from the orbital angular momentum $l=2$ ($4d$ orbitals) with $m=\pm1$ and $\pm2$ ($d_{x^{2}-y^{2}}$, $d_{xz}$, $d_{yz}$ and $d_{xy}$ orbitals), leading to non-null off-diagonal terms. Through the self-consistent cycle, these off-diagonal terms generate wave functions with non-null coefficients in the spinor component that was previously zero, breaking the degeneracies at the Dirac cones and opening an energy gap. We also find that the diagonal elements contribute to the opening of gaps at the Fermi level, in the vicinity of the $K$ and $K'$ points [see Fig.~\ref{Fig7}(a)]. In this case, there are two effects: (i) the exchange and correlation potential generates a non-collinear spin coupling term via $V^{xc,\sigma-\sigma}$; and (ii) the splitting of the energy bands with opposite spins is generated by the addition (subtraction) of the matrix element $\langle l_{i},m_{i}|L_{z}|l_{j},m_{j}\rangle$ in $H_{ij}^{\uparrow\uparrow}$ ($H_{ij}^{\downarrow\downarrow}$). Together these two effects lead to the opening of gaps at the Fermi energy. In Fig.~\ref{Fig7}(a) we present the electronic band structure of the (5$\times$5) Ru/graphene system. The SOC gives rise to energy gaps of 9.5~meV right at the points where bands of different spin cross, characteristic of the QAH phase\cite{Weeks2011}, as depicted in Fig.~\ref{Fig7}(b). The spin texture indicated in Fig.~\ref{Fig7}(a) is also characteristic of a QAH topological phase. Moreover, we calculated the Chern number with Eq.~\ref{eq1}, obtaining $\mathcal{C} = -2$ for the (5$\times$5) Ru/graphene system, unequivocally confirming the QAH phase. As discussed above, the electronic properties of the Ru/graphene system, such as the strength of the exchange field ($E_{x}$), the energy separation between the Dirac cones ($\Delta_{C2-C3}$), and the electronic contribution of the Ru $4d$ orbitals to the formation of the Dirac cones $C1$--$C4$, all depend on the Ru concentration and the ($n \times n$) periodicity. In this work we also found an intriguing dependence between the topological phase and the ($n \times n$) periodicity of Ru/graphene. Indeed, by calculating the Chern number we found $\mathcal{C} = -2$ for all $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ Ru/graphene systems with periodicities ($5\times5$) or larger. For those systems, the non-trivial band gap varies with the periodicity as shown in Fig.~\ref{Fig7}(c). On the other hand, for the ($4\times4$) Ru/graphene system, the Ru$\leftrightarrow$Ru interaction is strengthened and the Ru $4d$ orbitals become less localized. For this periodicity, with the SOC turned off the crossings between the up and down bands are not all aligned in energy, and with the SOC turned on the gaps open at different energies, leading to a non-gapped band structure (metallic states), as shown in Fig.~\ref{Fig7}(d). These metallic states prevent the observation of the QAH effect. However, we obtain a non-null Chern number ($\mathcal{C}\approx 0.98$), indicating a finite anomalous Hall conductivity, given by $\sigma_{xy}=\mathcal{C}\frac{e^{2}}{h}$. For all the ($3n\times3n$) Ru/graphene systems we find $\mathcal{C} = 0$. This is a consequence of the trivial band gap at the $\Gamma$ point, generated by the intervalley $K$--$K'$ scattering process [the band structure is shown in Fig.~\ref{Fig7}(e)]. Thus, all the ($3n\times3n$) Ru/graphene systems are trivial insulators, since the SOC is not strong enough to invert the trivial band gap. 
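To make the diagonal/off-diagonal separation used above explicit, we recall the standard decomposition of the on-site SOC operator (a textbook identity, stated here only for reference):
\begin{equation*}
\boldsymbol{L}\cdot\boldsymbol{S} \;=\; L_{z}S_{z} \;+\; \tfrac{1}{2}\left(L_{+}S_{-} + L_{-}S_{+}\right),
\end{equation*}
where the $L_{z}S_{z}$ term enters the spin-diagonal blocks $H_{ij}^{\uparrow\uparrow}$ and $H_{ij}^{\downarrow\downarrow}$ with opposite signs, while the ladder terms $L_{\pm}S_{\mp}$ generate the spin off-diagonal blocks that couple states of opposite spin, consistent with the two gap-opening mechanisms discussed above.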
A further increase of the Ru concentration (0.25~ML of Ru adatoms) was examined by considering the (2$\times$2) periodicity. In this system, the barycenters are located at the NNN C sites to the Ru adatoms. The $(2\times 2)$ Ru/graphene system exhibits quite different electronic and topological properties in comparison with the other Ru/graphene systems: (i) it presents an energy band gap and zero net magnetic moment [see Fig.~\ref{Fig7}(f)]; and (ii) it presents the QSH phase. Here, we use the adiabatic continuity argument to prove (ii). This argument has been used to identify 2D and 3D topological insulators\cite{PhysRevLett.105.036404,PhysRevLett.106.016402,bernevig2006quantum,PhysRevB.76.045302,PhysRevB.78.045426,PhysRevB.80.085307,padilha2013quantum}. According to this argument, if the Hamiltonian of a system is adiabatically transformed into another, the topological classification of the two systems can only change if the band gap closes. Thus, we smoothly changed the SOC strength by placing a multiplicative factor, $\lambda_{SOC}$, in the term associated with the on-site approximation of the SOC Hamiltonian, $H_{SOC}$. When this parameter is varied from zero to one, we observe a variation of the gap as shown in Fig.~\ref{Fig7}(g). At $\lambda_{SOC}=0.712$ we find a metallic state (gap closing), generated by an inversion of the states that contribute to the formation of the HOMO and LUMO, indicating a transition from a trivial state (without SOC, $\lambda_{SOC}=0$) to the QSH state (with SOC, $\lambda_{SOC}=1$). The above description is summarized in Table~\ref{tab1}, where the multiple topological phases exhibited by the Ru/graphene systems can be appreciated. \begin{table} \caption{\label{tab1} Multiple topological phases in Ru/graphene systems as a function of the periodicity.} \begin{ruledtabular} \begin{tabular}{cc} periodicity & topological phase \\ \hline ($2\times2$) & QSH \\ ($4\times4$) & metal \\ $((3n+1)\times(3n+1))$\footnotemark[1] & QAH \\ $((3n+2)\times(3n+2))$\footnotemark[2] & QAH \\ $(3n\times3n)$\footnotemark[2] & trivial insulator \footnotetext[1]{for $n\ge 2$} \footnotetext[2]{for $n\ge 1$} \end{tabular} \end{ruledtabular} \end{table} \section{Summary} In summary, based on {\it ab initio} calculations, we investigated the structural and electronic properties of graphene with adsorbed Ru adatoms, Ru/graphene. We mapped the evolution of the electronic charge density distribution around the Fermi level as a function of the Ru/graphene geometry. We found that the Ru adatom fixes the wave function phase on its NN and NNN C atoms, whereas the Ru$\leftrightarrow$Ru interaction, mediated by the $\pi$ orbitals of the graphene sheet, gives rise to four spin-polarized Dirac cones for the $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ Ru/graphene systems. The electronic distributions of the states that form those Dirac cones are constrained by the periodicity of the Ru adatoms on the graphene host. For triangular arrays of Ru adatoms, the four spin-polarized Dirac cones are generated by a suitable coupling between the electronic states of two hexagonal lattices, one composed of the carbon atoms of the graphene host and the other attributed to the (barycenter) surface potential on the graphene sheet induced by the triangular lattice of Ru adatoms. For other geometries, hexagonal and rectangular, we have only two spin-polarized Dirac cones, while there are no Dirac cones for $(3n \times 3n)$ Ru/graphene. 
The inclusion of SOC promotes multiple topological phases when graphene is doped with triangular arrays of Ru adatoms. The topological phase in those systems depends on the periodicity (or concentration) of the Ru adatoms on the graphene sheet. For the high coverage of the $(2\times2)$ periodicity (25\% of Ru adatoms) the QSH phase is present, whereas for the $((3n+1)\times(3n+1))$ and $((3n+2)\times(3n+2))$ Ru/graphene systems the QAH phase is preserved even for low Ru coverage (less than 1\%). These results are summarized in Table~\ref{tab1}. Even though transition-metal adatoms have been used before to obtain distinct non-trivial topological phases in graphene, previous works always considered that distinct transition metals would provide distinct topological phases. We have shown that this is not necessarily so: the same transition metal can provide distinct topological phases, depending on the particular geometrical arrangement. \section{Acknowledgements} The authors would like to thank Prof. Shengbai Zhang for fruitful discussions. We also acknowledge financial support from the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico/Institutos Nacionais de Ci\^encia e Tecnologia do Brasil (CNPq/INCT), the Coordena{\c c}\~ao de Aperfei{\c c}oamento de Pessoal de N\'ivel Superior (CAPES), and the Funda{\c c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP).
\section{Introduction} The emerging concept of Volunteer Computing~\cite{935629, volunteercomputing} has demonstrated its very large capacity for high performance computing in a series of highly successful projects, namely SETI@Home~\cite{setiathome}, Folding@Home~\cite{larson-2003}, and others. For example, in SETI@Home more than 1.5 million desktops worldwide are actively contributing their processors to scientific computing, which led to over 250 TeraFLOPS of processing power as of 2007\footnote{{\tt http://boincstats.com/stats/project\_graph.php?pr=sah}, accessed on 27th, Sept. 2007}. This is comparable to the current fastest computer, IBM Blue Gene, which offers sustained speeds of 360 TeraFLOPS. \subsection{Work Pooling and Work Flows} The Berkeley Open Infrastructure for Network Computing (BOINC)~\cite{anderson04} is a generalization of SETI@Home and others which uses a work pool model, where a work pool server coordinates the work being done over a number of workers. Workers pull work units and push results. The work pool model can be readily extended to handle work flows, and doing so will support work flow based grid applications. \reffig{workflow} shows a work pool server that is coordinating the flow of work among several workers (solid circles). Each step of the work flow (shown as solid arrows) requires communication to and from the server (dashed arrows). The communication is required for multiple reasons: (i) the workers usually cannot communicate directly with each other, (ii) to guard against malicious volunteers the results from workers need to be scrutinized for correctness, and (iii) a large work flow requires checkpointing to be efficient. \begin{figure} \centering \subfigure[Work flow coordination in a work pool model. Inter-work flow communication takes place via the server.]{\label{workflow}\includegraphics[width=6cm]{workflow}} \hspace{1cm} \subfigure[Work flow coordination in a work pool model with peer-to-peer networking among the workers. Inter-work flow communication takes place via the peer-to-peer network.]{\label{workflowp2p}\includegraphics[width=6cm]{workflowP2P}} \caption{Work pooling and work flows.} \label{workflowfig} \end{figure} Clearly, the server communication requirements increase proportionally to the number of work flow steps. Moreover, if the work flow contains iterative elements, i.e.~cycles, then the communication to the server will increase in proportion to the complexity of the iterations. Such communication could quickly slow down more complicated work flows. In general, one can model the work flow as a parallel process, i.e.~as a message passing parallel program; a simple work flow is like a pipeline of tasks in a parallel process. Doing so is particularly suitable for the cases where the work flow includes cycles with large numbers of iterations. In order to eliminate the communication to the server for each work flow step, we propose the use of a peer-to-peer based parallel architecture that allows the workers to scrutinize and checkpoint their results independently of the server. In this case, shown in \reffig{workflowp2p}, intra-work flow communication is done using a peer-to-peer network, and only inter-work flow communication requires the server. 
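To make the analogy concrete, a work flow pipeline maps directly onto a message passing program in which each worker receives from its upstream neighbour and sends to its downstream one, with no server involvement per step. The following minimal sketch (in Python, assuming the {\tt mpi4py} bindings purely for illustration; the systems discussed below use their own P2P message passing layers) shows this pattern, where {\tt stage} is a hypothetical placeholder for one work flow step:
\begin{verbatim}
from mpi4py import MPI   # illustrative MPI-style message passing layer

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def stage(data):
    # hypothetical placeholder for one work flow step
    return data + 1

# pull from the upstream worker (rank 0 starts the flow)
data = 0 if rank == 0 else comm.recv(source=rank - 1)
result = stage(data)
if rank < size - 1:
    comm.send(result, dest=rank + 1)   # push downstream, no server hop
else:
    print("final result:", result)     # only the end of the flow reports
\end{verbatim}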
\subsection{P2P Based Parallel Computing} P2P based parallel processing systems~\cite{mpichopen, dvm, p2pmpi} have already been built in the past few years, and previous studies~\cite{dvm} have shown that with proper design the P2P based approach can potentially be used to collect free idle CPU cycles from the Internet efficiently, with reduced maintenance cost and reduced risk of single point failure. In such P2P platforms, nodes are indexed in a Distributed Hash Table (DHT) overlay~\cite{chord, mspastry, kademlia} and messages for parallel computing are routed in multiple hops and in a decentralized way. This decentralized design enables message passing programs on such P2P platforms and allows a wide range of existing message passing programs to run. \subsubsection{Node Failure} For both the new and the traditional architectures of Volunteer Computing mentioned above, one of the common issues is how to handle node failure and departure. In the context of this paper, as both node failure and departure immediately make the computing and storage resources of the node unavailable, we use the term failure to represent both kinds of events. Different from other parallel processing domains, where computing nodes are usually dedicated and well maintained, the nodes in a Volunteer Computing platform can disconnect at any time, and this can happen relatively frequently. Traditional Volunteer Computing projects like SETI@Home and Folding@Home take a simple `deadline' approach, where each work unit is assigned together with a deadline for reporting results; the system can then reassign the work unit to another node if the results cannot be reported on time due to failure or machine departure. This approach works well for data-parallel and parametric parallel programs, as in such programs the work units are usually independent of each other and can be recomputed by any other node at any time. However, this mechanism is not sufficient to support parallel processing that uses message passing. \subsubsection{Checkpointing} One of the long applied methods to counter node failure and departure in message passing systems is to employ checkpoint and restart~\cite{DBLP:journals/csur/ElnozahyAWJ02,ChandyL85}, in which the status of the message passing job is saved regularly on reliable storage so that the progress can be rolled back to the last known status when any involved node fails (e.g.~disconnects from the network or crashes due to hardware or software failure). For example, the Chandy-Lamport algorithm~\cite{ChandyL85} is used in our DHT based P2P-DVM system~\cite{dvm} to capture the status of the MPI job every $T$ seconds, where the fixed number $T$ is a per-job parameter chosen by the user; the captured process states are saved on a P2P based distributed storage system. The experiments have shown that such a checkpoint-rollback protocol causes little overhead for fault free running compared to other approaches, and the jobs can eventually be finished in the presence of peer failure events, as long as those events (peer disconnection or machine crash) can be detected and the value of $T$ is chosen carefully. A similar approach is also found in a few cluster and networked workstation based systems~\cite{BHKLC06}. 
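The cost structure of this fixed-interval scheme is easy to make concrete. The toy Monte-Carlo sketch below (our illustration only; it is not the simulator used in the evaluation section) estimates the total runtime of a job that checkpoints every $T$ seconds with overhead $V$, rolls back on exponentially distributed failures with the given job-level MTBF, and pays a download cost $T_d$ on each restart:
\begin{verbatim}
import random

def runtime_with_checkpoints(work, T, V, Td, mtbf, seed=0):
    # work: fault-free runtime still needed (s); T: checkpoint
    # interval; V: checkpoint overhead; Td: image download time;
    # mtbf: job-level mean time before failure.
    rng = random.Random(seed)
    elapsed, done = 0.0, 0.0
    while done < work:
        next_fail = rng.expovariate(1.0 / mtbf)  # time to next failure
        cycle = min(T, work - done)              # shorten the last cycle
        if next_fail >= cycle + V:
            elapsed += cycle + V                 # compute, then checkpoint
            done += cycle                        # progress is now saved
        else:
            elapsed += next_fail + Td            # lose partial cycle, restart
    return elapsed
\end{verbatim}
Sweeping $T$ in such a toy model already reproduces the trade-off analysed below: a very small $T$ inflates the fault free overhead, while a very large $T$ inflates the rollback cost.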
The above mentioned checkpoint-rollback approach has demonstrated its usefulness in our previous work, but it also raises the question of how to choose the value of the checkpoint interval $T$, and whether there is an optimal value of $T$ for a given system at a given time such that the overall performance of the system can be improved. The existing implementation requires users to manually choose the checkpoint interval before the job is submitted, which is quite difficult for users without adequate knowledge of the running environment (e.g.~available bandwidth, failure rate, etc.). A method that does not take the dynamism of P2P networks into consideration will not adapt well when the node failure rate or the available bandwidth changes over time. Such a non-optimized approach may also create another major overhead for the overall performance, as resources are consumed when capturing the process context and storing it on the network. We address this issue in this work by proposing an adaptive checkpoint scheme which automatically adjusts the checkpoint interval during runtime to reduce the total overheads introduced by checkpointing and restarting. \subsection{Contribution} In this work, we propose an adaptive scheme for automatically making checkpoint decisions during runtime; the method dynamically selects an optimized checkpoint interval based on the estimated network conditions, which in turn are based on statistical data collected online. The scheme is designed to be completely decentralized. We use simulations to show when our model is better than the fixed checkpoint interval in terms of reduced total runtime. \subsection{Related Works} The idea of optimizing system performance using estimated P2P network conditions has been applied in a few projects at different levels of the P2P software stack. In~\cite{GhinitaT06} and~\cite{mspastry} the idea of probability based stabilization is proposed to better control the cost of stabilization, based on estimates of both the P2P network size and the peer failure rate, which improves the P2P routing success rate. A similar idea is applied in gossip protocols~\cite{gossip}: with the estimated P2P network size, the protocol is able to compute the number of gossip targets to reach for gossip messages. However, to the best of our knowledge, we are the first to propose and apply such a method for P2P based parallel computing. A novel peer availability prediction algorithm using multiple predictors was proposed in~\cite{predictor}, and it demonstrates good performance for possible distributed applications. In that work, each peer's availability is predicted based on its historical connection and disconnection statistics. Applications can benefit from such availability prediction in their application level planning. This algorithm depends on the availability of log data, which may not exist for some peers, e.g.~peers which have just installed the software and thus have no log data at all; indeed, two weeks of logging data are used for training the predictor in~\cite{predictor}. As it has been reported that SETI@Home~\cite{setiathome} attracts around 2,000 new machines daily~\cite{volunteercomputing}, it is clear that such new peers will not be able to predict their availability using~\cite{predictor}. 
The computational grid community applied a similar approach~\cite{NurmiBW05} to predict the availability of grid computing nodes based on their uptime logs; this is acceptable for grid systems, where nodes are all professionally managed, but the approach will suffer in a P2P network for the same reason mentioned above. \section{Typical Peer Uptimes and Checkpointing Impact} \label{limitations} On traditional dedicated parallel processing platforms like clusters, machine failures can be regarded as exceptional events. For example, as of the time of writing, the PRAGMA grid web portal shows its 207 hosts have an average uptime of 68.5 days\footnote{{\tt http://goc.pragma-grid.net/cgi-bin/scmsweb/uptime.cgi}, accessed on 21 June, 2007}. Given such an average uptime of more than two months, the risk of software and hardware failures can be mitigated with the help of checkpoint and restart. A fixed checkpoint interval can work well with the interval set so that checkpoints are taken a few times daily, as the probability for each job, which may run up to a few days, to suffer multiple failures during its lifetime is quite low, and thus the overhead introduced by the fixed checkpoint interval is not significant. However, P2P network conditions present a new challenge for running message passing jobs. In general, P2P networks consist of a large number of peers which are Internet users' desktop and laptop PCs; these machines can disconnect from the P2P network quite frequently, and we found that the rate of such departures, each of which immediately causes the parallel processing on the peer to fail, dominates the causes of job failure and varies widely from time to time. In order to highlight the running environment of parallel processing systems over P2P networks, we discuss trace data from three globally deployed P2P networks (there are currently no such peer failure statistics available from Volunteer Computing systems like SETI@Home). The peer lifespan trace project\footnote{{\tt http://www.aqualab.cs.northwestern.edu/projects/lifeTrace.html}} monitored over 500,000 peer sessions in the Gnutella file-sharing P2P network for a week; the trace data show that the average peer session time is about 121 minutes, that is, the computing nodes become unavailable in just two hours on average. Another real P2P system, Overnet, was measured for 7 days in~\cite{BhagwanSV03}, and the results indicated a similar average session time of 134 minutes for the 1468 peers observed. In another more recent study, a detailed measurement of the popular Bittorrent P2P network is provided by~\cite{Powlese05BitTorrent}; the active probe part of the \emph{Delft Bittorrent Dataset}\footnote{\tt {http://multiprobe.ewi.tudelft.nl/dataset.html}} was obtained by continuously measuring more than 180,000 Bittorrent peers, and the average session time is 104 minutes, which is consistent with the results found in both the Gnutella and the Overnet trace data. Such short average session times suggest that robustness by design is essential, as even a typical short job which takes a few hours will suffer a few failures. As visualized in \reffig{gnutella}, the Gnutella trace data clearly show that most peers leave the network within just several hours, and the failure rate curve loosely fits the expected exponential distribution curve whose mean time before failure parameter equals Gnutella's average peer lifetime. 
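This `fit by matching the mean' is exactly what the maximum likelihood estimator for exponential data does, since for an exponential lifetime model the MLE of the rate is the reciprocal of the sample mean. A minimal sketch of such a fit against trace data (our illustration; the session times would come from the traces cited above):
\begin{verbatim}
import numpy as np

def fit_exponential(session_minutes):
    # MLE for exponential lifetimes: fitted MTBF = mean session time
    sessions = np.asarray(session_minutes, dtype=float)
    mtbf = sessions.mean()
    # empirical survival function vs. the fitted exp(-t/mtbf)
    t = np.sort(sessions)
    empirical = 1.0 - np.arange(1, len(t) + 1) / len(t)
    fitted = np.exp(-t / mtbf)
    return mtbf, t, empirical, fitted
\end{verbatim}
Plotting {\tt empirical} against {\tt fitted} yields the kind of loose agreement seen in \reffig{gnutella}.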
To enable the system to eventually produce the correct result within reasonable runtimes in such a P2P environment, we argue that automatically adapting to the current network conditions should improve upon the fixed interval of~\cite{dvm}, because: \begin{itemize} \item A too small checkpoint interval $T$ leads to many unneeded checkpoint operations and a large total fault free running overhead, in terms of both the required job execution time and the bandwidth consumed for uploading checkpoint image files. \item A large $T$ reduces the overheads mentioned above but can increase the impact of restarting, as the runtime wasted by each restart has an upper bound of $T$. \item Different from traditional platforms (e.g.~clusters), where the system mean time before failure (MTBF) can be estimated offline and usually stays unchanged during the job, the average MTBF of peers in a P2P network is dominated by node failure events, and this rate can change from time to time. A fixed checkpoint interval $T$ may give sub-optimal performance in a real deployed Internet environment, as it does not adapt to such changes. \end{itemize} \begin{figure*} \centering \subfigure[The peer failure in Gnutella can loosely fit the exponential distribution.]{\label{gnutella}\includegraphics[width=8cm]{gnutella_peer_failure_rate}} \subfigure[The short term failure rate in Overnet is highly dynamic.]{\label{overnet}\includegraphics[width=8cm]{overnet_peer_failure_rate}} \caption{The Peer Failure in the Gnutella and Overnet P2P Networks.} \label{failurerate} \end{figure*} \section{Adaptive Checkpoint Scheme} \label{scheme} In our approach we analyse the efficiency of our checkpoint decision scheme, which tries to optimize the utilization of the total runtime. In this section, we first define the system parameters, then describe how some of them are estimated in \refsec{estimation}, and present our model in \refsec{model}. The parameters used in this work are summarized in \reftab{parameters}. \begin{table*}[htbp] \begin{center} \caption{The Parameters Used in the Adaptive Checkpoint Scheme.} \label{parameters} \begin{tabular}{|c|c|p{0.5\linewidth}|} \hline Name & Symbol & Definition \\ \hline Peer failure rate & $\mu$ & Peer failure is dominated by peer departure events and we model \\ & & it as an exponential distribution~\cite{tianjing07, GhinitaT06} with rate parameter $\mu$. \\ \hline Number of peers & $k$ & The number of peers used in the job. \\ \hline Checkpoint rate & $\lambda$ & How often the status of the job is checkpointed; \\ & & the checkpoint interval is thus $\frac{1}{\lambda}$. \\ \hline Checkpoint overhead & $V$ & Extra runtime caused by each checkpoint. \\ \hline Wasted computation time & $T_{wc}$ & Unsaved computation progress lost on failure. \\ \hline Image download overhead & $T_d$ & The time required to download the checkpoint image.\\ \hline \end{tabular} \end{center} \end{table*} We model the peer failure in this work as an exponential distribution, as this has been suggested in previous works~\cite{tianjing07, GhinitaT06} and accepted by the community. Tian and Dai~\cite{tianjing07} have recently reported that once peers are grouped into different categories according to their average lifetime (e.g.~long, medium and short lifetime), peer failures can be fitted even better by the exponential distribution. 
During the execution of a message passing job, the timeline of the possible events and the relationship of the above parameters are explained in \reffig{timeline}. We compute an optimized value of $\lambda$ for the current P2P network conditions according to the observation and estimation of the above $\mu$, $V$ and $T_d$ parameters. \begin{figure} \psfrag{1/lambda}{$1/\lambda$} \psfrag{V}{$V$} \psfrag{Twc}{$T_{wc}$} \psfrag{td}{$T_d$} \psfrag{c1}{$c_1$} \psfrag{c2}{$c_2$} \psfrag{c3}{$c_3$} \centering{\includegraphics[width=0.65\linewidth]{timeline2}} \caption{The relationship of the parameters with an example failure.} \label{timeline} \end{figure} \subsection{Online Parameter Estimation} \label{estimation} The estimation of the peer failure rate $\mu$, the checkpoint overhead $V$ and the image download overhead $T_d$, which we use to compute the optimized value of $\lambda$, is given below. The scheme used in our P2P based system must be decentralized to ensure scalability: each peer should carry out estimations based on its own observed information about the network conditions, and any centralized monitoring component should be strictly avoided due to both availability and scalability concerns. We also make sure that the network estimation employed here does not increase the communication complexity; in fact, the number of messages exchanged for making the checkpoint decisions in the proposed model is the same as in our previous naive fixed interval approach used in~\cite{dvm,mpichopen}. \subsubsection{Peer Failure Rate $\mu$} \label{failureestimation} Given that the goal of this study is to come up with an optimized scheme which can automatically scale the checkpoint interval for different P2P network conditions, an accurate estimation of the value $\mu$ is important. We carried out a detailed study~\cite{estimation} of the comparative performance of possible failure rate estimation methods, and the results indicate that the Maximum Likelihood based method~\cite{estimation} outperforms the commonly used estimation methods in most cases. In this work we use the Maximum Likelihood method for estimating the peer failure rate, where $\mu$ is estimated as: \begin{equation} \mu=\frac{1}{\;\;\overline{t_{l}}\;\;}=\frac{K}{\sum_{i=1}^K{t_{l,i}}} \label{max-likelihood} \end{equation} where $\overline{t_{l}}$ is the average lifetime observed by the peer and $K$ is the required number of observed failures for computing a new estimate of the failure rate. We use a simple but effective cooperative scheme for collecting peer failure observations. Each peer shares its failure observations with its neighbours, and their neighbours, which effectively allows each peer to monitor the status of its neighbouring peers and the neighbours of its neighbours. \subsubsection{Checkpoint Overhead $V$} The checkpoint overhead measures how much each checkpoint operation slows down the parallel process. The overhead is caused by the facts that (\emph{i}) dumping the whole memory space used by the process puts some stress on the memory bandwidth, (\emph{ii}) compressing the checkpointed status costs some processing cycles, and (\emph{iii}) available bandwidth is consumed to upload the checkpoint image file to reliable storage, which slows down the message passing used for computing. 
We perform the estimation online and do not rely on historical data of the runtime required without checkpointing, as the P2P network environment can be highly dynamic and the peers used for each run can be totally different; it is also not practical to expect a long running job to finish without any peer failure during execution. We estimate the checkpoint overhead $V$ by observing both the average CPU usage of the parallel application process and the number of messages exchanged for computing purposes. To estimate the value of $V$, we first run the parallel application without checkpointing for $t$ minutes, recording the average CPU usage $P_1$ and the total number of incoming and outgoing messages $M_1$. We then switch on checkpointing with a relatively small interval and run for another $t$ minutes. If the number of checkpoints performed is $y$, and the average CPU usage and the number of exchanged messages on the local peer are $P_2$ and $M_2$ respectively, then we estimate $V$ from the CPU usage and network I/O statistics as: \begin{equation} V={\frac{\left(P_1-P_2\right)\left(M_1-M_2\right) t}{2 P_1 M_1 y}} \end{equation} \subsubsection{Checkpoint Image Download Time $T_d$} After the job is submitted and we have the estimated value of $V$, we set the initial value of $T_d$ to be the same as $V$. When the first checkpoint image has been captured and uploaded using the estimated $\lambda$, we download the checkpoint image from the P2P network while still keeping the parallel processing running in the background, and the download time becomes a more accurate estimate of $T_d$. If any restart is performed during the execution, then its download time is used as $T_d$. This reflects our aim of predicting the optimized value of $\lambda$ from the most recent network conditions. \subsubsection{Global Versus Local Estimation} The above $\mu$, $V$ and $T_d$ are computed according to the local knowledge on each peer and thus reflect the local peer's estimation. The estimation can be more accurate if these local estimations are combined, for example by taking their average, to form better global estimations. For instance, since the coordinated global checkpoint~\cite{ChandyL85} is used in our system, in which all involved peers checkpoint the status of the job once any peer issues the checkpoint command, if every peer issued such commands according to its own value of $\lambda$, which in turn depends on its own estimation of $\mu$, then the global checkpoint rate of the system would be dictated by whichever peer's estimate of $\mu$ yields the most frequent checkpoints. In this case, an average $\mu$ based on at least a few peers' estimations can be more accurate. We make each peer periodically piggyback its own most recent estimates of $\mu$, $V$ and $T_d$ on the messages used for parallel computing message passing which are sent to other peers. On receiving such piggybacked values, a peer can use them to calculate the global estimates of $\mu$, $V$ and $T_d$ as the average of the values estimated by different peers. This approach does not increase the communication cost much, because no extra message is required; only the length of a few messages is slightly increased to carry the attached locally estimated values. \subsection{Runtime Utilization Based Adaptive Scheme} \label{model} In our scheme, we define the notion of \emph{average cycle utilization}, $U$, which is the fraction of time in a cycle time, $1/\lambda$, that is spent doing useful computation. 
The remaining time in the cycle consists of the overheads introduced by checkpointing and the costs of restarting. In basic terms: \begin{equation} U =\frac{\frac{1}{\lambda}-C}{\frac{1}{\lambda}}=1-\lambda\, C \label{u-eq} \end{equation} where $C$ is the average overhead and failure cost per cycle. We use $U$ as an approximate utilization of the system. In this simple case, if the overheads exceed the cycle time then the system operates at zero percent efficiency. The value of $C$ is computed from a predicted failure rate and the various overheads. To explain the model that we use for the impact of failure, we first consider a simplified model in which only a single peer is involved. \subsubsection{Single Peer Model} In this setting, the peer has a failure rate of $\mu$ and uses checkpointing and restarting with the costs $V$, $T_{wc}$ and $T_d$. Because the peer failure is modeled as an exponential distribution~\cite{tianjing07, GhinitaT06}, the probability density for the job (i.e.~the peer) to fail at time $t$ is: \begin{equation} P_{f}(t)=\mu e^{- \mu t} \label{eq-pf} \end{equation} which is the probability density function of the exponential distribution. As the computation result between the last checkpoint and the failure is wasted when the job has to roll back to the last checkpoint for restart, the expected wasted computation time for each failure is given by: \begin{equation} T_{wc}=\sum_{i=0}^\infty{ \int_{\frac{i}{\lambda}}^{\frac{i+1}{\lambda}}{P_f(t) \left(t-\frac{i}{\lambda}\right)} dt} =\frac{1}{\mu}-\frac{1}{e^{\frac{\mu}{\lambda}}-1}\, \frac{1}{\lambda}=\frac{1}{\mu} - \overline{c}\frac{1}{\lambda} \end{equation} where $\overline{c}$ is the average number of fault free running cycles per expected failure, i.e.~how many checkpoint operations are expected to succeed on average before each failure. An alternative way to derive $\overline{c}$ is: \begin{equation} \overline{c}=\sum_{i=0}^\infty{i \int_{\frac{i}{\lambda}}^{\frac{i+1}{\lambda}}{P_f(t)}\;dt}= \frac{1}{e^{\frac{\mu}{\lambda}}-1} \label{cbar} \end{equation} \subsubsection{Multiple Peer Model} In a multi-peer environment, $k$ peers work together on the same job, and they checkpoint and restart in a coordinated manner according to the Chandy-Lamport algorithm~\cite{ChandyL85} to counter possible failures. In this setting, the job fails and a restart must be issued once any one of the $k$ peers fails. Since the failure rate of each peer is $\mu$, the failure rate of the job is $k\,\mu$. The above \refeq{eq-pf} to \refeq{cbar} can be trivially modified to describe the multi-peer environment: \begin{equation} P_{f}'(t)=k \mu e^{- \mu k t} \label{eq-pfm} \end{equation} \begin{equation} T_{wc}'=\sum_{i=0}^\infty{ \int_{\frac{i}{\lambda}}^{\frac{i+1}{\lambda}}{P_f'(t) \left(t-\frac{i}{\lambda}\right)} dt} =\frac{1}{k \mu}-\frac{1}{e^{\frac{\mu k}{\lambda}}-1} \frac{1}{\lambda} \\ \end{equation} Similarly, $\overline{c}'=\frac{1}{e^{\frac{k\,\mu}{\lambda}}-1}$. 
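As a quick sanity check on the closed form for $\overline{c}$ (standard geometric series algebra, included here only for completeness), the per-cycle failure probabilities telescope:
\begin{equation*}
\overline{c}=\sum_{i=0}^\infty i\left(e^{-\frac{\mu i}{\lambda}}-e^{-\frac{\mu(i+1)}{\lambda}}\right)=\sum_{i=1}^\infty e^{-\frac{\mu i}{\lambda}}=\frac{e^{-\mu/\lambda}}{1-e^{-\mu/\lambda}}=\frac{1}{e^{\frac{\mu}{\lambda}}-1},
\end{equation*}
since $\int_{i/\lambda}^{(i+1)/\lambda}P_f(t)\,dt=e^{-\mu i/\lambda}-e^{-\mu(i+1)/\lambda}$; replacing $\mu$ by $k\mu$ gives $\overline{c}'$ in the multi-peer case.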
\subsubsection{Average Cycle Utilization} We formulate the average cycle utilization from the total overheads: \begin{equation} C=V+\frac{\left(T_{wc}'+T_d\right)}{\overline{c}'} \end{equation} \begin{equation} U=\begin{cases} 1-C\,\lambda & \text{if $C\,\lambda<1$ },\\ 0& \text{if $C\,\lambda>1$.} \end{cases} \label{eq-u} \end{equation} The desired $\lambda$, at which $U$ peaks, satisfies $\dfrac{dU}{d\lambda}=0$ and can be computed as: \begin{equation*} \lambda=\frac{k \mu}{W\left[\left(V k \mu- T_d k \mu - 1\right)\left(T_d k \mu+1\right)^{-1} e^{-1}\right]+1}, \end{equation*} where $W\left(\cdot\right)$ is the Lambert W function. \refeq{eq-u} can be used to judge whether the number of peers used for the parallel processing job is reasonable (i.e.~whether the job can at least progress) given the current network conditions. According to the definition of the \emph{utilization}, a positive $U$ means the job is at least still progressing, while a zero value means that no significant progress of the parallel process can be made. When the estimated $\mu$, $V$, $T_d$ and the above optimal $\lambda$, together with the user specified job parameter $k$, lead to $U=0$ in \refeq{eq-u}, this suggests that the number of peers used for the job is too large. \input{evaluation} \section{Conclusion and Future Work} The role of Peer-to-Peer based parallel computing in future grid systems is becoming more important as we consider large scale Volunteer Computing that employs work flow based grid applications. Given a highly dynamic environment like current P2P networks, where most peers stay online only for hours and other network conditions keep changing as well, it is complicated and inefficient for users to choose suitable system parameters like the checkpoint interval by hand. In this work we have proposed and evaluated an adaptive checkpoint scheme for parallel processing systems over Peer-to-Peer networks. The evaluation results show that our proposed approach, which is based on network condition estimation, is almost always better than the naive fixed checkpoint interval approach in terms of reduced runtime. The results of our work presented in this paper can potentially be used in the next generation of Peer-to-Peer based parallel processing systems to effectively handle churn events and provide robustness for the system. As part of the future work, we are going to test this scheme in real deployed P2P message passing systems over the Internet. It would also be interesting to study both the possibility and the feasibility of combining historical logs with real time observations of network conditions to predict the parameters of the running environment with higher accuracy. \bibliographystyle{abbrv} \section{Simulation Evaluation} \subsection{Setup for the Evaluation} To evaluate our proposed adaptive checkpointing scheme we provide a performance comparison using simulation. We extended the P2P simulator used in~\cite{estimation} to simulate the running of P2P based message passing programs under the effect of peer failure events. In this simulator a typical P2P network is simulated, where peers connect and disconnect according to an exponential distribution, each peer knows several other peers (neighbours), and peer departure events can be detected during each peer's stabilization~\cite{chord, mspastry}. The observed failure events can be used for network failure rate estimation as described in \refsec{failureestimation}. 
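The adaptive checkpoint decisions taken by the simulated peers follow the model of \refsec{model}. As a concrete, hedged illustration of how a peer turns its estimates into a checkpoint interval, the following sketch evaluates $U(\lambda)$ from that model and compares the closed form with a direct numerical maximization (the parameter values are illustrative assumptions, not measurements):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import lambertw

def cycle_utilization(lam, k, mu, V, Td):
    # U(lambda) for the multi-peer model; lam, mu in 1/s, V, Td in s
    kmu = k * mu
    cbar = 1.0 / np.expm1(kmu / lam)     # fault free cycles per failure
    Twc = 1.0 / kmu - cbar / lam         # wasted computation per failure
    C = V + (Twc + Td) / cbar            # average overhead per cycle
    return max(0.0, 1.0 - lam * C)

# assumed conditions: 16 peers, MTBF = 7200 s, V = 20 s, Td = 50 s
k, mu, V, Td = 16, 1.0 / 7200, 20.0, 50.0
kmu = k * mu

# closed form (principal branch of the Lambert W function)
arg = (V * kmu - Td * kmu - 1) / (Td * kmu + 1) * np.exp(-1)
lam_opt = kmu / (lambertw(arg).real + 1)

# numerical cross-check: maximize U(lambda) directly
res = minimize_scalar(lambda x: -cycle_utilization(x, k, mu, V, Td),
                      bounds=(1e-6, 1.0), method='bounded')
print("closed form interval: %.0f s" % (1 / lam_opt))
print("numerical interval:   %.0f s, U = %.3f" % (1 / res.x, -res.fun))
\end{verbatim}
For these assumed conditions both routes give a checkpoint interval of roughly two minutes, far shorter than the daily checkpoints adequate on clusters, consistent with the short peer session times discussed in \refsec{limitations}.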
Message passing jobs are simulated by specifying the number of peers to use and the required runtime in a fault free environment. When a job is submitted, a list of peers is chosen at random to run the message passing job, and the progress of the job is saved periodically, either at a fixed checkpoint interval or at the dynamically chosen intervals produced by our adaptive scheme. The status of the job is always rolled back to its previously saved checkpoint upon peer failure events. The peer failure rate, the overheads, the required runtime of the job in a fault free environment and the fixed checkpoint interval used in the naive approach can all be set as parameters of the simulation. We compare the performance of the naive fixed checkpoint interval approach and our adaptive checkpointing scheme under different network conditions, including different peer departure rates, various available bandwidths and different types of parallel processing programs. We choose to carry out the evaluation in such a simulated environment instead of in real-world deployed systems because \emph{(i)} to the best of our knowledge, there are currently no widely deployed P2P parallel processing systems that support both message passing and checkpointing, and \emph{(ii)} the simulated environment allows us to test our proposed scheme under different network conditions, which is essential for this work. The metric called \emph{relative runtime} is used intensively in this section to compare the performance of our proposed approach and the naive fixed checkpoint interval approach. The relative runtime is the percentage of the runtime using a fixed checkpoint interval relative to the runtime required by our adaptive scheme: \begin{equation} Relative\ Runtime = \frac{Runtime\ using\ fixed\ checkpoint\ interval}{Runtime\ using\ the\ adaptive\ interval} \times 100\% \end{equation} The proposed adaptive checkpoint scheme outperforms the fixed checkpoint interval for a given checkpoint rate if the relative runtime is larger than 100\%. The MTBF is used to represent the departure rate in this section for better readability; the departure rate is $\frac{1}{MTBF}$, as the failure events follow an exponential distribution. \subsection{Evaluation} In \reffig{ckptperformance}, the performance of the proposed adaptive scheme is compared with the naive fixed checkpoint interval approach under different departure rates. In this experiment, we set the checkpoint overhead to 20 seconds and the image download overhead to 50 seconds. The left chart of \reffig{ckptperformance} shows that our adaptive scheme outperforms the naive approach in all three network departure rate environments (MTBF=4000, 7200 and 14400 seconds), which are typical settings representing high, normal and low departure rates. These are the failure rates set for the experiments; each peer estimates the current peer failure rate, which usually carries a 10--15\% error as reported in~\cite{estimation}, for computing the optimized $\lambda$. The right chart of \reffig{ckptperformance} plots the results of experiments in which the departure rates double within 20 hours, starting from different initial departure rates. We choose a 20 hour doubling time for the failure rate because such dynamism is observed in the Overnet trace data; all other settings are the same as in the previous experiment. 
As we can see, our scheme again outperformed the naive approach in almost all cases, and it helps to ensure that jobs are eventually finished regardless of the network conditions, whereas an arbitrarily selected fixed checkpoint interval may cause jobs to run for an extremely long time. For example, as shown in the right chart of \reffig{ckptperformance}, compared with the adaptive checkpoint scheme it took 3 times the runtime to finish the job when the initial node departure rate was MTBF=7200 seconds and the checkpoint interval was set to 5 minutes, and it would take even longer if the checkpoint interval were set larger. The probability of the job failing between two consecutive checkpoints increases rapidly when the checkpoint interval is too long, and the job keeps rolling back to the same saved status again and again. Such a situation is handled by our scheme, as the checkpoint interval is always set according to the currently estimated network conditions. \begin{figure*} \begin{minipage}[b]{0.46\linewidth} \centering \includegraphics[width=6.0cm, angle=270]{fixed_interval_performance} \end{minipage} \begin{minipage}[b]{0.33\linewidth} \centering \includegraphics[width=6.0cm, angle=270]{fixed_interval_performance_dynamic_churn} \end{minipage} \caption{Performance Comparison Between the Adaptive Checkpoint and Fixed Intervals Approach.} \label{ckptperformance} \end{figure*} We also simulated the performance of our scheme under different checkpoint overheads and image download overheads in \reffig{varoverheads}. As explained earlier, the checkpoint overhead measures how much each checkpoint operation slows down the job. Different checkpoint overheads arise from different types of programs: while the checkpoint images are being uploaded onto the network, most of the upstream bandwidth of the desktops is consumed and the message passing used for computation slows down; thus programs whose processes communicate heavily with each other suffer larger overheads. The checkpoint image download overhead is mainly determined by the available download bandwidth and can be approximated as the time required by the slowest node used in the job to download the checkpoint image. We first fixed the image download overhead at 50 seconds and tested our scheme with different checkpoint overheads. The departure rate is MTBF=7200 seconds, which represents the typical network condition according to the real P2P trace data discussed earlier. The result is presented in the left chart of \reffig{varoverheads}. The right chart of \reffig{varoverheads} shows the results for a fixed checkpoint overhead and various image download overheads. Our scheme demonstrated good adaptiveness in these two experiments and achieved better performance than all tested fixed checkpoint intervals. 
\begin{figure*} \begin{minipage}[b]{0.46\linewidth} \centering \includegraphics[width=6.0cm, angle=270]{var_checkpoint_overhead} \end{minipage} \begin{minipage}[b]{0.33\linewidth} \centering \includegraphics[width=6.0cm, angle=270]{var_download_overhead} \end{minipage} \caption{Performance Comparison Under Different Overheads.} \label{varoverheads} \end{figure*} \subsection{Discussion} The experimental results of this work have shown that a better optimized checkpoint and rollback approach can be achieved by applying the proposed adaptive checkpoint scheme to set the checkpoint interval; the performance of running message passing jobs can be better guarded by balancing the overheads caused by performing the checkpoints against the overheads of actually restarting from a failure, which in turn is achieved by monitoring the P2P network conditions. The MTBF of each peer is only a few hours in the three deployed P2P networks discussed; thus, when several peers are used for a message passing process, the MTBF of this group of peers will be only around 5--10 minutes. We believe further measures can be taken to better handle such highly dynamic networks. For example, we can potentially combine process replication and checkpointing to reduce the overheads of restarting. In such an upgraded robustness design, jobs need to roll back to the previous known status only if all replicas of a process have failed, which happens less frequently and thus increases the effective MTBF of the job.
\section{Introduction} High temperature superconductivity has always attracted scientists working in different fields of physics and material science due to its extensive applications and extraordinary properties. A fundamental question of this topic is how to understand the origin of high temperature superconductivity~\cite{Norman}. Although a preliminary consensus has been reached with respect to the high-T$_c$ cuprates that strong correlation is indispensable for high temperature superconductivity~\cite{LeeNagaosaWen}, intensive debates have persisted since the discovery of the high-T$_c$ iron-based superconductors. In contrast to the strongly-correlated high-T$_c$ cuprates, magnetism and superconductivity in iron-based superconductors were originally proposed to be understandable from both the strong-coupling limit, where the spin exchange effect of localized electrons is emphasized, and the weak-coupling limit, where condensation of particle-hole excitations among the Fermi surfaces is highlighted~\cite{Wanglee,Paglione,Mazin_PhysicaC,Pickett,Mazin-Singh}. However, while the conventional weak-coupling theory of Fermi surface nesting~\cite{Eremin,bkfeas} is continuously being challenged as more experiments are performed~\cite{BaoFeTe,He-NaFeAs,DaiHu}, density functional theory calculations in combination with dynamical mean field theory do not support the existence of localized electrons in Fe $3d$ orbitals~\cite{YinHauleKotliar}. Recently, a compromise scenario for understanding the iron-based superconductors was proposed, where localized and itinerant electrons are assumed to coexist in Fe $3d$ orbitals~\cite{KouLiWeng,HacklVojta,YinLeeKu}. Although an angle-resolved photoemission spectroscopy study on A$_x$Fe$_{2-y}$Se$_2$ ($A$=K, Rb), corroborated by a slave-spin mean-field calculation, pointed to a possible observation of such a coexistence at finite temperature~\cite{OSMPexp}, it is still under debate experimentally whether the studied sample, or which part of the studied sample, is the parent compound of the superconducting state~\cite{Daggoto}. In other words, the detected coexistence may not be relevant to superconductivity. Another origin for magnetism in iron-based superconductors was proposed based on comparisons of densities of states between the nonmagnetic state and various magnetic states in FeTe and BaFe$_2$As$_2$ obtained from density functional theory calculations. It was concluded that the magnetic ground state is determined by all occupied states below the Fermi level, not by fermiology~\cite{JohannesMazin}. However, on one hand, a strong downshift in spectral weight due to spin polarization usually happens in all materials with large magnetic moments. On the other hand, as is well known, the magnetic moments in most iron-based compounds are overestimated by density functional theory calculations~\cite{OpahleZhang,ZhangOpahle}. Therefore, this scenario is also questionable. In this paper, by applying density functional theory calculations to various families of iron-based superconductors, we propose that the condensed particle-hole excitations away from the Fermi level, rather than those among the Fermi surfaces, are responsible for the distinct physical properties among different iron-based compounds. We further point out that the orbital degrees of freedom, inter-atomic magnetic interactions, and interlayer couplings have to be involved in order to correctly understand some anomalies in iron-based compounds.
We find that while strong correlation is not indispensable for a qualitative understanding of magnetism and superconductivity in iron-based materials, the electronic states both above and below the Fermi level are important. \section{Computational Details} \subsection{Density Functional Theory Calculations} We use experimentally-determined structures for all the compounds (see appendix~\ref{sec:app-one}), i.e.~the [1111] family LnOFePn with Ln=Sm, Nd, Ce, La and Pn=As, P, as well as SrFFeAs, the [111] family containing AFePn with A=Li, Na, the [11] family FeCh with Ch=Se and Te, and the [122] family including AeFe$_2$Pn$_2$ with Ae=Ca, Sr, Ba. We temporarily leave KFe$_2$Se$_2$ to a later study because the parent compound of its superconducting state is still under debate experimentally and the observed special magnetic states are always connected to different types of iron vacancies~\cite{Daggoto}. Both the paramagnetic and spin-polarized cases are calculated by the full-potential linearized augmented plane wave method as implemented in Wien2k~\cite{balaha} with the exchange correlation functional of Perdew, Burke and Ernzerhof (PBE). The results are consistent with those calculated within the local density approximation (LDA). We choose RK$_{max}$=7 and a total of 40000~k points in the Brillouin-zone integration. The open core approximation is employed when the f electrons are treated in the nonmagnetic states. In the LDA+U or the spin-polarized GGA+U calculations, the f electrons of Ce atoms are arranged ferromagnetically according to the experiments, and $U$ is chosen to be 6~eV~\cite{Ceatom}. Moreover, different $U$ leads to qualitatively the same results. Here the atomic limit double-counting correction is taken into account since the f electrons on Ce atoms are strongly correlated~\cite{balaha}. \subsection{momentum-dependent particle-hole excitations within constant matrix elements approximation} \label{sec:one} The condensation of the momentum-dependent particle-hole excitations can be quantified by the real part of the static bare susceptibility. Within the constant matrix elements approximation~\cite{Mazin-Singh}, the static bare susceptibility reads: \begin{eqnarray} \chi_{0} (q)&=& -\frac{1}{N} \sum_{k,\mu\nu}\frac{1}{E_{\nu}(k+q)-E_{\mu}(k)+i0^{+}} \nonumber \\ &&\times [f(E_{\nu}(k+q))-f(E_{\mu}(k))],\label{Eq:one} \end{eqnarray} where $\mu,\nu$ are the band indices, $q$ and $k$ are momentum vectors in the Brillouin zone, and $N$ is the number of Fe lattice sites. It is well known that a prominent condensation of particle-hole excitations at a certain wave vector $q$ in a disordered phase is a precursor for the appearance of the corresponding symmetry-breaking state as the temperature is lowered or the interaction is switched on. In most of the iron-based superconductors, a remarkable condensation present at the wave vector $(\pi,\pi)$ can account for the stripe-type antiferromagnetic order or superconductivity. (Here we refer to a unit cell containing $2$ Fe atoms.) More specifically, if the condensation at $(\pi,\pi)$ is strong enough, i.e., exceeds a threshold, stripe-type antiferromagnetic states tend to form. Otherwise, a tendency towards superconducting states mediated by short-range antiferromagnetic fluctuations appears as long as the condensation around $(\pi,\pi)$ prevails over the others. Without noticeable condensation, the disordered state remains.
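To make the procedure concrete, the following minimal Python sketch evaluates the real part of Eq.~(\ref{Eq:one}) on a hypothetical two-band tight-binding model on a square lattice; the dispersion and all parameters are purely illustrative and are not fits to any iron-based compound:

```python
import numpy as np

# Toy two-band dispersion on a 2D square lattice; hypothetical parameters,
# chosen only to illustrate the sum in Eq. (1), not any real compound.
def bands(kx, ky, t=1.0):
    e = -2.0 * t * (np.cos(kx) + np.cos(ky))
    return np.stack([e, -e + 0.5])          # electron-like and hole-like bands

def chi0(qx, qy, n=64, T=0.05, eta=1e-3):
    """Re chi_0(q) within the constant-matrix-element approximation, Eq. (1)."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    Ek, Ekq = bands(kx, ky), bands(kx + qx, ky + qy)
    f = lambda E: 1.0 / (np.exp(E / T) + 1.0)    # Fermi function, E_F = 0
    chi = 0.0
    for mu in range(2):                           # band index of |k>
        for nu in range(2):                       # band index of |k+q>
            num = f(Ekq[nu]) - f(Ek[mu])
            den = Ekq[nu] - Ek[mu] + 1j * eta     # i0+ regulator
            chi -= np.sum((num / den).real)       # overall minus sign of Eq. (1)
    return chi / (n * n)                          # 1/N normalization

# Compare the condensation at (pi,pi) with that at a generic small q:
print(chi0(np.pi, np.pi), chi0(0.1, 0.1))
```

A production calculation would of course use the DFT band energies on a dense three-dimensional k mesh, but the bookkeeping of band indices, Fermi factors, and the $i0^{+}$ regulator is the same.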
\subsection{orbitally resolved momentum-dependent particle-hole excitations} \label{sec:two} In order to calculate the orbitally resolved momentum-dependent particle-hole excitations, an effective tight-binding model, including all the iron $d$ orbitals and the anion $p$ orbitals, is derived via construction of Wannier orbitals~\cite{wannier1,wannier2} on a 11$\times$11$\times$11 Monkhorst-Pack k-point mesh. The disentanglement procedure of wannier90 is employed in order to achieve a perfect match between the electronic structures of the effective tight-binding model and those obtained from {\it ab initio} calculations. Here the subspace selection step is done by projection and symmetry-preserved Wannier functions are adopted~\cite{wannierRMP}. The condensation of the orbitally resolved momentum-dependent particle-hole excitations can be quantified by the real part of the static bare susceptibility of a multi-orbital system, defined as~\cite{graser}: \begin{eqnarray} \chi^{pr;st}_{0} (q)&=& -\frac{1}{N} \sum_{k,\mu\nu}\frac{a^{s}_{\mu}(k)a^{p*}_{\mu}(k)a^{r}_{\nu}(k+q)a^{t*}_{\nu}(k+q)}{E_{\nu}(k+q)-E_{\mu}(k)+i0^{+}} \nonumber \\ &&\times [f(E_{\nu}(k+q))-f(E_{\mu}(k))]\label{Eq:two} \end{eqnarray} where the matrix elements $a^{s}_{\mu}(k)=\langle s|\mu k \rangle$ connect the orbital and band spaces and are the components of the eigenvectors obtained from diagonalization of the effective tight-binding model. Here $f(E)$ is the Fermi distribution function, $p,r,s,t$ are the orbital indices, $\mu,\nu$ the band indices, $q$ and $k$ the momentum vectors in the Brillouin zone, and $N$ the number of Fe lattice sites. \section{Computational Results} In iron-based superconductors, the itinerant scenario states that Fermi surface nesting, i.e., particle-hole excitations among the Fermi surfaces condensed at $q=(\pi,\pi)$, determines the magnetism and superconductivity, while the localized picture emphasizes that strong correlations are indispensable. In the following, we will show that the physical properties of various families of iron-based superconductors can be qualitatively understood in the absence of strong correlations, provided that particle-hole excitations away from the Fermi level as well as orbital degrees of freedom and interatomic and interlayer couplings are properly taken into account. \subsection{Comparison of Particle-hole Excitations close to and away from the Fermi Level} \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{Figure1} \caption{The relative strength of particle-hole excitations calculated within the constant matrix elements approximation between $q=(\pi,\pi)$ and $(0,0)$ as a function of the energy window chosen around the Fermi level $E_F$ for the archetypal compounds FeTe, LiFeAs, and LaOFeAs. Here the energy window is defined as $[E_F-\Delta W,E_F+\Delta W]$. The condensation of particle-hole excitations is at $(0,0)$ for small energy windows but is at $(\pi,\pi)$ when the energy window is large enough, indicating the importance of particle-hole excitations away from the Fermi level.} \label{Fig:one} \end{figure} Fig.~\ref{Fig:one} shows the particle-hole excitations calculated within the constant matrix elements approximation~\cite{Mazin-Singh}~(see also Eq.~(\ref{Eq:one})) at wave vectors $(\pi,\pi)$ and $(0,0)$ for the representative compounds FeTe, LiFeAs, and LaOFeAs as a function of the width of the energy window chosen around the Fermi level, defined as $[E_F-\Delta W,E_F+\Delta W]$ where $E_F$ is the Fermi level and $2\Delta W$ is the energy window.
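In practice, restricting Eq.~(\ref{Eq:one}) to such a window is a matter of bookkeeping; one natural reading, adopted in the hedged sketch below (again on a hypothetical two-band toy model, not a real compound), is to keep only transitions whose initial and final states both lie inside $[E_F-\Delta W,E_F+\Delta W]$:

```python
import numpy as np

# Energy-window restriction of Eq. (1): keep a transition only if both states
# lie inside [E_F - dW, E_F + dW]. Toy two-band model, illustrative only.
def bands(kx, ky, t=1.0):
    e = -2.0 * t * (np.cos(kx) + np.cos(ky))
    return np.stack([e, -e + 0.5])

def chi0_window(qx, qy, dW, n=64, T=0.05, eta=1e-3):
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    Ek, Ekq = bands(kx, ky), bands(kx + qx, ky + qy)
    f = lambda E: 1.0 / (np.exp(E / T) + 1.0)
    chi = 0.0
    for mu in range(2):
        for nu in range(2):
            keep = (np.abs(Ek[mu]) < dW) & (np.abs(Ekq[nu]) < dW)  # the window
            term = (f(Ekq[nu]) - f(Ek[mu])) / (Ekq[nu] - Ek[mu] + 1j * eta)
            chi -= np.sum(np.where(keep, term.real, 0.0))
    return chi / (n * n)

# Sweep the half-width dW and watch where the spectral weight condenses:
for dW in (0.1, 0.5, 2.0, 5.0):
    print(dW, chi0_window(np.pi, np.pi, dW), chi0_window(0.1, 0.1, dW))
```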
Surprisingly, it is found that the particle-hole excitations are condensed at $(0,0)$ when the energy window is small. Only when the energy window is large enough are the particle-hole excitations condensed at $(\pi,\pi)$. This result suggests a new scenario in which both the stripe-type antiferromagnetic states and the pairing for superconductivity mediated by spin fluctuations at $(\pi,\pi)$ are dominated by the electronic states away from the Fermi level, in stark contrast to the weak-coupling theory of Fermi surface nesting~\cite{Wanglee,Paglione,Mazin_PhysicaC,Mazin-Singh,Eremin,bkfeas,BaoFeTe,He-NaFeAs,DaiHu} where only the particle-hole excitations among the Fermi surfaces are emphasized. Although our results for large energy windows are consistent with previous studies, it is reported here for the first time that the particle-hole excitations close to the Fermi level play an opposite role in deciding the magnetism and superconductivity. \subsection{Condensed Particle-hole Excitations away from the Fermi Level at $(\pi,\pi)$} We now show that the new scenario is applicable to the whole family of iron-based compounds even when strong correlation is completely absent. Since the nature of the condensation changes with the energy window one chooses, throughout this paper we use a large enough energy window such that all the bands derived from the $d$ and $p$ orbitals are involved. Fig.~\ref{Fig:two} shows the condensed particle-hole excitations at $(\pi,\pi)$ for various families of iron-based superconductors. Remarkably, it is found that the mechanism playing the dominant role in deciding the physical properties of the iron-based compounds is the condensed particle-hole excitations in momentum space, i.e., strong condensation of particle-hole excitations at $q=(\pi,\pi)$ leads to a stripe-type antiferromagnetic state while weak condensation results in a nonmagnetic phase. In the intermediate region, superconductivity appears. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{Figure2} \caption{Condensed particle-hole excitations within the constant matrix elements approximation at wave vector $(\pi,\pi)$ for various iron-based superconductors with a large enough energy window. NM, SC, and SAF denote the nonmagnetic, superconducting, and stripe-type antiferromagnetic states, respectively. Most of the iron-based superconductors can be classified into the above three states by the strength of the condensation of particle-hole excitations at $(\pi,\pi)$ except for FeTe, LaOFeP, CeOFeP, and BaFe$_2$P$_2$, which are indicated by the question marks. The compounds AeFe$_2$As$_2$ with Ae=Ca, Sr, and Ba are marked by triangles due to the fact that their magnetic moments are larger than that of LaOFeAs, while the instabilities are weaker. The vertical dotted lines are a visual aid.} \label{Fig:two} \end{figure} \begin{figure*}[htbp] \includegraphics[angle=-90,width=0.96\textwidth]{Figure3} \caption{Comparison of intra-orbital contributions to the particle-hole excitations among LaOFeAs, LaOFeP, and CeOFeP. (a), (b), (c) show five intra-orbital contributions along the path in momentum space from $\Gamma(0,0)$-$X(\pi,0)$-$M(\pi,\pi)$-$\Gamma(0,0)$ for LaOFeAs, LaOFeP, and CeOFeP, respectively. The dominating contributions from the d$_{xz}$ orbitals in the $q_x-q_y$ plane for LaOFeAs, LaOFeP, CeOFeP are presented in (d), (e), (f), respectively.
The dominating contribution in the $q_x-q_y$ plane from the d$_{x^2-y^2}$ orbital in LaOFeAs and those from the d$_{z^2}$ orbitals in LaOFeP, CeOFeP are exhibited in (g), (h), (i), respectively. The two-dimensional contour maps are shown at the base of the figures.} \label{Fig:three} \end{figure*} However, four compounds in Fig.~\ref{Fig:two} violate the overall trend of this scenario and have been labelled with question marks. Specifically, LaOFeP and CeOFeP are superconducting~\cite{laofep} and heavy fermionic~\cite{ceofep} systems, respectively, while Fe$_{1+x}$Te is antiferromagnetically-ordered with a unique double stripe~\cite{BaoFeTe}, all of which are inconsistent with the strong condensation at $q=(\pi,\pi)$ in these materials. Moreover, BaFe$_2$P$_2$ is non-superconducting and nonmagnetic~\cite{bafepnesting,bafep}, yet its condensation is slightly stronger than that of the superconductor LiFeP~\cite{lifep}. Furthermore, there is a quantitative problem present in Fig.~\ref{Fig:two}, marked by three triangles, which indicate that although the condensations are weaker in AeFe$_2$As$_2$ with Ae=Ca, Sr, Ba than in LaOFeAs, the observed magnetic moments are larger in AeFe$_2$As$_2$~\cite{laofeas,bafeas,cafeas,srfeas}. Nevertheless, as we will now show, all of the above anomalies can be simply explained. \subsection{Multi-orbital effects} We begin by explaining the qualitative anomalies. Whilst employing the constant matrix elements approximation does not qualitatively affect the results for the majority of the iron-based compounds, we find that it is necessary to drop this approximation and investigate the intra-orbital contributions to the particle-hole excitations~\cite{graser} (see also Eq.~(\ref{Eq:two})) in order to explain the qualitative anomalies. Such intra-orbital contributions to the particle-hole excitations have previously been found to play a dominant role in facilitating magnetism and superconductivity~\cite{Ding2013,leeshim,Kuroki}, compared to the inter-orbital counterparts. \subsubsection{LaOFeP: competitions among different orbitals} First we explain why the strong condensation of particle-hole excitations at $(\pi,\pi)$ in superconducting LaOFeP shown in Fig.~\ref{Fig:two} does not result in a stripe-type antiferromagnetic state. From Fig.~\ref{Fig:three} (a), (b), it is found that while the particle-hole excitations in the d$_{xz/yz}$ and d$_{x^2-y^2}$ orbitals are universally condensed around $(\pi,\pi)$ in the archetypal compound LaOFeAs, indicating a single instability towards the stripe-type antiferromagnetic state, they are separately condensed at $(\pi,\pi)$ in the d$_{xz/yz}$ orbitals and at $(0,0)$ in the d$_{z^2}$ orbitals in LaOFeP, suggesting competing tendencies towards either a stripe-type antiferromagnetic state or other magnetic states with $q=(0,0)$ such as N\'{e}el and ferromagnetically-ordered states. Here x, y, z are along the a, b, c directions, respectively, of the original unit cell with two iron atoms. Contributions from other orbitals are negligible compared to the above-mentioned orbitals. Fig.~\ref{Fig:three} (d)-(i) further demonstrate that the particle-hole excitations are solely condensed either around $(\pi,\pi)$ or $(0,0)$ within the whole Brillouin zone in the dominating orbitals. It should be noted that the dominant contribution from the d$_{yz}$ orbital within the whole Brillouin zone is not shown since it can be reproduced by interchanging the $q_x$ and $q_y$ axes in Fig.~\ref{Fig:three} (d)-(f).
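The intra-orbital components $\chi^{pp;pp}_{0}(q)$ discussed above follow from Eq.~(\ref{Eq:two}) once the eigenvectors $a^{s}_{\mu}(k)$ are available. The sketch below illustrates the matrix-element bookkeeping on a hypothetical two-orbital Bloch Hamiltonian; the actual calculation uses the full Wannier-derived $d$-$p$ model, and the hoppings here are illustrative only:

```python
import numpy as np

# Hypothetical two-orbital Bloch Hamiltonian; hoppings are illustrative only.
def hk(kx, ky, t=1.0, tp=0.4):
    h11 = -2.0 * t * np.cos(kx)
    h22 = -2.0 * t * np.cos(ky)
    h12 = -4.0 * tp * np.sin(kx) * np.sin(ky)     # inter-orbital mixing
    return np.array([[h11, h12], [h12, h22]])

def chi0_intra(qx, qy, p=0, n=48, T=0.05, eta=1e-3):
    """Intra-orbital chi^{pp;pp}_0(q) of Eq. (2) for orbital p."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    f = lambda E: 1.0 / (np.exp(E / T) + 1.0)
    chi = 0.0
    for kx in ks:
        for ky in ks:
            Ek, Uk = np.linalg.eigh(hk(kx, ky))            # a^s_mu(k) = Uk[s, mu]
            Ekq, Ukq = np.linalg.eigh(hk(kx + qx, ky + qy))
            for mu in range(2):
                for nu in range(2):
                    amp = (Uk[p, mu] * np.conj(Uk[p, mu])
                           * Ukq[p, nu] * np.conj(Ukq[p, nu]))
                    num = f(Ekq[nu]) - f(Ek[mu])
                    den = Ekq[nu] - Ek[mu] + 1j * eta
                    chi -= (amp * num / den).real
    return chi / (n * n)

print(chi0_intra(np.pi, np.pi), chi0_intra(0.1, 0.1))
```

Only the matrix elements distinguish Eq.~(\ref{Eq:two}) from Eq.~(\ref{Eq:one}); in the real calculation the orbital index $p$ runs over all Fe $d$ and anion $p$ Wannier orbitals.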
In order to reveal the competition between the tendencies towards different magnetically ordered states, we perform a mean-field calculation based on a multi-orbital Hubbard model where all the d orbitals from both Fe atoms in the unit cell are involved (see appendix~\ref{sec:app-two}). The tight-binding parameters are obtained through construction of Wannier orbitals as implemented in Wannier90~\cite{wannier1,wannier2}. A Hund's rule coupling of $J=0.15U$ is used for the calculations according to constrained random phase approximation calculations~\cite{CRPA}. Fig.~\ref{Fig:four} presents the evolution of the ground state magnetic moments in LaOFeP and LaOFeAs as the intra-orbital Coulomb repulsion in the d$_{xz/yz}$ and d$_{z^2}$ orbitals is varied individually. Since two competing condensations coexist in LaOFeP, it is expected that turning on the interaction and slightly increasing the intra-orbital Coulomb repulsion in the d$_{z^2}$ orbital alone will favor ferromagnetic or N\'{e}el ordered states according to the condensation at $q=(0,0)$. On the other hand, it is expected that the stripe-type antiferromagnetic state will be more strongly stabilized if the intra-orbital Coulomb repulsion in the d$_{xz/yz}$ orbitals is independently increased, due to the condensation at $q=(\pi,\pi)$. This is indeed the case, as shown in Fig.~\ref{Fig:four} for $U=1.7$~eV, indicating that LaOFeP is located in proximity to the quantum critical region where magnetic orders with wave vectors $q=(0,0)$ and $q=(\pi,\pi)$ both vanish due to their mutual competition. Such competition can be viewed as an effective magnetic frustration due to itinerant electrons. The competition between the magnetically-ordered states with $q=(0,0)$ and $q=(\pi,\pi)$ remains if $U$ is changed. However, in the archetypal compound LaOFeAs, the stripe-type antiferromagnetic state always prevails over the other magnetic states since the dominating intra-orbital particle-hole excitations are all condensed at $(\pi,\pi)$. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{Figure4} \caption{Evolution of the ground state magnetic moments in LaOFeP and LaOFeAs as the intra-orbital Coulomb repulsion is slightly increased in the d$_{xz/yz}$ or d$_{z^2}$ orbital separately. Here a ten-orbital Hubbard model is constructed based on Wannier orbitals and $J/U=0.15$ is used according to the constrained random phase approximation calculations~\cite{CRPA}. Since our calculations are based on the mean-field approximation, where quantum fluctuations are completely ignored, a comparatively smaller value of $U=1.7$~eV is used. It is shown that LaOFeP is located at a critical region, as schematically indicated by a grey dome. The competition between the magnetically-ordered states with $q=(0,0)$ and $q=(\pi,\pi)$ remains if $U$ is changed.} \label{Fig:four} \end{figure} \subsubsection{CeOFeP: effect of interatomic coupling} Fig.~\ref{Fig:three} (b), (c) show that the momentum-dependent particle-hole excitations in CeOFeP are similar to those in LaOFeP when the f electrons of the Ce atoms are treated as nonmagnetic core electrons, implying that CeOFeP should also be a superconductor. However, CeOFeP is a heavy-fermion metal. This is due to the fact that the f electrons of the Ce atoms are not in fact nonmagnetic core electrons but are strongly coupled with the d electrons of the Fe atoms, which suppresses the antiferromagnetic fluctuations and consequently the tendency towards superconductivity.
This can be revealed by comparisons of total energies among different magnetic states based on DFT calculations with an LDA+U functional applied to the f electrons of the Ce atoms. It is found that if the spins of the f electrons are unpolarized, the stripe-type antiferromagnetic state is the ground state. However, if the spins of the f electrons are arranged ferromagnetically (as indicated experimentally~\cite{ceofep}) in the LDA+U calculations, the weakly-ferromagnetic solution has the lowest total energy, indicating that the couplings between the f electrons of Ce atoms and the d electrons of Fe atoms strongly affect the nature of the magnetic fluctuations in the Fe plane and therefore lead to a non-superconducting state. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{Figure5} \caption{Intra-orbital contributions to the particle-hole excitations. Five intra-orbital contributions along the path in momentum space from $\Gamma(0,0)$-$X(\pi,0)$-$M(\pi,\pi)$-$\Gamma(0,0)$ are shown for BaFe$_2$P$_2$ and LiFeP in (a) and (b), respectively.} \label{Fig:five} \end{figure} \subsubsection{BaFe$_2$P$_2$: suppressed condensations of intra-orbital particle-hole excitations} Next, we show that the inclusion of orbital degrees of freedom is enough to explain why BaFe$_2$P$_2$ is not a superconductor even though the condensation of particle-hole excitations at $q=(\pi,\pi)$ calculated within the constant matrix elements approximation is stronger than that in the superconductor LiFeP. Fig.~\ref{Fig:five} shows the intra-orbital contributions to the particle-hole excitations in BaFe$_2$P$_2$ and LiFeP. In contrast to LiFeP, it is found that the condensation of particle-hole excitations in the d$_{x^2-y^2}$ orbital at $q=(\pi,\pi)$ vanishes in BaFe$_2$P$_2$. Only weak condensation in the d$_{xz/yz}$ orbitals remains at $q=(\pi,\pi)$, which is unlikely to be sufficient to support the appearance of either superconductivity or magnetism in BaFe$_2$P$_2$. Based on this scenario, we predict within the rigid-band approximation that K-doped BaFe$_2$P$_2$ can be a new candidate for an iron-based superconductor, since the calculated intra-orbital contribution to the particle-hole excitations from the d$_{x^2-y^2}$ orbital at $q=(\pi,\pi)$ in hole-doped BaFe$_2$P$_2$ becomes as large as that in LiFeP when the hole concentration is around $0.25-0.3$~holes/Fe. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{Figure6} \caption{The relative strength of intra-orbital contributions to the particle-hole excitations as a function of the energy window chosen around the Fermi level $E_F$ for FeTe, LiFeAs, and LaOFeAs. The particle-hole excitations become antiferromagnetic and the relative strength saturates only when the energy window is large enough, $\Delta W > 200-600$~meV, indicating that electronic states away from the Fermi level play the dominant role in determining the physical properties of iron-based superconductors.} \label{Fig:six} \end{figure} \subsubsection{Multi-orbital effect on the relative strength of particle-hole excitations close to and away from the Fermi Level} It is then interesting to investigate the effect of the multi-orbital physics on the nature of the condensed particle-hole excitations as a function of the width of the energy window chosen around the Fermi level. It is shown in Fig.~\ref{Fig:six} that in all the compounds we studied, the relative magnitudes of the condensation become saturated only when the energy window is large enough, at around $200-600$~meV.
However, in the vicinity of the Fermi level, i.e., when the energy window is less than about $50$~meV, the particle-hole excitations in all cases are condensed at $(0,0)$ rather than antiferromagnetically at $(\pi,\pi)$ or $(\pi,0)$, which is qualitatively consistent with the results from the constant matrix elements approximation shown in Fig.~\ref{Fig:one}. \subsubsection{Fe$_{1+x}$Te: effect of excess Fe} Moreover, a proposal has already been made to understand the unique double-stripe antiferromagnetism in Fe$_{1+x}$Te based on the scenario of condensed particle-hole excitations~\cite{Ding2013}. The role of the excess Fe in the interstitial was emphasized, since it contributes not only excess electrons to the in-plane Fe, which enhances the tendency towards the double-stripe antiferromagnetic state, but also a magnetic ion strongly coupled with the in-plane Fe~\cite{Ding2013}, which further suppresses other magnetic instabilities~\cite{Liunmat}. Note that this proposal is valid irrespective of the approximations one chooses, while other theories depend strongly on the approximations, e.g., LDA and GGA lead to contradictory conclusions~\cite{Ding2013}. \begin{table} \caption{Inter-layer exchange couplings of various iron-based superconductors. Here the inter-layer exchange couplings $J=\Delta E= E_{SAF,II}-E_{SAF,I}$, where (SAF,II) denotes the intra-layer stripe-type antiferromagnetic state with inter-layer antiferromagnetic spin arrangement and (SAF,I) refers to the intra-layer stripe-type antiferromagnetic state with inter-layer ferromagnetic spin arrangement, are calculated within both the local density approximation (LDA) and the generalized gradient approximation (GGA), in units of meV/Fe.} \label{Tab:one} \begin{ruledtabular} \begin{tabular}{@{}ccccccc@{}} & $\Delta E_{LDA}$ & $\Delta E_{GGA}$ \\\hline\hline CaFe$_2$As$_2$ & -25.19 & -32.55 \\ SrFe$_2$As$_2$ & -10.12 & -13.03 \\ BaFe$_2$As$_2$ & -2.90 & -4.39 \\ NaFeAs & -0.72 & -0.91 \\ LaOFeAs & -0.02 & -0.04 \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Importance of interlayer coupling} Finally, we explain the reason behind the quantitative anomalies illustrated in Fig.~\ref{Fig:two}. We emphasize that it is the inter-layer couplings, ignored in most cases, that are responsible for the apparent inconsistency of weaker condensations but larger magnetic moments in AeFe$_2$As$_2$ with Ae=Ca, Sr, Ba, in comparison to LaOFeAs. We point out that although all the iron-based superconductors are layered materials, the band structures of the AeFe$_2$As$_2$ compounds are more three-dimensional than that of LaOFeAs. As is well known, the higher the dimensionality, the weaker the fluctuations that suppress the ordered state. This reasoning can be further corroborated by investigating the strength of the inter-layer exchange couplings in different families of iron-based superconductors. As shown in Table~\ref{Tab:one}, the inter-layer couplings in the AeFe$_2$As$_2$ compounds are at least one order of magnitude larger than those in the other families, such as the [111] and [1111] compounds. Therefore, although the condensation is weaker in AeFe$_2$As$_2$ than in LaOFeAs, the stronger inter-layer couplings can suppress the fluctuations and hence enhance the magnetically-ordered state.
\section{Conclusion} The origin of magnetism and superconductivity in all families of iron-based superconductors, except KFe$_2$Se$_2$, whose parent compound is still under debate, can be qualitatively understood from the scenario of condensed particle-hole excitations away from, but not far from, the Fermi level in the absence of strong correlations. This indicates that strong correlation may not play a crucial role in determining the physics of iron-based superconductors, in contrast to the high-T$_c$ cuprates. However, the scenario also has no relation to the conventional weak-coupling theory of Fermi surface nesting. The particle-hole excitations within $E_F \pm 200 \sim 600$~meV, rather than the Fermi surface or the one-particle states below the Fermi level, determine the physical properties of iron-based superconductors. The orbital degrees of freedom and interlayer couplings have to be involved in order to correctly understand some anomalies in iron-based compounds. The competing tendencies towards different magnetic states coexisting in different orbitals are the itinerant analog of magnetic frustration, and the resulting proximity to an antiferromagnetic quantum critical point is responsible for the superconductivity in the iron-based superconductors with nonmagnetic parent states, such as LaOFeP. Our results have the broad implication that a new theory, rather than an extension of conventional single-band theory, is required if one wants to correctly understand real materials with multiple active bands crossing the Fermi level. \section{Acknowledgement} This work is supported by the National Natural Science Foundation of China (No. 11174219), the Program for New Century Excellent Talents in University (NCET-13-0428), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110072110044), the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
\section{Introduction} It is believed that parity-odd domains with chiral imbalance are produced in the finite temperature quark-gluon plasma (QGP). Their presence can be detected via the axial anomaly as the chiral magnetic effect (CME) \cite{Kharzeev:2007jp,Kharzeev:2007tn,Kharzeev:2004ey,Fukushima:2008xe} and the chiral magnetic wave (CMW) \cite{Kharzeev:2010gd,Burnier:2011bf} in heavy ion collisions; see \cite{Kharzeev:2015znc,Liao:2014ava,Huang:2015oca} for recent reviews. The former leads to the generation of a vector current along the direction of an external magnetic field: \begin{align} \vec{j}_V=\frac{N_ce}{2\pi^2}\m_5 \vec{B}, \end{align} where $\m_5$ is the axial chemical potential characterizing the chiral imbalance. The latter leads to the propagation of axial and vector charges along the direction of the external magnetic field. Analogous effects exist when the magnetic field is replaced by the vorticity of the QGP \cite{Son:2009tf,Jiang:2015cva}. These effects have been intensively searched for in heavy ion collision experiments in recent years \cite{Adamczyk:2014mzf,Abelev:2009ac,Abelev:2012pa}. Theoretical descriptions of the CME and CMW have been developed in different frameworks, including hydrodynamics \cite{Son:2009tf,Kharzeev:2011ds,Megias:2013joa,Jensen:2013vta,Sadofyev:2010pr} and kinetic theory \cite{Pu:2010as,Chen:2012ca,Stephanov:2012ki,Stephanov:2014dma,Son:2012wh,Son:2012zy,Wu:2016dam}, among others. Most frameworks assume massless quarks; see exceptions for example in \cite{Chen:2013iga,Kirilin:2013fqa}. While it is known that finite quark mass does not modify the CME, we do expect quark mass to leave imprints on the dynamics of the axial charge. Naively, if the mass of one quark flavor is much larger than the temperature of the QGP, that quark flavor decouples from the axial current. We would like to ask quantitative questions about the effect of mass on the dynamics of axial charge. This is relevant in reality because the mass of the strange quark is comparable to the temperature of the QGP created at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). With the inclusion of the mass term, the axial anomaly equation reads \begin{align}\label{anomaly_m} \pd_\m j_5^\m=2im\bar{\psi}\g^5\psi-\frac{e^2}{16\pi^2}\e^{\m\n\r\s}F_{\m\n}F_{\r\s}-\frac{g^2}{16\pi^2}\tr \e^{\m\n\r\s}G_{\m\n}G_{\r\s}, \end{align} where the three terms on the right hand side (RHS) correspond to the mass term, the QED anomaly term and the QCD anomaly term, respectively. \eqref{anomaly_m} is written for one flavor of quark with mass $m$. All three terms lead to modifications of the axial charge dynamics. The effect of the QED anomaly term has been extensively studied in the references mentioned above. The effect of the QCD anomaly was studied recently \cite{Fukushima:2010vw,Jimenez-Alba:2014iia,Iatrakis:2014dka,Iatrakis:2015fma}. In this work, we focus on the effect of the mass term. On one hand, finite quark mass explicitly breaks the axial symmetry, offering a mechanism of axial charge generation. We find that the mass operator diffuses at low frequency in the same way as the Chern-Simons (CS) number. The diffusion of the CS number is known to generate axial charge; the same is true for the mass operator. We calculate the diffusion rate of the mass term as a measure of axial charge generation. We also define a dynamical susceptibility via the CME, and find it to be divergent in the low frequency limit. We explain the common physical reason behind the diffusive mass operator and the divergent susceptibility. On the other hand, finite quark mass also leads to axial charge dissipation.
The dissipation effect was studied recently in \cite{Jimenez-Alba:2015awa,Sun:2016gpy} in a relaxation time approximation. We discuss axial charge dissipation in an indirect way: we set up parallel electric and magnetic fields and measure the rate of dissipation through the mass term. The situation is further complicated by the existence of a reservoir of adjoint matter, to which axial charge can dissipate. By taking into account the additional loss rate, we find that the axial charge dissipates entirely in the long time limit, which is consistent with the relaxation time approximation. We study these effects as a function of both quark mass and external magnetic field using a holographic model. The paper is organized as follows: In Sec.~II we give a self-contained review of the holographic model. In Sec.~III we separately discuss the mass effect on axial charge generation and dissipation, which we coin the mass diffusion rate and the mass dissipation effect, respectively. We summarize the results in Sec.~IV. We collect the technical details of obtaining the phase diagram and the hydrodynamic solutions in two appendices. \section{A quick review of the model} \subsection{The D3/D7 background} We use the D3/D7 model to study the effect of finite quark mass. The background is sourced by $N_c$ D3 branes. The worldvolume theory of the D3 branes is ${\cal N}=4$ supersymmetric Yang-Mills (SYM) theory. In addition, there are $N_f$ D7 branes in the background. The open strings stretching between the D3 and D7 branes are dual to an ${\cal N}=2$ hypermultiplet. The ${\cal N}=4$ and ${\cal N}=2$ fields are in the adjoint and fundamental representations of the $SU(N_c)$ group, respectively. By analogy with QCD, we will loosely refer to the ${\cal N}=4$ and ${\cal N}=2$ fields as gluons and quarks, respectively. A detailed account of the field content can be found in \cite{Hoyos:2011us}. The ${\cal N}=4$ theory has an $SO(6)_R$ global symmetry, which is broken by the ${\cal N}=2$ theory to $SO(4)\times U(1)_R$. As we will see, the $U(1)_R$ symmetry is anomalous. We will identify it with the axial symmetry. We start with the finite temperature black hole background of D3 branes, following the notations of \cite{Mateos:2006nu}: \begin{align}\label{d3_metric} ds^2&=g_{tt}dt^2+g_{xx}d\vec{x}^2+g_{\r\r}d\r^2+g_{\th\th}d\th^2+g_{\ph\ph}d\ph^2+g_{SS}d\O_3^2, \no &=-\frac{r_0^2}{2}\frac{f^2}{H}\r^2dt^2+\frac{r_0^2}{2}H\r^2dx^2+\frac{d\r^2}{\r^2}+d\th^2+\sin^2\th d\ph^2+\cos^2\th d\O_3^2. \end{align} where \begin{align} f=1-\frac{1}{\r^4},\quad H=1+\frac{1}{\r^4}. \end{align} The temperature is fixed by $T=r_0/\pi$. Note that we have factorized $S_5$ into $S_3$ and two additional angular coordinates $\th$ and $\ph$, which makes the breaking of the global symmetry $SO(6)_R\to SO(4)\times U(1)_R$ manifest. There is a nontrivial background Ramond-Ramond form \begin{align} C_4=\(\frac{r_0^2}{2}\r^2H\)^2dt\wg dx_1\wg dx_2\wg dx_3-\cos^4\th d\ph\wg d\O_3. \end{align} In the probe limit $N_f/N_c\ll 1$, the D7 branes do not backreact on the background of the D3 branes. This corresponds to the quenched limit of QCD. The D3 and D7 branes occupy the following dimensions. \be \label{table:d3d7} \begin{array}{c|cccccccccc} & x_0 & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & x_9\\ \hline \mbox{D3} & \times & \times & \times & \times & & & & & & \\ \mbox{D7} & \times & \times & \times & \times & \times & \times & \times & \times & & \\ \end{array} \ee The D3 and D7 branes are separated in the $x_8$-$x_9$ plane.
Using translation symmetry, we put the D3 branes at the origin of the plane and parameterize the position of the D7 branes by the radius $\r\cos\th$ and the polar angle $\ph$. The D7 branes have rotational symmetry in the $x_8$-$x_9$ plane, corresponding to the $U(1)_R$ symmetry in the dual field theory. We use the symmetry to choose $\ph=0$. The embedding function $\th(\r)$ of the D7 branes in the D3 background is determined by minimizing the action, which includes a DBI term and a WZ term \begin{align}\label{S_bare} &S_{D7}=S_{DBI}+S_{WZ}, \no &S_{DBI}=-N_fT_{D7}\int d^8\x\sqrt{-\text{det}\(g_{ab}+(2\pi\a') \tilde{F}_{ab}\)}, \no &S_{WZ}=\frac{1}{2}N_fT_{D7}(2\pi\a')^2\int P[C_4]\wg \tilde{F}\wg \tilde{F}. \end{align} Here $T_{D7}$ is the D7 brane tension. $g_{ab}$ and $\tilde{F}_{ab}$ are the induced metric and worldvolume field strength, respectively. Defining \begin{align} &F_{ab}=(2\pi\a')\tilde{F}_{ab}, \no &\cN=N_fT_{D7}2\pi^2=\frac{N_fN_c}{(2\pi)^4}, \end{align} we simplify the action to \begin{align}\label{S_redef} &S_{DBI}=-\frac{\cN}{2\pi^2}\int d^8\x\sqrt{-\text{det}\(g_{ab}+{F}_{ab}\)}, \no &S_{WZ}=\frac{1}{4\pi^2}\cN\int P[C_4]\wg F\wg F. \end{align} The mass of the quark is realized as the separation of the D7 branes from the D3 branes at infinity. Explicitly, the mass $M$ is determined from the asymptotic behavior of $\th$: \begin{align}\label{mc} \sin\th=\frac{m}{\r}+\frac{c}{\r^3}+\cdots. \end{align} with $M=r_0m$. We will turn on a constant magnetic field, which amounts to including a worldvolume magnetic field on the D7 branes. There are two possible embeddings, with the D7 branes crossing or not crossing the black hole horizon, corresponding to the meson melting and mesonic phases, respectively \cite{Mateos:2006nu,Hoyos:2006gb,Filev:2007gb,Erdmenger:2007bn}. Using $t$, $\vec{x}$, $\r$ and the angular coordinates on $S_3$ as worldvolume coordinates, the induced metric is given by \begin{align}\label{ind_metric} ds^2_{\text{ind}}=-\frac{r_0^2}{2}\frac{f^2}{H}\r^2dt^2+\frac{r_0^2}{2}H\r^2d\vec{x}^2+\(\frac{1}{\r^2}+\th'(\r)^2\)d\r^2+\cos^2\th d\O_3^2. \end{align} We also turn on a constant magnetic field in the $z$-direction, $F_{xy}=B$; the action of the D7 branes can then be written as \begin{align}\label{dbi_th} S_{DBI}=-\cN\int d\r\(\frac{r_0^2}{2}\)^2fH\r^3\sqrt{1+\r^2\th'^2}\sqrt{1+\frac{2B^2}{r_0^2H\r^2}}\cos^3\th, \end{align} with a vanishing WZ term. The phase diagram in the $m$-$B$ plane has been obtained in \cite{Filev:2007gb,Erdmenger:2007bn}. We reproduce the result in appendix A and show it at fixed temperature in Fig.~\ref{fig_mB}. The two phases are the mesonic phase at larger $m$ and $B$ and the meson melting phase at smaller $m$ and $B$. In the former, R-charge (axial charge) exchange between the fundamental matter and the adjoint sector is not possible due to the formation of meson bound states, while in the latter, R-charge (axial charge) can leak from the fundamental matter to the adjoint sector. The phase diagram implies that large quark mass and magnetic field favor the formation of meson bound states. The effect of the magnetic field may be understood as an increase of the effective quark mass. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{m_B} \caption{\label{fig_mB}$m$-$B$ phase diagram of the D3/D7 background. The axis labels are dimensionless numbers with units set by $\pi T=1$.
The region with small $m$ and $B$ corresponds to the meson melting phase, while the region with large $m$ and $B$ corresponds to the mesonic phase.} \end{figure} We are interested in the meson melting phase, which is more relevant for applications to the QGP. \subsection{Fluctuations and realization of axial anomaly} We consider fluctuations of the embedding function $\ph$ and the worldvolume gauge field $A_M$. The quadratic action can be written in the following compact form \begin{align}\label{compact} S=\cN\int d^5x\(-\frac{1}{2}\sqrt{-G}G^{MN}\pd_M\ph\pd_N\ph-\frac{1}{4}\sqrt{-H}F^2\) -\cN\k\int d^5x\Omg\e^{MNPQR}F_{MN}F_{PQ}\pd_R\ph, \end{align} where $M=t, x_1, x_2, x_3, \r$. The EOM of $\ph$ is given by \begin{align}\label{eom_ph} \frac{\d S}{\d \ph}-\pd_M\(\frac{\d S}{\d \pd_M\ph}\)=0. \end{align} Since $\ph$ is a phase, only its derivatives enter the action; we thus have from \eqref{eom_ph}, \begin{align} \pd_\m\(\frac{\d S}{\d \pd_\m\ph}\)+\pd_\r\(\frac{\d S}{\d \pd_\r\ph}\)=0, \end{align} with $\m=t, x_1, x_2, x_3$. Defining $J_R^\m=\int d\r \frac{\d S}{\d \pd_\m\ph}$, we obtain \begin{align} \pd_\m J^\m_R+\frac{\d S}{\d \pd_\r\ph}\vert_{\r=\r_h}^\infty=0. \end{align} We will identify $J_R$ as the axial current. The non-conservation of $J_R$ follows from the two boundary terms of the integration. The boundary term at the horizon $\r=\r_h$ indicates axial charge exchange between the D7 branes and the D3 branes. It is pointed out in \cite{Hoyos:2011us} that this term represents leakage of R-charge from the fundamental sector to the adjoint sector, as fields in both sectors are charged under the $U(1)_R$ symmetry. The other boundary term at $\r=\infty$ can be related to the axial anomaly: \begin{align}\label{Oph} O_\ph\equiv -\frac{\d S}{\d \pd_\r\ph}\vert_{\r=\infty}=-\frac{\d S^\pd}{\d \ph(\r\to\infty)}, \end{align} where we have used the defining property of the on-shell action $S^\pd$. For the action \eqref{compact}, we have \begin{align}\label{Oph_split} O_\ph=\cN\sqrt{-G}G^{M\r}\pd_M\ph\vert_{\r=\infty}+\k\cN\Omg\e^{MNPQ}F_{MN}F_{PQ}\vert_{\r=\infty}. \end{align} For our model, \begin{align} &\sqrt{-G}G^{MN}=\sqrt{-h}g^{MN}g_{\ph\ph},\quad \sqrt{-H}=\sqrt{-h}, \no &\Omg=\cos^4\th,\quad \k=\frac{1}{8}, \end{align} with $h$ to be defined in the next section. Field theory analysis shows that \cite{Hoyos:2011us}\footnote{Note that we have $\ph=0$, thus no axial chemical potential is introduced.} \begin{align}\label{Oph_O} O_\ph=mi\bar{\psi}\g^5\psi+\cdots+\cN E\cdot B, \end{align} where $\cdots$ represents contributions from supersymmetric partners. Noting that $\th\to0$ as $\r\to \infty$, we readily identify the second term in \eqref{Oph_split} with the last term in \eqref{Oph_O}. The remaining term in \eqref{Oph_split} can then be identified with the mass term in \eqref{Oph_O}. For convenience, we define the remaining term as \begin{align}\label{Oeta} O_\eta=\cN\sqrt{-G}G^{M\r}\pd_M\ph\vert_{\r=\infty}. \end{align} We have thus holographically split $O_\ph$ into the mass term $O_\eta$ and the anomaly term $\cN E\cdot B$, which represent respectively the explicit and anomalous breaking of the axial symmetry: \begin{align}\label{anomaly_split} \pd_\m J^\m_R=O_\ph=O_\eta+\cN E\cdot B. \end{align} \section{Finite Quark Mass Effect} We will study two aspects of the finite quark mass effect: (i) the mass term, similar to the QCD anomaly term, has diffusive behavior at low frequency. This gives rise to fluctuations (random walk behavior) of the axial charge.
The rate of diffusion, to be referred to as the mass diffusion rate, determines the rate of axial charge generation; (ii) in the presence of nonvanishing $E\cdot B$, net axial charge is produced. However, the axial charge dissipates due to finite quark mass, resulting in a reduced rate of axial charge generation. We will refer to this as the mass dissipation effect. The above effects are captured by correlators of $J^z$ and $O_\eta$. The mass diffusion rate involves the correlator of $O_\eta$ with itself, while the mass dissipation effect involves the correlator between $J^z$ and $O_\eta$. We stress that $J^z$ is the vector current coupled to the boundary gauge field $A_z$. In the holographic formulation, we need to study the fluctuations of the bulk fields $A_z$ and $\ph$, which are dual to $J^z$ and $O_\eta$ ($O_\ph$). For our purpose, it is sufficient to turn on homogeneous (in both $\vec{x}$ and $S_3$) fluctuations of $A_z(t,\r)$ and $\ph(t,\r)$. The fluctuations modify the following quantities \begin{align}\label{mod} &ds^2_{\text{ind}}=\(g_{tt}+g_{\ph\ph}\dot{\ph}^2\)dt^2+g_{xx}d\vec{x}^2+\(g_{\r\r}+g_{\th\th}\th'^2+g_{\ph\ph}\ph'^2\)d\r^2+g_{SS}d\O_3^2+2g_{\ph\ph}\dot{\ph}\ph'dtd\r, \no &\d F=\dot{A}_zdt\wg dz+A_z'd\r\wg dz, \no &\d P[C_4]=-\cos^4\th\(\dot{\ph}dt+\ph'd\r\)\wg d\O_3. \end{align} With \eqref{mod}, we can write down the quadratic action of $A_z(t,\r)$ and $\ph(t,\r)$: \begin{align}\label{quadratic} S_{\text{DBI}}+S_{\text{WZ}}&=-\cN \int dt d^3xd\r \bigg[\frac{1}{2}\sqrt{-h} \(g^{tt}g_{\ph\ph}\dot{\ph}^2+g^{\r\r}g_{\ph\ph}\ph'{}^2+g^{tt}g^{xx}\dot{A}_z^2+g^{\r\r}g^{xx}A_z'{}^2\) \no &+\cos^4\th B\(\ph'\dot{A}_z-\dot{\ph}A_z'\)\bigg], \end{align} where we have defined \begin{align}\label{h_def} \sqrt{-h}=\sqrt{-g_{tt}g_{xx}\(g_{xx}^2+B^2\)\(1+\r^2\th'^2\)g_{\r\r}g_{SS}^3}. \end{align} Variation with respect to the fluctuations gives both the EOM and the on-shell action \begin{align} \d S&=-\cN\int dtd^3xd\r \bigg[\sqrt{-h}\times \no &\(-\pd_t(g^{tt}g_{\ph\ph}\dot{\ph})\d\ph-\pd_\r(g^{\r\r}g_{\ph\ph}\ph')\d\ph-\pd_t(g^{tt}g^{xx}\dot{A}_z)\d A_z-\pd_\r(g^{\r\r}g^{xx}A_z')\d A_z\) \no &-\pd_\r(\cos^4\th B\dot{A}_z)\d\ph+\pd_t(\cos^4\th BA_z')\d\ph-\pd_t(\cos^4\th B\ph')\d A_z+\pd_\r(\cos^4\th B\dot{\ph})\d A_z\bigg] \no &-\cN \int dtd^3x\bigg[\sqrt{-h}\(g^{\r\r}g_{\ph\ph}\ph'\d\ph+g^{\r\r}g^{xx}A_z'\d A_z\)+\cos^4\th B(\dot{A}_z\d\ph-\dot{\ph}\d A_z)\bigg]. \end{align} Working with a single Fourier mode $e^{-i\o t}$, we obtain the EOM \begin{align}\label{eom_fluc} &\o^2\sqrt{-h}g^{tt}g_{\ph\ph}\ph-\pd_\r\(\sqrt{-h}g^{\r\r}g_{\ph\ph}\ph'\)-B\pd_\r(\cos^4\th)A_z(-i\o)=0, \no &\o^2\sqrt{-h}g^{tt}g^{xx}A_z-\pd_\r\(\sqrt{-h}g^{\r\r}g^{xx}A_z'\)+B\pd_\r(\cos^4\th)\ph(-i\o)=0. \end{align} The asymptotic expansion of $\ph$ and $A_z$ can be determined from the EOM: \begin{align} &\ph=f_0+\frac{f_1}{\r^2}+\frac{f_h}{\r^2}\ln\r+\cdots, \no &A_z=a_0+\frac{a_1}{\r^2}+\frac{a_h}{\r^2}\ln\r+\cdots. \end{align} $f_0$ and $a_0$ correspond to the sources coupled to $O_\ph$ and $J^z$. The coefficients of the logarithmic terms correspond to counter terms\footnote{Counter terms proportional to $B^2$ can in principle exist, but are not found in this case.}: \begin{align} f_h=\frac{\o^2}{r_0^2}f_0,\quad a_h=\frac{\o^2}{r_0^2}a_0.
\end{align} The vevs of $O_\ph$ and $J^z$ are determined by \begin{align}\label{vev} O_\ph=&\frac{\d S^\pd}{\d \ph(\r\to\infty)} \no =&\(-\cN\sqrt{-h}g^{\r\r}g_{\ph\ph}\ph'-\cN\cos^4\th B\dot{A}_z\)\vert_{\r\to\infty}=2\cN\(\frac{r_0^2}{2}\)^2m^2f_1-\cN Ba_0(-i\o), \no J^z=&\frac{\d S^\pd}{\d A_z(\r\to\infty)} \no =&\(-\cN\sqrt{-h}g^{\r\r}g^{xx}A_z'+\cN\cos^4\th B\dot{\ph}\)\vert_{\r\to\infty}=2\cN\(\frac{r_0^2}{2}\)a_1+\cN Bf_0(-i\o), \end{align} where we have used $\r\sin\th\vert_{\r\to\infty}= m$ according to \eqref{mc}. Comparing \eqref{Oeta} and \eqref{vev}, we arrive at the following dictionary \begin{align}\label{dict} O_\eta=2\cN\(\frac{r_0^2}{2}\)^2m^2f_1. \end{align} \subsection{Mass diffusion rate and susceptibility} The mass operator $O_\eta$ can lead to fluctuations of the axial charge. It is well known that axial charge fluctuations from the QCD anomaly originate from topological transitions. The counterpart for $O_\eta$ is helicity flipping in elementary scatterings \cite{Manuel:2015zpa,Grabowska:2014efa}. The rate of axial charge generation in the case of topological transitions is given by the CS diffusion rate. Similarly, the corresponding rate in the case of helicity flipping is given by the diffusion rate of $O_\eta$, which we calculate below. The diffusion rate of $O_\eta$ is encoded in the low frequency limit of the retarded correlator. To calculate the retarded correlator, we need to turn on a source for $\ph$ while keeping $A_z$ vanishing on the boundary. Both $\ph$ and $A_z$ satisfy the infalling wave condition at the horizon. It follows from \eqref{dict} that the retarded correlator is given by \begin{align}\label{eta_retarded} G_{\eta\eta}(\o)=\int dt\<[O_\eta(t),O_\eta(0)]\>\Theta(t)e^{i\o t}=-2\cN\(\frac{r_0^2}{2}\)^2m^2\frac{f_1}{f_0}. \end{align} The diffusion rate is defined by \begin{align}\label{Gm_def} \G_m=\lim_{\o\to0}\frac{2iT}{\o}G_{\eta\eta}(\o). \end{align} For the case $B=0$, there is no mixing between $\ph$ and $A_z$. We can simply use $\ph^{(0)}$ \eqref{sol_homo} in appendix B: \begin{align} &\ph^{(0)}(\r)=1-\frac{i\o}{2r_0}\big[\int_1^\r d\r' \(\frac{\(\frac{r_0^2}{2}\)^28\cos^3\th_h\sin^2\th_h}{\sqrt{-h(\r')}g^{\r\r}(\r')g_{\ph\ph}(\r')}-\frac{1}{\r'-1}\)+\ln(\r-1)\big]. \end{align} This gives the following retarded correlator of $O_\eta$: \begin{align}\label{Oeta_retarded} G_{\eta\eta}(\o)=-2\cN\(\frac{r_0^2}{2}\)^2\frac{i\o}{4r_0}8\cos^3\th_h\sin^2\th_h. \end{align} \eqref{Oeta_retarded} gives a mass diffusion rate $\G_m$ as an analog of the CS diffusion rate: \begin{align}\label{Gamma_m} \G_m=\frac{\cN}{\pi}\(\frac{r_0^2}{2}\)^28\cos^3\th_h\sin^2\th_h. \end{align} The dependence on $m$ is encoded in the combination of trigonometric functions, which clearly indicates an upper bound on the mass diffusion rate. We also extract $\G_m$ using \eqref{Gm_def} with numerical solutions for general $B$ and $m$ in the meson melting phase. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{gam_m2all_v2} \includegraphics[width=0.5\textwidth]{Gammam_B} \caption{\label{fig_gamm}(left) The mass diffusion rate ${\Gamma}_m$ as a function of $m^2$ for $B=0$ (blue points), $B=2$ (purple squares), $B=4$ (brown diamonds). The units are set by $\pi T=1$. The blue line is given by \eqref{Gamma_m}, which fits well for $B=0$. To guide the eye, we also include linear fits (red lines) in the small mass region. The linear behavior is consistent with the field theory expectation. We have used empty symbols for points in metastable phases. (right) $\G_m$ as a function of $B$ at $m=1/20$.
A rapid growth of $\G_m$ with $B$ is found.} \end{figure} We plot the numerical results for $\G_m$ as a function of $m^2$ for different values of $B$ in Figure~\ref{fig_gamm}. The case $B=0$ agrees well with the analytic expression \eqref{Oeta_retarded}. We find the mass diffusion rate is a non-monotonic function of $m$. This is not difficult to understand: in the limit $m\to0$, $\G_m$ obviously should vanish as $O_\eta\sim m$. When $m$ approaches the phase boundary between the meson melting phase and the mesonic phase, we also expect helicity flipping to freeze due to the formation of meson bound states. In between, there must be a maximum of $\G_m$. Furthermore, the linear behavior of the $\G_m$-$m^2$ plot in the small $m$ region supports the scaling $\G_m\sim m^2$, which is consistent with the field theory expectation. The $B$ dependence is more interesting: $\G_m$ shows rapid growth with $B$. The presence of $B$ enhances the diffusion, which cannot be explained as an increase of the effective mass. The enhancement of helicity flipping might provide a way to generate axial charge more efficiently. It is worth mentioning that an enhancement of the CS diffusion rate due to magnetic field was also obtained in \cite{Basar:2012gh,Drwenski:2015sha}. We would like to comment on the diffusive behavior of $O_\eta$. On general grounds, the mass diffusion effect leads to accumulation of axial charge, which prevents its further generation. It would lead to a modification of the long time (low frequency) behavior of $G^R_{\eta\eta}$. However, this does not happen due to the existence of the adjoint reservoir: the generated axial charge entirely dissipates to the adjoint reservoir. To see that, we compare $O_\eta$ and $O_\text{loss}$, which are the quantity below evaluated at $\r=\infty$ and $\r=1$, respectively. \begin{align} \cN\sqrt{-h}g^{\r\r}g_{\ph\ph}\ph'. \end{align} It follows from the EOM \eqref{eom_fluc} that the above quantity is constant in the limit $\o\to 0$, \begin{align} \pd_\r\(\cN\sqrt{-h}g^{\r\r}g_{\ph\ph}\ph'\)=0, \end{align} meaning that the generated charge is entirely balanced by the loss to the reservoir. Consequently, the low frequency behavior of the $O_\eta$ correlator is still diffusive. Turning on the source for $O_\eta$ also allows us to study the susceptibility of axial charge. In the presence of finite quark mass, the axial charge is not even approximately conserved, making the susceptibility a subtle concept. Following \cite{Iatrakis:2015fma}, we can use the CME to define a dynamical susceptibility $\c$. In the present model, it is given by \begin{align}\label{chi} \c=\frac{n_5}{\mu_5}=\frac{\cN B n_5}{J^z}. \end{align} We need to calculate both $n_5$ and $J^z$ from the response to a source for $O_\ph$ in the hydrodynamic limit. $n_5$ is essentially known already. Denoting the source by $f_m$, we can express $n_5$ as \begin{align} -i\o n_5(\o)=O_\eta(\o)=-G^R_{\eta\eta}(\o)f_m(\o)\sim O(\o)f_m(\o). \end{align} Therefore we obtain $n_5\sim O(\o^0)f_m(\o)$. On the other hand, $J^z$ is calculated using the dictionary \eqref{vev}. It is generated through the mixing between $\ph$ and $A_z$. $J^z$ is also expressible as a response to $f_m$ \begin{align} J^z(\o)=-G^R_{j\eta}(\o)f_m(\o). \end{align} Using the hydrodynamic solution \eqref{A1} in appendix B and the dictionary \eqref{vev}, we find that there are two contributions to $G^R_{j\eta}$, both of which are of order $O(\o B)$. Therefore we have $J^z\sim O(\o B)f_m(\o)$. Plugging the above qualitative results into \eqref{chi}, we obtain \begin{align} \c\sim O(\o^{-1}).
\end{align} It simply means that the susceptibility is divergent in the static limit $\o\to 0$. Recalling that the susceptibility is well-defined in the massless limit, we arrive at the non-commutativity of the limits $m\to0$ and $\o\to0$. The physical reason for the divergent susceptibility is not difficult to understand. On one hand, the mass diffusion effect can spontaneously generate axial charge density at no energy cost. On the other hand, as we have seen already, the adjoint reservoir is a perfect sink for axial charge in the flavor sector, preventing accumulation of axial charge. Consequently, axial charge can be continuously generated in the flavor sector. Note that the situation is different in the case of axial charge generation by the QCD anomaly: there the breaking of the axial symmetry is suppressed by $1/N_c$ (or the quenched limit), resulting in a finite dynamical susceptibility. \subsection{Mass dissipation effect} To study the mass dissipation effect, we turn on an electric field in the $z$-direction via a time-dependent $A_z$ on the boundary. We do not need to source $\ph$ on the boundary; its profile is entirely generated via the mixing of $A_z$ and $\ph$ in the bulk. The resulting $O_\eta$ from the nontrivial profile of $\ph$ corresponds to the mass dissipation effect we are after. We also impose the infalling wave boundary condition for $\ph$ and $A_z$ since we are interested in calculating the response. We define the dimensionless mass dissipation rate \begin{align}\label{mir} r=\frac{O_\eta}{\cN E\cdot B}. \end{align} The rate is a function of $\o$, $m$ and $B$. In the hydrodynamic limit $\o\to 0$, we can show that $r(\o\to 0)$ is a real function of $m$ and $B$. In fact, it can be related to the embedding function for given $m$ and $B$ in the meson melting phase. To obtain $r(\o\to 0)$ analytically, we need to solve the coupled EOM \eqref{eom_fluc} in the hydrodynamic limit. The hydrodynamic solutions can be found in appendix B; we simply quote the results here. The leading nontrivial order is the zeroth order for $A_z$ and the first order for $\ph$: \begin{align}\label{hydro_sol} &A_z^{(0)}=a_0, \no &\ph^{(1)}=\frac{\(1-\cos^4\th_h\)B i\o a_0}{\(\frac{r_0^2}{2}\)^2m^2(-2)}\r^{-2}+\cdots, \end{align} where $\th_h$ is the value of $\th$ on the horizon, which needs to be obtained from the numerical embedding function for given $m$ and $B$. For $\ph^{(1)}$, we only retain the asymptotic behavior relevant for extracting $O_\eta$. \eqref{hydro_sol} leads to the following rate \begin{align}\label{r_analytic} r=1-\cos^4\th_h. \end{align} We also study the rate of dissipation using numerical solutions. In practice, we generate two independent infalling numerical solutions at the horizon and use their linear combination to construct the solution with the desired boundary conditions. We show the $m$-dependence of $r$ in the limit $B=0$ and the $B$-dependence of $r$ at different values of $m$ in Figure~\ref{fig_Oetam} and Figure~\ref{fig_OetaB}. We find good agreement with the analytic expression \eqref{r_analytic}. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{ratio_m} \caption{\label{fig_Oetam}The mass dissipation rate $r$ as a function of $m$ from numerics with small $B$ and small $\o$. It is a monotonically increasing function of $m$, as expected. The analytic function \eqref{r_analytic} is drawn as a blue line and fits the numerical results well.
The units are set by $\pi T=1$.} \end{figure} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{ratio_B} \caption{\label{fig_OetaB}The mass dissipation rate $r$ as a function of $B$ for $m=7/20$ (blue point), $m=9/20$ (purple square), and $m=11/20$ (brown diamond). The units are set by $\pi T=1$. The analytic function \eqref{r_analytic} is drawn as a blue line and fits the numerical results well. It is a monotonically increasing function of $B$.} \end{figure} On general grounds, we expect the rate to be a monotonically increasing function of $m$. In particular, $r\to 0$ as $m\to0$. Indeed, this is confirmed in Fig.~\ref{fig_Oetam}. We further note that the effect of $B$ enhances the dissipation on top of the mass effect in Fig.~\ref{fig_OetaB}. The physical interpretation of the dissipation rate $r$ turns out to be a subtle question. Recalling the axial anomaly equation \eqref{anomaly_split}, we would draw the following conclusion: for every one unit of axial charge generated by parallel electric and magnetic fields, $r$ units of it dissipate through the mass term, with $1-r$ units of axial charge remaining. The remaining axial charge survives even in the hydrodynamic limit since we have $\o\to 0$. This is not true because we have ignored a third source of axial charge dissipation, i.e. loss to the adjoint reservoir. The anomaly equation \eqref{anomaly_split} should be supplemented by the loss rate \begin{align}\label{anomaly_complete} \pd_\m J^\m_R=O_\eta+\cN E\cdot B-O_\text{loss}, \end{align} with the explicit form of the loss rate given by \begin{align}\label{loss} O_\text{loss}=\cN\sqrt{-h}g^{\r\r}r_{\ph\ph}\pd_\r\ph\vert_{\r\to 1}+\cN\O E\cdot B\vert_{\r\to 1}. \end{align} It is known that the loss rate can be IR unsafe \cite{Karch:2008uy}. Indeed, plugging the hydrodynamic solutions $A_z^{(0)}$ and $\ph^{(1)}$ in appendix B into \eqref{anomaly_complete}, we find both terms become infinitely oscillatory as $\r\to1$. Nevertheless we can still extract useful information by taking the hydrodynamic limit $\o\to0$ before the IR limit $\r\to 1$. Using this regularization we find the $\cN\O E\cdot B$ term becomes $\cN \cos^4\th_h E\cdot B$, while the other term is higher order in $\o$. We immediately note that $\cN \cos^4\th_h E\cdot B$ is precisely the $1-r$ units of axial charge. Subtracting the charge loss to the reservoir, we find only $r$ units of axial charge are effectively generated in the flavor sector by parallel electric and magnetic fields. All of it dissipates through the mass term. This simply means no axial charge survives in the hydrodynamic limit. After clarifying the role of the axial charge loss to the adjoint reservoir, we should interpret $r$ as a measure of the mass dissipation effect compared to dissipation to the adjoint reservoir. The dissipation through the mass term is favored at large $m$ and $B$. The statement on the non-survival of axial charge can receive corrections at higher order in $\o$, which quantify the charge survival rate. We can compare with the relaxation time approximation employed in \cite{Jimenez-Alba:2015awa,Sun:2016gpy}, in which the following form of the axial anomaly equation is assumed (here we use $r\cN E\cdot B$ for the effective axial charge generation) \begin{align}\label{rta} \pd_tn_5=-\frac{n_5}{\t}+r\cN E\cdot B, \end{align} with $\t$ being the relaxation time. Physically it means the presence of axial charge $n_5$ induces $O_\eta=-\frac{n_5}{\t}$.
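Before solving \eqref{rta} in frequency space, a quick time-domain integration illustrates the expected behavior: $n_5$ saturates at $\t\, r\cN E\cdot B$, so in the long time limit the generation is fully balanced by the mass term. The following is a minimal numerical sketch; the values of $\t$ and of the source term $r\cN E\cdot B$ are illustrative placeholders, not outputs of the holographic computation.
\begin{verbatim}
import numpy as np

tau, source = 1.0, 0.5      # assumed values for tau and r*N*E.B
dt, T = 1e-3, 10.0
n5 = 0.0
for _ in range(int(T / dt)):
    # forward Euler for dn5/dt = -n5/tau + source (the RTA equation)
    n5 += dt * (-n5 / tau + source)
print(n5, tau * source)     # n5 saturates at tau*source
\end{verbatim}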
Plugging $O_\eta=-\frac{n_5}{\t}$ into \eqref{rta}, we can solve for $O_\eta$ in frequency space \begin{align}\label{Oeta_rta} O_\eta=-\frac{r\cN E\cdot B}{1-i\o\t}. \end{align} The leading order result $O_\eta=-r\cN E\cdot B$ corresponds to our result of full dissipation. In principle, by going to higher order in $\o$, we could calculate the relaxation time $\t$. We will not attempt it in this paper. \section{Summary} We have investigated the effect of finite quark mass and magnetic field in the generation and dissipation of axial charge, using a D3/D7 model. For axial charge generation, we calculated the mass diffusion rate. It is analogous to the Chern-Simons diffusion rate as a measure of axial charge fluctuation. The mass diffusion rate is a bounded non-monotonic function of mass at vanishing magnetic field. The presence of a magnetic field enhances the diffusion. At small $m$, our numerical results are consistent with an approximate scaling for the mass diffusion rate \begin{align} \G_m\sim m^2F(B), \end{align} with $F(B)$ a rapidly growing function in the meson melting phase. We also defined a dynamical susceptibility of axial charge using the CME. We found the susceptibility to be divergent in the static limit $\o\to0$. It is due to two reasons: (i) spontaneous generation of axial charge by the mass diffusion effect; (ii) continuous leakage of axial charge from the flavor sector to the adjoint sector, preventing the accumulation of axial charge. For axial charge dissipation, we found that a mass term is induced in the presence of parallel electric and magnetic fields, reducing the generation of axial charge. After carefully subtracting the axial charge loss rate to the adjoint sector, we found that the axial charge dissipates entirely through the mass term in the long time limit. To the order we consider, it is consistent with a relaxation time approximation. \section{Acknowledgments} We thank K.~Landsteiner and Y.~Yin for critical comments on an early version of the paper. We also thank D.~Kharzeev, J.~F.~Liao, Y.~Liu, L.~Yaffe, H.-U.~Yee and Y.~Yin for useful discussions. The work of S.L. is in part supported by Junior Faculty's Fund at Sun Yat-Sen University.
\section{Introduction} A fundamental question in differential geometry is to determine which transitive Lie group actions exist on a manifold. Sophus Lie considered this to be an important problem, in particular due to its applications in the symmetry theory of PDEs. In \cite{Lie1880} (see also \cite{Transformationsgruppen3}) he gave a local classification of finite-dimensional transitive Lie algebras of analytic vector fields on $\mathbb C$ and $\mathbb C^2$. Lie never published a complete list of finite-dimensional Lie algebras of vector fields on $\mathbb C^3$, but he did classify primitive Lie algebras of vector fields on $\mathbb C^3$, those without an invariant foliation, which he considered to be the most important ones, and also some special imprimitive Lie algebras of vector fields. Lie algebras of vector fields on $\mathbb C^3$ preserving a one-dimensional foliation are locally equivalent to projectable Lie algebras of vector fields on the total space of the fiber bundle $\pi \colon \mathbb C^2 \times \mathbb C \to \mathbb C^2$. Finding such Lie algebras amounts to extending Lie algebras of vector fields on the base (where they have been classified) to the total space. For the primitive Lie algebras of vector fields on the plane, this was completed by Lie \cite{Transformationsgruppen3}. Amaldi continued Lie's work by extending the imprimitive Lie algebras to three-dimensional space \cite{Amaldi1, Amaldi2} (see also \cite{Hillgarter1}). Nonsolvable Lie algebras of vector fields on $\mathbb C^3$ were classified in \cite{Doubrov}. It was also shown there that a complete classification of finite-dimensional solvable Lie algebras of vector fields on $\mathbb C^3$ is hopeless, since it contains the subproblem of classifying left ideals of finite codimension in the universal enveloping algebra $U(\mathfrak g)$ for the two-dimensional Lie algebras $\mathfrak g$, which is known to be a hard algebraic problem. In this paper we consider Lie algebras of vector fields on the plane from Lie's list, and extend them to the total space $\mathbb C^2 \times \mathbb C$. In order to avoid the issues discussed in \cite{Doubrov} we only consider extensions that are of the same dimension as the original Lie algebra. The resulting list of Lie algebras has intersections with \cite{Transformationsgruppen3}, \cite{Amaldi1, Amaldi2} and \cite{Doubrov}, but it also contains some new solvable Lie algebras of vector fields in three-dimensional space. We start in section \ref{classification2D} by reviewing the classification of Lie algebras of vector fields on $\mathbb C^2$, which will be our starting point. The lifting procedure is explained in section \ref{lifts}. We show that the lifts can be divided into three types, depending on how they act on the fibers of $\pi$. In section \ref{list} we give a complete list of the lifted Lie algebras of vector fields, which is the main result of this paper. The relation between the simplest type of lift and Lie algebra cohomology is explained in section \ref{cohomology}. \section{Classification of Lie algebra actions on $\mathbb C^2$}\label{classification2D} Two Lie algebras $\mathfrak g_1 \subset \vf{M_1}, \mathfrak g_2 \subset \vf{M_2}$ of vector fields are locally equivalent if there exist open subsets $U_i \subset M_i$ and a diffeomorphism $f \colon U_1 \to U_2$ with the property $df(\mathfrak g_1|_{U_1})=\mathfrak g_2|_{U_2}$. Recall that $\mathfrak g$ is transitive if $\mathfrak g|_p =T_p M$ at all points $p \in M$.
The classification of Lie algebras of vector fields on $\mathbb C$ and $\mathbb C^2$ is due to Lie \cite{Lie1880} (see \cite{AH} for an English translation). There are up to local equivalence only three finite-dimensional transitive Lie algebras of vector fields on $\mathbb C$ and they correspond to the metric, affine and projective transformations: \begin{equation} \Span{\partial_u}, \qquad \Span{\partial_u, u \partial_u}, \qquad \Span{\partial_u, u \partial_u, u^2 \partial_u} \label{classification1D} \end{equation} On $\mathbb C^2$ any finite-dimensional transitive Lie algebra of analytic vector fields is locally equivalent to one of the following: \begin{align*} &\textbf{Primitive}\\ \mathfrak{g}_1 &= \Span{\partial_x, \partial_y, x \partial_x, x \partial_y, y \partial_x, y \partial_y, x^2 \partial_x +xy \partial_y, xy \partial_x +y^2 \partial_y}\\ \mathfrak{g}_2 &= \Span{\partial_x,\partial_y, x\partial_x, x \partial_y, y\partial_x, y \partial_y} \\ \mathfrak{g}_3 &= \Span{\partial_x, \partial_y, x \partial_y, y \partial_x, x \partial_x - y\partial_y}\\ \end{align*} \begin{align*} &\textbf{Imprimitive}\\ \mathfrak{g}_4 &=\Span{\partial_x, e^{\alpha_i x} \partial_y, xe^{\alpha_i x} \partial_y, ..., x^{m_i-1}e^{\alpha_i x} \partial_y\mid i=1,...,s},\\ &\qquad \text{where } m_i \in \mathbb N\setminus\{0\}, \alpha_i \in \mathbb C, \sum_{i=1}^s m_i + 1 = r \geq 2\\ \mathfrak{g}_5 &=\Span{\partial_x,y \partial_y,e^{\alpha_i x} \partial_y, xe^{\alpha_i x} \partial_y, ..., x^{m_i-1}e^{\alpha_i x} \partial_y\mid i=1,...,s},\\ &\qquad \text{where } m_i \in \mathbb N\setminus\{0\}, \alpha_i \in \mathbb C, \sum_{i=1}^s m_i + 2 = r \geq 4 \\ \mathfrak{g}_6 &= \Span{\partial_x, \partial_y, y \partial_y, y^2 \partial_y} \\ \mathfrak{g}_7 &= \Span{\partial_x, \partial_y, x\partial_x, x^2 \partial_x + x \partial_y}\\ \mathfrak{g}_8 &= \Span{\partial_x, \partial_y, x \partial_y, ..., x^{r-3} \partial_y, x \partial_x+ \alpha y \partial_y\mid \alpha \in \mathbb C} ,\; r \geq 3\\ \mathfrak{g}_9 &= \Span{\partial_x, \partial_y, x \partial_y, ..., x^{r-3} \partial_y, x \partial_x + \left( (r-2) y+ x^{r-2}\right) \partial_y },\; r \geq 3\\ \mathfrak{g}_{10} &= \Span{\partial_x, \partial_y, x \partial_y, ..., x^{r-4} \partial_y, x \partial_x, y \partial_y},\; r \geq 4\\ \mathfrak{g}_{11} &= \Span{\partial_x, x \partial_x, \partial_y, y\partial_y, y^2 \partial_y}\\ \mathfrak{g}_{12} &= \Span{\partial_x, x \partial_x, x^2 \partial_x, \partial_y, y \partial_y, y^2 \partial_y }\\ \mathfrak{g}_{13} &= \Span{\partial_x, \partial_y, x \partial_y, ..., x^{r-4} \partial_y, x^2 \partial_x + (r-4) xy \partial_y, x \partial_x + \tfrac{r-4}{2} y \partial_y},\; r \geq 5\\ \mathfrak{g}_{14} &= \Span{ \partial_x, \partial_y, x \partial_y,..., x^{r-5} \partial_y, y \partial_y, x \partial_x, x^2 \partial_x + (r-5)xy \partial_y}, \;r\geq 6\\ \mathfrak{g}_{15} &= \Span{\partial_x, x\partial_x+\partial_y, x^2 \partial_x + 2x \partial_y }\\ \mathfrak{g}_{16} &= \Span{\partial_x, x \partial_x- y \partial_y, x^2 \partial_x+(1-2xy) \partial_y} \\ \end{align*} In the list above (which is based on the one in \cite{Onishchik}), and throughout the paper, $r$ denotes the dimension of the Lie algebra. Our $\mathfrak g_{16}$ is locally equivalent, via $y\mapsto \frac{1}{y-x}$, to $\Span{\partial_x + \partial_y, x \partial_x + y \partial_y,x^2 \partial_x + y^2 \partial_y}$, which often appears in these lists of Lie algebras of vector fields on the plane but has a singular orbit $y-x=0$.
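The equivalence just mentioned is straightforward to verify symbolically. The following is a minimal sympy sketch; as an assumption of convenience we use the sign convention $u=1/(x-y)$ for the new fiber coordinate, which agrees with the transformation above up to the further change $u\mapsto -u$.
\begin{verbatim}
import sympy as sp

x, y, u = sp.symbols('x y u')
phi = 1/(x - y)            # new coordinate u = 1/(x-y)

def pushforward(a, b):
    """Push a d_x + b d_y forward along (x, y) -> (x, u)."""
    du = a*sp.diff(phi, x) + b*sp.diff(phi, y)
    sub = {y: x - 1/u}     # invert the map to express everything in (x, u)
    return (sp.simplify(a.subs(sub)), sp.simplify(du.subs(sub)))

for a, b in [(sp.Integer(1), sp.Integer(1)), (x, y), (x**2, y**2)]:
    print(pushforward(a, b))
# -> d_x, x d_x - u d_u, x^2 d_x + (1-2xu) d_u: the generators of g_16
\end{verbatim}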
We also refer to \cite{Transformationsgruppen3,Campbell,Draisma,Gorbatsevich} which treat transitive Lie algebras of vector fields on the plane. \section{Lifts of Lie algebras in $\vf{\mathbb C^2}$}\label{lifts} In this section we describe how we lift the Lie algebras of vector fields from the base to the total space of $\pi \colon \mathbb C^2 \times \mathbb C \to \mathbb C^2$. \begin{definition} Let $\mathfrak g \subset \vf{\mathbb C^2}$ be a Lie algebra of vector fields on $\mathbb C^2$, and let $\hat{\mathfrak{g}} \subset \vf{\mathbb C^2 \times \mathbb C}$ be a projectable Lie algebra satisfying $d \pi(\hat{\mathfrak{g}})= \mathfrak g$. The Lie algebra $\hat{\mathfrak{g}}$ is a lift of $\mathfrak g$ (on the bundle $\pi$) if $\ker (d\pi|_{\hat{\mathfrak{g}}})=\{0\}$. \end{definition} For practical purposes we reformulate this in coordinates. Throughout the paper $(x,y,u)$ will be coordinates on $\mathbb C^2\times \mathbb C$. If $X_i=a_i(x,y) \partial_x+b_i(x,y) \partial_y$ form a basis for $\mathfrak g \subset \vf{\mathbb C^2}$, then a lift $\hat{\mathfrak g}$ of $\mathfrak g$ on the bundle $\pi$ is spanned by vector fields of the form $\hat X_i=a_i(x,y) \partial_x+b_i(x,y) \partial_y+f_i(x,y,u) \partial_u$. The functions $f_i$ are subject to differential constraints coming from the commutation relations of $\mathfrak g$. Finding lifts of $\mathfrak g$ amounts to solving these differential equations. \subsection{Three types of lifts} The fibers of $\pi$ are one-dimensional and, as is common in this type of calculation, we will use the classification of Lie algebras of vector fields on the line to simplify our calculations. Let $\mathfrak g$ be a finite-dimensional transitive Lie algebra of vector fields on $\mathbb C^2$ and $\hat{\mathfrak g}$ a transitive lift. For $p \in \mathbb C^2 \times \mathbb C$, let $a=\pi(p)$ be the projection of $p$ and let $\mathfrak{st}_a \subset \mathfrak g$ be the stabilizer of $a\in \mathbb C^2$. Denote by $\hat{\mathfrak{st}_a} \subset \hat{\mathfrak{g}}$ the lift of $\mathfrak{st}_a$, i.e. $d\pi(\hat{\mathfrak{st}_a})=\mathfrak{st}_a$. The Lie algebra $\hat{\mathfrak{st}_a}$ preserves the fiber $F_a$ over $a$, and thus induces a Lie algebra of vector fields on $F_a$ by restriction to the fiber. Denote the corresponding Lie algebra homomorphism by \[\varphi_a \colon \hat{\mathfrak{st}_a}\to \vf{F_a}.\] In general this will not be injective, and it is clear that as abstract Lie algebras $\varphi_a(\hat{\mathfrak{st}_a})$ is isomorphic to $\mathfrak h_a= \hat{\mathfrak{st}_a}/\ker(\varphi_a)$. Since $\hat{\mathfrak{g}}$ is transitive, the Lie algebra $\varphi_a(\hat{\mathfrak{st}_a})$ is a transitive Lie algebra on the one-dimensional fiber $F_a$, and therefore it must be locally equivalent to one of the three Lie algebras (\ref{classification1D}). Transitivity of $\hat{\mathfrak g}$ also implies that for any two points $a,b \in \mathbb C^2$, the Lie algebras $\varphi_a(\hat{\mathfrak{st}_{a}}), \varphi_b(\hat{\mathfrak{st}_{b}})$ of vector fields are locally equivalent. Since the Lie algebra structure of $\mathfrak h_a$ is independent of the point $a$, it will be convenient to define $\mathfrak h$ as the abstract Lie algebra isomorphic to $\mathfrak h_a$. Thus $\dim \mathfrak h$ is equal to 1, 2 or 3, which allows us to split the transitive lifts into three distinct types.
The main goal of this section is to show that we can change coordinates so that $\varphi_a(\hat{\mathfrak{st}_a})$ has one of the three normal forms from (\ref{classification1D}), on every fiber simultaneously. Before we prove this we make the following observation. \begin{lemma} \label{lemma} Let $\mathfrak g \subset \vf{\mathbb C^2}$ be a transitive Lie algebra of vector fields, and let $a \in \mathbb C^2$ be an arbitrary point. Then there exists a locally transitive two-dimensional subalgebra $\mathfrak h \subset \mathfrak{g}$, and a local coordinate chart $(U,(x,y))$ centered at $a$ such that $\mathfrak h = \Span{X_1,X_2}$ where $X_1= \partial_x$ and either $X_2=\partial_y$ or $X_2= x \partial_x+\partial_y$. \end{lemma} \begin{proof} This is apparent from the list in section \ref{classification2D}, but we also outline an independent argument. It is well known that a two-dimensional locally transitive Lie subalgebra can be brought to one of the above forms, so we only need to show that such a subalgebra exists. Let $\mathfrak g = \mathfrak s \ltimes \mathfrak r$ be the Levi decomposition of $\mathfrak g$. Assume first that $\mathfrak r$ is a locally transitive Lie subalgebra and let \[ \mathfrak r \supset \mathfrak r_1 \supset \mathfrak r_2 \supset \cdots \supset \mathfrak r_k \supset \mathfrak r_{k+1}=\{0\} \] be its derived series. If $\mathfrak r_k$ is locally transitive, it contains an (abelian) two-dimensional transitive subalgebra and we are done. If $\mathfrak r_k$ is not locally transitive, then we take a vector field $X_i \in \mathfrak r_i$ for some $i<k$ which is transversal to those of $\mathfrak r_k$. Since $[\mathfrak r_i, \mathfrak r_k] \subset \mathfrak r_k$, we have a map $\text{ad}_{X_i} \colon \mathfrak r_k \to \mathfrak r_k$. Let $X_k \in \mathfrak r_k$ be an eigenvector of $\text{ad}_{X_i}$. Then $X_i$ and $X_k$ span a two-dimensional locally transitive subalgebra of $\mathfrak g$. If $\mathfrak s$ is a transitive subalgebra, then $\mathfrak s$ is locally equivalent to the standard realization on $\mathbb C^2$ of either $sl_2$, $sl_2 \oplus sl_2$ or $sl_3$, all of which have a locally transitive two-dimensional Lie subalgebra. If neither $\mathfrak s$ nor $\mathfrak r$ is locally transitive they give transversal one-dimensional foliations, and $\mathfrak s$ is locally equivalent to the realization $\Span{\partial_x, x \partial_x, x^2 \partial_x}$ of $sl_2$ on $\mathbb C$ while $\mathfrak r$ is spanned by vector fields of the form $b_i(x,y)\partial_y$. Since $\mathfrak r$ is finite-dimensional we get $(b_i)_x=0$. Therefore $\mathfrak g=\mathfrak s\oplus \mathfrak r$, and there exists an abelian locally transitive subalgebra. \end{proof} \begin{example} \label{simplify} Let $X_1= \partial_x$ and $X_2= \partial_y$ be vector fields on $\mathbb C^2$ and consider the general lift $\hat{X}_1 = \partial_x+f_1(x,y,u) \partial_u, \hat X_2 = \partial_y+f_2(x,y,u) \partial_u$. We may change coordinates $u\mapsto A(x,y,u)$ so that $f_1\equiv 0$. This amounts to solving $\hat X_1 (A)=A_x+f_1 A_u=0$ with $A_u \neq 0$, which can be done locally around any point. The commutation relation $[\hat X_1, \hat X_2]=(f_2)_x \partial_u=0$ implies that $f_2$ is independent of $x$. Thus, in the same way as above, we may change coordinates $u \mapsto B(y,u)$ so that $f_2 \equiv 0$. A similar argument works if $X_2= x \partial_x+\partial_y$. \end{example} The previous example is both simple and useful.
Since all our Lie algebras of vector fields on $\mathbb C^2$ contain these Lie algebras as subalgebras, we can always transform our lifts to a simpler form by changing coordinates in this way. We apply this idea in the proof of the following theorem. \begin{theorem}\label{main} Let $\mathfrak g = \Span{X_1,...,X_r}$ be a transitive Lie algebra of vector fields on $\mathbb C^2$ and let $\hat{\mathfrak g}=\Span{\hat{X_1},...,\hat{X_r}}$ be a transitive lift of $\mathfrak g$ on the bundle $\pi$, with $\hat{X_i} = X_i + f_i(x,y,u) \partial_u$. Then there exist local coordinates in a neighborhood $U \subset \mathbb C^2 \times \mathbb C$ of any point so that $f_i(x,y,u)=\alpha_i(x,y) + \beta_i(x,y) u+\gamma_i (x,y) u^2$ and $\varphi_a(\hat{\mathfrak{st}_{a}})$ is of normal form (\ref{classification1D}) for every $a\in U$. \end{theorem} \begin{proof} Let $p \in \mathbb C^2 \times \mathbb C$ be an arbitrary point, $V$ an open set containing $p$, and $(V,(x,y,u))$ a coordinate chart centered at $p$. By lemma \ref{lemma} we may assume that $X_1=\partial_x$ and either $X_2=\partial_y$ or $X_2=x\partial_x+\partial_y$ and by example \ref{simplify} we may set $f_1 \equiv 0 \equiv f_2$. We choose a basis of $\mathfrak g$ such that $\mathfrak{st}_0=\Span{X_3,...,X_r}$. Since $\varphi_0(\hat{\mathfrak{st}_0})$ is a transitive action on the line, we may in addition make a local coordinate change $u \mapsto A(u)$ on $U \subset V$ containing $0$ so that $\varphi_0(\hat{\mathfrak{st}_0})$ is of the form $\Span{\partial_u}$, $\Span{\partial_u, u \partial_u}$ or $\Span{\partial_u, u \partial_u, u^2 \partial_u}$. Then for $i=3,...,r$, the functions $f_i$ have the property \[f_i(0,0,u)= \tilde \alpha_i+\tilde \beta_i u+\tilde \gamma_i u^2.\] We use the commutation relations of $\hat{\mathfrak g}$ to show that $f_i(x,y,u)$ will take this form for any $(x,y,u) \in U$. If $[X_j,X_i]=c_{ji}^k X_k$ are the commutation relations for $\mathfrak g$, then the lift of $\mathfrak g$ obeys the same relations: $[\hat X_j, \hat X_i] = c_{ji}^k \hat X_k$. Thus \[ [\hat X_1, \hat X_i] = [X_1,X_i]+X_1(f_i) \partial_u= c_{1i}^k X_k+X_1(f_i)\partial_u \] which implies that $X_1(f_i)=c_{1i}^k f_k$. In the same manner we get the equations $X_2(f_i)=c_{2i}^k f_k$. We can rewrite the equations as \[\partial_x(f_i)=c_{1i}^k f_k, \qquad \partial_y(f_i)=\tilde c_{2i}^k(x) f_k. \] The coefficients $\tilde c_{2i}^k(x)$ depend on whether $\Span{X_1,X_2}$ is abelian or not, but in any case they are independent of $u$. We differentiate these equations three times with respect to $u$ (denoted by $'$): \[\partial_x(f_i''')=c_{1i}^k f_k''', \qquad \partial_y(f_i''')=\tilde c_{2i}^k(x) f_k''' \] By the above assumption we have $f_i'''(0,0,u)=0$, and by the uniqueness theorem for systems of linear ODEs it follows that for every $(x,y,u) \in U$ we have $f_i'''(x,y,u)=0$, and therefore \begin{equation} f_i(x,y,u)= \alpha_i(x,y)+ \beta_i(x,y) u+ \gamma_i(x,y) u^2. \label{normalform} \end{equation} Note also that if $f_i''$ (or $f_i'$) vanishes on $(0,0,u)$, we may assume $\gamma_i\equiv 0$ (or $\gamma_i\equiv 0$ and $\beta_i \equiv 0$) for every $i$. The last statement of the theorem follows by the fact that $\dim \varphi_a(\hat{\mathfrak{st}_{a}})$ is the same for every $a \in U$.
\end{proof} \begin{definition} We say that the lift $\hat{\mathfrak{g}}$ of $\mathfrak g \subset \vf{\mathbb C^2}$ is metric, affine or projective if $\mathfrak h$ is one, two or three dimensional, respectively, and $\varphi_a(\hat{\mathfrak{st}_{a}})$ is of normal form (\ref{classification1D}) at every point $a \in \mathbb C^2$. \end{definition} By theorem \ref{main} all lifts are locally equivalent to one of these three types, so from now on we will restrict to such lifts. This simplifies our computations. Geometrically, we may think about this lifting as choosing a structure on the fiber, namely metric, affine or projective, and requiring the lift to preserve this structure. Another useful observation is that the properties of $\mathfrak{st}_a$ and $\mathfrak h$ are closely linked. \begin{corollary} \label{quotient} If $\mathfrak{st}_a$ is solvable, then there are no projective lifts. If $\mathfrak{st}_a$ is abelian, then there are no projective or affine lifts. \end{corollary} \begin{proof} The map $\varphi_a\colon \hat{\mathfrak{st}_a} \to \mathfrak h_a \simeq \mathfrak h$ is a Lie algebra homomorphism, and the image of a solvable (resp. abelian) Lie algebra is solvable (resp. abelian). \end{proof} In particular, from Lie's classification it follows that only the primitive Lie algebras may have projective lifts. \subsection{Coordinate transformations} It is natural to consider two lifts to be equivalent if there exists a coordinate transformation on the fibers ($u \mapsto A(x,y,u)$) taking one to the other. We consider metric lifts up to translations $u \mapsto u+A(x,y)$, affine lifts up to affine transformations $u \mapsto A(x,y) u+B(x,y)$ and projective lifts up to projective transformations $u \mapsto \frac{A(x,y) u+B(x,y)}{C(x,y) u+D(x,y)}$. The following example shows the general procedure we use for finding lifts. \begin{example}\label{g6} Consider the Lie algebra $\mathfrak{g}_{6}$, which is spanned by the vector fields \[ X_1=\partial_x,\quad X_2=\partial_y,\quad X_3= y \partial_y,\quad X_4= y^2 \partial_y.\] Since the stabilizer of $0$ is solvable, we may by corollary \ref{quotient} assume that the generators of a lift $\hat{\mathfrak g}_{6}$ are of the form $\hat{X}_i = X_i+f_i \partial_u$, where $f_i$ are affine functions in $u$. All lifts are either metric or affine. By example \ref{simplify} we may assume that $f_1 \equiv 0 \equiv f_2$ after making an affine change of coordinates (or a translation if we consider metric lifts). The type of coordinate transformation was not specified in the example, but it is clear that the PDE in the example can be solved within our framework of metric and affine lifts, respectively. The commutation relations $[X_1,X_3]=0,[X_2,X_3]=X_2$ imply that $f_3$ is a function of $u$ alone. The commutation relations $[X_1,X_4]=0,[X_2,X_4]=2 X_3,[X_3,X_4]=X_4$ result in the differential equations \[(f_4)_x=0, \quad (f_4)_y=2 f_3, \quad y (f_4)_y+f_3 (f_4)_u-f_4 (f_3)_u=f_4.\] The first two equations give $f_4=2yf_3(u)+b(u)$. After inserting this into the third equation it simplifies to $f_3 b_u-b (f_3)_u=b$. Since the lift is either metric or affine, we may assume that $f_3=A_0 + A_1 u$ and $b=B_0+B_1 u$. Then the equation above results in $B_1=0$ and $B_0 A_1=-B_0$. If $B_0=0$ we get transitive lifts only when $A_1=0$: \begin{equation*} \partial_x,\quad \partial_y,\quad y\partial_y+A_0\partial_u,\quad y^2 \partial_y+2A_0 y\partial_u. \end{equation*} These are metric lifts.
If $A_1=-1$ we get the affine lift \begin{equation*} \partial_x, \quad \partial_y,\quad y \partial_y-u \partial_u,\quad y^2 \partial_y+(1-2yu)\partial_u \end{equation*} where $A_0$ and $B_0$ have been normalized by a translation and scaling, respectively. \end{example} \begin{remark} The family of metric lifts is also invariant under transformations of the form $u \mapsto C u+A(x,y)$, where $C$ is constant. However, we would like to restrict to $C=1$. A consequence of this choice is that we get a correspondence between metric lifts and Lie algebra cohomology, which will be discussed in section \ref{cohomology}. The same cohomology spaces are treated in \cite{Olver92} where they are used for classifying Lie algebras of differential operators on $\mathbb C^2$. We also get a correspondence between metric lifts and ``linear lifts'', whose vector fields act as infinitesimal scaling transformations in the fibers. Using the notation above they take the form $\hat X=X+f(x,y) u \partial_u$. They make up an important type of lift, but we do not consider them here due to their intransitivity. Since the transformation $u \mapsto \exp(u)$ takes metric lifts to linear lifts, the theories of these two types of lifts are analogous (given that we allow the right coordinate transformations). This makes many of the results in this paper applicable to linear lifts as well. As an example, the classification of linear lifts under linear transformations ($u \mapsto u A(x,y)$) will be similar to that of metric lifts under translations ($u \mapsto u+A(x,y)$). \end{remark} \section{List of lifts} \label{list} This section contains the list of lifts of the Lie algebras from section \ref{classification2D} on $\pi\colon \mathbb C^2 \times \mathbb C \to \mathbb C^2$. For a Lie algebra $\mathfrak g \subset \vf{\mathbb C^2}$ we will denote by $\hat{\mathfrak{g}}^m,\hat{\mathfrak{g}}^a,\hat{\mathfrak{g}}^p$ the metric, affine and projective lifts, respectively. \begin{theorem} The following list contains all metric, affine and projective lifts of the Lie algebras from Lie's classification in section \ref{classification2D}.
\end{theorem} \begin{align*} \hat{\mathfrak{g}}_1^m&=\Span{\partial_x, \partial_y, x \partial_y, x \partial_x-y \partial_y, y \partial_x, x \partial_x+y \partial_y+2C \partial_u, \\ &\qquad x^2 \partial_x+xy \partial_y+3 C x \partial_u, xy \partial_x+y^2 \partial_y + 3C y \partial_u}\\ \hat{\mathfrak{g}}_1^p & =\Span{\partial_x, \partial_y, x \partial_y+\partial_u, x \partial_x-y \partial_y-2u \partial_u, y \partial_x - u^2 \partial_u, x \partial_x+y \partial_y, \\ & \qquad x^2 \partial_x+xy \partial_y+(y-xu) \partial_u, xy \partial_x+y^2 \partial_y+u(y-xu)\partial_u}\\ \hat{\mathfrak{g}}_2^m&= \Span{\partial_x, \partial_y, x \partial_y, x \partial_x-y\partial_y, y \partial_x, x \partial_x+y \partial_y+C\partial_u} \\ \hat{\mathfrak{g}}_2^p&= \Span{\partial_x, \partial_y, x \partial_y+\partial_u, x \partial_x-y \partial_y-2u\partial_u, y \partial_x-u^2 \partial_u, x \partial_x+y \partial_y} \\ \hat{\mathfrak{g}}_3^p &= \Span{\partial_x, \partial_y, x \partial_y+\partial_u, x \partial_x-y \partial_y-2u \partial_u, y \partial_x-u^2 \partial_u }\\ \hat{\mathfrak{g}}_4^m&= \Span{\partial_x, x^i e^{\alpha_j x} \partial_y+ e^{\alpha_j x} \left( \sum_{k=0}^i \tbinom{i}{k} C_{j,k} x^{i-k} \right) \partial_u \mid C_{1,0}=0 } \\ \hat{\mathfrak{g}}_5^m &= \Span{\partial_x,y\partial_y+C\partial_u, x^i e^{\alpha_j x} \partial_y}\\ \hat{\mathfrak{g}}_5^a &= \Span{\partial_x,y\partial_y+u \partial_u, x^i e^{\alpha_j x} \partial_y+ e^{\alpha_j x} \left( \sum_{k=0}^i \tbinom{i}{k} C_{j,k} x^{i-k} \right) \partial_u\mid C_{1,0}=0} \\ \hat{\mathfrak{g}}_6^m&= \Span{\partial_x, \partial_y, y\partial_y +C\partial_u, y^2 \partial_y+2Cy\partial_u}\\ \hat{\mathfrak{g}}_6^a&= \Span{\partial_x, \partial_y, y \partial_y-u \partial_u,y^2 \partial_y+(1-2yu)\partial_u}\\ \hat{\mathfrak{g}}_7^m&= \Span{\partial_x, \partial_y, x\partial_x +C\partial_u, x^2 \partial_x+x \partial_y+2C x\partial_u}\\ \hat{\mathfrak{g}}_7^a&= \Span{\partial_x, \partial_y, x \partial_x-u \partial_u,x^2 \partial_x+x \partial_y+(1-2xu)\partial_u}\\ \hat{\mathfrak{g}}_8^m &= \Span{\partial_x, \partial_y, x\partial_x+\alpha y \partial_y+ A\partial_u, x \partial_y,...,x^{s-1} \partial_y,\\ & \qquad x^{s+i} \partial_y + \tbinom{s+i}{s} B x^i \partial_u \mid i=0,...,r-3-s}, \\ & \text{where } B=0 \text{ unless } \alpha=s\\ \hat{\mathfrak{g}}_8^a &= \Span{\partial_x, \partial_y, x \partial_x+\alpha y\partial_y+ (\alpha-s)u\partial_u, x \partial_y,...,x^{s-1} \partial_y, \\ & \qquad x^{s+i} \partial_y +\tbinom{s+i}{s} x^i \partial_u \mid i=0,...,r-3-s, \quad \alpha \neq s}\\ \hat{\mathfrak{g}}_9^m &= \Span{\partial_x, \partial_y, x \partial_x+ ((r-2)y+x^{r-2}) \partial_y+C \partial_u, x \partial_y, ..., x^{r-3} \partial_y} \\ \hat{\mathfrak{g}}_9^a &= \Span{\partial_x, \partial_y, x \partial_x+ ((r-2)y+x^{r-2} )\partial_y+ \left(\tbinom{r-2}{s} x^{r-s-2}+(r-s-2) u\right) \partial_u,\\ & \qquad x \partial_y, ..., x^{s-1} \partial_y, x^{s+i} \partial_y + \tbinom{s+i}{s} x^i \partial_u \mid i=0,...,r-3-s}\\ \hat{\mathfrak{g}}_{10}^m &= \Span{\partial_x, \partial_y, x \partial_x+ A \partial_u, y \partial_y+B \partial_u, x \partial_y, ..., x^{r-4} \partial_y}\\ \hat{\mathfrak{g}}_{10}^a &= \Span{\partial_x, \partial_y, x \partial_x-su \partial_u, y \partial_y+u\partial_u, x \partial_y, ..., x^{s-1} \partial_y, \\ &\qquad x^{s+i} \partial_y+ \tbinom{s+i}{s} x^i \partial_u \mid i=0,...,r-4-s}\\ \hat{\mathfrak{g}}_{11}^m &=\Span{\partial_x, \partial_y, x \partial_x + A \partial_u, y \partial_y+B\partial_u, y^2 \partial_y+2By \partial_u}\\
\hat{\mathfrak{g}}_{11}^a &=\Span{\partial_x, \partial_y, x \partial_x , y \partial_y-u\partial_u, y^2 \partial_y+(1-2yu) \partial_u}\\ \hat{\mathfrak{g}}_{12}^m&= \Span{\partial_x, \partial_y, x \partial_x+A \partial_u, y \partial_y+B\partial_u,x^2 \partial_x+2Ax \partial_u , y^2 \partial_y+2By \partial_u}\\ \hat{\mathfrak{g}}_{12}^{a1} &=\Span{\partial_x, \partial_y, x \partial_x- u \partial_u , y \partial_y,x^2 \partial_x+(1-2xu) \partial_u, y^2 \partial_y}\\ \end{align*} \begin{align*} \hat{\mathfrak{g}}_{12}^{a2} &=\Span{\partial_x, \partial_y, x \partial_x , y \partial_y- u \partial_u,x^2 \partial_x, y^2 \partial_y+(1-2yu) \partial_u}\\ \hat{\mathfrak{g}}_{13}^{m1} &= \Span{\partial_x, \partial_y, x \partial_x+ y \partial_y+A \partial_u, x \partial_y+B \partial_u, x^2 \partial_y+2B x \partial_u, \\ & \qquad x^2 \partial_x+2 xy \partial_y+(2x A+2yB)\partial_u }\\ \hat{\mathfrak{g}}_{13}^{m2} &= \Span{\partial_x, \partial_y, x \partial_x+\tfrac{r-4}{2} y \partial_y+C\partial_u, x \partial_y,...,x^{r-4} \partial_y, \\ & \qquad x^2 \partial_x+(r-4) xy \partial_y+2Cx \partial_u}\\ \hat{\mathfrak{g}}_{13}^{a1} &= \Span{\partial_x, \partial_y, x \partial_x+\tfrac{r-4}{2} y \partial_y-u\partial_u, x \partial_y,...,x^{r-4} \partial_y,\\ & \qquad x^2 \partial_x+(r-4) xy \partial_y+(1-2xu) \partial_u}\\ \hat{\mathfrak{g}}_{13}^{a2} &= \Span{\partial_x, \partial_y, x^2 \partial_x+(r-4)xy \partial_y+(x (r-6) u+(r-4)y) \partial_u, \\ &\qquad x \partial_x+\tfrac{r-4}{2} y \partial_y+\tfrac{r-6}{2} u \partial_u, x^i \partial_y+i x^{i-1} \partial_u \mid i=1,...,r-4 }\\ \hat{\mathfrak{g}}_{14}^{m} &= \Span{\partial_x, \partial_y, x \partial_x+A\partial_u, y \partial_y+B \partial_u, x \partial_y, ..., x^{r-5} \partial_y,\\ & \qquad x^2 \partial_x+(r-5) x y \partial_y+(2A+(r-5) B) x \partial_u} \\ \hat{\mathfrak{g}}_{14}^{a1} &= \Span{\partial_x, \partial_y, x \partial_x- u\partial_u, y \partial_y, x \partial_y,...,x^{r-5} \partial_y, \\ & \qquad x^2 \partial_x+(r-5) x y \partial_y+(1-2 xu) \partial_u}\\ \hat{\mathfrak{g}}_{14}^{a2} &= \Span{ \partial_x, \partial_y, x^2 \partial_x+(r-5) x y \partial_y+((r-7) x u+(r-5) y) \partial_u, \\ & \qquad x \partial_x- u\partial_u, y \partial_y+u \partial_u, x^i \partial_y+ i x^{i-1} \partial_u \mid i=1,...,r-5}\\ \hat{\mathfrak{g}}_{15}^m &= \Span{\partial_x, x \partial_x + \partial_y, x^2 \partial_x+2x \partial_y+C e^y\partial_u }\\ \hat{\mathfrak{g}}_{16}^m &= \Span{\partial_x, x \partial_x-y\partial_y+C \partial_u, x^2 \partial_x+(1-2xy) \partial_y+ 2C x\partial_u} \end{align*} The proof of the theorem is a direct computation following the algorithm described above. The computations are not reproduced here, beyond example \ref{g6}, but they can be found in the ancillary file to the arXiv version of this paper. All capital letters in the list denote complex constants. For the metric lifts, one of the constants can always be set equal to $1$ if we allow rescaling of $u$. In the affine lift $\hat{\mathfrak g}_5^a$ one of the constants must be nonzero in order for the lift to be transitive, and it can be set equal to $1$ by a scaling transformation. Notice also that even though ${\mathfrak{g}}_{15}$ is not locally equivalent to ${\mathfrak{g}}_{16}$, their lifts are locally equivalent. In addition, the two affine lifts of $\mathfrak g_{12}$ are locally equivalent. Most of this list already exists in the literature. The lifts of the three primitive Lie algebras can be found in \cite{Transformationsgruppen3}.
The first attempt to give a complete list of imprimitive Lie algebras of vector fields on $\mathbb C^3$ was done by Amaldi in \cite{Amaldi1,Amaldi2}. Most of the Lie algebras we have found are contained in Amaldi's list of Lie algebras of ``type A'', but some of our lifts are missing. Examples are $\hat{\mathfrak{g}}_{10}^a,\hat{\mathfrak{g}}_{14}^{m},\hat{\mathfrak{g}}_{14}^{a1}$ and $\hat{\mathfrak{g}}_{8}^a$ with general $\alpha$ and $B=0$. There is an error in the Lie algebra corresponding to $\hat{\mathfrak{g}}_{14}^{a1}$, which was also noticed in \cite{Hillgarter1, Hillgarter2}. The lifts of nonsolvable Lie algebras are contained in \cite{Doubrov}, and the case of metric lifts was also considered in \cite{MasterThesis}. \section{Metric lifts and Lie algebra cohomology} \label{cohomology} We conclude this treatment by showing that there is a one-to-one correspondence between the metric lifts of $\mathfrak g \subset \vf{\mathbb C^2}$ and the Lie algebra cohomology group $H^1(\mathfrak g, C^\omega (\mathbb C^2))$. The main result is analogous to \cite[Theorem 2]{Olver92}. A metric lift of a Lie algebra $\mathfrak g \subset \vf{\mathbb C^2}$ is given by a $C^\omega(\mathbb C^2)$-valued one-form $\psi$ on $\mathfrak g$. For vector fields $X,Y \in \mathfrak g$ lifted to $\hat X = X + \psi_X \partial_u$ and $\hat Y = Y + \psi_Y \partial_u$ we have \begin{equation} [\hat X,\hat Y] = [X+\psi_X \partial_u, Y+\psi_Y \partial_u] = [X,Y] + (X(\psi_Y)-Y(\psi_X))\partial_u. \label{xhatyhat} \end{equation} Consider the first terms of the Chevalley-Eilenberg complex \begin{equation*} 0 \longrightarrow C^\omega(\mathbb C^2) \overset{d}{\longrightarrow} \mathfrak g^* \otimes C^\omega(\mathbb C^2) \overset{d}{\longrightarrow} \Lambda^2 \mathfrak g^* \otimes C^\omega (\mathbb C^2) \end{equation*} where the differential $d$ is defined by \begin{align*} df(X) &= X(f), \quad f \in C^\omega(\mathbb C^2) \\ d\psi(X,Y) &= X(\psi_Y)-Y(\psi_X) - \psi_{[X,Y]}, \quad \psi \in \mathfrak g^* \otimes C^\omega(\mathbb C^2). \end{align*} This complex depends not only on the abstract Lie algebra, but also on its realization as a Lie algebra of vector fields. It is clear from (\ref{xhatyhat}) that $\psi \in \mathfrak g^* \otimes C^\omega(\mathbb C^2)$ corresponds to a metric lift if and only if $d \psi =0$. Two metric lifts are equivalent if there exists a biholomorphism \[\phi\colon (x,y,u) \mapsto (x,y,u-U(x,y))\] on $\mathbb C^2 \times \mathbb C$ that brings one to the other. A lift of $X$ transforms according to \begin{equation*} d \phi \colon X + \psi_X \partial_u \mapsto X + (\psi_X - dU(X))\partial_u \end{equation*} which shows that two lifts are equivalent if the difference between their defining one-forms is given by $dU$ for some $U\in C^\omega(\mathbb C^2)$. Thus we have the following theorem, relating the cohomology space \begin{equation*} H^1(\mathfrak g, C^\omega(\mathbb C^2)) = \{\psi \in\mathfrak g^* \otimes C^\omega(\mathbb C^2) \mid d \psi =0\}/\{dU \mid U \in C^\omega(\mathbb C^2)\}, \end{equation*} to the space of metric lifts. \begin{theorem} \label{cohomologytheorem} There is a one-to-one correspondence between the space of metric lifts of the Lie algebra $\mathfrak g \subset \vf{\mathbb C^2}$ (up to equivalence) and the cohomology space $H^1(\mathfrak g, C^\omega (\mathbb C^2))$. \end{theorem} The theorem gives a transparent interpretation of metric lifts, while also showing a way to compute $H^1(\mathfrak g, C^\omega (\mathbb C^2))$, through example \ref{g6}.
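As a concrete illustration, the metric lift of $\mathfrak g_6$ found in example \ref{g6} corresponds to the one-form $\psi$ with $\psi_{X_1}=\psi_{X_2}=0$, $\psi_{X_3}=C$, $\psi_{X_4}=2Cy$, and the cocycle condition $d\psi=0$ can be checked mechanically. The following is a minimal sympy sketch of this verification; the structure constants of $\mathfrak g_6$ are hardcoded from the commutation relations computed in example \ref{g6}.
\begin{verbatim}
import sympy as sp

x, y, C = sp.symbols('x y C')

# basis X1..X4 of g_6 as (coefficient of d_x, coefficient of d_y)
X   = [(sp.Integer(1), sp.Integer(0)), (sp.Integer(0), sp.Integer(1)),
       (sp.Integer(0), y), (sp.Integer(0), y**2)]
psi = [sp.Integer(0), sp.Integer(0), C, 2*C*y]   # candidate cocycle

# nonzero brackets (0-indexed): [X2,X3]=X2, [X2,X4]=2X3, [X3,X4]=X4
struct = {(1, 2): {1: 1}, (1, 3): {2: 2}, (2, 3): {3: 1}}

def act(v, f):     # the vector field v applied to the function f
    return v[0]*sp.diff(f, x) + v[1]*sp.diff(f, y)

for i in range(4):
    for j in range(i + 1, 4):
        lhs = act(X[i], psi[j]) - act(X[j], psi[i])
        rhs = sum(c*psi[k] for k, c in struct.get((i, j), {}).items())
        assert sp.expand(lhs - rhs) == 0   # d(psi)(Xi, Xj) = 0
\end{verbatim}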
This method is essentially the one that was used in \cite{Olver92}, where the same cohomologies were found. There they extended Lie's classification of Lie algebras of vector fields to Lie algebras of first order differential operators on $\mathbb C^2$, and part of this work is equivalent to our classification of metric lifts. Their results coincide with ours, with the exceptions of $\mathfrak g_8$, which corresponds to cases 5 and 20 in \cite{Olver92}, and $\mathfrak g_{16}, \mathfrak g_{15}, \mathfrak g_7$, which correspond to cases 12, 13 and 14, respectively. For $\mathfrak g_8$ it seems like they have not considered the case corresponding to $\ker(d \pi|_{\hat{\mathfrak g}})=\{0\}$, which is the only case we consider. The realizations used in \cite{Olver92} for cases 12, 13 and 14 have singular orbits, while their cohomologies are computed after restricting to subdomains, avoiding singular orbits. The cohomology is sensitive to the choice of realization as a Lie algebra of vector fields, and will in general change by restricting to a subdomain. The following example, based on realizations of $sl(2)$, illustrates this. \begin{example} The metric lift \[\hat{\mathfrak{g}}_{16}^m = \Span{\partial_x, x \partial_x-y\partial_y+C \partial_u, x^2 \partial_x+(1-2xy) \partial_y+ 2C x\partial_u} \] is parametrized by a single constant, and thus $H^1(\mathfrak g_{16}, C^\omega (\mathbb C^2)) = \mathbb C$. Similarly, we see that $H^1(\mathfrak g_{15}, C^\omega (\mathbb C^2)) = \mathbb C$. The Lie algebra $ \tilde{\mathfrak{g}}_{16} = \Span{\partial_x, x \partial_x+ y \partial_y, x^2 \partial_x+y(2x+y) \partial_y }$ is related to \cite[case 12]{Olver92} by the transformation $y \mapsto x+y$. It is also locally equivalent to $\mathfrak g_{16}$, but it has a singular one-dimensional orbit, $y=0$. Its metric lift is given by \[ \Span{\partial_x, x\partial_x+ y \partial_y+A \partial_u, x^2 \partial_x+y(2x+y) \partial_y +(2A x+B y) \partial_u}\] which implies that $H^1(\tilde{\mathfrak{g}}_{16}, C^\omega (\mathbb C^2)) = \mathbb C^2$. The Lie algebra $ \tilde{\mathfrak{g}}_{15}=\Span{y \partial_x, x \partial_y,x \partial_x-y\partial_y}$ is the standard representation on $\mathbb C^2$. If we split $C^\omega ( \mathbb C^2)= \oplus_{k=0}^\infty S^k (\mathbb C^2)^*$, then $H^1(\tilde{\mathfrak{g}}_{15},C^\omega(\mathbb C^2))= \oplus_{k=0}^\infty H^1(\tilde{\mathfrak{g}}_{15},S^k (\mathbb C^2)^*)$. By Whitehead's lemma, since $S^k(\mathbb C^2)^*$ is a finite-dimensional module over $\tilde{\mathfrak{g}}_{15}$, the cohomologies $H^1(\tilde{\mathfrak{g}}_{15},S^k(\mathbb C^2)^*)$ vanish, and thus $H^1(\tilde{\mathfrak{g}}_{15},C^\omega(\mathbb C^2))=0$. Hence the cohomologies of the locally equivalent Lie algebras $\mathfrak{g}_{15}$ and $\tilde{\mathfrak{g}}_{15}$ are different. To summarize, we have two pairs of locally equivalent realizations of $sl(2)$, and their cohomologies are \begin{align*} H^1(\mathfrak g_{16}, C^\omega (\mathbb C^2)) = \mathbb C, \qquad &H^1(\tilde{\mathfrak{g}}_{16}, C^\omega (\mathbb C^2)) = \mathbb C^2, \\ H^1(\mathfrak g_{15}, C^\omega (\mathbb C^2)) = \mathbb C, \qquad &H^1(\tilde{\mathfrak{g}}_{15},C^\omega(\mathbb C^2))=0. \end{align*} \end{example} The Lie algebra cohomologies considered in this paper are related to the relative invariants (and singular orbits) of the corresponding Lie algebras of vector fields \cite{FelsOlver}.
A consequence of \cite[Theorem~5.4]{FelsOlver} is that a locally transitive Lie algebra $\mathfrak g$ of vector fields has a scalar relative invariant if it has a nontrivial metric lift whose orbit-dimension is equal to that of $\mathfrak g$. The metric lift of $\tilde{\mathfrak g}_{16}$ has two-dimensional orbits when $A=B$. Therefore there exists an absolute invariant, and it is given by $e^u/y^A$. The corresponding relative invariant of $\tilde{\mathfrak g}_{16}$ is $y^A$ and it defines the singular orbit $y=0$. \section*{Acknowledgements} I would like to thank Boris Kruglikov for his invaluable guidance throughout this work.
\section{Introduction} The downlink coverage probability of a single-tier cellular network with distance-dependent interference was analyzed in \cite{andrews2011tractable} using tools from stochastic geometry. This was followed by new results for multi-tier cellular networks in single-antenna \cite{jo2012heterogeneous} and multi-antenna systems~\cite{dhillon2013downlink}. An important assumption in these works is that the fading distribution of the nearest (desired) base station (BS) is Rayleigh. Rayleigh fading is an important assumption as the channel power then follows an exponential distribution. This allows the distribution of the signal-to-interference ratio ($\mathtt{SIR}$) to be expressed in terms of the Laplace transform of the interference, which can easily be computed using standard tools from stochastic geometry. In a multi-antenna system that uses maximal-ratio combining, if the fading distributions of all the links are i.i.d. Rayleigh, the channel power is Gamma distributed with integer shape parameter (equal to the number of antenna terminals). The coverage probability of such a system can be computed by using the Laplace transform of the interference and its derivatives. However, for popular channel fading distributions like Rician, the Laplace trick cannot be used as the complementary cumulative distribution function ($\mathtt{CCDF}$) of the channel power is not an exponential function. In~\cite{yang2015coverage}, the coverage probability of a two-tier cellular network was obtained when the desired signal experiences Rician fading. This approach assumes that the $\mathtt{CCDF}$ of a non-central chi-squared distributed (square of Rician) random variable can be approximated as a weighted sum of exponentials. The weights and abscissas are obtained by minimizing the mean squared error between the $\mathtt{CCDF}$ and this approximation. As the function minimized is not convex, the weights and abscissas obtained are only locally optimal and are highly dependent on the initial points assumed. Also, there are no closed-form expressions available for these weights and abscissas, and they have to be computed numerically. In \cite{di2014stochastic}, the coverage probability of a single-tier network with an arbitrary fading distribution was derived using the Gil-Pelaez inversion theorem. The coverage probability expression in \cite{di2014stochastic} requires a numerical integration of the imaginary part of the moment generating function ($\mathtt{MGF}$) of the desired channel's power. However, this approach requires numerical evaluation of multi-dimensional integrals for heterogeneous networks. In \cite{madhu}, it is shown that analytically tractable expressions for the coverage probability of a heterogeneous cellular network cannot be derived in the presence of arbitrary fading if the tier association is based on maximum average received power. In \cite{di2013average}, \cite{alammouri2016modeling}, the average rate was derived for arbitrary fading channels and for fading channels with dominant specular components, respectively. But these approaches cannot be used for evaluating the coverage probability.
In this letter, we assume all the channels to experience independent $\kappa$-$\mu$ shadowed fading and derive the exact coverage probability when the parameter $\mu$ of the desired channel is an integer, and use a ``Rician approximation'' to derive an approximate one when $\mu$ is not an integer. So using this method, the coverage probability can be obtained if the desired channel is Nakagami faded with non-integer shape parameter, whereas the Laplace trick is useful only when the shape parameter is an integer. As popular fading distributions such as Rician, Rayleigh, Nakagami, Rician shadowing, $\kappa$-$\mu$, $\eta$-$\mu$ are special cases of $\kappa$-$\mu$ shadowed fading, the coverage probability expression obtained is generic. Our analysis assumes that the single-tier base stations are PPP distributed and the interfering signals fade independently and identically. The analysis can be easily extended to a multi-tier heterogeneous network that uses maximum average received power based association following similar steps as in \cite{jo2012heterogeneous}. \section{System Model} \label{sec:systemmodel} The base stations are modelled by a homogeneous Poisson point process $\Phi \subseteq \mathbb{R}^2$ of intensity $\lambda$. All the base stations are assumed to transmit with unit power. The signal from a base station located at $x \in \mathbb{R}^2$ experiences a path loss $||x||^{-\alpha}$, where $\alpha > 2.$ Without loss of generality, a typical user is assumed to be at the origin and is associated with the nearest base station located at a distance $r$. Nearest base station association in a single-tier network is the same as maximum average received power based association \cite{jo2012heterogeneous}. This is preferred to the highest $\mathtt{SIR}$ based association so that frequent handovers that occur due to short-term fading and shadowing can be avoided \cite{jo2012heterogeneous}. From \cite{andrews2011tractable}, the nearest neighbour distance is Rayleigh distributed, {\em i.e.}, $f(r)=2 \pi \lambda r \exp(-\pi \lambda r^2)$. The system is assumed to be interference limited and hence noise is neglected. \section{$\kappa$-$\mu$ shadowed fading} \label{sec:kappamu} $\kappa$-$\mu$ shadowed fading is represented by three parameters, viz. $\kappa$, $\mu$ and $m$. Let $\gamma$ denote the signal power. The probability density function of the signal power when the channel experiences $\kappa$-$\mu$ shadowed fading \cite{paris2014statistical} is denoted by $f(\gamma)$ and is given as $\frac{\mu^\mu m^m (1+\kappa)^\mu \gamma^{\mu-1}}{\Gamma(\mu) \overline{\gamma}^{\mu} (\mu \kappa+m)^m} e^{-\frac{\mu(1+\kappa) \gamma}{\overline{\gamma}}} {}_1F_1 \left(m;\mu;\frac{\mu^2 \kappa(1+\kappa)}{\mu \kappa+m} \frac{\gamma}{\overline{\gamma}}\right),$ where ${}_1F_1(a;b;z)\overset{\Delta}{=} \sum\limits_{l=0}^\infty \frac{(a)_l}{(b)_l} \frac{z^l}{l!}$ is the confluent hypergeometric function, $(a)_l=\frac{\Gamma(a+l)}{\Gamma(a)}$. The $\mathtt{PDF}$ can be expressed as $f(\gamma) =\sum\limits_{l=0}^{\infty} w_l \frac{e^{-c \gamma} \gamma^{l+\mu-1} c^{l+\mu}}{\Gamma(l+\mu)},$ where $c=\frac{\mu(1+\kappa)}{\overline{\gamma}}$ and \begin{equation} w_l=\frac{\Gamma(l+\mu) (m)_l (\frac{\mu \kappa}{ \mu \kappa +m})^l (\frac{m}{\mu \kappa+m})^m}{\Gamma(\mu) l ! (\mu)_l}. \label{eqn:weights} \end{equation} So the $\mathtt{PDF}$ of the channel power $f(\gamma)$ can be represented as an infinite sum of Gamma densities with parameters $(l+\mu,\frac{1}{c})$ and weights $w_l$.
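The weights in \eqref{eqn:weights} decay quickly (a tail bound is given in the next section), so the mixture can be truncated in practice. The following is a minimal Python sketch that evaluates the first $L$ weights of \eqref{eqn:weights} in log-space and checks the truncated Gamma mixture; the parameter values are illustrative placeholders, not those used in the plots of this letter.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist

def mixture_weights(kappa, mu, m, L):
    """First L weights w_l of the Gamma-mixture form of f(gamma)."""
    l = np.arange(L)
    logw = (gammaln(l + mu) - gammaln(mu) - gammaln(l + 1)
            + gammaln(m + l) - gammaln(m)        # Pochhammer (m)_l
            - (gammaln(mu + l) - gammaln(mu))    # Pochhammer (mu)_l
            + l*np.log(mu*kappa/(mu*kappa + m))
            + m*np.log(m/(mu*kappa + m)))
    return np.exp(logw)

kappa, mu, m, gbar = 2.0, 1.5, 3.0, 1.0          # assumed parameters
w = mixture_weights(kappa, mu, m, L=60)
print(w.sum())                                   # close to 1
c = mu*(1 + kappa)/gbar
pdf = sum(w[l]*gamma_dist.pdf(0.7, a=l + mu, scale=1/c) for l in range(60))
print(pdf)                                       # truncated value of f(0.7)
\end{verbatim}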
The relations between different fading distributions and $\kappa$-$\mu$ shadowed fading are given in \cite{paris2014statistical}, \cite{moreno2015kappa}. The parameter $m$ in Nakagami-m fading is denoted as $\hat{m}$ to avoid confusion with the parameter $m$ in $\kappa$-$\mu$ shadowed fading. Let the channel power of the desired signal $g_0$ be $\kappa$-$\mu$ shadow faded with parameters $\kappa_0$, $\mu_0$, $m_0.$ The interfering signals are independent of each other and the desired signal. All the interfering signals fade identically, but need not be identical to the desired signal. Let the interfering signals be $\kappa$-$\mu$ shadow faded with parameters $\kappa_i$, $\mu_i$, $m_i.$ So the $\mathtt{PDF}$ of the desired channel power $g_0$ is $f(g_0) = \sum\limits_{l=0}^{\infty} w_l \frac{e^{-c_0 g_0} g_0^{l+\mu_0-1} c_0^{l+\mu_0}}{\Gamma(l+\mu_0)}.$ Similarly the $\mathtt{PDF}$ of the interference power $g_i$ is $f(g_i) = \sum\limits_{q=0}^{\infty} v_q \frac{e^{-c_i g_i} g_i^{q+\mu_i-1} c_i^{q+\mu_i}}{\Gamma(q+\mu_i)}.$ In the next section, the coverage probability is derived. \section{Coverage Probability}\label{sec:hetero} The signal-to-interference ratio of a typical user at distance $r$ from its associated base station is $\mathtt{SIR} = \frac{g_{0} r^{-\alpha}}{I},$ where $I=\sum\limits_{i \in \Phi \setminus B_{0}} g_{i} |x_{i}|^{-\alpha}$ and $B_{0}$ is the base station that the typical user is associated with. Here, $g_{i}$ is the channel power from the $i$-th base station to the typical user. The coverage probability of a typical user is \begin{equation} P_{c} = \mathbb{P}(\mathtt{SIR}>T) = \int\limits_{0}^\infty \mathbb{P}(\mathtt{SIR} > T \vert r) 2 \pi \lambda r e^{-\pi \lambda r^2} \mathrm{d} r, \label{eqn:SIRccdf} \end{equation} as the distance to the nearest base station is Rayleigh distributed. First we will derive the exact coverage probability expression when $\mu_0$ is an integer and then derive the approximate one when $\mu_0$ is not an integer. \subsection{Integer $\mu_0$} Rayleigh, Rician, Rician shadowed, Hoyt, $\kappa$-$\mu$, Nakagami (integer shape parameter) are special cases of $\kappa$-$\mu$ shadowed fading where $\mu$ is an integer \cite{paris2014statistical}. In the following theorem we derive the coverage probability when $\mu_0$ is an integer. \begin{theorem} If $\mu_0$ is an integer, then the coverage probability ($P_c$) is \begin{equation} \sum\limits_{l=0}^{\infty} \sum\limits_{n=0}^{l+\mu_0-1} \frac{\partial^n}{\partial s^n} \frac{w_l (-1)^n }{ n! \sum\limits_{q=0}^{\infty} v_q {}_2F_1(q+\mu_i,-\frac{2}{\alpha},1-\frac{2}{\alpha},-\frac{s T c_0}{c_i})} |_{\footnotesize{s=1}}, \label{eqn:Pcexact} \end{equation} where ${}_2F_1()$ is the Gauss hypergeometric function. \end{theorem} \begin{proof} Substituting for $\mathtt{SIR}$ in (\ref{eqn:SIRccdf}), the coverage probability is \begin{align} P_{c} &=\int\limits_{0}^{\infty} \mathbb{P}(g_{0}>T I r^{\alpha} ) 2 \pi \lambda r e^{-\pi \lambda r^2} \mathrm{d} r.
\label{eqn:Pc} \end{align} As $f(g_0)=\sum\limits_{l=0}^{\infty} w_l \frac{e^{-c_0 g_0} g_0^{l+\mu_0-1} c_0^{l+\mu_0}}{\Gamma(l+\mu_0)}$ and using $Y=c_0 T I r^{\alpha}$, \begin{align} \mathbb{P}(g_{0}>T I r^{\alpha} ) &= \mathbb{E}_Y \left(\sum\limits_{l=0}^{\infty} w_l \frac{\Gamma(l+\mu_0,Y)}{\Gamma(l+\mu_0)} \right) \label{eqn:forthm3}\\ &\stackrel{(a)}= \mathbb{E}_Y \left(\sum\limits_{l=0}^{\infty} w_l \sum\limits_{n=0}^{l+\mu_0-1} e^{-Y}\frac{Y^n}{n!} \right)\\ &= \sum\limits_{l=0}^{\infty} w_l \sum\limits_{n=0}^{l+\mu_0-1} \frac{(-1)^n}{n!} \frac{\partial^n}{\partial s^n} L_Y(s)|_{s=1}. \label{eqn:Pg0} \end{align} Since $\mu_0$ and $l$ are integers, $(a)$ follows from the fact that $\frac{\Gamma(q,Y)}{\Gamma(q)}=\sum\limits_{n=0}^{q-1} e^{-Y}\frac{Y^n}{n!}$ for integer $q$. \begin{equation} L_Y(s) = \mathbb{E}(e^{-sY}) = \mathbb{E}(e^{-sc_0TIr^{\alpha}})= L_I(s c_0 T r^{\alpha}). \label{eqn:Ly} \end{equation} \begin{align} L_I(s) &\stackrel{(a)}= \exp \left(-2 \pi \lambda \int\limits_r^{\infty} (1-\mathbb{E}_g(\exp(-s g v^{-\alpha})))v \mathrm{d} v \right) \nonumber \\ &\stackrel{(b)} = e^{ -2 \pi \lambda \sum\limits_{q=0}^{\infty} v_q \int\limits_r^{\infty} (1-\frac{1}{(1+\frac{s v^{-\alpha}}{c_i})^{q+\mu_i}} )v \mathrm{d} v } \nonumber \\ &= e^{-2 \pi \lambda \sum\limits_{q=0}^{\infty} v_q (\frac{r^2}{2} ({}_2F_1(q+\mu_i,-\frac{2}{\alpha},1-\frac{2}{\alpha},-\frac{r^{-\alpha}s}{c_i})-1)) }, \label{eqn:Li} \end{align} where (a) follows from \cite{andrews2011tractable} and (b) follows as the $\mathtt{PDF}$ of the interfering signal can be expressed as a weighted sum of Gamma density functions and the weights sum to 1. Combining (\ref{eqn:Pc}), (\ref{eqn:Pg0}), (\ref{eqn:Ly}), (\ref{eqn:Li}), and by using the fact that the weights $v_q$ sum to 1, $P_c$ is \begin{align*} \sum\limits_{l=0}^{\infty} \sum\limits_{n=0}^{l+\mu_0-1} \frac{\partial^n}{\partial s^n} \int\limits_{0}^{\infty} \frac{2 \pi \lambda r w_l(-1)^n}{n! e^{\pi \lambda r^2 \sum\limits_{q=0}^{\infty} v_q {}_2F_1(q+\mu_i,-\frac{2}{\alpha},1-\frac{2}{\alpha},-\frac{s T c_0}{c_i}) }} \mathrm{d} r |_{s=1}. \end{align*} \end{proof} In practice, only a few weights $w_l$ in (\ref{eqn:weights}) are significant, as the tail sum of the weights can be bounded as shown below. From (\ref{eqn:weights}) \begin{align*} \sum\limits_{l=N+1}^{\infty} w_l &= \frac{\left(\frac{m}{m+ \kappa \mu} \right)^m }{\Gamma(m)} \sum\limits_{j=0}^{\infty} \frac{\Gamma(m+N+1+j)(\kappa \mu)^{j+N+1}}{\Gamma(2+N+j) (\kappa \mu + m)^{j+N+1}} \\ &\stackrel{(a)}\approx \frac{\left(\frac{m}{m+ \kappa \mu} \right)^m (N+1)^{m-1} }{\Gamma(m) (\frac{\kappa \mu}{m+\kappa \mu})^{-N-1}} \sum\limits_{j=0}^{\infty} \frac{(\frac{N+1+j+\frac{m}{2}}{N+1})^{m-1}}{ (\frac{\kappa \mu}{m+ \kappa \mu})^{-j}}\\ & \leq \frac{\left(\frac{m}{m+ \kappa \mu} \right)^m (N+1)^{m-1} }{\Gamma(m) (\frac{m+\kappa \mu}{\kappa \mu})^{N+1}} \sum\limits_{j=0}^{\infty} \frac{(1+j+\frac{m}{2})^{m-1} }{ (\frac{\kappa \mu}{m+ \kappa \mu})^{-j}}\\ & \leq e^{-\mathcal{O}(N)}, \end{align*} where (a) uses Kershaw's approximation, $\frac{\Gamma(k+\frac{\alpha}{2})}{\Gamma(k)}\approx(k+\frac{\alpha}{4}-\frac{1}{2})^{\frac{\alpha}{2}}$. The higher order derivatives in \eqref{eqn:Pcexact} are evaluated using Fa{\`a} di Bruno's formula \cite{faa}, {\em i.e.}, $\frac{\partial^n}{\partial s^n} f(g(s))= \sum\limits_{k=1}^n f^{(k)}(g(s)) B_{n,k}(g^{(1)}(s), g^{(2)}(s),\ldots,g^{(n-k+1)}(s)),$ where $f^{(k)}$, $g^{(k)}$ are the $k$th order derivatives and $B_{n,k}$ is the Bell polynomial.
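This evaluation can be cross-checked symbolically against direct differentiation. Below is a minimal sympy sketch for $f(g(s))=1/g(s)$, using $f^{(k)}(x)=(-1)^k k!\, x^{-k-1}$ and sympy's incomplete Bell polynomials; it is only a sanity check of the formula, not part of the numerical evaluation in this letter.
\begin{verbatim}
import sympy as sp

s = sp.symbols('s')
g = sp.Function('g')
n = 3                                   # order of the derivative

direct = sp.diff(1/g(s), s, n)          # chain-rule differentiation

gd = [sp.diff(g(s), s, j) for j in range(1, n + 1)]
faa = sum((-1)**k*sp.factorial(k)*g(s)**(-k - 1)
          * sp.bell(n, k, gd[:n - k + 1]) for k in range(1, n + 1))

print(sp.simplify(direct - faa))        # prints 0
\end{verbatim}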
In \eqref{eqn:Pcexact}, $f(g(s))$ is of the form $\frac{1}{g(s)}$. Hence $f^{(k)}(g(s))$ is $(-1)^k k! g(s)^{-k-1}$ and $g^{(k)}(s)$ is $\sum\limits_{q=0}^{\infty} \frac{v_q (q+\mu_i)_k (-\frac{2}{\alpha})_k}{(-1)^{-k}(1-\frac{2}{\alpha})_k} {}_2F_1(q+\mu_i+k, -\frac{2}{\alpha}+k,1-\frac{2}{\alpha}+k,-\frac{sTc_0}{c_i})$ where $(.)_k$ is the Pochhammer symbol, {\em i.e.}, $(a)_k =\frac{\Gamma(a+k)}{\Gamma(a)}$. \subsection{Non-integer $\mu_0$} If $\mu_0$ is not an integer, the distribution of $\mathtt{SIR}$ can be expressed in terms of fractional derivatives of the Laplace transform of the interference, which leads to intractable expressions. Another approach is to express each of the weighted Gamma $\mathtt{PDF}$s in turn as a weighted sum of Erlang $\mathtt{PDF}$s (Erlang is a special case of the Gamma $\mathtt{PDF}$ with integer shape parameter). The parameters of the Erlang density functions and the weights can be obtained through a numerical iterative expectation maximization procedure \cite{thummler2006novel}. Alternatively, we propose a technique to approximate the $\mathtt{PDF}$ of a Gamma distribution with non-integer shape parameter as a weighted sum of Erlang $\mathtt{PDF}$s using a Rician approximation of the Nakagami distribution. The Rician (then called Nakagami-n) approximation of the Nakagami-m distribution was proposed by Nakagami in \cite{nakagami} and has been widely used in wireless communication. The advantage of this method described below is that the weights and Erlang parameters can be pre-computed. \begin{itemize} \item{The square root of a Gamma distributed random variable with shape and scale parameters ($l+\mu_0$,$\frac{1}{c_0}$) is Nakagami-m distributed with shape and scale parameters ($l+\mu_0$,$\frac{l+\mu_0}{c_0}$).} \item{A Nakagami-m random variable with parameters ($l+\mu_0$, $\frac{l+\mu_0}{c_0}$) can be approximated by a Rician distribution with parameters ($K_l$, $\frac{l+\mu_0}{c_0}$) through moment matching where $l+\mu_0=\frac{(K_l+1)^2}{2 K_l+1},$ $\forall$ $l+\mu_0 \geq 1$ \cite{nakagami}. The original and the approximate distributions are plotted in Fig. \ref{fig:nakagamirician} and it can be observed that the approximation is very tight.} \item{ Rician fading is a special case of $\kappa$-$\mu$ shadowed fading with $\mu=1$, $\kappa=K_l$, $m \rightarrow \infty$ \cite{paris2014statistical}. So the $\mathtt{PDF}$ of the power of a Rician faded channel can be expressed as a weighted sum of Erlang $\mathtt{PDF}$s (as $\mu$ is an integer). Hence, using this approximate equivalence, a Gamma density of non-integer shape parameter can be expressed as a weighted sum of Erlang $\mathtt{PDF}$s.
} \end{itemize} So $f(g_0)$=$\sum\limits_{l=0}^{\infty} w_l \frac{e^{-c_0 g_0} g_0^{l+\mu_0-1} c_0^{l+\mu_0}}{\Gamma(l+\mu_0)}$ can be expressed as $f(g_0) \approx \sum\limits_{l=0}^{\infty} \sum\limits_{p=0}^{\infty} w_l \omega_{pl} \frac{e^{-c_l g_0} g_0^{p} c_l^{p+1}}{\Gamma(p+1)},$ where $\omega_{pl}$=$\frac{e^{-K_l} K_l^p}{p!},$ $c_l=\frac{1+K_l}{\Omega_l}, \Omega_l=\frac{l+\mu_0}{c_0},$ $K_l=l+\mu_0-1+ \sqrt{(l+\mu_0)(l+\mu_0-1)}.$ By following the same steps as in Theorem 1, if $\mu_0$ is not an integer and is greater than 1, then the coverage probability is approximately \begin{equation} \sum\limits_{l=0}^{\infty} \sum\limits_{p=0}^{\infty} \sum\limits_{n=0}^{p} \frac{\partial^n}{\partial s^n} \frac{w_l \omega_{pl} (-1)^n}{n! \sum\limits_{q=0}^{\infty} v_q {}_2F_1(q+\mu_i,-\frac{2}{\alpha},1-\frac{2}{\alpha},-\frac{s T c_l}{ c_i})}|_{s=1}. \label{eqn:Pcapprox} \end{equation} \begin{figure}[ht] \centering \includegraphics[height=2.4in,width=3.75in]{figure_june16.eps} \caption{Nakagami-$m$ pdf marked by circles and its Rician approximation. } \label{fig:nakagamirician} \end{figure}% \section{Numerical Results} \label{sec:fading} The results are plotted for unit mean power in both the desired and interfering channels. We assume identical and independent fading distributions in the desired and interferer links. The coverage probability plots for different fading distributions are provided in Fig. \ref{fig:simulation}. Only a finite number of weights $N$ is required to calculate the coverage probability; the values of $N$ used are provided in Fig. \ref{fig:simulation}. We observe that the simulation results match the derived coverage probability closely. From the plots we can see that in $\kappa$-$\mu$ shadowed fading the coverage probability increases when $\kappa$, $\mu$, or $m$ increases. From Fig. \ref{fig:nakagami}, we observe that the Rician approximation of the Nakagami distribution (which is used when $\mu$ is a non-integer) is very tight and that the accuracy of the approximate coverage probability increases with the $\mathtt{SIR}$ threshold. As the exact coverage probability is not known when $\mu$ is a non-integer, we compute the squared error between the exact and approximate coverage probability when $\mu$ is an integer. As the coverage probability expressions involve multiple derivatives and summations, deriving an analytical upper bound on the approximation error is complicated. Hence in Fig. \ref{fig:error}, we plot the squared error for different fading distributions. We observe that as the $\mathtt{SIR}$ threshold $T$ increases, or as the Nakagami fading parameter or $\kappa$ decreases, the squared error decreases; it is also very low (of order $10^{-5}$). \begin{figure}[ht] \centering \includegraphics[height=2.65in,width=3.75in]{sep19.eps} \caption{Theoretical and simulated coverage probability with $\kappa$-$\mu$ shadowed fading} \label{fig:simulation} \end{figure}% \begin{figure}[ht] \centering \includegraphics[height=2.65in,width=3.75in]{june22_rectangle.eps} \caption{Theoretical and simulated coverage probability when $\mu$ is a non-integer.
Nakagami-m fading of parameter $\hat{m}$ is a special case of $\kappa$-$\mu$ shadowed fading for $\mu$=$\hat{m}$, $\kappa \rightarrow 0$, $m \rightarrow \infty$, and $\kappa$-$\mu$ fading is a special case of $\kappa$-$\mu$ shadowed fading when $m \rightarrow \infty$. } \label{fig:nakagami} \end{figure}% \begin{figure}[ht] \centering \includegraphics[height=2.65in,width=3.75in]{sep10.eps} \caption{Squared error between exact and approximate coverage probability} \label{fig:error} \end{figure}% \section{Conclusion} In this paper, we have derived the coverage probability when both the desired and interfering links experience $\kappa$-$\mu$ shadowed fading. As $\kappa$-$\mu$ shadowed fading generalizes many popular fading distributions, the coverage probability expression derived here can be used when the links experience Rician fading, Nakagami fading, Rician shadowing, etc., cases for which the coverage probability was hitherto unknown. By using a Rician approximation, we also derive an approximate coverage probability expression when the parameter $\mu$ is not an integer. This is useful in deriving the coverage probability when the shape parameter of Nakagami fading is not an integer.
\section{Introduction\label{Intro}} The large mean free path ($L$$\gg$1 $\mu$m) of the GaAs/AlGaAs-based two-dimensional electron gas (2DEG) and modern nano-fabrication technologies have enabled us to design and fabricate 2DEG samples artificially modulated on length scales much smaller than $L$. The samples have been extensively utilized for experimental investigations of novel physical phenomena that take place in the new artificial environments. \cite{BeenakkerR91} The unidirectional lateral superlattice (ULSL) represents a prototypical and probably the simplest example of such samples; there, a new length scale, the period $a$, and a new energy scale, the amplitude $V_0$, of the periodic potential modulation are introduced to the 2DEG\@. These artificial parameters give rise to a number of interesting phenomena through their interplay with parameters inherent in the 2DEG, especially when subjected to a perpendicular magnetic field $B$. Magnetotransport reveals intriguing characteristics over the whole span of magnetic field, ranging from the low-field regime dominated by the semiclassical motion of electrons, \cite{Weiss89,Winkler89,Beton90P,Geim92} through the quantum Hall regime where several Landau levels are occupied, \cite{Muller95,Tornow96,Milton00,Endo02f,Endo04EP} up to the highest field where only the lowest Landau level is partially occupied \cite{Smet99c,Willett99c,Endo01c}; in the last regime, the semiclassical picture is restored with composite fermions (CFs) taking the place of electrons. Of these magnetotransport features, two observed in low fields, namely, the positive magnetoresistance \cite{Beton90P} (PMR) around zero magnetic field and the commensurability oscillation \cite{Weiss89,Winkler89} (CO) originating from the geometric resonance between the period $a$ and the cyclotron radius $R_\mathrm{c}$=$\hbar k_\mathrm{F}/e|B|$, where $k_\mathrm{F}$=$\sqrt{2 \pi n_e}$ represents the Fermi wave number with $n_e$ the electron density, have the longest history of being studied and are probably the best known. The PMR has been ascribed to channeled orbits, or streaming orbits (SO), in which electrons travel along the direction parallel to the modulation ($y$-direction), being confined in a single valley of the periodic potential. \cite{Beton90P} Electrons whose momentum perpendicular to the modulation ($x$-direction) happens to be insufficient to overcome the potential hill constitute the SO\@. In a magnetic field $B$, the Lorentz force partially cancels the electric force deriving from the confining potential. Therefore the number of SO's decreases with increasing $B$, and the SO's finally disappear at the limiting field where the Lorentz force balances the maximum slope of the potential. The \textit{extinction field} $B_\mathrm{e}$ depends on the amplitude and the shape of the potential modulation, and for sinusoidal modulation $V_0 \cos(2\pi x/a)$, \begin{equation} B_\mathrm{e}=\frac{2\pi m^* V_0}{ae\hbar k_\mathrm{F}},\label{Be} \end{equation} where $m^*$ represents the effective mass of electrons. It follows then that $V_0$ can be deduced from experimental PMR, provided that the line shape of the modulation is known, once $B_\mathrm{e}$ is determined from the analysis of the experimental trace. An alternative and more familiar way to experimentally determine $V_0$ is from the amplitude of the CO\@. In the past, several groups compared $V_0$'s deduced by the two different methods for the same samples.
\cite{Kato97,Soibel97,Emeleus98,Long99} In all cases, the $V_0$'s deduced by PMR and by CO disagree considerably, with the former usually giving larger values. Part of the discrepancy may be attributable to underestimation of $V_0$ by CO, resulting from disregarding the proper treatment of the decay of the CO amplitude by scattering. \cite{Boggild95,Paltiel97,Endo00e} However, the most serious source of the disagreement appears to lie in the difficulty in identifying the position of $B_\mathrm{e}$ from an experimental PMR trace, which was taken, on a rather \textit{ad hoc} basis, as either the peak, \cite{Kato97,Soibel97,Emeleus98} or the position of steepest slope. \cite{Long99} It is therefore necessary to find out the rule to determine the exact position of $B_\mathrm{e}$. This is one of the purposes of the present paper. We will show below that $B_\mathrm{e}$ can be identified, when $V_0$ is small enough, as an inflection point at which the curvature of the PMR changes from concave down to concave up. Another target of the present paper is the magnitude of the PMR\@. The magnitude should also depend on $V_0$, as well as on other parameters of the ULSL samples. The subject has been treated in theories by both numerical \cite{Menne98,Zwerschke98} and analytical \cite{Mirlin01} calculations. However, the analysis of experimental PMR has so far been restricted to the qualitative observation \cite{Beton90P} that the magnitude increases with $V_0$. To the knowledge of the present authors, no effort has been made to date to quantitatively explain the magnitude of PMR using the full knowledge of experimentally obtained sample parameters: $V_0$, $n_e$, the mobility $\mu$, and the quantum, or single-particle, mobility $\mu_\mathrm{s}$. Such a quantitative analysis is carried out in the present paper for ULSL samples with relatively small periods and modulation amplitudes, which allow reliable values of $V_0$ to be determined from the CO amplitude. \cite{Endo00e} The result demonstrates that the magnetoresistance attributable to SO is much smaller than the observed PMR\@. We propose an alternative mechanism that accounts for the major part of the PMR\@. After detailing the ULSL samples used in the present study in Sec.\ \ref{smpldetail}, we delineate in Sec.\ \ref{SOcalc} a simple analytic formula to be used to estimate the contribution of SO to the PMR\@. Experimentally obtained PMR traces are presented and compared to the estimated SO contribution in Sec.\ \ref{exppmr}, leading to the introduction of another mechanism, the contribution from the drift velocity of incompleted cyclotron orbits, in Sec.\ \ref{driftvel}, which we believe dominates the PMR for our present ULSL samples. Some discussion is given in Sec.\ \ref{discussion}, followed by concluding remarks in Sec.\ \ref{conclusion}. \section{Characteristics of samples \label{smpldetail}} \begin{table} \caption{List of samples\label{Sampletbl}} \begin{ruledtabular} \begin{tabular}{cccc} No. & $a$ (nm) & Hall-bar size ($\mu$m$^{2}$) & back gate \\ \hline 1 & 184 & 64$\times$37 & $\times$ \\ 2 & 184 & 64$\times$37 & $\times$ \\ 3 & 161 & 44$\times$16 & $\bigcirc$ \\ 4 & 138 & 44$\times$16 & $\bigcirc$ \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[tb] \includegraphics[bbllx=50,bblly=20,bburx=780,bbury=550,width=8.5cm]{Fig1.eps}% \caption{(a) Schematic drawing of the sample with voltage probes for measuring the modulated (ULSL) and unmodulated (reference) parts. (b),(c) Scanning electron micrographs of the EB-resist gratings that introduce strain-induced potential modulation.
Darker areas correspond to the resist. A standard line-and-space pattern (b) was utilized for samples 1, 3 and 4. Sample 2 employed a patterned grating (c) designed to partially relax the strain. \label{samples}} \end{figure} We examined four ULSL samples with differing periods $a$, as tabulated in Table \ref{Sampletbl}. The samples were prepared from the same GaAs/AlGaAs single-heterostructure 2DEG wafer, with the heterointerface residing at the depth $d$=90 nm from the surface and an Al$_{0.3}$Ga$_{0.7}$As spacer layer of thickness $d_\mathrm{s}$=40 nm. A grating of negative electron-beam (EB) resist placed on the surface introduced potential modulation at the 2DEG plane through the strain-induced piezoelectric effect. \cite{Skuras97} To maximize the effect, the direction of modulation ($x$-direction) was chosen to be along a $<$110$>$ direction. For a fixed crystallographic direction, the amplitude of the strain-induced modulation is mainly determined by the ratio $a/d$. Figures \ref{samples}(b) and (c) display scanning electron micrographs of the gratings. Samples 1, 3, and 4 utilized a simple line-and-space pattern as shown in (b). For sample 2, we employed a patterned grating depicted in (c); the ``line'' of resist was periodically notched every 575 nm, each notch being 46 nm wide. The width was intended to be small enough (much smaller than $d$) that the notches introduce only negligibly small modulation themselves but act to partially relax the strain. The use of the patterned grating enabled us to attain a smaller $V_0$ than in sample 1, which has the same period $a$=184 nm. As shown in Fig.\ \ref{samples}(a), we used Hall bars with sets of voltage probes that enabled us to measure the section with the grating (ULSL) and that without (reference) at the same time. Resistivity was measured by a standard low-frequency ac lock-in technique. Measurements were carried out at $T$=1.4 and 4.2 K, both yielding essentially the same results. We present the results for 4.2 K in the following. \begin{figure}[bth] \includegraphics[bbllx=20,bblly=70,bburx=430,bbury=800,width=8.5cm]{Fig2.eps}% \caption{(Color online) Sample parameters as a function of the electron density $n_e$, varied either by LED illumination (open symbols) or by back-gate voltage (solid symbols). (a) Modulation amplitude $V_0$. (b) Mobility $\mu$. (c) Damping parameter $\mu_\mathrm{W}$ of the CO\@. The quantum mobility $\mu_\mathrm{s}$ for samples 1 and 2 is also plotted, by $\times$ and $+$, respectively. Inset in (a) shows $\Delta\rho_{xx}^\mathrm{osc}/\rho_0$ experimentally obtained by subtracting a slowly varying background from the magnetoresistance trace (for sample 2 at $n_e$=2.20$\times$10$^{15}$ m$^{-2}$, shown by solid trace) and calculated by Eq.\ (\ref{COosc}) using $V_0$ and $\mu_\mathrm{W}$ as fitting parameters (dotted trace, showing almost perfect overlap with the experimental trace). \label{properties}} \end{figure} To investigate the behavior of the PMR under various values of the sample parameters, $n_e$ was varied from about 2.0 to 3.0$\times$10$^{15}$ m$^{-2}$, employing the persistent photoconductivity effect through step-by-step illumination with an infrared light-emitting diode (LED). Samples 3 and 4 were equipped with a back gate, which was also used to alter $n_e$ approximately between 1.7 and 2.0$\times$10$^{15}$ m$^{-2}$. The electron density $n_e$ was measured by the period of the CO or the Shubnikov-de Haas (SdH) oscillation, and also by the Hall resistivity.
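Since several of the formulas below involve the Fermi wave number and the Fermi energy, it is convenient to have them as functions of $n_e$. The following Python sketch (a numerical check only) computes $k_\mathrm{F}=\sqrt{2\pi n_e}$ and $E_\mathrm{F}=\pi\hbar^2 n_e/m^*$, assuming the GaAs effective mass $m^*$=0.067$m_e$ quoted below; the printed values reproduce the 6$-$11 meV range of $E_\mathrm{F}$ stated in the next section.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
m_st = 0.067 * m_e       # GaAs effective mass
meV  = 1.602176634e-22   # J per meV

def fermi(n_e):
    """Fermi wave number [1/m] and Fermi energy [J] of a 2DEG of density n_e [1/m^2]."""
    k_F = np.sqrt(2 * np.pi * n_e)
    E_F = np.pi * hbar**2 * n_e / m_st
    return k_F, E_F

for n_e in (1.7e15, 2.0e15, 3.0e15):
    k_F, E_F = fermi(n_e)
    print(f"n_e = {n_e:.1e}: k_F = {k_F:.3e} 1/m, E_F = {E_F/meV:.1f} meV")
# E_F spans roughly 6-11 meV over this density range, so eta = V0/E_F
# stays well below 1 for modulation amplitudes of a few tenths of a meV.
\end{verbatim}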
Concomitant with the change of $n_e$, the parameters associated with random potential scattering, $\mu$ and $\mu_\mathrm{s}$, also vary. Plots of $\mu$ and $\mu_\mathrm{s}$ (the latter only for samples 1 and 2) versus $n_e$ are presented in Figs.\ \ref{properties}(b) and (c), respectively. The quantum mobility $\mu_\mathrm{s}$ is deduced from the damping of the SdH oscillation \cite{Coleridge91} of the unmodulated section of the Hall bar. The amplitude $V_0$ of the modulation was evaluated from the amplitude of the CO\@. In a previous publication, \cite{Endo00e} the present authors reported that the oscillatory part of the magnetoresistance is given, for $V_0$ much smaller than the Fermi energy $E_\mathrm{F}$, $\eta$$\equiv$$V_0/E_\mathrm{F}$$\ll$1, by \begin{eqnarray} \frac{\Delta\rho_{xx}^\mathrm{osc}}{\rho_0} &=& A\left(\frac{\pi}{\mu_\mathrm{W}B}\right)A\left(\frac{T}{T_a}\right) \nonumber \\ & &\!\!\!\!\!\! \frac{1}{2\sqrt{2\pi}}\frac{1}{\Phi_0{\mu_\mathrm{B}^*}^2}\frac{\mu^2}{a}\frac{V_0^2}{n_e^{3/2}}|B| \sin\left(2\pi \frac{2R_\mathrm{c}}{a}\right),\label{COosc} \end{eqnarray} where $A(x)$=$x/\sinh(x)$, $k_\mathrm{B}T_a$$\equiv$$(1/2\pi^2)(ak_\mathrm{F}/2)\hbar\omega_\mathrm{c}$ with $\omega_\mathrm{c}$=$e|B|/m^*$ the cyclotron angular frequency, $\Phi_0$=$h/e$ the flux quantum, and $\mu_\mathrm{B}^*$$\equiv$$e\hbar/2m^*$($\simeq$0.864 meV/T for GaAs, an analogue of the Bohr magneton with the electron mass replaced by the effective mass $m^*$$\simeq$0.067$m_e$). Apart from the factor $A(\pi/\mu_\mathrm{W}B)$, which governs the damping of the CO by scattering, Eq.\ (\ref{COosc}) is identical to the formula calculated by first-order perturbation theory. \cite{Peeters92} The parameter $\mu_\mathrm{W}$ was shown in Ref.\ \onlinecite{Endo00e} to be approximately equal to $\mu_\mathrm{s}$, in accordance with the formula given for low magnetic field in the theory by Mirlin and W\"olfle. \cite{Mirlin98} The measured $\Delta\rho_{xx}^\mathrm{osc}/\rho_0$ for the present samples is also described by Eq.\ (\ref{COosc}) very well, as exemplified in the inset of Fig.\ \ref{properties}(a). So far, we have treated the modulation as having a simple sinusoidal profile $V_0\cos(2\pi x/a)$, and have tacitly neglected the possible presence of higher harmonics. Although the Fourier transforms of $\Delta\rho_{xx}^\mathrm{osc}/\rho_0$ do reveal a small fraction of the second (and, for samples 1 and 2, also the third) harmonic, \cite{Endo05HH} their smallness, along with the power dependence on $V_0$ of the relevant resistivities [to be discussed later, see Eqs.\ (\ref{aprhoxx}) and (\ref{drift})], justifies neglecting them to a good approximation. The parameters $V_0$ and $\mu_\mathrm{W}$ obtained by fitting Eq.\ (\ref{COosc}) to the experimental traces are plotted in Figs.\ \ref{properties}(a) and (c), respectively. The latter shows $\mu_\mathrm{W}$$\simeq$$\mu_\mathrm{s}$, confirming our previous result. $V_0$ does not depend very much on $n_e$ when $n_e$ is varied by LED illumination, but increases with decreasing $n_e$ when the back gate is used, the latter behavior resembling a previous report. \cite{Soibel97} The dependence of $V_0$ on $n_e$ is discussed in detail elsewhere. \cite{Endo05MA} Since $a$ and $d$ are of comparable size, $V_0$ rapidly increases with the increase of $a$ (with the exception, of course, of sample 2, whose amplitude is close to that of sample 3).
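As an illustration of how $V_0$ and $\mu_\mathrm{W}$ enter the fit, Eq.\ (\ref{COosc}) can be evaluated directly. The Python sketch below implements the formula in SI units; the values of $\mu$, $\mu_\mathrm{W}$, $V_0$, and $T$ passed at the end are illustrative placeholders, not fitted values from the present samples.
\begin{verbatim}
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19
k_B, m_st = 1.380649e-23, 0.067 * 9.1093837015e-31
Phi0 = 2 * np.pi * hbar / e        # flux quantum h/e
muB  = e * hbar / (2 * m_st)       # mu_B^* for GaAs

def A(x):
    return x / np.sinh(x)          # damping factor A(x) = x/sinh(x)

def drho_osc(B, a, n_e, mu, mu_W, V0, T):
    """Oscillatory magnetoresistance of Eq. (COosc); V0 in joules."""
    k_F = np.sqrt(2 * np.pi * n_e)
    R_c = hbar * k_F / (e * np.abs(B))
    w_c = e * np.abs(B) / m_st
    T_a = (a * k_F / 2) * hbar * w_c / (2 * np.pi**2 * k_B)
    pref = (mu**2 * V0**2 * np.abs(B)
            / (2 * np.sqrt(2 * np.pi) * Phi0 * muB**2 * a * n_e**1.5))
    return A(np.pi / (mu_W * np.abs(B))) * A(T / T_a) * pref \
        * np.sin(4 * np.pi * R_c / a)

B = np.linspace(0.05, 0.5, 1000)   # tesla
osc = drho_osc(B, a=184e-9, n_e=2.2e15, mu=60.0, mu_W=7.0,
               V0=0.2e-3 * e, T=4.2)
\end{verbatim}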
Since 6$\le$$E_\mathrm{F}$$\le$11 meV for the range of $n_e$ encompassed in the present study, the condition $\eta$$\ll$1 is fulfilled for all the measurements shown here ($\eta$=0.010$-$0.034). \section{Calculation of the contribution of streaming orbits\label{SOcalc}} \begin{figure}[tb] \includegraphics[bbllx=10,bblly=90,bburx=590,bbury=250,width=8.5cm]{Fig3.eps}% \caption{(Color online) Fermi surface in the $x$-$p_x$-$p_y$ space for (a) $\beta$=0, (b) 0$<$$\beta$$<$1, and (c) $\beta$=1. Each electron orbit is specified by the cross section of the Fermi surface by a constant-$p_y$ plane. Streaming orbits are present in the shaded area.\label{Fermi}} \end{figure} \begin{figure} \includegraphics[bbllx=40,bblly=50,bburx=560,bbury=400,width=8.5cm]{Fig4.eps}% \caption{(Color online) Functions $F(\beta)$, $G(\beta)$, $\phi(\beta)$ (thin dotted, solid, and dash-dotted lines, respectively, left axis), and $\beta^2G(\beta)$ (thick solid line, right axis).\label{FGbeta}} \end{figure} In this section, we describe a simple analytic calculation for estimating the contribution of SO to the magnetoresistance. The calculation is a slight modification of a theory by Matulis and Peeters, \cite{Matulis00} in which the semiclassical conductance was calculated for a 2DEG under unidirectional magnetic-field modulation with zero average. We modify the theory to the case of potential modulation $V_0 \cos(2\pi x/a)$, and extend it to include a uniform magnetic field $-B$ (the minus sign is selected just for convenience). The Hamiltonian describing the motion of electrons is given by \begin{equation} \varepsilon(x,p_x,p_y) = \frac{1}{2m^*}\left[p_x^2+(p_y-eBx)^2\right]+V_0\cos\left(\frac{2 \pi x}{a}\right),\label{Hamiltonian} \end{equation} in the Landau gauge {\bf A}=$(0,-Bx,0)$, where {\bf p}=$(p_x,p_y)$ denotes the canonical momentum. Using the electron velocities \begin{equation} v_x=\frac{\partial \varepsilon}{\partial p_x}=\frac{p_x}{m^*},\quad v_y=\frac{\partial \varepsilon}{\partial p_y}=\frac{p_y-eBx}{m^*},\label{velocity} \end{equation} the conductivity tensor reads (including the factor 2 for spin degeneracy) \begin{equation} \sigma_{ij} = \frac{2 e^2}{(2\pi \hbar)^2}\frac{1}{L_x}\int_0^{L_x}\!\!\!dx\int_{-\infty}^{\infty}\!\!\!dp_x\int_{-\infty}^{\infty}\!\!\!dp_y\tau_\mathrm{s} v_iv_j (-\frac{\partial f}{\partial \varepsilon}),\label{conductivity} \end{equation} where $L_x$ represents the extent of the sample in the $x$-direction and $\tau_\mathrm{s}$ denotes an appropriate scattering time to be discussed later. \cite{EquiChambers} Since the system is periodic in the $x$-direction and each SO is confined in a single period, the integration over $x$, $L_x^{-1}\int_0^{L_x} dx$, can be reduced to one period, $a^{-1}\int_0^a dx$, in calculating the conductivity from SO\@. The derivative $-\partial f/\partial \varepsilon$ of the Fermi distribution function $f(\varepsilon)$=$\{1+\exp[(\varepsilon-E_\mathrm{F})/k_\mathrm{B}T]\}^{-1}$ may be approximated by the delta function $\delta(\varepsilon-E_\mathrm{F})$ at low temperatures, $T$$\ll$$E_\mathrm{F}/k_\mathrm{B}$. Therefore the problem boils down to the integration of $\tau_\mathrm{s}v_iv_j$ over the relevant part of the Fermi surface $\varepsilon(x,p_x,p_y)=E_\mathrm{F}$ in the $(x,p_x,p_y)$ space. The Fermi surface is depicted in Fig.\ \ref{Fermi} for three different values of $\beta$$\equiv$$B/B_\mathrm{e}$.
Since the Hamiltonian Eq.\ (\ref{Hamiltonian}) does not explicitly include $y$, $p_y$ is a constant of motion that specifies an orbit; an orbit is given by the cross section of the Fermi surface by a constant-$p_y$ plane. The presence of SO is indicated by the shaded area in Fig.\ \ref{Fermi}. The ratio of SO to all the orbits is maximum at $\beta$=0, decreases with increasing $\beta$, and vanishes at $\beta$=1. Before continuing the calculation, we now discuss the appropriate scattering time to choose. At variance with Ref.\ \onlinecite{Matulis00}, we adopt here the unweighted single-particle scattering time $\tau_\mathrm{s}$=$\mu_\mathrm{s}m^*/e$. The choice is based on the fact that the angle $\theta$=$\arctan(v_x/v_y)$ of the direction of the velocity with respect to the $y$-axis is very small for electrons belonging to SO in our ULSL samples having small $\eta$=$V_0/E_\mathrm{F}$. The maximum of $|\theta|$ at a position $u$$\equiv$$2\pi x/a$ can be approximately written as $[\eta\varphi(\beta,u)]^{1/2}$ with \begin{equation} \varphi(\beta,u)\equiv\sqrt{1-\beta^2}+\beta\arcsin\beta-\cos u-\beta u, \label{varphi} \end{equation} whose maximum over $u$ is given by $[2\eta\phi(\beta)]^{1/2}$ with $\phi(\beta)$$\equiv$$\sqrt{1-\beta^2}+\beta\arcsin{\beta}-(\pi/2)\beta$, where $|\phi(\beta)|$$\leq$1 for $|\beta|$$\leq$1 (see Fig.\ \ref{FGbeta}). Since $|\theta|$ is much smaller than the average scattering angle $\theta_\mathrm{scat}$$\sim$$\sqrt{2\mu_\mathrm{s}/\mu}$$\simeq$0.5 rad estimated for our present 2DEG wafer, electrons are kicked out of SO by virtually any scattering event, regardless of the scattering angle involved, making $\tau_\mathrm{s}$ the appropriate scattering time. The integration in Eq.\ (\ref{conductivity}) over the shaded area gives the correction to the conductivity owing to SO, to the leading order in $\eta$, as \begin{eqnarray} \frac{\delta\sigma_{xx}^\mathrm{SO}}{\sigma_0} &=& -\frac{2}{2\pi^2}\frac{\mu_\mathrm{s}}{\mu}\int_{\arcsin\beta}^{u_1(\beta)}\frac{2}{3}[\eta\varphi(\beta,u)]^{3/2}du \nonumber \\ &=& -\frac{32\sqrt{2}}{9\pi^2}\frac{\mu_\mathrm{s}}{\mu}\eta^{3/2}F(\beta),\label{sgmxx} \end{eqnarray} where the minus sign results because electrons trapped in SO cannot carry current over the (macroscopic) sample in the $x$-direction and therefore should be deducted from the conductivity, and \begin{eqnarray} \frac{\delta\sigma_{yy}^\mathrm{SO}}{\sigma_0} &=& \frac{2}{2\pi^2}\frac{\mu_\mathrm{s}}{\mu}\int_{\arcsin\beta}^{u_1(\beta)}2[\eta\varphi(\beta,u)]^{1/2}du \nonumber \\ &=& \frac{8\sqrt{2}}{\pi^2}\frac{\mu_\mathrm{s}}{\mu}\eta^{1/2}G(\beta),\label{sgmyy} \end{eqnarray} and $\delta\sigma_{xy}^\mathrm{SO}$=$\delta\sigma_{yx}^\mathrm{SO}$=0, where $\sigma_0$=$E_\mathrm{F}e^2\tau/\pi\hbar^2$ represents the Drude conductivity. The factor 2 in the first equalities accounts for the two equivalent SO areas at the upper and the lower bounds of $p_y$. The functions $F(\beta)$ and $G(\beta)$ are defined as \begin{equation} F(\beta)\equiv\frac{3}{16\sqrt{2}}\int_{\arcsin\beta}^{u_1(\beta)}[\varphi(\beta,u)]^{3/2}du \label{Fbeta} \end{equation} and \begin{equation} G(\beta)\equiv\frac{1}{4\sqrt{2}}\int_{\arcsin\beta}^{u_1(\beta)}[\varphi(\beta,u)]^{1/2}du, \label{Gbeta} \end{equation} where the upper limit of integration $u_1(\beta)$ is the solution of $\varphi(\beta,u_1)$=0 other than $\arcsin\beta$. Both $F(\beta)$ and $G(\beta)$ monotonically decrease from 1 to 0 while $\beta$ varies from 0 to 1, as shown in Fig.\ \ref{FGbeta}.
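The functions $F(\beta)$ and $G(\beta)$ are straightforward to evaluate numerically from Eqs.\ (\ref{varphi})$-$(\ref{Gbeta}). A minimal Python sketch is given below; it brackets $u_1(\beta)$ between the maximum of $\varphi$ at $u$=$\pi-\arcsin\beta$ and $u$=$2\pi+\arcsin\beta$, where $\varphi$=$-2\pi\beta$$<$0, and reproduces $F(0)$=$G(0)$=1.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def phi(beta, u):
    """varphi(beta, u) of Eq. (varphi)."""
    return np.sqrt(1 - beta**2) + beta * np.arcsin(beta) - np.cos(u) - beta * u

def u1(beta):
    """Second zero of varphi(beta, .), sought past its maximum at pi - arcsin(beta)."""
    lo = np.pi - np.arcsin(beta) + 1e-12
    return brentq(lambda u: phi(beta, u), lo, 2 * np.pi + np.arcsin(beta))

def F(beta):
    # clip guards against tiny negative values of varphi from roundoff at the endpoints
    val, _ = quad(lambda u: np.clip(phi(beta, u), 0, None)**1.5,
                  np.arcsin(beta), u1(beta))
    return 3 * val / (16 * np.sqrt(2))

def G(beta):
    val, _ = quad(lambda u: np.sqrt(np.clip(phi(beta, u), 0, None)),
                  np.arcsin(beta), u1(beta))
    return val / (4 * np.sqrt(2))

print(F(1e-9), G(1e-9))          # both tend to 1 as beta -> 0
print(0.6**2 * G(0.6))           # beta^2 G(beta) peaks near beta ~ 0.6
\end{verbatim}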
Since $\delta\sigma_{xx}^\mathrm{SO}$$/$$\delta\sigma_{yy}^\mathrm{SO}$$\propto$$\eta$, $\delta\sigma_{xx}^\mathrm{SO}$$\ll$$\delta\sigma_{yy}^\mathrm{SO}$ for $\eta$$\ll$1. The correction to the resistivity by SO can be obtained by inverting the conductivity tensor, \begin{eqnarray} \frac{\delta\rho_{xx}^\mathrm{SO}}{\rho_0} &=& \left\{\frac{\delta\sigma_{xx}^\mathrm{SO}}{\sigma_0}+\left[1+(B \mu)^2\frac{\delta\sigma_{yy}^\mathrm{SO}/\sigma_0}{1+\delta\sigma_{yy}^\mathrm{SO}/\sigma_0}\right]^{-1}\right\}^{-1}\!\!\!\!\!-1 \nonumber \\ &\simeq& -\frac{\delta\sigma_{xx}^\mathrm{SO}}{\sigma_0}+(B \mu)^2 \frac{\delta\sigma_{yy}^\mathrm{SO}}{\sigma_0} \nonumber \\ &=& \frac{32\sqrt{2}}{9\pi^2}\frac{1}{{(\Phi_0\mu_\mathrm{B}^*)}^{3/2}}\frac{\mu_\mathrm{s}}{\mu}\frac{V_0^{3/2}}{n_e^{3/2}}F(\beta) \nonumber \\ & & + \frac{4\sqrt{2}}{\pi}\frac{1}{{\Phi_0}^{1/2}{\mu_\mathrm{B}^*}^{5/2}}\frac{\mu_\mathrm{s}\mu}{a^2}\frac{V_0^{5/2}}{n_e^{3/2}}\beta^2G(\beta). \end{eqnarray} For small $\eta$, $\delta\sigma_{xx}^\mathrm{SO}$$/$$\sigma_{0}$ can be neglected and consequently \begin{equation} \frac{\delta\rho_{xx}^\mathrm{SO}}{\rho_0} \simeq \frac{4\sqrt{2}}{\pi}\frac{1}{{\Phi_0}^{1/2}{\mu_\mathrm{B}^*}^{5/2}}\frac{\mu_\mathrm{s}\mu}{a^2}\frac{V_0^{5/2}}{n_e^{3/2}}\beta^2G(\beta).\label{aprhoxx} \end{equation} The correction therefore increases in proportion to $\mu$, $\mu_\mathrm{s}$, and $V_0^{5/2}$, and decreases with $a$ and $n_e$. The function $\beta^2 G(\beta)$ is also plotted in Fig.\ \ref{FGbeta}; it takes its maximum at $\beta$$\simeq$0.6 and vanishes at $\beta$=1. Our final result Eq.\ (\ref{aprhoxx}) is identical to Eq.\ (41) of Ref.\ \onlinecite{Mirlin01}, which is deduced for the case $\eta$$\ll$$\mu_\mathrm{s}/\mu$. (For larger $\eta$, Ref.\ \onlinecite{Mirlin01} gives a somewhat different formula that is proportional to $V_0^{7/2}$.) Note that our $\phi(\beta)$ and $G(\beta)$ are identical to the functions denoted as $\Phi_1(\beta)$ and $\Phi(\beta)$, respectively, in Ref.\ \onlinecite{Mirlin01}. In the following section, Eq.\ (\ref{aprhoxx}) will be compared with the experimental traces. \section{Positive magnetoresistance obtained by experiment\label{exppmr}} \begin{figure}[tb] \includegraphics[bbllx=20,bblly=30,bburx=600,bbury=710,width=8.5cm]{Fig5.eps}% \caption{(Color online) Magnetoresistance traces for various values of $n_e$. Selected values of $n_e$ are noted in the figure (in 10$^{15}$ m$^{-2}$). Dotted traces indicate that the $n_e$ is attained by LED illumination. Note that the vertical scale is expanded by five times for sample 4.\label{PMRraw}} \end{figure} \begin{figure}[tb] \includegraphics[bbllx=20,bblly=80,bburx=600,bbury=780,width=8.5cm]{Fig6.eps}% \caption{(Color online) Replot of Fig.\ \ref{PMRraw} with abscissa normalized by the extinction field $B_\mathrm{e}$ and ordinate by the sample-parameter-dependent prefactor in Eq.\ (\ref{aprhoxx}), $\alpha \mu_\mathrm{W} \mu V_0^{5/2} a^{-2} n_e^{-3/2}$, with the coefficient $\alpha$$\equiv$$4\sqrt{2}\pi^{-1}\Phi_0^{-1/2}{\mu_\mathrm{B}^*}^{-5/2}$$\simeq$4.04$\times$10$^7$ T$^2$meV$^{-5/2}$m$^{-1}$. Vertical scale is expanded twice for sample 4. The function $\beta^2G(\beta)$ is also plotted for comparison. \label{scale}} \end{figure} Figure \ref{PMRraw} shows the low-field magnetoresistance traces for samples 1$-$4 for various values of $n_e$. Solid curves represent measurements before illumination ($n_e$ varied by the back gate) and dotted curves are traces for $n_e$ varied by LED illumination (back-gate voltage=0 V).
The magnitude of the PMR shows a clear tendency to be larger for samples having larger $V_0$. By contrast, the peak positions do not vary much between samples. To facilitate quantitative comparison with Eq.\ (\ref{aprhoxx}), Fig.\ \ref{PMRraw} is replotted in Fig.\ \ref{scale}, with both the horizontal and vertical axes scaled with appropriate parameters: the horizontal axis is normalized by $B_\mathrm{e}$ calculated by Eq.\ (\ref{Be}) using the experimentally deduced $n_e$ and $V_0$ shown in Fig.\ \ref{properties}; the vertical axis is normalized by the prefactor in Eq.\ (\ref{aprhoxx}) with $\mu_\mathrm{s}$ replaced by $\mu_\mathrm{W}$, identifying the two parameters. \cite{musmuw} The magnetoresistance owing to SO will then be represented by a universal function $\beta^2G(\beta)$, which is also plotted in the figures. It is clear from the figures that the experimentally observed PMR is much larger than that calculated by Eq.\ (\ref{aprhoxx}). Furthermore, the peaks appear at $B$$>$$B_\mathrm{e}$, i.e., where the SO have already disappeared, for all traces of samples 1$-$3 and for the traces with smaller $n_e$ of sample 4. The peak position is by no means fixed, but depends on the sample parameters. This observation argues against the interpretation that the PMR originates solely from SO\@. Rather, we interpret that SO accounts for only a small fraction of the PMR, as suggested by Fig.\ \ref{scale}, and that the rest is ascribed to another effect, to be discussed in the next section. In fact, humps that appear to correspond to the component $\beta^2G(\beta)$ can readily be recognized in the traces with larger $n_e$ for sample 4, superposed on a slowly increasing component of the PMR\@. The humps terminate at around $|\beta|$=1, where the total PMR changes the sign of its curvature. With the increase of $\eta$($\propto$$V_0/n_e$), either by decreasing $n_e$ (upper traces for sample 4) or by increasing $V_0$ (samples 1$-$3), $\beta^2G(\beta)$ makes a progressively smaller contribution to the total PMR, and becomes difficult to distinguish from the background. \begin{figure} \includegraphics[bbllx=20,bblly=100,bburx=580,bbury=750,width=8.5cm]{Fig7.eps}% \caption{(Color online) (a) Magnetoresistance traces for sample 4 with the inflection point $B_\mathrm{inf}$ marked by downward open triangles. Traces are offset proportionally to the change in $n_e$. Selected values of $n_e$ in 10$^{15}$ m$^{-2}$ are noted in the figure. (b) Illustration of the procedure to pick up $B_\mathrm{inf}$ (an example for $n_e$=2.33$\times$10$^{15}$ m$^{-2}$). The point at which the second derivative $(d^2/dB^2)(\Delta\rho_{xx}/\rho_0)$ (solid curve, right axis) crosses zero upward (marked by open downward triangle) is identified as $B_\mathrm{inf}$. $B_\mathrm{inf}$ is marked also on $\Delta\rho_{xx}/\rho_0$ (dotted curve, left axis). Shaded area indicates the contribution from SO\@. (c) Plot of $B_\mathrm{inf}$ versus $B_\mathrm{e}$ calculated by Eq.\ (\ref{Be}) using experimentally obtained $V_0$. The line represents $B_\mathrm{inf}$=$B_\mathrm{e}$. \label{offset}} \end{figure} As has been inferred just above, the interpretation that the contribution $\delta\rho_{xx}^\mathrm{SO}/\rho_0$ from SO is superimposed on another, slowly increasing background component offers an alternative way to determine $B_\mathrm{e}$: $B_\mathrm{e}$ can be identified with the end of the hump, namely, the inflection point $B_\mathrm{inf}$ where the curvature of the total PMR changes from concave down, inherited from $\beta^2G(\beta)$, to concave up.
To be more specific, $B_\mathrm{inf}$ is determined as the point where the second derivative $(d^2/dB^2)(\Delta\rho_{xx}/\rho_0)$ changes sign from negative to positive, as illustrated in Fig.\ \ref{offset}(b). The inflection point $B_\mathrm{inf}$ is marked by a downward open triangle both in the $(d^2/dB^2)(\Delta\rho_{xx}/\rho_0)$ (solid) and $\Delta\rho_{xx}/\rho_0$ (dotted) traces. [$(d^2/dB^2)(\Delta\rho_{xx}/\rho_0)$ shows oscillatory features at low field, which are attributed to the geometric resonance of Bragg-reflected cyclotron orbits. \cite{Endo05N}] Figure \ref{offset}(a) illustrates the shift of $B_\mathrm{inf}$ with $n_e$. The plot of $B_\mathrm{inf}$ versus $B_\mathrm{e}$ shown in Fig.\ \ref{offset}(c) demonstrates that $B_\mathrm{inf}$ is actually identifiable with $B_\mathrm{e}$. Thus it is now possible to deduce reliable values of $V_0$ from the PMR by replacing $B_\mathrm{e}$ with $B_\mathrm{inf}$ in Eq.\ (\ref{Be}). Unfortunately this method is applicable only to samples with very small $\eta$. For samples 1$-$3, it is difficult to find clear inflection points because of the dominance of the slowly increasing component; $(d^2/dB^2)(\Delta\rho_{xx}/\rho_0)$ only gradually approaches zero from below. In the subsequent section, we discuss the origin of the slowly increasing background component of the PMR\@. \section{Drift velocity of incompleted cyclotron orbits\label{driftvel}} \begin{figure}[tb] \includegraphics[bbllx=20,bblly=0,bburx=850,bbury=600,width=8.5cm]{Fig8.eps}% \caption{(Color online) Illustration of the ${\bf E}\times{\bf B}$ drift velocity $v_\mathrm{d}$ affecting the electrons during the cyclotron motion. Orbits are depicted neglecting the modification due to the modulation $V(x)$=$V_0\cos(qx)$ (drifting movement and slight variation of the velocity depending on $x$) for simplicity. Top diagrams represent slightly larger $B$ than bottom ones for both (a) and (b). On averaging $v_{\mathrm{d},y}$ along an orbit, most of the contribution comes from the minimum- and maximum-$x$ edges, as shown by solid arrows in the figure. Open arrows indicate the direction of $E_x$=$(qV_0/e)\sin(qx)$ at the edges. (a) For $B$ large enough that electrons can complete cycles before being scattered. Depending on $B$, the contributions of $v_\mathrm{d}$ at the two edges are constructive (top diagram, $2R_\mathrm{c}/a$=$n+1/4$ with $n$ integer) or destructive (bottom diagram, $2R_\mathrm{c}/a$=$n-1/4$), resulting in maxima and minima in the magnetoresistance, respectively. (b) For $B$ small enough that electrons are scattered before completing a cycle. The interrelation of $v_{\mathrm{d},y}$ at the two edges is then not simply determined by $B$; the edges affect the magnetoresistance independently. \label{cycorb}} \end{figure} An important point to be noticed is that even in the low magnetic-field range $|B|$$<$$B_\mathrm{e}$ where SO is present, most of the electrons are in cyclotron-like orbits, namely the cyclotron orbits slightly modified by the weak potential modulation, as is evident in Fig.\ \ref{Fermi}; SO accounts for only a small fraction, of order $\eta^{1/2}$, of all the orbits. Therefore, the contribution of these cyclotron-like orbits to the magnetoresistance should be taken into consideration in interpreting the PMR\@. We will show below that the slowly varying component of the PMR is attributable to the ${\bf E}\times{\bf B}$ drift velocity of the electrons in the cyclotron-like orbits that are scattered before completing a cycle.
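The procedure of Fig.\ \ref{offset}(b) is easy to automate. The Python sketch below (an illustration only, not the actual analysis code used here) locates the upward zero crossing of the smoothed second derivative of a measured trace and converts the resulting $B_\mathrm{inf}$$\simeq$$B_\mathrm{e}$ into $V_0$ by inverting Eq.\ (\ref{Be}).
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

hbar, e = 1.054571817e-34, 1.602176634e-19
m_st = 0.067 * 9.1093837015e-31

def find_B_inf(B, r, window=51, order=3):
    """Inflection point where d^2(r)/dB^2 crosses zero upward.
    B must be uniformly spaced; r is the measured Delta rho_xx / rho_0."""
    d2 = savgol_filter(r, window, order, deriv=2, delta=B[1] - B[0])
    up = np.where((d2[:-1] < 0) & (d2[1:] >= 0))[0]   # '-' to '+' sign changes
    return B[up[0]] if up.size else np.nan

def V0_from_Be(B_e, a, n_e):
    """Invert Eq. (Be): V0 = B_e a e hbar k_F / (2 pi m*), in joules."""
    k_F = np.sqrt(2 * np.pi * n_e)
    return B_e * a * e * hbar * k_F / (2 * np.pi * m_st)
\end{verbatim}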
It is well established that the ${\bf E}\times{\bf B}$ drift velocity, resulting from the gradient of the modulation potential ${\bf E}$=$-\nabla V/(-e)$ and the applied magnetic field ${\bf B}$=$(0,0,B)$, is the origin of the CO\@. \cite{Beenakker89} For unidirectional modulation $V(x)$=$V_0\cos(qx)$ with $q$=$2\pi /a$, the drift velocity ${\bf v}_\mathrm{d}$=$({\bf E}\times{\bf B})/B^2$ has only the $y$-component, \begin{equation} v_{\mathrm{d},y}=\frac{qV_0}{eB}\sin(qx).\label{vdy} \end{equation} Electrons acquire $v_{\mathrm{d},y}$ during the course of a cyclotron revolution; its sign alternates rapidly except when the electrons are traveling nearly parallel to the modulation ($\theta$$\simeq$0, $\pi$), i.e., around either the rightmost (maximum-$x$) or the leftmost (minimum-$x$) edge. Therefore, the contribution of the drift velocity to the conductivity comes almost exclusively from the two edges, as depicted in Fig.\ \ref{cycorb} (a); this has actually been verified experimentally in Ref. \onlinecite{Endo00H}. The CO results from the alternation, as the magnetic field is swept, between constructive and destructive addition of the effects from the two edges, as illustrated by the top and the bottom cyclotron orbits in Fig.\ \ref{cycorb} (a), respectively. With the decrease of the magnetic field, the cyclotron radius $R_\mathrm{c}$ increases and consequently the probability of electrons being scattered before traveling from one edge to the other increases. As a result, the distinction between the constructive and destructive cases is blurred, letting the CO amplitude diminish more rapidly than predicted by the theories \cite{Beenakker89,Peeters92} neglecting such scattering. The absence of CO at lower magnetic fields signifies that electrons are mostly scattered before traveling to the other edge. Although the correlation between the local drift velocities at the two edges is lost at such magnetic fields (Fig.\ \ref{cycorb} (b)), each edge can independently contribute to the conductivity. It is to this effect that we ascribe the major part of the PMR in our ULSL samples. Note that the onset of the CO basically coincides with the end of the PMR, bolstering this interpretation. It can be shown, by an approximate analytic treatment of the Boltzmann equation, that the effect actually gives rise to PMR with the right order of magnitude to explain the experimentally observed slowly-varying component. For this purpose, we make use of Chambers' formula, \cite{ChambersR69,ChambersR80,Gerhardts96} representing the relaxation-time approximation of the Boltzmann equation, to obtain, from the drift velocity, the component $D_{yy}$ of the diffusion tensor, \begin{equation} D_{yy}=\int_{0}^{\infty} e^{-t/\tau}\langle v_{\mathrm{d},y}(t)v_{\mathrm{d},y}(0)\rangle dt, \label{Dyy} \end{equation} where $\langle ... \rangle$ signifies averaging over all possible initial conditions for the motion of electrons along the trajectories. Einstein's relation is then used to obtain the corresponding increment in the conductivity, $\delta\sigma_{yy}$=$e^2D(E_\mathrm{F})D_{yy}$ with $D(E_\mathrm{F})$=$m^*/\pi \hbar^2$=$(\Phi_0 \mu_\mathrm{B}^*)^{-1}$ the density of states, and finally this is translated to the resistivity by tensor inversion, $\delta\rho_{xx}/\rho_0$=$(\omega_\mathrm{c}\tau)^2\delta\sigma_{yy}/\sigma_0$.
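The statement that the drift contribution accumulates only at the two edges can be checked numerically in a few lines. In the Python sketch below (parameter values illustrative), the running integral of $\sin[q(X+R_\mathrm{c}\cos\theta)]$ over one revolution stays flat wherever the integrand oscillates rapidly and jumps only near the stationary points $\theta$=0 and $\theta$=$\pm\pi$ of $\cos\theta$, i.e., at the rightmost and leftmost edges.
\begin{verbatim}
import numpy as np

def cumulative_drift(qR_c, qX, n=200001):
    """Running integral of sin(qX + qR_c cos(theta)) over one revolution.
    Rapid oscillation cancels except near theta = 0 and +/-pi, where
    cos(theta) is stationary -- the two 'edges' of the cyclotron orbit."""
    theta = np.linspace(-np.pi, np.pi, n)
    integrand = np.sin(qX + qR_c * np.cos(theta))
    return theta, np.cumsum(integrand) * (theta[1] - theta[0])

theta, cum = cumulative_drift(qR_c=300.0, qX=0.7)  # qR_c >> 1: low-field regime
# 'cum' is flat except for step-like jumps near theta = 0 and +/-pi,
# confirming that the drift contribution comes from the orbit edges.
\end{verbatim}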
We use the unperturbed cyclotron trajectory, $x$=$X+R_\mathrm{c}\cos\theta$, for simplicity, neglecting the modification of the orbit by the modulation (accordingly, SO is neglected in this treatment), which is justified for small $\eta$. Since the initial condition can be specified by the guiding center position $X$ and the initial angle $\theta_0$, we can write \begin{eqnarray} \langle v_{\mathrm{d},y}(t)v_{\mathrm{d},y}(0)\rangle = \left( \frac{qV_0}{eB} \right)^2 \frac{1}{a}\int_{0}^{a}\!\!\!\! dX\frac{1}{2\pi}\int_{-\pi}^{\pi}\!\!\!\! d\theta_0 \hspace{2mm} \hspace{0.11\columnwidth} \nonumber \\ \sin\{ q[X+R_\mathrm{c}\cos(\theta_0+\omega_\mathrm{c}t)] \}\sin[q(X+R_\mathrm{c}\cos{\theta_0})]. \label{vyvy} \end{eqnarray} Therefore Eq.\ (\ref{Dyy}) can be rewritten, performing the integration over $t$ first, as \begin{equation} D_{yy}=\left( \frac{qV_0}{eB} \right)^2 \frac{1}{a}\int_{0}^{a}\!\!\!\! dX\frac{1}{2\pi}\int_{-\pi}^{\pi}\!\!\!\! d\theta_0 \sin[q(X+R_\mathrm{c}\cos{\theta_0})] I(\theta_0), \label{DyyI} \end{equation} with \begin{equation} I(\theta_0)=\int_{0}^{\infty} e^{-t/\tau}\sin\{q[X+R_\mathrm{c}\cos(\theta_0+\omega_\mathrm{c}t)]\}dt. \label{timeint} \end{equation} Evaluation of Eq.\ (\ref{DyyI}) for a large enough magnetic field reproduces the basic features of Eq.\ (\ref{COosc}), as will be shown in the Appendix. Here, we proceed with an approximation for small magnetic fields. The approximation is rather crude but is sufficient for the purpose of getting a rough estimate of the order of magnitude. Because of the exponential factor, only times $t$$\alt$$\tau$ contribute to the integration in Eq.\ (\ref{timeint}). Due to the rapidly oscillating nature of the $\sin\{ \}$ factor and the smallness of $\omega_\mathrm{c}t$$\alt$$\omega_\mathrm{c}\tau$, $I(\theta_0)$ takes a significant value only when $\theta_0$ resides in a narrow range slightly below $\sim$0 or $\sim$$\pi$, corresponding to the situation where electrons travel near the rightmost or the leftmost edge, respectively, within the scattering time. It turns out, by comparing with the numerical evaluation of Eq.\ (\ref{timeint}) using sample parameters for our present ULSL's, that the following approximate expressions roughly reproduce the right order of magnitude and the right oscillatory characteristics (the period and phase) of Eq.\ (\ref{timeint}) for low magnetic fields ($|B|$$\alt$0.02 T): \begin{equation} I(\theta_0) \simeq \left\{ \begin{array}{ll} \tau \pi [\sin(qX) J_0(qR_\mathrm{c})+\cos(qX) {\bf H}_0(qR_\mathrm{c})] & (\theta_0 \sim 0) \\ \tau \pi [\sin(qX) J_0(qR_\mathrm{c})-\cos(qX) {\bf H}_0(qR_\mathrm{c})] & (\theta_0 \sim \pi) \end{array} \right., \label{apprI} \end{equation} where $J_0(x)$ and ${\bf H}_0(x)$ represent the 0th-order Bessel and Struve functions of the first kind, respectively. The approximation can be obtained by replacing the exponential factor by a constant $\omega_\mathrm{c}\tau$ and limiting the range of the time integral to include only one edge. Here we noted that the integration of $\cos(qR_\mathrm{c}\cos\theta)$ and $\sin(qR_\mathrm{c}\cos\theta)$ over a range of $\theta$ including either the rightmost ($\theta$=0) or the leftmost ($\theta$=$\pi$) edge can be approximated (since only the close vicinity of the edges makes a significant contribution to the integration) by, \begin{equation} \int_\mathrm{rightmost}\!\!\!\!\!\! d\theta \simeq \int_{-\pi/2}^{\pi/2}\!\!\! d\theta,\hspace{3mm} \int_\mathrm{leftmost}\!\!\!\!\!\! d\theta \simeq \int_{\pi/2}^{3\pi/2}\!\!\!
d\theta, \end{equation} and used the relations \begin{eqnarray} \int_{-\pi/2}^{\pi/2}\!\!\!\!\!\!\!\! \cos(qR_\mathrm{c}\cos\theta)d\theta=\int_{\pi/2}^{3\pi/2}\!\!\!\!\!\!\!\! \cos(qR_\mathrm{c}\cos\theta)d\theta=\pi J_0(qR_\mathrm{c}) \nonumber \\ \mathrm{and}\hspace{0.8 \columnwidth} \nonumber \\ \int_{-\pi/2}^{\pi/2}\!\!\!\!\!\!\!\! \sin(qR_\mathrm{c}\cos\theta)d\theta=-\int_{\pi/2}^{3\pi/2}\!\!\!\!\!\!\!\! \sin(qR_\mathrm{c}\cos\theta)d\theta=\pi {\bf H}_0(qR_\mathrm{c}). \nonumber \\ \label{BesselStruve} \end{eqnarray} Substituting Eq.\ (\ref{apprI}) into Eq.\ (\ref{DyyI}) results in \begin{equation} D_{yy} \simeq \frac{\pi}{2} \tau \left(\frac{qV_0}{eB}\right)^2[J_0^2(qR_\mathrm{c})+{\bf H}_0^2(qR_\mathrm{c})], \label{Dyya} \end{equation} and with Einstein's relation one finally obtains \begin{equation} \frac{\delta\rho_{xx}^\mathrm{drift}}{\rho_0}=\sqrt{\frac{\pi}{2}}\frac{1}{\Phi_0{\mu_\mathrm{B}^*}^2}\frac{\mu^2}{a}\frac{V_0^2}{n_e^{3/2}}|B|. \label{drift} \end{equation} Here we made use of the asymptotic expressions $J_0(x)\approx(2/\pi x)^{1/2}\cos(x-\pi/4)$ and ${\bf H}_0(x)$$\approx$$(2/\pi x)^{1/2}\sin(x-\pi/4)$, valid for large enough $x$ (corresponding to small enough $B$). \begin{figure} \includegraphics[bbllx=20,bblly=80,bburx=600,bbury=780,width=8.5cm]{Fig9.eps}% \caption{(Color online) Replot of magnetoresistance traces normalized by the prefactor in Eq.\ (\ref{drift}), $\alpha^\prime\mu^2V_0^2a^{-1}n_e^{-3/2}$, with $\alpha^\prime$=$(\pi/2)^{1/2}\Phi_0^{-1}{\mu_\mathrm{B}^{*}}^{-2}$=4.06$\times$10$^{14}$ T meV$^{-2}$ m$^{-2}$, after subtracting the contribution from SO, $\delta\rho_{xx}^\mathrm{SO}/\rho_0$ in Eq.\ (\ref{aprhoxx}). The contribution attributable to the drift velocity of incompleted cyclotron orbits is given by $|B|$, which is also plotted by the dash-dotted line. \label{dvdeCOs}} \end{figure} In order to compare the experimentally obtained PMR with Eq.\ (\ref{drift}), the PMR traces shown in Fig.\ \ref{PMRraw} are replotted in Fig.\ \ref{dvdeCOs}, normalized by the prefactor in Eq.\ (\ref{drift}), after subtracting the small contribution from SO represented by Eq.\ (\ref{aprhoxx}). The scaled traces show reasonable agreement with $|B|$ at low magnetic fields, as predicted by Eq.\ (\ref{drift}), testifying that the mechanism considered here, the drift velocity of incompleted cyclotron orbits, generates PMR with a magnitude sufficient to explain the major part of the PMR observed in our present ULSL samples. Possible sources of the remnant deviation, apart from the crudeness of the approximation, are (i) the neglect of higher harmonics and (ii) the neglect of the negative magnetoresistance (NMR) component innate to the GaAs/AlGaAs 2DEG, \cite{Li03} arising from electron interactions \cite{Gornyi03,Gornyi04} or from a semiclassical effect. \cite{Mirlin01N,Dmitriev02} The $n$-th harmonic gives rise to an additional contribution analogous to Eq.\ (\ref{drift}), with $V_0$ and $a$ replaced by the amplitude $V_n$ of the $n$-th harmonic potential and by $a/n$, respectively, and therefore, in principle, enhances the deviation. In practice, however, the effect will be small because of the small values of $V_n$ and the quadratic dependence on them. On the other hand, the discrepancy can be made smaller by correcting for the NMR\@. We have actually observed NMR, which depends on $n_e$ and temperature, in the simultaneously measured ``reference'' plain 2DEG adjacent to the ULSL (see Fig.\ \ref{samples} (a)).
Assuming that NMR of the same magnitude is also present in the ULSL section and superposed on the PMR (an assumption whose validity remains uncertain at present), the correction is seen to appreciably reduce the discrepancy. The approximation leading to Eq.\ (\ref{drift}) is valid only for very small magnetic fields. With the increase of the magnetic field, the cooperation between the leftmost and the rightmost edges is rekindled, and the magnetoresistance tends to the expression appropriate for a large enough magnetic field, outlined by Eq.\ (\ref{COoscA}), which includes a non-oscillatory term (the first term) as well as the term representing the CO (the second term). Note that the non-oscillatory term approaches a constant, $\alpha^\prime \mu^2 V_0^2 a^{-1} n_e^{-3/2} (m^*/2\pi e\tau)$, at small magnetic field, although the exact value of the constant is rather difficult to estimate due to the subtlety in choosing the right scattering time $\tau$, as will be discussed in the Appendix. Therefore the (linear) increase of $\delta\rho_{xx}^\mathrm{drift}/\rho_0$ with $|B|$ is expected to flatten out at a certain magnetic field. The peak in the PMR roughly marks the position of this transition, which basically corresponds to the onset of the cooperation between the two edges. Thus the peak position is mainly determined by the scattering parameters and is expected to be insensitive to $V_0$, in agreement with what has been observed in Fig.\ \ref{PMRraw}. Experimentally, the peak position $B_\mathrm{p}$ is found to be well described by an empirical formula $B_\mathrm{p}$(T)=$[4\sqrt{2\mu_\mathrm{W}(\mathrm{m}^2/\mathrm{Vs})}]^{-1}$, using $\mu_\mathrm{W}$ determined from the CO\@. On the other hand, the height of the PMR peak is seen to scale roughly as ${V_0}^2$, as inferred from Fig.\ \ref{dvdeCOs}, which reveals that the normalized peak height tends to fall onto roughly the same value (note the top panel showing two samples having the same $a$ and different $V_0$), so long as the period $a$ is the same. This is better seen after correcting for the NMR effect mentioned above. The height of the normalized peak slightly decreases with decreasing $a$ (roughly proportionally to $a$), resulting in an empirical formula for the peak height $(\Delta\rho_{xx}/\rho_0)^\mathrm{peak}$ $\sim$ 3$\times$10$^{-3}$ $[\mu(\mathrm{m}^2/\mathrm{Vs})]^2$ $[V_0(\mathrm{meV})]^2$ $[n_e(10^{15}\mathrm{m}^{-2})]^{-3/2}$. (Unfortunately, sample 4 with larger $n_e$ deviates significantly from this formula.) \section{Discussion on the relative importance of the streaming orbit \label{discussion}} Although the PMR has thus far generally been interpreted to originate from SO, contributions from mechanisms other than SO were also implied in theoretical papers. By solving the Boltzmann equation numerically, Menne and Gerhardts \cite{Menne98} calculated the PMR and showed separately the contribution of SO, which did not account for the entire PMR (see Fig.\ 4 in Ref.\ \onlinecite{Menne98}), leaving the rest to alternative mechanisms (although the authors did not discuss the origin further). Mirlin \textit{et al.} \cite{Mirlin01} actually calculated the contribution of drifting orbits, which is basically similar to what we have considered in the present paper. They predicted a cusp-like shape for the magnetoresistance arising from this mechanism, which is not observed in the experimental traces. In both papers, the major part of the PMR is still ascribed to SO, with other mechanisms playing only minor roles.
In the present paper, we have shown that the relative importance is the other way around in our ULSL samples. However, we would like to point out that the dominant mechanism may change with the amplitude of the modulation in a ULSL\@. The reason for the contribution of SO being small in our samples can be traced back to the small amplitude of the modulation, combined with the small-angle nature of the scattering in the GaAs/AlGaAs 2DEG\@. As mentioned earlier, small $\eta$=$V_0/E_\mathrm{F}$ limits the SO to the narrow angle range $|\theta|$$\leq$$\sqrt{2\eta}$, so that the electrons are scattered out of the SO even by a small-angle scattering event; hence the use of $\tau_\mathrm{s}$ in Eq.\ (\ref{conductivity}). This leads to a small $\delta\sigma_{yy}^\mathrm{SO}$, since $\tau_\mathrm{s}$$\ll$$\tau$. Within the present framework, the relative weight of SO in the PMR decreases with increasing $\eta$, since the ratio of Eq.\ (\ref{aprhoxx}) to Eq.\ (\ref{drift}) is proportional to $\eta^{-1/2}$, in agreement with what was observed in Fig.\ \ref{scale}. However, the situation will be considerably altered with a further increase in $\eta$ (typically $\eta$$\agt$0.1). Then, owing to the expansion of the angle range encompassed by SO, electrons begin to be allowed to stay within SO after small-angle scattering, requiring $\tau_\mathrm{s}$ in Eq.\ (\ref{conductivity}) to be replaced by larger (possibly $B$-dependent) values. In the limit where the range of $|\theta|$ is much larger than the average scattering angle, $\tau_\mathrm{s}$ should be supplanted by the ordinary transport lifetime (momentum-relaxation time) $\tau$, resulting in a much larger $\delta\sigma_{yy}^\mathrm{SO}$. This largely enhances the relative importance of SO, possibly to the extent of exceeding the contribution from the drift velocity. We presume that the contribution of SO is much larger than in our case in most of the experiments reported so far that showed a shift of the PMR peak position with the modulation amplitude \cite{Beton90P,Kato97,Soibel97,Emeleus98,Long99}. Even in such a situation, however, it will not be easy to obtain a simple relation between the peak position $B_\mathrm{p}$ and the amplitude $V_0$ because of the complication by the remnant contribution from the drift velocity. In most experiments, $V_0$ is varied by the gate bias, which concomitantly alters the electron density and the scattering parameters, thereby affecting both contributions as well. \section{Conclusions\label{conclusion}} The positive magnetoresistance (PMR) in unidirectional lateral superlattices (ULSL) has two different types of mechanisms as its origin: the streaming orbit (SO) and the drift velocity of incompleted cyclotron orbits. Although virtually only the former mechanism has hitherto been taken into consideration, we have shown that the latter mechanism accounts for the main part of the PMR observed in our ULSL samples, which are characterized by their small modulation amplitude. The share undertaken by SO decreases with increasing $\eta$=$V_0/E_\mathrm{F}$, insofar as $\eta$ is kept small enough for the electrons in SO to be driven out even by the small-angle scattering characteristic of the GaAs/AlGaAs 2DEG; $\eta$$\leq$0.034 for our samples fulfills this requirement.
In this small-$\eta$ regime, the peak position of the PMR is not related to the modulation amplitude $V_0$ but rather determined by the scattering parameters; the peak roughly coincides with the onset of the commensurability oscillation (CO), which marks the beginning of the cooperation between the leftmost and the rightmost edges in a cyclotron revolution. The height of the peak, on the other hand, is found to be roughly proportional to $V_0^2$. For small enough $\eta$, the contribution of SO becomes distinguishable as a hump superposed on the slowly-increasing component, and the magnetic field that marks the end of the SO, $B_\mathrm{e}$, can be identified as an inflection point of the magnetoresistance trace where the curvature changes from concave down to concave up. The extinction field $B_\mathrm{e}$ provides an alternative method, via Eq.\ (\ref{Be}), to accurately determine $V_0$. We have also argued that for samples with $\eta$ much larger than ours, typically $\eta$$\agt$0.1, the relative importance of the two mechanisms can be reversed and the PMR peak position $B_\mathrm{p}$ can depend on $V_0$, although it will be difficult to deduce a reliable value of $V_0$ from $B_\mathrm{p}$. \begin{acknowledgments} This work was supported by Grant-in-Aid for Scientific Research in Priority Areas ``Anomalous Quantum Materials'', Grant-in-Aid for Scientific Research (C) (15540305) and (A) (13304025), and Grant-in-Aid for COE Research (12CE2004) from the Ministry of Education, Culture, Sports, Science and Technology. \end{acknowledgments}
\section{Introduction} In this paper we consider pseudoprocesses related to different types of fractional higher-order heat-type equations. Our starting point is the set of higher-order equations of the form \begin{equation} \frac{\partial}{\partial t} u_m(x, t) \, = \, \kappa_m \frac{\partial^m}{\partial x^m} u_m(x, t), \qquad x \in \mathbb{R}, \, t>0, \, m\in \mathbb{N}, \, m > 2, \label{11} \end{equation} whose solutions have been investigated by many outstanding mathematicians such as \citet{berna, levy, polia} and also, more recently, by means of the steepest descent method, by \citet{liwong}. In \eqref{11} the constant $\kappa_m$ is usually chosen in the form \begin{equation} \kappa_m \, = \, \begin{cases} \pm 1, \qquad &m = 2n+1, \\ (-1)^{n+1}, &m=2n. \end{cases} \end{equation} In our investigations we assume throughout that $\kappa_m = (-1)^n$ when $m = 2n+1$. Pseudoprocesses related to \eqref{11} have been constructed in the same way as for the Wiener process by \citet{dale1, dale2, krylov, lado, myamoto}. More recently, pseudoprocesses related to \eqref{11} have been considered by \citet{debbi06, lachal2003, lachalpseudo, mazzucchi}. For equations of the form \begin{align} \frac{\partial}{\partial t} u_\gamma (x, t) \, = \, \frac{\partial^\gamma}{\partial |x|^\gamma} u_\gamma (x, t), \qquad x \in \mathbb{R}, t>0, \label{13} \end{align} where $0 < \gamma \leq 2$ and $\frac{\partial^\gamma}{\partial |x|^\gamma}$ is the Riesz operator, the fundamental solution has the form of the density of a symmetric stable process, as Riesz himself has shown. For $\gamma > 2$ the equation \eqref{13} was studied by Debbi (see \cite{debbi06, debbi}), who proved the sign-varying character of the corresponding solutions. For asymmetric fractional operators of the form \begin{equation} ^FD^{\gamma, \theta} \, = \, -\left[ \frac{\sin \frac{\pi}{2}(\gamma - \theta)}{\sin \pi \gamma} \frac{^+\partial^\gamma}{\partial x^\gamma} + \frac{\sin \frac{\pi}{2}(\gamma + \theta)}{\sin \pi \gamma} \frac{^-\partial^\gamma}{\partial x^\gamma} \right] \label{14} \end{equation} the equation \begin{equation} \frac{\partial}{\partial t} u_{\gamma, \theta} (x, t) \, = \, ^FD^{\gamma, \theta} u_{\gamma, \theta}(x, t), \qquad x \in \mathbb{R}, t>0, 0 < \gamma \leq 2, \label{15} \end{equation} was studied by \citet{Feller52}, who proved that the fundamental solution to \eqref{15} is the law of an asymmetric stable process of order $\gamma$. The fractional derivatives appearing in \eqref{14} are the Weyl fractional derivatives defined as \begin{align} &\frac{^+\partial^\gamma}{\partial x^\gamma} u(x) \, = \, \frac{1}{\Gamma (m-\gamma)} \frac{d^m}{dx^m} \int_{-\infty}^x \frac{u(y)}{(x-y)^{\gamma +1 -m}} dy \notag \\ & \frac{^-\partial^\gamma}{\partial x^\gamma} u(x) \, = \, \frac{1}{\Gamma (m-\gamma)} \frac{d^m}{dx^m} \int_{x}^\infty \frac{u(y)}{(y-x)^{\gamma + 1 -m}} dy \label{16} \end{align} where $m-1 < \gamma < m$. The Riesz fractional derivatives appearing in \eqref{13} are combinations of the Weyl derivatives \eqref{16} and are defined as \begin{align} \frac{\partial^\gamma}{\partial |x |^\gamma} \, = \, -\frac{1}{2\cos \frac{\pi \gamma}{2}} \left[ \frac{^+\partial^\gamma}{\partial x^\gamma} + \frac{^-\partial^\gamma}{\partial x^\gamma} \right]. \end{align} This paper is devoted to pseudoprocesses related to fractional equations of the form \eqref{13} and \eqref{15} when $\gamma > 2$. Of course, this implies that the Weyl fractional derivatives \eqref{16} are considered in the case $\gamma > 2$.
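For the symmetric (Riesz) case, the sign-varying character of the fundamental solution for $\gamma>2$ can be visualized directly from the Fourier representation $u_\gamma(x,t)=\pi^{-1}\int_0^\infty \cos(\xi x)\, e^{-t\xi^\gamma}\,d\xi$. The following Python sketch (a numerical illustration only) evaluates this integral and exhibits the negative values that appear once $\gamma>2$, in accordance with Debbi's result.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def u_gamma(x, t, gamma):
    """Fundamental solution of du/dt = d^gamma u / d|x|^gamma, evaluated via
    u(x,t) = (1/pi) int_0^inf cos(xi x) exp(-t xi^gamma) d xi."""
    val, _ = quad(lambda xi: np.cos(xi * x) * np.exp(-t * xi**gamma),
                  0, np.inf, limit=400)
    return val / np.pi

x = np.linspace(-6, 6, 241)
u3 = np.array([u_gamma(xi, 1.0, 3.0) for xi in x])
print(u3.min())  # negative values: the solution changes sign for gamma > 2
\end{verbatim}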
The fundamental solutions of these equations are sign-varying as in the case of the higher-order heat-type equations \eqref{11} studied in the literature (compare with \cite{debbi06}). Fractional equations arise, for example, in the study of thermal diffusion in fractal and porous media (\citet{nigma, saiche}). Other fields of application of fractional equations can be found in \citet{debbi06}. Higher-order equations emerge in many contexts, such as trimolecular chemical reactions (\citet{gardiner} page 295) and the linear approximation of the Korteweg-de Vries equation (see \citet{beghin4}). In our paper we study pseudo random walks (for the definitions and properties of pseudo random walks and variables see \citet{lachalpseudo}) of the form \begin{equation} W^{\gamma, 2k\beta} (t) \, = \, \sum_{j=1}^{N \left( t\gamma^{-2k\beta} \right) } U_j^{2k}(1) Q_j^{\gamma, 2k\beta} \label{19} \end{equation} where the r.v.'s $Q_j^{\gamma, 2k\beta}$ are independent from the Poisson process $N$, from the pseudo r.v.'s $U_j^{2k}(1)$ and from each other, and have distribution, for $0 < \beta < 1$, $\gamma > 0$, $k \in \mathbb{N}$, \begin{align} \Pr \left\lbrace Q_j^{\gamma, 2k\beta} > w \right\rbrace \, = \, \begin{cases} 1, \qquad & w < \gamma, \\ \left( \frac{\gamma}{w} \right) ^{2k\beta}, & w \geq \gamma. \end{cases} \label{110} \end{align} The $U_j^{2k}(1)$ are independent pseudo r.v.'s with law $u_{2k} (x, 1)$ with Fourier transform \begin{align} \int_{-\infty}^\infty e^{i\xi x} u_{2k}(x, 1) \, dx \, = \, e^{-|\xi|^{2k}}. \end{align} The Poisson process $N$ appearing in \eqref{19} is homogeneous and has rate $\lambda = \frac{1}{\Gamma (1-\beta)}$. We prove that \begin{align} \lim_{\gamma \to 0} W^{\gamma, 2k\beta} (t) \, \stackrel{\textrm{law}}{=} \, U^{2k} \left( H^\beta (t) \right) \label{112} \end{align} where $U^{2k}$ is the pseudoprocess of order $2k$ related to the heat-type equation \eqref{11} for $m=2k$ and $H^\beta$ is a stable subordinator of order $\beta \in (0,1)$ independent from $U^{2k}$. We show that the law of \eqref{112} is the fundamental solution to \begin{equation} \frac{\partial}{\partial t} v_{2k\beta} (x, t) \, = \, \frac{\partial^{2k\beta}}{\partial |x|^{2k\beta}} v_{2k\beta} (x, t), \qquad x \in \mathbb{R}, t>0, \beta \in (0,1), k \in \mathbb{N}. \end{equation} In other words, we are able to construct pseudoprocesses of order $\gamma >2$ in the form of integer-order pseudoprocesses stopped at stably distributed random times, as the limit of suitable pseudo random walks. We consider also pseudo random walks of the form \begin{equation} \sum_{j=0}^{N \left( t \gamma^{-\beta(2k+1)} \right) } \epsilon_j U_j^{2k+1} (1) Q_j^{\gamma, \beta(2k+1)} \label{114} \end{equation} where the $Q_j^{\gamma, \beta(2k+1)}$ have distribution \eqref{110} (suitably adjusted), $U_j^{2k+1}(1)$ is an odd-order pseudo random variable with law $u_{2k+1} (x, 1)$ and Fourier transform \begin{align} \int_{-\infty}^\infty e^{i\xi x} u_{2k+1} (x, 1) \, dx \, = \, e^{-i\xi^{2k+1}} \end{align} and the $\epsilon_j$'s are random variables which take the values $+1$ and $-1$ with probabilities $p$ and $q$, respectively. All the variables in \eqref{114} are independent from each other and also independent from the Poisson process $N$ with rate $\lambda = \frac{1}{\Gamma (1-\beta)}$.
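Because the pseudo r.v.'s $U_j^{2k}(1)$ enter only through their Fourier transform, the characteristic function of a walk such as \eqref{19} can be evaluated exactly as a compound-Poisson exponent. The following Python sketch (NumPy/SciPy; the parameter values are arbitrary assumptions) compares this exact characteristic function with the claimed limit $e^{-t|\xi|^{2k\beta}}$ of \eqref{112} for a small $\gamma$, and also shows an inverse-transform sampler for the jump law \eqref{110}.
\begin{verbatim}
import numpy as np
from math import gamma as gamma_fn
from scipy.integrate import quad

# Sketch of the limit (112): the characteristic function of the pseudo random
# walk W^{gam, 2k beta}(t) approaches exp(-t |xi|^{2k beta}) as gam -> 0.
k, beta, t = 1, 0.6, 1.0
a = 2 * k * beta                       # tail exponent of Q in (110)
lam = 1.0 / gamma_fn(1.0 - beta)       # Poisson rate lambda = 1/Gamma(1-beta)

def walk_cf(xi, gam):
    # E exp(i xi U^{2k}(1) w) = exp(-|xi w|^{2k}), averaged over the jump
    # density a * gam^a / w^(a+1) on (gam, infinity) coming from (110)
    phi, _ = quad(lambda w: np.exp(-abs(xi * w) ** (2 * k)) * a * gam ** a / w ** (a + 1),
                  gam, np.inf)
    # compound-Poisson characteristic function with N(t gam^(-a)) summands
    return np.exp(-lam * t * gam ** (-a) * (1.0 - phi))

for xi in (0.5, 1.0, 2.0):
    print(xi, walk_cf(xi, gam=1e-3), np.exp(-t * abs(xi) ** a))

# Inverse-transform sampler for (110): if V ~ Uniform(0,1), then
# gam * V**(-1/a) has exactly the prescribed Pareto-type tail.
rng = np.random.default_rng(0)
jumps = 1e-3 * rng.random(10_000) ** (-1.0 / a)
\end{verbatim}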
For the walk \eqref{114} we are able to show that \begin{equation} \lim_{\gamma \to 0} W^{\gamma, (2k+1)\beta} (t) \, \stackrel{\textrm{law}}{=} \, U_1^{2k+1} \left( H_1^\beta (pt) \right) - U_2^{2k+1} \left( H_2^\beta (qt) \right) \label{116} \end{equation} where $H_j^\beta$, $j=1,2$, are independent stable subordinators independent also from the pseudoprocesses $U_1$, $U_2$. We prove that the law of \eqref{116} satisfies the higher-order fractional equation \begin{equation} \frac{\partial}{\partial t} w_{\beta (2k+1)} (x, t) \, = \, \mathfrak{R} w_{\beta (2k+1)} (x, t), \qquad x \in \mathbb{R}, t>0, \label{117v} \end{equation} where \begin{equation} \mathfrak{R} \, = \, -\frac{1}{\cos \frac{\beta \pi}{2}} \left[ p e^{i\pi\beta k} \frac{^+\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} +qe^{-i\pi\beta k} \frac{^-\partial^{\beta(2k+1)}}{\partial x^{\beta (2k+1)}} \right]. \label{117} \end{equation} The Fourier transform of the fundamental solution of \eqref{117v} reads \begin{equation} \widehat{w}_{\beta(2k+1)} (\xi, t) \, = \, e^{-t|\xi|^{\beta(2k+1)} \left( 1-i \textrm{ sign}(\xi) \, (p-q) \tan \frac{\beta \pi}{2} \right) }. \label{119} \end{equation} We note that \eqref{119} corresponds to the Fourier transform of the law of \eqref{116} with a suitable change of the time scale, that is, \begin{align} &\mathbb{E}\exp \left\lbrace i\xi \left[ U_1^{2k+1} \left( H_1^\beta \left( \frac{pt}{\cos \frac{\beta \pi}{2}} \right) \right) - U_2^{2k+1} \left( H_2^\beta \left( \frac{qt}{\cos \frac{\beta \pi}{2}} \right) \right) \right] \right\rbrace \notag \\ = \, & e^{-t|\xi |^{\beta(2k+1)} \left( 1-i \textrm{ sign}(\xi) \, (p-q) \, \tan \frac{\beta \pi}{2} \right) }. \end{align} The mean value here and below must be understood with respect to the signed measure of the pseudoprocess (see for example \cite{debbi06}). We study also the pseudoprocesses governed by the equation \begin{align} \frac{\partial}{\partial t} z_{\beta(2k+1), \theta} (x, t) \, = \, ^FD^{\beta(2k+1), \theta} z_{\beta(2k+1), \theta} (x, t) \end{align} where $^FD^{\beta(2k+1), \theta}$ is the operator defined in \eqref{14} with $\gamma$ replaced by $\beta (2k+1)$. Also in this case we study continuous-time pseudo random walks whose limit has Fourier transform equal to \begin{equation} \mathbb{E}e^{i\xi Z^{\beta(2k+1), \theta}(t)} \, = \, e^{-t|\xi |^{\beta(2k+1)}e^{\frac{i\pi\theta}{2} \textrm{ sign}(\xi)}}, \qquad \beta \in (0,1), k \geq 1, -\beta < \theta < \beta. \end{equation} When we take into account pseudo random walks constructed by means of even-order pseudo random variables we arrive at limits $Z^{2\beta k, \theta}(t)$, $t>0$, with Fourier transform \begin{equation} \mathbb{E}e^{i\xi Z^{2\beta k, \theta} (t)} \, = \, e^{-t|\xi |^{2k\beta} \frac{\cos \frac{\pi}{2}\theta}{\cos \frac{\pi}{2}\beta}} \end{equation} which shows the symmetric structure of the limiting pseudoprocess. \subsection{List of symbols} For the reader's convenience we give a short list of the most important symbols and definitions appearing in the paper.
\begin{enumerate} \item[$\bullet$] The right Weyl fractional derivative for $m -1<\gamma < m$, $m \in \mathbb{N}$, $x \in \mathbb{R}$ \begin{equation} \frac{^+\partial^\gamma}{\partial x^\gamma} u(x, t) \, = \, \frac{1}{\Gamma (m-\gamma)} \frac{d^m}{d x^m} \int_{-\infty}^x \frac{u(y, t)}{(x-y)^{\gamma + 1 - m}} dy \end{equation} \item[$\bullet$] The left Weyl fractional derivative for $m -1<\gamma < m$, $m \in \mathbb{N}$, $x \in \mathbb{R}$, \begin{equation} \frac{^-\partial^\gamma}{\partial x^\gamma} u(x, t) \, = \, \frac{(-1)^m}{\Gamma (m-\gamma)} \frac{d^m}{d x^m} \int_x^\infty \frac{u(y, t)}{(y-x)^{\gamma + 1 - m}} dy \end{equation} \item[$\bullet$] The Riesz fractional derivative for $m-1 < \gamma < m$, $m \in \mathbb{N}$, $x \in \mathbb{R}$, \begin{equation} \frac{\partial^\gamma}{\partial |x|^\gamma} \, = \, - \frac{1}{2\cos \frac{\gamma \pi}{2}} \left[ \frac{^+\partial^\gamma}{\partial x^\gamma} + \frac{^-\partial^\gamma}{\partial x^\gamma} \right] \end{equation} \item[$\bullet$] We introduce the operator $\mathfrak{R}$, for $\beta \in (0,1)$, $k \in \mathbb{N}$, $p,q \in [0,1] : p + q = 1$, $x \in \mathbb{R}$, \begin{equation} \mathfrak{R} \, = \, -\frac{1}{\cos \frac{\beta \pi }{2}} \left[ p e^{i\pi\beta k} \frac{^+\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} + q e^{-i\pi\beta k} \frac{^-\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} \right] \end{equation} \item[$\bullet$] The Feller derivative for $m-1 < \gamma < m$, $m \in \mathbb{N}$, $\theta > 0$, $x \in \mathbb{R}$, \begin{equation} ^FD^{\gamma, \theta} \, = \, - \left[ \frac{\sin \frac{\pi}{2}(\gamma - \theta)}{\sin \pi\gamma} \frac{^+\partial^\gamma}{\partial x^\gamma} + \frac{\sin \frac{\pi}{2}(\gamma + \theta)}{\sin \pi \gamma} \frac{^-\partial^\gamma}{\partial x^\gamma} \right] \end{equation} \item[$\bullet$] $U^{m}(t)$, $t>0$ is a pseudoprocess of order $m \in \mathbb{N}$ with law $u_m (x, t)$, $x \in \mathbb{R}$, $t>0$, governed by \eqref{11}. \item[$\bullet$] $H^\beta (t)$ is a stable subordinator of order $\beta \in (0,1)$ with probability density $h_\beta (x, t)$, $x \geq 0$, $t \geq 0$. \end{enumerate} \section{Preliminaries and auxiliary results} In this paper we consider higher-order heat-type equations where the space derivative is fractional in different ways. \subsection{Weyl fractional derivatives} First of all we consider equations of the form \begin{equation} \frac{\partial}{\partial t} u_\gamma (x, t) \, = \, \frac{^{\pm}\partial^\gamma}{\partial x^\gamma} u_\gamma (x, t), \qquad x \in \mathbb{R}, t>0, \gamma >0, \end{equation} where $\frac{^{\pm}\partial^\gamma}{\partial x^\gamma}$ are the space-fractional Weyl derivatives defined as \begin{align} \frac{^+\partial^\gamma}{\partial x^\gamma} u_\gamma (x, t) \, = \, \frac{1}{\Gamma (m-\gamma)} \frac{d^m}{dx^m} \int_{-\infty}^x \frac{u(z, t) \, dz}{(x-z)^{\gamma - m + 1}}, \quad m-1 < \gamma < m, m \in \mathbb{N}, \label{weyldestra} \end{align} \begin{align} \frac{^-\partial^\gamma}{\partial x^\gamma} u_\gamma (x, t) \, = \, \frac{(-1)^m}{\Gamma (m-\gamma)} \frac{d^m}{dx^m} \int_x^\infty \frac{u(z, t) \, dz}{(z-x)^{\gamma - m +1}}, \quad m - 1 < \gamma < m, m \in \mathbb{N}. \label{weylsinistra} \end{align} In our analysis the following result on the Fourier transforms of Weyl derivatives is very important.
\begin{te}[\cite{samko}, page 137] \label{teoremasimboloweyl} The Fourier transforms of \eqref{weyldestra} and \eqref{weylsinistra} read \begin{align} \int_{-\infty}^\infty dx \, e^{i\xi x} \frac{^+\partial^\gamma}{\partial x^\gamma} u(x, t) \, = \, (-i\xi)^\gamma \, \widehat{u} (\xi, t) \, = \, |\xi |^{\gamma} e^{-\frac{i\pi \gamma}{2} \textrm{ sign} (\xi)} \, \widehat{u} (\xi, t), \label{simboloweyldestra} \end{align} \begin{equation} \int_{-\infty}^\infty dx \, e^{i\xi x} \frac{^-\partial^\gamma}{\partial x^\gamma} u(x, t) \, = \, (i\xi)^\gamma \, \widehat{u} (\xi, t) \, = \, |\xi |^{\gamma} e^{\frac{i\pi \gamma}{2} \textrm{ sign} (\xi)} \, \widehat{u} (\xi, t). \label{simboloweylsinistra} \end{equation} Clearly $\widehat{u}(\xi, t)$ is the $x$-Fourier transform of $u(x, t)$. \end{te} \begin{proof} We give a sketch of the proof of \eqref{simboloweyldestra} with some details. \begin{align} \int_{-\infty}^\infty dx \, e^{i\xi x} \frac{^+\partial^\gamma}{\partial x^\gamma} u(x, t) \, = \, & \int_{-\infty}^\infty dx \, e^{i\xi x} \left[ \frac{1}{\Gamma (m-\gamma)} \frac{\partial^m}{\partial x^m} \int_{-\infty}^x dz \frac{u(z, t)}{(x-z)^{\gamma - m + 1}} \right] \notag \\ = \, & \int_{-\infty}^\infty dx \, e^{i\xi x} \left[ \frac{1}{\Gamma (m-\gamma)} \int_0^\infty dz \frac{\partial^m}{\partial x^m} \frac{u(x-z,t)}{z^{\gamma - m + 1}} \right] \notag \\ = \, & \int_{-\infty}^\infty dw \, e^{i\xi w} \frac{\partial^m}{\partial w^m} u(w, t) \; \frac{1}{\Gamma (m-\gamma)} \int_0^\infty dz \, e^{i\xi z} z^{m-\gamma - 1} \notag \\ = \, & (-i\xi)^m \int_{-\infty}^\infty e^{i\xi w} u(w, t) \, dw \; \frac{1}{\Gamma (m-\gamma)} \int_0^\infty dz \, e^{i\xi z} z^{m-\gamma - 1}. \label{oddio} \end{align} The result \begin{equation} \frac{(-i\xi)^m}{\Gamma (m-\gamma)} \int_0^\infty dz \, e^{i\xi z} \, z^{m-\gamma -1} \, = \, |\xi |^\gamma e^{-\frac{i\pi \gamma}{2} \textrm{ sign}(\xi)} \end{equation} can be obtained for example by applying the Cauchy integral theorem (see \citet{samko} page 138). \end{proof} \subsection{Riesz fractional derivatives} By means of the Weyl fractional derivatives we arrive at the Riesz fractional derivative, for $m-1 < \gamma < m$, $m \in \mathbb{N}$, \begin{align} \frac{\partial^{\gamma}}{\partial |x|^\gamma} u(x, t) = & -\frac{\frac{\partial^m}{\partial x^m}}{2\cos \frac{\pi \gamma}{2} \Gamma (m-\gamma)} \left[ \int_{-\infty}^x \frac{u(y, t) \, dy}{(x-y)^{\gamma - m + 1}} + \int_x^\infty \frac{(-1)^m \, u(y, t) \, dy}{(y-x)^{\gamma - m + 1}} \right] \notag \\ = \, & - \frac{1}{2\cos \frac{\pi \gamma}{2} } \left( \frac{^+\partial^\gamma}{\partial x^\gamma} + \frac{^-\partial^\gamma}{\partial x^\gamma} \right) \, u(x, t). \end{align} In view of Theorem \ref{teoremasimboloweyl} we have that, for $\gamma > 0$, $\gamma \notin \mathbb{N}$, \begin{align} \int_{-\infty}^\infty dx \, e^{i\xi x} \frac{\partial^\gamma}{\partial |x|^\gamma} u(x, t) \, = \, & - \frac{1}{2\cos \frac{\pi \gamma}{2}} \left[ |\xi |^\gamma e^{-\frac{i\pi \gamma}{2} \textrm{ sign} (\xi)} + |\xi |^\gamma e^{\frac{i\pi \gamma}{2} \textrm{ sign}(\xi)} \right] \widehat{u}(\xi, t) \notag \\ = \, & -|\xi |^{\gamma} \, \widehat{u} (\xi, t). \end{align} \begin{os} The general fractional higher-order heat equation \begin{equation} \frac{\partial}{\partial t} u_\gamma (x, t) \, = \, \frac{\partial^\gamma}{\partial |x|^\gamma} u_\gamma (x, t), \qquad x \in \mathbb{R}, t>0, \end{equation} has a solution whose Fourier transform reads \begin{equation} \widehat{u_\gamma} (\xi, t) \, = \, e^{-t |\xi |^\gamma}.
\label{due11} \end{equation} For $0 < \gamma < 2$, \eqref{due11} corresponds to the characteristic function of symmetric stable processes (this is a classical result due to M. Riesz himself). \end{os} \section{From pseudo random walks to fractional pseudoprocesses} We consider in this section continuous-time pseudo random walks with steps which are pseudo random variables, that is, measurable functions endowed with signed measures of total mass equal to one (see \cite{lachalpseudo}). In order to obtain in the limit pseudoprocesses whose signed law satisfies higher-order fractional equations we must construct sums of the form \begin{equation} \sum_{j=0}^{N \left( t\gamma^{-\beta (2k+1)} \right) } \epsilon_j \, U_j^{2k+1} (1) \, Q^{\gamma, \beta (2k+1)}_j, \qquad \beta \in (0,1), k \in \mathbb{N}, \gamma > 0, \label{passeggiata} \end{equation} where \begin{equation} \epsilon_j \, = \, \begin{cases} 1, \quad & \textrm{with probability } p, \\ -1, & \textrm{with probability } q, \end{cases} \qquad p + q = 1. \end{equation} The r.v.'s $Q_j^{\gamma, \beta(2k+1)}$ have probability distributions, for $\beta \in (0,1)$, $k \in \mathbb{N}$, \begin{equation} \Pr \left\lbrace Q_j^{\gamma, \beta(2k+1)} > w \right\rbrace \, = \, \begin{cases} \left( \frac{\gamma}{w} \right) ^{\beta (2k+1)}, \qquad & \textrm{for } w > \gamma \\ 1, & \textrm{for } w < \gamma. \end{cases} \end{equation} The Poisson process $N(t)$, $t>0$, appearing in \eqref{passeggiata} is homogeneous with rate \begin{equation} \lambda = \frac{1}{\Gamma (1-\beta)}, \qquad \beta \in (0,1). \end{equation} The pseudo random variables (see \citet{lachalpseudo}) $U^{2k+1}_j (1)$ have law with Fourier transform \begin{equation} \int_{-\infty}^\infty dx \, e^{i\xi x} u_{2k+1} (x, 1) \, = \, e^{-i\xi^{2k+1}} \end{equation} and the function $u_{2k+1} (x, t)$, $x \in \mathbb{R}$, $t>0$, is the fundamental solution to the odd-order heat-type equation, for $k \in \mathbb{N}$, \begin{align} \begin{cases} \frac{\partial}{\partial t} u_{2k+1} (x, t) \, = \, (-1)^k \frac{\partial^{2k+1}}{\partial x^{2k+1}} u_{2k+1} (x, t), \qquad x \in \mathbb{R}, t>0, \\ u_{2k+1} (x, 0) \, = \, \delta(x). \end{cases} \label{oddorderpseudoprocessesgovern} \end{align} There is a vast literature devoted to odd-order heat-type equations of the form \eqref{oddorderpseudoprocessesgovern}, to the behaviour of their solutions, and to the related pseudoprocesses (\citet{beghin4, lachal2003, enzolit, ecporsdov}). The r.v.'s and pseudo r.v.'s appearing in \eqref{passeggiata} are independent from each other and also independent from the Poisson process $N$. We say that two pseudo r.v.'s (or pseudoprocesses) with signed density $u_m^1$, $u_m^2$, are independent if the Fourier transform $\mathcal{F}$ of the convolution $u_m^1 * u_m^2 $ factorizes, that is \begin{align} \mathcal{F} \left[ u_m^1 \, * \, u_m^2 \right] (\xi) \, = \, \mathcal{F} \left[ u_m^1 \right] (\xi) \, \mathcal{F} \left[ u_m^2 \right] (\xi). \end{align} We are now able to state the first theorem of this section.
\begin{te} \label{pseudowalkdispari} The following limit in distribution holds true \begin{align} \lim_{\gamma \to 0} \sum_{j=0}^{N \left( t\gamma^{-\beta(2k+1)} \right) } \epsilon_j \, U_j^{2k+1} (1) \, Q_j^{\gamma, \beta(2k+1)} \, \stackrel{\textrm{ law }}{=} \, U_1^{2k+1} \left( H_1^\beta (pt) \right) - U_2^{2k+1} \left( H_2^\beta (qt) \right) , \label{limitepasseggiatadispari} \end{align} where $H_1^\beta$ and $H_2^\beta$ are independent positively-skewed stable processes of order $0 < \beta < 1$ while $U_1^{2k+1}$ and $U_2^{2k+1}$ are independent pseudoprocesses of order $2k+1$. All the random variables $N(t)$, $t>0$, $\epsilon_j$, $Q_j^{\gamma, \beta(2k+1)}$ are independent and also independent from the pseudo random variables $U_j^{2k+1} (1)$. The Fourier transform of the limiting pseudoprocess reads \begin{equation} \mathbb{E}e^{i\xi \left[ U_1^{2k+1} \left( H_1^\beta (pt) \right) - U_2^{2k+1} \left( H_2^\beta (qt) \right) \right] } \, = \, e^{-t |\xi |^{\beta(2k+1)} \left( \cos \frac{\beta \pi}{2} - i \textrm{ sign}(\xi) \, (p-q) \, \sin \frac{\beta \pi}{2} \right) }. \end{equation} \end{te} \begin{proof} In view of the independence of the r.v.'s and pseudo random variables appearing in \eqref{limitepasseggiatadispari} we have that \begin{align} & \mathbb{E} e^{i\xi\sum_{j=0}^{N \left( t\gamma^{-\beta (2k+1)} \right) } \epsilon_j U^{2k+1}_j (1) \, Q_j^{\gamma, \beta(2k+1)}} \notag \\ = \, & \mathbb{E} \left[ \mathbb{E} \left( e^{i\xi \epsilon U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) ^{N \left( t \gamma^{-\beta (2k+1)} \right) } \right] \notag \\ = \, & \exp \left\lbrace - \frac{\lambda t}{\gamma^{\beta (2k+1)}} \left( 1-\mathbb{E}e^{i\xi \epsilon U^{2k+1}(1) Q^{\gamma, \beta (2k+1)}} \right) \right\rbrace \notag \\ = \, & \exp \left\lbrace - \frac{\lambda t}{\gamma^{\beta (2k+1) } } \left( 1-p \mathbb{E}e^{i\xi U^{2k+1 } (1) Q^{\gamma, \beta (2k+1)}} - q \mathbb{E}e^{-i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) \right\rbrace \notag \\ = \, & \exp \left\lbrace - \frac{\lambda t}{\gamma^{\beta (2k+1) } } \left( p+q -p \mathbb{E}e^{i\xi U^{2k+1 } (1) Q^{\gamma, \beta (2k+1)}} - q \mathbb{E}e^{-i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) \right\rbrace \notag \\ = \, & \exp \left\lbrace - \frac{\lambda t}{\gamma^{\beta (2k+1) } } \left( p \left( 1-\mathbb{E}e^{i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) + q \left( 1-\mathbb{E}e^{-i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) \right) \right\rbrace . \end{align} We observe that \begin{align} & p \left( 1-\mathbb{E}e^{i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) + q \left( 1-\mathbb{E}e^{-i\xi U^{2k+1} (1) Q^{\gamma, \beta (2k+1)}} \right) \notag \\ = \, & p \int_{\gamma}^{\infty} dw \, \frac{\gamma^{\beta (2k+1)} \beta (2k+1)}{w^{\beta (2k+1) + 1}} \left( 1- e^{i\xi^{2k+1} w^{2k+1}} \right) \notag \\ & + q \int_\gamma^{\infty} dw \, \frac{\gamma^{\beta (2k+1)} \beta (2k+1)}{w^{\beta (2k+1) + 1}} \left( 1- e^{-i\xi^{2k+1} w^{2k+1}} \right) \end{align} and therefore \begin{align} & \mathbb{E} e^{i\xi\sum_{j=0}^{N \left( t\gamma^{-\beta (2k+1)} \right) } \epsilon_j U^{2k+1}_j (1) \, Q_j^{\gamma, \beta(2k+1)}} \, = \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{\beta (2k+1)}} \left[ p \int_{\gamma}^{\infty} dw \, \frac{\gamma^{\beta (2k+1)} \beta (2k+1)}{w^{\beta (2k+1) + 1}} \left( 1- e^{i\xi^{2k+1} w^{2k+1}} \right) \right. \right. \notag \\ & \left. \left.
+ q \int_\gamma^{\infty} dw \, \frac{\gamma^{\beta (2k+1)} \beta (2k+1)}{w^{\beta (2k+1) + 1}} \left( 1- e^{-i\xi^{2k+1} w^{2k+1}} \right) \right] \right\rbrace \notag \\ = \, & \exp \left\lbrace \frac{- \lambda t}{\gamma^{\beta (2k+1)}} \left[ p \left( 1-e^{i(\xi \gamma)^{2k+1}} \right) -p \, i \xi^{2k+1} (2k+1) \int_\gamma^\infty \frac{ dw \, \gamma^{\beta(2k+1)} e^{i(\xi w)^{2k+1}}}{ w^{\beta(2k+1)-2k} } \right. \right. \notag \\ & \left. \left. +q \left( 1-e^{-i(\xi \gamma)^{2k+1}} \right) + q \, i \xi^{2k+1} (2k+1) \int_\gamma^\infty \frac{dw \, \gamma^{\beta (2k+1)} \, e^{-i(\xi w)^{2k+1}}}{w^{\beta (2k+1) - 2k}} \right] \right\rbrace . \end{align} By taking the limit we get that \begin{align} & \lim_{\gamma \to 0} \mathbb{E} e^{i\xi\sum_{j=0}^{N \left( t\gamma^{-\beta (2k+1)} \right) } \epsilon_j U^{2k+1}_j (1) \, Q_j^{\gamma, \beta(2k+1)}} \, = \notag \\ = \, &\exp \left[ -\lambda t (2k+1) \left( -p i\xi^{2k+1} \int_0^\infty \frac{dw \, e^{i(\xi w)^{2k+1} }}{w^{\beta (2k+1) - 2k}} + q i \xi^{2k+1} \int_0^\infty \frac{dw \, e^{-i(\xi w)^{2k+1} }}{w^{\beta (2k+1) -2k}} \right) \right] \notag \\ = \, & e^{-\lambda t \Gamma (1-\beta) \left[ p \left( -i\xi^{2k+1} \right) ^\beta + q (i\xi^{2k+1})^\beta \right]}. \end{align} By setting $\lambda = \frac{1}{\Gamma (1-\beta)}$ we obtain \begin{align} \lim_{\gamma \to 0} \mathbb{E} e^{i\xi\sum_{j=0}^{N \left( t\gamma^{-\beta (2k+1)} \right) } \epsilon_j U^{2k+1}_j (1) \, Q_j^{\gamma, \beta(2k+1)}} \, = \, & e^{-t \left( p |\xi |^{\beta (2k+1)} e^{-\frac{i\pi \beta}{2}\textrm{ sign}(\xi)} + q |\xi |^{\beta(2k+1)} e^{\frac{i\pi \beta}{2}\textrm{ sign}(\xi)} \right) } \notag \\ = \, & e^{-t| \xi |^{\beta(2k+1)} \left( \cos \frac{\pi \beta}{2} -i \textrm{ sign}(\xi) \, (p-q) \sin \frac{\pi \beta}{2} \right) }. \label{finalmente} \end{align} Now we consider the Fourier transform of the law of the pseudoprocess \begin{equation} V^{(2k+1)\beta} (t) \, = \, U_1^{2k+1} \left( H_1^\beta (pt) \right) - \, U_2^{2k+1} \left( H_2^\beta (qt) \right) \end{equation} which reads \begin{align} \mathbb{E} e^{i\xi V^{(2k+1)\beta} (t)} \, = \, & \mathbb{E} e^{i\xi U_1^{2k+1} \left( H_1^\beta (pt) \right) } \mathbb{E} e^{-i\xi U_2^{2k+1} \left( H_2^\beta (qt) \right) } \notag \\ = \, & \left( \int_0^\infty ds \, e^{i\xi^{2k+1} s} \, h_\beta^1 (s, pt) \right) \left( \int_0^\infty dz \, e^{-i \xi^{2k+1} z} \, h_\beta^2 (z, qt) \right) \notag \\ = \, & e^{-t \, p \left( - i \xi^{2k+1} \right) ^\beta} e^{-t \, q \left( i\xi^{2k+1}\right) ^\beta} \notag \\ = \, & e^{-t \left( p |\xi |^{\beta(2k+1)} e^{-\frac{i\beta \pi}{2}\textrm{ sign}(\xi)} + q |\xi |^{\beta (2k+1)} e^{\frac{i\beta \pi}{2}\textrm{ sign}(\xi)} \right) } \notag \\ = \, & e^{-t |\xi |^{\beta(2k+1)} \left( \cos \frac{\pi \beta}{2} -i \textrm{ sign}(\xi) \, (p-q) \sin \frac{\pi \beta}{2} \right) }, \label{caratteristicapseudosimm} \end{align} and coincides with \eqref{finalmente}. \end{proof} \begin{os} If $\beta = \frac{1}{2k+1}$ the Fourier transform \eqref{caratteristicapseudosimm} becomes \begin{equation} \mathbb{E} e^{i\xi U_1^{2k+1} \left( H_1^\beta (pt) \right) } \mathbb{E} e^{-i\xi U_2^{2k+1} \left( H_2^\beta (qt) \right) } \, = \, e^{-t|\xi| \cos \frac{\pi}{2(2k+1)} + i t \xi (p-q) \sin \frac{\pi}{2(2k+1)}} \end{equation} which corresponds to the characteristic function of a Cauchy r.v. with position parameter equal to $t (p-q) \sin \frac{\pi}{2(2k+1)}$ and scale parameter $t \cos \frac{\pi}{2(2k+1)}$. This slightly generalizes result 1.4 of \citet{ecporsdov}.
\end{os} For even-order pseudoprocesses we have the following limit in distribution. \begin{te} \label{pseudowalkpari} If $U^{2k}(t)$, $t>0$, is an even-order pseudoprocess and $N(t)$, $t>0$, is a homogeneous Poisson process, independent from $U^{2k}(t)$, $t>0$, we have that \begin{align} \lim_{\gamma \to 0} \sum_{j=0}^{N \left( t \gamma^{-2k\beta} \right) } U_j^{2k}(1)Q_j^{\gamma, 2k\beta} \, \stackrel{\textrm{ law }}{=} \, U^{2k} \left( H^\beta (t) \right) , \qquad t>0, \end{align} where $H^\beta$ is a stable subordinator of order $\beta \in (0,1)$ and $Q_j^{\gamma, 2k\beta}$ are i.i.d. random variables with distribution \begin{align} \Pr \left\lbrace Q_j^{\gamma, 2k\beta} > w \right\rbrace \, = \, \begin{cases} 1, \qquad & w < \gamma, \\ \left( \frac{\gamma}{w} \right) ^{2k\beta}, & w > \gamma. \end{cases} \end{align} The pseudoprocess $U^{2k}(t)$ is governed by the equation \begin{equation} \frac{\partial}{\partial t}u_{2k}(x, t) \, = \, (-1)^{k+1} \frac{\partial^{2k}}{\partial x^{2k}} u_{2k} (x, t), \qquad x \in \mathbb{R}. \end{equation} \end{te} \begin{proof} We start by evaluating the Fourier transform \begin{align} & \mathbb{E}e^{i\xi \sum_{j=0}^{N \left( t \gamma^{-2k\beta} \right) } U_j^{2k}(1)Q_j^{\gamma, 2k\beta}} \notag \\ = \, & \mathbb{E} \left[ \mathbb{E} \left( e^{i\xi U^{2k}(1)Q^{\gamma, 2k\beta}} \right) ^{N \left( t\gamma^{-2k\beta} \right) } \right] \notag \\ = \, &\exp \left\lbrace -\frac{\lambda t}{\gamma^{2k\beta}} \left( 1-\mathbb{E}e^{i\xi U^{2k}(1)Q^{\gamma, 2k\beta}} \right) \right\rbrace \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{2k\beta}} \int_\gamma^\infty dy \left( 1-e^{-|\xi |^{2k}y^{2k}} \right) \frac{2k\beta \gamma^{2k\beta}}{y^{2k\beta + 1}} \right\rbrace \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{2k\beta }} \left[ \left( 1-e^{-|\xi |^{2k}\gamma^{2k}} \right) + \int_\gamma^{\infty} dy \, |\xi |^{2k} e^{-|\xi |^{2k}y^{2k}} y^{2k-1-2k\beta} \, 2k \gamma^{2k\beta} \right] \right\rbrace \end{align} By taking the limit we have that \begin{align} \lim_{\gamma \to 0} \mathbb{E}e^{i\xi \sum_{j=0}^{N \left( t \gamma^{-2k\beta} \right) } U_j^{2k}(1)Q_j^{\gamma, 2k\beta}} \, = \, & e^{-\lambda t |\xi |^{2k} 2k \int_0^\infty e^{-|\xi |^{2k}y^{2k}}y^{2k(1-\beta )-1} dy} \notag \\ = \, & e^{-\lambda t |\xi |^{2k\beta} \int_0^\infty e^{-w} w^{-\beta} dw} \notag \\ = \, & e^{-\lambda t |\xi |^{2k\beta}\Gamma (1-\beta)} \end{align} which coincides with \begin{align} \mathbb{E}e^{i\xi U^{2k} \left( H^\beta (t) \right) } \, = \, \int_0^\infty e^{-s\xi^{2k}} \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace \, = \, e^{-t |\xi |^{2k\beta}} \end{align} since $\lambda = \frac{1}{\Gamma (1-\beta)}$. \end{proof} \begin{os} For $\beta = \frac{1}{k}$ the composition $U^{2k} \left( H^\beta (t) \right) $ has Gaussian distribution. For $\beta = \frac{1}{2k}$ we have instead the Cauchy distribution and for $\beta = \frac{1}{4k}$ we extract the inverse Gaussian corresponding to the distribution of the first passage time of a Brownian motion. The case $\beta = \frac{1}{6k}$ yields the stable law with distribution \begin{align} f_{\frac{1}{3}} (x) \, = \, \frac{t}{x\sqrt[3]{3x}} \textrm{ Ai} \left( \frac{t}{\sqrt[3]{3x}} \right) \end{align} where Ai denotes the Airy function (see \cite{ecporsdov}). \end{os} In order to arrive at asymmetric higher-order fractional pseudoprocesses we construct pseudo random walks by adapting the Feller approach (used for asymmetric stable laws) to our context.
This means that we combine independent pseudo random walks with suitable trigonometric weights as in \eqref{14}. \begin{te} \label{teoremarwfellerdispari} Let $X_j^{\gamma, (2k+1)\beta}$ and $Y_j^{\gamma, (2k+1)\beta}$ be i.i.d. r.v.'s with distribution \begin{align} \Pr \left\lbrace X^{\gamma, (2k+1)\beta} > w \right\rbrace \, = \, \begin{cases} \left( \frac{\gamma}{w} \right) ^{(2k+1)\beta}, \qquad & w > \gamma \\ 1, & w < \gamma, \end{cases} \end{align} and let $U^{2k+1}(t)$, $t>0$, be a pseudoprocess of odd order $2k+1$, $k \in \mathbb{N}$. For $0 < \beta < 1$ and $-\beta < \theta < \beta$ we have that \begin{align} \lim_{\gamma \to 0} & \left[ \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } X_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1) \right. \notag \\ & \left. - \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } Y_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1) \right] \stackrel{\textrm{law}}{=} \, Z^{\beta (2k+1), \theta} (t) \label{mammamiabella} \end{align} where \begin{equation} \mathbb{E}e^{i\xi Z^{\beta (2k+1), \theta} (t)} \, = \, e^{-t |\xi |^{(2k+1)\beta} e^{\frac{i\pi \theta}{2} \textrm{ sign}(\xi)}}. \end{equation} \end{te} \begin{proof} The Fourier transform of \eqref{mammamiabella} is written as \begin{align} &\mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } X_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \notag \\ \times \, & \mathbb{E}e^{-i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } Y_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \end{align} where the first factor is given by \begin{align} &\mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } X_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \, = \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{(2k+1)\beta}} \left( 1-\mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} U^{2k+1}(1) X^{\gamma, (2k+1)\beta}} \right) \right\rbrace \notag \\ = \, & \exp \left\lbrace - \frac{\lambda t}{\gamma^{(2k+1)\beta}} \int_\gamma^\infty \left( 1-e^{i\xi^{2k+1} \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{\beta}} y^{2k+1}} \right) \frac{\gamma^{(2k+1)\beta}}{y^{(2k+1)\beta +1}} (2k+1)\beta \, dy \right\rbrace \notag \\ \stackrel{\gamma \to 0}{\longrightarrow} \, & \exp \left\lbrace -\lambda t \, i^{-\beta} \, \xi^{(2k+1)\beta} \left( \frac{\sin \frac{\pi}{2} (\beta - \theta)}{\sin \pi\beta} \right) \int_0^\infty e^{-w} w^{-\beta } dw \right\rbrace \notag \\ = \, & e^{-\lambda t |\xi |^{(2k+1)\beta} e^{-\frac{i\pi \beta \textrm{ sign}(\xi)}{2}} \left( \frac{\sin \frac{\pi}{2} (\beta - \theta)}{\sin \pi\beta} \right) \Gamma (1-\beta)} .
\end{align} The second factor of \eqref{mammamiabella} becomes, by performing a similar calculation, \begin{align} &\mathbb{E}e^{-i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } Y_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \notag \\ \stackrel{\gamma \to 0}{\longrightarrow} \, & e^{-\lambda t |\xi |^{(2k+1)\beta} e^{\frac{i\pi \beta \textrm{ sign}(\xi)}{2}} \left( \frac{\sin \frac{\pi}{2} (\beta + \theta)}{\sin \pi\beta} \right) \Gamma (1-\beta)}, \end{align} and thus for $\lambda = \frac{1}{\Gamma (1-\beta)}$ we obtain that \begin{align} & \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } X_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \notag \\ \times & \mathbb{E}e^{-i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{(2k+1)\beta}} \sum_{j=0}^{N \left( t \gamma^{-(2k+1)\beta} \right) } Y_j^{\gamma, (2k+1)\beta} U_j^{2k+1} (1)} \notag \\ \stackrel{\gamma \to 0}{\longrightarrow} \, & e^{- t |\xi |^{(2k+1)\beta} e^{-\frac{i\pi \beta \textrm{ sign}(\xi)}{2}} \left( \frac{\sin \frac{\pi}{2} (\beta - \theta)}{\sin \pi\beta} \right) } e^{- t |\xi |^{(2k+1)\beta} e^{\frac{i\pi \beta \textrm{ sign}(\xi)}{2}} \left( \frac{\sin \frac{\pi}{2} (\beta + \theta)}{\sin \pi\beta} \right) } \notag \\ = \, & e^{-t|\xi |^{(2k+1)\beta} e^{\frac{i \pi \theta}{2} \textrm{ sign} (\xi)}}. \end{align} \end{proof} By considering symmetric pseudo random walks with the Feller construction we arrive in the next theorem at symmetric pseudoprocesses with time scale equal to $\frac{\cos \frac{\pi}{2}\theta}{\cos \frac{\pi}{2}\beta}$, $0<\beta<1$, $-\beta < \theta < \beta$. \begin{te} Let $X_j^{\gamma, 2\beta k}$ and $Y_j^{\gamma, 2\beta k}$ be i.i.d. r.v.'s with distribution \begin{align} \Pr \left\lbrace X^{\gamma, 2\beta k} > w \right\rbrace \, = \, \begin{cases} \left( \frac{\gamma}{w} \right) ^{2\beta k}, \qquad & w > \gamma \\ 1, & w < \gamma, \end{cases} \end{align} and let $U^{2k}(t)$, $t>0$, be a pseudoprocess of order $2k$, $k \in \mathbb{N}$. If $N(t)$ is a homogeneous Poisson process, with parameter $\lambda = \frac{1}{\Gamma (1-\beta)}$, independent from $X_j^{\gamma, 2\beta k}$ and $Y_j^{\gamma, 2\beta k}$ we have that \begin{align} \lim_{\gamma \to 0} & \left[ \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } X_j^{\gamma, 2\beta k} \, U_j^{2k} (1) \, \right. \notag \\ & \left.
+ \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } Y_j^{\gamma, 2\beta k} \, U_j^{2k} (1) \right] \, \stackrel{ \textrm{ law }}{=} \, Z^{2k\beta, \theta} (t), \qquad t >0, \label{323} \end{align} for $0< \beta < 1$ and $-\beta < \theta < \beta$ and \begin{align} \mathbb{E}e^{i\xi Z^{2k\beta, \theta} (t)} \, = \, e^{-t|\xi |^{2k\beta} \frac{\cos \frac{\pi}{2}\theta}{\cos \frac{\pi}{2}\beta}}. \end{align} \end{te} \begin{proof} The Fourier transform of \eqref{323} is written as \begin{align} \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } X_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} \; \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } Y_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} \end{align} where the first factor is given by \begin{align} & \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } X_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} = \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{2k\beta}} \left[ 1-\mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2k\beta}} U^{2k}(1)X^{\gamma, 2\beta k}} \right] \right\rbrace \notag \\ = \, & \exp \left\lbrace -\frac{\lambda t}{\gamma^{2k\beta}} \left[ \int_\gamma^\infty \left( 1 - e^{-\left| \xi \left( \frac{\sin \frac{\pi}{2} (\beta - \theta)}{\sin \pi\beta} \right) ^{\frac{1}{2k\beta}} \right|^{2k} y^{2k}} \right) (2k\beta) \frac{\gamma^{2k\beta}}{y^{2k\beta +1}} dy \right] \right\rbrace \notag \\ \stackrel{\gamma \to 0}{\longrightarrow} \, & \exp \left\lbrace -\lambda t |\xi |^{2k} \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi\beta} \right) ^{\frac{1}{\beta}} 2k \int_0^\infty e^{-| \xi |^{2k} \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi\beta} \right) ^{\frac{1}{\beta}} y^{2k}} y^{2k-1-2k\beta} dy \right\rbrace \notag \\ = \, & e^{-\lambda t |\xi |^{2k\beta} \Gamma (1-\beta) \left[ \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta}\right] } \end{align} and by similar calculations the second factor becomes \begin{align} \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } Y_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} \, \stackrel{\gamma \to 0}{\longrightarrow} \, e^{-\lambda t |\xi |^{2k\beta} \Gamma (1-\beta) \left[ \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}\right] } .
\end{align} Thus we have that \begin{align} & \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } X_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} \; \mathbb{E}e^{i\xi \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta} \right) ^{\frac{1}{2\beta k}} \sum_{j=0}^{N \left( t \gamma^{-2\beta k} \right) } Y_j^{\gamma, 2\beta k} \, U_j^{2k} (1)} \notag \\ \stackrel{\gamma \to 0}{\longrightarrow} \, & e^{-\lambda t |\xi |^{2k\beta} \Gamma (1-\beta) \left[ \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} + \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}\right] } \notag \\ = \, & e^{-t |\xi |^{2\beta k} \frac{\cos \frac{\pi}{2}\theta}{\cos \frac{\pi}{2}\beta}}. \end{align} \end{proof} \section{Governing equations} In the previous section we obtained fractional pseudoprocesses as limits of suitable pseudo random walks. In this section we will show that the limiting fractional pseudoprocesses obtained before have signed densities satisfying higher-order space-fractional heat-type equations with Riesz or Feller fractional derivatives. The order of fractionality of the governing equations is a positive real number and this is the major difference with respect to the pseudoprocesses considered so far in the literature. We start by examining space-fractional higher-order equations of order $2k\beta$, $\beta \in (0,1)$, $k \in \mathbb{N}$, which interpolate equations of the form \eqref{11}. \begin{te} The solution to the initial-value problem \begin{align} \begin{cases} \frac{\partial}{\partial t} v_{2k}^\beta (x, t) \, = \, \frac{\partial^{2k\beta}}{\partial |x|^{2k\beta}} v_{2k}^\beta (x, t), \qquad x \in \mathbb{R}, t>0, k \in \mathbb{N}, \beta \in (0,1) \\ v_{2k}^\beta (x, 0) \, = \, \delta (x) \end{cases} \label{problemapari} \end{align} can be written as \begin{align} v_{2k}^\beta (x, t) \, = \, & \frac{1}{\pi x} \mathbb{E} \left[ \sin \left( x G^{2k} \left( \frac{1}{H^\beta (t)} \right) \right) \right] \notag \\ = \, & \frac{1}{\pi x} \mathbb{E} \left[ \sin \left( x G^{2k\beta} \left( \frac{1}{t} \right) \right) \right] \label{rappresentpari} \end{align} and coincides with the law of the pseudoprocess \begin{align} V^{2k\beta} (t) \, = \, U^{2k} \left( H^\beta (t) \right) , \qquad t>0, \end{align} where $U^{2k}$ is related to equation \eqref{11} for $m=2k$ and $H^\beta$ is a stable subordinator independent from $U^{2k}$. $G^\gamma \left( t \right) $ is a r.v. with density \begin{equation} g^\gamma (x, t) \, = \, \gamma \frac{x^{\gamma -1}}{t} e^{-\frac{x^\gamma}{t}}, \qquad x>0, t>0, \gamma >0. \end{equation} \end{te} \begin{proof} The Fourier transform of \eqref{problemapari} leads to the Cauchy problem \begin{align} \begin{cases} \frac{\partial}{\partial t} \widehat{v}_{2k}^\beta (\xi, t) \, = \, - | \xi |^{2k\beta} \widehat{v}_{2k}^\beta (\xi, t) \\ \widehat{v}_{2k}^\beta (\xi, 0) \, = \, 1, \end{cases} \end{align} whose unique solution reads \begin{align} \mathbb{E}e^{i\xi V^{2k\beta} (t)} \, = \, & \int_{\mathbb{R}} dx \, e^{i\xi x} \int_0^\infty ds \, u_{2k} (x, s) \, h_\beta (s, t) \notag \\ = \, & \int_0^\infty ds \, e^{-s\xi^{2k}} h_{\beta} (s, t) \, = \, e^{-t|\xi|^{2k\beta}}. \label{carattpari} \end{align} In \eqref{carattpari} $u_{2k}$ is the density of $U^{2k}$ and $h_\beta(x, t)$ is the probability density of the subordinator $H^\beta$. Now we show that the Fourier transform of \eqref{rappresentpari} coincides with \eqref{carattpari}.
We have that \begin{align} \widehat{v}_{2k}^\beta (\xi, t) \, = \, & \int_{\mathbb{R}} dx \, e^{i\xi x} \, \frac{1}{\pi x} \mathbb{E} \left[ \sin \left( x G^{2k} \left( \frac{1}{H^\beta (t)} \right) \right) \right] \notag \\ = \, & \int_{\mathbb{R}} dx \, e^{i\xi x} \left[ \int_0^\infty \int_0^\infty \frac{\sin xy}{\pi x} \Pr \left\lbrace G^{2k} \left( \frac{1}{s} \right) \in dy \right\rbrace \, \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace \right] \notag \\ = \, & \int_0^\infty \int_0^\infty \Pr \left\lbrace G^{2k} \left( \frac{1}{s} \right) \in dy \right\rbrace \, \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace \left[ \int_{\mathbb{R}} dx \, e^{i\xi x} \frac{\sin xy}{\pi x} \right]. \label{27} \end{align} By considering that the Heaviside function \begin{align} \mathcal{H}_\alpha (z) \, = \, \begin{cases} 1, \qquad & z > \alpha, \\ 0, & z < \alpha \end{cases} \end{align} can be represented as \begin{align} \mathcal{H}_\alpha (z) \, = \, \frac{1}{2\pi} \int_{\mathbb{R}} dw \, e^{iwz} \frac{e^{-i\alpha w}}{iw} \, = \, - \frac{1}{2\pi} \int_{\mathbb{R}} dw \, e^{-iwz} \frac{e^{i\alpha w}}{iw}, \end{align} we obtain that formula \eqref{27} becomes \begin{align} & \widehat{v}_{2k}^\beta (\xi, t) \, = \notag \\ = \, & \int_0^\infty \int_0^\infty \Pr \left\lbrace G^{2k} \left( \frac{1}{s} \right) \in dy \right\rbrace \, \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace \left[ \mathcal{H}_{-y} (\xi) - \mathcal{H}_y (\xi) \right] \notag \\ = \, & \int_0^\infty \int_0^\infty \Pr \left\lbrace G^{2k} \left( \frac{1}{s} \right) \in dy \right\rbrace \, \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace \left[ \mathbb{I}_{[-\xi, + \infty]} (y) - \mathbb{I}_{[-\infty, \xi]} (y) \right] \notag \\ = \, & \int_0^\infty \int_0^\infty dy \, ds \left( 2ksy^{2k-1} e^{-sy^{2k}} \right) \mathbb{I}_{[0, \infty]} (y) \left[ \mathbb{I}_{[-\xi, + \infty]} (y) - \mathbb{I}_{[-\infty, \xi]} (y) \right] h_\beta (s, t). \label{mammamia} \end{align} For $\xi > 0$ \eqref{mammamia} becomes \begin{align} \widehat{v}_{2k}^\beta (\xi, t) \, = \, & \int_0^\infty ds \left[ 1 - \int_0^\xi dy \, 2ks y^{2k-1} e^{-y^{2k}s} \right] h_\beta (s, t) \notag \\ = \, & \int_0^\infty ds \, e^{-\xi^{2k}s} h_\beta (s, t) \, = \, e^{-t| \xi |^{2k\beta}}, \end{align} and for $\xi < 0$ \eqref{mammamia} is \begin{align} \widehat{v}_{2k}^\beta (\xi, t) \, = \, & \int_0^\infty ds \, \left[ \int_{-\xi}^\infty 2ksy^{2k-1} e^{-y^{2k}s} \, dy \right] h_\beta (s, t) \notag \\ = \, & \int_0^\infty ds \, e^{-| \xi |^{2k} s } \, h_\beta (s, t) \, = \, e^{-t |\xi |^{2k\beta}}. \end{align} Since \begin{align} \Pr \left\lbrace G^{2k} \left( \frac{1}{H^\beta (t)} \right) \in dy \right\rbrace /dy \, = \, & 2ky^{2k-1} \int_0^\infty se^{-sy^{2k}} h_\beta (s, t) \, ds \notag \\ = \, & -\frac{\partial}{\partial y} \int_0^\infty e^{-sy^{2k}} h_\beta (s, t) \, ds \notag \\ = \, & -\frac{\partial}{\partial y} e^{-y^{2k\beta}t} \notag \\ = \, & \Pr \left\lbrace G^{2k\beta} \left( \frac{1}{t} \right) \in dy \right\rbrace /dy \end{align} the second form of the solution \eqref{rappresentpari} follows immediately. \end{proof} For $k \geq 1$, $\beta \in \left( 0, \frac{1}{k} \right]$ the solutions \eqref{rappresentpari} are densities of symmetric random variables, while for $\beta > \frac{1}{k}$ the functions \eqref{rappresentpari} are sign-varying. Clearly for $\beta = 1$ we obtain the solution of even-order heat-type equations discussed in \cite{ecporsdov}.
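As a numerical cross-check of the representation \eqref{rappresentpari} (a sketch, not part of the proof; the parameter values are arbitrary and chosen with $\beta < \frac{1}{k}$ so that the limiting law is a genuine density), one can compare a Monte Carlo estimate against the direct inversion of the Fourier transform $e^{-t|\xi|^{2k\beta}}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Monte Carlo check of (rappresentpari):
# v(x, t) = (1/(pi x)) E[ sin(x G^{g}(1/t)) ],  g = 2 k beta,
# using that G^{g}(s) has the same law as (s E)^{1/g} with E ~ Exp(1),
# which follows from the density g^gamma(x, t) given in the theorem.
rng = np.random.default_rng(0)
k, beta, t, x = 1, 0.9, 1.0, 1.5        # arbitrary choices; here g = 1.8
g = 2 * k * beta
G = (rng.exponential(size=2_000_000) / t) ** (1.0 / g)
mc = np.mean(np.sin(x * G)) / (np.pi * x)

# reference: (1/pi) int_0^inf cos(xi x) exp(-t xi^g) d(xi)
ref, _ = quad(lambda xi: np.cos(xi * x) * np.exp(-t * xi ** g), 0, np.inf)
print(mc, ref / np.pi)                  # the two estimates should agree
\end{verbatim}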
As far as space-fractional higher-order heat-type equations are concerned, we have the result of the next theorem, where the governing fractional operator $\mathfrak{R}$ is obtained as a suitable combination of Weyl derivatives. The operator $\mathfrak{R}$ governing the fractional pseudoprocesses appearing in Theorem \ref{pseudowalkdispari} is explicitly written for $ \left\lbrace p, q \in [0,1] : p + q = 1 \right\rbrace $, $ \left\lbrace \beta \in (0,1), \, k \in \mathbb{N}: m-1 < \beta (2k+1) < m, \, m \in \mathbb{N} \right\rbrace $ as \begin{align} & \mathfrak{R} \, v_{2k+1}^\beta (x, t) \, = \notag \\ = \, & - \frac{1}{\cos \frac{\pi \beta}{2}} \left[ p \, e^{i \pi \beta k} \, \frac{^+\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} + q \, e^{-i \pi \beta k} \, \frac{^-\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} \right] v_{2k+1}^\beta (x, t) \notag \\ = \, & - \frac{1}{\cos \frac{\pi \beta}{2} \Gamma (m-(2k+1)\beta)} \frac{\partial^m}{\partial x^m} \left[ e^{i \pi \beta k} p \int_{-\infty}^x \frac{v_{2k+1}^\beta (y, t)}{(x-y)^{(2k+1)\beta-m+1}} dy \right. \notag \\ & \left. + q \, e^{-i \pi \beta k} \, (-1)^m \int_x^\infty \frac{v_{2k+1}^\beta(y, t)}{(y-x)^{(2k+1)\beta -m+1}} dy \right], \label{sommaweylpesata} \end{align} where the left and right Weyl fractional derivatives appear. \begin{te} The solution to the problem \begin{align} \begin{cases} \frac{\partial }{\partial t} v_{2k+1}^\beta (x, t) \, = \, \mathfrak{R} \; v_{2k+1}^\beta (x, t), \qquad x \in \mathbb{R}, t>0, \beta \in (0,1), k \in \mathbb{N}, \\ v_{2k+1}^\beta (x, 0) \, = \, \delta(x), \end{cases} \label{problemapq} \end{align} is given by the signed law of the pseudoprocess \begin{align} \bar{V}^{\beta (2k+1)} (t) \, = \, U_1^{2k+1} \left( H_1^\beta \left( \frac{p t}{\cos \frac{\beta \pi}{2}} \right) \right) - \, U_2^{2k+1} \left( H_2^\beta \left( \frac{qt}{\cos \frac{\beta \pi}{2}} \right) \right) , \label{vbarrato} \end{align} where $U_1^{2k+1}$, $U_2^{2k+1}$ are independent odd-order pseudoprocesses and $H_1^\beta$, $H_2^\beta$, are independent stable subordinators.
\end{te} \begin{proof} The Fourier transform of \eqref{sommaweylpesata} is written as \begin{align} & \mathcal{F} \left[ \mathfrak{R} \, v_{2k+1}^\beta (x, t) \right] (\xi) \, = \notag \\ = \, & \mathcal{F} \left[ - \frac{1}{\cos \frac{\pi \beta}{2}} \left[ p \, e^{i \pi \beta k} \, \frac{^+\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} + q \, e^{-i \pi \beta k} \, \frac{^-\partial^{\beta(2k+1)}}{\partial x^{\beta(2k+1)}} \right] v_{2k+1}^\beta (x, t) \right] (\xi) \notag \\ = \, & -\frac{1}{\cos \frac{\beta \pi}{2}} \left[ p \, e^{i \pi \beta k} \, (-i\xi)^{\beta (2k+1)} + q \, e^{-i \pi \beta k} \, (i\xi)^{\beta (2k+1)} \right] \widehat{v}_{2k+1}^\beta (\xi, t) \notag \\ = \, & - \frac{1}{\cos \frac{\beta \pi}{2}} | \xi |^{\beta (2k+1)} \left[ pe^{-\frac{i\pi \beta}{2} \textrm{ sign}(\xi)} + q e^{\frac{i\pi \beta}{2} \textrm{ sign}(\xi)} \right] \widehat{v}_{2k+1}^\beta (\xi, t) \notag \\ = \, & -|\xi |^{\beta (2k+1)} \left( 1- i \textrm{ sign}(\xi) \, (p-q) \tan \frac{\pi \beta}{2} \right) \widehat{v}_{2k+1}^\beta (\xi, t) \end{align} and therefore we have that \begin{equation} \widehat{v}^\beta_{2k+1} (\xi, t) \, = \, e^{-t| \xi |^{\beta(2k+1)} \left( 1-i \left( p-q \right) \textrm{ sign}(\xi) \tan \frac{\beta \pi}{2} \right) }. \end{equation} In view of \eqref{caratteristicapseudosimm} we get \begin{align} \mathbb{E}e^{i\xi \bar{V}^{(2k+1)\beta}(t)} \, = \, & \mathbb{E}e^{i\xi V^{(2k+1)\beta} \left( \frac{t}{\cos \frac{\beta \pi}{2}} \right) } \notag \\ = \, & e^{-t |\xi |^{\beta(2k+1)} \left( 1 -i \textrm{ sign}(\xi) \, (p-q) \tan \frac{\pi \beta}{2} \right) } \end{align} and this confirms that the solution to \eqref{problemapq} is given by the law of the pseudoprocess \eqref{vbarrato}. \end{proof} \begin{os} For $p=q=\frac{1}{2}$ the Fourier symbol computed above reduces to $-|\xi|^{\beta(2k+1)}$, which is the symbol of the Riesz fractional derivative of order $\beta(2k+1)$; therefore the operator \eqref{sommaweylpesata} takes the form of the Riesz fractional derivative of order $\beta(2k+1)$ when $p=q=\frac{1}{2}$. \end{os} We now pass to the derivation of the governing equation of the fractional pseudoprocesses studied in Theorem \ref{teoremarwfellerdispari}. We first recall the definition of the Feller space-fractional derivative which is \begin{align} ^FD^{\beta, \theta} u(x) \, = \, - \left[ \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin (\pi \beta)} \frac{^+\partial^\beta}{\partial x^\beta} + \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin (\pi \beta)} \frac{^-\partial^\beta}{\partial x^\beta} \right]u(x). \end{align} We recall that \begin{align} \mathcal{F} \left[ ^FD^{\beta, \theta} u(x) \right] (\xi) \, = \, -|\xi|^{\beta} e^{\frac{i \pi \theta}{2} \textrm{ sign}(\xi) } \widehat{u}(\xi), \label{fourierfeller01} \end{align} as can be shown by means of the following calculation \begin{align} \int_{\mathbb{R}} dx \, e^{i\xi x} \; ^FD^{\beta, \theta} u(x) \, = \, & - \left[ \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin (\pi \beta)} (-i\xi)^\beta + \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin (\pi \beta)} (i\xi)^\beta \right] \widehat{u}(\xi) \notag \\ = \, & -\frac{|\xi|^{\beta}}{2i \sin \pi \beta} \left[ \left( e^{\frac{i\pi}{2}\beta} e^{-\frac{i\pi}{2}\theta} - e^{-\frac{i\pi}{2}\beta} e^{\frac{i\pi}{2}\theta} \right) e^{-\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} + \right. \notag \\ & + \left.
\left( e^{\frac{i\pi}{2}\beta} e^{\frac{i\pi}{2}\theta} - e^{-\frac{i\pi}{2}\beta} e^{-\frac{i\pi}{2}\theta} \right) e^{\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} \right] \widehat{u}(\xi) \notag \\ = \, & \begin{cases} -\xi^{\beta} e^{\frac{i\pi \theta}{2} } \widehat{u}(\xi), \qquad &\xi > 0, \\ -(-\xi)^{\beta} e^{-\frac{i\pi \theta}{2} } \widehat{u}(\xi), \qquad &\xi < 0 \end{cases} \notag \\ = \, & -|\xi|^{\beta} e^{\frac{i \pi \theta}{2} \textrm{ sign}(\xi) } \widehat{u}(\xi) \end{align} where we used the results of Theorem \ref{teoremasimboloweyl}. The explicit form of the Fourier transform of the solution to \begin{equation} \frac{\partial}{\partial t} u(x, t) \, = \, ^FD^{\beta, \theta} u(x, t), \qquad u(x, 0) \, = \, \delta(x), \qquad x \in \mathbb{R}, t>0, \end{equation} is written as \begin{equation} \widehat{u}(\xi, t) \, = \, e^{-|\xi |^\beta t e^{\frac{i\pi \theta}{2} \textrm{ sign}(\xi)} } \label{stable} \end{equation} and for $\beta \in (0,2]$, $4m-1 < \theta < 4m+1$, $m \in \mathbb{N}$, represents the characteristic function of a stable r.v. The last condition on $\theta$ is due to the fact that \begin{equation} \left| \widehat{u}(\xi, t) \right| \leq 1 \textrm{ if and only if } \cos \frac{\theta \pi}{2} \in (0,1]. \label{condition} \end{equation} The condition \eqref{condition} must be assumed also for $\beta > 2$ where \eqref{stable} however fails to be the characteristic function of a genuine r.v. For $\theta = \beta <1$ \eqref{stable} becomes totally negatively skewed. By interchanging $\sin (\beta - \theta) \frac{\pi}{2}$ with $\sin (\beta + \theta) \frac{\pi}{2}$ we obtain instead \begin{align} \widehat{u}(\xi, t) \, = \, e^{-|\xi |^{\beta}t e^{-\frac{i\pi}{2}\theta \textrm{ sign}(\xi)} } \end{align} which is totally positively skewed for $\theta = \beta<1$. We are now ready to prove the following theorem. \begin{te} Let $Z^{\beta (2k+1), \theta}(t)$, $t>0$, be the limiting fractional pseudoprocess studied in Theorem \ref{teoremarwfellerdispari}. The signed density of $Z^{\beta (2k+1), \theta}(t)$ is the solution to \begin{align} \begin{cases} \frac{\partial}{\partial t} z^{\beta(2k+1), \theta}(x, t) \, = \, ^FD^{\beta(2k+1), \theta} z^{\beta(2k+1), \theta}(x, t) \\ z^{\beta(2k+1), \theta}(x, 0) \, = \, \delta(x) \end{cases} \label{cauchyfeller} \end{align} and coincides with the signed distribution of the composition for $\beta \in (0,1)$, $-\beta <\theta < \beta$, \begin{align} \mathfrak{Z}^{\beta(2k+1), \theta} (t) \, = \, U_1^{2k+1} \left( H_1^\beta \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}t \right) \right) - U_2^{2k+1} \left( H_2^\beta \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} t\right) \right) , \label{impiccio} \end{align} where $H^\beta_{j}$, $j=1, 2$ are independent stable subordinators and the independent pseudoprocesses $U_j^{2k+1}$, $j=1, 2$, are related to the odd-order heat-type equation \begin{align} \frac{\partial}{\partial t} u_{2k+1} (x, t) \, = \, (-1)^k \frac{\partial^{2k+1}}{\partial x^{2k+1}} u_{2k+1} (x, t). \end{align} The positivity of the time scales in \eqref{impiccio} implies that $-\beta < \theta < \beta$.
\end{te} \begin{proof} Using the result \eqref{fourierfeller01} we note that the Fourier transform of \eqref{cauchyfeller} is written as \begin{align} \begin{cases} \frac{\partial}{\partial t} \widehat{z}^{\beta(2k+1), \theta} (\xi, t) \, = \, -|\xi |^{\beta(2k+1)} e^{\frac{i\pi}{2}\theta \textrm{ sign}(\xi)} \; \widehat{z}^{\beta(2k+1), \theta} (\xi, t)\\ \widehat{z}^{\beta(2k+1), \theta} (\xi,0) \, = \, 1. \end{cases} \end{align} which is satisfied by the Fourier transform \begin{equation} \widehat{z}^{\beta(2k+1), \theta} (\xi, t) \, = \, e^{-t|\xi|^{\beta(2k+1)} e^{\frac{i\pi}{2}\theta \textrm{ sign}(\xi)}}. \label{adessobasta} \end{equation} We now prove that the Fourier transform of \eqref{impiccio} coincides with \eqref{adessobasta}. In view of the independence of the r.v.'s and pseudo r.v.'s involved we write that \begin{align} & \mathbb{E}e^{i\xi \mathfrak{Z}^{\beta(2k+1), \theta} (t)} \, = \, \mathbb{E}e^{i\xi \left[ U_1^{2k+1} \left( H_1^\beta \left( \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}t \right) \right) - U_2^{2k+1} \left( H_2^\beta \left( \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta} t\right) \right) \right]} \notag \\ = \, & \left[ \int_{\mathbb{R}} dx \, e^{i\xi x} \int_0^\infty ds \, u^1_{2k+1} (x, s) h^1_\beta \left( s, \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}t \right) \right] \notag \\ & \times \left[ \int_{\mathbb{R}} dx \, e^{-i\xi x} \int_0^\infty ds \, u^2_{2k+1} (x, s) h^2_\beta \left( s, \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta}t \right) \right] \notag \\ = \, & \left[ \int_0^\infty e^{-i\xi^{2k+1}s} h^1_\beta \left( s, \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta}t \right) ds \right] \left[ \int_0^\infty e^{i\xi^{2k+1}s} h^2_\beta \left( s, \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi \beta}t \right) ds \right] \notag \\ = \, & e^{-t \frac{\sin \frac{\pi}{2}(\beta + \theta)}{\sin \pi \beta } \left( i\xi^{2k+1} \right) ^\beta} e^{-t \frac{\sin \frac{\pi}{2}(\beta - \theta)}{\sin \pi\beta} \left( - i\xi^{2k+1} \right) ^\beta} \notag \\ = \, & e^{- \frac{t|\xi |^{\beta(2k+1)}}{\sin \pi \beta} \left[ \sin \frac{\pi}{2}(\beta + \theta) e^{\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} + \sin \frac{\pi}{2} (\beta - \theta) e^{-\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} \right] } \notag \\ = \, & e^{- \frac{t|\xi |^{\beta(2k+1)}}{2i\sin \pi \beta} \left[ \left( e^{\frac{i\pi}{2}\beta} e^{\frac{i\pi}{2}\theta} - e^{-\frac{i\pi}{2}\beta} e^{-\frac{i\pi}{2}\theta} \right) e^{\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} + \left( e^{\frac{i\pi}{2}\beta} e^{-\frac{i\pi}{2}\theta} - e^{-\frac{i\pi}{2}\beta} e^{\frac{i\pi}{2}\theta} \right) e^{-\frac{i\pi}{2}\beta \textrm{ sign}(\xi)} \right] } \notag \\ = \, & e^{-t|\xi|^{\beta(2k+1)} e^{\frac{i\pi \theta}{2}\textrm{ sign}(\xi)} } \end{align} which coincides with \eqref{adessobasta}. \end{proof} \section{Some remarks} We give various forms for the density $v^\gamma (x, t)$ of symmetric pseudoprocesses of arbitrary order $\gamma > 0$. For integer values of $\gamma = 2n$ or $\gamma = 2n+1$ the analysis of the structure of these densities is presented in \citet{ecporsdov}. We give here an analytical representation of $v^\gamma (x, t)$ for non-integer values of $\gamma$, which is an alternative to \eqref{rappresentpari}, as a power series and in integral form (involving the Mittag-Leffler functions). Furthermore, in Figure \ref{figura1} we give some curves for special values of $\gamma$.
We also give the distribution of the sojourn time of compositions of pseudoprocesses with stable subordinators (totally positively skewed stable r.v.'s). \begin{prop} For $\gamma > 1$ the inverse of the Fourier transform \begin{equation} \widehat{v}^\gamma (\xi, t) \, = \, e^{-t|\xi |^\gamma} \end{equation} can also be written as \begin{align} v^\gamma (x, t) \, = \, & \frac{1}{\pi} \int_0^\infty \cos (\xi x) \, e^{-t \xi^\gamma} \, d\xi \notag \\ = \, & \frac{1}{\pi \gamma} \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{(2k)!} \frac{\Gamma \left( \frac{2k+1}{\gamma} \right) }{t^{\frac{2k+1}{\gamma}}} \notag \\ = \, & \frac{1}{\pi \gamma} \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{t^{\frac{2k+1}{\gamma}}} \frac{B \left( \frac{2k+1}{\gamma}, \, (2k+1) \left( 1-\frac{1}{\gamma} \right) \right) }{\Gamma \left( (2k+1) \left( 1-\frac{1}{\gamma} \right) \right) } \notag \\ = \, & \frac{1}{\pi \gamma}\sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{t^{\frac{2k+1}{\gamma}}} \frac{1}{\Gamma \left( (2k+1) \left( 1-\frac{1}{\gamma} \right) \right) } \int_0^1 dy \, y^{\frac{2k+1}{\gamma}-1} (1-y)^{(2k+1) \left( 1-\frac{1}{\gamma} \right) -1} \notag \\ = \, & \frac{1}{\pi \gamma} \int_0^1 dy \, \sum_{k=0}^\infty \frac{(-1)^k \left( x y^{\frac{1}{\gamma}} \left( 1-y \right) ^{1-\frac{1}{\gamma}} \right) ^{2k}}{t^{\frac{2k+1}{\gamma}} \Gamma \left( (2k+1) \left( 1-\frac{1}{\gamma} \right) \right) } \, y^{\frac{1}{\gamma}-1} \left( 1-y \right) ^{-\frac{1}{\gamma}} \notag \\ = \, & \frac{t^{-\frac{1}{\gamma}}}{\pi \gamma} \int_0^1 dy \, E_{2 \left( 1-\frac{1}{\gamma} \right) , 1-\frac{1}{\gamma}} \left( - \left( x y^{\frac{1}{\gamma}} (1-y)^{1-\frac{1}{\gamma}} \right) ^2 t^{-\frac{2}{\gamma}} \right) \; y^{\frac{1}{\gamma}-1} \left( 1-y \right) ^{-\frac{1}{\gamma}} \notag \\ \stackrel{w=y/(1-y)}{=} \, & \frac{t^{-\frac{1}{\gamma}}}{\pi \gamma} \int_0^\infty dw \, E_{2 \left( 1-\frac{1}{\gamma} \right) , 1-\frac{1}{\gamma}} \left( -x^2 \left( \frac{w^{\frac{1}{\gamma}}}{1+w} \right) ^2 t^{-\frac{2}{\gamma}} \right) \frac{w^{\frac{1}{\gamma}}}{1+w} \frac{1}{w} \label{alternativarappr} \end{align} and for $\gamma < 2$ coincides with the density of symmetric stable processes. \end{prop} Formula \eqref{alternativarappr} is an alternative to the probabilistic representation \eqref{rappresentpari} for $\gamma = 2k\beta$. For $1<\gamma < 2$ it represents the density of a symmetric stable r.v. \begin{os} We note that \begin{align} v^\gamma (0, t) \, = \, \frac{t^{-\frac{1}{\gamma}}}{\pi} \Gamma \left( 1+ \frac{1}{\gamma} \right) \end{align} as can be inferred from \eqref{alternativarappr}. In the neighbourhood of $x=0$ the density $v^\gamma (x, t)$ can be written as \begin{align} v^\gamma (x, t) \, \approx \, & \frac{1}{\pi \gamma} \left( \frac{1}{t^{\frac{1}{\gamma}}} \Gamma \left( \frac{1}{\gamma} \right) - \frac{x^2}{2} \frac{\Gamma \left( \frac{3}{\gamma} \right) }{t^{\frac{3}{\gamma}}} \right) \notag \\ = \, & v^\gamma (0, t) \, \left( 1-x^2 \frac{C_\gamma}{2t^{\frac{2}{\gamma}}} \right) \end{align} where \begin{equation} C_\gamma \, = \, \frac{\Gamma \left( \frac{1}{\gamma} + \frac{1}{3} \right) \Gamma \left( \frac{1}{\gamma} + \frac{2}{3} \right) \, 3^{\frac{3}{\gamma}-\frac{1}{2}}}{2\pi}. \end{equation} In the above calculation the triplication formula of the Gamma function (see \citet{lebedev} page 14) has been applied \begin{equation} \Gamma (z) \Gamma \left( z+\frac{1}{3} \right) \Gamma \left( z+\frac{2}{3} \right) \, = \, \frac{2\pi}{3^{3z-\frac{1}{2}}} \Gamma (3z).
\end{equation} \end{os} \begin{os} For even-order pseudoprocesses $U^{2k}(t)$, $t>0$, the distribution of the sojourn time \begin{equation} \Gamma_t \left( U^{2k} \right) \, = \, \int_0^t \mathbb{I}_{[0, \infty)} \left( U^{2k}(s) \right) ds \end{equation} follows the arcsine law for all $k\geq 1$ (see \citet{krylov}). Therefore the distribution of the sojourn time of $U^{2k} \left( H^\beta (t) \right) $, $t>0$, $\beta \in (0,1)$, reads \begin{align} \Pr \left\lbrace \Gamma_t \left( U^{2k} \left( H^\beta \right) \right) \in dx \right\rbrace \, = \, & \int_0^\infty \Pr \left\lbrace \Gamma_s \left( U^{2k} \right) \in dx \right\rbrace \, \Pr \left\lbrace H^\beta(t) \in ds \right\rbrace \notag \\ = \, & \frac{dx}{\pi} \int_x^\infty \frac{1}{\sqrt{x (s-x)}} \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace . \label{55} \end{align} In the odd-order case the distribution of the sojourn time \begin{align} \Gamma_t \left( U^{2k+1} \right) \, = \, \int_0^t \mathbb{I}_{[0, \infty)} \left( U^{2k+1} \left( s \right) \right) ds \end{align} is written as (see \citet{lachal2003}) \begin{equation} \Pr \left\lbrace \Gamma_t \left( U^{2k+1} \right) \in dx \right\rbrace \, = \, dx \, \frac{\sin \frac{\pi}{2k+1}}{\pi} x^{-\frac{1}{2k+1}} \left( t-x \right) ^{-\frac{2k}{2k+1}} \mathbb{I}_{(0,t)} (x) \end{equation} and thus we get \begin{align} \Pr \left\lbrace \Gamma_t \left( U^{2k+1} \left( H^\beta \right) \right) \in dx \right\rbrace \, = \, & \frac{dx \, \sin \frac{\pi}{2k+1}}{\pi} \int_x^\infty \frac{1}{\sqrt[2k+1]{x (s-x)^{2k}}} \, \Pr \left\lbrace H^\beta (t) \in ds \right\rbrace . \end{align} \end{os} For $\beta = \frac{1}{2}$ the integral \eqref{55} can be evaluated explicitly \begin{align} \Pr \left\lbrace \Gamma_t \left( U^{2k} \left( H^{\frac{1}{2}} \right) \right) \in dx \right\rbrace \, = \, & \frac{dx}{\pi} \int_x^\infty \frac{1}{\sqrt{x(s-x)}} \frac{te^{-\frac{t^2}{2s}}}{\sqrt{2\pi s^3}} ds \notag \\ = \, & \frac{dx \, t}{\pi \sqrt{2\pi x}} \int_0^{\frac{1}{x}} \frac{e^{-\frac{t^2}{2}y}}{\sqrt{1-xy}} dy \notag \\ = \, &\frac{dx \, t}{\pi \sqrt{2\pi x^3}} \int_0^1 \frac{e^{-\frac{t^2}{2x}w}}{\sqrt{1-w}} dw \notag \\ = \, & \frac{ dx \, t}{\pi \sqrt{2\pi x^3}} \sum_{m=0}^\infty \left( -\frac{t^2}{2x} \right) ^m \frac{1}{m!} \int_0^1 w^m \left( 1-w \right) ^{-\frac{1}{2}} \, dw \notag \\ = \, & \frac{dx \, t}{\pi \sqrt{2x^3}} E_{1, \frac{3}{2}} \left( -\frac{t^2}{2x} \right) , \qquad x>0, t>0, \end{align} where \begin{equation} E_{\nu, \mu} (x) \, = \, \sum_{j=0}^\infty \frac{x^j}{\Gamma (j\nu+\mu)}, \qquad \nu, \mu > 0, \end{equation} is the Mittag-Leffler function. \begin{figure} [htp!] \centering \caption{The density $v^\gamma (x, t)$ for $\gamma > 2$ displays an oscillating behaviour similar to that of the fundamental solution of even-order heat equations.} \includegraphics[scale=0.24]{natata.jpg} \label{figura1} \end{figure} \section*{Acknowledgements} The authors have benefited from fruitful discussions on the topics of this paper with Dr. Mirko D'Ovidio.
\section{Appendix} In this Appendix, we first present additional experimental results for NER before providing additional details on the experiments, including model architectures and the selection of hyper-parameters. \subsection{Additional Experimental Results} \begin{table} \caption{All trials of the NER experiments.} \label{tab:ner_full} \begin{center} \begin{tabular}{ccc|cc} \toprule & \multicolumn{2}{c}{\citet{ma2016end}} & \multicolumn{2}{c}{+ GSPEN }\\ Trial & Val. F1 & Test F1 & Val. F1 & Test F1\\ \midrule $1$ & $94.94$ & $91.36$ & $94.85$ & $91.25$\\ $2$ & $94.67$ & $91.37$ & $94.76$ & $91.53$\\ $3$ & $94.74$ & $91.32$ & $95.07$ & $91.60$\\ $4$ & $94.94$ & $91.35$ & $95.08$ & $91.47$\\ $5$ & $95.12$ & $91.44$ & $95.10$ & $91.71$\\ \midrule & \multicolumn{2}{c}{\citet{akbik2018contextual}} & \multicolumn{2}{c}{+ GSPEN }\\ \midrule $1$ & $95.76$ & $92.84$ & $95.87$ & $92.88$\\ $2$ & $96.01$ & $92.77$ & $95.99$ & $92.46$\\ $3$ & $95.81$ & $92.71$ & $95.88$ & $92.71$\\ $4$ & $95.92$ & $92.91$ & $96.00$ & $92.80$\\ $5$ & $95.90$ & $92.72$ & $96.04$ & $92.60$\\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tab:ner_full} contains the results for all trials of the NER experiments. When comparing to \citet{ma2016end}, we outperform their model on both validation and test data in four out of five trials. When comparing to \citet{akbik2018contextual}, we outperform their model on validation data in four out of five trials, but only outperform their model on test data in one trial. \subsection{Additional Experimental Details} \label{sec:exp_details} \paragraph{General Details:} Unless otherwise specified, all Struct models were trained by using the corresponding pre-trained Unary model, fixing these parameters, and training pairwise potentials. All SPEN models were trained by using the pre-trained Unary model, fixing these parameters, and training the $T$ function. Early stopping based on task performance on the validation data was used to select the number of epochs for training. For SPEN, GSPEN, and Struct models, loss-augmented inference was used where the loss function equals the sum of the $0$-$1$ losses per output variable, \emph{i.e.}, $L(\hat{y}, y)~\coloneqq~\sum_{i=1}^n~\mathbf{1}[\hat{y}_i~\neq~y_i]$ where $n$ is the number of output variables. \paragraph{OCR:} The Unary model is a single $3$-layer multilayer perceptron (MLP) with ReLU activations, hidden layer sizes of $200$, and a dropout layer after the first linear layer with keep probability $0.5$. Scores for each image were generated by independently passing them into this network. Both Struct and GSPEN use a graph with one pairwise region per pair of adjacent letters, for a total of $4$ pairs. Linear potentials are used, containing one entry per pair per set of assignments of values to each pair. The score function for both SPEN and GSPEN takes the form $F(x, p; w) = \sum_{r \in \mathcal{R}} \sum_{y_r \in \mathcal{Y}_r}p_r(y_r)b_r(x, y_r;w) + T(B(x), p)$, where in the SPEN case $\mathcal{R}$ contains only unary regions and in the GSPEN case $\mathcal{R}$ consists of the graph used by Struct. Each $b_r$ represents the outputs of the same model as Unary/Struct for SPEN/GSPEN, respectively, and $B(x)$ represents the vector $(b_r(x, y_r;w)|_{r \in \mathcal{R}, y_r \in \mathcal{Y}_r})$. For every SPEN and GSPEN model trained, $T$ is a $2$-layer MLP with softplus activations, an output size of $1$, and either $500$, $1000$, or $2000$ hidden units.
These hidden sizes as well as the number of epochs of training for each model were determined based on task performance on the validation data. Message-passing inference used in both Struct and GSPEN ran for $10$ iterations. GSPEN models were trained by using the pre-trained Struct model, fixing these parameters, and training the $T$ function. The NLStruct model consisted of a $2$-layer MLP with $2834$ hidden units, an output size of $1$, and softplus activations. We use the same initialization described by \citet{graber2018nlstruct} for their word recognition experiments, where the first linear layer was initialized to the identity matrix and the second linear layer was initialized to a vector of all 1s. NLStruct models were initialized from the Struct models trained without entropy and used fixed potentials. The inference configuration described by \citet{graber2018nlstruct} was used, where inference was run for $100$ iterations with averaging applied over the final $50$ iterations. All settings for the OCR experiments used a mini-batch size of $128$ and used the Adam optimizer, with Unary, SPEN, and GSPEN using a learning rate of $10^{-4}$ and Struct using a learning rate of $10^{-3}$. Gradients were clipped to a norm of $1$ before updates were applied. Inference in both SPEN and GSPEN was run for a maximum of $100$ iterations. Inference was terminated early for both models if the inference objective for all datapoints in the minibatch being processed changed by less than $0.0001$. Three different versions of every model, initialized using different random seeds, were trained for these experiments. The plots represent the average of these trials, and the error shown is the standard deviation across these trials. \paragraph{Tagging:} The SPEN and GSPEN models for the image tagging experiment use the same scoring function form as in the OCR experiments, with the exception that the $T$ function used for GSPEN is only a function of the beliefs and does not include the potentials as input. The $T$ model in both cases is a $2$-layer MLP with softplus activations and $130$ hidden units. Both models were trained using gradient descent with a learning rate of $10^{-2}$, a momentum of $0.9$, and a mini-batch size of $128$. Once again, only the $T$ component was trained for GSPEN, and the pairwise potentials were initialized to a Struct model trained using the settings described in \citet{graber2018nlstruct}. The message-passing procedure used to solve the inner optimization problem for GSPEN was run for $100$ iterations per iteration of Frank-Wolfe. Inference for SPEN and GSPEN was run for $100$ iterations and was terminated early if the inference objective for all datapoints in the minibatch being processed changed by less than $0.0001$. \paragraph{Multilabel Classification:} For the Bibtex dataset, $25$ percent of the training data was set aside to be used as validation data; this was not necessary for Bookmarks, which has a pre-specified validation dataset. For prediction in both datasets and for all models, a threshold determining the boundary between positive/negative label predictions was tuned on the validation dataset. For the Bibtex dataset, the Unary model consists of a $3$-layer MLP taking the binary feature vectors as input and returning a $159$-dimensional vector representing the potentials for label assignments $y_i = 1$; the potentials for $y_i = 0$ are fixed to $0$.
The Unary model uses ReLU activations, hidden unit sizes of $150$, and dropout layers before the first and second linear layers with keep probability of $0.5$. The Struct model consists of a $2$-layer MLP which also uses the feature vector as input, and it contains $1000$ hidden units and ReLU activations. The SPEN model uses the same scoring function form as used in the previous experiments, except the $T$ function is only a function of the prediction vector and does not use the unary potentials as input. The $T$ model consists of a $2$-layer MLP which takes the vector $\left(p_i(y_i=1)\right)_{i=1}^{159}$ as input. This model has $16$ hidden units, an output size of $1$, and uses softplus activations. The GSPEN model was trained by starting from the SPEN model, fixing these parameters, and training a pairwise model with the same architecture as the Struct model. For the Bookmarks dataset, the models use the same architectures with slightly different configurations. The Unary model consists of a similar $3$-layer MLP, except dropout is only applied before the second linear layer. The Struct model uses the same architecture as the one trained on the Bibtex data. The $T$ model for SPEN/GSPEN uses $15$ hidden units. For both datasets and for both SPEN and GSPEN, mirror descent was used for inference with an additional entropy term with a coefficient of $0.1$; for Struct, a coefficient of $1$ was used. Inference was run for $100$ iterations, with early termination as described previously using the same threshold. For Struct and GSPEN, message passing inference was run for $5$ iterations. The Unary model was trained using gradient descent with a learning rate of $10^{-2}$ and a momentum of $0.9$, while Struct, SPEN and GSPEN were trained using the Adam optimizer with a learning rate of $10^{-4}$. \paragraph{NER:} Both structured model baselines were trained using code provided by the authors of the respective papers. In both cases, hyperparameter choices for these structured models were kept identical to the choices made in their original works. For completeness, we review these choices. The structured model of \citet{ma2016end} first produces a vector for each word in the input sentence by concatenating two vectors: the first is a $100$-dimensional embedding for the word, which is initialized from pre-trained GloVe embeddings \citep{pennington2014glove} and fine-tuned. The second is the output of a $1$-D convolutional deep net with $30$ filters of length $3$ taking as input $30$-dimensional character embeddings for each character in the word. These representations are then passed into a $1$-layer bidirectional LSTM with a hidden state size of $256$, which is passed through a linear layer followed by an ELU activation to produce an intermediate representation. Unary/pairwise graphical model scores are finally obtained by passing this intermediate representation through two further linear layers. Predictions are made using the Viterbi algorithm. Dropout is applied to the embeddings before they are fed into the RNN (zeroing probability of $0.5511$, corresponding to two separate dropout layers with zeroing probability of $0.33$ being applied) and to the output hidden states of the RNN (zeroing probability of $0.5$). The GSPEN models in this setting were trained by initializing the structured component from the pre-trained models and fixing them -- that is, only the parameters in the MLP were trained during this step.
Because the input sentences are of varying size, we zero-pad all inputs of the MLP to the maximum sequence length. Dropout with a zeroing probability of $0.75$ was additionally applied to the inputs of the MLP. Inference was conducted using mirror descent with added entropy and a convergence threshold of $0.1$. For both the structured baseline and GSPEN, model parameters were trained for $200$ epochs using SGD with an initial learning rate of $0.01$, which was decayed every epoch using the formula $\text{lr}(\text{epoch})~=~\tfrac{0.01}{1+\text{epoch}\cdot0.05}$. The structured baseline was trained with a mini-batch size of $16$, while the GSPEN model used a mini-batch size of $32$ during training. A larger batch size was used for the GSPEN model to decrease the amount of time needed to complete one pass through the data. \citet{akbik2018contextual} use a concatenation of three different pre-trained embeddings per token as input to the bidirectional LSTM. The first is generated by a bidirectional LSTM which takes character-level embeddings as input and is pre-trained using a character-based language modeling objective (see \citet{akbik2018contextual} for more details). The other two embeddings are GloVe word embeddings \citep{pennington2014glove} and task-trained character-based embeddings (as specified by \citet{lample2016neural}). During training, these embeddings are fine-tuned by passing them through a linear layer whose parameters are learned. The embeddings are passed into a $1$-layer bidirectional LSTM with a hidden state size of $256$. Unary scores are generated from the outputs of the LSTM by passing them through a linear layer; pairwise scores consist of a matrix of scores for every pair of labels, which are shared across sentence indices. In this setting, the GSPEN models were trained by initializing the structured component from the pre-trained models and then fine-tuning all of the model parameters. Mirror descent with added entropy was used for GSPEN inference with a convergence threshold of $0.1$. For both the structured baseline and GSPEN, model parameters were trained for a maximum of $150$ epochs using SGD with a mini-batch size of $32$ and an initial learning rate of $0.1$, which was decayed by $0.5$ when the training loss did not decrease past its current minimum for $3$ epochs. Training was terminated early if the learning rate fell below $10^{-4}$. \section{Background} \label{sec:bg} \vspace{-0.2cm} Let $x \in \mathcal{X}$ represent the input provided to a model, such as a sentence or an image. In this work, we consider tasks where the outputs take the form $y = (y_1, \dots, y_K) \in \mathcal{Y} \coloneqq \prod_{k=1}^K \mathcal{Y}_k$, \emph{i.e.}, they are vectors where the $k$-th variable's domain is the discrete and finite set $\mathcal{Y}_k = \{1, \ldots, |\mathcal{Y}_k|\}$. In general, the number of variables $K$ which are part of the configuration $y$ can depend on the observation $x$. However, for readability only, we assume all $y \in \mathcal{Y}$ contain $K$ entries, \emph{i.e.}, we drop the dependence of the output space $\mathcal{Y}$ on input $x$. All models we consider consist of a function $F(x, y;w)$, which assigns a score to a given configuration $y$ conditioned on input $x$ and is parameterized by weights $w$.
Provided an input $x$, the inference problem requires finding the configuration $\hat y$ that maximizes this score, \emph{i.e.}, $\hat{y} \coloneqq \argmax_{y \in \mathcal{Y}} F(x, y; w)$. To find the parameters $w$ of the function $F(x,y;w)$, it is common to use a Structured Support Vector Machine (SSVM) (a.k.a.\ Max-Margin Markov Network) objective \citep{Tsochantaridis2005, Taskar2003}: given a multiset $\left\{\left(x^i, y^i\right)_{i=1}^N\right\}$ of data points $\left(x^i,y^i\right)$ comprised of an input $x^i$ and the corresponding ground-truth configuration $y^i$, an SSVM attempts to find weights $w$ which maximize the margin between the scores assigned to the ground-truth configuration $y^i$ and the inference prediction: \begin{equation} \min_w \sum_{\left(x^i, y^i\right)} \max_{\hat{y} \in \mathcal{Y}}\left\{ F\left(x^i, \hat{y}; w\right) + L\left(\hat{y}, y^i \right) \right\} - F\left(x^i, y^i; w\right). \label{eq:SSVM} \end{equation} Hereby, $L\left(\hat{y}, y^i\right)$ is a task-specific and often discrete loss, such as the Hamming loss, which steers the model towards learning a margin between correct and incorrect outputs. Due to the addition of the task-specific loss $L(\hat y, y^i)$ to the model score $F\left(x^i,\hat y;w\right)$, we often refer to the maximization task within \equref{eq:SSVM} as loss-augmented inference. The procedure to solve loss-augmented inference depends on the considered model, which we discuss next. \textbf{Unstructured Models.} Unstructured models, such as feed-forward deep nets, assign a score to each label of variable $y_k$ which is part of the configuration $y$, irrespective of the label choice of other variables. Hence, the final score function $F$ is the sum of $K$ individual scores $f_k(x, y_k; w)$, one for each variable: \begin{equation} F(x, y; w) \coloneqq \sum_{k=1}^K f_k(x, y_k; w). \end{equation} Because the scores for each output variable do not depend on the scores assigned to other output variables, the inference assignment is determined efficiently by independently finding the maximum score for each variable $y_k$. The same is true for loss-augmented inference, assuming that the loss decomposes into a sum of independent terms as well. \textbf{Classical Structured Models.} \label{sec:struct} Classical structured models incorporate dependencies between variables by considering functions that take more than one output space variable $y_k$ as input, \emph{i.e.}, each function depends on a subset $r \subseteq \{1, \dots, K\}$ of the output variables. We refer to the subset of variables via $y_r = (y_k)_{k\in r}$ and use $f_r$ to denote the corresponding function. The overall score for a configuration $y$ is a sum of these functions, \emph{i.e.}, \begin{equation} F(x, y;w) \coloneqq \sum_{r \in \mathcal{R}} f_r(x, y_r; w). \end{equation} Hereby, $\mathcal{R}$ is a set containing all of the variable subsets which are required to compute $F$. The variable subset relations between functions $f_r$, \emph{i.e.}, the structure, are often visualized using factor graphs or, generally, Hasse diagrams. This formulation allows one to explicitly model relations between variables, but it comes at the price of more complex inference, which is NP-hard~\citep{Shimony1994} in general.
A number of approximations to this problem have been developed and utilized successfully (see \secref{sec:related} for more details), but the complexity of these methods scales with the size of the largest region $r$. For this reason, these models commonly consider only unary and pairwise regions, \emph{i.e.}, regions with one or two variables. Inference, \emph{i.e.}, maximization of the score, is equivalent to the integer linear program \begin{equation} \max_{p \in \mathcal{M}} \sum_{r \in \mathcal{R}} \sum_{y_r \in \mathcal{Y}_r} p_r(y_r)f_r(x, y_r; w), \end{equation} where each $p_r$ represents a marginal probability vector for region $r$ and $\mathcal{M}$ represents the set of $p_r$ whose marginal distributions are globally consistent, which is often called the marginal polytope. Adding an entropy term over the probabilities to the inference objective transforms the problem from maximum a-posteriori (MAP) to marginal inference, and pushes the predictions to be more uniform \citep{Wainwright2008,NiculaeMBC18}. When combined with the learning procedure specified above, this entropy provides learning with the additional interpretation of maximum likelihood estimation \citep{Wainwright2008}. The training objective then also fits into the framework of Fenchel-Young Losses \citep{blondel2018learning}. For computational reasons, it is common to relax the marginal polytope $\mathcal{M}$ to the local polytope $\mathcal{M}_L$, which is the set of all probability vectors that marginalize consistently for the factors present in the graph \citep{Wainwright2008}. Since the resulting marginals are no longer globally consistent, \emph{i.e.}, they are no longer guaranteed to arise from a single joint distribution, we write the predictions for each region using $b_r(y_r)$ instead of $p_r(y_r)$ and refer to them using the term ``beliefs.'' Additionally, the entropy term is approximated using fractional entropies~\citep{Heskes2003} such that it only depends on the factors in the graph, in which case it takes the form $H_{\mathcal{R}}(b) \coloneqq \sum_{r\in \mathcal{R}} \sum_{y_r \in \mathcal{Y}_r} - b_r(y_r) \log b_r(y_r)$. \textbf{Structured Prediction Energy Networks.} \label{sec:spen} Structured Prediction Energy Networks (SPENs)~\citep{BelangerICML2016} were motivated by the desire to represent interactions between larger sets of output variables without incurring a high computational cost. The SPEN score function takes the following form: \begin{equation} F(x, p_1, \dots, p_K; w) \coloneqq T\left(\bar{f}(x; w), p_1, \dots, p_K; w \right), \end{equation} where $\bar{f}(x; w)$ is a learned feature representation of the input $x$, each $p_k$ is a one-hot vector, and $T$ is a function that takes these two terms and assigns a score. This representation of the labels, \emph{i.e.}, $p_k$, is used to facilitate gradient-based optimization during inference. More specifically, inference is formulated via the program: \begin{equation} \max_{p_{k} \in \Delta_k \forall k} T\left( \bar{f}(x; w), p_1, \dots, p_K; w \right), \end{equation} where each $p_{k}$ is constrained to lie in the $|\mathcal{Y}_k|$-dimensional probability simplex $\Delta_k$. This task can be solved using any constrained optimization method. However, for non-concave $T$ the inference solution might only be approximate.
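To make this inference step concrete, the following Python sketch (a toy instance, not the trained models of our experiments: $T$ is a hand-rolled quadratic scorer with random weights, and the simplex constraints are handled by a softmax reparameterisation rather than the projected gradient or entropic mirror ascent used in published SPENs) maximizes the score over $p_1, \dots, p_K$:

\begin{verbatim}
# Toy SPEN inference sketch: maximize T(feat, p_1..p_K) over the product
# of simplices via gradient ascent on softmax logits. All weights are
# random stand-ins for learned quantities.
import numpy as np

K, L = 5, 26                                  # K variables, L labels each
rng = np.random.default_rng(0)
unary = rng.normal(size=(K, L))               # stand-in for f_bar(x; w)
W = rng.normal(size=(K * L, K * L)) * 0.01    # toy global interactions

def T(p):                                     # toy score function
    flat = p.ravel()
    return np.sum(unary * p) + flat @ W @ flat

def grad_T(p):                                # its gradient w.r.t. p
    flat = p.ravel()
    return unary + ((W + W.T) @ flat).reshape(K, L)

theta = np.zeros((K, L))                      # softmax logits
for step in range(100):
    p = np.exp(theta - theta.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # rows lie on the simplex
    g = grad_T(p)
    # chain rule through the row-wise softmax
    theta += 0.1 * p * (g - np.sum(g * p, axis=1, keepdims=True))

print(T(p), p.argmax(axis=1))                 # score and decoded labels
\end{verbatim}

Because the constraint set here is a product of independent simplices, any update that keeps each row of $p$ normalized is feasible; the structural constraints introduced for GSPEN below are what make its inference harder.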
\textbf{NLStruct.} \label{sec:nlstruct} SPENs do not support score functions that contain a structured component. In response, \citet{graber2018nlstruct} introduced NLStruct, which combines a classical structured score function with a nonlinear transformation applied on top of it to produce a final score. Given a set $\mathcal{R}$ as defined previously, the NLStruct score function takes the following form: \begin{equation} F(x,p_\mathcal{R};w) \coloneqq T\left(f_\mathcal{R}(x; w) \circ p_\mathcal{R}; w \right), \end{equation} where $f_\mathcal{R}(x; w) \coloneqq (f_r(x, y_r; w))|_{r \in \mathcal{R}, y_r \in \mathcal{Y}_r}$ is a vectorized form of the score function for a classical structured model, $p_\mathcal{R} \coloneqq (p_r(y_r))|_{\forall r \in \mathcal{R}, \forall y_r \in \mathcal{Y}_r}$ is a vector containing all marginals, `$\circ$' is the Hadamard product, and $T$ is a scalar-valued function. For this model, inference is formulated as a constrained optimization problem, where $\mathcal{Y}_\mathcal{R} \coloneqq \prod_{r \in \mathcal{R}}\mathcal{Y}_r$: \begin{equation} \max_{y \in \mathbb{R}^{|\mathcal{Y}_\mathcal{R}|}, p_\mathcal{R} \in \mathcal{M}} T(y; w) \text{~~s.t.~~} y = f_\mathcal{R}(x; w) \circ p_\mathcal{R}. \end{equation} Forming the Lagrangian of this program and rearranging leads to the saddle-point inference problem \begin{equation} \min_{\lambda} \max_y \left\{T(y; w) - \lambda^T y \right\} + \max_{p_\mathcal{R} \in \mathcal{M}} \lambda^T \left(f_\mathcal{R}(x; w) \circ p_\mathcal{R}\right). \end{equation} Notably, maximization over $p_\mathcal{R}$ is solved using techniques developed for classical structured models\footnote{As mentioned, solving the maximization over $p_\mathcal{R}$ tractably might require relaxing the marginal polytope $\mathcal{M}$ to the local marginal polytope $\mathcal{M}_L$. For brevity, we will not repeat this fact whenever an inference problem of this form appears throughout the rest of this paper.}, and the saddle-point problem is optimized using the primal-dual algorithm of \citet{chambolle2011first}, which alternates between updating $\lambda$, $y$, and $p_\mathcal{R}$. \section{Conclusions} \vspace{-0.cm} The developed GSPEN model combines the strengths of several prior approaches to solving structured prediction problems. It allows machine learning practitioners to include inductive bias in the form of known structure into a model while implicitly capturing higher-order correlations among output variables. The model formulation described here is more general than previous attempts to combine explicit local and implicit global structure modeling while not requiring inference to solve a saddle-point problem. \iffalse There are several areas left to be explored in future work. Inference remains the primary bottleneck of this approach. Additionally, we did not investigate the influence of the SSVM learning objective on model performance, and other training objectives may lead to improvements. Most notably, an objective similar to that used by Deep Value Networks (described in \secref{sec:related}) is interesting to investigate. 
\fi \section{Experiments} \vspace{-0.2cm} \begin{wraptable}{r}{0.51\textwidth} \centering \setlength{\tabcolsep}{2pt} {\small \begin{tabular}{lcccc} \toprule & Struct & SPEN & NLStruct & GSPEN\\ \midrule OCR (size 1000) & 0.40 s & 0.60 s & 68.56 s & 8.41 s \\ Tagging & 18.85 s & 30.49 s & 208.96 s & 171.65 s \\ Bibtex & 0.36 s & 11.75 s & -- & 13.87 s \\ Bookmarks & 6.05 s & 94.44 s & -- & 234.33 s\\ NER & 29.16 s & -- & -- & 99.83 s\\ \bottomrule \end{tabular} } \vspace{-0.2cm} \caption{ Average time to compute the inference objective and complete a weight update for one pass through the training data, for all models trained in this work. } \label{tab:time} \vspace{-0.3cm} \end{wraptable} To demonstrate the utility of our model and to compare inference and learning settings, we report results on the tasks of optical character recognition (OCR), image tagging, multilabel classification, and named entity recognition (NER). For each experiment, we use the following baselines: Unary is an unstructured model that does not explicitly model the correlations between output variables in any way. Struct is a classical deep structured model using neural network potentials. We follow the inference and learning formulation of~\citet{ChenSchwingICML2015}, where inference consists of a message passing algorithm derived using block coordinate descent on a relaxation of the inference problem. SPEN and NLStruct represent the formulations discussed in \secref{sec:spen} and \secref{sec:nlstruct}, respectively. Finally, GSPEN represents Graph Structured Prediction Energy Networks, described in \secref{sec:ours}. For GSPENs, the inner structured inference problems are solved using the same algorithm as for Struct. To compare the run-time of these approaches, Table~\ref{tab:time} gives the average epoch compute time (\emph{i.e.}, time to compute the inference objective and update model weights) during training for our models for each task. In general, GSPEN training was more efficient with respect to time than NLStruct but, expectedly, more expensive than SPEN. Additional experimental details, including hyper-parameter settings, are provided in Appendix~\ref{sec:exp_details}. \vspace{-0.2cm} \subsection{Optical Character Recognition (OCR)} \vspace{-0.2cm} \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-1.2cm} \centering \includegraphics[width=0.5\textwidth]{fig/OCR_sample.png} \vspace{-0.6cm} \caption{OCR sample data points with different interpolation factors $\alpha$. } \label{fig:ocr_sample} \vspace{-0.5cm} \end{wrapfigure} For the OCR experiments, we generate data by selecting a list of 50 common 5-letter English words, such as `close,' `other,' and `world.' To create each data point, we choose a word from this list and render each letter as a 28x28 pixel image by selecting a random image of the letter from the Chars74k dataset~\citep{de2009character}, randomly shifting, scaling, rotating, and interpolating with a random background image patch. A different pool of backgrounds and letter images was used for the training, validation, and test splits of the data. The task is to identify the words given 5 ordered images. We create three versions of this dataset using different interpolation factors of $\alpha \in \{0.3, 0.5, 0.7\}$, where each pixel in the final image is computed as $\alpha x_{\text{background}} + (1-\alpha)x_{\text{letter}}$. See \figref{fig:ocr_sample} for a sample from each dataset.
This process was deliberately designed to ensure that information about the structure of the problem (\emph{i.e.}, which words exist in the data) is a strong signal, while the signal provided by each individual letter image can be adjusted. The training, validation, and test set sizes for each dataset are 10,000, 2,000, and 2,000, respectively. During training, we vary the training set size to be either 200, 1k, or 10k. To study the inference algorithm, we train four different GSPEN models on the dataset containing 1000 training points and using $\alpha=0.5$. Each model uses either Frank-Wolfe or Mirror Descent and either includes or excludes the entropy term. To maintain tractability of inference, we fix a maximum iteration count for each model. We additionally investigate the effect of this maximum count on final performance. Additionally, we run this experiment by initializing from two different Struct models, one trained using entropy during inference and one trained without entropy. The results for this set of experiments are shown in \figref{fig:ocr_inf}. Most configurations perform similarly across the number of iterations, indicating these choices are sufficient for convergence. When initializing from the models trained without entropy, we observe that both Frank-Wolfe without entropy and Mirror Descent with entropy performed comparably. When initializing from a model trained with entropy, the use of mirror descent with entropy led to much better results. The results for all values of $\alpha$ using a train dataset size of 1000 are presented in \figref{fig:ocr_interp}, and results for all train dataset sizes with $\alpha=0.5$ are presented in \figref{fig:ocr_data}. We observe that, in all cases, GSPEN outperforms all baselines. The degree to which GSPEN outperforms other models depends most on the amount of train data: with a sufficiently large amount of data, SPEN and GSPEN perform comparably. However, when less data is provided, GSPEN performance does not initially drop as sharply as that of SPEN. It is also worth noting that GSPEN outperformed NLStruct by a large margin. The NLStruct model is less stable due to its saddle-point formulation; therefore, it is much harder to obtain good performance with this model. \begin{figure}[t] \vspace{-0.3cm} \centering {\small \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{fig/ocr_infeps1_results.pdf} \vspace{-0.6cm} \subcaption{} \label{fig:ocr_inf} \end{subfigure}~ \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{fig/ocr_datainterp_results.pdf} \vspace{-0.6cm} \subcaption{} \label{fig:ocr_interp} \end{subfigure}~ \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{fig/ocr_datasize_results.pdf} \vspace{-0.6cm} \subcaption{} \label{fig:ocr_data} \end{subfigure} } \vspace{-0.3cm} \caption{Experimental results on OCR data. The dashed lines in (a) represent models trained from Struct without entropy, while solid lines represent models trained from Struct with entropy. } \vspace{-0.6cm} \end{figure} \vspace{-0.2cm} \subsection{Image Tagging} \vspace{-0.2cm} Next, we evaluate on the MIRFLICKR25k dataset \citep{huiskes2008mir}, which consists of 25,000 images taken from Flickr. Each image is assigned a subset of 24 possible tags. The train/val/test sets for these experiments consist of 10,000/5,000/10,000 images, respectively. We compare to NLStruct and SPEN.
We initialize the structured portion of our GSPEN model using the pre-trained DeepStruct model described by \citet{graber2018nlstruct}, which consists of unary potentials produced from an AlexNet architecture \citep{KrizhevskyNIPS2012} and linear pairwise potentials of the form $f_{i,j}(y_i, y_j; W) = W_{i,j,y_i,y_j}$, \emph{i.e.}, containing one weight per pair in the graph per assignment of values to that pair. A fully-connected pairwise graph was used. The $T$ function for our GSPEN model consists of a 2-layer MLP with 130 hidden units. It takes as input a concatenation of the unary potentials generated by the AlexNet model and the current prediction. Additionally, we train a SPEN model with the same number of layers and hidden units. We used Frank-Wolfe without entropy for both SPEN and GSPEN inference. \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-0.6cm} \includegraphics[width=0.5\textwidth]{fig/tagging_comparison_plot.pdf} \vspace{-0.7cm} \caption{Results for image tagging.} \label{fig:tagging_results} \vspace{-0.5cm} \end{wrapfigure} The results are shown in \figref{fig:tagging_results}. GSPEN obtains similar test performance to the NLStruct model, and both outperform SPEN. However, the NLStruct model was run for 100 iterations during inference without reaching `convergence' (change of objective smaller than threshold), while the GSPEN model required an average of 69 iterations to converge at training time and 52 iterations to converge at test time. Our approach has the advantage of requiring fewer variables to maintain during inference and requiring fewer iterations of inference to converge. The final test losses for SPEN, NLStruct and GSPEN are 2.158, 2.037, and 2.029, respectively. \vspace{-0.2cm} \subsection{Multilabel Classification} \vspace{-0.2cm} We use the Bibtex and Bookmarks multilabel datasets \citep{katakis2008multilabel}. They consist of binary-valued input feature vectors, each of which is assigned some subset of 159/208 possible labels for Bibtex/Bookmarks, respectively. We train unary and SPEN models with architectures identical to those of \citet{BelangerICML2016} and \citet{GygliICML2017} but add dropout layers. In addition, we further regularize the unary model by flipping each bit of the input vectors with probability 0.01 when sampling mini-batches. For Struct and GSPEN, we generate a graph by first finding the label variable that is active in most training data label vectors and adding edges connecting every other variable to this most active one. Pairwise potentials are generated by passing the input vector through a 2-layer MLP with 1k hidden units. The GSPEN model is trained by starting from the SPEN model, fixing its parameters, and training the pairwise potentials. The results are in Table~\ref{tab:mlc} alongside those taken from \citet{BelangerICML2016} and \citet{GygliICML2017}. We found the Unary models to perform similarly to or better than previous best results. Both SPEN and Struct are able to improve upon these Unary results. GSPEN outperforms all configurations, suggesting that the contributions of the SPEN component and the Struct component to the score function are complementary. \vspace{-0.2cm} \subsection{NER} \vspace{-0.2cm} We also assess suitability for Named Entity Recognition (NER) using the English portion of the CoNLL 2003 shared task \citep{tjong2003introduction}.
To demonstrate the applicability of GSPEN for this task, we transformed two separate models, specifically the ones presented by \citet{ma2016end} and \citet{akbik2018contextual}, into GSPENs by taking their respective score functions and adding a component that jointly scores an entire set of predictions. In each case, we first train six instances of the structured model using different random initializations and drop the model that performs the worst on validation data. We then train the GSPEN model, initializing the structured component from these pre-trained models. The final average performance is presented in Table~\ref{tab:ner}, and individual trial information can be found in Table~\ref{tab:ner_full} in the appendix. When comparing to the model described by \citet{ma2016end}, GSPEN improves the final test performance in four out of the five trials, and GSPEN has a higher overall average performance across both validation and test data. Compared to \citet{akbik2018contextual}, on average GSPEN's validation score was higher, but it performed slightly worse at test time. Overall, these results demonstrate that it is straightforward to augment a task-specific structured model with an additional prediction scoring function, which can lead to improved final task performance. \begin{table}[t] \vspace{-0.4cm} \centering \begin{tabular}{cc} \begin{minipage}[t]{0.49\textwidth} \begin{table}[H] \caption{Multilabel classification results for all models. All entries represent macro F1 scores. The top results are taken from the cited publications.} \label{tab:mlc} \vspace{-0.12cm} \setlength{\tabcolsep}{5pt} \begin{center} {\small \begin{tabular}{lcccc}\toprule &\multicolumn{2}{c}{Bibtex} & \multicolumn{2}{c}{Bookmarks}\\ & Validation & Test & Validation & Test\\ \midrule SPEN \citep{BelangerICML2016} & -- & 42.2 & -- & 34.4\\ DVN \citep{GygliICML2017} & -- & 44.7 & -- & 37.1 \\ \midrule Unary & 43.3 & 44.1 & 38.4 & 37.4 \\ Struct & 45.8 & 46.1 & 39.7 & 38.9 \\ SPEN & 46.6 & 46.5 & 40.2 & 39.2\\ GSPEN & \textbf{47.5} & \textbf{48.6} & \textbf{41.2} & \textbf{40.7} \\\bottomrule \end{tabular} } \end{center} \end{table} \end{minipage} & \begin{minipage}[t]{0.46\textwidth} \begin{table}[H] \caption{Named Entity Recognition results for all models. All entries represent F1 scores averaged over five trials.} \label{tab:ner} \vspace{-0.1cm} \begin{center} {\small \begin{tabular}{rcc} \toprule & Avg. Val. & Avg. Test\\ \midrule Struct \citep{ma2016end} & 94.88 $\pm$ 0.18 & 91.37 $\pm$ 0.04 \\ + GSPEN & \textbf{94.97} $\pm$ 0.16 & \textbf{91.51} $\pm$ 0.17 \\ \midrule Struct \citep{akbik2018contextual} & 95.88 $\pm$ 0.10 & \textbf{92.79} $\pm$ 0.08\\ + GSPEN & \textbf{95.96} $\pm$ 0.08 & 92.69 $\pm$ 0.17\\ \bottomrule \end{tabular} } \vspace{-0.7cm} \end{center} \end{table} \end{minipage} \end{tabular} \vspace{-0.6cm} \end{table} \section{Introduction} \vspace{-0.2cm} Many machine learning tasks involve joint prediction of a set of variables. For instance, semantic image segmentation infers the class label for every pixel in an image. To address joint prediction, it is common to use deep nets which model probability distributions independently over the variables (\emph{e.g.}, the pixels). The downside: correlations between different variables are not modeled explicitly.
A number of techniques, such as Structured SVMs~\citep{Tsochantaridis2005}, Max-Margin Markov Nets~\citep{Taskar2003} and Deep Structured Models~\citep{ChenSchwingICML2015,SchwingARXIV2015}, directly model relations between output variables. However, modeling the correlations between a large number of variables is computationally expensive and therefore generally impractical. As an attempt to address some of the shortcomings of classical high-order structured prediction techniques, Structured Prediction Energy Networks (SPENs) were introduced \citep{BelangerICML2016,belanger2017end}. SPENs assign a score to an entire prediction, which allows them to harness global structure. Additionally, because these models do not represent structure explicitly, complex relations between variables can be learned while maintaining tractability of inference. However, SPENs have their own set of downsides: \citet{BelangerICML2016} mention, and we can confirm, that it is easy to overfit SPENs to the training data. Additionally, the inference techniques developed for SPENs do not enforce structural constraints among output variables, hence they cannot support structured scores and discrete losses. An attempt to combine locally structured scores with joint prediction was introduced very recently by \citet{graber2018nlstruct}. However,~\citet{graber2018nlstruct} require the score function to take a specific, restricted form, and inference is formulated as a difficult-to-solve saddle-point optimization problem. To address these concerns, we develop a new model which we refer to as `Graph Structured Prediction Energy Network' (GSPEN). Specifically, GSPENs combine the capabilities of classical structured prediction models and SPENs: they can explicitly model local structure that is known or assumed, while learning unknown or more global structure implicitly. Additionally, the proposed GSPEN formulation generalizes the approach by~\citet{graber2018nlstruct}. Concretely, inference in GSPENs is a maximization of a generally non-concave function subject to structural constraints, for which we develop two inference algorithms. We show the utility of GSPENs by comparing to related techniques on several tasks: optical character recognition, image tagging, multilabel classification, and named entity recognition. In general, we show that GSPENs are able to outperform other models. Our implementation is available at \url{https://github.com/cgraber/GSPEN}. \subsubsection*{Acknowledgments} This work is supported in part by NSF under Grant No.\ 1718221 and MRI \#1725729, UIUC, Samsung, 3M, Cisco Systems Inc.\ (Gift Award CG 1377144) and Adobe. We thank NVIDIA for providing GPUs used for this work and Cisco for access to the Arcetri cluster. \section{Graph Structured Prediction Energy Nets} \label{sec:ours} \vspace{-0.2cm} Graph Structured Prediction Energy Networks (GSPENs) generalize all aforementioned models. They combine both a classical structured component and a SPEN-like component to score an entire set of predictions jointly. Additionally, the GSPEN score function is more general than that of NLStruct, and includes it as a special case. After describing the formulation of both the score function and the inference problem (\secref{sec:model}), we discuss two approaches to solving inference (\secref{sec:fw} and \secref{sec:md}) that we found to work well in practice.
Unlike the methods described previously for NLStruct, these approaches do not require solving a saddle-point optimization problem. \vspace{-0.2cm} \subsection{GSPEN Model} \label{sec:model} \vspace{-0.2cm} The GSPEN score function is written as follows: \begin{equation*} F\left(x, p_\mathcal{R}; w\right) \coloneqq T\left(\bar{f}(x; w), p_\mathcal{R}; w \right), \end{equation*} where vector $p_{\mathcal{R}} \coloneqq (p_r(y_r))|_{r \in \mathcal{R}, y_r \in \mathcal{Y}_r}$ contains one marginal per region per assignment of values to that region. This formulation allows for the use of a structured score function while also allowing $T$ to score an entire prediction jointly. Hence, it is a combination of classical structured models and SPENs. For instance, we can construct a GSPEN model by summing a classical structured model and a multilayer perceptron that scores an entire label vector, in which case the score function takes the form $F(x,p_\mathcal{R};w) \coloneqq \sum_{r \in \mathcal{R}} \sum_{y_r \in \mathcal{Y}_r} p_r(y_r)f_r(x, y_r; w) + \text{MLP}\left(p_\mathcal{R}; w\right)$. Of course, this is one of many possible score functions that are supported by this formulation. Notably, we recover the NLStruct score function if we use $T(\bar{f}(x; w), p_\mathcal{R}; w) = T'(\bar{f}(x; w) \circ p_\mathcal{R}; w)$ and let $\bar{f}(x; w) = f_\mathcal{R}(x; w)$. Given this model, the inference problem is \begin{equation} \label{eq:hspen_inf} \max_{p_\mathcal{R} \in \mathcal{M}} T\left(\bar{f}(x; w), p_\mathcal{R}; w\right). \end{equation} As for classical structured models, the probabilities are constrained to lie in the marginal polytope. In addition, we also consider a fractional entropy term over the predictions, leading to \begin{equation} \max_{p_\mathcal{R} \in \mathcal{M}} T\left(\bar{f}(x; w), p_\mathcal{R}; w\right) + H_\mathcal{R}(p_\mathcal{R}). \label{eq:hspen_inf_h} \end{equation} In the classical setting, adding an entropy term relates to Fenchel duality~\citep{blondel2018learning}. However, the GSPEN inference objective does not take the correct form to use this reasoning. We instead view this entropy as a regularizer for the predictions: it pushes predictions towards a uniform distribution, smoothing the inference objective, which we empirically observed to improve convergence. The results discussed below indicate that adding entropy leads to better-performing models. Also note that it is possible to add a similar entropy term to the SPEN inference objective, as is mentioned by~\citet{BelangerICML2016} and~\citet{belanger2017end}. For inference in GSPEN, SPEN procedures cannot be used since they do not maintain the additional constraints imposed by the graphical model, \emph{i.e.}, the marginal polytope $\mathcal{M}$. We also cannot use the inference procedure developed for NLStruct, as the GSPEN score function does not take the same form. Therefore, in the following, we describe two inference algorithms that optimize the program while maintaining structural constraints.
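For concreteness, the following Python sketch evaluates a score of the summed form above (a minimal toy, with random stand-ins for the learned potentials $f_r$ and MLP weights, and a flat vector encoding of $p_\mathcal{R}$):

\begin{verbatim}
# Toy GSPEN score: F(x, p_R; w) = <f_R(x; w), p_R> + MLP(p_R; w).
# All weights are random stand-ins for learned quantities; p_R is a
# flat vector with one entry per region per assignment.
import numpy as np

rng = np.random.default_rng(1)
n_beliefs = 40                        # e.g., 10 unary regions x 4 labels
f_R = rng.normal(size=n_beliefs)      # vectorized structured potentials
W1 = rng.normal(size=(32, n_beliefs)) * 0.1   # 2-layer MLP scoring p_R
w2 = rng.normal(size=32) * 0.1

def softplus(z):                      # numerically stable softplus
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def gspen_score(p_R):
    return f_R @ p_R + w2 @ softplus(W1 @ p_R)

print(gspen_score(np.full(n_beliefs, 0.25)))  # uniform-beliefs example
\end{verbatim}

In practice the gradient of this score with respect to $p_\mathcal{R}$ is obtained by backpropagation; it is the quantity $g$ consumed by the two inference algorithms below.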
\vspace{-0.2cm} \subsection{Frank-Wolfe Inference} \label{sec:fw} \vspace{-0.2cm} \begin{figure}[t] \vspace{-0.4cm} \begin{minipage}[t]{0.49\linewidth} \begin{algorithm}[H] \caption{Frank-Wolfe Inference for GSPEN\\} \label{alg:fw} \begin{algorithmic}[1] \STATE {\bfseries Input:} Initial set of predictions $p_\mathcal{R}$; Input $x$; \\ \quad\quad\quad Factor graph $\mathcal{R}$\\ \FOR{$t = 1 \dots T$} \STATE $g \Leftarrow \nabla_{p_\mathcal{R}}F(x, p_\mathcal{R};w)$ \STATE $\widehat{p}_\mathcal{R} \Leftarrow \argmax\limits_{\hat{p}_\mathcal{R} \in \mathcal{M}} \sum\limits_{r \in \mathcal{R},y_r \in \mathcal{Y}_r} \hat{p}_r(y_r) g_r(y_r)$ \STATE $p_\mathcal{R} \Leftarrow p_\mathcal{R} + \frac{1}{t}\left(\widehat{p}_\mathcal{R} - p_\mathcal{R}\right)$\vspace{0.015cm} \ENDFOR \STATE {\bfseries Return:} $p_\mathcal{R}$ \end{algorithmic} \end{algorithm} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \begin{algorithm}[H] \caption{Structured Entropic Mirror Descent Inference} \label{alg:md} \begin{algorithmic}[1] \STATE {\bfseries Input:} Initial set of predictions $p_\mathcal{R}$; Input $x$; \\ \quad \quad\quad Factor graph $\mathcal{R}$\\ \FOR{$t = 1 \dots T$} \STATE $g \Leftarrow \nabla_{p_\mathcal{R}}F\left(x, p_\mathcal{R};w\right)$ \STATE $a \Leftarrow 1 + \ln p_\mathcal{R} + g/\sqrt{t}$ \STATE $p_\mathcal{R} \Leftarrow \argmax\limits_{\hat{p}_\mathcal{R} \in \mathcal{M}} \sum\limits_{r \in \mathcal{R},y_r \in \mathcal{Y}_r} \hat{p}_r(y_r) a_r(y_r) + H_\mathcal{R}(\hat{p}_\mathcal{R})$ \ENDFOR \STATE {\bfseries Return:} $p_\mathcal{R}$ \end{algorithmic} \end{algorithm} \end{minipage} \vspace{-0.5cm} \label{fig:no1} \end{figure} The Frank-Wolfe algorithm \citep{Frank1956} is suitable because the objectives in Eqs.~(\ref{eq:hspen_inf},~\ref{eq:hspen_inf_h}) are non-linear while the constraints are linear. Specifically, following \citet{Frank1956}, we compute a linear approximation of the objective at the current iterate, maximize this linear approximation subject to the constraints of the original problem, and take a step towards this maximum. In Algorithm~\ref{alg:fw} we detail the steps to optimize \equref{eq:hspen_inf}. In every iteration we first calculate the gradient of the score function $F$ with respect to the marginals/beliefs using the current prediction as input. We denote this gradient using $g = \nabla_{p_\mathcal{R}} T\left(\bar{f}(x; w), p_\mathcal{R}; w \right)$. The gradient of $T$ depends on the specific function used and is computed via backpropagation. If entropy is part of the objective, an additional term of $- \ln\left(p_\mathcal{R}\right)-1$ is added to this gradient. Next, we find the maximizing beliefs, which is equivalent to inference for classical structured prediction: the constraint space is identical and the objective is a linear function of the marginals/beliefs. Hence, we solve this inner optimization using one of a number of techniques referenced in \secref{sec:related}. Convergence guarantees for Frank-Wolfe have been proven when the overall objective is concave, continuously differentiable, and has bounded curvature~\citep{jaggi2013revisiting}, which is the case when $T$ has these properties with respect to the marginals. This is true even when the inner optimization is only solved approximately, which is often the case due to standard approximations used for structured inference.
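A compact Python sketch of Algorithm~\ref{alg:fw} is given below. For readability the toy uses only unary regions, so the inner linear maximization over $\mathcal{M}$ reduces to an independent per-variable vertex selection; with pairwise regions this step would instead call a structured (MAP) solver such as the message-passing routine used for Struct. The gradient callback and problem sizes are illustrative stand-ins.

\begin{verbatim}
# Sketch of Algorithm 1 (Frank-Wolfe inference) with unary regions only,
# so the inner argmax over M is a per-variable vertex selection.
import numpy as np

def frank_wolfe_infer(grad_F, K, L, n_iters=50):
    p = np.full((K, L), 1.0 / L)       # uniform initial beliefs
    for t in range(1, n_iters + 1):
        g = grad_F(p)                  # gradient of F at current beliefs
        s = np.zeros_like(p)           # conditional-gradient vertex
        s[np.arange(K), g.argmax(axis=1)] = 1.0
        p += (s - p) / t               # step size 1/t as in Algorithm 1
    return p

# toy usage: a linear score whose gradient is a fixed potential table
rng = np.random.default_rng(2)
unary = rng.normal(size=(4, 3))
beliefs = frank_wolfe_infer(lambda p: unary, K=4, L=3)
print(beliefs.argmax(axis=1))          # decoded labels
\end{verbatim}

Swapping in the entropic update of Algorithm~\ref{alg:md} would replace the vertex-selection step by the inner problem with the added entropy term, which is again a classical structured inference call.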
When $T$ is non-concave, convergence can still be guaranteed, but only to a local optimum~\citep{lacoste2016convergence}. Note that entropy has unbounded curvature; therefore, its inclusion in the objective precludes convergence guarantees. Other variants of the Frank-Wolfe algorithm exist which improve convergence in certain cases~\citep{krishnan2015barrier,lacoste2015global}. We defer a study of these properties to future work. \vspace{-0.2cm} \subsection{Structured Entropic Mirror Descent} \label{sec:md} \vspace{-0.2cm} Mirror descent, another constrained optimization algorithm, is analogous to projected subgradient descent, albeit using a more general distance beyond the Euclidean one~\citep{beck2003mirror}. This algorithm has been used in the past to solve inference for SPENs, with entropy used as the link function $\psi$ and normalization performed over each coordinate independently~\citep{BelangerICML2016}. We similarly use entropy in our case. However, the additional constraints in the form of the polytope $\mathcal{M}$ require special care. \begin{figure*}[t] \vspace{-0.2cm} \centering \fbox{ \begin{minipage}{0.973\textwidth} \begin{equation*} \min_w \sum_{\left(x^{(i)}, p_\mathcal{R}^{(i)}\right)} \left[ \max_{\hat{p}_\mathcal{R} \in \mathcal{M}}\left\{ T\left(\bar{f}\left(x^{(i)}; w\right), \hat{p}_\mathcal{R}; w\right) + L\left(\hat{p}_\mathcal{R}, p_\mathcal{R}^{(i)}\right) \right\} - T\left(\bar{f} \left(x^{(i)}; w\right), p_\mathcal{R}^{(i)}; w\right) \right]_+ \label{eq:learn} \end{equation*} \end{minipage} } \vspace{-0.2cm} \caption{The GSPEN learning formulation, consisting of a Structured SVM (SSVM) objective with loss-augmented inference. Note that each $p_\mathcal{R}^{(i)}$ is a one-hot representation of the label $y^{(i)}$. } \label{fig:learn} \vspace{-0.6cm} \end{figure*} We summarize the structured entropic mirror descent inference for the proposed model in Algorithm~\ref{alg:md}. Each iteration of mirror descent updates the current prediction $p_\mathcal{R}$ and dual vector $a$ in two steps: (1) $a$ is updated based on the current prediction $p_\mathcal{R}$. Using $\psi(p_\mathcal{R}) = - H_\mathcal{R}(p_\mathcal{R})$ as the link function, this update step takes the form $a = 1 + \ln p_\mathcal{R} + \frac{1}{\sqrt{t}}\left(\nabla_{p_\mathcal{R}} T\left(\bar{f}(x; w),p_\mathcal{R}; w \right) \right)$. As mentioned previously, the gradient of $T$ can be computed using backpropagation; (2) $p_\mathcal{R}$ is updated by computing the maximizing argument of the Fenchel conjugate $\psi^*$ of the link function, evaluated at $a$. More specifically, $p_\mathcal{R}$ is updated via \begin{equation} p_\mathcal{R} = \argmax_{\hat{p}_\mathcal{R} \in \mathcal{M}} \sum_{r \in \mathcal{R}} \sum_{y_r \in \mathcal{Y}_r} \hat{p}_r(y_r) a_r(y_r) + H_\mathcal{R}\left(\hat{p}_\mathcal{R}\right), \end{equation} which is identical to classical structured prediction. When the inference objective is concave and Lipschitz continuous (\emph{i.e.}, when $T$ has these properties), this algorithm has also been proven to converge~\citep{beck2003mirror}. Unfortunately, we are not aware of any convergence results if the inner optimization problem is solved approximately and if the objective is not concave. In practice, though, we did not observe any convergence issues during experimentation. \vspace{-0.2cm} \subsection{Learning GSPEN Models} \label{sec:train} \vspace{-0.2cm} GSPENs assign a score to an input $x$ and a prediction $p$.
An SSVM learning objective is applicable, which maximizes the margin between the scores assigned to the correct prediction and the inferred result. The full SSVM learning objective with added loss-augmented inference is summarized in \figref{fig:learn}. The learning procedure consists of computing the highest-scoring prediction using one of the inference procedures described in \secref{sec:fw} and \secref{sec:md} for each example in a mini-batch and then updating the weights of the model towards making better predictions. \section{Related Work} \label{sec:related} \vspace{-0.2cm} A variety of techniques have been developed to model structure among output variables, originating from seminal works~\citep{Lafferty2001,Taskar2003,Tsochantaridis2005}. These works focus on extending linear classification, both probabilistic and non-probabilistic, to model the correlation among output variables. Generally speaking, scores representing both predictions for individual output variables and for combinations of output variables are used. A plethora of techniques have been developed to solve inference for problems of this form, \emph{e.g.},~\citep{Schlesinger1976, Werner2007, Boykov1998, Boykov2001, Wainwright2003b, Globerson2006, Welling2004, Sontag2012, Batra2011, Sontag2008, Sontag2007, Wainwright2008, Sontag2009, Murphy1999, Meshi2009, Globerson2007, Wainwright2005b, Wainwright2005, Wainwright2003, Heskes2006, Hazan2010, Hazan2008, Yanover2006, Meltzer2009, Weiss2007, Heskes2002, Heskes2003, Yedidia2001, Ihler2004, Wiegerinck2003, SchwingNIPS2012, SchwingICML2014, SchwingCVPR2011a, Komodakis2010, MeshiNIPS2015, MeshiNIPS2017}. As exact inference for general structures is NP-hard \citep{Shimony1994}, early work focused on tractable exact inference. However, due to interest in modeling problems with intractable structure, a variety of approaches have been studied for learning with approximate inference \citep{Finley2008, Kulesza2008, Pletscher2010, Hazan2010b, Meshi2010, komodakis2011efficient, Schwing2011a, MeshiICML2016}. More recent work has also investigated the role of different types of prediction regularization, with \citet{NiculaeMBC18} replacing the often-used entropy regularization with an L2 norm and \citet{blondel2018learning} casting both as special cases of a Fenchel-Young loss framework. To model both non-linearity and structure, deep learning and structured prediction techniques were combined. Initially, local, per-variable score functions were learned with deep nets and correlations among output variables were learned in a separate second stage \citep{AlvarezECCV2012, ChenICLR2015}. Later work simplified this process, learning both local score functions and variable correlations jointly \citep{TompsonNIPS2014, zheng2015conditional, ChenSchwingICML2015, SchwingARXIV2015, LinNIPS2015}. Structured Prediction Energy Networks (SPENs), introduced by \citet{BelangerICML2016}, take a different approach to modeling structure. Instead of explicitly specifying a structure a-priori and enumerating scores for every assignment of labels to regions, SPENs learn a function which assigns a score to an input and a label. Inference uses gradient-based optimization to maximize the score w.r.t.\ the label. \citet{belanger2017end} extend this technique by unrolling inference in a manner inspired by \citet{domke2012generic}. Both approaches involve iterative inference procedures, which are slower than feed-forward prediction of deep nets.
To improve inference speed, \citet{tu2018learning} learn a neural net to produce the same output as the gradient-based methods. Deep Value Networks \citep{GygliICML2017} follow the same approach as \citet{BelangerICML2016} but use a different objective that encourages the score to equal the task loss of the prediction. None of these approaches permits the inclusion of known structure; the proposed approach enables this. Our approach is most similar to our earlier work \cite{graber2018nlstruct}, which combines explicitly-specified structured potentials with a SPEN-like score function. The score function of our earlier work is a special case of the one presented here. In fact, our earlier formulation required a classical structured prediction model as an intermediate layer of the score function, an assumption we no longer make. Additionally, in our earlier work we had to solve inference via a computationally challenging saddle-point objective. Another related approach is described by \citet{vilnis2015bethe}, whose score function is the sum of a classical structured score function and a (potentially non-convex) function of the marginal probability vector $p_\mathcal{R}$. This is also a special case of the score function presented here. Additionally, the inference algorithm they develop is based on regularized dual averaging \citep{xiao2010dual} and takes advantage of the structure of their specific score function, \emph{i.e.}, it is not directly applicable to our setting.
\section{Introduction} \label{sec: introduction} Viscoplastic (VP) fluids are a class of materials whose distinctive property is that they flow as fluids if subjected to large enough stresses but behave as solids if the applied stress is below a critical value, termed the yield stress. Although solids can also undergo plastic deformation, viscoplastic fluids are characterised by reversibility of the structural changes caused during plastic flow once the flow ceases \cite{Coussot_2017, Coussot_2018}. The class of VP fluids includes a variety of materials such as foams, emulsions, colloids and physical gels, with possibly different microscopic mechanisms being responsible for the emergence of a yield stress in each case \cite{Bonn_2017}. Viscoplastic flows are of major relevance in many industries (oil, construction, cosmetics, foodstuffs, etc.) and many natural flows can be classified as such (mud and lava flows, landslides, avalanches, etc.). Viscoplastic fluids were first studied in depth by Eugene Bingham \cite{Coussot_2017} who proposed the renowned constitutive equation named after him to describe their behaviour \cite{Bingham_1922}. A short time later, Herschel and Bulkley \cite{Herschel_1926} extended the Bingham model to describe also the shear-thinning (or shear-thickening) post-yield behaviour that most of these materials exhibit, by assuming a power-law dependence of the viscosity on the shear rate. These models were originally proposed in scalar form, but it was not long until a full tensorial form was proposed, employing the von Mises yield criterion \cite{Hohenemser_1932}. The empirical Herschel-Bulkley (HB) constitutive equation has been found to represent well the behaviour of yield stress fluids under steady shear flow, and is arguably the most popular model for VP fluids. It is commonly given in the following form: \begin{equation} \label{eq: HB stress} \left\{ \begin{array}{ll} \tau \leq \tau_y \quad & \Rightarrow \quad \tf{\dot{\gamma}} \;=\; 0 \\ \tau > \tau_y \quad & \Rightarrow \quad \tf{\tau} \;=\; \left( \dfrac{\tau_y}{\dot{\gamma}} \;+\; k \dot{\gamma}^{n-1} \right) \tf{\dot{\gamma}} \end{array} \right. \end{equation} where $\tf{\tau}$ is the deviatoric stress tensor; $\tf{\dot{\gamma}} = \nabla \vf{u} + (\nabla \vf{u})^{\mathrm{T}}$ is the rate-of-strain tensor, $\vf{u}$ being the fluid velocity vector and the superscript $\mathrm{T}$ denoting the transpose; and $\tau \equiv (\frac{1}{2} \tf{\tau}:\tf{\tau})^{\frac{1}{2}}$ and $\dot{\gamma} \equiv (\frac{1}{2} \tf{\dot{\gamma}}:\tf{\dot{\gamma}})^{\frac{1}{2}}$ are their magnitudes. The HB model includes three parameters, the yield stress $\tau_y$, the consistency index $k$, and the exponent $n$; the latter is usually in the range $0.2$--$0.8$ \cite{Bonn_2017, Coussot_2018}, which represents a shear-thinning behaviour. Equation \eqref{eq: HB stress} predicts two possible phases of the material, either rigid solid ($\tau < \tau_y$) or fluid ($\tau > \tau_y$), separated by the yield surface ($\tau = \tau_y$).
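In simple shear the yielded branch reduces to the familiar scalar flow curve $\tau = \tau_y + k\dot{\gamma}^n$. The short sketch below evaluates it for hypothetical, purely illustrative parameter values (not taken from any measurement), showing the unbounded apparent viscosity $\tau/\dot{\gamma}$ as $\dot{\gamma} \rightarrow 0$.

\begin{verbatim}
# Scalar HB flow curve in simple shear (yielded branch).
# Parameter values are hypothetical, for illustration only.
tau_y, k, n = 30.0, 10.0, 0.4     # Pa, Pa s^n, dimensionless

def hb_stress(gamma_dot):
    return tau_y + k * gamma_dot ** n

for gd in (0.001, 0.01, 0.1, 1.0, 10.0):
    print(gd, hb_stress(gd), hb_stress(gd) / gd)  # apparent viscosity
\end{verbatim}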
In order to express the von Mises yield criterion, $\tf{\tau}$ in Eq.\ \eqref{eq: HB stress} is defined to be the \textit{deviatoric} component of the total stress tensor $\tf{\sigma}$; such a decomposition of $\tf{\sigma}$ into deviatoric (traceless), $\tf{\sigma}{}_d$, and isotropic, $\tf{\sigma}{}_i$, components is useful for solids: \begin{equation} \label{eq: total stress into isotropic and deviatoric} \tf{\sigma} \;=\; \underbrace{ \tf{\sigma} \;-\; \frac{1}{3} \, \mathrm{tr}(\tf{\sigma}) \, \tf{I}}_{\tf{\sigma}{}_d} \;+\; \underbrace{\frac{1}{3} \, \mathrm{tr}(\tf{\sigma}) \, \tf{I}}_{\tf{\sigma}{}_i} \end{equation} where $\mathrm{tr}(\tf{\sigma}) \equiv \sum_i \sigma_{ii}$ is the trace of $\tf{\sigma}$. Therefore, in the common form \eqref{eq: HB stress} of the HB equation, $\tf{\tau}$ identifies with $\tf{\sigma}{}_d$ of Eq.\ \eqref{eq: total stress into isotropic and deviatoric}. However, from a constitutive equation for a fluid one expects a decomposition of $\tf{\sigma}$ into the \textit{extra-stress} tensor (for which the symbol $\tf{\tau}$ is usually reserved) which expresses the forces that arise due to deformation of the fluid, and an isotropic pressure component which is responsible for enforcing continuity in incompressible fluids: \begin{equation} \label{eq: total stress into tau and p} \tf{\sigma} \;=\; \tf{\tau} \;-\; p\tf{I} \end{equation} where $\tf{I}$ is the identity tensor. Equation \eqref{eq: HB stress} does not suffice to define the HB extra-stress tensor because it allows the possibility that this tensor has an isotropic part, for which no information is given. This ambiguity is removed by the tacit assumption that the HB extra-stress tensor is traceless, so that decompositions \eqref{eq: total stress into isotropic and deviatoric} and \eqref{eq: total stress into tau and p} become equivalent and $\tf{\tau}$ of Eq.\ \eqref{eq: HB stress} also happens to equal the extra-stress tensor (hence the symbol $\tf{\tau}$ instead of $\tf{\sigma}{}_d$ in \eqref{eq: HB stress}). This makes the HB fluid a generalised Newtonian one, for which the extra and deviatoric stresses are identical since the former is traceless due to the incompressible continuity equation. When elasticity is introduced into the HB equation in Sec.\ \ref{sec: equations}, the extra-stress $\tf{\tau}$ will no longer be traceless and thus can no longer identify with $\tf{\sigma}{}_d$. In the solid region, the concepts of extra-stress and pressure are not meaningful, since the HB solid does not deform, and the continuity equation is already enforced by the rigidity condition without a need for pressure. The symbols $\tf{\tau}$ and $p$ are still used there, but it must be kept in mind that they simply refer to the deviatoric and isotropic components of $\tf{\sigma}$. In other words, in the HB solid the stress decomposition is written as in Eq.\ \eqref{eq: total stress into tau and p}, but what is really meant is the decomposition \eqref{eq: total stress into isotropic and deviatoric}.
With some manipulations, noticing that in the fluid branch the stress tensor and the rate-of-strain tensor are parallel and thus the unit tensors $\tf{\dot{\gamma}} / \dot{\gamma}$ and $\tf{\tau} / \tau$ are equal, the solid and fluid branches of Eq.\ \eqref{eq: HB stress} can be combined into a single expression for $\tf{\dot{\gamma}}$ \cite{Saramito_2016}: \begin{equation} \label{eq: HB rate-of-strain} \tf{\dot{\gamma}} \;=\; \left( \frac{\max(0, \tau-\tau_y)}{k} \right)^{\frac{1}{n}} \frac{1}{\tau} \; \tf{\tau} \end{equation}
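As a sanity check of this expression, the following sketch (the stress values and parameters are hypothetical) evaluates Eq.\ \eqref{eq: HB rate-of-strain} for a given deviatoric stress tensor; below the yield stress it returns the zero tensor, as required by the solid branch.

\begin{verbatim}
import numpy as np

def magnitude(t):
    # tensor magnitude (0.5 t:t)^(1/2), as defined in the text
    return np.sqrt(0.5 * np.tensordot(t, t))

def hb_rate_of_strain(tau, tau_y, k, n):
    tau_mag = magnitude(tau)
    if tau_mag <= tau_y:               # unyielded: rigid
        return np.zeros_like(tau)
    return ((tau_mag - tau_y) / k) ** (1.0 / n) * tau / tau_mag

# hypothetical deviatoric stress state, above the yield stress
tau = np.array([[ 0.0, 40.0, 0.0],
                [40.0,  0.0, 0.0],
                [ 0.0,  0.0, 0.0]])
print(hb_rate_of_strain(tau, tau_y=30.0, k=10.0, n=0.4))
\end{verbatim}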
Viscoplastic constitutive equations such as the HB have been studied extensively during the past decades. Their discontinuity at the yield surfaces and their inherent indeterminacy of the stress tensor within the unyielded regions require specialised numerical techniques for performing flow simulations \cite{Mitsoulis_2017, Saramito_2017, Dimakopoulos_2018}. Furthermore, there is the question of whether their assumptions of a completely rigid (inelastic) solid phase and a purely viscous fluid phase are physically realistic. Allowing for some deformation of the solid phase under stress seems more natural, and indeed experiments on Carbopol, a prototypical material often used in studies on viscoplasticity, have shown that prior to yielding it exhibits elastic deformation under stress \cite{Piau_2007, Dinkgreve_2017}. Elastic effects can be observed also in the fluid phase. For example, bubbles rising in Carbopol solutions usually acquire the shape of an inverted teardrop, with a cusp at their leeward side \cite{Dubash_2007, Lopez_2018}, but the classical Bingham and HB viscoplastic models fail to predict such behaviour \cite{Tsamopoulos_2008, Dimakopoulos_2013}; on the other hand, such shapes are observed also in viscoelastic fluids, and are correctly captured by viscoelastic constitutive equations \cite{Fraggedakis_2016_JFM}. Similarly, for the settling of spherical particles in Carbopol, classical VP models cannot predict phenomena such as the loss of fore-aft symmetry under creeping flow conditions and the formation of a negative wake behind the sphere, but these phenomena are predicted if elasticity is incorporated into the constitutive modelling \cite{Fraggedakis_2016_SM}. Therefore, recently the focus has been shifting towards constitutive equations that incorporate both plasticity and elasticity, usually called \textit{elastoviscoplastic} (EVP) constitutive equations. Actually, EVP constitutive modelling dates back to the beginning of the previous century -- a nice historical overview can be found in \cite{Saramito_2007}. Although several EVP models have been proposed (see e.g.\ the literature reviews in \cite{Saramito_2007, SouzaMendes_2012}), many of them appear only in scalar form. To be applicable in complex two- and three-dimensional flow simulations (2D/3D), a full tensorial form is required; some such models are compared in \cite{Fraggedakis_2016_JNNFM}. Complex simulations also require that these models be accompanied by appropriate numerical solvers characterised by accuracy, robustness and efficiency. Usually, FEM solvers are employed in EVP flow simulations (e.g.\ \cite{Cheddadi_2013, Frey_2015, Fraggedakis_2016_JNNFM}). An alternative, very popular discretisation / solution method in Computational Fluid Dynamics is the Finite Volume Method (FVM); it has been successfully applied to viscoplastic (e.g.\ \cite{Syrakos_2013, Syrakos_2016b}) and viscoelastic (e.g.\ \cite{Oliveira_1998, Favero_2010, Afonso_2012, Syrakos_2018}) flows individually, but not to EVP flows, to the best of our knowledge (a hybrid FE/FV method was used in \cite{Belblidia_2011}). In this paper an FVM for the simulation of EVP flows is described. The EVP constitutive equation chosen is that of Saramito \cite{Saramito_2009} which introduces elasticity into the classic HB model, to which it reduces in the limit of inelastic behaviour. This model shall be referred to as the Saramito-Herschel-Bulkley (SHB) model. We chose this model because of its simplicity, its potential as revealed in a number of recent studies on materials such as foams \cite{Cheddadi_2012} and Carbopol \cite{Lacaze_2015}, and because of the popularity of the classic HB model. Nevertheless, the general framework of the presented FVM should be applicable to a range of other EVP models, particularly those that can be regarded as modifications / extensions of viscoelastic constitutive equations. The presentation of the method in Sections \ref{sec: method: discretisation} and \ref{sec: method: solution} and its validation in Sec.\ \ref{sec: method: validation} are followed by application of the method to simulate EVP flow in a lid-driven cavity. The lid-driven cavity test case is arguably the most popular benchmark for new numerical methods for flow simulations. As such, it has been used also as a benchmark problem for viscoplastic \cite{Syrakos_2013, Syrakos_2014} and viscoelastic \cite{Sousa_2016} flows; there exist also the EVP flow studies of \cite{Martins_2013, Frey_2015}, but with a different EVP model that incorporates a kind of regularisation. In the present study the parameters of the SHB model are chosen so as to represent Carbopol, which is regarded as a simple VP fluid (more complex behaviours such as thixotropy \cite{Syrakos_2015} and kinematic hardening \cite{Dimitriou_2014} are not considered, but may be incorporated into the model in the future). The lid-driven cavity problem constitutes a convenient ``playground'' for testing the numerical method, examining the behaviour of the SHB model under conditions of complex 2D flow, comparing its predictions against those of the classic HB model, and providing benchmark results. The tests include varying the lid velocity to vary the flow character as quantified by dimensionless numbers, flow cessation (which occurs in finite time for VP flows), and varying the initial conditions to investigate the issue of multiplicity of solutions of the SHB model. \section{Governing equations} \label{sec: equations} The flow is governed by the continuity, momentum, and constitutive equations. The first two are: \begin{equation} \label{eq: continuity} \nabla \cdot \left( \rho \vf{u} \right) \;=\; 0 \end{equation} \begin{equation} \label{eq: momentum} \pd{(\rho\vf{u})}{t} \;+\; \nabla \cdot \left( \rho \vf{u} \vf{u} \right) \;=\; -\nabla p \;+\; \nabla\cdot\tf{\tau} \end{equation} where $t$ is time, and $\rho$ is the density of the material, assumed to be constant; the rest of the variables have been defined in Sec.\ \ref{sec: introduction}. The right-hand side of Eq.\ \eqref{eq: momentum} can be written collectively as $\nabla \cdot \tf{\sigma}$.
Closure of the system of governing equations requires that the extra stress tensor be related to the flow kinematics through a constitutive equation. In the present work we use the EVP equation of Saramito \cite{Saramito_2009}, which assumes that the total deformation of the material is equal to the sum of an elastic strain $\gamma_e$ (applicable in both the solid and fluid phases) and the provisional (if the yield stress is exceeded) viscous deformation $\gamma_v$ predicted by the HB model \eqref{eq: HB rate-of-strain}. This behaviour is depicted schematically in Fig.\ \ref{fig: model schematic} by a mechanical analogue. In terms of the rate-of-strain this is written as: \begin{equation} \label{eq: constitutive} \underbrace{ \vphantom{ \left( \frac{\max(0, \tau_d-\tau_y)}{k} \right)^{\frac{1}{n}} \frac{1}{\tau} \; \tf{\tau} } \frac{1}{G} \, \ucd{\tau} }_{\dot{\tf{\gamma}}{}_e} \;+\; \underbrace{ \left( \frac{\max(0, \tau_d-\tau_y)}{k} \right)^{\frac{1}{n}} \frac{1}{\tau_d} \; \tf{\tau} }_{\dot{\tf{\gamma}}{}_v} \;=\; \dot{\tf{\gamma}} \end{equation} where $G$ is the elastic modulus, the triangle denotes the upper-convected time derivative \begin{equation} \label{eq: upper convected derivative} \ucd{\tau} \;\equiv\; \pd{\tf{\tau}}{t} \;+\; \vf{u} \cdot \nabla \tf{\tau} \;-\; (\nabla \vf{u})^{\mathrm{T}} \!\cdot \tf{\tau} \;-\; \tf{\tau} \cdot \nabla \vf{u} \end{equation} and $\tau_d \equiv (\frac{1}{2} \tf{\tau_d}:\tf{\tau_d})^{\frac{1}{2}}$ is the magnitude of the deviatoric part of the EVP stress tensor, \begin{equation} \label{eq: deviatoric stress} \tf{\tau}{}_d \;\equiv\; \tf{\tau} \;-\; \frac{1}{3} \mathrm{tr}(\tf{\tau}) \tf{I} \end{equation} Note that because $\tf{\tau}$ and $\tf{\sigma}$ differ only by an isotropic quantity ($-p\tf{I}$, Eq.\ \eqref{eq: total stress into tau and p}) it holds that $\tf{\tau}{}_d = \tf{\sigma}{}_d$. An important difference with the classic HB model \eqref{eq: HB rate-of-strain} is that, due to the last two terms of the upper convected derivative \eqref{eq: upper convected derivative}, the trace of $\tf{\tau}$ is now not necessarily zero and thus, in general, $\tf{\tau}{}_d \neq \tf{\tau}$, necessitating explicit use of $\tau_d$ inside the ``max'' term of Eq.\ \eqref{eq: constitutive} in order to express the von Mises yield criterion. The full 3D formula \eqref{eq: deviatoric stress} must be used for the deviatoric stress even in 1D or 2D simulations: a 2D stress state where all $\tau_{ij}$ are zero except $\tau_{11} = \tau_{22} \neq 0$ is not isotropic ($\tau_{33} \neq \tau_{11}, \tau_{22}$) and $\tau_d$ is not zero. \begin{figure}[tb] \centering \includegraphics[scale=1.00]{figures/model_schematic.pdf} \caption{A mechanical analogue of the Saramito-Herschel-Bulkley (SHB) model \cite{Saramito_2009}.} \label{fig: model schematic} \end{figure} Other qualitative differences with the HB model exist. For example, the SHB model allows non-zero rate of strain in the unyielded regions, arising from elastic deformations of the solid phase. Conversely, it is theoretically possible that the rate of strain is zero in yielded material because the stresses there have not had enough time to relax. In particular, in the SHB model, contrary to the HB model, the stresses do not respond instantaneously to changes in the rate of strain; rather, the EVP stress tensor is also a function of all past states of $\dot{\tf{\gamma}}$, as in viscoelastic fluids. 
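The remark above, that the full 3D formula \eqref{eq: deviatoric stress} must be retained even in 2D, can be checked numerically; the following sketch (hypothetical stress values) confirms that a state with $\tau_{11} = \tau_{22} \neq 0$ and all other components zero has a traceless deviatoric part of nonzero magnitude.

\begin{verbatim}
import numpy as np

def deviatoric(tau):
    # tau_d = tau - (1/3) tr(tau) I, always formed in full 3D
    return tau - np.trace(tau) / 3.0 * np.eye(3)

def magnitude(t):
    return np.sqrt(0.5 * np.tensordot(t, t))

tau = np.diag([50.0, 50.0, 0.0])   # tau_11 = tau_22, tau_33 = 0
tau_d = deviatoric(tau)
print(np.trace(tau_d))             # 0 by construction
print(magnitude(tau_d))            # nonzero: the von Mises
                                   # criterion can be exceeded
\end{verbatim}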
Furthermore, as noted in \cite{Cheddadi_2008, Cheddadi_2012}, the combination of plastic and elastic terms in the SHB Eq.\ \eqref{eq: constitutive} can produce complex behaviour not observed in either purely viscoplastic or purely viscoelastic flows. One such feature is that the extra stress tensor and the velocity gradient can vary discontinuously across yield surfaces \cite{Cheddadi_2008}. Another feature is that flows are inherently transient, in the sense that there exists an infinitude of steady states which depend in a continuous manner on the initial conditions. Even if it is just the steady state that is sought, one cannot simply discard the time derivatives in Eqs.\ \eqref{eq: momentum} and \eqref{eq: constitutive}; the steady state must be obtained by a transient simulation with appropriate initial conditions. It is not only the steady state stresses that depend on the initial conditions, but also the steady state velocity. This issue was studied in \cite{Cheddadi_2012} in the context of cylindrical Couette flow. In fact, actual EVP materials do behave this way in experiments, with the steady state depending on the residual stresses present in the unyielded, stationary material at the start of the experiment \cite{Cheddadi_2012}. The residual stresses are stresses that are ``trapped'' inside unyielded material because there is no relaxation mechanism there. E.g.\ for stationary ($\vf{u} = 0$) unyielded material, Eq.\ \eqref{eq: constitutive} predicts $\partial\tf{\tau} / \partial t = 0$. In experiments, residual stresses do develop during the preparation of the material and are difficult to eliminate. This contrasts with the behaviour of classic VP models such as the HB, where there is a stress indeterminacy in the unyielded regions: there exist infinitely many $\tf{\sigma}$ fields within these regions that have the same divergence $\nabla \cdot \tf{\sigma}$ which yields $\dot{\tf{\gamma}} = 0$ when substituted in the momentum equation \eqref{eq: momentum}. Each of these fields is a valid solution. The HB steady-state does not depend on the initial conditions, and can be obtained directly, by dropping the time derivative from Eq.\ \eqref{eq: momentum}. The HB model makes no connection between this indeterminacy and the initial conditions, and in fact it even allows the stresses in the unyielded regions to vary discontinuously in time. No such indeterminacy is exhibited by the SHB model, the stresses in both the yielded and unyielded regions being precisely determined for given initial conditions. In the limit of very high elastic modulus, $G \rightarrow \infty$, the SHB material becomes so stiff that it behaves as an inelastic fluid or solid; the importance of the first term on the left-hand side of the SHB constitutive equation \eqref{eq: constitutive} diminishes and that equation tends to become identical to the classic HB equation \eqref{eq: HB rate-of-strain}. However, it must be kept in mind that the difference in character between the SHB and HB models is in some respects retained no matter how large the value of $G$ is. This concerns mostly the stress state in the unyielded regions, which for the SHB model remains uniquely determined by the initial conditions whereas that of the HB model is indeterminate. Also, as discussed in Appendix \ref{appendix: SHB tau in limit of large G}, in the HB model $\tf{\tau}$ was defined to equal $\tf{\sigma}{}_d$ whereas for the SHB model the identification of $\tf{\tau}$ with $\tf{\sigma}{}_d$ as $G \rightarrow \infty$ is not guaranteed.
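These two behaviours, frozen residual stress below the yield stress and relaxation towards it above, can be illustrated with a 0D scalar caricature of Eq.\ \eqref{eq: constitutive} at rest ($\dot{\tf{\gamma}} = 0$, convection and the upper-convected terms dropped); all parameter values below are hypothetical.

\begin{verbatim}
# 0D caricature of the SHB equation for stationary material:
#   (1/G) dtau/dt = -(max(0, tau - tau_y)/k)^(1/n)
# Hypothetical parameters, explicit Euler integration.
G, tau_y, k, n = 300.0, 30.0, 10.0, 0.4
dt, n_steps = 1.0e-3, 5000

def relax(tau0):
    tau = tau0
    for _ in range(n_steps):
        tau -= dt * G * (max(0.0, tau - tau_y) / k) ** (1.0 / n)
    return tau

print(relax(20.0))   # below yield: stress is trapped (residual)
print(relax(80.0))   # above yield: relaxes towards tau_y
\end{verbatim}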
Figure \ref{fig: model schematic} shows that the complete SHB model contains an additional viscous component labelled $\kappa$, not discussed thus far. This is a Newtonian component, of viscosity $\kappa$, which makes the unyielded phase behave as a Kelvin-Voigt solid. The entire extra stress tensor is then \begin{equation} \label{eq: entire extra stress} \tf{\tau_e} \;=\; \tf{\tau} \;+\; \kappa \dot{\tf{\gamma}} \end{equation} The SHB model then reduces to the Oldroyd-B viscoelastic model when $\tau_y = 0$ and $n = 1$. However, in the main results of Sec.\ \ref{sec: results} we will use $\kappa = 0$. Finally, concerning the boundary conditions, we consider solid wall boundaries. We will mostly employ the no-slip condition, but since (elasto)viscoplastic materials are usually slippery we will also employ a Navier slip condition, according to which the relative velocity between the fluid and the wall, in the tangential direction, is proportional to the tangential stress. For two-dimensional flows this is expressed as follows: Let $\vf{n}$ be the unit vector normal to the wall, and $\vf{s}$ be the unit vector tangential to the wall within the plane in which the equations are solved. Let also $\vf{u}$ and $\vf{u}_w$ be the fluid and wall velocities, respectively. Then, \begin{equation} \label{eq: navier slip} \left( \vf{u} - \vf{u}_w \right) \cdot \vf{s} \;=\; \beta \left( \vf{n} \cdot \tf{\tau} \right) \cdot \vf{s} \end{equation} where the parameter $\beta$ is called the slip coefficient. \subsection{Nondimensionalisation of the governing equations} In the present work we will solve the dimensional governing equations. Nevertheless, nondimensionalisation of the equations reveals a number of dimensionless parameters that characterise the flow. So, let $L$ and $U$ be characteristic length and velocity scales of the problem, with $T = L/U$ the associated time scale, and choose the following characteristic value of stress, $S$ (the Newtonian viscosity $\kappa$, Eq.\ \eqref{eq: entire extra stress}, is omitted): \begin{equation} \label{eq: characteristic stress} S \;\equiv\; \tau_y \;+\; k \left( \frac{U}{L} \right)^n \end{equation} Then, we define the dimensionless variables, denoted with a tilde (\textasciitilde), as $x_i = \tilde{x}_i L$, $t = \tilde{t} T$, $\vf{u} = \tilde{\vf{u}} U$, $p = \tilde{p} S$, and $\tf{\tau} = \tilde{\tf{\tau}} S$. Substituting these into the governing equations \eqref{eq: continuity}, \eqref{eq: momentum}, and \eqref{eq: constitutive}-\eqref{eq: upper convected derivative}, we obtain their dimensionless forms: \begin{equation} \label{eq: continuity ND} \tilde{\nabla} \cdot \tilde{\vf{u}} \;=\; 0 \end{equation} \begin{equation} \label{eq: momentum ND} \Rey \left( \pd{\tilde{\vf{u}}}{\tilde{t}} \;+\; \tilde{\nabla} \cdot (\tilde{\vf{u}} \tilde{\vf{u}}) \right) \;=\; - \tilde{\nabla} \tilde{p} \;+\; \tilde{\nabla} \cdot \tilde{\tf{\tau}} \end{equation} \begin{equation} \label{eq: constitutive ND} \Wei \left( \pd{\tilde{\tf{\tau}}}{\tilde{t}} \;+\; \tilde{\vf{u}} \cdot \tilde{\nabla} \tilde{\tf{\tau}} \;-\; ( \tilde{\nabla} \tilde{\vf{u}} )^{\mathrm{T}} \cdot \tilde{\tf{\tau}} \;-\; \tilde{\tf{\tau}} \cdot \tilde{\nabla} \tilde{\vf{u}} \right) \;+\; \left( \frac{\max \left( 0, \tilde{\tau}_d - \Bin \right)}{1 - \Bin} \right)^{\frac{1}{n}} \frac{1}{\tilde{\tau}_d} \; \tilde{\tf{\tau}} \;=\; \tilde{\dot{\tf{\gamma}}} \end{equation} where $\tilde{\nabla} = (1/L) \nabla$ and $\tilde{\dot{\tf{\gamma}}} = \dot{\tf{\gamma}} / (U/L)$.
The following dimensionless numbers appear: \begin{align} \label{eq: Reynolds number} \text{Reynolds number} \qquad & \Rey \;\equiv\; \frac{\rho U^2}{S} \\[0.17cm] \label{eq: Weissenberg number} \text{Weissenberg number} \qquad & \Wei \;\equiv\; \frac{S}{G} \\[0.17cm] \label{eq: Bingham number} \text{Bingham number} \qquad & \Bin \;\equiv\; \frac{\tau_y}{S} \end{align} A more standard choice for $S$ would account only for viscous effects, omitting plasticity: $S = k(U/L)^n$. This would lead to more standard definitions of the Reynolds and Bingham numbers: \begin{alignat}{2} \label{eq: Reynolds number standard} \Rey' \;&\equiv\; \frac{\rho U^{2-n} L^n}{k} \;&&=\; \frac{\Rey}{1 - \Bin} \\[0.17cm] \label{eq: Bingham number standard} \Bin' \;&\equiv\; \frac{\tau_y L^n}{k U^n} \;&&=\; \frac{\Bin}{1 - \Bin} \end{alignat} However, it was shown in \cite{Syrakos_2016a} (see also the discussion in \cite{Thompson_2016}) that in viscoplastic flows $\Rey$ suffices as a standalone indicator of inertial effects in the flow, whereas $\Rey'$ does not (the inertial character must be inferred from the values of $\Rey'$ and $\Bin'$ combined). Hence the definition \eqref{eq: characteristic stress} was preferred. Concerning the Bingham number, $\Bin \in [0,1]$ and $\Bin' \in [0,\infty)$ are simply related through Eq.\ \eqref{eq: Bingham number standard} and carry exactly the same information. Because $\Bin'$ is almost universally used in the literature and readers are familiar with it, we will use both $\Bin$ and $\Bin'$ in this study. The Weissenberg number $\Wei$ is an indicator of elastic effects in the flow. Unlike the $\Rey$ and $\Bin$ numbers, it is not a ratio between stresses of different nature or momentum fluxes. In fact, as seen in the mechanical analogue of Fig.\ \ref{fig: model schematic}, omitting the Newtonian component $\kappa$, the viscoplastic and elastic components are connected in series and thus carry the same load. Therefore, the representative stress $S$, although defined by considerations pertaining to the viscoplastic component of the material behaviour (Eq.\ \eqref{eq: characteristic stress}), is borne also by the elastic component of the material structure and thus $\Wei \equiv S/G$ is a typical elastic deformation. In the literature, the standard form for the Weissenberg number is $\Wei = \lambda U / L$, where $\lambda$ is a relaxation time, which becomes equivalent to our definition if we define \begin{equation} \label{eq: lamda and eta} \lambda \;\equiv\; \frac{\eta}{G} \qquad \text{where} \qquad \eta \equiv \frac{S}{U/L} \end{equation} The relaxation time $\lambda$ is proportional to the apparent viscosity $\eta$ (the more viscous the flow is, the slower the recovery from elastic deformations) and inversely proportional to the elastic modulus $G$ (the stiffer the material is, the faster it recovers). Note that $\lambda$ and $\eta$ as defined by \eqref{eq: lamda and eta} are not material constants but reflect also the influence of the flow, as they depend on $U$ and $L$ in addition to the material parameters $\tau_y$, $G$, $k$ and $n$ (Eq.\ \eqref{eq: characteristic stress}). The definition $\Wei = \lambda U / L$ is also interpreted as a typical elastic strain: it is the product of the strain rate $U/L$ and the time period $\lambda$ during which the material has not yet significantly relaxed.
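For concreteness, a small helper collecting these definitions is sketched below; the function name and the parameter values in the example call are our own, purely illustrative choices rather than measured material data.

\begin{verbatim}
def dimensionless_numbers(rho, tau_y, k, n, G, U, L):
    S = tau_y + k * (U / L) ** n      # characteristic stress
    Re = rho * U ** 2 / S             # Reynolds number
    Wi = S / G                        # Weissenberg number
    Bn = tau_y / S                    # Bingham number, in [0, 1]
    Re_std = Re / (1.0 - Bn)          # standard Re'
    Bn_std = Bn / (1.0 - Bn)          # standard Bn'
    return Re, Wi, Bn, Re_std, Bn_std

# hypothetical material parameters and flow scales
print(dimensionless_numbers(rho=1000.0, tau_y=30.0, k=10.0,
                            n=0.4, G=300.0, U=0.01, L=0.1))
\end{verbatim}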
Finally, an important dimensionless number is the yield strain, \begin{equation} \label{eq: yield strain} \gamma_y \;=\; \frac{\tau_y}{G} \;=\; \Bin \cdot \Wei \end{equation} which depends only on the material parameters and not on kinematic or geometric parameters of the flow such as $U$ and $L$. The fact that it is equal to the product of $\Bin$ and $\Wei$ means that these two numbers are not independent but their product is constant for a given material. Since $\Bin \in [0,1]$, it follows that $\Wei \in [\gamma_y, \infty)$. That $\Wei$ is bounded from below by $\gamma_y$ also follows directly from the definitions \eqref{eq: Weissenberg number} and \eqref{eq: characteristic stress}: the characteristic stress definition \eqref{eq: characteristic stress} assumes the existence of yielded regions, and in such regions the elastic strain, of which $\Wei$ is a measure, is higher than the yield strain $\gamma_y$. \section{Discretisation of the governing equations} \label{sec: method: discretisation} \subsection{Preliminary considerations} \label{ssec: method: discretisation: preliminary} In this section we propose a Finite Volume Method (FVM) for the discretisation of the governing equations. The method will be described for two-dimensional problems, but extension to three dimensions is straightforward. The first step is the tessellation of the domain $\Omega$ into a number of non-overlapping volumes, or \textit{cells}. Each cell is bounded by straight faces, each of which separates it from another single cell or from the exterior of the domain (the latter are called boundary faces). Figure \ref{fig: grid nomenclature} shows such a cell $P$ along with its faces and neighbours, and the associated nomenclature. Our FVM is applicable to grids of arbitrary polygonal cells, although for the results of Sec.\ \ref{sec: results} we will employ Cartesian grids due to the regularity of the problem geometry. Grids will be labelled after a characteristic cell length $h$. \begin{figure}[tb] \centering \includegraphics[scale=0.75]{figures/grid_schematic.pdf} \caption{Cell $P$ and its neighbouring cells, each having a single common face with $P$. Its faces and neighbours are numbered in anticlockwise order, with face $f$ separating $P$ from its neighbour $N_f$. The shaded area lies outside the domain, so face 5 is a boundary face. The geometric characteristics of face 1 are displayed. The position vectors of the centroids of cells $P$ and $N_f$ are denoted as $\vf{P}$ and $\vf{N}_f$; $\vf{c}_f$ is the centroid of face $f$ and $\vf{c}'_f$ is its closest point on the line connecting $\vf{P}$ and $\vf{N}_f$; $\vf{m}_f$ is the midpoint between $\vf{P}$ and $\vf{N}_f$; $\vf{n}_f$ is the unit vector normal to face $f$, pointing outwards of $P$, and $\vf{d}_f$ is the unit vector pointing from $\vf{P}$ towards $\vf{N}_f$. The volume of cell $P$ is denoted as $\Omega_P$.} \label{fig: grid nomenclature} \end{figure} The discretisation procedure will convert the governing equations into a large set of algebraic equations involving only (approximate) values of the dependent variables at the cell centroids. These values will be denoted using the cell index as a subscript, i.e.\ $\phi_P$ is the value of the variable $\phi$ at the centroid of cell $P$; if the variable name already includes a subscript then the cell index will be separated by a comma (e.g.\ $\tau_{d,P}$ is the deviatoric stress at cell $P$). We aim for a second-order accurate method, i.e.\ one whose discretisation error scales as $h^2$.
With the present governing equations, a difficulty arises from the fact that they all contain only first spatial derivatives because this allows the development of spurious oscillations in the solution. For example, assume that on the grid of Fig.\ \ref{fig: oscillations schematic}, a quantity $\phi$ has the value zero at the boundary points ($\bullet$), the value $+1$ at points (\textcolor{blue}{$\blacktriangle$}), and the value $-1$ at points (\textcolor{red}{$\blacktriangledown$}). FVM integration of first derivatives of $\phi$ over each cell eventually comes down to a calculation involving values of $\phi$ at face centres ($\circ$) and ($\bullet$), due to application of the Gauss theorem. If we use linear interpolation to approximate the values of $\phi$ at the face centroids ($\circ$) from the values at the cell centres on either side of each face, then we obtain $\phi = 0$ at every face centre ($\circ$). This ultimately results in obtaining a zero integral of the first derivative of $\phi$ over any cell. Thus, interpolation filters out the oscillating cell-centre field and leaves a smooth field over the face centres, which in turn produces an image of the discrete operator that varies smoothly from one cell to the next. \begin{figure}[tb] \centering \includegraphics[scale=0.60]{figures/oscillations.pdf} \caption{A checkerboard distribution of values on a Cartesian grid. Different values are stored at points \textcolor{blue}{$\blacktriangle$} and \textcolor{red}{$\blacktriangledown$}.} \label{fig: oscillations schematic} \end{figure}
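This filtering action is easy to reproduce numerically; the following one-dimensional sketch (our own toy example) shows that linear interpolation maps a checkerboard cell-centre field to identically zero face values.

\begin{verbatim}
import numpy as np

# checkerboard cell-centre field on a uniform 1D grid
phi_cells = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])

# linear interpolation to the interior face centres
phi_faces = 0.5 * (phi_cells[:-1] + phi_cells[1:])
print(phi_faces)          # all zeros: the oscillation is invisible

# a first-derivative flux balance per cell sees only face values,
# so its contribution is also identically zero
print(np.diff(phi_faces))
\end{verbatim}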
To see how this creates a problem, consider the FVM discretisation on grid $h$ of the following PDE \begin{equation} \label{eq: PDE} f(\phi) \;=\; b \end{equation} where $b$ is a known function. The FVM produces a system of algebraic equations that can be written as \begin{equation} \label{eq: FVM system} F_h(\phi_h) \;=\; B_h \end{equation} where $\phi_h$ is the vector of unknown values of $\phi$ at the cell centroids, $F_h$ is the discrete operator representing the integral of the differential operator $f$ over each cell, and $B_h$ is the vector of integrals of the known function $b$ over each cell. The aforementioned filtering action of the face interpolation scheme means that $F_h$ will filter out any oscillations in $\phi_h$, producing a smooth image $F_h(\phi_h)$; therefore, the solution $\phi_h$ to the system \eqref{eq: FVM system} can be oscillatory even if the right-hand side $B_h$ is smooth. This is not just a remote possibility, but it does occur in practice, a notorious example being the spurious pressure oscillations produced by the FVM solution of the Navier-Stokes equations (see \cite{Syrakos_06a} for a demonstration). But, if the discretisation schemes incorporated in the operator $F_h$ are designed so as to reflect any oscillations in the vector $\phi_h$ onto the image $F_h(\phi_h)$, then for a smooth right-hand side $B_h$ the system \eqref{eq: FVM system} can only have a smooth solution $\phi_h$. For the Navier-Stokes equations, the main concern is the pressure oscillations because any velocity oscillations will be reflected onto the second derivatives of velocity present in the momentum equations and are therefore inhibited. But in our case, all three of the governing equations, \eqref{eq: continuity}, \eqref{eq: momentum} and \eqref{eq: constitutive}--\eqref{eq: upper convected derivative}, only contain first derivatives and thus we have to be concerned about the possibility of spurious oscillations in all three variables $\vf{u}$, $p$, and $\tf{\tau}$. In the following subsections the adopted measures will be described. In what follows we will need a ``characteristic viscosity'', a quantity with units of viscosity that is roughly representative of the flow's viscous behaviour. Our first choice is the following: \begin{equation} \label{eq: apparent viscosity} \eta_a \;\equiv\; \kappa \;+\; \frac{S}{U/L} \end{equation} where $S$ is given by \eqref{eq: characteristic stress}. This value is constant throughout the domain. A second option we tried is similar to the coefficient of the DAVSS-G technique proposed in \cite{Sun_1999} and varies throughout the domain, depending on the ratio between the magnitudes of the stress and rate-of-strain tensors: \begin{equation} \label{eq: apparent viscosity DAVSS} \eta_a \;\equiv\; \kappa \;+\; \frac{1}{2} \, \frac{S}{U/L} \;+\; \frac{1}{2} \, \frac{S \,+\, \tau_P \,+\, \tau_{N_f}} {U/L \,+\, \dot{\gamma}_P \,+\, \dot{\gamma}_{N_f}} \end{equation} The above formula gives $\eta_a$ at face $f$ of cell $P$. With definition \eqref{eq: apparent viscosity DAVSS}, $\eta_a$ never falls below $\kappa + 0.5 S / (U/L)$, tends to $\kappa + S/(U/L)$ when both $\tau$ and $\dot{\gamma}$ are small, and tends to $\kappa + 0.5 S / (U/L) + \tau / \dot{\gamma}$ when both $\tau$ and $\dot{\gamma}$ are large. Also, it does not tend to infinity when $\dot{\gamma} \rightarrow 0$, although it has no upper bound. The discretisation of several terms (e.g.\ the calculation of $\tf{\dot{\gamma}} \equiv \nabla \vf{u} + (\nabla \vf{u})^{\mathrm{T}}$ and its magnitude) requires the use of a discrete gradient operator which approximates $\nabla \phi$ at the cell centres. We employ the least-squares operator described in detail in \cite{Syrakos_2017}, denoted here as $\nabla_h^q$, the superscript $q$ being the exponent employed; in \cite{Syrakos_2017} it was shown that on smooth structured grids the choice $q=1.5$ engenders second-order accuracy ($\nabla_h^{1.5} \phi = \nabla \phi + O(h^2)$) while with other $q$ values the accuracy degrades to first-order at boundary cells. On irregular (unstructured) grids all choices of $q$ result in first order accuracy everywhere. Nevertheless, first-order accurate gradients suffice for second-order accuracy of the differentiated variables, i.e.\ are compatible with second-order accurate FVMs \cite{Syrakos_2017}. Finally, linear interpolation for calculating face centre values on unstructured grids is performed as \begin{equation} \label{eq: CDS} \bar{\phi}_{c_f} \;=\; \bar{\phi}_{c'_f} \;+\; \bar{\nabla}^q_h \phi_{c'_f} \!\cdot\!
(\vf{c}_f - \vf{c}'_f) \end{equation} where the overbar denotes an interpolated value, and in the right-hand side both $\phi$ and its gradient are interpolated linearly to point $\vf{c}'_f$ (the closest point to the face centre lying on the line joining $\vf{P}$ to $\vf{N}_f$, Fig.\ \ref{fig: grid nomenclature}) according to the formula \begin{equation} \label{eq: linear interpolation} \bar{\phi}_{c'_f} \;\equiv\; \frac{\| \vf{c}'_f - \vf{N}_f \|}{\| \vf{N}_f - \vf{P} \|} \, \phi_P \;+\; \frac{\| \vf{c}'_f - \vf{P} \|}{\| \vf{N}_f - \vf{P} \|} \, \phi_{N_f} \end{equation} \subsection{Discretisation of the continuity equation} \label{ssec: method: discretisation: continuity} Integrating Eq.\ \eqref{eq: continuity} over cell $P$ and applying the divergence theorem, we get the mass flux balance for that cell: \begin{equation} \label{eq: continuity integral} \sum_f \int_{s_f} \vf{n} \cdot \left( \rho \vf{u} \right) \,\mathrm{d}s \;=\; 0 \end{equation} where $s_f$ is the surface of face $f$, $\vf{n}$ is the outward (of $P$) unit vector normal to that face, and $\mathrm{d}s$ is an infinitesimal surface element. The integrals summed are the outward mass fluxes through the respective faces of cell $P$. They are approximated by midpoint rule integration with additional stabilising terms: \begin{equation} \label{eq: mass flux} \int_{s_f} \vf{n} \cdot \left( \rho \vf{u} \right) \,\mathrm{d}s \;\approx\; \rho \, s_f \, \left[ \bar{\vf{u}}{}_{c_f} \!\cdot \vf{n}{}_f \;+\; u^{p+}_f \;-\; u^{p-}_f \right] \;\equiv\; \dot{M}_f \end{equation} where \begin{align} \label{eq: up+} u^{p+}_f \;&\equiv\; a_f^{mi} \left( p_P - p_{N_f} \right) \\[0.2cm] \label{eq: up-} u^{p-}_f \;&\equiv\; a_f^{mi} \: \bar{\nabla}^q_h p_{m_f} \cdot (\vf{P} - \vf{N}{}_f) ,\qquad \bar{\nabla}^q_h p_{m_f} \;\equiv\; \frac{1}{2} \left( \nabla^q_h p_P + \nabla^q_h p_{N_f} \right) \\[0.2cm] \label{eq: ami} a_f^{mi} \;&\equiv\; \frac{1}{\rho \left( \|\vf{u}_{N_f} - \vf{u}_P \| \:+\: \dfrac{h_f}{\Delta t} \right) \;+\; \dfrac{2\eta_a}{h_f}} , \qquad h_f \;\equiv\; \left[ \frac{1}{2} \left( \Omega_P + \Omega_{N_f} \right) \right]^{\frac{1}{D}} \end{align} where $\Delta t$ is the current time step (see Sec.\ \ref{ssec: method: discretisation: temporal}), $\Omega_P$ and $\Omega_{N_f}$ are the cell volumes, and $D$ equals either 2 or 3, for 2D and 3D problems, respectively. With the definition \eqref{eq: mass flux}, the discrete version of the continuity equation for a cell $P$ is \begin{equation} \label{eq: continuity integral discrete} \sum_f \dot{M}_f \;=\; 0 \end{equation} To expound the above scheme, we make the following remarks: \textbf{Remark 1:} If $u_f^{p+}$ and $u_f^{p-}$ are omitted from Eq.\ \eqref{eq: mass flux} then we are left with simple midpoint rule integration ($\bar{\vf{u}}{}_{c_f}$ is obtained using the scheme \eqref{eq: CDS}). The part of $\dot{M}_f$ due to $u_f^{p+}$ and $u_f^{p-}$ can be viewed as belonging to the truncation error of the continuity equation of cell $P$: \begin{equation} \label{eq: momentum inteprolation truncation error} \frac{1}{\Omega_P} \rho \, s_f \left( u_f^{p+} - u_f^{p-} \right) \;=\; \frac{\rho \, s_f}{\Omega_P} a_f^{mi} \left[ \left( p_P - p_{N_f} \right) \:-\: \bar{\nabla}^q_h p_{m_f} \cdot (\vf{P} - \vf{N}{}_f) \right] \end{equation} (the truncation error is defined per unit volume, hence we divide by the cell volume $\Omega_P$). 
It is easy to show, by expanding the pressure in a Taylor series about point $\vf{m}_f$ (Fig.\ \ref{fig: grid nomenclature}), that $(p_P - p_{N_f}) - \nabla p(\vf{m}_f) \cdot (\vf{P} - \vf{N}_f) = O(h^3)$. However, since we use only an approximation to the pressure gradient, $\nabla_h^q p = \nabla p + O(h)$, the term in square brackets in Eq.\ \eqref{eq: momentum inteprolation truncation error} is $O(h^2)$ (because $O(h) \cdot (\vf{P} - \vf{N}_f) = O(h^2)$). Given also that $\Omega_P = O(h^2)$, $s_f = O(h)$ and $a_f^{mi} = O(h)$, the whole term \eqref{eq: momentum inteprolation truncation error} is $O(h^2)$, which is compatible with a second-order accurate method such as the present one. \textbf{Remark 2:} The term \eqref{eq: momentum inteprolation truncation error} inhibits spurious pressure oscillations by reflecting them on the image of the discrete continuity operator (see Eq.\ \eqref{eq: FVM system} and associated discussion). Pressure oscillations cause the term $u_f^{p+}$ to oscillate from face to face, and are thus reflected on the continuity image. On the other hand, such oscillations are filtered out by the gradient operator $\nabla_h^q$, so that the term $u_f^{p-}$ varies smoothly from face to face (it does not pass the oscillations on to the image). The $u_f^{p-}$ term is used simply for counterbalancing the truncation error introduced by $u_f^{p+}$. \textbf{Remark 3:} The coefficient $a_f^{mi}$ is chosen so that the terms $u_f^{p+}$ and $u_f^{p-}$ of the mass flux expression \eqref{eq: mass flux}, which have units of velocity, are neither too small to have a stabilising effect nor so large that they dominate the mass flux. The ``velocities'' $u_f^{p+}$ and $u_f^{p-}$ are functions of local pressure variations, and attempt to quantify the contributions of these pressure variations to the velocity field. Pressure and velocity are connected through the momentum equation, so consider this equation in the following non-conservative form, where we assume that the stress tensor can be approximated through the use of a characteristic viscosity such as \eqref{eq: apparent viscosity} or \eqref{eq: apparent viscosity DAVSS} as $\tf{\tau} \approx \eta (\nabla \vf{u} + (\nabla \vf{u})^{\mathrm{T}})$: \begin{equation} \label{eq: ami momentum equation} \rho \left( \pd{\vf{u}}{t} \:+\; \vf{u} \cdot \nabla \vf{u} \right) \;=\; -\nabla p \;+\; \nabla \cdot (\eta \nabla \vf{u}) \end{equation} where we have neglected the term $\nabla \cdot (\eta (\nabla \vf{u})^{\mathrm{T}})$, assuming that it is small (it is zero if $\eta$ is constant). We are free to make these sorts of approximations because all we want is a rough estimate of the effect that the pressure gradient has on velocity. So, consider the simple uniform grid, of spacing $h$, shown in Fig.\ \ref{fig: 1D grid}, where $u$ denotes the velocity component normal to the face separating cells $P$ and $N$. We will employ a simple FV discretisation of Eq.\ \eqref{eq: ami momentum equation} in order to relate the velocity $u_c$ at the face centre $c$ to the pressures at the centres of the adjacent cells $P$ and $N$. The momentum conservation, Eq.\ \eqref{eq: ami momentum equation}, in the direction $x$ normal to the face, for the imaginary cell drawn in dashed line surrounding that face, can be discretised as \begin{equation} \label{eq: ami momentum eqn discrete} \rho \frac{u_c - u_c^{old}}{\Delta t} h^2 \;+\; \rho u_c \left. 
\frac{\mathrm{d}u}{\mathrm{d}x} \right|_c h^2 \;=\; -\frac{p_N - p_P}{h} h^2 \;+\; \frac{1}{h} \left( \eta_N \frac{u_{c_N} - u_c}{h} - \eta_P \frac{u_c - u_{c_P}}{h} \right) h^2 \end{equation} where $u_c^{\mathrm{old}}$ is the velocity at a time $\Delta t$ earlier. Assuming that $(\eta_N + \eta_P) u_c \approx 2\eta_c u_c$ we can solve this for $u_c$: \begin{equation} \label{eq: ami 1D} u_c \;=\; \frac{1}{ \rho \left( \left. \dfrac{\mathrm{d}u}{\mathrm{d}x} \right|_c h \:+\; \dfrac{h}{\Delta t} \right) \;+\; \dfrac{2\eta_c}{h}} \, \left( p_P - p_N\right) \;+\; \cdots \end{equation} where the dots ($\cdots$) denote the terms that are not related to pressure. The above equation provides a quantification of the local effect of pressure gradient on velocity. It was derived from a simplistic one-dimensional consideration; for more general flows, the coefficient $a_f^{mi}$ \eqref{eq: ami} can be seen to be a generalisation of the coefficient multiplying the pressure difference in Eq.\ \eqref{eq: ami 1D}. It should be noted that the one-dimensional momentum equation \eqref{eq: ami momentum eqn discrete} accounts for the viscous force due to the velocity variation in the direction perpendicular to the face, but omits that due to the velocity variation in the direction parallel to the face; had the latter also been accounted for, the viscous term in the denominator of \eqref{eq: ami 1D}, and of $a_f^{mi}$ \eqref{eq: ami}, would have been $4\eta/h$ (or $6\eta/h$ in 3D) instead of $2\eta/h$, which would have reduced the magnitude of the stabilisation terms in the mass flux scheme \eqref{eq: mass flux}, but would also likely have increased the accuracy somewhat, according to the results of Sec.\ \ref{sec: method: validation}. We did not investigate this issue further, but the choice is not crucial as it affects neither the stabilisation ability of the technique nor the $O(h^2)$ magnitude of the error \eqref{eq: momentum inteprolation truncation error} that it introduces. \begin{figure}[tb] \centering \includegraphics[scale=1.]{figures/grid_1D.pdf} \caption{A row of grid cells.} \label{fig: 1D grid} \end{figure} \textbf{Remark 4:} An explanation of how the scheme retains its oscillation-inhibiting effect when $h \rightarrow 0$ despite $u_f^{p+}$, $u_f^{p-}$ tending to zero is given in Appendix \ref{appendix: pressure stabilisation}. The scheme \eqref{eq: mass flux} -- \eqref{eq: ami} is a variant of the popular technique known as \textit{momentum interpolation}, which was originally proposed in \cite{Rhie_1983}. Ever since, many variants of this technique have been proposed (see \cite{Zhang_2014} and references therein), almost all of which are intertwined with the SIMPLE algebraic solver (exceptions include \cite{Deng_1994, Syrakos_06a}). Although this connection with SIMPLE can be useful in some respects \cite{Klaij_2015}, and SIMPLE is employed in the present work, we prefer an independent method such as the one proposed here because it is more general, transparent, and easily adaptable. It can be used with other algebraic solvers such as Newton's method, and it does not lead to dependence of the solution on underrelaxation factors.
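Gathering the above, a compact sketch of the stabilised mass-flux computation is given below; the flat argument list and the variable names are our own, and the characteristic viscosity $\eta_a$ is assumed to be supplied by one of the two definitions given earlier.

\begin{verbatim}
import numpy as np

def mass_flux(rho, s_f, u_face_n, p_P, p_N, gradp_P, gradp_N,
              u_P, u_N, r_P, r_N, vol_P, vol_N, eta_a, dt, D=2):
    """Stabilised mass flux M_f through an interior face (sketch).

    u_face_n : linearly interpolated normal velocity at the face
    r_P, r_N : centroid position vectors of cells P and N_f
    """
    h_f = (0.5 * (vol_P + vol_N)) ** (1.0 / D)
    a_mi = 1.0 / (rho * (np.linalg.norm(u_N - u_P) + h_f / dt)
                  + 2.0 * eta_a / h_f)
    # compact pressure difference (oscillation-sensitive) ...
    up_plus = a_mi * (p_P - p_N)
    # ... minus its smooth, gradient-based counterpart
    up_minus = a_mi * np.dot(0.5 * (gradp_P + gradp_N), r_P - r_N)
    return rho * s_f * (u_face_n + up_plus - up_minus)
\end{verbatim}

For a smooth pressure field the two stabilising terms nearly cancel, leaving plain midpoint-rule integration; a checkerboard pressure component survives only in the first of them and is thus reflected on the continuity residual.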
\subsection{Discretisation of the momentum equation} \label{ssec: method: discretisation: momentum} As for the continuity equation, the FVM discretisation of the momentum equation \eqref{eq: momentum} begins by integrating it over a cell $P$ and applying the divergence theorem, to get \begin{equation} \label{eq: momentum integral} \int_{\Omega_P} \pd{(\rho\vf{u})}{t} \, \mathrm{d}\Omega \;+\; \sum_f \int_{s_f} \vf{n} \cdot \left( \rho \vf{u} \vf{u} \right) \, \mathrm{d}s \;=\; -\sum_f \int_{s_f} \vf{n} \, p \, \mathrm{d} s \;+\; \sum_f \int_{s_f} \vf{n} \cdot \tf{\tau} \, \mathrm{d}s \end{equation} Using midpoint rule integration, the above equation is approximated by \begin{equation} \label{eq: momentum integral discrete} \left. \frac{\partial (\rho \vf{u})}{\partial t} \right|^a_P \Omega_P \;+\; \sum_f \dot{M}_f \, \bar{\vf{u}}{}_{c_f} \;=\; - \sum_f \bar{p}_{c_f} \, s_f \, \vf{n}{}_f \;+\; \sum_f \vf{F}^{\tau}_f \end{equation} where $\partial / \partial t|^a_P$, the approximate time derivative at $\vf{P}$, will be defined in Sec.\ \ref{ssec: method: discretisation: temporal}, $\dot{M}_f$ is the mass outflux through face $f$ defined by Eq.\ \eqref{eq: mass flux}, $\bar{\vf{u}}{}_{c_f}$ and $\bar{p}_{c_f}$ are approximated from cell-centre values with the interpolation scheme \eqref{eq: CDS}, and $\vf{F}^{\tau}_f$ is the approximation of the force on face $f$ due to the EVP stress tensor $\tf{\tau}$. Note that Eq.\ \eqref{eq: momentum integral discrete} is a vector equation, with two (in 2D) or three (in 3D) scalar components. The force $\vf{F}^{\tau}_f$ is calculated as \begin{equation} \label{eq: stress force} \int_{s_f} \vf{n} \cdot \tf{\tau} \, \mathrm{d}s \;\approx\; \vf{F}^{\tau}_f \;\equiv\; s_f \, \left[ \vf{n}{}_f \cdot \bar{\tf{\tau}}{}_{c_f} \;+\; \vf{D}^{\tau+}_f \;-\; \vf{D}^{\tau-}_f \right] \end{equation} Again, the stress tensor at the face centroid, $\bar{\tf{\tau}}{}_{c_f}$, is calculated via the linear interpolation scheme \eqref{eq: CDS} (applied component-wise). The viscous pseudo-stresses $\vf{D}_f^{\tau+}$ and $\vf{D}_f^{\tau-}$ are stabilisation terms employed to suppress spurious velocity oscillations. They are vectors, whose $i$-th components are \begin{align} \label{eq: D+} D_{f,i}^{\tau+} \;&\equiv\; \eta_a \frac{u_{i,N_f} - u_{i,P}}{\|\vf{N}_f - \vf{P}\|} \\[0.2cm] \label{eq: D-} D_{f,i}^{\tau-} \;&\equiv\; \eta_a \bar{\nabla}^q_h u_{i,m_f} \cdot \vf{d}_f ,\qquad \bar{\nabla}^q_h u_{i,m_f} \;\equiv\; \frac{1}{2} \left( \nabla^q_h u_{i,P} + \nabla^q_h u_{i,N_f} \right) \end{align} where $u_i$ is the $i$-th velocity component ($\vf{u} = (u_1, u_2)$ in 2D), and $\vf{d}_f = (\vf{N}_f - \vf{P}) / \| \vf{N}_f - \vf{P} \|$ is the unit vector pointing from $\vf{P}$ to $\vf{N}_f$ (Fig.\ \ref{fig: grid nomenclature}). It can be seen that both $D_{f,i}^{\tau+}$ and $D_{f,i}^{\tau-}$ equal a characteristic viscosity, given by \eqref{eq: apparent viscosity} or \eqref{eq: apparent viscosity DAVSS}, times the velocity gradient in the direction $\vf{d}_f$, albeit calculated differently: the gradient as computed in $D_{f,i}^{\tau+}$ is sensitive to velocity oscillations whereas that computed in $D_{f,i}^{\tau-}$ is not. The mechanism of oscillation suppression is similar to that for momentum interpolation: in the presence of spurious velocity oscillations, the smooth part of $D_{f,i}^{\tau+}$ is cancelled out by $D_{f,i}^{\tau-}$, but the oscillatory part produces oscillations in the operator image (in $F_h(\phi_h)$, in the terminology of Eq.\ \eqref{eq: FVM system}). 
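The face-force assembly can be sketched in the same spirit (our own naming; \texttt{gradu\_P[i, j]} is assumed to hold the least-squares cell gradient $\partial u_i / \partial x_j$, and $\eta_a$ is the characteristic viscosity):

\begin{verbatim}
import numpy as np

def stress_force(s_f, n_f, tau_face, u_P, u_N, gradu_P, gradu_N,
                 r_P, r_N, eta_a):
    # F_f = s_f [ n_f . tau_face + D+ - D- ], per velocity component
    dist = np.linalg.norm(r_N - r_P)
    d_f = (r_N - r_P) / dist              # unit vector P -> N_f
    D_plus = eta_a * (u_N - u_P) / dist   # compact, oscillation-aware
    D_minus = eta_a * (0.5 * (gradu_P + gradu_N)) @ d_f   # smooth
    return s_f * (n_f @ tau_face + D_plus - D_minus)
\end{verbatim}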
In the context of colocated FVMs for viscoelastic flows, this technique was first proposed in \cite{Oliveira_1998}, inspired by the corresponding ``momentum interpolation'' technique for pressure. A question concerning the method is the appropriate choice of the parameter $\eta_a$. The original method \cite{Oliveira_1998} as well as some subsequent variants (e.g.\ \cite{Matos_2009, Niethammer_2017}), similarly to the original momentum interpolation \cite{Rhie_1983}, derived the coefficient from the SIMPLE matrix of the linearised discrete constitutive equation, arriving at complicated expressions. The present simpler approach is essentially equivalent to that adopted by \cite{Pimenta_2017, Fernandes_2017} who, for viscoelastic flows, set the coefficient $\eta_a$ equal to the polymeric viscosity. The aim is that $\vf{D}_f^{\tau+}$ and $\vf{D}_f^{\tau-}$ are of the same order of magnitude as the EVP stress acting on the cell face, and this can be achieved using a characteristic viscosity $\eta_a$ defined as the ratio of a typical stress to a typical rate of strain for the given problem. The present technique can also be interpreted as a ``both-sides diffusion'' technique \cite{Fernandes_2017} where the momentum equation discretised by the FVM is not \eqref{eq: momentum} but an equivalent equation where the same diffusion term $\nabla \cdot (\eta_a \nabla \vf{u})$ has been subtracted from both sides: \begin{equation} \label{eq: momentum bsd} \pd{(\rho\vf{u})}{t} \;+\; \nabla \cdot \left( \rho \vf{u} \vf{u} \right) \;-\; \nabla \cdot (\eta_a \nabla \vf{u}) \;=\; -\nabla p \;+\; \nabla\cdot\tf{\tau} \;-\; \nabla \cdot (\eta_a \nabla \vf{u}) \end{equation} The left-hand side term is not discretised in the same way as the right-hand side one; in particular, the component $\vf{D}_f^{\tau+}$ in \eqref{eq: stress force} comes from the discretisation of the left-hand side term, whereas the component $\vf{D}_f^{\tau-}$ comes from the discretisation of the right-hand side term. The other components of these diffusion terms are discretised in exactly the same way for both of them and cancel out, leaving only $\vf{D}_f^{\tau+}$ and $\vf{D}_f^{\tau-}$. Since both of these diffusion terms are discretised by central differences they contribute $O(1)$ to the truncation error \cite{Syrakos_2017} but $O(h^2)$ to the discretisation error. In particular, consider that Eq.\ \eqref{eq: momentum bsd} corresponds to Eq.\ \eqref{eq: PDE}, and its FVM discretisation with the present schemes leads to the algebraic system \eqref{eq: FVM system}, with the dependent variable $\phi \equiv \vf{u}$ being the velocity. If $\phi^e$ is the exact velocity (the solution of Eq.\ \eqref{eq: PDE}) and $\phi$ is the approximate velocity obtained by solving the system \eqref{eq: FVM system}, then the discretisation error is $\varepsilon = \phi^e - \phi$. The latter is related to the truncation error $\alpha$ by $F'_h(\phi_h) \cdot \varepsilon_h \approx - \alpha_h$, where $F'_h(\phi_h)$ is the Jacobian matrix of the operator $F_h$ evaluated at the approximate solution $\phi_h$ (see e.g.\ \cite{Syrakos_2012} for a derivation). Let us focus on the $\nabla \cdot (\eta_a \nabla \vf{u})$ term of the left-hand side of \eqref{eq: momentum bsd}, which is discretised with a compact stencil giving rise to the $\vf{D}_f^{\tau+}$ term of the scheme \eqref{eq: stress force} (a similar analysis of the term on the right-hand side can be made).
Because the diffusion operator is linear, its contribution to the truncation error $F'_h(\phi_h) \cdot \varepsilon_h$ is equal to its contribution to $F_h(\varepsilon_h)$, i.e.\ it can be calculated by replacing the velocities by their discretisation errors in \eqref{eq: D+} (we neglect the rest of the contributions, which cancel out with the corresponding ones of the diffusion term of the right side of Eq.\ \eqref{eq: momentum bsd}). So, plugging $\varepsilon$ into Eq.\ \eqref{eq: D+} instead of $u_i$ we get a contribution to the truncation error of cell $P$ of $(s_f / \Omega_P) \eta_a (\varepsilon_{N_f} - \varepsilon_P) / \| \vf{N}_f - \vf{P} \| = O(h^{-2}) (\varepsilon_{N_f} - \varepsilon_P)$. If our method is second-order accurate then $\varepsilon = O(h^2)$. On unstructured grids, the variation of discretisation error in space has a random component \cite{Syrakos_2012} so that $\varepsilon_{N_f} - \varepsilon_P = O(h^2)$ as well, which results in an $O(1)$ contribution to the truncation error. On smooth structured grids the discretisation error varies smoothly \cite{Syrakos_2012} so that $\varepsilon_{N_f} - \varepsilon_P = O(h^3)$ and each face contributes $O(h)$ to the truncation error (on such grids there is also a cancellation between opposite faces for each cell which reduces the net contribution to the truncation error to $O(h^2)$). Another benefit that comes from the incorporation of the stabilisation terms in the scheme \eqref{eq: stress force} concerns the algebraic solver, SIMPLE \cite{Patankar_1980}. Within each SIMPLE iteration a succession of linearised systems of equations is solved, one for each dependent variable. The systems for the velocity components $u_i$ are derived from linearisation of the (discretised) momentum equation \eqref{eq: momentum}. The latter, unlike in (generalised) Newtonian flows, contains no diffusion terms and the velocity appears directly only in the convection terms of the left-hand side. In low Reynolds number flows, these terms play only a minor role and the momentum balance involves mainly the pressure and stress. Constructing the linear systems for the velocity components only from these inertial terms would lead to poor convergence of the SIMPLE algorithm. Nevertheless, velocity plays an important indirect role in the momentum equation through the EVP stress tensor, to which it is related via the constitutive equation \eqref{eq: constitutive}. Therefore, either the momentum and constitutive equations have to be solved in a coupled manner, or the effect of velocity on the stress tensor has to somehow be directly quantified in the momentum equation, which is precisely what the stabilisation term $\vf{D}_f^{\tau+}$ achieves; in particular, the diffusion term on the left-hand side of Eq.\ \eqref{eq: momentum bsd} contributes to the matrix of coefficients of the linear systems for $u_i$, making this matrix very similar to those for (generalised) Newtonian flows. Finally, the momentum balance for cells that lie at the wall boundaries involves the pressure and the stress tensor at these boundaries. The pressure is linearly extrapolated from the interior, as is a common practice \cite{Ferziger_02}. For the stress, two possible options are linear extrapolation, as for pressure, and the imposition of a zero-gradient condition which is roughly equivalent to setting the stress at the boundary face equal to that at the owner cell centre.
In \cite{Habla_2012}, the former approach was found, as expected, to be more accurate, while the latter was found to be more robust. In the present work we employed either linear extrapolation or a modified extrapolation which is akin to the scheme \eqref{eq: stress force}. So, if $b$ is a boundary face of cell $P$, the linearly extrapolated value of the stress at its centre is calculated as \begin{equation} \label{eq: boundary extrapolation} \bar{\tf{\tau}}{}_{c_b} \;=\; \tf{\tau}{}_P \;+\; \nabla_h^{1} \tf{\tau}{}_P \cdot (\vf{c}_b - \vf{P}) \end{equation} where the least-squares stress gradient $\nabla_h^{1} \tf{\tau}{}_P$ is calculated with $q=1$ and using only the stress values at the centres of $P$ and its neighbour cells. This scheme is also used for the extrapolation of pressure, but for the stress we can go a step further: the force due to the EVP stress tensor on boundary face $b$ can be calculated by a scheme that has precisely the form \eqref{eq: stress force}, except that the interpolated stress $\bar{\tf{\tau}}{}_{c_f}$ \eqref{eq: CDS} is replaced by the extrapolated value $\bar{\tf{\tau}}{}_{c_b}$ \eqref{eq: boundary extrapolation} and the $D$ terms are replaced by \begin{align} \label{eq: D+ boundary} D_{b,i}^{\tau+} \;&\equiv\; \eta_a \frac{u_{i,c_b} - u_{i,P}}{\|\vf{c}_b - \vf{P}\|} \\[0.2cm] \label{eq: D- boundary} D_{b,i}^{\tau-} \;&\equiv\; \eta_a \bar{\nabla}^q_h u_{i,m_b} \cdot \vf{d}_b \end{align} where $\vf{d}_b$ is the unit vector pointing from $\vf{P}$ to $\vf{c}_b$ and the gradient in \eqref{eq: D- boundary} is extrapolated to point $\vf{m}_b \equiv (\vf{P} + \vf{c}_b) / 2$ via the scheme \eqref{eq: boundary extrapolation} with $\vf{c}_b$ replaced by $\vf{m}_b$. The terms \eqref{eq: D+ boundary} -- \eqref{eq: D- boundary} do not have an apparent stabilising effect, and in fact our experience has shown that they can sometimes cause convergence difficulties for the algebraic solver. Such is the case with the wall slip examined in Sec.\ \ref{ssec: results: slip}, where we had to set these terms to zero and use simple linear extrapolation of the stresses at the walls. Nevertheless, we also found that these terms can increase the accuracy of the solution (see Sec.\ \ref{sec: method: validation}) and therefore we employed them whenever possible (they were used for all the main results except those of Sec.\ \ref{ssec: results: slip}). \subsection{Discretisation of the constitutive equation} \label{ssec: method: discretisation: constitutive} Again, by integrating the SHB equation \eqref{eq: constitutive} -- \eqref{eq: upper convected derivative} over the volume of a cell $P$ and applying the divergence theorem we obtain \begin{align} \nonumber &\int_{\Omega_P} \pd{\tf{\tau}}{t} \, \mathrm{d}\Omega \;+\; \sum_f \int_{s_f} \vf{n} \cdot \vf{u} \tf{\tau} \, \mathrm{d}s \;=\; \\ \label{eq: constitutive integral} &G \int_{\Omega_P} \dot{\tf{\gamma}} \, \mathrm{d}\Omega \;-\; G \int_{\Omega_P} \left( \frac{\max(0, \tau_d-\tau_y)}{k} \right)^{\frac{1}{n}} \frac{1}{\tau_d} \; \tf{\tau} \, \mathrm{d}\Omega \;+\; \int_{\Omega_P} \left( (\nabla \vf{u})^{\mathrm{T}} \cdot \tf{\tau} \;+\; \tf{\tau} \cdot \nabla \vf{u} \right) \mathrm{d} \Omega \end{align} The surface integral on the left-hand side derives from application of the divergence theorem to the second term on the right-hand side of Eq.\ \eqref{eq: upper convected derivative}, using the identity $\vf{u} \cdot \nabla \tf{\tau} = \nabla \cdot (\vf{u}\tf{\tau}) - (\nabla \cdot \vf{u}) \tf{\tau}$ and the incompressibility condition $\nabla \cdot \vf{u} = 0$.
Equation \eqref{eq: constitutive integral} has the form of a conservation equation for stress. On the left-hand side there is the rate of change of stress in the volume plus the outflux of stress, which together equal the rate of change of stress in a fixed mass of fluid moving with the flow. On the right-hand side there are three source terms, which express, respectively, stress generation (the $\dot{\tf{\gamma}}$ term), relaxation (the viscous term), and transformation (the upper-convected derivative terms). Equation \eqref{eq: constitutive integral} is then discretised as: \begin{align} \nonumber &\left. \pd{\tf{\tau}}{t} \right|^a_P \Omega_P \;+\; \frac{1}{\rho} \sum_f \dot{M}_f \bar{\tf{\tau}}{}^{_C}_{c_f} \;=\; \\ \label{eq: constitutive integral discrete} &G \, \Omega_P \, \dot{\tf{\gamma}}{}_P \;-\; G \, \Omega_P \, \left( \frac{\max(0, \tau_{d,P}-\tau_y)}{k} \right)^{\frac{1}{n}} \frac{1}{\tau_{d,P}} \; \tf{\tau}{}_P \;+\; \Omega_P \, \left( (\nabla_h^q \vf{u})^{\mathrm{T}}_P \cdot \bar{\tf{\tau}}{}_P \;+\; \bar{\tf{\tau}}{}_P \cdot \nabla_h^q \vf{u}_P \right) \end{align} where $\partial / \partial t|^a_P$ is the approximate time derivative at $\vf{P}$ (Sec.\ \ref{ssec: method: discretisation: temporal}), $\dot{M}_f$ is the mass flux \eqref{eq: mass flux}, $\dot{\tf{\gamma}}{}_P = \nabla_h^q \vf{u}_P + (\nabla_h^q \vf{u})^{\mathrm{T}}_P$ is the discrete value of $\dot{\tf{\gamma}}$ at $\vf{P}$, and $\tau_{d,P}$ is the value of $\tau_d$ at $\vf{P}$, calculated from $\tf{\tau}{}_P$ via Eq.\ \eqref{eq: deviatoric stress}. In the convection term of Eq.\ \eqref{eq: constitutive integral discrete} (the second term on the left-hand side), $\bar{\tf{\tau}}{}^{_C}_{c_f}$ is the value of $\tf{\tau}$ interpolated at the face centre $\vf{c}_f$, but not by the linear interpolation \eqref{eq: CDS}. Since the constitutive equation lacks diffusion terms, there is a danger of spurious stress oscillations, and a common preventive measure is to use so-called \textit{high-resolution schemes} \cite{Moukalled_2016} for the discretisation of the convection term. In viscoelastic flows, the CUBISTA scheme \cite{Alves_2003} has proved to be effective and is widely adopted (e.g.\ \cite{Chen_2013, Habla_2014, Comminal_2015, Syrakos_2018}). In the present work, we adapted it for use on unstructured grids as follows. To account for skewness, a scheme similar to \eqref{eq: CDS} is used, but the value at point $\vf{c}'_f$ is interpolated with CUBISTA: \begin{equation} \label{eq: CUBISTA skewness} \bar{\phi}^{_C}_{c_f} \;=\; \bar{\phi}^{_C}_{c'_f} \;+\; \bar{\nabla}^q_h \phi_{c'_f} \!\cdot\! (\vf{c}_f - \vf{c}'_f) \end{equation} Note that the second term on the right-hand side is exactly the same as in the scheme \eqref{eq: CDS} (the gradient is interpolated linearly), while the first term is the value at point $\vf{c}'_f$ interpolated via CUBISTA. CUBISTA is a high-resolution scheme based on the Normalised Variable Formulation \cite{Gaskell_1988}, and on structured grids such schemes interpolate the value at a face from the values at two upwind cell centres and one downwind cell centre; these are the two cells on either side of the face, labelled $P$ and $D$ in Fig.\ \ref{fig: CUBISTA grid}, plus a farther-upwind cell $U$. But on an unstructured grid the farther-upwind cell $U$ is not straightforward (or even possible) to identify.
To overcome this hurdle, we follow an approach \cite{Jasak_1999, Moukalled_2016} which is based on the observation that, on a uniform structured grid such as that shown in Fig.\ \ref{fig: CUBISTA grid}, it holds that \begin{equation} \label{eq: phi_U} \phi_U \;=\; \phi_D \;-\; \nabla_h^q \phi_P \cdot (\vf{D} - \vf{U}) \end{equation} because the least-squares gradient is such that $\nabla_h^q \phi_P \cdot (\vf{D} - \vf{U}) = \phi_D - \phi_U$. On unstructured grids, following \cite{Jasak_1999, Moukalled_2016} we define the location $\vf{U} \equiv \vf{P} - (\vf{D} - \vf{P})$, which lies diametrically opposite to $\vf{D}$ relative to $\vf{P}$; since the grid is unstructured, it is unlikely that $\vf{U}$ coincides with an actual cell centre, but we can still use Eq.\ \eqref{eq: phi_U} to estimate a value there, $\phi_U$. Information from the upwind side of cell $P$ is implicitly incorporated into $\phi_U$ through the gradient $\nabla_h^q \phi_P$, and if the scheme is used on a uniform structured grid then it automatically retrieves the value at the centroid of the actual cell $U$. \begin{figure}[tb] \centering \includegraphics[scale=0.9]{figures/CUBISTA_grid.pdf} \caption{On a uniform structured grid, CUBISTA interpolates the value at face centre $\vf{c}$ from the values at the downwind ($D$), upwind ($P$), and farther-upwind ($U$) cells (assuming the mass flux is directed from cell $P$ to cell $D$).} \label{fig: CUBISTA grid} \end{figure} So now we have a set of three collinear and equidistant points, $\vf{U}$, $\vf{P}$ and $\vf{D}$, and three corresponding values, $\phi_U$, $\phi_P$ and $\phi_D$, and CUBISTA can be employed to calculate the value at point $\vf{c}'_f$, which lies on the same line as these three points. Defining the normalised variables as \begin{equation} \label{eq: def NV xi} \xi \;=\; \frac{\| \vf{c}'_f - \vf{P} \|}{\| \vf{D} - \vf{P} \|} \end{equation} \begin{equation} \label{eq: def NV phi_P} \tilde{\phi}_P \;=\; \frac{\phi_P - \phi_U}{\phi_D - \phi_U} \end{equation} \begin{equation} \label{eq: def NV phi_c} \tilde{\phi}_{c'_f}(\xi,\tilde{\phi}_P) \;=\; \frac{\bar{\phi}^{_C}_{c'_f} - \phi_U}{\phi_D - \phi_U} \end{equation} (note that $\xi$ is just the linear interpolation factor multiplying $\phi_{N_f}$ in Eq.\ \eqref{eq: linear interpolation}), CUBISTA blends Quadratic Upwind Interpolation (QUICK) and first-order upwinding (UDS) as follows: \begin{equation} \label{eq: CUBISTA NV formulation} \tilde{\phi}_{c'_f}(\xi,\tilde{\phi}_P) \;=\; \left\{ \begin{array}{l l l} \tilde{\phi}_P & \quad \text{if } \tilde{\phi}_P \leq 0 & \text{(UDS)} \\[0.2cm] \frac{1}{3} (1 + \xi) (3 + \xi) \tilde{\phi}_P & \quad \text{if } 0 < \tilde{\phi}_P \leq \frac{3}{8} & \text{(Transition 1)} \\[0.2cm] (1-\xi^2) \tilde{\phi}_P \;+\; \frac{1}{2} \xi (1 + \xi) & \quad \text{if } \frac{3}{8} < \tilde{\phi}_P \leq \frac{3}{4} & \text{(QUICK)} \\[0.2cm] (1 - \xi)^2 \tilde{\phi}_P \;+\; \xi (2-\xi) & \quad \text{if } \frac{3}{4} < \tilde{\phi}_P \leq 1 & \text{(Transition 2)} \\[0.2cm] \tilde{\phi}_P & \quad \text{if } \tilde{\phi}_P > 1 & \text{(UDS)} \end{array} \right. \end{equation} From $\tilde{\phi}_{c'_f}$ the un-normalised value $\bar{\phi}^{_C}_{c'_f}$ can be recovered via \eqref{eq: def NV phi_c} for use in Eq.\ \eqref{eq: CUBISTA skewness}.
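To illustrate how these pieces fit together, the face-value computation can be summarised as in the following Python sketch, which implements Eqs.\ \eqref{eq: phi_U} -- \eqref{eq: CUBISTA NV formulation} for a single face (illustrative only: the function and variable names are placeholders, not part of the actual solver, which also supplies the least-squares gradient \texttt{grad\_phi\_P}):
\begin{verbatim}
import numpy as np

def cubista_face_value(phi_P, phi_D, grad_phi_P, r_P, r_D, r_cfp):
    # Virtual far-upwind value, Eq. (phi_U), using D - U = 2 (D - P):
    phi_U = phi_D - np.dot(grad_phi_P, 2.0 * (r_D - r_P))
    denom = phi_D - phi_U
    if abs(denom) < 1.0e-30:                # phi_D = phi_U: fall back to UDS
        return phi_P
    xi = np.linalg.norm(r_cfp - r_P) / np.linalg.norm(r_D - r_P)
    phi_P_nv = (phi_P - phi_U) / denom      # normalised variable
    if phi_P_nv <= 0.0 or phi_P_nv >= 1.0:  # local extremum at P: UDS
        phi_c_nv = phi_P_nv
    elif phi_P_nv <= 3.0 / 8.0:             # Transition 1
        phi_c_nv = (1.0 + xi) * (3.0 + xi) / 3.0 * phi_P_nv
    elif phi_P_nv <= 3.0 / 4.0:             # QUICK
        phi_c_nv = (1.0 - xi**2) * phi_P_nv + 0.5 * xi * (1.0 + xi)
    else:                                   # Transition 2
        phi_c_nv = (1.0 - xi)**2 * phi_P_nv + xi * (2.0 - xi)
    return phi_U + phi_c_nv * denom         # un-normalise
\end{verbatim}
The skewness correction of Eq.\ \eqref{eq: CUBISTA skewness} is then added separately, with its gradient term computed exactly as in the linear scheme \eqref{eq: CDS}.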
However, the scheme can be conveniently reformulated in terms of un-normalised $\phi$ values using the median function\footnote{It can be implemented as $\text{median}(a,b,c) \,=\, \max( \,\min(a,b), \,\min( \max(a,b), c ) \,)$.} \cite{Leonard_1996} as: \begin{align} \label{eq: CUBISTA median} \bar{\phi}^{_C*}_{c'_f} \;&=\; \text{median}\left( \quad \phi_P, \quad \phi_U \,+\, \frac{(1+\xi)(3+\xi)}{3}(\phi_P - \phi_U), \quad \phi_D \,-\, (1-\xi)^2 (\phi_D-\phi_P) \quad \right) \\[0.2cm] \bar{\phi}^{_C}_{c'_f} \;&=\; \text{median}\left( \quad \phi_P, \quad \bar{\phi}^{_C*}_{c'_f}, \quad \phi_P \,+\, \frac{\phi_D-\phi_U}{2} \xi \,+\, \frac{1}{2}(\phi_D - 2\phi_P + \phi_U) \xi^2 \quad \right) \end{align} which also automatically sets $\bar{\phi}^{_C}_{c'_f} = \phi_P$ (UDS) when $\phi_D = \phi_U$. The scheme \eqref{eq: CUBISTA NV formulation} is based on QUICK, but switches to UDS when there is a local minimum or maximum at $\vf{P}$ ($\tilde{\phi}_P < 0$ or $\tilde{\phi}_P > 1$), which could indicate a spurious oscillation. The region $\tilde{\phi}_P \in [3/8, 3/4]$, where the variation between the three values $\phi_U$, $\phi_P$ and $\phi_D$ is not far from linear, is considered to be ``safe'' and QUICK is applied unreservedly there. The upper bound of this region, which for non-uniform structured grids is given as a $\xi$-dependent value in \cite{Alves_2003}, is chosen here to be the fixed number $\tilde{\phi}_P = 3/4$ based on the same criterion as in \cite{Alves_2003}, namely the condition that the quadratic profile passing through the three points $\vf{U}$, $\vf{P}$ and $\vf{D}$ is monotonic (no overshoots / undershoots). In terms of normalised variables, this condition requires that $\tilde{\phi}_{c'_f} (\xi, \tilde{\phi}_P)$, as given by the QUICK branch of Eq.\ \eqref{eq: CUBISTA NV formulation}, increases monotonically towards the value 1 as $\xi \rightarrow 1$, which is ensured by the condition $\partial \tilde{\phi}_{c'_f} (\xi,\tilde{\phi}_P) / \partial \xi |_{\xi = 1} \geq 0 \Rightarrow \tilde{\phi}_P \leq 3/4$. We also note that, if CUBISTA achieves its goal of producing smooth $\phi$ distributions, then grid refinement, which brings the points $\vf{U}$, $\vf{P}$ and $\vf{D}$ closer together, will cause $\phi$ to vary more linearly in the neighbourhood of the three points, i.e.\ $\tilde{\phi}_P \rightarrow 0.5$ as the grid is refined. On fine grids, therefore, CUBISTA reduces to QUICK throughout the domain, except at local extrema of $\phi$. Finally, note the interpolated value $\bar{\tf{\tau}}{}_P$ in the stress transformation terms on the right-hand side of Eq.\ \eqref{eq: constitutive integral discrete}. An obvious choice is to set $\bar{\tf{\tau}}{}_P = \tf{\tau}{}_P$, but it was found in \cite{Zhou_2016} that using instead a weighted average of the stress at $P$ and its neighbours can mitigate the high-$\Wei$ problem. In the original scheme \cite{Zhou_2016} the weighting was based on the mass fluxes through the faces of the cell.
However, we found that this causes a noticeable degradation of accuracy, and instead used the following scheme (written here for 2D problems): \begin{equation} \label{eq: upper convective terms discretisation} (\nabla_h^q \vf{u})^{\mathrm{T}}_P \cdot \bar{\tf{\tau}}{}_P \;+\; \bar{\tf{\tau}}{}_P \cdot \nabla_h^q \vf{u}_P \;=\; \left[ \begin{array}{l} 2\, \overline{\tau_{11,P}} \, \pd{u_1}{x_1} \:+\: 2\, \tau_{12,P} \, \pd{u_1}{x_2} \\ \tau_{11,P} \pd{u_2}{x_1} \:+\: \overline{\tau_{12,P}} (\pd{u_1}{x_1} + \pd{u_2}{x_2}) \:+\: \tau_{22,P} \pd{u_1}{x_2} \\ 2\, \overline{\tau_{22,P}} \pd{u_2}{x_2} \;+\; 2\, \tau_{12,P} \pd{u_2}{x_1} \end{array} \right] \, \begin{array}{l} \text{\small{(1,1) component}} \\ \text{\small{(1,2) component}} \\ \text{\small{(2,2) component}} \end{array} \end{equation} where the velocity derivatives are the components of $\nabla_h^q \vf{u}{}_P$ and \begin{equation} \label{eq: stress reconstruction} \overline{\tau_{ij,P}} \;=\; \frac{1}{D \, \Omega_P} \sum_{f=1}^F \bar{\tau}_{ij,c_f}^{_C} \, s_f \, (\vf{c}{}_f - \vf{P}) \cdot \vf{n}{}_f \end{equation} where $D$ = 2 or 3, for 2D or 3D problems, respectively. The interpolation scheme \eqref{eq: stress reconstruction} is 2nd-order accurate (Appendix \ref{appendix: derivation of reconstruction scheme}). In the (1,2) component of \eqref{eq: upper convective terms discretisation}, the middle term, although equal to zero in the limit of infinite grid fineness due to the continuity equation, has been retained for numerical stability. Compared to simply setting $\overline{\tau_{ij,P}} = \tau_{ij,P}$, this scheme was found, in the numerical tests of Sec.\ \ref{sec: method: validation}, to allow an increase of about 40\% in the maximum solvable $\Wei$ number and to provide a slight increase in accuracy. At wall boundaries it was found that using linearly extrapolated stress values in \eqref{eq: stress reconstruction} leads to convergence difficulties. Therefore, wall boundary values were omitted from the calculation, weighting the values at the remaining faces by their $s_f (\vf{c}_f-\vf{P}) \cdot \vf{n}_f$ factors, i.e.\ by replacing $D \, \Omega_P$ in the denominator of \eqref{eq: stress reconstruction} by \begin{equation} D \, \Omega_P' \;=\; \sum_{f \notin \{W_P\}} s_f \left( \vf{c}_f -\vf{P} \right) \cdot \vf{n}_f \;=\; D \, \Omega_P \;-\; \sum_{f \in \{W_P\}} s_f \left( \vf{c}_f -\vf{P} \right) \cdot \vf{n}_f \end{equation} where $\{W_P\}$ is the set of wall boundary faces of cell $P$. This is a 1st-order accurate interpolation. In the simulations of Sec.\ \ref{sec: results} we did not encounter the high-$\Wei$ problem, but should it arise one can reformulate the constitutive equation in terms of a more weakly varying function of the stress tensor, such as the logarithm of the conformation tensor \cite{Fattal_2004, Afonso_2009}. This has been applied to the Saramito model in \cite{Vita_2018}. \subsection{Temporal discretisation scheme} \label{ssec: method: discretisation: temporal} The approximate time derivatives in the momentum and constitutive equations are calculated with a 2nd-order accurate, three-time-level implicit scheme with variable time step. Suppose we enter time step $i$ and let $\phi_P^i$ be the current, yet unknown, value of $\phi$ at cell $P$.
Fitting a quadratic Lagrange polynomial through the three points $(t_i, \phi_P^i)$, $(t_{i-1}, \phi_P^{i-1})$ and $(t_{i-2}, \phi_P^{i-2})$ and differentiating it at $t = t_i$ gives the following approximate time derivative: \begin{equation} \label{eq: temporal lagrange polynomial derivative} \left. \frac{\mathrm{d}\phi}{\mathrm{d}t} \right|_P^a (t_i) \;=\; l'_{i,i}(t_i) \phi_P^i \;+\; l'_{i,i-1}(t_i) \phi_P^{i-1} \;+\; l'_{i,i-2}(t_i) \phi_P^{i-2} \end{equation} where \begin{equation} \label{eq: lagrange l' coefficients} l'_{i,i}(t_i) = \frac{2t_i-t_{i-1}-t_{i-2}}{(t_i-t_{i-1})(t_i-t_{i-2})} ,\;\; l'_{i,i-1}(t_i) = \frac{t_i-t_{i-2}}{(t_{i-1}-t_i)(t_{i-1}-t_{i-2})} ,\;\; l'_{i,i-2}(t_i) = \frac{t_i-t_{i-1}}{(t_{i-2}-t_i)(t_{i-2}-t_{i-1})} \end{equation} All other terms of the governing equations are evaluated at the current time (the scheme is fully implicit). After solving the equations to obtain $\phi_P^{i}$ and deciding (as will be described shortly) the next time step size $\Delta t_{i+1} = t_{i+1} - t_i$, to facilitate the solution at the new time $t_{i+1}$ we can make a \textit{prediction} by evaluating the aforementioned Lagrange polynomial at $t = t_{i+1}$ to obtain a provisional value $\phi_P^{i+1*}$. This value serves as an initial guess for solving the equations at the new time step $i+1$, offering significant acceleration. We perform such a prediction even for pressure, whose time derivative does not appear in the governing equations, as this was found to accelerate the solution of the continuity equation. The subsequent solution at time $t_{i+1}$ will give us a value $\phi_P^{i+1} \neq \phi_P^{i+1*}$, in general. In order to obtain an estimate of the $O(\Delta t^2)$ truncation error of the scheme \eqref{eq: temporal lagrange polynomial derivative} we can augment our set of three points by the addition of the point $(t_{i+1},\phi_P^{i+1})$ and fit a \textit{cubic} polynomial through the four points; the difference between $\partial \phi / \partial t|^a_P$ and $\partial \phi / \partial t|^c_P$, the derivatives predicted at time $t_i$ by the quadratic and cubic polynomials, respectively, is an approximation of the truncation error associated with $\partial \phi / \partial t|^a_P$. Omitting the details, it turns out that \begin{equation} \label{eq: temporal truncation error} \left. \pd{\phi}{t} \right|_P^c (t_i) \;-\; \left. \pd{\phi}{t} \right|_P^a (t_i) \;=\; c \, \frac{\phi_P^{i+1} - \phi_P^{i+1*}}{\Delta t_{i+1}} \end{equation} where the factor $c = O(1)$ depends on the relative magnitudes of $\Delta t_{i-1}$, $\Delta t_i$ and $\Delta t_{i+1}$ ($c$ = 1/3 for $\Delta t_{i-1} = \Delta t_i = \Delta t_{i+1}$). This allows us to keep an approximately constant level of truncation error by adjusting the time step sizes so as to keep the right-hand side of Eq.\ \eqref{eq: temporal truncation error} constant. We set the following goal, which accounts for all grid cells at once: \begin{equation} \label{eq: temporal adjustment goal} g_t^i \equiv \sqrt{ \frac{1}{\Omega} \sum_P \Omega_P \left( \frac{\phi_P^i - \phi_P^{i*}}{\Delta t_i} \right)^2 } \;=\; \tilde{g}^0_t \, \frac{\Phi}{T} \end{equation} That is, we want $g_t^i$, the $L_2$ norm of $(\phi_P^i - \phi_P^{i*}) / \Delta t_i$, to equal, for any time step $i$, a pre-selected non-dimensional target value $\tilde{g}_t^0$ times $\Phi / T$, where $\Phi$ is either $U$ (for the momentum equation, where $\phi \equiv u_j$) or $S$ (for the constitutive equation, where $\phi \equiv \tau_{jk}$).
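As an illustration, the three main ingredients of the temporal scheme -- the derivative coefficients \eqref{eq: lagrange l' coefficients}, the quadratic prediction, and the goal metric \eqref{eq: temporal adjustment goal} -- are summarised in the following minimal Python sketch (all names are placeholders; the fields are assumed to be stored as cell-wise NumPy arrays):
\begin{verbatim}
import numpy as np

def derivative_coeffs(t_i, t_im1, t_im2):
    # l' coefficients of Eq. (lagrange l' coefficients): the derivative at
    # t_i of the quadratic Lagrange polynomial through the three time levels.
    c_i   = (2.0*t_i - t_im1 - t_im2) / ((t_i - t_im1) * (t_i - t_im2))
    c_im1 = (t_i - t_im2) / ((t_im1 - t_i) * (t_im1 - t_im2))
    c_im2 = (t_i - t_im1) / ((t_im2 - t_i) * (t_im2 - t_im1))
    return c_i, c_im1, c_im2

def predict(phi_i, phi_im1, phi_im2, t_i, t_im1, t_im2, t_ip1):
    # Provisional value phi^{i+1*}: the same quadratic Lagrange polynomial
    # evaluated (extrapolated) at the new time t_{i+1}.
    L_i   = (t_ip1 - t_im1)*(t_ip1 - t_im2) / ((t_i - t_im1)*(t_i - t_im2))
    L_im1 = (t_ip1 - t_i)*(t_ip1 - t_im2) / ((t_im1 - t_i)*(t_im1 - t_im2))
    L_im2 = (t_ip1 - t_i)*(t_ip1 - t_im1) / ((t_im2 - t_i)*(t_im2 - t_im1))
    return L_i*phi_i + L_im1*phi_im1 + L_im2*phi_im2

def goal_metric_normalised(phi, phi_star, vol, dt, Phi, T):
    # g~_t^i = g_t^i / (Phi/T), with g_t^i the volume-weighted L2 norm of
    # (phi - phi_star)/dt as in Eq. (temporal adjustment goal).
    g = np.sqrt(np.sum(vol * ((phi - phi_star) / dt)**2) / np.sum(vol))
    return g / (Phi / T)
\end{verbatim}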
Suppose then that at some time step $i$ we are somewhat off target, i.e.\ Eq.\ \eqref{eq: temporal adjustment goal} is not satisfied; we can try to amend this at the new time step $i+1$ by noting that the truncation error, and the associated metric $g_t$, are of order $O(\Delta t^2)$, which means that $g_t^{i+1} / g_t^i = (\Delta t_{i+1} / \Delta t_i)^2$. This relation allows us to choose $\Delta t_{i+1}$ so that $g_t^{i+1}$ will equal $\tilde{g}_t^0 \Phi / T$ (Eq.\ \eqref{eq: temporal adjustment goal}): \begin{equation} \label{eq: new time step} \Delta t_{i+1} \;=\; r_t^i \, \Delta t_i \qquad \text{where} \qquad r_t^i \;=\; \sqrt{\tilde{g}_t^0 / \tilde{g}_t^i}, \quad \tilde{g}_t^i \;=\; \frac{g_t^i}{\Phi/T} \end{equation} In practice, we calculate the adjustment ratios $r_t^i$ for all variables whose time derivatives appear in the governing equations (one per velocity component and per stress component) and choose the smallest among them. We limit the allowable values of $r_t^i$ to a range $r_t^i \in [r_t^{\min}, r_t^{\max}]$, and the overall time step size to $\Delta t_i \in [\Delta t_{\min}, \Delta t_{\max}]$. Typical values used for these parameters in the simulations of Sec.\ \ref{sec: results} are $\tilde{g}_t^0$ = \num{e-4}, $r_t^{\min}$ = \num{0.95}, $r_t^{\max}$ = \num{1.001}, $\Delta t_{\min}$ = \num{5e-5} \si{s}, $\Delta t_{\max}$ = \num{2e-3} \si{s} (these may vary slightly between simulations). The scheme automatically chooses small time steps when the flow evolves rapidly, such as during the early stages of the flow, and large time steps when it evolves slowly, such as when the flow is nearing its steady state. A similar scheme is used in \cite{Dimakopoulos_2003}. \section{Solution of the algebraic system} \label{sec: method: solution} Discretisation results in one large nonlinear algebraic system per time step. These systems are solved using the SIMPLE algorithm with multigrid acceleration: the algebraic equations are arranged into groups, one for each momentum and constitutive equation component, and each group produces, through linearisation, a linear system for one of the dependent variables. The algorithm is applied in the same manner as for viscoelastic flows \cite{Oliveira_1998, Afonso_2012}; in each SIMPLE iteration, we solve successively: the linear systems for each stress component, the linear systems for the velocity components, and the pressure correction system to enforce continuity. The systems are solved with preconditioned conjugate gradient (pressure correction) or GMRES (all other variables) solvers. In the velocity systems, derived from the components of the momentum eq.\ \eqref{eq: momentum integral discrete}, the matrix of coefficients is constructed from contributions of the time derivative, a UDS discretisation of the convective term, and the $D_{f,i}^{\tau+}$ (Eq.\ \eqref{eq: D+}) part of the EVP force $\vf{F}_f^{\tau}$. The remaining terms are evaluated from their currently estimated values and moved to the right-hand side vector, as is the difference between the UDS and CDS convection schemes (deferred correction). For the stress systems we follow the established practice in viscoelastic flows of constructing the matrix of coefficients with contributions only from the terms of the left-hand side of Eq.\ \eqref{eq: constitutive integral discrete}, breaking the CUBISTA flux into a UDS component and a remainder which is moved to the right-hand side vector (deferred correction).
This poor representation of the constitutive equation by the matrix of coefficients may be responsible for the observed convergence difficulties of SIMPLE at large time steps; for small time steps the role of the time derivative in the constitutive equation becomes dominant and convergence is fast. It was noticed that if the residual reduction within each time step is not at least a couple of orders of magnitude, then the temporal prediction step (Sec.\ \ref{ssec: method: discretisation: temporal}) may not produce good enough initial guesses and the solution may not be smooth in time. To avoid this possibility we set a maximum effort per time step of about 20 single-grid SIMPLE iterations followed by about 5 W(4,4) multigrid cycles \cite{Syrakos_06b, Syrakos_2013}, and if further iterations are needed to accomplish the required residual reduction then the time step is reduced by a factor of $r_t^{\min}$ (Sec.\ \ref{ssec: method: discretisation: temporal}). Setting $\Delta t_{\max}$ to an appropriate value avoids the need for this. \section{Validation and testing of the method in Oldroyd-B flow} \label{sec: method: validation} For $\tau_y = 0$ and $n = 1$, the SHB model reduces to the Oldroyd-B viscoelastic model, for which benchmark results for square lid-driven cavity flow can be found in the literature. We apply our method to this problem to validate it and to evaluate alternative options for the FVM components. The problem is solved as steady-state, omitting the time derivatives from the governing equations. Table \ref{table: validation} lists the computed location and strength of the main vortex, along with values reported in the literature, with which there is very good agreement. \begin{table}[tb] \caption{$\tilde{x}-$ and $\tilde{y}-$ coordinates $(\tilde{x}_c,\tilde{y}_c)$ of the centre of the main vortex for Oldroyd-B flow in a square cavity of side $L$, and associated value of the streamfunction there, $\tilde{\psi}_c$. The top wall moves in the positive $x-$direction with variable velocity $u(x) = 16 U (x/L)^2(1-x/L)^2$. The coordinates are normalised by the cavity side $L$ and the streamfunction by $U L$. The flow is steady, the Reynolds number is zero, and the solvent and polymeric viscosities are equal ($\kappa = k$).} \label{table: validation} \begin{center} \begin{small} \renewcommand\arraystretch{1.25} \begin{tabular}{ l | c c c | c c c } \toprule & \multicolumn{3}{c |}{ $\Wei = 0.5$ } & \multicolumn{3}{c}{ $\Wei = 1.0$ } \\ \midrule & $\tilde{x}_c$ & $\tilde{y}_c$ & $\tilde{\psi}_c$ & $\tilde{x}_c$ & $\tilde{y}_c$ & $\tilde{\psi}_c$ \\ \midrule Pan et al. \cite{Pan_2009} & $0.469$ & $0.798$ & $-0.0700$ & $0.439$ & $0.816$ & $-0.0638$ \\ Saramito \cite{Saramito_2014} & & & & $0.429$ & $0.818$ & $-0.0619$ \\ Sousa et al. \cite{Sousa_2016} & $0.467$ & $0.801$ & $-0.0698$ & $0.434$ & $0.814$ & $-0.0619$ \\ Zhou et al. \cite{Zhou_2016} & $0.468$ & $0.798$ & & $0.430$ & $0.818$ & \\ Present results & $0.468$ & $0.799$ & $-0.0698$ & $0.434$ & $0.818$ & $-0.0619$ \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table} To estimate discretisation errors, we obtained accurate estimates of the $\Wei = 0.5$ and $\Wei = 1.0$ solutions by Richardson extrapolation of the solutions obtained on uniform Cartesian grids (Fig.\ \ref{sfig: grid 32 CU}) of $512 \times 512$ and $1024 \times 1024$ cells; the procedure is described in detail in \cite{Syrakos_2012}.
These extrapolated solutions served as ``exact'' solutions against which we calculated the discretisation errors on coarser grids. Figure \ref{fig: grid convergence} plots average velocity and stress discretisation errors defined as \begin{equation} \label{eq: e_u} \epsilon_u \;\equiv\; \frac{1}{\#\Omega'} \sum_{P \in \Omega'} \| \vf{u}^e_P - \vf{u}_P \| \;/\; \| \vf{u}^e \|_{\mathrm{avg}} \end{equation} \begin{equation} \label{eq: e_tau} \epsilon_{\tau} \;\equiv\; \frac{1}{\#\Omega'} \sum_{P \in \Omega'} \| \tf{\tau}^e_P - \tf{\tau}_P \| \;/\; \| \tf{\tau}^e \|_{\mathrm{avg}} \end{equation} where the superscript $e$ denotes the ``exact'' solution, calculated by Richardson extrapolation. The summation is over all cells $P$ whose centres lie inside the area $\Omega' = [0.05L,0.95L] \times [0.05L,0.95L]$ ($\#\Omega'$ is the total number of such cells), $L$ being the cavity side, i.e.\ a strip of width $0.05L$ touching the boundary is excluded, because the stress magnitude ($\tau_{xx}$ in particular) appears to grow exponentially over part of the top boundary and this inflates the discretisation errors there. The vector and tensor magnitudes in \eqref{eq: e_u} and \eqref{eq: e_tau} are $\|\vf{u}\| = (\sum_i u_i^2)^{1/2}$ and $\| \tf{\tau} \| = (\sum_{ij} \tau_{ij}^2)^{1/2}$. The definitions \eqref{eq: e_u} and \eqref{eq: e_tau} incorporate a normalisation of the errors by the average velocity and stress magnitudes over $\Omega'$: $\| \vf{u}^e \|_{\mathrm{avg}} \equiv (1/\Omega') \int_{\Omega'} \| \vf{u}^e \| \mathrm{d}\Omega$ and $\| \tf{\tau}^e \|_{\mathrm{avg}} \equiv (1/\Omega') \int_{\Omega'} \| \tf{\tau}^e \| \mathrm{d}\Omega$. In Fig.\ \ref{fig: grid convergence} the discretisation errors are plotted with respect to the average grid spacing $h = (\Omega / \#\Omega)^{1/2}$, which equals $L / N$ for an $N \times N$ grid of any of the kinds shown in Fig.\ \ref{fig: grids Oldroyd-B}. To ensure that our method is applicable to unstructured grids, although the geometry favours discretisation by uniform (Fig.\ \ref{sfig: grid 32 CU}, ``CU'') or non-uniform (Fig.\ \ref{sfig: grid 32 CN}, ``CN'') Cartesian grids, we also employed grids obtained from the CU ones by randomly perturbing their vertices as described in \cite{Syrakos_2017}. In particular, from an $N \times N$ CU grid (Fig.\ \ref{sfig: grid 32 CU}) an $N \times N$ distorted grid is constructed (Fig.\ \ref{sfig: grid 32 D}) by moving each interior node $(x,y)$ to a location $(x',y') = (x + \delta x, y + \delta y)$, where $\delta x$ and $\delta y$ are randomly selected for each node under the restriction that $|\delta x|, |\delta y| < h/4$. This restriction ensures that all resulting grid cells are simple convex quadrilaterals. This procedure produces a series of grids whose skewness, unevenness and non-orthogonality do not diminish with refinement \cite{Syrakos_2017}, as is typical of unstructured grids. Checking the convergence of the method on grids of this sort is important because in \cite{Syrakos_2017} it was shown that there are popular FVM discretisations, widely regarded as second-order accurate, which actually do not converge to the exact solution with refinement on unstructured grids. So, we employed three series of grids, one for each of the grid types shown in Fig.\ \ref{fig: grids Oldroyd-B}, having $32 \times 32$, $64 \times 64$, $128 \times 128$, $256 \times 256$ and $512 \times 512$ cells.
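The vertex perturbation used to construct the D grids is straightforward to reproduce; a minimal Python sketch (assuming the interior vertex coordinates are stored in a NumPy array of shape $(N_v, 2)$; the function name is a placeholder) is:
\begin{verbatim}
import numpy as np

def distort_interior_vertices(xy, h, rng=None):
    # Perturb each interior vertex of a uniform Cartesian grid by random
    # offsets (dx, dy) with |dx|, |dy| < h/4, so that every cell remains
    # a simple convex quadrilateral.
    rng = np.random.default_rng(0) if rng is None else rng
    return xy + rng.uniform(-h / 4.0, h / 4.0, size=xy.shape)
\end{verbatim}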
The CN grids were constructed as follows: in the $512 \times 512$ grid of this kind, the grid spacing at the walls equals $L/1024$ and grows at a constant ratio towards the cavity centre. Then, by removing every second grid line we obtain the $256 \times 256$ grid, and so on for the coarser grids. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.84\linewidth]{figures/grid_32_CU.png} \caption{Cartesian uniform (CU)} \label{sfig: grid 32 CU} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.84\linewidth]{figures/grid_32_CN.png} \caption{Cartesian non-uniform (CN)} \label{sfig: grid 32 CN} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.84\linewidth]{figures/grid_32_D.png} \caption{Distorted (D)} \label{sfig: grid 32 D} \end{subfigure} \caption{$32 \times 32$ grids of different kinds, used for the Oldroyd-B benchmark problems.} \label{fig: grids Oldroyd-B} \end{figure} \begin{figure}[!tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_1u.png} \caption{$\Wei = 0.5$, CU grids: $\epsilon_u$} \label{sfig: grid convergence 1u} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_2u.png} \caption{$\Wei = 1.0$: $\epsilon_u$} \label{sfig: grid convergence 2u} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_3u.png} \caption{$\Wei = 0.5$; CU,CN,D grids: $\epsilon_u$} \label{sfig: grid convergence 3u} \end{subfigure} \vspace{0.5cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_1s.png} \caption{$\Wei = 0.5$, CU grids: $\epsilon_{\tau}$} \label{sfig: grid convergence 1s} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_2s.png} \caption{$\Wei = 1.0$: $\epsilon_{\tau}$} \label{sfig: grid convergence 2s} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/Grid_Convergence_3s.png} \caption{$\Wei = 0.5$; CU,CN,D grids: $\epsilon_{\tau}$} \label{sfig: grid convergence 3s} \end{subfigure} \caption{Velocity (top row) and stress (bottom row) discretisation errors (Eqs.\ \eqref{eq: e_u} and \eqref{eq: e_tau}) of Oldroyd-B flow as a function of the grid spacing $h$, for various cases. Unless otherwise stated, the $\nabla_h^{1.5}$ gradient is used and the $D$'s at boundary faces are given by \eqref{eq: D+ boundary} -- \eqref{eq: D- boundary}. \textbf{Figs.\ \subref{sfig: grid convergence 1u}, \subref{sfig: grid convergence 1s}:} $\Wei = 0.5$, CU grids (Fig.\ \ref{sfig: grid 32 CU}). Green, red and blue lines correspond to setting $\overline{\tau_{ij,P}} = \tau_{ij,P}$ in \eqref{eq: upper convective terms discretisation}, using the log-conformation technique, and defining $\overline{\tau_{ij,P}}$ by \eqref{eq: stress reconstruction} in \eqref{eq: upper convective terms discretisation}, respectively. In the latter case, the solid and dash-dot lines correspond to the use of Eqs.\ \eqref{eq: apparent viscosity} and \eqref{eq: apparent viscosity DAVSS}, respectively. Dashed lines with short or long dashes correspond, respectively, to replacing $S$ by $S/2$ or $2S$ in \eqref{eq: apparent viscosity}.
\textbf{Figs.\ \subref{sfig: grid convergence 2u}, \subref{sfig: grid convergence 2s}:} as for Figs.\ \subref{sfig: grid convergence 1u}, \subref{sfig: grid convergence 1s}, but for $\Wei = 1.0$. The dashed line now corresponds to CN grids (Fig.\ \ref{sfig: grid 32 CN}), using \eqref{eq: apparent viscosity} and \eqref{eq: characteristic stress}. \textbf{Figs.\ \subref{sfig: grid convergence 3u}, \subref{sfig: grid convergence 3s}:} $\Wei = 0.5$, use of \eqref{eq: apparent viscosity} and \eqref{eq: characteristic stress}. Green, blue, and red lines: CU (Fig.\ \ref{sfig: grid 32 CU}), CN (Fig.\ \ref{sfig: grid 32 CN}), and D (Fig.\ \ref{sfig: grid 32 D}) grids, respectively. Dashed lines correspond to use of $\nabla^{1.0}_h$ and simple linear extrapolation of stresses to the walls (i.e.\ setting $D_{b,i}^{\tau+} = D_{b,i}^{\tau-} = 0$ instead of \eqref{eq: D+ boundary} -- \eqref{eq: D- boundary}).} \label{fig: grid convergence} \end{figure} \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/pressure_OdB_Wi0p5.png} \caption{Pressure} \label{sfig: Oldroyd-B pressure} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/vesxy_OdB_Wi0p5.png} \caption{Stress $\tau_{12}$} \label{sfig: Oldroyd-B stress xy} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/vesxy_zoom_OdB_Wi0p5.png} \caption{Stress $\tau_{12}$} \label{sfig: Oldroyd-B stress xy zoom} \end{subfigure} \caption{The Oldroyd-B, $\Wei=0.5$ flow field calculated on the $512 \times 512$ distorted grid. \subref{sfig: Oldroyd-B pressure} Pressure contours (colour, $\delta p = 4S$ with $S$ given by Eq.\ \eqref{eq: characteristic stress}) and streamlines (white lines). \subref{sfig: Oldroyd-B stress xy} Contours of $\tau_{12}$ ($\delta \tau_{12} = 2S$) and streamlines. \subref{sfig: Oldroyd-B stress xy zoom} Close-up view of \subref{sfig: Oldroyd-B stress xy} near the upper-right corner.} \label{fig: flow Oldroyd-B} \end{figure} The numerical tests led to the following observations. Concerning the oscillations issue, the stabilisation strategies achieved their goal, as all dependent variables varied smoothly across the domain without spurious oscillations; Fig.\ \ref{fig: flow Oldroyd-B} shows smooth pressure and $\tau_{12}$ fields and streamlines obtained on the $512 \times 512$ distorted grid. Concerning the accuracy of the method, Fig.\ \ref{fig: grid convergence} shows convergence to the exact solution with grid refinement on any type of grid, including the randomly distorted type. The order of accuracy is close to 1 on coarse grids and increases towards 2 (the design value) as the grid is refined, but for the $\Wei = 1$ case (Figs.\ \ref{sfig: grid convergence 2u}, \ref{sfig: grid convergence 2s}) it is still quite far from 2 even on the finest ($512 \times 512$) grids. Evidently, the convergence rates deteriorate at higher elasticity (compare Figs.\ \ref{sfig: grid convergence 1u},\ref{sfig: grid convergence 1s} and \ref{sfig: grid convergence 2u},\ref{sfig: grid convergence 2s}). The reduced accuracy of the method at high elasticity is most likely associated with the exponential stress growth near the lid. The Oldroyd-B model can make unrealistic predictions, such as unlimited extension of the material at finite extension rates; in such cases the accuracy of numerical methods can degrade significantly \cite{Hulsen_2005}.
The SHB model suffers from the same issues as the Oldroyd-B model for $n \geq 1$, but not for $n < 1$ \cite{Saramito_2009}. With typical EVP materials it is usually the case that $\Wei < 1$, which, combined with the fact that their behaviour is described by $n < 1$, avoids the ``high-$\Wei$'' problem. Nevertheless, in the tests of the present Section we also tried the log-conformation technique \cite{Fattal_2004, Afonso_2009}, which is implemented as an option in our code \cite{Syrakos_2018}, and, interestingly, Fig.\ \ref{fig: grid convergence} shows that it can be beneficial even at low $\Wei$ as it provides enhanced accuracy. The interpolation \eqref{eq: stress reconstruction} also appears to improve the accuracy compared to simply setting $\overline{\tau_{ij,P}} = \tau_{ij,P}$, although the rate of convergence of the latter method appears to be higher for $\Wei = 0.5$, so that it is expected to surpass the accuracy of the interpolation \eqref{eq: stress reconstruction} on even finer grids. Use of \eqref{eq: stress reconstruction} was observed to allow solutions at slightly higher $\Wei$ than setting $\overline{\tau_{ij,P}} = \tau_{ij,P}$ (up to $\Wei \approx 1.4$ compared to $\Wei = 1$). For the main results of Sec.\ \ref{sec: results} the scheme \eqref{eq: stress reconstruction} was selected, given also that it is cheaper than the log-conformation method. The scheme \eqref{eq: apparent viscosity DAVSS} performs poorly in terms of accuracy (Figs.\ \ref{sfig: grid convergence 1u},\ref{sfig: grid convergence 1s} and \ref{sfig: grid convergence 2u},\ref{sfig: grid convergence 2s}, dash-dot lines) compared to the simpler scheme \eqref{eq: apparent viscosity}. Concerning the latter, it is noteworthy that the larger the value of $\eta_a$ (adjusted by replacing $S$ with larger / smaller values in \eqref{eq: apparent viscosity}) the less accurate the results (Figs.\ \ref{sfig: grid convergence 1u},\ref{sfig: grid convergence 1s}, lines with long or short dashes). Thus, stabilisation should be applied sparingly. Henceforth we abandon the scheme \eqref{eq: apparent viscosity DAVSS} and employ the scheme \eqref{eq: apparent viscosity}. Fig.\ \ref{fig: grid convergence} shows that packing the grid lines close to the walls -- i.e.\ using the CN grids -- achieves a very significant increase in accuracy. This is likely related to the aforementioned singular behaviour near the lid. As expected, the discretisation errors are largest on the distorted grids (Figs.\ \ref{sfig: grid convergence 3u},\ref{sfig: grid convergence 3s}, red lines), but not by a great margin compared to the uniform Cartesian grids. The rates of error convergence towards zero are similar on all grid types. Another disadvantage of the distorted grids is that the high-$\Wei$ problem is intensified: for $\Wei = 1$, we only managed to obtain a solution up to the $64 \times 64$ grid with the scheme \eqref{eq: apparent viscosity}, and up to the $128 \times 128$ grid with the scheme \eqref{eq: apparent viscosity DAVSS}. Finally, we note that Figs.\ \ref{sfig: grid convergence 3u},\ref{sfig: grid convergence 3s} show that the stress extrapolation scheme at the boundaries which uses \eqref{eq: D+ boundary} -- \eqref{eq: D- boundary}, in combination with the gradient $\nabla_h^{1.5}$, which retains 2nd-order accuracy at the boundaries unlike the more popular $\nabla_h^{1}$ \cite{Syrakos_2017}\footnote{This is true only for variables with Dirichlet boundary conditions, i.e.\ the velocity components.
For pressure and stress, $\nabla_h^{1.5}$ also deteriorates to first-order accuracy. Hence we use the more standard gradient $\nabla_h^{1}$ in the extrapolation scheme \eqref{eq: boundary extrapolation}.}, leads to a noticeable accuracy improvement compared to simple linear extrapolation of stresses. \section{EVP flow in a lid-driven cavity} \label{sec: results} We now turn our attention to EVP flow in a lid-driven cavity. The SHB model parameters were chosen so as to represent the behaviour of Carbopol. Carbopol gels are used very frequently as prototypical viscoplastic fluids in experimental studies \cite{Piau_2007, Poumaere_2014}. In \cite{Lacaze_2015}, the SHB model was fitted to experimental data for a Carbopol gel of concentration 0.2\% by weight, and we rounded those parameters to arrive at our chosen values, listed in Table \ref{table: carbopol parameters}. This material has a yield strain (Eq.\ \eqref{eq: yield strain}) of $\gamma_y = 0.175$. The material is enclosed in a square cavity of side $L$ = 0.1 \si{m} and the flow is driven by horizontal motion of the top wall (lid) towards the right. We employed a grid of $384 \times 384$ cells, of uniform size in the $x$ direction but packed near the lid, so that the vertical size of the cells touching the lid is $L/625$, while that of the cells touching the bottom wall is about $L/252$. A coarse $48 \times 48$ grid of similar packing is shown in Fig.\ \ref{fig: grid 48x48}. \begin{table}[tb] \caption{Properties of the fluid used in the EVP lid-driven cavity simulations.} \label{table: carbopol parameters} \begin{center} \begin{small} \renewcommand\arraystretch{1.25} \begin{tabular}{ l | r l } \toprule $\rho$ & 1000 & \si{kg/m^3} \\ $\tau_y$ & 70 & \si{Pa} \\ $G$ & 400 & \si{Pa} \\ $k$ & 20 & \si{Pa.s^n} \\ $n$ & 0.40 & \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table} \begin{figure}[tb] \centering \includegraphics[scale=0.75]{figures/Grid_48x48.png} \caption{A coarse, $48 \times 48$ cell grid, showing the packing of the cells near the lid (a coarse grid is shown for clarity; the EVP lid-driven cavity simulations were performed on a similar grid of $384 \times 384$ cells).} \label{fig: grid 48x48} \end{figure} \subsection{The base case} \label{ssec: results: base case} We start with a case where the enclosed material is initially at rest and fully relaxed ($\tf{\tau} = 0$ everywhere). Starting from rest, the lid accelerates towards the right until it reaches a maximum velocity of $U$ = 0.1 \si{m/s} at time $T = L/U$ = 1 \si{s}. The lid velocity remains constant thereafter: \begin{equation} \label{eq: lid velocity} u_{\mathrm{lid}} \;=\; U \sin \left( \frac{\min(t,T)}{T} \, \frac{\pi}{2} \right) \end{equation} The dimensionless numbers for this case are listed in the ``$U$ = 0.100 \si{m/s}'' column of Table \ref{table: dimensionless numbers}.
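For reference, the lid-velocity ramp \eqref{eq: lid velocity} amounts to the following trivial Python sketch (SI units; the default arguments shown correspond to the base case):
\begin{verbatim}
import numpy as np

def u_lid(t, U=0.1, T=1.0):
    # Eq. (lid velocity): smooth sinusoidal ramp-up over [0, T],
    # constant lid velocity U thereafter.
    return U * np.sin(min(t, T) / T * np.pi / 2.0)
\end{verbatim}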
\begin{table}[tb] \caption{Values of the dimensionless numbers, and of the relaxation time \eqref{eq: lamda and eta}, at different lid velocities.} \label{table: dimensionless numbers} \begin{center} \begin{small} \renewcommand\arraystretch{1.25} \begin{tabular}{ l | c c c } \toprule $U$ [\si{m/s}] & 0.025 & 0.100 & 0.400 \\ \midrule $\Bin'$ & 6.094 & 3.500 & 2.010 \\ $\Bin$ & 0.859 & 0.778 & 0.668 \\ $\Wei$ & 0.204 & 0.225 & 0.262 \\ $\Rey$ & 0.008 & 0.111 & 1.526 \\ \midrule $\lambda$ [\si{s}] & 0.815 & 0.255 & 0.066 \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table} Figure \ref{fig: monitor} shows the evolution in time of the normalised kinetic energy of the fluid and of the normalised average absolute value of the trace of the stress tensor, calculated as \begin{equation} \label{eq: kinetic energy} \mathrm{N.K.E.} \;=\; \frac{1}{U^2\,\Omega} \sum_P \Omega_P \| \vf{u}_P \|^2 \end{equation} \begin{equation} \label{eq: average trace} \mathrm{tr}(\tf{\tilde{\tau}})_{\mathrm{avg}} \;=\; \frac{1}{S\,\Omega} \sum_P \Omega_P |\mathrm{tr}(\tf{\tau}{}\!_P)| \end{equation} where $\Omega = L^2$ is the volume of the cavity. It is immediately obvious from Fig.\ \ref{fig: monitor} that the kinetic energy assumes a constant value very quickly, but the average stress trace evolves much more slowly. To investigate this, the simulation was continued until $t$ = 210 \si{s}. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.80\linewidth]{figures/KE.png} \caption{N.K.E.} \label{sfig: monitor KE} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.80\linewidth]{figures/trT.png} \caption{$\mathrm{tr}(\tf{\tilde{\tau}})_{\mathrm{avg}}$} \label{sfig: monitor trT} \end{subfigure} \caption{\subref{sfig: monitor KE} Normalised kinetic energy (Eq.\ \eqref{eq: kinetic energy}) and \subref{sfig: monitor trT} normalised average trace of $\tf{\tau}$ (Eq.\ \eqref{eq: average trace}), versus time, for different lid velocities. In \subref{sfig: monitor KE} time is normalised by $L/U$.} \label{fig: monitor} \end{figure} The flow field is visualised in Fig.\ \ref{fig: base case flowfield}. It resembles that for pure viscoplastic flow \cite{Syrakos_2013, Syrakos_2014}, in that there are two unyielded zones ($\tau_d < \tau_y$): one at the bottom of the cavity, touching the walls and containing stationary fluid, and one near the lid, which rotates with the flow and does not touch the walls, called a plug zone. Whether the material at a point is in a yielded or unyielded state is, of course, determined by whether $\tau_d$ is larger or smaller, respectively, than the yield stress $\tau_y$ there (Eq.\ \eqref{eq: constitutive}). Figure \ref{sfig: base case, Umag} shows that the streamlines do not cross into the lower unyielded zone, which therefore always consists of the same material, but they cross into and out of the plug zone. Thus, at every instant in time, liquefied particles enter the plug zone, solidifying upon entry, while solidified particles exit the zone, liquefying upon exit.
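As an aside, the two monitor quantities \eqref{eq: kinetic energy} -- \eqref{eq: average trace} are plain volume-weighted sums over the cells; a minimal Python sketch (placeholder names, assuming cell-wise NumPy arrays) is:
\begin{verbatim}
import numpy as np

def monitors(u1, u2, tr_tau, vol, U, S):
    # Normalised kinetic energy, Eq. (kinetic energy), and normalised
    # average |trace| of the stress tensor, Eq. (average trace);
    # u1, u2, tr_tau and vol are cell-wise arrays, U and S are the scales.
    Omega = np.sum(vol)
    nke = np.sum(vol * (u1**2 + u2**2)) / (U**2 * Omega)
    tr_avg = np.sum(vol * np.abs(tr_tau)) / (S * Omega)
    return nke, tr_avg
\end{verbatim}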
\begin{figure}[tb] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale=0.90]{figures/base_case__yielded_vs_time.png} \caption{Unyielded areas} \label{sfig: base case, yielded vs time} \end{subfigure} \begin{subfigure}[b]{0.53\textwidth} \centering \includegraphics[scale=0.90]{figures/base_case__Umag.png} \caption{$\| \vf{u} \|$ [\si{m/s}]} \label{sfig: base case, Umag} \end{subfigure} \caption{\subref{sfig: base case, yielded vs time} Yielded (white) and unyielded (colour) regions for the base case (Sec.\ \ref{ssec: results: base case}). Different colours denote the extent of the unyielded regions at $t$ = 30, 60, 90, 120, 150, 180 and 210 \si{s} (the lower unyielded region is expanding with time). \subref{sfig: base case, Umag} Colour contours of the magnitude of the velocity vector at $t$ = 210 \si{s}. The white lines are the yield lines ($\tau_d = \tau_y$) and the dashed black lines are streamlines, both at $t$ = 210 \si{s}. The streamlines are plotted at equal streamfunction intervals.} \label{fig: base case flowfield} \end{figure} A striking feature of the flow evolution is the slowness with which the stationary unyielded zone approaches its final shape. Figure \ref{sfig: base case, yielded vs time} shows that although the plug zone has reached its steady state already by $t$ = 30 \si{s}, the stationary zone continues to expand even at $t$ = 210 \si{s}. The shape of the yield line delimiting this zone is quite irregular. Furthermore, Fig.\ \ref{sfig: base case, Umag} shows that this zone is surrounded by an amount of fluid that is practically stationary and yet yielded. It is likely that this fluid will eventually become part of the stationary unyielded zone as $t \rightarrow \infty$, i.e.\ that the yield line will eventually lie somewhere close to the nearby streamline drawn in Fig.\ \ref{sfig: base case, Umag}, which separates the fluid with near-zero velocity from that whose velocity is larger. However, to ascertain this the simulation would have to be prolonged to prohibitively long times; already the expansion of the lower unyielded zone from $t$ = 180 \si{s} to $t$ = 210 \si{s}, marked in black colour in Fig.\ \ref{sfig: base case, yielded vs time}, is very small. A related feature is that the magnitude of the deviatoric stress tensor, $\tau_d$, is very close to the yield stress throughout the aforementioned near-zero-velocity region into which the lower unyielded zone is expanding. Thus, this region appears as an almost completely flat surface in the three-dimensional plot of $\tau_d$ in Fig.\ \ref{fig: base case 3D} (the surface outlined in dashed line contains fluid where $\tau_d$ is within $\pm 1$ \si{Pa} of $\tau_y$). Due to its distinctive features, we will refer to this region as a ``transition zone''. In it, the fluid practically behaves as unyielded, although some of it may formally be yielded. The plug zone possesses no transition zone. \begin{figure}[thb] \centering \includegraphics[scale=1.00]{figures/base_case__3D.png} \caption{3D contour plot of the base case results (Sec.\ \ref{ssec: results: base case}) at $t$ = 210 \si{s}. Both the colour contours and the plot height ($z$ axis) represent $\tau_d$.
In addition, the dashed line encloses an area where $\tau_d \in [69,71]$ \si{Pa}.} \label{fig: base case 3D} \end{figure} In order to explain the transition-zone behaviour, we consider the constitutive equation \eqref{eq: constitutive} -- \eqref{eq: upper convected derivative} in the case that the fluid velocity is zero and $\tau_d$ is slightly above $\tau_y$. It becomes: \begin{equation} \label{eq: constitutive stationary tensorial} \pd{\tf{\tau}}{t} \;=\; -\frac{G}{k^{\frac{1}{n}}} (\tau_d - \tau_y)^{\frac{1}{n}} \frac{1}{\tau_d} \; \tf{\tau} \;\equiv\; -r(\tau_d) \; \tf{\tau} \end{equation} Since the function $r(\tau_d)$ assumes only positive values, the minus sign on the right-hand side of Eq.\ \eqref{eq: constitutive stationary tensorial} drives the components of $\tf{\tau}$ towards zero as time passes, at a rate proportional to $r(\tau_d)$ (and to the magnitude of those components themselves). Hence $\tau_d$ also tends to decrease (Eq.\ \eqref{eq: deviatoric stress}). However, the rate of decrease, $r(\tau_d)$, diminishes as $\tau_d \rightarrow \tau_y$, and in fact $r(\tau_y) = 0$. Therefore, $\tau_d$ actually converges towards $\tau_y$ and not towards zero. This is what we observe happening inside the transition zone. It is useful to consider also a scalar version of Eq.\ \eqref{eq: constitutive stationary tensorial} ($\tau_d \equiv \tau$ in this case)\footnote{Eq.\ \eqref{eq: constitutive stationary scalar} can be viewed as describing the behaviour of the mechanical system of Fig.\ \ref{fig: model schematic} (with $\kappa = 0$) in the case that at some stress $\tau = \tau_0 > \tau_y$ we pin the right end of the spring to a fixed location, so that the total length $\gamma_v + \gamma_e$ remains constant henceforth, and we leave the system to relax. The spring will try to recover its equilibrium length by contracting or expanding $\gamma_e$, which will cause an opposite expansion or contraction of $\gamma_v$, resisted by the viscous and friction elements. Once the tension in the spring drops to $\tau_y$, the spring cannot overcome the friction $\tau_y$ of the friction element and motion stops, without the spring having attained its equilibrium length.}: \begin{equation} \label{eq: constitutive stationary scalar} \frac{\mathrm{d}\tau}{\mathrm{d}t} \;=\; -\frac{G}{k^{\frac{1}{n}}} (\tau - \tau_y)^{\frac{1}{n}} \end{equation} For $n = 1$, the solution to the above equation is \begin{equation} \tau - \tau_y \;=\; (\tau_0 - \tau_y) \, e^{-t/\lambda'} \end{equation} where $\tau_0$ is the value of $\tau$ at $t = 0$ and $\lambda' \equiv k / G$ is a relaxation time. This behaviour is similar to that of a Maxwell viscoelastic fluid, except that now $\tau$ decays exponentially towards $\tau_y$ instead of towards zero. For $n \neq 1$, Eq.\ \eqref{eq: constitutive stationary scalar} can be written as ($\lambda' \equiv k^{1/n} / G$ now has units of \si{s/Pa^{1-1/n}}): \begin{equation} \frac{\mathrm{d}(\tau - \tau_y)}{\mathrm{d}t} \;=\; -\underbrace{\frac{1}{\lambda'} (\tau - \tau_y)}_{ \mathclap{\text{rate of decay for }n=1} } (\tau - \tau_y)^{\frac{1-n}{n}} \end{equation} so that the rate of decay of $\tau$ towards $\tau_y$ equals that for $n=1$ multiplied by a factor of $(\tau-\tau_y)^{(1-n)/n}$.
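For completeness, Eq.\ \eqref{eq: constitutive stationary scalar} can also be integrated in closed form for $n \neq 1$ by separation of variables (assuming $\tau_0 > \tau_y$): \begin{equation} \tau(t) \;-\; \tau_y \;=\; \left[ (\tau_0 - \tau_y)^{\frac{n-1}{n}} \;+\; \frac{1-n}{n} \, \frac{G}{k^{\frac{1}{n}}} \, t \right]^{\frac{n}{n-1}} \end{equation} which shows that for $n < 1$ the decay of $\tau$ towards $\tau_y$ is algebraic rather than exponential, while for $n > 1$ the stress reaches $\tau_y$ in finite time.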
If $n < 1$, as is the present choice but also the most common case, then $(1-n)/n > 0$ and, as time progresses and $\tau \rightarrow \tau_y$, the extra factor $(\tau-\tau_y)^{(1-n)/n}$ tends to zero, and the rate of decay becomes progressively smaller than that of the $n = 1$ case, eventually becoming infinitesimal compared to it. This behaviour is explained physically by the fact that the relaxation time (Eq.\ \eqref{eq: lamda and eta}) is proportional to the fluid viscosity, which resists recovery from deformation (relaxation), and inversely proportional to the elastic modulus, which drives towards such recovery. For shear-thinning fluids ($n < 1$), the viscosity \textit{increases} as $\tau$ becomes smaller and, in the SHB and HB models, tends to infinity as $\tau \rightarrow \tau_y$\footnote{In the language of the mechanical analogue of Fig.\ \ref{fig: model schematic} this means that the damper component resisting the relaxation of the spring becomes stiffer as this relaxation proceeds.}. Thus, for these fluids the relaxation time tends to infinity as $\tau \rightarrow \tau_y$. The opposite happens in the less common case of $n > 1$. Dividing both sides of Eq.\ \eqref{eq: momentum ND} by $\Rey$, which is very small (Table \ref{table: dimensionless numbers}), makes it apparent that any force imbalance produces large fluid particle accelerations (the left-hand side of that equation), so that the velocity adjusts very quickly to force changes (the inertial time scales are very small). Thus, the velocity should vary at the same rate as these forces, i.e.\ the velocity and stress variations should go hand-in-hand (the boundary conditions are not a source of variation, as the lid velocity remains constant for $t > T$ = 1 \si{s}). However, in Fig.\ \ref{fig: monitor} the velocity field appears to reach a steady state much faster than the stress field. This can be explained by observing that the stress actually also evolves quickly over most of the domain, the relaxation time $\lambda$ = 0.255 \si{s} (Table \ref{table: dimensionless numbers}) being quite small compared to the time scale of stress evolution in Fig.\ \ref{sfig: monitor trT}, except in the transition zone, where the definition \eqref{eq: lamda and eta} is not representative and the actual relaxation time tends to infinity as time passes. There, the stress magnitude is of the order of $\tau_y$, so its local slow evolution has a noticeable impact on the average stress trace plotted in Fig.\ \ref{fig: monitor}. The velocity does follow the stress evolution, but since it is almost zero in this zone its local variations caused by the local stress evolution have a negligible impact on the overall kinetic energy plotted in Fig.\ \ref{fig: monitor}. We noticed that between $t$ = 30 \si{s} and $t$ = 210 \si{s} the change in the velocity components at any point in the domain is of the order of $10^{-6}$ \si{m/s}, except in the transition zone where it can be about five times larger. Figure \ref{fig: base case profiles} shows vertical profiles of most flow variables along the centreline ($x = L/2$, Figs.\ \ref{sfig: base case profiles C1} and \ref{sfig: base case profiles C2}) and close to the right wall ($x = 0.95L$, Figs.\ \ref{sfig: base case profiles R1} and \ref{sfig: base case profiles R2}). Two profiles are drawn for each variable, one at time $t$ = 30 \si{s} and one at $t$ = 210 \si{s}. These profiles are identical inside the yielded and plug zones, and only deviate very slightly inside the bottom unyielded zone.
Thus the steady state has largely been reached already by $t$ = 30 \si{s}. The vertical line at $x = 0.95L$ cuts through a significant width of the transition zone, which can be recognised by the vertical line segment where $\tau_d \approx \tau_y$ in Fig.\ \ref{sfig: base case profiles R1} (approximately from $y$ = 0.041 \si{m} to $y$ = 0.051 \si{m}). A close-up view of the variation of $\tau_d$ in the transition zone is shown in the inset of the same figure, where one can see that, as time passes, $\tau_d$ decreases towards $\tau_y$ throughout the zone. Interestingly, Fig.\ \ref{sfig: base case profiles C1} shows that the normal stress component $\tau_{11}$ tends to vary discontinuously across the boundary of the lower unyielded zone as time passes. The possibility of the SHB model producing solutions with discontinuous stress components and/or velocity gradients was noted and discussed in \cite{Cheddadi_2008, Cheddadi_2013}, where it was attributed to the combination of the upper-convected derivative and the viscoplastic ``max'' term. Nevertheless, as noted in \cite{Cheddadi_2008}, stress discontinuities must be such that the components of the force $\nabla \cdot \tf{\sigma} = \nabla \cdot (\tf{\tau} - p \tf{I})$ remain bounded (discontinuities in $\tf{\tau}$ that lead to infinite derivatives in $\nabla \cdot \tf{\tau}$ can exist if they are counterbalanced by opposite discontinuities in $- p \tf{I}$). Otherwise, at a point of discontinuity a finite (i.e.\ non-zero) force would act on an infinitesimal mass, producing infinite acceleration and making the velocity field discontinuous, which violates the continuity equation for an incompressible medium. In our case though, the variation of $\tau_{11}$ in the $x_2$ direction does not cause any force on the fluid (there is no $\partial \tau_{11} / \partial x_2$ derivative in $\nabla \cdot \tf{\tau}$) and is therefore allowed to be discontinuous. \begin{figure}[!t] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \vskip 0pt \includegraphics[scale=0.80]{figures/base_case__Cline1.png} \caption{$u_1$, $\tau_{11}$ and $\tau_d$ at $x = L/2$.} \label{sfig: base case profiles C1} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \vskip 0pt \includegraphics[scale=0.80]{figures/base_case__Cline2.png} \caption{$p$, $\tau_{22}$ and $\tau_{12}$ at $x = L/2$.} \label{sfig: base case profiles C2} \end{subfigure} \vspace{0.5cm} \centering \begin{subfigure}[t]{0.49\textwidth} \centering \vskip 0pt \includegraphics[scale=0.80]{figures/base_case__X095_1.png} \caption{$u_2$, $\tau_{11}$ and $\tau_d$ at $x = 0.95L$.} \label{sfig: base case profiles R1} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \vskip 0pt \includegraphics[scale=0.80]{figures/base_case__X095_2.png} \caption{$p$, $\tau_{22}$ and $\tau_{12}$ at $x = 0.95L$.} \label{sfig: base case profiles R2} \end{subfigure} \caption{Profiles of some dependent variables along vertical lines, for the base case (Sec.\ \ref{ssec: results: base case}): \subref{sfig: base case profiles C1}, \subref{sfig: base case profiles C2} at the vertical centreline ($x = 0.5L$); \subref{sfig: base case profiles R1}, \subref{sfig: base case profiles R2} close to the right wall ($x = 0.95L$). Dashed / continuous lines denote profiles at $t$ = 30 \si{s} / 210 \si{s}, respectively. The vertical dash-dot lines in \subref{sfig: base case profiles C1} and \subref{sfig: base case profiles R1} mark the yield stress value $\tau_y$. The unyielded and transition regions are shaded.
The inset in \subref{sfig: base case profiles R1} shows a close-up view of the $\tau_d$ profile at $y \in [0.035,0.060]$ \si{m}.} \label{fig: base case profiles} \end{figure} \subsection{Varying the lid velocity} \label{ssec: results: varying the lid speed} Next we examine the flow driven by higher ($U$ = 0.4 \si{m/s}) and lower ($U$ = 0.025 \si{m/s}) lid velocities, which changes the dimensionless numbers as shown in Table \ref{table: dimensionless numbers}. Lowering the velocity increases the Bingham number and decreases the Weissenberg number, i.e.\ it accentuates the plastic character of the flow at the expense of its elastic character. The $\Rey$ values listed in Table \ref{table: dimensionless numbers} suggest that inertia effects should be negligible for $U$ = 0.025 and 0.100 \si{m/s}, but should be slightly noticeable for $U$ = 0.400 \si{m/s}. The table also shows that $\lambda$ increases as $U$ decreases: at lower velocities the apparent viscosity $\eta$, Eq.\ \eqref{eq: lamda and eta}, is higher and therefore the stresses relax more slowly. Thus, we expect the flow to evolve more slowly as $U$ is reduced (a scaling estimate is given at the end of this subsection). The lid velocity is again increased gradually according to Eq.\ \eqref{eq: lid velocity} with $T = L/U$. Figure \ref{sfig: monitor KE} shows that the KE of the fluid builds up in an oscillatory manner, usually with an overshoot, due to elastic effects. The larger $U$ is, the more prominent and persistent the oscillations. Figure \ref{sfig: monitor trT} confirms that the stress evolution is slower at lower $U$, as discussed above. The top row of Fig.\ \ref{fig: flowfields} shows the corresponding near-steady-state flow fields. Increasing the lid velocity leads to less unyielded material in the cavity, and the vortex has more free space to move away from the lid (see the $y_c$ coordinate in Table \ref{table: vortex metrics}). In each case there is a transition zone between the bottom unyielded zone and the yielded material, with the former not having yet expanded throughout the near-zero velocity region. The number and density of streamlines in Fig.\ \ref{fig: flowfields} show that at higher $\Bin$ the flow is weaker and more confined to a thin layer below the lid, whereas circulation is very weak in the rest of the domain. This is reflected in the normalised KE histories of Fig.\ \ref{sfig: monitor KE} and in the normalised vortex strengths listed in Table \ref{table: vortex metrics}. That table also shows that the vortex lies slightly to the left of the centreline, as is typical of viscoelastic lid-driven cavity flows, although this shift is not as pronounced as for the Oldroyd-B flows of Table \ref{table: validation}. Interestingly, as $U$ is increased the vortex centre moves towards the centreline despite $\Wei$ increasing; this could be due to inertia effects that will be discussed in Sec.\ \ref{ssec: results: comparison with HB}.
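A rough scaling estimate of this trend can be written down if, purely as an illustrative sketch of the definitions in Eq.\ \eqref{eq: lamda and eta}, the relaxation time is taken as $\lambda = \eta / G$ with $\eta = (\tau_y + k \dot{\gamma}^n)/\dot{\gamma}$, evaluated at the characteristic shear rate $\dot{\gamma} \sim U/L$:
\begin{equation*}
\lambda \;\sim\; \frac{\tau_y L}{G\,U} \;+\; \frac{k}{G} \left( \frac{U}{L} \right)^{n-1} .
\end{equation*}
Both terms grow as $U$ is reduced (the second because $n < 1$ for a shear-thinning material such as Carbopol), which is consistent with the slower stress evolution observed at the lower lid velocities.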
\begin{table}[!tb] \caption{Dimensionless coordinates of the centre of the vortex $(\tilde{x}_c, \tilde{y}_c) \equiv (x_c/L, y_c/L)$ and value of the streamfunction there $\tilde{\psi}_c \equiv \psi_c / (LU)$, for various cases, at steady-state.} \label{table: vortex metrics} \begin{center} \begin{small} \renewcommand\arraystretch{1.25} \begin{tabular}{ l | c c c | c c c } \toprule & \multicolumn{3}{c |}{ SHB } & \multicolumn{3}{c}{ HB } \\ \midrule & $\tilde{x}_c$ & $\tilde{y}_c$ & $\tilde{\psi}_c$ & $\tilde{x}_c$ & $\tilde{y}_c$ & $\tilde{\psi}_c$ \\ \midrule $U$ = 0.025 \si{m/s} & $0.495$ & $0.935$ & $-0.0211$ & $0.500$ & $0.933$ & $-0.0214$ \\ $U$ = 0.100 \si{m/s} & $0.495$ & $0.917$ & $-0.0270$ & $0.500$ & $0.915$ & $-0.0281$ \\ $U$ = 0.400 \si{m/s} & $0.500$ & $0.899$ & $-0.0333$ & $0.505$ & $0.897$ & $-0.0352$ \\ $U$ = 0.100 \si{m/s}, slip & $0.482$ & $0.846$ & $-0.0125$ & $0.500$ & $0.853$ & $-0.0112$ \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table} \begin{figure}[!tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_U0p025.png} \caption{SHB, $U = 0.025$ \si{m/s}, $t$ = 90 \si{s}} \label{sfig: SHB flow U=0.025} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_U0p100.png} \caption{SHB, $U = 0.100$ \si{m/s}, $t$ = 210 \si{s}} \label{sfig: SHB flow U=0.100} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_U0p400.png} \caption{SHB, $U = 0.400$ \si{m/s}, $t$ = 60 \si{s}} \label{sfig: SHB flow U=0.400} \end{subfigure} \vspace{0.25cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_HB_U0p025.png} \caption{HB, $U = 0.025$ \si{m/s}} \label{sfig: HB flow U=0.025} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_HB_U0p1.png} \caption{HB, $U = 0.100$ \si{m/s}} \label{sfig: HB flow U=0.100} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_HB_U0p4.png} \caption{HB, $U = 0.400$ \si{m/s}} \label{sfig: HB flow U=0.400} \end{subfigure} \caption{Top row: SHB flow snapshots at the times indicated, for different lid velocities. Bottom row: corresponding HB steady-state flowfields. The unyielded regions are shown shaded; the lines are instantaneous streamlines, plotted at streamfunction values $\psi / (UL)$ = 0.04, 0.08, 0.12 etc., plus a $\psi$ = \num{5e-6} \si{m^3/s} streamline just above the lower unyielded region.} \label{fig: flowfields} \end{figure} \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/profile_U0p025.png} \caption{$U = 0.025$ \si{m/s}} \label{sfig: varU profiles U=0.025} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/profile_U0p400.png} \caption{$U = 0.400$ \si{m/s}} \label{sfig: varU profiles U=0.400} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/profile_Sxx.png} \caption{$\tau_{11} / S$} \label{sfig: varU Sxx profiles} \end{subfigure} \caption{Profiles of $u_1$ and $\tau_d$ for $U$ = 0.025 \si{m/s} \subref{sfig: varU profiles U=0.025} and $U$ = 0.400 \si{m/s} \subref{sfig: varU profiles U=0.400}, and profiles of $\tau_{11} / S$ for both $U$'s \subref{sfig: varU Sxx profiles}. 
All profiles are along the vertical centreline ($x = L/2$). Continuous lines: SHB profiles at $t$ = 90 \si{s} ($U$ = 0.025 \si{m/s}) and $t$ = 60 \si{s} ($U$ = 0.400 \si{m/s}); dash-dot lines: SHB profiles at 30 \si{s} earlier; dashed lines: HB steady-state profiles. In \subref{sfig: varU profiles U=0.025} and \subref{sfig: varU profiles U=0.400} the vertical lines mark the yield stress.} \label{fig: varU profiles} \end{figure} \subsection{Comparison with the classic HB model} \label{ssec: results: comparison with HB} Since the HB equation is used extensively to model viscoplastic flows, we compare its predictions against those of the SHB equation in order to get a feel for the error involved in neglecting elastic effects. The HB simulations were carried out by performing Papanastasiou regularisation \cite{Papanastasiou_1987}, implemented as \cite{Sverdrup_2018}: \begin{equation} \label{eq: HB Papanastasiou} \tf{\tau} \;=\; \eta \dot{\tf{\gamma}} \;,\qquad \eta \;=\; \frac{\tau}{\dot{\gamma}} \;\approx\; \frac{(\tau_y + k \dot{\gamma}^n)(1-e^{-\dot{\gamma}/\epsilon})}{\dot{\gamma}} \end{equation} where the regularisation parameter $\epsilon$ determines the accuracy of the approximation. This method was used for simulating lid-driven cavity Bingham flows in \cite{Syrakos_2013, Syrakos_2014}, where it was noticed that the equations became too stiff to solve for $\epsilon < 1/400$. However, with the present method, and using a continuation procedure where $\epsilon$ is progressively halved, solutions for $\epsilon$ = 1/128,000 were obtained (a minimal code sketch of this procedure is given below). We performed steady-state simulations (since the steady-state of classic viscoplastic flows does not depend on the initial conditions) where the HB parameters $\tau_y$, $k$ and $n$ have the same values as for the SHB fluid (Table \ref{table: carbopol parameters}). The flowfields, depicted in the second row of Fig.\ \ref{fig: flowfields}, are much more symmetric than their SHB counterparts. In HB lid-driven cavity flow, the only source of asymmetry is inertia. Inertial effects are imperceptible for $U$ = 0.025 and 0.100 \si{m/s} (Figs.\ \ref{sfig: HB flow U=0.025} and \ref{sfig: HB flow U=0.100}) and only slightly noticeable for $U$ = 0.400 \si{m/s} (Fig.\ \ref{sfig: HB flow U=0.400}), which is in accord with the corresponding $\Rey$ values listed in Table \ref{table: dimensionless numbers}. So, for $U$ = 0.4 \si{m/s} the vortex centre is shifted slightly towards the right (Table \ref{table: vortex metrics}), whereas at the lower $U$'s it is almost exactly on the centreline. Also, in Fig.\ \ref{sfig: HB flow U=0.400} one can see that the upper unyielded (plug) region is somewhat stretched towards the left. These features are opposite to those of the SHB case, where elasticity causes the vortex centre to move towards the left and the plug region to stretch towards the right. The opposite effects of inertia and elasticity have been noted also in \cite{Martins_2013, Hashemi_2017, Syrakos_2018}. Figure \ref{fig: flowfields} and Table \ref{table: vortex metrics} show that the union of the lower unyielded and transition zones in SHB flow is slightly larger than the corresponding HB stationary regions, pushing the SHB vortex upwards and lowering its strength compared to the HB case. Figures \ref{sfig: varU profiles U=0.025} and \ref{sfig: varU profiles U=0.400} show that the SHB and HB velocity profiles along the vertical centreline are very similar; the $\tau_d$ profiles are somewhat more dissimilar but still not far apart, especially in the yielded regions.
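As an implementation aside, picking up the regularisation discussion above: the following minimal Python sketch evaluates the regularised viscosity of Eq.\ \eqref{eq: HB Papanastasiou} and performs the continuation in $\epsilon$. It is illustrative only: \texttt{solve\_cavity} is a hypothetical placeholder for the flow solver, the starting value of $\epsilon$ is assumed, and the values of $k$ and $n$ below are stand-ins for the Carbopol parameters of Table \ref{table: carbopol parameters} ($\tau_y$ = 70 \si{Pa} as quoted in the text).
\begin{verbatim}
import numpy as np

TAU_Y, K, N = 70.0, 30.0, 0.5   # tau_y from the text; K, N are placeholders

def hb_viscosity(gamma_dot, eps):
    """Papanastasiou-regularised HB viscosity, Eq. (HB Papanastasiou)."""
    g = np.asarray(gamma_dot, dtype=float)
    # -expm1(-x) = 1 - exp(-x), accurate as g -> 0 (limit: eta -> tau_y/eps)
    return (TAU_Y + K * g**N) * (-np.expm1(-g / eps)) / np.maximum(g, 1e-300)

# Continuation: solve at the current eps, halve it, and re-solve using the
# previous solution as initial guess (solve_cavity is hypothetical).
flow, eps = None, 1.0 / 4000.0  # assumed starting value of eps
while eps >= 1.0 / 128000.0:
    # flow = solve_cavity(viscosity=lambda g: hb_viscosity(g, eps), init=flow)
    eps /= 2.0
\end{verbatim}
The point of the continuation is that, with the previous solution as the starting guess, each halving of $\epsilon$ perturbs the problem only mildly, which is what allows $\epsilon$ values far below the $1/400$ limit reported in \cite{Syrakos_2013, Syrakos_2014} to be reached.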
In the unyielded regions, by contrast, the SHB and HB stress profiles cannot be expected to be similar, due to the stress indeterminacy of the HB model (the HB stress field predicted in the unyielded regions is one of infinitely many possibilities, namely the one conforming to the Papanastasiou regularisation). Profiles of $\tau_{11}$ are plotted in Fig.\ \ref{sfig: varU Sxx profiles}; for HB flow, due to the symmetric flowfield and $\tau_{11}$ being proportional to $\partial u_1 / \partial x_1$, this stress component is nearly zero, except inside the plug region for $U$ = 0.400 \si{m/s}, which is somewhat asymmetric (Fig.\ \ref{sfig: HB flow U=0.400}). The SHB stresses, on the other hand, are significant due to elasticity, especially in the higher $U$ case which corresponds to higher $\Wei$. Finally, we note that in Fig.\ \ref{fig: varU profiles} two SHB profiles are plotted for each lid velocity: at $t$ = 60 and 90 \si{s} for $U$ = 0.025 \si{m/s}, and at $t$ = 30 and 60 \si{s} for $U$ = 0.400 \si{m/s}. These profiles are hardly distinguishable, indicating that the steady state for $U$ = 0.025 \si{m/s} has been practically reached at $t$ = 60 \si{s}, and for $U$ = 0.400 \si{m/s} at $t$ = 30 \si{s}. Next, we performed a couple of simulations similar to the base case, except that we artificially increased the elastic modulus to 10 and 100 times the original value listed in Table \ref{table: carbopol parameters}. This reduces the relaxation times by factors of 10 and 100, respectively, so that the SHB material becomes more inelastic and its predictions should tend to match those of the HB model. Due to the smaller relaxation times, we also used smaller time steps: starting from an initial time step of \num{5e-5} \si{s} in both cases, we allowed the time step to increase up to $\Delta t_{\max}$ = \num{1.5e-3} \si{s} and \num{e-4} \si{s} in the $G$ = 4,000 \si{Pa} and $G$ = 40,000 \si{Pa} cases, respectively. The simulations were stopped at $t$ = 50 \si{s} and 10 \si{s} respectively, and in the $G$ = 40,000 \si{Pa} case the lid acceleration period (Eq.\ \eqref{eq: lid velocity}) was reduced to 0.01 \si{s}. Figure \ref{fig: HB comparison: streamlines} shows that the SHB and HB streamlines converge rapidly with increasing $G$, while Fig.\ \ref{fig: HB comparison: tau} shows that convergence is not as rapid for the stresses. Of course, for $\tau_d < \tau_y$ (top row of Fig.\ \ref{fig: HB comparison: tau}) we cannot expect the SHB and HB stresses to ever match, due to the aforementioned indeterminacy of the HB stress tensor in the unyielded regions. The yield lines (second row of Fig.\ \ref{fig: HB comparison: tau}) appear to converge, but at a slow pace. Despite the shorter duration of the $G$ = 40,000 \si{Pa} simulation, Fig.\ \ref{sfig: HB comparison: tau 70 100G} shows that during this time the transition zone has evolved further than in the lower $G$ cases, due to the much shorter relaxation times. The same figure reveals some slight spurious stress oscillations on the lower yield line (they can also be seen in Fig.\ \ref{sfig: HB comparison: tau 70 base}). They could be due to the near-zero velocities there, which complicate the application of upwinding within CUBISTA. Finally, the last row of Fig.\ \ref{fig: HB comparison: tau} shows that within the yielded region the $\tau_d$ predictions of the two models are quite close, even for $G$ = 400 \si{Pa}.
A persistent discrepancy between the two models at the sides of the plug region seen in Fig.\ \ref{sfig: HB comparison: tau 80 100G} may be due to spatial or temporal discretisation errors, regularisation, etc. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/base_case__Streamlines.png} \caption{$G = 400$ \si{Pa}} \label{sfig: HB comparison: streamlines base} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_10G__Streamlines.png} \caption{$G = 4,000$ \si{Pa}} \label{sfig: HB comparison: streamlines 10G} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_100G__Streamlines.png} \caption{$G = 40,000$ \si{Pa}} \label{sfig: HB comparison: streamlines 100G} \end{subfigure} \caption{Continuous lines: SHB streamlines for various values of $G$, as indicated (the rest of the parameters are as in Table \ref{table: carbopol parameters}, and the lid velocity is $U$ = 0.1 \si{m/s}). Dashed lines: corresponding HB streamlines, drawn at the same streamfunction values as for the SHB model. Since the HB predictions are independent of $G$, the dashed lines do not change between plots. The streamlines are drawn at fixed streamfunction intervals.} \label{fig: HB comparison: streamlines} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/base_case__Tau_d_40.png} \caption{$\tau_d = 40$ \si{Pa}; $G = 400$ \si{Pa}} \label{sfig: HB comparison: tau 40 base} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_10G__Tau_d_40.png} \caption{$\tau_d = 40$ \si{Pa}; $G = 4,000$ \si{Pa}} \label{sfig: HB comparison: tau 40 10G} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_100G__Tau_d_40.png} \caption{$\tau_d = 40$ \si{Pa}; $G = 40,000$ \si{Pa}} \label{sfig: HB comparison: tau 40 100G} \end{subfigure} \vspace{0.25cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/base_case__Tau_d_70.png} \caption{$\tau_d = 70$ \si{Pa}; $G = 400$ \si{Pa}} \label{sfig: HB comparison: tau 70 base} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_10G__Tau_d_70.png} \caption{$\tau_d = 70$ \si{Pa}; $G = 4,000$ \si{Pa}} \label{sfig: HB comparison: tau 70 10G} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_100G__Tau_d_70.png} \caption{$\tau_d = 70$ \si{Pa}; $G = 40,000$ \si{Pa}} \label{sfig: HB comparison: tau 70 100G} \end{subfigure} \vspace{0.25cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/base_case__Tau_d_80.png} \caption{$\tau_d = 80$ \si{Pa}; $G = 400$ \si{Pa}} \label{sfig: HB comparison: tau 80 base} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_10G__Tau_d_80.png} \caption{$\tau_d = 80$ \si{Pa}; $G = 4,000$ \si{Pa}} \label{sfig: HB comparison: tau 80 10G} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/case_100G__Tau_d_80.png} \caption{$\tau_d = 80$ \si{Pa}; $G = 40,000$ \si{Pa}} \label{sfig: HB comparison: tau 80 100G} \end{subfigure} \caption{Continuous lines: 
contours of $\tau_d$ predicted by the SHB model (top row: $\tau_d$ = 40 \si{Pa}; middle row: $\tau_d = \tau_y$ = 70 \si{Pa}; bottom row: $\tau_d$ = 80 \si{Pa}) for various values of the elastic modulus $G$ (left column: $G$ = 400 \si{Pa}; middle column: $G$ = 4,000 \si{Pa}; right column: $G$ = 40,000 \si{Pa}), the rest of the parameters having the values listed in Table \ref{table: carbopol parameters}, and the lid velocity being $U$ = 0.1 \si{m/s}. Dashed lines: corresponding contours predicted by the HB model.} \label{fig: HB comparison: tau} \end{figure} \subsection{A case with slip} \label{ssec: results: slip} Carbopol gels are quite slippery \cite{Piau_2007, Perez_2012, Poumaere_2014}; the previous, no-slip results can be considered to correspond to roughened walls, like those used in rheological measurements to avoid slip \cite{Lacaze_2015}. However, if the walls are smooth then noticeable slip is expected. Viscoplastic and viscoelastic fluids can exhibit complex slip behaviour, e.g.\ power-law slip \cite{Kalyon_2005, Panaseti_2017}, pressure dependence \cite{Panaseti_2013}, or ``slip yield stress'' \cite{Hatzikiriakos_2012, Philippou_2017}. Nevertheless, experiments with Carbopol in \cite{Perez_2012} (0.2\% by weight) and \cite{Poumaere_2014} (0.08\% by weight) showed nearly Navier slip behaviour, $u_s = 5.151 \times 10^{-4} \, \tau_w^{0.876}$ and $u_s = 2.3 \times 10^{-4} \, \tau_w^{1.32}$, respectively, where $u_s$ is the slip velocity (the left-hand side of Eq.\ \eqref{eq: navier slip}) and $\tau_w$ is the wall tangential stress (the term $(\vf{n} \cdot \tf{\tau}) \cdot \vf{s}$ of the right-hand side of Eq.\ \eqref{eq: navier slip}). In \cite{Poumaere_2014} it is conjectured that this may be due to the formation of a thin layer of Newtonian fluid (solvent) that separates the wall surface from the gel micro-particles. In the present section we impose a Navier slip condition, Eq.\ \eqref{eq: navier slip}, on all walls, with a slip coefficient $\beta = 5 \times 10^{-4}$ \si{m / Pa.s}. For this case we could not get SIMPLE to converge when the stress-extrapolation coefficients $D$ given by Eqs.\ \eqref{eq: D+ boundary} -- \eqref{eq: D- boundary} were used at the walls, so we set them equal to zero. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/slip_flowfield.png} \caption{SHB flowfield} \label{sfig: slip flowfield} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/slip_flowfield_HB.png} \caption{HB flowfield} \label{sfig: slip flowfield HB} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/slip_tau_d_75.png} \caption{$\tau_d$ = 75 \si{Pa}} \label{sfig: slip tau_d 75} \end{subfigure} \caption{\subref{sfig: slip flowfield} SHB flowfield for the slip case; shaded region: unyielded material at $t$ = 60 \si{s}; dashed lines: yield lines at $t$ = 30 \si{s}; continuous lines: streamlines (they are drawn at equal streamfunction intervals). \subref{sfig: slip flowfield HB} Corresponding steady-state HB flowfield (the streamlines are drawn at the same values of streamfunction as in \subref{sfig: slip flowfield}). \subref{sfig: slip tau_d 75} The $\tau_d$ = 75 \si{Pa} contour of the SHB (continuous line) and HB (dashed line) flowfields.} \label{fig: slip flowfields} \end{figure} The resulting flowfield is shown in Fig.\ \ref{sfig: slip flowfield} and is discussed below.
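First, to get a sense of the magnitude of slip, the following minimal Python sketch compares the slip velocities given by the Navier law used here and by the two fitted laws quoted above, at a wall stress equal to the yield stress ($\tau_w = \tau_y$ = 70 \si{Pa}, chosen merely as a representative value; SI units are assumed throughout):
\begin{verbatim}
tau_w = 70.0                           # representative wall stress [Pa]
u_navier = 5.0e-4   * tau_w            # Navier law imposed here
u_perez  = 5.151e-4 * tau_w**0.876     # Perez et al. (2012), 0.2 wt%
u_poum   = 2.3e-4   * tau_w**1.32      # Poumaere et al. (2014), 0.08 wt%
print(u_navier, u_perez, u_poum)       # ~0.035, ~0.021, ~0.063 m/s
\end{verbatim}
All three estimates are a sizeable fraction of the lid velocity $U$ = 0.1 \si{m/s}, so a strong effect of slip on the flow is to be expected.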
A distinguishing feature of the flowfield in Fig.\ \ref{sfig: slip flowfield} is that the two unyielded zones touch the cavity walls and yet contain moving material, which slides over the walls. This feature is common also to the HB flowfield, shown in Fig.\ \ref{sfig: slip flowfield HB}, which is, again, much more symmetric. Figure \ref{sfig: slip tau_d 75} shows that SHB and HB contours of $\tau_d$ are more similar for $\tau_d > \tau_y$. Due to slip, the flow induced by the lid is much weaker than in the no-slip case, as can be seen from the lower kinetic energy in Fig.\ \ref{fig: monitor}, the smaller vortex strength in Table \ref{table: vortex metrics}, and the wider streamline spacing in Fig.\ \ref{sfig: slip, gamma} than in Fig.\ \ref{sfig: base case, gamma}. On the other hand, the streamlines in Fig.\ \ref{sfig: slip, gamma} are more evenly spaced near the bottom of the cavity compared to Fig.\ \ref{sfig: base case, gamma} as slip allows the circulation to extend all the way down to the bottom wall, where now there is no \textit{stationary} unyielded zone. Consequently, there is also no transition zone. Indeed, comparing in Fig.\ \ref{sfig: slip flowfield} the yield lines at $t$ = 30 \si{s} (dashed lines) and $t$ = 60 \si{s} (the boundary of the shaded regions) one sees that there is very little change, and the bottom unyielded zone even slightly contracts with time. The absence of a transition zone can be explained by examining the $\dot{\gamma}$ distributions of Fig.\ \ref{fig: base and slip gamma}; whereas in the no-slip case (Fig.\ \ref{sfig: base case, gamma}) the velocity gradients are practically zero inside the bottom unyielded zone, in the slip case (Fig.\ \ref{sfig: slip, gamma}) they are non-zero throughout the domain, thus excluding the transition zone situation where the constitutive equation reduces to Eq.\ \eqref{eq: constitutive stationary tensorial}. Figure \ref{fig: base and slip gamma} also shows that in the upper part of the cavity $\dot{\gamma}$ is much lower in the presence of slip. Due to the shear-thinning nature of the fluid, this results in higher viscosities and associated relaxation times, which may explain why the stress evolution is slower in the slip case than in the no-slip one, as seen in Fig.\ \ref{sfig: monitor trT} (and also in Figs.\ \ref{sfig: slip flowfield} and \ref{fig: slip profiles}, where there are slight changes between $t$ = 30 and 60 \si{s}). Surprisingly, Fig.\ \ref{sfig: monitor trT} shows that $\mathrm{tr}(\tf{\tau})_{\mathrm{avg}}$ is actually higher in the slip case, probably due to the absence of a bottom stationary zone. For benchmarking purposes we plot profiles of some dependent variables along the vertical centreline in Fig.\ \ref{fig: slip profiles}. \begin{figure}[tb] \centering \begin{subfigure}[t]{0.43\textwidth} \centering \includegraphics[scale=0.82]{figures/base_case__GammaD.png} \caption{No slip} \label{sfig: base case, gamma} \end{subfigure} \begin{subfigure}[t]{0.12\textwidth} \centering \includegraphics[scale=0.85]{figures/GammaD_legend.png} \end{subfigure} \begin{subfigure}[t]{0.43\textwidth} \centering \includegraphics[scale=0.82]{figures/slip_GammaD.png} \caption{Slip} \label{sfig: slip, gamma} \end{subfigure} \caption{Contours of $\dot{\gamma}$ [\si{s^{-1}}] for $U$ = 0.100 \si{m/s} without wall slip \subref{sfig: base case, gamma} (base case) or with wall slip \subref{sfig: slip, gamma}. The thick continuous lines are yield lines and the dashed lines are streamlines.
The latter are plotted at equal streamfunction intervals (the same intervals in both figures).} \label{fig: base and slip gamma} \end{figure} \begin{figure}[tb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \vskip 0pt \includegraphics[width=0.88\linewidth]{figures/slip_profile_U.png} \caption{$u_1$ and $\tau_d$ at $x=L/2$} \label{sfig: slip profiles u} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.88\linewidth]{figures/slip_profile_stress.png} \caption{$\tau_{11}$ and $\tau_{12}$ at $x=L/2$} \label{sfig: slip profiles stress} \end{subfigure} \caption{Profiles of some dependent variables along the vertical centreline, for the slip case. SHB results at $t$ = 60 and 30 \si{s} are plotted in continuous and dashed lines, respectively, and steady-state HB results in dash-dot lines.} \label{fig: slip profiles} \end{figure} \subsection{Multiplicity of solutions} \label{ssec: results: multiplicity of solutions} It was mentioned in Sec.\ \ref{sec: equations} that Cheddadi et al.\ \cite{Cheddadi_2012} found that the steady state of a SHB flow (including the velocity field) can depend in a continuous manner on the initial conditions. That was observed in simple cylindrical Couette flow, where the shear stress is determined by the geometry and kinematics but the circumferential normal stress depends also on its initial value, which affects the final extent of the yield zone and therefore also the final velocity field. We performed analogous numerical experiments to see whether this can also be observed in the present, more complex flow. So, we solved two more cases that are identical to the ``base case'' of Sec.\ \ref{ssec: results: base case}, except for the initial stress conditions: $\tau_{11} = \tau_{22} = +\sqrt{3} \tau_y$ (first case) or $-\sqrt{3} \tau_y$ (second case) throughout, the remaining stress components being zero. These initial conditions correspond, respectively, to tension and compression states that are isotropic in the $xy$ plane; from Eq.\ \eqref{eq: deviatoric stress} it follows that $\tau_d(t=0) = \tau_y$ everywhere, i.e.\ the material is on the verge of yielding (a verification is given below). The material is initially at rest ($\vf{u} = 0$). The simulations were run until $t$ = 60 \si{s}. \begin{figure}[b] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/multiplicity_monitor_zoom.png} \caption{$t \in [0,4]$ \si{s}} \label{sfig: multiplicity monitor zoom} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/multiplicity_monitor.png} \caption{$t \in [0,30]$ \si{s}} \label{sfig: multiplicity monitor} \end{subfigure} \caption{Histories of N.K.E.\ (Eq.\ \eqref{eq: kinetic energy}) and $\mathrm{tr}(\tf{\tilde{\tau}})_{\mathrm{avg}}$ (Eq.\ \eqref{eq: average trace}) for the cases with initial conditions $\tau_{11} = \tau_{22} = -\sqrt{3} \tau_y$ (continuous lines), $\tau_{11} = \tau_{22} = +\sqrt{3} \tau_y$ (dashed lines) and $\tau_{11} = \tau_{22} = 0$ (dash-dot lines, the base case).} \label{fig: multiplicity monitor} \end{figure} Interestingly, the velocity fields of both of these new cases arrive at practically the same steady state; this can be seen in Fig.\ \ref{fig: multiplicity monitor}, where the kinetic energies of both cases converge, and more clearly in Fig.\ \ref{sfig: multiplicity streamlines} where the respective streamlines can be seen to be identical.
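As a consistency check, note that these initial conditions do place the material exactly at the yield point. Assuming the standard definitions $\tf{\tau}{}_d = \tf{\tau} - \frac{1}{3} \mathrm{tr}(\tf{\tau}) \tf{I}$ and $\tau_d = (\frac{1}{2} \tf{\tau}{}_d : \tf{\tau}{}_d)^{1/2}$ (which we take to be those of Eq.\ \eqref{eq: deviatoric stress}), the state $\tau_{11} = \tau_{22} = \pm\sqrt{3} \tau_y$ with all other components zero has $\mathrm{tr}(\tf{\tau}) = \pm 2\sqrt{3} \tau_y$, hence deviatoric components $\tau_{d,11} = \tau_{d,22} = \pm \tau_y / \sqrt{3}$ and $\tau_{d,33} = \mp 2 \tau_y / \sqrt{3}$, so that
\begin{equation*}
\tau_d \;=\; \sqrt{ \frac{1}{2} \left( \frac{\tau_y^2}{3} + \frac{\tau_y^2}{3} + \frac{4 \tau_y^2}{3} \right) } \;=\; \tau_y
\end{equation*}
for either sign, as claimed.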
This common steady state is, however, not the same as that arrived at in the base case ($\tau_{11} = \tau_{22} = 0$): Fig.\ \ref{fig: multiplicity monitor} shows that, unlike in the two new cases, the kinetic energy of the base case does not exhibit an overshoot, and it eventually increases to a value about 2.25\% larger than that of the other two cases. In Fig.\ \ref{sfig: multiplicity streamlines} the base case streamlines are also not identical to those of the other two cases. As far as the main vortex is concerned, the new cases have it located at $(\tilde{x}_c, \tilde{y}_c) = (0.497, 0.917)$ with a strength of $\tilde{\psi}_c = -0.0266$; this vortex is slightly to the right of, and slightly weaker than, that of the base case (Table \ref{table: vortex metrics}). Of course, all these differences are small, but nevertheless they confirm the observation of Cheddadi et al.\ \cite{Cheddadi_2012}. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/multiplicity_streamlines.png} \caption{streamlines} \label{sfig: multiplicity streamlines} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/multiplicity_tau_d.png} \caption{yield lines} \label{sfig: multiplicity yield lines} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/tau_plus_minus.png} \caption{$\tau_d$} \label{sfig: multiplicity tau_d} \end{subfigure} \caption{Steady-state streamlines \subref{sfig: multiplicity streamlines} (streamfunction interval $\delta \psi / (LU)$ = \num{3.6e-4}), yield lines \subref{sfig: multiplicity yield lines}, and $\tau_d$ contours \subref{sfig: multiplicity tau_d} for the cases with initial conditions $\tau_{11} = \tau_{22} = -\sqrt{3} \tau_y$ (continuous lines), $+\sqrt{3} \tau_y$ (dashed lines), and zero (dash-dot lines, only in \subref{sfig: multiplicity streamlines} and \subref{sfig: multiplicity tau_d}). In \subref{sfig: multiplicity tau_d} the values of $\tau_d$, in \si{Pa}, are indicated next to each contour. The unyielded region of the $\tau_{11} = \tau_{22} = -\sqrt{3} \tau_y$ case is shaded blue. The region where $\|\vf{u}\| < 5\times 10^{-5}U$ is shaded green.} \label{fig: multiplicity flow fields} \end{figure} Concerning the stress fields, we can note the following. Firstly, a striking difference between the new cases and the base case is that in the former the bottom unyielded zone is largely absent and is replaced by a large transition zone; this is shown in Fig.\ \ref{sfig: multiplicity tau_d} where the transition zone is approximately the green region, throughout which $\tau_d$ is slightly above $\tau_y$. Inside the transition region the $\tau_d$ fields of the two new cases are similar (Fig.\ \ref{sfig: multiplicity tau_d}), but the individual $\tau_{11}$ and $\tau_{22}$ stress components have opposite signs, inherited from the initial conditions, as seen in Fig.\ \ref{fig: multiplicity profiles}; in fact, near the bottom wall these stress components retain values close to the initial ones, $\pm \sqrt{3} \tau_y \approx 121$ \si{Pa}. On the other hand, the base case stresses are close to zero in that region, again an inheritance of the initial conditions. This is reflected in the much lower stress trace values seen in Fig.\ \ref{fig: multiplicity monitor}.
Outside of the transition region, both where the rates of deformation are non-zero and in the plug zone, the two new cases have identical steady-state stress fields, as seen in Figs.\ \ref{sfig: multiplicity tau_d} and \ref{fig: multiplicity profiles}, whereas those of the base case deviate slightly. This includes the yield line that forms the boundary of the plug zone: Fig.\ \ref{sfig: multiplicity yield lines} shows that the plug zones of the two new cases are identical, but that of the base case is slightly smaller, which is likely the cause of the slightly different velocity field. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/Tpm_profile_s11.png} \caption{$\tau_{11}$} \label{sfig: multiplicity profiles s11} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/Tpm_profile_s22.png} \caption{$\tau_{22}$} \label{sfig: multiplicity profiles s22} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/Tpm_profile_p.png} \caption{pressure} \label{sfig: multiplicity profiles p} \end{subfigure} \caption{Normal stress components \subref{sfig: multiplicity profiles s11}-\subref{sfig: multiplicity profiles s22} and pressure \subref{sfig: multiplicity profiles p} along the vertical centreline ($x = L/2$) for the cases with initial conditions $\tau_{11} = \tau_{22} = -\sqrt{3} \tau_y$ (continuous lines), $+\sqrt{3} \tau_y$ (dashed lines), and zero (dash-dot lines).} \label{fig: multiplicity profiles} \end{figure} \subsection{Cessation} \label{ssec: results: cessation} Our final investigation concerns the sudden stopping of the lid and the study of the subsequent flow decay. Classic viscoplastic models are known to predict flow cessation in finite time after the driving agent is removed \cite{Frigaard_2019}. This can be proved rigorously using variational inequalities (e.g.\ upper bounds for the cessation times of some one-dimensional flows are derived in \cite{Glowinski_1984, Huilgol_2002, Muravleva_2010b}), but roughly speaking it is due to the effect of the yield stress on the rate of energy dissipation: the yield stress keeps that rate high enough during the flow decay that all of the fluid's kinetic energy (KE) is dissipated in finite time -- a rough explanation is sketched in Appendix \ref{appendix: energy dissipation}. The cessation of viscoplastic (Bingham) lid-driven cavity flow was studied in \cite{Syrakos_2016a}, according to the findings of which we would expect HB flow to cease completely at some time $t_c \ll T_c \equiv \rho U L / S \approx$ 0.11 \si{s} ($T_c$ is the time needed for a force of magnitude $SL^2$ to bring a mass of momentum $\rho U L^3$ to rest). The situation concerning SHB flow is expected to be somewhat different. Firstly, even though the energy conversion rate $\int_{\Omega} \tf{\tau}\!:\!\nabla \vf{u} \: \mathrm{d}\Omega$ is still expected to be large enough to convert all the KE of the material in finite time, this rate now includes not only energy dissipation, but also energy storage in the form of elastic (potential) energy, which can later be converted back into KE \cite{Winter_1987}. In fact, once all of the material becomes unyielded there is no mechanism for energy dissipation and it is expected that the remaining energy will perpetually change form from elastic to kinetic and vice versa, resulting in oscillatory motions.
Furthermore, the formation of transition regions in previous simulations makes it uncertain whether all of the material will become unyielded in finite time. To investigate these issues, using the ``steady-state'' base flow (Sec.\ \ref{ssec: results: base case}) as initial condition, we suddenly stopped the lid motion and carried out a simulation to see how the flow evolves. We repeated the procedure with the slip case of Sec.\ \ref{ssec: results: slip} as well. Figure \ref{sfig: cessation monitor wide} shows the evolution of the KE with time ($t$ = 0 is the instant the lid is halted). In neither case does the flow cease in finite time. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.95]{figures/cessation_monitor.png} \caption{} \label{sfig: cessation monitor wide} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.95]{figures/cessation_monitor_zoom_narrow_Flid.png} \caption{} \label{sfig: cessation monitor zoom} \end{subfigure} \caption{\subref{sfig: cessation monitor wide}: Fluid kinetic energy (Eq.\ \eqref{eq: kinetic energy}), normalised by its value at the instant the lid is halted, versus time elapsed (the lid is halted at time $t$ = 0) for the no-slip and slip cases. \subref{sfig: cessation monitor zoom}: The top diagram is a close-up view of a portion of the no-slip curve in \subref{sfig: cessation monitor wide}. The bottom diagram plots the corresponding values of the force exerted by the fluid on the lid, normalised by $\tau_y L$.} \label{fig: cessation monitor} \end{figure} In the no-slip case, the KE history is highly oscillatory, confirming the anticipated perpetual back-and-forth conversion between kinetic and elastic energies. The top diagram of Fig.\ \ref{sfig: cessation monitor zoom} shows a clearer picture over a narrower time window. The KE peaks 29 times within that 2 \si{s} window, i.e.\ a single oscillation lasts about 69 \si{ms}; the time step is kept to about $\Delta t$ = \num{2e-4} \si{s} by the adjustable time step scheme, so each KE oscillation period is resolved into about 350 time steps. The diagram shows that the KE variation does not consist of a single harmonic component, and Fig.\ \ref{fig: cessation flowfields} shows that the flow is indeed quite complex. Usually at any given instant there appears one main vortex rotating in either the clockwise or anticlockwise sense, but its location and shape are not fixed, while smaller vortices may also appear. The material is not completely unyielded, and the yielded zones are transported along as the material oscillates, while their size varies with time. The existence of these yielded zones allows some energy dissipation, and hence the mean KE in Fig.\ \ref{fig: cessation monitor} very slowly drops. The bottom diagram of Fig.\ \ref{sfig: cessation monitor zoom} shows the force $F_{\mathrm{lid}}$ exerted by the fluid on the lid. This force is negative, pushing the lid towards the left, i.e.\ in the direction opposite to its motion prior to halting. The magnitude of the force oscillates slightly above the value $\tau_y L$, and an inspection shows that this is because the magnitude of $\tau_{12}$ on the lid slowly drops towards $\tau_y$ (the lid is in contact with a transition zone).
Figure \ref{sfig: cessation flowfields Flid} shows that the magnitude of $F_{\mathrm{lid}}$ drops when the lid touches a clockwise vortex (Figs.\ \ref{sfig: cessation flowfield t=3.05}, \ref{sfig: cessation flowfield t=3.15}, \ref{sfig: cessation flowfield t=3.30}) and it increases when it touches a counter-clockwise vortex (Figs.\ \ref{sfig: cessation flowfield t=3.10}, \ref{sfig: cessation flowfield t=3.20}). A comparison between the KE and $F_{\mathrm{lid}}$ diagrams in Fig.\ \ref{sfig: cessation monitor zoom} shows that $F_{\mathrm{lid}}$ oscillates at a frequency that is roughly half of that of the KE, which is expected since during a single $F_{\mathrm{lid}}$ period both the clockwise and anticlockwise vortex velocities are maximised, which leads to two KE peaks. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000131a.png} \caption{$t$ = 3 \si{s}} \label{sfig: cessation flowfield t=3} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000132a.png} \caption{$t$ = 3.05 \si{s}} \label{sfig: cessation flowfield t=3.05} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000133a.png} \caption{$t$ = 3.10 \si{s}} \label{sfig: cessation flowfield t=3.10} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000134a.png} \caption{$t$ = 3.15 \si{s}} \label{sfig: cessation flowfield t=3.15} \end{subfigure} \vspace{0.25cm} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000135a.png} \caption{$t$ = 3.20 \si{s}} \label{sfig: cessation flowfield t=3.20} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000136a.png} \caption{$t$ = 3.25 \si{s}} \label{sfig: cessation flowfield t=3.25} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000137a.png} \caption{$t$ = 3.30 \si{s}} \label{sfig: cessation flowfield t=3.30} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.43\linewidth]{figures/flow_legend.png} \caption{colour map} \label{sfig: cessation flowfields legend} \end{subfigure} \vspace{0.25cm} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.95\linewidth] {figures/cessation_monitor_zoom_Flid_for_flowfields.png} \caption{History of $F_{\text{lid}}$} \label{sfig: cessation flowfields Flid} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000147s.png} \caption{(slip) $t$ = 0.10 \si{s}} \label{sfig: cessation flowfield slip start} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=0.95\linewidth]{figures/flow_0000398s.png} \caption{(slip) $t$ = 60 \si{s}} \label{sfig: cessation flowfield slip end} \end{subfigure} \caption{\subref{sfig: cessation flowfield t=3}-\subref{sfig: cessation flowfield t=3.30}: Snapshots of the no-slip cessation flow field, with colour contours of $\tau_d$, of different base colour for yielded (yellow) and unyielded (blue) regions (see the colour map \subref{sfig: cessation flowfields legend}), and instantaneous streamlines plotted at streamfunction intervals of $\delta \psi / LU = 0.002$, with $\psi = 0$ as one of the contours. 
\subref{sfig: cessation flowfields Flid}: Part of the history of the force exerted by the fluid on the lid, normalised by $\tau_y L$, with the instants corresponding to snapshots \subref{sfig: cessation flowfield t=3}-\subref{sfig: cessation flowfield t=3.30} marked on the plot. \subref{sfig: cessation flowfield slip start}-\subref{sfig: cessation flowfield slip end}: Snapshots of the slip cessation flow field.} \label{fig: cessation flowfields} \end{figure} In the slip case, the KE peaks immediately after the lid halts (Fig.\ \ref{sfig: cessation monitor wide}) but then the oscillations die out relatively quickly and the KE diminishes. The KE peak is due to the material recoiling right after the lid halts (Fig.\ \ref{sfig: cessation flowfield slip start}), which is possible because, due to slip, the walls provide only limited resistance. Then, the KE is dissipated through a mechanism that is absent in the no-slip case: as the material oscillates inside the cavity, it slides past the walls due to slip, and the friction at the wall / material interface dissipates the KE. Even in this case, however, transition zones persist long after the lid has halted (Fig.\ \ref{sfig: cessation flowfield slip end}). \section{Conclusions} \label{sec: conclusions} This work presented an FVM for elastoviscoplastic flows described by the SHB model. The method is applicable to collocated structured and unstructured meshes; it incorporates a new variant of momentum interpolation, a variant of ``two sides diffusion'', and the CUBISTA scheme, in order to suppress spurious pressure, velocity, and stress oscillations, respectively. The method was shown to achieve this goal, and also to be consistent on both smooth and irregular meshes. Similar stabilisation techniques have recently been applied in a Finite Element context to allow use of equal-order polynomial basis functions for all variables \cite{Varchanis_2019}. An implicit temporal discretisation with adaptive time step is employed. The method was applied to the simulation of EVP flow in a lid-driven cavity, with the SHB parameters chosen so as to represent Carbopol. The results can serve as benchmark solutions; furthermore, the simulations were designed to allow investigation of several aspects of the SHB model's behaviour. Complementary simulations with the classic HB model were also performed for comparison. It was noticed that the SHB model can predict ``transition zones'', regions where the velocity is near zero and the material takes a very long, possibly infinite, time to transition from a formally yielded to an unyielded state. The differences between the SHB and HB velocity fields are rather small for the lid velocities tested, although some elastic effects are noticeable in the SHB case, such as a slight upstream displacement of the vortex, a downstream stretching of the plug zone, and a small swelling of the bottom unyielded zone, compared to the HB results. These differences diminish as the elastic modulus $G$ is increased, although the convergence of the SHB and HB yield lines is rather slow and requires very large values of $G$. The velocity fields converge much faster. We also applied slip at the walls, as EVP materials are slippery, in which case no transition zones developed.
Motivated by the observation of Cheddadi et al.\ \cite{Cheddadi_2012} that different initial conditions of stress can lead to different steady states even as regards the velocity field, we performed two additional simulations where the initial state of the material was stationary but on the verge of yielding, with either compressive or tensile residual stresses. Indeed, although both of these simulations converged to the same steady-state velocity field, that field was slightly but noticeably different from that obtained when the initial stresses were zero. Due to this property of the SHB model, an accurate calculation of a steady state requires not only sufficient spatial resolution (grid spacing) but also sufficient temporal resolution (time step size), which is facilitated by the use of an adaptive time discretisation scheme such as the one proposed herein. Finally, simulations of the cessation of the flow after the lid is suddenly halted were performed, which showed that, unlike what is predicted by the HB model, the flow does not cease in finite time but there is a perpetual back-and-forth conversion between kinetic and elastic energies. In the no-slip case, a very slow net energy dissipation was observed, which is possible due to the persistent existence of yielded regions. The energy dissipation is much faster under wall slip, due to friction between the material and the walls. Incorporation of additional rheological phenomena such as thixotropy is planned for the future. \section*{Acknowledgements} This research was funded by the LIMMAT Foundation under the Project ``MuSiComPS''. \begin{appendices} \renewcommand\theequation{\thesection.\arabic{equation}} \setcounter{equation}{0} \section{The SHB extra stress tensor in the limit of infinite elastic modulus} \label{appendix: SHB tau in limit of large G} The SHB constitutive equation \eqref{eq: constitutive} distinguishes between $\tf{\tau}$ and $\tf{\tau}{}_d$, unlike the HB equation \eqref{eq: HB rate-of-strain}. For the two models to become equivalent as $G \rightarrow \infty$ it is necessary that $\tf{\tau} \rightarrow \tf{\tau}{}_d$ or, equivalently, $\mathrm{tr}(\tf{\tau}) \rightarrow 0$. In the limiting case $G \rightarrow \infty$ the first term on the left-hand side of Eq.\ \eqref{eq: constitutive} diminishes. This makes yielded material (where the ``max'' term is non-zero) tend to behave as a generalised Newtonian fluid so that $\mathrm{tr}(\tf{\tau}) \rightarrow 0 \Rightarrow \tf{\tau} \rightarrow \tf{\tau}{}_d$ and the two models do become equivalent in yielded regions. On the other hand, in unyielded regions (where the ``max'' term is zero) Eq.\ \eqref{eq: constitutive} tends to reduce to $\dot{\tf{\gamma}} = 0$, which does not, at first glance, imply that $\tf{\tau} \rightarrow \tf{\tau}{}_d$. However, the following observation can be made. In an unyielded region the SHB model can be written as (substituting Eq.\ \eqref{eq: upper convected derivative} into Eq.\ \eqref{eq: constitutive})
\begin{equation} \label{eq: SHB in unyielded region} \frac{D\tf{\tau}}{Dt} \;\equiv\; \pd{\tf{\tau}}{t} \;+\; \vf{u}\cdot \nabla \tf{\tau} \;=\; G \dot{\tf{\gamma}} \;+\; \left( (\nabla \vf{u})^{\mathrm{T}} \cdot \tf{\tau} \;+\; \tf{\tau} \cdot \nabla \vf{u} \right) \end{equation}
The material derivative $D\tf{\tau}/Dt$ is the rate of change of the stress tensor with respect to time in a fluid particle moving with the flow.
This derivative is applied component-wise to the stress tensor, and therefore the rate of change of $\mathrm{tr}(\tf{\tau})$ is \begin{equation} \label{eq: rate of change of trace} \frac{D(\mathrm{tr}(\tf{\tau}))}{Dt} \;=\; \mathrm{tr}\left( \frac{D\tf{\tau}}{Dt} \right) \end{equation} or, using Eq.\ \eqref{eq: SHB in unyielded region}, \begin{equation} \label{eq: rate of change of trace 2} \frac{D(\mathrm{tr}(\tf{\tau}))}{Dt} \;=\; G \: \mathrm{tr}(\dot{\tf{\gamma}}) \;+\; \mathrm{tr} \left( (\nabla \vf{u})^{\mathrm{T}} \cdot \tf{\tau} \;+\; \tf{\tau} \cdot \nabla \vf{u} \right) \end{equation} On the right-hand side, $\mathrm{tr}(\dot{\tf{\gamma}}) = 2 \nabla \cdot \vf{u} = 0$ identically for an incompressible material (and, in any case, in the unyielded regions $G \rightarrow \infty$ forces $\dot{\tf{\gamma}} \rightarrow 0$). For the second term in the right-hand side, in index notation, we have \begin{equation} (\nabla \vf{u})^{\mathrm{T}} \cdot \tf{\tau} \;+\; \tf{\tau} \cdot \nabla \vf{u} \;=\; \sum_{i=1}^3 \sum_{j=1}^3 \left( \sum_{k=1}^3 \pd{u_i}{x_k}\tau_{kj} \;+\; \sum_{k=1}^3 \pd{u_j}{x_k}\tau_{ik} \right) \vf{e}_i \vf{e}_j \end{equation} where $\vf{e}_i$ is the unit vector in the direction $i$. Taking the trace of the above expression we obtain \begin{equation} \label{eq: trace of upper convective terms} \mathrm{tr}\left( (\nabla \vf{u})^{\mathrm{T}} \cdot \tf{\tau} \;+\; \tf{\tau} \cdot \nabla \vf{u} \right) \;=\; \sum_{i=1}^3 \sum_{j=1}^3 \left( \pd{u_j}{x_i} \,+\, \pd{u_i}{x_j} \right) \tau_{ij} \end{equation} The terms in parentheses in the sum \eqref{eq: trace of upper convective terms} are just the components of $\dot{\tf{\gamma}}$, and so they tend to zero as $G \rightarrow \infty$. Thus the whole right-hand side of Eq.\ \eqref{eq: rate of change of trace 2} tends to zero in the unyielded regions as $G \rightarrow \infty$. Therefore, under these conditions, $D(\mathrm{tr}(\tf{\tau}))/Dt \rightarrow 0$, i.e.\ $\mathrm{tr}(\tf{\tau})$ remains constant in any particle moving in an unyielded region. Given that $\mathrm{tr}(\tf{\tau}) = 0$ for yielded particles, as mentioned above, there appears to be no mechanism for $\mathrm{tr}(\tf{\tau})$ to acquire any value different from zero. Thus we expect that $\tf{\tau} \rightarrow \tf{\tau}{}_d$ also in the unyielded regions. But even if $\mathrm{tr}(\tf{\tau}) \neq 0 \Rightarrow \tf{\tau} \neq \tf{\tau}{}_d$ in such a region, for practical purposes the stress field would still be equivalent to a HB stress field where the isotropic part of $\tf{\tau}$ is incorporated into the pressure. \section{Effect of grid refinement on the pressure stabilisation scheme} \label{appendix: pressure stabilisation} \setcounter{equation}{0} It follows from the definition of $a_f^{mi}$ \eqref{eq: ami} that as the grid is refined ($h_f \rightarrow 0$) the viscous term in the denominator dominates over the inertial one: at fine grids $a_f^{mi} \approx h_f / (2\kappa + 2\eta_a)$. This can be explained with the aid of the simple one-dimensional example used for deriving the scheme. From Eq.\ \eqref{eq: ami 1D}, if we want to locally perturb a velocity field at point $c$ by $u'_c$ (Fig.\ \ref{fig: velocity perturbations}) by locally perturbing the pressure gradient by $-\mathrm{d}p'/\mathrm{d}x|_c \approx (p'_P-p'_N)/h$ then these perturbations are related by \begin{equation} \label{eq: ami 1D analysis} u'_c \;=\; \frac{1}{ \rho \left( \left. \dfrac{\mathrm{d}u}{\mathrm{d}x} \right|_c \:+\; \dfrac{1}{\Delta t} \right) \;+\; \dfrac{2\eta_c}{h^2}} \; \cdot \; \frac{p'_P - p'_N}{h} \;=\; O(h^2) \cdot \left. \frac{\mathrm{d}p'}{\mathrm{d}x} \right|_c \;\;\Rightarrow\;\; \left. \frac{\mathrm{d}p'}{\mathrm{d}x} \right|_c \;=\; O(h^{-2}) \cdot u'_c \end{equation} Thus, in order to effect a local velocity perturbation of wavelength equal to the grid spacing, as shown in Fig.\ \ref{fig: velocity perturbations}, the pressure gradient must be adjusted locally by an amount that scales as $O(h^{-2})$, i.e.\ it must become larger and larger as the grid is refined. This is because it has to overcome the viscous resistance which scales as $\mathrm{d}^2u' / \mathrm{d}x^2|_c = O(h^{-2})$: grid refinement not only increases the velocity slopes (and associated viscous stresses) but also changes these slopes over a shorter distance (Fig.\ \ref{fig: velocity perturbations}). On the contrary, the part of the pressure gradient that is needed to balance the inertial contribution of $u'_c$ to the momentum balance is independent of the grid spacing. Conversely, Eq.\ \eqref{eq: ami 1D analysis} shows that fixed localised perturbations of the pressure gradient of wavelength equal to the grid spacing cause local velocity perturbations of magnitude $O(h^2)$, which become smaller and smaller with grid refinement due to increased viscous resistance. \begin{figure}[tb] \centering \includegraphics[scale=1.]{figures/u_perturbation.pdf} \caption{A local velocity perturbation $u'_c$ with wavelength equal to the grid spacing $h$ causes local perturbations $\mathrm{d}u'/\mathrm{d}x$ to the velocity gradient that are proportional to $u'_c/h = O(h^{-1})$; it follows that the corresponding perturbations to the second derivative of velocity $\mathrm{d}^2u'/\mathrm{d}x^2$ are proportional to $O(h^{-1})/h = O(h^{-2})$.} \label{fig: velocity perturbations} \end{figure} The pseudo-velocities $u_f^{p+}$ and $u_f^{p-}$ defined by Eqs.\ \eqref{eq: up+} and \eqref{eq: up-} have magnitudes of $O(h^2)$ (because $a_f^{mi} = O(h)$ and, in smooth pressure fields, both $p_P - p_N$ and $\nabla p \cdot (\vf{P} - \vf{N})$ are $O(h)$), whereas the actual velocity $\bar{\vf{u}}{}_{c_f}$ in \eqref{eq: mass flux} does not diminish with refinement but tends to a finite value, the exact velocity at the given point. Therefore, as the grid is refined the mass flux becomes dominated by the interpolated velocity $\bar{\vf{u}}{}_{c_f}$ while the stabilisation pseudo-velocities $u_f^{p+}$ and $u_f^{p-}$ diminish. This may raise concerns that the stabilising effect of the scheme may also diminish. However, this is not the case. Suppose that spurious pressure oscillations do arise, of amplitude $\Delta p^o$ (in the absence of preventive measures, this amplitude is unaffected by grid refinement \cite{Syrakos_06a}). The total pressure can be decomposed into a smooth part, which is close to the exact solution, and a spurious oscillatory part: $p = p^s + p^o$. Then the sum of pseudo-velocities at point $c$ of the grid shown in Fig.\ \ref{fig: 1D grid} is equal to
\begin{align}
\nonumber u_c^{p+} \;-\; u_c^{p-} \;&=\; a_c^{mi} \left[ (p^s_P + p^o_P) - (p^s_N + p^o_N) \;-\; \bar{\nabla}^q_h (p^s+p^o)_c \cdot (\vf{P} - \vf{N}{}_f) \right] \\[0.2cm]
\nonumber &=\; a_c^{mi} \Big[ \Delta p^o \;+\; \underbrace{ (p^s_P - p^s_N) \;-\; \bar{\nabla}^q_h p^s_c \cdot (\vf{P} - \vf{N}{}_f) }_{O(h^2) \;\mathrm{or}\; O(h^3)} \Big] \\%[0.1cm]
\label{eq: pseudovelocity sum magnitude c} &\approx\; a_c^{mi} \, \Delta p^o
\end{align}
because $p^o_P - p^o_N = \Delta p^o$ and $\nabla^q_h p^o_P = \nabla^q_h p^o_N = 0$ (the $\nabla^q_h$ operator is insensitive to oscillations).
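As a minimal illustration of this insensitivity, consider (purely as an assumed setting) a uniform one-dimensional grid with an oscillatory pressure component $p^o_i = (-1)^i \Delta p^o / 2$ and a gradient operator whose stencil spans the two neighbouring cells,
\begin{equation*}
\bar{\nabla}^q_h p^o_i \;\approx\; \frac{p^o_{i+1} - p^o_{i-1}}{2h} \;=\; 0 \;,
\end{equation*}
since $p^o_{i+1} = p^o_{i-1}$: such an operator returns a zero gradient for a pure checkerboard field, whereas the face difference $p^o_P - p^o_N = \pm \Delta p^o$ retains it fully.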
Also, according to the discussion of Eq.\ \eqref{eq: momentum inteprolation truncation error}, the underlined terms add up to $O(h^2)$ (or $O(h^3)$ on smooth structured grids), which is small compared to $\Delta p^o$. In other words, $u_f^{p+}$ cancels the smooth part of $u_f^{p-}$, leaving only the oscillatory part. A similar consideration for point $c_P$ (Fig.\ \ref{fig: 1D grid}) leads to \begin{equation} \label{eq: pseudovelocity sum magnitude c_P} u_{c_P}^{p+} \;-\; u_{c_P}^{p-} \;\approx\; - a_{c_P}^{mi} \, \Delta p^o \end{equation} where the minus sign comes from the fact that, according to the nature of the spurious oscillation, if the oscillatory pressure component decreases by $\Delta p^o$ from $P$ to $N$, then it \textit{increases} by $\Delta p^o$ from $PP$ to $P$. Then the combined contribution of faces $c$ and $c_P$ to the image of the continuity operator for cell $P$ (the left-hand side of Eq.\ \eqref{eq: continuity integral discrete}) divided by the cell volume is \begin{equation*} \frac{1}{\Omega_P} \left[ \dot{M}_c + \dot{M}_{c_P} \right] \;=\; \frac{1}{h^2} \rho \, h \left[ ( \bar{u}_c \:+\: u_c^{p+} \:-\: u_c^{p-} ) \;-\; ( \bar{u}_{c_P} \:+\: u_{c_P}^{p+} \:-\: u_{c_P}^{p-} ) \right] \end{equation*} which, using Eqs.\ \eqref{eq: pseudovelocity sum magnitude c} and \eqref{eq: pseudovelocity sum magnitude c_P}, and also assuming that $a_c^{mi} \approx a_{c_P}^{mi} \approx Ch$ for some constant $C$ ($C = 1 / (2\kappa + 2\eta_a)$ if the grid is fine enough, from Eq.\ \eqref{eq: ami}), becomes \begin{equation} \label{eq: mass flux sum: c + c_P} \frac{1}{\Omega_P} \left[ \dot{M}_c + \dot{M}_{c_P} \right] \;=\; \rho \frac{\bar{u}_c - \bar{u}_{c_P}}{h} \;+\; 2 \rho C \Delta p^o \end{equation} The first term on the right-hand side is the contribution of the actual velocities to the continuity image, and can be seen to tend to the $h$-independent value $\rho \partial u / \partial x|_P$ with grid refinement. The second term on the right-hand side is the contribution of the pseudo-velocities, \textit{which can also be seen to be $h$-independent}, i.e.\ it does not diminish with grid refinement, although the pseudo-velocities themselves do diminish compared to the actual velocities, as mentioned above. If we take into account also the other two faces of cell $P$ (the horizontal ones) then the total contribution of the pseudo-velocities to the continuity image is $4 \rho C \Delta p^o$. If we repeat this analysis for the neighbouring cells $N$ and $PP$, then it will turn out that the contributions of the pseudo-velocities to the continuity images of those cells are $-4 \rho C \Delta p^o$, i.e.\ they have opposite sign to that of cell $P$. Overall, in the presence of pressure oscillations the contributions of the pseudo-velocities to the continuity image over the entire grid have a checkerboard (oscillatory) pattern like the one shown in Fig.\ \ref{fig: oscillations schematic}, with amplitude proportional to the amplitude of the pressure oscillations, $\Delta p^o$. As discussed in relation to Eq.\ \eqref{eq: FVM system}, when solving the system of all continuity equations \eqref{eq: continuity integral discrete}, because the right-hand side is smooth (it is zero), spurious pressure oscillations, which would produce an oscillatory continuity image that cannot match it, are excluded. Thus, the effectiveness of momentum interpolation in suppressing spurious pressure oscillations does not degrade with grid refinement.
The amplitude of the oscillations produced by momentum interpolation to the continuity image is proportional to the amplitude of the spurious pressure oscillations themselves, and independent of the grid spacing $h$. \section{Energy dissipation and flow cessation} \label{appendix: energy dissipation} \setcounter{equation}{0} This appendix sketches an explanation for the finite cessation times of HB fluids, but for rigorous proofs the reader is referred to the literature cited in Sec.\ \ref{ssec: results: cessation}. For inelastic fluids, once the lid motion ceases, the rate of energy dissipation equals the rate of decrease of the kinetic energy of the fluid \cite{Winter_1987}: \begin{equation} \label{eq: cessation energy balance} \frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega} \tfrac{1}{2} \rho \vf{u} \cdot \vf{u} \, \mathrm{d}\Omega \;=\; -\int_{\Omega} \tf{\tau} : \nabla \vf{u} \, \mathrm{d}\Omega \end{equation} For generalised Newtonian fluids ($\tf{\tau} \;=\; \eta(\dot{\gamma}) \, \dot{\tf{\gamma}}$) we have \begin{equation} \label{eq: energy dissipation GN} \tf{\tau} : \nabla \vf{u} \;=\; \eta(\dot{\gamma}) \, \dot{\tf{\gamma}} : \nabla \vf{u} \;=\; \eta(\dot{\gamma}) \, \dot{\gamma}^2 \end{equation} For an HB fluid, $\eta(\dot{\gamma}) = (\tau_y + k\dot{\gamma}^n) / \dot{\gamma}$ and the energy dissipation rate \eqref{eq: energy dissipation GN} becomes $(\tau_y + k\dot{\gamma}^n) \, \dot{\gamma}$; in fact, this expression holds even in unyielded regions as it predicts zero energy dissipation there\footnote{In HB unyielded regions the energy dissipation is zero due to $\dot{\tf{\gamma}} = 0$ since, by symmetry of the stress tensor, we have $\tf{\tau} : \nabla \vf{u}$ = $\frac{1}{2} \, \tf{\tau} : \nabla \vf{u} \,+\, \frac{1}{2} \, \tf{\tau} : \nabla \vf{u}$ = $\frac{1}{2} \, \tf{\tau} : \nabla \vf{u} \,+\, \frac{1}{2} \, \tf{\tau}^{\mathrm{T}} : \nabla \vf{u}^{\mathrm{T}}$ = $\tf{\tau} : ( \frac{1}{2} \nabla \vf{u} + \frac{1}{2} \nabla \vf{u}^{\mathrm{T}})$ = $\frac{1}{2} \, \tf{\tau} : \dot{\tf{\gamma}}$ = 0.} due to $\dot{\gamma} = 0$. So, for HB flow the energy balance \eqref{eq: cessation energy balance} becomes \begin{equation} \label{eq: cessation energy balance HB} \frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega} \tfrac{1}{2} \rho \vf{u} \cdot \vf{u} \, \mathrm{d}\Omega \;=\; -\int_{\Omega} \left( \tau_y \,+\, k \dot{\gamma}^n \right) \dot{\gamma} \, \mathrm{d}\Omega \end{equation} Next we will assume that the velocity at any point $\vf{x}$ at any instant $t$ can be expressed as the product of a function of time, $\chi$, and a function of space, $\vf{u}{}_0$, as \begin{equation} \label{eq: cessation velocity assumption} \vf{u}(\vf{x},t) \;=\; \chi(t-t_0) \, \vf{u}{}_0(\vf{x}) \end{equation} where $\vf{u}{}_0(\vf{x})$ is the velocity at time $t_0$, if we set $\chi(0) = 1$. We therefore assume that the velocity field retains its shape in time, but just downscales by a factor of $\chi(t-t_0)$ at time $t$ relative to time $t_0$. This assumption is correct for the cessation of Newtonian flow if $t_0$ is large enough \cite{Syrakos_2016a}, but obviously involves an error in the viscoplastic case, where the unyielded regions expand in time; nevertheless, we will assume this error to be acceptable, not invalidating the final conclusion. Then, let $V(t)$ be a measure of the velocity in the domain, e.g.\ the mean or maximum velocity magnitude. Whatever the precise choice of $V$, it follows from \eqref{eq: cessation velocity assumption} that $V(t) = \chi(t-t_0) \, V(t_0)$.
We can then normalise the velocity as \begin{equation} \label{eq: cessation velocity normalisation} \frac{\vf{u}(\vf{x},t)}{V(t)} \;=\; \frac{\chi(t-t_0) \, \vf{u}{}_0(\vf{x})}{\chi(t-t_0) \, V(t_0)} \;=\; \frac{\vf{u}{}_0(\vf{x})}{V(t_0)} \;\equiv\; \tilde{\vf{u}}{}_0(\vf{x}) \;\Rightarrow\; \vf{u}(\vf{x},t) \;=\; \tilde{\vf{u}}{}_0(\vf{x}) \, V(t) \end{equation} Substituting for $\vf{u}$ from \eqref{eq: cessation velocity normalisation} into the left- and right-hand sides of Eq.\ \eqref{eq: cessation energy balance HB} we get, respectively, \begin{equation} \label{eq: cessation lhs} \int_{\Omega} \tfrac{1}{2} \rho \vf{u} \cdot \vf{u} \, \mathrm{d}\Omega \;=\; V^2 \, \int_{\Omega} \tfrac{1}{2} \rho \tilde{\vf{u}}{}_0 \cdot \tilde{\vf{u}}{}_0 \, \mathrm{d}\Omega \;=\; C_k \, V^2 \end{equation} \begin{equation} \label{eq: cessation rhs} \int_{\Omega} \left( \tau_y \,+\, k \dot{\gamma}^n \right) \dot{\gamma} \, \mathrm{d}\Omega \;=\; \tau_y \, V \int_{\Omega} \tilde{\dot{\gamma}}_0 \, \mathrm{d}\Omega \;+\; k \, V^{n+1} \int_{\Omega} \tilde{\dot{\gamma}}_0^{n+1} \, \mathrm{d}\Omega \;=\; C_p \, \tau_y \, V \;+\; C_v \, k \, V^{n+1} \end{equation} where $\tilde{\dot{\gamma}}_0$ is the magnitude of $\tilde{\dot{\tf{\gamma}}}{}_0 \equiv \nabla \tilde{\vf{u}}{}_0 + (\nabla \tilde{\vf{u}}{}_0)^{\mathrm{T}}$ (note that it remains dimensional, with dimensions of inverse length, since $\tilde{\vf{u}}{}_0$ is dimensionless). Since $\tilde{\vf{u}}{}_0$ is not a function of $t$, neither are the constants $C_k$, $C_p$ and $C_v$, and substituting \eqref{eq: cessation lhs} and \eqref{eq: cessation rhs} into \eqref{eq: cessation energy balance HB} we are left with the following ordinary differential equation: \begin{equation} \label{eq: cessation ODE} \frac{\mathrm{d}(V^2)}{\mathrm{d}t} \;=\; -C'_p \tau_y V \;-\; C'_v k V^{n+1} \;\leq\; -C'_p \tau_y V \end{equation} where $C'_p = C_p / C_k$ and $C'_v = C_v / C_k$ are positive constants. The above inequality means that the flow will decay at least as fast as if the plastic term ($-C'_p \tau_y V$) were the only one present. In that case, solving the equation would give \begin{equation} \label{eq: cessation V plastic} V(t) \;=\; V(t_0) \;-\; \frac{C'_p \tau_y}{2} (t-t_0) \end{equation} so that the flow reaches complete cessation in finite time: $V(t_c) = 0 \Rightarrow t_c = t_0 + 2V(t_0)/(C'_p \tau_y)$. Thus, the plastic dissipation (due to $\tau_y$) alone suffices to bring the whole material to rest in finite time $t_c$; including also the viscous dissipation term, $-C'_v k V^{n+1}$, will only make this process faster. It is interesting to briefly look at the case $\tau_y = 0$, when the plastic term is absent from \eqref{eq: cessation ODE}. If $n = 1$, the flow is Newtonian and the solution of the equation is \begin{equation} \label{eq: cessation V Newtonian} V(t) \;=\; V(t_0) \, e^{-\frac{C'_v k}{2}(t-t_0)} \end{equation} This result is identical to what was found in \cite{Syrakos_2016a}, and means that the flow continuously decays but never completely ceases. If $n \neq 1$ (power-law fluid) the solution is \begin{equation} \label{eq: cessation V power-law} V(t) \;=\; \left[ V(t_0)^{1-n} \;-\; (1-n) \frac{C'_v k}{2} (t-t_0) \right]^{\frac{1}{1-n}} \end{equation} For $n>1$, the exponent $1/(1-n)$ is negative, the flow decays more slowly than in the Newtonian case as the viscosity drops with flow decay, and complete cessation is never reached.
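As a quick numerical illustration of Eq.\ \eqref{eq: cessation ODE} (a sketch only: the constants $C'_p$, $C'_v$, $\tau_y$, $k$ and the initial value $V(t_0)$ are hypothetical, not fitted to the simulations of this work), the following script integrates $\mathrm{d}V/\mathrm{d}t = -\tfrac{1}{2} C'_p \tau_y - \tfrac{1}{2} C'_v k V^n$ by the forward Euler method and reports the cessation time:
\begin{verbatim}
import numpy as np

def cessation_time(tau_y, k, n, Cp=1.0, Cv=1.0, V0=1.0, dt=1e-4, tmax=10.0):
    # forward-Euler integration of Eq. (cessation ODE), written for V > 0 as
    # dV/dt = -(Cp*tau_y)/2 - (Cv*k/2)*V**n; all constants are hypothetical
    V, t = V0, 0.0
    while V > 0.0 and t < tmax:
        V -= dt * (0.5 * Cp * tau_y + 0.5 * Cv * k * V**n)
        t += dt
    return t if V <= 0.0 else np.inf   # inf: no cessation within tmax

print(cessation_time(tau_y=1.0, k=1.0, n=1.0))  # finite, < 2*V0/(Cp*tau_y) = 2
print(cessation_time(tau_y=0.0, k=1.0, n=1.0))  # Newtonian: inf (exponential decay)
print(cessation_time(tau_y=0.0, k=1.0, n=0.5))  # n < 1 power law: finite (see below)
\end{verbatim}
For these constants the first case ceases at $t_c = 2\ln 2 \approx 1.39$, below the plastic-only bound $2V(t_0)/(C'_p \tau_y) = 2$, while the Newtonian case never reaches zero; the third case anticipates the $n < 1$ discussion that follows.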
However, in the $n < 1$ case the viscosity rises to infinity as the flow decays towards cessation; the exponent $1/(1-n)$ is positive, and cessation is reached in finite time, $V(t_c) = 0 \Rightarrow t_c = t_0 + 2 V(t_0)^{1-n} / ((1-n) C'_v k)$. Thus, the existence of a yield stress is not a prerequisite for finite cessation time (Fig.\ \ref{fig: PL cessation}). \begin{figure}[thb] \centering \includegraphics[scale=0.80]{figures/PL_KE.png} \caption{Decay of kinetic energy with time after lid motion cessation of two power-law fluids (Eq.\ \eqref{eq: HB stress} with $\tau_y$ = 0, $k$ = 1 \si{Pa.s^n}, and $\rho$ = 1000 \si{kg/m^3}) of exponents $n$ = 1.0 and 0.5, respectively. The domain and grid are the same as in Sec.\ \ref{sec: results}, and the initial condition ($t$ = 0) is the steady state flow for a lid velocity of 1 \si{m/s}. The kinetic energy is normalised by its value at $t$ = 0. The lid is brought to a halt suddenly for $n$ = 1, but gradually over a time interval of 0.1 \si{s} for $n$ = 0.5 (to avoid iterative solver convergence difficulties).} \label{fig: PL cessation} \end{figure} \section{Second order accurate reconstruction of cell centre value from face values} \label{appendix: derivation of reconstruction scheme} \setcounter{equation}{0} Suppose that the exact (or approximated to at least second-order accuracy) values $\phi_f$ of a quantity $\phi$ are known at the face centres of a cell $P$ (Fig.\ \ref{fig: grid nomenclature}), and we want to approximate from these values the value of $\phi$ at the cell centre, $\overline{\phi_P}$. One way to proceed is the following. The fact that the known values are located at the cell boundary provides an incentive to try to derive a scheme based on the divergence theorem. Let $\vf{r}(\vf{x}) = \vf{x} - \vf{P}$ be the vector function that returns the position of $\vf{x}$ relative to the centroid $\vf{P}$. Then $\nabla \cdot \vf{r} = \nabla \cdot \vf{x} = D$ where $D = 2$ or 3 in two- and three-dimensional space, respectively. We can then apply the product rule to the divergence of the product $\phi \vf{r}$: \begin{equation*} \nabla \cdot (\phi \vf{r}) \;=\; \nabla \phi \cdot \vf{r} \;+\; \phi \, \nabla \cdot \vf{r} \;=\; \nabla \phi \cdot \vf{r} \;+\; D \, \phi \end{equation*} Integrating both sides over the whole cell $P$ and applying the divergence theorem on the left-hand side we get \begin{equation} \label{eq: integrated product rule} \sum_{f=1}^F \int_{s_f} \phi \, \vf{r} \cdot \vf{n}_f \, \mathrm{d}s \;=\; \int_{\Omega_P} \nabla \phi \cdot \vf{r} \, \mathrm{d}\Omega \;+\; D \int_{\Omega_P} \phi \, \mathrm{d} \Omega \end{equation} This is an exact equation, but now we will approximate each of the integrals by the midpoint rule: \begin{align} \label{eq: midpoint rule term 1} \int_{s_f} \phi \, \vf{r} \cdot \vf{n}_f \, \mathrm{d}s \;&=\; \phi(\vf{c}_f) \, (\vf{r}(\vf{c}_f) \cdot \vf{n}_f) \, s_f \;+\; O(h^2) \, s_f \\ \label{eq: midpoint rule term 2} \int_{\Omega_P} \nabla \phi \cdot \vf{r} \, \mathrm{d} \Omega \;&=\; \nabla \phi(\vf{P}) \cdot \vf{r}(\vf{P}) \, \Omega_P \;+\; O(h^2) \, \Omega_P \;=\; 0 \;+\; O(h^2) \, \Omega_P \\ \label{eq: midpoint rule term 3} \int_{\Omega_P} \phi \, \mathrm{d} \Omega \;&=\; \phi(\vf{P}) \, \Omega_P \;+\; O(h^2) \, \Omega_P \end{align} where in \eqref{eq: midpoint rule term 2} we have used that $\vf{r}(\vf{P}) = \vf{P} - \vf{P} = 0$.
Substituting these into \eqref{eq: integrated product rule} we get \begin{equation} \label{eq: reconstruction error exaggerated} \phi(\vf{P}) \;=\; \underbrace{\frac{1}{D \, \Omega_P} \sum_{f=1}^F \left[ \phi(\vf{c}_f) \, s_f \, \vf{r}(\vf{c}_f) \cdot \vf{n}_f \right]}_{\overline{\phi_P}} \;+\; \underbrace{\sum_{f=1}^F O(h^2) \frac{s_f}{\Omega_P}}_{= O(h)} \end{equation} The first term on the right hand side is the approximation $\overline{\phi_P}$, as can be seen by comparing to Eq.\ \eqref{eq: stress reconstruction}. The second term on the right hand side is the difference between this approximation and the exact value $\phi(\vf{P})$, i.e.\ the error, which is $O(h)$ because $s_f/\Omega_P = O(h^{-1})$. Therefore, the approximation \eqref{eq: stress reconstruction} appears to be only first-order accurate, of the same order as simply setting $\overline{\phi_P} = \phi(\vf{c}_f)$ for any arbitrary face $f$! Is it possible that this error estimation is too pessimistic and does not account for some error cancellation that occurs when adding the truncation errors of the midpoint rule approximations \eqref{eq: midpoint rule term 1} -- \eqref{eq: midpoint rule term 3}? In fact this happens to be the case\footnote{A calculation involving Taylor expansions along the faces shows that, in the sum of the truncation errors of Eq.\ \eqref{eq: midpoint rule term 1} over all cell faces, their leading order terms cancel out.} and it turns out that the approximation \eqref{eq: stress reconstruction} is second-order accurate, which is easiest to prove by showing that it is exact for linear functions. Indeed, if it is exact for linear functions then expanding $\phi$ in a Taylor series about $\vf{P}$ we find that the error is \begin{align*} \phi(\vf{x}) \;&=\; \phi(\vf{P}) \;+\; \nabla \phi (\vf{P}) \cdot \vf{r}(\vf{x}) \;+\; O(h^2) \;\Rightarrow \\ \overline{\phi_P} \;&=\; \overline{[\phi(\vf{P}) \;+\; \nabla \phi (\vf{P}) \cdot \vf{r}(\vf{x})]_P} \;+\; \overline{O(h^2)_P} \\ &=\; \phi(\vf{P}) \;+\; \nabla \phi(\vf{P}) \cdot \vf{r}(\vf{P}) \;+\; \overline{O(h^2)_P} \\ &=\; \phi(\vf{P}) \;+\; O(h^2) \end{align*} where we have used the facts that our interpolation operator \eqref{eq: stress reconstruction} is linear ($\overline{[\phi + \psi]_P} = \overline{\phi_P} + \overline{\psi_P}$), that it is exact for linear functions (and $\phi(\vf{P}) + \nabla \phi(\vf{P}) \cdot \vf{r}(\vf{x})$ is a linear function), that $\vf{r}(\vf{P}) = 0$, and finally that $\overline{O(h^2)_P} = O(h^2)$ (which can be seen by replacing $\tau_{ij,f}$ with $O(h^2)$ in the formula \eqref{eq: stress reconstruction} and noting that both $s_f (\vf{c}_f-\vf{P})\cdot \vf{n}_f$ and $\Omega_P$ are of the same order, $O(h^2)$ in 2D and $O(h^3)$ in 3D). It remains to show that the interpolation operator is exact for linear functions. Suppose a linear function $\phi(\vf{x}) = \phi(\vf{P}) + \vf{g} \cdot \vf{r}(\vf{x})$ where $\vf{g}$ is a constant vector (it is the gradient of $\phi$). We want to show that $\overline{\phi_P} = \phi(\vf{P})$. 
Applying the approximation \eqref{eq: stress reconstruction} to $\phi$ we get \begin{align} \nonumber \overline{\phi_P} \;&=\; \frac{1}{D\,\Omega_P} \sum_{f=1}^F \left[ \left( \phi(\vf{P}) + \vf{g} \cdot \vf{r}(\vf{c}_f) \right) (\vf{r}(\vf{c}_f) \cdot \vf{n}_f)\,s_f \right] \\ \label{eq: interpolate linear function} &=\; \phi(\vf{P}) \, \frac{1}{D\,\Omega_P} \sum_{f=1}^F \left[ (\vf{r}(\vf{c}_f) \cdot \vf{n}_f) \, s_f \right] \;+\; \frac{1}{D\,\Omega_P} \, \vf{g} \cdot \sum_{f=1}^F \left[ \vf{r}(\vf{c}_f) (\vf{r}(\vf{c}_f) \cdot \vf{n}_f) \, s_f \right] \end{align} To proceed further we need a couple of geometrical results. Figure \ref{fig: geometry theorems} shows cell $P$ divided into $F$ triangles by the dashed lines which connect the centroid $\vf{P}$ to the vertices $\vf{v}_f$. Consider the triangle $(\vf{P},\vf{v}_f,\vf{v}_{f+1})$, which has face $f$ as one of its sides. The product $\vf{r}(\vf{c}_f) \cdot \vf{n}_f$ is the perpendicular distance from $\vf{P}$ to face $f$, and therefore the product $(\vf{r}(\vf{c}_f) \cdot \vf{n}_f) \, s_f$ is equal to the shaded area of Fig.\ \ref{fig: geometry theorems}, which is enclosed in a rectangle with one side coinciding with face $f$, another parallel side through $\vf{P}$, and two perpendicular sides passing through the vertices $\vf{v}_f$ and $\vf{v}_{f+1}$, respectively. The area of the triangle $(\vf{P},\vf{v}_f,\vf{v}_{f+1})$ is half that of this rectangle (in three dimensions, the volume of the cone-like shape obtained by joining $\vf{P}$ with the edges of face $f$ is one third of the volume of the shaded rectangular box). By adding the areas of all triangles we get the total area of cell $P$\footnote{Result \eqref{eq: volume of cell} is also obtainable by setting $\phi = 1$ in Eq.\ \eqref{eq: integrated product rule}.}: \begin{equation} \label{eq: volume of cell} \frac{1}{D} \sum_{f=1}^F \left( \vf{r}(\vf{c}_f) \cdot \vf{n}_f \right) s_f \;=\; \Omega_P \end{equation} \begin{figure}[thb] \centering \includegraphics[scale=1.00]{figures/geometry_theorems.pdf} \caption{Notation concerning the geometry of cell $P$.} \label{fig: geometry theorems} \end{figure} A second geometric result that will be needed is the following. The centroid of the triangle $(\vf{P},\vf{v}_f,\vf{v}_{f+1})$ is $(\vf{P} + \vf{v}_f + \vf{v}_{f+1}) / 3 = (\vf{P} + 2 \vf{c}_f) / 3$. The centroid of the whole cell, $\vf{P}$, is equal to the sum of the individual centroids of the triangles, $\vf{P}_f$ say, weighted by their areas, $\Omega_f = (1/2)(\vf{r}(\vf{c}_f) \cdot \vf{n}_f)s_f$: \begin{equation*} \nonumber \vf{P} \;=\; \frac{1}{\Omega_P} \sum_{f=1}^F \vf{P}_f \, \Omega_f \;\; \Rightarrow \;\; \vf{P} \Omega_P \;=\; \sum_{f=1}^F \frac{1}{3}\left( \vf{P} + 2\vf{c}_f \right) \frac{1}{2} \left(\vf{r}(\vf{c}_f) \cdot \vf{n}_f \right) s_f \end{equation*} Substituting for $\Omega_P$ from \eqref{eq: volume of cell} (with $D=2$), moving everything to the left hand side, and merging the two sums, we arrive at \begin{equation} \label{eq: geometric result 2} \sum_{f=1}^F \vf{r}(\vf{c}_f) \left( \vf{r}(\vf{c}_f) \cdot \vf{n}_f \right) s_f \;=\; 0 \end{equation} where we have used also that $\vf{c}_f - \vf{P} = \vf{r}(\vf{c}_f)$. 
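Both geometric identities, as well as the exactness of the reconstruction for linear functions, can be verified numerically. The short script below (a sketch; the polygon vertices and the linear function are arbitrary) checks Eqs.\ \eqref{eq: volume of cell} and \eqref{eq: geometric result 2} for a two-dimensional polygon and confirms that the scheme \eqref{eq: stress reconstruction} reproduces a linear function exactly at the centroid:
\begin{verbatim}
import numpy as np

# arbitrary counter-clockwise polygon (cell P)
v = np.array([[0.0, 0.0], [2.0, 0.2], [2.5, 1.5], [1.0, 2.2], [-0.3, 1.0]])
e = np.roll(v, -1, axis=0) - v                         # edge vectors
s = np.linalg.norm(e, axis=1)                          # face areas s_f
n = np.column_stack((e[:, 1], -e[:, 0])) / s[:, None]  # outward unit normals
c = 0.5 * (v + np.roll(v, -1, axis=0))                 # face centres c_f

# cell area and centroid (shoelace formulas)
cross = v[:, 0] * np.roll(v[:, 1], -1) - np.roll(v[:, 0], -1) * v[:, 1]
omega = 0.5 * cross.sum()
P = (v + np.roll(v, -1, axis=0)).T @ cross / (6.0 * omega)

r = c - P
rn = np.einsum('fi,fi->f', r, n) * s                   # (r(c_f).n_f) s_f
print(rn.sum() / 2.0, omega)                           # Eq. (volume of cell), D = 2
print((r * rn[:, None]).sum(axis=0))                   # Eq. (geometric result 2): ~ 0

g, a = np.array([0.7, -1.3]), 0.4                      # linear phi = a + g.x
phi_rec = ((a + c @ g) * rn).sum() / (2.0 * omega)     # Eq. (stress reconstruction)
print(phi_rec, a + P @ g)                              # identical
\end{verbatim}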
In three dimensions, the exact same equation \eqref{eq: geometric result 2} holds, but is derived by noting that the volume of the cone- or pyramid-like shape whose base is face $f$ and its apex is at $\vf{P}$ is $\Omega_f = (1/3) (\vf{r}(\vf{c}_f) \cdot \vf{n}_f)s_f$ ($D = 3$ in Eq.\ \eqref{eq: volume of cell}) while its centroid is $\vf{P}_f = (\vf{P} + 3\vf{c}_f)/4$. Finally, substituting Eqs.\ \eqref{eq: volume of cell} and \eqref{eq: geometric result 2} in the first and second terms, respectively, of the right-hand side of Eq.\ \eqref{eq: interpolate linear function}, we arrive at $\overline{\phi_P} = \phi(\vf{P})$: the interpolation scheme \eqref{eq: stress reconstruction} is exact for linear functions, and is therefore second-order accurate. \end{appendices} \bibliographystyle{ieeetr}
\section{Introduction} \label{sectionintroduction} In recent years many sophisticated methods have been developed to calculate higher order (``multi-loop'') QCD radiative corrections for high energy quantities for which it is believed that an expansion in terms of Feynman diagrams with a certain number of loops represents an excellent approximation to the predictions of quantum chromodynamics. Notable examples are the hadronic cross section in $e^+e^-$~collisions at LEP energies or the (photonic) vacuum polarization function. In the high energy limit, where the quarks can be treated as massless, these quantities have been calculated up to three loops~\cite{Chet1,Gorishny1,Surguladze1,Chet2}. However, future experiments (NLC, B-factory and $\tau$-charm factory) will test the vacuum polarization function and the hadronic cross section also in the kinematic regime close to heavy quark-antiquark thresholds, where bound state effects become important. The threshold regime is characterized by the relation \begin{equation} |\beta|\, \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\, \alpha_s \,, \quad \beta \, \equiv \, \sqrt{1-4\,\frac{M_Q^2}{q^2+i\epsilon}} \,, \end{equation} where $M_Q$ is the heavy quark mass and $\sqrt{q^2}$ denotes the c.m. energy. In the process of heavy quark-antiquark production above threshold, $q^2 > 4 M_Q^2$, $\beta$ is equal to the velocity of the quarks in the c.m. frame. We therefore call $\beta$ ``velocity'' in the remainder of this work, even if $q^2 < 4 M_Q^2$. In the threshold regime the accuracy of theoretical predictions to the hadronic cross section and to the vacuum polarization function is much poorer than for high energies. Aside from definitely non-perturbative effects (in the sense of ``not calculable from first principles in QCD''), the breakdown of the perturbative expansion in the number of loops makes any theoretical description in the threshold region difficult. This breakdown of the perturbation series is indicated by power ($1/\beta$) or logarithmic ($\ln\beta$) divergences in the velocity which blow up if evaluated very close to the threshold point. Some of these divergences ({\it e.g.} the $\alpha^n/\beta^n$, $n>1$, Coulomb singularities in the Dirac form factor $F_1$ describing the electromagnetic vertex) can be treated by using well-known results from non-relativistic quantum mechanics, but a systematic way to calculate higher-order corrections in the threshold regime seems to be far from obvious, at least from the point of view of covariant perturbation theory in the number of loops. This type of perturbation theory will be referred to as ``conventional perturbation theory'' from now on in this work. \par On the other hand, there are many examples of heavy quark-antiquark bound state properties where the complete knowledge of higher-order corrections would be extremely valuable. Most of the present analyses (see {\it e.g.}~\cite{Bodwin1}) are based on leading and next-to-leading order calculations. Here, higher-order corrections could significantly increase the precision of present theoretical calculations, but could also serve as an instrument to test how trustworthy certain theoretical predictions are and to estimate the size of theoretical uncertainties. 
Further, they could contribute toward a better understanding of the role of non-perturbative effects (in the sense mentioned above) in apparent discrepancies between the determination of the size of the strong coupling from the $\Upsilon(1S)$ decay rates~\cite{Hinchliffe1} and QCD sum rule calculations for the $\Upsilon$ system~\cite{Voloshin1}\footnote{ During completion of this paper we became aware of a new publication, where QCD sum rules for the $\Upsilon$ system are used to determine the strong coupling and the bottom quark mass~\cite{Pich1}. We will give a brief comment on this publication and on~\cite{Voloshin1} at the end of Section~\ref{SectionThreshold}. } on the one hand, and from the LEP experiments on the other. \par The framework in which bound state properties and also dynamical quantities in the threshold regime can be calculated in a systematic way to arbitrary precision is non-relativistic quantum chromodynamics (NRQCD)~\cite{Bodwin1}, which is based on the concept of effective field theories. In the kinematic regime where bound states occur and slightly above the threshold, NRQCD is superior to conventional perturbation theory in QCD and (at least from the practical point of view) also to the Bethe-Salpeter approach, because it allows for an easy and transparent separation of long- and short-distance physics contributions. This is much more difficult and cumbersome with the former two methods. However, we would like to emphasize that all methods lead to the same results. As an effective field theory, NRQCD needs the input from short-distance QCD in order to produce viable predictions in accordance with quantum chromodynamics. This adjustment of NRQCD to QCD is called the {\it matching procedure} and generally requires multi-loop calculations in the framework of conventional perturbation theory at the level of the intended accuracy. \par In this work we demonstrate the efficient use of the concept of effective field theories to calculate the QED vacuum polarization function in the threshold region to ${\cal{O}}(\alpha^2)$ accuracy. In order to convince the reader of the simplicity of the approach we use our result to recalculate the vacuum polarization contributions to the ${\cal{O}}(\alpha^6)$ hyperfine splitting of the positronium ground state energy level without referring back to the Bethe-Salpeter equation. Differences between our result and an older calculation~\cite{Barbieri1,Barbieri2} are pointed out. We analyse the vacuum polarization function at the bound state energies and above threshold and, in particular, concentrate on the size of the ${\cal{O}}(\alpha^2)$ corrections. It is shown that the size of the ${\cal{O}}(\alpha^2)$ corrections in the threshold regime is of order $\alpha^2$ rather than $\alpha^2/\pi^2$, which is a consequence of their long-distance origin. In a second step our results for the QED vacuum polarization function are applied to calculate ${\cal{O}}(C_F^2\alpha_s^2)$ (next-to-next-to-leading order) Darwin corrections to the heavy quark-antiquark $l=0$ bound state wave functions at the origin and to the cross section of heavy quark-antiquark production in $e^+e^-$ annihilation (via a virtual photon) in the threshold region. The corresponding unperturbed quantities are the solutions of the Schr\"odinger equation for a stable quark-antiquark pair with a Coulomb-like QCD potential, $V_{\mbox{\tiny QCD}}(r)=-C_F\alpha_s/r$.
It is demonstrated that the size of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections is also of order $\alpha_s^2$ rather than $\alpha_s^2/\pi^2$. We present simple physical arguments that the scale of the strong coupling governing the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections is of order $C_F M_Q \alpha_s$ and we analyze the size of the corrections for the $t\bar t$, $b\bar b$ and $c\bar c$ systems assuming that the size of the Darwin corrections can be taken as an order of magnitude estimate for all (yet unknown) ${\cal{O}}(\alpha_s^2)$ corrections. The sign of the latter corrections and their actual numerical values can, of course, only be determined by an explicit calculation of all ${\cal{O}}(\alpha_s^2)$ contributions. \par At this point we want to emphasize that our approach does not depend on any model-like assumptions, but represents a first principles QCD calculation. The only assumptions (for heavy quarks) are that (i) the instantaneous ({\it i.e.} uncrossed) Coulomb-like exchange of longitudinal gluons (in Coulomb gauge) between the heavy quarks leads to the dominant contributions in the threshold regime and is the main reason for heavy quark-antiquark bound state formation and that (ii) all further interactions can be treated as a perturbation. We believe that the actual size of the ${\cal{O}}(\alpha_s^2)$ corrections can then serve as an important {\it a posteriori} justification or falsification of these assumptions for the different heavy quark-antiquark systems. Finally, we address the question whether bound state effects can lead to large corrections to the vacuum polarization function in kinematical regions far from the actual threshold regime. We come to the conclusion that such corrections do not exist. \par The program for this work is organized as follows: In Section~\ref{SectionCalculation} the calculation of the QED vacuum polarization function to ${\cal{O}}(\alpha^2)$ accuracy in the threshold region is presented. We define a renormalized version of the Coulomb Green function for zero distances, which allows for application of (textbook quantum mechanics) time-independent perturbation theory to determine higher-order corrections to wave functions and energy levels. For completeness we also give an expression for the QED vacuum polarization function valid for all energies with ${\cal{O}}(\alpha^2)$ accuracy. In Section~\ref{SectionAnalysis} the QED vacuum polarization function in the threshold region is analysed with special emphasis on the size of the ${\cal{O}}(\alpha^2)$ corrections, and the ${\cal{O}}(\alpha^6)$ vacuum polarization contributions to the positronium ground state hyperfine splitting are calculated. Section~\ref{SectionQCD} is devoted to the determination and analysis of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the bound state wave functions at the origin and the production cross section in the threshold regime for the different heavy quark-antiquark systems. In Section~\ref{SectionThreshold} we comment on the existence of threshold effects far from threshold and Section~\ref{SectionSummary} contains a summary. 
\par \vspace{0.5cm} \section{Determination of the QED Vacuum Polarization Function \\ in the Threshold Region} \label{SectionCalculation} We consider the QED vacuum polarization function $\Pi$ defined through the one-particle-irreducible current-current correlator \begin{equation} \big(\,q^2\,g^{\mu\nu}\,-\,q^\mu q^\nu\,\big)\,\Pi(q^2) \, \equiv \, -i \int \!{\rm d}^4x\,e^{i\,qx}\, \langle 0|\,T\,j^\mu(x)\,j^\nu(0)\,|0 \rangle \,, \end{equation} where $j^\mu(x)=i e \bar{\Psi}(x)\gamma^\mu\Psi(x)$ denotes the electromagnetic current. $\Psi$ represents the Dirac field of the electron with charge $e$. According to the standard subtraction procedure, $\Pi$ vanishes for $q^2=0$. It has been shown in~\cite{Braun1} that in the kinematical region close to the $e^+e^-$ threshold point $q^2=4\,M^2$, where $M$ denotes the electron mass, the current-current correlator $\Pi$ is directly related to the Green function $G^0_E(\vec x,\vec{x}^\prime)$ of the positronium Schr\"odinger equation \begin{equation} \bigg[\, -\frac{1}{M}\vec{\nabla}^2_{\vec x} - \frac{\alpha}{|\vec x|} - E \,\bigg]\,G^0_E(\vec{x},\vec{x}^\prime) \, = \, \delta^{(3)}(\vec{x}-\vec{x}^\prime) \,, \end{equation} where $E$ denotes the energy relative to the threshold point, $E\equiv\sqrt{q^2}-2M$, and $\alpha$ is the fine structure constant. Explicit analytic expressions for the Green function have been calculated in a number of classical papers~\cite{Wichmann1}. The proper relation between the vacuum polarization function and the Green function in the threshold region reads~\cite{Braun1} \begin{equation} \Pi_{Thr}^{0,{\cal{O}}(\alpha^2)}(q^2) \, = \, \frac{8\,\alpha\,\pi}{q^2}\,G^0_E(0,0). \label{PGunrenormalized} \end{equation} Because we are only interested in ${\cal{O}}(\alpha^2)$ accuracy in the threshold region, we can effectively replace the factor $1/q^2$ in eq.~(\ref{PGunrenormalized}) by $1/4M^2$. \par For illustration, let us now examine the one- and two-loop contributions to the vacuum polarization function and the corresponding expression from non-relativistic quantum mechanics according to eq.~(\ref{PGunrenormalized}). The one- and two-loop contributions to $\Pi$ have been known for quite a long time for all energy and mass assignments~\cite{Kallensabry1,Schwinger1, Barbieri3}. Far from the threshold point those loop results provide an excellent approximation to the QED vacuum polarization function at the ${\cal{O}}(\alpha^2)$ accuracy level, \begin{equation} \Pi^{\mbox{\tiny 2 loop}}_{\mbox{\tiny QED}}(q^2) \, = \, \Big(\frac{\alpha}{\pi}\Big)\,\Pi^{(1)}(q^2) \, + \, \Big(\frac{\alpha}{\pi}\Big)^2\,\Pi^{(2)}(q^2) \, + \, {\cal{O}}\bigg(\Big(\frac{\alpha}{\pi}\Big)^3\bigg) \,.
\label{Piloops} \end{equation} In the kinematic domain where $\alpha\ll|\beta|\ll1$, the expansion in terms of the number of loops is still an adequate approximation, and we are allowed to expand the coefficients in eq.~(\ref{Piloops}) for small velocities, \begin{eqnarray} \Big(\frac{\alpha}{\pi}\Big)\,\Pi^{(1)}(q^2) & \stackrel{|\beta|\to 0}{=} & \alpha\,\bigg[\, \frac{8}{9\,\pi } + \frac{i}{2}\,\beta \,\bigg] + {\cal{O}}(\alpha\,\beta^2) \,, \label{Pi1loopexpanded} \\ [3mm] \Big(\frac{\alpha}{\pi}\Big)^2\,\Pi^{(2)}(q^2) & \stackrel{|\beta|\to 0}{=} & {{\alpha}^2}\,\bigg[\, \frac{1}{4\,{{\pi }^2}}\, \left( 3 - \frac{21}{2}\,\zeta_3 \right) + \frac{11}{32} - \frac{3}{4}\,\ln 2 - \frac{1}{2}\,\ln(-i\,\beta ) \,\bigg] + {\cal{O}}(\alpha^2\,\beta) \,, \label{Pi2loopexpanded} \end{eqnarray} where $\beta=\sqrt{1-4M^2/(q^2+i\epsilon)}$ and $\zeta_3=1.202056903\ldots$. In eq.~(\ref{Pi1loopexpanded}) the ${\cal{O}}(\alpha\beta)$ contribution is also displayed, allowing for a check of the normalization during the matching procedure. Whereas the one-loop contribution, eq.~(\ref{Pi1loopexpanded}), can be evaluated for $\beta\to 0$, indicating that the ${\cal{O}}(\alpha)$ contribution of the vacuum polarization function is of pure short-distance origin, the two-loop expression, eq.~(\ref{Pi2loopexpanded}), diverges logarithmically for vanishing $\beta$. This shows that beyond ${\cal{O}}(\alpha)$ accuracy conventional perturbation theory is inadequate for $|\beta|\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\alpha$ due to long-distance effects which cannot be calculated in terms of a finite number of loop diagrams. These long-distance effects can, on the other hand, be described adequately by the vacuum polarization function calculated in the framework of quantum mechanics, which {\it per constructionem} is valid in the non-relativistic limit. However, the vacuum polarization function as defined in eq.~(\ref{PGunrenormalized}) gives a divergent result,\footnote{ In eq.~(\ref{PGunrenormalizeddiv}) we can identify $\beta=\sqrt{1-4 M^2/(q^2+i\epsilon)}$ with $\sqrt{(E+i\epsilon)/M}$ because we are interested in ${\cal{O}}(\alpha^2)$ accuracy only.} \begin{eqnarray} \Pi_{Thr}^{0,{\cal{O}}(\alpha^2)}(q^2) & = & \frac{2\,\alpha\,\pi}{M^2}\,\lim_{r\to 0} G^0_E(0,r) \nonumber\\[2mm] & = & \frac{2\,\alpha\,\pi}{M^2}\,\lim_{r\to 0} \bigg[\, -i\,\frac{M^2\,\beta}{2\,\pi}\,e^{i M \beta r}\,\int_0^\infty e^{2 i M \beta r t}\,\bigg(\frac{1+t}{t}\bigg)^{i \frac{\alpha}{2 \beta}}\, {\rm d}t \,\bigg] \nonumber \\[2mm] & = & \alpha\,\bigg[\, \frac{1}{2\,M\,r} + \frac{i}{2}\,\beta \,\bigg] + {{\alpha}^2}\,\bigg[\, \frac{1}{2} - \gamma - \frac{1}{2}\,\ln(-2\,i\,M\,r\,\beta) - \frac{1}{2}\,{\Psi}\Big(1 - i\,\frac{\alpha}{2\,\beta}\Big) \,\bigg] \,, \label{PGunrenormalizeddiv} \end{eqnarray} where $\gamma$ is the Euler constant and $\Psi$ represents the digamma function, \begin{eqnarray} \gamma & = & \lim_{n\to\infty}\,\bigg[\, -\ln n \, + \, \sum_{i=1}^{n}\frac{1}{i} \,\bigg] \, = \, 0.5772156649\ldots \,, \nonumber \\[3mm] \Psi(z) & = & \frac{d}{d z}\,\ln\Gamma(z) \,. \nonumber \end{eqnarray} The divergences in eq.~(\ref{PGunrenormalizeddiv}) can be easily understood if non-relativistic quantum mechanics is considered as a low-energy effective theory which is capable of describing long-distance physics close to the threshold (characterized by momenta below the scale of the electron mass) but does not know {\it per se} any short-distance effects coming from momenta beyond the scale of the electron mass. 
This lack of information is indicated in eq.~(\ref{PGunrenormalizeddiv}) by short distance (UV) divergences and has to be cured by matching non-relativistic quantum mechanics to QED. The result of this matching procedure is called ``non-relativistic quantum electrodynamics'' (NRQED)~\cite{Caswell1}. In this light we have to regard relations~(\ref{PGunrenormalized}) and (\ref{PGunrenormalizeddiv}) as unrenormalized, which we have indicated by using the superscripts $0$. \par In the common approach the Lagrangian of NRQED is obtained by introducing higher dimensional operators in accordance with the underlying symmetries of the theory and by matching them to predictions within conventional perturbation theory in QED in the kinematical regime $\alpha\ll|\beta|\ll 1$ where both NRQED and QED are valid and must give the same results\footnote{ For $\alpha\ll|\beta|\ll 1$ conventional perturbation theory in QED is valid because $\alpha$ represents the smallest parameter, whereas NRQED is valid because $|\beta|$ is much smaller than the speed of light. }. In general this leads to divergent renormalization constants multiplying the NRQED operators which then cancel the divergences in eq.~(\ref{PGunrenormalizeddiv}) and add the correct finite short-distance contributions. In our case, however, the explicit determination of the renormalization constants is not necessary because we can match the vacuum polarization function obtained from unrenormalized NRQED directly to the one- and two-loop expressions from QED, eqs.~(\ref{Pi1loopexpanded}) and (\ref{Pi2loopexpanded}). This ``direct matching'' method has the advantage that the regularization of the UV divergences in eq.~(\ref{PGunrenormalizeddiv}) can be performed in a quite sloppy way, but has the disadvantage that it is of no value to determine other quantities than the vacuum polarization function itself. To arrive at the vacuum polarization function in the threshold region we just have to replace the $\beta$-independent and divergent contributions of the ${\cal{O}}(\alpha)$ and ${\cal{O}}(\alpha^2)$ coefficients of expression~(\ref{PGunrenormalizeddiv}) in an expansion for small $\alpha$ \begin{eqnarray} \Pi_{Thr}^{0,{\cal{O}}(\alpha^2)}(q^2) & = & \alpha\,\bigg[\, \frac{1}{2\,M\,r} + \frac{i}{2}\,\beta \,\bigg] + {{\alpha}^2}\, \bigg[\, \frac{1}{2} - \frac{1}{2}\,{\gamma} - \frac{1}{2}\,\ln (-2\,i\,M\,r\,\beta) \,\bigg] \,\nonumber\, \\ [2mm] & & \mbox{} + A(\alpha,\beta) \,, \label{PGunrenormalizeddiv2} \\ [4mm] A(\alpha,\beta) & \equiv & -\frac{\alpha^2}{2}\,\bigg[\, \gamma + \Psi\Big(1-i\frac{\alpha}{2\,\beta}\Big) \,\bigg] \label{Adefinition} \end{eqnarray} by the corresponding $\beta$-independent and finite contributions in eqs.~(\ref{Pi1loopexpanded}), (\ref{Pi2loopexpanded}). The correct normalization between the contributions coming from the loop calculations and from non-relativistic quantum mechanics can be checked explicitly by observing that the coefficients in front of the $\beta$-dependent ${\cal{O}}(\alpha)$ and ${\cal{O}}(\alpha^2)$ contributions in eqs.~(\ref{Pi1loopexpanded}), (\ref{Pi2loopexpanded}) and (\ref{PGunrenormalizeddiv2}) are identical. At this point we would like to note that the function $A$ contains only contributions of order $\alpha^3$ and higher for an expansion in small $\alpha$. We shall return to this point later. 
The final result for the vacuum polarization function valid to ${\cal{O}}(\alpha^2)$ accuracy in the threshold region then reads \begin{eqnarray} \Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2) & = & \alpha\,\bigg[\, \frac{8}{9\,\pi } + \frac{i}{2}\,\beta \,\bigg] \,\nonumber\, \\ [2mm] & & \mbox{} + {{\alpha}^2}\, \bigg[\, \frac{1}{4\,{{\pi }^2}}\, \left( 3 - {{21}\over 2}\,\zeta_3 \right) + \frac{11}{32} - \frac{3}{4}\,\ln 2 - \frac{1}{2}\,\ln(- i\,\beta ) \,\bigg] \,\nonumber\,\\ [2mm] & & \mbox{} + A(\alpha,\beta) \,. \label{Pithreshfinal} \end{eqnarray} It is an interesting fact that this result can be obtained directly from the one- and two-loop results, eqs.~(\ref{Pi1loopexpanded}) and (\ref{Pi2loopexpanded}), by the replacement \begin{eqnarray} \ln(-i\,\beta) \, \longrightarrow \, H(\alpha,\beta) & \equiv & \gamma + \ln(-i\,\beta) + \Psi\Big(1-i\frac{\alpha}{2\,\beta}\Big) \nonumber \\[2mm] & = & \ln(-i\,\beta) - \frac{2}{\alpha^2}\,A(\alpha,\beta) \,. \label{Hdefinition} \end{eqnarray} As mentioned earlier, the function $A$ is of order $\alpha^3$ for $\alpha\ll|\beta|$. In the language of Feynman diagrams $A$ arises from diagrams with instantaneous Coulomb exchange of two and more longitudinal photons (in Coulomb gauge) between the electron-positron pair. However, if $|\beta|$ is smaller than $\alpha$ ($\beta$ being real), then $A$ is of order $\alpha^2$, \begin{equation} A(\alpha,\beta) \,\stackrel{|\beta|\ll\alpha}{=}\, \frac{\alpha^2}{2}\,\bigg[\, \ln\Big(i\,\frac{2\,\beta}{\alpha}\Big) - \gamma \,\bigg] \, + \, {\cal{O}}(\alpha\,\beta) \,. \label{Asmallb} \end{equation} At this point it is illustrative to examine the limits $\alpha\ll\beta$ and $\beta\ll\alpha$ for the function $H$, defined in eq.~(\ref{Hdefinition}), for real and positive values of $\beta$: \begin{eqnarray} H(\alpha,\beta) & \stackrel{\alpha\ll\beta}{=} & \ln(-i\,\beta) + {\cal{O}}\Big(\frac{\alpha}{\beta}\Big) \,, \\[2mm] H(\alpha,\beta) & \stackrel{\beta\ll\alpha}{=} & \ln\Big(\frac{\alpha}{2}\Big) + \gamma -i\,\pi + {\cal{O}}\Big(\frac{\beta}{\alpha}\Big) \,. \label{Hsmallbeta} \end{eqnarray} It is evident that the function $H$ interpolates between a $\ln\beta$-behaviour in the region where conventional perturbation theory is valid and a constant with a logarithm of $\alpha$ for $\beta/\alpha\to 0$. This leads to a finite value for $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ at the threshold point. As we will see in the next section, $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ has singularities at the positronium energy levels indicating that the breakdown of conventional perturbation theory is directly related to the formation of bound states of the virtual $e^+e^-$ pair~\cite{Braun1}. \par Based on result~(\ref{Pithreshfinal}) we are now able to define a renormalized expression for the zero-distance Green function by inverting relation~(\ref{PGunrenormalized}) \begin{equation} G^R_E(0,0) \, \equiv \, \frac{M^2}{2\,\alpha\,\pi}\, \Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2) \,, \label{Greenfunctionrenormalized} \end{equation} As we will show later, this renormalized zero-distance Green function can be used for the calculation of higher-order corrections in time-independent perturbation theory. 
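\par The interpolating behaviour of the function $H$ is easily checked numerically. The following small script (a sketch; the chosen values of $\beta/\alpha$ are arbitrary, and the {\tt mpmath} library is assumed for the complex digamma function) evaluates eq.~(\ref{Hdefinition}) above threshold:
\begin{verbatim}
from mpmath import mp, mpc, log, digamma, euler

mp.dps = 15
alpha = 1.0 / 137

def H(beta):
    # eq. (Hdefinition): gamma + ln(-i beta) + Psi(1 - i alpha/(2 beta))
    b = mpc(beta)
    return euler + log(-1j * b) + digamma(1 - 1j * alpha / (2 * b))

for ratio in (100, 10, 1, 0.1, 0.01):        # beta/alpha, beta real positive
    print(ratio, complex(H(ratio * alpha)))
\end{verbatim}
For $\beta\gg\alpha$ the output approaches $\ln\beta - i\pi/2 = \ln(-i\beta)$, while for $\beta\ll\alpha$ it settles to the constant $\ln(\alpha/2) + \gamma - i\pi$, in accordance with the two limits displayed above.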
\par For completeness we also present the QED vacuum polarization function with ${\cal{O}}(\alpha^2)$ accuracy for all energies, \begin{eqnarray} \lefteqn{ \Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}(q^2) \, = \, \Big(\frac{\alpha}{\pi}\Big)\,\Pi^{(1)}(q^2) \, + \, \Big(\frac{\alpha}{\pi}\Big)^2\,\Pi^{(2)}(q^2) \, + \, A(\alpha,\beta) } \nonumber\\[2mm] & = & \Big(\frac{\alpha}{\pi}\Big)\,\bigg\{\, \frac{8 - 3\,{{\beta}^2}}{9} + \frac{\beta\,\left( 3 - {{\beta}^2} \right) }{6 }\,\ln(-p) \,\bigg\} \nonumber\\[2mm] & & + \Big(\frac{\alpha}{\pi}\Big)^2\,\bigg\{\, \frac{18 - 13\,{{\beta}^2}}{24} + \frac{\beta\,\left( 5 - 3\,{{\beta}^2} \right) }{8}\,\ln(-p) - \frac{\left( 1 - \beta \right) \, \left( 33 - 39\,\beta - 17\,{{\beta}^2} + 7\,{{\beta}^3} \right) }{96}\, {{\ln(-p)}^2}\,\nonumber\,\\ & & \mbox{}\,\qquad + \frac{\beta\,\left( -3 + {{\beta}^2} \right) }{3}\, \bigg[\, 2\,\ln(1 - p)\,\ln(-p) + \ln(-p)\,\ln(1 + p) + \mbox{Li}_2(-p) + 2\,\mbox{Li}_2(p) \,\bigg] \,\nonumber\,\\ & & \mbox{}\,\qquad + \frac{\left( 3 - {{\beta}^2} \right) \,\left( 1 + {{\beta}^2} \right) }{12 }\,\bigg[\, 2\,\ln(1 - p)\,{{\ln(-p)}^2} + {{\ln(-p)}^2}\,\ln(1 + p)\,\,\nonumber\,\\ & & \mbox{}\,\qquad\, \qquad\, + 4\,\ln(-p)\,\mbox{Li}_2(-p) + 8\,\ln(-p)\,\mbox{Li}_2(p) - 6\,\mbox{Li}_3(-p) - 12\,\mbox{Li}_3(p) - 3\,\zeta_3 \,\bigg] \,\bigg\} \nonumber \\[2mm] & & + A(\alpha,\beta) \,, \label{Piallenergies} \end{eqnarray} where \[ p \, \equiv \, \frac{1-\beta}{1+\beta} \] and $\mbox{Li}_2$, $\mbox{Li}_3$ denote the di- and trilogarithms~\cite{Lewin1}. The reader should note that $\Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}$ vanishes at $q^2=0$ and is an analytic function in $q^2$ except at poles and branch cuts, and satisfies the dispersion relation \begin{equation} \Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}(q^2) \, = \, \frac{q^2}{\pi}\,\int\limits_{-\infty}^\infty \frac{{\rm d}{q^\prime}^2}{{q^\prime}^2}\, \frac{1}{{q^\prime}^2-q^2-i\epsilon}\, {\rm Im} \Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}({q^\prime}^2) \,. \end{equation} The explicit form of ${\rm Im} \Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}$ in the threshold region will be presented in Section~\ref{SectionAnalysis}. For the use and interpretation of formula~(\ref{Piallenergies}) see also Section~\ref{SectionThreshold}. \par \vspace{0.5cm} \section{Examination of the Vacuum Polarization Function \\ in the Threshold Region} \label{SectionAnalysis} In this section we analyse the properties of the vacuum polarization function in the threshold region above and below the threshold point $q^2=4M^2$. Compared to an older work on the same subject~\cite{Braun1} we are not so much interested in general properties of perturbation theory in the presence of bound state formation, but in the explicit form and behaviour of $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$. In particular, we focus on the size of the ${\cal{O}}(\alpha^2)$ contributions. We also would like to mention that the vacuum polarization function has been studied in a similar way in~\cite{Barbieri1}. In the latter publication, however, a different definition for the vacuum polarization is employed, only the positronium ground state energy is considered and a contribution in the one-loop result is missing. We will come back to this point later. Comparing the methods used in~\cite{Barbieri1} with the effective field theoretical approach employed in this work makes the elegance of the latter technique obvious. \par We start in the kinematic region above threshold where $\alpha\ll\beta\ll 1$. 
Here, as mentioned in the previous section, the one- and two-loop results, eqs.~(\ref{Piloops})-(\ref{Pi2loopexpanded}), are reliable. This is consistent with the fact that the function $A$ contains only contributions of order $\alpha^3$ and higher, \begin{eqnarray} A(\alpha,\beta) & \stackrel{\alpha\ll\beta}{=} & \frac{\alpha^2}{2}\,\sum\limits_{n=2}^{\infty}\,\zeta_n\, \Big(\,i\,\frac{\alpha}{2\,\beta}\,\Big)^{n-1} \nonumber\\[2mm] & = & \alpha^3\,\bigg[\,\frac{i}{24}\frac{\pi^2}{\beta}\,\bigg] - \alpha^4\,\bigg[\,\frac{\zeta_3}{8\,\beta^2}\,\bigg] + {\cal{O}}\Big(\frac{\alpha^5}{\beta^3}\Big) \,. \label{Asmalla} \end{eqnarray} Thus, for practical applications the contributions of function $A$ can be neglected. (See also the discussion in Section~\ref{SectionThreshold}.) One might think that for $\alpha\approx\beta$ the one- and two-loop expressions should still represent an appropriate ${\cal{O}}(\alpha^2)$ prediction, because the radius of convergence of the series on the r.h.s. of eq.~(\ref{Asmalla}) is $|\beta|=\alpha/2$~\cite{Abramowitz1}. \begin{figure}[ht] \begin{center} \leavevmode \epsfxsize=5cm \epsffile[220 420 420 550]{fig1.ps} \vskip 30mm $\displaystyle{\mbox{\hspace{12.5cm}}\bf\frac{\beta}{\alpha}}$ \vskip 5mm \caption{\label{FigPi2} The ${\cal{O}}(\alpha^2)$ corrections to the vacuum polarization function in the threshold region with and without the contributions contained in function $A$, eq.~(\ref{Adefinition}), in the kinematic region $0<\beta<2\alpha$ above the threshold. The solid line denotes $\mbox{Re}[\Pi^{(2)}+\pi^2/\alpha^2\,A]$, the dashed line $\mbox{Re} \Pi^{(2)}$, the dashed-dotted line $\mbox{Im}[\Pi^{(2)}+\pi^2/\alpha^2\,A]$ and the dotted line $\mbox{Im} \Pi^{(2)}$. The value of the fine structure constant is taken as $\alpha=1/137$. $\Pi^{(2)}$ represents the two-loop contribution to the vacuum polarization function and its expansion for small $\beta$ is displayed in eq.~(\ref{Pi2loopexpanded}).} \end{center} \end{figure} However, as illustrated in Fig.~\ref{FigPi2}, for $\alpha\approx\beta$ the contributions coming from function $A$ are already of order $\alpha^2/\pi^2$ and thus have to be included if ${\cal{O}}(\alpha^2)$ accuracy is intended. For even smaller velocities, of course, the contributions from $A$ are essential because they cancel the divergent $\ln\beta$ term from the two-loop expression $\Pi^{(2)}$, see eqs.~(\ref{Pi2loopexpanded}) and (\ref{Asmallb}). Therefore the value of $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ at the threshold point is finite and reads\footnote{ The plus sign in the argument of $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ indicates that the expression on the r.h.s. of eq.~(\ref{Pithreshthresh}) represents only a right-sided limit on the real $q^2$-axis. } ($\alpha=1/137$)\\ \begin{eqnarray} \lefteqn{ \Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2\to 4M^2\,+) }\nonumber \\ [2mm] & = & \Big(\frac{\alpha}{\pi }\Big)\,\frac{8}{9} + {{\alpha}^2}\,\bigg[\, -\frac{1}{2}\,\ln\alpha - \frac{1}{2}\,{\gamma} + \frac{1}{4\,{{\pi }^2}}\,\left( 3 - \frac{21}{2}\,\zeta_3 \right) + \frac{11}{32} - \frac{1}{4}\,\ln 2 + i\,\frac{\pi }{2} \,\bigg] \nonumber \\[2mm] & = & 0.89\,\Big(\frac{\alpha}{\pi }\Big) + \Big(\,-0.36 - \frac{1}{2}\,\ln\alpha + i\,\frac{\pi}{2}\,\Big)\,\alpha^2 \, = \, 0.89\,\Big(\frac{\alpha}{\pi }\Big) + \Big(\,2.10 + i\,\frac{\pi}{2}\,\Big)\,\alpha^2 \,.
\label{Pithreshthresh} \end{eqnarray} It is evident from eq.~(\ref{Pithreshthresh}) and Fig.~\ref{FigPi2} that the size of the ${\cal{O}}(\alpha^2)$ corrections in the threshold region is of order $\alpha^2$ rather than $\alpha^2/\pi^2$, whereas the ${\cal{O}}(\alpha)$ contribution is of order $\alpha/\pi$. This can be understood from the fact that the ${\cal{O}}(\alpha)$ contribution in $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ comes entirely from the one-loop result $\Pi^{(1)}$, eq.~(\ref{Pi1loopexpanded}), and therefore originates from momenta beyond the scale of the electron mass. High momenta contributions are expected to be of order $\alpha/\pi$ if no ``large logarithms'' occur\footnote{ For comparison the reader might consider the well-known one- and two-loop contributions to the anomalous magnetic moment of the electron~\cite{Schwinger1,Petermann1}, $g_e-2 = (\frac{\alpha}{\pi})-0.66\,(\frac{\alpha}{\pi})^2+ {\cal{O}}((\frac{\alpha}{\pi})^3)$. Here, long-distance effects from the $e^+e^-$ threshold play no role. Therefore $g_e-2$ can be regarded as a typical short-distance quantity with no ``large logarithms''. }. The large ${\cal{O}}(\alpha^2)$ contributions to $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$, on the other hand, arise from the interplay of the logarithm of the velocity in $\Pi^{(2)}$, eq.~(\ref{Pi2loopexpanded}), and the contributions from the instantaneous Coulomb-exchange of two and more longitudinal photons between the virtual electron-positron pair. For small velocities the latter effects generate a logarithm of the velocity with an opposite sign, which cancels the logarithm in $\Pi^{(2)}$. We therefore conclude that the large ${\cal{O}}(\alpha^2)$ contributions are of long-distance origin. This is particularly obvious for the $\ln\alpha$ term which could never be generated at short distances. \par The situation for $\alpha\ll|\beta|\ll 1$ below threshold is similar to the one above threshold. Here, the one- and two-loop contributions from conventional perturbation theory, eqs.~(\ref{Piloops})-(\ref{Pi2loopexpanded}), provide a viable prediction, because the contributions from the function $A$ are of order $\alpha^3$ and higher. They are beyond the intended accuracy and can be neglected. (See also the discussion in Section~\ref{SectionThreshold}.) On the other hand, it is obvious that the one- and two-loop results are not sufficient for energies close to the positronium bound state energies, \begin{equation} \beta \, = \, i\,\frac{\alpha}{2\,n} \,\, \Longleftrightarrow \,\, E \, = \, -\frac{M\,\alpha^2}{4\,n^2} \,, \qquad\qquad (n=1,2,3,\ldots) \,, \end{equation} because the vacuum polarization function is expected to have poles at those energy values. Therefore the full expression for $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$, eq.~(\ref{Pithreshfinal}), must be employed. 
It is straightforward to check that $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ indeed has poles at the positronium energy levels~\cite{Braun1} leading to the following Laurent expansion at the bound state energies $E_n= -\frac{M\,\alpha^2}{4\,n^2}$, ($n=1,2,3,\ldots$), \begin{equation} \lim\limits_{E\to E_n}\, \Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2) \, = \, \frac{M\,\alpha^4}{4\,n^3}\,\frac{1}{E_n-E-i\epsilon} + \Big(\frac{\alpha}{\pi}\Big)\,\frac{8}{9} + \alpha^2\,\bigg[\,\,a_n\,\bigg] + {\cal{O}}(E_n-E) \,, \label{PiLaurent} \end{equation} where \begin{equation} a_n \, \equiv \, -\frac{1}{2}\,\ln\alpha + \frac{1}{2}\,\bigg[\,\frac{1}{n}+\ln n - \sum\limits_{i=1}^{n-1}\frac{1}{i}\,\bigg] + \frac{1}{4\,\pi^2}\,\bigg(\,3-\frac{21}{2}\,\zeta_3\,\bigg) + \frac{11}{32} - \frac{1}{4}\,\ln 2 \,. \label{andefinition} \end{equation} For completeness we also present the corresponding Laurent expansion for the renormalized zero-distance Green function based on definition~(\ref{Greenfunctionrenormalized}), \begin{equation} \lim\limits_{E\to E_n}\, G^R_{E}(0,0) \, = \, \frac{\,\,\left|\Psi_n(0)\right|^2}{E_n-E-i\epsilon} + \frac{4}{9}\,\frac{M^2}{\pi^2} + \frac{M^2\,\alpha}{2\,\pi}\,\bigg[\,a_n\,\bigg] + {\cal{O}}(E_n-E) \,. \label{Greenfunctionresidues} \end{equation} As expected, the residues at the bound state energies are equal to the moduli squared of the normalized $l=0$ Coulomb wave functions at the origin, \begin{equation} \left|\Psi_n(0)\right|^2 \, = \, \frac{M^3\,\alpha^3}{8\,\pi\,n^3} \,. \label{wavefunctionsquared} \end{equation} In eqs.~(\ref{PiLaurent})-(\ref{Greenfunctionresidues}) we have also displayed the constant terms of the Laurent expansion. These constants are relevant for higher-order corrections to the positronium energy levels and to the wave functions at the origin. The size of the ${\cal{O}}(\alpha^2)$ corrections in these constant terms is (similar to the ${\cal{O}}(\alpha^2)$ contributions above threshold) of order $\alpha^2$ rather than $\alpha^2/\pi^2$ indicating again the long-distance character of the ${\cal{O}}(\alpha^2)$ corrections. \begin{table}[htb] \vskip 7mm \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $\infty$ \\ \hline\hline \parbox{.4cm}{\vskip 2mm $a_n$ \vskip 2mm} & $2.89$ & $2.48$ & $2.35$ & $2.29$ & $2.25$ & $2.10$ \\ \hline \end{tabular} \caption{\label{Tablean} The numerical value for the constants $a_n$ for the radial quantum numbers $n=1,2,3,4,5$ and for $n\to\infty$ with $\alpha=1/137$. } \end{center} \vskip 3mm \end{table} In Table~\ref{Tablean} we have displayed the numerical values of $a_n$ for the radial quantum numbers $n=1,2,3,4,5$. It is an interesting fact that the $n\to\infty$ limit of $a_n$ exists \begin{equation} \lim\limits_{n\to\infty}\,\alpha^2\,\bigg[\,a_n\,\bigg] \, = \, \alpha^2\,\bigg[\, - \frac{1}{2}\,\ln\alpha - \frac{1}{2}\,{\gamma} + \frac{1}{4\,{{\pi }^2}}\,\left( 3 - \frac{21}{2}\,\zeta_3 \right) + \frac{11}{32} - \frac{1}{4}\,\ln 2 \,\bigg] \end{equation} and coincides with ${\cal{O}}(\alpha^2)$ contributions of $\mbox{Re} \Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2\to 4M^2 +)$, eq.~(\ref{Pithreshthresh}). The numerical value for $\lim_{n\to\infty}\,a_n$ is presented in Table~\ref{Tablean}. 
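\par The entries of Table~\ref{Tablean} follow directly from eq.~(\ref{andefinition}) and are reproduced by the following minimal script (a sketch for checking purposes only):
\begin{verbatim}
import math

alpha = 1.0 / 137
zeta3 = 1.2020569031595943
gamma = 0.5772156649
const = (3 - 10.5 * zeta3) / (4 * math.pi**2) + 11.0 / 32 - 0.25 * math.log(2)

def a(n):
    # eq. (andefinition)
    harmonic = sum(1.0 / i for i in range(1, n))
    return (-0.5 * math.log(alpha)
            + 0.5 * (1.0 / n + math.log(n) - harmonic) + const)

print([round(a(n), 2) for n in range(1, 6)])     # 2.89, 2.48, 2.35, 2.29, 2.25
print(round(-0.5 * math.log(alpha) - 0.5 * gamma + const, 2))   # n -> inf: 2.10
\end{verbatim}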
\par To illustrate the importance of the constants $a_n$ in time-independent perturbation theory (TIPT), we recalculate the ${\cal{O}}(\alpha^6)$ ``vacuum polarization'' contributions to the ground state triplet-singlet hyperfine splitting (HFS) of the positronium, which were, to our knowledge, considered for the first time in~\cite{Barbieri1}. The vacuum polarization contributions to the HFS in the energy levels of the positronium system arise from the fact that the bound triplet (${}^3S_1$, $J^{PC}=1^{-\,-}$) $e^+e^-$ pair can annihilate into a virtual photon for a time period of order $1/M$, whereas the singlet (${}^1S_0$, $J^{PC}=0^{-\,+}$) cannot. If the virtual photon energy is approximated by $\sqrt{q^2}=2M$, this annihilation process leads to a $\delta$-function kernel in the coordinate-space representation (corresponding to a constant kernel in momentum space) of the form \begin{equation} H_{Ann}(\vec{x}) \, = \, \frac{2\,\alpha\,\pi}{M^2}\,\delta^{(3)}(\vec{x}) \,. \label{AnnihilationKernel} \end{equation} This kernel can now be used in TIPT. Taking into account that $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ contains ${\cal{O}}(\alpha)$ as well as ${\cal{O}}(\alpha^2)$ contributions we have to apply second- and third-order TIPT to obtain all relevant ${\cal{O}}(\alpha^6)$ contributions to the HFS. The formal result for the ${\cal{O}}(\alpha^6)$ energy shift for the triplet states with radial quantum numbers $n$ and with $l=0$ due to $H_{Ann}$ reads \begin{eqnarray} \delta E^{\alpha^6}_{Ann,n} & = & \bigg\{\, \sum\hspace{-5.5mm}\int\limits_{l\ne n}\,\,\, \langle\,n\,| \, H_{Ann} \, \frac{|\,l\,\rangle\,\langle \,l\,|}{E_n-E_l} \, H_{Ann} \, |\,n\,\rangle \nonumber\\[2mm] & & \,\, + \, \sum\hspace{-6.3mm}\int\limits_{m\ne n}\,\,\, \sum\hspace{-5.7mm}\int\limits_{k\ne n}\,\,\, \langle\,n\,| \, H_{Ann} \, \frac{|\,m\,\rangle\,\langle \,m\,|}{E_n-E_m} \, \, H_{Ann} \, \frac{|\,k\,\rangle\,\langle \,k\,|}{E_n-E_k} \, H_{Ann} \, |\,n\,\rangle \,\bigg\}_{{\cal{O}}(\alpha^6)} \,, \label{HFSformal} \end{eqnarray} where $|\,i\,\rangle$, $i=l,m,n,k$, represent normalized (bound state and free scattering) eigenfunctions to the positronium Schr\"odinger equation with the eigenvalues $E_i$. The symbol $\{\}_{{\cal{O}}(\alpha^6)}$ indicates that only ${\cal{O}}(\alpha^6)$ contributions are taken into account. It is evident from the form of $H_{Ann}(\vec{x})$ that only the zero-distance Green function is relevant for $\delta E^{\alpha^6}_{Ann,n}$, \begin{eqnarray} \sum\hspace{-5.5mm}\int\limits_{l\ne n}\,\,\, \langle\,0\,|\, \frac{|\,l\,\rangle\,\langle \,l\,|}{E_l-E_n} \,|\,0\,\rangle & = & \sum\hspace{-5.5mm}\int\limits_{l\ne n}\,\,\, \frac{\,\,|\Psi_l(0)|^2}{E_l-E_n} \nonumber\\[2mm] & = & \lim\limits_{E\to E_n}\,\bigg[\, G_E^0(0,0) - \frac{\,\,|\Psi_n(0)|^2}{E_n-E-i\epsilon} \,\bigg] \,. \label{Greenfunctionsubtracted} \end{eqnarray} However, relation~(\ref{Greenfunctionsubtracted}) still contains divergences (see eq.~(\ref{PGunrenormalizeddiv})). As we have pointed out in Section~\ref{SectionCalculation}, these divergences indicate that non-relativistic quantum mechanics is not capable of describing physics if the relative distance of the electron-positron pair is smaller than the inverse electron mass. Therefore, we have to replace $G_E^0(0,0)$ in relation~(\ref{Greenfunctionsubtracted}) by its renormalized version $G_E^R(0,0)$, eq.~(\ref{Greenfunctionrenormalized}) (using eq.~(\ref{Pithreshfinal})), which describes short-distance physics properly.
The final expression for $\delta E^{\alpha^6}_{Ann,n}$ then reads \begin{eqnarray} \lefteqn{ \delta E^{\alpha^6}_{Ann,n} \, = \, |\Psi_n(0)|^2 \, \bigg\{\, \bigg[\,\frac{2\,\alpha\,\pi}{M^2}\,\bigg]^2\, \bigg(\,-\frac{M^2\,\alpha}{2\,\pi}\,a_n\,\bigg) + \bigg[\,\frac{2\,\alpha\,\pi}{M^2}\,\bigg]^3\, \bigg(\,-\frac{4}{9}\,\frac{M^2}{\pi^2}\,\bigg)^2 \,\bigg\} } \nonumber\\[2mm] & = & \frac{M\,\alpha^6}{4\,n^3}\,\bigg\{\, \frac{1}{2}\,\ln\alpha - \frac{1}{2}\,\bigg(\,\frac{1}{n} + \ln n - \sum\limits_{i=1}^{n-1}\frac{1}{i}\,\bigg) + \frac{1}{4\,\pi^2}\,\bigg(\,\frac{13}{81}+\frac{21}{2}\,\zeta_3\,\bigg) -\frac{11}{32} + \frac{1}{4}\,\ln 2 \,\bigg\} \,. \label{HFSfinal} \end{eqnarray} Taking also into account that the virtual photon energy is smaller than $2M$ by the amount of the binding energy $E_n=-\frac{M\,\alpha^2}{4\,n^2}$ we have to replace $M^2$ in eq.~(\ref{AnnihilationKernel}) by $(M+E_n/2)^2$ and therefore get another ${\cal{O}}(\alpha^6)$ contribution, \begin{equation} \delta E^{\alpha^6}_{Bind,n} \, = \, \frac{\alpha^2}{4\,n^2}\,\delta E^{\alpha^4}_{Ann,n} \, = \, \frac{M\,\alpha^6}{16\,n^5} \,, \label{HFSbinding} \end{equation} where $\delta E^{\alpha^4}_{Ann,n}$ represents the ${\cal{O}}(\alpha^4)$ energy shift due to $H_{Ann}$. For the ground state ($n=1$) the complete ${\cal{O}}(\alpha^6)$ vacuum polarization contribution to the HFS then reads \begin{equation} \delta E^{\alpha^6}_{Ann,1} + \delta E^{\alpha^6}_{Bind,1} \, = \, \frac{M\,\alpha^6}{4}\,\bigg\{\, \frac{1}{2}\,\ln\alpha + \frac{1}{4\,\pi^2}\,\bigg(\,\frac{13}{81}+\frac{21}{2}\,\zeta_3\,\bigg) -\frac{19}{32} + \frac{1}{4}\,\ln 2 \,\bigg\} \,. \label{HFSgroundstate} \end{equation} Our result differs from the one presented in~\cite{Barbieri1,Barbieri2} by the amount $M\alpha^6/8$. Half of the discrepancy comes from the fact that in~\cite{Barbieri1} the binding energy contribution $\delta E^{\alpha^6}_{Bind,1}$ was not taken into account, whereas the other half originates from a missing ${\cal{O}}(\alpha^2)$ contribution in the one-loop vacuum polarization\footnote{ This can be easily seen by comparing eq.~(33) of~\cite{Barbieri1} with eq.~(\ref{Pi1loopexpanded}) in this work for \[ q^2_{n=1}=(2\,M-\frac{M\alpha^2}{4})^2\,\,\Longleftrightarrow\,\, \beta_{n=1}=i\,\frac{\alpha}{2} + {\cal{O}}(\alpha^3) \,. \] This shows that at bound state energies the one-loop contribution to the vacuum polarization function also contains terms of order $\alpha^2$. }. \par Before we turn to applications of our results in the context of QCD, we do not want to leave unmentioned that the leading contributions to the normalized cross section for production of a heavy lepton-antilepton pair (with lepton mass $M$) in $e^+e^-$ collisions (via a virtual photon) in the threshold region can be recovered from $\Pi_{Thr}^{{\cal{O}}(\alpha^2)}$ by means of the optical theorem~\cite{Fadin1,Strassler1}, \begin{eqnarray} R^{L^+L^-}_{Thr} & = & \frac{\sigma(e^-e^+\to\gamma^*\to L^+L^-)}{\sigma_{pt}} \, = \, \frac{3}{\alpha}\,\mbox{Im}\Pi_{Thr}^{{\cal{O}}(\alpha^2)}(q^2) \, = \, \frac{6\,\pi}{M^2}\,\mbox{Im}\,G^R_E(0,0) \nonumber\\[2mm] & = & \frac{6\,\pi^2}{M^2}\,\sum\limits_{n=1}^{\infty}|\Psi_n(0)|^2\, \delta(E-E_n) \, + \, \Theta(E)\,\frac{3}{2}\,\frac{\alpha\,\pi}{1-\exp(-\frac{\alpha\,\pi}{\beta})} \,, \label{OpticalTheorem} \end{eqnarray} where $\sigma_{pt}$ represents the point cross section and only final-state interactions are taken into account. 
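\par For orientation, the numerical size of the ${\cal{O}}(\alpha^6)$ vacuum polarization contribution to the ground state HFS, eq.~(\ref{HFSgroundstate}), is easily evaluated (a sketch for checking purposes only; the result is expressed in units of $M\alpha^6$):
\begin{verbatim}
import math

alpha = 1.0 / 137
zeta3 = 1.2020569031595943
brace = (0.5 * math.log(alpha)
         + (13.0 / 81 + 10.5 * zeta3) / (4 * math.pi**2)
         - 19.0 / 32 + 0.25 * math.log(2))
print(brace / 4)   # ~ -0.64, i.e. delta E ~ -0.64 M alpha^6
\end{verbatim}
The discrepancy with~\cite{Barbieri1,Barbieri2} mentioned above, $M\alpha^6/8 = 0.125\,M\alpha^6$, is therefore by no means negligible on this scale.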
\par \vspace{.5cm} \section{Darwin Corrections in QCD} \label{SectionQCD} In the previous section we have shown that the size of the ${\cal{O}}(\alpha^2)$ corrections to the QED vacuum polarization function in the threshold region is of order $\alpha^2$ rather than $\alpha^2/\pi^2$. Although this fact is important for precision tests of QED\footnote{ As far as tests of QED in the $\tau^+\tau^-$ system in the threshold region are concerned the present experiments do not even reach the ${\cal{O}}(\alpha)$ (next-to-leading order) accuracy level. This can be easily seen from the fact that the complete threshold region for the $\tau^+\tau^-$ system, $|\beta|\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\alpha\,\Longleftrightarrow\, |\sqrt{q^2}-2m_\tau|\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_\tau\alpha^2 = 0.1~\mbox{MeV}$ still lies within the limits on the tau mass itself, $m_\tau = 1777.00^{+0.30}_{-0.27}~\mbox{MeV}$~\cite{PDG}. Thus only experiments on electron and muon systems can be regarded as precision tests of QED in the threshold regime. }, it does not lead to theoretical concerns about the convergence of the perturbative series because of the smallness of the fine structure constant $\alpha$ and because QED is not asymptotically free. \par In the framework of QCD, however, the situation is completely different: the coupling is much larger and even becomes of order one for scales much lower than $1$~GeV. Therefore, the fact that the size of the ${\cal{O}}(\alpha_s^2)$ (next-to-next-to-leading order) corrections in the threshold region might be of order $\alpha_s^2$ rather than $\alpha_s^2/\pi^2$ is an extremely important theoretical issue because this would lead to corrections of order $1\%-25\%$ rather than $0.1\%-2.5\%$ for $\alpha_s=0.1-0.5$. Here, two natural questions arise: what scale should be used in the strong coupling, and for which heavy quark-antiquark systems do the ${\cal{O}}(\alpha_s^2)$ corrections represent contributions to the asymptotic perturbation series in the convergent regime? These questions will be addressed in the following. \par To be more specific, we will calculate the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the ($S$-wave, $l=0$) wave functions of a bound heavy quark-antiquark pair at the origin and to the heavy quark-antiquark pair production cross section in $e^+e^-$ annihilation (via a virtual photon) in the threshold region. A presentation of all ${\cal{O}}(C_F^2\alpha_s^2)$ corrections including all kinematic and relativistic effects will be given in a subsequent publication. The corresponding uncorrected quantities are the well-known exact solutions to a pure Coulomb-like non-relativistic quark-antiquark system described by a Schr\"odinger equation with the QCD-potential $V_{QCD}(r)=-C_F\,\frac{\alpha_s}{r}$, where $C_F=(N_c^2-1)/2N_c=4/3$. \par The Darwin interaction is generated in the non-relativistic expansion of the Dirac equation. In the coordinate-space representation it is proportional to a $\delta$-function and reads\footnote{ Compared to the Darwin interaction known from the hydrogen atom the expression on the r.h.s. of eq.~(\ref{Darwindef}) is a factor of two larger because both quark-antiquark-gluon vertices contribute. } \begin{equation} H_{Dar}(\vec{x}) \, = \, \frac{C_F\,\alpha_s\,\pi}{M_Q^2}\, \delta^{(3)}(\vec{x}) \,.
\label{Darwindef} \end{equation} A practical application for the corrections to the bound state wave functions at the origin is the leptonic decay rate of the $J/\Psi$ and the $\Upsilon(1S)$ and (maybe) of the first few excited states of the $\Upsilon$ family, whereas the corrections to the cross section would be relevant for $t\bar t$ production at NLC. We explicitly mention those applications in this context because it is believed that for them non-perturbative (in the sense ``not calculable from first principles in QCD'') effects are either well under control or even negligible~\cite{Yndurain1,Fadin1}. But, of course, these corrections can be applied to other heavy quark-antiquark systems as well, at least in order to check their size. At this point we want to emphasize that we do not intend to present a thorough phenomenological analysis in this work. The primary aim is to use the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to illustrate the typical size of the complete (and yet unknown) ${\cal{O}}(\alpha_s^2)$ corrections for the $t\bar t$, $b\bar b$ and $c\bar c$ systems. Their actual numerical value and even their sign cannot, of course, be predicted at the present stage. \par To keep our analysis transparent we ignore all ${\cal{O}}(\alpha_s)$ corrections, the effects from the running of the strong coupling and also non-perturbative contributions like the gluon condensate. The latter effects are well known and have been treated in a large number of earlier publications. We further neglect the width of the quarks and treat them as stable particles for the most part in the following analysis. From the technical point of view the calculations of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections are identical to the corresponding QED calculations, which means that we use time-independent perturbation theory. However, we have to take care of the correct implementation of the number of colors, $N_c=3$, and the group theoretical factor $C_F$. In the following the superscript ``QCD'' indicates that the corresponding quantity is obtained from the QED expression by the replacement $\alpha\to C_F\alpha_s$. It is then straightforward to determine the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the moduli squared of the $l=0$ bound state wave functions at the origin, ($n=1,2,3,\ldots$), \begin{eqnarray} \delta|\Psi_n^{\mbox{\tiny QCD}}(0)|^2_{Dar} & = & -2\,|\Psi_n^{\mbox{\tiny QCD}}(0)|^2\,\bigg\{\, \frac{C_F\,\alpha_s\,\pi}{M_Q^2}\hspace{-3mm} \lim\limits_{\mbox{}\quad E\to E_n^{\mbox{\tiny QCD}}}\, \bigg[\,G^{R,\mbox{\tiny QCD}}_E(0,0) - \frac{\,\,|\Psi_n^{\mbox{\tiny QCD}}(0)|^2}{E_n^{\mbox{\tiny QCD}}-E-i\epsilon} \,\bigg] \,\bigg\}_{{\cal{O}}(C_F^2\alpha_s^2)} \nonumber\\[2mm] & = & -\,|\Psi_n^{\mbox{\tiny QCD}}(0)|^2\,C_F^2\,\alpha_s^2\,a_n^{\mbox{\tiny QCD}} \,, \label{Darwinwavefunctions} \end{eqnarray} where the symbol $\{\}_{{\cal{O}}(C_F^2\alpha_s^2)}$ indicates that only ${\cal{O}}(C_F^2\alpha_s^2)$ corrections are taken into account\footnote{ Eq.~(\ref{Darwinwavefunctions}) also generates ${\cal{O}}(C_F\alpha_s)$ corrections which differ from the well-known ${\cal{O}}(C_F\alpha_s)$ corrections generated by the $(1-4 C_F\alpha_s/\pi)$ correction factor~\cite{Karplus1}. Adding up all the ${\cal{O}}(C_F\alpha_s)$ corrections and the corresponding renormalization constants will of course yield the correct result. The same remark holds for the result for the cross section above threshold, eq.~(\ref{Darwincrosssection}). }.
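For later reference, the standard textbook input here (not spelled out above) is the $l=0$ Coulomb wave function at the origin for a two-body system with reduced mass $M_Q/2$ and Bohr radius $a=2/(C_F\,\alpha_s\,M_Q)$,
\begin{displaymath}
|\Psi_n^{\mbox{\tiny QCD}}(0)|^2 \, = \, \frac{1}{\pi\,n^3\,a^3} \, = \, \frac{(C_F\,\alpha_s\,M_Q)^3}{8\,\pi\,n^3} \,,
\end{displaymath}
which also fixes the normalization of the energy shifts in eq.~(\ref{Energyshiftdirect}) below.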
The calculation of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the quark-antiquark cross section in the threshold region is more involved. Here, we apply the optical theorem, eq.~(\ref{OpticalTheorem}), to the corrections of the zero-distance Green function themselves, \begin{equation} \delta G_{E, Dar}^{R,\mbox{\tiny QCD}}(0,0) \, = \, -\,\frac{C_F\,\alpha_s\,\pi}{M_Q^2}\,\bigg[\, G_E^{R,\mbox{\tiny QCD}}(0,0) \,\bigg]^2 \,. \label{DarwinGreenfunction} \end{equation} The ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the cross section above threshold then read \begin{eqnarray} \delta R^{Q\bar Q}_{Thr, Dar} & = & N_c\,\frac{6\,\pi}{M_Q^2}\,\mbox{Im} G^{R,\mbox{\tiny QCD}}_E(0,0)\, \bigg\{\, -\,\frac{2\,C_F\,\alpha_s\,\pi}{M_Q^2}\, \mbox{Re}\, G^{R,\mbox{\tiny QCD}}_E(0,0) \,\bigg\}_{{\cal{O}}(C_F^2\alpha_s^2, C_F\alpha_s\beta)} \nonumber\\[2mm] & = & R^{Q\bar Q}_{Thr}\,\bigg\{\, - \mbox{Re} \Pi_{Thr}^{{\cal{O}}(C_F^2\alpha_s^2),\mbox{\tiny QCD}}(q^2) \,\bigg\}_{{\cal{O}}(C_F^2\alpha_s^2, C_F\alpha_s\beta)} \,, \label{Darwincrosssection} \end{eqnarray} where $R^{Q\bar Q}_{Thr}$ represents the ``Sommerfeld factor'' (sometimes also called the ``Fermi factor'') \begin{eqnarray} \lefteqn{ R^{Q\bar Q}_{Thr} \, = \, N_c\, \frac{6\,\pi}{M_Q^2}\,\mbox{Im} G^{R,\mbox{\tiny QCD}}_E(0,0) } \nonumber\\[2mm] & = & N_c\,\frac{3}{2}\, \frac{C_F\,\alpha_s\,\pi}{1-\exp(-\frac{C_F\,\alpha_s\,\pi}{\beta})} \, = \, N_c\,\frac{3}{2}\,\beta\, \exp\Big(\frac{C_F\,\alpha_s\,\pi}{2\,\beta}\Big)\, \Gamma\Big(1+i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big)\, \Gamma\Big(1-i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big) \,. \label{Sommerfeldfactor} \end{eqnarray} Below threshold we have to determine the corrections to the residues of $G_E^{R,\mbox{\tiny QCD}}(0,0)$ at the bound state energies, where as shown above the corresponding bound state poles have to be subtracted. This calculation is straightforward and leads to the corrections to the $l=0$ bound state wave functions at the origin presented in eq.~(\ref{Darwinwavefunctions}). It is an interesting fact that eq.~(\ref{Darwincrosssection}) allows for the calculation of the shifts of the $Q\bar Q$ bound state energies due to the Darwin interaction. To show this we rewrite the sum of the Sommerfeld factor, eq.~(\ref{Sommerfeldfactor}), and the contribution involving the digamma function of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections above threshold, see eqs.~(\ref{Adefinition}), (\ref{Pithreshfinal}) and (\ref{Darwincrosssection}), as \begin{eqnarray} \lefteqn{ R^{Q\bar Q}_{Thr}\,\bigg\{\, 1\, + \, \frac{C_F^2\,\alpha_s^2}{4}\, \bigg[\, \Psi\Big(1+i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big) + \Psi\Big(1-i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big) \,\bigg] \,\bigg\} } \nonumber\\[2mm] & \longrightarrow & N_c\,\frac{3}{2}\,\beta\, \exp\Big(\frac{C_F\,\alpha_s\,\pi}{2\,\beta}\Big)\, \Gamma\Big(\frac{C_F^2\,\alpha_s^2}{4}+1+ i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big)\, \Gamma\Big(\frac{C_F^2\,\alpha_s^2}{4}+1- i\,\frac{C_F\,\alpha_s}{2\,\beta}\Big) \,. \label{Energyshiftindirect} \end{eqnarray} It can be easily checked that the function $\Gamma(\frac{C_F^2\,\alpha_s^2}{4}+1-i\,\frac{C_F\,\alpha_s}{2\,\beta})$ develops poles at the energies\footnote{ It should be noted that the $(1-4 C_F\alpha_s/\pi)$ correction factor of the cross section is irrelevant for shifts of the bound state energies because the former represents a global multiplicative short-distance factor. 
} \begin{equation} \tilde{E}_n^{\mbox{\tiny QCD}} \, \equiv \, E_n^{\mbox{\tiny QCD}} + \delta E_{n,Dar}^{\mbox{\tiny QCD}} \,,\qquad\qquad (n=1,2,3,\ldots)\,, \end{equation} where the $\delta E_{n,Dar}^{\mbox{\tiny QCD}}$ represent the energy shift of the $l=0$ Coulomb energy levels with the radial quantum number $n$ generated by the Darwin interaction, \begin{eqnarray} \delta E_{n,Dar}^{\mbox{\tiny QCD}} & = & \langle\,n^{\mbox{\tiny QCD}}\,|\,H_{Dar}\, |\,n^{\mbox{\tiny QCD}}\,\rangle \, = \, |\Psi_n^{\mbox{\tiny QCD}}(0)|^2\,\frac{C_F\,\alpha_s\,\pi}{M_Q^2} \nonumber \\[2mm] & = & \frac{M_Q\,C_F^4\,\alpha_s^4}{8\,n^3} \,. \label{Energyshiftdirect} \end{eqnarray} At this point we also want to emphasize that the $\ln(C_F\,\alpha_s)$ and digamma contributions occurring in eqs.~(\ref{Darwinwavefunctions}) and (\ref{Darwincrosssection}) are not related to the running of the strong coupling and therefore cannot be resummed by any known type of renormalization group equation in the sense of a leading logarithmic resummation. These logarithmic terms arise because two scales are relevant in the threshold region, the heavy quark mass $M_Q$ and the relative momentum of the quark-antiquark pair $\propto C_F\,\alpha_s\,M_Q$~\cite{Labelle1}. The $\ln(C_F\,\alpha_s)$ contributions induced by the running of the strong coupling have been discussed in~\cite{Voloshin2,Yndurain1}. \par Before we turn to the discussion on the size of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections we have to address the question of which scale one should use in the strong coupling. Strictly speaking, a final answer to this problem would require an ${\cal{O}}(\alpha_s^3)$ analysis, which is beyond the scope of this work. However, one can find simple arguments that the scale in the strong couplings of expressions~(\ref{Darwinwavefunctions}) and (\ref{Darwincrosssection}) should be of the order $C_F\alpha_s M_Q$, which will be called ``the soft scale'' in the remainder of this work. We would like to remind the reader that the scale of the strong coupling in the unperturbed (pure Coulomb) quantities $|\Psi_n^{\mbox{\tiny QCD}}(0)|^2$ and $R^{Q\bar Q}_{Thr}$ is of the order of the soft scale. This is obvious for the wave functions of the ground state and the first few excited states and for the cross section in the kinematic region $\beta\approx C_F\alpha_s$ because they describe bound quark-antiquark pairs with relative momentum of order $C_F\,\alpha_s\,M_Q$. But it is also true for highly excited states ($n\gg 1$) and the cross section right at the threshold due to ``saturation'' effects~\cite{Voloshin2,Yndurain1}, {\it i.e.} the scale of the strong coupling is of order $C_F\alpha_s M_Q$ although the kinematical relative momentum of the quark pair vanishes\footnote{ In~\cite{Voloshin2,Yndurain1} a proof for saturation is only given for the cross section above the threshold point. An analogous proof for highly excited states or the cross section slightly below the threshold point does, to our knowledge, not exist in the literature. Such a proof is, however, much more difficult due to the breakdown of time-independent perturbation theory for the logarithmic kernel $\delta V(r)\sim\ln(r)/r$ for high radial excitations (see {\it e.g.}~\cite{Yndurain1}). Nevertheless we find it plausible that saturation also takes place slightly below the threshold point because the cross section at the threshold point $q^2=4M_Q^2$ should be well defined. }.
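As a brief numerical aside, the equality of the two representations of the Sommerfeld factor in eq.~(\ref{Sommerfeldfactor}) can be checked directly. The sketch below is purely illustrative (the value of $\alpha_s$ is an arbitrary choice) and uses SciPy's complex $\Gamma$ function:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

Nc, CF, alpha_s = 3, 4.0/3.0, 0.16   # illustrative values
a = CF * alpha_s

def R_exp(beta):
    # closed form with the exponential in the denominator
    return Nc * 1.5 * np.pi * a / (1.0 - np.exp(-np.pi * a / beta))

def R_gamma(beta):
    # Gamma-function form; gamma(1+ix)*gamma(1-ix) = pi*x/sinh(pi*x)
    x = a / (2.0 * beta)
    g = (gamma(1.0 + 1j * x) * gamma(1.0 - 1j * x)).real
    return Nc * 1.5 * beta * np.exp(np.pi * x) * g

for beta in (0.01, 0.05, 0.2, 0.5):
    print(f"{beta:5.2f}  {R_exp(beta):.6f}  {R_gamma(beta):.6f}")
# the two columns coincide for all velocities
\end{verbatim}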
To understand that the scale of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections should also be of the order of the soft scale, let us have a closer look at the origin of the strong couplings governing these corrections: one power of $\alpha_s$ comes from the Darwin interaction, $H_{Dar}$, and the other power of $\alpha_s$ (including the $\ln(C_F\alpha_s)$ terms) originates from the ${\cal{O}}(\alpha_s^2)$ contribution of the vacuum polarization function $\Pi_{Thr}^{{\cal{O}}(C_F^2\alpha_s^2),\mbox{\tiny QCD}}$. As mentioned in the previous section, the latter contribution is of long-distance origin and therefore governed by the soft scale. In contrast to the pure Coulomb interaction, $1/\vec{p}^2$, the Darwin interaction is a constant in momentum space and consequently sensitive to both low and high momenta. But based on our previous observations of the domination of long-distance effects, we can assume that the scale of the strong coupling in the Darwin interaction should also be the soft scale rather than the heavy quark mass. The size of the strong coupling governing the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections of eqs.~(\ref{Darwinwavefunctions}) and (\ref{Darwincrosssection}) can therefore be estimated via the self-consistency equation \begin{equation} \alpha_s = \alpha_s(C_F\,\alpha_s\,M_Q) \,, \label{Selfconsistency} \end{equation} which leads to $\alpha_s=0.13-0.16$, $0.25-0.38$ and $0.34-0.59$ for the top, bottom and charm quark systems, respectively. The latter ranges are obtained by using the $\overline{\mbox{MS}}$ definition for the strong coupling, the one-loop QCD beta-function and $\alpha_s(M_Z=91.187\,\mbox{GeV})=0.125$ and by taking twice and half the argument of the strong coupling on the r.h.s. of relation~(\ref{Selfconsistency}). Further, the mass values in equation~(\ref{Selfconsistency}) have been taken to be the pole values. For the quark (pole) masses we have chosen $M_t=175$~GeV, $M_b=5$~GeV and $M_c=1.7$~GeV. The reader should note that the prescription given above to calculate the size of the strong coupling is far from being unique. Depending on the choice of the definition of the strong coupling, the quark mass values or the number of loops in the QCD beta-function, larger or smaller values for $\alpha_s$ might result. This dependence on the prescription is particularly strong for the charm system\footnote{ As an example, using the two-loop QCD beta-function results in $\alpha_s=0.13-0.17$, $0.27-0.44$ and $0.38-0.76$ for the top, bottom and charm systems, respectively. At this point it is clearly obvious that the situation for the charm system is rather hopeless as far as the question of perturbativity is concerned. }. As a consequence the theoretical uncertainties quoted in this work should be understood more as good guesses than as strict theoretical limits. However, we think that the ranges of the strong coupling given above are good enough to illustrate the impact of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections in particular for $t\bar t$ production in the threshold region. We also want to emphasize that our conclusions for the perturbativity of the different heavy quark systems do not depend on different prescriptions for the strong coupling.
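The self-consistency equation~(\ref{Selfconsistency}) is straightforward to solve numerically. The following sketch is a minimal illustration only: it assumes a fixed number of flavours, $n_f=5$, and no flavour-threshold matching, so its output only roughly tracks the ranges quoted above. It iterates the fixed point with the soft scale doubled and halved:
\begin{verbatim}
import math

ALPHAS_MZ, MZ, CF = 0.125, 91.187, 4.0/3.0

def alphas(mu, nf=5):
    # one-loop running from alpha_s(MZ); flavour thresholds ignored
    b0 = 11.0 - 2.0 * nf / 3.0
    return ALPHAS_MZ / (1.0 + ALPHAS_MZ * b0 / (2.0 * math.pi)
                        * math.log(mu / MZ))

def soft_alphas(MQ, fac=1.0):
    # damped fixed-point iteration of alpha_s = alphas(fac*CF*alpha_s*MQ)
    a = 0.3
    for _ in range(200):
        a = 0.5 * (a + alphas(fac * CF * a * MQ))
    return a

for name, MQ in (("top", 175.0), ("bottom", 5.0), ("charm", 1.7)):
    print(f"{name:6s}: alpha_s ~ {soft_alphas(MQ, 2.0):.2f}"
          f" - {soft_alphas(MQ, 0.5):.2f}")
# a different nf or threshold treatment shifts the upper ends of the ranges
\end{verbatim}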
\par \begin{table}[h] \vskip 7mm \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline $n$ & $1$ & $2$ & $3$ & $4$ & $\infty$ \\ \hline\hline \parbox{1cm}{\vskip 2mm $\Delta_{\Psi,n}^{t\bar t}$ \vskip 2mm} & $-0.05\,/\,-0.04$ & $-0.04\,/\,-0.03$ & $-0.03\,/\,-0.02$ & $-0.03\,/\,-0.02$ & $-0.02\,/\,-0.02$ \\ \hline \parbox{1cm}{\vskip 2mm $\Delta_{\Psi,n}^{b\bar b}$ \vskip 2mm} & $-0.20\,/\,-0.11$ & $-0.09\,/\,-0.06$ & $-0.06\,/\,-0.05$ & $-0.05\,/\,-0.04$ & $-0.02\,/\,+0.01$ \\ \hline \parbox{1cm}{\vskip 2mm $\Delta_{\Psi,n}^{c\bar c}$ \vskip 2mm} & $-0.34\,/\,-0.17$ & $-0.10\,/\,-0.09$ & $-0.06\,/\,-0.01$ & $-0.05\,/\,+0.03$ & $-0.01\,/\,+0.15$ \\ \hline \end{tabular} \caption{\label{Tablewavefunctions} The relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the moduli squared of the $l=0$ bound state wave functions $\Delta_{\Psi,n}$ are given for the $t\bar t$, $b\bar b$ and $c\bar c$ systems, respectively. Displayed are the smallest and largest values for the range of $\alpha_s$ values given below eq.~(\ref{Selfconsistency}) for the radial quantum numbers $n=1,2,3,4$ and for $n\to\infty$. } \end{center} \vskip 3mm \end{table} In Table~\ref{Tablewavefunctions} the smallest and largest values for the relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the moduli squared of the $l=0$ bound state wave functions $\Delta_{\Psi,n}\equiv \delta\,|\Psi_n^{\mbox{\tiny QCD}}(0)|^2_{Dar}/ |\Psi_n^{\mbox{\tiny QCD}}(0)|^2$ for the different heavy quark systems are displayed for the ground states ($n=1$) and the first three radially excited states ($n=2,3,4$), employing the ranges for the strong coupling as given below eq.~(\ref{Selfconsistency}). For illustration the corresponding value for $n\to\infty$ is also presented. The absolute values of the corrections to the ground states amount to $4\%-5\%$ for the $t\bar t$, $11\%-20\%$ for the $b\bar b$ and $17\%-34\%$ for the $c\bar c$ system. It is an interesting fact that for the $b\bar b$ and $c\bar c$ systems the size of the corrections decreases rapidly for higher excited states. In particular, the sensitivity of the corrections to the different values of $\alpha_s$ seems to be surprisingly small for the excited states in the $b\bar b$ and $c\bar c$ systems. We will come back to this point later. \par \begin{figure}[t] \begin{center} \epsfxsize=4cm \leavevmode \epsffile[220 420 420 550]{fig4.ps}\mbox{\hspace{4mm}}\\ \vskip 3.3cm \epsfxsize=4cm \leavevmode \epsffile[220 420 420 550]{fig5.ps} \vskip -82mm $\displaystyle{\bf \Delta R_{t\bar t}\mbox{\hspace{12.5cm}}}$ \vskip 5mm $\displaystyle{\mbox{\hspace{3cm}}\bf \Gamma_t=0}$ \vskip 29mm $\displaystyle{\mbox{\hspace{10.5cm}}\bf\frac{\beta}{C_F\,\alpha_s}}$ \vskip 7mm $\displaystyle{\bf \Delta R_{t\bar t}\mbox{\hspace{12.5cm}}}$ \vskip 20mm $\displaystyle{\mbox{\hspace{4.3cm}} \bf \Gamma_t=1.5\,\mbox{GeV}}$ \vskip 16mm $\displaystyle{\mbox{\hspace{10.5cm}}\bf E (\mbox{GeV})}$ \vskip 2mm \caption{\label{Figcrosssection1} The relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the $t\bar t$ production cross section in the threshold region for the cases $\alpha_s=0.13$ (solid lines) and $0.16$ (dashed lines) for stable (a) and unstable (b) top quarks.
The circles in Figure (b) indicate the location of the $1S$ Coulomb energy level.} \end{center} \end{figure} \begin{figure}[t] \begin{center} \epsfxsize=4cm \leavevmode \epsffile[220 420 420 550]{fig3.ps}\\ \vskip 3.3cm \epsfxsize=4cm \leavevmode \epsffile[220 420 420 550]{fig2.ps} \vskip -82mm $\displaystyle{\bf \Delta R_{b\bar b}\mbox{\hspace{12.5cm}}}$ \vskip 39.5mm $\displaystyle{\mbox{\hspace{10.5cm}}\bf\frac{\beta}{C_F\,\alpha_s}}$ \vskip 7mm $\displaystyle{\bf \Delta R_{c\bar c}\mbox{\hspace{12.7cm}}}$ \vskip 39.7mm $\displaystyle{\mbox{\hspace{10.5cm}}\bf\frac{\beta}{C_F\,\alpha_s}}$ \vskip 1mm \caption{\label{Figcrosssection2} The relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the $b\bar b$ (a) and $c\bar c$ (b) production cross section in the kinematical region $0<\beta< C_F\alpha_s$ above threshold. The solid line corresponds to $\alpha_s=0.25$ ($0.34$) and the dashed line to $\alpha_s=0.38$ ($0.59$) for the case of $b\bar b$ ($c\bar c$) production.} \end{center} \end{figure} In Fig.~\ref{Figcrosssection1}a and \ref{Figcrosssection2}a,b the relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to the (stable) quark-antiquark production cross section $\Delta R_{Q\bar Q}\equiv \delta R^{Q\bar Q}_{Thr, Dar}/R^{Q\bar Q}_{Thr}$ are displayed above the threshold point for the three heavy quark systems in the range $0 < \beta < C_F\,\alpha_s$. (For $t\bar t$ production this corresponds to the energy range $0 < E < 5$~$(8)$~GeV for $\alpha_s=0.13$~$(0.16)$.) The solid (dashed) lines correspond to the lower (upper) $\alpha_s$ value given below eq.~(\ref{Selfconsistency}). For the $t\bar t$ system the size of the relative corrections is quite stable between $-1.9\%$ and $-1.0\%$ with the tendency to decrease in magnitude for larger velocities. It is striking that the dependence of the corrections on the changes in the $\alpha_s$ value is weaker for larger velocities ($0.3\%$ for $\beta=0$ and $0.05\%$ for $\beta=C_F\alpha_s$). For the $b\bar b$ system the corrections vary between $-2\%$ (lower value) and $+5\%$ (upper value), where the larger values occur for larger velocities. In contrast to the top system the dependence of the corrections on the changes in the $\alpha_s$ value ($3\%$ for $\beta=0$ and $5\%$ for $\beta=C_F\alpha_s$) increases for larger velocities. This indicates that the perturbative approach employed in this work works better for the $t\bar t$ than for the $b\bar b$ system. For the $c\bar c$ system, on the other hand, the dependence on the changes in $\alpha_s$ is tremendous. Depending on the size of the coupling the corrections vary from $-1\%$ to $+15\%$ for $\beta=0$ up to $+3\%$ to $+26\%$ for $\beta=C_F\alpha_s$, drawing a rather uncomfortable picture for the perturbativity in the charm system. For the case of $t\bar t$ production we have also plotted the corrections for a finite width $\Gamma_t=1.5$~GeV (see Fig.~\ref{Figcrosssection1}b) in the energy range $-10$~GeV~$< E < +10$~GeV in order to demonstrate the impact of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections in the presence of the large top width. This has been achieved by the naive replacement $E\to E+i\,\Gamma_t$ in eqs.~(\ref{Darwincrosssection}) and (\ref{Sommerfeldfactor}). We want to mention that the inclusion of a finite width by this naive procedure does not represent a consistent treatment at the ${\cal{O}}(\alpha_s^2)$ accuracy level.
However, we find that this approach is justified here in order to demonstrate that the typical size of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections is not altered if the top quark width is taken into account. In this case the relative ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections amount to $-6\%$ to $-2\%$ around the $1S$ peak and to $-2\%$ to $-1\%$ for higher energies. For a rigorous treatment of the corrections due to the off-shellness of the top quark we refer the reader to~\cite{Modritsch1} and references therein. \par Although the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections discussed above represent only a small part of the full ${\cal{O}}(\alpha_s^2)$ corrections, we believe that their size can be taken as an order of magnitude estimate for the sum of all ${\cal{O}}(\alpha_s^2)$ corrections. We therefore have to face the question of whether, or to what extent, a perturbative expansion in the strong coupling in the threshold regime makes sense. Because we take the position that one should not automatically reject the possibility of a perturbative treatment of long-distance effects, we think that the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections determined in this work provide us with important hints toward an acceptable answer to this fundamental question from the point of view of perturbation theory itself. There is no doubt that perturbation theory in the strong coupling is still viable for the $t\bar t$ system. It has been shown in~\cite{Fadin1} by using more general arguments that the large top mass and width serve as a screening device which protects the $t\bar t$ properties in the threshold region from the influence of non-perturbative effects, making the $t\bar t$ system into the ``hydrogen atom of the strong interaction''. Thus a perturbative treatment of the $t\bar t$ system should exhibit an excellent convergence. This is consistent with the observations from the previous discussions showing that the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections for the top system are at the level of a few percent for most of the threshold region\footnote{ A comparison of the size of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections with the ${\cal{O}}(C_F\alpha_s)$ corrections from the $(1-4 C_F\alpha_s/\pi)$ suppression factor is slightly misleading in this context because the latter represents a pure short-distance contribution. Therefore the ${\cal{O}}(C_F\alpha_s)$ correction should not be included in a discussion on the convergence in the perturbative description of long-distance corrections. However, for the convenience of the reader, the size of the large ${\cal{O}}(C_F\alpha_s)$ corrections shall also be given. It has been shown in~\cite{Voloshin1,Hoang1,Hoang2} in a two-loop analysis that the scale in the strong coupling of the ${\cal{O}}(C_F\alpha_s)$ suppression factor is $e^{-11/24}\,M_Q$ in the $\overline{\mbox{MS}}$ scheme. This results in $-4\,C_F\,\alpha_s/\pi=-20\%$, $-41\%$ and $-64\%$ for the top, bottom and charm systems, respectively, using the one-loop QCD beta-function, the pole mass values given below eq.~(\ref{Selfconsistency}) and $\alpha_s(M_Z=91.187\,\mbox{GeV})=0.125$. }. This, on the other hand, allows us to conclude that the theoretical uncertainty of all present analyses for the total $t\bar t$ cross section in the threshold region is at the few percent level, because no full ${\cal{O}}(\alpha_s^2)$ treatment has ever been accomplished there.
Further, the theoretical uncertainty of such a complete analysis would then be roughly one to two percent around the $1S$ peak and below several per mille for higher energies. This can be estimated by taking the $\alpha_s$ values presented below eq.~(\ref{Selfconsistency}) cubed (assuming that no scales lower than $C_F\alpha_s M_Q$ are relevant for the corrections beyond the ${\cal{O}}(\alpha_s^2)$ accuracy level) and by observing the sensitivity of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to changes in the values of the strong coupling (see Fig.~\ref{Figcrosssection1}b). To achieve an accuracy much below the percent level at the $1S$ peak a more rigorous treatment of the scale in the strong coupling governing the ${\cal{O}}(\alpha_s^2)$ corrections would be needed, {\it i.e.} an ${\cal{O}}(\alpha_s^3)$ calculation. \par As far as the $b\bar b$ system is concerned, the situation is worse than for the $t\bar t$ system. It has been shown in a number of classical papers~\cite{Shifman1,Leutwyler1,Voloshin3} that a proper theoretical description of the bottom system can only be achieved by taking into account non-perturbative corrections, which cannot be calculated from first principles in QCD. On the other hand, it has been demonstrated in~\cite{Yndurain1} that a quite acceptable ``parameter-free'' description of the ($S$-wave, $l=0$) $b\bar b$ bound states with low radial excitation is possible by using perturbative calculations supplemented by non-perturbative contributions in the form of the quark or the gluon condensates. However, the latter analyses (as far as corrections to the moduli squared of the wave functions at the origin and to the cross section above threshold are concerned) were essentially based on formulae including only the effects of the one-loop running of the strong coupling and the global ${\cal{O}}(\alpha_s)$ correction factor $(1-4 C_F\alpha_s/\pi)$. The question whether the ${\cal{O}}(\alpha_s^2)$ perturbative corrections lead to a still converging series was not addressed explicitly. Equipped with the results for the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections, we are able to draw a rough picture concerning the latter question for the case of the moduli squared of the $l=0$ bound state wave functions at the origin. For the ground state the ${\cal{O}}(\alpha_s^2)$ corrections should be between $10\%$ and $20\%$ (where the actual sign of the corrections can only be determined by a complete ${\cal{O}}(\alpha_s^2)$ analysis) with theoretical uncertainties of order $\pm 5\%$ coming from the ignorance of the actual scale of the strong coupling and other corrections beyond the ${\cal{O}}(\alpha_s^2)$ level. This does not represent an overwhelming convergence, but it is acceptable compared to the precision of experimental measurements~\cite{Experiment1} and it indicates that an actual determination of all ${\cal{O}}(\alpha_s^2)$ corrections would lead to a considerable improvement of the precision of the theoretical description. It is remarkable that the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections seem to indicate that the size of the ${\cal{O}}(\alpha_s^2)$ corrections, including their sensitivity to changes in the value of the strong coupling, is much smaller for higher excited states (see Table~\ref{Tablewavefunctions}). Here, however, non-perturbative contributions get more and more out of control~\cite{Leutwyler1,Voloshin3} and a complete ${\cal{O}}(\alpha_s^2)$ analysis is therefore necessary to give a trustworthy interpretation of this phenomenon.
The latter remark is also true for the $c\bar c$ system. \par Finally, we turn to the $c\bar c$ system. In view of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections, we can expect the size of the complete ${\cal{O}}(\alpha_s^2)$ corrections to the modulus squared of the ground state wave function at the origin to be at least at the level of $15\%$ to $35\%$ with theoretical errors which might be almost as large as the size of the ${\cal{O}}(\alpha_s^2)$ corrections themselves. (Again we can estimate the size of the corrections beyond the ${\cal{O}}(\alpha_s^2)$ level by taking the long-distance $\alpha_s$ values given below eq.~(\ref{Selfconsistency}) cubed.) It is evident that in the case of the $c\bar c$ system the limits of perturbation theory are reached or even exceeded. Even with a complete determination of all ${\cal{O}}(\alpha_s^2)$ corrections the theoretical uncertainties would not decrease considerably, which is obviously a consequence of the large size of the strong coupling. We therefore conclude that it will be extremely difficult (if not impossible) to achieve a perturbation-theory-based theoretical description for the charm system with uncertainties lower than several times $10\%$ if there is no (unforeseen) cancellation among different types of corrections. \par To conclude this section, a remark is in order: for the calculations of the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections we used the renormalized Green function at zero distances, eq.~(\ref{Greenfunctionrenormalized}), without any further explanation. This is slightly misleading because it implies that the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections to wave functions and cross sections can be uniquely separated from all the other ${\cal{O}}(C_F^2\alpha_s^2)$ corrections. As far as the $\ln\alpha_s$ contribution and the digamma term are concerned this is definitely true, but this is not the case for the constant terms. This is a consequence of the divergences which arise during the calculations and which have to be renormalized. The use of our renormalized zero-distance Green function represents one possible way to achieve this renormalization. Nevertheless we think that our approach is justified in order to illustrate the possible size of the complete ${\cal{O}}(\alpha_s^2)$ corrections. This view is also supported by the explicit results for all ${\cal{O}}(C_F^2\alpha_s^2)$ corrections to the $l=0$ wave functions at the origin and the cross section, which will be published shortly. However, we want to emphasize that the latter considerations do not affect the validity of the expressions for the vacuum polarization function presented in Sections~\ref{SectionCalculation} and \ref{SectionAnalysis}. There, all constants are correct due to proper matching to the well established one- and two-loop expressions $\Pi^{(1)}$ and $\Pi^{(2)}$, eqs.~(\ref{Pi1loopexpanded}) and (\ref{Pi2loopexpanded}). \par \vspace{.5cm} \section{Comment on Threshold Effects far from the \\Threshold Region} \label{SectionThreshold} In this section we want to comment on the use and the interpretation of the expression of the QED vacuum polarization function valid for all energies to ${\cal{O}}(\alpha^2)$ accuracy, eq.~(\ref{Piallenergies}).
\par We have shown in Section~\ref{SectionAnalysis} that the function $A$, which represents the resummed expression for diagrams with the instantaneous Coulomb exchange of two and more longitudinally polarized photons (in Coulomb gauge), see eq.~(\ref{Adefinition}), essentially has to be added to the one- and two-loop expressions for the vacuum polarization function in order to achieve ${\cal{O}}(\alpha^2)$ accuracy in the threshold region $|\beta|\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\alpha$. Far from the threshold regime, however, $A$ represents contributions of order $\alpha^3$ and higher and therefore is irrelevant. This is what we mean by using the term ``valid for all energies to ${\cal{O}}(\alpha^2)$ accuracy''. -- But not more! \par At this point the reader might be tempted to apply formula~(\ref{Piallenergies}), as it stands, for an energy regime far from the threshold in the belief that $A$ would represent higher-order information which should improve the accuracy of the one- and two-loop expressions calculated in the framework of conventional perturbation theory. Let us illustrate such a scenario for the energy regime where $q^2$ is close to zero. In this kinematic region, formula~(\ref{Piallenergies}) can be expanded in terms of small $q^2$. Taking into account only the first non-vanishing contributions in $q^2/M^2$ and including only contributions up to ${\cal{O}}(\alpha^3)$ the result reads \begin{equation} \Pi^{{\cal{O}}(\alpha^2)}_{\mbox{\tiny QED}}(q^2) \, \stackrel{q^2\to 0}{=} \, \Big(\frac{\alpha}{\pi}\Big)\,\frac{1}{15}\,\frac{q^2}{M^2} \, + \, \Big(\frac{\alpha}{\pi}\Big)^2\,\frac{41}{162}\,\frac{q^2}{M^2} \, + \, \alpha^3\,\frac{\pi^2}{48}\sqrt{\frac{q^2}{M^2}} \, + \, {\cal{O}}(\alpha^4) \,, \label{Pinaiveq20} \end{equation} where the numerical value of the ${\cal{O}}(\alpha^3)$ coefficient is $\pi^2/48=0.21$. The corresponding multi-loop expression including also the first non-vanishing three-loop coefficient~\cite{Chet2} reads \begin{eqnarray} \Pi^{\mbox{\tiny 3 loop}}_{\mbox{\tiny QED}}(q^2) & \stackrel{q^2\to 0}{=} & \Big(\frac{\alpha}{\pi}\Big)\,\frac{1}{15}\,\frac{q^2}{M^2} \, + \, \Big(\frac{\alpha}{\pi}\Big)^2\,\frac{41}{162}\,\frac{q^2}{M^2} \nonumber\\[2mm] & & + \Big(\frac{\alpha}{\pi}\Big)^3\, \bigg[\, -\frac{8687}{13824} + \frac{\pi^2}{3}\, \left( \frac{1}{8} - \frac{1}{5}\,\ln 2 \right) + \frac{22781}{27648}\,\zeta_3 \,\bigg] \,\frac{q^2}{M^2} \, + \, {\cal{O}}(\alpha^4) \,. \label{Picorrectq20} \end{eqnarray} The numerical value of the constant term in the brackets is $0.32$. It is evident that the ${\cal{O}}(\alpha^3)$ contributions which come from $\Pi^{{\cal{O}}(\alpha^2)}_{\mbox{\tiny QED}}$ and therefore contain information on the formation of positronium bound states are much larger than the three-loop contributions. The ratio between the former ${\cal{O}}(\alpha^3)$ contributions and the three-loop result even diverges for $q^2\to 0$. The overall conclusion of this scenario would be that threshold (and therefore long-distance) effects dominate not only in the threshold regime but also in the energy region $|q^2|\ll 4 M^2$. This is obviously wrong! The ``threshold effects'' in eq.~(\ref{Pinaiveq20}) contradict the Appelquist-Carazzone theorem~\cite{Appelquist1} and even represent contributions non-analytic at $q^2=0$. The solution to this apparent paradox is that $\Pi^{{\cal{O}}(\alpha^2)}_{\mbox{\tiny QED}}$ only describes the vacuum polarization function to ${\cal{O}}(\alpha^2)$ accuracy.
All contributions of order $\alpha^3$ or higher have to be ignored and do not represent proper higher-order contributions. This means that the contributions of the function $A$ are necessary to achieve ${\cal{O}}(\alpha^2)$ accuracy in the threshold region, but should be neglected if the vacuum polarization function has to be evaluated far from the threshold point. \par To make the latter point more explicit, let us imagine that the analytical form of the complete three-loop contributions to the vacuum polarization function would be known for all energies (in the same sense as they are known for the one- and two-loop contributions, $\Pi^{(1)}$ and $\Pi^{(2)}$). We then could try to determine the expression for the vacuum polarization function valid to ${\cal{O}}(\alpha^3)$ accuracy for all energies in the same way as we have determined $\Pi^{{\cal{O}}(\alpha^2)}_{\mbox{\tiny QED}}$, which is valid to ${\cal{O}}(\alpha^2)$ accuracy for all energies. This would be achieved by matching the three-loop expression for the vacuum polarization function to the corresponding ${\cal{O}}(\alpha^3)$ formula calculated in NRQED in the same way as presented in Section~\ref{SectionCalculation}. The vacuum polarization function valid to ${\cal{O}}(\alpha^3)$ accuracy for all energies would then have the form\footnote{ In eq.~(\ref{Pi3allenergies}) $\Delta(\alpha,\beta)$ denotes the ${\cal{O}}(\alpha^3)$ NRQED contributions, including the necessary subtractions in order to avoid double-counting. The actual form of these contributions is irrelevant here because we only want to discuss the large ${\cal{O}}(\alpha^3)$ contributions in eq.~(\ref{Pinaiveq20}). However, it is straightforward to see that $\Delta$ contains terms of order $\alpha^3$ in the threshold regime, but is of order $\alpha^4$ far from the threshold point. } \begin{eqnarray} \Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^3)}(q^2) & = & \Big(\frac{\alpha}{\pi}\Big)\,\Pi^{(1)}(q^2) \, + \, \Big(\frac{\alpha}{\pi}\Big)^2\,\Pi^{(2)}(q^2) \, + \, \Big(\frac{\alpha}{\pi}\Big)^3\,\Pi^{(3)}(q^2) \nonumber\\[2mm] & & + \, A(\alpha,\beta) - \alpha^3\,\bigg[\,i\,\frac{\pi^2}{24\,\beta}\,\bigg] \, + \, \Delta(\alpha,\beta) \,. \label{Pi3allenergies} \end{eqnarray} In the second line of eq.~(\ref{Pi3allenergies}) the contribution $\alpha^3\,[\,i\,\frac{\pi^2}{24\,\beta}\,]$ has to be subtracted in order to avoid double counting in the threshold regime since \begin{equation} \Big(\frac{\alpha}{\pi}\Big)^3\, \Pi^{(3)}(q^2) \, \stackrel{|\beta|\ll 1}{=} \, \alpha^3\,\bigg[\, \,i\,\frac{\pi^2}{24\,\beta} \,\bigg] \, + \, {\cal{O}}(\beta^0) \,. \label{Pi3loopexpanded} \end{equation} It is therefore clear that far from threshold the second line of eq.~(\ref{Pi3allenergies}) only contains contributions of order $\alpha^4$ and higher (see eq.~(\ref{Asmalla})). Expanding now $\Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^3)}$ for small values of $q^2$ would give a result identical to the three-loop expression, eq.~(\ref{Picorrectq20}). The large non-analytical ${\cal{O}}(\alpha^3)$ contribution which appeared in eq.~(\ref{Pinaiveq20}) would be gone. It is obvious that this large contribution originates from the leading non-vanishing term of $\Pi^{(3)}$ in an expansion for $|\beta|\ll 1$ evaluated for small $q^2$. These contributions survive in $\Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^2)}$, eq.~(\ref{Piallenergies}), but are cancelled in $\Pi_{\mbox{\tiny QED}}^{{\cal{O}}(\alpha^3)}$, eq.~(\ref{Pi3allenergies}).
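As a quick parenthetical cross-check of the numerical constants quoted below eqs.~(\ref{Pinaiveq20}) and (\ref{Picorrectq20}), the following lines (purely illustrative) reproduce $\pi^2/48\approx 0.21$ and the three-loop constant $\approx 0.32$:
\begin{verbatim}
import math

zeta3 = 1.2020569031595943
c_thr = math.pi**2 / 48.0                  # O(alpha^3) term, eq. (Pinaiveq20)
c_3loop = (-8687.0/13824.0
           + (math.pi**2/3.0) * (1.0/8.0 - math.log(2.0)/5.0)
           + (22781.0/27648.0) * zeta3)    # bracket of eq. (Picorrectq20)
print(round(c_thr, 3), round(c_3loop, 3))  # -> 0.206 0.317
\end{verbatim}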
Using the same line of arguments as above, it can easily be shown that all contributions of the function $A$ would be cancelled if formulae for the vacuum polarization function with successively higher accuracy were determined. \par The physical picture behind this cancellation can be drawn as follows: the contributions in the function $A$ are generated by vacuum polarization diagrams with the instantaneous Coulomb exchange of two and more longitudinal photons, where the latter are defined in the Coulomb gauge. In the threshold region the exchange of these longitudinal photons represents the dominant effect, whereas all the other interactions, called ``transverse'' in the following for simplicity, can be neglected in a first approximation. Although this approach is obviously not gauge invariant from the point of view of full quantum electrodynamics, the violation of gauge invariance is vanishing in the non-relativistic limit. This is not true, however, far from the threshold point. There, contributions from longitudinal and transverse photons are equally important. Their individual sizes are extraordinarily large but with different signs. Therefore, adding the transverse contributions to the contributions of the function $A$, the greater part of the large corrections will be cancelled, leaving results which can be obtained from conventional (multi-loop) perturbation theory. This remains true at any level of accuracy. From this picture it should be clear that neither effects from the formation of $e^+e^-$ bound states nor from the Coulomb rescattering, if the relative velocity of the $e^+e^-$ pair is much smaller than the speed of light, can ever lead to large corrections to the vacuum polarization function far from the threshold region. There, the contributions of the function $A$ represent unphysical (and gauge non-invariant) contributions which cannot even be used to estimate the size of the real higher-order corrections\footnote{ If applied to QCD our conclusion is essentially equivalent to arguments employed in~\cite{Novikov1,Voloshin4}. }. \par We would like to remind the reader that the previous arguments are not applicable if a high number of derivatives of the vacuum polarization function below the threshold region is considered, $({\rm d}/{\rm d} q^2)^n \Pi(q^2)$, $n\gg 1$. In the latter case threshold effects are essential. This can be easily understood from the relation \begin{equation} {\cal{M}}_n(q^2) \, \equiv \, \bigg(M^2\,\frac{d}{d q^2}\bigg)^n\,\Pi(q^2) \,\sim\, M^{2n}\,\int\frac{d {q^\prime}^2}{{q^\prime}^2}\, \frac{{\mbox{Im}} \Pi({q^\prime}^2)}{({q^\prime}^2-q^2)^n} \,. \end{equation} For large $n$ and $|q^2|\ll 4 M^2$ the high-energy contributions in the dispersion integration are strongly suppressed, which leads to the domination of effects coming from the threshold region. This fact is the foundation of QCD sum rule calculations. At this point we would like to take the opportunity to comment on a recent publication where QCD sum rules have been applied to extract $\alpha_s$ and the bottom quark mass from experimental data on the $\Upsilon$ resonances~\cite{Pich1}. In this publication it is claimed that ${\cal{O}}(\alpha_s^2)$ corrections to the moments ${\cal{M}}_n^{\mbox{\tiny QCD}}(0)$ have been calculated because two-loop corrections to the $b\bar b$ production cross section have been included in the analysis.
It should be clear from the discussions of Section~\ref{SectionQCD} that a two-loop calculation of the cross section is not sufficient to describe the ${\cal{O}}(\alpha_s^2)$ corrections to the cross section in the threshold region. In the analysis of~\cite{Pich1} this can be easily seen from the fact that the removal of the two-loop contributions (after subtraction of the corresponding leading and next-to-leading threshold contributions) essentially has no effect on the results (see Table~4 in~\cite{Pich1}). The latter observation is taken as a ``final test on the importance of higher-order corrections''. However, as shown in Section~\ref{SectionQCD}, the ${\cal{O}}(\alpha_s^2)$ corrections to the cross section in the threshold region are expected to be at the $10\%$ to $20\%$ level and will therefore have a large impact on QCD sum rule calculations in the large $n$ limit. The mistake in the arguments of~\cite{Pich1} is that it is implicitly assumed that the Sommerfeld factor, eq.~(\ref{Sommerfeldfactor}), accounts for the resummation of all long-distance effects. Therefore all corrections to expression~(\ref{Sommerfeldfactor}) should be calculable by fixed-order loop calculations alone. This is true for the ${\cal{O}}(\alpha_s)$ short-distance correction factor $(1-4 C_F\alpha_s/\pi)$, but this is not the case for higher-order corrections like the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections calculated in Section~\ref{SectionQCD}. This fact will be demonstrated explicitly in a future publication, where all ${\cal{O}}(C_F^2\alpha_s^2)$ corrections to the cross section will be presented. In~\cite{Pich1} it is also assumed that the effects of the running of the strong coupling in the Sommerfeld factor can be determined by insertion of the effective running coupling $\alpha_V$, which is related to the short-distance corrections of the QCD potential~\cite{Fischler1,Billoire1}. We would like to emphasize that this approach is not justified for large $n$ QCD sum rule calculations because the important saturation effects are neglected in this procedure. As a consequence the calculations presented in~\cite{Pich1} not only fail to reach the ${\cal{O}}(\alpha_s^2)$ accuracy level but also include a systematic error at order $\alpha_s$ and therefore contain much larger uncertainties than presented there. The authors of~\cite{Pich1} finally criticize an older QCD sum rule calculation by Voloshin~\cite{Voloshin1} on the same subject, claiming that in~\cite{Voloshin1} the magnitude of higher-order corrections was underestimated. In this point we agree with the authors of~\cite{Pich1} because in~\cite{Voloshin1} it is assumed that ${\cal{O}}(\alpha_s^2)$ corrections have ``no enhancement'' in the large $n$ limit and therefore should be of order $1/n$. This assumption is essentially equivalent to the statement that all ${\cal{O}}(\alpha_s^2)$ corrections to the ground state Coulomb wave function at the origin of a bound $b\bar b$ pair should vanish. We have shown explicitly in this work that this assumption is not true by calculating the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections. The latter corrections amount to $10\%$ to $20\%$ for the modulus squared of the ground state wave function at the origin for a bound $b\bar b$ pair and are far from being negligible. We therefore conclude that the results presented in~\cite{Voloshin1} actually contain theoretical uncertainties at the $10\%$ to $20\%$ level, an order of magnitude larger than claimed there.
\par \vspace{.5cm} \section{Summary} \label{SectionSummary} In this work we have used the concept of effective field theories to calculate the ${\cal{O}}(\alpha^2)$ corrections to the QED vacuum polarization function in the threshold region and to define a renormalized version of the zero-distance Coulomb Green function. In the framework where non-relativistic quantum mechanics is part of an effective low energy field theory (NRQED), long-distance effects (coming from typical momentum scales below the electron mass) are determined completely by conventional quantum mechanics calculations, whereas short-distance contributions (coming from momentum scales beyond the electron mass) are included via the matching procedure. For the latter contributions multi-loop techniques (in conventional covariant perturbation theory) have to be employed. We have demonstrated that the effective field theory approach represents a highly efficient method to merge sophisticated multi-loop methods with the well-known time-independent perturbation theory of textbook quantum mechanics. From the physical point of view this is achieved because the effective field theory concept allows for a transparent and systematic separation of long- and short-distance physics at any level of precision. For our calculations we have used a ``direct matching'' procedure which can be applied if the multi-loop results for the quantity of interest are at hand. This direct matching allows for a quite sloppy treatment of UV divergences in the effective field theory, but is of no value for quantities for which no multi-loop expressions are available. \par We have demonstrated the efficiency of our approach by calculating the ${\cal{O}}(\alpha^6)$ ``vacuum polarization'' contributions to the positronium ground state hyperfine splitting without referring back to the Bethe-Salpeter equation, and by determining the ${\cal{O}}(C_F^2\alpha_s^2)$ (next-to-next-to-leading order) Darwin corrections to heavy quark-antiquark bound state wave functions at the origin and to the heavy quark-antiquark production cross section in $e^+e^-$ annihilation (into a virtual photon). If the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections are taken as an order-of-magnitude estimate for the complete (yet unknown) ${\cal{O}}(\alpha_s^2)$ corrections, the typical ${\cal{O}}(\alpha_s^2)$ corrections for the $t\bar t$ production cross section can be expected at the few percent level for most of the threshold region. Around the $1S$ peak they might even amount to $5\%$. For the modulus squared of the ground state wave function of a bound $b\bar b$ pair (applicable to $\Upsilon(1S)$), the ${\cal{O}}(C_F^2\alpha_s^2)$ Darwin corrections are between $10\%$ and $20\%$, whereas the corresponding corrections for the $c\bar c$ system are between $15\%$ and $35\%$. The uncertainties arise from the ignorance of higher-order corrections, in particular from the ignorance of the exact scale in the strong coupling. We conclude that the determination of all ${\cal{O}}(\alpha_s^2)$ corrections would represent a considerable improvement of the present precision of theoretical calculations for the $t\bar t$ and $b\bar b$ systems in the threshold region. For the $c\bar c$ system, on the other hand, this seems doubtful, a consequence of the large size of the strong coupling.
\par Finally, we have also discussed whether the formation of positronium states can lead to large corrections to the QED vacuum polarization function far from the threshold region and have come to the conclusion that such corrections do not exist. \par \vspace{.5cm} \section*{Acknowledgement} I am grateful to R.~F.~Lebed and A.~V.~Manohar for many useful discussions and reading the manuscript and to J.~Kuti and I.~Z.~Rothstein for helpful conversations. I would especially like to thank P.~Labelle for enlightening discussions on the role of logarithmic terms in bound state problems. \vspace{1.0cm} \sloppy \raggedright \def\app#1#2#3{{\it Act. Phys. Pol. }{\bf B #1} (#2) #3} \def\apa#1#2#3{{\it Act. Phys. Austr.}{\bf #1} (#2) #3} \defProc. LHC Workshop, CERN 90-10{Proc. LHC Workshop, CERN 90-10} \def\npb#1#2#3{{\it Nucl. Phys. }{\bf B #1} (#2) #3} \def\nP#1#2#3{{\it Nucl. Phys. }{\bf #1} (#2) #3} \def\plb#1#2#3{{\it Phys. Lett. }{\bf B #1} (#2) #3} \def\prd#1#2#3{{\it Phys. Rev. }{\bf D #1} (#2) #3} \def\pra#1#2#3{{\it Phys. Rev. }{\bf A #1} (#2) #3} \def\pR#1#2#3{{\it Phys. Rev. }{\bf #1} (#2) #3} \def\prl#1#2#3{{\it Phys. Rev. Lett. }{\bf #1} (#2) #3} \def\prc#1#2#3{{\it Phys. Reports }{\bf #1} (#2) #3} \def\cpc#1#2#3{{\it Comp. Phys. Commun. }{\bf #1} (#2) #3} \def\nim#1#2#3{{\it Nucl. Inst. Meth. }{\bf #1} (#2) #3} \def\pr#1#2#3{{\it Phys. Reports }{\bf #1} (#2) #3} \def\sovnp#1#2#3{{\it Sov. J. Nucl. Phys. }{\bf #1} (#2) #3} \def\sovpJ#1#2#3{{\it Sov. Phys. JETP Lett. }{\bf #1} (#2) #3} \def\jl#1#2#3{{\it JETP Lett. }{\bf #1} (#2) #3} \def\jet#1#2#3{{\it JETP Lett. }{\bf #1} (#2) #3} \def\zpc#1#2#3{{\it Z. Phys. }{\bf C #1} (#2) #3} \def\ptp#1#2#3{{\it Prog.~Theor.~Phys.~}{\bf #1} (#2) #3} \def\nca#1#2#3{{\it Nuovo~Cim.~}{\bf #1A} (#2) #3} \def\ap#1#2#3{{\it Ann. Phys. }{\bf #1} (#2) #3} \def\hpa#1#2#3{{\it Helv. Phys. Acta }{\bf #1} (#2) #3} \def\ijmpA#1#2#3{{\it Int. J. Mod. Phys. }{\bf A #1} (#2) #3} \def\ZETF#1#2#3{{\it Zh. Eksp. Teor. Fiz. }{\bf #1} (#2) #3} \def\jmp#1#2#3{{\it J. Math. Phys. }{\bf #1} (#2) #3} \def\yf#1#2#3{{\it Yad. Fiz. }{\bf #1} (#2) #3}
\subsection{Systematic uncertainties} \label{sec:systematics} \input{Systematics} \subsection{Test of SM $J^P = 0^+$ against $J^P = 0^-$ \label{sec:zerom}} The distributions of the test statistics $q$ from the \ensuremath{H \rightarrow ZZ^{(*)}}\ channel for the \ensuremath{J^P=0^+}\ and \ensuremath{0^{-}}\ hypotheses are shown in Fig.~\ref{fig:ZZ_LLR} together with the observed value. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{figure_7} \caption{Expected distributions of $q = \log({\cal L} (\ensuremath{J^P=0^+} )/{\cal L}( \ensuremath{J^P=0^-} ))$, the logarithm of the ratio of profiled likelihoods, under the \ensuremath{J^P=0^+}\ and \ensuremath{0^{-}}\ hypotheses for the Standard Model \ensuremath{J^P=0^+}\ (blue/solid line distribution) or \ensuremath{0^{-}}\ (red/dashed line distribution) signals. The observed value is indicated by the vertical solid line and the expected medians by the dashed lines. The coloured areas correspond to the integrals of the expected distributions up to the observed value and are used to compute the $p_0$-values for the rejection of each hypothesis. } \label{fig:ZZ_LLR} \end{figure} The expected and observed rejections of the \ensuremath{J^P=0^+}\ and \ensuremath{0^{-}}\ hypotheses are summarised in Table~\ref{tab:resultsummary_0p0m}. The data are in agreement with the \ensuremath{J^P=0^+}\ hypothesis, while the \ensuremath{0^{-}}\ hypothesis is excluded at 97.8\% CL. \subsection{Test of SM $J^P = 0^+$ against $J^P = 1^+$ \label{sec:onep}} The expected and observed rejections of the \ensuremath{J^P=0^+}\ and \ensuremath{1^{+}}\ hypotheses in the \ensuremath{H \rightarrow ZZ^{(*)}}\ and \ensuremath{H \rightarrow WW^{(*)}}\ channels and their combination are summarised in Table~\ref{tab:resultsummary_0p1p}. For both channels, the results are in agreement with the \ensuremath{J^P=0^+}\ hypothesis. In the \ensuremath{H \rightarrow ZZ^{(*)}}\ channel, the \ensuremath{1^{+}}\ hypothesis is excluded at 99.8\% CL, while in the \ensuremath{H \rightarrow WW^{(*)}}\ channel, it is excluded at 92\% CL. The combination excludes this hypothesis at 99.97\% CL. \subsection{Test of SM $J^P = 0^+$ against $J^P = 1^-$ \label{sec:onem}} The expected and observed rejections of the \ensuremath{J^P=0^+}\ and \ensuremath{1^{-}}\ hypotheses in the \ensuremath{H \rightarrow ZZ^{(*)}}\ and \ensuremath{H \rightarrow WW^{(*)}}\ channels and their combination are summarised in Table~\ref{tab:resultsummary_0p1m}. For both channels, the results are in agreement with the \ensuremath{J^P=0^+}\ hypothesis. In the \ensuremath{H \rightarrow ZZ^{(*)}}\ channel, the \ensuremath{1^{-}}\ hypothesis is excluded at 94\% CL. In the \ensuremath{H \rightarrow WW^{(*)}}\ channel, the \ensuremath{1^{-}}\ hypothesis is excluded at 98\% CL. The combination excludes this hypothesis at~99.7\%~CL. \subsection{Test of SM $J^P = 0^+$ against $J^P = 2^+$ \label{sec:twop}} The expected and observed rejections of the \ensuremath{J^P=0^+}\ and $2^{+}$ hypotheses in the three channels are summarised in Table~\ref{tab:resultsummary_combined}, for all \ensuremath{f_{q\bar{q}}}\ values of the spin-2 particle considered. For all three channels, the results are in agreement with the spin-0 hypothesis. The results from the \ensuremath{H{\rightarrow\,}\gamma\gamma}\ channel exclude a spin-2 particle produced via gluon fusion ($\ensuremath{f_{q\bar{q}}} =0$) at 99.3\% CL. The separation between the two spin hypotheses in this channel decreases with increasing \ensuremath{f_{q\bar{q}}}. 
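As an aside on the statistical procedure used in all of these tests: the $p_0$-values and \ensuremath{{\rm CL}_{\rm s}}\ numbers follow from comparing the observed test statistic $q$ with its distributions under the two hypotheses. The sketch below is a purely illustrative toy with Gaussian pseudo-experiments (the actual analysis uses fully profiled likelihoods; the convention $\ensuremath{{\rm CL}_{\rm s}}=p_{\rm alt}/(1-p_{\rm SM})$ and all numbers in the snippet are assumptions for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# toy distributions of q = log L(0+)/L(alt) under each hypothesis
q_sm  = rng.normal(+2.0, 1.0, 100_000)   # pseudo-experiments, J^P = 0+
q_alt = rng.normal(-2.0, 1.0, 100_000)   # pseudo-experiments, alternative J^P
q_obs = 1.5                              # hypothetical observed value

p_alt = np.mean(q_alt >= q_obs)          # p0 of the alternative hypothesis
p_sm  = np.mean(q_sm  <= q_obs)          # p0 of the SM hypothesis
cls   = p_alt / (1.0 - p_sm)             # CLs for excluding the alternative
print(f"p_alt={p_alt:.2e}  p_SM={p_sm:.3f}  CLs={cls:.2e}")
\end{verbatim}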
For large values of \ensuremath{f_{q\bar{q}}} , the \ensuremath{|\cos \theta^*|}\ distributions associated with the spin-0 and spin-2 signals become very similar. In the case of the \ensuremath{H \rightarrow ZZ^{(*)}}\ channel, a separation slightly above one standard deviation is expected between the \ensuremath{J^P=0^+}\ and \ensuremath{J^P=2^+}\ hypotheses, with little dependence on the production mechanism. The \ensuremath{H \rightarrow WW^{(*)}}\ channel has the opposite behaviour to the \ensuremath{H{\rightarrow\,}\gamma\gamma}\ one, with the best expected rejection achieved for large values of \ensuremath{f_{q\bar{q}}} , as illustrated in Table~\ref{tab:resultsummary_combined}. The results for the \ensuremath{H \rightarrow WW^{(*)}}\ channel are also in agreement with the \ensuremath{J^P=0^+}\ hypothesis. The \ensuremath{J^P=2^+}\ hypothesis is excluded with a CL above 95\%. The data are in better agreement with the \ensuremath{J^P=0^+}\ hypothesis over the full range of \ensuremath{f_{q\bar{q}}} . \begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{figure_8} \caption{Expected (blue triangles/dashed line) and observed (black circles/solid line) confidence levels, $\ensuremath{{\rm CL}_{\rm s}}(\ensuremath{J^P=2^+})$, of the \ensuremath{J^P=2^+}\ hypothesis as a function of the fraction \ensuremath{f_{q\bar{q}}}\ (see text) for the spin-2 particle. The green bands represent the 68\%\ expected exclusion range for a signal with assumed \ensuremath{J^P=0^+} . On the right $y$-axis, the corresponding numbers of Gaussian standard deviations are given, using the one-sided convention.} \label{fig:CLs} \end{figure} Table~\ref{tab:res1} shows the expected and observed $p_0$-values for both the \ensuremath{J^P=0^+}\ and \ensuremath{J^P=2^+}\ hypotheses for the combination of the \ensuremath{H{\rightarrow\,}\gamma\gamma}, \ensuremath{H \rightarrow ZZ^{(*)}}\ and \ensuremath{H \rightarrow WW^{(*)}}\ channels. The test statistics calculated on data are compared to the corresponding expectations obtained from pseudo-experiments, as a function of \ensuremath{f_{q\bar{q}}}. The data are in good agreement with the Standard Model \ensuremath{J^P=0^+}\ hypothesis over the full \ensuremath{f_{q\bar{q}}}\ range. Figure~\ref{fig:CLs} shows the comparison of the expected and observed \ensuremath{{\rm CL}_{\rm s}}\ values for the \ensuremath{J^P=2^+}\ rejection as a function of \ensuremath{f_{q\bar{q}}}. The observed exclusion of the \ensuremath{J^P=2^+}\ hypothesis in favour of the Standard Model \ensuremath{J^P=0^+}\ hypothesis exceeds 99.9\%~CL for all values of \ensuremath{f_{q\bar{q}}}. \begin{figure}[!htbp] \centering \includegraphics[width=1.0\columnwidth]{figure_9} \caption{Expected (blue triangles/dashed lines) and observed (black circles/solid lines) confidence level $\ensuremath{{\rm CL}_{\rm s}}$ for alternative spin--parity hypotheses assuming a \ensuremath{J^P=0^+}\ signal. The green band represents the 68\%\ $\ensuremath{{\rm CL}_{\rm s}}(\ensuremath{J^{P}_{alt}})$ expected exclusion range for a signal with assumed \ensuremath{J^P=0^+} . For the spin-2 hypothesis, the results for the specific \ensuremath{2^{+}_{m}}\ model, discussed in Section~\ref{sec:SpinMC}, are shown. 
On the right $y$-axis, the corresponding numbers of Gaussian standard deviations are given, using the one-sided convention.} \label{fig:summary} \end{figure} \subsection{Summary} The observed and expected \ensuremath{{\rm CL}_{\rm s}}\ values for the exclusion of the different spin--parity hypotheses are summarised in Fig.~\ref{fig:summary}. For the spin-2 hypothesis, the \ensuremath{{\rm CL}_{\rm s}}\ value for the specific \ensuremath{2^{+}_{m}}\ model, discussed in Section~\ref{sec:SpinMC}, is displayed. \section*{Acknowledgements} We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, DIP and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW, Poland; GRICES and FCT, Portugal; MERYS (MECTS), Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide. 
\section{Introduction} \label{sec:Introduction} \input{Introduction.tex} \section{Signal modelling and Monte Carlo samples} \label{sec:SpinMC} \input{SpinMC.tex} \section{Statistical method} \label{sec:Statistics} \input{Statistics.tex} \section{\ensuremath{H{\rightarrow\,}\gamma\gamma}\ Analysis} \label{sec:hgg} \input{Hggspin2013.tex} \section{\ensuremath{H \rightarrow ZZ^{(*)}\rightarrow 4\ell}\ Analysis} \label{sec:hzz} \input{H4lspin2013.tex} \section{\ensuremath{H{\rightarrow\,}WW^{(*)}{\rightarrow\,}\ell\nu\ell\nu}\ Analysis} \label{sec:hww} \input{Hwwspin2013.tex} \section{Results} \label{sec:Results} \input{Results.tex} \section{Conclusions} \label{sec:Conclusion} \input{Conclusions.tex} \input{acknowledgements.tex} \bibliographystyle{atlasBibStyleWithTitle}
2,869,038,155,976
arxiv
\section{Negative binomial distribution}\label{sec:NBaug} \section{Polya-Gamma distribution} To infer the regression coefficient vector for each NB regression, we use the Polya-Gamma random variable $X\sim \mbox{PG}(a,c)$, defined in \citet{LogitPolyGamma} as an infinite weighted sum of independent and identically distributed ($i.i.d.$) gamma random variables as \vspace{0mm}\begin{equation} X = \frac{1}{2\pi^2}\sum_{k=1}^\infty \frac{g_k}{(k-1/2)^2+c^2/(4\pi^2)}, ~~g_k\sim\mbox{Gamma}(a,1). \label{eq:PGdefine} \vspace{0mm}\end{equation} As in \citet{polson2013bayesian}, the moment generating function of the Polya-Gamma random variable $X\sim\mbox{PG}(a,c)$ can be expressed as \vspace{0mm}\begin{equation} \mathbb{E}[e^{sX}] = {\cosh^a\left(\frac{c}{2}\right)}{\cosh^{-a}\left(\sqrt{\frac{c^2/2-s}{2}}\right)}. \notag \vspace{0mm}\end{equation} Let us denote $f(s) = \sqrt{\frac{c^2/2-s}{2}}$ and hence $f'(s) = -1/[4f(s)]$. Since $$ \frac{d\mathbb{E}[e^{sX}] }{ds} = \frac{a}{4}\mathbb{E}[e^{sX}] \frac{\tanh[f(s)]}{f(s)}, $$ $$ \frac{d^2\mathbb{E}[e^{sX}] }{ds^2} = \frac{1}{\mathbb{E}[e^{sX}]} \left(\frac{d\mathbb{E}[e^{sX}] }{ds}\right)^2 + \frac{a}{4}\mathbb{E}[e^{sX}] \cosh^{-2}[f(s)]\frac{\sinh[2f(s)]-2f(s)}{[2f(s)]^3} ,\notag $$ the mean can be expressed as \vspace{0mm}\begin{equation} \mathbb{E}[X\,|\, a, c] = \frac{ d\mathbb{E}[e^{sX}] }{ds}(0) = \frac{a}{2|c|}\tanh\left(\frac{|c|}{2}\right), \label{eq:PGmean} \vspace{0mm}\end{equation} and the variance can be expressed as \begin{align} \label{eq:varPG} \mbox{var}[X\,|\, a, c] &= \frac{ d^2\mathbb{E}[e^{sX}] }{ds^2}(0) - (\mathbb{E}[X])^2=\frac{a\cosh^{-2}\big({|c|}/{2}\big) {[\sinh(|c|)-|c|]}}{4|c|^3}=\frac{a}{2|c|^3} \frac{{\sinh(|c|)-|c|}}{\cosh(|c|)+1} \notag\\ &= \frac{a}{2|c|^3} \frac{1-e^{-2|c|}-2|c|e^{-|c|}}{1+e^{-2|c|}+2e^{-|c|}} = \frac{a\cosh^{-2}\left(\frac{|c|}{2}\right) }{4} \left(\frac{1}{6} + \sum_{n=1}^\infty \frac{|c|^{2n}}{(2n+3)!}\right), \end{align} which matches the variance shown in \citet{glynn2015bayesian} but with a much simpler derivation. \begin{figure}[!t] \begin{center} \includegraphics[width=0.66\columnwidth]{figure/PG_Gamma.pdf} \vspace{-.3mm} \end{center} \vspace{-5.9mm} \caption{\small\label{fig:PG_Gamma} (a) Comparison of the normalized histogram of $10^6$ independent random samples following $X\sim\mbox{PG}(0.9,2)$ (simulated using the truncated sampler at a truncation level of 1000) and that of $10^6$ ones simulated with the approximate sampler truncated at one, $i.e.$, simulated from $X\sim\mbox{Gamma}\left( {\mu_{\triangle}^2}/{\sigma^2_{\triangle}}, {\sigma^2_{\triangle}}/{\mu_{\triangle}}\right)$, where $\mu_{\triangle}=0.1714$ and $\sigma^2_{\triangle}=0.0192$ are chosen to match the mean and variance of $X\sim\mbox{PG}(0.9,2)$; (b) the differences between the normalized frequencies of these two histograms. (c)-(d): analogous plots to (a)-(b), with the truncated sampler truncated at two. (e)-(f): analogous plots to (a)-(b), with the truncated sampler truncated at four. } \end{figure} As in \eqref{eq:PGdefine}, a $\mbox{PG}$ distributed random variable can be generated from an infinite sum of weighted $i.i.d.$ gamma random variables. In \citet{polson2013bayesian}, when $a$ is an integer, $X\sim \mbox{PG}(a,c)$ is sampled exactly using a rejection sampler, but when $a$ is a positive real number, it is sampled approximately by truncating the infinite sum in \eqref{eq:PGdefine}. However, since every term discarded by the truncation is nonnegative, the mean of an $X$ generated in this manner is guaranteed to be biased low.
An improved approximate sampler is proposed in \citet{LGNB_ICML2012} to compensate for the bias of the mean, but not that of the variance. We present in Proposition \ref{prop:PGtruncate} an approximate sampler that is unbiased in both the mean and variance, using the summation of a finite number of gamma random variables. As shown in Fig. \ref{fig:PG_Gamma}, the approximate sampler is quite accurate even when using only two gamma random variables. We also provide some additional propositions, whose proofs are deferred to Appendix~\ref{sec:proof}, to describe some important properties that will be used in inference. \begin{prop}\label{prop:PGtruncate} Denoting $K\in\{1,2,\ldots\}$ as a truncation level, if \vspace{0mm}\begin{equation}\label{eq:Xhat} \hat{X} = \frac{1}{2\pi^2}\sum_{k=1}^{K-1} \frac{g_k}{(k-1/2)^2+c^2/(4\pi^2)},~~g_k\sim\emph{\mbox{Gamma}}(a,1) \vspace{0mm}\end{equation} and $ X_{\triangle}\sim\emph{\mbox{Gamma}}\left( {\mu_{\triangle}^2}{\sigma^{-2}_{\triangle}}, {\mu^{-1}_{\triangle}}{\sigma^2_{\triangle}}\right), $ where \vspace{0mm}\begin{eqnarray} \displaystyle\mu_{\triangle} = \frac{a}{2|c|}\tanh\left(\frac{|c|}{2}\right) -\mathbb{E}[\hat{X}],~~~ \displaystyle\sigma^2_{\triangle} = \frac{a}{2|c|^3} \frac{{\sinh(|c|)-|c|}}{\cosh(|c|)+1} - \emph{\mbox{var}}[\hat{X}], \notag \vspace{0mm}\end{eqnarray} then $\hat{X}+X_{\triangle}$ has the same mean and variance as those of $X\sim\emph{\mbox{PG}}(a,c)$, and the difference between their cumulant generating functions can be expressed as \vspace{0mm}\begin{equation} \ln \mathbb{E}[e^{sX}] - \ln \mathbb{E}[e^{s(\hat{X}+X_{\triangle})}] = \sum_{n=3}^\infty \frac{s^n }{n} \left[ \left( a\sum_{k=K}^\infty d_k^{-n}\right)- \frac{\mu_{\triangle}^2}{\sigma^2_{\triangle}} \left(\frac{\sigma^2_{\triangle}}{\mu_{\triangle}}\right)^{n} \right],\notag \vspace{0mm}\end{equation} where $d_k=2\pi^2(k-1/2)^2+c^2/2.$ \end{prop} \begin{prop}\label{prop:PG} If $X\sim\emph{\mbox{PG}}(a,c)$, then $\lim_{|c|\rightarrow \infty}X= 0$ and $\lim_{|c|\rightarrow \infty}|c|X= a/2$. \end{prop} \begin{prop}\label{prop:PG1} If $X\sim\emph{\mbox{PG}}(a,0)$, then $\mathbb{E}[X\,|\, a, 0]= a/4$ and $\emph{\mbox{var}}[X\,|\, a, 0]= a/24$. \end{prop} \begin{prop}\label{prop:PG1_1} If $X\sim\emph{\mbox{PG}}(a,c)$, then $\displaystyle\emph{\mbox{var}}[X\,|\, a, c] \ge \frac{a}{24}{\cosh^{-2}\left(\frac{|c|}{2}\right) } $, where the equality holds if and only if $c=0$. \end{prop} \begin{prop}\label{prop:PG2} $X\sim\emph{\mbox{PG}}(a,c)$ has a variance-to-mean ratio as \begin{align} \frac{\emph{\mbox{var}}[X\,|\, a, c] }{\mathbb{E}[X\,|\, a, c]} = \frac{1}{|c|^{2}} - \frac{1}{|c|\sinh(|c|)} \notag \end{align} and is always under-dispersed, since $ \emph{\mbox{var}}[X\,|\, a, c]\le\mathbb{E}[X\,|\, a, c]/6$, with equality if and only if $c=0$. \end{prop}
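For concreteness, the following minimal Python sketch (ours, for illustration only; the function name \texttt{sample\_pg\_approx} is hypothetical and this is not the authors' released code) implements the sampler of Proposition \ref{prop:PGtruncate}, using the numerically stable form of \eqref{eq:varPG} for the exact variance and falling back on the limits of Proposition \ref{prop:PG1} when $c$ is close to zero.
\begin{verbatim}
import numpy as np

def sample_pg_approx(a, c, K=6, rng=None):
    """Approximate draw of X ~ PG(a, c) per Proposition prop:PGtruncate:
    the K - 1 leading terms of the infinite gamma series plus one
    moment-matching gamma term, unbiased in both mean and variance."""
    rng = np.random.default_rng() if rng is None else rng
    c = abs(c)
    k = np.arange(1, K)                             # k = 1, ..., K - 1
    d = 2.0 * np.pi**2 * (k - 0.5)**2 + c**2 / 2.0  # d_k in the proposition
    x_hat = np.sum(rng.gamma(a, 1.0, size=K - 1) / d)
    if c > 1e-2:
        mean = a / (2.0 * c) * np.tanh(c / 2.0)     # eq. (PGmean)
        var = (a / (2.0 * c**3)                     # overflow-safe eq. (varPG)
               * (1.0 - np.exp(-2.0 * c) - 2.0 * c * np.exp(-c))
               / (1.0 + np.exp(-2.0 * c) + 2.0 * np.exp(-c)))
    else:
        mean, var = a / 4.0, a / 24.0               # c -> 0 limits, avoiding
                                                    # catastrophic cancellation
    mu = mean - a * np.sum(1.0 / d)                 # residual mean, > 0
    s2 = var - a * np.sum(1.0 / d**2)               # residual variance, > 0
    return x_hat + rng.gamma(mu**2 / s2, s2 / mu)   # Gamma(shape, scale)
\end{verbatim}
Setting \texttt{K=6} corresponds to the truncation level of six gamma random variables used for inference later in the appendix.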
\section{Proofs}\label{sec:proof} \begin{proof}[Proof for Definition \ref{thm:1}] For the hierarchical model in \eqref{eq:Softplus_model}, we have $P(y_i=0\,|\, \theta_i) = e^{-\theta_i}$. Further using the moment generating function of the gamma distribution, we have \vspace{0mm}\begin{equation} P(y_i =0 \,|\, \boldsymbol{x}_i, \betav) = \mathbb{E}_{\theta_i}[e^{-\theta_i}] = (1 + e^{\boldsymbol{x}_i'\betav})^{-1}. \notag \vspace{0mm}\end{equation} As $\lambda(\boldsymbol{x}_i) =-\ln[P(y_i =0 \,|\, \boldsymbol{x}_i, \betav)] $ by definition, we have $\lambda(\boldsymbol{x}_i) = \ln(1+e^{\boldsymbol{x}_i'\betav})$. \end{proof} \begin{proof}[Proof for Definition \ref{thm:CLR2CNB}] For the hierarchical model in \eqref{eq:CNB}, we have $P(y_i=0\,|\, \theta_i) = e^{-\theta_i} = \prod_{k=1}^\infty e^{-\theta_{ik}}$. Using the moment generating function of the gamma distribution, we have \vspace{0mm}\begin{equation} P(y_i =0 \,|\, \boldsymbol{x}_i, \{\betav_k\}_k) = \mathbb{E}_{\theta_{i}}\left[\prod_{k=1}^\infty e^{-\theta_{ik}}\right] = \prod_{k=1}^\infty \mathbb{E}_{\theta_{ik}}[e^{-\theta_{ik}}] = \prod_{k=1}^\infty (1 + e^{\boldsymbol{x}_i'\betav_k})^{-r_k}.\notag \vspace{0mm}\end{equation} As $\lambda(\boldsymbol{x}_i) = -\ln[P(y_i =0 \,|\, \boldsymbol{x}_i, \{\betav_k\}_k) ]$ by definition, we obtain \eqref{eq:sum_softplus_reg}. \end{proof} \begin{proof}[Proof for Definition \ref{thm:stack-softplus}] For the hierarchical model in \eqref{eq:BerPo_recursive_softplus_reg_model}, we have $P(y_i=0\,|\, \theta^{(1)}_i) = e^{-\theta_i^{(1)}} $. Using the moment generating function of the gamma distribution, we have \vspace{0mm}\begin{equation} P(y_i =0 \,|\, \boldsymbol{x}_i, \betav^{(2)},\theta_i^{(2)}) = \mathbb{E}_{\theta_{i}^{(1)}}\left[e^{-\theta_{i}^{(1)}}\right] = (1 + e^{\boldsymbol{x}_i'\betav^{(2)}})^{-\theta_i^{(2)}} = e^{-\theta_i^{(2)} \ln(1+e^{\boldsymbol{x}_i'\betav^{(2)}}) }.\notag \vspace{0mm}\end{equation} Marginalizing out $\theta_i^{(2)}$ leads to \vspace{0mm}\begin{equation} P(y_i =0 \,|\, \boldsymbol{x}_i, \betav^{(2:3)},\theta_i^{(3)}) = \mathbb{E}_{\theta_{i}^{(2)}}\left[e^{-\theta_i^{(2)} \ln(1+e^{\boldsymbol{x}_i'\betav^{(2)}})}\right] = e^{-\theta_i^{(3)} \ln[1+e^{\boldsymbol{x}_i'\betav^{(3)}}\ln(1+e^{\boldsymbol{x}_i'\betav^{(2)}})] }.\notag \vspace{0mm}\end{equation} Further marginalizing out $\theta_i^{(3)},\ldots,\theta_i^{(T)}$ and with $\lambda(\boldsymbol{x}_i) =-\ln[P(y_i =0 \,|\, \boldsymbol{x}_i, r,\betav^{(2:T+1)})] $ by definition, we obtain \eqref{eq:recurssive_softplus_reg}. \end{proof} \begin{proof}[Proof for Definition \ref{thm:SS-softplus}] For the hierarchical model in \eqref{eq:DICLR_model}, we have $P(y_i=0\,|\, \{\theta^{(1)}_{ik}\}_k) = e^{-\sum_{k=1}^\infty \theta_{ik}^{(1)}} $. Using the moment generating function of the gamma distribution, we have \vspace{0mm}\begin{equation} P(y_i =0 \,|\, \boldsymbol{x}_i, \{\betav_k^{(2)},\theta_{ik}^{(2)}\}_k) = \prod_{k=1}^\infty e^{-\theta_{ik}^{(2)} \ln\big(1+e^{\boldsymbol{x}_i'\betav_k^{(2)}}\big) }.\notag \vspace{0mm}\end{equation} Further marginalizing out $\{\theta_{ik}^{(2)}\}_k,\ldots,\{\theta_{ik}^{({T})}\}_k$ and by definition with $\lambda(\boldsymbol{x}_i) =-\ln[P(y_i =0 \,|\, \boldsymbol{x}_i, \{r_k,\betav_k^{(2:{T}+1)}\}_k) ]$, we obtain \eqref{eq:SRS_regression}. \end{proof} \begin{proof}[Proof of Proposition \ref{lem:finite}] By construction, the infinite product is at most one. We further need to ensure that it does not degenerate to zero. Using the L\'evy-Khintchine theorem \citep{kallenberg2006foundations}, we have \begin{align} -\ln\left\{\mathbb{E}_G\left[e^{{-\sum_{k=1}^\infty r_k \ln[1+\exp(\boldsymbol{x}_i'\betav_k)]}}\right]\right\}&=\int_{\mathbb{R}_+\times\Omega} \left[ 1-\left(\frac{1}{1+e^{\boldsymbol{x}_i'\betav}}\right)^{r} \right] \nu(drd\betav),\notag \end{align} where $\nu(drd\betav)=r^{-1}e^{-cr}drG_0(d\betav)$ is the L\'evy measure of the gamma process.
Since $1-e^{-c x}\le c x$ for all $x\ge 0$ whenever $c\ge 0$, the right-hand-side term of the above equation is bounded above by \begin{align} &\int_{\mathbb{R}_+\times\Omega} r\ln[1+e^{\boldsymbol{x}_i'\betav}] \nu(drd\betav)= \frac{\gamma_0}{c} \int_{\Omega} \ln[1+e^{\boldsymbol{x}_i'\betav}] g_0(d\betav).\label{eq:normalint} \end{align} Since $e^{e^x} = 1+e^x + \sum_{n=2}^\infty \frac{e^{nx}}{n!} \ge 1+e^x,$ we have \vspace{0mm}\begin{equation} \ln(1+e^x)\le e^{x}, \vspace{0mm}\end{equation} where the equality holds only in the limit $x\rightarrow-\infty$. Assuming $\betav\sim\mathcal{N}(0,\Sigmamat)$, we have \vspace{0mm}\begin{equation} \int_{\Omega} \ln[1+e^{\boldsymbol{x}_i'\betav}] g_0(d\betav) \le \int_{\Omega} e^{\boldsymbol{x}'_i\betav} \mathcal{N}(\betav;0,\Sigmamat)d\betav = e^{\frac{1}{2}\boldsymbol{x}_i'\Sigmamat\boldsymbol{x}_i}. \notag \vspace{0mm}\end{equation} Thus the integral on the right-hand side of (\ref{eq:normalint}) is finite and hence the infinite product $\prod_{k=1}^\infty\left[{1+e^{\boldsymbol{x}_i'\betav_k}}\right]^{-r_k}$ has a finite expectation that is greater than zero. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:sum_polytope}] Since $\sum_{k'\neq k}r_{k'} \ln(1+e^{\boldsymbol{x}_i' \betav_{k'}}) \ge 0$ a.s., if \eqref{eq:sum_ineuqality} is true, then $ r_k \ln(1+e^{\boldsymbol{x}_i' \betav_{k}}) \le -\ln(1-p_0) $ a.s. for all $k\in\{1,2,\ldots\}$. Thus if \eqref{eq:sum_ineuqality} is true, then \eqref{eq:convex_polytope} is true a.s., which means the set of solutions to \eqref{eq:sum_ineuqality} is included in the set of solutions to \eqref{eq:convex_polytope}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:sum_polytope}] Assuming $\boldsymbol{x}_i$ violates at least the $k$th inequality, which means $ \boldsymbol{x}_i' \betav_k > \ln\big[(1-p_0)^{-\frac{1}{r_k}}-1\big], $ then we have $$\lambda(\boldsymbol{x}_i) =r_k\ln(1+e^{\boldsymbol{x}_i' \betav_{k}}) + \sum_{k'\neq k}r_{k'} \ln(1+e^{\boldsymbol{x}_i' \betav_{k'}}) \ge r_k\ln(1+e^{\boldsymbol{x}_i' \betav_{k}}) > -\ln(1-p_0) $$ and hence $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}> p_0$ and $P(y_i=0\,|\, \boldsymbol{x}_i)\le 1- p_0$. \end{proof} \begin{proof}[Proof of Proposition \ref{lem:finite1}] By construction, the infinite product is at most one. We further need to ensure that it does not degenerate to zero.
Using the L\'evy-Khintchine theorem \citep{kallenberg2006foundations} and $1-e^{-c x}\le c x$ for all $x\ge 0$ if $c\ge 0$, we have \footnotesize{ \begin{align} &-\ln\left\{\mathbb{E}_G\exp \left[ { - \sum_{k=1}^\infty r_k \ln\left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}_k}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}_k}\ln\bigg[1+\ldots \ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}_k}\Big)\bigg]\Bigg\}\right) }\right]\right\}\notag\\ &=\int \left[ 1-\left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}}\ln\bigg[1+\ldots e^{\boldsymbol{x}_i'\betav^{(3)}}\ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}}\Big)\bigg]\Bigg\}\right)^{-r} \right] \nu(drd\betav^{({T}+1:2)})\notag\\ &\le \int r\ln \left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}}\ln\bigg[1+\ldots e^{\boldsymbol{x}_i'\betav^{(3)}}\ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}}\Big)\bigg]\Bigg\}\right) \nu(drd\betav^{({T}+1:2)}) \label{eq:Levy_Khin} \end{align}}\normalsize Since $ \ln(1+e^x)\le e^{x}, $ we have \begin{align} ~&\ln \left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}}\ln\bigg[1+\ldots e^{\boldsymbol{x}_i'\betav^{(3)}}\ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}}\Big)\bigg]\Bigg\}\right)\notag\\ \le~& \ln \left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}}\ln\bigg[1+\ldots e^{\boldsymbol{x}_i'\betav^{(4)}}\ln\Big(1+e^{\boldsymbol{x}_i'(\betav^{(3)}+\betav^{(2)})}\Big)\bigg]\Bigg\}\right)\notag\\ \le~& \ln \left(1+e^{\boldsymbol{x}_i'\betav^{(\star)}}\right) \le e^{\boldsymbol{x}_i'\betav^{(\star)}}\notag \end{align} where $\betav^{(\star)}:=\betav^{({T}+1)}+\betav^{({T})}+\ldots+\betav^{(2)}$. Assuming $\betav^{(t)} \sim\mathcal{N}(0,\Sigmamat_t)$, the right-hand side of \eqref{eq:Levy_Khin} is bounded above by $ \int re^{\boldsymbol{x}_i'\betav^{(\star)}} \nu(drd\betav^{(\star)}) = \gamma_0 c^{-1}e^{\frac{1}{2} \boldsymbol{x}_i' (\sum_{t=2}^{T+1} \Sigmamat_t)\boldsymbol{x}_i }. \notag $ Therefore, the integral on the right-hand side of (\ref{eq:Levy_Khin}) is finite and hence the infinite product in Proposition \ref{lem:finite1} has a finite expectation that is greater than zero under the gamma process. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:union_polytope}] Since $$\sum_{k'\neq k} r_{k'} \ln\left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}_{k'}}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}_{k'}}\ln\bigg[1+\ldots \ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}_{k'}}\Big)\bigg]\Bigg\}\right) \ge 0$$ a.s., if \eqref{eq:Union_convex_polytope} is true for at least one $k\in\{1,2,\ldots\}$, then \eqref{eq:SS-softplus_ineuqality} is true a.s., which means the set of solutions to $\eqref{eq:SS-softplus_ineuqality}$ encompasses $\mathcal{D}_{\star}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:union}] Assume $\boldsymbol{x}_i$ satisfies at least the $k$th inequality, which means \eqref{eq:Union_convex_polytope} is true, then $$\small \lambda(\boldsymbol{x}_i)\ge r_k \ln\left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}_k}\ln\Bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}_k}\ln\bigg[1+\ldots \ln\Big(1+e^{\boldsymbol{x}_i'\betav^{(2)}_k}\Big)\bigg]\Bigg\}\right)> -\ln(1-p_0) $$ and hence $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}> p_0$ and $P(y_i=0\,|\, \boldsymbol{x}_i)\le 1- p_0$. \end{proof} \begin{proof}[Proof of Theorem \ref{cor:PGBN}] By construction, (\ref{eq:deepPFA_aug1}) is true for $t=1$.
Suppose (\ref{eq:deepPFA_aug1}) is also true for $t\ge 2$, then we can augment each $m_{ik}^{(t)}$ under its compound Poisson representation as \vspace{0mm}\begin{equation} m_{ik}^{(t)} \,|\, m_{ik}^{(t+1)}\sim \mbox{SumLog}(m_{ik}^{(t+1)},~p_{ik}^{(t+1)}),~~ m_{ik}^{(t+1)}\sim\mbox{Pois}\left(\theta_{ik}^{(t+1)} q_{ik}^{(t+1)}\right), \label{eq:CompP} \vspace{0mm}\end{equation} where the joint distribution of $m_{ik}^{(t)}$ and $m_{ik}^{(t+1)}$, according to Theorem 1 of \citet{NBP2012}, is the same as that in \vspace{0mm}\begin{equation} m_{ik}^{(t+1)} \,|\, m_{ik}^{(t)} \sim\mbox{CRT}(m_{ik}^{(t)}, ~\theta_{ik}^{(t+1)}),~~m_{ik}^{(t)}\sim{\mbox{NB}}(\theta_{ik}^{(t+1)},~ p_{ik}^{(t+1)}) ,\notag \vspace{0mm}\end{equation} where CRT refers to the Chinese restaurant table distribution described in \citet{NBP2012}. Marginalizing $\theta_{ik}^{(t+1)}$ from the Poisson distribution in \eqref{eq:CompP} leads to $m_{ik}^{(t+1)}\sim{\mbox{NB}}(\theta_{ik}^{(t+2)},$ $p_{ik}^{(t+2)}) $. Thus if (\ref{eq:deepPFA_aug1}) is true for layer $t$, then it is also true for layer $t+1$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:PGtruncate}] Since $\hat{X}$ and $X_{\triangle}$ are independent of each other, with \eqref{eq:PGmean} and \eqref{eq:varPG}, we have $ \mathbb{E}[X] = \mathbb{E}[\hat{X}] + \mu_{\triangle} = \mathbb{E}[\hat{X} + X_{\triangle}]$ and $ \mbox{var}[X] =\mbox{var}[\hat{X}] + \sigma^2_{\triangle} = \mbox{var}[\hat{X} + X_{\triangle} ]$. Using Taylor series expansion, we have \begin{align} \ln \mathbb{E}[e^{sX}] &= -\sum_{k=1}^\infty a\ln(1-sd_k^{-1}) = a \sum_{k=1}^\infty \sum_{n=1}^\infty \frac{s^n d_k^{-n}}{n} = s \mathbb{E}[X] + s^2 \frac{\mbox{var}[X] }{2}+a \sum_{n=3}^\infty \sum_{k=1}^\infty \frac{s^n d_k^{-n}}{n},\notag \end{align} \begin{align} \ln \mathbb{E}[e^{s\hat{X}}] &= -\sum_{k=1}^{K-1} a\ln(1-sd_k^{-1}) = s \mathbb{E}[\hat{X}] + s^2 \frac{\mbox{var}[\hat{X}]}{2} +a \sum_{n=3}^\infty \sum_{k=1}^{K-1} \frac{s^n d_k^{-n}}{n},\notag\\ \ln \mathbb{E}[e^{sX_{\triangle}}] &= - \frac{\mu_{\triangle}^2}{\sigma^2_{\triangle}} \ln\left(1-s \frac{\sigma^2_{\triangle}}{\mu_{\triangle}} \right) = \frac{\mu_{\triangle}^2}{\sigma^2_{\triangle}}\sum_{n=1}^\infty \frac{s^n (\frac{\sigma^2_{\triangle}}{\mu_{\triangle}})^{n}}{n} = s\mu_{\triangle} + s^2 \frac{\sigma^2_{\triangle}}{2} +\frac{\mu_{\triangle}^2}{\sigma^2_{\triangle}}\sum_{n=3}^\infty \frac{s^n (\frac{\sigma^2_{\triangle}}{\mu_{\triangle}})^{n}}{n}.\notag \end{align} The proof is completed with $\ln\mathbb{E}[e^{sX}] - \ln \mathbb{E}[e^{s(\hat{X}+X_{\triangle})}] =\ln \mathbb{E}[e^{sX}] - \ln \mathbb{E}[e^{s\hat{X}}] - \ln \mathbb{E}[e^{sX_{\triangle}}] .$ \end{proof} \begin{proof}[Proof of Proposition \ref{prop:PG}] For the Polya-Gamma random variable $X\sim\mbox{PG}(a,c)$, since $ \mathbb{E}[X] = \frac{a}{2|c|}\tanh\big(\frac{|c|}{2}\big) $ and $\lim_{|c|\rightarrow \infty}\tanh\big(\frac{|c|}{2}\big)=1$, we have $\lim_{|c|\rightarrow \infty}\mathbb{E}[X]=0$. With the expression of $\mbox{var}[X]$ shown in \eqref{eq:varPG}, we have $\lim_{|c|\rightarrow \infty}\mbox{var}[X]=0$. Therefore, we have $X\rightarrow 0$ as $|c|\rightarrow \infty$. Since $\lim_{|c|\rightarrow \infty} \mathbb{E}[|c|X] =a/2$ and $$\mbox{var}[|c|X] = (|c|)^2 \mbox{var}[X] = \frac{a}{2|c|} \frac{{\sinh(|c|)-|c|}}{\cosh(|c|)+1} = \frac{a}{2} \frac{{\frac{\tanh(|c|)}{|c|}-\cosh^{-1}(|c|)}}{1+ \cosh^{-1}(|c|)},$$ we have $\lim_{|c|\rightarrow \infty}\mbox{var}[|c|X] = 0$ and hence $\lim_{|c|\rightarrow \infty} |c|X = a/2$.
\end{proof} \begin{proof}[Proof of Proposition \ref{prop:PG1}] Using \eqref{eq:PGmean}, we have \vspace{0mm}\begin{equation} \mathbb{E}[X\,|\, a, 0] = \lim_{c\rightarrow 0} \mathbb{E}[X\,|\, a, c] = \lim_{c\rightarrow 0 }\frac{a}{2|c|}\tanh\left(\frac{|c|}{2}\right) = \lim_{c\rightarrow 0 } \frac{a}{2} \frac{e^{|c|}-1}{|c|}\frac{1}{e^{|c|}+1} = \frac{a}{4}.\notag \vspace{0mm}\end{equation} Using \eqref{eq:varPG}, we have \newline $\displaystyle ~~~~~~~\mbox{var}[X\,|\, a, 0] = \lim_{c\rightarrow 0} \mbox{var}[X\,|\, a, c] = \lim_{c\rightarrow 0} \frac{a\cosh^{-2}\left(\frac{|c|}{2}\right) }{4} \left(\frac{1}{6} + \sum_{n=1}^\infty \frac{|c|^{2n}}{(2n+3)!}\right) = \frac{a}{24}.\notag $ \end{proof} \begin{proof}[Proof of Proposition \ref{prop:PG1_1}] Using \eqref{eq:varPG}, we have $ \mbox{var}[X\,|\, a, c] = \frac{a\cosh^{-2}\left(\frac{|c|}{2}\right) }{4} \left(\frac{1}{6} + \sum_{n=1}^\infty \frac{|c|^{2n}}{(2n+3)!}\right) \ge \frac{a}{24} \cosh^{-2}\left(\frac{|c|}{2}\right) $, where the equality holds if and only if $c=0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:PG2}] With \eqref{eq:PGmean} and \eqref{eq:varPG}, the variance-to-mean ratio is \vspace{0mm}\begin{equation}\small \frac{\mbox{var}[X\,|\, a, c] }{\mathbb{E}[X\,|\, a, c]} = \frac{1}{|c|^{2}} - \frac{1}{|c|\sinh(|c|)} = \frac{\sinh(|c|)-|c|}{|c|^2\sinh(|c|)} =\left[{\frac{1}{6}\displaystyle\sum_{n=0}^{\infty} \frac{|c|^{2n+3} \, 3!}{(2n+3)!}}\right]\left/\left[{\displaystyle \sum_{n=0}^{\infty} \frac{|c|^{2n+3}}{(2n+1)!}}\right]\right..\notag \vspace{0mm}\end{equation} When $c=0$, with Proposition \ref{prop:PG1}, we have $ {\mbox{var}[X\,|\, a, c] } \big / {\mathbb{E}[X\,|\, a, c]} =1/6$. When $c\neq 0$, since $3!/(2n+3)! <1/(2n+1)!$ for all $n\in\{1,2,\ldots\}$, we have $$\sum_{n=0}^{\infty} \frac{|c|^{2n+3} \, 3!}{(2n+3)!} < \sum_{n=0}^{\infty} \frac{|c|^{2n+3}}{(2n+1)!}$$ and hence $ {\mbox{var}[X\,|\, a, c] } \big / {\mathbb{E}[X\,|\, a, c]} <1/6$. \end{proof} \section{Gibbs sampling for sum-stack-softplus regression}\label{app:sampling} For SS-softplus regression, Gibbs sampling via data augmentation and marginalization proceeds as follows.\\ \textbf{\emph{Sample $m_{i}$.}} Denote $\theta_{i\boldsymbol{\cdot}} = \sum_{k=1}^K \theta^{(1)}_{ik}$\,. Since $m_{i}=0$ a.s. given $y_{i}=0$ and $m_{i}\ge 1$ given $y_{i}=1$, and in the prior we have $m_{i}\sim\mbox{Pois}(\theta_{i\boldsymbol{\cdot}})$, following the inference for the Bernoulli-Poisson link in \citet{EPM_AISTATS2015}, we can sample $m_{i}$ as \vspace{0mm}\begin{equation} (m_{i}\,|\,-)\sim y_{i}\mbox{Pois}_+\left(\theta_{i\boldsymbol{\cdot}}\right),\label{eq:m_i} \vspace{0mm}\end{equation} where $m \sim \mbox{Pois}_+(\theta)$ denotes a draw from the truncated Poisson distribution, with PMF $f_M(m\,|\,y_i=1,\theta) = (1-e^{-\theta})^{-1}{\theta^m e^{-\theta}}/{m!}$, where $m\in\{1,2,\ldots\}$. To draw truncated Poisson random variables, we use an efficient rejection sampler described in \cite{EPM_AISTATS2015}, whose acceptance rate is at least 63.2\%, with the minimum attained when the Poisson rate equals one.
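As a minimal illustration (a sketch, not the optimized rejection sampler of the cited reference; the function name \texttt{sample\_pois\_plus} is ours), $\mbox{Pois}_+(\theta)$ can be simulated as follows: for $\theta\ge 1$, simply rejecting zero-valued Poisson draws already accepts with probability $1-e^{-\theta}\ge 63.2\%$, while for smaller $\theta$ an exact inverse-CDF search is cheap.
\begin{verbatim}
import numpy as np

def sample_pois_plus(theta, rng=None):
    """Draw m ~ Pois_+(theta), the Poisson distribution truncated to
    m >= 1, with PMF (1 - e^{-theta})^{-1} theta^m e^{-theta} / m!."""
    rng = np.random.default_rng() if rng is None else rng
    if theta >= 1.0:
        # Reject zeros: acceptance probability 1 - e^{-theta} >= 63.2%.
        while True:
            m = rng.poisson(theta)
            if m >= 1:
                return m
    # Exact inverse-CDF search; needs few terms when theta < 1.
    u = rng.uniform() * (1.0 - np.exp(-theta))  # CDF mass, m = 0 removed
    m, pmf, cdf = 0, np.exp(-theta), 0.0
    while cdf < u:
        m += 1
        pmf *= theta / m   # Poisson PMF recursion p(m) = p(m-1) theta / m
        cdf += pmf
    return m
\end{verbatim}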
\\ \textbf{\emph{Sample $m^{(1)}_{ik}$.}} Since letting $m_{i}=\sum_{k=1}^K m^{(1)}_{ik},~m^{(1)}_{ik}\sim\mbox{Pois}(\theta^{(1)}_{ik})$ is equivalent in distribution to letting $(m^{(1)}_{i1},\ldots,m^{(1)}_{iK})\,|\,m_{i}\sim\mbox{Mult}\left(m_{i},{\theta^{(1)}_{i1}}/{\theta_{i\boldsymbol{\cdot}}},\ldots,{\theta^{(1)}_{iK}}/{\theta_{i\boldsymbol{\cdot}}}\right),~m_{i}\sim \mbox{Pois}\left( \theta_{i\boldsymbol{\cdot}}\right)$, similar to \citet{Dunson05bayesianlatent} and \citet{BNBP_PFA_AISTATS2012}, we sample $m^{(1)}_{ik}$ as \vspace{0mm}\begin{equation} (m^{(1)}_{i1},\ldots,m^{(1)}_{iK}\,|\,-)\sim\mbox{Mult}\left(m_{i},{\theta^{(1)}_{i1}}/{\theta_{i\boldsymbol{\cdot}}},\ldots,{\theta^{(1)}_{iK}}/{\theta_{i\boldsymbol{\cdot}}}\right).\label{eq:m_i_1} \vspace{0mm}\end{equation} \textbf{\emph{Sample $m^{(t)}_{ik}$ for $t\ge 2$}.} As in Theorem \ref{cor:PGBN}'s proof, we sample $m^{(t)}_{ik}$ for $t=2,\ldots,{T}+1$ as \vspace{0mm}\begin{equation} (m_{ik}^{(t)}\,|\, m_{ik}^{(t-1)},\theta_{ik}^{(t)})\sim{\mbox{CRT}}\big(m_{ik}^{(t-1)},~\theta_{ik}^{(t)}\big).\label{eq:m_i_t} \vspace{0mm}\end{equation} \textbf{\emph{Sample $\betav^{(t)}_{k}$.}} Using data augmentation for NB regression, as in \citet{LGNB_ICML2012} and \citet{polson2013bayesian}, we denote $\omega^{(t)}_{ik}$ as a random variable drawn from the Polya-Gamma ($\mbox{PG}$) distribution \citep{LogitPolyGamma} as $ \omega^{(t)}_{ik}\sim\mbox{PG}\left(m^{(t-1)}_{ik}+\theta^{(t)}_{ik},~0 \right), \notag $ under which we have $\mathbb{E}_{\omega^{(t)}_{ik}} \left[\exp(-\omega^{(t)}_{ik}(\psi^{(t)}_{ik})^2/2)\right] = {\cosh^{-(m^{(t-1)}_{ik}+\theta^{(t)}_{ik})}(\psi_{ik}^{(t)}/2)}$. Thus the likelihood of $\psi_{ik}^{(t)}:= \boldsymbol{x}_i'\betav_{k}^{(t)} + \ln {q}_{ik}^{(t-1)}= \ln\big(e^{q_{ik}^{(t)}}-1\big) $ in (\ref{eq:deepPFA_aug1}) can be expressed as \begin{align} \mathcal{L}(\psi^{(t)}_{ik})&\propto \frac{{(e^{\psi^{(t)}_{ik}})}^{m^{(t-1)}_{ik}}}{{(1+e^{\psi^{(t)}_{ik}})}^{m^{(t-1)}_{ik}+\theta^{(t)}_{ik}}} = \frac{2^{-(m^{(t-1)}_{ik}+\theta^{(t)}_{ik})}\exp({\frac{m^{(t-1)}_{ik}-\theta^{(t)}_{ik}}{2}\psi^{(t)}_{ik}})}{\cosh^{m^{(t-1)}_{ik}+\theta^{(t)}_{ik}}(\psi^{(t)}_{ik}/2)}\nonumber\\ &\propto \exp\left({\frac{m^{(t-1)}_{ik}-\theta^{(t)}_{ik}}{2}\psi^{(t)}_{ik}}\right)\mathbb{E}_{\omega^{(t)}_{ik}}\left[\exp[-\omega^{(t)}_{ik}(\psi^{(t)}_{ik})^2/2]\right]. \notag \end{align} Combining the likelihood $ \mathcal{L}(\psi^{(t)}_{ik}, \omega^{(t)}_{ik})\propto \exp\left({\frac{m^{(t-1)}_{ik}-\theta^{(t)}_{ik}}{2}\psi^{(t)}_{ik}}\right)\exp[-\omega^{(t)}_{ik}(\psi^{(t)}_{ik})^2/2] $ and the prior, we sample auxiliary Polya-Gamma random variables $\omega^{(t)}_{ik}$ as \vspace{0mm}\begin{equation} (\omega^{(t)}_{ik}\,|\,-)\sim\mbox{PG}\left(m^{(t-1)}_{ik}+\theta^{(t)}_{ik},~\boldsymbol{x}_i'\betav_{k}^{(t)} + \ln{q}_{ik}^{(t-1)} \right),\label{eq:omega_i} \vspace{0mm}\end{equation} conditioning on which we sample $\betav^{(t)}_{k}$ as \vspace{0mm}\begin{eqnarray} &(\betav^{(t)}_{k}\,|\,-)\sim\mathcal{N}(\muv^{(t)}_{k}, \Sigmamat^{(t)}_{k}),~~~\displaystyle\Sigmamat^{(t)}_{k} = \left( \mbox{diag}(\alpha_{0tk},\ldots,\alpha_{Vtk}) + \sum \nolimits_{i} \omega^{(t)}_{ik}\boldsymbol{x}_i\boldsymbol{x}_i' \right)^{-1}, \notag\\ &\displaystyle \muv^{(t)}_{k} = \Sigmamat^{(t)}_{k}\left[ \sum \nolimits_{i} \left( -\omega_{ik}^{(t)}\ln {q}_{ik}^{(t-1)} + \frac{m^{(t-1)}_{ik}-\theta^{(t)}_{ik}}{2}\right)\boldsymbol{x}_i\right]. \label{eq:beta} \vspace{0mm}\end{eqnarray}
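In code, one such full conditional update may be sketched as follows (our illustrative Python, assuming \texttt{numpy} imported as \texttt{np} and the hypothetical \texttt{sample\_pg\_approx} sketch given earlier; the function name \texttt{update\_beta} is also ours).
\begin{verbatim}
def update_beta(X, beta, m, theta, lnq, alpha, rng):
    """One Gibbs update of a regression coefficient vector per
    eqs. (omega_i) and (beta): draw the Polya-Gamma auxiliary variables,
    then beta from its Gaussian full conditional. X is the n x V
    covariate matrix; m, theta, lnq hold m_{ik}^{(t-1)}, theta_{ik}^{(t)}
    and ln q_{ik}^{(t-1)} for all i; alpha holds the precisions."""
    psi = X @ beta + lnq
    omega = np.array([sample_pg_approx(m[i] + theta[i], psi[i], rng=rng)
                      for i in range(len(psi))])
    # Sigma = (diag(alpha) + sum_i omega_i x_i x_i')^{-1}
    Sigma = np.linalg.inv(np.diag(alpha) + X.T @ (omega[:, None] * X))
    # mu = Sigma * sum_i (-omega_i ln q_i + (m_i - theta_i)/2) x_i
    mu = Sigma @ (X.T @ (-omega * lnq + 0.5 * (m - theta)))
    return rng.multivariate_normal(mu, Sigma)
\end{verbatim}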
Once we update $\betav^{(t)}_k$, we calculate $q^{(t)}_{ik}$ using \eqref{eq:lambda}. To draw Polya-Gamma random variables, we use the approximate sampler described in Proposition \ref{prop:PGtruncate}, which is unbiased in both its mean and its variance. The approximate sampler is found to be highly accurate even for a truncation level as small as one, for various combinations of the two Polya-Gamma parameters. Unless stated otherwise, we set the truncation level of drawing a Polya-Gamma random variable as six, which means the summation of six independent gamma random variables is used to approximate a Polya-Gamma random variable. \\ \textbf{\emph{Sample $\theta^{(t)}_{ik}$.}} Using the gamma-Poisson conjugacy, we sample $\tau^{(t)}_{ik} := \theta^{(t)}_{ik}q_{ik}^{(t)}$ as \vspace{0mm}\begin{equation} (\tau^{(t)}_{ik}\,|\,-)\sim\mbox{Gamma}\left(\theta^{(t+1)}_{ik}+m^{(t)}_{ik},\, 1-e^{-q_{ik}^{(t+1)}} \right).\label{eq:theta} \vspace{0mm}\end{equation} \textbf{\emph{Sample $\alpha_{vtk}$.}} We sample $\alpha_{vtk}$ as \vspace{0mm}\begin{equation} (\alpha_{vtk} \,|\,-)\sim\mbox{Gamma}\left(e_0+ \frac{1}{2},\frac{1}{f_0+\frac{1}{2}(\beta^{(t)}_{vk})^2}\right).\label{eq:alpha} \vspace{0mm}\end{equation} \textbf{\emph{Sample $c_0$.}} We sample $c_0$ as \vspace{0mm}\begin{equation} (c_0\,|\,-)\sim\mbox{Gamma}\left(e_0+\gamma_0,\frac{1}{f_0+ \sum_k r_k} \right).\label{eq:c0} \vspace{0mm}\end{equation} \textbf{\emph{Sample $\gamma_0$ and $r_{k}$.}} Let us denote $ \tilde{p}_{k} := {\sum \nolimits_{i}q^{({T}+1)}_{ik}}\Big/{\big(c_0+ \sum \nolimits_{i}q^{({T}+1)}_{ik}\big)}.$ Given $l_{\boldsymbol{\cdot} k} = \sum_{i }m^{({T}+1)}_{ik}$, we first sample \vspace{0mm}\begin{equation} (\tilde{l}_{k}\,|\,-) \sim\mbox{CRT}(l_{\boldsymbol{\cdot} k} , \gamma_0/K). \vspace{0mm}\end{equation} With these latent counts, we then sample $\gamma_0$ and $r_{k}$ as \begin{align} &(\gamma_0 \,|\, -) \sim\mbox{Gamma}\left(a_0+{\tilde{l}}_{\boldsymbol{\cdot}},\,\frac{1}{b_0-\frac{1}{K}\sum_{k}\ln(1-\tilde{p}_{k})} \right), \notag\\ &(r_{k}\,|\,-)\sim\mbox{Gamma}\left(\frac{\gamma_0}{K}+ l_{\boldsymbol{\cdot} k}, \, \frac{1}{c_0 +\sum_i q^{({T}+1)}_{ik} }\label{eq:r_k} \right). \end{align}
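The CRT draws in \eqref{eq:m_i_t} and the draw of $\tilde{l}_{k}$ above can be simulated as a sum of independent Bernoulli random variables, following Theorem 1 of \citet{NBP2012}; a minimal sketch (the function name \texttt{sample\_crt} is ours) is given below.
\begin{verbatim}
import numpy as np

def sample_crt(m, r, rng):
    """Draw l ~ CRT(m, r), the number of tables occupied by m customers
    in a Chinese restaurant process with concentration r:
    l = sum_{j=1}^{m} Bernoulli(r / (r + j - 1))."""
    if m == 0:
        return 0
    j = np.arange(1, m + 1)
    return int(rng.binomial(1, r / (r + j - 1)).sum())
\end{verbatim}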
\subsection{Numerical stability} For stack-softplus and SS-softplus regressions with $T>1$, if for some data point $\boldsymbol{x}_i$, the inner product $\boldsymbol{x}_i'\betav_k^{(t)}$ takes such a large negative number that $e^{\boldsymbol{x}_i'\betav_k^{(t)}}=0$ under a finite numerical precision, then ${q}_{ik}^{(t)}=0$ and $\ln {q}_{ik}^{(t)} = -\infty$. For example, in both 64 bit Matlab (version R2015a) and 64 bit R (version 3.0.2), if $\boldsymbol{x}_i'\betav_k^{(t)}\le-745.2$, then $e^{\boldsymbol{x}_i'\betav_k^{(t)}}=0$ and hence ${q}_{ik}^{(t)}=0$, $p_{ik}^{(t)}=0$, and $\ln {q}_{ik}^{(t)} = -\infty$. If ${q}_{ik}^{(t)}=0$, then with (\ref{eq:omega_i}), we let $\omega^{(t+1)}_{ik} = 0$, and with Proposition \ref{prop:PG}, we let $$-\omega^{(t+1)}_{ik}\ln {q}_{ik}^{(t)} + \frac{m_{ik}^{(t)}- \theta_{ik}^{(t+1)}}{2} = \frac{m_{ik}^{(t)}+\theta_{ik}^{(t+1)}}{2} + \frac{m_{ik}^{(t)}- \theta_{ik}^{(t+1)}}{2} =m_{ik}^{(t)}, $$ and with \eqref{eq:lambda}, we let ${q}_{ik}^{(\tilde{t})}=0$ for all $\tilde{t}\ge t$. Note that if ${q}_{ik}^{(t)}=0$, drawing $\omega_{ik}^{(\tilde{t})}$ for $\tilde{t}\in\{ t+1,\ldots,T+1\}$ becomes unnecessary. To avoid the numerical issue of calculating $\theta_{ik}^{(t)}$ with $\tau_{ik}^{(t)}/{q}_{ik}^{(t)}$ when ${q}_{ik}^{(t)}=0$, we let \vspace{0mm}\begin{equation} \theta_{ik}^{(t)} ={ \tau_{ik}^{(t)}}\big/{\max\big\{\epsilon,{q}_{ik}^{(t)}\big\}}, \vspace{0mm}\end{equation} where we set $\epsilon=10^{-10}$ for the illustrations and $\epsilon=10^{-6}$ to produce the results in the tables. To ensure that the covariance matrix for $\betav_k^{(t)}$ is positive definite, we bound $\alpha_{vtk}$ from below by $10^{-3}$. \subsection{The propagation of latent counts across layers}\label{app:T} As the number of tables occupied by the customers is of the same order as the logarithm of the number of customers in a Chinese restaurant process, $m^{(t+1)}_{ik}$ in \eqref{eq:ICNBE_finite_1} is of the same order as $\ln\big( m^{(t)}_{ik}\big)$ and hence often quickly decreases as $t$ increases, especially when $t$ is small. In addition, since $m_{ik}^{(t+1)}\le m_{ik}^{(t)}$ almost surely (a.s.), $m_{ik}^{(t)}=0$ a.s. if $m_{ik}^{(1)}=0$, $m_{ik}^{(t)}\ge 1$ a.s. if $m_{ik}^{(1)}\ge 1$, and $m_{i} \ge 1$ a.s. if $y_i=1$, we have the following two corollaries. \begin{cor}\label{cor:mono1} The latent count $m_{\boldsymbol{\cdot} k}^{(t)} = \sum_{i} m_{ik}^{(t)}$ monotonically decreases as $t $ increases and $ m_{\boldsymbol{\cdot} k}^{(t)}\ge \sum_{i}\delta(m_{ik}^{(1)}\ge 1). $ \end{cor} \begin{cor}\label{cor:mono2} The latent count $ m_{\boldsymbol{\cdot} \boldsymbol{\cdot}}^{(t)} = \sum_{k} m_{\boldsymbol{\cdot} k}^{(t)}$ monotonically decreases as $t $ increases and $ m_{\boldsymbol{\cdot} \boldsymbol{\cdot}}^{(t)} \ge \sum_{i} \delta(y_i=1). $ \end{cor} With Corollary \ref{cor:mono2}, one may consider using the values of $m_{\boldsymbol{\cdot}\cdotv}^{(t)}/\sum_{i} \delta(y_i=1)$ to decide whether $T$, the depth of the gamma belief network used in SS-softplus regression, needs to be increased to increase the model capacity, or whether $T$ could be decreased to reduce the computational complexity. Moreover, with Corollary \ref{cor:mono1}, one may consider using the values of $m_{\boldsymbol{\cdot} k}^{(t)}/\sum_{i} \delta(y_i=1)$ to decide how many criteria would be sufficient to equip each individual expert. For simplicity, we consider the number of criteria for each expert as a parameter that determines the model capacity and we fix it as $T$ for all experts in this paper. \section{Related Methods and Discussions}\label{sec:discussion} While we introduce a novel nonlinear regression framework for binary response variables, we recognize some interesting connections with previous work, including the gamma belief network, several binary classification algorithms that use multiple hyperplanes, and the ideas of using the mixture or product of multiple probability distributions to construct a more complex predictive distribution, as discussed below. \subsection{Gamma belief network}\label{sec:GBN} The Poisson gamma belief network is proposed in \citet{PGBN_NIPS2015} to construct a deep Poisson factor model, in which the shape parameters of the gamma distributed factor score matrix at layer $t$ are factorized under the gamma likelihood into the product of a factor loading matrix and a gamma distributed factor score matrix at layer $t+1$.
While the scale parameters of the gamma distributed factor scores depend on the indices of data samples, they are constructed to be independent of the indices of the latent factors, making it convenient to derive closed-form Gibbs sampling update equations via data augmentation and marginalization. The gamma belief networks in both stack- and SS-softplus regressions, on the other hand, do not factorize the gamma shape parameters but parameterize the logarithm of each gamma scale parameter using the inner product of the corresponding covariate and regression coefficient vectors. Hence a gamma scale parameter in softplus regressions depends on both the index of the data sample and that of the corresponding latent expert. On a related note, while the gamma distribution function is the building unit for both gamma belief networks, the one in \citet{PGBN_NIPS2015} is used to factorize the Poisson rates of the observed or latent high-dimensional count vectors, extracting multilayer deep representations in an unsupervised manner, whereas the one used in \eqref{eq:BerPo_recursive_softplus_reg_model} is designed for supervised learning to establish a direct functional relationship to predict a label given its covariates, without introducing factorization within the gamma belief network. \subsection{Multi-hyperplane regression models}\label{sec:CPM} Generalizing the construction of multiclass support vector machines in \citet{crammer2002algorithmic}, the idea of combining multiple hyperplanes to define a nonlinear binary classification decision boundary has been discussed in \citet{aiolli2005multiclass}, \citet{wang2011trading}, \citet{manwani2010learning, manwani2011polyceptron}, and \citet{kantchelian2014large}. In particular, \citet{kantchelian2014large} clearly connects the idea of combining multiple hyperplanes for nonlinear classification with the learning of a convex polytope, defined by the intersection of multiple hyperplanes, to separate one class from the other, and shows that a convex polytope classifier can provide larger margins than a linear classifier equipped with a single hyperplane. From this point of view, the proposed sum-softplus regression is closely related to the convex polytope machine (CPM) of \citet{kantchelian2014large}, as its decision boundary can be explicitly bounded by a convex polytope that encloses negative examples, as described in Theorem \ref{thm:sum_polytope} and illustrated in Fig. \ref{fig:circle_1}. Distinct from the CPM, which uses a convex polytope as its decision boundary, provides no probability estimates for class labels, and offers no principled way to set its number of equally-weighted hyperplanes, sum-softplus regression makes its decision boundary smoother than the corresponding bounding convex polytope, as shown in Figs. \ref{fig:circle_1} (c)-(d), using more complex interactions between hyperplanes than simple intersection; it also provides probability estimates for its labels and supports countably infinite differently-weighted hyperplanes with the gamma-negative binomial process. In addition, to optimize its non-convex objective function, the CPM relies on heuristics that hard-assign a positively labeled data point to one and only one of the hyperplanes, which makes learning the parameters of each hyperplane a convex optimization problem, whereas all softplus regressions use Bayesian inference with closed-form Gibbs sampling update equations, in which each data point is assigned to one or multiple hyperplanes to learn their parameters.
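As a small illustration of the bounding convex polytope of Theorem \ref{thm:sum_polytope} and Proposition \ref{prop:sum_polytope}, the following Python sketch (ours; the function name is hypothetical) checks whether a point can still satisfy $P(y=0\,|\,\boldsymbol{x})> 1-p_0$ under sum-softplus regression.
\begin{verbatim}
import numpy as np

def in_bounding_polytope(x, betas, rs, p0):
    """True iff x lies inside the convex polytope that bounds the
    sum-softplus decision region: for every expert k,
    x' beta_k <= ln((1 - p0)^(-1/r_k) - 1).  Violating any single
    inequality already forces P(y = 1 | x) > p0."""
    return all(x @ b <= np.log((1.0 - p0) ** (-1.0 / r) - 1.0)
               for b, r in zip(betas, rs))
\end{verbatim}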
Moreover, distinct from the CPM and sum-softplus regression that use either a single convex polytope or a single convex-polytope bounded space to enclose \emph{negative} examples, the proposed stack-softplus regression defines a single convex-polytope-like confined space to enclose \emph{positive} examples, and the proposed SS-softplus regression further generalizes all of them in that its decision boundary is related to the union of multiple convex-polytope-like confined spaces. \subsection{Mixture, product, convolution, and stack of experts} With each regression coefficient vector analogized as an expert, the proposed softplus regressions can also be related to the idea of combining multiple experts' beliefs to improve a model's predictive performance. Conventionally, if an expert's belief is expressed as a probability density/mass function, then one may consider using the linear opinion pool \citep{LOP} or logarithmic opinion pool \citep{genest1986combining,heskes1998selecting} to aggregate multiple experts' probability distributions into a single one. To reach a single aggregated distribution of some unknown quantity $y$, the linear opinion pool, also known as a mixture of experts (MoE), aggregates experts additively by taking a weighted average of their distributions on $y$, while the logarithmic opinion pool, including the product of experts (PoE) of \citet{POE} as a special case, aggregates them multiplicatively by taking a weighted geometric mean of these distributions \citep{jacobs1995methods,clemen1999combining}. Opinion pools with separately trained experts can also be related to ensemble methods \citep{hastie01statisticallearning, zhou2012ensemble}, including both bagging \citep{breiman1996bagging} and boosting \citep{freund1997decision}. Another common strategy is to jointly train the experts using the same set of features and data. For example, the PoE of \citet{POE} trains its equal-weighted experts jointly on the same data. The proposed softplus regressions follow the second strategy to jointly train on the same data not only its experts but also their weights. Assume there are $K$ experts and the $k$th expert's belief on $y$ is expressed as a probability density/mass function $f_k(y\,|\,\theta_k)$, where $\theta_k$ represents the distribution parameters. The linear opinion pool aggregates the $K$ expert distributions into a single one using \vspace{0mm}\begin{equation} f_Y(y\,|\,\{\theta_k\}_k) = \sum_{k=1}^K \pi_k f_k(y\,|\,\theta_k), \label{eq: LinearOP} \vspace{0mm}\end{equation} where the $\pi_k$ are nonnegative weights that sum to one. The logarithmic opinion pool aggregates the $K$ expert distributions using \vspace{0mm}\begin{equation} f_Y(y\,|\,\{\theta_k\}_k) = \frac{\prod_{k=1}^K [f_k(y\,|\,\theta_k)]^{\pi_k}}{\sum_{y} \prod_{k=1}^K [f_k(y\,|\,\theta_k)]^{\pi_k}}, \label{eq: LOP} \vspace{0mm}\end{equation} where $\pi_k\ge 0$ and the constraint $\sum_{k=1}^K \pi_k = 1$ is also commonly imposed. If $\pi_k=1$ for all $k$, then a logarithmic opinion pool becomes a product of experts (PoE). In decision and risk analysis, the functions $f_k(y)$ usually represent independent experts' subjective probabilities, which are often assumed to be known \emph{a priori}, and the focus is to optimize the expert weights~\citep{clemen1999combining}. Whereas in statistics and machine learning, both the functions $f_k(y)$ and the expert weights are typically learned from the data.
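As a toy numerical illustration of \eqref{eq: LinearOP} and \eqref{eq: LOP} for a discrete $y$ (a Python sketch; the function names are ours):
\begin{verbatim}
import numpy as np

def linear_pool(F, pi):
    """Linear opinion pool (mixture of experts), eq. (LinearOP):
    weighted average of the K expert PMFs stacked in the K x |Y| array F."""
    return pi @ F

def log_pool(F, pi):
    """Logarithmic opinion pool, eq. (LOP): renormalized weighted
    geometric mean of the expert PMFs; pi = ones(K) gives a PoE."""
    g = np.prod(F ** pi[:, None], axis=0)
    return g / g.sum()

F = np.array([[0.9, 0.1],    # expert 1's PMF over a binary y
              [0.6, 0.4]])   # expert 2's PMF
pi = np.array([0.5, 0.5])
print(linear_pool(F, pi))    # [0.75 0.25]
print(log_pool(F, pi))       # approx. [0.786 0.214], slightly sharper
\end{verbatim}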
One common strategy is to first train different experts separately, such as using different feature sets, different data, and different learning algorithms, and subsequently aggregate their distributions into a single one. For example, in \cite{tax2000combining}, a set of classifiers are first separately trained on different independent data sources and then aggregated additively or multiplicatively to construct a single classifier. \subsubsection{Mixture of experts} A linear opinion pool is also commonly known as a mixture of experts (MoE). Not only are there efficient algorithms, using expectation-maximization (EM) or MCMC, to jointly learn the experts (mixture components) and their mixture weights in an MoE, but there are also nonparametric Bayesian algorithms, such as Dirichlet process mixture models \citep{DP_Mixture_Antoniak,rasmussen2000infinite}, that support a potentially infinite number of experts. We note it is possible to combine the proposed softplus regression models with the linear opinion pool to further improve their performance. We leave that extension for future study. \subsubsection{Product of experts} In contrast to MoEs, the logarithmic opinion pool could produce a probability distribution with sharper boundaries, but at the same time is usually much more challenging to train due to a normalization constant that is typically intractable to compute. Hinton's product of experts (PoE) is one of the most well-known logarithmic opinion pools; since jointly training the experts and their weights in the logarithmic opinion pool makes the inference much more difficult, all the experts in a PoE are weighted equally \citep{POE}. A PoE is distinct from previously proposed logarithmic opinion pools in that its experts are trained jointly rather than separately. Even with the restriction of equal weights, the exact gradients of model parameters in a PoE are often intractable to compute, and hence contrastive divergence that approximately computes these gradients is commonly employed for approximate maximum likelihood inference. Moreover, no nonparametric Bayesian prior is available to allow the number of experts in a PoE to be automatically inferred. PoEs have been successfully applied to binary image modeling \citep{POE}, and one of their special forms, the restricted Boltzmann machine, has been widely used as a basic building block in constructing deep neural networks \citep{hinton2006fast,bengio2007greedy}. PoEs for non-binary data and several other logarithmic opinion pools inspired by PoEs have also been proposed, with applications to image analysis, information retrieval, and computational linguistics \citep{welling2004exponential,xing2005mining,smith2005logarithmic}. We note one may apply the proposed softplus regressions to regress the binary response variables on the covariates transformed by PoEs, such as the restricted Boltzmann machine and its deep constructions, to further improve the classification performance. We leave that extension for future study. \subsubsection{Stack of experts} Different from both MoE and PoE, we propose the stack of experts (SoE) that repeatedly mixes the same distribution with respect to the same distribution parameter as $$x\sim f(r_1,w_1),\ldots,~r_{k-1}\sim f(r_{k},w_{k}),\ldots,~r_{K-1}\sim f(r_K,w_{K}).$$ In an SoE, the marginal distribution of $x$ given $r$ and $\{w_k\}_{1,K}$ can be expressed as $$ f(x\,|\, r_K,\{w_k\}_{1,K}) = \int\!
\ldots\!\int f(x\,|\,r_1,w_1) f(r_1\,|\,r_2,w_2)\ldots f(r_{K-1}\,|\,r_K,w_K)dr_{K-1}\ldots dr_{1}, $$ where the parameter $w_k$ that is pushed into the stack after $w_{k-1}$ will pop out before $w_{k-1}$ to parameterize the marginal distribution. In both stack- and SS-softplus regressions, we obtain a stacked gamma distribution $x\sim f(r_K,\{w_k\}_{1,K})$ by letting $f(x_k\,|\,r_k,w_k)=\mbox{Gamma}(x_k;r_k,e^{w_k})$, and as shown in \eqref{eq:recurssive_softplus_reg} and \eqref{eq:SRS_regression}, the regression coefficient vectors that are pushed into the stack later appear earlier in both the stack- and SS-softplus functions. \subsubsection{Convolution of experts} Distinct from both MoE and PoE, we may consider that both sum- and SS-softplus regressions use a convolution of experts (CoE) strategy to aggregate multiple experts' beliefs by convolving their probability distributions into a single one. A CoE is based on a fundamental law in probability and statistics: the probability distribution of independent random variables' summation is equal to the convolution of their probability distributions \citep[$e.g.$,][]{fristedt1997}. Thus even though it is possible that the convolution is extremely difficult to solve and hence the explicit form of the aggregated probability density/mass function might not be available, simply adding together the random samples independently drawn from a CoE's experts would lead to a random sample drawn from the aggregated distribution that is smoother than any of the distributions used in convolution. In a general setting, denoting $G=\sum_{k=1}^\infty r_k\delta_{\omega_k}$ as a draw from a completely random measure \citep{PoissonP} that consists of countably infinite atoms, one may construct an infinite CoE model to generate random variables from a convolved distribution as $x=\sum_{k=1}^\infty x_k, ~x_k\sim f_k,$ where $f_k=f(r_k,\omega_k)$ are independent experts parameterized by both the weights $r_k$ and atoms~$\omega_k$ of $G$. Denoting $(f_i*f_j)(x):=\int f_i(\tau)f_j(x-\tau)d\tau$ as the convolution operation, under the infinite CoE model, we have \vspace{0mm}\begin{equation} f_X(x)= ( f_1 * f_2*\ldots *f_\infty) (x),\notag \vspace{0mm}\end{equation} where the same distribution function is repeatedly convolved to increase the representation power to better fit complex data. Under this general framework, we may consider both sum- and SS-softplus regressions as infinite CoEs, with the gamma process used as the underlying completely random measure \citep{Kingman,PoissonP} to support countably infinite differently weighted probability distributions for convolution. For sum-softplus regression, as shown in \eqref{eq:CNB} of Theorem \ref{thm:CLR2CNB}, each expert can be considered as a gamma distribution whose scale parameter is parameterized by the inner product of the covariate vector and an expert-specific regression coefficient vector, and the convolution of countably infinite experts' gamma distributions is used as the distribution of the BerPo rate of the response variable; and alternatively, as shown in \eqref{eq:CNB1} of Theorem \ref{thm:CLR2CNB}, each expert can be considered as a NB regression, and the convolution of countably infinite experts' NB distributions is used as the distribution of the latent count response variable.
For SS-softplus regression, each expert can be considered as a stacked gamma distribution, and the convolution of countably infinite experts' stacked gamma distributions is used as the distribution of the BerPo rate of a response variable. Related to the PoE of \citet{POE}, a CoE trains its experts jointly on the same data. Distinct from that, a CoE does not have an intractable normalization constant in the aggregated distribution, its experts can be weighted differently, and its number of experts could be automatically inferred from the data in a nonparametric Bayesian manner. The training for a CoE is also unique, as inferring the parameters of each expert essentially corresponds to deconvolving the aggregated distribution. Moreover, the convolution operation ensures that the aggregated distribution is smoother than every expert distribution. For inference, while it is often challenging to analytically deconvolve the convolved distribution function, we consider first constructing a hierarchical Bayesian model that can generate random variables from the convolved distribution, and then developing a MCMC algorithm to decompose the total sum $x$ into the $x_k$ of individual experts, which are then used to infer the model parameters $r_k$ and $\omega_k$ for each expert. \section{Experimental settings and additional results}\label{app:setting} We use the $L_2$ regularized logistic regression provided by the LIBLINEAR package \citep{REF08a} to train a linear classifier, where a bias term is included and the regularization parameter $C$ is five-fold cross-validated on the training set from $(2^{-10}, 2^{-9},\ldots, 2^{15})$. For the kernel SVM, a Gaussian RBF kernel is used. We use the LIBSVM package \citep{LIBSVM}, where we three-fold cross-validate both the regularization parameter $C$ and the kernel-width parameter $\gamma$ on the training set from $(2^{-5}, 2^{-4},\ldots, 2^{5})$, and choose the default settings for all the other parameters. Following \citet{chang2010training}, for the ijcnn1 dataset, we choose $C=32$ and $\gamma=2$, and for the a9a dataset, we choose $C=8$ and $\gamma=0.03125$. For RVM, instead of directly quoting the results from \citet{RVM}, which only reported the mean but not the standard deviation of the classification errors for each of the first six datasets in Tab. \ref{tab:data}, we use the Matlab code\footnote{\url{http://www.miketipping.com/downloads/SB2_Release_200.zip}} provided by the author, with a Gaussian RBF kernel whose width is three-fold cross-validated on the training set from $(2^{-5},2^{-4.5},\ldots,2^{5})$ for both ijcnn1 and a9a and from $(2^{-10},2^{-9.5},\ldots,2^{10})$ for all the others. We consider the adaptive multi-hyperplane machine (AMM) of \citet{wang2011trading}, as implemented in the BudgetSVM\footnote{\url{http://www.dabi.temple.edu/budgetedsvm/}} (Version 1.1) software package \citep{BudgetSVM}. We consider the batch version of the algorithm. Important parameters of the AMM include both the regularization parameter $\lambda$ and the number of training epochs $E$. As also observed in \citet{kantchelian2014large}, the testing errors of the AMM do not strictly decrease as $E$ increases.
Thus, in addition to cross validating the regularization parameter $\lambda$ on the training set from $\{10^{-7}, 10^{-6},\ldots, 10^{-2}\}$, as done in \citet{wang2011trading}, for each $\lambda$, we try $E\in\{5,10,20,50,100\}$ sequentially until the cross-validation error stops decreasing, $e.g.$, under the same $\lambda$, we choose $E=20$ if the cross-validation error of $E=50$ is greater than that of $E=20$. We use the default settings for all the other parameters. We consider the convex polytope machine (CPM) of \citet{kantchelian2014large}, using the python code\footnote{\url{https://github.com/alkant/cpm}} provided by the authors. Important parameters of the CPM include the entropy parameter $h$, regularization factor $C$, and number of hyperplanes $K$ for each side of the CPM (2$K$ hyperplanes in total). Similar to the setting of \citet{kantchelian2014large}, we first fix $h=0$ and select the best regularization factor $C$ from $\{10^{-4},10^{-3},\ldots,10^{0}\}$ using three-fold cross validation on the training set. For each $C$, we try $K\in\{1,3,5,10,20,40,60,80,100\}$ sequentially until the cross-validation error stops decreasing. With both $C$ and $K$ selected, we then select $h$ from $\{0,\ln(K/10), \ln(2K/10), \ldots,\ln(9K/10)\}$. For each trial, we consider $10$ million iterations in cross-validation and $32$ million iterations in training with the cross-validated parameters. Note that, different from \citet{kantchelian2014large}, which suggests that the error rate decreases as $K$ increases, we cross-validate $K$, as we have found that the testing errors of the CPM may increase once $K$ grows beyond a certain limit. \begin{algorithm}[t!] \small \caption{\small Upward-downward Gibbs sampling for sum-stack-softplus (SS-softplus) regression.\newline \textbf{Inputs:} $y_i$: the observed labels, $\boldsymbol{x}_i$: covariate vectors, $K_{\max}$: the upper-bound of the number of experts, $T$: the number of criteria of each expert, $I_{Prune}$: the set of iterations at which the operation of deactivating experts is performed, and the model hyper-parameters. \newline \textbf{Outputs:} $KT$ regression coefficient vectors $\betav^{(t)}_{k}$ and $K$ weights $r_k$, where $K\le K_{\max}$ is the total number of active experts that are associated with nonzero latent counts. }\label{alg:1} \begin{algorithmic}[1] \State \text{Initialize the model parameters with $\betav^{(t)}_{k}=0$ and $r_k=1/K_{\max}$.} \For{\text{$iter=1:maxIter$}} \Comment{Gibbs sampling} \ParFor{\text{$k=1,\ldots,K_{\max}$}} \Comment{Downward sampling} \For{\text{$t=T,T-1,\ldots,1$}} \State \text{Sample $\theta_{ik}^{(t)}$ if Expert $k$ is active } ; \EndFor \EndParFor \State \text{Sample $m_i$ }; \text{Sample $\{m^{(1)}_{ik}\}_k$ }; \ParFor{\text{$k=1,\ldots,K_{\max}$}} \Comment{Upward sampling} \If{Expert $k$ is active } \For{\text{$t=2,3,\ldots,{T}+1$}} \State \text{Sample $m_{ik}^{(t)}$} ; \text{Sample $\omega_{ik}^{(t)}$} ; \text{Sample $\betav_k^{(t)}$ and Calculate $p_{ik}^{(t)}$ and ${q}_{ik}^{(t)}$} ; \EndFor \EndIf \State Deactivate Expert $k$ if $iter \in I_{Prune}$ and $m_{\boldsymbol{\cdot} k}^{(1)} = 0$ ; \EndParFor \State \text{Sample $\gamma_0$ and $c_0$} ; \State \text{Sample $r_{1},\ldots,r_{K_{\max}}$} ; \EndFor \end{algorithmic} \normalsize \end{algorithm}% \begin{figure}[!h] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/xor_20_20.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:xor} Analogous figure to Fig.
\ref{fig:banana} for SS-softplus regression for a different dataset with two classes, where Class $A$ consists of 50 data points $(x_i,y_i)$ centered around $(-2,2)$, with $x_i\sim\mathcal{N}(-2,1)$ and $y_i\sim\mathcal{N}(2,1)$, and another 50 similarly generated data points centered around $(2,-2)$, and Class $B$ consists of 50 such data points centered around $(2,2)$ and another 50 centered around $(-2,-2)$. } \begin{center} \includegraphics[width=0.75\columnwidth]{figure/xor_ave_20_20.png} \vspace{-.2cm} \end{center} \vspace{-5.9mm} \caption{\small\label{fig:xor_ave} Analogous figure to Fig. \ref{fig:banana_ave}, with the same experimental setting used for Fig. \ref{fig:xor}.} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/dbmoon_20_20.png} \vspace{-.2cm} \end{center} \vspace{-5.9mm} \caption{\small\label{fig:dbmoon} Analogous figure to Fig. \ref{fig:banana} for SS-softplus regression for a double moon dataset, where both Classes $A$ and $B$ consist of 250 data points $(x_i,y_i)$. } \begin{center} \includegraphics[width=0.75\columnwidth]{figure/dbmoon_ave_20_20.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:dbmoon_ave} Analogous figure to Fig. \ref{fig:banana_ave}, with the same experimental setting used for Fig. \ref{fig:dbmoon}. } \end{figure} \begin{table}[t!] \footnotesize \caption{\small Binary classification datasets used in experiments, where $V$ is the feature dimension.\vspace{1mm}}\label{tab:data} \vspace{-4mm} \begin{center} \begin{tabular}{c|cccccccc} \toprule Dataset & banana & breast cancer & titanic & waveform & german & image & ijcnn1 & a9a \\ \midrule Train size &400 &200&150&400&700&1300 &49,990& 32,561 \\ Test size &4900 &77&2051&4600&300&1010 & 91,701& 16,281 \\ $V$ & 2 &9 & 3 &21 &20 &18&22& 123\\ \bottomrule \end{tabular} \end{center} \vspace{-4mm} \footnotesize \caption{\small Performance of stack-softplus regression with the depth set as $T\in\{1,2,3,5,10\}$, where stack-softplus regression with $T=1$ reduces to softplus regression.\vspace{1mm}} \label{table:4} \centering \begin{tabular}{c|ccccc} \toprule Dataset & softplus & stack-$\varsigma$ ($T$=2) & stack-$\varsigma$ ($T$=3) & stack-$\varsigma$ ($T$=5) & stack-$\varsigma$ ($T$=10) \\ \midrule banana & $47.87 \pm 4.36$ & $34.66 \pm 5.58$ & $32.19 \pm 4.76$ & $33.21 \pm 5.76$ & $\mathbf{30.67} \pm 4.23$ \\ breast cancer & $28.70 \pm 4.76$ & $29.35 \pm 2.31$ & $29.48 \pm 4.94$ & $\mathbf{27.92} \pm 3.31$ & $28.31 \pm 4.36$ \\ titanic & $22.53 \pm 0.43$ & $22.80 \pm 0.59$ & $22.48 \pm 0.55$ & $22.71 \pm 0.70$ & $22.84 \pm 0.54$ \\ waveform & $13.62 \pm 0.71$ & $12.52 \pm 1.14$ & $\mathbf{12.23} \pm 0.79$ & $12.25 \pm 0.69$ & $12.33 \pm 0.65$ \\ german & $24.07 \pm 2.11$ & $23.73 \pm 1.99$ & $23.67 \pm 1.89$ & $\mathbf{22.97} \pm 2.22$ & $23.80 \pm 1.64$ \\ image & $17.55 \pm 0.75$ & $9.11 \pm 0.99$ & $8.39 \pm 1.05$ & $7.97 \pm 0.52$ & $\mathbf{7.50} \pm 1.17$ \\ \bottomrule \begin{tabular}{@{}c@{}}\scriptsize Mean of SVM\\\scriptsize normalized errors\end{tabular} & $2.485$ & $1.773 $ & $1.686$ & $1.665$ & $1.609$ \\ \end{tabular} \footnotesize \caption{\small Performance of SS-softplus regression with $K_{\max}=20$ and the depth set as $T\in\{1,2,3,5,10\}$, where SS-softplus regression with $T=1$ reduces to sum-softplus regression.\vspace{1mm}} \label{tab:5} \centering \begin{tabular}{c|ccccc} \toprule Dataset & sum-$\varsigma$ & SS-$\varsigma$ ($T$=2) & SS-$\varsigma$ ($T$=3) & SS-$\varsigma$
($T$=5) & SS-$\varsigma$ ($T$=10) \\ \midrule banana & $30.78 \pm 8.68$ & $15.00 \pm 5.31$ & $12.54 \pm 1.18$ & $\mathbf{11.89} \pm 0.61$ & $11.93 \pm 0.59$ \\ breast cancer & $30.13 \pm 4.23$ & $29.74 \pm 3.89$ & $30.39 \pm 4.94$ & $28.83 \pm 3.40$ & $\mathbf{28.44} \pm 4.60$ \\ titanic & $22.48 \pm 0.25$ & $22.56 \pm 0.65$ & $22.42 \pm 0.45$ & $22.29 \pm 0.80$ & $\mathbf{22.20} \pm 0.48$ \\ waveform & $11.51 \pm 0.65$ & $11.41 \pm 0.96$ & $\mathbf{11.34} \pm 0.70$ & $11.69 \pm 0.69$ & $12.92 \pm 1.00$ \\ german & $23.60 \pm 2.39$ & $\mathbf{23.30} \pm 2.54$ & $\mathbf{23.30} \pm 2.20$ & $24.23 \pm 2.46$ & $23.90 \pm 1.50$ \\ image & $3.50 \pm 0.73$ & $2.76 \pm 0.47$ & $\mathbf{2.59} \pm 0.47$ & $2.73 \pm 0.53$ & $2.93 \pm 0.46$ \\ \bottomrule \begin{tabular}{@{}c@{}}\scriptsize Mean of SVM\\\scriptsize normalized errors\end{tabular} & $1.370$ & $1.079$ & $1.033$ & $1.033$ & $1.059$ \\ \end{tabular} \caption{\small Analogous table to Tab. \ref{tab:5} for the number of inferred experts (hyperplanes). \vspace{1mm}} \label{table:6} \begin{tabular}{c|ccccc} \toprule Dataset & sum-$\varsigma$ & SS-$\varsigma$ ($T$=2) & SS-$\varsigma$ ($T$=3) & SS-$\varsigma$ ($T$=5) & SS-$\varsigma$ ($T$=10) \\ \midrule banana & $3.70 \pm 0.95$ & $5.70 \pm 0.67$ & $6.80 \pm 0.79$ & $7.60 \pm 1.17$ & $9.80 \pm 2.39$ \\ breast cancer & $3.10 \pm 0.74$ & $4.10 \pm 0.88$ & $5.70 \pm 1.70$ & $6.40 \pm 1.43$ & $9.50 \pm 1.51$ \\ titanic & $2.30 \pm 0.48$ & $3.30 \pm 0.82$ & $3.80 \pm 0.92$ & $4.00 \pm 0.94$ & $6.20 \pm 1.23$ \\ waveform & $4.40 \pm 0.84$ & $6.20 \pm 1.62$ & $7.00 \pm 2.21$ & $8.90 \pm 2.33$ & $11.50 \pm 2.72$ \\ german & $6.70 \pm 0.95$ & $9.80 \pm 1.48$ & $11.10 \pm 2.64$ & $14.70 \pm 1.77$ & $20.00 \pm 2.40$ \\ image & $11.20 \pm 1.32$ & $13.20 \pm 2.30$ & $14.60 \pm 2.07$ & $17.60 \pm 1.90$ & $21.40 \pm 2.22$ \\ \bottomrule \begin{tabular}{@{}c@{}}\scriptsize Mean of SVM \\\scriptsize normalized $K$\end{tabular} & $0.030$ & $0.041~(\times 2)$ & $0.048~(\times 3)$ & $0.057~(\times 5)$ & $0.077~(\times 10)$ \\ \end{tabular} \end{table} \begin{table}[t!] \footnotesize \caption{\small Comparison of classification errors of logistic regression (LR), support vector machine (SVM), relevance vector machine (RVM), adaptive multi-hyperplane machine (AMM), convex polytope machine (CPM), softplus regression, sum-softplus (sum-$\varsigma$) regression with $K_{\max}=20$, stack-softplus (stack-$\varsigma$) regression with $T=5$, and SS-softplus regression with $K_{\max}=20$ and $T=5$. \vspace{1mm}}\label{tab:Error1} \centering \begin{tabular}{c|ccccccccc} \toprule Dataset & LR & SVM & RVM & AMM & CPM & softplus & sum-$\varsigma$ & stack-$\varsigma$ ($T$=5) & SS-$\varsigma$ ($T$=5) \\ \midrule ijcnn1 & $8.00$ & $1.30$ & $\mathbf{1.29}$ & $2.06$ & $2.57$ & $8.41$ & $3.39$ & $6.43$ & $2.24$ \\ & & & & $\pm 0.27$ & $\pm 0.17$ & $\pm 0.03$ & $\pm 0.17$ & $\pm 0.15$ & $\pm 0.12$ \\ \midrule a9a & $15.00$ & $\mathbf{14.88}$ & $14.95$ & $15.03$ & $15.08$ & $15.02$ & $\mathbf{14.88}$ & $15.00$ & $15.02$ \\ & & & & $\pm 0.17$ & $\pm 0.07$ & $\pm 0.06$ & $\pm 0.05$ & $\pm 0.06$ & $\pm 0.11$ \\ \bottomrule \end{tabular} \footnotesize \caption{\small Analogous table to Tab. \ref{tab:Error1} for the number of inferred experts (hyperplanes).
\vspace{1mm}} \label{table:K1} \centering \begin{tabular}{c|ccccccccc} \toprule & LR & SVM & RVM & AMM & CPM & softplus & sum-$\varsigma$ & stack-$\varsigma$ ($T$=5) & SS-$\varsigma$ ($T$=5) \\ \midrule ijcnn1 & 1 & $2477$ & $296$ & $8.20$ & $58.00$ & 2 & $37.60$ & $2~(\times 5)$ & $38.80~(\times 5)$ \\ & & & & $\pm 0.84$ & $\pm 13.04$ & & $\pm 1.52$ & & $\pm 0.84~(\times 5)$ \\ \midrule a9a & 1 & $11506$ & $109$ & $28.00$ & $7.60$ & 2 & $37.60$ & $2~(\times 5)$ & $40.00~(\times 5)$ \\ & & & & $\pm 4.12$ & $\pm 2.19$ & & $\pm 0.55$ & & $\pm 0.00~(\times 5)$\\ \bottomrule \end{tabular} \caption{\small Performance of stack-softplus regression with the depth set as $T\in\{1,2,3,5,10\}$, where stack-softplus regression with $T=1$ reduces to softplus regression.\vspace{1mm}} \centering \label{table:9} \begin{tabular}{c|ccccc} \toprule Dataset & softplus & stack-$\varsigma$ ($T$=2) & stack-$\varsigma$ ($T$=3) & stack-$\varsigma$ ($T$=5) & stack-$\varsigma$ ($T$=10) \\ \midrule ijcnn1 & $8.41 \pm 0.03$ & $6.73 \pm 0.13$ & $6.44 \pm 0.21$ & $6.43 \pm 0.15$ & $\mathbf{6.39} \pm 0.08$ \\ a9a & $15.02 \pm 0.06$ & $14.96 \pm 0.04$ & $\mathbf{14.93} \pm 0.06$ & $15.00 \pm 0.06$ & $14.97 \pm 0.08$ \\ \bottomrule \end{tabular} \caption{\small Performance of SS-softplus regression with $K_{\max}=20$ and the depth set as $T\in\{1,2,3,5,10\}$, where SS-softplus regression with $T=1$ reduces to sum-softplus regression.\vspace{1mm}} \centering \label{table:10} \begin{tabular}{c|ccccc} \toprule Dataset & sum-$\varsigma$ & SS-$\varsigma$ ($T$=2) & SS-$\varsigma$ ($T$=3) & SS-$\varsigma$ ($T$=5) & SS-$\varsigma$ ($T$=10) \\ \midrule ijcnn1 & $3.39 \pm 0.17$ & $2.32 \pm 0.18$ & $2.31 \pm 0.17$ & $2.24 \pm 0.12$ & $\mathbf{2.19} \pm 0.11$ \\ a9a & $\mathbf{14.88} \pm 0.05$ & $14.98 \pm 0.03$ & $15.07 \pm 0.20$ & $15.02 \pm 0.11$ & $15.09 \pm 0.06$ \\ \bottomrule \end{tabular} \end{table} \section{Introduction}\label{sec:introduction} Logistic and probit regressions that use a single hyperplane to partition the covariate space into two halves are widely used to model binary response variables given the covariates \citep{cox1989analysis,mccullagh1989generalized,albert1993bayesian,holmes2006bayesian}. They are easy to implement and simple to interpret, but neither of them is capable of producing nonlinear classification decision boundaries, and they may not provide a large margin to achieve accurate out-of-sample predictions. For two classes not well separated by a single hyperplane, rather than regressing a binary response variable directly on its covariates, it is common to select a subset of covariate vectors as support vectors, choose a nonlinear kernel function, and regress a binary response variable on the kernel distances between its covariate vector and these support vectors \citep{boser1992training, cortes1995support, vapnik1998statistical, scholkopf1999advances,RVM}. Alternatively, one may construct a deep neural network to nonlinearly transform the covariates in a supervised manner, and then regress a binary response variable on its transformed covariates \citep{hinton2006fast, lecun2015deep,Bengio-et-al-2015-Book}. Both kernel learning and deep learning map the original covariates into a more linearly separable space, transforming a nonlinear classification problem into a linear one. In this paper, we propose a fundamentally different approach for nonlinear classification.
Relying on neither the kernel trick nor a deep neural network to transform the covariate space, we construct a family of softplus regressions that exploit two distinct types of interactions between hyperplanes to define flexible nonlinear classification decision boundaries directly on the original covariate space. Since kernel learning based methods such as kernel support vector machines (SVMs) \citep{cortes1995support, vapnik1998statistical} may scale poorly in that the number of support vectors often increases linearly in the size of the training dataset, they could be not only slow and memory inefficient to train but also unappealing for making fast out-of-sample predictions \citep{steinwart2003sparseness,wang2011trading}. One motivation of the paper is to investigate the potential of using a set of hyperplanes, whose number is directly influenced by how the interactions of multiple hyperplanes can be used to spatially separate two different classes in the covariate space rather than by the training data size, to construct nonlinear classifiers that can match the out-of-sample prediction accuracies of kernel SVMs, but potentially with much lower computational complexity. Another motivation of the paper is to increase the margin of the classifier, related to the discussion in \citet{kantchelian2014large} that for two classes that are linearly separable, even though a single hyperplane is sufficient to separate the two different classes in the training dataset, using multiple hyperplanes to enclose one class may help clearly increase the total margin of the classifier and hence improve the out-of-sample prediction accuracies. The proposed construction exploits two distinct operations---convolution and stacking---on the gamma distributions with covariate-dependent scale parameters. The convolution operation convolves differently parameterized probability distributions to increase representation power and enhance smoothness, while the stacking operation mixes a distribution in the stack with a distribution of the same family that is subsequently pushed into the stack. Depending on whether and how the convolution and stacking operations are used, the models in the family differ from each other on how they use the softplus functions to construct highly nonlinear probability density functions, and on how they construct their hierarchical Bayesian models to arrive at these functions. In comparison to the nonlinear classifiers built on kernels or deep neural networks, the proposed softplus regressions all share a distinct advantage in providing interpretable geometric constraints, which are related to either a single or a union of convex polytopes \citep{polytope}, on the classification decision boundaries defined on the original covariate space. In addition, unlike kernel learning, whose number of support vectors often increases linearly in the size of the data \citep{steinwart2003sparseness}, and unlike deep learning, which often requires carefully tuning both the structure of the deep network and the learning algorithm \citep{Bengio-et-al-2015-Book}, the proposed nonparametric Bayesian softplus regressions naturally provide probability estimates, automatically learn the complexity of the predictive distribution, and quantify model uncertainties with posterior samples. The remainder of the paper is organized as follows.
In Section \ref{sec:model}, we define four different softplus regressions, present their underlying hierarchical models, and describe their distinct geometric constraints on how the covariate space is partitioned. In Section~\ref{sec:inference}, we discuss Gibbs sampling via data augmentation and marginalization. In Section~\ref{sec:results}, we present experimental results on eight benchmark datasets for binary classification, making comparisons with five different classification algorithms. We conclude the paper in Section~\ref{sec:conclusion}. We defer to the Supplementary Materials all the proofs, an accurate approximate sampler and some new properties for the Polya-Gamma distribution, the discussions on related work, and some additional example results. \vspace{-3mm} \section{Hierarchical Models and Geometric Constraints} \label{sec:model} \vspace{-2mm} \subsection{Bernoulli-Poisson link and softplus function} \label{sec:notation} \vspace{-1mm} To model a binary random variable, it is common to link it to a real-valued latent Gaussian random variable using either the logistic or probit links. Rather than following the convention, in this paper, we consider the Bernoulli-Poisson (BerPo) link \citep{Dunson05bayesianlatent,EPM_AISTATS2015} to threshold a latent count at one to obtain a binary outcome $y\in\{0,1\}$~as \vspace{0mm}\begin{equation} y = \delta(m\ge 1), ~m\sim\mbox{Pois}(\lambda),\label{eq:BerPo} \vspace{0mm}\end{equation} where $m\in\mathbb{Z}$, $\mathbb{Z}:=\{0,1,\ldots\}$, and $\delta(x) = 1$ if the condition $x$ is satisfied and $\delta(x) = 0$ otherwise. The marginalization of the latent count $m$ from the BerPo link leads to $$ y\sim\mbox{Bernoulli}(p),~p=1-e^{-\lambda}. $$ The conditional distribution of $m$ given $y$ and $\lambda$ can be efficiently simulated using a rejection sampler \citep{EPM_AISTATS2015}. Since its use in \citet{EPM_AISTATS2015} to factorize the adjacency matrix of an undirected unweighted symmetric network, the BerPo link has been further extended for big binary tensor factorization \citep{hu2015zero}, multi-label learning \citep{rai2015large}, and deep Poisson factor analysis \citep{henao2015deep}. This link has also been used by \citet{caron2014sparse} and \citet{todeschini2016exchangeable} for network analysis. We now refer to $\lambda=-\ln(1-p)$, the negative logarithm of the Bernoulli failure probability, as the BerPo rate for $y$ and simply denote \eqref{eq:BerPo} as $y \sim \mbox{BerPo}(\lambda)$. It is instructive to notice that $1/(1+e^{-x}) = 1-\exp[{-\ln(1+e^x)}]$, and hence letting \vspace{0mm}\begin{equation} y\sim\mbox{Bernoulli}\big[\sigma(x)\big],~\sigma(x) = 1/(1+e^{-x}) \label{eq:BerSigmoid} \vspace{0mm}\end{equation} is equivalent to letting \vspace{0mm}\begin{equation} y\sim\mbox{BerPo}\big[\varsigma(x)\big],~\varsigma(x) = \ln(1+e^x), \label{eq:BerPoSoftplus} \vspace{0mm}\end{equation} where $\varsigma(x) = \ln(1+e^x)$ was referred to as the softplus function in \citet{dugas2001incorporating}. It is interesting that the BerPo link appears to be naturally paired with the softplus function, which is often considered as a smoothed version of the rectifier, or rectified linear unit $$ \mbox{ReLU}(x) = \max(0,x), $$ that is now widely used in deep neural networks, replacing other canonical nonlinear activation functions such as the sigmoid and hyperbolic tangent functions \citep{nair2010rectified,glorot2011deep,lecun2015deep,krizhevsky2012imagenet,CRLU}.
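As a quick numerical sanity check of the equivalence between \eqref{eq:BerSigmoid} and \eqref{eq:BerPoSoftplus} (a minimal sketch of ours in Python with NumPy, not code from the paper), one can verify both the identity $1-e^{-\varsigma(x)}=\sigma(x)$ and the Monte Carlo behavior of the BerPo link:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 0.7
lam = softplus(x)

# Identity: 1 - exp(-softplus(x)) equals sigmoid(x).
print(1.0 - np.exp(-lam), sigmoid(x))

# Monte Carlo: y = 1(m >= 1) with m ~ Pois(softplus(x)) is
# Bernoulli(sigmoid(x)); the empirical mean is ~0.668 for x = 0.7.
m = rng.poisson(lam, size=200_000)
print((m >= 1).mean())
\end{verbatim}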
In this paper, we further introduce the stack-softplus function \vspace{0mm}\begin{equation} \varsigma(x_1,\ldots,x_t) = \ln\left(1+e^{x_t}\ln\left\{1+e^{x_{t-1}}\ln\big[1+\ldots \ln\big(1+e^{x_1}\big)\big]\right\}\right), \label{eq:deep_softplus} \vspace{0mm}\end{equation} which can be recursively defined with $\varsigma(x_1,\ldots,x_t) = \ln[1+e^{x_t} \varsigma(x_1,\ldots,x_{t-1})]$. In addition, with $r_k$ as the weights of the countably infinite atoms of a gamma process \citep{ferguson73}, we will introduce the sum-softplus function, expressed as $ \sum_{k=1}^\infty r_k\, \varsigma(x_k), $ and sum-stack-softplus (SS-softplus) function, expressed as $ \sum_{k=1}^\infty r_k\, \varsigma(x_{k1},\ldots,x_{kt}). $ The stack-, sum-, and SS-softplus functions constitute a family of softplus functions, which are used to construct nonlinear regression models, as presented below. \subsection{The softplus regression family}\label{sec:family} The equivalence between \eqref{eq:BerSigmoid} and \eqref{eq:BerPoSoftplus}, the apparent partnership between the BerPo link and softplus function, and the convenience of employing multiple regression coefficient vectors to parameterize the BerPo rate, which is constrained to be nonnegative rather than between zero and one, motivate us to consider using the BerPo link together with the softplus function to model binary response variables given the covariates. We first show how a classification model under the BerPo link reduces to logistic regression that uses a single hyperplane to partition the covariate space into two halves. We then generalize it to two distinct multi-hyperplane classification models: sum- and stack-softplus regressions, and further show how to integrate them into SS-softplus regression. These models clearly differ from each other on how the BerPo rates are parameterized with the softplus functions, leading to decision boundaries under distinct geometric constraints. To be more specific, for the $i$th covariate vector $\boldsymbol{x}_i=(1,x_{i1},\ldots,x_{iV})'\in\mathbb{R}^{V+1}$, where the prime denotes the operation of transposing a vector, we model its binary class label using \vspace{0mm}\begin{equation} y_i \,|\, \boldsymbol{x}_i \sim\mbox{BerPo}[\lambda(\boldsymbol{x}_i)], \label{eq:BerPoINreg} \vspace{0mm}\end{equation} where $\lambda(\boldsymbol{x}_i)$, given the regression model parameters that may come from a stochastic process, is a nonnegative deterministic function of $\boldsymbol{x}_i$ that may contain a countably infinite number of parameters. Let $G\sim\Gamma\mbox{P}(G_0,1/c)$ denote a gamma process \citep{ferguson73} defined on the product space $\mathbb{R}_+\times \Omega$, where $\mathbb{R}_+=\{x:x>0\}$, $c$ is a scale parameter, and $G_0$ is a finite and continuous base measure defined on a complete separable metric space $\Omega$, such that $G(A_i)\sim\mbox{Gamma}(G_0(A_i),1/c)$ are independent gamma random variables for disjoint Borel sets $A_i$ of $\Omega$. Below we show how the BerPo rate function $\lambda(\boldsymbol{x}_i)$ is parameterized under four different softplus regressions, two of which use the gamma process to support a countably infinite sum in the parameterization, and also show how to arrive at each parameterization using a hierarchical Bayesian model built on the BerPo link together with the convolved and/or stacked gamma distributions.
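For concreteness, the following minimal sketch (Python/NumPy, ours; the countably infinite sums are truncated to $K$ experts and all function names are illustrative) implements the softplus function family and the induced BerPo success probability, matching the parameterizations given in the definitions below:

\begin{verbatim}
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def stack_softplus(zs):
    # zs = [x'beta^(2), ..., x'beta^(T+1)]; recursion
    # sigma(z_1,...,z_t) = ln(1 + e^{z_t} sigma(z_1,...,z_{t-1})).
    out = softplus(zs[0])
    for z in zs[1:]:
        out = np.log1p(np.exp(z) * out)
    return out

def sum_softplus(r, zs):
    # sum_k r_k ln(1 + e^{z_k}) over the K truncated experts.
    return np.sum(r * softplus(np.asarray(zs)))

def ss_softplus(r, Z):
    # Z[k] holds the T inner products of expert k.
    return np.sum([rk * stack_softplus(Zk) for rk, Zk in zip(r, Z)])

def berpo_prob(lam):
    # BerPo success probability P(y = 1) = 1 - exp(-lambda).
    return 1.0 - np.exp(-lam)
\end{verbatim}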
\begin{definition}[Softplus regression]\label{thm:1} Given $\boldsymbol{x}_i$, weight $r\in \mathbb{R}_+$, and a regression coefficient vector $\betav\in\mathbb{R}^{V+1}$, softplus regression parameterizes $\lambda(\boldsymbol{x}_i)$ in \eqref{eq:BerPoINreg} using a softplus function as \vspace{0mm}\begin{equation} \lambda(\boldsymbol{x}_i) = r\,\varsigma(\boldsymbol{x}_i'\betav)= r \ln(1+e^{\boldsymbol{x}_i'\betav}). \label{eq:BerPoSoftplusReg} \vspace{0mm}\end{equation} Softplus regression is equivalent to the binary regression model $$ y_{i} \sim \emph{\mbox{Bernoulli}}\left[ 1-\big({1+e^{\boldsymbol{x}_i'\betav}}\big)^{-r\,} \right], \notag $$ which, as proved in Appendix \ref{sec:proof}, can be constructed using the hierarchical model \begin{align} &y_{i} = \delta( m_{i} \ge 1) ,~m_{i}\sim\emph{\mbox{Pois}}(\theta_{i}),~\theta_{i}\sim\emph{\mbox{Gamma}}\big(r, e^{\boldsymbol{x}_i'\betav}\big). \label{eq:Softplus_model} \end{align} \end{definition} \begin{definition}[Sum-softplus regression] \label{thm:CLR2CNB} Given a draw from a gamma process $G\sim\Gamma\emph{\mbox{P}}(G_0,1/c)$, expressed as $G=\sum_{k=1}^\infty r_k \delta_{\betav_k} $, where $\betav_k\in\mathbb{R}^{V+1}$ is an atom and $r_k$ is its weight, sum-softplus regression parameterizes $\lambda(\boldsymbol{x}_i)$ in \eqref{eq:BerPoINreg} using a sum-softplus function as \vspace{0mm}\begin{equation} \lambda(\boldsymbol{x}_i)= \sum_{k=1}^\infty r_k \,\varsigma(\boldsymbol{x}_i' \betav_k) = \sum_{k=1}^\infty r_k \ln(1+e^{\boldsymbol{x}_i' \betav_k}). \label{eq:sum_softplus_reg} \vspace{0mm}\end{equation} Sum-softplus regression is equivalent to the binary regression model \vspace{0mm}\begin{equation} y_{i} \sim \emph{\mbox{Bernoulli}}\left[ 1-e^{-\sum_{k=1}^{\infty}r_k \varsigma(\boldsymbol{x}_i'\betav_{k})} \right] = \emph{\mbox{Bernoulli}}\left[ 1-\prod_{k=1}^{\infty}\left(\frac{1}{1+e^{\boldsymbol{x}_i'\betav_{k}}}\right)^{r_{k}} \right], \notag \vspace{0mm}\end{equation} which, as proved in Appendix \ref{sec:proof}, can be constructed using the hierarchical model \begin{align} &y_{i} = \delta( m_{i} \ge 1),~m_{i} \sim\emph{\mbox{Pois}}(\theta_i),~\theta_i = \sum_{k=1}^\infty \theta_{ik}, ~\theta_{ik}\sim{\emph{\mbox{Gamma}}}\big(r_{k},e^{\boldsymbol{x}_i'\betav_{k}}\big). \label{eq:CNB} \end{align} \end{definition} \begin{definition}[Stack-softplus regression]\label{thm:stack-softplus} With weight $r\in\mathbb{R}_+$ and $T$ regression coefficient vectors $\betav^{(2:T+1)}:=(\betav^{(2)},\ldots,\betav^{(T+1)})$, where $\betav^{(t)}\in\mathbb{R}^{V+1}$, stack-softplus regression with $T$ layers parameterizes $\lambda(\boldsymbol{x}_i)$ in \eqref{eq:BerPoINreg} using a stack-softplus function as \vspace{0mm}\begin{eqnarray} & \displaystyle\lambda(\boldsymbol{x}_i) = r\, \varsigma\big(\boldsymbol{x}_i'\betav^{(2)},\ldots,\boldsymbol{x}_i'\betav^{(T+1)}\big) \notag\\ &\displaystyle =r\ln\left(1+e^{\boldsymbol{x}_i'\betav^{(T+1)}}\ln\bigg\{1+e^{\boldsymbol{x}_i'\betav^{(T)}}\ln\Big[1+\ldots \ln\big(1+e^{\boldsymbol{x}_i'\betav^{(2)}}\big)\Big]\bigg\}\right).
\label{eq:recurssive_softplus_reg} \vspace{0mm}\end{eqnarray} Stack-softplus regression is equivalent to the regression model \vspace{0mm}\begin{eqnarray}\small y_i&\sim&\emph{\mbox{Bernoulli}}\Big(1-e^{-r\,\varsigma\left(\boldsymbol{x}_i'\betav^{(2:T+1)}\right)}\Big)\notag\\ &=&\emph{\mbox{Bernoulli}}\left[1 - \left(1\!+\!e^{\boldsymbol{x}_i'\betav^{(T+1)}}\ln\bigg\{1\!+\!e^{\boldsymbol{x}_i'\betav^{(T)}}\ln\Big[1\!+\!\ldots \ln\big(1\!+\!e^{\boldsymbol{x}_i'\betav^{(2)}}\big)\Big]\bigg\}\right)^{-r} ~\right ],~~ \notag \vspace{0mm}\end{eqnarray} which, as proved in Appendix \ref{sec:proof}, can be constructed using the hierarchical model that stacks $T$ gamma distributions, whose scales are differently parameterized by the covariates, as \begin{align} &~~\theta^{(T)}_{i}\sim\emph{\mbox{Gamma}}\left(r,e^{\boldsymbol{x}_i'\betav^{(T+1)}}\right),\notag\\ &~~~~~~~~~~~~~~\cdots\notag\\ &\theta^{(t)}_{i}\sim\emph{\mbox{Gamma}}\left(\theta^{(t+1)}_{i},e^{\boldsymbol{x}_i'\betav^{(t+1)}}\right), \notag\\ &~~~~~~~~~~~~~~\cdots\notag\\ y_i = \delta(m_i\ge 1&),~m_{i} \sim \emph{\mbox{Pois}}(\theta^{(1)}_{i}),~\theta^{(1)}_{i}\sim\emph{\mbox{Gamma}}\left(\theta^{(2)}_{i},e^{\boldsymbol{x}_i'\betav^{(2)}}\right). \label{eq:BerPo_recursive_softplus_reg_model} \end{align} \end{definition} \begin{definition}[Sum-stack-softplus (SS-softplus) regression]\label{thm:SS-softplus} Given a draw from a gamma process $G\sim\Gamma\emph{\mbox{P}}(G_0,1/c)$, expressed as $G=\sum_{k=1}^\infty r_k \delta_{\betav^{(2:{T}+1)}_k} $, where $\betav^{(2:{T}+1)}_k$ is an atom and $r_k$ is its weight, with each $\betav_k^{(t)}\in\mathbb{R}^{V+1}$, SS-softplus regression with ${T}\in\{1,2,\ldots\}$ layers parameterizes $\lambda(\boldsymbol{x}_i)$ in \eqref{eq:BerPoINreg} using a sum-stack-softplus function as \vspace{0mm}\begin{eqnarray} &\displaystyle\lambda(\boldsymbol{x}_i)= \sum_{k=1}^\infty r_k\, \varsigma\big(\boldsymbol{x}_i'\betav_k^{(2)},\ldots,\boldsymbol{x}_i'\betav_k^{(T+1)}\big) \notag\\ &\displaystyle= \sum_{k=1}^\infty r_k \ln\left(1+e^{\boldsymbol{x}_i'\betav^{({T}+1)}_k}\ln\bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}_k}\ln\Big[1+\ldots \ln\big(1+e^{\boldsymbol{x}_i'\betav^{(2)}_k}\big)\Big]\bigg\}\right).
\label{eq:SRS_regression} \vspace{0mm}\end{eqnarray} SS-softplus regression is equivalent to the regression model \vspace{0mm}\begin{eqnarray}\small y_i&\sim&\emph{\mbox{Bernoulli}}\Big(1-e^{-\sum_{k=1}^\infty r_k\,\varsigma\left(\boldsymbol{x}_i'\betav_k^{(2:T+1)}\right)}\Big)\notag\\ &=&\emph{\mbox{Bernoulli}}\left[ 1 -\prod_{k=1}^\infty \left(\!1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({T}\!+\!1)}}\ln\!\bigg\{1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({T})}}\ln\Big[1\!+\!\ldots \ln\big(1\!+\!e^{\boldsymbol{x}_i'\betav_k^{(2)}}\big)\Big]\bigg\}\right)^{-r_k} ~\!\right ], \notag \vspace{0mm}\end{eqnarray} which, as proved in Appendix \ref{sec:proof}, can be constructed from the hierarchical model that convolves countably infinite stacked gamma distributions that have covariate-dependent scale parameters as \small\begin{align} &~~~~~~~~\theta^{({T})}_{ik}\sim\emph{\mbox{Gamma}}\left(r_k,e^{\boldsymbol{x}_i'\betav^{({T}+1)}_k}\right) ,\notag\\ &~~~~~~~~~~~~~~~~~~~~~~~~\cdots\notag\\ &~~~~~~~~\theta^{(t)}_{ik}\sim\emph{\mbox{Gamma}}\left(\theta^{(t+1)}_{ik},e^{\boldsymbol{x}_i'\betav^{(t+1)}_k}\right), \notag\\ &~~~~~~~~~~~~~~~~~~~~~~~~\cdots\notag\\ y_i = \delta(m_i\ge 1),&~m_i =\sum_{k=1}^\infty m^{(1)}_{ik},~m^{(1)}_{ik} \sim \emph{\mbox{Pois}}(\theta^{(1)}_{ik}),~\theta^{(1)}_{ik}\sim\emph{\mbox{Gamma}}\left(\theta^{(2)}_{ik},e^{\boldsymbol{x}_i'\betav^{(2)}_k}\right)\,\,. \label{eq:DICLR_model} \end{align}\normalsize \end{definition} Below we discuss these four different softplus regression models in detail and show that both sum- and stack-softplus regressions use the interactions of multiple regression coefficient vectors through the softplus functions to define a confined space, related to a convex polytope \citep{polytope} defined by the intersection of multiple half-spaces, to separate one class from the other in the covariate space. They differ from each other in that sum-softplus regression infers a convex-polytope-bounded confined space to enclose negative examples (\emph{i.e.}, data samples with $y_i=0$), whereas stack-softplus regression infers a convex-polytope-like confined space to enclose positive examples (\emph{i.e.}, data samples with $y_i=1$). The opposite behaviors of sum- and stack-softplus regressions motivate us to unite them as SS-softplus regression, which can place countably infinite convex-polytope-like confined spaces, the inside and outside of each of which favor positive and negative examples, respectively, at various regions of the covariate space, and use the union of these confined spaces to construct a flexible nonlinear classification decision boundary. Note that softplus regressions all operate on the original covariate space. It is possible to apply them to regress binary response variables on the covariates that have already been nonlinearly transformed with the kernel trick or a deep neural network, which may combine the advantages of these distinct methods to achieve an overall improved classification performance. We leave the integration of softplus regressions with the kernel trick or deep neural networks for future study. \subsection{Softplus and logistic regressions} It is straightforward to show that softplus regression with $r=1$ is equivalent to logistic regression $ y_i\sim{\mbox{Bernoulli}}[1/(1+e^{-\boldsymbol{x}_i'\betav})] $, which uses a single hyperplane dividing the covariate space into two halves to separate one class from the other. A similar connection has also been illustrated in \citet{Dunson05bayesianlatent}.
Clearly, softplus regression arising from \eqref{eq:Softplus_model} generalizes logistic regression in allowing $r\neq 1$. Let $p_0\in(0,1)$ denote the probability threshold to make a binary decision. One may consider that softplus regression defines a hyperplane to partition the $V$-dimensional covariate space into two halves: one half is defined with $\boldsymbol{x}_i'\betav > \ln\big[(1-p_0)^{-\frac{1}{r}}-1\big]$, assigned with label $y_i=1$ since $P(y_i=1\,|\,\boldsymbol{x}_i,\betav)>p_0$ under this condition, and the other half is defined with $\boldsymbol{x}_i'\betav \le \ln\big[(1-p_0)^{-\frac{1}{r}}-1\big]$, assigned with label $y_i=0$ since $P(y_i=1\,|\,\boldsymbol{x}_i,\betav)\le p_0$ under this condition. Instead of using a single hyperplane, the three generalizations in Definitions 2--4 all partition the covariate space using a confined space that is related to a single convex polytope or the union of multiple convex polytopes, as described below. \vspace{-2mm} \subsection{Sum-softplus regression and convolved NB regressions} Note that since $m_{i} \sim{\mbox{Pois}}(\theta_i), ~\theta_i = \sum_{k=1}^\infty \theta_{ik}$ in \eqref{eq:CNB} can be equivalently written as $m_i = \sum_{k=1}^\infty m_{ik},~m_{ik}\sim{\mbox{Pois}}(\theta_{ik})$, sum-softplus regression can also be constructed with \begin{align} &y_{i} = \delta( m_{i} \ge 1),~~m_{i} = \sum_{k=1}^{\infty} m_{ik},~m_{ik}\sim{\mbox{NB}}\left[r_{k},{1}/{(1+e^{-\boldsymbol{x}_i'\betav_{k}})}\right], \label{eq:CNB1} \end{align} where $m\sim\mbox{NB}(r,p)$ represents a negative binomial (NB) distribution \citep{Yule,Fisher1943} with shape parameter $r$ and probability parameter $p$, and $m\sim\mbox{NB}[r,1/(1+e^{-\boldsymbol{x}'\betav})]$ can be considered as NB regression \citep{LawlessNB87,long:1997,Cameron1998,WinkelmannCount} that parameterizes the logit of $p$ with $\boldsymbol{x}'\betav$. To ensure that the infinite model is well defined, we provide the following proposition and present the proof in Appendix \ref{sec:proof}. \begin{prop}\label{lem:finite} The infinite product $e^{-\sum_{k=1}^{\infty}r_k \,\varsigma(\boldsymbol{x}_i'\betav_{k})} = \prod_{k=1}^\infty\left( {1+e^{\boldsymbol{x}_i'\betav_k}}\right)^{-r_k}$ in sum-softplus regression is smaller than one and has a finite expectation that is greater than zero. \end{prop} As the probability distribution of the sum of independent random variables is the same as the convolution of these random variables' probability distributions \citep[$e.g.$,][]{fristedt1997}, the probability distribution of the BerPo rate $\theta_i$ is the convolution of countably infinite gamma distributions, each of which parameterizes the logarithm of its scale using the inner product of the same covariate vector and a regression coefficient vector specific for each~$k$. As in \eqref{eq:CNB1}, since $m_i$ is the summation of countably infinite latent counts $m_{ik}$, each of which is an NB regression response variable, we essentially regress the latent count $m_i$ on $\boldsymbol{x}_i$ using a convolution of countably infinite NB regression models. If $\betav_{k}$ are drawn from a continuous distribution, then $\betav_{k}\neq\betav_{{\tilde{k}}}$ a.s. for all $k\neq{\tilde{k}}$, and hence given $\boldsymbol{x}_i$ and $\{\betav_k\}_k$, the BerPo rate $\theta_i$ would not follow the gamma distribution and $m_i$ would not follow the NB distribution.
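As a numerical illustration of this convolution view (a sketch of ours with a finite truncation $K$ and arbitrary illustrative parameters), one can simulate the gamma-Poisson construction in \eqref{eq:CNB} and check the resulting Bernoulli probability against the closed form in Definition \ref{thm:CLR2CNB}:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

K, V = 3, 2
r = np.array([0.8, 1.2, 0.4])        # truncated expert weights
beta = rng.normal(size=(K, V + 1))   # a coefficient vector per expert
x = np.array([1.0, 0.5, -0.3])       # covariates, intercept term first

logits = beta @ x                    # x'beta_k for each expert

# Hierarchical model: theta_ik ~ Gamma(r_k, e^{x'beta_k}),
# m_ik ~ Pois(theta_ik), y = 1(sum_k m_ik >= 1).
n = 200_000
theta = rng.gamma(shape=r, scale=np.exp(logits), size=(n, K))
m = rng.poisson(theta)
y = m.sum(axis=1) >= 1

# Closed form: P(y=1|x) = 1 - prod_k (1 + e^{x'beta_k})^{-r_k};
# the two estimates should agree up to Monte Carlo error.
p_exact = 1.0 - np.prod((1.0 + np.exp(logits)) ** (-r))
print(y.mean(), p_exact)
\end{verbatim}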
Note that if we modify the proposed sum-softplus regression model in \eqref{eq:CNB} as \begin{align} &y_{ij} = \delta( m_{ij} \ge 1),~m_{ij} \sim{\mbox{Pois}}(\theta_{ij}),~\theta_{ij} = \sum_{k=1}^K \theta_{ijk}, ~\theta_{ijk}\sim{{\mbox{Gamma}}}\big(\phi_{k}^{-1},\phi_k \lambda_{jk}e^{\boldsymbol{x}_{ij}'\betav}\big), \label{eq:CNB_Dunson} \end{align} then we have $P(y_{ij}=1\,|\, \boldsymbol{x}_{ij}) = \left[ 1-\prod_{k=1}^{K}\left({1+\phi_{k}\lambda_{jk} e^{\boldsymbol{x}_{ij}'\betav}}\right)^{-\phi_{k}^{-1}} \right]$, which becomes the same as Eq. 2.7 of \citet{Dunson05bayesianlatent} that is designed to model multivariate binary response variables. Though related, that construction is clearly different from the proposed sum-softplus regression in that it uses only a single regression coefficient vector $\betav$ and does not support $K\rightarrow \infty$. It is of interest to extend the models in \citet{Dunson05bayesianlatent} with the sum-softplus construction discussed above and the stack- and SS-softplus constructions to be discussed below. \subsubsection{Convex-polytope-bounded confined space that favors negative examples} For sum-softplus regression arising from \eqref{eq:CNB}, the binary classification decision boundary is no longer defined by a single hyperplane. Let us make the analogy that each $\betav_k$ is an expert of a committee that collectively makes binary decisions. For expert $k$, the magnitude of $r_k$ indicates how strongly its opinion is weighted by the committee, $m_{ik}=0$ represents that it votes ``No,'' and $m_{ik}\ge 1$ represents that it votes ``Yes.'' Since the response variable $y_i =\delta\left(\sum_{k=1}^\infty m_{ik}\ge 1\right)$, the committee would vote ``No'' if and only if all its experts vote ``No'' ($i.e.$, all the counts $m_{ik}$ are zeros); in other words, the committee would vote ``Yes'' even if only a single expert votes ``Yes.'' Let us now examine the confined covariate space for sum-softplus regression that satisfies the inequality $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}\le p_0$, where a data point is labeled as one with a probability no greater than $p_0$. Although it is not immediately clear what kind of geometric constraints are imposed on the covariate space by this inequality, the following theorem shows that it defines a confined space, which is bounded by a convex polytope defined by the intersection of countably infinite half-spaces. \begin{thm}\label{thm:sum_polytope} For sum-softplus regression, the confined space specified by the inequality $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}\le p_0$, which can be expressed as \vspace{0mm}\begin{equation} \lambda(\boldsymbol{x}_i) = \sum_{k=1}^\infty r_k \ln(1+e^{\boldsymbol{x}_i' \betav_k})\le -\ln(1-p_0), \label{eq:sum_ineuqality} \vspace{0mm}\end{equation} is bounded by a convex polytope defined by the set of solutions to countably infinite inequalities \vspace{0mm}\begin{equation}\label{eq:convex_polytope} \boldsymbol{x}_i' \betav_k \le \ln\big[(1-p_0)^{-\frac{1}{r_k}}-1\big], ~~ k\in\{1,2,\ldots\}.
\vspace{0mm}\end{equation} \end{thm} \begin{prop}\label{prop:sum_polytope} For any data point $\boldsymbol{x}_i$ that resides outside the convex polytope defined by~\eqref{eq:convex_polytope}, which means $\boldsymbol{x}_i$ violates at least one of the inequalities in \eqref{eq:convex_polytope} a.s., it will be labeled under sum-softplus regression with $y_i=1$ with a probability greater than $p_0$, and $y_i=0$ with a probability no greater than $1-p_0$. \end{prop} The convex polytope defined in \eqref{eq:convex_polytope} is enclosed by the intersection of countably infinite $V$-dimensional half-spaces. If we set $p_0=0.5$ as the probability threshold to make binary decisions, then the convex polytope assigns a label of $y_i=0$ to an $\boldsymbol{x}_i$ inside the convex polytope ($i.e.$, an $\boldsymbol{x}_i$ that satisfies all the inequalities in Eq. \ref{eq:convex_polytope}) with a relatively high probability, and assigns a label of $y_i=1$ to an $\boldsymbol{x}_i$ outside the convex polytope ($i.e.$, an $\boldsymbol{x}_i$ that violates at least one of the inequalities in Eq. \ref{eq:convex_polytope}) with a probability of at least $50\%$. Note that as $r_k\rightarrow 0$, $r_k\ln(1+e^{\boldsymbol{x}_i' \betav_{k}}) \rightarrow 0 $ and $\ln\big[(1-p_0)^{-\frac{1}{r_k}}-1\big]\rightarrow \infty$. Thus expert $k$ with a tiny $r_k$ essentially has a negligible impact on both the decision of the committee and the boundary of the convex polytope. Choosing the gamma process as the nonparametric Bayesian prior sidesteps the need to tune the number of experts in the committee, shrinking the weights of all unnecessary experts and hence allowing a finite number of experts with non-negligible weights to be automatically inferred from the data. \vspace{-1mm} \subsubsection{Illustration for sum-softplus regression} \vspace{-1mm} A clear advantage of sum-softplus regression over both softplus and logistic regressions is that it could use multiple hyperplanes to construct a nonlinear decision boundary and, similar to the convex polytope machine of \citet{kantchelian2014large}, separate two different classes by a large margin. To illustrate the imposed geometric constraints, we first consider a synthetic two-dimensional dataset with two classes, as shown in Fig. \ref{fig:circle_1} (a), where most of the data points of Class $B$ reside within a unit circle and those of Class $A$ reside within a ring outside the unit circle. \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_20_1.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:circle_1} Visualization of sum-softplus regression with $K_{\max}=20$ experts on a binary classification problem under two opposite labeling settings. For each labeling setting, 2000 Gibbs sampling iterations are used and the MCMC sample that provides the maximum likelihood on fitting the training data labels is used to display the results. (a) A two-dimensional dataset that consists of 150 data points from Class $A$, whose radii are drawn from $\mathcal{N}(2,0.5^2)$ and angles are distributed uniformly at random between 0 and $360$ degrees, and another 150 data points from Class $B$, whose $x$-axis and $y$-axis values are both drawn from $\mathcal{N}(0,0.5^2)$.
With data points in Classes $A$ and $B$ labeled as ``1'' and ``0,'' respectively, and with $p_0=0.5$, (b) shows the inferred weights $r_k$ of the experts, ordered by their values, (c) shows a contour map, the value of each point of which represents how many inequalities specified in \eqref{eq:convex_polytope} are violated, and whose region with zero values corresponds to the convex polytope enclosed by the intersection of the hyperplanes defined in \eqref{eq:convex_polytope}, and (d) shows the contour map of the predicted class probabilities. (f)-(h) are analogous plots to (b)-(d), with the data points in Classes $A$ and $B$ relabeled as ``0'' and ``1,'' respectively. (e) The average per data point log-likelihood as a function of MCMC iteration, for both labeling settings. } \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_ave_20_1.png} \vspace{-.2cm} \end{center} \vspace{-5.9mm} \caption{\small\label{fig:circle_ave_1} Visualization of the posteriors of sum-softplus regression based on 20 MCMC samples, collected once per every 50 iterations during the last 1000 MCMC iterations, with the same experimental setting used for Fig. \ref{fig:circle_1}. With $p_0=0.5$, (a) and (b) show the contour maps of the posterior means and standard deviations, respectively, of the number of inequalities specified in \eqref{eq:convex_polytope} that are violated, and (c) and (d) show the contour maps of the posterior means and standard deviations, respectively, of predicted class probabilities. (e)-(h) are analogous plots to (a)-(d), with the data points in Classes $A$ and $B$ relabeled as ``0'' and ``1,'' respectively. } \end{figure} We first label the data points of Class $A$ as ``1'' and those of Class $B$ as ``0.'' Shown in Fig. \ref{fig:circle_1} (b) are the inferred weights $r_k$ of the experts, using the MCMC sample that has the highest log-likelihood in fitting the training data labels. It is evident from Figs. \ref{fig:circle_1} (b) and (c) that sum-softplus regression infers four experts (hyperplanes) with significant weights. The convex polytope in Fig. \ref{fig:circle_1} (c) that encloses the space marked as zero is intersected by these four hyperplanes, each of which is defined as in \eqref{eq:convex_polytope} with $p_0=50\%$. Thus outside the convex polytope are data points that would be labeled as ``1'' with at least 50\% probabilities and inside it are data points that would be labeled as ``0'' with relatively high probabilities. We further show in Fig. \ref{fig:circle_1} (d) the contour map of the inferred probabilities for $P(y_i=1\,|\, \boldsymbol{x}_i) = 1-e^{-\lambda(\boldsymbol{x}_i)}$, where $\lambda(\boldsymbol{x}_i)$ are calculated with~\eqref{eq:sum_softplus_reg}. Note that due to the model construction, a single expert's influence on the decision boundary can be conveniently measured, and the exact decision boundary is bounded by a convex polytope. Thus it is not surprising that the convex polytope in Fig. \ref{fig:circle_1} (c), which encloses the space marked as zero, aligns well with the contour line of $P(y_i=1\,|\, \boldsymbol{x}_i)=0.5$ shown in Fig. \ref{fig:circle_1} (d). Despite being able to construct a nonlinear decision boundary bounded by a convex polytope, sum-softplus regression has a clear restriction in that if the data labels are flipped, its performance may substantially deteriorate, becoming no better than that of logistic regression. For example, for the same data shown in Fig.
\ref{fig:circle_1} (a), if we choose the opposite labeling setting where the data points of Class $A$ are labeled as ``0'' and those of Class $B$ are labeled as ``1,'' then sum-softplus regression infers a single expert (hyperplane) with non-negligible weight, as shown in Figs. \ref{fig:circle_1} (f)-(g), and fails to separate the data points of two different classes, as shown in Figs. \ref{fig:circle_1} (g)-(h). The data log-likelihood plots in Fig. \ref{fig:circle_1} (e) also suggest that sum-softplus regression could perform substantially better if the training data are labeled in favor of its geometric constraints on the decision boundary. An advantage of a Bayesian hierarchical model is that with collected MCMC samples, one may estimate not only the posterior means but also uncertainties. The standard deviations shown in Figs. \ref{fig:circle_ave_1} (b) and (d) clearly indicate the uncertainties of sum-softplus regression on its decision boundaries and predictive probabilities in the covariate space, which may be used to help decide how to sequentially query the labels of unlabeled data in an active learning setting \citep{cohn1996active,settles2010active}. The sensitivity of sum-softplus regression to how the data are labeled could be mitigated but not completely solved by combining two sum-softplus regression models trained under the two opposite labeling settings. In addition, sum-softplus regression may not perform well no matter how the data are labeled if neither of the two classes could be enclosed by a convex polytope. To fully resolve these issues, we first introduce stack-softplus regression, which defines a convex-polytope-like confined space to enclose positive examples. We then show how to combine the two distinct, but complementary, softplus regression models to construct SS-softplus regression that provides more flexible nonlinear decision boundaries. \subsection{Stack-softplus regression and stacked gamma distributions} The model in \eqref{eq:BerPo_recursive_softplus_reg_model} combines the BerPo link with a gamma belief network that stacks differently parameterized gamma distributions. Note that here ``stacking'' is defined as an operation that mixes the shape parameter of a gamma distribution at layer $t$ with a gamma distribution at layer $t+1$, the next one pushed into the stack, and pops out the covariate-dependent gamma scale parameters from layers $T+1$ to 2 in the stack, following the last-in-first-out rule, to parameterize the BerPo rate of the class label $y_i$ shown in \eqref{eq:recurssive_softplus_reg}. \subsubsection{Convex-polytope-like confined space that favors positive examples} Let us make the analogy that each $\betav^{(t)}$ is one of the $T$ criteria that an expert examines before making a binary decision. From \eqref{eq:recurssive_softplus_reg} it is clear that if even a single criterion $t\in\{2,\ldots,{T}+1\}$ of the expert is strongly violated, which means that $\boldsymbol{x}_i'\betav^{(t)}$ is much smaller than zero, then the expert would vote ``No'' regardless of the values of $\boldsymbol{x}_i'\betav^{({\tilde{t}})}$ for all ${\tilde{t}}\neq t$. Thus the response variable could be voted ``Yes'' by the expert only if none of the $T$ expert criteria are strongly violated.
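Before turning to the geometry, the following generative sketch (Python/NumPy, ours, with illustrative parameters) simulates the stacked-gamma construction in \eqref{eq:BerPo_recursive_softplus_reg_model}, making explicit how the covariate-dependent scales are popped out of the stack from layer $T+1$ down to layer 2, and checks the simulated label frequency against the closed-form probability:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def sample_stack_softplus_label(x, betas, r, rng):
    # betas = [beta^(2), ..., beta^(T+1)]; the reversed loop pops the
    # scales e^{x'beta^(T+1)}, ..., e^{x'beta^(2)} last-in-first-out.
    theta = r
    for beta in reversed(betas):
        if theta <= 0.0:  # guard against a gamma draw underflowing to 0
            return 0
        theta = rng.gamma(shape=theta, scale=np.exp(x @ beta))
    return int(rng.poisson(theta) >= 1)  # m ~ Pois(theta^(1))

T, V = 3, 2
r = 1.0
betas = [rng.normal(size=V + 1) for _ in range(T)]
x = np.array([1.0, 0.2, -0.5])

y = [sample_stack_softplus_label(x, betas, r, rng) for _ in range(50_000)]

# Closed form: P(y=1|x) = 1 - exp(-r * stack-softplus value), with the
# stack-softplus value given by the recursion ln(1 + e^{x'beta} * prev).
s = np.log1p(np.exp(x @ betas[0]))
for beta in betas[1:]:
    s = np.log1p(np.exp(x @ beta) * s)
print(np.mean(y), 1.0 - np.exp(-r * s))
\end{verbatim}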
For stack-softplus regression, let us specify a confined space using the inequality $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}>p_0$, which can be expressed as \vspace{0mm}\begin{equation}\small \label{eq:positive_convex_polytope} \boldsymbol{x}_i'\betav^{({T}+1)} + \ln \ln\!\bigg\{1+e^{\boldsymbol{x}_i'\betav^{({T})}}\ln\Big[1+\ldots \ln\big(1+e^{\boldsymbol{x}_i'\betav^{(2)}}\big)\Big]\bigg\} > \ln\big[(1-p_0)^{-\frac{1}{r}}-1\big], \vspace{0mm}\end{equation} and hence any data point $\boldsymbol{x}_i$ outside the confined space ($i.e.$, violating the inequality in Eq.~\ref{eq:positive_convex_polytope} a.s.) will be labeled as $y_i=0$ with a probability no less than $1-p_0$. Considering the covariate space \vspace{0mm}\begin{equation} \mathcal{T}^{-t}:=\left\{\boldsymbol{x}_i: \boldsymbol{x}'_i\betav^{({\tilde{t}})} \ge 0 \text{ for } {\tilde{t}}\neq t\right\}, \vspace{0mm}\end{equation} where all the criteria except criterion $t$ of the expert tend to be satisfied, the decision boundary of stack-softplus regression in $\mathcal{T}^{-t}$ would be clearly influenced by the satisfaction level of criterion $t$, whose hyperplane partitions $\mathcal{T}^{-t}$ into two parts as \vspace{0mm}\begin{equation}\small y_i = \begin{cases} \vspace{.15cm} \displaystyle 1 , & {\mbox{if }} \displaystyle 1- \left(1+\ln\bigg\{1+\ln\Big[1+\ldots +\ln\big(1+e^{\boldsymbol{x}_i'\betav^{(t)}} g_{t-1}\big)\Big]\bigg\}\right)^{-r} \!\!> p_0,\\ \displaystyle 0 , & \mbox{otherwise}, \end{cases} \vspace{0mm}\end{equation} for all $\boldsymbol{x}_i\in\mathcal{T}^{-t}$. Let us define $g_t$ with $g_1 = 1$ and the recursion $g_t=\ln(1+g_{t-1})$ for $t=2,\ldots,T$, and define $h_t$ with $h_{T+1} = (1-p_0)^{-\frac{1}{r}}-1$ and the recursion $h_t=e^{h_{t+1}}-1$ for $t=T,T-1,\ldots,2$. Using the definitions of $g_t$ and $h_t$ and combining all the $T$ expert criteria, the confined space of stack-softplus regression specified in \eqref{eq:positive_convex_polytope} can be roughly related to a convex polytope, which is specified by the solutions to a set of $T$ inequalities as \begin{align}\label{eq:convex_polytope_recursive} &\boldsymbol{x}_i' \betav^{(t)} > \ln(h_{t})-\ln(g_{t-1}), ~t\in\{2,\ldots,T+1\}. \end{align} The convex polytope is enclosed by the intersection of $T$ $V$-dimensional hyperplanes, and since none of the $T$ criteria would be strongly violated inside the convex polytope, the label $y_i=1$ ($y_i=0$) would be assigned to an $\boldsymbol{x}_i$ inside (outside) the convex polytope with a relatively high (low) probability. Unlike the confined space of sum-softplus regression defined in \eqref{eq:sum_ineuqality} that is bounded by a convex polytope defined in \eqref{eq:convex_polytope}, the convex polytope defined in \eqref{eq:convex_polytope_recursive} only roughly corresponds to the confined space of stack-softplus regression, as defined in \eqref{eq:positive_convex_polytope}. Nevertheless, the confined space defined in \eqref{eq:positive_convex_polytope} is referred to as a convex-polytope-like confined space, due to both its connection to the convex polytope in \eqref{eq:convex_polytope_recursive} and the fact that \eqref{eq:positive_convex_polytope} is likely to be violated if at least one of the $T$ criteria is strongly dissatisfied ($i.e.$, $e^{\boldsymbol{x}_i'\betav^{(t)}}\rightarrow 0$ for some $t$).
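The recursions for $g_t$ and $h_t$ are easy to mis-index, so the following short sketch (ours, with illustrative values of $T$, $r$, and $p_0$) computes the $T$ hyperplane offsets $\ln(h_t)-\ln(g_{t-1})$ appearing in \eqref{eq:convex_polytope_recursive}:

\begin{verbatim}
import numpy as np

def polytope_offsets(T, r, p0):
    # Offsets ln(h_t) - ln(g_{t-1}) for t = 2, ..., T+1.
    g = {1: 1.0}                 # g_1 = 1, g_t = ln(1 + g_{t-1})
    for t in range(2, T + 1):
        g[t] = np.log1p(g[t - 1])
    h = {T + 1: (1.0 - p0) ** (-1.0 / r) - 1.0}
    for t in range(T, 1, -1):    # h_t = e^{h_{t+1}} - 1
        h[t] = np.expm1(h[t + 1])
    return {t: np.log(h[t]) - np.log(g[t - 1]) for t in range(2, T + 2)}

# Note that h_t grows doubly exponentially as t decreases, so keep T
# modest in this direct evaluation (or work in log space for larger T).
print(polytope_offsets(T=3, r=1.0, p0=0.5))
\end{verbatim}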
\subsubsection{Illustration for stack-softplus regression} \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_1_20.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:circle_2} Analogous figure to Fig. \ref{fig:circle_1} for stack-softplus regression with $T=20$ expert criteria, with the following differences: (b) shows the average latent count per positive sample, $\sum_i m_i^{(t)}\big /\sum_i \delta(y_i=1)$, as a function of layer $t$, and (c) shows a contour map, the value of each point of which represents how many inequalities specified in \eqref{eq:convex_polytope_recursive} are satisfied, and whose region with the values of $T=20$ corresponds to the convex polytope enclosed by the intersections of the hyperplanes defined in \eqref{eq:convex_polytope_recursive}. } \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_ave_1_20.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:circle_ave_2} Analogous figure to Fig. \ref{fig:circle_ave_1} for stack-softplus regression, with the following differences: (a) and (b) show the contour maps of the posterior means and standard deviations, respectively, of the number of inequalities specified in \eqref{eq:convex_polytope_recursive} that are satisfied. (e)-(f) are analogous plots to (a)-(b) under the opposite labeling setting. } \end{figure} Let us examine how stack-softplus regression performs on the same data used in Fig. \ref{fig:circle_1}. When Class $B$ is labeled as ``1,'' as shown in Fig. \ref{fig:circle_2} (g), stack-softplus regression infers a convex polytope that encloses the space marked as $T=20$ using the intersection of all $T=20$ hyperplanes, each of which is defined as in \eqref{eq:convex_polytope_recursive}; and as shown in Fig. \ref{fig:circle_2} (h), it works well by using a convex-polytope-like confined space to enclose positive examples. However, as shown in Figs. \ref{fig:circle_2} (c)-(e), its performance deteriorates when the opposite labeling setting is used. Note that due to the model construction that introduces complex interactions between the $T$ hyperplanes, \eqref{eq:convex_polytope_recursive} can only roughly describe how a single hyperplane could influence the decision boundary determined by all hyperplanes. Thus it is not surprising that neither the convex polytope in Fig. \ref{fig:circle_2} (c), which encloses the space marked with the largest count there, nor the convex polytope in Fig. \ref{fig:circle_2} (g), which encloses the space marked with $T$, aligns well with the contour lines of $P(y_i=1\,|\, \boldsymbol{x}_i)=0.5$ in Figs. \ref{fig:circle_2} (d) and (h), respectively. While the decay of the latent count $m_{\boldsymbol{\cdot}}^{(t)}$ as $t$ increases does not indicate a clear cutoff point for the depth $T$, neither do we observe a clear sign of overfitting when $T$ is set as large as 100 in our experiments. Both Figs. \ref{fig:circle_2} (c) and (g) indicate that most of the hyperplanes are far from any data points and tend to vote ``Yes'' for all training data. The standard deviations shown in Figs. \ref{fig:circle_ave_2} (f) and (h) clearly indicate the uncertainties of stack-softplus regression on its decision boundaries and predictive probabilities in the covariate space.
Like sum-softplus regression, stack-softplus regression also generalizes softplus and logistic regressions in that it uses the boundary of a confined space rather than a single hyperplane to partition the covariate space into two parts. Unlike the convex-polytope-bounded confined space of sum-softplus regression that favors placing negative examples inside it, the convex-polytope-like confined space of stack-softplus regression favors placing positive examples inside it. While both sum- and stack-softplus regressions could be sensitive to how the data are labeled, their distinct behaviors under the same labeling setting motivate us to combine them together as SS-softplus regression, as described below. \subsection{Sum-stack-softplus (SS-softplus) regression} Note that if ${T}=1$, SS-softplus regression reduces to sum-softplus regression; if $K=1$, it reduces to stack-softplus regression; and if $K=T=1$, it reduces to softplus regression, which further reduces to logistic regression if the weight of the single expert is fixed at $r=1$. To ensure that the SS-softplus regression model is well defined in its infinite limit, we provide the following proposition and present the proof in Appendix \ref{sec:proof}. \begin{prop}\label{lem:finite1} The infinite product in sum-stack-softplus regression $$e^{-\sum_{k=1}^\infty r_k\,\varsigma\left(\boldsymbol{x}_i'\betav_k^{(2:T+1)}\right)} = \prod_{k=1}^\infty \left(\!1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({T}\!+\!1)}}\ln\!\bigg\{1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({T})}}\ln\Big[1\!+\!\ldots \ln\big(1\!+\!e^{\boldsymbol{x}_i'\betav_k^{(2)}}\big)\Big]\bigg\}\right)^{-r_k}$$ is smaller than one and has a finite expectation that is greater than zero. \end{prop} \subsubsection{Union of convex-polytope-like confined spaces} We may consider SS-softplus regression as a multi-hyperplane model that employs a committee, consisting of countably infinite experts, to make a decision, where each expert is equipped with $T$ criteria to be examined. The committee's distribution is obtained by convolving the distributions of countably infinite experts, each of which mixes $T$ stacked covariate-dependent gamma distributions. For each $\boldsymbol{x}_i$, the committee votes ``Yes'' as long as at least one expert votes ``Yes,'' and an expert could vote ``Yes'' if and only if none of its $T$ criteria are strongly violated. Thus the decision boundary of SS-softplus regression can be considered as a union of convex-polytope-like confined spaces that all favor placing positively labeled data inside them, as described below, with the proofs deferred to Appendix \ref{sec:proof}.
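To make the committee view concrete before stating the formal results, the following sketch (ours; finite truncation, $T\ge 2$, and illustrative parameters) checks for a given $\boldsymbol{x}_i$ whether at least one expert's convex-polytope-like inequality holds, in which case the committee votes ``Yes'':

\begin{verbatim}
import numpy as np

def expert_inequality_holds(x, betas_k, r_k, p0):
    # Expert k's inequality (assumes T >= 2): x'beta^(T+1) + ln ln{...}
    # exceeds ln((1 - p0)^(-1/r_k) - 1).
    s = np.log1p(np.exp(x @ betas_k[0]))   # ln(1 + e^{x'beta^(2)})
    for beta in betas_k[1:-1]:
        s = np.log1p(np.exp(x @ beta) * s)
    lhs = x @ betas_k[-1] + np.log(s)
    rhs = np.log((1.0 - p0) ** (-1.0 / r_k) - 1.0)
    return lhs > rhs

def committee_votes_yes(x, experts, p0):
    # SS-softplus committee: "Yes" if any expert's inequality holds,
    # i.e., x lies in the union of the confined spaces D_k.
    return any(expert_inequality_holds(x, betas_k, r_k, p0)
               for betas_k, r_k in experts)

rng = np.random.default_rng(4)
V, K, T = 2, 5, 3
experts = [([rng.normal(size=V + 1) for _ in range(T)], 0.5)
           for _ in range(K)]
print(committee_votes_yes(np.array([1.0, 0.3, -0.8]), experts, p0=0.5))
\end{verbatim}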
\begin{thm}\label{thm:union_polytope} For sum-stack-softplus regression, the confined space specified by the inequality $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}> p_0$, which can be expressed as \vspace{0mm}\begin{equation} \lambda(\boldsymbol{x}_i) = \sum_{k=1}^\infty r_k\, \varsigma\big(\boldsymbol{x}_i'\betav_k^{(2)},\ldots,\boldsymbol{x}_i'\betav_k^{(T+1)}\big) > -\ln(1-p_0), \label{eq:SS-softplus_ineuqality} \vspace{0mm}\end{equation} encompasses the union of convex-polytope-like confined spaces, expressed as $$\mathcal{D}_{\star} = \mathcal{D}_1\cup \mathcal{D}_2 \cup\ldots,$$ where the $k$th convex-polytope-like confined space $\mathcal{D}_k$ is specified by the inequality \vspace{0mm}\begin{equation} \small \label{eq:Union_convex_polytope} \boldsymbol{x}_i'\betav_k^{({T}+1)} + \ln \ln\!\bigg\{1+e^{\boldsymbol{x}_i'\betav_k^{({T})}}\ln\Big[1+\ldots \ln\big(1+e^{\boldsymbol{x}_i'\betav_k^{(2)}}\big)\Big]\bigg\} > \ln\big[(1-p_0)^{-\frac{1}{r_k}}-1\big]. \vspace{0mm}\end{equation} \end{thm} \begin{cor}\label{cor:union_polytope} For sum-stack-softplus regression, the confined space specified by the inequality $P(y_i=1\,|\, \boldsymbol{x}_i)=1-e^{-\lambda(\boldsymbol{x}_i)}\le p_0$ is bounded by $\bar{\mathcal{D}}_{\star} = \bar{\mathcal{D}}_1 \cap \bar{\mathcal{D}}_2\cap\ldots$. \end{cor} \begin{prop}\label{prop:union} For any data point $\boldsymbol{x}_i$ that resides inside the union of countably infinite convex-polytope-like confined spaces $\mathcal{D}_{\star} = \mathcal{D}_1\cup \mathcal{D}_2 \cup\ldots$, which means $\boldsymbol{x}_i$ satisfies at least one of the inequalities in \eqref{eq:Union_convex_polytope}, it will be labeled under sum-stack-softplus regression with $y_i=1$ with a probability greater than $p_0$, and $y_i=0$ with a probability no greater than $1-p_0$. \end{prop} \subsubsection{Illustration for sum-stack-softplus regression} Let us examine how SS-softplus regression performs on the same dataset used in Fig. \ref{fig:circle_1}. When Class $A$ is labeled as ``1,'' as shown in Figs. \ref{fig:circle_3} (b)-(c), SS-softplus regression infers about eight convex-polytope-like confined spaces, the intersection of six of which defines the boundary of the covariate space that separates the points that violate all inequalities in~\eqref{eq:Union_convex_polytope} from the ones that satisfy at least one inequality in \eqref{eq:Union_convex_polytope}. The union of these convex-polytope-like confined spaces defines a confined covariate space, which is included within the covariate space satisfying $P(y_i=1\,|\, \boldsymbol{x}_i)>0.5$, as shown in Fig. \ref{fig:circle_3} (d). When Class $B$ is labeled as ``1,'' as shown in Figs. \ref{fig:circle_3} (f)-(g), SS-softplus regression infers about six convex-polytope-like confined spaces, one of which defines the boundary of the covariate space that separates the points that violate all inequalities in \eqref{eq:Union_convex_polytope} from the others for the covariate space shown in Fig. \ref{fig:circle_3} (g). The union of two convex-polytope-like confined spaces defines a confined covariate space, which is included in the covariate space with $P(y_i=1\,|\, \boldsymbol{x}_i)>0.5$, as shown in Fig. \ref{fig:circle_3} (h). Figs. \ref{fig:circle_3} (f)-(g) also indicate that, except for two convex-polytope-like confined spaces, all the other convex-polytope-like confined spaces are far from any data points, and the corresponding experts tend to vote ``No'' for all training data. The standard deviations shown in Figs.
\ref{fig:circle_ave_3} (b), (d), (f), and (h) clearly indicate the uncertainties of SS-softplus regression on its classification decision boundaries and predictive probabilities. \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_20_20.png} \vspace{-.2cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:circle_3} Analogous figure to Figs. \ref{fig:circle_1} and \ref{fig:circle_2} for SS-softplus regression with $K_{\max}=20$ experts and $T=20$ criteria for each expert, with the following differences: (b) shows the average latent count per positive sample, $\sum_i m_{ik}^{(t)}\big /\sum_i \delta(y_i=1)$, as a function of both the expert index $k$ and layer index $t$, where the experts are ordered based on the values of $\sum_i m_{ik}^{(1)}$, (c) shows a contour map, the value of each point of which represents how many inequalities specified in \eqref{eq:Union_convex_polytope} are satisfied, and whose region with nonzero values corresponds to the union of convex-polytope-like confined spaces, each of which corresponds to an inequality defined in \eqref{eq:Union_convex_polytope}, and (f) and (g) are analogous plots to (b) and (c) under the opposite labeling setting where data in Class $B$ are labeled as ``1.'' } \begin{center} \includegraphics[width=0.75\columnwidth]{figure/circle_ave_20_20.png} \vspace{-.2cm} \end{center} \vspace{-5.9mm} \caption{\small\label{fig:circle_ave_3} Analogous figure to Fig. \ref{fig:circle_ave_1} for SS-softplus regression, with the following differences: (a) and (b) show the contour maps of the posterior means and standard deviations, respectively, of the number of inequalities specified in \eqref{eq:Union_convex_polytope} that are satisfied. (e)-(f) are analogous plots to (a)-(b) under the opposite labeling setting. } \end{figure} \vspace{-4mm} \section{Gibbs sampling via data augmentation and marginalization}\label{sec:inference} \vspace{-2mm} Since logistic, softplus, sum-softplus, and stack-softplus regressions can all be considered as special cases of SS-softplus regression, below we will focus on presenting the nonparametric Bayesian hierarchical model and Bayesian inference for SS-softplus regression. The gamma process $G\sim\Gamma{\mbox{P}}(G_0,1/c)$ has an inherent shrinkage mechanism, as in the prior the number of atoms with weights larger than $\epsilon>0$ follows $ \mbox{Pois}\left(\gamma_0\int_{\epsilon}^\infty r^{-1}e^{-cr}dr\right), $ whose mean is finite, so that the number of such atoms is finite a.s., where $\gamma_0=G_0(\Omega)$ is the mass parameter of the gamma process. In practice, atoms with tiny weights generally have a negligible impact on the final decision boundary of the model, hence one may truncate either the weight to be above $\epsilon$ or the number of atoms to be below $K$. One may also follow \citet{Wolp:Clyd:Tu:2011} to use a reversible jump MCMC \citep{green1995reversible} strategy to adaptively truncate the number of atoms for a gamma process, which often comes with a high computational cost. For the convenience of implementation, we truncate the number of atoms in the gamma process to be $K$ by choosing a finite discrete base measure as $G_0=\sum_{k=1}^K \frac{\gamma_0} K \delta_{\alpha_{k}}$, where $K$ will be set sufficiently large to achieve a good approximation to the truly countably infinite model.
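The shrinkage induced by this truncation can be checked numerically; below is a small sketch under the finite approximation $r_k\sim\mbox{Gamma}(\gamma_0/K,1/c_0)$ used in \eqref{eq:ICNBE_finite} below, with hypothetical parameter values and reusing the numpy import from the earlier sketches:
\begin{verbatim}
rng = np.random.default_rng(0)
K, gamma0, c0, eps = 1000, 2.0, 1.0, 1e-3
# truncated gamma-process weights: r_k ~ Gamma(gamma0/K, 1/c0)
r = rng.gamma(shape=gamma0 / K, scale=1.0 / c0, size=K)
print(np.sum(r > eps))    # only a few atoms carry non-negligible weight
\end{verbatim}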
We express the truncated SS-softplus regression model using \eqref{eq:DICLR_model} together with \begin{align} \small &r_k\sim\mbox{Gamma}(\gamma_0/K,1/c_0),~\gamma_0\sim\mbox{Gamma}(a_0,1/b_0),~c_0\sim\mbox{Gamma}(e_0,1/f_0),\notag\\ &~~~~~~~~~~~~~~~~~\betav^{(t)}_{k}\sim\prod_{v=0}^{V}\mathcal{N}(0,\alpha_{vtk}^{-1}),~\alpha_{vtk}\sim\mbox{Gamma}(a_t,1/b_t), \label{eq:ICNBE_finite} \end{align} where $t\in\{2,\ldots,T+1\}$. Related to \citet{RVM}, the normal-gamma construction in (\ref{eq:ICNBE_finite}) is used to promote sparsity in the regression coefficients $\betav^{(t)}_{k}$. We derive Gibbs sampling by exploiting local conjugacies under a series of data augmentation and marginalization techniques. We comment here that while the proposed Gibbs sampling algorithm is a batch learning algorithm that processes all training data samples in each iteration, the local conjugacies revealed under data augmentation and marginalization may be of significant value in developing efficient mini-batch based online learning algorithms, including those based on stochastic gradient MCMC \citep{welling2011bayesian,girolami2011riemann,patterson2013stochastic,ma2015complete} and stochastic variational inference \citep{hoffman2013stochastic}. We leave the maximum likelihood, maximum a posteriori, (stochastic) variational Bayes inference, and stochastic gradient MCMC for softplus regressions for future research. For a model with ${T }=1$, we exploit the data augmentation techniques developed for the BerPo link in \citet{EPM_AISTATS2015} to sample $m_{i}$, those developed for the Poisson and multinomial distributions \citep{Dunson05bayesianlatent,BNBP_PFA_AISTATS2012} to sample $m_{ik}$, those developed for the NB distribution in \citet{NBP2012} to sample $r_{k}$ and $\gamma_0$, and those developed for logistic regression in \citet{LogitPolyGamma} and further generalized to NB regression in \citet{LGNB_ICML2012} and \citet{polson2013bayesian} to sample $\betav_{k}$. We exploit local conjugacies to sample all the other model parameters. For a model with ${T}\ge 2$, we further generalize the inference technique developed for the gamma belief network in \citet{PGBN_NIPS2015} to sample the model parameters of deep hidden layers. Below we provide a theorem, related to Lemma~1 for the gamma belief network in \citet{PGBN_NIPS2015}, to show that each regression coefficient vector can be linked to latent counts under NB regression. Let $m\sim\mbox{SumLog}(n,p)$ represent the sum-logarithmic distribution described in \citet{NBP_CountMatrix}. Corollary \ref{cor:sumlog} further shows an alternative representation of \eqref{eq:DICLR_model}, the hierarchical model of SS-softplus regression, where all the covariate-dependent gamma distributions are marginalized out. \begin{thm} \label{cor:PGBN} Let us denote $p_{ik}^{(t)} = 1-e^{-{q}_{ik}^{(t)}}$, $i.e.$, ${q}_{ik}^{(t)} = -\ln(1-p_{ik}^{(t)})$, and $\theta_{ik}^{({T }+1)}=r_k$.
With ${q}_{ik}^{(1)}: = 1$ and \vspace{0mm}\begin{equation} {q}_{ik}^{(t+1)} := \ln\left( 1+ {q}_{ik}^{(t)}e^{\boldsymbol{x}_i'\betav_{k}^{(t+1)}} \right) \label{eq:lambda} \vspace{0mm}\end{equation} for $t=1,\ldots,T$, which means \vspace{0mm}\begin{eqnarray} \displaystyle &{q}_{ik}^{(t+1)} = \varsigma\big(\boldsymbol{x}_i'\betav_k^{(2)},\ldots,\boldsymbol{x}_i'\betav_k^{(t+1)}\big)\notag\\ = &\displaystyle \ln\left(\!1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({t}\!+\!1)}}\ln\!\bigg\{1\!+\!e^{\boldsymbol{x}_i'\betav_k^{({t})}}\ln\Big[1\!+\!\ldots \ln\big(1\!+\!e^{\boldsymbol{x}_i'\betav_k^{(2)}}\big)\Big]\bigg\}\right), \label{eq:q_ik} \vspace{0mm}\end{eqnarray} one may find latent counts $m_{ik}^{(t)}$ that are connected to the regression coefficient vectors as \vspace{0mm}\begin{equation} m_{ik}^{(t)}\sim\emph{\mbox{NB}}(\theta_{ik}^{(t+1)},~~ 1 - e^{-{q}_{ik}^{(t+1)}}) = \emph{\mbox{NB}}\left(\theta_{ik}^{(t+1)}, \frac{1}{1+e^{-\boldsymbol{x}_i'\betav_{k}^{(t+1)} - \ln ({q}_{ik}^{(t)})}}\right) . \label{eq:deepPFA_aug1} \vspace{0mm}\end{equation} \end{thm} \begin{cor}\label{cor:sumlog} With $q_{ik}^{(t)} = -\ln(1-p_{ik}^{(t)})$ defined as in \eqref{eq:q_ik} and hence $p_{ik}^{(t)}=1-e^{-q_{ik}^{(t)}}$, the hierarchical model of sum-stack-softplus regression can also be expressed as \small\begin{align} \small &m_{ik}^{(T+1)}\sim\emph{\mbox{Pois}}(r_k q_{ik}^{(T+1)}),~r_k\sim\emph{\mbox{Gamma}}(\gamma_0/K,1/c_0),\notag\\&m_{ik}^{(T)}\sim\emph{\mbox{SumLog}}(m_{ik}^{(T+1)}, p_{ik}^{(T+1)}),~\betav^{({T }+1)}_{k}\sim\prod_{v=0}^{V}\mathcal{N}(0,\alpha_{v({T }+1)k}^{-1}), \notag\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\tiny{\cdots}\notag\\ &m_{ik}^{(t)}\sim\emph{\mbox{SumLog}}(m_{ik}^{(t+1)}, p_{ik}^{(t+1)}),~\betav^{(t+1)}_{k}\sim\prod_{v=0}^{V}\mathcal{N}(0,\alpha_{v(t+1)k}^{-1}), \notag\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\tiny{\cdots}\notag\\ y_i = \delta(m_i&\ge 1),~m_i =\sum_{k=1}^K m^{(1)}_{ik},~m^{(1)}_{ik} \sim \emph{\mbox{SumLog}}(m_{ik}^{(2)}, p_{ik}^{(2)}),~\betav^{(2)}_{k}\sim\prod_{v=0}^{V}\mathcal{N}(0,\alpha_{v2k}^{-1}), \label{eq:ICNBE_finite_1} \end{align}\normalsize \end{cor} We outline Gibbs sampling in Algorithm \ref{alg:1} of Appendix \ref{app:setting}, where, to save computation, we consider setting $K_{\max}$ as the upper bound of the number of experts and deactivating experts assigned zero counts during MCMC iterations. We provide several additional model properties in Appendix \ref{app:T} to describe how the latent counts propagate across layers, which may be used to decide how to set the number of layers $T$. For simplicity, we consider the number of criteria for each expert as a parameter that determines the model capacity and we fix it as $T$ for all experts in this paper. We defer the details of all Gibbs sampling update equations to Appendix \ref{app:sampling}, in which we also describe in detail how to ensure numerical stability on a finite-precision machine. Note that, except for the sampling of $\{m^{(1)}_{ik}\}_k$, the sampling of all the other parameters of different experts is embarrassingly parallel. \vspace{-4mm} \section{Example Results} \label{sec:results} \vspace{-2mm} We compare softplus regressions with logistic regression, Gaussian radial basis function (RBF) kernel support vector machine (SVM) \citep{boser1992training,vapnik1998statistical,scholkopf1999advances}, relevance vector machine (RVM) \citep{RVM}, adaptive multi-hyperplane machine (AMM) \citep{wang2011trading}, and convex polytope machine (CPM) \citep{kantchelian2014large}.
Except for logistic regression, which is a linear classifier, both kernel SVM and RVM are widely used nonlinear classifiers relying on the kernel trick, and both AMM and CPM use the intersection of multiple hyperplanes to construct their decision boundaries. We discuss the connections between softplus regressions and previous work in Appendix \ref{sec:discussion}. Following \citet{RVM}, we consider the following datasets: banana, breast cancer, titanic, waveform, german, and image. For each of these six datasets, we consider the first ten predefined random training/testing partitions, and report both the mean and standard deviation of the testing classification errors. Since these datasets, originally provided by \citet{ratsch2001soft}, are no longer available on the authors' websites, we use the version provided by \citet{Diethe15}. We also consider two additional datasets---ijcnn1 and a9a---that come with a default training/testing partition, for which we report the results of logistic regression, SVM, and RVM based on a single trial, and report the results of all the other algorithms based on five independent trials with different random initializations. We summarize in Tab. \ref{tab:data} of Appendix \ref{app:setting} the basic information of these benchmark datasets. Since the decision boundaries of all softplus regressions depend on whether the two classes are labeled as ``1'' and ``0'' or labeled as ``0'' and ``1,'' we consider repeating the same softplus regression algorithm twice, using both $y_{i1}\sim\mbox{BerPo}[\lambda_1(\boldsymbol{x}_i)] \text{ and } y_{i2}\sim\mbox{BerPo}[\lambda_2(\boldsymbol{x}_i)], $ where $y_{i1}$ and $y_{i2}:=1-y_{i1}$ are the labels under two opposite labeling settings. We combine them into the following predictive distribution $ y_i \,|\, \boldsymbol{x}_i \sim \mbox{Bernoulli} \left[(1-e^{-\lambda_1(\boldsymbol{x}_i)}+e^{-\lambda_2(\boldsymbol{x}_i)})/2\right],\label{eq:fusing} $ which no longer depends on how the data are labeled. If we set $p_0=0.5$ as the probability threshold to make binary decisions, then $y_i$ would be labeled as ``1'' if $\lambda_1(\boldsymbol{x}_i)>\lambda_2(\boldsymbol{x}_i)$ and labeled as ``0'' otherwise. This simple strategy to train the same asymmetric model under two opposite labeling settings and combine their results together is related to the one used in \citet{kantchelian2014large}, which, however, lacks a probabilistic interpretation. We leave more sophisticated training and combination strategies to future study. For all datasets, we consider 1) softplus regression, which generalizes logistic regression with $r\neq 1$, 2) sum-softplus regression, which reduces to softplus regression if the number of experts is $K=1$, 3) stack-softplus regression, which reduces to softplus regression if the number of layers is $T=1$, and 4) sum-stack-softplus (SS-softplus) regression, which reduces to sum-softplus regression if $T=1$, to stack-softplus regression if $K=1$, and to softplus regression if $K=T=1$. For sum-softplus regression, we set the upper bound on the number of experts as $K_{\max}=20$; for stack-softplus regression, we consider $T\in\{1,2,3,5,10\}$; and for SS-softplus regression, we set $K_{\max}=20$ and consider $T\in\{1,2,3,5,10\}$. For all softplus regressions, we consider 5000 Gibbs sampling iterations and record the maximum likelihood sample found during the last 2500 iterations as the point estimates of $r_k$ and $\betav_{k}^{(t)}$, which are used for out-of-sample predictions.
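For concreteness, the label-combination strategy in \eqref{eq:fusing} amounts to the following one-liner, with \texttt{lam1} and \texttt{lam2} standing for $\lambda_1(\boldsymbol{x}_i)$ and $\lambda_2(\boldsymbol{x}_i)$ (again a sketch rather than the code used for the reported experiments):
\begin{verbatim}
def combined_prob(lam1, lam2):
    # P(y=1|x) = (1 - exp(-lam1) + exp(-lam2)) / 2;
    # with p0 = 0.5 this predicts "1" exactly when lam1 > lam2
    return 0.5 * (1.0 - np.exp(-lam1) + np.exp(-lam2))
\end{verbatim}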
We set $a_0=b_0=0.01$, $e_0=f_0=1$, and $a_t = b_t = 10^{-6}$ for $t\in\{2,\ldots,T+1\}$. As in Algorithm \ref{alg:1} shown in Appendix \ref{app:setting}, we deactivate inactive experts for iterations in $I_{Prune}=\{525,575,\ldots,4975\}$. For a fair comparison, to ensure that the same training/testing partitions are used for all algorithms for all datasets, we report the results by using either widely used open-source software packages or the code made publicly available by the original authors. We describe in Appendix \ref{app:setting} the settings of all the algorithms that are used for comparison. \subsection{Illustrations} \begin{figure}[!t] \begin{center} \includegraphics[width=0.835 \columnwidth]{figure/banana_20_20.png} \vspace{-.3cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:banana} Analogous figure to Fig. \ref{fig:circle_3} for the ``banana'' dataset.} \begin{center} \includegraphics[width=0.83\columnwidth]{figure/banana_ave_20_20.png} \vspace{-.3cm} \end{center} \vspace{-4.9mm} \caption{\small\label{fig:banana_ave} Analogous figure to Fig. \ref{fig:circle_ave_3} for the ``banana'' dataset. } \end{figure} With a synthetic dataset, Figs. \ref{fig:circle_1}-\ref{fig:circle_ave_3} illustrate the distinctions and connections among the sum-, stack-, and SS-softplus regressions. While both sum- and stack-softplus regressions could work well for the synthetic dataset if the two classes are labeled in their preferred ways, as shown in Figs. \ref{fig:circle_1} and \ref{fig:circle_2}, SS-softplus regression, as shown in Fig. \ref{fig:circle_3}, works well regardless of how the data are labeled. To further illustrate how the distinct, but complementary, behaviors of the sum- and stack-softplus regressions are combined together in SS-softplus regression, let us examine how SS-softplus regression performs on the banana dataset shown in Fig.~\ref{fig:banana}~(a). When Class $A$ is labeled as ``1,'' as shown in Figs. \ref{fig:banana} (b)-(c), SS-softplus regression infers about six convex-polytope-like confined spaces, the intersection of five of which defines the boundary of the covariate space that separates the points that satisfy at least one inequality in \eqref{eq:Union_convex_polytope} from the ones that violate all inequalities in \eqref{eq:Union_convex_polytope}. The union of these convex-polytope-like confined spaces defines a confined covariate space, which is included within the covariate space satisfying $P(y_i=1\,|\, \boldsymbol{x}_i)>0.5$, as shown in Fig. \ref{fig:banana} (d). When Class $B$ is labeled as ``1,'' as shown in Figs. \ref{fig:banana} (f)-(g), SS-softplus regression infers about eight convex-polytope-like confined spaces, three of which define the boundary of the covariate space that separates the points that satisfy at least one inequality in \eqref{eq:Union_convex_polytope} from the others for the covariate space shown in Fig. \ref{fig:banana} (g). The union of four convex-polytope-like confined spaces defines a confined covariate space, which is included in the covariate space with $P(y_i=1\,|\, \boldsymbol{x}_i)>0.5$, as shown in Fig. \ref{fig:banana} (h). Figs. \ref{fig:banana} (f)-(g) also indicate that, except for four convex-polytope-like confined spaces, all the other inferred convex-polytope-like confined spaces are far away from any data points and tend to vote ``No'' for all training data. The standard deviations shown in Figs.
\ref{fig:banana_ave} (b), (d), (f), and (h) indicate the uncertainties of SS-softplus regression on classification decision boundaries and predictive probabilities in the covariate space. In Figs. \ref{fig:xor}-\ref{fig:dbmoon_ave} of Appendix \ref{app:setting}, we further illustrate SS-softplus regression on an exclusive-or (XOR) dataset and a double-moon dataset used in \citet{haykin2009neural}. For the banana, XOR, and double-moon datasets, where the two classes cannot be well separated by a single convex-polytope-like confined space, neither sum- nor stack-softplus regressions work well regardless of how the data are labeled, whereas SS-softplus regression infers the union of multiple convex-polytope-like confined spaces that successfully separates the two classes. \subsection{Classification performance on benchmark data} \begin{table}[t!] \footnotesize \caption{\small Comparison of classification errors of logistic regression (LR), RBF kernel support vector machine (SVM), relevance vector machine (RVM), adaptive multi-hyperplane machine (AMM), convex polytope machine (CPM), softplus regression, sum-softplus (sum-$\varsigma$) regression with $K_{\max}=20$, stack-softplus (stack-$\varsigma$) regression with $T=5$, and SS-softplus (SS-$\varsigma$) regression with $K_{\max}=20$ and $T=5$. Displayed in each column of the last row is the average of the classification errors of an algorithm normalized by those of kernel SVM. }\label{tab:Error} \centering \begin{tabular}{c|ccccccccc} \toprule Dataset & LR & SVM & RVM & AMM & CPM & \!softplus &\! sum-$\varsigma$\! & \!\!stack-$\varsigma$ ($T$=5)\! & \!\!SS-$\varsigma$ ($T$=5) \\ \midrule \footnotesize banana & $47.76$ & $\mathbf{10.85}$ & $11.08$ & $18.76$ & $21.39$ & $47.87$ & $30.78$ & $33.21$ & $11.89$ \\ & $\pm 4.38$ & $\pm 0.57$ & $\pm 0.69$ & $\pm 4.09$ & $\pm 1.72$ & $\pm 4.36$ & $\pm 8.68$ & $\pm 5.76$ & $\pm 0.61$ \\ \midrule breast & $28.05$ & $28.44$ & $31.56$ & $31.82$ & $32.08$ & $28.70$ & $30.13$ & $\mathbf{27.92}$ & $28.83$ \\ cancer & $\pm 3.68$ & $\pm 4.52$ & $\pm 4.66$ & $\pm 4.47$ & $\pm 4.29$ & $\pm 4.76$ & $\pm 4.23$ & $\pm 3.31$ & $\pm 3.40$ \\ \midrule titanic & $22.67$ & $22.33$ & $23.20$ & $28.85$ & $22.37$ & $22.53$ & $22.48$ & $22.71$ & $\mathbf{22.29}$ \\ & $\pm 0.98$ & $\pm 0.63$ & $\pm 1.08$ & $\pm 8.56$ & $\pm 0.45$ & $\pm 0.43$ & $\pm 0.25$ & $\pm 0.70$ & $\pm 0.80$ \\ \midrule waveform & $13.33$ & $\mathbf{10.73}$ & $11.16$ & $11.81$ & $12.76$ & $13.62$ & $11.51$ & $12.25$ & $11.69$ \\ & $\pm 0.59$ & $\pm 0.86$ & $\pm 0.72$ & $\pm 1.13$ & $\pm 1.17$ & $\pm 0.71$ & $\pm 0.65$ & $\pm 0.69$ & $\pm 0.69$ \\ \midrule german & $23.63$ & $23.30$ & $23.67$ & $25.13$ & $25.03$ & $24.07$ & $23.60$ & $\mathbf{22.97}$ & $24.23$ \\ & $\pm 1.70$ & $\pm 2.51$ & $\pm 2.28$ & $\pm 3.73$ & $\pm 2.49$ & $\pm 2.11$ & $\pm 2.39$ & $\pm 2.22$ & $\pm 2.46$ \\ \midrule image & $17.53$ & $2.84$ & $3.82$ & $3.82$ & $3.25$ & $17.55$ & $3.50$ & $7.97$ & $\mathbf{2.73}$ \\ & $\pm 1.05$ & $\pm 0.52$ & $\pm 0.59$ & $\pm 0.87$ & $\pm 0.41$ & $\pm 0.75$ & $\pm 0.73$ & $\pm 0.52$ & $\pm 0.53$ \\ \bottomrule \begin{tabular}{@{}c@{}}\scriptsize Mean of SVM\\ \scriptsize normalized errors\end{tabular} & $2.472$ & $1$ & $1.095$ & $1.277$ & $1.251$ & $2.485$ & $1.370$ & $1.665$ & $1.033$ \\ \end{tabular} \footnotesize \centering \setlength{\tabcolsep}{5.5pt} \caption{\small Analogous table to Tab. 
\ref{tab:Error} for comparing the number of experts (times the number of hyperplanes per expert), where an expert contains $T$ hyperplanes for both stack- and SS-softplus regressions and contains a single hyperplane/support vector for all the others. The computational complexity for out-of-sample prediction is approximately linear in the number of hyperplanes/support vectors. Displayed in each column of the last row is the average of the number of experts (times the number of hyperplanes per expert) of an algorithm normalized by those of RBF kernel SVM. } \label{tab:K} \begin{tabular}{c|ccccccccc} \toprule Dataset & LR & SVM & RVM & AMM & CPM & softplus & sum-$\varsigma$ & stack-$\varsigma$\! ($T$=5) & SS-$\varsigma$\! ($T$=5) \\ \midrule banana & $1$ & $129.20$ & $22.30$ & $9.50$ & $14.60$ & $2$ & $3.70$ & $2~(\times 5)$ & $7.60~(\times 5)$ \\ & & $\pm 32.76$ & $\pm 26.02$ & $\pm 2.80$ & $\pm 7.49$ & & $\pm 0.95$ & & $\pm 1.17~(\times 5)$ \\ \midrule breast & $1$ & $115.10$ & $24.80$ & $13.40$ & $12.00$ & $2$ & $3.10$ & $2~(\times 5)$ & $6.40~(\times 5)$ \\ cancer & & $\pm 11.16$ & $\pm 28.32$ & $\pm 0.84$ & $\pm 8.43$ & & $\pm 0.74$ & & $\pm 1.43~(\times 5)$ \\ \midrule titanic & $1$ & $83.40$ & $5.10$ & $14.90$ & $5.20$ & $2$ & $2.30$ & $2~(\times 5)$ & $4.00~(\times 5)$ \\ & & $\pm 13.28$ & $\pm 3.03$ & $\pm 3.14$ & $\pm 2.53$ & & $\pm 0.48$ & & $\pm 0.94~(\times 5)$ \\ \midrule waveform & $1$ & $147.00$ & $21.10$ & $9.50$ & $6.40$ & $2$ & $4.40$ & $2~(\times 5)$ & $8.90~(\times 5)$ \\ & & $\pm 38.49$ & $\pm 10.98$ & $\pm 1.18$ & $\pm 2.27$ & & $\pm 0.84$ & & $\pm 2.33~(\times 5)$ \\ \midrule german & $1$ & $423.60$ & $11.00$ & $18.80$ & $8.80$ & $2$ & $6.70$ & $2~(\times 5)$ & $14.70~(\times 5)$ \\ & & $\pm 55.02$ & $\pm 3.20$ & $\pm 1.81$ & $\pm 7.79$ & & $\pm 0.95$ & & $\pm 1.77~(\times 5)$ \\ \midrule image & $1$ & $211.60$ & $35.80$ & $10.50$ & $23.00$ & $2$ & $11.20$ & $2~(\times 5)$ & $17.60~(\times 5)$ \\ & & $\pm 47.51$ & $\pm 9.19$ & $\pm 1.08$ & $\pm 6.75$ & & $\pm 1.32$ & & $\pm 1.90~(\times 5)$ \\ \bottomrule \begin{tabular}{@{}c@{}}\scriptsize Mean of SVM\\ \scriptsize normalized $K$\end{tabular} & $0.007$ & $1$ & $0.131$ & $0.088$ & $0.075$ & $0.014 $ & $0.030$ & $0.014~(\times 5)$ & $0.057~(\times 5)$ \\ \end{tabular} \end{table} We summarize in Tab. \ref{tab:Error} the results for the first six benchmark datasets described in Tab.~\ref{tab:data}, for each of which we report the results based on the first ten predefined random training/testing partitions. Overall for these six datasets, the RBF kernel SVM has the highest average out-of-sample prediction accuracy, followed closely by SS-softplus regression, whose mean of the errors normalized by those of the SVM is as small as 1.033, and then by RVM, whose mean of normalized errors is 1.095. Overall, logistic regression does not perform well, which is not surprising as it is a linear classifier that uses a single hyperplane to partition the covariate space into two halves to separate one class from the other. Softplus regression, which uses an additional parameter over logistic regression, fails to reduce the classification errors of logistic regression; both sum-softplus regression, a multi-hyperplane generalization using the convolution operation, and stack-softplus regression, a multi-hyperplane generalization using the stacking operation, clearly reduce the classification errors; and SS-softplus regression, which combines both the convolution and stacking operations, further improves the overall performance.
Both CPM and AMM perform similarly to sum-softplus regression, which is not surprising given their connections discussed in Appendix \ref{sec:CPM}. For out-of-sample prediction, the computation of a classification algorithm generally increases linearly in the number of used hyperplanes or support vectors. We summarize the number of experts (times the number of hyperplanes per expert if that number is not one) in Tab. \ref{tab:K}, which indicates that, in comparison to SVM, which consistently requires the largest number of experts (each expert corresponds to a support vector for SVM), the RVM, AMM, CPM, and the three proposed multi-hyperplane softplus regressions all require significantly less time for predicting the class label of a new data sample. It is also interesting to notice that the number of hyperplanes automatically inferred from the data by sum-softplus regression is generally smaller than those of AMM and CPM, both of which are selected through cross-validation. Note that the number of active experts, defined as the value of $\sum_{k}\delta(\sum_i m^{(1)}_{ik} >0)$, inferred by both sum- and SS-softplus regressions shown in Tab. \ref{tab:K} will be further reduced if we only take into consideration the experts whose weights are larger than a certain threshold, such as those with $r_k>0.001$ for $k\in\{1,\ldots,K_{\max}\}$. Except for banana, a two-dimensional dataset, sum-softplus regression performs similarly to both AMM and CPM; and except for banana and image, stack-softplus regression performs similarly to both AMM and CPM. These results are not surprising as CPM, closely related to AMM, uses a convex polytope, defined as the intersection of multiple hyperplanes, to enclose one class, whereas the classification decision boundaries of sum-softplus regression, defined by the interactions of multiple hyperplanes via the sum-softplus function, can be bounded within a convex polytope that encloses negative examples, and that of stack-softplus regression can be related to a convex-polytope-like confined space that encloses positive examples. Note that while both sum- and stack-softplus regressions can partially remedy their sensitivity to how the data are labeled by combining the results obtained under two opposite labeling settings, their decision boundaries and those of both AMM and CPM are still restricted to a confined space related to a single convex polytope, which may be used to explain why on both banana and image, as well as on the XOR and double-moon datasets shown in Appendix \ref{app:setting}, they all clearly underperform SS-softplus regression, which separates the two classes using the union of convex-polytope-like confined spaces. For breast cancer, titanic, and german, all classifiers have comparable classification errors, suggesting minor or no advantages of using a nonlinear classifier on them. For these three datasets, it is interesting to notice that, as shown in Figs. \ref{fig:rk}-\ref{fig:SS-softplus_rk}, sum- and SS-softplus regressions infer no more than two and three experts, respectively, with non-negligible weights under both labeling settings. These interesting connections imply that for two linearly separable classes, while providing no obvious benefits but also no clear harms, both sum- and SS-softplus regressions tend to infer a few active experts, and both stack- and SS-softplus regressions exhibit no clear sign of overfitting as the number of expert criteria $T$ increases.
For banana, waveform, and image, by contrast, all nonlinear classifiers clearly outperform logistic regression and, as shown in Figs. \ref{fig:rk}-\ref{fig:SS-softplus_rk}, sum- and SS-softplus regressions infer at least two and four experts, respectively, with non-negligible weights under at least one of the two labeling settings. These interesting connections imply that for two classes not linearly separable, both sum- and SS-softplus regressions may significantly outperform logistic regression by inferring a sufficiently large number of active experts, and both stack- and SS-softplus regressions may significantly outperform logistic regression by setting the number of expert criteria as $T\ge 2$, exhibiting no clear sign of overfitting as $T$ further increases. \begin{figure}[!t] \begin{center} \includegraphics[width=0.77\columnwidth]{figure/Sumsoftplus_rk.pdf} \end{center} \vspace{-7.9mm} \caption{\small\label{fig:rk} The inferred weights of the $K_{\max}=20$ experts of sum-softplus regression, ordered from left to right according to their weights, on all datasets shown in Tab. \ref{tab:data}, based on the maximum likelihood sample of a single random trial. } \begin{center} \includegraphics[width=0.77\columnwidth]{figure/SDS_rk.pdf} \end{center} \vspace{-7.9mm} \caption{\small\label{fig:SS-softplus_rk} Analogous to Fig. \ref{fig:rk} for SS-softplus regression with $K_{\max}=20$ and $T=5$. } \vspace{-3mm} \end{figure} For both stack- and SS-softplus regressions, the computational complexity in both training and out-of-sample prediction increases linearly in $T$, the depth of the stack. To understand how increasing $T$ affects the performance, we show in Tabs. \ref{table:4}-\ref{tab:5} of Appendix \ref{app:setting} the classification errors of stack- and SS-softplus regressions, respectively, for $T\in\{1,2,3,5,10\}$. It is clear that increasing $T$ from 1 to 2 generally leads to the most significant improvement if there is a clear advantage of increasing $T$, and once $T$ is sufficiently large, further increasing $T$ leads to small fluctuations of the performance but does not appear to lead to clear overfitting. It is also interesting to examine the number of active experts inferred by SS-softplus regression, where each expert is equipped with $T$ hyperplanes, as $T$ increases. As shown in Tab. \ref{table:6} of Appendix \ref{app:setting}, this number has a clear increasing trend as $T$ increases. This is not surprising as each expert is able to fit more complex geometric structures as $T$ increases, and hence SS-softplus regression can employ more of them to describe the decision boundaries in more detail. This phenomenon is also clearly visualized by comparing the inferred experts and decision boundaries for SS-softplus regression, as shown in Fig. \ref{fig:circle_3}, with those for sum-softplus regression, as shown in Fig. \ref{fig:circle_1}. In addition to comparing softplus regressions with related algorithms on the six benchmark datasets used in \citet{RVM}, we also consider ijcnn1 and a9a, two larger-scale benchmark datasets that have also been used in \citet{chang2010training,wang2011trading} and \citet{kantchelian2014large}. In Appendix \ref{app:setting}, we report results on both datasets, whose training/testing partition is predefined, based on a single random trial for logistic regression, SVM, and RVM, and five independent random trials for AMM, CPM, and all softplus regressions. As shown in Tabs.
\ref{tab:Error1}-\ref{table:10} of Appendix \ref{app:setting}, we observe similar relationships between the classification errors and the number of expert criteria for both stack- and SS-softplus regressions, and both sum- and SS-softplus regressions provide a good compromise between the classification accuracies and the amount of computation required for out-of-sample predictions. As shown in Figs. \ref{fig:rk}-\ref{fig:SS-softplus_rk}, with the upper bound of the number of experts set as $K_{\max}=20$, for each of the first six datasets, both sum- and SS-softplus regressions shrink the weights of most of the 20 experts to be close to zero, clearly inferring the number of experts with non-negligible weights under both labeling settings. For both ijcnn1 and a9a, under one of the two labeling settings for both sum- and SS-softplus regressions, $K_{\max}=20$ does not seem to be large enough to accommodate all experts with non-negligible weights. Thus we have also tried setting $K_{\max}=50$, which is found to more clearly show the ability of the model to shrink the weights of unnecessary experts for both ijcnn1 and a9a, but at the expense of clearly increased computational complexity in both training and testing. The automatic shrinkage mechanism of the gamma process based sum- and SS-softplus regressions is attractive for both computation and implementation, as it allows setting $K_{\max}$ as large as permitted by the computational budget, without the need to worry about overfitting. Having the ability to support countably infinite experts in the prior and to infer a finite number of experts with non-negligible weights in the posterior is an attractive property of the proposed nonparametric Bayesian softplus regression models. We comment that while we choose a fixed truncation to approximate a countably infinite nonparametric Bayesian model, it is possible to adaptively truncate the number of experts for the proposed gamma process based models, using strategies such as marginalizing out the underlying stochastic processes \citep{HDP,lijoi2007controlling}, performing reversible-jump MCMC \citep{green1995reversible,Wolp:Clyd:Tu:2011}, and using slice sampling \citep{walker2007sampling,neal2003slice}, which would be interesting topics for future research. \vspace{-4mm} \section{Conclusions}\label{sec:conclusion} \vspace{-2mm} To regress a binary response variable on its covariates, we propose sum-, stack-, and sum-stack-softplus regressions that use, respectively, a convex-polytope-bounded confined space to enclose the negative class, a convex-polytope-like confined space to enclose the positive class, and a union of convex-polytope-like confined spaces to enclose the positive class. Sum-stack-softplus regression, including logistic regression and all the other softplus regressions as special cases, constructs a highly flexible nonparametric Bayesian predictive distribution by mixing the convolved and stacked covariate-dependent gamma distributions with the Bernoulli-Poisson distribution. The predictive distribution is deconvolved and demixed by inferring the parameters of the underlying nonparametric Bayesian hierarchical model using a series of data augmentation and marginalization techniques. In the proposed Gibbs sampler, which has closed-form update equations, the parameters of different stacked gamma distributions can be updated in parallel within each iteration.
Example results demonstrate that the proposed softplus regressions can achieve classification accuracies comparable to those of the kernel support vector machine, but consume significantly less computation for out-of-sample predictions, provide probability estimates, quantify uncertainties, and place interpretable geometric constraints on their classification decision boundaries directly in the original covariate space. It is of great interest to investigate how to generalize the proposed softplus regressions to model count, categorical, ordinal, and continuous response variables, and to model observed or latent multivariate discrete vectors. For example, to introduce covariate dependence into a stick-breaking process mixture model \citep{ishwaran2001gibbs}, one may consider replacing the normal cumulative distribution function used in the probit stick-breaking process of \citet{PSBP_Dunson} or the logistic function used in the logistic stick-breaking process of \citet{Lu_LSBP} with the proposed softplus functions. \small \begin{spacing}{1.125} \setlength{\bibsep}{1pt plus 0ex} \bibliographystyle{abbrvnat}
\section{Introduction}\label{sec:introduction} As an active area of research, the numerical study of evolutionary stochastic partial differential equations (SPDEs) has attracted increasing attention in the past decades (see, e.g., the monographs \cite{kruse2014strong,lord2014introduction,jentzen2011taylor} and references therein). Although much progress has been made, the field is still far from well understood, especially regarding the numerical analysis of SPDEs with non-globally Lipschitz nonlinearity. The present work attempts to make a contribution in this direction and examines numerical approximations of a typical example of parabolic SPDEs with super-linearly growing nonlinearity, i.e., the stochastic Allen-Cahn equation. The driving noise is a space-time white one, which is of special interest as it can best model the fluctuations generated by microscopic effects in a homogeneous physical system, including, for example, molecular collisions in gases and liquids, and electric fluctuations in resistors \cite{gardiner1986handbook}. Many researchers have carried out numerical analyses of SPDEs subject to space-time white noise, e.g., \cite{liu2003convergence, faris1982large, GI98, GI99, davie2001convergence, jentzen2009overcoming, becker2016strong, blomker2013galerkin, hutzenthaler2016strong, Anton2017fully, DP09,DA10, jentzen2011higher, brehier2018analysis, liu2018strong, cao2017approximating, printems2001discretization, wang2014higher}, to mention just a few. Numerically solving the continuous problem on a computer forces us to perform both spatial and temporal discretizations. In space, we discretize the underlying SPDE by a spectral Galerkin method, resulting in a system of finite dimensional stochastic differential equations (SDEs). Building on the spatial discretization, we propose a nonlinearity-tamed accelerated exponential time-stepping scheme \eqref{eq:full.Tamed-AEE} in time. The approximation errors of both the spatial discretization and the space-time full discrete scheme are carefully analyzed, with strong convergence rates successfully recovered. More precisely, by $X(t_m)$ we denote the unique mild solution of SPDE \eqref{eq:SGL-concrete} taking values at temporal grid points $t_m = m \tau, m \in \{ 0, 1,..., M \}$ with uniform time step-size $\tau = \tfrac{T}{M} > 0$ and by $Y^{M,N}_{t_m}$ the numerical approximations of $X(t_m)$, produced by the proposed fully discrete scheme. The approximation error measured in $L^p ( \Omega; H), p \in [2, \infty)$ reads (cf. Theorem \ref{thm:full-discrete-scheme-error-bound}): \begin{equation} \label{eq:Intro-numerical-main-result} \sup_{ 0 \leq m \leq M} \| X ( t_{m} ) - Y^{M,N}_{t_m} \|_{L^p ( \Omega; H) } \leq C \! \left ( N^{ - \beta } + \tau ^{ \beta } \right), \quad \forall \, \beta \in (0, \tfrac12). \end{equation} Here $H := L^2 ((0, 1); \mathbb{R} )$ and the constant $C$ depends on $p, \beta, T$ and the initial value of the SPDE, but does not depend on the discretization parameters $M, N$. Over the last two years, several research works have been reported on the numerical analysis of space-time white noise driven SPDEs with cubic (polynomial) nonlinearity \cite{becker2016strong,Becker2017strong,liu2018strong,brehier2018analysis,brehier2018strong}. Becker and Jentzen \cite{becker2016strong} in 2016 introduced two nonlinearity-truncated Euler-type approximations for pure time discretizations of stochastic Ginzburg-Landau type equations with slightly more general polynomial nonlinearities. There a strong convergence rate of order almost $\frac{1}{4} $ is identified.
More recently, when the present manuscript was almost finalized, we became aware of four other preprints \cite{Becker2017strong,brehier2018analysis,liu2018strong,brehier2018strong} submitted online, concerned with numerical approximations of similar SPDEs. Becker, Gess, Jentzen and Kloeden \cite{Becker2017strong} proposed new types of truncated exponential Euler space-time full discrete schemes for the same problem as in \cite{becker2016strong} and derived strong convergence rates of order almost $\frac{1}{2} $ in space and order almost $\frac{1}{4} $ in time. Later, Br{\'e}hier and Gouden\`ege \cite{brehier2018analysis} and Br{\'e}hier, Cui and Hong \cite{brehier2018strong} analyzed some splitting time discretization schemes and obtained convergence rates of order $\frac14$. Liu and Qiao \cite{liu2018strong} investigated a spectral Galerkin backward Euler full discretization, achieving strong convergence rates of order almost $\frac{1}{2} $ in space and order $\frac{1}{4} $ in time. As clearly implied in \eqref{eq:Intro-numerical-main-result}, the spatial convergence rate coincides with those in \cite{Becker2017strong,liu2018strong}, but the convergence rate of our time-stepping scheme can be of order almost $\frac12$, twice as high as those in \cite{becker2016strong,Becker2017strong,liu2018strong,brehier2018analysis,brehier2018strong}. Despite involving linear functionals of the noise process, the newly proposed scheme \eqref{eq:full.Tamed-AEE} is explicit, easily implementable and does not incur additional computational effort (see the comments in section \ref{sec:numerical-result} for the implementation of the linear functionals of the noise process). It is important to emphasize that proving the error estimate \eqref{eq:Intro-numerical-main-result} rigorously is rather challenging, owing to two key difficulties: one is to derive uniform a priori moment bounds for the numerical approximations in the presence of the super-linearly growing nonlinearity, and the other is to recover the temporal convergence rate of order almost $\tfrac12$, instead of order (almost) $ \tfrac{1}{4} $ as in the existing literature. With regard to the former, we first derive certain estimates for deterministic perturbed PDEs \eqref{eq:determ-perturbed-PDE}, as elaborated in subsection \ref{subsec:estimates-perturbed-PDE}. Then the moment bounds are a consequence of a certain bootstrap argument, by showing $ \mathbb{E}\big[ \mathds{1}_{ \Omega_{ R^{ \tau },t_m} } \| Y^{M,N}_{t_m} \|_V^{p} \big] < \infty$ and $ \mathbb{E} \big [ \mathds{1}_{\Omega^c_{R^{\tau},t_m}} \|Y_{t_m}^{M,N} \|_V^p \big ] < \infty $, $ V : = C ( (0, 1), \mathbb{R} )$, for subevents $ \Omega_{ R^{ \tau },t} $ with $ R^{ \tau } $ carefully chosen in dependence on $\tau$ (see subsection \ref{subsec:a-priori-moment-numerical}). The latter difficulty lies in the estimate of the crucial term $J_1$ (cf. \eqref{eq:full-error-J0J1J2} in subsection \ref{subset:full-error-analysis}), \begin{equation} \small J_1 : = p \int_0^t \big \| P_N X( s ) - Y_s^{ M, N } \big \|^{p-2} \big \langle P_N X( s ) - Y_s^{ M, N }, F(Y_s^{M,N}) - F (Y^{M,N}_{\lfloor s \rfloor}) \big \rangle \,\text{d} s. \end{equation} As usual, such a term can be treated with the aid of the temporal H\"{o}lder regularity of $ Y_s^{M,N} $ together with the Cauchy-Schwarz inequality and H\"{o}lder's inequality, but this only attains order $\tau^{ \frac{\beta}{2} }$. In our analysis, we instead decompose $P_N X( s ) - Y_s^{ M, N }$ in the inner product into three parts, as shown in \eqref{eq:J1-split}.
The smoothing property of the analytic semigroup is then fully exploited to handle these three terms, in conjunction with commutativity properties of the nonlinearity and higher temporal H\"{o}lder regularity in a negative Sobolev space (consult subsection \ref{subset:negative-sobleve} and the treatment of $J_1$ in the proof of Theorem \ref{thm:full-discrete-scheme-error-bound} for details). This way we arrive at the desired high convergence rate in time. Furthermore, we would like to point out that the improvement of the convergence rate is essentially credited to fully preserving the stochastic convolution in the time-stepping scheme \eqref{eq:full.Tamed-AEE}. Such an accelerating technique is originally due to Jentzen and Kloeden \cite{jentzen2009overcoming}, who simulated nearly linear parabolic SPDEs, and it has been further examined and extended in different settings \cite{jentzen2011efficient,wang2015note, wang2014higher,qi2017accelerated,lord2016modified}, where a globally Lipschitz condition imposed on the nonlinearity is indispensable. When the nonlinearity grows super-linearly and the globally Lipschitz condition is thus violated, one cannot in general expect the usual accelerated exponential time-stepping schemes to converge in the strong sense, based on the observation that the standard Euler method strongly diverges for ordinary (finite dimensional) SDEs \cite{hutzenthaler2011strong}. To address this issue, we introduce a taming technique originally used in \cite{hutzenthaler2012strong,wang2013tamed,Tretyakov2013fundamental,hutzenthaler15MEMAMS} for ordinary SDEs, and propose a nonlinearity-tamed version of the accelerated exponential Euler scheme for the time discretization. Analyzing the strong convergence rate is, however, much more difficult than in the finite dimensional SDE setting (see section \ref{sect:full-discretization}). Finally, we mention that only one spatial dimension is considered, because the space-time white noise driven SPDE admits a mild solution with a positive (but very low) order of regularity only in one spatial dimension. It is because of the low order of regularity that the error analysis becomes difficult. Strong and weak convergence analysis of smoother noise (e.g., trace-class noise) driven SPDEs in multiple spatial dimensions, with non-globally Lipschitz nonlinearity, will be addressed in our forthcoming works \cite{qi2018tamed} (see also, e.g., \cite{sauer2015lattice, kovacs15backward, kovacs15discretisation, majee2017optimal, gyongy2016convergence, jentzen2015strong, hutzenthaler2014perturbation, brehier2018strong } for relevant topics). The rest of this paper is organized as follows. In the next section we collect some basic facts and present the well-posedness of the stochastic problem under the given assumptions. Section \ref{sect:spatial-discret} and Section \ref{sect:full-discretization} are, respectively, devoted to the analysis of strong convergence rates for the spatial semi-discretization and the spatio-temporal full discretization of the underlying SPDEs. Numerical results are included in section \ref{sec:numerical-result} to test the previous theoretical findings.
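Since the precise scheme \eqref{eq:full.Tamed-AEE} is only stated in section \ref{sect:full-discretization}, we sketch here, for the reader's intuition, one step of a tamed accelerated exponential Euler method acting on the spectral coefficients of the Galerkin system; the taming factor $1/(1+\tau\|F_N(Y)\|)$ is the standard choice of \cite{hutzenthaler2012strong} and is assumed for illustration only, so the sketch (in Python, with hypothetical names) need not coincide with the scheme analyzed below:
\begin{verbatim}
import numpy as np

def tamed_exp_euler_step(Y, tau, lam, F_hat, rng):
    # One step for the N spectral coefficients of the Galerkin system
    # dY = -lam*Y dt + F_hat(Y) dt + dW, where lam[i-1] = (pi*i)^2.
    Fy = F_hat(Y)
    Fy = Fy / (1.0 + tau * np.linalg.norm(Fy))    # taming (assumed form)
    E = np.exp(-lam * tau)                        # exact semigroup factor
    # exact one-step stochastic convolution: N(0, (1 - E^2)/(2*lam)) per mode
    conv = rng.normal(size=lam.shape) * np.sqrt((1.0 - E ** 2) / (2.0 * lam))
    return E * Y + (1.0 - E) / lam * Fy + conv
\end{verbatim}
Treating the linear part and the stochastic convolution exactly, rather than freezing the noise increment, is what allows the temporal rate to reach order almost $\tfrac12$.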
\section{Well-posedness of the stochastic problem} \label{sec:well-posedness-regularity} Throughout this article, we are interested in the additive space-time white noise driven stochastic Allen-Cahn equation with cubic nonlinearity, described by \begin{equation}\label{eq:SGL-concrete} \left\{ \begin{array}{lll} \frac{\partial u }{\partial t }(t,x) = \frac{\partial^2 u }{\partial x^2 } (t,x) + f(u(t,x)) + \dot{W} (t, x), \: x \in D, \ t \in (0, T], \\ u(0, x) = u_0(x), \: x \in D, \\ u(t, 0) = u (t, 1) = 0, \: t \in (0, T]. \end{array}\right. \end{equation} Here $D := (0, 1)$, $ T > 0$, $f \colon \mathbb{R} \rightarrow \mathbb{R}$ is given by $ f (v) = a_3 v^3 + a_2 v^2 + a_1 v + a_0, \, a_{3} < 0, \, a_2, a_1, a_0, v \in \mathbb{R}, $ and $\dot{W} (t, \cdot )$ stands for a formal time derivative of a cylindrical I-Wiener process \cite{da2014stochastic}. In order to define a mild solution of \eqref{eq:SGL-concrete} following the semigroup approach in \cite{da2014stochastic}, we put the problem into an abstract framework. Given a real separable Hilbert space $(H, \langle \cdot, \cdot \rangle, \|\cdot\| )$ with $\|\cdot\| = \langle \cdot, \cdot \rangle^{\frac{1}{2}}$, by $\mathcal{L}(H)$ we denote the space of bounded linear operators from $H$ to $H$ endowed with the usual operator norm $\| \cdot \|_{\mathcal{L}(H)}$. Additionally, we denote by $\mathcal{L}_2(H) \subset \mathcal{L}(H)$ the subspace consisting of all Hilbert-Schmidt operators from $H$ to $H$ \cite{da2014stochastic}. It is known that $\mathcal{L}_2(H)$ is a separable Hilbert space, equipped with the scalar product $ \langle \Gamma_1, \Gamma_2 \rangle_{\mathcal{L}_2(H)} : = \sum_{n \in \mathbb{N}} \langle \Gamma_1 \eta_n, \Gamma_2 \eta_n \rangle, $ and norm $ \| \Gamma \|_{ \mathcal{L}_2(H) } := \big ( \sum_{n \in \mathbb{N}} \| \Gamma \eta_n \|^2 \big )^{\frac12}, $ independent of the particular choice of orthonormal basis $\{\eta_n\}_{n \in \mathbb{N}}$ of $H$. Below we sometimes write $\mathcal{L}_2 := \mathcal{L}_2(H)$ for brevity. If $\Gamma \in \mathcal{L}(H)$ and $\Gamma_1, \Gamma_2 \in \mathcal{L}_2(H)$, then $ | \langle \Gamma_1, \Gamma_2 \rangle_{\mathcal{L}_2(H)} | \leq \| \Gamma_1 \|_{\mathcal{L}_2(H)} \| \Gamma_2 \|_{\mathcal{L}_2(H)}, \| \Gamma \Gamma_1\|_{\mathcal{L}_2(H)} \leq \|\Gamma\|_{\mathcal{L}(H)} \|\Gamma_1\|_{\mathcal{L}_2(H)}. $ By $L^{ \gamma } ( D; \mathbb{R} ), \gamma \geq 1$ ($L^{ \gamma } ( D )$ for short) we denote the Banach space consisting of $\gamma$-times integrable functions and by $ V : = C ( D, \mathbb{R} )$ the Banach space of continuous functions, both with their usual norms. We make the following assumptions. \begin{assumption}[Linear operator $A$]\label{ass:A} Let $D : = (0, 1)$ and let $H = L^2 (D; \mathbb{R} )$ be a real separable Hilbert space, equipped with the usual product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot \| = \langle \cdot, \cdot \rangle^{\frac12}$. Let $ - A \colon \text{dom} (A) \subset H \rightarrow H$ be the Laplacian with homogeneous Dirichlet boundary conditions, defined by $ - A u = \Delta u$, $u \in \text{dom}(A) := H^2 \cap H_0^1$. \end{assumption} The above setting ensures that there exists an increasing sequence of real numbers $\lambda_i=\pi^2 i^2, i \in \mathbb{N}$ and an orthonormal basis $\{e_i ( x ) =\sqrt{2}\sin(i \pi x), \,x \in(0,1)\}_{i \in \mathbb{N}}$ such that $A e_i = \lambda_i e_i$. In particular, the linear unbounded operator $A$ is positive, i.e., $ \langle - A v, v \rangle \leq - \lambda_1 \| v \|^2$, for all $ v \in \text{dom}(A)$.
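In this basis, $A$ acts diagonally on the coefficients $\langle u, e_i\rangle$, which the following minimal sketch (with hypothetical names) makes explicit:
\begin{verbatim}
import numpy as np

def eigenpairs(N):
    # lambda_i = pi^2 i^2 and e_i(x) = sqrt(2) sin(i pi x), i = 1, ..., N;
    # A acts diagonally: <A u, e_i> = lambda_i <u, e_i>
    i = np.arange(1, N + 1)
    lam = (np.pi * i) ** 2
    e = lambda x: np.sqrt(2.0) * np.sin(np.outer(i, np.pi * np.asarray(x)))
    return lam, e    # e(x) has shape (N, len(x))
\end{verbatim}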
Moreover, $-A$ generates an analytic semigroup $E(t) = e^{-t A}, t \geq 0$ on $H$ and we can define the fractional powers of $A$, i.e., $A^\gamma, \gamma \in \mathbb{R}$ and the Hilbert space $\dot{H}^{\gamma} :=\text{dom}(A^{\frac{\gamma}{2}})$, equipped with inner product $\langle \cdot, \cdot \rangle_\gamma : = \langle A^{\frac{\gamma}{2}} \cdot, A^{\frac{\gamma}{2}} \cdot \rangle$ and norm $\| \cdot \|_{\gamma} = \langle \cdot, \cdot \rangle_\gamma^{\frac12} $ \cite[Appendix B.2]{kruse2014strong}. Moreover, $\dot{H}^0 = H$ and $\dot{H}^{\gamma} \subset \dot{H}^{\delta}, \gamma \geq \delta$. It is well-known that \cite{pazy1983semigroups} \begin{equation} \begin{split} \label{eq:E.Inequality} \| A^\gamma E(t)\|_{\mathcal{L}(H)} \leq& C t^{-\gamma}, \quad t >0, \gamma \geq 0, \\ \| A^{-\rho} (I-E(t))\|_{\mathcal{L}(H)} \leq& C t^{\rho}, \quad t >0, \rho \in [0,1], \end{split} \end{equation} and that \begin{equation} \label{eq:AQ_condition} \|A^{\frac{\beta-1}{2}} \|_{\mathcal{L}_2 (H) } < \infty, \quad \text{ for any } \beta < \tfrac12. \end{equation} Throughout this paper, by $C$ and $C_{\cdot}$ we mean various constants, not necessarily the same at each occurrence, that are independent of the discretization parameters. \begin{assumption}[Nonlinearity]\label{ass:F} Let $F \colon L^{6} ( D; \mathbb{R}) \rightarrow H$ be a deterministic mapping defined by \begin{equation*} F (v ) ( x ) = f ( v ( x ) ) := a_3 v ^3(x) + a_2 v ^2(x) + a_1 v (x) + a_0, \ x \in (0, 1), \ a_3 < 0, \ a_2, a_1, a_0 \in \mathbb{R}, \, v \in L^{6} ( D; \mathbb{R}) . \end{equation*} \end{assumption} It is easy to find a constant $L \in (0, \infty)$ such that \begin{equation}\label{eq:F-one-sided-condition} \begin{split} \langle u - v, F (u) - F (v) \rangle & \leq L \| u - v \|^2, \quad u , v \in V, \\ \| F (u) - F (v) \| & \leq L ( 1 + \| u \|_V^2 + \| v \|_V^2 ) \| u - v \|, \quad u, v \in V. \end{split} \end{equation} The second property in \eqref{eq:F-one-sided-condition} immediately implies \begin{equation} \| F (u) \| \leq C ( 1 + \| u \|_V^2 ) ( 1 + \| u \| ), \quad u \in V. \end{equation} \begin{assumption}[Noise process]\label{ass:Phi} Let $\{ W (t) \}_{t \in[0, T]}$ be a cylindrical $I$-Wiener process on a probability space $\left(\Omega,\mathcal {F},\mathbb{P} \right)$ with a normal filtration $\{\mathcal{F}_t\}_{ t\in [0, T] }$, represented by a formal series, \begin{equation} \label{eq:Wiener-representation} W (t) := \sum_{n = 1}^{\infty} \beta_n ( t ) e_n, \quad t \in [0, T], \end{equation} where $\{ \beta_n ( t ) \}_{n \in \mathbb{N}}, t \in [0, T]$ is a sequence of independent real-valued standard Brownian motions and $\{e_n = \sqrt{2}\sin(n \pi x), \,x \in(0,1) \}_{n \in \mathbb{N}}$ is a complete orthonormal basis of $H$. \end{assumption} \begin{assumption}[Initial value] \label{ass:X0} Let the initial data $X_0 \colon \Omega \rightarrow H $, given by $ X_0 ( \cdot ) = u_0 ( \cdot ) $, be an $\mathcal{F}_0/\mathcal{B}(H)$-measurable random variable satisfying, for a sufficiently large positive number $ p_0 \in \mathbb{N} $, \begin{equation} \mathbb{E} [ \| X_0 \|_{ \beta }^ {p_0} ] + \mathbb{E} [ \| X_0 \|_{ V }^ {p_0} ] \leq K_0 < \infty, \quad \text { for any } \beta < \tfrac12.
\end{equation} \end{assumption} We are now prepared to formulate the concrete problem \eqref{eq:SGL-concrete} as an abstract stochastic evolution equation in the Hilbert space $H$, \begin{equation}\label{eq:SGL-abstract} \begin{split} \left\{ \begin{array}{lll} \text{d} X(t) + A X(t)\, \text{d} t = F ( X(t) ) \,\text{d} t + \text{d} W(t), \quad t \in (0, T], \\ X(0) = X_0, \end{array}\right. \end{split} \end{equation} where $X (t, \cdot ) = u (t, \cdot) $ and the abstract objects $A, F, X_0$ are defined as in Assumptions \ref{ass:A}-\ref{ass:X0}. The above assumptions suffice to establish well-posedness and regularity results of SPDE \eqref{eq:SGL-abstract}. Before that, we recall some estimates that can, e.g., be found in \cite[Proposition 4.3]{da2004kolmogorov} and \cite[Lemma 6.1.2]{Cerrai2001second}. \begin{lem} For any $ p \in [ 2, \infty ) $ and $ \beta \in [0, \tfrac12) $, the stochastic convolution $ \{ \mathcal{O}_t \}_{t \in [0, T]}$ satisfies \begin{align} \label{eq:lem-stoch-conv-spatial-regularity} \mathbb{E} \Big[ \sup_{t \in [0, T] } \| \mathcal{O}_t \|_V^p \Big] & < \infty, \quad \mbox{with} \quad \mathcal{O}_t : = \int_0^t E(t-s) \text{d} W(s), \\ \| \mathcal{O}_t - \mathcal{O}_s \|_{L^p(\Omega, H)} & \leq C (t -s)^{ \frac{\beta}{2} } , \quad 0 \leq s < t \leq T. \label{eq:lem-stoch-conv-regularity} \end{align} \end{lem} \begin{thm} \label{thm:SPDE-regularity-result} Under Assumptions \ref{ass:A}-\ref{ass:X0}, SPDE \eqref{eq:SGL-abstract} possesses a unique mild solution $X: [0,T] \times \Omega \rightarrow V$ with continuous sample paths, determined by, \begin{equation}\label{eq:mild-solution} X(t) = E(t) X_0 + \int_0^t E(t-s) F ( X( s ) ) \, \text{d} s + \mathcal{O}_t \quad \mathbb{P} \mbox{-a.s.} . \end{equation} For $p \in [2, \infty)$ there exists a constant $C_1 \in [0, \infty)$ depending on $p, \beta, T$ such that, for any $\beta < \tfrac12$, \begin{align} \label{eq:optimal.regularity1} \sup_{ t \in [0, T] } \| X ( t ) \|_{L^p(\Omega, V )} \leq C_1 \big( 1 + \| X_0 \|_{L^p(\Omega, V )} \big), \\ \label{eq:optimal.regularity2} \sup_{ t \in [0, T] } \| X ( t ) \|_{L^p(\Omega, \dot{H}^{\beta } )} \leq C_1 \big( 1 + \| X_0 \|_{L^ { p }(\Omega, \dot{H}^{\beta } ) } + \| X_0 \|_{L^ { 3p }(\Omega, V ) }^3 \big). \end{align} Moreover, there exists a constant $C_2 \in [0, \infty)$ depending on $p, \beta, C_1, T$ and $X_0$ such that, for any $\beta < \tfrac12$, \begin{equation} \small \label{eq:optimal.regularity3} \| X_t - X_s \|_{L^p(\Omega, H)} \leq C_2 (t -s)^{ \frac{\beta}{2} } , \quad 0 \leq s < t \leq T. \end{equation} \end{thm} The uniqueness of the mild solution and the regularity assertion \eqref{eq:optimal.regularity1} are based on \cite[Proposition 6.2.2]{Cerrai2001second} and \eqref{eq:lem-stoch-conv-spatial-regularity}. The rest of the estimates in Theorem \ref{thm:SPDE-regularity-result} can be verified by standard arguments. \section{Spatial semi-discretization} \label{sect:spatial-discret} This section concerns the error analysis for a spectral Galerkin spatial semi-discretization of the underlying problem \eqref{eq:SGL-abstract}. For $N\in \mathbb{N}$ we define a finite dimensional subspace of $H$ by \begin{equation}\label{eq:space.HN} H^N := \mbox{span} \{e_1, e_2, \cdots, e_N \}, \end{equation} and the projection operator $P_N \colon \dot{H}^{\alpha}\rightarrow H^N$ by $ P_N \xi = \sum_{i=1}^N \langle \xi, e_i \rangle e_i, \forall\, \xi \in \dot{H}^{\alpha}, \, \alpha \in \mathbb{R}.
$ Here $H^N$ is chosen as the linear space spanned by the first $N$ eigenvectors of the dominant linear operator $A$. It is not difficult to deduce that \begin{equation}\label{eq:P-N-estimate} \| ( P_N - I ) \varphi \| \leq \lambda_{N+1}^{- \frac {\alpha}{2} } \|\varphi\|_{\alpha} \leq N^{ - \alpha } \|\varphi\|_{\alpha}, \quad \forall \: \varphi \in \dot{H}^{\alpha}, \: \alpha \geq0. \end{equation} Additionally, define $A_N \colon H \rightarrow H^N$ as $A_N = A P_N $, which generates an analytic semigroup $E_N(t) = e^{-t A_N}$, $t \in [0, \infty)$ on $H^N$. Then the spectral Galerkin approximation of \eqref{eq:SGL-abstract} results in the following finite-dimensional SDE, \begin{equation}\label{eq:spectral-spde} \begin{split} \left\{ \begin{array}{lll} \text{d} X^N(t) + A_N X^N ( t ) \, \text{d} t = F_N ( X^N (t) ) \, \text{d} t + P_N \, \text{d} W (t), \quad t \in (0, T], \\ X^N(0) = P_N X_0, \end{array}\right. \end{split} \end{equation} where we write $ F_N : = P_N F $ for short. It is easy to see that \eqref{eq:spectral-spde} admits a unique solution in $H^N$. By the variation-of-constants formula, the corresponding solution can be written as \begin{equation}\label{eq:Spectral-Galerkin-mild} X^N(t) = E_N(t) P_N X_0 + \int_0^t E_N (t - s ) P_N F ( X^N( s ) ) \, \text{d} s + \int_0^t E_N(t-s) P_N \, \text{d} W (s), \: \mathbb{P} \mbox{-a.s.}. \end{equation} Before analyzing the spatial discretization error, we need some auxiliary lemmas. The following one is a direct consequence of \cite[Lemma 5.4]{blomker2013galerkin} with $t_1 = 0$. \begin{lem} \label{Lem:PN-Stoch-Conv-V-bound} For any $p \in [2, \infty )$, the stochastic convolution $ \{ \mathcal{O}_t \}_{t \in [0, T]}$ satisfies \begin{equation}\label{eq:lemma-PN-Stoch-Conv-V-bound} \sup_{ t \in [0, T], N \in \mathbb{N} } \| P_N \mathcal{O}_t \|_{L^p(\Omega, V ) } < \infty. \end{equation} \end{lem} Moreover, we can validate the following two lemmas. \begin{lem} \label{lem:semigroup-regularity0} Let $E(t) = e^{-t A}, t \geq 0$ be the analytic semigroup defined in section \ref{sec:well-posedness-regularity}. For any $ N \in \mathbb{N} $ and $ \psi \in \dot{H}^{\gamma}, \gamma \in [0, \tfrac12 )$, it holds that \begin{equation} \label{eq:lem-V-H-regularity} \| P_N E ( t ) \psi \|_V \leq 2^{ \gamma } \big ( \tfrac{ 5 - 4 \gamma }{ 2 \pi ( 1 - 2 \gamma ) } \big )^{ \frac12 } t^{ \frac { 2 \gamma -1} {4} } \| \psi \|_{ \gamma }, \quad t > 0, \, \gamma \in [0, \tfrac12 ). \end{equation} \end{lem} { \it Proof of Lemma \ref{lem:semigroup-regularity0}. } Elementary facts readily yield \begin{equation} \small \begin{split} \| P_N E ( t ) \psi \|_V & = \sup_{ x \in [0, 1] } \Big | \sum_{ i = 1 }^N e^{ - \lambda_i t } \langle \psi, e_i \rangle e_i \Big | \leq \sqrt{2} \sum_{ i = 1 }^N e^{ - \lambda_i t } | \langle \psi, e_i \rangle | \\ & \leq \sqrt{2} \Big( \sum_{ i = 1 }^N \lambda_i^{ - \gamma } e^{ - 2\lambda_i t } \Big)^{1/2} \Big( \sum_{ i = 1 }^N \lambda_i^{ \gamma } | \langle \psi, e_i \rangle |^2 \Big)^{1/2} \\ & \leq \sqrt{2} \pi^{ - \gamma } \Big( \int_0^{ \infty } x ^{ - 2 \gamma } e^{ - 2 \pi^2 x^2 t } \text{d} x \Big)^{1/2} \| \psi \|_{ \gamma } \\ & = 2^{ \gamma } \pi^{ - \frac12 } t^{ \frac{ 2 \gamma -1 }{ 4 } } \Big( \int_0^{ \infty } y ^{ - 2 \gamma } e^{ - y^2/2 } \text{d} y \Big)^{1/2} \| \psi \|_{ \gamma } \\ & \leq 2^{ \gamma } \big ( \tfrac{ 5 - 4 \gamma }{ 2 \pi ( 1 - 2 \gamma ) } \big )^{ \frac12 } t^{ \frac { 2 \gamma -1} {4} } \| \psi \|_{ \gamma }, \end{split} \end{equation} as required.
$\square$ \begin{lem} \label{lem:PN-Xt-V-bound} Let $ \{ X ( t ) \}_{ t \in [0, T] } $ be the mild solution to \eqref{eq:SGL-abstract}, defined by \eqref{eq:mild-solution}. Then it holds for any $p \in [2, \infty )$ that \begin{equation} \sup_{N \in \mathbb{N} } \| P_N X ( t ) \|_{ L^p(\Omega, V ) } \leq C_{ \gamma } \big ( 1 + t^{ \frac {2 \gamma - 1} {4}} \big ) , \quad \gamma \in [0, \tfrac12), \quad t \in ( 0, T]. \end{equation} \end{lem} {\it Proof of Lemma \ref{lem:PN-Xt-V-bound}. } Observing that $ E_N(t) P_N = E (t) P_N $ and using Lemmas \ref{Lem:PN-Stoch-Conv-V-bound} and \ref{lem:semigroup-regularity0} shows \begin{align} & \| P_N X(t) \|_{ L^p(\Omega, V ) } \leq \| E(t) P_N X_0 \|_{ L^p(\Omega, V ) } + \int_0^t \| E(t-s) F_N ( X( s ) ) \|_{ L^p(\Omega, V ) } \, \text{d} s + \| P_N \mathcal{O}_t \|_{ L^p(\Omega, V ) } \nonumber \\ & \quad \leq C_{\gamma} t^{\frac{2 \gamma - 1}{4}} \| X_0 \|_{ L^p(\Omega, \dot{H}^{ \gamma } ) } + C_{\gamma} \! \int_0^t ( t - s )^{ - \frac{ 1 } { 4 } } \| F ( X( s ) ) \|_{ L^p(\Omega, H ) } \, \text{d} s + \| P_N \mathcal{O}_t \|_{ L^p(\Omega, V ) } \nonumber \\ & \quad \leq C_{\gamma} t^{\frac{2 \gamma - 1}{4}} \| X_0 \|_{ L^p(\Omega, \dot{H}^{ \gamma } ) } + C_{\gamma} \! \sup_{s \in [0, T]} \| F ( X( s ) ) \|_{ L^p(\Omega, H ) } + \| P_N \mathcal{O}_t \|_{ L^p(\Omega, V ) } \nonumber \\ & \quad \leq C_{\gamma} t^{\frac{2 \gamma - 1}{4}} \| X_0 \|_{ L^p(\Omega, \dot{H}^{ \gamma } ) } + C_{\gamma} \big ( 1 + \sup_{s \in [0, T]} \| X( s ) \|_{ L^{3p}(\Omega, V ) }^3 \big ) + \| P_N \mathcal{O}_t \|_{ L^{p} (\Omega, V ) }. \end{align} Owing to \eqref{eq:optimal.regularity1}, \eqref{eq:lemma-PN-Stoch-Conv-V-bound} and Assumption \ref{ass:X0}, one can arrive at the expected estimate. $\square$ Now we are prepared to carry out the convergence analysis of the spectral Galerkin discretization \eqref{eq:spectral-spde}. \begin{thm}[Spatial error estimate] \label{thm:space-main-conv} Let Assumptions \ref{ass:A}-\ref{ass:X0} hold. Let $X(t)$ and $X^N(t)$ be defined through \eqref{eq:SGL-abstract} and \eqref{eq:Spectral-Galerkin-mild}, respectively. Then it holds, for any $ \beta < \tfrac12 $, $ p \in [ 2, \infty )$ and $N \in \mathbb{N}$, \begin{equation}\label{eq:spatial-error} \sup_{ t \in [ 0, T ] } \| X(t) - X^N(t) \|_{L^p ( \Omega; H) } \leq C N^{ - \beta }. \end{equation} \end{thm} The above convergence rate $\beta < \tfrac12$ can be arbitrarily close to $\tfrac12$ but cannot reach $\tfrac12$, since the constant $C$ explodes as $\beta$ tends to $\tfrac12$. This comment also applies to the full approximation error estimates in section \ref{sect:full-discretization}. {\it Proof of Theorem \ref{thm:space-main-conv}. } The triangle inequality along with \eqref{eq:P-N-estimate} gives \begin{equation}\label{eq:space-err-proof-split} \begin{split} \| X(t) - X^N(t) \|_{L^p( \Omega, H)} & \leq \| ( I - P_N ) X(t) \|_{L^p( \Omega, H)} + \| P_N X(t) - X^N(t) \|_{L^p( \Omega, H)} \\ & \leq N^{ - \beta } \| X( t ) \|_{L^{ p } ( \Omega; \dot{H}^{ \beta }) } + \| e^N_t \|_{L^p( \Omega, H)}, \end{split} \end{equation} where $ e^N_t : = P_N X(t) - X^N(t) = \int_0^t E_N ( t - s ) \big [ F_N ( X ( s ) ) - F_N ( X^N ( s ) ) \big] \, \text{d} s $ satisfies \begin{equation} \frac{\mbox{d} } { \mbox{d} t} e^N_t = - A_N e^N_t + F_N ( X(t) ) - F_N (X^N (t) ) = - A e^N_t + F_N ( X(t) ) - F_N (X^N (t) ).
\end{equation} Therefore \begin{equation} \label{eq:spatial-error-before-final} \begin{split} & \frac{\mbox{d} } { \mbox{d} t} \| e^N_t \|^p = p \| e^N_t \|^{p-2} \big \langle e^N_t, - A e^N_t + F ( X(t) ) - F (X^N (t) ) \big\rangle \\ & \quad \leq p \| e^N_t \|^{p-2} \big \langle e^N_t, F ( P_N X (t) ) - F (X^N (t) ) \big \rangle + p \| e^N_t \|^{p-2} \big \langle e^N_t, F ( X (t) ) - F (P_N X(t) ) \big \rangle \\ & \quad \leq L p \| e^N_t \|^p + p \| e^N_t \|^{p-1} \big \| F ( X (t) ) - F (P_N X(t) ) \big \| \\ & \quad \leq ( L p + p -1 ) \| e^N_t \|^p + \big \| F ( X (t) ) - F (P_N X(t) ) \big \|^p \\ & \quad \leq C \| e^N_t \|^p + C \big ( 1 + \| X (t) \|_V^{2 p} + \| P_N X (t) \|_V^{2 p} \big ) \| (I - P_N) X(t) \|^p. \end{split} \end{equation} Choosing $ \tfrac{ p - 2 } { 2 p } < \gamma < \tfrac12$ in Lemma \ref{lem:PN-Xt-V-bound} and also considering \eqref{eq:optimal.regularity1}, \eqref{eq:optimal.regularity2} and \eqref{eq:P-N-estimate} ensures \begin{align} \mathbb{E} [ \| e^N_t \|^p ] & \leq C \! \int_0^ t \mathbb{E} [ \| e^N_s \|^p ] + \mathbb{E} \big[ \big ( 1 + \| X (s) \|_V^{2p} + \| P_N X (s) \|_V^{2p} \big ) \| (I - P_N) X(s) \|^p \big] \, \text{d} s \nonumber \\ & \leq C \! \int_0^ t \mathbb{E} [ \| e^N_s \|^p ] + \big( 1 + \| X (s) \|_{L^{4p}(\Omega, V) }^{2p} + \| P_N X (s) \|_{L^{4p}(\Omega, V) }^{2p} \big ) \big\| (I - P_N) X(s) \big\|_{L^{2p}(\Omega, H) }^{p} \, \text{d} s \nonumber \\ & \leq C \! \int_0^ t \mathbb{E} [ \| e^N_s \|^p ] \text{d} s + C N^{- p \beta }. \label{eq:spatial-error-estimate-final} \end{align} The Gronwall inequality implies the desired error bound. $\square$ \section{Spatio-temporal full discretization} \label{sect:full-discretization} This section is devoted to the error analysis of a spatio-temporal full discretization, obtained by a time discretization of the spatially discretized problem \eqref{eq:spectral-spde}. For $M \in \mathbb{N}$ we construct a uniform mesh on $[0, T]$ with $\tau = \tfrac{T}{M}$ being the time stepsize, and propose a spatio-temporal full discretization as, \begin{equation} \begin{split} \label{eq:full.Tamed-AEE} Y^{M,N}_{t_{m+1} } =& E_N(\tau) Y_{t_m}^{M,N} + \frac{ A_N^{-1} \big ( I - E_N( \tau ) \big ) F_N (Y_{t_m}^{M,N}) } { 1 + \tau \| F_N(Y_{t_m}^{M,N}) \| } + \!\int_{t_m}^{t_{m+1}}\! E_N(t_{m+1}-s) P_N \mbox{d} W(s) \end{split} \end{equation} for $m = 0, 1, ..., M - 1$ and $ Y^{M,N}_0 = P_N X_0 $. Equivalently, the full discretization \eqref{eq:full.Tamed-AEE} can be written by $ Y^{M,N}_0 = P_N X_0 $ and for $m = 0, 1, ..., M - 1$, \begin{equation} \begin{split} \label{eq:full.Tamed-AEE-Semigroup} Y^{M,N}_{t_{m+1} } =& E_N(\tau) Y_{t_m}^{M,N} + \int_{t_m}^{t_{m+1}} \frac{ E_N(t_{m+1}-s)\, F_N (Y_{t_m}^{M,N}) } { 1 + \tau \| F_N(Y_{t_m}^{M,N}) \| } \,\mbox{d}s + \!\int_{t_m}^{t_{m+1}}\! E_N(t_{m+1}-s) P_N \mbox{d} W(s). \end{split} \end{equation} Here we invoke a taming technique from \cite{hutzenthaler2012strong,wang2013tamed,Tretyakov2013fundamental,hutzenthaler15MEMAMS} for ordinary SDEs, and construct a nonlinearity-tamed accelerated exponential Euler (AEE) scheme as \eqref{eq:full.Tamed-AEE}. The so-called AEE scheme without taming was originally introduced in \cite{jentzen2009overcoming} to strongly approximate nearly linear parabolic SPDEs. Since the stochastic convolution is Gaussian distributed and diagonal with respect to the basis $\{ e_i \}_{i \in \mathbb{N}}$, the scheme is much easier to simulate than it appears at first sight (see comments in section \ref{sec:numerical-result} for the implementation).
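As a side remark, which is not used explicitly below, the role of the taming denominator in \eqref{eq:full.Tamed-AEE} can be seen from the second estimate in \eqref{eq:E.Inequality} with $\rho = 1$: since $\| A_N^{-1} ( I - E_N( \tau ) ) \|_{\mathcal{L}(H)} \leq C \tau$, the tamed drift increment obeys \begin{equation*} \bigg\| \frac{ A_N^{-1} \big ( I - E_N( \tau ) \big ) F_N (Y_{t_m}^{M,N}) } { 1 + \tau \| F_N(Y_{t_m}^{M,N}) \| } \bigg\| \leq C \tau \, \frac{ \| F_N (Y_{t_m}^{M,N}) \| } { 1 + \tau \| F_N(Y_{t_m}^{M,N}) \| } \leq C, \end{equation*} uniformly with respect to the size of the nonlinearity, so that a single step of the scheme cannot produce arbitrarily large increments.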
When the nonlinearity grows super-linearly, one cannot in general expect the usual AEE schemes \cite{jentzen2011efficient,wang2015note, wang2014higher,qi2017accelerated,lord2016modified,jentzen2009overcoming} to converge strongly, based on the observation that the standard Euler method strongly diverges for ordinary (finite-dimensional) SDEs \cite{hutzenthaler2011strong}. Also, we mention that analyzing the strong convergence rate is much more difficult than in the finite-dimensional SDE setting; we will accomplish this in the subsequent subsections. \subsection{Ingredients in the deterministic setting} \label{subsec:estimates-perturbed-PDE} First, we present some estimates involving the semigroup. \begin{lem}\label{lem:semigroup-norm} Let $t > 0$ and let $P_N, E (t)$ be defined as in the above sections. Then \begin{equation} \label{eq:lem-semigroup-regularity} \begin{split} & \| P_N E ( t ) \psi \|_{L^4( D )} \leq t^{ - \frac18 } \| \psi \|, \quad \| P_N E ( t ) \psi \|_V \leq ( \tfrac{ t } { 2 } ) ^{ - \frac12 } \| \psi \|_{ L^1( D ) }, \\ & \| P_N E ( t ) \psi \|_{ L^3( D ) } \leq t^{ - \frac{1}{12} } \| \psi \|, \quad \| P_N E ( t ) \psi \|_{ L^3( D ) } \leq ( \tfrac{ t } { 2 } )^{ - \frac{5}{24} } \| \psi \|_{ L^{\frac43}( D ) }. \end{split} \end{equation} \end{lem} { \it Proof of Lemma \ref{lem:semigroup-norm}. } The first assertion can be proved directly with the aid of \eqref{eq:lem-V-H-regularity}; see, for example, \cite[Lemma 5.6]{blomker2013galerkin} for its proof. To arrive at the second one, we use \eqref{eq:lem-V-H-regularity} with $\gamma = 0$ to get \begin{equation} \begin{split} \| P_N E ( t ) \psi \|_V & = \| P_N E ( \tfrac{t}{2} ) P_N E ( \tfrac{t}{2} ) \psi \|_V \leq ( \tfrac{t}{2} )^{ - \frac14 } \| P_N E ( \tfrac{t}{2} ) \psi \| = ( \tfrac{t}{2} )^{ - \frac14 } \sup_{ \| \phi \| \leq 1 } \big | \langle P_N E ( \tfrac{t}{2} ) \psi, \phi \rangle \big | \\ & = ( \tfrac{t}{2} )^{ - \frac14 } \sup_{ \| \phi \| \leq 1 } \big | \langle \psi, P_N E ( \tfrac{t}{2} ) \phi \rangle \big | \leq ( \tfrac{t}{2} )^{ - \frac14 } \sup_{ \| \phi \| \leq 1 } \| \psi \|_{ L^1(D) } \cdot \| P_N E ( \tfrac{t}{2} ) \phi \|_V \\ & \leq ( \tfrac{t}{2} )^{ - \frac12 } \| \psi \|_{ L^1(D) }. \end{split} \end{equation} This helps us to deal with the third one, \begin{equation} \| P_N E ( t ) \psi \|_{ L^3 (D) }^3 \leq \| P_N E ( t ) \psi \|^2 \cdot \| P_N E ( t ) \psi \|_{ V } \leq \| \psi \|^2 \cdot t^{ - \frac{1}{4} } \| \psi \| = t^{ - \frac{1}{4} } \| \psi \|^3. \end{equation} Concerning the last inequality, one similarly acquires \begin{equation} \begin{split} \| P_N E ( t ) \psi \|_{ L^3( D ) } & \leq ( \tfrac{t}{2} )^{ - \frac{1}{12} } \| P_N E ( \tfrac{t}{2} ) \psi \| = ( \tfrac{t}{2} )^{ - \frac{1}{12} } \sup_{ \| \phi \| \leq 1 } \big | \langle P_N E ( \tfrac{t}{2} ) \psi, \phi \rangle \big | \\ & = ( \tfrac{t}{2} )^{ - \frac{1}{12} } \sup_{ \| \phi \| \leq 1 } \big | \langle \psi, P_N E ( \tfrac{t}{2} ) \phi \rangle \big | \leq ( \tfrac{t}{2} )^{ - \frac{1}{12} } \sup_{ \| \phi \| \leq 1 } \| \psi \|_{ L^{\frac43} (D) } \| P_N E ( \tfrac{t}{2} ) \phi \|_{ L^{4} ( D ) } \\ & \leq ( \tfrac{t}{2} )^{ - \frac{5}{24} } \| \psi \|_{ L^{\frac43} (D) }. \end{split} \end{equation} The proof is now completed.
$\square$ In the sequel, we restrict ourselves to the following problem in $H^N$, $ N \in \mathbb{N} $, \begin{equation} \label{eq:determ-perturbed-PDE} \left\{ \begin{array}{ll} \frac{\partial v^N }{\partial t } = - A_N v^N + P_N F ( v^N + z^N ), \quad t \in (0, T], \\ v^N ( 0 ) = 0, \end{array}\right. \end{equation} where $F$ comes from Assumption \ref{ass:F} and $ z^N , v^N \colon [0, T] \rightarrow H^N $. It is easy to see that \eqref{eq:determ-perturbed-PDE} has a unique solution in $ H^N $, which can be expressed by \begin{equation} v^N ( t ) = \int_0^t E_N (t - s ) P_N F ( v^N (s) + z^N (s) ) \, \text{d} s. \end{equation} Define the norms $\| u \|_{ \mathbb{ L }^q( D \times [0, t] ) } : = \big ( \int_0^t \| u (s) \|_{L^{q} (D) }^q \, \text{d} s \big)^{\frac1q}, \, q \geq 1, \, t \in [ 0, T ] $. For the particular case $q = 2$, $\mathbb{ L }^q( D \times [0, t] ) $ ($\mathbb{ L }^q $ for brevity) becomes a Hilbert space with $\langle u, v \rangle_{ \mathbb{ L }^2 ( D \times [0, t] ) } : = \int_0^t \langle u (s), v (s) \rangle \text{d} s $. The forthcoming estimate plays an essential role in proving moment bounds of the approximations. \begin{lem} \label{lem:vN-controlled-ZN} Let $ v^{N}, N \in \mathbb{N} $ be the solution to \eqref{eq:determ-perturbed-PDE}. Then there exists a constant $C$, independent of $N$, such that \begin{equation} \| v^{N} ( t ) \|_V \leq C ( 1 + \| z^N \|_{ \mathbb{L}^9 ( D \times [0, t] ) }^9 ), \quad \forall \ t \in [0, T]. \end{equation} \end{lem} { \it Proof of Lemma \ref{lem:vN-controlled-ZN}. } The proof is divided into two steps. {\it Step 1.} For any fixed $t \in [ 0, T]$, we claim first that, by setting $ \varrho_t := 5 t^{\frac14} \max \{ \tfrac{ | a_2 | } { | a_3 | } , \big| \tfrac{ a_1 } { a_3 } \big|^\frac12, \big | \tfrac { a_0 } { a_3 } \big|^{ \frac13 } \} $, \begin{equation} \label{eq:claim1} \| v^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } \leq 5 \| z^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } \quad \text{ or } \quad \| v^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } \leq \varrho_t. \end{equation} By elementary calculus and noting $ A_N v^N = A v^N$ for any $v^N \in H^N$, we infer \begin{equation}\label{eq:inner-produc-positive} \begin{split} 0 & \leq \tfrac12 \| v^N (t) \|^2 = \int_0^t \left \langle v^N ( s ), - A_N v^N ( s ) + P_N F ( v^N(s) + z^N (s) ) \right\rangle \text{d} s \\ & \leq \int_0^t \left \langle v^N ( s ), F ( v^N(s) + z^N (s) ) \right\rangle \text{d} s = \langle v^N, F ( v^N + z^N ) \rangle_{ \mathbb{ L }^2 }. \end{split} \end{equation} Noticing that $ a_3 < 0 $, for any $ v , z \in \mathbb{R}$, \begin{equation} \begin{split} v f ( v + z ) & = a_3 v ( v + z )^3 + a_2 v ( v + z )^2 + a_1 v ( v + z ) + a_0 v \\ & \leq a_3 v^4 + 3 a_3 v^3 z + a_3 v z^3 + a_2 v^3 + 2 a_2 v^2 z + a_2 v z^2 + a_1 v^2 + a_1 v z + a_0 v.
\end{split} \end{equation} where the nonpositive term $3 a_3 v^2 z^2$ has been discarded thanks to $a_3 < 0$. Integrating this bound over $D \times [0, t]$ and using the H\"{o}lder inequality, one derives \begin{equation} \begin{split} \langle v^N, F ( v^N + z^N ) \rangle_{ \mathbb{ L }^2 } & \leq a_3 \| v^N \|_{ \mathbb{ L }^4 }^4 + 3 | a_3 | \| v^N \|_{ \mathbb{ L }^4 }^3 \| z^N \|_{ \mathbb{ L }^4 } + | a_3 | \| v^N \|_{ \mathbb{ L }^4 } \| z^N \|_{ \mathbb{ L }^4 }^3 \\ & \quad + | a_2 | t^{ \frac14 } \| v^N \|_{ \mathbb{ L }^4 }^3 + 2 | a_2 | t^{ \frac14 } \| v^N \|_{ \mathbb{ L }^4 }^2 \| z^N \|_{ \mathbb{ L }^4 } + | a_2 | t^{ \frac14 } \| v^N \|_{ \mathbb{ L }^4 } \| z^N \|_{ \mathbb{ L }^4 }^2 \\ & \quad + | a_1 | t^{ \frac12 } \| v^N \|_{ \mathbb{ L }^4 }^2 + | a_1 | t^{ \frac12 } \| v^N \|_{ \mathbb{ L }^4 } \| z^N \|_{ \mathbb{ L }^4 } + | a_0 | t^{ \frac34 } \| v^N \|_{ \mathbb{ L }^4 } . \end{split} \end{equation} Assume the claim \eqref{eq:claim1} is false, namely, $ \| z^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } < \frac{ 1 }{ 5 } \| v^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } $ and $ \| v^N \|_{ \mathbb{ L }^4( D \times [0, t] ) } > \varrho_t. $ This enables us to derive \begin{equation} \begin{split} \langle v^N, F ( v^N + z^N ) \rangle_{ \mathbb{ L }^2 } & \leq \big ( a_3 + \tfrac{ 3 | a_3 | }{ 5 } + \tfrac{ | a_3 | }{ 125 } \big) \| v^N \|_{ \mathbb{ L }^4 }^4 + \big( | a_2 | t^{ \frac14 } + \tfrac{ 2 | a_2 | t^{ \frac14 } }{ 5 } + \tfrac{ | a_2 | t^{ \frac14 } }{ 25 } \big) \| v^N \|_{ \mathbb{ L }^4 }^3 \\ & \quad + \big ( |a_1| t^\frac12 + \tfrac{ | a_1 | t^{ \frac12 } }{ 5 } \big) \| v^N \|_{ \mathbb{ L }^4 }^2 + | a_0 | t^{ \frac34 } \| v^N \|_{ \mathbb{ L }^4 } \\ & < \big( a_3 + \tfrac{ 76 | a_3 | }{ 125 } + \tfrac{ | a_2 | t^{ \frac14 } }{ \varrho_t } + \tfrac{ 2 | a_2 | t^{ \frac14 } }{ 5 \varrho_t } + \tfrac{ | a_2 | t^{ \frac14 } }{ 25 \varrho_t } + \tfrac{ |a_1| t^\frac12 } { \varrho_t^2 } + \tfrac{ | a_1 | t^{ \frac12 } }{ 5 \varrho_t^2 } + \tfrac{ | a_0 | t^{ \frac34 } } { \varrho_t^3 } \big) \| v^N \|_{ \mathbb{ L }^4 }^4 \\ & \leq \tfrac{ 6 a_3 }{ 125 } \| v^N \|_{ \mathbb{ L }^4 }^4 < 0, \end{split} \end{equation} which contradicts \eqref{eq:inner-produc-positive}. {\it Step 2.} Clearly, \eqref{eq:claim1} implies \begin{equation} \| v^N \|_{ \mathbb{ L }^4 ( D \times [0, t] ) } \leq 5 \| z^N \|_{ \mathbb{ L }^4 ( D \times [0, t] ) } + \varrho_T, \quad \forall t \in [0, T]. \end{equation} This together with the last inequality in \eqref{eq:lem-semigroup-regularity}, the cubic growth of the nonlinearity and the H\"{o}lder inequality yields, for any $ t \in [ 0, T]$, \begin{equation}\label{vN-L3-estimate} \begin{split} \| v^N ( t ) \|_{L^3 ( D ) } & \leq \int_0^t \| E (t - s ) P_N F ( v^N (s) + z^N (s) ) \|_{L^3 ( D ) } \, \text{d} s \\ & \leq \int_0^t ( \tfrac { t - s} {2} )^{ - \frac{5}{24} } \| F ( v^N (s) + z^N (s) ) \|_{L^{\frac43} ( D ) } \, \text{d} s \\ & \leq C \int_0^t ( \tfrac { t - s} {2} )^{ - \frac{ 5 } { 24 } } \big( 1 + \| v^N (s) \|_{ L^4 (D) }^3 + \| z^N (s) \|_{ L^4 (D) }^3 \big) \, \text{d} s \\ & \leq C \Big ( \int_0^t ( \tfrac { t - s} {2} )^{ - \frac{ 5 } { 6 } } \, \text{d} s \Big )^{ \frac14 } \Big ( \int_0^t 1 + \| v^N (s) \|_{ L^4 (D) }^4 + \| z^N (s) \|_{ L^4 (D) }^4 \, \text{d} s \Big )^{ \frac34 } \\ & \leq C \big ( 1 + \| z^N \|_{ \mathbb{L}^4 ( D \times [0, t] ) }^3 \big).
\end{split} \end{equation} Likewise, by virtue of the second inequality in \eqref{eq:lem-semigroup-regularity} instead, one obtains, for any $ t \in [ 0, T]$, \begin{equation} \begin{split} \| v^N ( t ) \|_V & \leq \int_0^t \| P_N E ( t-s) F ( v^N (s) + z^N (s) ) \|_V \, \text{d} s \\ & \leq C \int_0^t ( \tfrac{ t - s}{2} )^{-\frac12} \| F ( v^N (s) + z^N (s) ) \|_{L^1(D)} \, \text{d} s \\ & \leq C \int_0^t ( t - s )^{-\frac12} ( 1 + \| v^N (s) \|_{L^3(D)}^3 + \| z^N (s) \|_{L^3(D)}^3 ) \, \text{d} s \\ & \leq C \bigg ( 1 + \| z^N \|_{ \mathbb{L}^4 ( D \times [0, t] ) }^9 + \int_0^t ( t - s )^{-\frac12} \| z^N (s) \|_{L^3(D)}^3 \, \text{d} s \bigg) \\ & \leq C ( 1 + \| z^N \|_{ \mathbb{L}^9 ( D \times [0, t] ) }^9 ). \end{split} \end{equation} The proof of Lemma \ref{lem:vN-controlled-ZN} is thus finished. $\square$ \subsection{A priori moment bounds of the approximations} \label{subsec:a-priori-moment-numerical} This subsection aims to obtain a priori estimates of the full discrete approximation, which require estimates from the previous subsection as well as a certain bootstrap argument. First, we define \begin{equation} \lfloor t \rfloor : = t_i, \quad \text{ for } t \in [t_i, t_{i+1}), \quad i \in \{ 0, 1, ... ,M-1\} , \end{equation} and introduce a continuous version of the full discrete scheme \eqref{eq:full.Tamed-AEE-Semigroup} as, \begin{equation} \label{eq:full-discrete-continous} \begin{split} Y^{M,N}_t = & E_N(t) Y_0^{M,N} + \int_{0}^{t} \tfrac{ E_N(t - s)\, F_N(Y^{M,N}_{\lfloor s \rfloor}) } { 1 + \tau \| F_N(Y^{M,N}_{\lfloor s \rfloor}) \| } \,\mbox{d}s + \mathcal{O}_t^N, \quad \mbox{ with } \mathcal{O}_t^N := P_N \mathcal{O}_t, \, t \in [0, T]. \end{split} \end{equation} By $B^c$ and $\mathds{1}_B$, we denote the complement and indicator function of a set $B$, respectively. Additionally, we introduce a sequence of decreasing subevents \begin{equation}\label{Eq:tame.Omega} \Omega_{R,t_i} := \Big \{ \omega\in\Omega:\sup_{j \in \{ 0, 1, ... ,i\} } \| Y^{M,N}_{ t_j }(\omega) \|_V \leq R \Big\}, \quad R \in (0, \infty), \, i \in \{ 0, 1, ... ,M\}. \end{equation} It is clear that $ \mathds{1}_{\Omega_{R,t_i}} \in \mathcal{F}_{t_i}$ for $ i \in \{ 0, 1, ... ,M\} $ and $\mathds{1}_{ \Omega_{R,t_i} } \leq \mathds{1}_{ \Omega_{R, t_j} }$ for $t_i \geq t_j$ since $ \Omega_{R,t_i} \subset \Omega_{R, t_j}, t_i \geq t_j $. In addition, we impose a further assumption on the initial data. \begin{assumption}\label{ass:X0-additional} For sufficiently large positive number $ p_0 \in \mathbb{N} $, the initial data $X_0$ obeys \begin{equation} \label{eq:ass-X0-additional} \sup_{ N \in \mathbb{N} } \| P_N X_0 \|_{L^{p_0}( \Omega, V)} < \infty. \end{equation} \end{assumption} Due to the Sobolev embedding inequality, \eqref{eq:ass-X0-additional} is fulfilled provided $\| X_0 \|_{L^{p_0}( \Omega, \dot{H}^{\gamma})} < \infty$ for some $\gamma > \tfrac12$. Next we start the bootstrap argument, by showing $ \mathbb{E}\big[ \mathds{1}_{ \Omega_{ R^{ \tau },t_m} } \| Y^{M,N}_{t_m} \|_V^{p} \big] < \infty$ and $ \mathbb{E} \big [ \mathds{1}_{\Omega^c_{R^{\tau},t_m}} \|Y_{t_m}^{M,N} \|_V^p \big ] < \infty $ for subevents $ \Omega_{ R^{ \tau },t} $ with a carefully chosen $ R^{ \tau } $ depending on $\tau$. \begin{lem}\label{Lem:Full-discrete-MB} Let $ p \in [2, \infty)$ and $ R^{\tau} := \tau^{-\frac{\beta}{ 4 }} $ for any $ \beta \in (0, \tfrac12) $.
Under Assumptions \ref{ass:A}-\ref{ass:X0}, \ref{ass:X0-additional}, the approximation process $Y^{M,N}_{t_i}, i \in \{ 0, 1,..., M \}$ produced by \eqref{eq:full.Tamed-AEE} obeys \begin{equation} \label{eq:lem-moment-bound0} \sup_{ M, N \in \mathbb{N}} \sup_{ i \in \{ 0, 1,..., M \}} \mathbb{E}\big[ \mathds{1}_{\Omega_{ R^{ \tau }, t_{i-1} } } \| Y^{M,N}_{t_i} \|_V^{p} \big] < \infty, \end{equation} where we set $ \mathds{1}_{\Omega_{R^{\tau},t_{-1} }} = 1$. \end{lem} {\it Proof of Lemma \ref{Lem:Full-discrete-MB}. } The proof relies heavily on Lemma \ref{lem:vN-controlled-ZN}. In order to apply it, we introduce a process $Z_t^{ M, N} $ given by, \begin{equation} \label{eq:Zt-defn} \begin{split} Z_t^{ M, N} & := E_N(t) Y_0^{M,N} + \int_0^t E_N ( t - s ) \Big[ \tfrac{ P_N F( Y^{M,N}_{\lfloor s \rfloor} ) }{ 1 + \tau \| P_N F( Y^{M,N}_{\lfloor s \rfloor} ) \| } - P_N F( Y^{M,N}_s ) \Big] \,\text{d} s + \mathcal{O}_t^N \\ & = E_N(t) Y_0^{M,N} + \int_0^t E(t-s) P_N \big [ F(Y^{M,N}_{\lfloor s \rfloor}) - F(Y^{M,N}_s) \big ] \,\text{d} s \\ & \quad + \int_0^t E(t-s) \Big [ \tfrac{ P_N F ( Y^{M,N}_{ \lfloor s \rfloor } ) } {1 + \tau\| P_N F(Y^{M,N}_{\lfloor s \rfloor}) \|} - P_N F ( Y^{M,N}_{ \lfloor s \rfloor} ) \Big ] \,\text{d} s + P_N \mathcal{O}_t, \quad t \in [0, T]. \end{split} \end{equation} With this, one can rewrite \eqref{eq:full-discrete-continous} as \begin{equation} \label{eq:full-discrete-continous-form2} \begin{split} Y^{M,N}_t & = \int_0^t E_N (t - s) P_N F ( Y^{M,N}_s ) \,\text{d} s + Z_t^{ M, N}. \end{split} \end{equation} Further, we define $\bar{Y}^{M,N}_t$ as \begin{equation} \label{eq:defn-Ybar} \bar{Y}^{M,N}_t := Y^{M,N}_t - Z^{M,N}_t, \quad \text{ with } \quad \bar{Y}^{M,N}_0 = 0 . \end{equation} Once again, we recast \eqref{eq:full-discrete-continous-form2} as \begin{equation} \bar{Y}^{M,N}_t = \int_0^t E_N (t-s) P_N F ( \bar{Y}^{M,N}_s+Z^{M,N}_s ) \,\text{d} s, \quad t \in [0, T], \end{equation} which satisfies \begin{equation} \frac{\text{d} } { \text{d} t} \bar{Y}^{M,N}_t = - A_N \bar{Y}^{M,N}_t + P_N F(\bar{Y}^{M,N}_t+Z^{M,N}_t), \quad t \in ( 0, T], \quad \bar{Y}^{M,N}_0 = 0. \end{equation} Now one can employ Lemma \ref{lem:vN-controlled-ZN} to deduce, \begin{equation} \| \bar{Y}^{M,N}_t \|_V \leq C ( 1 + \| Z^{M,N} \|_{ \mathbb{L}^9 ( D \times [0, t] ) }^9 ), \quad t \in [0, T], \end{equation} where $Z^{M,N}_\cdot$ is defined by \eqref{eq:Zt-defn}. Thus, for any $ i \in \{ 0, 1,...,M \} $, \begin{equation}\label{eq:Zt-Yt} \begin{split} \mathbb{E} \big[ \mathds{1}_{\Omega_{R,t_{i-1}}} \| \bar{ Y }^{ M, N }_{t_i} \|_V^p \big] & \leq C \big( 1 + \mathbb{E} \big[ \mathds{1}_{\Omega_{R,t_{i-1} }} \| Z^{M,N} \|_{ \mathbb{L}^{9p}( D \times [0, t_i] ) }^{9p} \big] \big) \\ & \leq C \Big( 1 + \mathbb{E} \Big[ \mathds{1}_{\Omega_{R,t_{i-1} }} \! \int_0^{t_i} \| Z^{M,N}_s \|_V^{9p} \, \text{d} s \Big] \Big), \end{split} \end{equation} where, for $s \in [0, t_i]$, $ i \in \{ 0, 1,...,M \} $, it holds that \begin{equation} \label{eq:Z-V-subevents} \begin{split} \mathds{1}_{\Omega_{R,t_{i-1} }} \| Z^{M,N}_s \|_V & \leq \| E_N( s ) Y_0^{M,N} \|_V + \mathds{1}_{\Omega_{R,t_{i-1} }} \bigg\| \!
\int_0^s E ( s - r ) P_N \big [ F( Y^{M,N}_r ) - F( Y^{ M,N }_{ \lfloor r \rfloor } ) \big ] \,\text{d} r \bigg\|_V \\ & \quad + \mathds{1}_{\Omega_{R,t_{i-1} }} \bigg\| \int_0^s E(s-r) P_N F(Y^{M,N}_{\lfloor r \rfloor}) \tfrac{\tau \|P_N F(Y^{M,N}_{\lfloor r \rfloor})\|}{1 + \tau \| P_N F(Y^{M,N}_{\lfloor r \rfloor}) \|} \,\text{d} r \bigg\|_V + \| P_N \mathcal{O}_s \|_V \\ & := \| E_N( s ) Y_0^{M,N} \|_V + I_1 + I_2 + \| P_N \mathcal{O}_s \|_V. \end{split} \end{equation} Before proceeding further, we claim \begin{equation} \label{eq:Omega-Y-estimate} \mathds{1}_{\Omega_{R,t_{i-1} }} \| Y_r^{M,N} \|_V \leq C ( 1 + R + \tau^{\frac34} R^{3} ), \quad \forall \, r \in [0, t_i]. \end{equation} For the case $ r \in (t_{i-1}, t_i] $, the definition of $Y_r^{M,N} $, the boundedness of the semigroup $E (t)$ in $V$ and \eqref{eq:lem-V-H-regularity} with $\gamma = 0$ yield \begin{equation} \label{eq:Omega-Y-estimate1} \begin{split} \mathds{1}_{\Omega_{R,t_{i-1} }} \| Y_r^{M,N} \|_V & \leq \mathds{1}_{\Omega_{R,t_{i-1} }} \Big( \| E ( r - t_{i-1} ) Y_{t_{i-1}}^{M,N} \|_V + \int_{ t_{i-1} }^r \| E ( r - u ) F_N ( Y_{ \lfloor u \rfloor }^{M,N} ) \|_V \, \text{d} u \\ & \quad + \big \| \int_{ t_{i-1} }^r E ( r - u ) P_N \, \text{d} W_u \big\|_V \Big) \\ & \leq C ( R + \tau^{\frac34} R^{3} + 1 ). \end{split} \end{equation} For the case $ r \in [0, t_{i-1}] $, we recall $\mathds{1}_{\Omega_{R,t_{i-1} }} \leq \mathds{1}_{\Omega_{R,\lfloor r \rfloor }} $, which allows us to get $\mathds{1}_{\Omega_{R,t_{i-1} }} \| Y_r^{M,N} \|_V \leq \mathds{1}_{\Omega_{R,\lfloor r \rfloor }} \| Y_r^{M,N} \|_V$. Then repeating the same arguments as used in \eqref{eq:Omega-Y-estimate1} shows \eqref{eq:Omega-Y-estimate}. With the aid of \eqref{eq:F-one-sided-condition} and \eqref{eq:Omega-Y-estimate}, the first term $I_1$ can be treated as follows, \begin{equation} \label{eq:I1-first} \begin{split} I_1 & \leq \mathds{1}_{\Omega_{R,t_{i-1} }} \int_0^s (s-r)^{- \frac{1}{4}} \| F(Y^{M,N}_r) - F(Y^{M,N}_{\lfloor r \rfloor}) \| \,\text{d} r \\ & \leq \mathds{1}_{\Omega_{R,t_{i-1} }} C ( 1 + R^2 + \tau^\frac32 R^6 ) \int_0^s (s-r)^{- \frac{1}{4}} \| Y^{M,N}_r - Y^{M,N}_{\lfloor r \rfloor} \| \,\text{d} r, \end{split} \end{equation} where $r \in [0, s]$, $ s \in [0, t_i]$, \begin{equation} \begin{split} Y^{M,N}_r - Y^{M,N}_{\lfloor r \rfloor} & = [ E ( r ) - E ( \lfloor r \rfloor ) ] Y^{ M, N}_0 + \int_0^r E(r-u) \tfrac{ P_N F (Y^{M,N}_{\lfloor u \rfloor}) } { 1 + \tau \| P_N F (Y^{M,N}_{\lfloor u \rfloor}) \| } \,\text{d} u \\ & \quad - \int_0^{\lfloor r \rfloor} E({\lfloor r \rfloor}-u) \tfrac{ P_N F (Y^{M,N}_{\lfloor u \rfloor}) } { 1 + \tau \| P_N F (Y^{M,N}_{\lfloor u \rfloor}) \| } \,\text{d} u + P_N \mathcal{O}_r - P_N \mathcal{O}_{\lfloor r \rfloor}.
\end{split} \end{equation} This suggests that \begin{equation} \begin{split} & \mathds{1}_{\Omega_{R,t_{i-1} }} \| Y^{M,N}_r - Y^{M,N}_{\lfloor r \rfloor} \| \\ & \quad \leq C \tau^{\frac{ \beta }{2}} \| Y^{M,N}_0 \|_\beta + \mathds{1}_{\Omega_{R,t_{i-1}}} \Big\| \int_0^{\lfloor r \rfloor} E({\lfloor r \rfloor}-u) \big( E( r - {\lfloor r \rfloor})-I \big) \tfrac{ P_N F (Y^{M,N}_{\lfloor u \rfloor}) } { 1 + \tau \| P_N F (Y^{M,N}_{\lfloor u \rfloor}) \| } \,\text{d} u \Big\| \\ & \qquad + \mathds{1}_{\Omega_{R,t_{i-1} }} \Big\| \int_{\lfloor r \rfloor}^r E ( r - u ) \tfrac{ P_N F (Y^{M,N}_{\lfloor u \rfloor}) } { 1 + \tau \| P_N F (Y^{M,N}_{\lfloor u \rfloor}) \| } \,\text{d} u \Big \| + \mathds{1}_{\Omega_{R,t_{i-1} }} \| P_N ( \mathcal{O}_r - \mathcal{O}_{\lfloor r \rfloor} ) \| \\ & \leq C \tau^{\frac{ \beta }{ 2 } } \| X_0 \|_\beta + C(1 + R^ { 3 } ) (\tau^{ \frac34 } + \tau) + \| \mathcal{O}_r - \mathcal{O}_{\lfloor r \rfloor} \| . \end{split} \end{equation} Inserting this into \eqref{eq:I1-first} results in \begin{equation} \begin{split} I_1 & \leq C ( 1 + R^2 + \tau^\frac32 R^6 ) \int_0^s ( s - r )^{- \frac{1}{4} } \big [ C \tau^{\frac{\beta}{2}} \| X_0 \|_\beta + C( 1+R^{ 3 } ) \tau^{ \frac34 } + \| \mathcal{O}_r - \mathcal{O}_{\lfloor r \rfloor} \| \big] \, \text{d} r \\ & \leq C(1 + R^2 + \tau^\frac32 R^6 ) \tau^{\frac{\beta}{2}} \| X_0 \|_\beta s^{\frac{3}{4}} + C(1 + R^{ 2 } + \tau^{\frac32} R^6 )(1 + R^{3 } ) \tau^{ \frac34 } s^{\frac{3}{4}} \\ & \quad + C \! \int_0^s ( s - r)^{- \frac14} (1 + R^2 + \tau^\frac32 R^6 ) \| \mathcal{O}_r - \mathcal{O}_{\lfloor r \rfloor} \| \,\text{d} r. \end{split} \end{equation} Therefore, letting $ R = R^{\tau} := \tau^{-\frac{\beta}{ 4 }} $ and considering \eqref{eq:lem-stoch-conv-regularity} one can further infer that \begin{equation}\label{eq:I1-moment-bound} \| I_1 \|_{L^{9p} (\Omega, \mathbb{R}) } \leq C ( 1 + \| X_0 \|_{L^{9p} (\Omega, \dot{H}^{\beta} ) } ). \end{equation} In a similar manner, choosing $ R = R^{\tau} := \tau^{-\frac{\beta}{ 4 }} $ enables us to arrive at \begin{equation} \label{eq:I2-estimate} \begin{split} I_2 & \leq \mathds{1}_{\Omega_{R,t_{i-1}}} \int_0^s \| E( s -r ) P_N F(Y^{M,N}_{\lfloor r \rfloor}) \|_V \cdot \tau \| P_N F(Y^{M,N}_{\lfloor r \rfloor})\| \,\text{d} r \\ & \leq \mathds{1}_{\Omega_{R,t_{i-1}}} \tau \int_0^s ( s - r )^{ -\frac{ 1 }{ 4 } } \| F(Y^{M,N}_{\lfloor r \rfloor}) \|^2 \,\text{d} r \leq C \tau (R^{6} + 1) \\ & \leq C ( \tau^{ \frac{ 2 - 3\beta}{2} } + \tau ) . \end{split} \end{equation} Bearing \eqref{eq:I1-moment-bound}, \eqref{eq:I2-estimate} and \eqref{eq:lemma-PN-Stoch-Conv-V-bound} in mind, one can deduce from \eqref{eq:Z-V-subevents} that, for any $ s \in [0, t_i]$, \begin{equation} \label{eq:Z-MB-subevent} \mathbb{E} [ \mathds{1}_{\Omega_{R,t_{i-1} }} \| Z^{M,N}_s \|_V ^{ 9p} ] \leq C < \infty. \end{equation} This together with \eqref{eq:Zt-Yt} immediately implies \begin{equation} \mathbb{E} [ \mathds{1}_{\Omega_{R^{\tau},t_{i-1}}} \| \bar{Y}^{M,N}_{t_i} \|_V^p] \leq C < \infty . \end{equation} Combining this with \eqref{eq:defn-Ybar} verifies the desired assertion \eqref{eq:lem-moment-bound0}. $\square$ \begin{thm}\label{thm:Numer-Moment-Bound} Let Assumptions \ref{ass:A}-\ref{ass:X0}, \ref{ass:X0-additional} be fulfilled. Then for any $p \in [2, \infty)$, \begin{equation} \label{eq:thm-Numer-Moment-Bound} \sup_{ M, N \in \mathbb{N}} \sup_{ m \in \{ 0, 1,..., M \}} \mathbb{E}\big[ \| Y^{M,N}_{t_m} \|_V^{p} \big] < \infty.
\end{equation} \end{thm} {\it Proof of Theorem \ref{thm:Numer-Moment-Bound}.} Since \eqref{eq:lem-moment-bound0} and the fact that $ \Omega_{R,t_i} \subset \Omega_{R,t_{i-1}} $ ensure \begin{equation} \label{eq:estimate-large-events} \sup_{ M, N \in \mathbb{N}} \sup_{ i \in \{ 0, 1,..., M \}} \mathbb{E}\big[ \mathds{1}_{\Omega_{ R^{ \tau }, t_{i} } } \| Y^{M,N}_{t_i} \|_V^{p} \big] \leq \sup_{ M, N \in \mathbb{N}} \sup_{ i \in \{ 0, 1,..., M \}} \mathbb{E}\big[ \mathds{1}_{\Omega_{ R^{ \tau }, t_{i-1} } } \| Y^{M,N}_{t_i} \|_V^{p} \big] < \infty, \end{equation} it remains to estimate $ \sup_{ M, N \in \mathbb{N}} \sup_{ m \in \{ 0, 1,..., M \}} \mathbb{E} [ \mathds{1}_{\Omega_{R^{\tau},t_m}^c} \| Y^{M,N}_{t_m} \|_V^p]. $ It is straightforward to check that \begin{equation} \label{eq:numer-bound-1} \begin{split} \|Y_{t_m}^{M,N}\|_V & \leq \|E( t_m ) P_N X_0 \|_V + \| \mathcal{O}_{t_m}^N \|_V + \int_0^{t_m} \Big \| E( t_m - s ) \tfrac{P_N \, F(Y^{M,N}_{\lfloor s \rfloor})}{1+\tau\| P_N \, F(Y^{M,N}_{\lfloor s \rfloor}) \|} \Big \|_V \,\text{d} s \\ & \leq C t_m^{\frac{ 2 \beta - 1}{4} } \| X_0 \|_\beta + \int_0^{t_m} \Big \| A^{\frac{\eta}{2}} E(t_m - s) \tfrac{P_N \, F(Y^{M,N}_{\lfloor s \rfloor})} {1+\tau\| P_N F(Y^{M,N}_{\lfloor s \rfloor}) \|} \Big \| \,\text{d} s + \| \mathcal{O}_{t_m}^N \|_V \\ & \leq C t_m^{\frac{ 2 \beta - 1}{4} } \| X_0 \|_\beta + C \tau^{-1} + \| \mathcal{O}_{t_m}^N \|_V, \quad m \in \{ 1, 2,..., M \}, \end{split} \end{equation} with some fixed $\eta > \tfrac12$ so that $\dot{H}^{\eta} \hookrightarrow V$. Meanwhile, observe that \begin{equation} \Omega^c_{R^{\tau},t_m} = \Omega^c_{R^{\tau},t_{m-1}} \cup \big( \Omega_{R^{\tau},t_{m-1}} \cap \{\omega\in\Omega : \| Y^{M,N}_{t_m}(\omega)\|_V > R^{\tau } \} \big), \end{equation} where the union is disjoint, and as a result \begin{equation} \begin{split} \mathds{1}_{\Omega^c_{R^{\tau},t_m}} & = \mathds{1}_{\Omega^c_{R^{\tau},t_{m-1}}} + \mathds{1}_{\Omega_{R^{\tau},t_{m - 1} } } \cdot \mathds{1}_{\{\| Y^{M,N}_{t_m}\|_V > R^{\tau}\}} = \sum_{i=0}^m \mathds{1}_{\Omega_{R^{\tau}, t_{i-1} } } \cdot \mathds{1}_{\{\| Y^{M,N}_{t_i}\|_V > R^{\tau}\}} , \end{split} \end{equation} where we recall $ \mathds{1}_{\Omega^c_{R^{\tau},t_{-1} }} = 0. $ By Assumptions \ref{ass:X0}, \ref{ass:X0-additional}, \eqref{eq:numer-bound-1}, \eqref{eq:lemma-PN-Stoch-Conv-V-bound} and the Chebyshev inequality, one can show, for any $M \in \mathbb{N}$ and $m \in \{ 0, 1,...,M \}$, \begin{equation} \begin{split} \mathbb{E} \big [ \mathds{1}_{\Omega^c_{R^{\tau},t_m}} \|Y_{t_m}^{M,N} \|_V^p \big ] & = \sum_{i=0}^m \mathbb{E} \big [ \|Y_{t_m}^{M,N} \|_V^p \cdot \mathds{1}_{\Omega_{ R^{\tau},t_{i-1} } } \mathds{1}_{\{\| Y^{M,N}_{t_i}\|_V > R^{\tau}\}} \big ] \\ & \leq \sum_{i=0}^m \Big ( \mathbb{E} \big [ \|Y_{t_m}^{M,N} \|_V^{2p} \big ] \Big )^{\frac{1}{2}} \cdot \Big( \mathbb{E} \big [ \mathds{1}_{\Omega_{R^{\tau}, t_{i-1} }} \mathds{1}_{\{\| Y^{M,N}_{t_i}\|_V > R^{\tau}\}} \big ] \Big )^{\frac{1}{2}} \\ & \leq \sum_{i=0}^m C ( 1+\tau^{-p}) \cdot \big( \mathbb{P} (\mathds{1}_{\Omega_{R^{\tau},t_{i-1} }} \| Y^{M,N}_{t_i}\|_V > R^{\tau}) \big) ^{\frac{1}{2}} \\ & \leq C ( 1+\tau^{-p}) \sum_{i=0}^m \Big ( \mathbb{E} \Big [ \mathds{1}_{\Omega_{R^{\tau}, t_{i-1} }} \| Y^{M,N}_{t_i}\|_V^{ \frac{8(p+1)}{\beta}} / (R^{\tau})^{ \frac{8(p+1)}{\beta} } \Big ] \Big ) ^{\frac{1}{2}} \\ & \leq C ( 1+\tau^{-p}) \sum_{i=0}^m \tau^{ p+1 } \Big ( \mathbb{E} \Big [ \mathds{1}_{\Omega_{R^{\tau}, t_{i-1} }} \| Y^{M,N}_{t_i}\|_V^{ \frac{8(p+1)}{\beta}} \Big ] \Big ) ^{\frac{1}{2}} < \infty. \end{split} \end{equation} This estimate together with \eqref{eq:estimate-large-events} yields the required estimate \eqref{eq:thm-Numer-Moment-Bound}.
$\square$ With Theorem \ref{thm:Numer-Moment-Bound} at hand, one can use standard arguments to validate the following corollaries. \begin{cor} \label{cor:numer-spatial-regularity} Under the conditions of Theorem \ref{thm:Numer-Moment-Bound}, for any $p \in [2, \infty)$ and $\beta < \tfrac12$ we obtain, \begin{equation} \sup_{ M, N \in \mathbb{N}, \, t \in [0, T]} \| Y^{M,N}_{t} \| _{ L^p ( \Omega, \dot{H}^{\beta} ) } + \sup_{ M, N \in \mathbb{N}, \, t \in [0, T]} \| Y^{M,N}_{t} \| _{ L^p ( \Omega, V ) } < \infty. \end{equation} \end{cor} \begin{cor} Under the conditions of Theorem \ref{thm:Numer-Moment-Bound}, for any $p \in [2, \infty)$ and $\beta < \tfrac12$ we get, \begin{equation}\label{eq:numer-regularity-normal} \| Y_t^{M,N} - Y^{M,N}_s \|_{ L^p ( \Omega, H ) } \leq C (t-s)^{ \frac{\beta}{2} } , \quad 0 \leq s<t \leq T. \end{equation} \end{cor} \subsection{Further technical lemmas} \label{subset:negative-sobleve} In addition to the above preparations, we also rely on the following results, which are essential for identifying the expected temporal convergence rate of order almost $\tfrac12$. \begin{lem}\label{Lem:F-negative-soblev} Let $F \colon L^{6} ( D; \mathbb{R}) \rightarrow H$ be a mapping determined by Assumption \ref{ass:F}. Then it holds for any $\beta \in (0, \tfrac12)$ and $ \eta > \tfrac12$ that \begin{equation} \label{eq:lem-derivative-F-regularity} \| F'( \chi ) \nu \|_{ - \eta } \leq C \big(1+\max{ \{ \| \chi \|_V ,\| \chi \|_{\beta} \} } ^{2} \big) \| \nu \|_{-\beta}, \quad \chi \in V \cap \dot{H}^{\beta}, \nu \in V. \end{equation} \end{lem} {\it Proof of Lemma \ref{Lem:F-negative-soblev}. } As $ \beta \in (0, \frac{1}{2} ) $, standard arguments with the Sobolev-Slobodeckij norm (see, e.g., \cite[(19.14)]{thomee2006galerkin}) and properties of the nonlinearity guarantee \begin{equation} \begin{split} \| F' ( \chi ) \psi \|_{\beta}^2 & \leq C \|F' ( \chi ) \psi \|^2 + C \int_0^1 \int_0^1 \frac{\big|f'(\chi(x)) \psi(x) - f'(\chi(y)) \psi(y)\big|^2} {|x-y|^{2{\beta}+1}} \, \text{d} y \text{d} x \\ & \leq C \|F ' ( \chi ) \psi \|^2 + C \int_0^1 \int_0^1 \frac{\big|f'(\chi(x)) (\psi(x)-\psi(y))\big|^2} {|x-y|^{2{\beta}+1}} \, \text{d} y \text{d} x \\ & \quad + C \int_0^1 \int_0^1 \frac{ \big| [ f'(\chi(x))- f'(\chi(y)) ] \psi(y)\big|^2} {|x-y|^{2{\beta}+1}} \, \text{d} y \text{d} x \\ & \leq C \big\|F ' ( \chi ) \psi \big\|^2 + C \big\| f'(\chi(\cdot)) \big\|_V^2 \cdot \| \psi \|^2_{W^{{\beta},2}} + C \big\| f''( \chi(\cdot) ) \big\|_V^2 \cdot \|\psi\|_V^2 \cdot \| \chi \|^2_{W^{{\beta},2}} \\ & \leq C \big( 1+ \|\chi\|_V^{ 4 } \big) \|\psi\|^2 + C \big( 1+ \|\chi\|_V^{ 4 } \big) \|\psi\|^2_{\beta} + C \big( 1 + \|\chi\|_V^{ 2 } \big) \|\psi\|^2_V \cdot \|\chi\|^2_{\beta} \\ & \leq C \big(1+\max{ \{ \| \chi \|_V ,\| \chi \|_{\beta} \} } ^{4} \big) (\|\psi\|^2_{\beta} + \|\psi\|^2_V ).
\end{split} \end{equation} Accordingly, for any $\beta \in (0, \tfrac12)$ and $ \eta > \tfrac12$, one uses the Sobolev embedding inequality to derive \begin{equation} \begin{split} \| F' ( \chi ) \nu \|_{ - \eta} & = \sup_{\| \varphi \| \leq 1 } \Big|\big\langle A^{-\frac{\eta}{2}} F' ( \chi ) \nu ,\varphi \big\rangle \Big| = \sup_{\| \varphi \| \leq 1 } \Big|\big\langle \nu, ( F' ( \chi ) )^\ast A^{-\frac{\eta}{2}} \varphi \big\rangle \Big| \\ & =\sup_{\| \varphi \| \leq 1 } \Big|\big\langle A^{-\frac{\beta}{2}} \nu , A^{\frac{\beta}{2}} F' ( \chi ) A^{-\frac{\eta}{2}}\varphi \big\rangle \Big| \\ & \leq \sup_{\| \varphi \| \leq 1 } \| \nu\|_{-\beta} \cdot \big\| F' ( \chi ) A^{-\frac{\eta}{2}}\varphi \big\|_{\beta} \\ & \leq \sup_{\| \varphi \| \leq 1 } \| \nu\|_{-\beta} \cdot C \big(1+\max{ \{ \| \chi \|_V ,\| \chi \|_{\beta} \} } ^{ 2 } \big) (\| \varphi \|_{\beta-\eta}+ \| A^{-\frac{\eta}{2}}\varphi \|_V ) \\ & \leq C_{\beta} \, \big(1+\max{ \{ \| \chi \|_V ,\| \chi \|_{\beta} \} } ^{ 2 } \big) \| \nu\|_{-\beta}. \end{split} \end{equation} This completes the proof. $\square$ \begin{lem} \label{Lem:regularity-negative-soblev} Let Assumptions \ref{ass:A}-\ref{ass:X0}, \ref{ass:X0-additional} be fulfilled. Then for any $p \in [2, \infty)$ and $\beta < \tfrac12$ we have \begin{equation}\label{eq:numer-regularity-negative} \| Y_t^{M,N} - Y^{M,N}_s \|_{ L^p ( \Omega, \dot{H}^{-\beta} ) } \leq C (t-s)^{\beta} , \quad 0 \leq s < t \leq T. \end{equation} \end{lem} {\it Proof of Lemma \ref{Lem:regularity-negative-soblev}}. The definition \eqref{eq:full-discrete-continous} implies, for $ 0 \leq s < t \leq T $, \begin{equation} \label{eq:Y-diff-negative-sobleve0} \begin{split} Y_t^{M,N}-Y^{M,N}_s & = \big( E_N ( t - s ) - I \big) Y^{M,N}_s \\ & \quad + \int_s^t E_N ( t - r ) \tfrac{ F_N(Y^{M,N}_{\lfloor r \rfloor})}{1+\tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \|} \, \text{d} r + \int_s^t E_N ( t - r ) P_N \, \text{d} W ( r ). \end{split} \end{equation} Making use of \eqref{eq:E.Inequality}, \eqref{eq:AQ_condition} and the inequality $\| \Gamma \Gamma_1\|_{\mathcal{L}_2 } \leq \|\Gamma\|_{\mathcal{L} } \|\Gamma_1\|_{\mathcal{L}_2 }$, $ \Gamma \in \mathcal{L}(H), \Gamma_1 \in \mathcal{L}_2(H) $ gives \begin{equation} \label{eq:Y-diff-negative-sobleve1} \begin{split} \Big \| \int_s^t E_N ( t - r ) P_N \, \text{d} W ( r ) \Big \|_{ L^p ( \Omega, \dot{H}^{-\beta} ) } & \leq C \Big( \int_s^t \big \| A^{-\frac{\beta}{2}} E_N ( t - r ) P_N \big \|^2_{ \mathcal{L}_2(H) } \, \text{d} r \Big)^{\frac{ 1 }{2}} \leq C (t-s)^{ \beta}. \end{split} \end{equation} A combination of \eqref{eq:E.Inequality} and Corollary \ref{cor:numer-spatial-regularity} shows \begin{equation} \label{eq:Y-diff-negative-sobleve2} \begin{split} \| \big( E_N ( t - s ) - I \big) Y^{M,N}_s \|_{ L^p ( \Omega, \dot{H}^{-\beta} ) } & \leq \big \| A^{-\beta} \big( E ( t - s ) - I \big) \big\|_{\mathcal{L} (H) } \cdot \| Y^{M,N}_s \|_{ L^p ( \Omega, \dot{H}^{\beta} ) } \\ & \leq C (t-s)^{ \beta}. \end{split} \end{equation} Now we proceed to estimate the remaining term in \eqref{eq:Y-diff-negative-sobleve0}, with the help of \eqref{eq:thm-Numer-Moment-Bound} and \eqref{eq:F-one-sided-condition}, \begin{equation} \label{eq:Y-diff-negative-sobleve3} \Big \| \int_s^t E(t - r) \tfrac{ F_N(Y^{M,N}_{\lfloor r \rfloor})}{1+\tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \|} \, \text{d} r \Big \|_{ L^p ( \Omega, \dot{H}^{-\beta} ) } \leq \int_s^t \| F (Y^{M,N}_{\lfloor r \rfloor}) \|_{ L^p ( \Omega, H ) } \, \text{d} r \leq C ( t - s ) .
\end{equation} Gathering \eqref{eq:Y-diff-negative-sobleve1}, \eqref{eq:Y-diff-negative-sobleve2} and \eqref{eq:Y-diff-negative-sobleve3}, we deduce from \eqref{eq:Y-diff-negative-sobleve0} that \eqref{eq:numer-regularity-negative} is true. $\square$ In view of \eqref{eq:thm-Numer-Moment-Bound}, \eqref{eq:lem-derivative-F-regularity}, \eqref{eq:numer-regularity-negative} and Corollary \ref{cor:numer-spatial-regularity}, one can see the following corollary. \begin{cor} \label{cor:F-diff-regularity} Under the conditions of Lemma \ref{Lem:regularity-negative-soblev}, for any $p \in [2, \infty)$, $\beta \in (0, \tfrac12)$ and $ \eta > \tfrac12 $ it holds \begin{equation} \| F ( Y_t^{M,N} ) - F ( Y_s^{M,N} ) \|_{ L^p ( \Omega, \dot{H}^{ - \eta } ) } \leq C ( t - s )^{ \beta }, \quad 0 \leq s < t \leq T. \end{equation} \end{cor} \subsection{Main results: error bounds for the full discretization} \label{subset:full-error-analysis} Equipped with the results of the previous subsections, we are now ready to prove the main result. \begin{thm}[Error bounds for the full discretization] \label{thm:full-discrete-scheme-error-bound} Let Assumptions \ref{ass:A}-\ref{ass:X0}, \ref{ass:X0-additional} hold. Then there is a generic constant $C$, independent of $N$ and $M$, such that, for any $\beta < \tfrac12$ and $p \in [2, \infty)$, \begin{equation} \label{eq:thm-main-result} \sup_{ 0 \leq m \leq M} \| X(t_m) - Y^{M,N}_{t_m} \|_{L^p ( \Omega; H) } \leq C \big( N ^{ - \beta } + \tau ^{ \beta } \big). \end{equation} \end{thm} {\it Proof of Theorem \ref{thm:full-discrete-scheme-error-bound}. } Denoting $ e^{ M, N}_t : = P_N X(t) - Y_t^{ M, N } $, we note that \begin{equation} \| X({t_m}) - Y^{M,N}_{t_m} \|_{L^p ( \Omega; H) } \leq \| X({t_m}) - P_N X({t_m}) \|_{L^p ( \Omega; H) } + \| e^{M,N}_{t_m} \|_{L^p ( \Omega; H) }, \end{equation} where \begin{equation} \frac{\text{d} }{\text{d} t} e^{ M, N}_t = - A_N e^{ M, N}_t + F_N \big ( X ( t ) \big ) - \tfrac{ F_N(Y^{M,N}_{\lfloor t \rfloor})}{1+\tau \| F_N(Y^{M,N}_{\lfloor t \rfloor}) \|} . \end{equation} This in conjunction with $\big \langle e^{ M, N}_s, - A_N e^{ M, N}_s \big \rangle \leq 0$ and \eqref{eq:F-one-sided-condition} tells us that \begin{equation} \label{eq:full-error-J0J1J2} \begin{split} \big \| e^{ M, N}_t \big \|^p & = p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle e^{ M, N}_s, - A_N e^{ M, N}_s + F_N \big ( X (s) ) - \tfrac{ F_N(Y^{M,N}_{\lfloor s \rfloor})} { 1 + \tau \| F_N ( Y^{M,N}_{\lfloor s \rfloor}) \|} \Big \rangle \,\text{d} s \\ & \leq p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle e^{ M, N}_s, F_N ( X (s) ) - F_N ( P_N X (s) ) \\ & \quad \quad \quad + F_N \big ( Y^{M,N}_s \big ) - \tfrac{ F_N ( Y^{M,N}_{\lfloor s \rfloor} ) } { 1 + \tau \| F_N(Y^{M,N}_{\lfloor s \rfloor}) \|} \Big \rangle \,\text{d} s + pL \int_0^t \big \| e^{ M, N}_s \big \|^p \,\text{d} s \\ & = pL \int_0^t \big \| e^{ M, N}_s \big \|^p \,\text{d} s + p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \big \langle e^{ M, N}_s, F( X (s) ) - F ( P_N X (s) ) \big \rangle \,\text{d} s \\ & \quad + p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \big \langle e^{ M, N}_s, F(Y_s^{M,N}) - F (Y^{M,N}_{\lfloor s \rfloor}) \big \rangle \,\text{d} s \\ & \quad + p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle e^{ M, N}_s, \tfrac{ \tau \| F_N(Y^{M,N}_{\lfloor s \rfloor}) \| \cdot F(Y^{M,N}_{\lfloor s \rfloor}) }{1+\tau \| F_N(Y^{M,N}_{\lfloor s \rfloor}) \|} \Big \rangle \,\text{d} s \\ & := pL \int_0^t \big \| e^{ M, N}_s \big \|^p \,\text{d} s + J_0 + J_1 + J_2.
\end{split} \end{equation} Following the same lines as in the estimates \eqref{eq:spatial-error-before-final} and \eqref{eq:spatial-error-estimate-final}, one can bound the term $J_0$ as, \begin{equation} \label{eq:estimateJ0} \begin{split} \mathbb{E} [ J_0 ] & \leq p \, \mathbb{E} \! \int_0^t \| e^{ M, N}_s \|^{p-1} \| F( X (s) ) - F ( P_N X (s) ) \| \,\text{d} s \\ & \leq ( p - 1 ) \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^{p} ] \,\text{d} s + \int_0^t \mathbb{E} [ \| F( X (s) ) - F ( P_N X (s) ) \|^p ] \,\text{d} s \\ & \leq ( p - 1 ) \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^{p} ] \,\text{d} s + C ( \tfrac{1}{N} )^{ p \beta }. \end{split} \end{equation} The term $J_2 $ is also easy to treat, after taking the H\"{o}lder inequality and \eqref{eq:thm-Numer-Moment-Bound} into account: \begin{equation}\label{eq:estimateJ2} \begin{split} \mathbb{E} [ J_2 ] & \leq p \mathbb{E} \int_0^t \big \| e^{ M, N}_s \big \|^{p-1} \cdot \tau \big \| F(Y^{M,N}_{\lfloor s \rfloor}) \big \|^2 \,\text{d} s \\ & \leq (p-1) \int_0^t \mathbb{E} \big[ \big \| e^{ M, N}_s \big \|^p \big] \,\text{d} s + \tau^p \int_0^t \mathbb{E} \big[ \big \| F(Y^{M,N}_{\lfloor s \rfloor}) \big \|^{2p} \big] \,\text{d} s \\ & \leq (p-1) \int_0^t \mathbb{E} \big [ \big \| e^{ M, N}_s \big \|^p \big ] \,\text{d} s + C_{ p, T} \, \tau^p . \end{split} \end{equation} The remaining term $J_1$ must be handled more carefully. To do so we recall that $ e^{ M, N}_s = P_N X ( s ) - Y_s^{ M, N } = \int_0^s E(s-r) \big ( F_N \big ( X ( r ) \big ) - \tfrac{ F_N(Y^{M,N}_{\lfloor r \rfloor})}{1+\tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \|} \big ) \text{d} r $ and split $J_1$ into three terms: \begin{align} \label{eq:J1-split} J_1 & = p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle \int_0^s E(s-r) \Big ( F_N \big ( X ( r ) \big ) - \tfrac{ F_N(Y^{M,N}_{\lfloor r \rfloor})}{1+\tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \|} \Big ) \,\text{d} r, F(Y_s^{M,N})-F(Y^{M,N}_{\lfloor s \rfloor}) \Big \rangle \,\text{d} s \nonumber \\ & = p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle \int_0^s E(s-r) \big ( F_N \big ( X ( r ) \big ) - F_N(Y^{M,N}_{ r } ) \big ) \text{d} r , F(Y_s^{M,N})-F(Y^{M,N}_{\lfloor s \rfloor}) \Big \rangle \,\text{d} s \nonumber \\ & \quad + p \int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle \int_0^s E(s-r) \big ( F_N(Y_r^{M,N})-F_N(Y^{M,N}_{\lfloor r \rfloor}) \big) \,\text{d} r, F(Y_s^{M,N})-F(Y^{M,N}_{\lfloor s \rfloor}) \Big \rangle \,\text{d} s \nonumber \\ & \quad + p \int_0^t \big \| e^{ M, N}_s \|^{p-2} \Big \langle \int_0^s E(s-r) \tfrac{ \tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \| \cdot F_N(Y^{M,N}_{\lfloor r \rfloor}) }{1+\tau \| F_N(Y^{M,N}_{\lfloor r \rfloor}) \|} \,\text{d} r , F(Y_s^{M,N})-F(Y^{M,N}_{\lfloor s \rfloor}) \Big \rangle \,\text{d} s \nonumber \\ & := J_{11} + J_{12} + J_{13}. \end{align} Since the estimates of $ \mathbb{E} [J_{11}] $ and $ \mathbb{E} [J_{12}] $ are demanding, we handle the term $ \mathbb{E} [J_{13}] $ first. Utilizing \eqref{eq:F-one-sided-condition}, the H\"{o}lder inequality, \eqref{eq:thm-Numer-Moment-Bound} and \eqref{eq:numer-regularity-normal} results in \begin{align} \mathbb{E} [ J_{13} ] & \leq p \, \mathbb{E} \! \int_0^t \! \int_0^s \big \| e^{ M, N}_s \big \|^{p-2} \cdot \tau \big \| F(Y^{M,N}_{\lfloor r \rfloor}) \big \|^2 \cdot \big \| F(Y_s^{M,N})-F(Y^{M,N}_{\lfloor s \rfloor}) \big \| \,\text{d} r \text{d} s \nonumber \\ & \leq C \tau \mathbb{E} \! \int_0^t \!
\int_0^s \big \| e^{ M, N}_s \big \|^{p-2} \big \| F(Y^{M,N}_{\lfloor r \rfloor}) \big \|^2 \big ( 1 + \| Y_s^{M,N} \|_V^{ 2 } + \| Y^{M,N}_{\lfloor s \rfloor} \|_V^{ 2 } \big ) \| Y_s^{M,N}-Y^{M,N}_{\lfloor s \rfloor} \| \,\text{d} r \text{d} s \nonumber \\ & \leq C \! \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^p ] \,\text{d} s + C \tau^{\frac{p}{2}} \int_0^t \mathbb{E} \Big[ \Big| \int_0^s \| F(Y^{M,N}_{\lfloor r \rfloor}) \|^2 \cdot \big ( 1+ \|Y_s^{M,N} \|_V^{2} \nonumber \\ & \quad + \| Y^{M,N}_{\lfloor s \rfloor}\|_V^{2} \big ) \| Y_s^{M,N}-Y^{M,N}_{\lfloor s \rfloor} \| \,\text{d} r \Big |^{\frac{p}{2}} \Big] \,\text{d} s \nonumber \\ & \leq C \! \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^p ] \,\text{d} s + C \tau^{\frac{p}{2}(1+ \frac{\beta}{2} )}. \label{eq:EstimateJ13} \end{align} We now come to the estimate of $ \mathbb{E} [ J_{11} ] $ and use the Taylor formula and the self-adjointness of the operators $ F ' ( u )$ and $P_N$ to infer that \begin{equation} \begin{split} \mathbb{E} [ J_{11} ] & = p \, \mathbb{E}\int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle \! \int_0^s E(s - r) \big( F_N ( X ( r ) ) - F_N ( Y_r^{M,N} ) \big ) \,\text{d} r , F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \Big \rangle \,\text{d} s \\ & = p \, \mathbb{E}\int_0^t \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle \int_0^s E(s - r ) P_N \! \int_0^1 F' \big( Y_r^{M,N} + \sigma ( X ( r ) - Y_r^{M,N} ) \big) \text{d} \sigma \\ & \qquad \qquad \cdot \big( X ( r ) - Y_r^{M,N} \big) \,\text{d} r , \ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \Big \rangle \,\text{d} s \\ & = p \, \mathbb{E}\int_0^t \int_0^s \int_0^1 \big \| e^{ M, N}_s \big \|^{p-2} \Big \langle X ( r ) - Y_r^{M,N}, \big( F' \big( Y_r^{M,N} + \sigma ( X ( r ) - Y_r^{M,N} ) \big) \big)^* \\ & \qquad \qquad \qquad P_N E(s - r) \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \Big \rangle \, \text{d} \sigma \,\text{d} r \, \text{d} s \\ & \leq p \, \mathbb{E}\int_0^t \int_0^s \int_0^1 \big \| e^{ M, N}_s \big \|^{p-2} \cdot \big \| X ( r ) - Y_r^{M,N} \big \| \cdot \big \| F' \big( Y_r^{M,N} + \sigma ( X ( r ) - Y_r^{M,N} ) \big) \\ & \qquad \qquad \qquad P_N A^{ \frac{\eta}{2} } E(s - r) A^{ - \frac{\eta}{2} } \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \| \, \text{d} \sigma \,\text{d} r \, \text{d} s \\ & \leq C \, \mathbb{E}\int_0^t \int_0^s \big \| e^{ M, N}_s \big \|^{p-2} \big \| X ( r ) - Y_r^{M,N} \big \| \cdot \big ( 1 + \| X ( r ) \|_V^2 + \| Y_r^{M,N} \|_V^2 \big ) \\ & \qquad \qquad \qquad \times ( s - r )^{- \frac{\eta}{2} } \big \|A^{-\frac{\eta}{2}} \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \| \,\text{d} r \, \text{d} s.
\end{split} \end{equation} Further, employing the Young and H\"{o}lder inequalities, interchanging the order of integration and taking $\tfrac12 < \eta < 1$, one obtains \begin{equation} \begin{split} \mathbb{E} [ J_{11} ] & \leq C \, \mathbb{E} \int_0^t \int_0^s ( s - r )^{ - \frac{ \eta }{ 2 } } \big \| e^{ M, N}_s \big \|^{p} \, \text{d} r \, \text{d} s + C \, \mathbb{E} \int_0^t \int_0^s ( s - r )^{ - \frac{ \eta }{ 2 } } \big \| X ( r ) - Y_r^{M,N} \big \|^{\frac{p}{2}} \\ & \qquad \qquad \qquad \times \big ( 1 + \| X ( r ) \|_V^p + \| Y_r^{M,N} \|_V^p \big ) \big \|A^{-\frac{\eta}{2}} \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \|^{\frac{p}{2} } \,\text{d} r \, \text{d} s \\ & \leq C \int_0^t \mathbb{E} \big [ \big \| e^{ M, N}_s \big \|^{p} \big ] \,\text{d} s + C \, \mathbb{E} \int_0^t \int_0^s ( s - r )^{ - \frac{ \eta }{ 2 } } \big \| X ( r ) - Y_r^{M,N} \big \|^{p} \, \text{d} r \, \text{d} s \\ & \quad + C \, \mathbb{E} \int_0^t \int_0^s ( s - r )^{ - \frac{ \eta }{ 2 } } \big ( 1 + \| X ( r ) \|_V^{2p} + \| Y_r^{M,N} \|_V^{2p} \big ) \big \|A^{-\frac{\eta}{2}} \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \|^{ p } \, \text{d} r \, \text{d} s \\ & \leq C \! \int_0^t \mathbb{E} \big [ \big \| e^{ M, N}_s \big \|^{p} \big ] \,\text{d} s + C \! \int_0^t \mathbb{E} \big [ \big \| X ( s )-Y_s^{M,N} \big \|^{p} \big ] \,\text{d} s \\ & \quad + C \! \int_0^t \int_0^s ( s - r )^{ - \frac{ \eta }{ 2 } } \mathbb{E} \Big [ \big ( 1 + \| X ( r ) \|_V^{2p} + \| Y_r^{M,N} \|_V^{2p} \big ) \big \|A^{-\frac{\eta}{2}} \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \|^p \Big ] \text{d} r \,\text{d} s. \end{split} \end{equation} To proceed further, we resort to Corollary \ref{cor:F-diff-regularity} as well as \eqref{eq:P-N-estimate}, \eqref{eq:optimal.regularity1}, \eqref{eq:optimal.regularity2} and \eqref{eq:thm-Numer-Moment-Bound} and achieve \begin{equation}\label{eq:EstimaeJ11} \begin{split} \mathbb{E}[J_{11}] & \leq C \! \int_0^t \mathbb{E} \big [ \big\| e^{ M, N}_s \big \|^p \big] \, \text{d} s + C \lambda_{N + 1}^{- \frac{ p \beta }{2} } + C \! \int_0^t \! \Big( \mathbb{E} \Big[ \big \|A^{-\frac{\eta}{2}} \big[ F(Y_s^{M,N})- F(Y^{M,N}_{\lfloor s\rfloor}) \big] \big \|^{2p} \Big] \Big)^{\frac{1}{2}}\, \text{d} s \\ & \leq C \! \int_0^t \mathbb{E} \, \big [ \big\| e^{ M, N}_s \big \|^p \big ] \, \text{d} s + C \, ( \tfrac{1}{N} )^{ p \beta } + C \, \tau^{p \beta} . \end{split} \end{equation} Finally, it remains to deal with the estimate of $ \mathbb{E}[J_{12}] $.
By the H\"{o}lder inequality and putting $\tfrac12 < \eta < 1$ one can derive that \begin{align} \mathbb{E}[J_{12}] & = p\mathbb{E} \int_0^t \| e^{ M, N}_s \| ^ {p-2} \Big \langle \int_0^s A^{\eta} E(s-r) A^{ -\frac{\eta}{2}} \Big ( F_N \big ( Y^{M,N}_r \big ) - F_N \big ( Y ^{M,N } _ { \lfloor r \rfloor } \big ) \Big ) \, \text{d} r, \nonumber \\ & \qquad \qquad A ^ { -\frac{\eta}{2}} \big ( F_N \big ( Y^{M,N}_s \big ) - F_N \big ( Y^{M,N}_{ \lfloor s \rfloor } \big) \big) \Big \rangle \, \text{d} s \nonumber \\ & \leq p \mathbb{E} \int_0^t \int_0^s \| e^{ M, N}_s \|^{p-2} \cdot C (s-r)^{-\eta} \big \| A^{-\frac{\eta}{2}} \big(F \big( Y^{M,N}_r \big) - F \big( Y^{M,N}_{ \lfloor r \rfloor } \big) \big) \big \| \nonumber \\ & \qquad \qquad \times \big \| A^{-\frac{\eta}{2}} \big( F \big(Y^{M,N}_s \big) - F \big( Y^{M,N}_{ \lfloor s \rfloor } \big) \big) \big \| \, \text{d} r \, \text{d} s \nonumber \\ & \leq C \int_0^t \int_0^s (s-r)^{-\eta} \mathbb{E} [ \| e^{ M, N}_s \|^p ] \, \text{d} r \, \text{d} s + C \, \mathbb{E} \int_0^t \int_0^s (s-r)^{-\eta} \big \| A^{-\frac{\eta}{2}} \big( F \big( Y^{M,N}_r \big) - F \big(Y^{M,N}_{ \lfloor r \rfloor } \big)\big) \big \|^{\frac{p}{2}} \nonumber \\ & \qquad \qquad \quad \, \, \times \big \| A^{-\frac{\eta}{2}} \big( F \big(Y^{M,N}_s \big) - F \big( Y^{M,N}_{ \lfloor s \rfloor } \big) \big) \big \|^{\frac{p}{2}} \, \text{d} r \, \text{d} s \nonumber \\ & \leq C \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^p ] \, \text{d} s + C \int_0^t \int_0^s (s-r)^{-\eta} \mathbb{E} \big[ \big \| A^{-\frac{\eta}{2}} \big( F \big( Y^{M,N}_r \big) - F \big(Y^{M,N}_{ \lfloor r \rfloor } \big)\big) \big \|^{p} \big ] \, \text{d} r \, \text{d} s \nonumber \\ & \quad + C \int_0^t \int_0^s (s-r)^{-\eta} \mathbb{E} \big[ \big \| A^{-\frac{\eta}{2}} \big( F \big( Y^{M,N}_s \big) - F \big(Y^{M,N}_{ \lfloor s \rfloor } \big)\big) \big \|^{p} \big ] \, \text{d} r \, \text{d} s . \end{align} Again, the use of Corollary \ref{cor:F-diff-regularity} leads us to \begin{equation}\label{eq:EstimateJ12-final} \small \begin{split} \mathbb{E}[J_{12}] \leq C \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^p ] \, \text{d} s + C \tau^{\beta p}, \end{split} \end{equation} which together with \eqref{eq:EstimateJ13}, \eqref{eq:EstimaeJ11} forces us to recognize from \eqref{eq:J1-split} that \begin{equation} \small \mathbb{E}[J_{1}] \leq C \int_0^t \mathbb{E} [ \| e^{ M, N}_s \|^p ] \, \text{d} s + C \, ( \tfrac{1}{N} )^{ p \beta } + C \tau^{\beta p}. \end{equation} Plugging this and \eqref{eq:estimateJ0}, \eqref{eq:estimateJ2}, into \eqref{eq:full-error-J0J1J2} and applying the discrete version of the Gronwall inequality gives the desired error bound. $\square$ % \section{Numerical experiments} \label{sec:numerical-result} Some numerical experiments are performed in this section to test previous theoretical findings. Consider the stochastic Allen-Cahn equation with additive space-time white noise, described by \begin{equation}\label{eq:num-result-AC-eqn} \left\{ \begin{array}{lll} \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + u - u^3 + \dot{W} (t), \quad t \in (0, 1], \:\: x \in (0,1), \\ u(0, x) = \sin( \pi x), \quad\quad x \in (0,1), \\ u(t, 0) = u(t,1) = 0, \quad t \in (0, 1]. \end{array}\right. \end{equation} Here $\{ W (t) \}_{t \in[0, T]}$ is a cylindrical $I$-Wiener process represented by \eqref{eq:Wiener-representation}. In what follows, we will use the new full discrete scheme \eqref{eq:full.Tamed-AEE} to approximate the continuous problem \eqref{eq:num-result-AC-eqn}. 
Error bounds are always measured in terms of mean-square approximation errors at the endpoint $T = 1$, caused by spatial and temporal discretizations, and the expectations are approximated by computing averages over 1000 samples. Before proceeding further with numerical simulations, it is helpful to mention that the stochastic convolution in the scheme \eqref{eq:full.Tamed-AEE} is easily implementable once one realizes that $\int_{t_m}^{t_{m+1}}\! E_N(t_{m+1}-s) P_N \,\text{d} W(s) = \sum_{i=1}^N \Lambda_i e_i$, where $ \Lambda_i = \int_{t_m}^{t_{m+1}} e^ { - (t_{m+1}-s) \lambda_i } \text{d} \beta_i(s), 1 \leq i \leq N $ are independent, zero-mean normally distributed random variables with explicit variances $\mathbb{E} [ | \Lambda_i |^2 ] = \tfrac{ 1 - e^{ - 2 \lambda_i \tau } } { 2 \lambda_i }$. For more details on the implementation of so-called AEE schemes, one can consult \cite[section 3]{jentzen2009overcoming} and \cite[section 4.1]{wang2014higher}. To visually inspect the convergence rates in space, we identify the ``exact'' solution by using the full discretization with $M_{\text{exact}} = N_{\text{exact}} = 2^{11} = 2048$. The spatial approximation errors $\| X(1) - X^N(1) \|_{L^2 ( \Omega; H) }$ with $ N= 2^i, i = 2, ..., 7 $ are depicted in Fig.~\ref{fig:spatial-error}, against $\tfrac{1}{N}$ on a log-log scale, where one can observe that the resulting spatial errors decrease at a slope close to $1/2$. This is consistent with the previous theoretical result \eqref{eq:spatial-error}. \begin{figure}[htp] \centering \includegraphics[width=4in,height=3in] {Spatial-convergence-order.eps} \caption{The convergence rate of the spectral Galerkin spatial discretization.} \label{fig:spatial-error} \end{figure} Moreover, we attempt to illustrate the error bound \eqref{eq:thm-main-result} for the full discrete scheme \eqref{eq:full.Tamed-AEE}. As implied by \eqref{eq:thm-main-result}, the convergence rate in space is identical to that in time. Consequently, we take $M = N$, $p = 2$ and $\beta = \frac12 - \epsilon$ with arbitrarily small $\epsilon >0$ in \eqref{eq:thm-main-result} to arrive at \begin{equation} \label{eq:numerical-full-error} \| X(1) - Y^{N,N}_{t_N} \|_{L^2 ( \Omega; H) } \leq C_{\epsilon} N ^{ - \frac12 + \epsilon } . \end{equation} To see \eqref{eq:numerical-full-error}, we, similarly to the above, perform a full discretization on a very fine mesh with $M_{\text{exact}} = N_{\text{exact}} = 2^{11} = 2048$ to compute the ``exact'' solution. Six different mesh parameters $N = 2^{i}, i = 2,3,...,7$ are then used to get six full discretizations. The resulting errors are listed in Table \ref{table:full-error} and plotted in Fig.~\ref{fig:full-error} on a log-log scale. From Fig.~\ref{fig:full-error}, one can observe the expected convergence rate of order almost $\tfrac12$, which agrees with that indicated in \eqref{eq:numerical-full-error}.
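To make the above recipe concrete, the following is a minimal Python sketch of an exponential-Euler-type full discretization of \eqref{eq:num-result-AC-eqn} with the exactly sampled stochastic convolution increments $\Lambda_i$. The taming of the nonlinearity used in \eqref{eq:full.Tamed-AEE} is only hinted at in a comment, and the simple update below is one plausible variant for illustration rather than a transcription of the scheme analyzed above.

\begin{verbatim}
import numpy as np

# Spectral data for the Laplacian on (0,1) with Dirichlet boundary conditions:
# eigenfunctions e_i(x) = sqrt(2) sin(i pi x), eigenvalues lambda_i = (i pi)^2.
N = 128                      # number of sine modes (spectral Galerkin dimension)
M = 128                      # number of time steps on [0, 1]
tau = 1.0 / M                # step size
lam = (np.pi * np.arange(1, N + 1)) ** 2

x = np.linspace(0.0, 1.0, N + 2)[1:-1]                      # interior grid points
E = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, N + 1), x))

def to_modes(u):             # sine coefficients <u, e_i> (exact DST-I pair)
    return E @ u / (N + 1)

def to_grid(c):              # synthesis sum_i c_i e_i(x_j)
    return E.T @ c

rng = np.random.default_rng(0)
c = to_modes(np.sin(np.pi * x))                             # u(0, x) = sin(pi x)

for m in range(M):
    u = to_grid(c)
    f = to_modes(u - u ** 3)         # Nemytskii nonlinearity F(u) = u - u^3
    # exactly sampled stochastic convolution: Lambda_i is N(0, var_i) with
    # var_i = (1 - exp(-2 lambda_i tau)) / (2 lambda_i), as stated above
    Lam = rng.standard_normal(N) * np.sqrt((1.0 - np.exp(-2.0 * lam * tau))
                                           / (2.0 * lam))
    # exponential-Euler-type step; a tamed variant would damp f further,
    # e.g. replace f by f / (1 + tau * np.linalg.norm(f))
    c = np.exp(-lam * tau) * (c + tau * f) + Lam

u_T = to_grid(c)                     # approximation of u(1, x)
\end{verbatim}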
\begin{table}[htp] \begin{center} \footnotesize \caption{Computational errors of the full discrete scheme with $M = N$} \label{table:full-error} \begin{tabular*}{15cm}{@{\extracolsep{\fill}}cccccc} \hline $N = 2^2 $ & $ N = 2^3 $ & $N = 2^4$ & $N = 2^5$ & $N = 2^6$ & $N = 2^7$ \\ \hline 0.106381 & 0.077172 & 0.055174 & 0.039209 & 0.027624 & 0.019225 \\ \hline \end{tabular*} \end{center} \end{table} \begin{figure}[htp] \centering \includegraphics[width=4in,height=3in] {Space-time-full-error.eps} \caption{The convergence rate of the space-time full discretization.} \label{fig:full-error} \end{figure} \section*{Acknowledgment} This work was partially supported by NSF of China (Nos.11671405, 91630312, 11571373), NSF of Hunan Province (No.2016JJ3137), Innovation Program of Central South University (No.2017CX017), and Program of Shenghua Yuying at CSU. The author would like to thank Arnulf Jentzen for his comments and encouragement after the presentation of this work in Edinburgh in July 2017. Also, the author wants to thank Ruisheng Qi and Meng Cai for their useful comments based on carefully reading the manuscript and Yuying Zhao for her kind help with excellent typesetting. As the first preprint after my baby was born in May 2016, I devote it to my beloved wife and lovely son! \bibliographystyle{abbrv}
\section{INTRODUCTION} MHD waves and oscillations observed in the solar corona are extremely important as they provide an excellent opportunity to probe the corona indirectly via coronal seismology \citep{roberts1984,nakariakov2005,demoortel2009}. Slow magnetoacoustic waves were discovered in post-flare coronal loops with SOHO/SUMER, by measuring periodic Doppler shifts in lines from Fe XIX and Fe XXI, formed at temperatures $T>6$~MK \citep{kliem2002,wang2002,wang2003a,wang2003b}. A survey of over 50 events found oscillation periods of $\sim$7-31 minutes, decay times of $\sim$6-37 minutes, and maximum Doppler velocities in the range 100--300~km~s$^{-1}$ \citep{wang2005}. These oscillations were interpreted as standing slow magnetoacoustic modes because their phase speed was close to the sound speed in the loop, and in one event there was a quarter-period phase shift between the observed velocity and intensity oscillations. Although indicative of a standing mode, such a phase shift could also be produced by loops moving into and out of a spatial pixel as a result of Alfv\'enic oscillations \citep{tian2012}. Similar Doppler-shift oscillations were also observed in flare and coronal emission lines with Yohkoh/BCS and Hinode/EIS, respectively \citep{mariska2005,mariska2006,mariska2008}. Recently, \citet{kim2012} reported observations of slow magnetoacoustic oscillations in 17 GHz Nobeyama Radioheliograph density and AIA 335~\AA\ measurements, during an M1.6 flare. It was suggested that the waves are excited by small flares at one of the footpoints of the heated loop \citep{wang2003a,wang2005,wang2011} because a RHESSI hard X-ray source was often seen near one of the loops' footpoints. In addition, many events showed that there were initially two spectral components, suggesting that the wave onset was accompanied by a pulse of hot plasma. Numerical magnetohydrodynamic (MHD) simulations demonstrated that standing slow-mode magnetoacoustic waves can be excited by a localized pressure pulse at one of the footpoints of a loop \citep{selwa2005, selwa2007, taroyan2005}. \citet{ofman2012} recently performed three-dimensional MHD modelling of a bipolar active region and concluded that the excitation of slow-mode (and some transverse) oscillations in coronal loops may result from the injection of plasma at the corona--chromosphere interface of the loop footpoints. The strong damping of Doppler-shift oscillations was investigated by \citet{ofman2002}. The authors suggested that thermal conduction is the main dissipation mechanism for the slow magnetoacoustic waves in hot loops. Later numerical studies showed that other physical effects (e.g., viscosity, radiative emission, shock dissipation, coupling between fast- and slow-mode MHD waves, and wave leakage) also play a role \citep{briceno2004,pandey2006,bradshaw2008,haynes2008,verwichte2008, Ogrodowczyk2007,Ogrodowczyk2009,selwa2009}. Nevertheless, the mechanisms causing the observed very rapid excitation and damping of standing slow magnetoacoustic waves in hot loops are still not well understood. There is considerable support for the slow magnetoacoustic wave interpretation of the Doppler-shift oscillations seen in hot coronal loops. One of the most critical and least constrained observational parameters for this interpretation is the loop length. Many of the derived speeds were close to the sound speed, and if the loops were actually 20\%\ longer than assumed, the derived wave speeds would be supersonic.
Most Doppler-shift oscillations were seen in loops on the solar limb, where there are large uncertainties in the loop length estimation. The loop oscillation reported here occurred in an active region 50$^\circ$ east of the central meridian, and we are able to clearly identify the loop along its entire length, including its two footpoints. The oscillation period and the loop length are well constrained, and we find that the phase speed of the oscillation was close to, or slightly faster than, the loop's sound speed. In this letter, we report the trigger and observation of a reflecting longitudinal wave, seen with AIA, in a hot loop observed on 7 May 2012. The event occurred shortly after and close to the site of a C-class flare. In Section 2, we present the observations and results, and in the last section, we discuss and summarize the results. \section{OBSERVATIONS AND RESULTS} The {\it Atmospheric Imaging Assembly} (AIA; \citealt{lemen2012}) onboard the {\it Solar Dynamics Observatory} (SDO; \citealt{pesnell2012}) obtains full-disk images of the Sun (field-of-view $\sim$1.3 R$_\odot$) with a spatial resolution of 1.5$\arcsec$ (0.6$\arcsec$ pixel$^{-1}$) and a cadence of 12 s in 10 extreme ultraviolet (EUV) and UV filters. For the present study, we utilized 171~\AA\ (Fe IX, with formation temperature $T\approx$0.7 MK), 131~\AA\ (Fe VIII/XXI, $T\approx$0.4 \& 11 MK), and 304~\AA\ (He II, $T\approx$0.05 MK) images. We also used {\it Helioseismic and Magnetic Imager} (HMI; \citealt{scherrer2012}) magnetograms to explore the magnetic field configuration of the active region (AR). AR NOAA 11476 produced an impulsive C7.4-class flare, which started at $\sim$17:20~UT, peaked at $\sim$17:26~UT, and ended at $\sim$17:36~UT. The top panel of Figure \ref{fig1}(a) displays the GOES soft X-ray flux profile in the 1--8~\AA\ channel. The bottom panel shows the soft X-ray flux derivative, which may be considered a proxy for the hard X-ray burst during the flare impulsive phase \citep{neupert1968}. The loop intensity oscillation was detected 6~min after the flare peak. Two vertical dotted red lines represent the time over which the loop oscillation was detectable. Figure \ref{fig1}(b) shows the loop in the AIA 131~\AA\ image at 17:33:09~UT. The flare was triggered at the eastern footpoint of the hot coronal loop (marked by an arrow). To view the magnetic field distribution at the flare site, we overlaid HMI magnetogram contours of positive (white) and negative (black) polarities on the AIA image. The presence of opposite polarity (negative) field is evident at the flare site. \subsection{Oscillation characteristics} The plasma ejection, loop heating, and wave appeared shortly after the flare, at $\sim$17:30~UT. This loop was detected only in the AIA 131 and 94~\AA\ channels, indicating that it had a high temperature. Figure~\ref{fig2}(a) shows the AIA 131~\AA\ base-difference image of the loop just after the first loop crossing by the wave. To investigate the oscillations, we chose the path along the loop marked by red `+' symbols and extracted the 131~\AA\ base-difference intensity between 17:25--18:05~UT. The base image was taken at 17:24:45~UT. The resulting time-distance plot of the intensity distribution is shown in Figure \ref{fig2}(b). This plot clearly reveals an intensity oscillation along the loop length. These oscillations are also clearly visible in the 131 and 94~\AA\ movie, available online.
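The construction of such a time-distance map amounts to sampling the base-difference images along the chosen loop path at every time step. A minimal Python sketch (with a synthetic image stack and path standing in for the AIA data; all names and values below are illustrative only) is:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((200, 512, 512))     # stand-in for the AIA 131 A image stack
path_x = np.linspace(100, 300, 120)      # pixel coordinates of the '+' points
path_y = 200 + 40 * np.sin(np.linspace(0, np.pi, 120))

diff = frames - frames[0]                # base-difference images
ix = path_x.round().astype(int)          # nearest-pixel sampling along the path
iy = path_y.round().astype(int)
td = diff[:, iy, ix]                     # (time, distance-along-loop) map;
                                         # reflections appear as bouncing ridges
\end{verbatim}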
The hot-plasma emission started along the eastern leg, close to where the C-class flare was triggered, and then propagated along the loop to the other footpoint where it was reflected back along the loop. Multiple decaying reflections continued for about 30~min, until about 18:00~UT. In total, the intensity oscillations traveled back and forth two and a half times. To study the oscillation properties, we extracted the mean intensity within the boxes 1 and 2 (shown in Figure~\ref{fig2}) from the 131~\AA\ base-difference images. Figures~\ref{fig3}(a) and (b) display the normalized intensity profiles within the boxes 1 and 2, respectively. The top panel (a) exhibits double or broader peaks because the incident and reflected waves are partially resolved at this position (see Figure~\ref{fig2}). Box 2 is close to the western footpoint, so the incident and reflected wave brightening merges into a single peak. At both positions, the intensity oscillation decayed rapidly. For the purpose of fitting and to emphasize the oscillations, we de-trended the intensity curve by subtracting a parabola (marked by the blue dotted line in panel (b)). We then fitted the de-trended light curve, $I(t)$, with the function \begin{equation} I(t)=A\sin\Big(\frac{2\pi t }{P}+\phi\Big)\exp\Big(\frac{-t}{\tau}\Big), \end{equation} \noindent where $A$, $P$, $\tau$, and $\phi$ are the amplitude, period, decay time, and initial phase, respectively. The best-fit curve is shown by the thick red curve in panel (c). The period of oscillation is $P\sim$634~s, and the decay time is $\tau\sim437$~s. To deduce the phase speed of the wave, we require an estimate of the loop length. In principle, STEREO images could be used to obtain the loop length. Unfortunately, the hot loop emission was only visible in the 131 and 94~\AA\ filters, and was not visible in STEREO images. We therefore fitted the observed loop to the circular loop model of \citet{asc2002}. This fits the de-projected loop with two free parameters: the height of the loop center above the solar surface, $h_{loop}$, and the angle between the loop plane and the vertical, $\theta_{loop}$. The method gives the shortest loop compatible with the chosen points along the loop (tie points), so the phase-speed estimate is a lower bound. The best-fit loop, with $h_{loop}=-17$\arcsec\ and $\theta_{loop}=53^\circ$, is shown in Fig.~\ref{fig2}c. The loop length is 220\arcsec, which implies that the wave had a phase speed of $2L/P \sim 510$~km~s$^{-1}$. By varying the tie points, we find a shortest loop length of 200$\arcsec$ (dotted line), which gives a phase speed of 460 km s$^{-1}$. To determine the temperature of the hot loop, we utilized AIA images in six EUV channels (i.e., 94, 171, 131, 211, 335, and 193~\AA), and the SSWIDL code developed by \citet{asc2011}. In this code, the co-alignment of the AIA images from the six EUV channels is carried out by using a limb fitting method, with an accuracy of $<$1 pixel. At each position, the code fits a differential emission measure (DEM) parametrized by a single Gaussian function with three free parameters: the peak temperature emission measure ($EM_p$ in cm$^{-5}$K$^{-1}$), the temperature of the DEM peak ($T_p$), and the DEM temperature width ($\sigma_T$). The DEM peak temperatures and their emission measures across the AR are shown in Figure \ref{fig4}. At this time, the loop top, where 131~\AA\ emission was brightest, had the highest DEM peak temperature.
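The damped-sinusoid fit above is straightforward to reproduce; a minimal Python sketch on synthetic data (the numbers below are stand-ins, not our measurements, and 1$\arcsec$ $\simeq$ 725 km is assumed for the arcsecond-to-length conversion) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, P, phi, tau):
    # the fitted model: A sin(2 pi t / P + phi) exp(-t / tau)
    return A * np.sin(2 * np.pi * t / P + phi) * np.exp(-t / tau)

t = np.arange(0.0, 1800.0, 12.0)          # 12 s AIA cadence over ~30 min
y = damped_sine(t, 1.0, 634.0, 0.3, 437.0)            # synthetic stand-in for
y += 0.05 * np.random.default_rng(2).standard_normal(t.size)  # de-trended curve

(A, P, phi, tau), _ = curve_fit(damped_sine, t, y, p0=(1.0, 600.0, 0.0, 400.0))

L = 220 * 7.25e7          # 220 arcsec loop length in cm (1 arcsec ~ 725 km)
v_phase = 2 * L / P       # ~5.0e7 cm/s, i.e. close to the ~510 km/s above
\end{verbatim}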
The temperature derived in the legs represents the background active region temperature because we have not done any background subtraction. To estimate the average temperature near the loop top, we used a box region, marked in the figure, and extracted the average and maximum value of the DEM peak temperature, $\sim$8 and 10~MK, respectively. Using these temperatures, the sound speed within the loop was $c_s \sim 152\sqrt{T\,(\mathrm{MK})} \sim 430$ and 480~km~s$^{-1}$, respectively. To estimate the density of the hot loop, we calculated the average values of the peak $T_p$, $EM_p$, and $\sigma_T$ in the selected region. Using these values, we estimated the total emission measure ($EM$ in cm$^{-5}$) in the selected rectangular region $\int \! DEM(T) \, \mathrm{d} T$. If the depth is approximately equal to the width, $d$, of the loop \citep{cheng2012}, then the density, $n_e$, of the loop can be calculated using the relation $n_e=\sqrt{EM/d}$ (assuming the filling factor $\approx$1). Using a total EM $\sim$9.96$\times$10$^{28}$ cm$^{-5}$ and a width of the hot loop system of $\sim$18$\arcsec$ (Figure \ref{fig2}), the estimated density is $\sim$8.5$\times$10$^{9}$ cm$^{-3}$ at the top of the loop. \subsection{Excitation Mechanism} To investigate the oscillation trigger, we looked at AIA 304, 171, and 131~\AA\ images. Figure \ref{fig5}(a)-(c) displays some of the selected AIA 171~\AA\ images. Panel (a) shows the flare site (at 17:27~UT) and the magnetic configuration of the AR. The flare brightening occurred where there was a small concentration of minor, negative-polarity field. An impulsive ejection started at about 17:29:02~UT north of the flare site (panels (b) and (c)). This ejection was probably triggered by the flare. From the movies, it appeared to start from a region in the corona above the flare, and did not coincide with any strong photospheric flux concentrations. The ejection was seen in all AIA EUV channels, which may be because the plasma was rapidly heated \citep{fletcher2013}, or due to intense emission from chromospheric lines in all the channels \citep{brosius2012}. In the 131 and 94~\AA\ movies, the ejected plasma rises upwards, driving a front of hot plasma along the loop ahead of it. The front then reflected back and forth along the hot loop several times. This is entirely consistent with the SUMER observations of two spectral components in the hot line profiles at the start of the Doppler-shift oscillations. In the cooler 304 and 171~\AA\ filter images, only the plasma ejection, not the hot loop, is seen. Panel (d) displays the AIA 171~\AA\ image overlaid with AIA 131~\AA\ base-difference contours of the hot loop, which indicates that the hot loop was a separate structure and did not overlap with the cool 171~\AA\ loops. At the eastern footpoint of the hot loop, a series of low-lying hot loops formed simultaneously with the plasma ejection. These loops probably formed as a result of the same reconnection process that led to the plasma ejection. We also note that brightening was seen in the 131 and 94~\AA\ images at the opposite footpoint before the arrival of the main hot plasma emission. This could indicate heating by particles accelerated at the reconnection site. Multiple mini-plasma blobs were ejected at the same time. They were observed in all the AIA EUV channels. The plasma ejection followed two paths: (i) along the lower edge of the heated loop; (ii) across the heated loop (Figure~\ref{fig5}e).
The ejecta that went across the loop look as though they were inside the loop because they stop abruptly at the upper edge of the loop. To estimate the speed of the plasma ejection across the loop, we looked at the 304~\AA\ intensity evolution along the ejection path (shown by a dotted line in panel (e)). The space-time plot is shown in panel (f). The plasma blob rose toward the loop top with a speed of $\sim$160 km~s$^{-1}$, which is less than the observed wave speed. The lower part later fell back toward the solar surface. To find the plane-of-sky speed of the ejection on the lower edge of the hot loop, we display the stack plot in panel (e) following the ejection path (`+' symbol). The speed of the blobs along this path was $\sim$335 km~s$^{-1}$, which is approximately double the speed of the blob across the hot loop. The timing of the flare, plasma ejection, and the heated loop is also illustrated by the light curves of the average AIA 304~\AA\ intensity within the sub-regions surrounded by box 1 (blue) and box 2 (green), plotted on the stack plot in Figure~\ref{fig5}(f). In the EUV, the flare maximum was at $\sim$17:24 UT, and the plasma ejection occurred $\sim$5 min later, at $\sim$17:29 UT. \section{DISCUSSION AND CONCLUSION} We report the first direct observation of an intensity oscillation along a hot loop, seen in the AIA 131 and 94~\AA\ channels. Similar to the Doppler-shift oscillations observed by SUMER \citep{wang2002,wang2003a,wang2003b,wang2005,wang2011}, this oscillation was only seen in hot lines and had two spectral components at onset. The phase speed ($\sim$460-510~km~s$^{-1}$) was roughly equivalent (within the error limits of the loop length and DEM temperature) to the sound speed ($\sim$430-480~km~s$^{-1}$). The speed is consistent with a slow-mode wave, which is the accepted explanation for the Doppler-shift oscillations observed by SUMER. In most of the SUMER events, no flare was observed prior to the waves, and it was conjectured that a microflare might be the trigger. We speculate that the oscillation was excited by a pressure pulse associated with the rapid onset of reconnection at one of the loop footpoints. \citet{selwa2005} numerically studied the excitation of waves in a hot ($\sim$5~MK) coronal loop by launching a pressure pulse at different positions along the loop. They found that pulses close to a footpoint of the loop excite the fundamental slow magnetoacoustic mode \citep{selwa2009}. Moreover, recent MHD simulations have demonstrated that plasma flows with a subsonic speed can excite higher-speed slow-mode waves \citep{ofman2012,wang2013}. In conclusion, we have presented unique observational evidence of a longitudinal oscillation in a hot loop, generated by footpoint excitation. This kind of intensity oscillation in a hot AIA loop has not been reported before. However, future statistical studies of similar events using high-resolution observations from SDO/AIA and Hinode will help us understand the properties of these waves in more detail. \acknowledgments We express our gratitude to the referee for his/her valuable comments/suggestions, which improved the manuscript considerably. We thank Don Schmit for discussions. SDO is a mission for NASA's Living With a Star (LWS) program. \bibliographystyle{apj}
\section{Introduction} Image restoration aims to recover the high-quality image from its low-quality counterpart and includes a series of computer vision applications, such as image super-resolution (SR) and denoising. It is an ill-posed inverse problem since there are a huge number of candidates for any original input. Recently, deep convolutional neural networks (CNNs) have been widely used to design various models~\cite{kim2016deeply,zhang2020rdnir,zhangASSL} for image restoration. SRCNN~\cite{dong2014learning} first introduced a deep CNN into image SR. Then several representative works utilized residual learning (e.g., EDSR~\cite{lim2017enhanced}) and attention mechanisms (e.g., RCAN~\cite{zhang2018image}) to train very deep networks for image SR. Meanwhile, a number of methods were also proposed for image denoising, such as DnCNN~\cite{zhang2017beyonddncnn}, RPCNN~\cite{xia2020identifyingRPCNN}, and BRDNet~\cite{tian2020imageBRDNet}. These CNN-based networks have achieved remarkable performance. However, due to parameter-dependent receptive field scaling and content-independent local interactions of convolutions, CNNs have a limited ability to model long-range dependencies. To overcome this limitation, recent works have begun to introduce self-attention into computer vision systems~\cite{hu2019local,ramachandran2019stand,wang2020axial,zhao2020exploring}. Since the Transformer has been shown to achieve state-of-the-art performance in natural language processing~\cite{vaswani2017attention} and high-level vision tasks~\cite{dosovitskiy2020image,touvron2021training,pvt2021,zheng2021rethinking}, researchers have been investigating Transformer-based image restoration networks~\cite{yang2020learning,kumar2021colorization,wang2022uformer}. Chen et al. proposed a pre-trained image processing Transformer named IPT~\cite{chen2021preIPT}. Liang et al. proposed a strong baseline model named SwinIR~\cite{swinir2021} based on Swin Transformer~\cite{swintransformer2021} for image restoration. Zamir et al. also proposed an efficient Transformer model using a U-Net structure, named Restormer~\cite{restormer2022}, and achieved state-of-the-art results on several image restoration tasks. Compared with CNN-based models, higher performance can be achieved when using Transformers. \begin{wrapfigure}{r}{0.50\linewidth} \centering \includegraphics[width=\linewidth]{figs/swinirvsours.pdf} \vspace{-7mm} \caption{(\textbf{a}) Dense attention and sparse attention strategies of our ART. (\textbf{b}) Dense attention strategy with the shifted window of SwinIR.} \label{fig:swinvsours} \vspace{-3mm} \end{wrapfigure} Despite showing outstanding performance, existing Transformer backbones for image restoration still suffer from serious drawbacks. As we know, SwinIR~\cite{swinir2021} takes advantage of the shifted window scheme to limit self-attention computation within non-overlapping windows. On the other hand, IPT~\cite{chen2021preIPT} directly splits features into $P$$\times$$P$ patches to shrink the original feature map by $P^2$ times, treating each patch as a token. In short, these methods compute self-attention over shorter token sequences, and the tokens in each group always come from a dense area of the image. This can be regarded as a dense attention strategy, which inevitably restricts the receptive field. To address this issue, we employ a sparse attention strategy.
We extract each group of tokens from a sparse area of the image to provide interactions, similar in spirit to previous studies (e.g., GG-Transformer~\cite{gg-transformer2021}, Twins~\cite{chu2021twins}, CrossFormer~\cite{wang2021crossformer}), but different from them in several respects. Our proposed sparse attention module focuses on equal-scale features. Besides, we pay more attention to pixel-level information than semantic-level information. Since sparse attention has not been well explored for low-level vision problems, our method bridges this gap. We further propose an Attention Retractable Transformer, named ART, for image restoration. Following RCAN~\cite{zhang2018image} and SwinIR~\cite{swinir2021}, we retain the residual-in-residual structure~\cite{zhang2018image} for the model architecture. Based on joint dense and sparse attention strategies, we design two types of self-attention blocks. We utilize fixed non-overlapping local windows to obtain tokens for the first block, named dense attention block (DAB), and sparse grids to obtain tokens for the second block, named sparse attention block (SAB). To better understand the difference between our work and SwinIR, we show a visual comparison in Fig.~\ref{fig:swinvsours}. As we can see, the image is divided into four groups and tokens in each group interact with each other. Visibly, the token in our sparse attention block can learn relationships from farther tokens, while the one in the dense attention block of SwinIR cannot. At the same computational cost, the sparse attention block has a stronger ability to compensate for the lack of global information. We consider our dense and sparse attention blocks as successive ones and apply them to extract deep features. In practice, the alternating application of DAB and SAB can provide retractable attention for the model to capture both local and global receptive fields. Our main contributions can be summarized as follows: \vspace{-2mm} \begin{itemize} \item We propose sparse attention to compensate for the restricted receptive field caused by mainly using dense attention in existing Transformer-based image restoration networks. The interactions among tokens extracted from a sparse area of an image can bring a wider receptive field to the module. \vspace{-0.5mm} \item We further propose the Attention Retractable Transformer (ART) for image restoration. Our ART offers two types of self-attention blocks to obtain retractable attention on the input feature. With the alternating application of dense and sparse attention blocks, the Transformer model can capture local and global receptive fields simultaneously. \vspace{-0.5mm} \item We employ ART to train an effective Transformer-based network. We conduct extensive experiments on three image restoration tasks: image super-resolution, denoising, and JPEG compression artifact reduction. Our method achieves state-of-the-art performance. \end{itemize} \vspace{-4mm} \section{Related Work} \vspace{-2mm} \noindent \textbf{Image Restoration.} With the rapid development of CNNs, numerous CNN-based works have been proposed to solve image restoration problems~\cite{anwar2020densely,dudhane2021burst,zamir2020learning,zamir2021multi,li2022blueprint,chen2021attention} and achieved superior performance over conventional restoration approaches~\cite{timofte2013anchored,michaeli2013nonparametric,he2010single}. The pioneering work SRCNN~\cite{dong2014learning} was first proposed for image SR. DnCNN~\cite{zhang2017beyonddncnn} was a representative image denoising method.
Following these works, various model designs and improving techniques have been introduced into the basic CNN frameworks. These techniques include but are not limited to the residual structure~\cite{kim2016accurate,zhang2021plugDRUNet}, skip connections~\cite{zhang2018image,zhang2020rdnir}, dropout \cite{kong2022reflash}, and attention mechanisms~\cite{dai2019secondSAN,niu2020singleHAN}. Recently, due to the limited ability of CNNs to model long-range dependencies, researchers have started to replace the convolution operator with pure self-attention modules for image restoration~\cite{yang2020learning,swinir2021,restormer2022,chen2021preIPT}. \noindent \textbf{Vision Transformer.} The Transformer has achieved impressive performance in machine translation tasks~\cite{vaswani2017attention}. Due to its parameter-independent global receptive field, it has been introduced to improve computer vision systems in recent years. Dosovitskiy et al.~\cite{dosovitskiy2020image} proposed ViT and introduced the Transformer into image recognition by projecting large image patches into token sequences. Chu et al. proposed Twins~\cite{chu2021twins} as an efficient Vision Transformer. Wang et al. proposed CrossFormer~\cite{wang2021crossformer} to build the interactions among long- and short-distance tokens. Yu et al. proposed GG-Transformer~\cite{gg-transformer2021}, which performed self-attention on adaptively-dilated partitions of the input. Inspired by the strong ability to learn long-range dependencies, researchers have also investigated the usage of Transformers for low-level vision tasks~\cite{yang2020learning,restormer2022,chen2021preIPT,kumar2021colorization,wang2022uformer}. However, existing works still suffer from restricted receptive fields due to mainly using the dense attention strategy. In contrast, our method uses dense and sparse attention strategies to build the network, which can capture wider global interactions. As sparse attention has rarely been explored for low-level vision problems, our method bridges this gap. \vspace{-3mm} \section{Proposed Method}\label{method} \vspace{-3mm} \subsection{Overall Architecture} \vspace{-3mm} The overall architecture of our ART is shown in Fig.~\ref{fig:framework}. Following RCAN~\cite{zhang2018image}, ART employs the residual-in-residual structure to construct a deep feature extraction module. Given a degraded image $I\in \mathbb{R}^{H\times W \times C_{in}}$ ($H$, $W$, and $C_{in}$ are the height, width, and input channels of the input), ART first applies a 3$\times$3 convolutional layer (Conv) to obtain the shallow feature $F_0 \in \mathbb{R}^{H\times W \times C}$, where $C$ is the dimension size of the new feature embedding. Next, the shallow feature is normalized and fed into the residual groups, which consist of core Transformer attention blocks. The deep feature is extracted and then passes through another 3$\times$3 convolutional layer to get further feature embeddings $F_1$. Then we use an element-wise sum to obtain the final feature map $F_R=F_0 + F_1$. Finally, we employ the restoration module to generate the high-quality image $\hat{I}$ from the feature map $F_R$. Note that we do not reduce the spatial size of feature maps during the feed-forward process. \begin{figure*}[t] \centering \begin{tabular}{c} \hspace{-5mm} \includegraphics[width=\linewidth]{figs/overall_architecture_ICLR23.pdf} \\ \end{tabular} \vspace{-5mm} \caption{(\textbf{a}) The architecture of our proposed ART for image restoration.
(\textbf{b}) The inner structure of two successive attention blocks DAB and SAB. The D-MSA and S-MSA modules emerge alternately in different blocks, corresponding to the dense and sparse attention strategies, respectively.} \label{fig:framework} \vspace{-7mm} \end{figure*} \noindent \textbf{Residual Group.} We use $N_G$ successive residual groups to extract the deep feature. Each residual group consists of $N_B$ pairs of attention blocks. We design two successive attention blocks shown in Fig.~\ref{fig:framework}{(b)}. The input feature $x_{l-1}$ passes through layer normalization (LN) and multi-head self-attention (MSA). After adding the shortcut, the output $x{'}_{l}$ is fed into the multi-layer perceptron (MLP). $x_{l}$ is the final output at the $l$-th block. The process is formulated as \begin{equation} \label{equ:attention} \begin{split} x{'}_{l} &= {\rm MSA}({\rm LN}(x_{l-1}))+x_{l-1}, \\ x_{l} &= {\rm MLP}({\rm LN}(x{'}_{l}))+x{'}_{l}. \end{split} \end{equation} Lastly, we also apply a 3$\times$3 convolutional layer to refine the feature embeddings. As shown in Fig.~\ref{fig:framework}(a), a residual connection is employed to obtain the final output in each residual group module. \noindent \textbf{Restoration Module.} The restoration module is applied as the last stage of the framework to obtain the reconstructed image. As we know, image restoration tasks can be divided into two categories according to the usage of upsampling. For image super-resolution, we take advantage of the sub-pixel convolutional layer~\cite{shi2016real} to upsample the final feature map $F_R$. Next, we use a convolutional layer to get the final reconstructed image $\hat{I}$. The whole process is formulated as \begin{equation} \label{equ:sr_restore} \hat{I} = {\rm Conv}({\rm Upsample}(F_R)). \end{equation} For tasks without upsampling, such as image denoising, we directly use a convolutional layer to reconstruct the high-quality image. Besides, we add the original image to the last output of the restoration module for better performance. We formulate the whole process as \begin{equation} \label{equ:dn_restore} \hat{I} = {\rm Conv}(F_R) + I. \end{equation} \noindent \textbf{Loss Function.} We optimize our ART with two types of loss functions. There are various well-studied loss functions, such as the $L_2$ loss~\cite{dong2016accelerating,sajjadi2017enhancenet,tai2017memnet}, $L_1$ loss~\cite{lai2017deep,zhang2020rdnir}, and Charbonnier loss~\cite{charbonnier1994two}. Following previous works~\cite{zhang2018image,swinir2021}, we utilize the $L_1$ loss for image super-resolution (SR) and the Charbonnier loss for image denoising and compression artifact reduction. For image SR, the goal of training ART is to minimize the $L_1$ loss function, which is formulated as \begin{equation} \label{equ:l1loss} \mathcal{L} = \lVert\hat{I}_{HQ} - I_{HQ}\rVert_1, \end{equation} where $\hat{I}_{HQ}$ is the output of ART and $I_{HQ}$ is the ground-truth image. For image denoising and JPEG compression artifact reduction, we utilize the Charbonnier loss with the hyper-parameter $\varepsilon$ set to $10^{-3}$, which is \begin{equation} \label{equ:charbonnierloss} \mathcal{L} = \sqrt{\lVert\hat{I}_{HQ} - I_{HQ}\rVert^2 + \varepsilon^2}. \end{equation} \begin{figure*}[t] \vspace{-2mm} \centering \begin{tabular}{c} \includegraphics[width=\linewidth]{figs/dense_and_sparse.pdf} \\ \end{tabular} \vspace{-5.5mm} \caption{(\textbf{a}) Dense attention strategy. Tokens of each group are from a dense area of the image. (\textbf{b}) Sparse attention strategy.
Tokens of each group are from a sparse area of the image.} \label{fig:attention} \vspace{-5.5mm} \end{figure*} \vspace{-3mm} \subsection{Attention Retractable Transformer} \label{subsec:art1} \vspace{-2mm} We elaborate on the details of our two proposed types of self-attention blocks in this section. As plotted in Fig.~\ref{fig:framework}{(b)}, the interactions of tokens are concentrated in the multi-head self-attention module (MSA). We formulate the calculation process in MSA as \begin{equation} \label{equ:msa} {\rm MSA}(X) = {\rm Softmax}(\frac{QK^T}{\sqrt{C}})V, \end{equation} where $Q, K, V \in \mathbb{R}^{N\times C}$ are respectively the query, key, and value from the linear projection of the input $X \in \mathbb{R}^{N\times C}$. $N$ is the length of the token sequence, and $C$ is the dimension size of each token. Here we assume that the number of heads is $1$ to reduce MSA to single-head self-attention for simplicity. \noindent \textbf{Multi-head Self Attention.} Given an image of size $H$$\times$$D$, a vision Transformer first splits the raw image into numerous patches. These patches are projected by convolutions with stride size $P$. The new projected feature map $\hat{X} \in \mathbb{R}^{h\times w\times C}$ is prepared with $h=\frac{H}{P}$ and $w=\frac{D}{P}$. Common MSA uses all the tokens extracted from the whole feature map and sends them to the self-attention module to learn relationships among each other. This suffers from a high computational cost, which is \begin{equation} \label{equ:msa-complexity} \Omega({\rm MSA}) = 4hwC^2 + 2(hw)^2C. \end{equation} To lower the computational cost, existing works generally utilize non-overlapping windows to obtain shorter token sequences. However, they mainly consider the tokens from a dense area of an image. Different from them, we propose the retractable attention strategies, which provide interactions of tokens from not only dense areas but also sparse areas of an image to obtain a wider receptive field. \noindent \textbf{Dense Attention.} As shown in Fig.~\ref{fig:attention}{(a)}, dense attention allows each token to interact with a smaller number of tokens from the neighboring positions within a non-overlapping $W$$\times$$W$ window. All tokens are split into several groups, and each group has $W$$\times$$W$ tokens. We apply these groups to compute self-attention for $\frac{h}{W}$$\times$$\frac{w}{W}$ times, and the computational cost of the new module, named D-MSA, is \begin{equation} \label{equ:d-msa-complexity} \begin{aligned} \Omega({\rm D\mbox{-}MSA}) = (4W^2C^2 + 2W^4C)\times \frac{h}{W}\times \frac{w}{W} = 4hwC^2 + 2W^2hwC. \end{aligned} \end{equation} \noindent \textbf{Sparse Attention.} Meanwhile, as shown in Fig.~\ref{fig:attention}{(b)}, we propose sparse attention to allow each token to interact with a smaller number of tokens, which are from sparse positions with interval size $I$. After that, all tokens are again split into several groups, and each group has $\frac{h}{I}$$\times$$\frac{w}{I}$ tokens. We further utilize these groups to compute self-attention for $I$$\times$$I$ times. We name the new multi-head self-attention module S-MSA, and the corresponding computational cost is \begin{equation} \label{equ:s-msa-complexity} \begin{aligned} \Omega({\rm S\mbox{-}MSA}) = (4\frac{h}{I}\times \frac{w}{I}C^2 + 2(\frac{h}{I}\times \frac{w}{I})^2C)\times I\times I = 4hwC^2 + 2\frac{h}{I}\frac{w}{I}hwC.
\end{aligned} \end{equation} By contrast, our proposed D-MSA and S-MSA modules have a lower computational cost since $W^2\ll hw$ and $\frac{h}{I}\frac{w}{I} < hw$. After computing all groups, the outputs are further merged to form the original-size feature map. In practice, we apply these two attention strategies to design two types of self-attention blocks, named dense attention block (DAB) and sparse attention block (SAB), as plotted in Fig.~\ref{fig:framework}. \noindent \textbf{Successive Attention Blocks.} We propose the alternating application of these two blocks. As the local interactions have higher priority, we fix the order with DAB in front of SAB. Besides, we provide a long-distance residual connection across every three pairs of blocks. We show the effectiveness of this joint application with the residual connection in Appendix Sec.~\ref{sec1} and Sec.~\ref{sec2}. \noindent \textbf{Attention Retractable Transformer.} We demonstrate that the application of these two blocks enables our model to capture local and global receptive fields simultaneously. We treat the successive attention blocks as a whole and get a new type of Transformer named Attention Retractable Transformer, which can provide interactions for both local dense tokens and global sparse tokens. \begin{table}[t] \scriptsize \vspace{-4mm} \caption{Comparison to related works. The differences between our ART and other works.} \label{difference_showing} \begin{center} \begin{tabular}{llllll} \hline \makecell[l]{Methods} &\makecell[l]{Solving problems} &\makecell[l]{Structure}&\makecell[l]{Interval of \\ extracted tokens}&\makecell[l]{Representation \\ of tokens}&\makecell[l]{Using long-distance\\residual connection} \\ \hline GG-Transformer~\cite{gg-transformer2021} &High-level & Pyramid & Changed & Semantic-level & No \\ Twins-SVT~\cite{chu2021twins} &High-level & Pyramid & Changed & Semantic-level & No \\ CrossFormer~\cite{wang2021crossformer} &High-level & Pyramid & Changed & Semantic-level & No \\ ART (Ours) &Low-level & Isotropic & Unchanged & Pixel-level & Yes \\ \hline \end{tabular} \end{center} \vspace{-6mm} \end{table} \vspace{-2mm} \subsection{Differences from related works} \vspace{-2mm} We summarize the differences between our proposed approach, ART, and the closely related works in Tab.~\ref{difference_showing}. We summarize them in three points. \textbf{(1) Different tasks.} GG-Transformer~\cite{gg-transformer2021}, Twins-SVT~\cite{chu2021twins} and CrossFormer~\cite{wang2021crossformer} are proposed to solve high-level vision problems. Our ART is the only one to employ sparse attention in the low-level vision field. \textbf{(2) Different designs of sparse attention.} In terms of attention design, GG-Transformer utilizes adaptively-dilated partitions, Twins-SVT utilizes a sub-sampling function, and CrossFormer utilizes cross-scale long-distance attention. As the layers get deeper, the interval of tokens from sparse attention becomes smaller and the channels of tokens become larger. Therefore, each token learns more semantic-level information. In contrast, the interval and the channel dimension of tokens in our ART remain unchanged, and each token represents accurate pixel-level information. \textbf{(3) Different model structures.} Different from these works, which use a pyramid model structure, our proposed ART enjoys an isotropic structure.
Besides, we provide the long-distance residual connection between several Transformer encoders, which enables the features of deep layers to preserve more low-frequency information from shallow layers. A more detailed discussion can be found in Appendix Sec.~\ref{sec2}. \vspace{-2mm} \subsection{Implementation Details} \label{subsec:implementation} \vspace{-2mm} Some details about how we apply ART to construct the image restoration model are introduced here. Firstly, the residual group number, DAB number, and SAB number in each group are set as $6$, $3$, and $3$. Secondly, all the convolutional layers are equipped with 3$\times$3 kernels, stride 1, and padding 1, so the height and width of the feature map remain unchanged. In practice, we treat each 1$\times$1 patch as a token. Besides, we set the channel dimension as 180 for most layers, except for the shallow feature extraction and the image reconstruction process. Thirdly, the window size in DAB is set as 8, and the interval size in SAB is adjustable according to different tasks, which is discussed in Sec.~\ref{subsec:ablation}. Lastly, to adjust the division of windows and sparse grids, we apply padding and masking strategies to the input feature map of self-attention, so that the number of divisions is always an integer. \section{Experimental Results} \vspace{-2mm} \subsection{Experimental Settings} \label{subsec:settings} \vspace{-2mm} \noindent \textbf{Data and Evaluation.} We conduct experiments on three image restoration tasks, including image SR, denoising, and JPEG Compression Artifact Reduction (CAR). For image SR, following previous works~\cite{zhang2018image,haris2018deepDBPN}, we use DIV2K~\cite{timofte2017ntire} and Flickr2K~\cite{lim2017enhanced} as training data, and Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2012single}, B100~\cite{martin2001database}, Urban100~\cite{huang2015single}, and Manga109~\cite{matsui2017sketch} as test data. For image denoising and JPEG CAR, following SwinIR~\cite{swinir2021}, we use DIV2K, Flickr2K, BSD500~\cite{arbelaez2010contourbsd500}, and WED~\cite{ma2016waterloowed} as training data. We use BSD68~\cite{martin2001database}, Kodak24~\cite{franzen1999kodak}, McMaster~\cite{zhang2011color}, and Urban100 as test data for image denoising. Classic5~\cite{foi2007pointwise} and LIVE1~\cite{sheikh2006statistical} are the test data for JPEG CAR. Note that we crop large-size input images into $200$$\times$$200$ partitions with overlapping pixels during inference. Following~\cite{lim2017enhanced}, we adopt the self-ensemble strategy to further improve the performance of our ART and name it ART+. We evaluate experimental results with PSNR and SSIM~\cite{wang2004image} values on the Y channel of images transformed to YCbCr space. \noindent \textbf{Training Settings.} Data augmentation is performed on the training data through horizontal flips and random rotations of $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$. Besides, we crop the original images into 64$\times$64 patches as the basic training inputs for image SR, 128$\times$128 patches for image denoising, and 126$\times$126 patches for JPEG CAR. We set the training batch size to $32$ for image SR, and $8$ for image denoising and JPEG CAR in order to make a fair comparison. We choose ADAM~\cite{kingma2014adam} to optimize our ART model with $\beta_1=0.9$, $\beta_2=0.999$, and zero weight decay. The initial learning rate is set as 2$\times 10^{-4}$ and is halved when the training iteration reaches certain milestones.
Taking image SR as an example, we train ART for a total of $500$k iterations and halve the learning rate when the training iterations reach $250$k, $400$k, $450$k, and $475$k, where $1$k means one thousand. Our ART is implemented in PyTorch~\cite{paszke2017automatic} with 4 NVIDIA RTX8000 GPUs. \begin{figure}[t] \centering \vspace{-3mm} \begin{tabular}{ccc} \hspace{-3mm} \includegraphics[width=0.34\linewidth]{figs/DABSABablationStudy.pdf} & \hspace{-4.5mm} \includegraphics[width=0.34\linewidth]{figs/IntervalAbaltionStudy.pdf} & \hspace{-4.5mm} \includegraphics[width=0.34\linewidth]{figs/comparison_of_variants.pdf} \\ \end{tabular} \vspace{-5mm} \caption{\bb{Left:} PSNR (dB) comparison of our ART using only dense attention blocks (DAB), using only sparse attention blocks (SAB), and using alternating DAB and SAB. \bb{Middle:} PSNR (dB) comparison of our ART using a large interval size in the sparse attention block, which is $(8,8,8,8,8,8)$ for six residual groups, using a medium interval size, which is $(8,8,6,6,4,4)$, and using a small interval size, which is $(4,4,4,4,4,4)$. \bb{Right:} PSNR (dB) comparison of SwinIR, ART-S, and ART.} \label{fig:ablation} \vspace{-3.5mm} \end{figure} \vspace{-2mm} \subsection{Ablation Study} \label{subsec:ablation} \vspace{-2mm} For ablation experiments, we train our models for image super-resolution ($\times$2) based on the DIV2K and Flickr2K datasets. The results are evaluated on the Urban100 benchmark dataset. \noindent \textbf{Design Choices for DAB and SAB.} We demonstrate the necessity of the simultaneous usage of the dense attention block (DAB) and sparse attention block (SAB) by conducting an ablation study. We set three different experimental conditions: using 6 DABs, using 6 SABs, and using 3 pairs of alternating DAB and SAB. We keep the rest of the experimental environment the same and train all models for $100$k iterations. The experimental results are shown in Fig.~\ref{fig:ablation}(Left). As we can see, only using DAB or SAB suffers from poor performance, because they lack either the global or the local receptive field. On the other hand, the structure of SAB following DAB brings higher performance. This validates that both local contextual interactions and global sparse interactions are important for strengthening the representation ability of the Transformer by obtaining retractable attention on the input feature. \noindent \textbf{Impact of Interval Size.} The interval size in the sparse attention block has a vital impact on the performance of our ART. In fact, if the interval size is set to 1, S-MSA reduces to full attention. Generally, a smaller interval means wider receptive fields but higher computational cost. We compare the experimental results under different interval settings in Fig.~\ref{fig:ablation}(Middle). As we can see, smaller intervals bring more performance gains. To keep the balance between accuracy and complexity, we set the interval sizes of the 6 residual groups as $(4, 4, 4, 4, 4, 4)$ for image SR, $(16,16,12,12,8,8)$ for image denoising, and $(18,18,13,13,7,7)$ for JPEG CAR in the following comparative experiments. \noindent \textbf{Comparison of Variant Models.} We provide a new version of our model for fair comparisons and name it ART-S. Different from ART, the MLP ratio in ART-S is set to 2 (4 in ART) and the interval size is set to 8. We demonstrate that ART-S has a model size comparable to that of SwinIR. We provide the PSNR comparison results in Fig.~\ref{fig:ablation}(Right). As we can see, our ART-S achieves better performance than SwinIR.
More comparative results can be found in following experiment parts. \begin{table*}[t] \tiny \begin{center} \vspace{-3mm} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Scale} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c|}{Set14} & \multicolumn{2}{c|}{B100} & \multicolumn{2}{c|}{Urban100} & \multicolumn{2}{c|}{Manga109}\\ \cline{3-12} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \hline EDSR~\cite{lim2017enhanced} & $\times$2 & 38.11 & 0.9602 & 33.92 & 0.9195 & 32.32 & 0.9013 & 32.93 & 0.9351 & 39.10 & 0.9773\\ RCAN~\cite{zhang2018image} & $\times$2 & 38.27 & 0.9614 & 34.12 & 0.9216 & 32.41 & 0.9027 & 33.34 & 0.9384 & 39.44 & 0.9786\\ SAN~\cite{dai2019secondSAN} & $\times$2 & 38.31 & 0.9620 & 34.07 & 0.9213 & 32.42 & 0.9028 & 33.10 & 0.9370 & 39.32 & 0.9792\\ SRFBN~\cite{li2019feedbackSRFBN} & $\times$2 & 38.11 & 0.9609 & 33.82 & 0.9196 & 32.29 & 0.9010 & 32.62 & 0.9328 & 39.08 & 0.9779\\ HAN~\cite{niu2020singleHAN} & $\times$2 & 38.27 & 0.9614 & 34.16 & 0.9217 & 32.41 & 0.9027 & 33.35 & 0.9385 & 39.46 & 0.9785\\ IGNN~\cite{zhou2020crossIGNN} & $\times$2 & 38.24 & 0.9613 & 34.07 & 0.9217 & 32.41 & 0.9025 & 33.23 & 0.9383 & 39.35 & 0.9786\\ CSNLN~\cite{mei2020imageCSNLN} & $\times$2 & 38.28 & 0.9616 & 34.12 & 0.9223 & 32.40 & 0.9024 & 33.25 & 0.9386 & 39.37 & 0.9785\\ RFANet~\cite{liu2020residualRFANet} & $\times$2 & 38.26 & 0.9615 & 34.16 & 0.9220 & 32.41 & 0.9026 & 33.33 & 0.9389 & 39.44 & 0.9783\\ NLSA~\cite{mei2021imageNLSA} & $\times$2 & 38.34 & 0.9618 & 34.08 & 0.9231 & 32.43 & 0.9027 & 33.42 & 0.9394 & 39.59 & 0.9789\\ IPT~\cite{chen2021preIPT} & $\times$2 & 38.37 & N/A & 34.43 & N/A & 32.48 & N/A & 33.76 & N/A & N/A & N/A \\ SwinIR~\cite{swinir2021} & $\times$2 & 38.42 & 0.9623 & 34.46 & 0.9250 & 32.53 & 0.9041 & 33.81 & 0.9427 & 39.92 & 0.9797\\ \textbf{ART-S (ours)} & $\times$2 & 38.48 & 0.9625 & 34.50 & 0.9258 & 32.53 & 0.9043 & 34.02 & 0.9437 & 40.11 & 0.9804\\ \textbf{ART (ours)} & $\times$2 & \textcolor{blue}{38.56} & \textcolor{blue}{0.9629} & \textcolor{blue}{34.59} & \textcolor{blue}{0.9267} & \textcolor{blue}{32.58} & \textcolor{blue}{0.9048} & \textcolor{blue}{34.30} & \textcolor{blue}{0.9452} & \textcolor{blue}{40.24} & \textcolor{blue}{0.9808} \\ \textbf{ART+ (ours)} & $\times$2 & \textcolor{red}{38.59} & \textcolor{red}{0.9630} & \textcolor{red}{34.68} & \textcolor{red}{0.9269} & \textcolor{red}{32.60} & \textcolor{red}{0.9050} & \textcolor{red}{34.41} & \textcolor{red}{0.9457} & \textcolor{red}{40.33} & \textcolor{red}{0.9810} \\ \hline \hline EDSR~\cite{lim2017enhanced} & $\times$3 & 34.65 & 0.9280 & 30.52 & 0.8462 & 29.25 & 0.8093 & 28.80 & 0.8653 & 34.17 & 0.9476\\ RCAN~\cite{zhang2018image} & $\times$3 & 34.74 & 0.9299 & 30.65 & 0.8482 & 29.32 & 0.8111 & 29.09 & 0.8702 & 34.44 & 0.9499\\ SAN~\cite{dai2019secondSAN} & $\times$3 & 34.75 & 0.9300 & 30.59 & 0.8476 & 29.33 & 0.8112 & 28.93 & 0.8671 & 34.30 & 0.9494\\ SRFBN~\cite{li2019feedbackSRFBN} & $\times$3 & 34.70 & 0.9292 & 30.51 & 0.8461 & 29.24 & 0.8084 & 28.73 & 0.8641 & 34.18 & 0.9481\\ HAN~\cite{niu2020singleHAN} & $\times$3 & 34.75 & 0.9299 & 30.67 & 0.8483 & 29.32 & 0.8110 & 29.10 & 0.8705 & 34.48 & 0.9500\\ IGNN~\cite{zhou2020crossIGNN} & $\times$3 & 34.72 & 0.9298 & 30.66 & 0.8484 & 29.31 & 0.8105 & 29.03 & 0.8696 & 34.39 & 0.9496\\ CSNLN~\cite{mei2020imageCSNLN} & $\times$3 & 34.74 & 0.9300 & 30.66 & 0.8482 & 29.33 & 0.8105 & 29.13 & 0.8712 & 34.45 & 0.9502\\ RFANet~\cite{liu2020residualRFANet} & $\times$3 & 34.79 & 
0.9300 & 30.67 & 0.8487 & 29.34 & 0.8115 & 29.15 & 0.8720 & 34.59 & 0.9506\\ NLSA~\cite{mei2021imageNLSA} & $\times$3 & 34.85 & 0.9306 & 30.70 & 0.8485 & 29.34 & 0.8117 & 29.25 & 0.8726 & 34.57 & 0.9508\\ IPT~\cite{chen2021preIPT} & $\times$3 & 34.81 & N/A & 30.85 & N/A & 29.38 & N/A & 29.49 & N/A & N/A & N/A \\ SwinIR~\cite{swinir2021} & $\times$3 & 34.97 & 0.9318 & 30.93 & 0.8534 & 29.46 & 0.8145 & 29.75 & 0.8826 & 35.12 & 0.9537\\ \textbf{ART-S (ours)} & $\times$3 & 34.98 & 0.9318 & 30.94 & 0.8530 & 29.45 & 0.8146 & 29.86 & 0.8830 & 35.22 & 0.9539\\ \textbf{ART (ours)} & $\times$3 & \textcolor{blue}{35.07} & \textcolor{blue}{0.9325} & \textcolor{blue}{30.99} & \textcolor{blue}{0.8540} & \textcolor{blue}{29.51} & \textcolor{blue}{0.8159} & \textcolor{blue}{30.10} & \textcolor{blue}{0.8871} & \textcolor{blue}{35.39} & \textcolor{blue}{0.9548} \\ \textbf{ART+ (ours)} & $\times$3 & \textcolor{red}{35.11} & \textcolor{red}{0.9327} & \textcolor{red}{31.05} & \textcolor{red}{0.8545} & \textcolor{red}{29.53} & \textcolor{red}{0.8162} & \textcolor{red}{30.22} & \textcolor{red}{0.8883} & \textcolor{red}{35.51} & \textcolor{red}{0.9552} \\ \hline \hline EDSR~\cite{lim2017enhanced} & $\times$4 & 32.46 & 0.8968 & 28.80 & 0.7876 & 27.71 & 0.7420 & 26.64 & 0.8033 & 31.02 & 0.9148\\ RCAN~\cite{zhang2018image} & $\times$4 & 32.63 & 0.9002 & 28.87 & 0.7889 & 27.77 & 0.7436 & 26.82 & 0.8087 & 31.22 & 0.9173\\ SAN~\cite{dai2019secondSAN} & $\times$4 & 32.64 & 0.9003 & 28.92 & 0.7888 & 27.78 & 0.7436 & 26.79 & 0.8068 & 31.18 & 0.9169\\ SRFBN~\cite{li2019feedbackSRFBN} & $\times$4 & 32.47 & 0.8983 & 28.81 & 0.7868 & 27.72 & 0.7409 & 26.60 & 0.8015 & 31.15 & 0.9160\\ HAN~\cite{niu2020singleHAN} & $\times$4 & 32.64 & 0.9002 & 28.90 & 0.7890 & 27.80 & 0.7442 & 26.85 & 0.8094 & 31.42 & 0.9177\\ IGNN~\cite{zhou2020crossIGNN} & $\times$4 & 32.57 & 0.8998 & 28.85 & 0.7891 & 27.77 & 0.7434 & 26.84 & 0.8090 & 31.28 & 0.9182\\ CSNLN~\cite{mei2020imageCSNLN} & $\times$4 & 32.68 & 0.9004 & 28.95 & 0.7888 & 27.80 & 0.7439 & 27.22 & 0.8168 & 31.43 & 0.9201\\ RFANet~\cite{liu2020residualRFANet} & $\times$4 & 32.66 & 0.9004 & 28.88 & 0.7894 & 27.79 & 0.7442 & 26.92 & 0.8112 & 31.41 & 0.9187\\ NLSA~\cite{mei2021imageNLSA} & $\times$4 & 32.59 & 0.9000 & 28.87 & 0.7891 & 27.78 & 0.7444 & 26.96 & 0.8109 & 31.27 & 0.9184\\ IPT~\cite{chen2021preIPT} & $\times$4 & 32.64 & N/A & 29.01 & N/A & 27.82 & N/A & 27.26 & N/A & N/A & N/A \\ SwinIR~\cite{swinir2021} & $\times$4 & 32.92 & 0.9044 & 29.09 & 0.7950 & 27.92 & 0.7489 & 27.45 & 0.8254 & 32.03 & 0.9260\\ \textbf{ART-S (ours)} & $\times$4 & 32.86 & 0.9029 & 29.09 & 0.7942 & 27.91 & 0.7489 & 27.54 & 0.8261 & 32.13 & 0.9263\\ \textbf{ART (ours)} & $\times$4 & \textcolor{blue}{33.04} & \textcolor{blue}{0.9051} & \textcolor{blue}{29.16} & \textcolor{blue}{0.7958} & \textcolor{blue}{27.97} & \textcolor{blue}{0.7510} & \textcolor{blue}{27.77} & \textcolor{blue}{0.8321} & \textcolor{blue}{32.31} & \textcolor{blue}{0.9283} \\ \textbf{ART+ (ours)} & $\times$4 & \textcolor{red}{33.07} & \textcolor{red}{0.9055} & \textcolor{red}{29.20} & \textcolor{red}{0.7964} & \textcolor{red}{27.99} & \textcolor{red}{0.7513} & \textcolor{red}{27.89} & \textcolor{red}{0.8339} & \textcolor{red}{32.45} & \textcolor{red}{0.9291} \\ \hline \end{tabular} \vspace{-2mm} \caption{PSNR (dB)/SSIM comparisons for image super-resolution on five benchmark datasets. 
We color the best and second-best results in \textcolor{red}{red} and \textcolor{blue}{blue}.} \label{table:psnr_ssim_SR_5sets} \end{center} \vspace{-1mm} \end{table*} \begin{table*}[t] \scriptsize \vspace{-2mm} \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline Method & EDSR & RCAN & SRFBN & HAN & CSNLN & SwinIR & ART-S (ours) & ART (ours)\\ \hline \hline Params (M) & 43.09 & 15.59 & 3.63 & 16.07 & 7.16 & 11.90 & 11.87 & 16.55\\ \hline Mult-Adds (G) & 1,286 & 407 & 498 & 420 & 103,640 & 336 & 392 & 782\\ \hline PSNR on Urban100 (dB) & 26.64 & 26.82 & 26.60 & 26.85 & 27.22 & 27.45 & 27.54 & 27.77\\ \hline PSNR on Manga109 (dB) & 31.02 & 31.22 & 31.15 & 31.42 & 31.43 & 32.03 & 32.13 & 32.31\\ \hline \end{tabular} \vspace{-2mm} \caption{Model size comparisons ($\times$4 SR). Output size is 3$\times$640$\times$640 for Mult-Adds calculation.} \label{table:model_size} \end{center} \vspace{-8mm} \end{table*} \begin{figure*}[t] \tiny \centering \begin{tabular}{cc} \hspace{-0.42cm} \begin{adjustbox}{valign=t} \begin{tabular}{c} \includegraphics[width=0.213\textwidth]{figs/visual/large/Resize_ComL_img_092_HR_x4.png} \\ Urban100: img\_092 ($\times$4) \end{tabular} \end{adjustbox} \hspace{-0.46cm} \begin{adjustbox}{valign=t} \begin{tabular}{cccccc} \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_HR_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_Bicubic_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_RCAN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_SRFBN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_SAN_x4.png} \hspace{-4mm} \\ HQ / PSNR (dB) \hspace{-4mm} & Bicubic / 15.31\hspace{-4mm} & RCAN / 18.36 \hspace{-4mm} & SRFBN / 18.26 \hspace{-4mm} & SAN / 18.26\hspace{-4mm} \\ \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_IGNN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_CSNLN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_RFANet_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_SwinIR_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_092_ART_x4.png} \hspace{-4mm} \\ IGNN / 18.51 \hspace{-4mm} & CSNLN / 18.69 \hspace{-4mm} & RFANet / 18.49 \hspace{-4mm} & SwinIR / 18.59 \hspace{-4mm} & \textbf{ART / 19.56} \hspace{-4mm} \\ \end{tabular} \end{adjustbox} \\ \hspace{-0.42cm} \begin{adjustbox}{valign=t} \begin{tabular}{c} \includegraphics[width=0.213\textwidth]{figs/visual/large/Resize_ComL_img_098_HR_x4.png} \\ Urban100: img\_098 ($\times$4) \end{tabular} \end{adjustbox} \hspace{-0.46cm} \begin{adjustbox}{valign=t} \begin{tabular}{cccccc} \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_HR_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_Bicubic_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_RCAN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_SRFBN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_SAN_x4.png} \hspace{-4mm} \\ HQ / PSNR (dB) \hspace{-4mm} & Bicubic / 18.28\hspace{-4mm} & RCAN / 19.70 \hspace{-4mm} & SRFBN / 19.55 \hspace{-4mm} & SAN / 19.66 \hspace{-4mm} \\
\includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_IGNN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_CSNLN_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_RFANet_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_SwinIR_x4.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/large/ComS_img_098_ART_x4.png} \hspace{-4mm} \\ IGNN / 19.70 \hspace{-4mm} & CSNLN / 19.82 \hspace{-4mm} & RFANet / 19.72 \hspace{-4mm} & SwinIR / 20.00 \hspace{-4mm} & \textbf{ART / 20.10} \hspace{-4mm} \\ \end{tabular} \end{adjustbox} \end{tabular} \vspace{-3.5mm} \caption{Visual comparison with challenging examples on image super-resolution ($\times$4).} \label{fig:img_sr_visual} \vspace{-7mm} \end{figure*} \vspace{-2.5mm} \subsection{Image Super-Resolution} \vspace{-2mm} We provide comparisons of our proposed ART with representative image SR methods, including CNN-based networks: EDSR~\cite{lim2017enhanced}, RCAN~\cite{zhang2018image}, SAN~\cite{dai2019secondSAN}, SRFBN~\cite{li2019feedbackSRFBN}, HAN~\cite{niu2020singleHAN}, IGNN~\cite{zhou2020crossIGNN}, CSNLN~\cite{mei2020imageCSNLN}, RFANet~\cite{liu2020residualRFANet}, NLSA~\cite{mei2021imageNLSA}, and Transformer-based networks: IPT~\cite{chen2021preIPT} and SwinIR~\cite{swinir2021}. Note that IPT is a pre-trained model, trained on the ImageNet benchmark dataset. All the results are obtained from publicly available code and data. Quantitative and visual comparisons are provided in Tab.~\ref{table:psnr_ssim_SR_5sets} and Fig.~\ref{fig:img_sr_visual}. \noindent \textbf{Quantitative Comparisons.} We present PSNR/SSIM comparison results for $\times$2, $\times$3, and $\times$4 image SR in Tab.~\ref{table:psnr_ssim_SR_5sets}. As we can see, our ART achieves the best PSNR/SSIM performance on all five benchmark datasets. With self-ensemble, ART+ achieves even better results. Compared with the existing state-of-the-art method SwinIR, our ART obtains consistent gains across all scale factors, indicating that our proposed joint dense and sparse attention blocks give the Transformer stronger representation ability. The other Transformer-based network, IPT, outperforms CNN-based networks but still falls short of ours. These results validate that our proposed ART is a promising new Transformer-based network for image SR. \begin{wrapfigure}{r}{0.55\linewidth} \centering \vspace{-4mm} \includegraphics[width=\linewidth]{figs/visual_comp.pdf} \vspace{-8mm} \caption{Visual comparison ($\times$4) of SwinIR and Ours.} \label{fig:visual_comp} \vspace{-3mm} \end{wrapfigure} \noindent \textbf{Retractable vs. Dense Attention.} We further show a typical visual comparison with SwinIR in Fig.~\ref{fig:visual_comp}. As SwinIR mainly utilizes a dense attention strategy, it restores wrong texture structures under the influence of nearby patches dominated by vertical lines. However, our ART can reconstruct the right texture, thanks to the wider receptive field provided by the sparse attention strategy. The patch is able to interact with farther patches containing similar horizontal lines, so it can be reconstructed clearly. This comparison demonstrates the advantage of retractable attention and its strong ability to restore high-quality outputs.
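\noindent \textbf{Illustrative Token Grouping.} To make the contrast between the two strategies concrete, the following Python sketch shows how the tokens of an $H\times W$ feature map can be grouped for dense (window) versus sparse (interval) self-attention. This is a conceptual illustration of the token-grouping idea only; the function names and the grid, window, and interval sizes are made up for exposition, the attention computation itself is omitted, and this is not our released implementation.
\begin{verbatim}
import numpy as np

def dense_groups(H, W, w):
    # Non-overlapping w x w windows: each group is spatially contiguous,
    # so attention within a group has a local (dense) receptive field.
    idx = np.arange(H * W).reshape(H, W)
    return [idx[i:i + w, j:j + w].ravel()
            for i in range(0, H, w) for j in range(0, W, w)]

def sparse_groups(H, W, s):
    # Tokens sampled every s positions: each group spans the whole map,
    # so attention within a group has a wide (sparse) receptive field.
    idx = np.arange(H * W).reshape(H, W)
    return [idx[i::s, j::s].ravel()
            for i in range(s) for j in range(s)]

# Example: on an 8 x 8 grid, window size 4 and interval size 2 both give
# 4 groups of 16 tokens, but with very different spatial extents.
print(dense_groups(8, 8, 4)[0])   # top-left 4 x 4 block
print(sparse_groups(8, 8, 2)[0])  # every other token over the full grid
\end{verbatim}
Alternating these two groupings across successive blocks is what lets the receptive field repeatedly contract and expand, i.e., what makes the attention retractable.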
\noindent \textbf{Model Size Comparisons.} Table~\ref{table:model_size} compares the number of parameters and Mult-Adds of different networks, including existing state-of-the-art methods. We calculate the Mult-Adds assuming that the output size is 3$\times$640$\times$640 under $\times$4 image SR. Compared with previous CNN-based networks, our ART has a comparable number of parameters and Mult-Adds but achieves higher performance. Besides, we can see that our ART-S has fewer parameters and Mult-Adds than most of the compared methods. The model size of ART-S is similar to that of SwinIR. However, ART-S still outperforms all compared methods except our ART. This indicates that our method is able to achieve promising performance at an acceptable computational and memory cost. \noindent \textbf{Visual Comparisons.} We also provide some challenging examples for visual comparison ($\times$4) in Fig.~\ref{fig:img_sr_visual}. We can see that our ART is able to alleviate heavy blurring artifacts while restoring detailed edges and textures. Compared with other methods, ART obtains visually pleasing results by recovering more high-frequency details, indicating that ART performs better for image SR. \begin{table*}[t] \tiny \begin{center} \resizebox{\textwidth}{!}{ \setlength{\tabcolsep}{0.8mm} \vspace{-3mm} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{BSD68} & \multicolumn{3}{c|}{Kodak24} & \multicolumn{3}{c|}{McMaster} & \multicolumn{3}{c|}{Urban100}\\ \cline{2-13} &$\sigma$=15 & $\sigma$=25 & $\sigma$=50 &$\sigma$=15 & $\sigma$=25 & $\sigma$=50 &$\sigma$=15 & $\sigma$=25 & $\sigma$=50 &$\sigma$=15 & $\sigma$=25 & $\sigma$=50\\ \hline \hline CBM3D~\cite{Dabov2007CBM3D} & N/A & N/A & 27.38 & N/A & N/A & 28.63 & N/A & N/A & N/A & N/A & N/A & 27.94\\ IRCNN~\cite{zhang2017learningIRCNN} & 33.86 & 31.16 & 27.86 & 34.69 & 32.18 & 28.93 & 34.58 & 32.18 & 28.91 & 33.78 & 31.20 & 27.70\\ FFDNet~\cite{zhang2018ffdnet} & 33.87 & 31.21 & 27.96 & 34.63 & 32.13 & 28.98 & 34.66 & 32.35 & 29.18 & 33.83 & 31.40 & 28.05\\ DnCNN~\cite{zhang2017beyonddncnn} & 33.90 & 31.24 & 27.95 & 34.60 & 32.14 & 28.95 & 33.45 & 31.52 & 28.62 & 32.98 & 30.81 & 27.59\\ RNAN~\cite{zhang2019rnan} & N/A & N/A & 28.27 & N/A & N/A & 29.58 & N/A & N/A & 29.72 & N/A & N/A & 29.08\\ RDN~\cite{zhang2020rdnir} & N/A & N/A & 28.31 & N/A & N/A & 29.66 & N/A & N/A & N/A & N/A & N/A & 29.38 \\ IPT~\cite{chen2021preIPT} & N/A & N/A & 28.39 & N/A & N/A & 29.64 & N/A & N/A & 29.98 & N/A & N/A & 29.71\\ DRUNet~\cite{zhang2021plugDRUNet} & 34.30 & 31.69 & 28.51 & 35.31 & 32.89 & \textcolor{black}{29.86} & 35.40 & 33.14 & 30.08 & 34.81 & 32.60 & 29.61\\ P3AN~\cite{hu2021pseudo} & N/A & N/A & 28.37 & N/A & N/A & 29.69 & N/A & N/A & N/A & N/A & N/A & 29.51\\ SwinIR~\cite{swinir2021} & 34.42 & 31.78 & 28.56 & 35.34 & 32.89 & 29.79 & 35.61 & 33.20 & 30.22 & 35.13 & 32.90 & 29.82\\ Restormer~\cite{restormer2022} & 34.40 & 31.79 & \textcolor{black}{28.60} & \textcolor{red}{35.47} & \textcolor{red}{33.04} & \textcolor{red}{30.01} & 35.61 & 33.34 & \textcolor{black}{30.30} & 35.13 & 32.96 & 30.02\\ \textbf{ART (ours)} & \textcolor{blue}{34.46} & \textcolor{blue}{31.84} & \textcolor{blue}{28.63} & \textcolor{black}{35.39} & \textcolor{black}{32.95} & 29.87 & \textcolor{blue}{35.68} & \textcolor{blue}{33.41} & \textcolor{blue}{30.31} & \textcolor{blue}{35.29} & \textcolor{blue}{33.14} & \textcolor{blue}{30.19} \\ \textbf{ART+ (ours)} & \textcolor{red}{34.47} & \textcolor{red}{31.85} &
\textcolor{red}{28.65} & \textcolor{blue}{35.41} & \textcolor{blue}{32.98} & \textcolor{blue}{29.89} & \textcolor{red}{35.71} & \textcolor{red}{33.44} & \textcolor{red}{30.35} & \textcolor{red}{35.34} & \textcolor{red}{33.20} & \textcolor{red}{30.27} \\ \hline \end{tabular} } \vspace{-4mm} \caption{PSNR (dB) comparisons for color image denoising on four benchmark datasets. We color best and second best results in \textcolor{red}{red} and \textcolor{blue}{blue}.} \label{table:psnr_DN_4sets} \end{center} \vspace{-3.5mm} \end{table*} \begin{table*}[t] \scriptsize \begin{center} \resizebox{\textwidth}{!}{ \setlength{\tabcolsep}{1.5mm} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{$q$} & \multicolumn{2}{c|}{RNAN} & \multicolumn{2}{c|}{RDN} & \multicolumn{2}{c|}{DRUNet} & \multicolumn{2}{c|}{SwinIR} & \multicolumn{2}{c|}{ART (ours)} & \multicolumn{2}{c|}{ART+ (ours)} \\ \cline{3-14} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \hline \multirow{3}{*}{Classic5} & 10 & 29.96 & 0.8178 & 30.00 & 0.8188 & 30.16 & 0.8234 & \textcolor{black}{30.27} & \textcolor{black}{0.8249} & \textcolor{blue}{30.27} & \textcolor{blue}{0.8258} & \textcolor{red}{30.32} & \textcolor{red}{0.8263} \\ & 30 & 33.38 & 0.8924 & 33.43 & 0.8930 & 33.59 & 0.8949 & \textcolor{black}{33.73} & \textcolor{black}{0.8961} & \textcolor{blue}{33.74} & \textcolor{blue}{0.8964} & \textcolor{red}{33.78} & \textcolor{red}{0.8967} \\ & 40 & 34.27 & 0.9061 & 34.27 & 0.9061 & 34.41 & 0.9075 & \textcolor{black}{34.52} & \textcolor{black}{0.9082} & \textcolor{blue}{34.55} & \textcolor{blue}{0.9086} & \textcolor{red}{34.58} & \textcolor{red}{0.9089} \\ \hline \multirow{3}{*}{LIVE1} & 10 & 29.63 & 0.8239 & 29.67 & 0.8247 & 29.79 & 0.8278 & \textcolor{black}{29.86} & \textcolor{black}{0.8287} & \textcolor{blue}{29.89} & \textcolor{blue}{0.8300} & \textcolor{red}{29.92} & \textcolor{red}{0.8305} \\ & 30 & 33.45 & 0.9149 & 33.51 & 0.9153 & 33.59 & 0.9166 & \textcolor{black}{33.69} & \textcolor{black}{0.9174} & \textcolor{blue}{33.71} & \textcolor{blue}{0.9178} & \textcolor{red}{33.74} & \textcolor{red}{0.9181} \\ & 40 & 34.47 & 0.9299 & 34.51 & 0.9302 & 34.58 & 0.9312 & \textcolor{black}{34.67} & \textcolor{black}{0.9317} & \textcolor{blue}{34.70} & \textcolor{blue}{0.9322} & \textcolor{red}{34.73} & \textcolor{red}{0.9324} \\ \hline \end{tabular} } \vspace{-4mm} \caption{PSNR (dB)/SSIM comparisons for JPEG Compression Artifact Reduction on two benchmark datasets. We color best and second best results in \textcolor{red}{red} and \textcolor{blue}{blue}.} \label{table:psnr_ssim_psnrb_JPEG} \end{center} \vspace{-7mm} \end{table*} \vspace{-3mm} \subsection{Image Denoising} \vspace{-2mm} We show color image denoising results to compare our ART with representative methods in Tab.~\ref{table:psnr_DN_4sets}. These methods are CBM3D~\cite{Dabov2007CBM3D}, IRCNN~\cite{zhang2017learningIRCNN}, FFDNet~\cite{zhang2018ffdnet}, DnCNN~\cite{zhang2017beyonddncnn}, RNAN~\cite{zhang2019rnan}, RDN~\cite{zhang2020rdnir}, IPT~\cite{chen2021preIPT}, DRUNet~\cite{zhang2021plugDRUNet}, P3AN~\cite{hu2021pseudo}, SwinIR~\cite{swinir2021}, and Restormer~\cite{restormer2022}. Following most recent works, we set the noise level to $15$, $25$, and $50$. We also show visual comparisons of challenging examples in Fig.~\ref{fig:img_dn_visual}. \noindent \textbf{Quantitative Comparisons.} Table~\ref{table:psnr_DN_4sets} shows PSNR results of color image denoising.
As we can see, our ART achieves the highest performance among all compared methods on three of the four datasets, the exception being Kodak24. Even better results are obtained by ART+ using self-ensemble. In particular, it outperforms the state-of-the-art model Restormer~\cite{restormer2022} by up to 0.25 dB on Urban100. Restormer also has restricted receptive fields and thus has difficulty in some challenging cases. In conclusion, these comparisons indicate that our ART is also highly capable in image denoising. \noindent \textbf{Visual Comparisons.} The visual comparison for color image denoising of different methods is shown in Fig.~\ref{fig:img_dn_visual}. Our ART can preserve detailed textures and high-frequency components and remove heavy noise corruption. Compared with other methods, it restores cleaner and crisper images. This demonstrates that our ART is also suitable for image denoising. \begin{figure*}[ht] \tiny \centering \vspace{-2mm} \begin{tabular}{cc} \hspace{-0.45cm} \begin{adjustbox}{valign=t} \begin{tabular}{c} \includegraphics[width=0.213\textwidth]{figs/visual/DN_RGB/Resize_ComL_img_033_HQ_N50.png} \\ Urban100: img\_033 \end{tabular} \end{adjustbox} \hspace{-0.46cm} \begin{adjustbox}{valign=t} \begin{tabular}{cccccc} \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_HQ_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_LQ_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_CBM3D_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_IRCNN_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_DnCNN_S_N50.png} \hspace{-4mm} \\ HQ / PSNR (dB) \hspace{-4mm} & Noisy / 15.15 \hspace{-4mm} & CBM3D / 28.72 \hspace{-4mm} & IRCNN / 28.57 \hspace{-4mm} & DnCNN / 29.13 \hspace{-4mm} \\ \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_RNAN_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_RDN_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_SwinIR_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_Restormer_N50.png} \hspace{-4mm} & \includegraphics[width=0.149\textwidth]{figs/visual/DN_RGB/ComS_img_033_ART_N50.png} \hspace{-4mm} \\ RNAN / 30.28 \hspace{-4mm} & RDN / 30.48 \hspace{-4mm} & SwinIR / 31.16 \hspace{-4mm} & Restormer / 31.81 \hspace{-4mm} & \textbf{ART / 31.94 }\hspace{-4mm} \\ \end{tabular} \end{adjustbox} \end{tabular} \vspace{-3.5mm} \caption{Visual comparison with challenging examples on color image denoising ($\sigma$=50).} \label{fig:img_dn_visual} \vspace{-4mm} \end{figure*} \vspace{-1.5mm} \subsection{JPEG Compression Artifact Reduction} \vspace{-2mm} We compare our ART with state-of-the-art JPEG CAR methods: RNAN~\cite{zhang2019rnan}, RDN~\cite{zhang2020rdnir}, DRUNet~\cite{zhang2021plugDRUNet}, and SwinIR~\cite{swinir2021}. Following most recent works, we set the compression quality factors of the original images to 40, 30, and 10. We provide the PSNR and SSIM comparison results in Table~\ref{table:psnr_ssim_psnrb_JPEG}. \textbf{Quantitative Comparisons.} Table~\ref{table:psnr_ssim_psnrb_JPEG} shows the PSNR/SSIM comparisons of our ART with existing state-of-the-art methods. We can see that our proposed method has the best performance. Better results are achieved by ART+ using self-ensemble.
These results indicate that our ART also performs strongly on image compression artifact reduction. \vspace{-4mm} \section{Conclusion} \vspace{-4mm} In this work, we propose the Attention Retractable Transformer (ART) for image restoration, which offers two types of self-attention blocks to enhance the Transformer's representation ability. Most previous image restoration Transformer backbones restrict dense self-attention computation to non-overlapping regions to reduce its cost, and thus suffer from restricted receptive fields. Without introducing additional computational cost, we employ the sparse attention mechanism to enable tokens from sparse areas of the image to interact with each other. In practice, the alternating application of dense and sparse attention modules provides retractable attention for the model and brings promising improvements. Experiments on image SR, denoising, and JPEG CAR tasks validate that our method achieves state-of-the-art results on various benchmark datasets. \section*{Reproducibility Statement} We provide the reproducibility statement of our proposed method in this section. We introduce the model architecture and core dense and sparse attention modules in Sec.~\ref{method}. Besides, we also give the implementation details. In Sec.~\ref{subsec:settings}, we provide the detailed experiment settings. To ensure reproducibility, we provide the source code and pre-trained models at the website\footnote{\href{https://github.com/gladzhang/ART}{https://github.com/gladzhang/ART}}. Anyone can run our code to check the training and testing process according to the given instructions. The pre-trained models provided there can be used to verify the corresponding results. For more details, please refer to the website or the supplementary materials. \small
\section{Introduction} Most of the recent work has been based on spectroscopy, while little effort has been devoted to analyzing the photometry. In this poster we present new, high accuracy photometry obtained at ESO La Silla with the New Technology Telescope ($NTT$) under good seeing conditions ($0.7^{\prime\prime} FWHM$) in the inner regions of the cluster. In particular we will focus our analysis on the field located at $5^{\prime}$ North of the cluster center, where the crowding is considerably lower than in the central region, but which is still rich enough in stars to secure about 1000 objects between the Main Sequence (MS) Turnoff region (TO) and the Horizontal Branch (HB) in a 6 square arcminute field.\\ \begin{figure}[!htb] \plotfiddle{ortolani_fig1.eps}{75truemm}{0}{45}{45}{-150}{-80} \caption{The bright portion of the $\Omega$ Centauri CMD. See text for details.} \end{figure} \section {The Color-Magnitude Diagram} Fig.~1a shows the Color-Magnitude Diagram (CMD) obtained in this field. Superimposed are $15~Gyr$ isochrones for metallicities of $z~=~0.0001$, $0.0004$ and $0.004$ (corresponding respectively to about $[Me/H]~=~-2.3$, $-1.7$ and $-0.7$), adopted from Bertelli et al. (1994) and shifted by $0.12 ~mag.$ in $(B-V)$ and $14.1 ~mag.$ in $V$ to get the best fit of the $z~=~0.004$ isochrone with the data. The points follow the two metal poor isochrones quite well, from the HB to the TO, while the $z~=~0.004$ isochrone clearly lies at the edge of the distribution, but could still explain the presence of a bump in the Sub Giant Branch (SGB) at about $V~=~14.6-14.7$, due to superimposed metal rich HB stars. In order to evaluate the effects of instrumental errors on the CMD, we carried out several experiments with about 1000 artificial stars injected into the original frame, generated from the input CMD shown in Fig.~1b. The corresponding output is shown in fig.~1c. A close analysis of the results from the simulations indicates that the output diagram is systematically shifted upwards, compared to the input one, as a consequence of blending. The resulting spread is also higher than that obtained by simply combining the photometric errors in the two colors from single stars. \section{Comparison with Metallicity Data} Fig.~2a presents the color histogram of the observed data, Fig.~2b the histogram from the artificial CMD, and Fig.~2c the distribution of the SGB star metallicities observed by Suntzeff and Kraft (1996; hereafter SK), without any compensation for radial and evolutionary effects. The shape of the two histograms is surprisingly similar in spite of the different samples used (the SK sample is located at about $10^{\prime}$ from the cluster center, while ours is at $5^{\prime}$), with a tail toward the highest metallicities and redder colors. \begin{figure}[!htb] \plotfiddle{ortolani_fig2.eps}{75truemm}{0}{45}{45}{-150}{-80} \caption{Comparison between metallicity and photometry.} \end{figure} The existence of a secondary peak due to a broad metal rich distribution, discussed by Norris et al. (1997), cannot be confirmed by the present data, but will be further investigated after the analysis of the full set of available data. Our comparison of the spectroscopic metallicities with the photometric ones is based mainly on the intrinsic width of the SGB compared with the width computed from the spectroscopically determined metallicity spread. We used the SK paper as a reference because they measured a wide and consistent sample in the SGB.
Fig.~3a has been obtained from their original data (Tab.~3b). It shows the relation $[Fe/H]$ versus $(B-V)$ dereddened color (see also their fig.~9), where the high resolution metallicity scale was chosen. The solid line superimposed on the data is the theoretical relationship derived from Bertelli et al. (1994), for an age of $15 ~Gyr$, at the level of $0.44 ~mag.$ below the HB, corresponding to the SK sample of stars. The dashed line has been obtained using younger models ($9 ~Gyr$) for the $[Fe/H]~=~-0.7$ isochrone. Fig.~3b is the same as fig.~3a but for the sample of SK giant stars located within $6.2^{\prime}$ from the cluster center. Two facts are evident from these figures:\\ \begin{description} \item{$\bullet$}~~the spread in both metallicity and $(B-V)$ color is very wide. At a fixed metallicity the color range spans about 0.2 ~mag., while at a given color the metallicity goes from [Fe/H]=-1.8 to -1.0. This wide range is larger than the expected observational errors, even if it is not easy to estimate the photometric errors in the colors published by SK (which are derived from Woolley, 1966); \item{$\bullet$}~~the $[Fe/H]$ versus $(B-V)$ relationships are roughly in agreement with the general trend of the data and they get steeper for a younger age of the metal rich component, as is evident from fig.~3c, where isochrones of $15$ and $9~Gyr$ are shown. \end{description} \begin{figure}[!htb] \plotfiddle{ortolani_fig3.eps}{75truemm}{0}{45}{45}{-150}{-80} \caption{Metallicity vs dereddened colors. See text for details.} \end{figure} \section{Discussion} This younger age is still compatible with the data. An analytical, approximate expression for the intrinsic width of the SGB can be given in the following simplified form \begin{equation} \Delta(B-V) = \sqrt{\left(\frac{d(B-V)}{d[Fe/H]} \times \Delta [Fe/H]\right)^{2} + \left(\frac{d(B-V)}{d\tau} \times \Delta \tau\right)^{2}} = \sqrt{(0.23 \times \Delta [Fe/H])^{2} + (0.01 \times \Delta \tau)^{2}}, \end{equation} \noindent where $\tau$ is the age measured in billion years. The factor $\Delta \tau$ is negative for increasing age at a metallicity higher than the average, positive if lower. Using a metallicity spread $\sigma[Fe/H]~=~0.2 ~dex$ obtained by SK for the SGB sample, corrected for radial effects, we derive a theoretical $\sigma_{(B-V)}~=0.26 \times 0.23~=~0.06 ~mag.$ when no age difference is assumed between the metallicity components. From our data in fig.~1 we measured the SGB width at $V~=~15.8$ in a half magnitude bin, obtaining $\sigma (B-V)~=~0.064 ~mag$. From the artificial CMD in the same bin we get $\sigma (B-V)~=~0.047 ~mag.$, indicating that an important fraction of the color spread comes from instrumental errors. The quadratic deconvolution, $\sigma(B-V)~=~\sqrt{0.064^{2}-0.047^{2}}$, gives a final $\sigma(B-V)~=~0.043 ~mag.$, which is smaller than the predicted one. This is still an upper limit because binary stars, peculiar objects and field stars contribute to widening the intrinsic distribution. Similar results are obtained changing the bin width and the position along the SGB. \\ Possible explanations for this relatively narrow photometric dispersion are the following:\\ \begin{description} \item (1) individual element ratios are effective in reducing the global $[Me/H]$ spread, or \item (2) the metal rich component is some billion years younger than the metal poor one, as discussed by Norris et al. (1997).
\end{description} The first hypothesis implies a trend of decreasing $CNO$ or $s$-elements with increasing $Fe$, which is not supported by the high resolution analysis of Norris and Da Costa (1995). The age effect is an interesting possibility, with important consequences for the metal enrichment history of the cluster, but further work is still needed to check whether the observed residual is due to systematic effects in the color transformations or in other ingredients used in the theoretical models (Cayrel et al., 1997). A further test could be the measurement of the MS intrinsic width (see also Noble et al., 1991, Bell and Gustafsson, 1983). Unfortunately the color-metallicity relationship in the MS is expected to give a color spread about half as large as in the SGB. Our tests, performed on the best photometry at about $1 ~mag.$ below the turnoff, show that the intrinsic width is completely hidden by the binary star sequence.\\ \acknowledgments We would like to thank Roger Cayrel and Alessandro Bressan for suggestions and helpful discussions.
\section{Introduction} \label{Introduction} Alzheimer's disease (AD) is an irreversible brain disorder that slowly destroys memory and thinking skills. According to World Alzheimer Reports \citep{gaugler20192019}, around $55$ million people worldwide are living with Alzheimer's disease and related dementia. The total global cost of Alzheimer's disease and related dementia was estimated at a trillion US dollars, equivalent to $1.1\%$ of global gross domestic product. Alzheimer's patients often suffer from behavioral deficits including memory loss and difficulty with thinking, reasoning and decision making. In the current model of AD pathogenesis, it is well established that deposition of amyloid plaques is an early event that, in conjunction with tau pathology, causes neuronal damage. Scientists have identified risk genes that may cause the abnormal aggregation and deposition of the amyloid plaques \citep[e.g.][]{morishima2002alzheimer}. The neuronal damage typically starts from the hippocampus and results in the first clinical manifestations of the disease in the form of episodic memory deficits \citep{weiner2013alzheimer}. Specifically, \citet{jack2010hypothetical} presented a hypothetical model for biomarker dynamics in AD pathogenesis, which has been empirically and collectively supported by many works in the literature. The model begins with the abnormal deposition of $\beta$ amyloid (A$\beta$) fibrils, as evidenced by a corresponding drop in the levels of soluble A$\beta$-42 in cerebrospinal fluid (CSF) \citep{aizenstein2008frequent}. After that, neuronal damage begins to occur, as evidenced by increased levels of CSF tau protein \citep{hesse2001transient}. Numerous studies have investigated how A$\beta$ and tau impact the hippocampus \citep[e.g.][]{ferreira2011abeta}, which is known to be fundamentally involved in the acquisition, consolidation, and recollection of new episodic memories \citep{frozza2018challenges}. In particular, as neuronal degeneration progresses, brain atrophy, which starts with hippocampal atrophy \citep{fox1996presymptomatic}, becomes detectable by magnetic resonance imaging (MRI). Recent studies have also found other important CSF proteins that may be related to hippocampal atrophy. For instance, low levels of chromogranin A (CgA) and trefoil factor 3 (TFF3), and high levels of cystatin C (CysC), are evidently associated with hippocampal atrophy \citep{khan2015subset,paterson2014cerebrospinal}. Indeed, the impact of protein concentrations on behavior can also operate through atrophy of other brain regions; for example, entorhinal tau pathology potentially contributes to episodic memory decline \citep{maass2018entorhinal}. As sufficient brain atrophy accumulates, it results in cognitive symptoms and impairment. This process of AD pathogenesis is summarized by the flow chart in Figure \ref{fig: flowchart}. Note that it is still debatable how A$\beta$ and tau interact with each other, as mentioned by \citet{jack2013tracking, majdi2020amyloid}; however, A$\beta$ appears to hit a biomarker detection threshold earlier than tau \citep{jack2013tracking}. In addition, as noted by \citet{hampel2018cholinergic}, it is likely that highly complex interactions exist between A$\beta$ and tau, and the cholinergic system. For instance, associations have been found between CSF biomarkers of amyloid and tau pathology in AD \citep{remnestaal2021association}.
It has also been found that other factors, such as dysregulation and dysfunction of the Wnt signaling pathway, may contribute to A$\beta$ and tau pathologies \citep{ferrari2014wnt}. In addition, the M1 and M3 subtypes of muscarinic receptors increase amyloid precursor protein production via the induction of the phospholipase C/protein kinase C pathway and increase BACE expression in AD brains \citep{nitsch1992release}. \begin{figure}[htbp] \centering \includegraphics[height=2.2in,width=6.5in]{./plotsMainArkSupp/flowchartv6.png} \caption{A hypothetical model of AD pathogenesis based on \cite{selkoe2016amyloid}. The double arrows represent the possible interactions that exist between A$\beta$ and tau, and the cholinergic system. The red arrow denotes the conditional association we are interested in estimating.} \label{fig: flowchart} \end{figure} The aim of this paper is to map the genetic-imaging-clinical (GIC) pathway for AD, which is the most important part of the hypothetical model of AD pathogenesis in Figure \ref{fig: flowchart}. Histological studies have shown that the hippocampus is particularly vulnerable to Alzheimer's disease pathology and has already been considerably damaged at the first occurrence of clinical symptoms \citep{braak1998evolution}. Therefore, the hippocampus has become a major focus in Alzheimer's studies \citep{de1989early}. Some neuroscientists even conjecture that the association between hippocampal atrophy and behavioral deficits may be causal, because the former destroys the connections that help neurons communicate and results in a loss of function \citep{BrainAtrophy}. We are interested in delineating the genetically-regulated hippocampal shape that drives AD-related behavioral deficits and disease progression. To map the GIC pathway, we extract clinical, imaging, and genetic variables from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study as follows. First, we use the Alzheimer's Disease Assessment Scale (ADAS) cognitive score to quantify behavioral deficits, for which a higher score indicates more severe behavioral deficits. Second, we characterize the exposure of interest, hippocampal shape, by the hippocampal morphometry surface measure, summarized as two $100 \times 150$ matrices corresponding to the left/right hippocampi. Each element of the matrices is a continuous-valued variable, representing the radial distance from the corresponding coordinate on the hippocampal surface to the medial core of the hippocampus. Compared with the conventional scalar measure of hippocampus shape \citep{jack2003mri}, recent studies show that the additional information contained in the hippocampal morphometry surface measure is valuable for Alzheimer's diagnosis \citep{thompson2004mapping}. For example, \citet{li2007hippocampal} showed that the surface measures of the hippocampus could provide more sensitive indices than volume differences for discriminating between patients with Alzheimer's and healthy control subjects. In our case, with the 2D matrix radial distance measure, one may investigate how local shapes of hippocampal subfields are associated with the behavioral deficits. Third, the ADNI study measures ultra-high dimensional genetic covariates and other demographic covariates at baseline. There are more than $6$ million genetic variants per subject. The special data structure of the ADNI data application presents new challenges for statistically mapping the GIC pathway.
First, unlike conventional statistical analysis that deals with a scalar exposure, our exposure of interest is represented by high-dimensional 2D hippocampal imaging measures. Second, the dimension of the baseline covariates, which are also potential confounders, is much larger than the sample size. Recently there have been many developments in confounder selection, most of which are in the causal inference literature. Studies show that inclusion of variables associated only with the exposure but not directly with the outcome except through the exposure (known as instrumental variables) may result in a loss of efficiency in the causal effect estimate \citep[e.g.][]{schisterman2009overadjustment}, while inclusion of variables only related to the outcome but not the exposure (known as precision variables) may provide efficiency gains \citep[e.g.][]{brookhart2006variable}; see \cite{shortreed2017outcome, richardson2018discussion,tang2021outcome} and references therein for an overview. When a large number of covariates are available, the primary difficulty in mapping the GIC pathway is how to include all the confounders and precision variables, while excluding all the instrumental variables and irrelevant variables (those not related to either the outcome or the exposure). We develop a novel two-step approach to estimate the conditional association between the high-dimensional 2D hippocampal surface exposure and the Alzheimer's behavioral score, while taking into account the ultra-high dimensional baseline covariates. The first step is a fast screening procedure based on both the outcome and exposure models to rule out most of the irrelevant variables. The use of both models in screening is crucial for both computational efficiency and selection accuracy, as we will show in detail in Sections \ref{jointscreening} and \ref{Blockwise joint screening}. The second step is a penalized regression procedure for the outcome generating model to further exclude instrumental and irrelevant variables, and simultaneously estimate the conditional association. Our simulations and ADNI data application demonstrate the effectiveness of the proposed procedure. Our analysis represents a novel inferential target compared to recent developments in imaging genetics mediation analysis \citep{bi2017genome}. Although we consider a similar set of variables and structure among these variables as illustrated later in Figure \ref{fig: DAG}, our goal is to estimate the conditional association of hippocampal shape with behavioral deficits. In contrast, in mediation analysis, researchers are often interested in the effects of genetic factors on behavioral deficits, and how those are mediated through the hippocampus. Direct application of methods developed for imaging genetics mediation analysis to our problem may select genetic factors that are confounders affecting both hippocampal shape and behavioral deficits. In comparison, we aim to include precision variables in the adjustment set as they may improve efficiency. The rest of the article is organized as follows. Section \ref{description} includes a detailed data and problem description. We introduce our models and a two-step variable selection procedure in Section \ref{Setup}. We analyze the ADNI data and estimate the association between hippocampal shape and behavioral deficits conditional on observed clinical and genetic covariates in Section \ref{imgenedatastudy}. Simulations are conducted in Section \ref{simulation} to evaluate the finite-sample performance of the proposed method.
We finish with a discussion in Section \ref{sec:discussion}. The theoretical properties of our procedure are included in Section \ref{Theoretical guarantees} in the supplementary material. \section{Data and problem description} \label{description} Understanding how human brains work and how they connect to human behavior is a central goal in medical studies. In this paper, we are interested in studying whether and how hippocampal shape is associated with behavioral deficits in Alzheimer's studies. We consider the clinical, genetic, imaging and behavioral measures in the ADNI dataset. The outcome of interest is the Alzheimer's Disease Assessment Scale cognitive score observed at the 24th month after baseline measurements. The 13-item Alzheimer's Disease Assessment Scale cognitive score (ADAS-13) \citep{mohs1997development} includes: word recall task, naming objects and fingers, following commands, constructional praxis, ideational praxis, orientation, word recognition task, remembering test directions, spoken language, comprehension, and word-finding difficulty, delayed word recall and a number cancellation or maze task. A higher ADAS score indicates more severe behavioral deficits. The exposure of interest is the baseline 2D surface data obtained from the left/right hippocampi. The hippocampus surface data were preprocessed from the raw MRI data, where the detailed preprocessing steps are included in Section \ref{image preprocessing} of the supplementary material. After preprocessing, we obtained left and right hippocampal shape representations as two $100\times 150$ matrices. The imaging measurement at each pixel is an absolute metric, representing the radial distance from the pixel to the medial core of the hippocampus. The measurements are in millimeters. In the ADNI data, there are millions of observed covariates that one may need to adjust for, including the whole genome sequencing data from all of the 22 autosomes. We have included detailed genetic preprocessing techniques in Section \ref{genetic preprocessing} of the supplementary material. After preprocessing, $6,087,205$ bi-allelic markers (including SNPs and indels) of $756$ subjects were retained in the data analysis. We excluded those subjects with missing hippocampus shape representations, baseline intracranial volume (ICV) information or ADAS-13 score observed at Month 24, after which $566$ subjects remained. Our aim is to estimate the association between the hippocampal surface exposure and the ADAS-13 score conditional on clinical measures including age, gender and length of education, ICV, diagnosis status, and $6,087,205$ bi-allelic markers. \section{Methodology} \label{Setup} \subsection{Basic set-up} \label{Setup:Notation} Suppose we observe independent and identically distributed samples $ \{L_i=(X_i, \bm{Z}_i, Y_i), 1\leq i\leq n\} $ generated from $ L $, where $ L=(X, \bm{Z}, Y) $ has support $ \mathcal{L}=(\mathcal{X}\times \mathcal{Z}\times \mathcal{Y})$. Here $ \bm{Z}\in \mathcal{Z}\subseteq \mathbb{R}^{p\times q}$ is a 2D-image continuous exposure, $ Y\in \mathcal{Y} $ is a continuous outcome of interest, and $ X\in \mathcal{X}\subseteq \mathbb{R}^{s} $ denotes a vector of ultra-high dimensional genetic (and clinical) covariates, where we assume $ s\gg n$. We are interested in characterizing the association between the 2D exposure $ \bm{Z} $ and outcome $ Y $ conditional on the observed covariates $ X$.
\begin{figure}[htbp] \centering \includegraphics[height=3in,width=4in]{./plotsMainArkSupp/causalv2.png} \caption{Directed acyclic graph showing potential high dimensional confounder and precision variables $X$ (gold), the possible unmeasured confounders $U$ (light yellow), the 2D imaging exposure $Z$ (green), the instrumental variables $X$ (purple) and the outcome of interest $Y$ (blue). The red arrow denotes the association of interest. }\label{fig: DAG} \end{figure} Denote $ X_i=(X_{i1}, \ldots, X_{is})^{\mathrm{\scriptscriptstyle T}} $. Without loss of generality, we assume that $X_{il}$ has been standardized for every $1 \leq l \leq s$, and $\bm{Z}_i$ and $Y_i$ have been centered. To map the GIC pathway, we assume the following linear equation models: \begin{eqnarray} \label{outcomemodel} Y_i &=& \sum_{l=1}^s X_{il}\beta_l+ \langle \bm{Z}_i,\bm{B} \rangle + \epsilon_{i} \quad \textrm{(outcome model)}; \\ \label{treatmentmodel} \bm{Z}_i &=& \sum_{l=1}^s X_{il}*\bm{C}_{l} + \bm{E}_{i} \quad \textrm{(exposure model)}. \end{eqnarray} In \eqref{outcomemodel}, the matrix $ \bm{B}\in \mathbb{R}^{p\times q} $ is the main parameter of interest, representing the association between the 2D imaging treatment $\bm{Z}_i$ and the behavioral outcome $ Y_i $, $\beta_l$ represents the association between the $l$-th observed covariate $X_{il} $ and the behavioral outcome $ Y_i $, and $ \epsilon_i $ and $\bm E_i$ are random errors that may be correlated. The inner product between two matrices is defined as $ \langle\bm{Z}_i, \bm{B} \rangle =\langle \mathrm{vec}(\bm{Z}_i), \mathrm{vec}(\bm{B})\rangle$, where $ \mathrm{vec}(\cdot) $ is a vectorization operator that stacks the columns of a matrix into a vector. Model \eqref{treatmentmodel}, previously introduced in \citet{kong2019l2rm}, specifies the relationship between the 2D imaging exposure and the observed covariates. Here $ \bm{C}_{l} $ is a $ p\times q $ coefficient matrix characterizing the association between the $l$th covariate $X_{il} $ and the 2D imaging exposure $ \bm{Z}_i $, and $ \bm{E}_i $ is a $ p\times q $ matrix of random errors with mean $ 0 $. The symbol ``$*$" denotes element-wise multiplication. Define $\mathcal{M}_1 = \{ 1 \leq l \leq s: \beta_l \neq 0\}$ and $\mathcal{M}_2 = \{ 1 \leq l \leq s : \bm{C}_l \neq \bm{0} \}$, where we assume $|\mathcal{M}_{1}| \ll n$ and $|\mathcal{M}_{2}| \ll n$; here $|\mathcal{M}_1|$ and $|\mathcal{M}_2|$ represent the number of elements in $\mathcal{M}_1$ and $\mathcal{M}_2$ respectively. To estimate $ \bm{B}$, the first step is to perform variable selection in models \eqref{outcomemodel} and \eqref{treatmentmodel}. We group the covariates $ X_{l}$ into four categories. Let $ \mathcal{A} = \{ 1,\ldots, s\}$, and let $\mathcal{C}$ denote the indices of confounders, i.e. variables associated with both the outcome and the exposure; $\mathcal{P}$ the indices of precision variables, i.e. predictors of the outcome but not the exposure; $\mathcal{I}$ the indices of instrumental variables, i.e. covariates that are associated only with the exposure and not directly with the outcome except through the exposure; and $\mathcal{S}$ the indices of irrelevant variables, i.e. covariates related to neither the outcome nor the exposure.
Mathematically speaking, $\mathcal{C}= \{l \in \mathcal{A}| \beta_l \neq 0 \textrm{ and } \bm{C}_l \neq 0\}$, $\mathcal{P}= \{l \in \mathcal{A}| \beta_l \neq 0 \textrm{ and } \bm{C}_l = 0\}$, $\mathcal{I} = \{l \in \mathcal{A}| \beta_l = 0 \textrm{ and } \bm{C}_l \neq 0\}$ and $\mathcal{S} = \{l \in \mathcal{A}| \beta_l = 0 \textrm{ and } \bm{C}_l = 0\}$. The relationships among the different types of $ X $, $ \bm{Z}$ and $ Y $ are shown in Figure \ref{fig: DAG}, where $U$ denotes possible unmeasured confounders. Since we are interested in characterizing the association between $ \bm{Z} $ and $ Y $ conditional on $ X$, further discussion of $U$ will be omitted for the remainder of the paper. When there are no unobserved confounders $U$, the estimate of $ \bm{B}$ has an underlying causal interpretation. In this case, the ideal adjustment set includes all confounders to avoid bias and all precision variables to increase statistical efficiency, while excluding instrumental variables and irrelevant variables \citep{brookhart2006variable,shortreed2017outcome}. Although we are studying the conditional association rather than the causal relationship due to the possible unobserved confounding, our target adjustment set remains the same. In other words, we aim to retain all covariates from $\mathcal{M}_1=\mathcal{C} \cup \mathcal{P}=\{l \in \mathcal{A}| \beta_l \neq 0\}$, while excluding covariates from $\mathcal{I} \cup \mathcal{S}=\{l \in \mathcal{A}| \beta_l=0\}$. \subsection{Naive screening methods} \label{Naive screening methods} To find the nonzero $ \beta_l$'s, a straightforward idea is to consider a penalized estimator obtained from the outcome generating model \eqref{outcomemodel}, where one imposes, say, Lasso penalties on the $ \beta_l $'s. However, this is computationally infeasible in our ADNI data application as the number of baseline covariates $ s $ is over $ 6 $ million. Consequently, it is important to employ a screening procedure \citep[e.g.][]{fan2008sure} to reduce the model size. To find the covariates $X_l$ that are associated with the outcome $Y$ conditional on the exposure $\bm Z$, one might consider a conditional screening procedure for model \eqref{outcomemodel} \citep{barut2016conditional}. Specifically, one can fit the model $ Y_i = X_{il}\beta_l+ \langle \bm{Z}_i, \bm{B} \rangle + \epsilon_{i} $ for each $ 1\leq l\leq s$, obtain marginal estimates of $ \widehat{\beta}^{MZ}_l$'s and then sort the $|\widehat{\beta}^{MZ}_l|$'s for screening. This procedure works well if the exposure variable $ \bm{Z} $ is of low dimension, as one only needs to fit low dimensional ordinary least squares (OLS) $s$ times. However, in our ADNI data application, the imaging exposure $ \bm{Z} $ is of dimension $ pq=15,000 $. As a result, one cannot obtain an OLS estimate since $ n<pq$. Thus, to apply the conditional sure independence screening procedure to our application, one may need to solve a penalized regression problem for each $ 1\leq l\leq s$, such as $\arg\min_{\bm{B},\beta_l} \left[ \frac{1}{2 n} \sum_{i=1}^n \left( Y_i -\langle \bm{Z}_i, \bm{B} \rangle - {X}_{il} \beta_l \right)^2 + P_\lambda (\bm{B}) \right]$, where $P_\lambda (\bm{B})$ is a penalty of $\bm{B}$. In theory, for each $l\in \{1,\ldots, s\},$ one can obtain the estimates $\widehat{\beta}^{MZ}_{l,\lambda} $, and then rank the $|\widehat{\beta}^{MZ}_{l,\lambda}| $'s. However, this is computationally prohibitive in the ADNI data with $ s> 6,000,000 $. First, the penalized regression problem is much slower to solve than OLS.
Second, selection of the tuning parameter $ \lambda $ based on grid search substantially increases the computational burden. Alternatively, one may apply the marginal screening procedure of \cite{fan2008sure} to model \eqref{outcomemodel}. Specifically, one may solve the following marginal OLS on each $ X_{il} $ by ignoring the exposure $ \bm{Z}_i$: $\arg\min_{\beta_l} \left[ \frac{1}{2 n} \sum_{i=1}^n \left( Y_i -{X}_{il} \beta_l \right)^2 \right]$. The marginal OLS estimate has a closed form $ \widehat{\beta}_l^{M}=n^{-1} \sum_{i=1}^n X_{il} Y_i $, and one can rank the $|\widehat{\beta}_l^{M}|$'s for screening. Specifically, the selected sub-model is defined as ${\widehat{\mathcal{M}}_{1}^*} = \{1 \leq l \leq s: |\widehat{\beta}_l^{M} | \geq \gamma_{1,n}\}$, where $ \gamma_{1,n} $ is a threshold. Computationally, it is much faster than conditional screening for model \eqref{outcomemodel}, as we only need to fit a one-dimensional OLS $ s> 6,000,000 $ times. However, this procedure is likely to miss some important confounders. To see this, plugging model \eqref{treatmentmodel} into \eqref{outcomemodel} yields \begin{eqnarray*} Y_i &=& \sum_{l=1}^s X_{il}(\beta_l+\langle \bm{C}_{l},\bm{B} \rangle)+ \langle \bm{E}_{i},\bm{B} \rangle + \epsilon_{i}. \end{eqnarray*} Even in the ideal case when the $X_{il} $'s are orthogonal for $ 1\leq l\leq s$, $ \widehat{\beta}_l^{M} $ is not a good estimate of $ \beta_l $ because of the bias term $\langle \bm{C}_{l},\bm{B} \rangle$. Thus, we may miss some nonzero $ \beta_l $'s in the screening step if $ \beta_l $ and $\langle \bm{C}_{l},\bm{B} \rangle$ are of similar magnitudes but opposite signs. We illustrate this point in Figures \ref{sim1step1n200sigma1} in Section \ref{simulation}, in which case the conventional marginal screening on \eqref{outcomemodel} fails to capture some of the important confounders. \subsection{Joint screening} \label{jointscreening} To overcome the drawbacks of the estimation methods discussed in Section \ref{Naive screening methods}, we develop a joint screening procedure, specifically for our ADNI data application. The procedure is not only computationally efficient, but can also select all the important confounders and precision variables with high probability. The key insight here is that although we are interested in selecting important variables in the outcome generating model, this can be done much more efficiently by incorporating information from the exposure model. Specifically, let $\widehat{\bm{C}}_l^M = n^{-1} \sum_{i=1}^n X_{il} * \bm{Z}_i \in \mathbb{R}^{p \times q}$ be the marginal OLS estimate in model \eqref{treatmentmodel} for $l = 1,\ldots,s$. Following \citet{kong2019l2rm}, the important covariates in model \eqref{treatmentmodel} can be selected by ${\widehat{\mathcal{M}}_{2}} = \{1 \leq l \leq s: \|\widehat{\bm{C}}_l^M\|_{op} \geq \gamma_{2,n}\}$, where $||\cdot||_{op}$ is the operator norm of a matrix and $ \gamma_{2,n} $ is a threshold. We define our joint screening set as $\widehat{\mathcal{M}} = \widehat{\mathcal{M}}_1^* \cup \widehat{\mathcal{M}}_2$. Intuitively, most important confounders and precision variables are contained in the set ${\widehat{\mathcal{M}}_{1}}^*$. The only exceptions are the covariates $X_l$ for which both $\beta_l$ and $\langle \bm{C}_{l},\bm{B} \rangle$ are of similar magnitudes but opposite signs. On the other hand, these $X_l$ will be included in ${\widehat{\mathcal{M}}_{2}}$ and hence in $ \widehat{\mathcal{M}}_{} $, along with instrumental variables with large $ ||\bm{C}_l||_{op} $.
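For illustration, the following Python sketch computes the two marginal statistics and the joint screening set $\widehat{\mathcal{M}} = \widehat{\mathcal{M}}_1^* \cup \widehat{\mathcal{M}}_2$ defined above. It is a conceptual sketch rather than our analysis code: the function name and the in-memory layout are chosen for exposition, and with $ s> 6,000,000 $ covariates one would loop over $l$ (or over blocks of covariates) instead of materializing all $\widehat{\bm{C}}_l^M$ at once.
\begin{verbatim}
import numpy as np

def joint_screening(X, Z, Y, k):
    # X: (n, s) standardized covariates; Y: (n,) centered outcome;
    # Z: (n, p, q) centered 2D imaging exposure; k: per-model set size.
    n, s = X.shape
    # Outcome model: beta_l^M = (1/n) sum_i X_il * Y_i, ranked by |.|.
    beta_m = np.abs(X.T @ Y) / n
    # Exposure model: C_l^M = (1/n) sum_i X_il * Z_i, ranked by the
    # operator norm (largest singular value) of each p x q slice.
    C_m = np.einsum('il,ipq->lpq', X, Z) / n
    C_op = np.linalg.norm(C_m, ord=2, axis=(1, 2))
    M1_star = set(np.argsort(-beta_m)[:k])   # top-k by |beta_l^M|
    M2 = set(np.argsort(-C_op)[:k])          # top-k by ||C_l^M||_op
    return sorted(M1_star | M2)              # M-hat = M1* U M2
\end{verbatim}
Here $k$ would be taken as the smallest integer making the union at least $\lfloor n/\log(n) \rfloor$ in size, as recommended in the next paragraph.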
In Section \ref{Theoretical guarantees} of the supplementary material, we show that with properly chosen $\gamma_{1,n}$ and $\gamma_{2,n}$, the joint screening set includes the confounders and precision variables with high probability: $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}} ) \rightarrow 1$ as $ n\rightarrow \infty$. In practice, we recommend choosing $ \gamma_{1,n}$ and $\gamma_{2,n}$ such that $ |\widehat{\mathcal{M}}_1^*|=|\widehat{\mathcal{M}}_2|=k $, where $ k $ is the smallest integer such that $|\widehat{\mathcal{M}}| \geq \lfloor n/\log(n) \rfloor $. We set them to equal sizes, following the convention that the size of the screening set is determined only by the sample size \citep{fan2008sure}, which is the same for $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$. Depending on the prior knowledge about the sizes and signal strengths of confounding, precision and instrumental variables, the sizes of $|\widehat{\mathcal{M}}_1^*|$ and $|\widehat{\mathcal{M}}_2|$ may be chosen differently. In the simulations and real data analyses, we conduct sensitivity analyses by varying the relative sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$. In general, the set $ \widehat{\mathcal{M}}_{}$ includes not only the confounders and precision variables in $ \mathcal{M}_1=\mathcal{C}\bigcup \mathcal{P}$, but also instrumental variables in $ \mathcal{I}$ and a small subset of the irrelevant variables $\mathcal{S}$. Nevertheless, $ |\widehat{\mathcal{M}}_{}| $ is greatly reduced compared to the number of all observed covariates. This makes it feasible to perform the second-step procedure, a refined penalized estimation of ${\bm B}$ based on the covariates $ \{X_{l}: l\in \widehat{\mathcal{M}}_{}\} $. \subsection{Blockwise joint screening} \label{Blockwise joint screening} Linkage disequilibrium (LD) is a ubiquitous biological phenomenon where genetic variants present a strong blockwise correlation structure \citep{wall2003haplotype}. If all the SNPs of a particular LD block are important but have relatively weak signals, they may be missed by the screening procedure described in Section \ref{jointscreening}. To utilize the structural information of LD blocks to recover those missed SNPs, we develop a modified screening procedure, described below. Suppose that $X=(X_1,\ldots, X_s)^T$ can be divided into $b$ discrete haplotype blocks: regions of high LD that are separated from other haplotype blocks by many historical recombination events \citep{wall2003haplotype}. Let the index sets of the $b$ non-overlapping blocks be $\mathcal{B}_1, \ldots, \mathcal{B}_b$ with $\cup_{j=1}^b \mathcal{B}_j = \{ 1\ldots, s\}$. For $ l = 1,\ldots, s,$ we define $${\beta^{block,M}_{l}} = \sum_{j=1}^b \frac{1(l \in \mathcal{B}_j)}{|\mathcal{B}_{j}|}\sum_{i \in \mathcal{B}_{j}} |{\beta_i^{M}}| \quad \textrm{and} \quad {C^{block,M}_{l}} = \sum_{j=1}^{b} \frac{1(l \in \mathcal{B}_j)}{|\mathcal{B}_{j}|}\sum_{i \in \mathcal{B}_{j}} \|{\bm{C}}_i^M\|_{op}, $$ $$\widehat{\beta}^{block,M}_{l} = \sum_{j=1}^b \frac{1(l \in \mathcal{B}_j)}{|\mathcal{B}_{j}|}\sum_{i \in \mathcal{B}_{j}} |\widehat{\beta}_i^{M}| \quad \textrm{and} \quad \widehat{C}^{block,M}_{l} = \sum_{j=1}^{b} \frac{1(l \in \mathcal{B}_j)}{|\mathcal{B}_{j}|}\sum_{i \in \mathcal{B}_{j}} \|\widehat{\bm{C}}_i^M\|_{op}, $$ where $1(\cdot)$ is the indicator function of an event.
We also define \begin{equation*} {\widehat{\mathcal{M}}_{1}^{block,*}} = \{1 \leq l \leq s: \widehat{\beta}^{block,M}_{l} \geq \gamma_{3,n}\} \quad \textrm{and} \quad {\widehat{\mathcal{M}}_{2}^{block}} = \{1 \leq l \leq s: \widehat{C}_l^{block,M} \geq \gamma_{4,n}\}. \end{equation*} We propose to use the new set $\widehat{\mathcal{M}}^{block} = \widehat{\mathcal{M}}_1^* \cup \widehat{\mathcal{M}}_2 \cup {\widehat{\mathcal{M}}_{1}^{block,*}} \cup \widehat{\mathcal{M}}_{2}^{block}$, rather than $\widehat{\mathcal{M}} = \widehat{\mathcal{M}}_1^* \cup \widehat{\mathcal{M}}_2$, to select important covariates. Intuitively, when $|\beta_{l_1}| > |\beta_{l_2}|$, $X_{l_1}$ is more easily selected than $X_{l_2}$ by $\widehat{\mathcal{M}}_1^*$. However, suppose that $l_1 \in \mathcal{B}_1$ and $l_2 \in \mathcal{B}_2$, with only a small proportion of $X_l$ in $\mathcal{B}_1$ having $|\beta_l|>0$, whereas a large proportion of $X_l$ in $\mathcal{B}_2$ has $|\beta_l|>0$. It may well be the case that ${\beta^{block,M}_{l_1}}< {\beta^{block,M}_{l_2}} $, meaning that $X_{l_2}$ can be selected more easily than $X_{l_1}$ by $\widehat{\mathcal{M}}_1^{block,*}$. In addition, as $\widehat{\beta}_l^{M}$ is not a good estimate of $\beta_l$ due to the bias term $\langle \bm{C}_{l},\bm{B} \rangle$, $\widehat{\beta}^{block,M}_{l}$ is not a good estimate of ${\beta^{block,M}_{l}}$ either. Therefore, some $X_l$ with nonzero ${\beta^{block,M}_{l}}$ may not be included in ${\widehat{\mathcal{M}}_{1}^{block,*}}$. Nevertheless, they will be included in ${\widehat{\mathcal{M}}_{2}^{block}}$ and hence in $\widehat{\mathcal{M}}^{block}$. Theoretically, when $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$ are chosen properly, $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}}^{block} ) \rightarrow 1$ as $ n\rightarrow \infty$; see Theorem \ref{thm1a} in Section \ref{Theoretical guarantees} of the supplementary material. In practice, we recommend choosing $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$, such that $ |\widehat{\mathcal{M}}_1^*|=|\widehat{\mathcal{M}}_2|= |{\widehat{\mathcal{M}}_{1}^{block,*}}| = |{\widehat{\mathcal{M}}_{2}^{block}}|= k $, where $ k $ is the smallest integer such that $|\widehat{\mathcal{M}}^{block}| \geq 2\lfloor n/\log(n) \rfloor $. The threshold here is twice that suggested in Section \ref{jointscreening} since we take the union of two additional sets. \subsection{Second-step estimation} In this step, we aim to estimate $\bm{B}$ by excluding the instrument variables in $ \mathcal{I}$ and irrelevant variables in $\mathcal{S}$ from $ \widehat{\mathcal{M}}$ (or $\widehat{\mathcal{M}}^{block}_{}$) and keeping the other covariates. This can be done by solving the following optimization problem: \begin{equation} \label{min1} \argmin_{\bm{B},\{\beta_l,l \in \widehat{\mathcal{M}}_{}\}} \left[ \frac{1}{2 n} \sum_{i=1}^n \left( Y_i -\langle \bm{Z}_i, \bm{B} \rangle - \sum_{l \in \widehat{\mathcal{M}}_{}} {X}_{il} \beta_l \right)^2 + \lambda_{1} \sum_{l \in \widehat{\mathcal{M}}_{}} |\beta_l| + \lambda_{2} || \bm{B}||_* \right]. \end{equation} Denote by $ (\widehat{\bm{B}}, \widehat{\boldsymbol\beta})$ the solution to the above optimization problem. The Lasso penalty on $\beta_l$ is used to exclude instrumental and irrelevant variables in $\widehat{\mathcal{M}}_{}$, whose corresponding coefficients $ \beta_l$'s are zero.
The nuclear norm penalty $|| \cdot ||_*$, defined as the sum of all the singular values of a matrix, is used to achieve a low-rank estimate of $\bm{B}$; the low-rank assumption is commonly used in the literature when estimating 2D structural coefficients \citep{zhou2014regularized, kong2019l2rm}. For the tuning parameters, we use five-fold cross validation based on a two-dimensional grid search, and select $\lambda_{1}$ and $\lambda_{2}$ using the one-standard-error rule \citep{hastie2009}. \section{ADNI data applications} \label{imgenedatastudy} We use the data obtained from the ADNI study (adni.loni.usc.edu). The data usage acknowledgement is included in Section \ref{usage} of the supplementary material. As described in Section \ref{description}, we include $566$ subjects from the ADNI1 study. The exposure of interest is the baseline 2D hippocampal surface radial distance measures, which can be represented as a $ 100\times 150 $ matrix for each part of the hippocampus. The outcome of interest is the ADAS-13 score observed at Month 24. The average ADAS-13 score is $20.8$ with standard deviation $14.1$. The covariates to adjust for include $6,087,205$ bi-allelic markers as well as clinical covariates, including age, gender and education length, baseline intracranial volume (ICV), and baseline diagnosis status. The average age is $75.5$ years old with standard deviation $6.6$ years, and the average education length is $15.6$ years with standard deviation $2.9$ years. Among all the $566$ subjects, $58.1\%$ were female. The average ICV was $1.28\times 10^6$ $\rm{mm}^3$ with standard deviation $1.35\times 10^5$ $\rm{mm}^3$. There were $175$ ($184$ at Month 24) cognitively normal subjects, $268$ ($157$ at Month 24) patients with mild cognitive impairment (MCI), and $123$ ($225$ at Month 24) patients with AD at baseline. Studies have shown that age and gender are the main risk factors for Alzheimer's disease \citep{vina2010women, guerreiro2015age}, with older people and females more likely to develop the disease. Multiple studies have also shown that the prevalence of dementia is greater among those with low or no education \citep{zhang1990prevalence}. On the other hand, age, gender and length of education have been found to be strongly associated with the hippocampus \citep{van2006hippocampal, jack2000rates, noble2012hippocampal}. Previous studies \citep{sargolzaei2015practical} suggest that the ICV is an important measure that needs to be adjusted for in studies of brain change and AD. In addition, the baseline diagnosis status may help explain the baseline hippocampal shape and the AD status at Month 24. Therefore, we consider age, gender, education length, baseline ICV and baseline diagnosis status as part of the confounders, and adjust for them in our analysis. In addition, we also adjust for population stratification, for which we use the top five principal components of the whole genome data. As both left and right hippocampi have 2D radial distance measures and the two have been found to be asymmetric \citep{pedraza2004asymmetry}, we apply our method to the left and right hippocampi separately. We use the default method \citep{gabriel2002structure} of Haploview \citep{barrett2005haploview} and PLINK \citep{purcell2007plink} to form linkage disequilibrium (LD) blocks. Previous studies report that about 50 genetic variants are associated with AD; see the review in \cite{sims2020multiplex} for details.
This provides support for our assumption that $|\mathcal{M}_{1}| < n$ ($n=566$). On the other hand, a genome-wide association analysis of 19,629 individuals by \cite{zhao2019genome} shows that $57$ genetic variants are associated with the left hippocampal volumes and $54$ are associated with the right hippocampal volumes. This provides support for our assumption that $|\mathcal{M}_{2}| < n$. Therefore, we apply our blockwise joint screening procedure to these SNPs marginally, with the outcome $ Y_i $ and the exposure $ \bm{Z}_i $, for each part of the hippocampus. We choose the thresholds $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$ such that $|\widehat{\mathcal{M}}^{block}|=2 \lfloor n/\log(n) \rfloor=178$. In Table \ref{imgenet1} of the supplementary material, we list the top $20$ SNPs corresponding to the left and right hippocampi, respectively. As suggested by one referee, we plot figures similar to Manhattan plots for $\widehat{\mathcal{M}}^{*}_1$, $\widehat{\mathcal{M}}^{block,*}_1$, $\widehat{\mathcal{M}}_2$ and $\widehat{\mathcal{M}}^{block}_2$ in Figure \ref{manhattanPlots} of the supplementary material, where genomic coordinates are displayed along the x-axis, the y-axis represents the magnitudes of $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$, and the horizontal dashed lines represent the threshold values $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$, respectively. From Table \ref{imgenet1} and Figure \ref{manhattanPlots}, one can see that there are quite a few important SNPs for both hippocampi. For example, the top SNP is rs429358 from the 19th chromosome. This SNP is a C/T single-nucleotide variant (SNV) in the APOE gene. It is also one of the two SNPs that define the well-known APOE alleles, the major genetic risk factor for Alzheimer's disease \citep{kim2009role}. In addition, a large proportion of the SNPs in Table \ref{imgenet1} have been found to be strongly associated with Alzheimer's. These include rs10414043 \citep{du2018fast}, an A/G SNV in the APOC1 gene; rs7256200 \citep{takei2009genetic}, an A/G SNV in the APOC1 gene; rs73052335 \citep{zhou2018identification}, an A/C SNV in the APOC1 gene; rs769449 \citep{chung2014exome}, an A/G SNV in the APOE gene; rs157594 \citep{hao2017mining}, a G/T SNV; rs56131196 \citep{gao2016shared}, an A/G SNV in the APOC1 gene; rs111789331 \citep{gao2016shared}, an A/T SNV; and rs4420638 \citep{coon2007high}, an A/G SNV in the APOC1 gene. Among those SNPs that have been found to be associated with Alzheimer's, some are also directly associated with the hippocampi. For example, \citet{zhou2019analysis} revealed that the SNPs rs10414043, rs73052335 and rs769449 are among the top SNPs that have significant genetic effects on the volumes of both the left and right hippocampi. \cite{guo2019genome} identified the SNP rs56131196 to be associated with hippocampal shape. We then perform our second-step estimation procedure for each part of the hippocampi. Here $ X_{\widehat{\mathcal{M}}}$ collects the SNPs selected in the screening step, the top five principal components of the whole genome data (for population stratification) and the five clinical measures (age, gender, education length, baseline ICV and baseline diagnosis status), and $ {\bm Z} $ denotes the left/right hippocampal surface image matrix.
To visualize the results, we map the estimates $\widehat{\bm B}$ corresponding to each part of the hippocampus onto a representative hippocampal surface and plot them in Figure \ref{dataPlots}(a). We have also plotted the hippocampal subfields \citep{apostolova20063d} in Figure \ref{dataPlots}(b). Here Cornu Ammonis region 1 (CA1), Cornu Ammonis region 2 (CA2) and Cornu Ammonis region 3 (CA3) form a strip of pyramidal neurons within the hippocampus proper. CA1 is the top portion, known as the ``regio superior of Cajal'' \citep{blackstad1970distribution}, which consists of small pyramidal neurons. Within the lower portion (the regio inferior of Cajal), which consists of larger pyramidal neurons, there are a smaller area called CA2 and a larger area called CA3. The cytoarchitecture and connectivity of CA2 and CA3 are different. The subiculum is a pivotal structure of the hippocampal formation, positioned between the entorhinal cortex and the CA1 subfield of the hippocampus proper (for a complete review, see \citealt{dudek2016rediscovering}). From the plots, we can see that all the $15,000$ entries of $\widehat{\bm{B}}$ corresponding to both hippocampi are negative. This implies that the radial distance of each pixel of both hippocampi is negatively associated with the ADAS-13 score, which depicts the severity of behavioral deficits. The subfields with the strongest associations are mostly CA1 and the subiculum. Existing literature \citep{apostolova2010subregional} has found that as Alzheimer's disease progresses, it first affects the CA1 and subiculum subregions and later the CA2 and CA3 subregions. This can partially explain why the shapes of CA1 and the subiculum may have stronger associations with ADAS-13 scores compared to the CA2 and CA3 subregions. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{}[0.45\linewidth] {\includegraphics[height=3in,width=4in]{./plotsMainArkSupp/imgene_ldblocks_VISCODE6score2_CV_hippoRL_Bv9_1se.jpg}} \hfill \subcaptionbox{}[0.45\linewidth] {\includegraphics[height=2.4in,width=2in]{./plotsMainArkSupp/hippocampalsubfield.pdf}} \hfill \caption{Real data results: Panel (a) plots the estimate $\widehat{\bm B}$ corresponding to the left hippocampus (left part) and the right hippocampus (right part). Panel (b) plots the hippocampal subfields.} \label{dataPlots} \end{figure} We examine the effect size of the whole hippocampal shape by evaluating the term $ \langle \bm{Z}_{i}, \widehat{\bm{B}} \rangle $. Specifically, we calculate the proportion of variance explained by the imaging covariates as follows: \begin{eqnarray*} R^2 = \frac{\sum_{i=1}^{n} (y_{i}-\bar{y})^{2} - \sum_{i=1}^{n} (y_{i}-\bar{y} -\langle \bm{Z}_{i}, \widehat{\bm{B}} \rangle )^2}{\sum_{i=1}^{n} (y_{i}-\bar{y})^{2}}. \end{eqnarray*} Our results show that the shapes of the left and right hippocampi account for 5.83\% and 4.71\% of the total variation in behavioral deficits, respectively. Such effect sizes are quite large compared with those for polygenic risk scores in genetics. In addition, we perform a permutation test to assess whether the $R^2$ statistic is significant. In particular, we randomly permute $\{Y_1,\ldots,Y_n\}$, denote the permuted outcomes by $\{Y_1^*,\ldots,Y_n^*\}$, apply our estimation procedure based on $(X_i, \bm{Z}_i, Y_i^*)$, obtain $ \widehat{\bm{B}}^* $, and calculate $(R^2)^*$. We repeat this 1,000 times and obtain $\{(R_{(k)}^2)^*, 1\leq k\leq 1000 \}$, which mimics the null distribution.
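To make the permutation scheme concrete, the following is a minimal sketch in Python, assuming the imaging exposures are stored as an $n \times p \times q$ array; the function \texttt{fit\_B\_hat} is a hypothetical placeholder for the full two-step screening and estimation pipeline, so this is an illustration rather than our exact implementation.
\begin{verbatim}
import numpy as np

def r_squared(y, Z, B_hat):
    # Proportion of variance in y explained by the imaging term <Z_i, B_hat>.
    fitted = np.einsum('ipq,pq->i', Z, B_hat)
    total = np.sum((y - y.mean()) ** 2)
    resid = np.sum((y - y.mean() - fitted) ** 2)
    return (total - resid) / total

def permutation_null(y, Z, X, fit_B_hat, n_perm=1000, seed=0):
    # fit_B_hat(X, Z, y) stands in for the two-step procedure and returns
    # an estimate of B; it is a placeholder, not defined here.
    rng = np.random.default_rng(seed)
    null_r2 = np.empty(n_perm)
    for k in range(n_perm):
        y_star = rng.permutation(y)        # permute the outcomes
        B_star = fit_B_hat(X, Z, y_star)   # refit on the permuted data
        null_r2[k] = r_squared(y_star, Z, B_star)
    return null_r2                         # mimics the null distribution
\end{verbatim}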
Finally, the p-value can be calculated as $\frac{1}{1000}\sum_{k=1}^{1000} 1\{(R_{(k)}^2)^*\geq R^2 \}$. The p-values for both hippocampi are less than $0.001$, suggesting that the $R^2$'s are significantly different from zero. We also conduct a sensitivity analysis by varying the relative sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$ in the joint screening procedure. The estimates $\widehat{\bm B}$ are similar across different choices of $|\widehat{\mathcal{M}}_{1}^{*}|$ and $|\widehat{\mathcal{M}}_{2}|$; see supplementary material Section \ref{Sensitivity analysis} for details. In addition, we repeat our analysis on the $391$ MCI and AD subjects. We find similarly that the radial distances of each pixel of both hippocampi are mostly negatively associated with the ADAS-13 score, and that the subfields with the strongest associations are mostly CA1 and the subiculum; see supplementary material Section \ref{Subgroup analysis ADNI data applications} for details. As suggested by one referee, we have performed the SNP-imaging-outcome mediation analysis proposed by \citet{bi2017genome}; see Section \ref{Results for mediation analyses} in the supplementary material for the detailed procedure. Our analysis provides no evidence for a mediating SNP-imaging-outcome relationship. \section{Simulation studies} \label{simulation} In this section, we perform simulation studies to evaluate the finite sample performance of the proposed method. The dimension of the covariates is set as $s=5000$, and the exposure is a $64\times 64$ matrix. Each $X_i \in \mathbb{R}^s$ is independently generated from $N(\bm{0},\bm\Sigma_x)$, where $\bm\Sigma_x = (\sigma_{x,ll^\prime})$ has an autoregressive structure such that $\sigma_{x,ll^\prime} = \rho_1^{|l - l^\prime|}$ holds for $1 \leq l,l^\prime \leq s$ with $\rho_1 = 0.5$. Define ${\bm B}$ as the $64 \times 64$ image shown in Figure \ref{figureTCross}(a), and ${\bm C}$ as the $64 \times 64$ image shown in Figure \ref{figureTCross}(b). For ${\bm B}$, the black regions of interest (ROIs) are assigned value $0.0408$ and the white ROIs are assigned value $0$. For ${\bm C}$, the black ROIs are assigned value $0.0335$ and the white ROIs are assigned value $0$. Further, we set ${\bm C}_l=v_l \bm{C}$, where $ v_1=-1/3$, $v_2=-1$, $v_3=-3$, $ v_{207}=-3$, $v_{208}=-1$, $v_{209}=-1/3$, and $ v_l=0 $ for $ 4\leq l\leq 206$ and $ 210\leq l \leq s $. We set the parameters $\beta_1=3$, $\beta_2=1$, $\beta_3=1/3$, $ \beta_{104}=3$, $\beta_{105}=1$, $\beta_{106}=1/3$, and $ \beta_l=0$ for $4 \leq l \leq 103$ and $ 107\leq l\leq s$. In this setting, we have $\mathcal{C} = \{1, 2, 3\}$, $\mathcal{P} = \{ 104, 105, 106\}$, $\mathcal{I} =\{ 207, 208, 209\}$ and $\mathcal{S} = \{1, \ldots, 5000\} \backslash \{1,2,3,104,105,106,207,208,209\}$. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{}[0.35\linewidth] {\frame{\includegraphics[scale = 2]{./plotsMainArkSupp/T64.png}}} \subcaptionbox{}[0.35\linewidth] {\frame{\includegraphics[scale = 2]{./plotsMainArkSupp/cross64.png}}} \hfill \caption{Panels (a) and (b) plot ${\bm B}$ and ${\bm C}$ respectively. In Panel (a), the value at each pixel is either 0 (white) or 0.0408 (black).
In Panel (b), the value at each pixel is either 0 (white) or 0.0335 (black).} \label{figureTCross} \end{figure} The random error $\mathrm{vec}(\bm{E}_i)$ is independently generated from $N(\bm{0},\bm\Sigma_e)$, where we set the standard deviations of all elements in $\bm{E}_i$ to be $\sigma_e = 0.2$ and the correlation between $\bm{E}_{i,jk}$ and $\bm{E}_{i,j^\prime k^\prime}$ to be $\rho_2^{|j - j^\prime| + |k-k^\prime|}$ for $1 \leq j,k,j^\prime, k^\prime \leq 64$ with $\rho_2 =0.5$. The random error $\epsilon_{i}$ is generated independently from $N(0,\sigma^2)$, where we consider $ \sigma=1$ or $0.5$. The $Y_i$'s and $ {\bm Z}_i $'s are generated from models \eqref{outcomemodel} and \eqref{treatmentmodel}. We consider three different sample sizes $ n=200, 500$ and $ 1000$. \subsection{Simulation for screening} \label{Simulation for screening} We perform our screening procedure (denoted by ``joint'') and report the coverage proportion of $ \mathcal{M}_1 $, which is defined as $\frac{|\widehat{\mathcal{M}} \cap \mathcal{M}_1|}{ |\mathcal{M}_1|}$, where the size of the selected set $ |\widehat{\mathcal{M}}|$ changes from $ 1 $ to $ 100$. In addition, we report the coverage proportion for each of the confounding and precision variables, i.e., each of the $l$'s in the set ${\mathcal{M}}_1=\{ 1,2,3,104,105,106\}$. All the coverage proportions are averaged over $ 100 $ Monte Carlo runs. To control the size of $\widehat{\mathcal{M}}$, we first set $|\widehat{\mathcal{M}}_1^*|= |\widehat{\mathcal{M}}_2|=1 $ by specifying appropriate $ \gamma_{1,n}$ and $ \gamma_{2,n}$. Then we sequentially add two variables, one to $\widehat{\mathcal{M}}_1^*$ by decreasing $ \gamma_{1,n}$ and one to $ \widehat{\mathcal{M}}_2$ by decreasing $ \gamma_{2,n}$, until $|\widehat{\mathcal{M}}|$ reaches $ 100 $. Note that we always keep $|\widehat{\mathcal{M}}_1^*|= |\widehat{\mathcal{M}}_2|$ in the procedure. We may not obtain all the sizes between $ 1 $ and $ 100 $ because $|\widehat{\mathcal{M}}|$ may increase by two at a time. Therefore, for those sizes that cannot be reached, we use a linear interpolation to estimate the coverage proportion of $ \mathcal{M}_1 $, based on the two closest attained sizes. We compare the proposed joint screening procedure with two competing procedures. The first is an outcome screening procedure that selects the set $\widehat{\mathcal{M}}_1^*$. For a fair comparison, we let $ |\widehat{\mathcal{M}}_1^*| $ range from $ 1 $ to $ 100 $. The second is an intersection screening procedure that selects the set $\widehat{\mathcal{M}}_\cap = \widehat{\mathcal{M}}_1^* \cap \widehat{\mathcal{M}}_2$. We let $ |\widehat{\mathcal{M}}_\cap| $ range from $ 1 $ to $ 100 $, while keeping $|\widehat{\mathcal{M}}_1^*|= |\widehat{\mathcal{M}}_2|$. Similarly, for those sizes that $|\widehat{\mathcal{M}}_\cap| $ cannot reach, we use linear interpolation to estimate the coverage proportions. We plot the results for $(n, s, \sigma) = (200, 5000, 1)$ in Figure \ref{sim1step1n200sigma1}. The remaining results for $(n,s,\sigma) = (200, 5000, 0.5)$, $ (500,5000,1)$, $(500,5000,0.5)$, $(1000,5000,1)$ and $(1000,5000,0.5)$ can be found in Figures \ref{sim1step1n200sigma025} -- \ref{sim1step1n1000sigma025} of the supplementary material.
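For concreteness, the sketch below illustrates how one coverage curve is computed in this simulation. It assumes the two marginal screening statistics have already been computed (the arrays \texttt{stat1} for $|\widehat{\beta}^{M}_{l}|$ and \texttt{stat2} for $\|\widehat{\bm{C}}_l^M\|_{op}$ are hypothetical inputs), and it is a minimal illustration rather than our exact implementation.
\begin{verbatim}
import numpy as np

def coverage_curve(stat1, stat2, M1, max_size=100):
    # stat1, stat2: length-s arrays of precomputed screening statistics;
    # M1: indices of the confounders and precision variables.
    order1 = np.argsort(stat1)[::-1]   # ranking used for M1_hat^*
    order2 = np.argsort(stat2)[::-1]   # ranking used for M2_hat
    sizes, coverage = [], []
    for k in range(1, max_size + 1):   # keep |M1_hat^*| = |M2_hat| = k
        union = set(order1[:k]) | set(order2[:k])
        if len(union) > max_size:
            break
        sizes.append(len(union))
        coverage.append(len(union & set(M1)) / len(M1))
    # |union| can grow by two at a time, so coverage at the skipped
    # sizes is obtained by linear interpolation between attained sizes.
    grid = np.arange(1, max_size + 1)
    return grid, np.interp(grid, sizes, coverage)
\end{verbatim}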
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverageoptB2C1snry9.png}} \caption{Simulation results for the case $(n,s,\sigma) = (200,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to a strong outcome and weak exposure predictor, a medium outcome and medium exposure predictor, and a weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, medium and weak predictors of the outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}} $, while the y-axis denotes the average coverage proportion. The green solid, red dashed and black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n200sigma1} \end{figure} From the plots, one can see that both the ``intersection'' and ``outcome'' screening methods miss the confounder $ X_3 $ with a very high probability, even as the size of the selected set approaches $100$. In contrast, our method can select $ X_3 $ with high probability when $ |\widehat{\mathcal{M}}| $ is relatively small. For the confounders $ X_1$ and $ X_2 $, all three methods perform similarly. For the precision variables, the ``outcome'' method and our ``joint'' method perform similarly in covering these variables, while the ``intersection'' method performs poorly. Combining the results, one can see that our method performs the best, as it selects all the confounders and precision variables with high probabilities. In addition, we find that the coverage proportion of our method increases with the sample size, which validates the sure independence screening property developed in Section \ref{Theoretical guarantees} of the supplementary material. \subsection{Simulation for estimation} \label{Simulation for estimation} In this part, we evaluate the performance of our estimation procedure after the first-step screening. For the size of $ \widehat{\mathcal{M}} $ in the screening step, we set $|\widehat{\mathcal{M}}| = \lfloor n / \log(n) \rfloor$.
We compare the proposed estimate with the oracle estimate, which is calculated by adjusting for the ideal adjustment set including only the confounders and precision variables as $X$, and then estimating $\bm{B}$ using the optimization problem (\ref{min1}) without imposing the $l_{1}$-regularization. We report the mean squared errors (MSEs) for $\bm{\beta}$ and $\bm{B}$, defined as $||{\bm\beta}-\widehat{\bm\beta}||_2^2$ and $ \|{\bm{B}}-\widehat{\bm{B}}\|_F^2$, respectively. Table \ref{sim1t1oracle} summarizes the average MSEs of the proposed and oracle estimates for $\bm\beta$ and $\bm{B}$ over 100 Monte Carlo runs when $n = 200$, $500$ and $1000$. We can see that the MSE decreases as the sample size increases. In terms of the primary parameter of interest $\bm{B}$, the proposed estimate is close to the oracle estimate. \begin{table}[htbp] \centering \caption{Simulation results of the proposed joint screening method and oracle estimates for $ \sigma=1 $ and $ \sigma = 0.5 $, when $n=200$, $500$ and $1000$: the average MSEs for $\bm\beta$ and $ {\bm B}$, and their associated standard errors in the parentheses, are reported. The results are based on 100 Monte Carlo repetitions.} \begin{tabular}{ rcc rcc } $\sigma = 1.0$ & MSE $\bm\beta$ & MSE ${\bm{B}}$ &$\sigma = 0.5$ &MSE $\bm\beta$ & MSE ${\bm{B}}$ \\ \hline \multicolumn{6}{c}{n = 200}\\ Proposed &0.496(0.021)&0.667(0.005) & Proposed &0.276(0.009)&0.528(0.005)\\ Oracle &0.086(0.005)&0.624(0.004)&Oracle &0.021(0.001)&0.501(0.004)\\ \multicolumn{6}{c}{n = 500}\\ Proposed &0.303(0.008)&0.574(0.006)& Proposed &0.191(0.005)&0.345(0.004)\\ Oracle &0.036(0.002)&0.553(0.005)&Oracle &0.006(0.000)&0.340(0.004)\\ \multicolumn{6}{c}{n = 1000}\\ Proposed &0.217(0.004)&0.449(0.004)& Proposed &0.128(0.006)&0.234(0.002)\\ Oracle &0.013(0.001)&0.460(0.005)&Oracle &0.003(0.000)&0.233(0.002)\\ \end{tabular} \label{sim1t1oracle} \end{table} We plot the 2D map of $\widehat{\bm B}$ based on the average of 100 Monte Carlo runs in Figure \ref{sim1Btn1000sigma025}(c). For comparison, we also plot the corresponding average oracle estimate $\widehat{{\bm B}}_{\rm{oracle}}$ in Figure \ref{sim1Btn1000sigma025}(b) and the true ${\bm B}$ in Figure \ref{sim1Btn1000sigma025}(a). One can see that the proposed method recovers the signal pattern reasonably well. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Truth}[0.30\linewidth] {\frame{\includegraphics[scale = 2]{./plotsMainArkSupp/Bt_original.png}}} \subcaptionbox{Oracle }[0.30\linewidth] {\frame{\includegraphics[scale = 2]{./plotsMainArkSupp/Bt_sim1oal_p64_oracle_v2_optNorm2_n1000s5000snry8_1se.png}}} \subcaptionbox{Proposed }[0.30\linewidth] {\frame{\includegraphics[scale = 2]{./plotsMainArkSupp/Bt_sim1oal_p64_v2_optNorm2_n1000s5000snry8_1se.png}}} \caption{Panel (a) plots the true ${\bm B}$ (Truth), Panel (b) plots the average of $\widehat{\bm B}_{\rm{oracle}}$ (Oracle), and Panel (c) plots the average of $\widehat{\bm B}$ (Proposed). Here $(n,s,\sigma) = (1000,5000,0.5)$. The value at each pixel is a gray scale with 0 as white and 0.0408 as black.} \label{sim1Btn1000sigma025} \end{figure} We also report the sensitivity and specificity of the estimates in Section \ref{Sensitivity and specificity analysis of simulation} of the supplementary material.
We have found that although the proposed method may not remove all of the instrumental variables, eliminating even just some of the instruments greatly reduces the MSEs of both $\bm\beta$ and ${\bm{B}}$, compared to the method where we do not impose the $l_{1}$-regularization on $\bm\beta$ in the second-step estimation. \subsection{Screening and estimation using blockwise joint screening} \label{Group Selection Using Linkage Disequilibrium Information main} Linkage disequilibrium (LD) is a ubiquitous biological phenomenon in which genetic variants present a strong blockwise correlation structure \citep{wall2003haplotype}. In Section \ref{Blockwise joint screening}, we propose the blockwise joint screening procedure to appropriately utilize the structural information of LD blocks. The performance of this procedure is illustrated in this section using an adapted simulation based on the settings of \cite{dehman2015performance}. For $i=1,\ldots,n$, $X_i \in \mathbb{R}^s$ is independently generated from an $s$-dimensional multivariate normal distribution $N(\bm{0},\bm\Sigma_x)$, where $\bm\Sigma_x = (\sigma_{x,ll^\prime})$ is block-diagonal. If $l \neq l^\prime$ are in the same block, the covariance $\sigma_{x,ll^\prime} = 0.4$; otherwise $\sigma_{x,ll^\prime} = 0$. The diagonal elements $\sigma_{x,ll}$ are all set to $1$. We then set $X_{ij}$ to $0$, $1$ or $2$ according to whether $X_{ij} < d_1$, $d_1 \leq X_{ij} \leq d_2$, or $X_{ij} > d_2$, where $d_1$ and $d_2$ are thresholds determined to produce a given minor allele frequency (MAF). For instance, choosing $d_1 = \Phi^{-1}(1-6\mathrm{MAF}/5)$ and $d_2 = \Phi^{-1}(1-2\mathrm{MAF}/5)$, where $\Phi$ is the c.d.f. of the standard normal distribution, corresponds to a given fixed MAF. In order to generate more realistic MAF distributions, we simulate the genotypes $X_{ij}$ with the MAF for each $j$ uniformly sampled between 0.05 and 0.5 \citep{dehman2015performance}; a code sketch of this generation scheme is given below. Adapting the simulation setting of Section \ref{Simulation for screening} according to \cite{dehman2015performance}, we set $s=5000$, with $300$ blocks of covariates, the block sizes $2, 4, 6, 12, 24$ and $52$ each replicated $50$ times. We perform 100 Monte Carlo runs, and the ordering of the blocks is drawn at random for each run. The settings for $ {\bm B}$ and $ {\bm C}$ remain the same as before: $ {\bm B} $ is as in Figure \ref{figureTCross}(a), and $ {\bm C} $ is as in Figure \ref{figureTCross}(b). Further, we set $ {\bm C}_l=v_l \bm{C}$, where $ v_1=-1/3$, $v_2=-1$, $v_3=-3$, $ v_{207}=-3$, $v_{208}=-1$, $v_{209}=-1/3$, and $ v_l=0 $ for $ 4\leq l\leq 206$ and $ 210\leq l \leq s $. We set $ \beta_1=3$, $\beta_2=1$, $\beta_3=1/3$, $ \beta_{104}=3$, $\beta_{105}=1$, $\beta_{106}=1/3$, $\beta_{j} = 1/4$ for $j \in \mathcal{P}_{LD}$, and $ \beta_l=0$ otherwise. Here $\mathcal{P}_{LD}$ is a randomly selected block consisting of $K$ consecutive indices from $\{210,211,\ldots,s\}$, where $K \in \{2,4,6,12,24, 52\}$. We have $\mathcal{C} = \{1, 2, 3\}$, $\mathcal{P} = \{ 104, 105, 106\} \cup \mathcal{P}_{LD}$, $\mathcal{I} =\{ 207, 208, 209\}$ and $\mathcal{S} = \{1, \ldots, 5000\} \backslash ( \mathcal{C} \cup \mathcal{P} \cup \mathcal{I})$. The other settings are the same as the ones in Section \ref{Simulation for screening}. We consider three different sample sizes $n= 200$, $500$ and $1000$. We first perform the screening procedure and report the coverage proportion of $\mathcal{M}_1 = \mathcal{C} \cup \mathcal{P}$. We also report the coverage proportion for each of the confounding and precision variables.
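As a minimal sketch of the genotype generation described above (with illustrative function and variable names only, and not our exact implementation):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def simulate_genotypes(n, block_sizes, rho=0.4, seed=0):
    # Latent X ~ N(0, Sigma_x) with block-diagonal Sigma_x: within-block
    # covariance rho, unit variances.  Each column is then thresholded
    # into genotypes {0, 1, 2} at cutoffs d1 < d2 determined by a
    # per-SNP MAF drawn uniformly on (0.05, 0.5).
    rng = np.random.default_rng(seed)
    cols = []
    for b in block_sizes:
        Sigma = np.full((b, b), rho) + (1.0 - rho) * np.eye(b)
        cols.append(rng.multivariate_normal(np.zeros(b), Sigma, size=n))
    X = np.hstack(cols)
    maf = rng.uniform(0.05, 0.5, size=X.shape[1])
    d1 = norm.ppf(1 - 6 * maf / 5)     # lower cutoff per SNP
    d2 = norm.ppf(1 - 2 * maf / 5)     # upper cutoff per SNP
    return np.where(X < d1, 0, np.where(X <= d2, 1, 2))
\end{verbatim}
For instance, \texttt{simulate\_genotypes(200, [2, 4, 6, 12, 24, 52] * 50)} would mimic the $s = 5000$ design above, up to the random ordering of the blocks.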
The screening results for $s=5000$, $\sigma=1$, $n=200,500,1000$, and $K =2,4,6,12,24,52$ can be found in Figures \ref{sim3step1n200sizesig2sigma1} -- \ref{sim3step1n1000sizesig52sigma1} of the supplementary material. From the plots, one can see that the blockwise joint screening method (blue dotted line) selects $\mathcal{P}_{LD}$ and $\mathcal{M}_1$ with higher probability compared with the original joint screening method (green solid line). Based on these results, the blockwise joint screening method can better utilize precision variables with block structures to select $\mathcal{M}_1$. In addition, we evaluate the performance of the two proposed estimation procedures after the first-step screening. For the sizes of $ \widehat{\mathcal{M}} $ and $ \widehat{\mathcal{M}}^{block} $ in the screening step, we set $|\widehat{\mathcal{M}}| = \lfloor n / \log(n) \rfloor$ for the original joint screening procedure, and $|\widehat{\mathcal{M}}^{block}| = 2 \lfloor n / \log(n) \rfloor$ for the blockwise joint screening procedure. We report the average MSEs for $\bm{\beta}$ and $\bm{B}$ when $(n, s, \sigma) = (200,5000, 1)$ in Table \ref{sim1t2main}. The complete results for $s=5000$, $\sigma=1$, $n=200, 500, 1000$, and $K =2,4,6,12,24,52$ can be found in Table \ref{sim1t2} of the supplementary material. In summary, the blockwise joint screening estimate outperforms the original joint screening estimate when the sample size $n$ is small or the block size $K$ of the precision variables is large. For the remaining scenarios, there are no significant differences between the two methods. \begin{table}[htbp] \centering \caption{Simulation results for $ (n,s,\sigma) =(200,5000, 1) $: the average MSEs for $\bm\beta$ and $ {\bm B}$, and their associated standard errors in the parentheses, are reported. The left panel summarizes the results from the joint screening method; the right panel summarizes the results from the blockwise joint screening method. The results are based on 100 Monte Carlo repetitions. } \begin{tabular}{ crr | crr } Proposed & MSE $\bm\beta$ & MSE ${\bm{B}}$ & Proposed (block) &MSE $\bm\beta$ & MSE ${\bm{B}}$ \\ \hline K=2&1.423(0.096)&0.785(0.009)&K=2&1.390(0.090)&0.793(0.010)\\ K=4&1.667(0.096)&0.815(0.011)&K=4&1.548(0.088)&0.805(0.010)\\ K=6&1.955(0.101)&0.826(0.010)&K=6&1.701(0.084)&0.816(0.009)\\ K=12&2.466(0.096)&0.890(0.039)&K=12&2.223(0.129)&0.838(0.011)\\ K=24&2.533(0.164)&0.847(0.014)&K=24&2.136(0.138)&0.821(0.010)\\ K=52&14.650(0.815)&2.034(0.487)&K=52&13.693(0.728)&1.870(0.459)\\ \end{tabular} \label{sim1t2main} \end{table} In the supplementary material, we assess the variable screening results for various sparsity levels of instrumental variables in Section \ref{Screening under different sparsity levels}, evaluate the performance of our estimation procedure for different covariances of exposure errors in Section \ref{Screening under different covariance structures of exposure errors}, and assess the sensitivity of the choices of the sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$ in Section \ref{Screening and estimation under different sizes}. \section{Discussion} \label{sec:discussion} This paper aims to map the complex GIC pathway for AD.
The unique features of the hippocampal morphometry surface measure data motivate us to develop a computationally efficient two-step screening and estimation procedure, which can select biomarkers among more than $6$ million observed covariates and estimate the conditional association simultaneously. If there were no unmeasured confounding, the conditional association we estimate would correspond to the causal effect. This is, however, not the case in the ADNI study because we have unmeasured confounders such as A$\beta$ and tau protein levels. To control for unmeasured confounding and estimate the causal effect, one possible approach is to use the genetic variants as potential instrumental variables \citep[e.g.][]{lin2015regularization}. There are a number of other important directions for future work. Firstly, the vast majority of AD, known as ``polygenic AD'', is influenced by the actions and interactions of multiple genetic variants simultaneously, likely in concert with non-genetic factors, such as environmental exposures and lifestyle choices among many others \citep{bertram2020genomic}. Therefore, various types of interaction effects, such as genetic-genetic and imaging-genetic, could be incorporated into the outcome generating model (\ref{outcomemodel}). However, this may significantly increase the computational burden, as the number of genetics-related covariates will increase from $6,087,205$ to more than $ 90$ billion if we add all the possible imaging-genetics interaction terms. One may consider interaction screening procedures \citep{hao2014interaction} as the first step. Secondly, this study simply removes observations with missingness. Accommodating missing exposures, confounders and outcomes under the proposed model framework is of great practical value and worth further investigation. Thirdly, baseline diagnosis status is an important effect modifier, as the effect of hippocampal shape on behavioral measures can differ across the CN/MCI/AD groups. However, the relatively small sample size in the ADNI study does not allow us to conduct a reliable subgroup analysis. The subgroup analyses are pertinent for further exploration when a larger sample size is available. Fourthly, in the ADNI dataset, there are longitudinal ADAS-13 scores observed at different months, as well as other longitudinal behavioral scores obtained from the Mini-Mental State Examination and the Rey Auditory Verbal Learning Test, which can provide a more comprehensive characterization of the behavioral deficits. Integrating these different scores as a multivariate longitudinal outcome to improve the estimation of the conditional association requires substantial effort for future research. Lastly, one could consider incorporating information from other brain regions. For instance, entorhinal tau may act on episodic memory decline through other brain regions, such as the medial temporal lobe \citep{maass2018entorhinal}.
\section*{Supplementary Material} \label{supplements} Supplementary material available online contains detailed derivations and explanations of the main algorithm, the ADNI data usage acknowledgement, image and genetic data preprocessing steps, screening results and sensitivity analyses of the ADNI data application with a subgroup analysis including only MCI and AD patients, the detailed procedure and results for the SNP-imaging-outcome mediation analyses, additional simulation results, theoretical properties of the proposed procedure including the main theorems, assumptions needed for our main theorems, and proofs of auxiliary lemmas and main theorems. \section*{Acknowledgement} The authors thank the editor, associate editor and referees for their constructive comments, which have substantially improved the paper. Yu was partially supported by the Canadian Statistical Sciences Institute (CANSSI) postdoctoral fellowship and the start-up fund of the University of Texas at Arlington. Wang and Kong were partially supported by the Natural Sciences and Engineering Research Council of Canada and the CANSSI Collaborative Research Team Grant. Zhu was partially supported by the NIH-R01-MH116527. \newpage \if00 {\centering \title{\bf \LARGE Supplementary Material for \\``Mapping the Genetic-Imaging-Clinical Pathway with Applications to Alzheimer's Disease''} \maketitle \begin{center} \author{\large Dengdeng Yu \\ \vspace{10pt} Department of Mathematics, University of Texas at Arlington}\\ \vspace{10pt} \author{\large Linbo Wang \\ \vspace{10pt} Department of Statistical Sciences, University of Toronto}\\ \vspace{10pt} \author{\large Dehan Kong \\ \vspace{10pt} Department of Statistical Sciences, University of Toronto}\\ \vspace{10pt} \author{\large Hongtu Zhu \\ \vspace{10pt} Department of Biostatistics, University of North Carolina, Chapel Hill }\\ \vspace{10pt} {\large for the Alzheimer's Disease Neuroimaging Initiative \footnote[1]{ Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: \url{http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf}.} } \end{center} \newpage } \fi \if10 { \title{\bf Supplementary Material for ``Mapping the Genetic-Imaging-Clinical Pathway with Applications to Alzheimer's Disease''} \maketitle } \fi \spacingset{1.7} The supplementary file is organized as follows. The detailed description of the main algorithm is included in Section \ref{commentforalgorithm}. We include the ADNI data usage acknowledgement in Section \ref{usage} and the image and genetic data preprocessing steps in Section \ref{image genetic preprocessing}. In Section \ref{Screening results of ADNI data applications}, we list the screening results of the ADNI data applications. Section \ref{Sensitivity analysis} examines the sensitivity of the estimate $\widehat{\bm B}$ from the ADNI data application by varying the relative sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$. Section \ref{Subgroup analysis ADNI data applications} includes a subgroup analysis including only the mild cognitive impairment (MCI) and Alzheimer's disease (AD) patients.
We include the detailed SNP-imaging-outcome mediation analysis procedure and results in Section \ref{Results for mediation analyses}. In Section \ref{addsimulation}, we list additional simulation results. The theoretical properties of our procedure, including the main theorems, are included in Section \ref{Theoretical guarantees}. We state the assumptions needed for the main theorems in Section \ref{assumption}. In Section \ref{auxlemma}, we include the auxiliary lemmas needed for the theorems and their proofs. We give the detailed proofs of our main theorems in Section \ref{proofthm}. \section{Description and derivation of Algorithm 1}\label{commentforalgorithm} To solve the minimization problem (\ref{min1}) of the main paper, we utilize the Nesterov optimal gradient method \citep{nesterov1998introductory}, which has been widely used in solving optimization problems with non-smooth objective functions \citep{beck2009fast, zhou2014regularized}. Before we introduce Nesterov's gradient algorithm, we first state two propositions on shrinkage thresholding formulas \citep{beck2009fast, cai2010singular}. \begin{prop} \label{prop1} For a given matrix $\bm{A}$ with singular value decomposition $\bm{A} = \bm{U} \mathrm{diag}(\bm{a}) \bm{V}^{\mathrm{\scriptscriptstyle T}}$, where $\bm{a}=(a_1, \ldots, a_r)^{\mathrm{\scriptscriptstyle T}} $ with $\{a_k\}_{1 \leq k \leq r}$ being $\bm{A}$'s singular values, the optimal solution to \[ \min_{\bm{B}} \left\{ \frac{1}{2} \| \bm{B} - \bm{A} \|_F^2 + \lambda \| \bm{B}\|_*\right\} \] shares the same singular vectors as $\bm{A}$, and its singular values are $b_k = (a_k - \lambda)_+$ for $k =1,\ldots, r$, where $ (a_k)_+=\max(0, a_k) $. \end{prop} \begin{prop} \label{prop2} For vectors $\bm{a}=(a_1, \ldots, a_r)^{\mathrm{\scriptscriptstyle T}} $ and $\bm{b}=(b_1, \ldots, b_r)^{\mathrm{\scriptscriptstyle T}}$, the optimal solution to \[ \min_{\bm b} \left\{ \frac{1}{2}\| \bm{b} - \bm{a} \|_2^2 + \lambda \| \bm b \|_1\right\} \] is $b_k = \textrm{sgn} (a_k) (|a_k| - \lambda)_+$ for $k =1,\ldots, r$, where $ \textrm{sgn}(\cdot) $ denotes the sign function. \end{prop} Nesterov's gradient method utilizes the first-order gradient of the objective function to produce the next iterate based on the current search point. Different from the standard gradient descent algorithm, Nesterov's algorithm uses the two previous iterates to generate the next search point by extrapolation, which can dramatically improve the convergence rate. Nesterov's gradient algorithm for problem (\ref{min1}) is presented as follows. Denote $l(\bm\beta, \bm B) = \frac{1}{2 n}\sum_{i=1}^n\left( Y_i -\langle \bm{\beta}, X_i\rangle- \langle \bm{Z}_i, \bm{B}\rangle\right)^2$ and $P(\bm{\beta},\bm{B}) = P_1(\bm{\beta}) + P_2(\bm{B})$, where $ P_1(\bm{\beta}) = \lambda_{1} \sum_l |\beta_l| $ and $P_2(\bm{B})= \lambda_{2} || \bm{B}||_*$.
We also define \begin{eqnarray*} g(\bm\beta,\bm{B}| \bm{s}^{(t)},\bm{S}^{(t)} , \delta) &=& l(\bm{s}^{(t)},\bm{S}^{(t)}) + \left\langle \nabla l(\bm{s}^{(t)}, \bm{S}^{(t)}),\left[ (\bm\beta-\bm{s}^{(t)})^{\mathrm{\scriptscriptstyle T}}, \{\mathrm{vec}(\bm{B} - \bm{S}^{(t)})\}^{\mathrm{\scriptscriptstyle T}} \right]^{\mathrm{\scriptscriptstyle T}} \right\rangle\\ & &+ (2 \delta)^{-1}\left( \left\|\bm\beta-\bm{s}^{(t)} \right\|_2^2+ \left\| \bm{B} - \bm{S}^{(t)}\right\|_F^2 \right)+ P(\bm\beta,\bm{B})\\ &=& (2 \delta)^{-1}\left[ \left\|\bm\beta - \left\{ \bm{s}^{(t)} - \delta \partial_{\bm{\beta}} l(\bm{s}^{(t)},\bm{S}^{(t)}) \right\} \right\|_2^2 \right.\\ & & +\left. \left\| \mathrm{vec}(\bm{B}) - \left\{ \mathrm{vec}(\bm{S}^{(t)}) - \delta \partial_{\mathrm{vec}(\bm{B})} l(\bm{s}^{(t)}, \bm{S}^{(t)}) \right\} \right\|_2^2 \right] \\ & &+ P(\bm\beta,\bm{B}) + c^{(t)}, \end{eqnarray*} where $\nabla l(\bm{\beta},\bm{B}) = [(\partial_{\bm\beta}l)^{\mathrm{\scriptscriptstyle T}}, \{ \partial_{\mathrm{vec}(\bm{B})}l\}^{\mathrm{\scriptscriptstyle T}}]^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|+pq}$ denotes the first-order gradient of $l(\bm\beta,\bm{B})$ with respect to $\left[ \bm{\beta}^{\mathrm{\scriptscriptstyle T}},\{\mathrm{vec}(\bm{B})\}^{\mathrm{\scriptscriptstyle T}}\right]^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|+pq}$. We define \begin{eqnarray*} \frac{\partial}{\partial \bm\beta}l(\bm{\beta},\bm{B}) &=& n^{-1} \sum_{i=1}^{n} X_i \left(\langle \bm{\beta}, X_i\rangle + \langle \bm{B}, \bm{Z}_i\rangle - Y_i\right),\\ \frac{\partial}{\partial \mathrm{vec}(\bm{B})}l(\bm{\beta},\bm{B}) &=& \mathrm{vec} \left\{ n^{-1} \sum_{i=1}^{n} \bm{Z}_i \left(\langle \bm{\beta}, X_i\rangle + \langle \bm{B}, \bm{Z}_i\rangle - Y_i\right) \right\}, \end{eqnarray*} with $\partial_{\bm\beta}l(\bm{\beta},\bm{B}) \in \mathbb{R}^{|\widehat{\mathcal{M}}|}$, $\partial_{\mathrm{vec}(\bm{B})}l(\bm{\beta},\bm{B}) \in \mathbb{R}^{pq}$. Here $\bm{s}^{(t)}$ and $\bm{S}^{(t)}$ are interpolations between $\bm\beta^{(t-1)}$ and $\bm\beta^{(t)}$, and between $\bm{B}^{(t-1)}$ and $\bm{B}^{(t)}$, respectively, which will be defined below; $c^{(t)}$ denotes all terms that are irrelevant to $(\bm\beta, \bm{B})$, and $\delta >0$ is a suitable step size. Given the current search points $\bm{s}^{(t)}$ and $\bm{S}^{(t)}$, the next iterates $\bm\beta^{(t+1)}$ and $\bm{B}^{(t+1)}$ are obtained as the minimizer of $g(\bm\beta,\bm B| \bm{s}^{(t)},\bm{S}^{(t)} , \delta)$. The search points $\bm{s}^{(t)}$ and $\bm{S}^{(t)}$ themselves are generated by linearly extrapolating the two previous algorithmic iterates. A key advantage of using the Nesterov gradient method is that it has an explicit solution at each iteration. In fact, minimizing $g(\bm\beta,\bm B| \bm{s}^{(t)},\bm{S}^{(t)} , \delta) $ can be divided into two sub-problems, minimizing $(2 \delta)^{-1} \left\|\bm\beta - \left( \bm{s}^{(t)} - \right. \right.$ $\left. \left. \delta \partial_{\bm\beta} l(\bm{s}^{(t)},\bm{S}^{(t)}) \right) \right\|_2^2 + \lambda_{1} \sum_l |\beta_l| $ and $(2 \delta)^{-1}\left\| \mathrm{vec}(\bm{B}) - \left\{ \mathrm{vec}(\bm{S}^{(t)}) - \right. \right.$ $ \left. \left. \delta \partial_{\mathrm{vec}(\bm{B})} l(\bm{s}^{(t)}, \bm{S}^{(t)}) \right\} \right\|_2^2 + \lambda_{2} || \bm{B}||_*$, respectively. These sub-problems can be solved by the shrinkage thresholding formulas in Propositions \ref{prop2} and \ref{prop1}, respectively.
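Assuming standard numpy conventions, the two shrinkage updates can be sketched as follows, where \texttt{soft\_threshold} corresponds to Proposition \ref{prop2} and \texttt{svt} to the singular value thresholding of Proposition \ref{prop1}; this is an illustrative sketch rather than our exact implementation.
\begin{verbatim}
import numpy as np

def soft_threshold(a, lam):
    # Proposition 2: componentwise soft thresholding,
    # b_k = sgn(a_k) * max(|a_k| - lam, 0).
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def svt(A, lam):
    # Proposition 1: soft-threshold the singular values of A while
    # keeping its singular vectors (singular value thresholding).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
\end{verbatim}
In Algorithm \ref{algorithm:solve8} below, these two operators are applied to the gradient steps $\bm{\beta}_{\textrm{temp}}$ and $\bm{B}_{\textrm{temp}}$, with $\lambda$ taken to be $\lambda_1 \delta$ and $\lambda_2 \delta$, respectively.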
Let $ \bm{X}^{\widehat{\mathcal{M}}}=(X^{\widehat{\mathcal{M}}}_1, \ldots, X^{\widehat{\mathcal{M}}}_n)^{\mathrm{\scriptscriptstyle T}}\in \mathbb{R}^{n\times|\widehat{\mathcal{M}}|} $, where $X^{\widehat{\mathcal{M}}}_i$ is $\{ X_{ij}\}^{\mathrm{\scriptscriptstyle T}}_{j \in \widehat{\mathcal{M}}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|}$ for $i = 1, \ldots, n$. Define $\bm{Z}_{new} = (\mathrm{vec}(\bm{Z}_1),\ldots,\mathrm{vec}(\bm{Z}_n))^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{n \times pq}$ and $\bm{X}_{new} = (\bm{X}^{\widehat{\mathcal{M}}},\bm{Z}_{new}) \in \mathbb{R}^{n \times (|\widehat{\mathcal{M}}|+pq)}$. For a given vector $\bm{a} = (a_1,\ldots, a_r)^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{r}$, $(\bm{a})_+$ is defined as $\{(a_1)_+,\ldots, (a_r)_+\}^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{r}$, where $ (a)_+=\max(0, a) $. Similarly, $\textrm{sgn}(\bm{a})$ is obtained by taking the sign of $\bm{a}$ componentwise. For a given pair of tuning parameters $\lambda_{1}$ and $\lambda_{2}$, \eqref{min1} can be solved by Algorithm \ref{algorithm:solve8}. \begin{algorithm}[!htb] \caption{ Shrinkage thresholding algorithm to solve \eqref{min1}} \label{algorithm:solve8} \begin{enumerate} \item Initialize: $\bm{\beta}^{(0)} = \bm{\beta}^{(1)}$, $\bm{B}^{(0)} = \bm{B}^{(1)}$, $\alpha^{(0)} = 0$ and $\alpha^{(1)} = 1$,\\ $\delta = n/\lambda_{\textrm{max}}( \bm{X}_{new}^{\mathrm{\scriptscriptstyle T}} \bm{X}_{new})$. \item Repeat (a) to (f) until the objective function $Q(\bm{\beta},\bm{B})$ converges:\\ \begin{enumerate} \item $\bm{s}^{(t)} = \bm{\beta}^{(t)} + \frac{\alpha^{(t-1)}-1}{\alpha^{(t)}} (\bm{\beta}^{(t)} - \bm{\beta}^{(t-1)})$,\\ $\bm{S}^{(t)} = \bm{B}^{(t)} + \frac{\alpha^{(t-1)}-1}{\alpha^{(t)}} (\bm{B}^{(t)} - \bm{B}^{(t-1)})$; \item $\bm{\beta}_{\textrm{temp}} = \bm{s}^{(t)} - \delta \frac{\partial l(\bm{s}^{(t)},\bm{S}^{(t)})}{\partial \bm{\beta}} $;\\ $\mathrm{vec}(\bm{B}_{\textrm{temp}}) = \mathrm{vec}(\bm{S}^{(t)}) - \delta \frac{\partial l(\bm{s}^{(t)},\bm{S}^{(t)})}{\partial \mathrm{vec}(\bm{B})} $; \item Singular value decomposition: ${\bm{B}}_{\textrm{temp}} = \bm{U} \mathrm{diag}(\bm{b}) \bm{V}^{\mathrm{\scriptscriptstyle T}}$; \item $\bm{a}_{new} = \textrm{sgn}(\bm{\beta}_{\textrm{temp}})\cdot (|\bm{\beta}_{\textrm{temp}}| - \lambda_1 \delta\cdot \bm{1})_{+}$,\\ $\bm{b}_{new} = (\bm{b} - \lambda_2 \delta \cdot \bm{1})_{+}$;\\ \item $\bm{\beta}^{(t+1)} = \bm{a}_{new}$, \\ $\bm{B}^{(t+1)} = \bm{U} \mathrm{diag}(\bm{b}_{new}) \bm{V}^{\mathrm{\scriptscriptstyle T}}$;\\ \item $\alpha^{(t+1)} = \left[1 + \sqrt{1+(2 \alpha^{(t)})^2}\right]/2$.\\ \end{enumerate} \end{enumerate} \end{algorithm} In particular, step 2(a) predicts the search points $\bm{s}^{(t)}$ and $\bm{S}^{(t)}$ by linear extrapolations from the previous two iterates, where $\alpha^{(t)}$ is a scalar sequence that plays a critical role in the extrapolation. This sequence is updated in step 2(f) as in the original Nesterov method. Next, steps 2(b) -- 2(d) perform gradient descent from the current search points to obtain the optimal solutions at the current iteration. Specifically, the gradient descent is based on minimizing $g(\bm\beta,\bm{B}|$ $ \bm{s}^{(t)},\bm{S}^{(t)} , \delta)$, the first-order approximation to the loss function, at the current search points $\bm{s}^{(t)}$ and $\bm{S}^{(t)}$. This minimization problem is tackled by solving the two sub-problems with the shrinkage thresholding formulas in Propositions \ref{prop2} and \ref{prop1}, as mentioned above.
Finally, step 2(e) sets the solutions of these two sub-problems as the next iterates $\bm\beta^{(t+1)}$ and $\bm{B}^{(t+1)}$. A sufficient condition for the convergence of $\{ \bm\beta^{(t)} \}_{t \geq 1}$ and $\{ \bm{B}^{(t)} \}_{t \geq 1}$ is that the step size $\delta$ should be smaller than or equal to $1/{L_f}$, where $L_f$ is the smallest Lipschitz constant of the gradient of $l(\bm{\beta},\bm{B})$ \citep{beck2009fast}. In our case, $L_f$ is equal to $\lambda_{\textrm{max}}( \bm{X}_{new}^{\mathrm{\scriptscriptstyle T}} \bm{X}_{new})/n$, where $ \lambda_{\textrm{max}}(\cdot) $ denotes the largest eigenvalue of a matrix. \section{Data usage acknowledgement}\label{usage} Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment and early Alzheimer's disease. For up-to-date information, see www.adni-info.org. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research \& Development, LLC.; Johnson \& Johnson Pharmaceutical Research \& Development LLC.; Lumosity; Lundbeck; Merck \& Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. \section{Image and genetic data preprocessing} \label{image genetic preprocessing} \subsection{Image data preprocessing} \label{image preprocessing} The hippocampus surface data were preprocessed from the raw MRI data, which were collected across a variety of 1.5 Tesla MRI scanners with protocols individualized for each scanner. Standard T1-weighted images were obtained by using volumetric 3-dimensional sagittal MPRAGE or equivalent protocols with varying resolutions.
The typical protocol includes: inversion time (TI) = 1000 ms, flip angle = 8$^\circ$, repetition time (TR) = 2400 ms, and field of view (FOV) = 24 cm with a $256\times 256\times 170$ acquisition matrix in the $x$-, $y$-, and $z$-dimensions, yielding a voxel size of $1.25\times 1.26\times 1.2$ mm$^3$. We adopted a surface fluid registration-based hippocampal subregional analysis package \citep{shi:nimg13}, which uses isothermal coordinates and fluid registration to generate one-to-one hippocampal surface registration for surface statistics computation. It introduces two cuts on a hippocampal surface to convert it into a genus zero surface with two open boundaries. The locations of the two cuts are at the front and back of the hippocampal surface. By using conformal parameterization, it essentially converts a 3D surface registration problem into a 2D image registration problem. The flow induced in the parameter domain establishes high-order correspondences between 3D surfaces. Finally, the radial distance is computed on the registered surface. This software package and associated image processing methods have been adopted and described in \citet{Wang2011}. \subsection{Genetic data preprocessing} \label{genetic preprocessing} For the genetic data, we applied the following preprocessing techniques to the $756$ subjects in the ADNI1 study. The first-line quality control steps include (i) call rate check per subject and per single nucleotide polymorphism (SNP) marker, (ii) gender check, (iii) sibling pair identification, (iv) the Hardy-Weinberg equilibrium test, (v) marker removal by the minor allele frequency, and (vi) population stratification. The second-line preprocessing steps include removal of SNPs with (i) more than 5$\%$ missing values, (ii) minor allele frequency smaller than 10$\%$, and (iii) Hardy-Weinberg equilibrium $p$-value $< 10^{-6}$. The 503,892 SNPs obtained from the 22 autosomes were included for further processing. The MACH-Admix software (http://www.unc.edu/~yunmli/MaCH-Admix/) \citep{LiuLi2013} was applied to all the subjects to perform genotype imputation, using the 1000G Phase I Integrated Release Version 3 haplotypes (http://www.1000genomes.org) \citep{GPC1000} as the reference panel. Quality control was also conducted after imputation, excluding markers with (i) low imputation accuracy (based on the imputation output $R^2$), (ii) Hardy-Weinberg equilibrium $p$-value $< 10^{-6}$, and (iii) minor allele frequency $<5\%$. \section{Screening results of ADNI data applications} \label{Screening results of ADNI data applications} In Table \ref{imgenet1}, we list the top $20$ SNPs selected through the blockwise joint screening procedure corresponding to the left and right hippocampi, respectively.
\begin{table}[htbp] \centering \begin{tabular}{ c c | c c} \multicolumn{2}{c}{Left hippocampi} & \multicolumn{2}{c}{Right hippocampi} \\ \hline Chromosome number & SNP name & Chromosome number & SNP name \\ \hline 19 & rs429358 & 19 & rs429358\\ 7 & rs1016394 & 19 & rs10414043\\ 19 & rs10414043 & 14 & 14:25618120:G\_GC\\ 7 & rs1181947 & 19 & rs7256200\\ 19 & rs7256200 & 14 & rs41470748\\ 22 & rs134828 & 19 & rs73052335\\ 19 & rs73052335 & 14 & 14:25613747:G\_GT\\ 7 & 7:101403195:C\_CA & 19 & rs157594\\ 19 & rs157594 & 14 & rs72684825\\ 13 & rs12864178 & 19 & rs769449\\ 19 & rs769449 & 6 & rs9386934\\ 2 & rs13030626 & 6 & rs9374191\\ 19 & rs56131196 & 19 & rs56131196\\ 2 & rs13030634 & 6 & rs9372261\\ 19 & rs4420638 & 19 & rs4420638\\ 2 & rs11694935 & 6 & rs73526504\\ 19 & rs111789331 & 19 & rs111789331\\ 2 & rs11696076 & 14 & rs187421061\\ 19 & rs66626994 & 19 & rs66626994\\ 2 & rs11692218 & 13 & rs342709\\ \end{tabular} \caption{The top 20 SNPs selected through the blockwise joint screening procedure. The left two columns correspond to results from the left hippocampus, and the right two columns correspond to results from the right hippocampus.} \label{imgenet1} \end{table} We plot figures similar to Manhattan plots for $\widehat{\mathcal{M}}^{*}_1$, $\widehat{\mathcal{M}}^{block,*}_1$, $\widehat{\mathcal{M}}_2$ and $\widehat{\mathcal{M}}^{block}_2$ in Figure \ref{manhattanPlots}. Unlike conventional Manhattan plots, in which genomic coordinates are displayed along the x-axis and the negative logarithm of the association p-value for each SNP is displayed on the y-axis, our analysis does not produce p-values. In these figures, the y-axis therefore represents the magnitudes of $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$, and the horizontal dashed lines represent the threshold values $\gamma_{1,n}$ (Panel (a)), $\gamma_{2,n}$ (Panel (b)), $\gamma_{3,n}$ (Panel (c)) and $\gamma_{4,n}$ (Panel (d)). In Panels (c) and (d), the left and right figures represent the left and right hippocampi, respectively. The SNPs with $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$ greater than or equal to $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$, and hence selected by $\widehat{\mathcal{M}}^{*}_1$, $\widehat{\mathcal{M}}^{block,*}_1$, $\widehat{\mathcal{M}}_2$ and $\widehat{\mathcal{M}}^{block}_2$ respectively, are highlighted with red diamond symbols.
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{$| \widehat{\beta}^{M}_{l} |$}[0.45\linewidth] {\includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_absb_hippo1.png}} \hfill \subcaptionbox{$ \widehat{\beta}^{block,M}_{l} $}[0.45\linewidth] {\includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_absbpseudo_hippo1.png}} \hfill \subcaptionbox{$ \| \widehat{\bm{C}}_l^M \|_{op}$}[0.95\linewidth] {\includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_normS_hippo1.png} \includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_normS_hippo2.png}} \hfill \subcaptionbox{$\widehat{C}_l^{block,M}$}[0.95\linewidth] {\includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_normSpseudo_hippo1.png} \includegraphics[height=2in,width=3in]{./plotsMainArkSupp/manhplot_normSpseudo_hippo2.png}} \hfill \caption{Real data results: Panels (a) -- (d) present the results for $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$, where genomic coordinates are displayed along the x-axis, the y-axis represents the magnitudes of $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$, and the horizontal dashed lines represent the threshold values $\gamma_{1,n}$ (Panel (a)), $\gamma_{2,n}$ (Panel (b)), $\gamma_{3,n}$ (Panel (c)) and $\gamma_{4,n}$ (Panel (d)). In Panels (c) and (d), the left and right figures represent the left and right hippocampi, respectively. The SNPs with $| \widehat{\beta}^{M}_{l} |$, $ \widehat{\beta}^{block,M}_{l} $, $ \| \widehat{\bm{C}}_l^M \|_{op}$ and $\widehat{C}_l^{block,M}$ greater than or equal to $\gamma_{1,n}$, $\gamma_{2,n}$, $\gamma_{3,n}$ and $\gamma_{4,n}$, and hence selected by $\widehat{\mathcal{M}}^{*}_1$, $\widehat{\mathcal{M}}^{block,*}_1$, $\widehat{\mathcal{M}}_2$ and $\widehat{\mathcal{M}}^{block}_2$ respectively, are highlighted with red diamond symbols.} \label{manhattanPlots} \end{figure} \section{Sensitivity analysis of ADNI data applications} \label{Sensitivity analysis} In our analysis, we set $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$ to the same size, following the convention that the size of a screening set is determined only by the sample size \citep{fan2008sure}, which is the same for $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$. To assess the sensitivity of our results to this choice, we conduct sensitivity analyses varying the relative sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$ in the joint screening procedure. For simplicity, we only consider the joint screening procedure proposed in Section \ref{jointscreening}. Figure \ref{dataPlots2} shows the estimates $\widehat{\bm B}$ corresponding to the left hippocampi (left part) and the right hippocampi (right part) using $\widehat{\mathcal{M}} = \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2}$. Denote the estimates corresponding to $|\widehat{\mathcal{M}}_{2}|/|\widehat{\mathcal{M}}_{1}^{*}| = 1/2, 1, 2$ by $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(1)}$ and $\widehat{\bm B}^{(2)}$, respectively. We set $|\widehat{\mathcal{M}}|=\lfloor n/\log(n) \rfloor = 89$. The estimates $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(1)}$ and $\widehat{\bm B}^{(2)}$ are plotted in Figure \ref{dataPlots2} (a), (b) and (c), respectively.
In addition, we consider $|\widehat{\mathcal{M}}_{2}|/|\widehat{\mathcal{M}}_{1}^{*}| =1$ but $|\widehat{\mathcal{M}}|=2\lfloor n/\log(n) \rfloor = 178$. Denote the corresponding estimate by $\widetilde{\bm B}^{(1)}$. We plot $\widetilde{\bm B}^{(1)}$ in Figure \ref{dataPlots2} (d). \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{}[0.4\linewidth] {\includegraphics[height=2.25in,width=3in]{./plotsMainArkSupp/imgene_noldblocks_VISCODE6score2propSb1_CV_hippoRL_Bv9_1se.jpg}} \hfill \subcaptionbox{}[0.4\linewidth] {\includegraphics[height=2.25in,width=3in]{./plotsMainArkSupp/imgene_noldblocks_VISCODE6score2propSb05_CV_hippoRL_Bv9_1se.jpg}} \hfill \subcaptionbox{}[0.4\linewidth] {\includegraphics[height=2.25in,width=3in]{./plotsMainArkSupp/imgene_noldblocks_VISCODE6score2propSb2_CV_hippoRL_Bv9_1se.jpg}} \hfill \subcaptionbox{}[0.4\linewidth] {\includegraphics[height=2.25in,width=3in]{./plotsMainArkSupp/imgene_noldblocks_VISCODE6score2_CV_hippoRL_Bv9new_1se.jpg}} \hfill \caption{Real data results: Panels (a), (b), (c) and (d) plot the estimates $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(1)}$, $\widehat{\bm B}^{(2)}$ and $\widetilde{\bm B}^{(1)}$ corresponding to the left hippocampi (left part) and the right hippocampi (right part).} \label{dataPlots2} \end{figure} Furthermore, by defining the relative risk of an estimate $\widehat{\bm{B}}$ as $\mathrm{RR}(\widehat{\bm{B}}) = \frac{\| \widehat{\bm{B}}-\widehat{\bm B}^{(1)} \|_F^2}{\| \widehat{\bm B}^{(1)} \|_F^2}$, we report the relative risks of the three estimates $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(2)}$ and $\widetilde{\bm B}^{(1)}$ in Table \ref{sensana1}. \begin{table}[htbp] \centering \begin{tabular}{ c c c} & Left hippocampi & Right hippocampi\\ \hline $\mathrm{RR}(\widehat{\bm B}^{(0.5)})$ & 0.0022 & 0.1074\\ $\mathrm{RR}(\widehat{\bm B}^{(2)})$ & 0.2938 & 0.0907\\ $\mathrm{RR}(\widetilde{\bm B}^{(1)})$& 0.0611 & 0.0927\\ \end{tabular} \caption{The relative risks of $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(2)}$ and $\widetilde{\bm B}^{(1)}$ for the left and right hippocampi. } \label{sensana1} \end{table} \begin{table}[htbp] \centering \begin{tabular}{ c c c} Number of negative entries & Left hippocampi & Right hippocampi\\ \hline $\widehat{\bm{B}}^{(0.5)}$ & 15,000 & 15,000\\ $\widehat{\bm{B}}^{(1)}$ & 15,000 & 15,000\\ $\widehat{\bm{B}}^{(2)}$ & 14,600 & 15,000\\ $\widetilde{\bm B}^{(1)}$ & 15,000 & 15,000\\ \end{tabular} \caption{ Number of negative entries of $\widehat{\bm{B}}$ for the left and right hippocampi. } \label{sensana2} \end{table} To summarize, the estimate $\widehat{\bm B}$ is not very sensitive to the choices of $|\widehat{\mathcal{M}}_{1}^{*}|$ and $|\widehat{\mathcal{M}}_{2}|$, except for the left hippocampus when $|\widehat{\mathcal{M}}_{2}|/|\widehat{\mathcal{M}}_{1}^{*}|=2$ and $|\widehat{\mathcal{M}}| = 89$. In fact, as shown in Table \ref{sensana2}, when $|\widehat{\mathcal{M}}_{2}|/|\widehat{\mathcal{M}}_{1}^{*}|=2$, $400$ entries for the left hippocampus are non-negative. We believe this may be due to some confounding variables being missed in the screening step. For instance, we find that rs157582, a previously identified risk locus for Alzheimer's disease \citep{guo2019genome}, is adjusted for in estimating $\widehat{\bm B}^{(0.5)}$, $\widehat{\bm B}^{(1)}$ and $\widetilde{\bm B}^{(1)}$, but not $\widehat{\bm B}^{(2)}$.
In general, however, as demonstrated in Figure \ref{dataPlots2}, the estimates $\widehat{\bm B}$ are similar across different choices of $|\widehat{\mathcal{M}}_{1}^{*}|$ and $|\widehat{\mathcal{M}}_{2}|$. \section{Subgroup analysis of ADNI data applications} \label{Subgroup analysis ADNI data applications} We repeat the analysis on the $391$ MCI and AD subjects. The estimates $\widehat{\bm B}$ corresponding to each part of the hippocampus, mapped onto a representative hippocampal surface, are plotted in Figure \ref{dataPlotsv12}(a). We have also plotted the hippocampal subfield \citep{apostolova20063d} in Figure \ref{dataPlotsv12}(b). The results are similar to those of the complete data analysis including all $566$ subjects. For example, from these plots, we can see that $13,700$ entries of $\widehat{\bm{B}}$ corresponding to the left and all the $15,000$ entries of $\widehat{\bm{B}}$ corresponding to the right hippocampi are negative. This implies that the radial distances of each pixel of both hippocampi are mostly negatively associated with the ADAS-13 score, which depicts the severity of behavioral deficits. Furthermore, the subfields with the strongest associations are still mostly CA1 and subiculum. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{}[0.45\linewidth] {\includegraphics[height=3in,width=4in]{./plotsMainArkSupp/imgene_ldblocks_VISCODE6score2_CV_hippoRL_Bv12_1se.jpg}} \hfill \subcaptionbox{}[0.45\linewidth] {\includegraphics[height=2.4in,width=2in]{./plotsMainArkSupp/hippocampalsubfield.pdf}} \hfill \caption{Real data results for MCI and AD subgroup: Panel (a) plots the estimate $\widehat{\bm B}$ corresponding to the left hippocampi (left part) and the right hippocampi (right part). Panel (b) plots the hippocampal subfield.} \label{dataPlotsv12} \end{figure} \section{Results for mediation analyses} \label{Results for mediation analyses} We perform the SNP-imaging-outcome mediation analyses following the same procedure as in \cite{bi2017genome}. Specifically, we regress the $30,000$ imaging measures against $6,087,205$ SNPs in the first step to search for the pairs of intermediate imaging measures and genetic variants. Then the behavioral outcome is fitted against each candidate genetic variant to identify a direct and significant influence. In the last step, the behavioral outcome is fitted against the identified genetic variant and its associated intermediate imaging measure simultaneously. A mediation relationship is established if a) the genetic variant is significant in both the first and second steps, b) the intermediate imaging measure is significant in the last step, and c) the genetic variant has a smaller coefficient in the last step compared with the second step. Note that the total effect of the genetic variant in the second step should be the summation of the direct and indirect effects, which motivates criterion c) of coefficient comparison. Note that the total effect may not always be greater than the direct effect in the last step when the direct and indirect effects have opposite signs; the causal inference tool proposed in this paper does not have this problem. Similar to \citet{bi2017genome}, we try to identify pairs of SNP and imaging measure for which the direct effect of the SNP on the behavioral outcome, the effect of the SNP on the imaging measure, and the effect of the imaging measure on the behavioral outcome are all significant. However, we find no SNP for which at least one paired imaging measure (i.e., hippocampal imaging pixel) is significant.
Therefore, our analysis provides no evidence for a SNP-imaging-outcome mediating relationship. \section{Additional results for simulation studies} \label{addsimulation} In this section, we list additional simulation results. In particular, Figures \ref{sim1step1n200sigma025} -- \ref{sim1step1n1000sigma025} present the screening results for Section \ref{Simulation for screening} with $(n,s,\sigma) = (200,5000,0.5)$, $(500,5000,1)$, $(500,5000,0.5)$, $(1000,5000,1)$ and $(1000,5000,0.5)$, respectively. Section \ref{Sensitivity and specificity analysis of simulation} presents the sensitivity and specificity analyses for Section \ref{Simulation for estimation}, where the detailed definitions of sensitivity and specificity can be found. Section \ref{Screening under different sparsity levels} presents an additional simulation study considering various sparsity levels of instrumental variables. Section \ref{Screening under different covariance structures of exposure errors} presents an additional simulation study considering different covariances of exposure errors. Section \ref{Screening and estimation under different sizes} conducts an additional simulation study by varying the relative sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$. Section \ref{Group Selection Using Linkage Disequilibrium Information} lists additional screening and estimation results for Section \ref{Group Selection Using Linkage Disequilibrium Information main} of the main article. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverageoptB2C1snry8.png}} \caption{Simulation results for the case $(n,s,\sigma) = (200,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$.
The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n200sigma025} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000coverageoptB2C1snry9.png}} \caption{Simulation results for the case $(n,s,\sigma) = (500,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion.
The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n500sigma1} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry8count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000coverageoptB2C1snry8.png}} \caption{Simulation results for the case $(n,s,\sigma) = (500,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion.
The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n500sigma025} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry9count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000coverageoptB2C1snry9.png}} \caption{Simulation results for the case $(n,s,\sigma) = (1000,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n1000sigma1} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000snry8count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n1000s5000coverageoptB2C1snry8.png}} \caption{Simulation results for the case $(n,s,\sigma) = (1000,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n1000sigma025} \end{figure} \subsection{Sensitivity and specificity analyses of simulation} \label{Sensitivity and specificity analysis of simulation} In this subsection, we report the sensitivity and specificity of the estimates. The sensitivity (true positive rate) is defined as $\frac{ |\{ j: \widehat{\beta}_j \neq 0 \} \cap \mathcal{M}_1 |}{ | \mathcal{M}_1|}$, i.e., the proportion of variables in the oracle adjustment set $\mathcal{M}_1$ that are selected by our estimation procedure. The specificity (true negative rate) is defined as $\frac{ |\{ j: \widehat{\beta}_j = 0 \} \cap (\mathcal{I} \cup \mathcal{S})| }{ | \mathcal{I} \cup \mathcal{S}|}$, i.e., the proportion of variables not in the oracle adjustment set $\mathcal{M}_1$ that are not selected by our estimation procedure. Furthermore, we define the instrumental specificity as $\frac{ |\{ j: \widehat{\beta}_j = 0 \} \cap \mathcal{I}| }{ | \mathcal{I}|}$, i.e., the proportion of variables in the instrumental set $\mathcal{I}$ that are not selected by our estimation procedure. \begin{table}[htbp] \centering \caption{Simulation results for $\sigma=1$ and $\sigma = 0.5$, when $n=500$.
The average $\bm\beta$ sensitivity, $\bm\beta$ instrumental specificity and $\bm\beta$ specificity, the MSE for $\bm\beta$, and the MSE for ${\bm B}$, with their associated standard errors in parentheses, are reported. The results are based on 100 Monte Carlo repetitions. The ``No Lasso'' estimate is calculated by including all the selected variables from the screening step and then estimating $\bm{B}$ using the optimization (\ref{min1}) without the $l_{1}$-regularization. The ``Oracle'' estimate is calculated by taking the correct set of confounders and precision variables as $X$ and then estimating $\bm{B}$ using the optimization (\ref{min1}) without the $l_{1}$-regularization. } \begin{tabular}{ cccccc } n = 500 & Sensitivity & Instrumental specificity & Specificity & MSE $\bm\beta$ & MSE ${\bm{B}}$\\ \hline \multicolumn{6}{c}{$\sigma$ = 1.0}\\ Oracle &1.000(0.000)&1.000(0.000)&1.000(0.000) &0.036(0.002)&0.553(0.005)\\ Proposed &0.833(0.000)&0.293(0.020)&0.998(0.000)&0.303(0.008)&0.574(0.006)\\ No Lasso &1.000(0.000)&0.000(0.000)&0.985(0.000)&1.740(0.078)&0.693(0.013)\\ \multicolumn{6}{c}{$\sigma$ = 0.5}\\ Oracle &1.000(0.000)&1.000(0.000)&1.000(0.000)&0.006(0.000)&0.340(0.004)\\ Proposed &0.897(0.008)&0.217(0.017)&0.999(0.000)&0.191(0.005)&0.345(0.004)\\ No Lasso &1.000(0.000)&0.000(0.000)&0.985(0.000)&0.372(0.017)&0.371(0.004)\\ \end{tabular} \label{sim1t1sssmmn500} \end{table} In the simulation studies, we report the results for the case $n=500$, which is close to the sample size $566$ in the real data. From Table \ref{sim1t1sssmmn500}, one can see that the second step can only regularize out some of the instrumental variables. We conjecture that a better tuning method may exist, which can regularize out more instrumental variables while still keeping the confounders and precision variables in the model. We leave this as future research. Nevertheless, one can also see from Table \ref{sim1t1sssmmn500} that although the proposed method may not remove all of the instrumental variables, eliminating even just some of the instruments greatly reduces the MSEs of both $\bm\beta$ and ${\bm{B}}$, compared to the method where we do not impose $l_{1}$-regularization on $\bm\beta$ in the second-step estimation (denoted by the ``No Lasso'' method). In addition, the estimation of ${\bm B}$ is reasonably good compared to the oracle estimates, as shown in Table \ref{sim1t1oracle} of the main article. \subsection{Screening under different sparsity levels} \label{Screening under different sparsity levels} We also consider different sparsity levels in the simulation. This is of particular interest for our study since, when there are more instrumental variables than confounders and precision variables, which could be the case in an imaging-genetics study, the robustness of the proposed method may be undermined. As discussed before, to reduce bias and increase the statistical efficiency of the estimated $\bm{B}$, the ideal adjustment set should include all confounders and precision variables while excluding instrumental variables and irrelevant variables. In particular, we consider three scenarios where the size of the instrumental variable set $\mathcal{I}$ is the same as, twice, and eight times the size of the set of confounders and precision variables $\mathcal{M}_1$. We set $s=5000$ and the settings for ${\bm B}$ and ${\bm C}$ remain the same as before: ${\bm B}$ is as in Figure \ref{figureTCross}(a), and ${\bm C}$ is as in Figure \ref{figureTCross}(b).
Further, we set ${\bm C}_l=v_l\bm{C}$, where $ v_1=-1/3$, $v_2=-1$, $v_3=-3$, $ v_{207}=v_{210}=\ldots=v_{204+6L}=-3$, $v_{208}=v_{211}=\ldots=v_{205+6L}=-1$, $v_{209}=v_{212}=\ldots=v_{206+6L}=-1/3$, and $ v_l=0 $ for $ 4\leq l\leq 206$ and $ 207+6L \leq l \leq s $. Here $L$ is a positive integer. We set $ \beta_1=3$, $\beta_2=1$, $\beta_3=1/3$, $ \beta_{104}=3$, $\beta_{105}=1$, $\beta_{106}=1/3$, and $ \beta_l=0$ for $4 \leq l \leq 103$ and $ 107\leq l\leq s$. In this setting, we have $\mathcal{C} = \{1, 2, 3\}$, $\mathcal{P} = \{ 104, 105, 106\}$, $\mathcal{I} =\{ 207,208,209,\ldots,206+6L\}$ and $\mathcal{S} = \{1, \ldots, 5000\} \backslash \{1,2,3,104,105,106,207,208,209,\ldots,206+6L\}$. Note that $\frac{|\mathcal{I}|}{|\mathcal{C} \cup \mathcal{P}|}=\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = L$. For $n=200$, we let $\sigma=1$ and $0.5$, and consider three different sparsity levels, $L=1,2,8$. The complete screening results can be found in Figures \ref{sim1step1n200sigma1sparsity_instru2} -- \ref{sim1step1n200sigma025sparsity_instru16}. Specifically, as summarized in Figure \ref{sim1step1n200sparsity_instru_summary}, when the number of instrumental variables is much larger than that of confounders and precision variables, the size of $\mathcal{M}_1 \cup \mathcal{M}_2$ is larger than the number of covariates kept in the first screening step. In this case, our results show that the screening step may include many instrumental variables while missing some confounders and precision variables, which may deteriorate the accuracy and efficiency of the second-step estimation. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 1$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru2coverage_snry9.png}} \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 1$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru2coverage_snry8.png}} \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 2$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=3.75cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru4coverage_snry9.png}} \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 2$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru4coverage_snry8.png}} \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 8$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru16coverage_snry9.png}} \subcaptionbox{$\frac{|\mathcal{I}|}{|\mathcal{M}_1|} = 8$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru16coverage_snry8.png}} \caption{Simulation results where the size of the instrumental variable set $\mathcal{I}$ is the same as, twice, and eight times that of $\mathcal{M}_1$ for the case $(n,s) = (200,5000)$: Panels (a), (c) and (e) plot the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$ when $\sigma=1$. Panels (b), (d) and (f) plot the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$ when $\sigma = 0.5$.
The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively. } \label{sim1step1n200sparsity_instru_summary} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru2count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru2coverage_snry9.png}} \caption{Simulation results where the number of instrumental variables is the same as that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma1sparsity_instru2} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru4count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru4coverage_snry9.png}} \caption{Simulation results where the number of instrumental variables is twice that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma1sparsity_instru4} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9sparsity_instru16count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru16coverage_snry9.png}} \caption{Simulation results where the number of instrumental variables is eight times that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,1)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma1sparsity_instru16} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru2count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru2coverage_snry8.png}} \caption{Simulation results where the number of instrumental variables is the same as that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma025sparsity_instru2} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru4count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru4coverage_snry8.png}} \caption{Simulation results where the number of instrumental variables is twice that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma025sparsity_instru4} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8sparsity_instru16count1.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000sparsity_instru16coverage_snry8.png}} \caption{Simulation results where the number of instrumental variables is eight times that of $\mathcal{M}_1$ for the case $(n,s,\sigma) = (200,5000,0.5)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively. } \label{sim1step1n200sigma025sparsity_instru16} \end{figure} \subsection{Screening and estimation under different covariances of exposure errors} \label{Screening under different covariance structures of exposure errors} We also consider various covariance structures for the exposure errors in the simulation. We use the same setting as Section \ref{Simulation for screening} of the main paper but take three different covariance structures for the exposure errors $\bm{E}_{i}$. In particular, the random error $\mathrm{vec}(\bm{E}_i)$ is independently generated from $N(\bm{0},\bm\Sigma_e)$, where we set the standard deviations of all elements in $\bm{E}_i$ to be $\sigma_e = 0.2$ and the correlation between $\bm{E}_{i,jk}$ and $\bm{E}_{i,j^\prime k^\prime}$ to be $\rho_2^{|j - j^\prime| + |k-k^\prime|}$ for $1 \leq j,k,j^\prime, k^\prime \leq 64$, where $\rho_2$ is a correlation parameter. We consider three scenarios where $\rho_2 = 0.2$, $0.5$, and $0.8$, and report the selected covariates from the screening step. We consider $\sigma=1$ or $0.5$ and fix the sample size $n=200$.
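To make this error-generating mechanism concrete, the following is a minimal NumPy sketch of the covariance construction just described; since the correlation $\rho_2^{|j-j^\prime|+|k-k^\prime|}$ factorizes over the two image directions, $\bm\Sigma_e$ is the Kronecker product of two AR(1) correlation matrices. The function name and the small illustrative image size are our own choices, not part of the actual simulation code.
\begin{verbatim}
import numpy as np

def exposure_error_cov(p, rho2, sigma_e=0.2):
    # Covariance of vec(E_i) with corr(E[j,k], E[j',k']) = rho2**(|j-j'|+|k-k'|).
    idx = np.arange(p)
    R = rho2 ** np.abs(idx[:, None] - idx[None, :])  # p x p AR(1) correlation
    return sigma_e ** 2 * np.kron(R, R)              # separable (Kronecker) structure

rng = np.random.default_rng(0)
p = 8  # illustrative size; the simulation uses 64 x 64 images
Sigma_e = exposure_error_cov(p, rho2=0.5)
E = rng.multivariate_normal(np.zeros(p * p), Sigma_e).reshape(p, p)  # one draw of E_i
\end{verbatim}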
The complete screening results can be found in Figure \ref{sim1step1n200sigma1} as well as Figures \ref{sim1step1n200sigma025}, \ref{sim1step1n200sigma1rho202} -- \ref{sim1step1n200sigma025rho208} here ($\rho_2$ is set to be $0.5$ in Section \ref{Simulation for screening} of the main paper). Specifically, as summarized in Figure \ref{sim1step1n200rho2_summary}, when $\rho_2$ increases, the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$ does not change much. \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{$\rho_2 = 0.2$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry9rho202.png}} \subcaptionbox{$\rho_2 = 0.2$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry8rho202.png}} \subcaptionbox{$\rho_2 = 0.5$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverageoptB2C1snry9.png}} \subcaptionbox{$\rho_2 = 0.5$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverageoptB2C1snry8.png}} \subcaptionbox{$\rho_2 = 0.8$, $\sigma=1$}[0.45\linewidth] {\includegraphics[width=7cm,height=3.75cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry9rho208.png}} \subcaptionbox{$\rho_2 = 0.8$, $\sigma = 0.5$}[0.45\linewidth] {\includegraphics[width=7cm,height=10cm,keepaspectratio]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry8rho208.png}} \caption{Simulation results under different covariance structures of the exposure errors ($\rho_2 = 0.2$, $0.5$ and $0.8$) for the case $(n,s) = (200,5000)$: Panels (a), (c) and (e) plot the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$ when $\sigma=1$. Panels (b), (d) and (f) plot the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$ when $\sigma = 0.5$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively. } \label{sim1step1n200rho2_summary} \end{figure} In addition, we evaluate the performance of our estimation procedure after the first-step screening. For the size of $\widehat{\mathcal{M}}$ in the screening step, we set $|\widehat{\mathcal{M}}| = \lfloor n / \log(n) \rfloor$, so that $|\widehat{\mathcal{M}}|=37$ for sample size $n = 200$. We report the mean squared errors (MSEs) for $\bm{\beta}$ and $\bm{B}$, defined as $\|{\bm\beta}-\widehat{\bm\beta}\|_2^2$ and $\|{\bm{B}}-\widehat{\bm{B}}\|_F^2$, respectively. Table \ref{sim1rho2t1} summarizes the average MSEs for $\bm\beta$ and $\bm{B}$ among 100 Monte Carlo runs. We can see that the MSEs decrease as $\rho_2$ increases. As the nuclear norm penalization procedure can be regarded as a form of spatial smoothing, large correlations among $\bm{E}_{i}$ actually help with the spatial smoothing, and thus the estimation accuracy improves as $\rho_2$ increases.
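For completeness, the two error metrics reported in Table \ref{sim1rho2t1} can be computed as in the following minimal sketch (the function and variable names are ours):
\begin{verbatim}
import numpy as np

def mse_beta(beta_hat, beta):
    # squared l2 error, || beta - beta_hat ||_2^2
    return float(np.sum((beta_hat - beta) ** 2))

def mse_B(B_hat, B):
    # squared Frobenius error, || B - B_hat ||_F^2
    return float(np.linalg.norm(B_hat - B, ord="fro") ** 2)
\end{verbatim}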
\begin{table}[htbp] \centering \caption{Simulation results for $\sigma=1$ and $\sigma = 0.5$: the average MSEs for $\bm\beta$ and ${\bm B}$, with their associated standard errors in parentheses, are reported. The results are based on 100 Monte Carlo repetitions.} \begin{tabular}{ lcc | lcc } $\sigma = 1.0$ & MSE $\bm\beta$ & MSE ${\bm{B}}$ &$\sigma = 0.5$ &MSE $\bm\beta$ & MSE ${\bm{B}}$ \\ \hline $\rho_2=0.2$ &0.986(0.099)&0.802(0.011)& $\rho_2=0.2$ &0.397(0.042)&0.701(0.006)\\ $\rho_2=0.5$ &0.496(0.021)&0.667(0.005)& $\rho_2=0.5$ &0.276(0.009)&0.528(0.005)\\ $\rho_2=0.8$ &0.252(0.010)&0.439(0.007)& $\rho_2=0.8$ &0.097(0.006)&0.305(0.004)\\ \end{tabular} \label{sim1rho2t1} \end{table} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho202count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry9rho202.png}} \caption{Simulation results for the case $(n,s,\sigma,\rho_2) = (200,5000,1,0.2)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma1rho202} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry9rho208count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry9rho208.png}} \caption{Simulation results for the case $(n,s,\sigma,\rho_2) = (200,5000,1,0.8)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma1rho208} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho202count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry8rho202.png}} \caption{Simulation results for the case $(n,s,\sigma,\rho_2) = (200,5000,0.5,0.2)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.
} \label{sim1step1n200sigma025rho202} \end{figure} \begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count1.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count2.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count3.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count4.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count5.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000snry8rho208count6.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n200s5000coverage_snry8rho208.png}} \caption{Simulation results for the case $(n,s,\sigma,\rho_2) = (200,5000,0.5,0.8)$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively. } \label{sim1step1n200sigma025rho208} \end{figure} \subsection{Screening and estimation under different sizes of $\widehat{\mathcal{M}}_{1}^{*}$ and $\widehat{\mathcal{M}}_{2}$} \label{Screening and estimation under different sizes} In addition, we conduct a similar study following the same setting as described in Section \ref{Simulation for screening} of the main paper, with $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|=2$ and $1/2$. We use $n = 500$ here since it is close to the number of observations $n = 566$ in the real data analysis. Specifically, as summarized in Figures \ref{sim1step1n500sigma1_propSb05} and \ref{sim1step1n500sigma1_propSb2}, when the ratio $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|$ is taken as $1/2$ or $2$, the performances of the proposed joint screening method are quite similar to each other. Comparing them with Figure \ref{sim1step1n500sigma1}, where $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|=1$, we see that the screening performances are again quite similar.
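To indicate how a screening set with a prescribed ratio $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*| = r$ can be formed, we include a minimal sketch below. Here \texttt{outcome\_stat} and \texttt{exposure\_stat} stand for the two marginal screening statistics (the roles played by $|\widehat{\beta}^{M}_{l}|$ and $\|\widehat{\bm{C}}_l^M\|_{op}$); for simplicity the sketch fixes $|\widehat{\mathcal{M}}_1^*| + |\widehat{\mathcal{M}}_2|$ rather than the size of the union, so it is an illustration rather than the exact implementation.
\begin{verbatim}
import numpy as np

def joint_screening_set(outcome_stat, exposure_stat, m_total, r):
    # Keep the top-ranked covariates under each marginal statistic,
    # with sizes chosen so that m2 / m1 is approximately r.
    m1 = int(round(m_total / (1.0 + r)))  # size of M1_hat
    m2 = m_total - m1                     # size of M2_hat
    M1 = set(np.argsort(outcome_stat)[::-1][:m1])
    M2 = set(np.argsort(exposure_stat)[::-1][:m2])
    return M1 | M2  # joint screening set (the union may be smaller than m1 + m2)
\end{verbatim}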
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count1propSb05.png}} \subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count2propSb05.png}} \subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count3propSb05.png}} \subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count4propSb05.png}} \subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count5propSb05.png}} \subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count6propSb05.png}} \subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000coverage_snry9propSb05.png}} \caption{Simulation results for the case $(n,s,\sigma) = (500,5000,1)$ when $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|=1/2$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion.
The green solid, the red dashed, and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n500sigma1_propSb05} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count1propSb2.png}}
\subcaptionbox{Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count2propSb2.png}}
\subcaptionbox{Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count3propSb2.png}}
\subcaptionbox{Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count4propSb2.png}}
\subcaptionbox{Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count5propSb2.png}}
\subcaptionbox{Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000snry9count6propSb2.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=7cm,height=3.75cm]{./plotsMainArkSupp/sim1oal_p64_v2_optNorm2_n500s5000coverage_snry9propSb2.png}}
\caption{Simulation results for the case $(n,s,\sigma) = (500,5000,1)$ when $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|=2$: Panels (a) -- (f) plot the average coverage proportion for $X_l$, where $l=1,2,3,104,105$ and $106$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{M}_1 = \{1,2,3,104,105,106\}$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The green solid, the red dashed, and the black dash-dotted lines denote our joint screening method, the outcome screening method, and the intersection screening method, respectively.} \label{sim1step1n500sigma1_propSb2} \end{figure}
Furthermore, we evaluate the performance of our estimation procedure after the first-step screening. For the size of $\widehat{\mathcal{M}}$ in the screening step, we set $|\widehat{\mathcal{M}}| = \lfloor n / \log(n) \rfloor$, so that $|\widehat{\mathcal{M}}|=89$ for sample size $n = 500$. We report the mean squared errors (MSEs) for $\bm{\beta}$ and $\bm{B}$, defined as $\|{\bm\beta}-\widehat{\bm\beta}\|_2^2$ and $\|{\bm{B}}-\widehat{\bm{B}}\|_F^2$, respectively; a minimal sketch of this computation is given below.
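Both error measures can be computed directly from the fitted quantities. The following is a minimal sketch, assuming the true and estimated coefficients are available as NumPy arrays; the variable names are hypothetical.
\begin{verbatim}
import numpy as np

def mse_beta(beta_true, beta_hat):
    # squared l2 error for the coefficient vector beta
    return np.sum((beta_true - beta_hat) ** 2)

def mse_B(B_true, B_hat):
    # squared Frobenius error for the coefficient matrix B
    return np.linalg.norm(B_true - B_hat, ord='fro') ** 2
\end{verbatim}
Averaging these quantities over the 100 Monte Carlo repetitions gives the entries reported in Table \ref{sim1t1mmn500sigma1propSb}.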
As summarized in Table \ref{sim1t1mmn500sigma1propSb}, the average MSEs for $\bm\beta$ and $\bm{B}$ among the 100 Monte Carlo runs are all similar to each other for the different choices of $|\widehat{\mathcal{M}}_1^*|$ and $|\widehat{\mathcal{M}}_2|$. Therefore, depending on prior knowledge about the numbers and signal strengths of the confounding, precision, and instrumental variables, one may choose the sizes of $\widehat{\mathcal{M}}_1^*$ and $\widehat{\mathcal{M}}_2$ differently; the resulting estimates of $\bm\beta$ and $\bm{B}$ appear to be similar across these choices.
\begin{table}[htbp] \caption{Simulation results of the proposed estimates for $(n,s,\sigma) = (500,5000,1)$, when $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|$ is taken as 0.5, 1.0 and 2.0: the average MSEs for $\bm\beta$ and ${\bm B}$, with their associated standard errors in parentheses. The results are based on 100 Monte Carlo repetitions.} \centering
\begin{tabular}{ccc} $|\widehat{\mathcal{M}}_2|/|\widehat{\mathcal{M}}_1^*|$ & MSE $\bm\beta$ & MSE ${\bm{B}}$\\ \hline 0.5&0.301(0.008)&0.567(0.005)\\ 1.0&0.303(0.008)&0.574(0.006)\\ 2.0&0.302(0.008)&0.574(0.006)\\ \end{tabular} \label{sim1t1mmn500sigma1propSb} \end{table}
\subsection{Screening and estimation using blockwise joint screening} \label{Group Selection Using Linkage Disequilibrium Information}
In this section, we list additional screening and estimation results for Section \ref{Group Selection Using Linkage Disequilibrium Information main} of the main article. In particular, the results for the screening step, in which $s=5000$, $\sigma=1$, $n=200,500,1000$, and $K = 2,4,6,12,24,52$, can be found in Figures \ref{sim3step1n200sizesig2sigma1} -- \ref{sim3step1n1000sizesig52sigma1}; the complete results for the second-step estimation under the same settings can be found in Table \ref{sim1t2}. A schematic of the blockwise screening rule is sketched below.
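As a rough illustration of how the blockwise variant aggregates evidence over linkage-disequilibrium (LD) blocks, the sketch below screens whole blocks at once. This is a hypothetical rendering: it assumes each block statistic is the average of the marginal statistics of its members, which is consistent with the block-size scaling of the thresholds $\gamma_{3,n}$ and $\gamma_{4,n}$ in the theory section below, but the precise blockwise statistics are defined in the main article.
\begin{verbatim}
import numpy as np

def blockwise_screen(beta_stat, C_stat, blocks, g3, g4):
    # Keep every variable in a block whose average outcome
    # statistic exceeds g3 or whose average exposure statistic
    # exceeds g4; 'blocks' is a list of index arrays, one per
    # LD block.  All names here are hypothetical.
    kept = []
    for B_j in blocks:
        if (np.mean(np.abs(beta_stat[B_j])) >= g3
                or np.mean(C_stat[B_j]) >= g4):
            kept.extend(B_j.tolist())
    return np.array(sorted(kept))
\end{verbatim}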
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig2rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,2,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig2sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig4rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,4,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig4sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig6rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,6,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig6sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig12rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,12,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig12sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig24rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,24,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig24sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n200s5000snry9stg4sizesig52rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (200,5000,52,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n200sizesig52sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig2rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,2,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig2sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig4rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,4,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig4sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig6rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,6,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig6sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig12rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,12,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig12sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig24rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,24,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig24sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n500s5000snry9stg4sizesig52rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (500,5000,52,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n500sizesig52sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig2rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,2,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig2sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig4rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,4,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig4sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig6rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,6,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig6sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig12rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,12,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig12sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig24rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,24,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig24sigma1} \end{figure}
\begin{figure}[htbp] \captionsetup[subfigure]{justification=centering} \centering
\subcaptionbox{\footnotesize Confounder: strong \\ outcome, weak exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count1.png}}
\subcaptionbox{\footnotesize Confounder: medium \\ outcome, medium exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count2.png}}
\subcaptionbox{\footnotesize Confounder: weak \\ outcome, strong exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count3.png}}
\subcaptionbox{\footnotesize Precision: strong \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count4.png}}
\subcaptionbox{\footnotesize Precision: medium \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count5.png}}
\subcaptionbox{\footnotesize Precision: weak \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count6.png}}
\subcaptionbox{\footnotesize Precision: weaker \\ outcome, zero exposure}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14count7.png}}
\subcaptionbox{Overall coverage of $\mathcal{M}_1$}[0.45\linewidth] {\includegraphics[width=6cm,height=3.5cm]{./plotsMainArkSupp/sim3oal_p64_optNorm2_n1000s5000snry9stg4sizesig52rho14coverage.png}}
\caption{Simulation results for the case $(n,s,K,\sigma) = (1000,5000,52,1)$: Panels (a) -- (g) plot the average coverage proportion for $X_l$, where $l \in \mathcal{M}_1 = \{1,2,3,104,105,106\} \cup \mathcal{P}_{LD}$. Panels (a) -- (c) correspond to strong outcome and weak exposure predictor, moderate outcome and moderate exposure predictor, and weak outcome and strong exposure predictor; Panels (d) -- (f) correspond to strong, moderate, and weak predictors of outcome only. Panel (g) plots the average coverage proportion for the index set $\mathcal{P}_{LD}$. Panel (h) plots the average coverage proportion for the index set $\mathcal{M}_1$. The x-axis represents the size of $\widehat{\mathcal{M}}$, while the y-axis denotes the average proportion. The blue dotted, green solid, red dashed, and black dash-dotted lines denote the blockwise joint screening, joint screening, outcome screening, and intersection screening methods, respectively.} \label{sim3step1n1000sizesig52sigma1} \end{figure}
\begin{table}[htbp] \centering \caption{Simulation results for $\sigma=1$: the average MSEs for $\bm\beta$ and ${\bm B}$, with their associated standard errors in parentheses. The left panel summarizes the results from the joint screening method; the right panel summarizes the results from the blockwise joint screening method. The results are based on 100 Monte Carlo repetitions.
} \begin{tabular}{rrr|rrr} Proposed & MSE $\bm\beta$ & MSE ${\bm{B}}$ & Proposed (block) & MSE $\bm\beta$ & MSE ${\bm{B}}$ \\ \hline
n=200,K=2&1.423(0.096)&0.785(0.009)&n=200,K=2&1.390(0.090)&0.793(0.010)\\
n=500,K=2&0.831(0.069)&0.726(0.008)&n=500,K=2&0.892(0.082)&0.734(0.009)\\
n=1000,K=2&0.591(0.050)&0.676(0.008)&n=1000,K=2&0.488(0.028)&0.666(0.006)\\ \hline
n=200,K=4&1.667(0.096)&0.815(0.011)&n=200,K=4&1.548(0.088)&0.805(0.010)\\
n=500,K=4&1.059(0.082)&0.751(0.011)&n=500,K=4&1.094(0.090)&0.758(0.012)\\
n=1000,K=4&0.606(0.057)&0.671(0.008)&n=1000,K=4&0.555(0.045)&0.678(0.008)\\ \hline
n=200,K=6&1.955(0.101)&0.826(0.010)&n=200,K=6&1.701(0.084)&0.816(0.009)\\
n=500,K=6&1.155(0.085)&0.749(0.011)&n=500,K=6&1.107(0.089)&0.752(0.011)\\
n=1000,K=6&0.578(0.051)&0.674(0.008)&n=1000,K=6&0.551(0.047)&0.672(0.008)\\ \hline
n=200,K=12&2.466(0.096)&0.890(0.039)&n=200,K=12&2.223(0.129)&0.838(0.011)\\
n=500,K=12&1.024(0.082)&0.735(0.010)&n=500,K=12&0.927(0.077)&0.727(0.008)\\
n=1000,K=12&0.570(0.046)&0.673(0.008)&n=1000,K=12&0.627(0.057)&0.681(0.009)\\ \hline
n=200,K=24&2.533(0.164)&0.847(0.014)&n=200,K=24&2.136(0.138)&0.821(0.010)\\
n=500,K=24&1.065(0.080)&0.733(0.010)&n=500,K=24&1.119(0.088)&0.737(0.011)\\
n=1000,K=24&0.662(0.050)&0.669(0.008)&n=1000,K=24&0.677(0.056)&0.673(0.009)\\ \hline
n=200,K=52&14.650(0.815)&2.034(0.487)&n=200,K=52&13.693(0.728)&1.870(0.459)\\
n=500,K=52&1.816(0.144)&0.775(0.019)&n=500,K=52&1.725(0.143)&0.762(0.019)\\
n=1000,K=52&0.937(0.066)&0.684(0.010)&n=1000,K=52&0.861(0.056)&0.675(0.008)\\
\end{tabular} \label{sim1t2} \end{table} \newpage
\section{Theoretical properties} \label{Theoretical guarantees}
Starting from here, we denote $\dot{\bm\beta} = ( \dot\beta_1, \ldots, \dot\beta_s)^T$, $\dot{\bm B}$, and $\dot{\bm C}_l$ as the true values of $\bm\beta = (\beta_1, \ldots,\beta_s)^T$, ${\bm B}$, and ${\bm C}_l$, respectively. Furthermore, we denote $\dot{\mathcal{C}}$, $\dot{\mathcal{P}}$, and $\dot{\mathcal{I}}$ as the true index sets corresponding to ${\mathcal{C}}$, ${\mathcal{P}}$, and ${\mathcal{I}}$.
\subsection{Sure screening property}
In this subsection, we study the theoretical properties of our screening procedure. We let $\mathcal{M}_1 = \{1 \leq l \leq s_n : \dot{\beta}_{l}^{*} \neq 0 \}= \dot{\mathcal{C}} \cup \dot{\mathcal{P}}$, where $\dot{\mathcal{C}} = \{1 \leq l \leq s_n: \dot{\bm{C}}_{l} \neq \bm{0} \textrm{ and } \dot{\beta}_{l}^{*} \neq 0 \}$ and $\dot{\mathcal{P}} = \{1 \leq l \leq s_n: \dot{\bm{C}}_{l} = \bm{0} \textrm{ and } \dot{\beta}_{l}^{*} \neq 0 \}$. Here $\dot{\beta}_{l}^{*}$ and $\dot{\bm{C}}_{l}$ are the true values of $\beta_{l}$ and $\bm{C}_{l}$, respectively, and $\dot{\bm{B}}$ is the true value of $\bm{B}$. We have the following theorems, whose assumptions are collected in Section \ref{assumption}.
\begin{thm} \label{thm1} Under Assumptions (A0) -- (A3) and (A5), let $\gamma_{1,n} = \alpha D_1 n^{-\kappa}$ and $\gamma_{2,n} = \alpha D_1 (pq)^{1/2} n^{-\kappa}$ with $0 < \alpha < 1$. Then we have $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}}) \to 1 $ as $n \to \infty$. \end{thm}
Since the screening procedure automatically includes all the significant covariates for small values of $\gamma_{1,n}$ and $\gamma_{2,n}$, it is necessary to consider the size of $\widehat{\mathcal{M}}$, which we quantify in Theorem \ref{thm2}.
\begin{thm} \label{thm2} Under Assumptions (A0) -- (A5), when $\gamma_{1,n} = \alpha D_1 n^{-\kappa}$ and $\gamma_{2,n} = \alpha D_1 (pq)^{1/2} n^{-\kappa}$ with $0 < \alpha < 1$, we have $P(|\widehat{\mathcal{M}}| = O(n^{2 \kappa + \tau})) \to 1$ as $n \to \infty$. \end{thm} \begin{corr} \label{coro2} Under Assumptions (A0) -- (A5), when $\gamma_{1,n} = \alpha D_1 n^{-\kappa}$ and $\gamma_{2,n} = \alpha D_1 (pq)^{1/2} n^{-\kappa}$ with $0 < \alpha < 1$, we have $P(|\widehat{\mathcal{M}} \setminus \widehat{\mathcal{M}}_{1}^*| = O(n^{2\kappa +\tau})) \to 1$ as $n \to \infty$. \end{corr} Theorem \ref{thm1} shows that if $\gamma_{1,n}$ and $\gamma_{2,n}$ are chosen properly, our screening procedure will include all significant variables with high probability. Theorem \ref{thm2} guarantees that the size of the selected model from the screening procedure is only of a polynomial order of $n$, even though the original model size is of an exponential order of $n$. Therefore, the false selection rate of our screening procedure vanishes as $n \to \infty$, while the size of $\widehat{\mathcal{M}}$ grows at a polynomial order of $n$, where the order depends on the two constants $\kappa$ and $\tau$ defined in Section \ref{assumption}. Theorem \ref{thm1a} shows that our blockwise screening procedure also enjoys the sure screening property. The proofs of these theorems are collected in Section \ref{proofthm}. \begin{thm} \label{thm1a} Under Assumptions (A0) -- (A3) and (A5), assume further that the $j$-th block size satisfies $|\mathcal{B}_j| = D_6 n^{\nu_1}$ for some constant $D_6>0$. Let $\gamma_{1,n} = \alpha D_1 n^{-\kappa}$, $\gamma_{2,n} = \alpha D_1 (pq)^{1/2} n^{-\kappa}$, $\gamma_{3,n} = \alpha D_1 D_6^{-1} n^{-\kappa-\nu_1}$, and $\gamma_{4,n} = \alpha D_1 D_6^{-1} (pq)^{1/2} n^{-\kappa-\nu_1}$ with $0 < \alpha < 1$; then we have $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}}^{block}) \to 1 $ as $n \to \infty$. \end{thm} \subsection{Theory for two-step estimator} In this subsection, we develop a unified theory for our two-step estimator. In particular, we derive a non-asymptotic bound for the final estimates. We first introduce some notation. Denote the parameter $\bm{\theta} =\left\{ \bm\beta^{\mathrm{\scriptscriptstyle T}}, \mathrm{vec}^{\mathrm{\scriptscriptstyle T}} (\bm{B})\right\}^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{s+pq}$, where $\bm\beta \in \mathbb{R}^{s}$ and $\bm{B} \in \mathbb{R}^{p \times q}$. Using this notation, problem \eqref{min1} can be recast as minimizing $l(\bm{\theta}) + P(\bm{\theta})$, where $l(\bm{\theta}) = (2n)^{-1} \sum_{i=1}^n \left( Y_i - \langle \bm{Z}_i, \bm{B} \rangle - \sum_{l \in \widehat{\mathcal{M}}} {X}_{il} \beta_l \right)^2$ and $P(\bm{\theta}) = \lambda_{1} \sum_{l \in \widehat{\mathcal{M}}} |\beta_l| + \lambda_{2} \| \bm{B}\|_* $. In addition, we let $\dot{\bm\theta} = \{ \dot{\bm\beta}^{\mathrm{\scriptscriptstyle T}}, \mathrm{vec}(\dot{\bm{B}})^{\mathrm{\scriptscriptstyle T}}\}^{\mathrm{\scriptscriptstyle T}}$ be the true value for $\bm\theta$, where $\dot{\bm\beta}$ and $\dot{\bm{B}}$ are the true values for $\bm\beta$ and $\bm{B}$, respectively. 
Let $\widehat{\bm\theta}_{\bm\lambda} = \{\widehat{\bm\beta}^{\mathrm{\scriptscriptstyle T}},\mathrm{vec}(\widehat{\bm{B}})^{\mathrm{\scriptscriptstyle T}}\}^{\mathrm{\scriptscriptstyle T}}$ be the proposed estimator for $\bm\theta$, where $\widehat{\bm\beta}$ and $\widehat{\bm{B}}$ are the estimators obtained from \eqref{min1} for tuning parameters $\bm\lambda = (\lambda_1,\lambda_2)$. We now give a nonasymptotic error bound for the proposed two-step estimator $\widehat{\bm\theta}_{\bm\lambda}$: \begin{thm}(Nonasymptotic error bounds for the two-step estimator) \label{thm3} Under Assumptions (A0) -- (A9), $2 \kappa + \tau < 1$ and $\kappa < 1/4$, and the condition that $ \mathcal{M}_1\subset \widehat{\mathcal{M}}$ with $|\widehat{\mathcal{M}}| = O(n^{2 \kappa + \tau}) $, conditional on $\widehat{\mathcal{M}}$, there exist some positive constants $c_1, c_2, c_3, c_4$, $C_0$, $C_1$, $g_0$ and $g_1$, such that for $\lambda_1 \geq 2 \sigma_{0} [ 2 n^{-1} \{ \log (\log n) + C_0 (2 \kappa + \tau)\log n \}]^{1/2}$ and $\lambda_2 \geq 2 b s_2 \sigma_{0} [ 2 n^{-1} \{ 3 \log s_2 + \log (\log n) \}]^{1/2} + 4 n^{-1/2} \sigma_{\epsilon} (p^{1/2} + q^{1/2})$, with probability at least $1- c_1/ \log n - c_2 / (s_2 \log n) - c_3 \exp\{ -c_4 (p+q)\} - \exp(-n) $, one has $$\left\| \widehat{\bm\theta}_{\bm\lambda} - \dot{\bm\theta} \right\|_2^2 \leq C_0 \max\left\{C_1 \lambda_1^2 n^{2 \kappa + \tau},\lambda_2^2 r \right\} \iota^{-2}.$$ \end{thm} The bound in Theorem \ref{thm3} implies that the convergence rate of the proposed estimator $\widehat{\bm\theta}_{\bm\lambda}$ is $O(\max\{n^{2\kappa+\tau-1},n^{1-2\tau} \} )$. Here $\iota$ is a positive constant defined in Assumption (A6) in Section \ref{assumption}, and $r$ is the rank of $\dot{\bm{B}}$. The convergence rate is controlled by $\kappa$ and $\tau$, where $\kappa$ controls the exponential rate at which the model complexity can diverge and $\tau$ controls the rate at which the largest eigenvalue of the population covariance matrix can grow. The proof of the theorem is deferred to Section \ref{proofthm}. \section{Assumptions for main theorems}\label{assumption} In this section, we state the assumptions for the main theorems. We first make the following assumptions, which are needed for Theorems \ref{thm1} and \ref{thm2}.\\ (A0) The covariates $X_i$ are independent and identically distributed (i.i.d.) with mean zero and covariance $\Sigma_x$. The random errors $\epsilon_i$ are i.i.d. with mean zero and variance $\sigma_{\epsilon}^2$. Define $\sigma_l^2 = (\Sigma_x)_{ll}$. The vectorized error matrices $\mathrm{vec}(E_i)$ are i.i.d. with mean zero and covariance $\Sigma_e$. There exists a constant $\sigma_x > 0$ such that $ \left\|\Sigma_{x} \right\|_{\infty} \leq \sigma_x $. Moreover, $x_i$ is independent of $E_i = (E_{i,jk})$ and $\epsilon_i$. 
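As a simple illustration of the moment conditions to come (a hypothetical special case, not an additional requirement): if $x_{il} \sim N(0, \sigma_l^2)$ with $\sigma_l^2 \leq \sigma_x$, then the moment generating function of a $\chi^2_1$ variable gives
\[
E\left[ e^{D_2 x_{il}^2} \right] = \left( 1 - 2 D_2 \sigma_l^2 \right)^{-1/2} \leq \left( 1 - 2 D_2 \sigma_x \right)^{-1/2}
\]
for any $0 < D_2 < 1/(2 \sigma_x)$, so the first part of Assumption (A2) below is satisfied with $D_3 = (1 - 2 D_2 \sigma_x)^{-1/2}$.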
\\ (A1) There exist some constants $D_1 > 0 $ and $b> 0 $, and $0 < \kappa < 1/2$, such that \begin{eqnarray*} \min_{l \in \mathcal{M}_1} \left| cov \left(\sum_{l^\prime \in \mathcal{M}_1} x_{i l^\prime} \dot{\beta_{l^\prime}^{*}}, x_{il} \right) \right| &\geq& D_1 n^{-\kappa},\\ \min_{l \in \mathcal{M}_2} \left\| cov \left( \sum_{l^\prime \in \mathcal{M}_2} x_{il^\prime} * \dot{\bm{C}_{l^\prime}}, x_{il}\right) \right\|_{op} &\geq& D_1 (pq)^{1/2} n^{-\kappa}, \end{eqnarray*} and $\max \left\{\max_{l \in \mathcal{M}_2} \left\| \dot{\bm{C}_{l}} \right\|_{\infty}, \max_{l \in \mathcal{M}_2} \left\| \dot{\bm{C}_{l}} \right\|_{op} ,\max_{l \in \mathcal{M}_2} \left| \langle \dot{\bm{C}_{l}}, \dot{\bm{B}} \rangle \right|, \max_{l \in \mathcal{M}_1} | \dot{\beta}_{l}^{*} | \right\}< b$.\\ (A2) There exist positive constants $D_2$ and $D_3$ such that \[ \max \left\{ E[e^{D_2 x_{il}^2}],E[e^{D_2 E_{i,jk}^2}] , E[e^{D_2\langle \bm{E}_i, \dot{\bm{B}}\rangle^2}]\right\} \leq D_3 \] for every $1 \leq l \leq s_n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. Moreover, letting $\bm \epsilon = (\epsilon_1,\ldots,\epsilon_n)^T$ denote the $n$-dimensional zero-mean error vector, there exists a constant $ \sigma_{0} > 0$ such that for any fixed $\|\bm v\|_2 = 1$, $P( \left| \langle \bm v, \bm \epsilon \rangle \right| \geq t ) \leq 2 \exp \left( - \frac{t^2}{2 \sigma_{0}^2} \right)$ for all $t>0$.\\ (A3) There exists a constant $D_4 > 0 $ such that $\log(s_n) = D_4 n^{\xi}$ for $\xi \in (0,1-2\kappa)$. \\ (A4) There exist constants $D_5 > 0 $ and $\tau > 0 $ such that $\lambda_{\max} (\Sigma_x) \leq D_5 n^\tau$.\\ (A5) $\log(pq) = o(n^{1- 2\kappa})$.\\ Before we state the assumptions for Theorem \ref{thm3}, we first introduce some notation. Denote $P(\bm{\theta}) = P_1(\bm{\beta})+P_2(\bm{B})$, where $ P_1(\bm{\beta})= \lambda_{1} \sum_{l \in \widehat{\mathcal{M}}} |\beta_l| $ and $P_2(\bm{B}) = \lambda_{2} \| \bm{B}\|_* $. In addition, let $r = \mathrm{rank}(\dot{\bm{B}})$, the true rank of the matrix $\dot{\bm{B}} \in \mathbb{R}^{p \times q}$. Let us consider the class of matrices $ \Theta$ that have rank $r \leq \min\left\{ p,q \right\}$. For any given matrix $\Theta$, we let $\mathrm{row}(\Theta) \subset \mathbb{R}^q$ and $\mathrm{col}(\Theta) \subset \mathbb{R}^p$ denote its row and column space, respectively. Let $U$ and $V$ be a given pair of $r$-dimensional subspaces $U \subset \mathbb{R}^{p}$ and $V \subset \mathbb{R}^{q}$, respectively. 
For a given $\bm\theta$ and pair $(U,V)$, we define the subspaces $\Omega_1(\mathcal{M}_1)$, $\overline{\Omega}_1(\mathcal{M}_1)$, $\overline{\Omega}^{\perp}_1(\mathcal{M}_1)$, $\Omega_2(U,V)$, $\overline{\Omega}_2(U,V)$ and $\overline{\Omega}^{\perp}_2(U,V)$ as follows: \begin{eqnarray*} {\Omega}_1(\mathcal{M}_1)&=&\overline{\Omega}_1(\mathcal{M}_1) := \left\{ \bm\beta \in \mathbb{R}^{s} | \beta_j = 0 \textrm{ for all } j \not\in \mathcal{M}_1 \right\},\\ \overline{\Omega}_1^{\perp}(\mathcal{M}_1) &:=& \left\{ \bm\beta \in \mathbb{R}^{s} | \beta_j = 0 \textrm{ for all } j \in \mathcal{M}_1 \right\},\\ {\Omega}_2(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V, \textrm{ and } \mathrm{col}(\Theta) \subset U \right\},\\ \overline{\Omega}_2(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V, \textrm{ or } \mathrm{col}(\Theta) \subset U \right\},\\ \overline{\Omega}_2^{\perp}(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V^{\perp}, \textrm{ and } \mathrm{col}(\Theta) \subset U^{\perp} \right\}. \end{eqnarray*} Denote $\bm\Delta = \{ \bm\Delta_1^{\mathrm{\scriptscriptstyle T}},\mathrm{vec}(\bm\Delta_2)^{\mathrm{\scriptscriptstyle T}} \}^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{s+pq}$ with $\bm\Delta_1 \in \mathbb{R}^{s}$ and $\bm\Delta_2 \in \mathbb{R}^{p \times q}$. Then $\bm\Delta_{1,\overline{\Omega}_1} = \arg\min\limits_{\bm{v} \in \overline{\Omega}_1} \| \bm\Delta_{1} - \bm{v}\|_2$ and $\bm\Delta_{1,\overline{\Omega}_1^{\perp}} = \arg\min\limits_{\bm{v} \in \overline{\Omega}_1^{\perp}} \| \bm\Delta_{1} - \bm{v}\|_2$; $\bm\Delta_{2,\overline{\Omega}_2} = \arg\min\limits_{\bm{v} \in \overline{\Omega}_2} \| \bm\Delta_{2} - \bm{v}\|_F$ and $\bm\Delta_{2,\overline{\Omega}_2^{\perp}} = \arg\min\limits_{\bm{v} \in \overline{\Omega}_2^{\perp}} \| \bm\Delta_{2} - \bm{v}\|_F$. We write $\bm{X}_{comp} = (\bm{X},\bm{Z}_{new}) \in \mathbb{R}^{n \times (s+pq)}$ with $\bm{Z}_{new} = (\mathrm{vec}(\bm{Z}_1),\ldots,\mathrm{vec}(\bm{Z}_n))^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{n \times pq}$ and let $X_{comp,i}$ represent the $i$-th column of $\bm{X}_{comp}^{\mathrm{\scriptscriptstyle T}}$ for $i=1,\ldots,n$. Without loss of generality we assume that $\bm X$ has been column normalized, i.e., $\| \bm{x}_l\|_2 / \sqrt{n} = 1$ for all $l = 1, \ldots, s$. We need the following assumptions: (A6) Define \begin{eqnarray*} \iota := \min\limits_{ {\tiny \begin{array}{c} \left|\bm\Delta_{1,\overline{\Omega}_1^{\perp}}\right|_1 \leq 3 \left|\bm\Delta_{1,\overline{\Omega}_1}\right|_1 \\ \left\|\bm\Delta_{2,\overline{\Omega}_2^{\perp}}\right\|_{*} \leq 3 \left\|\bm\Delta_{2,\overline{\Omega}_2}\right\|_{*} \end{array}} } \frac{1}{n} \sum_{i=1}^{n} \left\{ \frac{\left| \langle X_{comp,i}, \bm\Delta \rangle \right|^2}{\|\bm\Delta \|_2^2}\right\}, \end{eqnarray*} and assume that $\iota$ is a positive constant. (A7) Assume $\max\{p,q\} / \log(n) \to \infty$ and $ \max\{p,q\} = o(n^{1-2\tau})$ as $n \to \infty$ with $\tau < 1/2$. (A8) The vectorized error matrices $\mathrm{vec}(\bm{E}_i)$ are i.i.d. $N(\bm{0},\bm\Sigma_e)$, where $\lambda_{\max}(\bm\Sigma_e) \leq C_U^2 < \infty$. (A9) $\mathrm{rank}(\dot{\bm{B}}) = r < \min (p,q)$ holds. \section{Auxiliary lemmas} \label{auxlemma} In this section, we include the auxiliary lemmas needed for the theorems and their proofs. 
\begin{lem} \label{lem1} (Bernstein's inequality) Let $T_1,\ldots, T_n$ be independent random variables with zero mean such that $E(|T_i|^m) \leq m ! M^{m-2} v_i/2$, for every $m \geq 2$ (and all $i$) and some constants $M$ and $v_i$. Then \[ P(|\sum_{i=1}^n T_i| > x) \leq 2 e^{-\frac{1}{2} \frac{x^2}{v+Mx}}, \] for $v = \sum_{i=1}^n v_i$. \end{lem} This is Lemma 2.2.11 from \cite{van2000weak} and we omit the proof here. \begin{lem} \label{lem2} Under Assumptions (A0), (A1) and (A2), for arbitrary $t>0$ and for every $l,l^\prime,j,k$, we have that \begin{eqnarray*} P\left( \left|\sum_{i=1}^n \{ x_{il} x_{il^\prime} -E(x_{il} x_{il^\prime}) \}\right| \geq t\right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} e^{D_2 \sigma_x} D_3 + t/D_2)}\right\},\\ P\left(\left|\sum_{i=1}^n (x_{il} E_{i,jk})\right| \geq t \right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\}, \\ P\left( \left|\sum_{i=1}^n \left( x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right)\right| \geq t \right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\},\\ P\left(|\sum_{i=1}^n (x_{il} \epsilon_{i})| \geq t\right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\}. \end{eqnarray*} \end{lem} \begin{proof}[Proof of Lemma \ref{lem2}:] Note that the last part of Assumption (A2) actually implies that there exist positive constants $D^\prime_2$ and $D^\prime_3$ such that $E[e^{D^\prime_2 \epsilon_i^2}] \leq D^\prime_3 $, by applying Theorem 3.1 from \cite{rivasplata2012subgaussian}. Therefore, it can be unified into the first part of Assumption (A2), which implies that $$\max \left\{ E[e^{D_2 x_{il}^2}],E[e^{D_2 E_{i,jk}^2}] , E[e^{D_2 \langle \bm{E}_i, \dot{\bm{B}}\rangle^2}], E[e^{D_2 \epsilon_i^2}] \right\} \leq D_3 $$ for every $1 \leq l \leq s_n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. Therefore, by Assumptions (A0) and (A2) and the Cauchy--Schwarz inequality, we have \begin{eqnarray*} E \left[ e^{D_2 | x_{il} x_{il^\prime} - E \left(x_{il} x_{il^\prime}\right)|}\right] \leq E \left[ e^{D_2 | x_{il} x_{il^\prime} | + D_2|E \left(x_{il} x_{il^\prime}\right)|}\right] = e^{D_2|E \left(x_{il} x_{il^\prime}\right)|} E \left[ e^{D_2 | x_{il} x_{il^\prime} | }\right]\\ \leq e^{D_2 \sigma_x} E\left[ e^{D_2 \frac{x_{il}^2 + x_{il^\prime}^2}{2}}\right] \leq e^{D_2 \sigma_x} \left[ E\left\{ e^{D_2 x_{il}^2 }\right\} E\left\{ e^{D_2 x_{il^\prime}^2}\right\} \right]^{1/2} \leq e^{D_2 \sigma_x} D_3. \end{eqnarray*} Then for every $m \geq 2$, one has \[ E\left[ | x_{il} x_{il^\prime} -E(x_{il} x_{il^\prime}) |^m \right] \leq \frac{m !}{D_2^m} E\left[ e^{D_2 | x_{il} x_{il^\prime} -E(x_{il} x_{il^\prime}) |}\right] \leq \frac{m !}{D_2^m} e^{D_2 \sigma_x} D_3. \] It follows from Lemma \ref{lem1} that \[ P( |\sum_{i=1}^n \{ x_{il} x_{il^\prime} -E(x_{il} x_{il^\prime}) \}| \geq t) \leq 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} e^{D_2 \sigma_x} D_3 + t/D_2)}\right\}. \] Similarly we obtain \begin{eqnarray*} E \left[ e^{D_2 \left| x_{il} E_{i,jk}\right|}\right] \leq E \left[ e^{ D_2 \frac{x_{il}^2 + E_{i,jk}^2}{2} }\right] \leq \left[ E\left\{ e^{D_2 x_{il}^2 }\right\} E\left\{ e^{D_2 E_{i,jk}^2}\right\} \right]^{1/2} \leq D_3. \end{eqnarray*} Then for every $m \geq 2$, one has \[ E\left[ \left| x_{il} E_{i,jk} \right|^m \right] \leq \frac{m !}{D_2^m} E \left[ e^{ D_2 \left| x_{il} E_{i,jk}\right|}\right] \leq \frac{m !}{D_2^m} D_3. \] It follows from Lemma \ref{lem1} that \[ P\left(\left|\sum_{i=1}^n (x_{il} E_{i,jk})\right| \geq t \right) \leq 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\}. 
\] Similarly we have \begin{eqnarray*} E \left[ e^{D_2 \left| x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right|}\right] &\leq& E \left[ e^{ D_2 \frac{x_{il}^2 + \langle \bm{E}_i, \dot{\bm{B}}\rangle^2}{2} }\right] \leq \left[ E\left\{ e^{D_2 x_{il}^2 }\right\} E\left\{ e^{D_2 \langle \bm{E}_i, \dot{\bm{B}}\rangle^2}\right\} \right]^{1/2} \leq D_3, \\ E \left[ e^{D_2 \left| x_{il} \epsilon_{i}\right|}\right] &\leq& E \left[ e^{ D_2 \frac{x_{il}^2 + \epsilon_{i}^2}{2} }\right] \leq \left[ E\left\{ e^{D_2 x_{il}^2 }\right\} E\left\{ e^{D_2 \epsilon_{i}^2}\right\} \right]^{1/2} \leq D_3. \end{eqnarray*} Then, following the argument for the second inequality above, one has \begin{eqnarray*} P\left( \left|\sum_{i=1}^n \left( x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right)\right| \geq t \right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\},\\ P\left(|\sum_{i=1}^n (x_{il} \epsilon_{i})| \geq t\right) &\leq& 2 \exp\left\{-\frac{t^2}{2(2n D_2^{-2} D_3 + t/D_2)}\right\}. \end{eqnarray*} \end{proof} The following lemma is a standard result called the Gaussian comparison inequality \citep{anderson1955integral}. \begin{lem} \label{lem3} Let $X$ and $Y$ be zero-mean Gaussian random vectors with covariance matrices $\Sigma_X$ and $\Sigma_Y$, respectively. If $\Sigma_X - \Sigma_Y$ is positive semi-definite, then for any convex symmetric set $C$, $P(X \in C) \leq P(Y \in C)$. \end{lem} \section{Proof of theorems} \label{proofthm} \begin{proof}[Proof of Theorem \ref{thm1}:] We can write \begin{eqnarray*} && P\left\{ \mathcal{M}_1 \subset \left( \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \right) \right\} \\ &=& P\left\{ \cap_{l \in \mathcal{M}_1} \left( l \in \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \right) \right\} \\ &=& 1 - P\left\{ \cup_{l \in \mathcal{M}_1} \left( l \notin \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \right) \right\} \\ &\geq& 1 - \sum_{l \in \mathcal{M}_1} P \left( l \in \widehat{\mathcal{M}}_{1}^{*c} \cap \widehat{\mathcal{M}}_{2}^c \right) \\ &=& 1 - \sum_{l \in \mathcal{M}_1} P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n}, \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n}\right) \\ &\geq& 1 - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n} \right) - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left(|\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n}\right). \end{eqnarray*} Firstly, recall that $\dot{\bm{C}}_{l}^M = cov (\sum_{l^\prime \in \mathcal{M}_2} x_{i l^\prime} * \dot{\bm{C}}_{l^\prime}, x_{il})$, i.e., $\dot{C}_{l,jk}^M = cov (\sum_{l^\prime \in \mathcal{M}_2} x_{i l^\prime} \dot{C}_{l^\prime,jk}, x_{il}) = n^{-1} \sum_{i=1}^n E(x_{il} Z_{i,jk})$. For every $1 \leq j \leq p$, $1 \leq k \leq q$ and $1 \leq l \leq s_n$, we have \[ \widehat{C}_{l,jk}^M -\dot{C}_{l,jk}^M = n^{-1} \sum_{i=1}^n \left[ x_{il} Z_{i,jk} - E(x_{il} Z_{i,jk})\right]. 
\] It follows from Assumptions (A0), (A1), (A2) and Lemma \ref{lem2} that for any $t>0$, one has \begin{eqnarray*} &&P\left(\left|\widehat{C}_{l,jk}^M -\dot{C}_{l,jk}^M \right| \geq t\right) = P\left( \left| \sum_{i=1}^n \left[ x_{il} Z_{i,jk} - E(x_{il} Z_{i,jk}) \right] \right| \geq nt \right) \\ &=& P \left( \left|\sum_{l^\prime \in \mathcal{M}_2} \sum_{i=1}^n \left[x_{il} x_{il^\prime} - E(x_{il} x_{il^\prime}) \right] \dot{C}_{l^\prime,jk} + \sum_{i=1}^n x_{il} E_{i,jk}\right| \geq nt\right)\\ &\leq& \sum_{l^\prime \in \mathcal{M}_2} P\left(\left| \sum_{i=1}^n \left[x_{il} x_{il^\prime} - E(x_{il} x_{il^\prime}) \right] \right| \geq \frac{nt}{b(s_2 + 1)}\right) + P\left( \left| \sum_{i=1}^n x_{il} E_{i,jk}\right| \geq \frac{nt}{s_2 + 1} \right) \\ &\leq& 2 s_2 \exp \left[ - \frac{n t^2 b^{-2} (s_2 + 1)^{-2}}{2 \{ 2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} t \} }\right]\\ && + 2 \exp \left[ - \frac{n t^2 (s_2 + 1)^{-2}}{2 \{ 2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} t\} }\right]. \end{eqnarray*} Therefore, for every $l \in \mathcal{M}_2$, we have \begin{eqnarray*} && P\left( \| \widehat{{\bm{C}}}_{l}^M \|_{op} \leq \gamma_{2,n} \right) \leq P\left( \| \widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M\|_{op} \geq D_1(pq)^{1/2}n^{-\kappa} - \gamma_{2,n} \right) \\ &\leq& P\left( \| \widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M\|_{F} \geq (pq)^{1/2} (1-\alpha) D_1 n^{-\kappa} \right) \\ &=& P\left( \sum_{j,k} \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right|^2 \geq pq \left\{(1-\alpha) D_1 n^{-\kappa} \right\}^2 \right) \\ &\leq& \sum_{j,k} P\left( \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right|^2 \geq \left\{(1-\alpha) D_1 n^{-\kappa} \right\}^2 \right) \\ &\leq& \sum_{j,k} P\left( \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right| \geq (1-\alpha) D_1 n^{-\kappa} \right)\\ &\leq& 2pq \left( s_2 \exp \left[ - \frac{n^{1-2 \kappa} \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 n^{-\kappa}\}}\right] \right. \\ &&\left. + \exp \left[ - \frac{n^{1-2\kappa} \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 n^{-\kappa}\}}\right]\right)\\ &\leq& 2pq \left( s_2 \exp \left[ - \frac{n^{1-2 \kappa} \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 \}}\right] \right. \\ &&\left. + \exp \left[ - \frac{n^{1-2\kappa} \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1\}}\right]\right) \end{eqnarray*} Let \begin{eqnarray*} d_0 &=& \min \left[ \frac{ \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 \}}, \right.\\ && \left. \frac{ \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1\}} \right]. \end{eqnarray*} Then, for every $l \in \mathcal{M}_2$, we have \begin{eqnarray} P\left( \| \widehat{{\bm{C}}}_{l}^M \|_{op} \leq \gamma_{2,n} \right) &\leq& 2pq(s_2 + 1)\exp(-d_0 n^{1-2\kappa}). \label{thm1eq1} \end{eqnarray} Let us consider $P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right)$. 
Recall that $\dot{\beta}_{l}^M = \dot{\beta}_{l}^{*M} + \langle \dot{\bm{C}}_{l}^{M}, \dot{\bm{B}} \rangle$, $\dot{\beta}_{l}^{*M} = cov (\sum_{l^\prime \in \mathcal{M}_1} x_{i l^\prime} \dot{\beta}_{l^\prime}^{*}, x_{il})$ and $\dot{\beta}_{l}^M = n^{-1} \sum_{i=1}^n E(x_{il} Y_{i})$. For every $1 \leq l \leq s_n$, we have \[ \widehat{\beta}_{l}^M -\dot{\beta}_{l}^M = n^{-1} \sum_{i=1}^n \left\{ x_{il} Y_{i} - E(x_{il} Y_{i})\right\}. \] It follows from Assumptions (A0), (A1), (A2) and Lemma \ref{lem2} that for any $t>0$, we have \begin{eqnarray*} &&P\left(\left|\widehat{\beta}_{l}^M -\dot{\beta}_{l}^M \right| \geq t\right) = P\left[ \left|\sum_{i=1}^n \left\{ x_{il} Y_{i} - E(x_{il} Y_{i})\right\} \right| \geq nt \right] \\ &=& P \left[ \left|\sum_{l^\prime \in \mathcal{M}_1} \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \dot{\beta}_{l^\prime}^{*} + \sum_{l^\prime \in \mathcal{M}_2} \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \langle \dot{\bm{C}}_{l^\prime}, \dot{\bm{B}}\rangle + \right. \right.\\ &+&\left. \left. \sum_{i=1}^n \left\{ x_{il} \langle \bm{E}_{i}, \dot{\bm{B}} \rangle - E( x_{il})E\left( \langle \bm{E}_{i}, \dot{\bm{B}} \rangle \right) \right\} + \sum_{i=1}^n x_{il} \epsilon_{i}\right| \geq n t\right]\\ &\leq& P \left[ \sum_{l^{\prime} \in \mathcal{M}_1} \left|\sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| b + \sum_{l^{\prime} \in \mathcal{M}_2} \left|\sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| b \right. \\ && + \left. \left| \sum_{i=1}^n x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right| + \left| \sum_{i=1}^n x_{il} \epsilon_{i} \right| \geq nt \right]\\ &\leq& \sum_{l^\prime \in \mathcal{M}_1} P\left[\left| \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| \geq \frac{nt}{b(s_1 + s_2 + 2)}\right] \\ && + \sum_{l^\prime \in \mathcal{M}_2} P\left[\left| \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| \geq \frac{nt}{b(s_1 + s_2 + 2)}\right] \\ && + P\left( \left| \sum_{i=1}^n x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right| \geq \frac{nt}{s_1 + s_2 + 2} \right) + P\left( \left| \sum_{i=1}^n x_{il} \epsilon_{i} \right| \geq \frac{nt}{s_1 + s_2 + 2} \right) \\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n t^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} t\right\}}\right] \\ && + 4 \exp \left[ - \frac{n t^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} t\right\}}\right]. 
\end{eqnarray*} For $l \in \mathcal{M}_1 \cap \mathcal{M}^c_2$, we have $\langle \dot{\bm{C}}^M_{l}, \dot{\bm{B}} \rangle = 0$; hence, by Assumption (A1) and the previous deduction, we have \begin{eqnarray*} && P\left( | \widehat{\beta}^M_{l} | \leq \gamma_{1,n} \right) = P\left( - | \widehat{\beta}^M_{l} | \geq - \gamma_{1,n} \right) \leq P\left( | \dot{\beta}_{l}^{*M} | - | \widehat{\beta}^M_{l} | \geq D_1 n^{-\kappa} - \gamma_{1,n} \right) \\ &=& P\left( | \dot{\beta}_{l}^{*M}| - |\langle \dot{\bm{C}}^M_{l}, \dot{\bm{B}} \rangle | - | \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right)\\ &\leq& P\left( | \dot{\beta}_{l}^M | - | \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right) \\ &\leq& P\left( | \dot{\beta}_{l}^M - \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right)\\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 n^{-\kappa} \right\}}\right] \\ && + 4 \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 n^{-\kappa} \right\}}\right]\\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 \right\}}\right] \\ && + 4 \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 \right\}}\right] \end{eqnarray*} Let \begin{eqnarray*} d_1 &=& \min \left[ \frac{(1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 \right\}}, \right.\\ &&\left. \frac{(1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 \right\}} \right]. \end{eqnarray*} Then, for each $l \in \mathcal{M}_1 \cap \mathcal{M}_2^c$, we have \begin{eqnarray} P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right) \leq 2(s_1 +s_2 + 2) \exp \left( -d_1 n^{1-2\kappa} \right). \label{thm1eq4} \end{eqnarray} In sum, by Assumption (A5), \eqref{thm1eq1} and \eqref{thm1eq4}, we have \begin{eqnarray*} && P\left\{ \mathcal{M}_1 \subset \left( \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \right) \right\} \\ &\geq& 1 - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n} \right) - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left(|\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right)\\ &\geq& 1 - 2pqs_2(s_2 + 1)\exp(-d_0 n^{1-2\kappa}) - 2s_1 (s_1 +s_2 + 2) \exp \left( -d_1 n^{1-2\kappa} \right)\\ &\geq& 1 - d_0^\prime pq \exp \left( -d_1^\prime n^{1-2\kappa} \right) \to 1, \quad \textrm{ as } n \to \infty, \end{eqnarray*} for some positive constants $d_0^\prime$ and $d_1^\prime$. Therefore, $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}}) \to 1$ as $n \to \infty$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] The proof consists of two steps. In step 1, we will show that $P(\widehat{\mathcal{M}} \subset \mathcal{M}^0) \to 1$, where $\mathcal{M}^0 = \mathcal{M}^0_1 \cup \mathcal{M}^0_2$, $\mathcal{M}^0_1 = \left\{1 \leq l \leq s_n: |\dot{\beta}_{l}^M| \geq \gamma_{1,n}/2 \right\}$ and $\mathcal{M}^0_2 = \left\{1 \leq l \leq s_n: \left\|\dot{\bm{C}}^M_{l}\right\|_{op} \geq \gamma_{2,n}/2 \right\}$. 
Recall that $\widehat{\mathcal{M}} = \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} = \left\{1 \leq l \leq s_n: |\widehat{\beta}^M_{l}| \geq \gamma_{1,n} \right\} \cup \left\{1 \leq l \leq s_n: \left\| \widehat{{\bm{C}}}_{l}^M \right\|_{op} \geq \gamma_{2,n} \right\}$. Taking $\gamma_{1,n} = \alpha D_1 n^{-\kappa}$ and $\gamma_{2,n} = \alpha D_1 (pq)^{1/2} n^{-\kappa}$ with $0 < \alpha < 1$, we have \begin{eqnarray*} && P( \widehat{\mathcal{M}} \subset \mathcal{M}^0_1 \cup \mathcal{M}^0_2) \\ &\geq& P\left[ \cap_{1 \leq l \leq s_n} \left\{|\widehat{{\beta}_{l}^{M}} - \dot{\beta}_{l}^M| \leq \frac{\gamma_{1,n}}{2}\right\} \cap_{1 \leq l \leq s_n} \left\{ ||\widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M||_{op} \leq \frac{\gamma_{2,n}}{2} \right\}\right]\\ &=& 1 - P \left[ \cup_{1 \leq l \leq s_n} \{|\widehat{{\beta}_{l}^{M}} - \dot{\beta}_{l}^M| \geq \frac{\gamma_{1,n}}{2}\} \cup_{1 \leq l \leq s_n} \{||\widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M||_{op} \geq \frac{\gamma_{2,n}}{2} \} \right]\\ &\geq& 1 - \sum_{1 \leq l \leq s_n} \left\{ P\left( |\widehat{{\beta}_{l}^{M}} - \dot{\beta}_{l}^M| \geq \frac{\gamma_{1,n}}{2} \right) + P\left( ||\widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M||_{op} \geq \frac{\gamma_{2,n}}{2} \right) \right\} \\ &\geq& 1 - \sum_{1 \leq l \leq s_n} P\left( |\widehat{{\beta}_{l}^{M}} - \dot{\beta}_{l}^M| \geq \frac{\gamma_{1,n}}{2} \right) - \sum_{1 \leq l \leq s_n} P\left( ||\widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M||_{F} \geq \frac{\gamma_{2,n}}{2} \right) \\ &\geq& 1 - \sum_{1 \leq l \leq s_n} P\left( |\widehat{{\beta}_{l}^{M}} - \dot{\beta}_{l}^M| \geq \alpha D_1 n^{-\kappa} /2 \right) \\ && - \sum_{1 \leq l \leq s_n} \sum_{j,k} P\left( | \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M | \geq \alpha D_1 n^{-\kappa}/2 \right)\\ &\geq& 1 - 2 s_n\left\{ (s_1 + s_2) \exp \left[ - \frac{ \alpha^2 D_1^2 (4b)^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa} }{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (4b)^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 n^{-\kappa} \right\}}\right] \right. \\ && \left. + 2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 n^{-\kappa} \right\}}\right] \right\}\\ && -2 s_n pq \left\{ s_2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} b^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 n^{-\kappa}\}}\right] \right.\\ && \left. + \exp \left[ - \frac{ \alpha^2 D_1^2 2^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 n^{-\kappa}\}}\right]\right\} \\ &\geq& 1 - 2 s_n\left\{ (s_1 + s_2) \exp \left[ - \frac{ \alpha^2 D_1^2 (4b)^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa} }{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (4b)^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 \right\}}\right] \right. \\ && \left. + 2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 \right\}}\right] \right\}\\ && -2 s_n pq \left\{ s_2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} b^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 \}}\right] \right.\\ && \left. 
+ \exp \left[ - \frac{ \alpha^2 D_1^2 2^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 \}}\right]\right\} \\ &=& 1 - 2 \exp(D_4 n^{\xi})\left\{ (s_1 + s_2) \exp \left[ - \frac{ \alpha^2 D_1^2 (4b)^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa} }{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (4b)^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 \right\}}\right] \right. \\ && \left. + 2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} (s_1 + s_2 + 2)^{-2} n^{1-2\kappa}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} \alpha D_1 \right\}}\right] \right\}\\ && -2 pq \exp(D_4 n^{\xi}) \left\{ s_2 \exp \left[ - \frac{\alpha^2 D_1^2 2^{-2} b^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 \}}\right] \right.\\ && \left. + \exp \left[ - \frac{ \alpha^2 D_1^2 2^{-2} (s_2 + 1)^{-2} n^{1-2\kappa}}{2 \{ 2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} 2^{-1}\alpha D_1 \}}\right]\right\} \end{eqnarray*} By Assumptions (A3) and (A5), we have \[ P( \widehat{\mathcal{M}} \subset \mathcal{M}^0_1 \cup \mathcal{M}^0_2) \geq 1 - d_2 \exp( - d_3 n^{1-2 \kappa}), \] for some constants $d_2$ and $d_3 > 0$. Therefore, we have $P( \widehat{\mathcal{M}} \subset \mathcal{M}^0) \to 1$ as $n \to \infty$.\\ In step 2, we will show that $|\mathcal{M}^0| = O(n^{2 \kappa+\tau})$. As $ \left| \mathcal{M}^0 \right| = \left| \mathcal{M}^0_1 \cup \mathcal{M}^0_2 \right| \leq\left| \mathcal{M}^0_1 \right| + \left| \mathcal{M}^0_2 \right| $, we only need to show that both $\left| \mathcal{M}^0_1 \right|$ and $\left| \mathcal{M}^0_2 \right|$ are $O(n^{2 \kappa+\tau})$. Define $\mathcal{M}^1_1 = \left\{ 1 \leq l \leq s_n: \left| \dot{\beta}_{l}^M \right|^2 \geq \gamma_{1,n}^2/4 \right\}$; note that $\mathcal{M}^0_1 \subset \mathcal{M}^1_1$. By the definition of $\mathcal{M}^1_1$, we have \begin{eqnarray*} \left| \mathcal{M}^1_1 \right| \gamma_{1,n}^2/4 &\leq& \sum_{l=1}^{s_n} \left| \dot{\beta}_{l}^M \right |^2 = \sum_{l=1}^{s_n} \left( E \left[x_{il} Y_{i}\right] \right)^2 = \left\| E\left[ \bm{x}_i * Y_i \right]\right\|^2. \end{eqnarray*} Defining $\dot{\bm\beta}^{*} = (\dot{\beta}_{1}^{*},\ldots, \dot{\beta}_{s_n}^{*})^{\mathrm{\scriptscriptstyle T}}$ and $\dot{\bm{c}} = (\langle \dot{\bm{C}}_{1}, \dot{\bm{B}}\rangle,\ldots, \langle \dot{\bm{C}}_{s_n}, \dot{\bm{B}}\rangle )^{\mathrm{\scriptscriptstyle T}}$, we can write \begin{eqnarray*} Y_i &=& \bm{x}_i^{\mathrm{\scriptscriptstyle T}} \left( \dot{\bm\beta}^{*} + \dot{\bm{c}}\right) + \langle \bm{E}_i,\dot{\bm{B}}\rangle + \epsilon_i. \end{eqnarray*} Multiplying $\bm{x}_i$ on both sides and taking expectations yield $E \left[\bm{x}_{i} * Y_{i}\right] = \Sigma_x \left( \dot{\bm\beta}^{*} + \dot{\bm{c}} \right)$. Therefore, we have \begin{eqnarray*} \left| \mathcal{M}^1_1 \right| \gamma_{1,n}^2/4 &\leq& \left\| \Sigma_x \left( \dot{\bm\beta}^{*} + \dot{\bm{c}} \right) \right\|^2 \leq \lambda_{\max}(\Sigma_x) \left( \dot{\bm\beta}^{*} + \dot{\bm{c}} \right)^{\mathrm{\scriptscriptstyle T}} \left( \dot{\bm\beta}^{*} + \dot{\bm{c}} \right) \leq 4 b^2 \lambda_{\max}(\Sigma_x). \end{eqnarray*} By Assumption (A4), we have $\left| \mathcal{M}^1_1 \right| \leq 4 b^2 \lambda_{\max}(\Sigma_x) \gamma_{1,n}^{-2} = O(n^{2\kappa + \tau})$. This implies that $\left| \mathcal{M}^0_{1}\right| \leq \left| \mathcal{M}^1_{1}\right| = O(n^{2\kappa + \tau})$. Define $\mathcal{M}^1_2 = \left\{ 1 \leq l \leq s_n: \left\| \dot{\bm{C}}_{l}^M \right\|_{F}^2 \geq \gamma_{2,n}^2/4 \right\}$. 
As $\left\| \dot{\bm{C}}_{l}^M \right\|_{op} \leq \left\| \dot{\bm{C}}_{l}^M \right\|_{F}$, we have $\mathcal{M}^0_2 \subset \mathcal{M}^1_2$. By the definition of $\mathcal{M}^1_2$, we have \begin{eqnarray*} \left| \mathcal{M}^1_2 \right| \gamma_{2,n}^2/4 &\leq& \sum_{l=1}^{s_n} \left\| \dot{\bm{C}}_{l}^M \right\|^2_{F} \\ &=& \sum_{j,k} \sum_{l=1}^{s_n} (\dot{C}_{l,jk}^M)^2 = \sum_{j,k} \sum_{l=1}^{s_n} \left( E \left[x_{il} Z_{i,jk}\right] \right)^2 = \sum_{j,k} \left\| E\left[ \bm{x}_i * Z_{i,jk} \right] \right\|^2. \end{eqnarray*} Defining $\dot{\bm{C}}_{jk} = (\dot{C}_{1,jk},\ldots, \dot{C}_{s_n,jk})^{\mathrm{\scriptscriptstyle T}}$, we can write $Z_{i,jk} = \bm{x}_i^{\mathrm{\scriptscriptstyle T}} \dot{\bm{C}}_{jk} + E_{i,jk}$. Multiplying $\bm{x}_i$ on both sides and taking expectations yield $E \left[\bm{x}_{i} * Z_{i,jk}\right] = \Sigma_x \dot{\bm{C}}_{jk}$. \begin{eqnarray*} \left| \mathcal{M}^1_2 \right| \gamma_{2,n}^2/4 &\leq& \sum_{j,k} \left\| \Sigma_x \dot{\bm{C}}_{jk} \right\|^2 \leq \lambda_{\max}(\Sigma_x) \sum_{j,k} \dot{\bm{C}}_{jk}^{\mathrm{\scriptscriptstyle T}} \dot{\bm{C}}_{jk} \leq pq b^2 \lambda_{\max}(\Sigma_x). \end{eqnarray*} By Assumption (A4), we have $\left| \mathcal{M}^1_2 \right| \leq 4 pq b^2 \lambda_{\max}(\Sigma_x) \gamma_{2,n}^{-2} = O(n^{2\kappa + \tau})$. Combining the results from the two steps above leads to $P\{ |\widehat{\mathcal{M}}| = O(n^{2 \kappa + \tau}) \} \geq P(\widehat{\mathcal{M}} \subset \mathcal{M}^0) \to 1$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1a}:] We can write \begin{eqnarray*} && P\left\{ \mathcal{M}_1 \subset \left( \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \cup \widehat{\mathcal{M}}_{1}^{block,*} \cup \widehat{\mathcal{M}}_{2}^{block} \right) \right\} \\ &=& P\left\{ \cap_{l \in \mathcal{M}_1} \left( l \in \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \cup \widehat{\mathcal{M}}_{1}^{block,*} \cup \widehat{\mathcal{M}}_{2}^{block} \right)\right\} \\ &=& 1 - P\left\{ \cup_{l \in \mathcal{M}_1} \left( l \notin \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \cup \widehat{\mathcal{M}}_{1}^{block,*} \cup \widehat{\mathcal{M}}_{2}^{block}\right) \right\} \\ &\geq& 1 - \sum_{l \in \mathcal{M}_1} P \left\{ l \in \widehat{\mathcal{M}}_{1}^{*c} \cap \widehat{\mathcal{M}}_{2}^c \cap (\widehat{\mathcal{M}}_{1}^{block,*})^c \cap (\widehat{\mathcal{M}}_{2}^{block })^c\right\} \\ &=& 1 - \sum_{l \in \mathcal{M}_1} P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n}, \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n}, \widehat{{\beta}_{l}^{block,M}} \leq \gamma_{3,n},\widehat{{C}_{l}^{block,M}} \leq \gamma_{4,n} \right) \\ &\geq& 1 - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n},\widehat{{C}_{l}^{block,M}} \leq \gamma_{4,n} \right) - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left(|\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n}, \widehat{{\beta}_{l}^{block,M}} \leq \gamma_{3,n}\right)\\ &\geq& 1 - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n} \right) - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \widehat{{C}_{l}^{block,M}} \leq \gamma_{4,n} \right) \\ && - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left(|\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n}\right) - \sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left( \widehat{{\beta}_{l}^{block,M}} \leq \gamma_{3,n}\right) \end{eqnarray*} Firstly, recall that 
$\dot{\bm{C}}_{l}^M = cov (\sum_{l^\prime \in \mathcal{M}_2} x_{i l^\prime} * \dot{\bm{C}}_{l^\prime}, x_{il})$, i.e., $\dot{C}_{l,jk}^M = cov (\sum_{l^\prime \in \mathcal{M}_2} x_{i l^\prime} \dot{C}_{l^\prime,jk}, x_{il}) = n^{-1} \sum_{i=1}^n E(x_{il} Z_{i,jk})$. For every $1 \leq j \leq p$, $1 \leq k \leq q$ and $1 \leq l \leq s_n$, we have \[ \widehat{C}_{l,jk}^M -\dot{C}_{l,jk}^M = n^{-1} \sum_{i=1}^n \left[ x_{il} Z_{i,jk} - E(x_{il} Z_{i,jk})\right]. \] It follows from Assumptions (A0), (A1), (A2) and Lemma \ref{lem2} that for any $t>0$, one has \begin{eqnarray*} &&P\left(\left|\widehat{C}_{l,jk}^M -\dot{C}_{l,jk}^M \right| \geq t\right) = P\left( \left| \sum_{i=1}^n \left[ x_{il} Z_{i,jk} - E(x_{il} Z_{i,jk}) \right] \right| \geq nt \right) \\ &=& P \left( \left|\sum_{l^\prime \in \mathcal{M}_2} \sum_{i=1}^n \left[x_{il} x_{il^\prime} - E(x_{il} x_{il^\prime}) \right] \dot{C}_{l^\prime,jk} + \sum_{i=1}^n x_{il} E_{i,jk}\right| \geq nt\right)\\ &\leq& \sum_{l^\prime \in \mathcal{M}_2} P\left(\left| \sum_{i=1}^n \left[x_{il} x_{il^\prime} - E(x_{il} x_{il^\prime}) \right] \right| \geq \frac{nt}{b(s_2 + 1)}\right) + P\left( \left| \sum_{i=1}^n x_{il} E_{i,jk}\right| \geq \frac{nt}{s_2 + 1} \right) \\ &\leq& 2 s_2 \exp \left[ - \frac{n t^2 b^{-2} (s_2 + 1)^{-2}}{2 \{ 2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} t \} }\right]\\ && + 2 \exp \left[ - \frac{n t^2 (s_2 + 1)^{-2}}{2 \{ 2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} t\} }\right]. \end{eqnarray*} Therefore, for every $l \in \mathcal{M}_2$, we have \begin{eqnarray*} && P\left( \| \widehat{{\bm{C}}}_{l}^M \|_{op} \leq \gamma_{2,n} \right) \leq P\left( \| \widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M\|_{op} \geq D_1(pq)^{1/2}n^{-\kappa} - \gamma_{2,n} \right) \\ &\leq& P\left( \| \widehat{{\bm{C}}}_{l}^M - \dot{\bm{C}}_{l}^M\|_{F} \geq (pq)^{1/2} (1-\alpha) D_1 n^{-\kappa} \right) \\ &=& P\left( \sum_{j,k} \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right|^2 \geq pq \left\{(1-\alpha) D_1 n^{-\kappa} \right\}^2 \right) \\ &\leq& \sum_{j,k} P\left( \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right|^2 \geq \left\{(1-\alpha) D_1 n^{-\kappa} \right\}^2 \right) \\ &\leq& \sum_{j,k} P\left( \left| \widehat{C}_{l,jk}^M - \dot{C}_{l,jk}^M \right| \geq (1-\alpha) D_1 n^{-\kappa} \right)\\ &\leq& 2pq \left( s_2 \exp \left[ - \frac{n^{1-2 \kappa} \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 n^{-\kappa}\}}\right] \right. \\ &&\left. + \exp \left[ - \frac{n^{1-2\kappa} \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 n^{-\kappa}\}}\right]\right)\\ &\leq& 2pq \left( s_2 \exp \left[ - \frac{n^{1-2 \kappa} \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 \}}\right] \right. \\ &&\left. + \exp \left[ - \frac{n^{1-2\kappa} \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1\}}\right]\right) \end{eqnarray*} Let \begin{eqnarray*} d_0 &=& \min \left[ \frac{ \left\{ (1-\alpha)D_1 b^{-1} (s_2 + 1)^{-1} \right\}^2 }{2 \{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} b^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1 \}}, \right.\\ && \left. 
\frac{ \left\{ (1-\alpha) D_1 (s_2 + 1)^{-1} \right\}^2}{2 \{2 D_2^{-2} D_3 + D_2^{-1} (s_2 + 1)^{-1} (1-\alpha) D_1\}} \right]. \end{eqnarray*} Then, for every $l \in \mathcal{M}_2$, we have \begin{eqnarray} P\left( \widehat{{C}_{l}^{block,M}} \leq \gamma_{4,n} \right) \leq P\left( \| \widehat{{\bm{C}}}_{l}^M \|_{op} \leq \gamma_{2,n} \right) &\leq& 2pq(s_2 + 1)\exp(-d_0 n^{1-2\kappa}). \label{thm1aeq1} \end{eqnarray} Let us consider $P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right)$. Recall that $\dot{\beta}_{l}^M = \dot{\beta}_{l}^{*M} + \langle \dot{\bm{C}}_{l}^{M}, \dot{\bm{B}} \rangle$, $\dot{\beta}_{l}^{*M} = cov (\sum_{l^\prime \in \mathcal{M}_1} x_{i l^\prime} \dot{\beta}_{l^\prime}^{*}, x_{il})$ and $\dot{\beta}_{l}^M = n^{-1} \sum_{i=1}^n E(x_{il} Y_{i})$. For every $1 \leq l \leq s_n$, we have \[ \widehat{\beta}_{l}^M -\dot{\beta}_{l}^M = n^{-1} \sum_{i=1}^n \left\{ x_{il} Y_{i} - E(x_{il} Y_{i})\right\}. \] It follows from Assumptions (A0), (A1), (A2) and Lemma \ref{lem2} that for any $t>0$, we have \begin{eqnarray*} &&P\left(\left|\widehat{\beta}_{l}^M -\dot{\beta}_{l}^M \right| \geq t\right) = P\left[ \left|\sum_{i=1}^n \left\{ x_{il} Y_{i} - E(x_{il} Y_{i})\right\} \right| \geq nt \right] \\ &=& P \left[ \left|\sum_{l^\prime \in \mathcal{M}_1} \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \dot{\beta}_{l^\prime}^{*} + \sum_{l^\prime \in \mathcal{M}_2} \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \langle \dot{\bm{C}}_{l^\prime}, \dot{\bm{B}}\rangle + \right. \right.\\ &+&\left. \left. \sum_{i=1}^n \left\{ x_{il} \langle \bm{E}_{i}, \dot{\bm{B}} \rangle - E( x_{il})E\left( \langle \bm{E}_{i}, \dot{\bm{B}} \rangle \right) \right\} + \sum_{i=1}^n x_{il} \epsilon_{i}\right| \geq n t\right]\\ &\leq& P \left[ \sum_{l^{\prime} \in \mathcal{M}_1} \left|\sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| b + \sum_{l^{\prime} \in \mathcal{M}_2} \left|\sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| b \right. \\ && + \left. \left| \sum_{i=1}^n x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right| + \left| \sum_{i=1}^n x_{il} \epsilon_{i} \right| \geq nt \right]\\ &\leq& \sum_{l^\prime \in \mathcal{M}_1} P\left[\left| \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| \geq \frac{nt}{b(s_1 + s_2 + 2)}\right] \\ && + \sum_{l^\prime \in \mathcal{M}_2} P\left[\left| \sum_{i=1}^n \left\{ x_{il} x_{il^\prime} - E\left( x_{il} x_{il^\prime} \right) \right\} \right| \geq \frac{nt}{b(s_1 + s_2 + 2)}\right] \\ && + P\left( \left| \sum_{i=1}^n x_{il} \langle \bm{E}_i, \dot{\bm{B}}\rangle \right| \geq \frac{nt}{s_1 + s_2 + 2} \right) + P\left( \left| \sum_{i=1}^n x_{il} \epsilon_{i} \right| \geq \frac{nt}{s_1 + s_2 + 2} \right) \\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n t^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} t\right\}}\right] \\ && + 4 \exp \left[ - \frac{n t^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} t\right\}}\right]. 
\end{eqnarray*} For $l \in \mathcal{M}_1 \cap \mathcal{M}^c_2$, we have $\langle \dot{\bm{C}}^M_{l}, \dot{\bm{B}} \rangle = 0$; hence, by Assumption (A1) and the previous deduction, we have \begin{eqnarray*} && P\left( | \widehat{\beta}^M_{l} | \leq \gamma_{1,n} \right) = P\left( - | \widehat{\beta}^M_{l} | \geq - \gamma_{1,n} \right) \leq P\left( | \dot{\beta}_{l}^{*M} | - | \widehat{\beta}^M_{l} | \geq D_1 n^{-\kappa} - \gamma_{1,n} \right) \\ &=& P\left( | \dot{\beta}_{l}^{*M}| - |\langle \dot{\bm{C}}^M_{l}, \dot{\bm{B}} \rangle | - | \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right)\\ &\leq& P\left( | \dot{\beta}_{l}^M | - | \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right) \\ &\leq& P\left( | \dot{\beta}_{l}^M - \widehat{\beta}^M_{l} | \geq (1-\alpha)D_1 n^{-\kappa} \right)\\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 n^{-\kappa} \right\}}\right] \\ && + 4 \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 n^{-\kappa} \right\}}\right]\\ &\leq& 2 (s_1 + s_2) \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 \right\}}\right] \\ && + 4 \exp \left[ - \frac{n^{1-2\kappa} (1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 \right\}}\right] \end{eqnarray*} Let \begin{eqnarray*} d_1 &=& \min \left[ \frac{(1-\alpha)^2 D_1^2 (2b)^{-2} (s_1 + s_2 + 2)^{-2}}{2\left\{2 D_2^{-2} e^{D_2 \sigma_x} D_3 + D_2^{-1} (2b)^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha)D_1 \right\}}, \right.\\ &&\left. \frac{(1-\alpha)^2 D_1^2 (s_1 + s_2 + 2)^{-2}}{2 \left\{2 D_2^{-2} D_3 + D_2^{-1} (s_1 + s_2 + 2)^{-1} (1-\alpha) D_1 \right\}} \right]. \end{eqnarray*} Then, for each $l \in \mathcal{M}_1 \cap \mathcal{M}_2^c$, we have \begin{eqnarray} P\left( \widehat{{\beta}_{l}^{block,M}} \leq \gamma_{3,n}\right) \leq P\left( |\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right) \leq 2(s_1 +s_2 + 2) \exp \left( -d_1 n^{1-2\kappa} \right). \label{thm1aeq4} \end{eqnarray} In sum, by Assumption (A5), \eqref{thm1aeq1} and \eqref{thm1aeq4}, we have \begin{eqnarray*} && P\left\{ \mathcal{M}_1 \subset \left( \widehat{\mathcal{M}}_{1}^{*} \cup \widehat{\mathcal{M}}_{2} \cup \widehat{\mathcal{M}}_{1}^{block,*} \cup \widehat{\mathcal{M}}_{2}^{block} \right) \right\} \\ &\geq& 1 - 2\sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2} P\left( \|\widehat{{\bm{C}}}_{l}^M\|_{op} \leq \gamma_{2,n} \right) - 2\sum_{l \in \mathcal{M}_1 \cap \mathcal{M}_2^c} P\left(|\widehat{{\beta}_{l}^{M}}| \leq \gamma_{1,n} \right)\\ &\geq& 1 - 4pqs_2(s_2 + 1)\exp(-d_0 n^{1-2\kappa}) - 4s_1 (s_1 +s_2 + 2) \exp \left( -d_1 n^{1-2\kappa} \right)\\ &\geq& 1 - d_0^\prime pq \exp \left( -d_1^\prime n^{1-2\kappa} \right) \to 1, \quad \textrm{ as } n \to \infty, \end{eqnarray*} for some positive constants $d_0^\prime$ and $d_1^\prime$. Therefore, $P(\mathcal{M}_1 \subset \widehat{\mathcal{M}}^{block}) \to 1$ as $n \to \infty$. 
\end{proof} \begin{proof}[Proof of Theorem \ref{thm3}:] Denote $\bm\theta^{\widehat{\mathcal{M}}} = (\bm\beta^{\widehat{\mathcal{M}}, \mathrm{\scriptscriptstyle T}},\mathrm{vec}(\bm{B})^{\mathrm{\scriptscriptstyle T}})^{\mathrm{\scriptscriptstyle T}}$, where $\bm\beta^{\widehat{\mathcal{M}}} = \{ \beta_l \}_{l \in \widehat{\mathcal{M}}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|}$ and $\bm{B} \in \mathbb{R}^{p \times q}$. In addition, for a given pair $\bm\lambda = (\lambda_1,\lambda_2)$, we let $\dot{\bm\theta}^{\widehat{\mathcal{M}}} = (\dot{\bm\beta}^{\widehat{\mathcal{M}}, \mathrm{\scriptscriptstyle T}},\mathrm{vec}(\dot{\bm{B}})^{\mathrm{\scriptscriptstyle T}})^{\mathrm{\scriptscriptstyle T}}$ be the true value for $\bm\theta^{\widehat{\mathcal{M}}}$, with $\dot{\bm\beta}^{\widehat{\mathcal{M}}}$ and $\dot{\bm{B}}$ being the true values for $\bm\beta^{\widehat{\mathcal{M}}}$ and $\bm{B}$, respectively, and $\widehat{\bm\theta}_{\bm\lambda}^{\widehat{\mathcal{M}}} = (\widehat{\bm\beta}^{\widehat{\mathcal{M}}, \mathrm{\scriptscriptstyle T}},\mathrm{vec}(\widehat{\bm{B}})^{\mathrm{\scriptscriptstyle T}})^{\mathrm{\scriptscriptstyle T}}$ be the proposed estimator for $\bm\theta^{\widehat{\mathcal{M}}}$, with $\widehat{\bm\beta}^{\widehat{\mathcal{M}}}$ and $\widehat{\bm{B}}$ being the estimators for $\bm\beta^{\widehat{\mathcal{M}}}$ and $\bm{B}$, respectively. Furthermore, we let $r = \mathrm{rank}(\dot{\bm{B}})$, the true rank of the matrix $\dot{\bm{B}} \in \mathbb{R}^{p \times q}$. Let us consider the class of matrices $ \Theta$ that have rank $r \leq \min\left\{ p,q \right\}$. For any given matrix $\Theta$, we let $\mathrm{row}(\Theta) \subset \mathbb{R}^q$ and $\mathrm{col}(\Theta) \subset \mathbb{R}^p$ denote its row and column space, respectively. Let $U$ and $V$ be a given pair of $r$-dimensional subspaces $U \subset \mathbb{R}^{p}$ and $V \subset \mathbb{R}^{q}$, respectively. For a given $\bm\theta^{\widehat{\mathcal{M}}}$ and pair $(U,V)$, we define the subspaces $\Omega_1(\widehat{\mathcal{M}})$, $\Omega_2(U,V)$, $\overline{\Omega}_1(\widehat{\mathcal{M}})$, $\overline{\Omega}_2(U,V)$, $\overline{\Omega}^{\perp}_1(\widehat{\mathcal{M}})$ and $\overline{\Omega}^{\perp}_2(U,V)$ as follows: \begin{eqnarray*} {\Omega}_1(\widehat{\mathcal{M}}) &=&\overline{\Omega}_1(\widehat{\mathcal{M}}) := \left\{ \bm\beta^{\widehat{\mathcal{M}}} = \{ \beta_l \}_{l \in \widehat{\mathcal{M}}}\in \mathbb{R}^{|\widehat{\mathcal{M}}|} | \beta_l = 0 \textrm{ for all } l \not\in \mathcal{M}_1 \right\},\\ \overline{\Omega}_1^{\perp}(\widehat{\mathcal{M}}) &:=& \left\{ \bm\beta^{\widehat{\mathcal{M}}} = \{ \beta_l \}_{l \in \widehat{\mathcal{M}}}\in \mathbb{R}^{|\widehat{\mathcal{M}}|} | \beta_l = 0 \textrm{ for all } l \in \mathcal{M}_1 \right\},\\ {\Omega}_2(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V, \textrm{ and } \mathrm{col}(\Theta) \subset U \right\},\\ \overline{\Omega}_2(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V, \textrm{ or } \mathrm{col}(\Theta) \subset U \right\},\\ \overline{\Omega}_2^{\perp}(U,V) &:=& \left\{ \Theta \in \mathbb{R}^{p \times q} | \mathrm{row}(\Theta) \subset V^{\perp}, \textrm{ and } \mathrm{col}(\Theta) \subset U^{\perp} \right\}, \end{eqnarray*} where $\overline{\Omega}_1^{\perp}(\widehat{\mathcal{M}})$ and $\overline{\Omega}_2^{\perp}(U,V)$ are the orthogonal complements of $\overline{\Omega}_1(\widehat{\mathcal{M}})$ and $\overline{\Omega}_2(U,V)$, respectively. 
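To make this geometry concrete, consider the rank-one case (an illustrative special case, not an additional assumption): take $r = 1$, $U = \mathrm{span}(\bm{u})$ and $V = \mathrm{span}(\bm{v})$ for unit vectors $\bm{u} \in \mathbb{R}^{p}$ and $\bm{v} \in \mathbb{R}^{q}$. Then ${\Omega}_2(U,V) = \{ c\, \bm{u} \bm{v}^{\mathrm{\scriptscriptstyle T}}: c \in \mathbb{R}\}$, while $\overline{\Omega}_2^{\perp}(U,V)$ consists of the matrices $\bm\Gamma$ with $\bm\Gamma \bm{v} = \bm{0}$ and $\bm\Gamma^{\mathrm{\scriptscriptstyle T}} \bm{u} = \bm{0}$. For any $\bm\Theta \in \overline{\Omega}_2(U,V)$ and $\bm\Gamma \in \overline{\Omega}_2^{\perp}(U,V)$, the singular vectors of $\bm\Theta$ and $\bm\Gamma$ lie in mutually orthogonal subspaces, so that $\| \bm\Theta + \bm\Gamma \|_* = \| \bm\Theta \|_* + \| \bm\Gamma \|_*$, which is the decomposability property used below.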
For simplicity, we will use ${\Omega}_1$, $\overline{\Omega}_1$ and $\overline{\Omega}_1^{\perp}$ to denote ${\Omega}_1(\widehat{\mathcal{M}})$, $\overline{\Omega}_1(\widehat{\mathcal{M}})$ and $\overline{\Omega}_1^{\perp}(\widehat{\mathcal{M}})$, respectively; and use ${\Omega}_2$, $\overline{\Omega}_2$ and $\overline{\Omega}_2^{\perp}$ to denote ${\Omega}_2(U,V)$, $\overline{\Omega}_2(U,V)$ and $\overline{\Omega}_2^{\perp}(U,V)$. It is easy to see that $P_1$ and $P_2$ are decomposable with respect to the subspace pairs $(\Omega_1, \overline{\Omega}_1^{\perp})$ and $(\Omega_2, \overline{\Omega}_2^{\perp})$, respectively. Therefore the regularizer terms $P_1$ and $P_2$ satisfy condition (G1) of \cite{negahban2012unified}. Here we define the function $F : \mathbb{R}^{|\widehat{\mathcal{M}}|+pq} \to \mathbb{R}$ by \begin{eqnarray*} F(\bm{\Delta}) &:=& l(\dot{\bm\theta}^{\widehat{\mathcal{M}}} + \bm{\Delta}) - l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}) + \lambda_1 \left\{ P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}} + \bm{\Delta}_1) - P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\} \\ && + \lambda_2 \left\{ P_2( \dot{\bm{B}} + \bm{\Delta}_2) - P_2(\dot{\bm{B}}) \right\}, \end{eqnarray*} where $\bm\Delta = \{ \bm\Delta_1^{\mathrm{\scriptscriptstyle T}},\mathrm{vec}(\bm\Delta_2)^{\mathrm{\scriptscriptstyle T}} \}^{\mathrm{\scriptscriptstyle T}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|+pq}$ with $\bm\Delta_1 \in \mathbb{R}^{|\widehat{\mathcal{M}}|}$ and $\bm\Delta_2 \in \mathbb{R}^{p \times q}$. Next, we will derive a lower bound for $F(\bm{\Delta})$. Before we formally prove the result, we first introduce the concept of the subspace compatibility constant. For a subspace $\Omega$, the subspace compatibility constant with respect to the pair $(P,\| \cdot\|)$ is given by \[ \psi(\Omega) := \sup_{u \in \Omega \backslash \{ 0\}} \frac{P(u)}{\| u\|}. \] Therefore, we have \begin{eqnarray*} \psi_1(\overline{\Omega}_1) &=& \sup_{\bm\beta^{\widehat{\mathcal{M}}} \in \overline{\Omega}_1 \backslash \{ 0\}} \frac{P_1(\bm\beta^{\widehat{\mathcal{M}}})}{\| \bm\beta^{\widehat{\mathcal{M}}}\|} = \frac{\sum_{l \in \widehat{\mathcal{M}}} | \beta_l | }{\sqrt{ \sum_{l \in \widehat{\mathcal{M}}} {\beta_l^2}}} \leq \frac{\sqrt{ \sum_{l \in \widehat{\mathcal{M}}} | \beta_l |^2 \sum_{l \in \widehat{\mathcal{M}}} 1^2} }{\sqrt{ \sum_{l \in \widehat{\mathcal{M}}} {\beta_l^2}}} \leq \sqrt{|\widehat{\mathcal{M}}|};\\ \psi_2(\overline{\Omega}_2) &=& \sup_{\bm{B} \in \overline{\Omega}_2 \backslash \{ 0\}} \frac{P_2(\bm{B})}{\| \bm{B}\|} = \frac{ \left\| \bm{B} \right\|_* }{\| \bm{B}\|_F} \leq \frac{ \sqrt{r} \left\| \bm{B} \right\|_F }{\| \bm{B}\|_F} = \sqrt{r}, \end{eqnarray*} where the last step of the first line is obtained by applying the Cauchy--Schwarz inequality. 
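The bound $\| \bm{B} \|_* \leq \sqrt{r}\, \| \bm{B} \|_F$ in the second line also rests on a Cauchy--Schwarz step, which we spell out for completeness: any $\bm{B} \in \overline{\Omega}_2$ has rank at most $r$ (either its row space lies in $V$ or its column space lies in $U$), so that
\[
\left\| \bm{B} \right\|_* = \sum_{i=1}^{r} \sigma_i(\bm{B}) \leq \sqrt{r} \left\{ \sum_{i=1}^{r} \sigma_i^2(\bm{B}) \right\}^{1/2} = \sqrt{r}\, \left\| \bm{B} \right\|_F,
\]
where $\sigma_1(\bm{B}) \geq \cdots \geq \sigma_r(\bm{B})$ denote the singular values of $\bm{B}$.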
Furthermore, we have \begin{eqnarray*} F(\bm{\Delta}) &=& l(\dot{\bm\theta}^{\widehat{\mathcal{M}}} + \bm{\Delta}) - l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}) + \lambda_1 \left\{ P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}} + \bm{\Delta}_1) - P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\}\\ && + \lambda_2 \left\{ P_2(\dot{\bm{B}} + \bm{\Delta}_2) - P_2(\dot{\bm{B}}) \right\}\\ &\geq& \langle \nabla l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}), \bm{\Delta} \rangle + \iota \left\| \bm{\Delta} \right\|^2 + \lambda_1 \left\{ P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}} + \bm{\Delta}_1) - P_1(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\}\\ && + \lambda_2 \left\{ P_2(\dot{\bm{B}} + \bm{\Delta}_2) - P_2(\dot{\bm{B}}) \right\}\\ &\geq& \langle \nabla l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}), \bm{\Delta} \rangle + \iota \left\| \bm{\Delta} \right\|^2 + \lambda_1 \left[ P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) - P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) - 2 P_1\{(\dot{\bm\beta}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\} \right]\\ &&+ \lambda_2 \left[ P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) - P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) - 2 P_2\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\} \right], \end{eqnarray*} where the first inequality follows from Assumption (A6) and the second inequality follows from Lemma 3 in \cite{negahban2012unified}. By the Cauchy--Schwarz inequality applied to $P_k$ and its dual $P_k^*$, $k=1,2$, where $P_k^*$ is defined as the dual norm of $P_k$ such that $P_k^*(\bm\theta) = \sup_{P_k(\bm\eta) \leq 1} \langle \bm\theta, \bm\eta \rangle$, we have $| \langle \nabla l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}), \bm{\Delta} \rangle | \leq |\langle \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}), \bm{\Delta}_1 \rangle| + |\langle\nabla l(\dot{\bm{B}}), \bm{\Delta}_2 \rangle | \leq P_1^* \left\{ \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\} P_1\left( \bm{\Delta}_1 \right) + P_2^* \left\{ \nabla l(\dot{\bm{B}}) \right\} P_2\left( \bm{\Delta}_2 \right).$ If $\lambda_k \geq 2 P_k^*(\cdot)$ holds, where $ P_1^*(\cdot) = \left\| \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\|_{\infty} $ and $ P_2^*(\cdot) = \left\| \nabla l(\dot{\bm{B}}) \right\|_{op}$ \citep{negahban2012unified, kong2019l2rm}, one has $| \langle \nabla l(\cdot), \bm{\Delta}_k \rangle | \leq \frac{1}{2} \lambda_k P_k(\bm{\Delta}_k) \leq \frac{1}{2} \lambda_k \left\{ P_k( \bm{\Delta}_{k,\overline{\Omega}_k^{\perp}}) + P_k(\bm{\Delta}_{k,\overline{\Omega}_k}) \right\}. 
Therefore, we have \begin{eqnarray*} F(\bm{\Delta}) &\geq& \langle \nabla l(\dot{\bm\theta}^{\widehat{\mathcal{M}}}), \bm{\Delta} \rangle + \iota \left\| \bm{\Delta} \right\|^2 + \lambda_1 \left[ P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) - P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) - 2 P_1\{(\dot{\bm{\beta}}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\} \right]\\ &&+ \lambda_2 \left[ P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) - P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) - 2 P_2\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\} \right]\\ &=& \langle \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}), \bm{\Delta}_1 \rangle + \langle\nabla l(\dot{\bm{B}}), \bm{\Delta}_2 \rangle + \iota \left\| \bm{\Delta} \right\|^2 \\ &&+ \lambda_1 \left[ P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) - P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) - 2 P_1\left\{(\dot{\bm{\beta}}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\right\} \right]\\ &&+ \lambda_2 \left[ P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) - P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) - 2 P_2\left\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\right\} \right] \\ &\geq& - \frac{\lambda_1}{2} \left\{ P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) + P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) \right\} - \frac{\lambda_2}{2} \left\{ P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) + P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) \right\} + \iota \left\| \bm{\Delta} \right\|^2 \\ && + \lambda_1 \left[ P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) - P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) - 2 P_1\left\{(\dot{\bm{\beta}}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\right\} \right]\\ &&+ \lambda_2 \left[ P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) - P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) - 2 P_2\left\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\right\} \right] \\ &=&\iota \left\| \bm{\Delta} \right\|^2 + \lambda_1 \left[ \frac{1}{2} P_1( \bm{\Delta}_{1,\overline{\Omega}_1^{\perp}}) - \frac{3}{2} P_1(\bm{\Delta}_{1,\overline{\Omega}_1}) - 2 P_1\left\{(\dot{\bm{\beta}}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\right\} \right] \\ &&+\lambda_2 \left[ \frac{1}{2} P_2( \bm{\Delta}_{2,\overline{\Omega}_2^{\perp}}) -\frac{3}{2} P_2(\bm{\Delta}_{2,\overline{\Omega}_2}) - 2 P_2\left\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\right\} \right].\\ \end{eqnarray*} By the subspace compatibility, we have $ P_k(\bm{\Delta}_{k,\overline{\Omega}_k}) \leq \psi_k(\overline{\Omega}_k) \| \bm{\Delta}_{k,\overline{\Omega}_k}\|$, for $k=1,2$. Substituting them into the previous inequality, and noticing that $P_1\left\{(\dot{\bm{\beta}}^{\widehat{\mathcal{M}}})_{\overline{\Omega}_1^{\perp}}\right\} = P_2\left\{(\dot{\bm{B}})_{\overline{\Omega}_2^{\perp}}\right\}$ $= 0 $, we obtain that \begin{eqnarray*} F(\bm{\Delta}) &\geq& \iota \left\| \bm{\Delta} \right\|^2 -\sum_{k \in \{ 1,2\}} \frac{3 \lambda_k}{2} \psi_k(\overline{\Omega}_k) \| \bm{\Delta}_{k,\overline{\Omega}_k}\| \\ &\geq& \iota \left\| \bm{\Delta} \right\|^2 -\sum_{k \in \{ 1,2\}} \frac{3 \lambda_k}{2} \psi_k(\overline{\Omega}_k) \| \bm{\Delta}_{k}\| \\ &\geq& \iota \left\| \bm{\Delta} \right\|^2 - 3 \max_{k \in \{ 1,2\}}\{ \lambda_k \psi_k(\overline{\Omega}_k) \} \left\| \bm{\Delta} \right\|. \end{eqnarray*} The right-hand side is a quadratic function of $\left\| \bm{\Delta} \right\|$.
Therefore, as long as $ \left\| \bm{\Delta} \right\|^2 > \frac{9}{\iota^2} \max^2_{k \in \{ 1,2\}}\{ \lambda_k \psi_k(\overline{\Omega}_k) \} $, one has $F(\bm{\Delta}) > 0$. Therefore, by Lemma 4 in \cite{negahban2012unified}, we can establish that \begin{eqnarray*} \left\| \widehat{\bm\theta}^{\widehat{\mathcal{M}}}_{\bm\lambda} - \dot{\bm\theta}^{\widehat{\mathcal{M}}} \right\|_2^2 \leq C \max\left\{\lambda_1^2 |\widehat{\mathcal{M}}|,\lambda_2^2 r \right\} \iota^{-2}, \end{eqnarray*} for some constant $C > 0$. It is easy to see that there exists some constant $C_0 > 0$ such that \begin{eqnarray*} \left\| \widehat{\bm\theta}_{\bm\lambda} - \dot{\bm\theta} \right\|_2^2 \leq C_0 \max\left\{ C_1 n^{2 \kappa + \tau} \lambda_1^2 ,\lambda_2^2 r \right\} \iota^{-2}. \end{eqnarray*} Now we calculate $ P_1^*(\cdot) $ and $ P_2^*(\cdot) $. According to \cite{kong2019l2rm,negahban2012unified}, we have that $ P_1^*(\cdot) = \left\| \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\|_{\infty} $ and $ P_2^*(\cdot) = \left\| \nabla l(\dot{\bm{B}}) \right\|_{op}$, where $ \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) = - n^{-1}\sum_{i=1}^{n} \epsilon_i X_i^{\widehat{\mathcal{M}}} $ and $\nabla l(\dot{\bm{B}}) = - n^{-1}\sum_{i=1}^{n} \epsilon_i * \bm{Z}_i $. We first calculate $ \left\| \nabla l(\dot{\bm\beta}^{\widehat{\mathcal{M}}}) \right\|_{\infty} $. Denoting $\bm\epsilon = (\epsilon_1,\ldots,\epsilon_n)^{\mathrm{\scriptscriptstyle T}}$, $\bm X^{\widehat{\mathcal{M}}} = (X_1^{\widehat{\mathcal{M}},\mathrm{\scriptscriptstyle T}},\ldots,X_n^{\widehat{\mathcal{M}},\mathrm{\scriptscriptstyle T}})^{\mathrm{\scriptscriptstyle T}}$, where $X_i^{\widehat{\mathcal{M}}} = (x_{ij})^{\mathrm{\scriptscriptstyle T}}_{j \in \widehat{\mathcal{M}}} \in \mathbb{R}^{|\widehat{\mathcal{M}}|}$ for $i= 1,\ldots,n$, we let $\bm x_l^{\widehat{\mathcal{M}}}$, $l = 1,\ldots, |\widehat{\mathcal{M}}|$, denote the $l$-th column of $\bm X^{\widehat{\mathcal{M}}}$. Since $\bm X$ is column normalized, i.e.\ $\| \bm{x}_l\|_2 / \sqrt{n} = 1$ for all $l = 1, \ldots,s $, by Assumption (A2) there exists a constant $ \sigma_{0} > 0$ such that $P( \left| \langle \bm x_l^{\widehat{\mathcal{M}}}, \bm \epsilon \rangle / n \right| \geq t ) \leq 2 \exp \left( - \frac{n t^2}{2 \sigma_{0}^2} \right)$. Applying the union bound, we have $P \left( \left\| - n^{-1}\sum_{i=1}^{n} \epsilon_i X_i^{\widehat{\mathcal{M}}} \right\|_\infty \geq t \right) = P\left( \left\| \bm X^{\widehat{\mathcal{M}},\mathrm{\scriptscriptstyle T}} \bm\epsilon / n \right\|_\infty \geq t \right) = P\left( \mathrm{sup}_{l \in \widehat{\mathcal{M}}} \left| \langle \bm x_l^{\widehat{\mathcal{M}}}, \bm \epsilon \rangle /n \right| \geq t \right) \leq 2 \exp \left(- \frac{n t^2}{2 \sigma_{0}^2} + \log |\widehat{\mathcal{M}}| \right) \leq 2 \exp \left( - \frac{n t^2}{2 \sigma_{0}^2} + C_1 (2 \kappa + \tau)\log n \right)$. By choosing $t^2 = 2 n^{-1} \sigma_{0}^2$ $\{ \log (\log n) + C_1 (2 \kappa + \tau)\log n \}$, we see that the choice $\lambda_1 \geq 2 \sigma_{0} [2 n^{-1} \{ \log (\log n) + C_1 (2 \kappa + \tau)\log n \}]^{1/2}$ guarantees $\lambda_1 \geq 2 P_1^*(\cdot)$ with probability at least $1 - c_1 (\log n)^{-1}$ for some positive constant $c_1$.
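The scale of this choice of $\lambda_1$ is easy to see empirically. The following Python sketch (a Monte Carlo illustration only, with arbitrary sizes, not tied to the constants of the proof) checks that $\max_l |\langle \bm x_l, \bm\epsilon \rangle / n|$ rarely exceeds twice the level $\sigma_0 \sqrt{2 \log m / n}$ for a column-normalized design:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma0, reps = 500, 200, 1.0, 2000

X = rng.normal(size=(n, m))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)   # column normalization

t = sigma0 * np.sqrt(2 * np.log(m) / n)       # sub-Gaussian max scale
exceed = 0
for _ in range(reps):
    eps = rng.normal(scale=sigma0, size=n)
    if np.max(np.abs(X.T @ eps / n)) >= 2 * t:
        exceed += 1
print("empirical exceedance frequency:", exceed / reps)  # close to 0
\end{verbatim}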
Secondly, we calculate $ \left\| \nabla l(\dot{\bm{B}}) \right\|_{op}$: \begin{eqnarray*} && \left\| \nabla l(\dot{\bm{B}}) \right\|_{op} = \left\| - n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{Z}_i \right\|_{op} =\left\| n^{-1} \sum_{i=1}^{n} \epsilon_i * \left( \sum_{l \in \mathcal{M}_2} X_{il}* \dot{\bm{C}}_{l} + \bm{E}_{i}\right) \right\|_{op}\\ &=& \left\| n^{-1} \sum_{i=1}^{n} \epsilon_i * \left( \sum_{l \in \mathcal{M}_2} X_{il}* \dot{\bm{C}}_{l} \right)+ n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \right\|_{op}\\ & =& \left\| n^{-1} \sum_{l \in \mathcal{M}_2} \langle \bm x_l, \bm\epsilon \rangle * \dot{\bm{C}}_{l} + n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \right\|_{op}\\ &\leq& n^{-1} \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| \left\| \dot{\bm{C}}_{l} \right\|_{op} + \left\|n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \right\|_{op}. \end{eqnarray*} Under condition (A1), $ n^{-1} \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| \left\| \dot{\bm{C}}_{l} \right\|_{op} \leq n^{-1} b \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right|$. Therefore, we have \begin{eqnarray*} &&P(n^{-1} b \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| \geq t ) = P( \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle /n \right| \geq t/b ) \\ &\leq& P\left[ \cup_{l \in \mathcal{M}_2} \left\{ \left| \langle \bm x_l, \bm\epsilon \rangle /n \right| \geq \frac{t}{b s_2} \right\} \right] \leq \sum_{l \in \mathcal{M}_2} P\left( \left| \langle \bm x_l, \bm\epsilon \rangle /n \right| \geq \frac{t}{b s_2}\right)\\ &\leq& \sum_{l \in \mathcal{M}_2} P\left( \sup_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle /n \right| \geq \frac{t}{b s_2}\right) \leq 2 \exp \left( - \frac{n t^2}{2 b^2 s_2^2 \sigma_{0}^2} + 2 \log s_2 \right). \end{eqnarray*} Therefore, by choosing $t= b s_2 \sigma_{0} [ 2 n^{-1} \{ 3 \log s_2 + \log (\log n) \}]^{1/2}$, for any choice of $t_1 \geq t$ we guarantee that $t_1 \geq n^{-1} b \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| $, and therefore $t_1 \geq n^{-1} \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| \left\| \dot{\bm{C}}_{l} \right\|_{op}$, with probability at least $1 - c_2 / (s_2 \log n)$ for some positive constant $c_2$. On the other hand, let $\bm{W}_i$ be a $p \times q$ random matrix with each entry i.i.d. standard normal. By Assumption (A8) and Lemma \ref{lem3}, conditioning on $\epsilon_i$ we have \begin{eqnarray*} P( \left\| \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \right\|_{op} \geq t ) = P( \sup_{\| \bm A \|_* \leq 1 }\langle \bm A, \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \rangle \geq t ) \leq P( \sup_{\| \bm A \|_* \leq 1 }\langle \bm A, \sum_{i=1}^{n} \epsilon_i * \bm{W}_{i} \rangle \geq \frac{t}{C_u} ), \end{eqnarray*} since $\Sigma_e \preceq C_u^2 I_{pq \times pq}$. Note that $ \sup_{\| \bm A \|_* \leq 1 }\langle \bm A, \sum_{i=1}^{n} \epsilon_i * \bm{W}_{i} \rangle = \left\| \sum_{i=1}^{n} \epsilon_i * \bm{W}_{i} \right\|_{op}$. Conditioning on $\bm\epsilon$, each entry of the matrix $\sum_{i=1}^{n} \epsilon_i * \bm{W}_{i} $ is i.i.d. $N(0, \| \bm\epsilon \|_{op}^2)$.
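The operator-norm control used in the next step can also be checked by simulation. A minimal numpy sketch (illustrative sizes only; the factor $2\sqrt{n}\sigma_\epsilon(\sqrt{p}+\sqrt{q})$ is the bound derived below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, p, q, sigma_eps, reps = 200, 30, 20, 1.0, 200

bound = 2 * np.sqrt(n) * sigma_eps * (np.sqrt(p) + np.sqrt(q))
violations = 0
for _ in range(reps):
    eps = rng.normal(scale=sigma_eps, size=n)
    W = rng.normal(size=(n, p, q))                 # i.i.d. N(0,1) entries
    M = np.tensordot(eps, W, axes=1)               # sum_i eps_i W_i
    if np.linalg.norm(M, ord=2) >= bound:          # largest singular value
        violations += 1
print("violations:", violations, "out of", reps)   # expected: (almost) none
\end{verbatim}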
Since $\frac{\| \bm\epsilon \|_{op}^2}{\sigma_{\epsilon}^2}$ is a $\chi^2$ random variable with $n$ degrees of freedom, one has \[ P(\frac{\| \bm\epsilon \|_{op}^2}{n \sigma_{\epsilon}^2} \geq 4) \leq \exp(-n) \] using the $\chi^2$ tail bound given by the corollary of Lemma 1 in \cite{laurent2000adaptive}. Combining with standard random matrix theory, we know that $\| n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{W}_{i} \|_{op} \leq 2 n^{-1/2} \sigma_{\epsilon} (p^{1/2} + q^{1/2})$ with probability at least $1 - c_3 \exp\{ -c_4 (p+q)\} - \exp(-n)$, where $c_3$ and $c_4$ are some positive constants. Combining this with the bound from the previous step, we have $ n^{-1} \sum_{l \in \mathcal{M}_2} \left| \langle \bm x_l, \bm\epsilon \rangle \right| \left\| \dot{\bm{C}}_{l} \right\|_{op} + \left\| n^{-1} \sum_{i=1}^{n} \epsilon_i * \bm{E}_{i} \right\|_{op} \leq b s_2 \sigma_{0}[ 2 n^{-1} \{ 3 \log s_2 + \log (\log n) \}]^{1/2} + 2 n^{-1/2} \sigma_{\epsilon} (p^{1/2} + q^{1/2})$ with probability at least $1- c_2 / (s_2 \log n) - c_3 \exp\{ -c_4 (p+q)\} - \exp(-n) $. Therefore, the requirement $\lambda_2 \geq 2 P_2^*(\cdot)$ is met by choosing $\lambda_2 \geq 2 b s_2 \sigma_{0}[ 2 n^{-1} \{ 3 \log s_2 + \log (\log n) \}]^{1/2} + 4 n^{-1/2} \sigma_{\epsilon} (p^{1/2} + q^{1/2})$, with probability at least $1- c_2 / (s_2 \log n) - c_3 \exp\{ -c_4 (p+q)\} - \exp(-n) $. As a result, the event that both $\lambda_1$ and $\lambda_2$ satisfy the above inequalities holds with probability at least $1- c_1/ \log n - c_2 / (s_2 \log n) - c_3 \exp\{ -c_4 (p+q)\} - \exp(-n) $ for some positive constants $c_1$, $c_2$, $c_3$ and $c_4$. In sum, there exist positive constants $c_1, c_2, c_3, c_4$ such that, with probability at least $1- c_1/ \log n - c_2 / (s_2 \log n) - c_3 \exp\{ -c_4 (p+q)\} $, one has $$\left\| \widehat{\bm\theta}_{\bm\lambda} - \dot{\bm\theta} \right\|_2^2 \leq C_0 \max\left\{C_1 \lambda_1^2 n^{2 \kappa + \tau},\lambda_2^2 r \right\} \iota^{-2},$$ for some constants $C_0, C_1 > 0$. This completes the proof. \end{proof} \bibliographystyle{chicago}
\section{Introduction} Finite generation of symbolic Rees rings is one of the most interesting problems in commutative ring theory. It is deeply related to Hilbert's 14th problem or Kronecker's problem (Cowsik's question~\cite{Cowsik}). It often happens that the Cox ring (or a multi-section ring) of an algebraic variety coincides with the extended symbolic Rees ring of an ideal of a ring. Finite generation of these rings is also a very important problem in birational geometry. The simplest non-trivial examples of symbolic Rees rings are \[ R_s(\bmp_{a,b,c}) = \bigoplus_{n \ge 0}{\bmp_{a,b,c}}^{(n)}t^n \subset k[x,y,z,t], \] where $\bmp_{a,b,c}$ is the kernel of the $k$-algebra homomorphism \[ \phi_{a,b,c} : k[x,y,z] \longrightarrow k[\lambda] \] given by $\phi_{a,b,c}(x) = \lambda^a$, $\phi_{a,b,c}(y) = \lambda^b$, $\phi_{a,b,c}(z) = \lambda^c$ ($k$ is a field and $a$, $b$, $c$ are pairwise coprime integers) and ${\bmp_{a,b,c}}^{(n)} = {\bmp_{a,b,c}}^{n}k[x,y,z]_{\bmp_{a,b,c}} \cap k[x,y,z]$ is the $n$th symbolic power. In this case, the extended symbolic Rees ring $R_s(\bmp_{a,b,c})[t^{-1}]$ is the Cox ring of the blow-up $Y_{\Delta_{a,b,c}}$ of the weighted projective surface $X_{\Delta_{a,b,c}}=\proj(k[x,y,z])$ at the point $(1,1)$ in the torus. Many commutative algebraists and algebraic geometers studied them and gave many results (\cite{Hu}, \cite{C}, \cite{GNSnagoya}, \cite{GNW}, \cite{GK}, \cite{GAGK}, \cite{KM}, \cite{KN} etc.). Finite generation of $R_s(\bmp_{a,b,c})$ depends on $a$, $b$, $c$ and the characteristic of $k$. There are many examples of finitely generated $R_s(\bmp_{a,b,c})$. Examples of infinitely generated $R_s(\bmp_{a,b,c})$ were first discovered by Goto-Nishida-Watanabe~\cite{GNW}. In the case where the characteristic of $k$ is positive, it is not known whether $R_s(\bmp_{a,b,c})$ is finitely generated for all $a$, $b$, $c$. We say that $C$ is a negative curve if $C$ is a curve in $Y_{\Delta_{a,b,c}}$ with $C^2 < 0$ such that $C$ is not the exceptional curve $E$ of the blow-up $\pi : Y_{\Delta_{a,b,c}} \rightarrow X_{\Delta_{a,b,c}}$. Since the Picard number of $Y_{\Delta_{a,b,c}}$ is two, such a curve is unique if it exists. Cutkosky~\cite{C} proved that finite generation of the Cox ring of $Y_{\Delta_{a,b,c}}$ is equivalent to the existence of curves $D_1$ and $D_2$ in $Y_{\Delta_{a,b,c}}$ such that $D_1 \cap D_2 = \emptyset$, $D_1\neq E$ and $D_2\neq E$ (the defining equations of $\pi(D_1)$ and $\pi(D_2)$ satisfy Huneke's criterion~\cite{Hu} for finite generation). If $R_s(\bmp_{a,b,c})$ is finitely generated with $\sqrt{abc} \not\in \bZ$, we may assume that either $D_1$ or $D_2$ coincides with the negative curve $C$. If the negative curve does not exist, one can prove Nagata's conjecture (for $abc$ points) affirmatively as in \cite{CK}. Therefore the existence of the negative curve is a very important question. The aim of this paper is to study the structure of the negative curve. In particular, the author is interested in the problem whether the negative curve is rational or not. If there exists a negative rational curve $C$, it is possible to estimate the degree of the curve $D$ such that $C$ and $D$ satisfy Huneke's criterion for finite generation in the same way as in \cite{KN}. Assume that the negative curve $C$ in $Y_{\Delta_{a,b,c}}$ exists. We shall study \[ \pi(C) \cap T \] in this paper, where $T$ is the torus in $X_{\Delta_{a,b,c}}$.
When $C.E = r$, the defining equation of $\pi(C) \cap T$ in $T = \spec(k[v^{\pm 1}, w^{\pm 1}])$ is an irreducible Laurent polynomial contained in $(v-1,w-1)^rk[v^{\pm 1}, w^{\pm 1}]$ and the Newton polygon has area less than $r^2/2$ (see Proposition~\ref{Prop3.2}). We call such a Laurent polynomial an {\em $r$-nct} in this paper (Definition~\ref{Def3.1}). Remark that $C$ and $\pi(C) \cap T$ are birational. The author does not know of any example in which $C$ is not rational. We shall prove some basic properties of $r$-ncts in Proposition~\ref{Fact3.3}. It is proved that, for each $r \ge 1$, there exist essentially finitely many $r$-ncts. When $r=1$, there exists essentially only one $1$-nct $\varphi_1 = vw-1$. When $r=2$, there exists essentially only one $2$-nct $\varphi_2 = -v^2w -vw^2+3vw-1$. When $r=3$, there exist essentially two $3$-ncts $\varphi_3$ and $\varphi'_3$ as in Example~\ref{123-nct}. Let $P_{\varphi'_3}$ be the Newton polygon of $\varphi'_3$. In the case where the characteristic of $k$ is not $2$, $P_{\varphi'_3}$ is a tetragon. However, in the case of characteristic $2$, $P_{\varphi'_3}$ is a triangle strictly smaller than this tetragon. This is a reason why ${\bmp_{9,10,13}}^{(3)}$ contains a negative curve in the case of characteristic $2$. In other characteristics, ${\bmp_{9,10,13}}^{(3)}$ does not contain a negative curve (see Example~\ref{ExampleOfNct}). The following is the main theorem of this paper. \begin{Theorem}\label{Thm3.6} \begin{rm} Let $k$ be a field and $\varphi$ be an $r$-nct over $k$, where $r \ge 2$. Let $P_\varphi$ be the Newton polygon of $\varphi$. Consider the prime ideal $\bmp_\varphi$ of the Ehrhart ring \[ E(P_\varphi,\lambda) = \bigoplus_{d \ge 0} \left( \bigoplus_{(\alpha,\beta) \in d P_\varphi \cap \bZ^2} kv^\alpha w^\beta \right) \lambda^d \subset k[v^{\pm 1}, w^{\pm 1}][\lambda] \] satisfying $\bmp_\varphi = E(P_\varphi,\lambda) \cap (v-1,w-1)k[v^{\pm 1}, w^{\pm 1}][\lambda]$. Let $Y_{\Delta_\varphi}$ be the blow-up of $X_{\Delta_\varphi} = \proj(E(P_\varphi, \lambda))$ at the point $\bmp_\varphi$. Let $C_\varphi$ be the proper transform of the curve $V_+(\varphi \lambda)$ in $X_{\Delta_\varphi}$. Let $I_\varphi$ (resp.\ $B_\varphi$) be the number of the interior lattice points (resp.\ the boundary lattice points) of $P_\varphi$. Consider the following conditions: \begin{enumerate} \item[(1)] $-K_{Y_{\Delta_\varphi}}$ is nef and big. \item[(2)] $-K_{Y_{\Delta_\varphi}}$ is nef. \item[(3)] $(-K_{Y_{\Delta_\varphi}})^2>0$. \item[(4)] Let $\bma_1$, $\bma_2$, \ldots, $\bma_n$ be the first lattice points of the $1$-dimensional cones in the fan corresponding to the toric variety $X_{\Delta_\varphi}$. We put \[ P_{-K_{X_{\Delta_\varphi}}} = \{ \bmx \in \bR^2 \mid \seisei{\bmx, \bma_i} \ge -1 \ (i = 1, 2, \ldots, n) \} . \] Then $|P_{-K_{X_{\Delta_\varphi}}}| > 1/2$ is satisfied. \item[(5)] $-K_{Y_{\Delta_\varphi}}$ is big. \item[(6)] The Cox ring ${\rm Cox}(Y_{\Delta_\varphi})$ is Noetherian. \footnote{ As we shall see in Remark~\ref{coxY}, ${\rm Cox}(Y_{\Delta_\varphi})$ is an extended symbolic Rees ring of an ideal $I$ over the polynomial ring ${\rm Cox}(X_{\Delta_\varphi})$. If ${\rm Cl}(X_{\Delta_\varphi})$ is torsion-free, then $I$ is a prime ideal. In the case where the characteristic of $k$ is $0$, it holds that $\sqrt{I} = I$.} \item[(7)] $B_\varphi \ge r$. \item[(8)] $H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + n C_\varphi)) = 0$ for $n > 0$. \item[(9)] $I_\varphi = \frac{r(r-1)}{2}$.
\item[(10)] The extended symbolic Rees ring $R'_s(\bmp_\varphi)$ is Noetherian. \item[(11)] $C_\varphi \simeq \bP_k^1$ is satisfied. \end{enumerate} Then we have the following: \begin{itemize} \item[(a)] \[ \begin{array}{ccccccccccc} (1) & \Longrightarrow & (2) & \multicolumn{3}{c}{\xLongrightarrow[]{\hspace{5.5em}}} &(7) & & & & \\ \Downarrow & & & & & & \Downarrow& & & & \\ (3) & \Longrightarrow & (4) & \Longrightarrow & (5) & \Longrightarrow & (8) & \Longleftrightarrow & (9) & \Longleftarrow & (11) \\ & & & & \Downarrow & & \Downarrow& & & & \\ & & & & (6) & \Longrightarrow & (10) & & & & \end{array} \] is satisfied.\footnote{ We do not have to restrict ourselves to the polygon $P_\varphi$ for $(1)\Rightarrow (3)\Rightarrow (4)\Rightarrow (5)\Rightarrow (6)\Rightarrow (10)$. They hold for any integral convex polygon.} \item[(b)] If $k$ is algebraically closed, then (9) is equivalent to (11). \item[(c)] If the characteristic of $k$ is positive, then (10) is always satisfied. \end{itemize} \end{rm} \end{Theorem} We shall prove this theorem in section~\ref{sect4}. Using this theorem, we shall see that the negative curve $C$ in $Y_{\Delta_{a,b,c}}$ is rational in many cases in Remark~\ref{rational}. For instance, if $r \le 4$, then $C$ is rational. If $r \le 5$ and ${\rm ch}(k) = 0$, $C$ is rational. \section{Preliminaries}\label{junbi} All rings in this paper are commutative with unity. For a Noetherian ring $A$ and a prime ideal $Q$ of $A$, $Q^{(n)} = Q^nA_Q \cap A$ is called the $n$th symbolic power of $Q$. When $n < 0$, we set $Q^{(n)} = A$. We put \begin{eqnarray*} R_s(Q) & = & \oplus_{n \ge 0} Q^{(n)} t^n \subset A[t] \\ R'_s(Q) & = & \oplus_{n \in \bZ} Q^{(n)} t^n \subset A[t^{\pm 1}] \end{eqnarray*} and call them the symbolic Rees ring of $Q$ and the extended symbolic Rees ring of $Q$, respectively. We know that $R_s(Q)$ is Noetherian if and only if so is $R'_s(Q)$. \begin{Definition}\label{Def1} \begin{rm} Let $a$, $b$, $c$ be pairwise coprime integers. Let $k$ be a field and $S_{a,b,c} = k[x,y,z]$ be a graded polynomial ring with $\deg(x) = a$, $\deg(y) = b$, $\deg(z) = c$. Let $\lambda$ be a variable and consider the $k$-algebra homomorphism \[ \phi_{a,b,c} : S_{a,b,c} \longrightarrow k[\lambda] \] given by $\phi_{a,b,c}(x) = \lambda^a$, $\phi_{a,b,c}(y) = \lambda^b$, $\phi_{a,b,c}(z) = \lambda^c$. Let $\bmp_{a,b,c}$ be the kernel of $\phi_{a,b,c}$. \end{rm} \end{Definition} \begin{Remark}\label{Rem2} \begin{rm} Let $Y_{\Delta_{a,b,c}}$ be the blow-up of $X_{\Delta_{a,b,c}} = \proj S_{a,b,c}$ at $V_+(\bmp_{a,b,c})$. (Here $X_{\Delta_{a,b,c}}$ is a toric variety with a fan $\Delta_{a,b,c}$. We shall define $\Delta_{a,b,c}$ in Remark~\ref{MethodOfToric}.) Let $E$ be the exceptional divisor and $H$ be the pull back of ${\cal O}_{X_{\Delta_{a,b,c}}}(1)$. Then $\{ E, H \}$ is a generating set of ${\rm Cl}(Y_{\Delta_{a,b,c}}) \simeq \bZ^2$. By Cutkosky~\cite{C}, we have an identification \[ H^0(Y_{\Delta_{a,b,c}}, {\cal O}_{Y_{\Delta_{a,b,c}}}(dH - rE)) = [{\bmp_{a,b,c}}^{(r)}]_d \] and an isomorphism of rings as follows: \begin{equation}\label{doukei} {\rm Cox}(Y_{\Delta_{a,b,c}}) = \bigoplus_{d, r \in \bZ}H^0(Y_{\Delta_{a,b,c}}, {\cal O}_{Y_{\Delta_{a,b,c}}}(dH - rE)) = \bigoplus_{r \in \bZ} {\bmp_{a,b,c}}^{(r)} t^r = R'_s(\bmp_{a,b,c}) \end{equation} \end{rm} \end{Remark} \begin{Remark}\label{MethodOfToric} \begin{rm} Consider $S_{a,b,c}$, $\bmp_{a,b,c}$, $X_{\Delta_{a,b,c}}$, $Y_{\Delta_{a,b,c}}$ as in Definition~\ref{Def1} and Remark~\ref{Rem2}.
By Herzog~\cite{Her}, we have \[ \bmp_{a,b,c} = I_2\left( \begin{array}{ccc} x^{s_2} & y^{t_3} & z^{u_1} \\ y^{t_1} & z^{u_2} & x^{s_3} \end{array} \right) = (x^{s} - y^{t_1}z^{u_1}, y^{t} - x^{s_2}z^{u_2}, z^{u} - x^{s_3}y^{t_3}) , \] where $s = s_2+s_3$, $t = t_1 + t_3$, $u = u_1+u_2$, and $I_2( \ )$ is the ideal generated by $2 \times 2$-minors of the given matrix. Here we put $v = x^{s_2}z^{u_2}/y^t$, $w = x^{s_3}y^{t_3}/z^u$. Then $S_{a,b,c}[x^{-1}, y^{-1}, z^{-1}]$ is a ${\Bbb Z}$-graded ring such that \[ S_{a,b,c}[x^{-1}, y^{-1}, z^{-1}]_0=k[v^{\pm 1}, w^{\pm 1}] \] (cf.\ the proof of Lemma~3.6 in \cite{KN}). Take integers $i_0$, $j_0$ satisfying $i_0a + j_0b = 1$. Putting $\lambda = x^{i_0}y^{j_0}$, we obtain \[ S_{a,b,c} \subset S_{a,b,c}[x^{-1}, y^{-1}, z^{-1}]= \left( S_{a,b,c}[x^{-1}, y^{-1}, z^{-1}]_0 \right) [\lambda^{\pm 1}] = k[v^{\pm 1}, w^{\pm 1}, \lambda^{\pm 1}] . \] Here suppose \[ v^\alpha w^\beta \lambda^d = x^iy^jz^k \in S_{a,b,c} . \] Then we have $d = \deg(x^iy^jz^k) = ia + jb + kc \ge 0$ and \begin{align*} & i = s_2\alpha + s_3 \beta + i_0d \ge 0 \\ & j = -t\alpha + t_3 \beta + j_0d \ge 0 \\ & k = u_2\alpha - u \beta \ge 0 . \end{align*} Consider the rational triangle \[ P_{a,b,c} = \left\{ (\alpha,\beta) \in \bR^2 \ \left| \ \begin{array}{l} s_2\alpha + s_3 \beta + i_0 \ge 0 \\ -t\alpha + t_3 \beta + j_0 \ge 0 \\ u_2\alpha - u \beta \ge 0 \end{array} \right\} \right. . \] When $s_3$, $t_3$, $u$ are positive integers\footnote{ If $\bmp_{a,b,c}$ is not a complete intersection, then all of $s_2$, $s_3$, $t_1$, $t_3$, $u_1$, $u_2$ are positive by Herzog~\cite{Her}. In the case where $\bmp_{a,b,c}$ is a complete intersection, we can choose positive integers $s_3$, $t_3$, $u$ after a suitable permutation of $a$, $b$, $c$. }, $P_{a,b,c}$ is the following triangle: \[ { \setlength\unitlength{1truecm} \begin{picture}(6,5)(-1,-3) \put(1.4,-1){\mbox{\Large $P_{a,b,c}$}} \put(0,0){\line(4,1){4}} \qbezier (0,0) (1.25,-1.5) (2.5,-3) \qbezier (4,1) (3.25,-1) (2.5,-3) \put(1.5,1){$\frac{u_2}{u}$} \put(3.5,-1){$\frac{t}{t_3}$} \put(0.5,-2){$-\frac{s_2}{s_3}$} \end{picture} } \] Thus we have the identification \begin{equation}\label{iden} (S_{a,b,c})_d = \left( \bigoplus_{(\alpha, \beta) \in d P_{a,b,c} \cap \bZ^2} k v^\alpha w^\beta \right) \lambda^d . \end{equation} Consider the Ehrhart ring of $P_{a,b,c}$ defined by \[ E(P_{a,b,c}, \lambda) = \bigoplus_{d \ge 0}\left( \bigoplus_{(\alpha, \beta) \in dP_{a,b,c} \cap \bZ^2} k v^\alpha w^\beta \right) \lambda^d \subset k[v^{\pm 1}, w^{\pm 1}, \lambda^{\pm 1}] . \] Thus $E(P_{a,b,c}, \lambda)$ is isomorphic to $S_{a,b,c}$ as a graded $k$-algebra. Let $\Delta_{a,b,c}$ be the complete fan in $\bR^2$ with one dimensional cones \[ \bRo (s_2,s_3), \ \ \bRo (-t,t_3), \ \ \bRo (u_2,-u) . \] Then $\proj(E(P_{a,b,c}, \lambda))$ is the toric variety $X_{\Delta_{a,b,c}}$ with the fan $\Delta_{a,b,c}$. Furthermore we have \[ \bmp_{a,b,c} = E(P_{a,b,c}, \lambda) \cap (v-1, w-1)k[v^{\pm 1}, w^{\pm 1}, \lambda^{\pm 1}] . \] It is easy to see \begin{equation}\label{nth} {\bmp_{a,b,c}}^{(r)} = E(P_{a,b,c}, \lambda) \cap (v-1, w-1)^rk[v^{\pm 1}, w^{\pm 1}, \lambda^{\pm 1}] \end{equation} for any $r \ge 1$. Therefore we have \[ H^0(Y_{\Delta_{a,b,c}}, {\cal O}_{Y_{\Delta_{a,b,c}}}(dH - rE)) = \left[ \bfp^r \cap \left( \bigoplus_{(\alpha, \beta) \in d P_{a,b,c} \cap \bZ^2} k v^\alpha w^\beta \right) \right] \lambda^d = [{\bmp_{a,b,c}}^{(r)}]_d , \] where $\bfp = (v-1, w-1)k[v^{\pm 1}, w^{\pm 1}]$. 
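For a concrete instance, take $(a,b,c) = (3,4,5)$, where $\bmp_{3,4,5} = (x^3 - yz, y^2 - xz, z^2 - x^2y)$, so that $s_2 = t_1 = t_3 = u_1 = u_2 = 1$ and $s_3 = 2$, and we may take $(i_0, j_0) = (-1,1)$. The following Python/sympy sketch (a sanity check only, not part of the discussion) recovers the generators by lex Gr\"obner elimination and computes the vertices and the area of $P_{3,4,5}$; the value $1/120 = \frac{1}{2 \cdot 3 \cdot 4 \cdot 5}$ agrees with the general area formula obtained in the next remark.
\begin{verbatim}
from sympy import symbols, groebner, Rational, Polygon, Point

t, x, y, z = symbols('t x y z')

# p_{3,4,5} = ker(x -> t^3, y -> t^4, z -> t^5): eliminate t by a lex
# Groebner basis; the elements free of t generate the kernel.
G = groebner([x - t**3, y - t**4, z - t**5], t, x, y, z, order='lex')
print([g for g in G.exprs if not g.has(t)])
# contains x**3 - y*z, y**2 - x*z and z**2 - x**2*y (up to sign),
# possibly together with redundant lex-basis elements

# P_{3,4,5}: alpha + 2*beta - 1 >= 0, -2*alpha + beta + 1 >= 0,
# alpha - 2*beta >= 0; its vertices solve the boundary lines pairwise.
V = [Point(Rational(1, 2), Rational(1, 4)),
     Point(Rational(3, 5), Rational(1, 5)),
     Point(Rational(2, 3), Rational(1, 3))]
print(abs(Polygon(*V).area))   # 1/120 = 1/(2*3*4*5)
\end{verbatim}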
\end{rm} \end{Remark} \begin{Remark}\label{Area} \begin{rm} With notation as in Remark~\ref{MethodOfToric}, we shall calculate the area of $P_{a,b,c}$ here. Suppose $z^u = v^{\alpha_0}w^{\beta_0}\lambda^{cu}$. Then the bottom vertex of $cuP_{a,b,c}$ is $(\alpha_0,\beta_0)$. Since \[ x^{s_3}y^{t_3} = wz^u = v^{\alpha_0}w^{\beta_0+1}\lambda^{cu} , \] the point $(\alpha_0,\beta_0+1)$ is on the upper edge of $cuP_{a,b,c}$. \[ { \setlength\unitlength{1truecm} \begin{picture}(6,5)(-1,-3) \put(2.8,-3){${\scriptstyle (\alpha_0,\beta_0)}$} \put(2.3,0.2){${\scriptstyle (\alpha_0,\beta_0+1)}$} \put(1.4,-1){\mbox{\large $cuP_{a,b,c}$}} \put(0,0){\line(4,1){4}} \qbezier (0,0) (1.25,-1.5) (2.5,-3) \qbezier (4,1) (3.25,-1) (2.5,-3) \put(1.5,1){$\frac{u_2}{u}$} \put(3.5,-1){$\frac{t}{t_3}$} \put(0.5,-2){$-\frac{s_2}{s_3}$} \put(2.4,-3.1){$\bullet$} \put(2.4,0.5){$\bullet$} \end{picture} } \] Then the width of $cuP_{a,b,c}$ is \[ \frac{1}{\frac{t}{t_3} - \frac{u_2}{u}} + \frac{1}{\frac{s_2}{s_3} + \frac{u_2}{u}} = \frac{t_3 u}{a} + \frac{s_3u}{b} = \frac{cu^2}{ab} . \] Here recall that \begin{align*} a & = e((\lambda^a), k[\lambda]) = e((x), S_{a,b,c}/\bmp_{a,b,c}) = \ell_{S_{a,b,c}}(S_{a,b,c}/(x)+\bmp_{a,b,c}) \\ & = \ell_{S_{a,b,c}}(S_{a,b,c}/(x, y^{t_1}z^{u_1}, y^t, z^u)) = tu - t_3u_2 , \end{align*} $b = su - s_3u_1 = s_2u + s_3u_2$ and $cu = s_3a+t_3b$, where $e( \ )$ is the multiplicity and $\ell( \ )$ is the length. Therefore the area of $cuP_{a,b,c}$ is $\frac{cu^2}{2 ab}$. Thus we know that the area of $P_{a,b,c}$ is $\frac{1}{2 abc}$. \end{rm} \end{Remark} \begin{Definition}\label{negative curve} \begin{rm} If an irreducible polynomial $f \in [{\bmp_{a,b,c}}^{(r')}]_{d'}$ satisfies $d'/r' < \sqrt{abc}$, we say that $f \in [{\bmp_{a,b,c}}^{(r')}]_{d'}$ is a negative curve. \end{rm} \end{Definition} If a negative curve $f \in [{\bmp_{a,b,c}}^{(r')}]_{d'}$ exists, then both $r'$ and $d'$ are uniquely determined, and $f$ is also unique up to a constant factor. We denote the proper transform of $V_+(f)$ by $C$ ($\subset Y_{\Delta_{a,b,c}}$). Then $C$ satisfies $C^2 = \frac{{d'}^2}{abc} - {r'}^2 < 0$. \begin{Lemma}\label{PrimeElementOfE} Let $k$ be a field and $P$ be a rational convex polygon (the convex hull of a finite subset of $\bQ^2$) in $\bR^2$. Let $A = k[v^{\pm 1}, w^{\pm 1}]$ be a Laurent polynomial ring with two variables $v$, $w$. Consider the Ehrhart ring \[ E(P, \lambda) = \bigoplus_{d \ge 0} \left( \bigoplus_{(\alpha,\beta) \in dP \cap \bZ^2} kv^\alpha w^\beta \right) \lambda^d \subset A[\lambda] . \] Let $d_1$ be a positive integer and put $\varphi = \sum_{(\alpha,\beta) \in d_1P \cap \bZ^2} c_{(\alpha,\beta)}v^\alpha w^\beta$ where $c_{(\alpha,\beta)} \in k$ and \begin{equation}\label{Nvarphi} N_\varphi = \{ (\alpha,\beta) \in \bZ^2 \mid c_{(\alpha,\beta)} \neq 0 \} . \end{equation} Assume that $N_\varphi$ contains at least two elements. Then the following two conditions are equivalent: \begin{enumerate} \item $\varphi \lambda^{d_1}$ is a prime element of $E(P, \lambda)$. \item $\varphi$ is irreducible in $A$ and each edge of $P$ contains an element in $N_\varphi$. \end{enumerate} \end{Lemma} \Proof Before proving this lemma, we shall give some remarks here. Suppose that a lattice point $(\alpha', \beta')$ is in the interior of $d'P$ for some $d'>0$. 
Then we have \[ E(P, \lambda)[(v^{\alpha'} w^{\beta'} \lambda^{d'})^{-1}] = A[\lambda^{\pm 1}] \] and \begin{equation}\label{domain} \left( E(P, \lambda)/(\varphi \lambda^{d_1}) \right)[(v^{\alpha'} w^{\beta'} \lambda^{d'})^{-1}] = \left( A/ \varphi A \right) [\lambda^{\pm 1}] . \end{equation} Here remark that neither side is $0$ since $N_\varphi$ contains at least two elements. Let $\{ E_1, \ldots, E_s \}$ be the set of edges of $P$. Putting \[ \bmp_i = \bigoplus_{d > 0} \left( \bigoplus_{(\alpha,\beta) \in d(P \setminus E_i) \cap \bZ^2} kv^\alpha w^\beta \right) \lambda^d \subset E(P, \lambda) , \] $\bmp_i$ is a prime ideal of $E(P, \lambda)$ of height $1$ for $i = 1, 2, \ldots, s$. It is easy to see \begin{equation}\label{radical} \sqrt{(v^{\alpha'} w^{\beta'} \lambda^{d'})} = \bmp_1 \cap \cdots \cap \bmp_s . \end{equation} First assume (1). Since $\varphi \lambda^{d_1}$ is a prime element, $\varphi$ is irreducible in $A$ by (\ref{domain}). If some $E_i$ does not meet $N_\varphi$, $\varphi \lambda^{d_1}$ is contained in $\bmp_i$. Since $\varphi \lambda^{d_1}$ is a prime element, we obtain $\bmp_i = (\varphi \lambda^{d_1})$. It is impossible since $N_\varphi$ contains at least two points. Next assume (2). Since $\varphi$ is irreducible in $A$, the right-hand side of (\ref{domain}) is an integral domain. In order to prove (1), it is enough to show that $\varphi \lambda^{d_1}$, $v^{\alpha'} w^{\beta'} \lambda^{d'}$ is an $E(P, \lambda)$-regular sequence. Since $E(P, \lambda)$ is a normal domain, we know \[ {\rm Ass}_{E(P, \lambda)}\left( E(P, \lambda)/(v^{\alpha'} w^{\beta'} \lambda^{d'}) \right) = \{ \bmp_1, \ldots, \bmp_s \} \] by (\ref{radical}). Since none of $\bmp_i$'s contains $\varphi \lambda^{d_1}$ by (2), $v^{\alpha'} w^{\beta'} \lambda^{d'}$, $\varphi \lambda^{d_1}$ is an $E(P, \lambda)$-regular sequence. Since $v^{\alpha'} w^{\beta'} \lambda^{d'}$ and $\varphi \lambda^{d_1}$ are homogeneous elements, $\varphi \lambda^{d_1}$, $v^{\alpha'} w^{\beta'} \lambda^{d'}$ is an $E(P, \lambda)$-regular sequence. \qed \begin{Remark}\label{coxY} \begin{rm} As in Remark~\ref{Rem2}, the extended symbolic Rees ring $R'_s(\bmp_{a,b,c})$ is identified with the Cox ring of a blow-up of a toric surface. Here we generalize this identification. Let $X_\Delta$ be a $d$-dimensional toric variety with a fan $\Delta$. Let \[ \{ \bRo \bma_1, \bRo \bma_2, \ldots, \bRo \bma_n \} \] be the set of the $1$-dimensional cones in $\Delta$. We assume that each $\bma_i$ is in $\bZ^d$ such that the greatest common divisor of the components of $\bma_i$ is $1$, and $\sum_i\bR \bma_i = \bR^d$. Here remark that $\bma_i \neq \bma_j$ if $i \neq j$. Then we have the exact sequence \[ 0 \longleftarrow {\rm Cl}(X_\Delta) \longleftarrow \bZ^n \stackrel{ \left( \begin{array}{c} \bma_1 \\ \bma_2 \\ \vdots \\ \bma_n \end{array} \right) }{\longleftarrow} \bZ^d \longleftarrow 0 , \] where ${\rm Cl}(X_\Delta)$ is the divisor class group of $X_\Delta$. The morphism of monoids \begin{equation}\label{grading} {\rm Cl}(X_\Delta) \longleftarrow \bZ^n \supset (\bNo)^n \end{equation} induces the morphism of semi-group rings \[ k[{\rm Cl}(X_\Delta)] \longleftarrow A:= k[x_1, x_2, \ldots, x_n] = k[(\bNo)^n] . \] Let $I$ be the kernel of this ring homomorphism. By (\ref{grading}), $A$ has a structure of a ${\rm Cl}(X_\Delta)$-graded ring. Then we have an isomorphism $A \simeq {\rm Cox}(X_\Delta)$ as a ${\rm Cl}(X_\Delta)$-graded ring~\cite{Cox}.
Let $\{ \bmp_1, \bmp_2, \ldots, \bmp_\ell \}$ be the set of minimal prime ideals of $I$ and put \[ I^{(r)} = I^rA_{\bmp_1} \cap \cdots \cap I^rA_{\bmp_\ell} \cap A . \] In the case where ${\rm Cl}(X_\Delta)$ is torsion-free, $I$ is a prime ideal of $A$ and $I^{(r)}$ is the $r$th symbolic power of $I$. If the characteristic of $k$ is $0$, then we have $I = \sqrt{I}$ and $I^{(r)} =\bmp_1^{(r)} \cap \bmp_2^{(r)} \cap \cdots \cap \bmp_\ell^{(r)}$. Let $Y_\Delta$ be the blow-up of $X_\Delta$ at $(1,1,\dots,1)$ in the torus $(k^\times)^n$. Then we have an isomorphism \begin{equation}\label{generalization} {\rm Cox}(Y_\Delta) \simeq R'_s(I) \end{equation} as a ${\rm Cl}(X_\Delta) \times \bZ$-graded ring. In the case of $X_{\Delta_{a,b,c}}$ and $Y_{\Delta_{a,b,c}}$ in Definition~\ref{Def1}, Remark~\ref{Rem2} and Remark~\ref{MethodOfToric}, we put $\bma_1 = (s_2,s_3)$, $\bma_2 = (-t,t_3)$, $\bma_3= (u_2,-u)$. Then the morphism (\ref{grading}) of monoids coincides with \[ \bZ \stackrel{ (a \ b \ c)}{\longleftarrow} \bZ^3 \supset (\bNo)^3 . \] Let $I$ be the kernel of the $k$-algebra homomorphism $k[x_1,x_2,x_3] \rightarrow k[y_1^{\pm 1}]$ ($x_1 \mapsto y_1^{a}$, $x_2 \mapsto y_1^b$, $x_3 \mapsto y_1^c$). Then $I = \bmp_{a,b,c}$ and ${\rm Cox}(Y_{\Delta_{a,b,c}})\simeq R'_s(\bmp_{a,b,c})$ by (\ref{generalization}). It is just the isomorphism in Remark~\ref{Rem2} given by Cutkosky. Suppose $\bma_1 = (2,-1)$, $\bma_2 = (-2,-1)$, $\bma_3 = (0,1)$. Then the morphism (\ref{grading}) of monoids is \[ \bZ \oplus \bZ/(2) \longleftarrow \bZ^3 \supset (\bNo)^3 \] such that $(1,0,0) \mapsto (1,\overline{1})$, $(0,1,0) \mapsto (1,\overline{0})$, $(0,0,1) \mapsto (2,\overline{1})$, respectively. Letting $I$ be the kernel of the $k$-algebra homomorphism $k[x_1,x_2,x_3] \rightarrow \frac{k[y_1^{\pm 1}, y_2]}{(y_2^2-1)}$ ($x_1 \mapsto y_1y_2$, $x_2 \mapsto y_1$, $x_3 \mapsto y_1^2y_2$), we have ${\rm Cox}(Y_\Delta)\simeq R'_s(I)$. Consider the following $k$-algebra homomorphisms $f_1 : k[x_1,x_2,x_3] \rightarrow k[y_1]$ ($x_1 \mapsto y_1$, $x_2 \mapsto y_1$, $x_3 \mapsto y_1^2$) and $f_2 : k[x_1,x_2,x_3] \rightarrow k[y_1]$ ($x_1 \mapsto -y_1$, $x_2 \mapsto y_1$, $x_3 \mapsto -y_1^2$). If the characteristic of $k$ is not $2$, then we have $I = {\rm Ker}(f_1) \cap {\rm Ker}(f_2)$. Suppose $t \ge 2$ and \[ \left( \begin{array}{c} \bma_1 \\ \bma_2 \\ \bma_3 \\ \bma_4 \\ \bma_5 \\ \bma_6 \\ \bma_7 \end{array} \right) = \left( \begin{array}{ccc} -1 & t & t \\ t & -1 & t \\ t & t & -1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & -1 \end{array} \right) . \] Then the morphism (\ref{grading}) of monoids is \[ \bZ^4 \stackrel{ \left( \begin{array}{ccccccc} 1 & 0 & 0 & t+1 & 0 & 0 & t \\ 0 & 1 & 0 & 0 & t+1 & 0 & t \\ 0 & 0 & 1 & 0 & 0 & t+1 & t \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{array} \right)}{ \xlongleftarrow[]{\hspace{16em}}} \bZ^7 \supset (\bNo)^7 . \] Let $I$ be the kernel of the $k$-algebra homomorphism $k[x_1,x_2,x_3,S,T,U,V] \rightarrow k[x_1,x_2,x_3,W]$ ($S \mapsto x_1^{t+1}W$, $T \mapsto x_2^{t+1}W$, $U \mapsto x_3^{t+1}W$, $V \mapsto (x_1x_2x_3)^{t}W$). Then $I$ is a prime ideal of height $3$ and ${\rm Cox}(Y_\Delta) \simeq R'_s(I)$. Roberts~\cite{Ro} proved that $R'_s(I)$ is not Noetherian and some pure subring of $R'_s(I)$ gives a counterexample to Hilbert's fourteenth problem if the characteristic of $k$ is $0$. On the other hand, $R'_s(I)$ is Noetherian if the characteristic of $k$ is positive~\cite{K8}.
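The compatibility of such gradings with the fan data can be checked mechanically: the composite $\bZ^d \rightarrow \bZ^n \rightarrow {\rm Cl}(X_\Delta)$ must vanish. A small Python/sympy sketch (a sanity check only, with $t$ kept symbolic) for the example of Roberts above:
\begin{verbatim}
from sympy import Matrix, zeros, symbols

t = symbols('t')

A = Matrix([[-1,  t,  t],
            [ t, -1,  t],
            [ t,  t, -1],
            [ 1,  0,  0],
            [ 0,  1,  0],
            [ 0,  0,  1],
            [-1, -1, -1]])            # rows a_1, ..., a_7

G = Matrix([[1, 0, 0, t + 1, 0,     0,     t],
            [0, 1, 0, 0,     t + 1, 0,     t],
            [0, 0, 1, 0,     0,     t + 1, t],
            [0, 0, 0, 1,     1,     1,     1]])  # grading Z^7 -> Z^4

assert G * A == zeros(4, 3)           # the composite vanishes identically
print("grading is compatible with the fan data")
\end{verbatim}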
Let $E$ be the Ehrhart ring of the following tetragon \[ { \setlength\unitlength{1truecm} \begin{picture}(6,3.5)(-2,-2) \qbezier (0,-2) (1.5,-1.5) (3,-1) \qbezier (0,-2) (0.5,-1) (1,0) \qbezier (1,0) (1.5,0.5) (2,1) \qbezier (2,1) (2.5,0) (3,-1) \put(-0.1,-2.1){$\bullet$} \put(0.9,-0.1){$\bullet$} \put(0.9,-1.1){$\bullet$} \put(1.9,-0.1){$\bullet$} \put(1.9,-1.1){$\bullet$} \put(1.9,0.9){$\bullet$} \put(2.9,-1.1){$\bullet$} \end{picture} } \] and put $X_\Delta = \proj(E)$. In this case, the $1$-dimensional cones are \[ \left( \begin{array}{c} \bma_1 \\ \bma_2 \\ \bma_3 \\ \bma_4 \end{array} \right) = \left( \begin{array}{ccc} 2 & -1 \\ 1 & -1 \\ -2 & -1 \\ -1 & 3 \end{array} \right) \] and the morphism (\ref{grading}) of monoids is \[ \bZ^2 \stackrel{ \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 3 & -4 & 1 & 0 \end{array} \right)}{ \longleftarrow} \bZ^4 \supset (\bNo)^4 . \] Let $I$ be the kernel of the $k$-algebra homomorphism $k[x_1,x_2,x_3,x_4] \rightarrow k[y_1^{\pm 1},y_2^{\pm 1}]$ ($x_1 \mapsto y_1y_2^3$, $x_2 \mapsto y_1y_2^{-4}$, $x_3 \mapsto y_1y_2$, $x_4 \mapsto y_1$). Then $I$ is a prime ideal called a toric ideal. By Example~\ref{ExampleOfGGK}, we know that $R'_s(I)$ is a Noetherian ring. Let $\bmp$ be a prime ideal of the Ehrhart ring $E \subset k[v^{\pm 1}, w^{\pm 1}][\lambda]$ satisfying $\bmp = E \cap (v-1,w-1)k[v^{\pm 1}, w^{\pm 1}][\lambda]$. Then the extended symbolic Rees ring $R'_s(\bmp)$ is a pure subring of $R'_s(I)$. Therefore we know that $R'_s(\bmp)$ is also a Noetherian ring. \end{rm} \end{Remark} \section{Definition of $r$-nct and basic properties}\label{nctdefbasic} We define an $r$-nct and give basic properties of it in this section. \begin{Definition}\label{Def3.1} \begin{rm} Let $k$ be a field and $A = k[v^{\pm 1}, w^{\pm 1}]$ be a Laurent polynomial ring with two variables $v$, $w$. Let $r$ be a positive integer. An element $\varphi$ of $A$ is called an {\it equation of a negative curve with multiplicity $r$ over $k$} (or simply an {\it $r$-nct over $k$}) if the following two conditions are satisfied: \begin{enumerate} \item[(1)] $\varphi$ is irreducible in $A$ and $\varphi \in (v-1,w-1)^rA$. \item[(2)] Let $N_\varphi$ be the set defined as in (\ref{Nvarphi}). Let $P_\varphi$ be the convex hull of $N_\varphi$. Then $2|P_\varphi| < r^2$ is satisfied, where $|P_\varphi|$ is the area of the polygon $P_\varphi$. \end{enumerate} \end{rm} \end{Definition} \begin{Example}\label{123-nct} \begin{rm} Polynomials $v-1$, $w-1$, $\varphi_1=vw-1$ are $1$-ncts over any field $k$. The area of $P_{\varphi_1}$ is $0$. The polynomial \[ \varphi_2=-\varphi_1(v-1) - v(w-1)^2 = -v^2w - vw^2 +3vw-1 \] is a $2$-nct since it is irreducible. For the irreducibility we refer the reader to Lemma~2.1 in \cite{GAGK}. The area of $P_{\varphi_2}$ is $3/2$. The polynomial \[ \varphi_3 = -\varphi_2(v-1) + v(w-1)^3 = -1 + 6 v w - 4 v^2 w + v^3 w - 4 v w^2 + v^2 w^2 + v w^3 \] is a $3$-nct since it is irreducible. The area of $P_{\varphi_3}$ is $4$. The polynomial \begin{align*} \varphi'_3 = & \ -\varphi_2(vw-1) - (v-1)^2(w-1)vw \\ = & \ -1 + 5 v w - 3 v^2 w + v^3 w - 2 v w^2 - v^2 w^2 + v^2 w^3 \end{align*} is also a $3$-nct since it is irreducible. The area of $P_{\varphi'_3}$ is $4$ if the characteristic of $k$ is not $2$. (The Newton polygon $P_{\varphi'_3}$ depends on the characteristic of the base field. See Example~\ref{ExampleOfNct} (3).)
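These claims are mechanical to verify. The following Python/sympy sketch (a sanity check only, not used in any proof) computes, for $\varphi_2$, $\varphi_3$ and $\varphi'_3$, the largest $r$ with $\varphi \in (v-1,w-1)^r$ together with the area of the Newton polygon, including the drop of $P_{\varphi'_3}$ in characteristic $2$:
\begin{verbatim}
from sympy import symbols, expand, Poly, convex_hull, Point

v, w, s, u = symbols('v w s u')

phi2  = -v**2*w - v*w**2 + 3*v*w - 1
phi3  = -1 + 6*v*w - 4*v**2*w + v**3*w - 4*v*w**2 + v**2*w**2 + v*w**3
phi3p = -1 + 5*v*w - 3*v**2*w + v**3*w - 2*v*w**2 - v**2*w**2 + v**2*w**3

def order_at_one(phi):
    # order of vanishing at (1,1): substitute v = 1+s, w = 1+u and take
    # the minimal total degree of the expansion
    p = Poly(expand(phi.subs({v: 1 + s, w: 1 + u})), s, u)
    return min(sum(m) for m in p.monoms())

def newton_area(phi, modulus=None):
    p = Poly(phi, v, w, modulus=modulus) if modulus else Poly(phi, v, w)
    return abs(convex_hull(*[Point(*m) for m in p.monoms()]).area)

for r, phi in [(2, phi2), (3, phi3), (3, phi3p)]:
    assert order_at_one(phi) == r
    print(r, newton_area(phi))        # areas 3/2, 4, 4: each below r^2/2

print(newton_area(phi3p, modulus=2))  # 7/2: the polygon shrinks in char 2
\end{verbatim}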
If the characteristic of $k$ is not $2$, the Newton polygons $P_{\varphi_i}$'s and $P_{\varphi'_3}$ are as follows: \[ { \setlength\unitlength{1truecm} \begin{picture}(5,5)(-1,-3) \put(2.7,1.2){\mbox{\large $P_{\varphi_1}$}} \qbezier (0,-2) (0.5,-1.5) (1,-1) \put(-1,-2){\vector(1,0){5}} \put(0,-3){\vector(0,1){5}} \put(-0.1,-2.1){$\bullet$} \put(0.9,-1.1){$\bullet$} \end{picture} } \ \ \ \ { \setlength\unitlength{1truecm} \begin{picture}(5,5)(-1,-3) \put(2.7,1.2){\mbox{\large $P_{\varphi_2}$}} \put(-1,-2){\vector(1,0){5}} \put(0,-3){\vector(0,1){5}} \qbezier (0,-2) (0.5,-1) (1,0) \qbezier (0,-2) (1,-1.5) (2,-1) \qbezier (1,0) (1.5,-0.5) (2,-1) \put(-0.1,-2.1){$\bullet$} \put(0.9,-1.1){$\bullet$} \put(0.9,-0.1){$\bullet$} \put(1.9,-1.1){$\bullet$} \end{picture} } \] \[ { \setlength\unitlength{1truecm} \begin{picture}(5,5)(-1,-3) \put(2.7,1.2){\mbox{\large $P_{\varphi_3}$}} \put(-1,-2){\vector(1,0){5}} \put(0,-3){\vector(0,1){5}} \qbezier (0,-2) (0.5,-0.5) (1,1) \qbezier (0,-2) (1.5,-1.5) (3,-1) \qbezier (1,1) (2,0) (3,-1) \put(-0.1,-2.1){$\bullet$} \put(0.9,-1.1){$\bullet$} \put(0.9,-0.1){$\bullet$} \put(0.9,0.9){$\bullet$} \put(1.9,-1.1){$\bullet$} \put(1.9,-0.1){$\bullet$} \put(2.9,-1.1){$\bullet$} \end{picture} } \ \ \ \ { \setlength\unitlength{1truecm} \begin{picture}(5,5)(-1,-3) \put(2.7,1.2){\mbox{\large $P_{\varphi'_3}$}} \put(-1,-2){\vector(1,0){5}} \put(0,-3){\vector(0,1){5}} \qbezier (0,-2) (0.5,-1) (1,0) \qbezier (0,-2) (1.5,-1.5) (3,-1) \qbezier (1,0) (1.5,0.5) (2,1) \qbezier (2,1) (2.5,0) (3,-1) \put(-0.1,-2.1){$\bullet$} \put(0.9,-1.1){$\bullet$} \put(0.9,-0.1){$\bullet$} \put(1.9,0.9){$\bullet$} \put(1.9,-1.1){$\bullet$} \put(1.9,-0.1){$\bullet$} \put(2.9,-1.1){$\bullet$} \end{picture} } \] We can inductively construct an $r$-nct $\varphi_r$ by \[ \varphi_r = -\varphi_{r-1}(v-1) + (-1)^{r-1}v(w-1)^r \] for $r \ge 2$ as in Gonz\'alez-Anaya-Gonz\'alez-Karu~\cite{GAGK}. \end{rm} \end{Example} Negative curves are deeply related to $r$-ncts as follows: \begin{Proposition}\label{Prop3.2} Let $S_{a,b,c}$, $\bmp_{a,b,c}$ be the ring and the ideal as in Definition~\ref{Def1}. Let $d_1$ and $r_1$ be positive integers. Suppose that $f \in (S_{a,b,c})_{d_1}$ corresponds to $\varphi \lambda^{d_1} \in \left( \bigoplus_{(\alpha, \beta) \in d_1 P_{a,b,c} \cap \bZ^2} kv^\alpha w^\beta \right) \lambda^{d_1}$ under the identification (\ref{iden}).\footnote{ Remark that $N_\varphi$ is contained in $d_1 P_{a,b,c}$.} Then $f$ is irreducible in $S_{a,b,c}$ and contained in $[{\bmp_{a,b,c}}^{(r_1)}]_{d_1}$ with $\frac{d_1}{r_1} < \sqrt{abc}$ (that is, $f \in [{\bmp_{a,b,c}}^{(r_1)}]_{d_1}$ is a negative curve) if and only if the following four conditions are satisfied: \begin{enumerate} \item[(1)] $\varphi$ is irreducible in $A = k[v^{\pm 1}, w^{\pm 1}]$. \item[(2)] There exists an element of $N_\varphi$ on each edge of $d_1 P_{a,b,c}$. \item[(3)] $\varphi \in (v-1,w-1)^{r_1}A$. \item[(4)] $2|d_1 P_{a,b,c}| < {r_1}^2$. \end{enumerate} When this is the case, $\varphi$ is an $r_1$-nct. \end{Proposition} \Proof Recall that $|P_{a,b,c}| = 1/(2abc)$ as in Remark~\ref{Area}. Therefore the condition (4) is equivalent to $\frac{d_1}{r_1} < \sqrt{abc}$. The condition (3) is equivalent to $f \in [{\bmp_{a,b,c}}^{(r_1)}]_{d_1}$ by (\ref{nth}). Recall $r_1 > 0$. If either the condition (3) or $f \in [{\bmp_{a,b,c}}^{(r_1)}]_{d_1}$ is satisfied, $N_\varphi$ contains at least two elements.
Then, by Lemma~\ref{PrimeElementOfE}, $\varphi \lambda^{d_1}$ is irreducible in $S_{a,b,c} = E(P_{a,b,c}, \lambda)$ if and only if both (1) and (2) are satisfied. \qed \vspace{2mm} Here we state basic properties of $r$-ncts. \begin{Proposition}\label{Fact3.3} Let $k$ be a field and $A = k[v^{\pm 1}, w^{\pm 1}]$ be a Laurent polynomial ring with two variables $v$, $w$. We put $\bfp=(v-1,w-1)A$. Let $r$ be a positive integer. \begin{enumerate} \item A unit element of $A$ is of the form $cv^\alpha w^\beta$, where $c \in k^\times$, $\alpha, \beta \in \bZ$. If $u$ is a unit of $A$ and $\varphi$ is an $r$-nct, then $u\varphi$ is also an $r$-nct. Let $\xi: A \rightarrow A$ be a $k$-isomorphism such that $\xi(\bfp) = \bfp$. Then there exists $(a_{ij}) \in {\rm GL}(2, \bZ)$ such that $\xi(v) = v^{a_{11}}w^{a_{12}}$, $\xi(w) = v^{a_{21}}w^{a_{22}}$. If $\varphi$ is an $r$-nct, then so is $\xi(\varphi)$. \item Assume that $\varphi$ is an $r$-nct over $k$. \begin{enumerate} \item We have \begin{equation}\label{1dimensional} \bfp^r \cap \left( \bigoplus_{(\alpha, \beta) \in P_\varphi \cap \bZ^2} kv^\alpha w^\beta \right) = k \varphi . \end{equation} \item The number of points in $P_\varphi \cap \bZ^2$ is at most $\frac{r(r+1)}{2} + 1$. \item If $\zeta$ is a reducible element contained in $\bfp^r$, then $P_{\varphi}$ does not contain $N_\zeta$. In particular, if $r \ge 2$, $P_{\varphi}$ does not contain $r+1$ points on a line. \end{enumerate} \item Assume that $\varphi$ is an $r$-nct over $k$. Then there exists an element $c \in k^\times$ such that all the coefficients of $c\varphi$ are in the prime field of $k$. \item If $\varphi$ is an $r$-nct over $k$, then $\varphi \not\in \bfp^{r+1}$. \item Assume that $\varphi$ is an $r$-nct over $k$. Let $L/k$ be a field extension. Then $\varphi$ is also an $r$-nct over $L$. \item Let $NCT_r$ be the set of $r$-ncts over $k$. Consider the equivalence relation $\sim$ on $NCT_r$ generated by $\varphi \sim u\varphi$ and $\varphi \sim \xi(\varphi)$ as in (1). Then the quotient set $NCT_r/ \! \mathop{\sim}$ is a non-empty finite set for each $r$. \end{enumerate} \end{Proposition} \Proof It is easy to show (1). We omit the proof. We shall prove (2) (a). Assume the contrary. Let $\varphi'$ be an element contained in the left-hand side but not in the right-hand one. Since $\varphi \lambda$ is a prime element of $E(P_\varphi, \lambda)$ by Lemma~\ref{PrimeElementOfE}, $\varphi \lambda$, $\varphi' \lambda$ is an $E(P_\varphi, \lambda)$-regular sequence. Take a homogeneous element $h \in E(P_\varphi, \lambda)$ such that $\varphi \lambda$, $\varphi' \lambda$, $h$ is a homogeneous system of parameters of $E(P_\varphi, \lambda)$. Since $E(P_\varphi, \lambda)$ is a $3$-dimensional graded Cohen-Macaulay ring, we have \begin{align*} & \ \ell_{E(P_\varphi, \lambda)} \left( E(P_\varphi, \lambda) /(\varphi \lambda, \varphi' \lambda, h) \right) = e\left( (h), E(P_\varphi, \lambda) /(\varphi \lambda, \varphi' \lambda) \right) \\ \ge & \ \ell_{E(P_\varphi, \lambda)_{\bmp_\varphi}} \left( E(P_\varphi, \lambda)_{\bmp_\varphi} /(\varphi \lambda, \varphi' \lambda) E(P_\varphi, \lambda)_{\bmp_\varphi} \right) \cdot e\left( (h), E(P_\varphi, \lambda) /{\bmp_\varphi} \right) \\ \ge & \ r^2 (\deg h) , \end{align*} where $\ell$ is the length, $e$ is the multiplicity and $\bmp_\varphi = E(P_\varphi, \lambda) \cap \bfp A[\lambda]$.
Here we remark that the first inequality comes from the additive formula of multiplicities, and the second one comes from $\varphi \lambda, \varphi' \lambda \in \bmp_\varphi^r$ and $e((h), E(P_\varphi,\lambda)/\bmp_\varphi)) = e((\lambda^{\deg h}), k[\lambda]) = \deg h$. On the other hand, consider the Poincar\'e series \[ P(E(P_\varphi, \lambda), s) = \sum_{n \ge 0} \dim_k E(P_\varphi, \lambda)_n s^n = \frac{f(s)}{(1-s)^3} , \] where we remark that $E(P_\varphi, \lambda)$ is generated by elements of degree $1$ over $k$. Here $f(1)$ is equal to the multiplicity of $E(P_\varphi, \lambda)$, which is $2|P_\varphi|$ by a theorem of Ehrhart (e.g. see Part~II of \cite{Hibi}). Hence we have \[ P(E(P_\varphi, \lambda)/(\varphi \lambda, \varphi' \lambda, h), s) = \frac{f(s)(1-s)^2(1-s^{\deg h})}{(1-s)^3} \] since $\varphi \lambda$, $\varphi' \lambda$, $h$ is a regular sequence. Substituting $1$ for $s$, we have \[ \ell_{E(P_\varphi, \lambda)} \left( E(P_\varphi, \lambda) /(\varphi \lambda, \varphi' \lambda, h) \right) = 2|P_\varphi| (\deg h) < r^2 (\deg h) . \] It is a contradiction. Thus the equality (\ref{1dimensional}) is proved.\footnote{We can also prove (\ref{1dimensional}) using intersection theory on the blow-up of $\proj(E(P_\varphi, \lambda))$ at $(1,1)$ in the torus.} The left-hand side of (\ref{1dimensional}) is defined in $\bigoplus_{(\alpha, \beta) \in P_\varphi \cap \bZ^2} kv^\alpha w^\beta$ by $\frac{r(r+1)}{2}$ linear equations. Therefore (b) follows from (a). We shall prove (c). If $\zeta$ is a reducible element contained in $\bfp^r$, then $k \varphi$ does not contain $\zeta$ and $P_{\varphi}$ does not contain $N_\zeta$. Assume that $P_{\varphi}$ contains $r+1$ points on a line. Replacing $\varphi$ by some $u \xi(\varphi)$ as in (1), we may assume that $P_{\varphi}$ contains $(0,0)$, $(1,0)$, \ldots, $(r,0)$. (Here recall that $P_\varphi$ is a convex polygon.) Then we have $(v-1)^r \in \bfp^r$ and $N_{(v-1)^r} \subset P_\varphi$. It is a contradiction since $(v-1)^r$ is reducible for $r \ge 2$. We shall prove (3). Let $F$ be the prime field of $k$. The assertion immediately comes from \begin{align*} & \ \left( (v-1,w-1)^r F[v^{\pm 1}, w^{\pm 1}] \cap \left( \bigoplus_{(\alpha, \beta) \in P_\varphi \cap \bZ^2} Fv^\alpha w^\beta \right) \right) \otimes_Fk \\ = & \ (v-1,w-1)^r k[v^{\pm 1}, w^{\pm 1}] \cap \left( \bigoplus_{(\alpha, \beta) \in P_\varphi \cap \bZ^2} kv^\alpha w^\beta \right) = k \varphi . \end{align*} Next we shall prove (4). Suppose that an $r$-nct satisfies $\varphi \in \bfp^{r+1}$. Multiplying $\varphi$ by a unit of $A$, we may assume that $\varphi$ is a Laurent polynomial over the prime field and the origin is in $N_\varphi$. Assume that the characteristic of $k$ is a prime number $p$. (In the case of characteristic $0$, we can prove the assertion easier.) If both $\alpha$ and $\beta$ were divisible by $p$ for any $(\alpha,\beta) \in N_\varphi$, then $\varphi$ would be reducible since $\varphi$ is a Laurent polynomial over the prime field of characteristic $p$. Therefore we may assume that there exists $(\alpha', \beta') \in N_\varphi$ such that $p \not| \ \alpha'$. Then we have \[ 0 \neq v \frac{\partial \varphi}{\partial v} \in \bfp^r \cap \left( \bigoplus_{(\alpha, \beta) \in P_\varphi \cap \bZ^2} kv^\alpha w^\beta \right) = k \varphi . \] Here remark that the constant term of $v \frac{\partial \varphi}{\partial v}$ is $0$. It is a contradiction since $\varphi$ and $v \frac{\partial \varphi}{\partial v}$ are linearly independent over $k$. We shall prove (5).
We have only to show that $\varphi$ is irreducible in $L[v^{\pm 1}, w^{\pm 1}]$. Assume the contrary and suppose $\varphi = \psi_1\psi_2$ in $L[v^{\pm 1}, w^{\pm 1}]$. Then we have \[ P_\varphi = P_{\psi_1} + P_{\psi_2} , \] where the right-hand side is the Minkowski sum, that is $\{ \bma + \bmb \mid \bma \in P_{\psi_1}, \bmb \in P_{\psi_2} \}$. Suppose \begin{align*} & \psi_1 \in (v-1,w-1)^{r_1} L[v^{\pm 1}, w^{\pm 1}] \setminus (v-1,w-1)^{r_1+1} L[v^{\pm 1}, w^{\pm 1}] , \\ & \psi_2 \in (v-1,w-1)^{r_2} L[v^{\pm 1}, w^{\pm 1}] \setminus (v-1,w-1)^{r_2+1} L[v^{\pm 1}, w^{\pm 1}] . \end{align*} Then, by (4), we have $r = r_1+r_2$. Since $r^2 > 2|P_\varphi|$, we have \[ \frac{r_1}{\sqrt{2}} + \frac{r_2}{\sqrt{2}} = \frac{r}{\sqrt{2}} > \sqrt{|P_\varphi|} \ge \sqrt{|P_{\psi_1}|} + \sqrt{|P_{\psi_2}|} . \] Here the second inequality is called the Brunn-Minkowski inequality. Therefore either $r_1^2 > 2|P_{\psi_1}|$ or $r_2^2 > 2|P_{\psi_2}|$ is satisfied. Hence we know that some irreducible divisor $\varphi'$ of $\varphi$ is an $r'$-nct for some $r'$. By (3), we may assume that $\varphi'$ is a Laurent polynomial over the prime field. It is a contradiction since $\varphi$ is irreducible over $k$. We shall prove (6). Since $NCT_r$ contains $\varphi_r$ in Example~\ref{123-nct}, the set $NCT_r/\sim$ is not empty. Any $1$-nct is equivalent to $v-1$. Suppose $r \ge 2$. Let $\varphi$ be an $r$-nct. If $|P_{\varphi}| =0$, then all the points of $N_\varphi$ are on a line and we may assume that $\varphi$ is a polynomial in $v$. Then $(v-1)^r$ divides $\varphi$. It contradicts the irreducibility of $\varphi$. Therefore we have $|P_{\varphi}| >0$. Let $\Omega$ be the convex hull of the four points $(0, 0)$, $(\sqrt{2}r^2, 0)$, $((\sqrt{2}+1)r^2, r^2)$, $(0, r^2)$. Let $P$ be an integral convex polygon (a convex hull of a finite subset of $\bZ^2$ with positive area). Then, by (2), $r$-ncts $\varphi$ with $P_\varphi = P$ are equivalent to one another. Since $\Omega$ is bounded, there exist only finitely many integral convex polygons contained in $\Omega$. Therefore there exists a finite subset $F$ of $NCT_r$ such that any $r$-nct $\varphi$ with $P_\varphi \subset \Omega$ is equivalent to one of $F$. It is enough to prove that any integral convex polygon with area less than $r^2/2$ is contained in $\Omega$ after an affine transformation $\xi$ satisfying $\xi(\bZ^2) = \bZ^2$. Let $P$ be an integral convex polygon with area less than $r^2/2$. By an affine transformation preserving the lattice, we may assume that $(0,0)$ and $(\alpha_1, 0)$ are adjacent vertices of $P$, where $\alpha_1$ is a positive integer. Since $|P| < r^2/2$, we may assume that any point $(\alpha,\beta)$ in $P$ satisfies $0 \le \beta < r^2$. Let $(\alpha_2, \beta_2)$ be the other vertex adjacent to $(0,0)$. By a linear transformation of the form $\left( \begin{array}{cc} 1 & a \\ 0 & 1\end{array} \right)$ for some $a \in \bZ$, we may assume $\beta_2> \alpha_2\ge 0$. Then any point $(\alpha,\beta)$ in $P$ satisfies $0 \le \alpha$. Suppose that $P$ is not contained in $\Omega$. Take $(\alpha_3,\beta_3) \in P \setminus \Omega$. Let $\ell$ be the line through $(0,0)$ and $(\alpha_2, \beta_2)$. The distance of the line $\ell$ and the point $(\alpha_3,\beta_3)$ is bigger than that of $\ell$ and $(\sqrt{2}r^2,0)$. Therefore \[ |P| > \ \mbox{(the area of the triangle $(0,0)$, $(\alpha_2,\beta_2)$, $(\sqrt{2}r^2,0)$)} = \frac{\beta_2 \sqrt{2}r^2}{2} > \frac{r^2}{2} . \] It is a contradiction. Therefore we have $P \subset \Omega$.
\qed \begin{Example}\label{ExampleOfNct} \begin{rm} \begin{enumerate} \item The quotient set $NCT_1/ \! \mathop{\sim}$ consists of the equivalence class of $\varphi_1$ in Example~\ref{123-nct}. \item The quotient set $NCT_2/ \! \mathop{\sim}$ consists of the equivalence class of $\varphi_2$ in Example~\ref{123-nct}. In fact, suppose that $\varphi$ is a $2$-nct. By Proposition~\ref{Fact3.3} (2) (c), $P_\varphi$ does not contain three points on a line. As in the proof of Proposition~\ref{Fact3.3} (6), we may assume that $P_\varphi$ has three successive vertices $(\alpha_2, \beta_2)$, $(0,0)$, $(1,0)$ where $0 \le \alpha_2 < \beta_2 < 4$. If $(\alpha_2, \beta_2)=(0,1)$, then $P_\varphi$ contains $(1,1)$. Putting $\zeta = (v-1)(w-1) \in \bfp^2$, $P_\varphi$ contains $N_\zeta$. It contradicts Proposition~\ref{Fact3.3} (2) (c). Then we know $(\alpha_2, \beta_2) = (2,3)$. In this case, $\varphi$ is equivalent to $\varphi_2$. In the case of $r = 1, 2$, $NCT_r/ \! \mathop{\sim}$ consists of the equivalence class of $\varphi_r$ and $P_{\varphi_r}$ is independent of $k$. Therefore, for $r= 1, 2$, the existence of negative curves contained in $[{\bmp_{a,b,c}}^{(r)}]_d$ does not depend on the base field $k$ (see Proposition~\ref{Prop3.2}). \item One can prove that the quotient set $NCT_3/ \! \mathop{\sim}$ consists of two elements, which are represented by $\varphi_3$ and $\varphi'_3$ in Example~\ref{123-nct}, respectively. The polygon $P_{\varphi_3}$ is the triangle with vertices $(0,0)$, $(3,1)$, $(1,3)$. If the characteristic of $k$ is not $2$, then $P_{\varphi'_3}$ is the tetragon as in Example~\ref{123-nct}. Assume that the characteristic of $k$ is $2$. Since the coefficient of $vw^2$ in $\varphi'_3$ is $-2$, $\varphi'_3$ is a $3$-nct such that $P_{\varphi'_3}$ is the triangle with vertices $(0,0)$, $(3,1)$ and $(2,3)$. In the case where $(a,b,c)=(9,10,13)$ and ${\rm ch}(k) = 2$, there exists a negative curve $f \in [{\bmp_{9,10,13}}^{(3)}]_{100}$ corresponding to a $3$-nct which is equivalent to $\varphi'_3$ as in the proof of Theorem~4.1 in \cite{MG}. There does not exist a negative curve in ${\bmp_{9,10,13}}^{(3)}$ if ${\rm ch}(k) \neq 2$. \end{enumerate} \end{rm} \end{Example} \section{The Cox ring of the blow-up of $X_{\Delta_{\varphi}}$ and the symbolic Rees ring of the Ehrhart ring $E(P_{\varphi}, \lambda)$} \label{sect4} Let $f \in [{\bmp_{a,b,c}}^{(r_1)}]_{d_1}$ be a negative curve and $\varphi$ be an $r_1$-nct such that $f = \varphi \lambda^{d_1}$ as in Proposition~\ref{Prop3.2}. Then we have \[ |P_\varphi| \le |d_1 P_{a,b,c}| < r_1^2/2 . \] From the results of Gonz\'alez-Anaya-Gonz\'alez-Karu~\cite{GAGK}, it can be inferred that $R'_s(\bmp_{a,b,c})$ tends to be finitely generated when $|d_1 P_{a,b,c}|$ is close to $|P_\varphi|$, and $R'_s(\bmp_{a,b,c})$ tends to be infinitely generated when $|d_1 P_{a,b,c}|$ is close to $r_1^2/2$. Therefore it is natural to ask: \[ \mbox{Is $R'_s(\bmp_\varphi)$ a Noetherian ring?} \] Here $\bmp_\varphi$ is a prime ideal of the Ehrhart ring $E(P_\varphi,\lambda) \subset k[v^{\pm 1}, w^{\pm 1}][\lambda]$ defined by $\bmp_\varphi = E(P_\varphi,\lambda) \cap (v-1,w-1)k[v^{\pm 1}, w^{\pm 1}][\lambda]$. We put $X_{\Delta_{a,b,c}} = \proj(E(P_{a,b,c}, \lambda))$ and $X_{\Delta_\varphi} = \proj(E(P_\varphi, \lambda))$, where $\Delta_{a,b,c}$ and $\Delta_\varphi$ are the fans corresponding to the toric varieties $X_{\Delta_{a,b,c}}$ and $X_{\Delta_\varphi}$, respectively.
Let $Y_{\Delta_{a,b,c}}$ (resp.\ $Y_{\Delta_\varphi}$) be the blow-up of $X_{\Delta_{a,b,c}}$ (resp.\ $X_{\Delta_\varphi}$) at the point $(1,1)$ in the torus $\spec k[v^{\pm 1}, w^{\pm 1}]$. Another reason why we are studying $R'_s(\bmp_\varphi)$ is that $Y_{\Delta_\varphi}$ contains a negative curve that is birational to the negative curve in $Y_{\Delta_{a,b,c}}$. We are interested in the birational class of the negative curve in $Y_{\Delta_{a,b,c}}$. (The author does not know any example in which the negative curve in $Y_{\Delta_{a,b,c}}$ is not rational.) \vspace{2mm} We shall prove Theorem~\ref{Thm3.6} in this section. Before proving this theorem, we shall show the following lemma: \begin{Lemma}\label{Lemma8.9} With notation as in Theorem~\ref{Thm3.6}, the following conditions are equivalent: \begin{enumerate} \item[(i)] $H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + n C_\varphi)) = 0$ for any $n > 0$, \item[(ii)] $H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + C_\varphi)) = 0$, \item[(iii)] $C_\varphi . (K_{Y_{\Delta_\varphi}} + C_\varphi) = -2$, \item[(iv)] $C_\varphi . (K_{Y_{\Delta_\varphi}} + C_\varphi) < 0$, \item[(v)] $I_\varphi = \frac{r(r-1)}{2}$, \item[(vi)] $I_\varphi \le \frac{r(r-1)}{2}$. \end{enumerate} If $C_\varphi \simeq \bP_k^1$, then {\rm (ii)} is satisfied. In the case where $k$ is algebraically closed, the converse is also true. \end{Lemma} \Proof ${\rm (i)} \Rightarrow {\rm (ii)}$, ${\rm (iii)} \Rightarrow {\rm (iv)}$, ${\rm (v)} \Rightarrow {\rm (vi)}$ are obvious. We shall prove ${\rm (iv)} \Rightarrow {\rm (ii)}$. Assume that there exists an effective Weil divisor $D$ on $Y_{\Delta_\varphi}$ that is linearly equivalent to $K_{Y_{\Delta_\varphi}}+C_\varphi$. By (iv), we have $C_\varphi . D < 0$, and hence there exists an effective Weil divisor $D'$ such that $D = C_\varphi+ D'$. Let $\pi_\varphi: Y_{\Delta_\varphi} \rightarrow X_{\Delta_\varphi}$ be the blow-up at the point $(1,1)$. Since \[ K_{X_{\Delta_\varphi}} + (\pi_\varphi)_*(C_\varphi) \sim (\pi_\varphi)_*(D) = (\pi_\varphi)_*(C_\varphi) + (\pi_\varphi)_*(D') , \] we have \[ (\pi_\varphi)_*(D') - K_{X_{\Delta_\varphi}} \sim 0 . \] This contradicts the fact that the left-hand side is a non-zero effective divisor. We shall prove ${\rm (ii)} \Rightarrow {\rm (vi)}$. We have \[ 0 = H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + C_\varphi)) = \bfp^{r-1} \cap \left( \bigoplus_{(\alpha,\beta) \in {P_\varphi}^\circ \cap \bZ^2} kv^\alpha w^\beta \right) , \] where ${P_\varphi}^\circ$ is the interior of $P_\varphi$ and $\bfp = (v-1,w-1)k[v^{\pm 1}, w^{\pm 1}]$. Since $\bfp^{r-1}$ is defined by $\frac{r(r-1)}{2}$ linear equations in $k[v^{\pm 1}, w^{\pm 1}]$, the number of points in ${P_\varphi}^\circ \cap \bZ^2$ is less than or equal to $\frac{r(r-1)}{2}$. Next we shall prove ${\rm (iii)} \Leftrightarrow {\rm (v)}$ and ${\rm (vi)} \Rightarrow {\rm (iv)}$. By Pick's theorem, we have \[ 2|P_\varphi| = B_\varphi + 2I_\varphi -2 . \] Since $C_\varphi^2 = 2|P_\varphi| - r^2$ and $C_\varphi . (-K_{Y_{\Delta_\varphi}}) = B_\varphi - r$, we have \[ C_\varphi . (K_{Y_{\Delta_\varphi}} + C_\varphi) + 2 = - B_\varphi + r +2|P_\varphi| - r^2 +2 = 2I_\varphi - r(r-1) . \] It is easy to check ${\rm (iii)} \Leftrightarrow {\rm (v)}$ and ${\rm (vi)} \Rightarrow {\rm (iv)}$. We shall prove ${\rm (ii)} \Rightarrow {\rm (i)}$. As we have already seen, (ii) is equivalent to (iv). Therefore we have $C_\varphi . (K_{Y_{\Delta_\varphi}} + nC_\varphi) < 0$ for any $n > 0$.
Since any effective divisor that is linearly equivalent to $K_{Y_{\Delta_\varphi}} + nC_\varphi$ has $C_\varphi$ as a component, the multiplication by $C_\varphi$ \[ H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + (n-1) C_\varphi)) \rightarrow H^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + n C_\varphi)) \] is bijective. Therefore (ii) implies (i). Next we shall prove ${\rm (vi)} \Rightarrow {\rm (v)}$. We may assume that $k$ is algebraically closed. Remark that \begin{equation}\label{P1} h^0({\cal O}_{Y_{\Delta_\varphi}}( K_{Y_{\Delta_\varphi}} + C_\varphi)) = h^2({\cal O}_{Y_{\Delta_\varphi}}( - C_\varphi)) = h^1({\cal O}_{C_\varphi}). \end{equation} Since (vi) is equivalent to (ii), we have $h^1({\cal O}_{C_\varphi}) = 0$ and $C_\varphi \simeq \bP_k^1$. By definition, $C_\varphi$ does not meet the singular points of $Y_{\Delta_\varphi}$. Then, by the adjunction formula, we have $\omega_{Y_{\Delta_\varphi}}( C_\varphi)|_{C_\varphi} = \omega_{C_\varphi}$. Since (iii) is satisfied, (v) holds. If $C_\varphi \simeq \bP^1_k$, (ii) is satisfied by (\ref{P1}). (ii) implies $C_\varphi \simeq \bP_k^1$ when $k$ is algebraically closed. \qed \vspace{2mm} Now we start to prove Theorem~\ref{Thm3.6}. $(1)\Rightarrow (2)$ is trivially true. We shall prove $(1)\Rightarrow (3)\Rightarrow (4)\Rightarrow (5)\Rightarrow (6)\Rightarrow (10)$ for any integral convex polygon $P$ and the corresponding complete $2$-dimensional fan $\Delta$. $(1)\Rightarrow (3)$ is a basic fact. (Any nef and big $\bQ$-Cartier divisor $D$ on a normal projective surface satisfies $D^2 > 0$.) We shall show $(3)\Rightarrow (4)$. Recall that $-K_{X_\Delta}= D_1 + \cdots + D_s$ is a $\bQ$-Cartier divisor, where each $D_i$ is a toric prime divisor corresponding to each edge of $P$. Take a positive integer $q$ such that $qP_{-K_{X_\Delta}}$ is an integral convex polygon and $-qK_{X_\Delta}$ is a Cartier divisor on $X_\Delta$. By the Riemann-Roch theorem, we know that $\chi({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}}))$ is a polynomial in $n$ of degree $2$ such that the coefficient of $n^2$ is $(-qK_{X_{\Delta}})^2/2$. On the other hand, $h^0({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}})) = {}^\#(nqP_{-K_{X_{\Delta}}} \cap \bZ^2)$ is a polynomial of degree $2$ for $n\ge 0$ (it is called the Ehrhart polynomial) and the coefficient of $n^2$ is $|qP_{-K_{X_{\Delta}}}|$. Since \[ \chi({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}})) = h^0({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}}))-h^1({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}}))+h^2({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}})) \] and $h^2({\cal O}_{X_{\Delta}}(-nqK_{X_{\Delta}})) = h^0({\cal O}_{X_{\Delta}}(K_{X_\Delta}+nqK_{X_{\Delta}})) = 0$ for $n \ge 0$, we obtain \[ (-K_{X_{\Delta}})^2/2 \le |P_{-K_{X_{\Delta}}}| . \] Let $E$ be the exceptional divisor of the blow-up $Y_\Delta \rightarrow X_\Delta$ at $(1,1)$ in the torus. Since $(-K_{Y_{\Delta}})^2 = (-K_{X_{\Delta}})^2 + E^2=(-K_{X_{\Delta}})^2-1$, we obtain $|P_{-K_{X_{\Delta}}}| > \frac{1}{2}$. We shall prove $(4)\Rightarrow (5)$. We have $h^0({\cal O}_{X_\Delta}( -nq K_{X_{\Delta}})) = {}^\#(nqP_{-K_{X_{\Delta}}} \cap \bZ^2) $. By definition, $H^0(Y_\Delta, {\cal O}_{Y_\Delta}(-nq K_{Y_{\Delta}}))$ is a $k$-vector subspace of $H^0(X_\Delta, {\cal O}_{X_\Delta}(-nq K_{X_{\Delta}}))$ defined by $nq(nq+1)/2$ linear equations. Therefore we have \[ {}^\#(nqP_{-K_{X_{\Delta}}} \cap \bZ^2) - \frac{nq(nq+1)}{2} \le h^0({\cal O}_{Y_\Delta}( -nq K_{Y_{\Delta}})) . 
\] Since ${}^\#(nqP_{-K_{X_{\Delta}}} \cap \bZ^2)$ is a polynomial of degree $2$ for $n\ge 0$ whose coefficient of $n^2$ is $|qP_{-K_{X_{\Delta}}}|$, we know that $-K_{Y_{\Delta}}$ is big if $|P_{-K_{X_{\Delta}}}| > \frac{1}{2}$. Next we shall prove $(5)\Rightarrow (6)$. We may assume that $k$ is an algebraically closed field. First we shall construct a refinement $\Delta'$ of $\Delta$ such that \begin{itemize} \item $X_{\Delta'}$ is a smooth toric variety, and \item $P_{-K_{X_\Delta}}=P_{-K_{X_{\Delta'}}}$ . \end{itemize} Let \[ \{ \bRo \bma_1, \bRo \bma_2, \ldots, \bRo \bma_n \} \] be the set of the $1$-dimensional cones in $\Delta$. We assume that each $\bma_i$ is the shortest integer vector in the cone $\bRo \bma_i$. Assume that $\bma_1$, $\bma_2$, \ldots, $\bma_n$ are arranged counterclockwise around the origin. We regard each $\bma_i$ as a row vector of length $2$. We shall construct $\Delta'$ using induction on \begin{equation}\label{bma} \left| {\rm det}\left( \begin{array}{l} \bma_1 \\ \bma_2 \end{array} \right) \times {\rm det}\left( \begin{array}{l} \bma_2 \\ \bma_3 \end{array} \right) \times \cdots \times {\rm det}\left( \begin{array}{l} \bma_{n-1} \\ \bma_n \end{array} \right) \times {\rm det}\left( \begin{array}{l} \bma_n \\ \bma_1 \end{array} \right) \right| . \end{equation} Suppose that $X_\Delta$ is not smooth. After a linear transformation in ${\rm SL}(2,\bZ)$, we may assume that $\bma_1 = (1,0)$ and $\bma_2 = (a,b)$, where $b > a > 0$. Here we put $\bmb = (1,1)$. Then \[ \left| {\rm det}\left( \begin{array}{l} \bma_1 \\ \bmb \end{array} \right) \times {\rm det}\left( \begin{array}{l} \bmb \\ \bma_2 \end{array} \right) \times {\rm det}\left( \begin{array}{l} \bma_2 \\ \bma_3 \end{array} \right) \times \cdots \times {\rm det}\left( \begin{array}{l} \bma_{n-1} \\ \bma_n \end{array} \right) \times {\rm det}\left( \begin{array}{l} \bma_n \\ \bma_1 \end{array} \right) \right| \] is strictly less than (\ref{bma}). Let $\bar{\Delta}$ be the complete fan in $\bR^2$ with $1$-dimensional cones \[ \{ \bRo \bma_1, \bRo \bmb, \bRo \bma_2, \ldots, \bRo \bma_n \} . \] Here one can check $P_{-K_{X_\Delta}}=P_{-K_{X_{\bar{\Delta}}}}$. Repeating this process, we can construct $\Delta'$ satisfying the required conditions. Let $Y_{\Delta'}$ be the blow-up of $X_{\Delta'}$ at the point $(1,1)$ in the torus. Then $Y_{\Delta'} \rightarrow Y_\Delta$ is a resolution of singularities. Since $P_{-K_{X_\Delta}}=P_{-K_{X_{\Delta'}}}$, we have \[ H^0(Y_\Delta, {\cal O}_{Y_\Delta}(-nK_{Y_\Delta})) = \bfp^n \cap \left( \bigoplus_{(\alpha,\beta) \in nP_{-K_{X_\Delta}} \cap \bZ^2} kv^\alpha w^\beta \right) = H^0(Y_{\Delta'}, {\cal O}_{Y_{\Delta'}}(-nK_{Y_{\Delta'}})), \] where $\bfp = (v-1,w-1)k[v^{\pm 1},w^{\pm 1}]$. Hence, if $-K_{Y_\Delta}$ is big, so is $-K_{Y_{\Delta'}}$. Then, by Theorem~1 in Testa-V\'arilly-Alvarado-Velasco~\cite{TVAV}, ${\rm Cox}(Y_{\Delta'})$ is finitely generated. Since $Y_{\Delta'} \rightarrow Y_\Delta$ is surjective, ${\rm Cox}(Y_{\Delta})$ is also finitely generated by Theorem~1.1 in Okawa~\cite{Okawa}. We have completed the proof of $(5)\Rightarrow (6)$. We shall prove $(6)\Rightarrow (10)$. Let $E$ be the exceptional divisor of $Y_{\Delta}\rightarrow X_{\Delta}=\proj(E(P, \lambda))$. Let $H$ be the pullback of ${\mathcal O}_{X_{\Delta}}(1)$. We put $\bmp = E(P,\lambda) \cap \bfp k[v^{\pm 1}, w^{\pm 1}, \lambda]$.
Then the extended symbolic Rees ring $R'_s(\bmp)$ coincides with the multisection ring \[ \bigoplus_{r, d \in \bZ} H^0(Y_{\Delta}, {\cal O}_{Y_{\Delta}}(dH -rE)) , \] which is a pure subring of ${\rm Cox}(Y_{\Delta})$. Therefore, if ${\rm Cox}(Y_{\Delta})$ is finitely generated, so is $R'_s(\bmp)$. In the rest of this proof, we consider an $r$-nct $\varphi$, the integral convex polygon $P_\varphi$ and the corresponding fan $\Delta_\varphi$. We shall prove $(5)\Rightarrow (8)$. Suppose that there exists an effective divisor $D$ that is linearly equivalent to $K_{Y_{\Delta_\varphi}} + nC_\varphi$. Then we have $ n C_\varphi \sim D -K_{Y_{\Delta_\varphi}} $ and $C_\varphi$ is big. This contradicts \[ {C_\varphi}^2 = 2|P_\varphi| - r^2 < 0 . \] $(2)\Rightarrow (7)$ follows from $(-K_{Y_{\Delta_\varphi}}).C_\varphi = B_\varphi - r$. We shall prove $(7) \Rightarrow (8)$. Since $0 \le B_\varphi - r = (-K_{Y_{\Delta_\varphi}}). C_\varphi$ and $C_\varphi^2< 0$, (iv) in Lemma~\ref{Lemma8.9} is satisfied. Therefore (i) in Lemma~\ref{Lemma8.9} is satisfied. $(8)\Leftrightarrow (9)$ is nothing but ${\rm (i)} \Leftrightarrow {\rm (v)}$ in Lemma~\ref{Lemma8.9}. $(11) \Rightarrow (9)$ and (b) follow from Lemma~\ref{Lemma8.9}. In the rest of the proof, we prove $(8)\Rightarrow (10)$ and (c). When we prove (c), we may assume that $k$ is a finite field. Then we can prove (10) in the same way as Theorem~1 in Cutkosky~\cite{C}. Next we shall prove $(8)\Rightarrow (10)$. By (c), we may assume that $k$ is a field of characteristic $0$. Furthermore, we may assume that $k$ is the field of complex numbers $\bC$. Since \[ 0=h^0({\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + nC_\varphi)) = h^2({\cal O}_{Y_{\Delta_\varphi}}(-nC_\varphi)) = h^1({\cal O}_{nC_\varphi}) \] for any $n > 0$, we can contract $C_\varphi$, that is, there exists a birational morphism $\xi:Y_{\Delta_\varphi} \rightarrow Z$ such that $\xi(C_\varphi)$ is a point, where $Z$ is a normal projective surface with at most rational singularities. Here put $H= (\pi_\varphi)^*({\cal O}_{X_{\Delta_\varphi}}(1))$, where $\pi_\varphi: Y_{\Delta_\varphi} \rightarrow X_{\Delta_\varphi}$ is the blow-up at $(1,1)$ in the torus. Let $E$ be the exceptional divisor of $\pi_\varphi$. Let $i$ and $j$ be positive integers such that $(iH-jE).C_\varphi = 0$. Then there exists $n > 0$ such that $n\xi_*(iH-jE)$ is a very ample Cartier divisor on $Z$. Then one can prove that $n(iH-jE)$ is a semi-ample Cartier divisor on $Y_{\Delta_\varphi}$. It is easy to verify that $R'_s(\bmp_\varphi)$ is Noetherian. \qed \begin{Example}\label{ExampleOfGGK} \begin{rm} Suppose that the characteristic of $k$ is $0$. In Gonz\'alez-Anaya-Gonz\'alez-Karu~\cite{GAGK}, they constructed two distinct $r$-ncts $\varphi_r$ and $\varphi'_r$ over $k$ for each $r \ge 3$. Here $P_{\varphi_r}$ is a triangle with vertices $(-1,-1)$, $(r-1,0)$ and $(0,r-1)$. The polygon $P_{\varphi'_r}$ is a tetragon with vertices $(-1,-1)$, $(r-1,0)$, $(\frac{r-1}{2},r-1)$, $(\frac{r-3}{2},r-2)$ in the case where $r$ is odd, and $(-1,-1)$, $(r-1,0)$, $(\frac{r}{2},r-2)$, $(\frac{r-2}{2},r-1)$ in the case where $r$ is even. Both $P_{\varphi_r}$ and $P_{\varphi'_r}$ contain $\frac{r(r+1)}{2} + 1$ lattice points. Both $Y_{\Delta_{\varphi_r}}$ and $Y_{\Delta_{\varphi'_r}}$ satisfy the condition (1) in Theorem~\ref{Thm3.6}. We shall give an outline of the proof of the above assertions for $\varphi'_r$. (One can prove the same assertions for $\varphi_r$ in the same way.) Consider the tetragon $P$ with four vertices as above.
Obviously it contains $\frac{r(r+1)}{2} + 1$ lattice points. Put $\bfp = (v-1,w-1)k[v^{\pm 1}, w^{\pm 1}]$. Since \begin{equation}\label{pr} \bfp^r \cap \left( \bigoplus_{(\alpha,\beta) \in P \cap \bZ^2} kv^\alpha w^\beta \right) \end{equation} is defined by $\frac{r(r+1)}{2}$ linear equations in $\left( \bigoplus_{(\alpha,\beta) \in P \cap \bZ^2} kv^\alpha w^\beta \right)$, (\ref{pr}) is not $0$. Let $\varphi'_r$ be a non-zero element in (\ref{pr}). Using Lemma~\ref{EU} below, we have \[ \bfp^r \cap \left( \bigoplus_{ \substack{ (\alpha,\beta) \in P \cap \bZ^2 \\ (\alpha,\beta) \neq (-1,-1) }} kv^\alpha w^\beta \right) = 0 . \] Therefore the coefficient of $v^{-1}w^{-1}$ in $\varphi'_r$ is not zero. In the same way, we know that the coefficients of monomials corresponding to the vertices of $P$ are not zero. Thus we obtain $P_{\varphi'_r} = P$. By Lemma~2.1 in \cite{GAGK}, we know $\varphi'_r$ is irreducible. Since $|P| = \frac{r^2-1}{2}$, we know that $\varphi'_r$ is an $r$-nct. Next we shall prove that $-K_{Y_{\Delta_{\varphi'_r}}}$ is nef and big. It is easy to see that $|P_{-K_{X_{\Delta_{\varphi'_r}}}}| > \frac{1}{2}$. Therefore $-K_{Y_{\Delta_{\varphi'_r}}}$ is big by Theorem~\ref{Thm3.6}. Let $V$ be the closure of \[ \spec(k[v^{\pm 1}, w^{\pm 1}]/(w-1)) \] in $X_{\Delta_{\varphi'_r}}$. Let $D$ be the toric prime divisor on $X_{\Delta_{\varphi'_r}}$ corresponding to the bottom edge of $P_{\varphi'_r}$. Let $\tilde{V}$ and $\tilde{D}$ be the proper transforms of $V$ and $D$, respectively. Then we know $-K_{Y_{\Delta_{\varphi'_r}}}$ is linearly equivalent to $\tilde{V}+\tilde{D}$. Since $V^2 > 1$ and $D^2 > 0$, we obtain $\tilde{V}^2>0$ and $\tilde{D}^2>0$. Thus we know $-K_{Y_{\Delta_{\varphi'_r}}}$ is nef. \end{rm} \end{Example} \begin{Lemma}\label{EU} Let $k$ be a field of characteristic $0$ and $n$ be a positive integer. For a subset $U$ of $\bZ^2$, we put \[ k\seisei{U} = \{ \sum_{(\alpha,\beta)\in U} c_{(\alpha,\beta)}v^\alpha w^\beta \mid c_{(\alpha,\beta)} \in k \} . \] Let $L$ be a line in $\bR^2$ such that ${}^\#(L \cap U) = n$. Put $U' = U \setminus (L \cap U)$. Then we have an isomorphism of $k$-vector spaces \[ k\seisei{U} \cap (v-1,w-1)^nk[v^{\pm 1}, w^{\pm 1}] \simeq k\seisei{U'} \cap (v-1,w-1)^{n-1}k[v^{\pm 1}, w^{\pm 1}] . \] \end{Lemma} We omit a proof of Lemma~\ref{EU}. We can prove this lemma in the same way as the proof of Lemma~4.5 in \cite{KN}. \begin{Remark}\label{Ex3.7} \begin{rm} Let $\varphi$ be an $r$-nct. By Proposition~\ref{Fact3.3} (2) (b), we have ${}^\#(P_\varphi \cap \bZ^2) \le \frac{r(r+1)}{2} + 1$. In many cases ${}^\#(P_\varphi \cap \bZ^2) = \frac{r(r+1)}{2} + 1$ is satisfied. Here we show that it is equivalent to $B_\varphi = r+1$. If $B_\varphi = r+1$ is satisfied, we know $I_\varphi = \frac{r(r-1)}{2}$ by Theorem~\ref{Thm3.6}. Thus we obtain \[ {}^\#(P_\varphi \cap \bZ^2) = B_\varphi + I_\varphi = \frac{r(r+1)}{2} + 1 . \] Conversely assume $B_\varphi + I_\varphi = \frac{r(r+1)}{2} + 1$. Since \[ 0 < r^2-2|P_\varphi| = r^2 - 2I_\varphi - B_\varphi + 2 = r^2 - 2(I_\varphi + B_\varphi) + B_\varphi + 2 = B_\varphi -r \] by Pick's theorem, we know $I_\varphi = \frac{r(r-1)}{2}$ by Theorem~\ref{Thm3.6}. Therefore we have $B_\varphi = {}^\#(P_\varphi \cap \bZ^2) - I_\varphi = r+1$. In particular, the condition (7) in Theorem~\ref{Thm3.6} is satisfied if ${}^\#(P_\varphi \cap \bZ^2) = \frac{r(r+1)}{2} + 1$. 
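For instance, as a quick illustrative check using only the data of Example~\ref{123-nct}: for $r = 3$ and the $3$-nct $\varphi_3$, the polygon $P_{\varphi_3}$ is the triangle with vertices $(0,0)$, $(3,1)$, $(1,3)$, so that $|P_{\varphi_3}| = 4$ and, summing the gcds of the edge vectors, \[ B_{\varphi_3} = \gcd(3,1) + \gcd(2,2) + \gcd(1,3) = 4 = r+1 . \] Pick's theorem then gives $I_{\varphi_3} = |P_{\varphi_3}| - B_{\varphi_3}/2 + 1 = 3 = \frac{r(r-1)}{2}$, and hence ${}^\#(P_{\varphi_3} \cap \bZ^2) = B_{\varphi_3} + I_{\varphi_3} = 7 = \frac{r(r+1)}{2} + 1$, in accordance with the equivalence above.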
\end{rm} \end{Remark} \begin{Remark}\label{SMC} \begin{rm} Assume that there exists a negative curve $f \in [{\bmp_{a,b,c}}^{(r)}]_d$ for some pairwise coprime positive integers $a$, $b$, $c$. Then we know $\dim_k[{\bmp_{a,b,c}}^{(r)}]_d = 1$ for the same reason as in Proposition~\ref{Fact3.3} (2) (a). Since $[{\bmp_{a,b,c}}^{(r)}]_d$ is defined in $[S_{a,b,c}]_d$ by $\frac{r(r+1)}{2}$ linear equations, we know $\dim_k[S_{a,b,c}]_d \le \frac{r(r+1)}{2} + 1$. Let $\varphi$ be the $r$-nct corresponding to $f$ as in Proposition~\ref{Prop3.2}. Let $P$ be the convex hull of $dP_{a,b,c} \cap \bZ^2$. Then we have \[ P_\varphi \subset P \subset dP_{a,b,c} . \] Usually it is very difficult to determine $P_\varphi$. Here assume ${}^\#(dP_{a,b,c} \cap \bZ^2) = \dim_k[S_{a,b,c}]_d = \frac{r(r+1)}{2} + 1$. Then $P$ contains exactly $\frac{r(r+1)}{2} + 1$ lattice points. Since $|P| \le |dP_{a,b,c}| < r^2/2$, we know the number of lattice points in the boundary of $P$ is greater than or equal to $r+1$ and that in the interior of $P$ is less than or equal to $r(r-1)/2$ by Pick's theorem. Since the interior of $P_\varphi$ is contained in that of $P$, the condition (9) in Theorem~\ref{Thm3.6} is satisfied for $\varphi$ by Lemma~\ref{Lemma8.9}. \end{rm} \end{Remark} \begin{Remark}\label{rational} \begin{rm} Assume that $k$ is algebraically closed and there exists a negative curve $f \in [{\bmp_{a,b,c}}^{(r)}]_d$ for some pairwise coprime positive integers $a$, $b$, $c$. Let $\varphi$ be the $r$-nct corresponding to $f$ as in Proposition~\ref{Prop3.2}. Then the negative curve $C$ in $Y_{\Delta_{a,b,c}}$ is birational to $C_\varphi$ in $Y_{\Delta_{\varphi}}$. If the condition (9) in Theorem~\ref{Thm3.6} is satisfied for $\varphi$, we know that $C$ is a rational curve. If $r=1$, it is easy to check $C \simeq \bP^1_k$ (e.g.\ Lemma~3.2 in \cite{KN}). Suppose $r=2$. Since the unique $2$-nct satisfies (1) in Theorem~\ref{Thm3.6} (cf.\ Example~\ref{ExampleOfNct} (2), Example~\ref{ExampleOfGGK}), $C$ is a rational curve. It is easy to see that $C$ is singular if and only if ${}^\#(d {P_{a,b,c}}^\circ \cap \bZ^2) > 1$. There are many examples in which $C \simeq \bP^1_k$ (e.g.\ $(a,b,c) = (3,7,8)$, $(16,97,683), \ldots$). There are also many examples in which $C$ is singular (e.g.\ $(a,b,c) = (5,77,101)$, $(107,159,173), \ldots$). In the case where $(a,b,c) = (3,7,8)$, $(5,77,101)$, $R'_s(\bmp_{a,b,c})$ is Noetherian. In the case where $(a,b,c) = (16,97,683)$, $(107,159,173)$, $R'_s(\bmp_{a,b,c})$ is not Noetherian. Assume that $r = 3$. In this case any $3$-nct satisfies (1) in Theorem~\ref{Thm3.6} over any field $k$. In fact, when ${\rm ch}(k) \neq 2$, we can prove it in the same way as Example~\ref{ExampleOfGGK}. Assume that ${\rm ch}(k) = 2$. For $\varphi_3$ in Example~\ref{123-nct}, we can also prove it in the same way as Example~\ref{ExampleOfGGK}. Consider $\varphi'_3$ in Example~\ref{123-nct}. Remark that $P_{\varphi'_3}$ is a triangle and the Picard number of $Y_{\Delta_{\varphi'_3}}$ is $2$. Since $C_{\varphi'_3}^2 < 0$ and $-K_{Y_{\Delta_{\varphi'_3}}}.C_{\varphi'_3} = B_{\varphi'_3} - 3 = 0$, we know that $-K_{Y_{\Delta_{\varphi'_3}}}$ is nef and big. If $r = 4$, then (9) in Theorem~\ref{Thm3.6} is satisfied over any field $k$. If $r = 4$ and ${\rm ch}(k) = 0$, then (7) in Theorem~\ref{Thm3.6} is satisfied. If $r = 5$ and ${\rm ch}(k)=0$, then one can prove that (9) in Theorem~\ref{Thm3.6} is satisfied.
The author does not know any example in which the condition (7) in Theorem~\ref{Thm3.6} is not satisfied in the case where ${\rm ch}(k) = 0$. From the above, if $r \le 4$, then $C$ is rational. If $r = 5$ and ${\rm ch}(k)=0$, then $C$ is rational. In many cases $\dim_k[S_{a,b,c}]_d = \frac{r(r+1)}{2} + 1$ is satisfied. When this is the case, $C$ is rational since (9) in Theorem~\ref{Thm3.6} is satisfied (cf.\ Remark~\ref{SMC}). The author knows a few examples in which $\dim_k[S_{a,b,c}]_d < \frac{r(r+1)}{2} + 1$. See the next example. \end{rm} \end{Remark} \begin{Example}\label{(8,15,43)} \begin{rm} Suppose that $k$ is of characteristic $0$ and $(a,b,c) = (8,15,43)$. Using a computer, we know that there exists a negative curve $f \in [{\bmp_{a,b,c}}^{(9)}]_{645}$. Let $\varphi$ be the corresponding $9$-nct as in Proposition~\ref{Prop3.2}. Then $P_\varphi$ is the following pentagon: \[ { \setlength\unitlength{1truecm} \begin{picture}(5,4.5)(0,-2) \qbezier (0.1,0.1) (1.1,0.6) (5.1,2.6) \qbezier (0.1,0.1) (1.85,-0.9) (3.5,-1.9) \qbezier (3.6,-1.9) (4.1,-0.75) (4.6,0.6) \qbezier (4.6,0.6) (4.85,1.35) (5.1,2.1) \qbezier (5.1,2.1) (5.1,2.4) (5.1,2.6) \put(0,0){$\bullet$} \put(0.5,0){$\bullet$} \put(1,0.5){$\bullet$} \put(1,0){$\bullet$} \put(1,-0.5){$\bullet$} \put(1.5,0.5){$\bullet$} \put(1.5,0){$\bullet$} \put(1.5,-0.5){$\bullet$} \put(2,1){$\bullet$} \put(2,0.5){$\bullet$} \put(2,0){$\bullet$} \put(2,-0.5){$\bullet$} \put(2,-1){$\bullet$} \put(2.5,1){$\bullet$} \put(2.5,0.5){$\bullet$} \put(2.5,0){$\bullet$} \put(2.5,-0.5){$\bullet$} \put(2.5,-1){$\bullet$} \put(3,1.5){$\bullet$} \put(3,1){$\bullet$} \put(3,0.5){$\bullet$} \put(3,0){$\bullet$} \put(3,-0.5){$\bullet$} \put(3,-1){$\bullet$} \put(3,-1.5){$\bullet$} \put(3.5,1.5){$\bullet$} \put(3.5,1){$\bullet$} \put(3.5,0.5){$\bullet$} \put(3.5,0){$\bullet$} \put(3.5,-0.5){$\bullet$} \put(3.5,-1){$\bullet$} \put(3.5,-1.5){$\bullet$} \put(3.5,-2){$\bullet$} \put(4,2){$\bullet$} \put(4,1.5){$\bullet$} \put(4,1){$\bullet$} \put(4,0.5){$\bullet$} \put(4,0){$\bullet$} \put(4,-0.5){$\bullet$} \put(4.5,2){$\bullet$} \put(4.5,1.5){$\bullet$} \put(4.5,1){$\bullet$} \put(4.5,0.5){$\bullet$} \put(5,2.5){$\bullet$} \put(5,2){$\bullet$} \end{picture} } \] \noindent It satisfies $B_\varphi + I_\varphi = \frac{r(r+1)}{2}= 45$, $B_\varphi = r =9$, $I_\varphi = \frac{r(r-1)}{2}=36$. In this case, $P_{a,b,c}$ is the triangle with three edges having slopes $1/2$, $-4/7$ and $5/2$. Since $a+b+c$ is less than $d/r$ in this case, $-K_{Y_{\Delta_{a,b,c}}}$ is neither nef nor big. Since we have a birational surjective map $Y_{\Delta_\varphi} \rightarrow Y_{\Delta_{a,b,c}}$, we know that $-K_{Y_{\Delta_\varphi}}$ is neither nef nor big. The author does not know whether ${\rm Cox}(Y_{\Delta_\varphi})$ is Noetherian or not. Since the condition (7) in Theorem~\ref{Thm3.6} is satisfied, the extended symbolic Rees ring $R'_s(\bmp_\varphi)$ is Noetherian. In this case, the generators of the one-dimensional cones of the fan $\Delta_\varphi$ are \[ \left( \begin{array}{c} \bma_1 \\ \bma_2 \\ \bma_3 \\ \bma_4 \\ \bma_5 \end{array} \right) = \left( \begin{array}{cc} -1 & 0 \\ -3 & 1 \\ -5 & 2 \\ 4 & 7 \\ 1 & -2 \end{array} \right) . \] Therefore the morphism of monoids (\ref{grading}) is \[ \bZ^3 \stackrel{ \left( \begin{array}{ccccc} 1 & -2 & 1 & 0 & 0 \\ 25 & -7 & 0 & 1 & 0 \\ -5 & 2 & 0 & 0 & 1 \end{array} \right)}{ \longleftarrow} \bZ^5 \supset (\bNo)^5 .
\] Let $I$ be the kernel of the $k$-algebra homomorphism $k[x_1,x_2,x_3,x_4,x_5] \rightarrow k[y_1^{\pm 1},y_2^{\pm 1},y_3^{\pm 1}]$ ($x_1 \mapsto y_1y_2^{25}y_3^{-5}$, $x_2 \mapsto y_1^{-2}y_2^{-7}y_3^2$, $x_3 \mapsto y_1$, $x_4 \mapsto y_2$, $x_5 \mapsto y_3$). Then we have ${\rm Cox}(Y_{\Delta_\varphi})=R'_s(I)$ as in Remark~\ref{coxY}. Using a computer, we know that there exists a negative curve $f \in [{\bmp_{a,b,c}}^{(18)}]_{1617}$ in the case where $(a,b,c) = (5,33,49)$ over a field $k$ of characteristic $0$. Let $\varphi$ be the corresponding $18$-nct as in Proposition~\ref{Prop3.2}. Let $P$ be the convex hull of $1617P_{5,33,49} \cap \bZ^2$ as in Remark~\ref{SMC}. The number of lattice points in the boundary of $P$ is $r=18$ and that in the interior of $P$ is $r(r-1)/2=18\times17/2$. Since $P_\varphi$ is contained in $P$, the number of lattice points in the interior of $P_\varphi$ is at most $r(r-1)/2$, and hence equal to $r(r-1)/2=18\times17/2$ by Lemma~\ref{Lemma8.9}. Hence the condition (9) in Theorem~\ref{Thm3.6} is satisfied for $\varphi$. In this case, $-K_{Y_{\Delta_\varphi}}$ is neither nef nor big. The author does not know whether ${\rm Cox}(Y_{\Delta_\varphi})$ is Noetherian or not. \end{rm} \end{Example} \begin{Remark}\label{K+C} \begin{rm} Let $\varphi$ be an $r$-nct. Then we have $h^0(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + C_\varphi)) = p_a(C_\varphi)$, $h^1(Y_{\Delta_\varphi}, {\cal O}_{Y_{\Delta_\varphi}}(K_{Y_{\Delta_\varphi}} + C_\varphi)) = 0$ and $I_\varphi = \frac{r(r-1)}{2} + p_a(C_\varphi)$. Let $f \in [{\bmp_{a,b,c}}^{(r)}]_d$ be the negative curve. Then we have $h^0(Y_{\Delta_{a,b,c}}, {\cal O}_{Y_{\Delta_{a,b,c}}}(K_{Y_{\Delta_{a,b,c}}} + C)) = p_a(C)$, $h^1(Y_{\Delta_{a,b,c}}, {\cal O}_{Y_{\Delta_{a,b,c}}}(K_{Y_{\Delta_{a,b,c}}} + C)) = 0$ and the number of lattice points in the interior of $dP_{a,b,c}$ is $\frac{r(r-1)}{2} + p_a(C)$. \end{rm} \end{Remark}
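As a concrete instance of the last relation (a direct check on the numbers reported in Example~\ref{(8,15,43)}), for the $9$-nct $\varphi$ attached to $(a,b,c) = (8,15,43)$ we obtain \[ p_a(C_\varphi) = I_\varphi - \frac{r(r-1)}{2} = 36 - \frac{9 \cdot 8}{2} = 0 , \] so that, when $k$ is algebraically closed, the corresponding negative curve is rational (cf.\ Remark~\ref{rational}), even though $-K_{Y_{\Delta_\varphi}}$ is neither nef nor big.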
\section{Ablation Studies} \label{sec:ablation} \marco{ In this section\um{,} we provide extensive ablation studies to investigate key features of our approach. We will consider the \textit{urban} experimental setup, with $\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$ domain and $\mathcal{C}_{bgr} \rightarrow \mathcal{C}_{stat} \rightarrow \mathcal{C}_{mov}$ class orders, unless otherwise stated.} \begin{table}[t!] \scriptsize \centering \renewcommand{\arraystretch}{1.2} \caption{Ablation study on the contribution of loss components. The \tocheck{$\mathcal{L}_{\lossname}^{n}$} notation here indicates that pseudo-labels are generated leveraging new-domain input samples.} \setlength{\tabcolsep}{2.7pt} \newcommand\cw{6.8mm} \newcommand\cwd{6.4mm} \begin{tabu}{l | >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cwd}} \toprule \multicolumn{1}{c|}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{2}{c}{\textbf{IDD}} & \multicolumn{2}{c}{BDD} & \multicolumn{2}{c}{CS} & \\ \multicolumn{1}{c|}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$} & \scalebox{1.}{$\text{mIoU}_{2}^{2}$ $ \!\! \uparrow$} & $\Delta_{2}^{2} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{1}$ $ \!\! \uparrow$} & $\Delta_{2}^{1} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{0}$ $ \!\! \uparrow$} & $\Delta_{2}^{0} \!\! \downarrow$ & $\bar{\Delta}_{2} \! \downarrow$ \\ \midrule $\mathcal{L}_{ce}^{n}$ & 26.27 & \oracle{61.45}{61.48} & 10.47 & \oracle{78.90}{81.72} & 12.10 & \oracle{82.18}{81.18} & \oracle{74.18}{74.79} \\ $\mathcal{L}_{ce}^{\tilde{n}}$ & 27.12 & \oracle{60.20}{60.24} & 11.51 & \oracle{76.80}{79.91} & 13.68 & \oracle{79.86}{78.72} & \oracle{\underline{72.29}}{\underline{72.95}} \\ \midrule $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{ce}^{\tilde{o}}$ & 28.09 & \oracle{58.78}{58.81} & 13.32 & \oracle{73.16}{76.75} & 16.32 & \oracle{75.97}{74.61} & \oracle{69.30}{70.06} \\ $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{kd}^{\tilde{o}}$ & 40.63 & \oracle{40.37}{40.43} & 24.95 & \oracle{49.72}{56.44} & 34.14 & \oracle{49.73}{46.90} & \oracle{46.61}{47.92} \\ $\mathcal{L}_{ce}^{n} \!+\! \tocheck{\mathcal{L}_{\lossname}^{n}}$ & 43.33 & \oracle{36.41}{36.47} & 26.62 & \oracle{46.35}{53.53} & 37.36 & \oracle{44.99}{41.89} & \oracle{42.58}{43.96} \\ $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{\lossname}^{\tilde{n}}$ & 48.12 & \oracle{29.38}{29.45} & 32.40 & \oracle{34.70}{43.44} & 40.57 & \oracle{40.26}{36.89} & \oracle{\underline{34.78}}{\underline{36.59}} \\ \midrule $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{\lossname}^{\tilde{n}} \!+\! \mathcal{L}_{kd}^{\tilde{o}}$ & 19.23 & \oracle{71.78}{71.80} & 17.68 & \oracle{64.37}{69.13} & 24.15 & \oracle{64.44}{62.44} & \oracle{66.86}{67.79} \\ $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{ce}^{\tilde{o}} \!+\! \mathcal{L}_{kd}^{\tilde{o}}$ & 50.08 & \oracle{26.50}{26.57} & 34.16 & \oracle{31.16}{40.36} & 42.86 & \oracle{36.89}{33.33} & \oracle{31.52}{33.42} \\ $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{\lossname}^{\tilde{n}} \!+\! \mathcal{L}_{ce}^{\tilde{o}}$ & 50.70 & \oracle{25.59}{25.66} & 34.86 & \oracle{29.75}{39.14} & 43.04 & \oracle{36.62}{33.05} & \oracle{\underline{30.65}}{\underline{32.62}} \\ \midrule $\mathcal{L}_{ce}^{n} \!+\! \tocheck{\mathcal{L}_{\lossname}^{n}} \!+\! \mathcal{L}_{ce}^{\tilde{o}} \!+\!
\mathcal{L}_{kd}^{\tilde{o}}$ & 46.59 & \oracle{31.63}{31.69} & 30.51 & \oracle{38.51}{46.74} & 40.44 & \oracle{40.45}{37.10} & \oracle{36.86}{38.51} \\ $\mathcal{L}_{ce}^{\tilde{n}} \!+\! \mathcal{L}_{\lossname}^{\tilde{n}} \!+\! \mathcal{L}_{ce}^{\tilde{o}} \!+\! \mathcal{L}_{kd}^{\tilde{o}}$ & 51.20 & \oracle{24.86}{24.93} & 35.73 & \oracle{27.99}{37.62} & 44.17 & \oracle{34.96}{31.29} & \oracle{\underline{\textbf{29.27}}}{\underline{\textbf{31.28}}} \\ \midrule Oracle & 68.20 & \oracle{-}{-} & 57.28 & \oracle{-}{-} & 64.29 & \oracle{-}{-} & \oracle{-}{-} \\ \bottomrule \end{tabu} \label{tab:loss_comp} \vspace{-0mm} \end{table} \subsection{Contribution of Individual Optimization Objectives} We investigate the impact of each of the proposed learning objectives on the overall optimization framework \pietro{ in Table} \ref{tab:loss_comp}. Just leveraging the currently available training data by fine-tuning (first two rows) yields unsatisfactory results (even with self-stylization), leading to catastrophic forgetting of class and domain knowledge. Yet, $\mathcal{L}_{ce}^{n}$ (or $\mathcal{L}_{ce}^{\tilde{n}}$) is essential to learn new tasks, so it will be kept in the following analyses to test multi-term objectives. \marco{By} adding a second term to the overall objective (second block of rows) we improve \marco{results,} especially if the supplemental objective is focused on retaining old-class knowledge. We reach, in fact, the best performance with a 2-term configuration when $\mathcal{L}^{\tilde{n}}_{\lossname}$ is introduced. This suggests that old-class knowledge preservation is effective even when applied to the new domain, which is directly experienced by means of the available training data. At the same time, the $\mathcal{L}^{\tilde{n}}_{\lossname}$ objective allows us to retain good accuracy w.r.t. past domains, thanks to the improved generalization aptitude promoted by the \tocheck{stylization} mechanism, without which (\ie, third row of the block) multiple accuracy points are lost. When analyzing 3-term objectives (third block of rows), we see noticeable gains with different combinations, except when $\mathcal{L}^{\tilde{n}}_{\lossname}$ and $\mathcal{L}^{\tilde{o}}_{kd}$ are jointly active, where the excessive focus on past-class knowledge preservation generates training instability. \marco{In the last row of the block, we clearly see that, by adding the $\mathcal{L}^{\tilde{o}}_{ce}$ loss on top of the best two-term configuration, the incremental learning becomes more robust, with improved final results on all domains.} Finally, we remark that the \marco{full} framework \marco{(last block)} yields the best overall performance, with \tocheck{stylization} once more playing a substantial role. The overall performance is, in fact, strongly degraded if \tocheck{stylization} is turned off, as shown in the second-to-last row. \subsection{Pseudo-label Generation} \label{sec:abl_pseudo} \begin{table}[t!] \scriptsize \centering \setlength{\tabcolsep}{3.pt} \renewcommand{\arraystretch}{1.2} \caption{ \marco{Ablation study on pseudo-labeling schemes.
\tocheck{We added $\{n,o\}$ to the loss notation to indicate whether pseudo-labels are generated leveraging new-domain ($\mathcal{L}^{d}_{\marco{kd},n}$) or oldly-stylized ($\mathcal{L}^{d}_{\marco{kd},o}$) input samples.}} } \newcommand\cw{6.4mm} \newcommand\cwd{5.9mm} \begin{tabu}{l | >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cwd}} \toprule \multicolumn{1}{c|}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{2}{c}{\textbf{IDD}} & \multicolumn{2}{c}{BDD} & \multicolumn{2}{c}{CS} & \\ \multicolumn{1}{c|}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$} & \scalebox{1.}{$\text{mIoU}_{2}^{2}$ $ \!\! \uparrow$} & $\Delta_{2}^{2} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{1}$ $ \!\! \uparrow$} & $\Delta_{2}^{1} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{0}$ $ \!\! \uparrow$} & $\Delta_{2}^{0} \!\! \downarrow$ & $\bar{\Delta}_{2} \! \downarrow$ \\ \midrule $\mathcal{L}_{ce}^{n} \! + \! \mathcal{L}_{\marco{kd}, n}^{n} \! + \! \mathcal{L}_{ce}^{\tilde{o}} \! + \! \mathcal{L}_{kd}^{\tilde{o}}$ & 46.59 & \oracle{31.63}{31.69} & 30.51 & \oracle{38.51}{46.74} & 40.44 & \oracle{40.45}{37.10} & \oracle{36.86}{38.51} \\ $\mathcal{L}_{ce}^{n} \! + \! \mathcal{L}_{\marco{kd}, o}^{n} \! + \! \mathcal{L}_{ce}^{\tilde{o}} \! + \! \mathcal{L}_{kd}^{\tilde{o}}$ & 40.09 & \oracle{41.17}{41.22} & 25.55 & \oracle{48.51}{55.40} & 34.90 & \oracle{48.61}{45.71} & \oracle{46.09}{47.44} \\ $\mathcal{L}_{ce}^{\tilde{n}} \! + \! \mathcal{L}_{\marco{kd}, n}^{\tilde{n}} \! + \! \mathcal{L}_{ce}^{\tilde{o}} \! + \! \mathcal{L}_{kd}^{\tilde{o}}$ & 51.11 & \oracle{24.99}{25.06} & 34.01 & \oracle{31.46}{40.63} & 43.96 & \oracle{35.27}{31.62} & \oracle{30.57}{32.44} \\ $\mathcal{L}_{ce}^{\tilde{n}} \! + \! \mathcal{L}_{\marco{kd}, o}^{\tilde{n}} \! + \! \mathcal{L}_{ce}^{\tilde{o}} \! + \! \mathcal{L}_{kd}^{\tilde{o}}$ & 51.20 & \oracle{24.86}{24.93} & 35.73 & \oracle{27.99}{37.62} & 44.17 & \oracle{34.96}{31.29} & \oracle{29.27}{\tblbold{31.28}} \\ \midrule Oracle & 68.20 & \oracle{-}{-} & 57.28 & \oracle{-}{-} & 64.29 & \oracle{-}{-} & \oracle{-}{-} \\ \bottomrule \end{tabu} \label{tab:pseudo} \end{table} \begin{table}[t] \centering \setlength{\tabcolsep}{1pt} \small \begin{minipage}{0.99\linewidth}\centering \begin{tabular}{ ccc } Image & GT & $ \fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\fontsize{6}{6}\selectfont \{2\}}}_{t\shortminus1} $ \\[-2.2ex] \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/img.png}} & \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/gt.png}} & \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/PLn.png}} \\[-2.8ex] \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/PLs1.png}} & \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/PLs2.png}} & \subfloat{\includegraphics[width=0.31\linewidth]{images/pseudo/PLo.png}} \\[-0.5ex] $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\fontsize{6}{6}\selectfont $\{0\}$}}_{t\shortminus1}$ & $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\fontsize{6}{6}\selectfont $\{1\}$}}_{t\shortminus1}$ & $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\smalless \fontsize{6}{6}\selectfont $t$}}_{t\shortminus1} = \fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\fontsize{6}{6}\selectfont $\{0, \!1\}$}}_{t\shortminus1}$ \\ \end{tabular} \captionof{figure}{Different ways of pseudo-labeling ($t\!=\!2$).
White regions correspond to the \textit{ignore} label.} \label{fig:pseudo} \end{minipage} \vspace{-0mm} \end{table} We \marco{further} analyze the influence exerted by pseudo-labeling \pietro{in Table \ref{tab:pseudo}}. We remark that the proposed enhanced labeling mechanism (described in Sec.~\ref{sec:newD_oldC}) exploits oldly-stylized images to mitigate the domain shift endured by the frozen segmentation model distilling knowledge from the past. \pietro{ We notice that when self-stylization is disabled (first two rows) the efficacy of our method is reduced, while the beneficial effect offered by \marco{the self-stylizing module} can be appreciated in the last two rows. This occurs because self-stylization better prepares the segmentation model for future steps, in which the stylizing mechanism leverages old-domain styles to inject old-domain knowledge into the ongoing learning step. \tocheck{In other words, when self-\marco{stylizing} images, what will be experienced as an \textit{old} style will have already been experienced \marco{as a \textit{new}} style before. Therefore, the undesired visual artifacts generated by style transfer are experienced by the network from the very first step in which each domain is introduced. This, in turn, ensures greater robustness over the incremental learning process.} Furthermore, in setups with self-stylization, as opposed to what occurs without \marco{it}, pseudo-labeling performed on top of oldly-stylized images yields the best overall performance, compared to the same labeling process executed over image samples with new-domain style. This happens because the network (frozen from the past step) used to \um{generate pseudo-labels} is better equipped to face input distributions of previously experienced old domains, whereas it may suffer from domain shift when presented with new unseen input distributions.} In Fig.~\ref{fig:pseudo} we report pseudo-labels generated according to different criteria, to provide visual confirmation of the improved pseudo-supervision achieved \marco{on top of \um{oldly-stylized inputs}}. The considered setup involves $\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$ and $\mathcal{C}_{bgr} \rightarrow \mathcal{C}_{stat} \rightarrow \mathcal{C}_{mov}$ progressions, and maps are retrieved at the last step (\ie, $t\!=\!2$). We observe that the segmentation model taken from \marco{step $t \!-\! 1$ (\ie, the second-to-last step)} fails to detect the sky region of the new-domain image, \ie, $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\fontsize{6}{6}\selectfont $\{2\}$}}_{t\shortminus1}$ provides unreliable supervision by labeling the top portion of the picture as \marco{\textit{unknown}} (when the true \textit{sky} class is among those already seen). \marco{On the other hand, when} leveraging oldly-stylized images to generate pseudo-supervision ($\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\smalless \fontsize{6}{6}\selectfont $t$}}_{t\shortminus1}$), \marco{more reliable} old-domain \tocheck{guidance} ($\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\fontsize{6}{6}\selectfont $\{0\}$}}_{t\shortminus1}$ and $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\fontsize{6}{6}\selectfont $\{1\}$}}_{t\shortminus1}$) \tocheck{is exploited}, with individual positive contributions successfully merged in the final map (\eg, in \textit{sky} and \textit{road} regions).
\marco{ Thus, we end up with $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\smalless \fontsize{6}{6}\selectfont $t$}}_{t\shortminus1}$ being more accurate than each domain-specific alternative $\fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.3em\hbox{\fontsize{6}{6}\selectfont $\{k\}$}}_{t\shortminus1}, \, k \leq t$.} \begin{table}[t!] \scriptsize \centering \setlength{\tabcolsep}{4.5pt} \renewcommand{\arraystretch}{1.} \caption{ \marco{Ablation study on stylization ($\beta \!=\! 0.01$ corresponds to the default configuration).} } \begin{tabu}{ccccccc} \toprule \multicolumn{3}{c}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{1}{c}{No} & \multicolumn{3}{c}{$\beta$} \\ \multicolumn{3}{c}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$} & stylization & $0.001$ & $0.01$ & $0.1$ \\ \midrule \multirow{2}{*}{Step 0} & \multirow{1}{*}{mIoU\textsubscript{0} $ \! \uparrow$} & \textbf{CS} & 79.67 & 79.8 & 79.19 & 78.54 \\ \arrayrulecolor{black!50} \cmidrule{2-7} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{0}$ $ \! \downarrow$} & \oracle{6.44}{5.32} & \oracle{6.29}{\tblbold{5.17}} & \oracle{7.0}{5.89} & \oracle{7.76}{6.66} \\ \midrule \multirow{3}{*}{Step 1} & \multirow{2}{*}{mIoU\textsubscript{1} $ \! \uparrow$} & \textbf{BDD} & 33.67 & 35.06 & 44.47 & 44.79 \\ & & CS & 49.20 & 43.75 & 53.31 & 50.45 \\ \arrayrulecolor{black!50} \cmidrule{2-7} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{1}$ $ \! \downarrow$} & \oracle{39.71}{38.08} & \oracle{42.35}{40.88} & \tblbold{\oracle{28.37}{26.58}} & \oracle{30.08}{28.37} \\ \midrule \multirow{4}{*}{Step 2} & \multirow{3}{*}{mIoU\textsubscript{2} $ \! \uparrow$} & \textbf{IDD} & 43.33 & 48.60 & 51.20 & 50.03 \\ & & BDD & 26.62 & 27.77 & 35.73 & 34.84 \\ & & CS & 37.36 & 37.61 & 44.17 & 43.01 \\ \arrayrulecolor{black!50} \cmidrule{2-7} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{2}$ $ \! \downarrow$} & \oracle{42.58}{43.96} & \oracle{39.11}{40.59} & \tblbold{\oracle{29.27}{31.28}} & \oracle{31.01}{32.97} \\ \bottomrule \end{tabu} \label{tab:style} \end{table} \subsection{Degree of Stylization} \label{sec:abl_style} We propose an additional analysis on the stylization mechanism. Table \ref{tab:style} shows the results of \marco{our} method \marco{(complete with all objectives)} under different degrees of stylization, which are \marco{determined} by the $\beta$ parameter (\marco{see} Sec.~\ref{sec:domain_style}). We notice that disabling stylization or \marco{operating} it in a more conservative manner (\marco{\ie, with} $\beta \!=\! 0.001$) yields low results, \marco{as} the statistical properties captured and transferred are not sufficient to successfully retain old-domain information, with the latter configuration still outperforming the no-stylization approach. On the other hand, if the stylization is \marco{raised} to an excessive extent (\marco{\ie, with} $\beta \!=\! 0.1$), we observe performance degradation in the overall $\bar\Delta_{2}$ score. In this scenario, artifacts are more likely to be introduced on oldly-stylized images, thus hindering the segmentation task. \subsection{Knowledge Transfer Across Tasks and Domains} \label{sec:abl_transfer} We \marco{propose} further ablation studies to evaluate the knowledge transfer aptitude of \marco{our} method, both \marco{from} task and domain perspectives.
Fig.~\ref{fig:domain_tf} \marco{presents} a comparison of multiple CIL competitors in terms of predisposition towards \textit{domain-knowledge transfer}; we report the mIoU achieved on individual domains \pietro{only on classes experienced so far} across multiple steps in matrix form. \marco{We consider multiple incremental setups, with urban datasets and variable domain order.} We observe that our approach, right from the first learning step, achieves better \textit{forward transfer} to future domains, \marco{as indicated by per-domain mIoU values in the top triangular sections}, regardless of the setup considered. \marco{At the same time, this translates into superior performance on current domains (represented by diagonal mIoU values), as they benefit from a better forward-adaptability acquired earlier.} Plus, improved \textit{backward transfer} to \marco{former} domains is evidenced by higher \marco{mIoU} values in the bottom triangular part of the matrices. \marco{To provide insight into the \textit{task-knowledge transfer} proneness of different incremental methods, in Fig.~\ref{fig:task_tf} we report a comparison in terms of $\bar\Delta$ results at multiple learning steps; values are computed on \textit{single} incremental sets of classes and represent an average score across all domains (both experienced and future ones).} The experimental setups are the same as those considered \marco{when studying} domain transfer\marco{, and results are arranged in matrix form}. We observe that our $\bar\Delta$ scores in the bottom triangular part of the matrices are lower than those of competitors, \marco{suggesting that our method yields} better \textit{backward transfer} in terms of task knowledge. \marco{At the same time, the smaller $\bar\Delta$ diagonal elements indicate improved performance on current tasks, confirming the better stability-plasticity compromise offered by our approach.} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth,trim={2.cm 1.7cm 1.8cm 1.2cm},clip]{images/transfer/domain_mat_cities_v1.png} \captionof{figure}{Domain-knowledge transfer (mIoU $\! \uparrow$ (\%)).} \label{fig:domain_tf} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth,trim={2cm 1.7cm 1.8cm 1.2cm},clip]{images/transfer/class_mat_cities_v1.png} \captionof{figure}{Task-knowledge transfer ($\bar\Delta \! \downarrow$).} \label{fig:task_tf} \end{figure} \section{Conclusions} In this paper, we formalized a general setting for continual learning, where both domains and tasks to be learned incrementally change over time. We addressed this under-explored learning setting targeting the semantic segmentation task by breaking it down into underlying sub-problems, each tackled with a specific learning objective. A stylization mechanism replays domain knowledge over time, whereas a robust distillation mechanism allows us to retain and adapt old-task information. Overall, the proposed learning framework enables learning new tasks, while preserving \pietro{performance on old ones and spreading task knowledge across all the encountered domains}. We achieved significant results, outperforming state-of-the-art competitors on multiple challenging benchmarks. Further research will tackle even more application-oriented settings, \ie, where task and domain shifts happen in a \pietro{continuous} fashion rather than in discrete steps, and where distinct overlapping sets of classes are \marco{introduced} in different domains.
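For reference, the $\Delta$ and $\bar\Delta$ metrics used in the tables of the previous sections can be reproduced with the short Python sketch below. The sketch is ours and purely illustrative: the definition of $\Delta$ as the relative mIoU drop with respect to the jointly-trained oracle, and of $\bar\Delta$ as its average over the considered domains, is inferred from the reported numbers (\eg, fine-tuning on CS at step $0$: $100 \cdot (84.15 - 79.67)/84.15 = 5.32$).
\begin{verbatim}
# Illustrative re-computation of the Delta metrics (assumed definitions):
#   Delta     = 100 * (oracle_mIoU - mIoU) / oracle_mIoU
#   Delta_bar = average of Delta over the considered domains
def delta(miou, oracle):
    return 100.0 * (oracle - miou) / oracle

def delta_bar(mious, oracles):
    return sum(delta(m, o) for m, o in zip(mious, oracles)) / len(mious)

print(round(delta(79.67, 84.15), 2))             # 5.32: FT on CS at step 0
print(round(delta_bar([26.27, 10.47, 12.10],     # FT at step 2 on IDD/BDD/CS
                      [68.20, 57.28, 64.29]), 2))  # 74.79
\end{verbatim}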
\subsection{Datasets} To simulate the distribution shift at the input (image) level, we make use of multiple driving datasets, each limited to a specific geographic region \marco{or set of environmental factors}, and thus characterized by its distinctive low-level appearance (\eg, road pavement material, type of vehicles, \marco{light conditions}). On the contrary, the high-level semantic content is mostly consistent across image sets, that is, \marco{road-related and other categories, as well as moving and static obstacles,} can be found everywhere and follow similar inter-class structural relations (\eg, the sky will always appear above the road). \noindent \textbf{Cityscapes. } The Cityscapes \cite{Cordts2016} dataset (CS) is a popular benchmark for autonomous driving applications. Images are collected across 50 cities, all located in Central Europe. \noindent \textbf{BDD100K. } The Berkeley DeepDrive dataset (BDD) \cite{yu2020bdd100k} is a more diverse collection of road scenes, \pietro{captured with variable weather conditions at different times of the day}. Still, all samples are from 4 restricted localities in the US. \noindent \textbf{IDD. } The Indian Driving Dataset (IDD) \cite{varma2029idd} includes driving scenes from Indian cities and their outskirts. It offers a diversified set of moving and static road obstacles, as well as a wilder and more natural environment, which breaks away from the typical European or American urban scenarios. \noindent \textbf{Mapillary Vistas. } The Mapillary Vistas dataset \cite{neuhold2017mapillary} contains images collected worldwide, with highly \pietro{diverse} acquisition settings and locations. Unlike previously introduced benchmarks, samples are not limited to a few cities located within quite uniform \marco{geographic} regions. We leverage the Mapillary \pietro{dataset} to generate continent-wise \marco{data} splits, as well as to test the domain generalization potential of the proposed class and domain incremental approach. \noindent \textbf{Shift. } The Shift benchmark \cite{sun2022shift} is a synthetic dataset for autonomous driving, designed to provide a plethora of distribution shifts, simulating the highly variable environmental conditions faced in real-world applications. We exploit it to mimic domain shift due to environmental diversity. For the BDD, IDD and Mapillary datasets, only the 19 classes available in Cityscapes were used. For Shift, we considered the available 22 semantic categories. \subsection{Incremental Learning Setup} \label{sec:incr_setup} \noindent \textbf{Domain Incremental Setup. } The first domain incremental setup is \pietro{created} by experiencing in succession the CS, BDD and IDD datasets (in different orders) during 3 separate learning steps. Additionally, we propose a further setup, where domain shift across learning steps is achieved by splitting the entire Mapillary dataset into incremental sets based on \marco{geographic} proximity of samples, \ie, 6 separate data subsets are generated, grouping together pictures taken on the same continent. Finally, we leverage Shift to simulate incrementally variable environmental conditions, by partitioning the whole dataset into 3 groups of samples according to light conditions (\ie, \textit{daytime}, \textit{twilight} and \textit{night}). \noindent \textbf{Class Incremental Setup.
} We start by following \cite{klingner2020class} to identify 3 separate groups within the 19 Cityscapes classes, \ie, (i) \textit{background regions}, (ii) \textit{moving elements}, (iii) \textit{static elements}, which are observed incrementally under various arrangements. Then, we extend the aforementioned 3-way class splitting to Shift in a similar fashion to \cite{klingner2020class}, this time on the 22 classes offered by the synthetic benchmark. \marco{All the class incremental sets are detailed in Table~\ref{tab:train_cil}.} By merging the individual class and domain settings, we devise each \pietro{class and domain incremental setup} reported in Table \ref{tab:train_incr_setup}. The first (\ie, \textit{urban}) is generated using the CS, BDD and IDD datasets, together with the 3-way class split from \cite{klingner2020class}. Formally, we set the total number of \marco{learning} steps $T=3$, and at each step $0 \le t < T$: \begin{equation} \mathcal{D}_{t} \subset (\mathcal{X}_{t}, \mathcal{C}_{t}) \in \{ \text{CS}, \text{BDD}, \text{IDD} \} \times \{ \mathcal{C}_{bgr}, \mathcal{C}_{stat}, \mathcal{C}_{mov} \}, \end{equation} where each dataset and class split is observed once. \\ We further propose an incremental setup (\ie, \textit{\tocheck{worldwide}}) based on continent-wise splitting of the Mapillary dataset. To match the increase in domain set size to 6 elements, we divide each class group \cite{klingner2020class} in half, for a total of 6 class splits (Table~\ref{tab:train_cil}). We set $T=6$, and at each step $0 \le t < T$: \begin{equation} \begin{split} \mathcal{D}_{t} \subset (\mathcal{X}_{t}, \mathcal{C}_{t}) \in & \{ \text{EU}, \text{NA}, \text{AS}, \text{OC}, \text{AF}, \text{SA} \} \times \\ &\{ \mathcal{C}^{0}_{bgr}, \mathcal{C}^{1}_{bgr}, \mathcal{C}^{0}_{stat}, \mathcal{C}^{1}_{stat}, \mathcal{C}^{0}_{mov}, \mathcal{C}^{1}_{mov} \}, \end{split} \end{equation} where each class set and \marco{each} domain appears only in a single step. Among the large number of possible incremental sequences, we perform the experimental evaluation on the $\text{EU} \rightarrow\allowbreak \text{NA} \rightarrow\allowbreak \text{AS} \rightarrow\allowbreak \text{OC} \rightarrow\allowbreak \text{AF} \rightarrow\allowbreak \text{SA}$ and {$\mathcal{C}^{0}_{bgr}\rightarrow\allowbreak \mathcal{C}^{1}_{bgr}\rightarrow\allowbreak \mathcal{C}^{0}_{stat}\rightarrow\allowbreak \mathcal{C}^{1}_{stat}\rightarrow\allowbreak \mathcal{C}^{0}_{mov}\rightarrow\allowbreak \mathcal{C}^{1}_{mov}$} \pietro{setups}. \\ Finally, the last setup (\ie, \textit{\tocheck{environmental}}) combines the environmental partitioning chosen for Shift with the 3-way class splitting from \cite{klingner2020class}.
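To make the resulting protocols concrete, the following minimal Python sketch (ours, purely illustrative; all training, stylization and data-loading machinery is omitted) enumerates the $(\mathcal{X}_{t}, \mathcal{C}_{t})$ pairs of the \textit{urban} setup, with the class groupings read off Table~\ref{tab:train_cil}; only the classes of the current subset carry annotations at step $t$, while previously seen classes must be recovered through distillation and pseudo-labeling.
\begin{verbatim}
# Minimal sketch of the urban class-and-domain incremental schedule.
CLASS_SETS = {
    "C_bgr":  ["road", "sidewalk", "vegetation", "terrain", "sky"],
    "C_stat": ["building", "wall", "fence", "pole",
               "traffic light", "traffic sign"],
    "C_mov":  ["person", "rider", "motorcycle", "bicycle",
               "car", "truck", "bus", "train"],
}

def schedule(domain_order, class_order):
    """Yield, for each step t, the new domain, the newly labeled
    classes and the cumulative label space seen so far."""
    seen = []
    for t, (domain, cset) in enumerate(zip(domain_order, class_order)):
        seen = seen + CLASS_SETS[cset]
        yield t, domain, CLASS_SETS[cset], list(seen)

for t, dom, new, seen in schedule(["CS", "BDD", "IDD"],
                                  ["C_bgr", "C_stat", "C_mov"]):
    print(f"step {t}: domain={dom}, new classes={len(new)}, "
          f"label space={len(seen)}")   # 5 -> 11 -> 19 classes
\end{verbatim}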
\begin{table}[!t] \centering \setlength{\tabcolsep}{0.3pt} \renewcommand{\arraystretch}{1.} \caption{\marco{Split of Cityscapes' (CS) and Shift's class sets following the criterion proposed by \cite{klingner2020class}.}} \begin{tabular}{c@{\hspace{1mm}}c|cc @{\hspace{-1.4mm}}c} \toprule \multicolumn{2}{c|}{} & \multicolumn{1}{c}{$\mathcal{C}_{bgr}$} & $\mathcal{C}_{stat}$ & $\mathcal{C}_{mov}$ \\ \midrule \multirow{3}{*}{\vspace{-1.mm} \rotatebox[origin=c]{90}{CS}} & \multirow{2}{*}{$\mathcal{C}^{0}$} & \multirow{2}{*}{\{road, sidewalk\}} & \multirow{2}{*}{\{build., wall, fence\}} & \{person, rider, \\ & & & & motorcycle, bicycle\} \\ \rule{0pt}{7.5pt} & \multirow{1}{*}{$\mathcal{C}^{1}$} & \{veg., terr., sky\} & \{pole, t.\ light, t.\ sign\} & \{car, truck, bus, train\} \\ \midrule \multirow{3}{*}{\rotatebox[origin=c]{90}{Shift}} & \multirow{3}{*}{$\mathcal{C}^{s}$} & \{r.line, road, veg., & \{build., wall, fence, pole, & \{pedestrian, \\ & & ground, water, & t.\ light, bridge, r.track, & vehicles, \\ & & s.walk, terr., sky\} & g.rail, t.\ sign, static\} & dynamic\} \\ \bottomrule \end{tabular} \label{tab:train_cil} \end{table} \begin{table}[!t] \centering \setlength{\tabcolsep}{5.pt} \caption{Class and domain incremental sets.} \begin{tabular}{c|cc} \toprule \multicolumn{1}{c|}{} & \multicolumn{1}{c}{Class sets} & Domains \\ \midrule \multirow{1}{*}{Urban} & $\{ \mathcal{C}_{bgr}, \mathcal{C}_{stat}, \mathcal{C}_{mov} \}$ & $ \{ \text{CS}, \text{BDD}, \text{IDD} \}$\\ \midrule \multirow{2}{*}{\tocheck{Worldwide}} & $\{ \mathcal{C}^{0}_{bgr}, \mathcal{C}^{1}_{bgr}, \mathcal{C}^{0}_{stat},$ & $\{ \text{EU}, \text{NA}, \text{AS},$ \\ & $\mathcal{C}^{1}_{stat}, \mathcal{C}^{0}_{mov}, \mathcal{C}^{1}_{mov} \}$ & $\text{OC}, \text{AF}, \text{SA} \}$ \\ \midrule \multirow{1}{*}{\tocheck{Environmental}} & $\{ \mathcal{C}^{s}_{bgr}, \mathcal{C}^{s}_{stat}, \mathcal{C}^{s}_{mov} \}$ & $ \{ \text{Daytime}, \text{Twilight}, \text{Night} \}$ \\ \bottomrule \end{tabular} \vspace{1mm} \\ \marco{$\mathcal{C}^{s}$ indicates that the class subset is derived from Shift's original set.} \label{tab:train_incr_setup} \end{table} \subsection{Implementation Details} We \marco{built} our framework in PyTorch. Due to the complexity of the investigated problem, \marco{in most experiments} we use a lightweight segmentation model, \ie, ErfNet \cite{romera2018erfnet}. We argue that a smaller network complies more realistically with deployment-related constraints in real-world applications, \eg, in terms of memory occupation and inference speed. Yet, for comparison purposes we report additional results with the heavier and better-performing DeeplabV3 architecture \cite{chen2017rethinking} with ResNet101 backbone \cite{he2016deep}. In all experiments, the segmentation model is pre-trained on ImageNet \cite{deng2009imagenet}. With ErfNet, we use the Adam optimizer \cite{kingma2015adam} and learning rate set to $5\mathrm{e}{\shortminus4}$. With DeeplabV3, we use the SGD optimizer and learning rate set to $1\mathrm{e}{\shortminus3}$. Weight decay is fixed at $1\mathrm{e}{\shortminus4}$, and we employ a polynomial decay of power $0.9$ for learning rate scheduling. We train for 100 and 50 epochs at each learning step, with ErfNet and DeeplabV3 respectively \marco{(except in Shift, where we set the number of epochs to 10)}. With ErfNet we use a batch size of 6, while with DeeplabV3 we reduce it to 2 \pietro{due to} GPU memory constraints.
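The optimization recipe above can be reproduced in a few lines of PyTorch. The sketch below is ours and only indicative: in particular, we assume that the polynomial decay is stepped once per epoch, which the text does not specify.
\begin{verbatim}
import torch

model = torch.nn.Conv2d(3, 19, 1)  # stand-in for the segmentation network
base_lr, power, epochs = 5e-4, 0.9, 100  # ErfNet settings reported above

optimizer = torch.optim.Adam(model.parameters(), lr=base_lr,
                             weight_decay=1e-4)
# Polynomial schedule: lr(e) = base_lr * (1 - e / epochs) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda e: (1.0 - e / epochs) ** power)

for epoch in range(epochs):
    # ... one pass over the current step's training data goes here ...
    scheduler.step()
\end{verbatim}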
When experimentally evaluating on the Cityscapes-BDD-IDD and Shift setups, images are resized to $512\times1024$ resolution. When using Mapillary for training, inputs are first resized to a width of $1024$ pixels (fixed aspect ratio), and then cropped to $512\times1024$. This pre-processing is done to accommodate the highly variable aspect ratios of Mapillary's samples. The $\beta$ parameter controlling the size of the style window is empirically set to $1\mathrm{e}{\shortminus2}$ and fixed in all experiments. Plus, we \marco{experimentally fix} $\lambda_{{ce}}^{\tilde{o}} \!=\! \lambda_{{\lossname}}^{\tilde{n}} \!=\! \lambda_{{kd}}^{\tilde{o}} \!=\! 10$, and keep them unchanged in every incremental setup. This shows that our approach is robust to changes of the experimental setting, and requires minimal hyper-parameter tuning. Ablation studies \pietro{on the impact of $\beta$ and loss weights are in Sec.\ \ref{sec:ablation}.} \subsection{Competitors} To the best of our knowledge, this is the first work explicitly modeling and addressing class and domain incremental learning in semantic segmentation. For this reason, we compare with other methods targeting class (CIL) or domain (DIL) incremental learning as individual problems. Among class-incremental methods, we consider ILT \cite{michieli2019incremental} and MiB \cite{cermelli2020modeling}, along with state-of-the-art PLOP \cite{douillard2021plop} and UCD \cite{yang2022uncertainty}. When using PLOP with ErfNet, we apply the \textit{LocalPOD} loss \cite{douillard2021plop} on embeddings extracted at the end of the first and second blocks, as well as at the output of the encoder. For UCD, we modify the contrastive distillation loss so that the maximum number of positives and negatives is set to 3000 each (which are randomly selected among the whole sets as defined in the original work). We perform this adjustment to meet GPU memory limitations. All experiments were performed on an RTX Titan GPU with 24GB of memory. We believe that a fair comparison should involve comparable GPU resources for all the competitors. On the domain-incremental side, we compare with \cite{garg2022multi}. Differently from our setup, the\um{y} assume full task supervision on all the domains incrementally encountered. We adapt their framework to a class-incremental setup by replacing the standard cross-entropy loss with the unbiased version from \cite{cermelli2020modeling}, to prevent the background shift from erasing the task-knowledge learned in past steps. \section{Experimental Results} \label{sec:exp_res} \subsection{Evaluation on Urban Scenes} \begin{table*}[t!] \scriptsize \centering \setlength{\tabcolsep}{4.3pt} \renewcommand{\arraystretch}{1.} \caption{Experimental results on $\text{CS} \protect\rightarrow \text{BDD} \protect\rightarrow \text{IDD}$ domain setup and $\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{stat} \protect\rightarrow \mathcal{C}_{mov}$ class setup.
} \begin{tabu}{l |cc|c| cc|cc|c| cc|cc|cc|c} \toprule \multicolumn{1}{c|}{\multirow{3}{*}{\vspace{-2mm}\shortstack{ $\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$ \\ $\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$}}} & \multicolumn{3}{c|}{Step 0} & \multicolumn{5}{c|}{Step 1} & \multicolumn{7}{c}{Step 2} \\ \rule{0pt}{7pt} % & \multicolumn{2}{c}{\textbf{CS ($\mathcal{X}_0$)}} & & \multicolumn{2}{c}{\textbf{BDD ($\mathcal{X}_1$)}} & \multicolumn{2}{c}{CS ($\mathcal{X}_0$)}& & \multicolumn{2}{c}{\textbf{IDD ($\mathcal{X}_2$)}} & \multicolumn{2}{c}{BDD ($\mathcal{X}_1$)} & \multicolumn{2}{c}{CS ($\mathcal{X}_0$)} & \\ \rule{0pt}{7pt} % & $\text{mIoU}_{0}^{0}$ $ \! \uparrow$ & $\Delta^0_{0} \! \downarrow$ & $\bar{\Delta}_{0} \! \downarrow$ & $\text{mIoU}_{1}^{1}$ $ \! \uparrow$ & $\Delta^1_{1} \! \downarrow$ & $\text{mIoU}_{1}^{0}$ $ \! \uparrow$ & $\Delta^0_{1} \! \downarrow$ & $\bar{\Delta}_{1} \! \downarrow$ & $\text{mIoU}_{2}^{2}$ $ \! \uparrow$ & $\Delta^2_{2} \! \downarrow$ & $\text{mIoU}_{2}^{1}$ $ \! \uparrow$ & $\Delta^1_{2} \! \downarrow$ & $\text{mIoU}_{2}^{0}$ $ \! \uparrow$ & $\Delta^0_{2} \! \downarrow$ & $\bar{\Delta}_{2} \! \downarrow$ \\ \midrule FT ($\mathcal{L}_{ce}^{n}$) & 79.67 & \oracle{6.44}{5.32} & \oracle{6.44}{5.32} & 24.38 & \oracle{61.51}{61.35} & 18.11 & \oracle{75.18}{74.06} & \oracle{68.35}{67.71} & 26.27 & \oracle{61.45}{61.48} & 10.47 & \oracle{78.90}{81.72} & 12.10 & \oracle{82.18}{81.18} & \oracle{74.18}{74.79} \\ FT w/ self-style ($\mathcal{L}_{ce}^{\tilde{n}}$) & 79.19 & \oracle{7.0}{5.89} & \oracle{7.0}{5.89} & 20.41 & \oracle{67.78}{67.65} & 19.08 & \oracle{73.86}{72.67} & \oracle{70.82}{70.16} & 27.12 & \oracle{60.20}{60.24} & 11.51 & \oracle{76.80}{79.91} & 13.68 & \oracle{79.86}{78.72} & \oracle{72.29}{72.95} \\ MDIL \cite{garg2022multi} & 80.35 & \oracle{5.64}{4.51} & \oracle{5.64}{\tblbold{4.51}} & 26.12 & \oracle{58.76}{58.59} & 23.65 & \oracle{67.59}{66.13} & \oracle{63.18}{62.36} & 28.10 & \oracle{58.76}{58.80} & 12.46 & \oracle{74.89}{78.25} & 13.22 & \oracle{80.53}{79.44} & \oracle{71.39}{72.16} \\ ILT \cite{michieli2019incremental} & 79.67 & \oracle{6.44}{5.32} & \oracle{6.44}{5.32} & 22.21 & \oracle{}{64.80} & 44.70 & \oracle{}{35.99} & \oracle{}{50.39} & 26.69 & \oracle{}{60.87} & 16.70 & \oracle{}{70.85} & 29.76 & \oracle{}{53.71} & \oracle{}{61.81} \\ MiB \cite{cermelli2020modeling} % & 79.67 & \oracle{6.44}{5.32} & \oracle{6.44}{5.32} & 34.35 & \oracle{45.77}{45.55} & 49.24 & \oracle{32.53}{29.48} & \oracle{39.15}{37.51} & 42.58 & \oracle{37.51}{37.57} & 26.36 & \oracle{46.88}{53.98} & 36.58 & \oracle{46.14}{43.10} & \oracle{43.51}{44.88} \\ PLOP \cite{douillard2021plop} & 79.67 & \oracle{6.44}{5.32} & \oracle{6.44}{5.32} & 36.78 & \oracle{41.93}{41.70} & 50.05 & \oracle{31.42}{28.32} & \oracle{36.68}{35.01} & 43.15 & \oracle{36.67}{36.73} & 27.24 & \oracle{45.10}{52.44} & 36.84 & \oracle{45.75}{42.70} & \oracle{42.51}{43.96} \\ UCD \cite{yang2022uncertainty} & 79.67 & \oracle{6.44}{5.32} & \oracle{6.44}{5.32} & 35.45 & \oracle{44.03}{43.80} & 50.38 & \oracle{30.97}{27.85} & \oracle{37.50}{35.83} & 43.19 & \oracle{36.61}{36.67} & 27.38 & \oracle{44.81}{52.19} & 37.34 & \oracle{45.01}{41.91} & \oracle{42.15}{43.59} \\ \name \; w/o $\mathcal{L}_{kd}^{\tilde{o}}$ & 79.19 & \oracle{7.0}{5.89} & \oracle{7.0}{5.89} & 44.41 & \oracle{29.89}{29.60} & 50.77 & \oracle{30.43}{27.29} & \oracle{30.16}{28.44} & 50.70 & \oracle{25.59}{25.66} & 34.86 & \oracle{29.75}{39.14} & 43.04 & 
\oracle{36.62}{33.05} & \oracle{30.65}{32.62} \\ \name & 79.19 & \oracle{7.0}{5.89} & \oracle{7.0}{5.89} & 44.47 & \oracle{29.79}{29.51} & 53.31 & \oracle{26.95}{23.65} & \oracle{28.37}{\tblbold{26.58}} & 51.20 & \oracle{24.86}{24.93} & 35.73 & \oracle{27.99}{37.62} & 44.17 & \oracle{34.96}{31.29} & \oracle{29.27}{\tblbold{31.28}} \\ \midrule Oracle & \oracle{85.15}{84.15} & - & - & \oracle{63.34}{63.08} & - & \oracle{72.98}{69.82} & - & - & \oracle{68.14}{68.20} & - & \oracle{49.62}{57.28} & - & \oracle{67.91}{64.29} & - & -\\ \bottomrule \end{tabu} \label{tab:CBI} \end{table*} \begin{table*}[t!] \scriptsize \centering \setlength{\tabcolsep}{4.3pt} \renewcommand{\arraystretch}{1.} \caption{Experimental results on $\text{BDD} \protect\rightarrow \text{IDD} \protect\rightarrow \text{CS}$ domain setup and $\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{stat} \protect\rightarrow \mathcal{C}_{mov}$ class setup.} \begin{tabu}{l |cc|c| cc|cc|c| cc|cc|cc|c} \toprule \multicolumn{1}{c|}{\multirow{3}{*}{\vspace{-2mm}\shortstack{ $\text{BDD} \veryshortarrow \text{IDD} \veryshortarrow \text{CS}$ \\ $\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$}}} & \multicolumn{3}{c|}{Step 0} & \multicolumn{5}{c|}{Step 1} & \multicolumn{7}{c}{Step 2} \\ \rule{0pt}{7pt} % & \multicolumn{2}{c}{\textbf{BDD ($\mathcal{X}_0$)}} & & \multicolumn{2}{c}{\textbf{IDD ($\mathcal{X}_1$)}} & \multicolumn{2}{c}{BDD ($\mathcal{X}_0$)}& & \multicolumn{2}{c}{\textbf{CS ($\mathcal{X}_2$)}} & \multicolumn{2}{c}{IDD ($\mathcal{X}_1$)} & \multicolumn{2}{c}{BDD ($\mathcal{X}_0$)} & \\ \rule{0pt}{7pt} % & $\text{mIoU}_{0}^{0}$ $ \! \uparrow$ & $\Delta^0_{0} \! \downarrow$ & $\bar{\Delta}_{0} \! \downarrow$ & $\text{mIoU}_{1}^{1}$ $ \! \uparrow$ & $\Delta^1_{1} \! \downarrow$ & $\text{mIoU}_{1}^{0}$ $ \! \uparrow$ & $\Delta^0_{1} \! \downarrow$ & $\bar{\Delta}_{1} \! \downarrow$ & $\text{mIoU}_{2}^{2}$ $ \! \uparrow$ & $\Delta^2_{2} \! \downarrow$ & $\text{mIoU}_{2}^{1}$ $ \! \uparrow$ & $\Delta^1_{2} \! \downarrow$ & $\text{mIoU}_{2}^{0}$ $ \! \uparrow$ & $\Delta^0_{2} \! \downarrow$ & $\bar{\Delta}_{2} \! 
\downarrow$ \\ \midrule FT ($\mathcal{L}_{ce}^{n}$) & 72.22 & \oracle{5.10}{6.61} & \oracle{5.10}{6.61} & 33.37 & \oracle{51.54}{52.43} & 20.49 & \oracle{67.65}{67.52} & \oracle{59.60}{59.98} & 23.36 & \oracle{65.60}{65.60} & 7.09 & \oracle{89.6}{89.60} & 5.45 & \oracle{89.02}{89.02} & \oracle{81.40}{81.41} \\ FT w/ self-style ($\mathcal{L}_{ce}^{\tilde{n}}$) & 72.12 & \oracle{5.12}{6.74} & \oracle{5.12}{6.74} & 33.27 & \oracle{51.68}{52.58} & 21.12 & \oracle{66.66}{66.52} & \oracle{59.17}{59.55} & 28.52 & \oracle{58.00}{55.64} & 15.44 & \oracle{77.34}{77.36} & 14.24 & \oracle{71.30}{75.14} & \oracle{68.88}{69.38} \\ MDIL \cite{garg2022multi} & 72.44 & \oracle{4.81}{6.33} & \oracle{4.81}{\tblbold{6.33}} & 26.78 & \oracle{61.10}{61.83} & 15.44 & \oracle{75.62}{75.52} & \oracle{68.36}{68.68} & 25.52 & \oracle{62.42}{60.30} & 11.61 & \oracle{82.96}{82.98} & 10.77 & \oracle{78.30}{81.20} & \oracle{74.56}{74.83} \\ ILT \cite{michieli2019incremental} & 72.22 & \oracle{5.10}{6.61} & \oracle{5.10}{6.61} & 42.10 & \oracle{}{39.98} & 43.00 & \oracle{}{31.84} & \oracle{}{35.91} & 33.33 & \oracle{}{48.15} & 26.93 & \oracle{}{60.52} & 29.68 & \oracle{}{48.19} & \oracle{}{52.29} \\ MiB \cite{cermelli2020modeling} % & 72.22 & \oracle{5.10}{6.61} & \oracle{5.10}{6.61} & 52.18 & \oracle{24.21}{25.62} & 45.28 & \oracle{28.51}{28.22} & \oracle{26.36}{26.92} & 48.22 & \oracle{29.00}{25.00} & 33.57 & \oracle{50.73}{50.77} & 30.94 & \oracle{37.64}{45.98} & \oracle{39.12}{40.58} \\ PLOP \cite{douillard2021plop} & 72.22 & \oracle{5.10}{6.61} & \oracle{5.10}{6.61} & 53.15 & \oracle{22.80}{24.24} & 44.25 & \oracle{30.14}{29.85} & \oracle{26.47}{27.05} & 47.21 & \oracle{30.48}{26.56} & 35.36 & \oracle{48.10}{48.15} & 32.02 & \oracle{35.47}{44.10} & \oracle{38.02}{39.60} \\ UCD \cite{yang2022uncertainty} & 72.22 & \oracle{5.10}{6.61} & \oracle{5.10}{6.61} & 52.42 & \oracle{23.86}{25.28} & 45.20 & \oracle{28.64}{28.35} & \oracle{26.25}{26.81} & 48.40 & \oracle{28.74}{24.72} & 32.60 & \oracle{52.15}{52.19} & 28.95 & \oracle{41.66}{49.47} & \oracle{40.85}{42.13} \\ \name \; w/o $\mathcal{L}_{kd}^{\tilde{o}}$ & 72.12 & \oracle{5.12}{6.74} & \oracle{5.12}{6.74} & 54.34 & \oracle{21.07}{21.07} & 41.36 & \oracle{34.70}{34.44} & \oracle{27.89}{27.75} & 52.56 & \oracle{22.61}{18.24} & 36.70 & \oracle{46.14}{46.19} & 32.33 & \oracle{34.84}{43.56} & \oracle{34.53}{36.00} \\ \name & 72.12 & \oracle{5.12}{6.74} & \oracle{5.12}{6.74} & 54.53 & \oracle{20.80}{20.80} & 43.98 & \oracle{30.57}{30.28} & \oracle{25.68}{\tblbold{25.54}} & 52.63 & \oracle{22.50}{18.14} & 38.14 & \oracle{44.03}{44.08} & 34.03 & \oracle{31.42}{40.59} & \oracle{32.65}{\tblbold{34.27}} \\ \midrule Oracle & \oracle{76.10}{77.33} & - & - & \oracle{68.85}{70.16} & - & \oracle{63.34}{63.08} & - & - & \oracle{68.14}{68.20} & - & \oracle{49.62}{57.28} & - & \oracle{67.91}{64.29} & - & -\\ \bottomrule \end{tabu} \label{tab:BIC} \end{table*} The first experimental setup we explore entails incrementally transitioning between urban and suburban areas of different regions around the world. High- and low- level image contents undergo distribution shifts of different extent: although it might be reasonable to assume that the basic semantic structure of road images is invariant to \marco{geographic} location, scene elements are likely to change appearance significantly when travelling around the world. 
\subsubsection{ \marco{Study on Domain Ordering} } To reproduce class and domain distribution shifts, we train on the Cityscapes, BDD and IDD datasets in an incremental fashion. The class incremental protocol follows the one \marco{proposed} in \cite{klingner2020class} (\ie, $\mathcal{C}_{bgr}\rightarrow \mathcal{C}_{stat}\rightarrow \mathcal{C}_{mov}$). As detailed in Sec.~\ref{sec:incr_setup}, we define a total of 3 learning steps. In Tables \ref{tab:CBI}, \ref{tab:BIC} and \ref{tab:ICB} we report experimental results following 3 different dataset orders, so that, considering all experiments performed, each dataset is observed at each of the 3 possible learning steps. We report results in terms of \um{mIoU} computed over all classes excluding the \textit{unknown} one, as typically done in the literature. \tocheck{The mIoU is computed for each domain $\mathcal{X}_{k}$ (\ie, dataset) experienced up to the current step $t$ (\ie, $\text{mIoU}^{k}_{t}$, $k \!\leq\! t$), $ \forall t \!<\! T$}. \tocheck{In addition, we provide a measure of relative performance w.r.t.\ a supervised reference, both for individual domains $\Delta_{t}^k$, and as a global quantity $\bar\Delta_{t}$ (Eq.~\eqref{metric:rel}). The supervised reference, denoted as \textit{Oracle}, corresponds to the joint training over both class sets and domains.} We compare with methods addressing \classincr learning (ILT \cite{michieli2019incremental}, MiB \cite{cermelli2020modeling}, PLOP \cite{douillard2021plop} and UCD \cite{yang2022uncertainty}) and with a recent \domincr method (MDIL \cite{garg2022multi}). We also include a simple baseline, activating only the \marco{task} loss on the new \marco{classes} and new domain (Eq.~\eqref{eq:nc_nd}). This approach is usually referred to as \textit{fine-tuning}, as the focus is posed solely on learning the new task. Two variants are reported for this baseline, \ie, with or without self-stylization applied to input images, indicated \pietro{respectively as $\mathcal{L}_{ce}^{\tilde{n}}$ and $\mathcal{L}_{ce}^{n}$.} As for \marco{our} approach, we evaluate its final form (Eq.~\eqref{eq:complete}), complete with all the training objectives detailed in Sec.\ \ref{sec:method}, as well as a simpler configuration without the $\marco{\mathcal{L}_{kd}^{\tilde{o}}}$ loss (Eq.~\eqref{eq:kd}). By inspecting results in Tables \ref{tab:CBI}, \ref{tab:BIC} and \ref{tab:ICB}, we notice that the performance achieved by different methods at the end of the \textbf{initial learning step} is comparable. This is due to the similar objectives employed up to this point, which learn just the first class set ($\mathcal{C}_{bgr}$) on the first domain, regardless of the domain order. We remark that the proposed self-stylization is not detrimental when learning the current task. \tocheck{We will provide ablation studies on the impact of stylization in Sec.~\ref{sec:ablation}.} When progressing to the \textbf{first incremental step}, catastrophic forgetting has to be addressed to retain good performance. We observe that the $\mathcal{L}_{ce}^{n}$ and $\mathcal{L}_{ce}^{\tilde{n}}$ losses alone are not sufficient to achieve satisfactory results, being focused on the new task and providing no constraints \marco{to preserve} past knowledge. MDIL \cite{garg2022multi} performs poorly as well, since the proposed dynamic architecture is not suitable for addressing partial \classincr supervision, which in our setup is present along with domain incremental shift.
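To make the evaluation protocol explicit, the following minimal Python sketch derives the per-domain gaps $\Delta^{k}_{t}$ and their domain average $\bar\Delta_{t}$ of Eq.~\eqref{metric:rel} from mIoU scores; the numbers below are purely illustrative and not taken from the tables:

\begin{verbatim}
def relative_gaps(miou, miou_oracle):
    # miou[k]: mIoU (%) of the step-t model on domain X_k, k <= t
    # miou_oracle[k]: mIoU (%) of the jointly trained oracle on X_k
    # Gap is reported as a magnitude, so lower is better.
    deltas = [100.0 * (o - m) / o
              for m, o in zip(miou, miou_oracle)]
    avg = sum(deltas) / len(deltas)   # domain-averaged bar-Delta_t
    return deltas, avg

# Illustrative values only.
deltas, avg = relative_gaps([44.5, 53.3], [63.1, 69.8])
\end{verbatim}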
\\ By analyzing \classincr learning methods, we note that they are able to preserve previously acquired knowledge to some extent, while allowing some plasticity for learning the new task. Still, the domain shift between previous and current datasets has a negative impact on the prediction accuracy of the incrementally trained predictor. All the considered CIL methods, in fact, rely on the ability of a segmentation model frozen from the previous step to preserve knowledge of the past. Yet, because of the domain discrepancy between past and new data, this distillation mechanism could introduce unreliable guidance on former tasks, as the frozen model experiences an input-level distribution shift when fed with new-domain data. At the same time, the distribution gap may hinder the transferability of new-class knowledge to old domains, which are no longer available as training data. \\ These drawbacks are revealed \marco{by} the results of Table \ref{tab:ICB} ($\text{IDD} \rightarrow\allowbreak \text{CS} \rightarrow\allowbreak \text{BDD}$): the significant domain shift between the Cityscapes and IDD datasets prevents CIL methods from effectively preserving and learning \tocheck{task-related clues} on IDD, which was experienced at step 0. On the contrary, our approach addresses domain shift by leveraging the stylization scheme and applying carefully designed objectives to suitably tackle the \marco{general} class and domain incremental learning problem. In particular, the proposed objectives $\mathcal{L}^{\tilde{o}}_{ce}$ (Eq.~\eqref{eq:nc_od}) and $\mathcal{L}^{\tilde{o}}_{kd}$ (Eq.~\eqref{eq:kd}) are specifically designed to address the aforementioned problems affecting CIL methods and \pietro{allow us to achieve} superior accuracy on former domains. As a result, \name\ \um{improves accuracy by more than 17 mIoU points on IDD} at step 1 w.r.t.\ the best competitor \um{(\ie,} \um{UCD} \cite{yang2022uncertainty}). \\ We also remark that, even with alternative domain orders (Tables \ref{tab:CBI} and \ref{tab:BIC}), \name\ shows the best stability-plasticity trade-off, retaining the best overall accuracy in terms of $\bar\Delta_{1}$. Furthermore, we can see that, for both the $\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$ and $\text{BDD} \rightarrow \text{IDD} \rightarrow \text{CS}$ orders, the addition of the $\marco{\mathcal{L}_{kd}^{\tilde{o}}}$ objective in \um{ \name\ leads to a boost} in performance on the past domain, which coincides with the design purpose of the objective.
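To give a concrete flavor of how $\mathcal{L}_{kd}^{\tilde{o}}$ operates, we sketch below an output-level distillation step on old-stylized inputs. This is PyTorch-like pseudocode under simplifying assumptions: the Fourier re-styling shown here is a bare-bones approximation of the mechanism of Sec.~\ref{sec:domain_style} (window handling is simplified), and the KL formulation is illustrative, the exact objective being that of Eq.~\eqref{eq:kd}:

\begin{verbatim}
import torch
import torch.nn.functional as F

def stylize(x, style_amp, beta=1e-2):
    # Swap the low-frequency amplitude window of x with a stored
    # old-domain style; phase (content) is kept unchanged.
    f = torch.fft.fft2(x)
    amp, pha = f.abs(), f.angle()
    h = max(1, int(beta * x.shape[-2]))
    w = max(1, int(beta * x.shape[-1]))
    amp[..., :h, :w] = style_amp[..., :h, :w]
    return torch.fft.ifft2(amp * torch.exp(1j * pha)).real

def kd_on_old_style(new_model, old_model, images, old_style):
    x_tilde = stylize(images, old_style)   # old-stylized inputs
    with torch.no_grad():
        teacher = old_model(x_tilde)       # frozen step t-1 model
    student = new_model(x_tilde)
    # Output-level distillation between class distributions.
    return F.kl_div(F.log_softmax(student, dim=1),
                    F.softmax(teacher, dim=1),
                    reduction="batchmean")
\end{verbatim}

Querying the frozen model on old-stylized rather than raw new-domain inputs keeps it close to the distribution it was trained on, which is precisely the rationale discussed above.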
\begin{table*}[!t] \scriptsize \centering \setlength{\tabcolsep}{4.3pt} \renewcommand{\arraystretch}{1.} \caption{Experimental results on $\text{IDD} \protect\rightarrow \text{CS} \protect\rightarrow \text{BDD}$ domain setup and $\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{stat} \protect\rightarrow \mathcal{C}_{mov}$ class setup.} \begin{tabu}{l |cc|c| cc|cc|c| cc|cc|cc|c} \toprule \multicolumn{1}{c|}{\multirow{3}{*}{\vspace{-2mm}\shortstack{ $\text{IDD} \veryshortarrow \text{CS} \veryshortarrow \text{BDD}$ \\ $\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$}}} & \multicolumn{3}{c|}{Step 0} & \multicolumn{5}{c|}{Step 1} & \multicolumn{7}{c}{Step 2} \\ \rule{0pt}{7pt} % & \multicolumn{2}{c}{\textbf{IDD ($\mathcal{X}_0$)}} & & \multicolumn{2}{c}{\textbf{CS ($\mathcal{X}_1$)}} & \multicolumn{2}{c}{IDD ($\mathcal{X}_0$)}& & \multicolumn{2}{c}{\textbf{BDD ($\mathcal{X}_2$)}} & \multicolumn{2}{c}{CS ($\mathcal{X}_1$)} & \multicolumn{2}{c}{IDD ($\mathcal{X}_0$)} & \\ \rule{0pt}{7pt} % & $\text{mIoU}_{0}^{0}$ $ \! \uparrow$ & $\Delta^0_{0} \! \downarrow$ & $\bar{\Delta}_{0} \! \downarrow$ & $\text{mIoU}_{1}^{1}$ $ \! \uparrow$ & $\Delta^1_{1} \! \downarrow$ & $\text{mIoU}_{1}^{0}$ $ \! \uparrow$ & $\Delta^0_{1} \! \downarrow$ & $\bar{\Delta}_{1} \! \downarrow$ & $\text{mIoU}_{2}^{2}$ $ \! \uparrow$ & $\Delta^2_{2} \! \downarrow$ & $\text{mIoU}_{2}^{1}$ $ \! \uparrow$ & $\Delta^1_{2} \! \downarrow$ & $\text{mIoU}_{2}^{0}$ $ \! \uparrow$ & $\Delta^0_{2} \! \downarrow$ & $\bar{\Delta}_{2} \! \downarrow$ \\ \midrule FT ($\mathcal{L}_{ce}^{n}$) & 78.80 & \oracle{9.54}{{8.52}} & \oracle{9.54}{\tblbold{8.52}} & 9.66 & \oracle{49.64}{47.37} & 8.64 & \oracle{92.94}{93.07} & \oracle{71.29}{70.22} & 9.66 & \oracle{80.53}{83.14} & 8.64 & \oracle{87.28}{86.56} & 7.09 & \oracle{89.59}{89.60} & \oracle{85.80}{86.43}\\ FT w/ self-style ($\mathcal{L}_{ce}^{\tilde{n}}$) & 78.78 & \oracle{9.56}{8.55} & \oracle{9.56}{8.55} & 42.11 & \oracle{42.30}{39.69} & 19.81 & \oracle{71.23}{71.76} & \oracle{56.76}{55.73} & 14.05 & \oracle{71.68}{75.47} & 12.63 & \oracle{81.40}{80.35} & 11.01 & \oracle{83.84}{83.86} & \oracle{78.98}{79.89} \\ MDIL \cite{garg2022multi} & 78.72 & \oracle{9.63}{8.62} & \oracle{9.63}{8.62} & 34.87 & \oracle{52.22}{50.06} & 11.70 & \oracle{83.01}{83.32} & \oracle{67.61}{66.69} & 8.90 & \oracle{82.06}{84.46} & 8.22 & \oracle{87.90}{87.21} & 6.70 & \oracle{90.17}{90.18} & \oracle{86.71}{87.28} \\ ILT \cite{michieli2019incremental} & 78.80 & \oracle{9.54}{{8.52}} & \oracle{9.54}{\tblbold{8.52}} & 44.44 & \oracle{}{36.35} & 43.32 & \oracle{}{38.26} & \oracle{}{37.30} & 24.48 & \oracle{}{57.26} & 30.00 & \oracle{}{53.34} & 27.88 & \oracle{}{59.12} & \oracle{}{56.57} \\ MiB \cite{cermelli2020modeling} % & 78.80 & \oracle{9.54}{8.52} & \oracle{9.54}{\tblbold{8.52}} & 56.23 & \oracle{22.95}{19.47} & 23.59 & \oracle{65.74}{66.37} & \oracle{44.34}{42.92} & 23.62 & \oracle{52.40}{58.76} & 33.24 & \oracle{51.05}{48.30} & 20.57 & \oracle{69.81}{69.84} & \oracle{57.75}{58.97} \\ PLOP \cite{douillard2021plop} & 78.80 & \oracle{9.54}{8.52} & \oracle{9.54}{\tblbold{8.52}} & 57.05 & \oracle{21.83}{18.29} & 24.74 & \oracle{64.07}{64.74} & \oracle{42.95}{41.51} & 24.18 & \oracle{51.27}{57.79} & 34.23 & \oracle{49.60}{46.76} & 21.42 & \oracle{68.56}{68.59} & \oracle{56.48}{57.71} \\ UCD \cite{yang2022uncertainty} & 78.80 & \oracle{9.54}{8.52} & \oracle{9.54}{\tblbold{8.52}} & 56.29 & \oracle{22.86}{19.38} & 26.45 & \oracle{61.58}{62.29} & \oracle{42.22}{40.84} & 24.88 & 
\oracle{49.87}{56.57} & 34.72 & \oracle{48.87}{45.99} & 22.35 & \oracle{67.21}{67.24} & \oracle{55.31}{56.60} \\ \name \; w/o $\mathcal{L}_{kd}^{\tilde{o}}$ & 78.78 & \oracle{9.56}{8.55} & \oracle{9.56}{8.55} & 59.61 & \oracle{18.32}{14.63} & 43.30 & \oracle{37.11}{38.28} & \oracle{27.71}{26.45} & 34.84 & \oracle{29.79}{39.18} & 39.11 & \oracle{42.41}{39.17} & 36.13 & \oracle{46.98}{47.03} & \oracle{39.72}{41.79}\\ \name & 78.78 & \oracle{9.56}{8.55} & \oracle{9.56}{8.55} & 59.26 & \oracle{18.80}{15.13} & 43.95 & \oracle{36.17}{37.35} & \oracle{27.48}{\tblbold{26.24}} & 37.94 & \oracle{23.54}{33.76} & 42.10 & \oracle{38.01}{34.51} & 36.60 & \oracle{46.29}{46.34} & \oracle{35.94}{\tblbold{38.21}}\\ \midrule Oracle & \oracle{87.11}{86.14} & - & - & \oracle{72.98}{69.82} & - & \oracle{68.85}{70.16} & - & - & \oracle{68.14}{68.20} & - & \oracle{49.62}{57.28} & - & \oracle{67.91}{64.29} & - & -\\ \bottomrule \end{tabu} \label{tab:ICB} \end{table*} In the \textbf{final learning step}, the struggle to handle the class and domain incremental training is exacerbated for all the competitors. Baselines and MDIL still provide inferior results, with the latter performing even worse than na\"ive fine-tuning with self-stylization in some setups. \\ As for CIL methods, PLOP \cite{douillard2021plop} and UCD \cite{yang2022uncertainty} are the best performing. Both combine output- and feature-level objectives, which prove to be somewhat robust to domain shift. Even so, the simpler MiB \cite{cermelli2020modeling} approach shows very competitive results, suggesting that strategies taking into account only a \classincr perspective may not be so effective when incremental domain shift is also occurring. Our method in its complete form outperforms all CIL competitors by a large margin regardless of domain order, going from $5\%$ ($\text{BDD} \rightarrow \text{IDD} \rightarrow \text{CS}$) to $12\%$ ($\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$) and even $16\%$ ($\text{IDD} \rightarrow \text{CS} \rightarrow \text{BDD}$) in terms of $\bar\Delta_{2}$ \marco{gap}. \tocheck{Furthermore, in Table~\ref{tab:map_gen} we investigate the generalization performance (\ie, $\Gamma^{gen}_{t}$ from Eq.~\eqref{eq:gen_metric}) achieved by the considered methods. To do so, we compute the accuracy at each incremental step on the \textit{unseen} Mapillary dataset for the sets of classes observed so far. We notice that simple fine-tuning and MDIL offer poor generalization results, which is expected due to the low accuracy they already provide on datasets directly observed. On the other hand, CIL methods reach more competitive results, even if none of them proves to be superior in all setups. Still, our approach outperforms all competitors, getting significantly closer to the \textit{Oracle} upper-bound (\ie, the supervised training on the entire Mapillary), especially in the $\text{IDD} \rightarrow\allowbreak \text{CS} \rightarrow\allowbreak \text{BDD}$ setup. Also, we remark that we obtain similar generalization results with different domain incremental orders, demonstrating that our approach is able to learn and preserve generalizable task-related clues regardless of the training environment.} \begin{table}[t!]
\scriptsize \centering \setlength{\tabcolsep}{2.6pt} \caption{Generalization performance \marco{($\Gamma^{gen}_{t}$)} as mIoU computed on Mapillary's test set ($\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{stat} \protect\rightarrow \mathcal{C}_{mov}$ setup).} \begin{tabu}{l |ccc|ccc|ccc} \toprule & \multicolumn{3}{c}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{3}{c}{$\text{BDD} \veryshortarrow \text{IDD} \veryshortarrow \text{CS}$} & \multicolumn{3}{c}{$\text{IDD} \veryshortarrow \text{CS} \veryshortarrow \text{BDD}$}\\ & Step 0 & Step 1 & Step 2 & Step 0 & Step 1 & Step 2 & Step 0 & Step 1 & Step 2 \\ \midrule FT ($\mathcal{L}_{ce}^{n}$) & 36.27 & 22.03 & 13.71 & 66.74 & 25.27 & 6.60 & \tblbold{59.56} & 7.81 & 8.52 \\ FT\textsuperscript{\textdagger} ($\mathcal{L}_{ce}^{\tilde{n}}$) & \tblbold{58.09} & 19.83 & 14.99 & \tblbold{66.83} & 25.77 & 16.34 & 59.19 & 23.40 & 11.97 \\ MDIL & 44.60 & 24.77 & 16.05 & 66.36 & 18.40 & 11.01 & 56.41 & 14.86 & 8.55 \\ ILT & 36.27 & 26.80 & 20.69 & 66.74 & 41.32 & 28.97 & \tblbold{59.56} & 39.27 & 27.96 \\ MiB & 36.27 & 37.68 & 32.36 & 66.74 & 45.99 & 33.01 & \tblbold{59.56} & 23.61 & 24.23 \\ PLOP & 36.27 & 39.62 & 33.69 & 66.74 & 45.45 & 34.01 & \tblbold{59.56} & 25.29 & 25.04 \\ UCD & 36.27 & 38.46 & 34.07 & 66.74 & \tblbold{46.22} & 29.92 & \tblbold{59.56} & 27.08 & 25.89 \\ \name & \tblbold{58.09} & \tblbold{46.36} & \tblbold{40.43} & \tblbold{66.83} & 44.99 & \tblbold{37.33} & 59.19 & \tblbold{43.15} & \tblbold{39.16} \\ \midrule Oracle & 83.96 & 73.77 & 65.42 & 83.96 & 73.77 & 65.42 & 83.96 & 73.77 & 65.42 \\ \bottomrule \end{tabu} \\ \vspace{0.7mm} \textsuperscript{\textdagger} indicates the presence of self-stylization. \label{tab:map_gen} \vspace{-0mm} \end{table} \begin{table}[t!] \scriptsize \centering \setlength{\tabcolsep}{3.pt} \renewcommand{\arraystretch}{1.} \caption{Experimental results on $\text{CS} \protect\rightarrow \text{BDD} \protect\rightarrow \text{IDD}$ domain setup and $\boldsymbol{\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{mov} \protect\rightarrow \mathcal{C}_{stat}}$ class setup.} \begin{tabu*}{ cccccccccc} \toprule \multicolumn{3}{c}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{7}{c}{Method} \\ \multicolumn{3}{c}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{mov} \veryshortarrow \mathcal{C}_{stat}$} & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{n\mathstrut}{4pt}}$ & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{\tilde{n}\mathstrut}{4pt}}$ & MiB & PLOP & UCD & \name & Oracle\\ \midrule \multirow{2}{*}{Step 0} & \multirow{1}{*}{mIoU\textsubscript{0} $ \! \uparrow$} & \textbf{CS} & 79.83 & 79.39 & 79.83 & 79.83 & 79.83 & 79.39 & \oracle{85.15}{84.15} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{0} \! \downarrow$} & \oracle{6.25}{\tblbold{5.13}} & \oracle{6.77}{5.65} & \oracle{6.25}{\tblbold{5.13}} & \oracle{6.25}{\tblbold{5.13}} & \oracle{6.25}{\tblbold{5.13}} & \oracle{6.77}{5.65} & - \\ \midrule \multirow{3}{*}{Step 1} & \multirow{2}{*}{mIoU\textsubscript{1} $ \! \uparrow$} & \textbf{BDD} & 15.79 & 19.43 & 26.15 & 27.69 & 23.73 & 40.92 & \oracle{49.39}{63.08} \\ & & CS & 14.26 & 17.22 & 40.38 & 42.44 & 39.76 & 49.70 & \oracle{70.26}{69.82} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{1} \!
\downarrow$} & \oracle{73.87}{76.26} & \oracle{68.08}{71.03} & \oracle{44.79}{48.21} & \oracle{41.77}{45.40} & \oracle{47.68}{50.68} & \oracle{23.21}{\tblbold{28.99}} & - \\ \midrule \multirow{4}{*}{Step 2} & \multirow{3}{*}{mIoU\textsubscript{2} $ \! \uparrow$} & \textbf{IDD} & 13.47 & 14.82 & 31.01 & 31.89 & 30.72 & 43.54 & \oracle{68.14}{68.20} \\ & & BDD & 6.94 & 8.21 & 23.40 & 25.21 & 22.83 & 32.34 & \oracle{49.62}{57.28} \\ & & CS & 7.45 & 10.49 & 33.60 & 33.81 & 32.85 & 39.76 & \oracle{67.91}{64.29} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{2} \! \downarrow$} & \oracle{85.09}{85.52} & \oracle{82.09}{82.54} & \oracle{52.62}{53.81} & \oracle{50.87}{52.21} & \oracle{53.51}{54.67} & \oracle{37.46}{\tblbold{39.29}} & - \\ \bottomrule \end{tabu*} \label{tab:CBI_rCIL} \end{table} \begin{table*}[h!] \centering \setlength{\tabcolsep}{1.2pt} \renewcommand{\arraystretch}{0.2} \small \begin{minipage}{0.99\linewidth}\centering \begin{tabular}{ cccccccc } & Input & GT & FT w/ self-style& MiB & PLOP & \name & \\[-2ex] \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{Step 0}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_Baseline.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step0/cityscapes/frankfurt_000001_001464_leftImg8bit_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{\textbf{CS}}} \\[0.5ex] \hline & \\[-2.25ex] \raisebox{0\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{Step 1}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_Baseline.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/cityscapes/frankfurt_000001_001464_leftImg8bit_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{CS}} \\[-2ex] & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_Baseline.png}} & 
\subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step1/bdd/8fcce630-3c3e0000_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{\textbf{BDD}}} \\[0.5ex] \hline & \\[-2.25ex] \raisebox{-3\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{Step 2}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_Baseline.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/cityscapes/frankfurt_000001_001464_leftImg8bit_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{CS}} \\[-2ex] & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_Baseline.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/bdd/8fcce630-3c3e0000_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{BDD}} \\[-2ex] & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_img.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_gt.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_Baseline.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_MiB.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_PLOP.png}} & \subfloat{\includegraphics[width=0.15\textwidth]{images/qualitative/CBI/step2/idd/771373_leftImg8bit_Ours.png}} & \raisebox{1.8\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{270}{\textbf{IDD}}} \\ \end{tabular} \captionof{figure}{Qualitative results on $\text{CS} \protect\rightarrow \text{BDD} \protect\rightarrow \text{IDD}$ domain setup and $\mathcal{C}_{bgr} \protect\rightarrow \mathcal{C}_{stat} \protect\rightarrow \mathcal{C}_{mov}$ class setup.} \label{fig:qualitative} \end{minipage} \end{table*} Finally, qualitative results in the form of segmentation maps are provided in Fig.~\ref{fig:qualitative}. We stress how the proposed approach yields better \textbf{backward} and \textbf{forward transfer} throughout the incremental learning. 
In particular, moving classes like \textit{bicycle} and \textit{bus} appear to be recognized more effectively by our method on the Cityscapes (CS) dataset at the end of the incremental training, even though CS was experienced only along with \marco{background-class} supervision during the first step. On the other hand, MiB and PLOP fail to provide satisfactory \textbf{backward transfer} of those classes to the past CS domain. A similar reasoning applies to the \textbf{forward transfer} aptitude. Our approach is able to deliver good segmentation accuracy on the \textit{road} and \textit{sidewalk} background classes even on the BDD and IDD datasets, although these are experienced when $\mathcal{C}_{bgr}$ supervision is no longer available. Conversely, MiB and PLOP suffer from the domain statistical gap across learning steps, struggling to forward-transfer knowledge of first-step classes to future steps and, consequently, to maintain satisfactory segmentation accuracy \pietro{on} them. \tocheck{Additional \um{analyses} will be provided in Sec.~\ref{sec:abl_transfer}.} \begin{table}[t!] \scriptsize \centering \setlength{\tabcolsep}{3.pt} \renewcommand{\arraystretch}{1.} \caption{\marco{Experimental results with DeeplabV3-ResNet101.} } \begin{tabu*}{c @{\hspace*{0.3cm}} cc ccccccc} \toprule \multicolumn{3}{c}{$\text{CS} \veryshortarrow \text{BDD} \veryshortarrow \text{IDD}$} & \multicolumn{7}{c}{Method} \\ \multicolumn{3}{c}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$} & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{n\mathstrut}{4pt}}$ & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{\tilde{n}\mathstrut}{4pt}}$ & MiB & PLOP & UCD & \name & Oracle \\ \midrule \multirow{2}{*}{Step 0} & \multirow{1}{*}{mIoU\textsubscript{0} $ \! \uparrow$} & \textbf{CS} & 78.1 & 77.13 & 78.1 & 78.1 & 78.1 & 77.13 & \oracle{83.63}{84.30} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{0} \! \downarrow$} & \oracle{6.61}{\tblbold{7.35}} & \oracle{7.77}{8.50} & \oracle{6.61}{\tblbold{7.35}} & \oracle{6.61}{\tblbold{7.35}} & \oracle{6.61}{\tblbold{7.35}} & \oracle{7.77}{8.50} & - \\ \midrule \multirow{3}{*}{Step 1} & \multirow{2}{*}{mIoU\textsubscript{1} $ \! \uparrow$} & \textbf{BDD} & 31.97 & 29.44 & 28.38 & 29.07 & 30.84 & 50.36 & \oracle{64.02}{64.60} \\ & & CS & 31.82 & 55.13 & 45.10 & 45.15 & 45.51 & 54.75 & \oracle{69.73}{71.24} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{1} \! \downarrow$} & \oracle{52.22}{52.92} & \oracle{55.53}{56.19} & \oracle{45.50}{46.38} & \oracle{44.92}{45.81} & \oracle{43.28}{44.19} & \oracle{21.41}{\tblbold{22.60}} & - \\ \midrule \multirow{4}{*}{Step 2} & \multirow{3}{*}{mIoU\textsubscript{2} $ \! \uparrow$} & \textbf{IDD} & 30.68 & 30.29 & 35.93 & 33.98 & 38.24 & 51.92 & \oracle{70.64}{70.94}\\ & & BDD & 17.48 & 17.27 & 24.18 & 23.57 & 26.14 & 44.98 & \oracle{59.22}{61.48} \\ & & CS & 19.46 & 17.92 & 33.22 & 34.38 & 34.93 & 47.08 & \oracle{70.64}{69.17} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{2} \!
\downarrow$} & \oracle{66.10}{66.73} & \oracle{67.16}{67.77} & \oracle{53.06}{53.99} & \oracle{53.76}{54.68} & \oracle{50.03}{51.02} & \oracle{26.99}{\tblbold{28.53}} & -\\ \bottomrule \end{tabu*} \label{tab:CBI_resnet} \end{table} \subsubsection{ \marco{Study on Class Ordering} } We further investigate the impact of \pietro{a permutation of the} class incremental arrangement. Table \ref{tab:CBI_rCIL} reports experimental results with \marco{the} $\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$ \marco{progression}, \marco{but} a modified class order, with moving categories $\mathcal{C}_{mov}$ experienced before static ones $\mathcal{C}_{stat}$. We notice a similar trend to that observed in Table \ref{tab:CBI} (\ie, same domain order, but different class order), with baselines and MDIL \cite{garg2022multi} performing poorly, and the improved accuracy achieved by CIL methods still being largely outperformed by the proposed approach. \\ \marco{In addition}, we observe that absolute results degrade with the new class order. The \um{performance} of our approach, in fact, \marco{drops} from $31.28\%$ to $39.29\%$ in terms of $\bar \Delta_{2}$ (recall that lower is better). This discrepancy might be due to class sets being observed on domains where it is harder to learn them and, at the same time, to generalize to the other domains. For instance, we note that IDD provides a lower overall percentage of pixels of $\mathcal{C}_{stat}$ w.r.t.\ BDD ($11\%$ vs $17\%$), while for $\mathcal{C}_{mov}$ the numbers are similar (both around $10\%$ of total pixels). \marco{Still}, the performance \marco{loss} is similar for CIL methods, with \marco{the} gap w.r.t.\ the best competitor rising from $12$ to $13$ points \um{of $\bar\Delta_{2}$} \um{(compared to the previous class order)}. \begin{table}[t!] \scriptsize \centering \setlength{\tabcolsep}{3.pt} \renewcommand{\arraystretch}{0.9} \caption{\marco{Experimental results on the Mapillary dataset.}} \begin{tabu}{cccccccccc} \toprule \multicolumn{3}{c}{ $\text{EU} \!\veryshortarrow\! \text{NA} \!\veryshortarrow\! \text{AS} \!\veryshortarrow\! \text{OC} \!\veryshortarrow\! \text{AF} \!\veryshortarrow\! \text{SA}$ } & \multicolumn{7}{c}{Method} \\ \multicolumn{3}{c}{$\mathcal{C}^{0 \!\rightarrow\! 1}_{bgr} \veryshortarrow \mathcal{C}^{0 \!\rightarrow\! 1}_{stat} \veryshortarrow \mathcal{C}^{0 \!\rightarrow\! 1}_{mov}$} & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{n\mathstrut}{4pt}}$ & $\mathcal{L}_{\scaleto{ce\mathstrut}{4pt}}^{\scaleto{\tilde{n}\mathstrut}{4pt}}$ & MiB & PLOP & UCD & \name & Oracle\\ \midrule \multirow{2}{*}{Step 0} & \multirow{1}{*}{mIoU\textsubscript{0} $ \! \uparrow$} & \textbf{EU} & 73.12 & 73.07 & 73.12 & 73.12 & 73.12 & 73.07 & \oracle{80.08}{79.53} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{0} \! \downarrow$} & \oracle{8.69}{\tblbold{8.06}} & \oracle{8.75}{8.13} & \oracle{8.69}{\tblbold{8.06}} & \oracle{8.69}{\tblbold{8.06}} & \oracle{8.69}{\tblbold{8.06}} & \oracle{8.75}{8.13} & - \\ \midrule \multirow{3}{*}{Step 1} & \multirow{2}{*}{mIoU\textsubscript{1} $ \! \uparrow$} & \textbf{NA} & 51.80 & 51.63 & 81.28 & 80.82 & 81.70 & 81.85 & \oracle{86.81}{87.51} \\ & & EU & 47.67 & 47.52 & 76.05 & 75.76 & 75.26 & 74.80 & \oracle{82.05}{82.34} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{1} \!
\downarrow$} & \oracle{41.11}{41.46} & \oracle{41.30}{41.65} & \oracle{6.84}{\tblbold{7.38}} & \oracle{7.28}{7.82} & \oracle{7.08}{7.62} & \oracle{7.27}{7.82} & - \\ \midrule \multirow{4}{*}{Step 2} & \multirow{3}{*}{mIoU\textsubscript{2} $ \! \uparrow$} & \textbf{AS} & 25.18 & 26.09 & 65.40 & 65.98 & 65.70 & 65.36 & \oracle{73.09}{74.70} \\ & & NA & 23.61 & 23.82 & 69.28 & 69.63 & 68.66 & 69.77 & \oracle{77.62}{79.40} \\ & & EU & 23.98 & 24.10 & 66.66 & 66.86 & 65.79 & 65.87 & \oracle{76.04}{76.62} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{2} \! \downarrow$} & \oracle{67.87}{68.42} & \oracle{67.31}{67.87} & \oracle{11.20}{12.73} & \oracle{10.70}{\tblbold{12.24}} & \oracle{11.71}{13.23} & \oracle{11.36}{12.89} & - \\ \midrule \multirow{5}{*}{Step 3} & \multirow{4}{*}{mIoU\textsubscript{3} $ \! \uparrow$} & \textbf{OC} & 16.53 & 16.74 & 61.29 & 60.58 & 60.53 & 63.07 & \oracle{70.42}{76.46} \\ & & AS & 14.31 & 14.22 & 57.95 & 57.60 & 57.61 & 58.13 & \oracle{68.85}{70.96} \\ & & NA & 17.10 & 17.20 & 62.41 & 63.29 & 61.91 & 64.04 & \oracle{74.64}{75.77} \\ & & EU & 14.94 & 14.94 & 59.78 & 60.01 & 59.34 & 61.15 & \oracle{72.69}{72.97} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{3} \! \downarrow$} & \oracle{78.07}{78.79} & \oracle{77.99}{78.72} & \oracle{15.74}{18.47} & \oracle{15.74}{18.46} & \oracle{16.45}{19.15} & \oracle{14.02}{\tblbold{16.82}} & - \\ \midrule \multirow{6}{*}{Step 4} & \multirow{4}{*}{mIoU\textsubscript{4} $ \! \uparrow$} & \textbf{AF} & 8.98 & 7.77 & 38.48 & 39.97 & 40.54 & 43.93 & \oracle{54.31}{66.54} \\ & & OC & 6.03 & 5.95 & 40.17 & 43.52 & 42.15 & 47.43 & \oracle{63.74}{72.30} \\ & & AS & 7.23 & 7.31 & 39.15 & 41.09 & 42.03 & 46.13 & \oracle{67.11}{69.87} \\ & & NA & 7.78 & 7.07 & 43.10 & 45.12 & 45.00 & 50.07 & \oracle{71.99}{74.22} \\ & & EU & 5.45 & 5.41 & 38.99 & 41.52 & 41.28 & 46.37 & \oracle{69.57}{70.22} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{4} \! \downarrow$} & \oracle{88.92}{89.91} & \oracle{89.57}{90.48} & \oracle{38.38}{43.40} & \oracle{34.91}{40.20} & \oracle{34.95}{40.24} & \oracle{27.95}{\tblbold{33.77}} & - \\ \midrule \multirow{7}{*}{Step 5} & \multirow{5}{*}{mIoU\textsubscript{5} $ \! \uparrow$} & \textbf{SA} & 9.61 & 9.18 & 39.64 & 41.79 & 41.48 & 45.36 & \oracle{59.45}{64.45} \\ & & AF & 11.35 & 9.63 & 40.76 & 41.98 & 41.44 & 45.25 & \oracle{50.03}{63.03} \\ & & OC & 6.78 & 7.42 & 36.52 & 39.08 & 37.21 & 41.76 & \oracle{50.36}{60.82} \\ & & AS & 8.97 & 8.17 & 37.90 & 40.15 & 38.76 & 43.63 & \oracle{60.67}{64.74} \\ & & NA & 8.97 & 9.54 & 41.40 & 43.45 & 43.05 & 47.08 & \oracle{62.56}{66.88} \\ & & EU & 8.18 & 7.63 & 38.37 & 40.53 & 38.93 & 43.51 & \oracle{64.39}{64.04} \\ \arrayrulecolor{black!50} \cmidrule{2-10} \arrayrulecolor{black!100} & \multicolumn{2}{c}{$\bar{\Delta}_{5} \! 
\downarrow$} & \oracle{84.31}{85.98} & \oracle{85.00}{86.58} & \oracle{31.84}{38.90} & \oracle{28.27}{35.67} & \oracle{30.05}{37.28} & \oracle{22.60}{\tblbold{30.57}} & - \\ \bottomrule \end{tabu} \label{tab:map_cont} \end{table} \subsubsection{Study on Model Architecture} \marco{We \tocheck{finally} evaluate the considered methods when a more \pietro{complex} segmentation network is used, moving from the lightweight ErfNet to the heavier DeeplabV3 with ResNet101 backbone.} \marco{For comparison purposes,} the setup analyzed is again that involving $\text{CS} \rightarrow \text{BDD} \rightarrow \text{IDD}$ and $\mathcal{C}_{bgr} \rightarrow \mathcal{C}_{stat} \rightarrow \mathcal{C}_{mov}$ orders \marco{(Table~\ref{tab:CBI_resnet})}. \marco{For what concerns our approach, we observe an improved relative performance, with $\bar\Delta_{2}$ decreasing from $31.28\%$ to $28.53\%$. We emphasize that the $\bar\Delta$ measure already takes into account the better oracle results; the accuracy boost, then, shows that our method is able to capitalize on the increased capacity offered by the segmentation model. } On the other hand, the CIL competitors are unable to take advantage of the \marco{growth} in network capacity, which could indicate a tendency to overfit on the currently observed domain distribution. \marco{ The best competitor (\ie, UCD), in fact, is outperformed by more than 20\% in terms of $\bar\Delta$ at both steps 1 and 2. } We remark that no additional tuning of method-specific parameters is performed in this experimental setup. \subsection{ Evaluation with Larger Geographic Diversity } The second experimental \marco{class and domain incremental} setup we explore is derived from the Mapillary dataset. Domain shift is once more induced by \marco{the variable} geographic \marco{origin} of image samples \marco{collected worldwide}, \ie, we identify data partitions associated with 6 different continents, corresponding to 6 incremental steps. However, the Mapillary dataset contains a variegated data distribution, even considering intra-continent samples, providing more robust support for training segmentation models. Data richness in turn promotes generalization across steps, in fact lessening the gap between domains. We report experimental results in Table \ref{tab:map_cont}. \pietro{In the first steps, when the domain shift is small (\eg, between Europe\um{, EU,} and North America\um{, NA}), the different methods achieve similar performance.} Nonetheless, when progressing to the last steps and experiencing increased statistical gap (\eg, when introducing Africa's images\um{, AF}), we note that our approach outperforms CIL competitors by a \marco{considerable} margin of $5$ points of $\bar\Delta$ w.r.t.\ the best competitor (PLOP) at the end of the incremental training. \marco{Also, superior performance in later steps is attained \pietro{on} both new and old domains, confirming the better plasticity-stability trade-off provided by our method.} \marco{Overall, the improved results \um{\name\ reaches w.r.t.\ state-of-the-art CIL}~competitors, even when training data is collected to ensure some statistical diversity (as in the experimental setup just considered), further suggest that CIL methods are likely to be inadequate to deal with distribution shift in the input space.} \begin{table}[t!]
\scriptsize \centering \setlength{\tabcolsep}{2.9pt} \renewcommand{\arraystretch}{1.1} \caption{ \marco{Experimental results on the Shift dataset.} } \newcommand\cw{6.4mm} \newcommand\cwd{6.mm} \begin{tabu}{l | >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cw}>{\centering}m{\cwd}| >{\centering}m{\cwd}} \toprule \multicolumn{1}{c|}{$\text{Daytime} \! \veryshortarrow \! \text{Twilight} \! \veryshortarrow \! \text{Night}$} & \multicolumn{2}{c}{\textbf{Night}} & \multicolumn{2}{c}{Twilight} & \multicolumn{2}{c}{Daytime} & \\ \multicolumn{1}{c|}{$\mathcal{C}_{bgr} \veryshortarrow \mathcal{C}_{stat} \veryshortarrow \mathcal{C}_{mov}$} & \scalebox{1.}{$\text{mIoU}_{2}^{2}$ $ \!\! \uparrow$} & $\Delta_{2}^{2} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{1}$ $ \!\! \uparrow$} & $\Delta_{2}^{1} \!\! \downarrow$ & \scalebox{1.}{$\text{mIoU}_{2}^{0}$ $ \!\! \uparrow$} & $\Delta_{2}^{0} \!\! \downarrow$ & $\bar{\Delta}_{2} \! \downarrow$ \\ \midrule FT ($\mathcal{L}_{ce}^{n}$) & 10.54 & \oracle{-}{85.82} & 9.62 & \oracle{-}{87.21} & 4.61 & \oracle{-}{94.06} & \oracle{-}{89.03} \\ FT w/ self-style ($\mathcal{L}_{ce}^{\tilde{n}}$) & 10.12 & \oracle{-}{86.39} & 8.50 & \oracle{-}{88.70} & 7.56 & \oracle{-}{90.26} & \oracle{-}{88.45} \\ MiB & 48.07 & \oracle{-}{35.35} & 52.71 & \oracle{-}{29.92} & 48.29 & \oracle{-}{37.77} & \oracle{-}{34.34} \\ PLOP & 48.58 & \oracle{-}{34.67} & 53.66 & \oracle{-}{28.66} & 51.11 & \oracle{-}{34.13} & \oracle{-}{32.48} \\ \name & 60.27 & \oracle{-}{18.94} & 62.57 & \oracle{-}{16.81} & 59.78 & \oracle{-}{22.97} & \oracle{-}{\tblbold{19.57}} \\ \midrule Oracle & 74.35 & \oracle{-}{-} & 75.21 & \oracle{-}{-} & 77.60 & \oracle{-}{-} & \oracle{-}{-} \\ \bottomrule \end{tabu} \label{tab:shift} \vspace{-0mm} \end{table} \subsection{Evaluation with Variable Environmental Conditions} We evaluate the proposed method when incremental domain shift is due to changing environmental \marco{factors}, \ie, \marco{variable light conditions experienced at different times of the day. In this setting, we employ the Shift synthetic benchmark.} \tocheck{We consider the $\textit{Daytime} \rightarrow\allowbreak \textit{Twilight} \rightarrow\allowbreak \textit{Night}$ domain sequence.} \marco{Class incremental scheduling follows the $\mathcal{C}_{bgr} \rightarrow \allowbreak\mathcal{C}_{stat} \rightarrow\allowbreak \mathcal{C}_{mov}$ arrangement of \cite{klingner2020class}, with the only difference being that the starting class pool to be split corresponds to Shift's 22 categories in place of Cityscapes' 19.} Results are reported in Table \ref{tab:shift}, where we \marco{compare} with MiB and PLOP \um{as} CIL competitors, along with the fine-tuning baselines. We verify the superiority of our approach in jointly handling class and domain incremental training, as we surpass PLOP by $13$ points of \um{$\bar\Delta_{2}$}. We once more point out the better stability-plasticity \marco{balance} reached by our method, which achieves improved performance simultaneously over novel and \marco{former domains}. \marco{ Overall, results show that the proposed method is effective \marco{under domain shifts of different nature.} On the other hand, CIL methods prove to be greatly penalized merely by the variable scene illumination across tasks.
We argue that in many real-world applications, such as autonomous driving, it is unrealistic to assume that a continual learner will not experience any sort of alteration in the input data distribution, making our \pietro{continual learning} approach much more applicable.} \section{Experimental Setup} In this section we provide a detailed description of the experimental setup utilized to validate the proposed framework against multiple competing methods. \marco{In Sec.~\ref{sec:exp_res} and \ref{sec:ablation} we will report the results of the evaluation campaign and extensive ablation studies as additional support.} \input{sections/data_impl_details} \subsection{Metrics} \label{sec:metrics} \marco{Inspired by \cite{garg2022multi}, to provide a meaningful measure of prediction performance across multiple tasks and domains, we resort to a} domain average relative performance w.r.t.\ a fully-supervised \marco{\textit{oracle}} reference \um{(the smaller the better)} \marco{defined at any step $t$ as}: \begin{equation} \bar{\Delta}_{t} = \underbrace{\frac{1}{t+1} \, \, \sum_{k=0}^{t}}_{\text{domain avg}} \underbrace{ \, \, \frac{ A_{\mathcal{X}_{k}| \marco{S^{*}} }^{\mathcal{C}_{0:t}} - A_{\mathcal{X}_{k}|S_{t}}^{\mathcal{C}_{0:t}} }{ A_{\mathcal{X}_{k}| \marco{S^{*}} }^{\mathcal{C}_{0:t}} } }_{\marco{\substack{ \Delta^k_{t} \text{: relative acc.\ gap w.r.t.} \\ \text{oracle on step-$k$ domain} }} }, \label{metric:rel} \end{equation} where $A_{\mathcal{X}|S}^{\mathcal{C}}$ is the class-average accuracy (we make use of the commonly employed mIoU metric \cite{minaee2021image}) attained by segmentation network $S$ on domain $\mathcal{X}$ and class set $\mathcal{C}$. \marco{$S^{*}$ is the oracle segmentation model, \ie, trained with full supervision on the entire pool of classes and domains (even classes and domains that will be observed after step $t$). } \marco{We further provide a measure of generalization aptitude \um{(the higher the better)}, expressed as the accuracy (\ie, in terms of mIoU) achieved over the entire class set observed so far on a novel dataset never experienced before. At step $t$, the metric is defined as:} \begin{equation} \Gamma^{gen}_{t} = A_{\mathcal{X}_{ext}|S_{t}}^{\mathcal{C}_{0:t}} = \frac{1}{|\mathcal{C}_{0:t}|} \sum_{c \in \mathcal{C}_{0:t}} A^{c}_{\mathcal{X}_{ext}|S_{t}}, \label{eq:gen_metric} \end{equation} \marco{where $\mathcal{X}_{ext}$ is the unseen domain.} \section{Introduction} With the recent rise of deep learning, the computer vision field has witnessed remarkable advances. Challenging tasks, such as image semantic segmentation, are nowadays successfully addressed by well-established deep learning architectures \cite{long2015fully,chen2018deeplab,romera2018erfnet}. Nonetheless, the fundamental problem of continuously learning and adapting to novel environments remains open and is actively investigated, with a long way to go before its \um{definitive} solution. \marco{Although capable of} remarkable performance in narrow and confined tasks, deep models tend to struggle when confronted with \um{continual} learning of dynamic tasks in ever-changing environments. A major issue lies in the tendency to \textit{catastrophically forget} previously acquired knowledge \cite{kirkpatrick2017overcoming}, with new information erasing that experienced so far.
\pietro{Furthermore,} variable input distribution between supervised training data and target data has been shown \um{to} cause performance degradation, giving rise to the need for \textit{domain adaptation}, which targets knowledge transferability across domains. Both constitute critical problems \marco{when it comes to deploying} deep models in practical applications, as in the real world one is very likely to \marco{face} distribution variability both in terms of input data and of target tasks. A thriving research endeavour has been devoted to \marco{continual} learning \marco{(also referred to as incremental \tocheck{learning, IL,} or lifelong learning \cite{delange2022continual})} in vision problems, such as image classification \cite{kirkpatrick2017overcoming,li2017learning,rebuffi2017icarl}, object detection \cite{shmelkov2017incremental,peng2020faster,joseph2021towards} and, more recently, semantic segmentation \cite{michieli2019incremental,cermelli2020modeling,douillard2021plop}. The majority of those works, however, are limited to a \textit{class incremental} perspective of the continual learning problem, where the focus is strictly posed on the variable task (\eg, class) supervision and label-space shift experienced throughout the learning process. On the other hand, a significant research effort has been directed toward the domain adaptation problem, ranging from a static learning setting \cite{Saito2018MCD,vu2019advent,yang2020fda} to, quite recently, a dynamic perspective \cite{volpi2021continual,volpi2022road,wang2022continualtest}, taking into account incremental changes \pietro{in} \um{the} data distribution. \begin{figure}[!t] \centering \includegraphics[width=0.93\linewidth]{images/ga/GA_v9.pdf} \caption{\marco{High-level view} of our approach. Decreasing transparency \tocheck{(top$\protect\rightarrow$bottom and left$\protect\rightarrow$right)} indicates progression through learning steps. Colored task icons denote the presence of supervision within training data, grayscale \pietro{ones} signal the lack of it. \tocheck{ At each step, we leverage training data to \textit{learn} new classes on the new domain. Domain stylization allows us to replay the old-domain distribution, crucial to \textit{learn} new tasks and \textit{preserve} old ones on former domains, and to \textit{adapt} old-domain old-task knowledge to new domains.} } \label{fig:ga} \end{figure} Nonetheless, the general continual learning problem across both tasks and domains is still unexplored for semantic segmentation. While \classincr methods usually struggle to cope with domain knowledge transferability, \domincr methods are not designed to handle incremental task supervision. We instead propose to tackle \tocheck{continual} semantic segmentation with joint \tocheck{incremental} shift along the class and domain directions. The training process involves multiple steps, each of which carries a new set of classes to learn, along with a training set comprising image samples with a step-distinctive distribution, differing from those experienced in previous steps; supervision is \um{available} only on the newly introduced class set. The overall objective is for the incremental segmentation model to deliver satisfactory performance across all the tasks (\ie, class sets) and domains encountered so far, with the class- and domain-wise joint training as the target upper bound.
In this novel problem setup \pietro{(see Fig.~\ref{fig:ga})}, both domain adaptation and recollection of past classes must be performed to achieve satisfactory performance. From the \domincr angle, it is required to simultaneously learn new classes over past domains and adapt old-class knowledge to the new domain. From the \classincr perspective, recollection of past knowledge must take into account the variable input distribution characterizing the \marco{addressed} incremental learning \pietro{problem}. We therefore devise multiple training objectives to face the underlying sub-problems. To rehearse knowledge of old classes we resort to the old-step segmentation model, which is a common practice among \classincr learning methods \cite{michieli2019incremental}; to replay information of the past-domain input distribution, we propose a stylization mechanism. The average style (\um{\ie, a} very compact representation) of each \um{encountered domain} is computed and stored in a memory bank, to be transferred to novel domains in future steps and reproduce some domain-level information. The overall optimization framework consists of (i) a standard task loss (\ie, a cross-entropy objective) to learn new \pietro{classes} over the available training data, (ii) an additional task loss instance to learn new classes in old domains by leveraging stylization, (iii) a knowledge \pietro{distillation-like} objective to infuse adapted information of past classes, in the form of hard pseudo-labels, into the new domain and finally (iv) an output-level knowledge distillation objective applied \marco{to} stylized images to retain old-domain old-class performance. To summarize, our contributions are as follows: \begin{itemize} \item We \marco{investigate} a novel \marco{comprehensive} incremental learning \pietro{setting} that accounts for variable distribution within both the input and label spaces. \item We develop a framework to tackle all facets of the class and \domincr learning problem, based on a stylization mechanism to recall domain knowledge under incremental task supervision and a robust distillation framework to \marco{retain} task knowledge under incremental domain shift. \item We devise novel experimental setups to simulate the \um{proposed learning} setting and conduct an extensive evaluation campaign. \item We show that the proposed method outperforms existing state-of-the-art methods that address the IL problem only from a class or a \domincr perspective. \end{itemize} \section{Overview of the Proposed Method} We \marco{concurrently} face challenges peculiar to both \pietro{the} domain adaptation and \pietro{the \classincr learning} settings. \\ \textbf{Domain Adaptation.} The segmentation network is trained on data from multiple domains, each \marco{holding} only a subset of the whole set of semantic classes. \marco{Even so}, the model is expected to provide satisfactory prediction performance on all the observed domains and semantic classes. \marco{Hence}, it is necessary to transfer knowledge across domains to \begin{enumerate} \item[(i)] learn \textit{new-class} clues shared across the current (supervised) domain and the past ones (where new-class supervision was not available \marco{during} past steps); \item[(ii)] adapt \textit{old-class} knowledge learned in \marco{former} domains to the novel \um{domain}.
\end{enumerate}
\textbf{\ClassIncr Learning.}
The different class supervision available on different domains leads us to a \classincr problem, where semantic categories are encountered in a continual fashion. Therefore, we are required to address the widely known {catastrophic forgetting} phenomenon \cite{kirkpatrick2017overcoming}, aiming at preserving knowledge of past classes when learning new ones. However, unlike standard CIL, knowledge preservation has to be performed differently depending on the domain in which it is applied:
\begin{itemize}
\item[(i)] in \textit{past domains}, straightforward recollection of previously observed classes can be imposed, as those classes were learned over past domain distributions;
\item[(ii)] in the \textit{novel domain}, instead, the recalled memory of past classes should be adapted to account for the semantic shift happening within the input space.
\end{itemize}
\begin{table}[!t]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Training objectives: the \textit{n}/\textit{o} superscripts denote the use of new/old domain data, with $\tilde{\cdot}$ implying stylization.}
\begin{tabular}{c|cc}
\toprule
\multicolumn{1}{c|}{} & \multicolumn{1}{c}{New Domain} & Old Domains \\
\midrule
New Classes & $\mathcal{L}_{ce}^{\tilde{n}}$ & $\mathcal{L}_{ce}^{\tilde{o}}$ \\
Old Classes & $\mathcal{L}_{\lossname}^{\tilde{n}}$ & $\mathcal{L}_{kd}^{\tilde{o}}$ \\
\bottomrule
\end{tabular}
\label{tab:train_obj}
\end{table}
We break down the domain shift and class continual learning problems into simpler underlying sub-problems, as indicated above. Our overall learning framework builds upon multiple individual objectives, each focusing on a specific challenge enclosed in the general setup. We simultaneously progress along the class and \domincr directions; at each learning step after the first one, both classes and domains experienced so far can be arranged into \textit{new} or \textit{old} types, according to whether they are currently available or not. In more detail, we propose a specific learning objective for each of the different combinations of domain and class types (see Table~\ref{tab:train_obj} and Fig.~\ref{fig:arch}), \ie, to:
\begin{enumerate}[topsep=0pt]
\item[(i)] learn new classes on new domains (Sec.~\ref{sec:newD_newC});
\item[(ii)] learn new classes on old domains (Sec.~\ref{sec:oldD_newC});
\item[(iii)] adapt old-class information to new domains (Sec.~\ref{sec:newD_oldC});
\item[(iv)] preserve old-class information in old domains (Sec.~\ref{sec:oldD_oldC}).
\end{enumerate}
\subsection*{Domain Stylization}
\label{sec:domain_style}
We resort to a style transfer mechanism to recreate image data with statistical properties resembling those of past domains. More specifically, starting from the available image data originating from the input domain accessible at the current step, we transfer the styles extracted from all the previously encountered domains. By doing so, a stylized version of each of the former domains is produced, with image content derived from the novel dataset. The benefits originating from domain stylization are manifold: (i) We force the prediction model to experience past input distributions under supervision or pseudo-supervision, tackling domain-level catastrophic forgetting.
(ii) We aim at learning new classes on old domains, where supervision was not available when they were directly observed. At the same time, we propose to preserve old-class knowledge on old domains, counteracting class-level catastrophic forgetting. (iii) By encountering a variegated input distribution, the predictor is encouraged to develop the ability to generalize to unseen domains, which is crucial in a continual learning paradigm that involves domain shift.
The style transfer mechanism we adopt is inspired by \cite{yang2020fda} and involves low computational cost and memory requirements. The original algorithm works in the Fourier transform domain: the low-frequency portion of the amplitude of the spectral representation of a target image (\ie, the style) is extracted and applied to replace that of a source image (\ie, the content), whose phase component is kept unchanged. The outcome is image data with source semantic information and target-like low-level appearance. We enhance the original method to accommodate the additional complexity brought in by the class and domain incremental setting. From each image of the currently available dataset, we extract its style tensor (\ie, the central window of the amplitude), and we average it over all the samples:
\begin{equation}
\bar{F}^{A}_{t} = \frac{1}{|\mathcal{D}_{t}|} \sum_{\mathbf{X} \in \mathcal{D}_{t}} \mathcal{F}^{A} (\mathbf{X})[{W_{\beta}}],
\end{equation}
where $\mathcal{F}^{A}(\mathbf{X})$ is the amplitude obtained by the FFT applied to image $\mathbf{X}$, and $W_{\beta}$ is the style window. By doing so, we are extracting significant knowledge of \textit{domain-dependent} statistical properties, condensed in a compact representation. The domain-specific style $\bar{F}^{A}_{t}$ of step $t$ is stored in an incrementally-filled memory bank $\mathcal{M}^{F}_{0:t\shortminus 1} \!=\! \{ \bar{F}^{A}_{k} \, | \, k \!<\! t \}$ and preserved across steps. By leveraging the proposed storage mechanism, at each incremental step we can access crucial information about past-domain low-level properties (yet minimal if compared to that contained in whole training sets), without requiring direct access to raw image data, which would violate the exemplar-free assumption. We stress that domain shift affects low-level details, while high-level semantic content is mostly shared across domains (\eg, the road serves the same purpose regardless of the dataset, while its appearance in terms of texture or pavement material might vary considerably).
To create an oldly-stylized dataset at step $t$ looking back at step $k \!<\! t$ (\ie, $\tilde{\mathcal{X}}_{t\shortto k}$), for each image of the current domain we replace its amplitude window with that of the selected former domain as follows:
\begin{equation}
\tilde{\mathcal{X}}_{t\shortto k} = \{ \mathcal{F}^{\shortminus1}([\bar{F}^{A}_{k} + \mathcal{F}^{A} (\mathbf{X})[{W^{c}_{\beta}}], \mathcal{F}^{P}(\mathbf{X})]) \, | \, \mathbf{X} \in {\mathcal{X}}_{t} \},
\end{equation}
where $\mathcal{F}^{\shortminus1}$ is the inverse FFT operator and $\mathcal{F}^{P}(\mathbf{X})$ is the Fourier phase component of $\mathbf{X}$. In addition, we devise a self-stylization mechanism by self-applying the domain style to improve generalization toward future steps, promoting forward transfer. As for the dimension of the style window, we experimentally found that the $\beta$ parameter as defined in \cite{yang2020fda} (\ie, the parameter controlling the window size) provides satisfactory and robust results when set to $1\mathrm{e}\!\shortminus\!2$. Finally, we stress that our approach is independent of the style transfer technique used, provided that style information and content can be extracted in two distinct steps.
\section{Learning Across Tasks and Domains}
\label{sec:method}
\subsection{Learning New Classes over New Domains}
\label{sec:newD_newC}
In the proposed class and domain continual learning framework, direct supervision comes uniquely from the newly introduced class set $\mathcal{C}_{t}$ and image domain $\mathcal{X}_{t}$, in the form of the training dataset $\mathcal{D}_{t} \subset \mathcal{X}_{t} \times \mathcal{Y}_{t}$. As mentioned before, image pixels not belonging to $\mathcal{C}_{t}$, \ie, of past or never seen classes, are assigned to a special class \textit{unknown}, whose semantic statistical properties are highly dynamic. To account for the semantic shift suffered by the \textit{unknown} class at the current step $t>0$ w.r.t.\ previous steps, we group the past and unknown class probability channels as follows:
\begin{equation}
\unceP_{t}(\mathbf{X})[x,y,c] =
\begin{cases}
P_{t}(\mathbf{X})[x,y,c],& \text{if } c\neq u \\
\sum_{c' \in \mathcal{C}_{0:t\shortminus1}} P_{t}(\mathbf{X})[x,y,c'],& \text{if } c = u \\
\end{cases}
\end{equation}
where $P_{t}(\mathbf{X}) \in \mathbb{R}^{H \times W \times |\mathcal{C}_{0:t}|}$ is the output of $S_{t}$ prior to the $\argmax$ when a generic image $\mathbf{X} \in \mathcal{X}$ is given as input. \\
We additionally define $\tilde{\mathcal{D}}_{t\shortto t} \subset \tilde{\mathcal{X}}_{t\shortto t} \times \mathcal{Y}_{t}$ as the \textit{self-stylized} training dataset at step $t$, where the average style (defined above in Sec.~\ref{sec:domain_style}) of the current image domain has been applied on top of the $\mathcal{X}_{t}$ domain itself. To learn the newly introduced classes over the new domain we optimize:
\begin{equation}
\mathcal{L}_{{ce}}^{\tilde{n}}(\mathcal{C}_{t},\mathcal{X}_{t}) = - \frac{1}{ |\tilde{\mathcal{D}}_{t\shortto t}| } \sum_{\tilde{\mathbf{X}}, \mathbf{Y} \in \tilde{\mathcal{D}}_{t\shortto t} } \!\!\!\! \mathbf{Y} \cdot \log \unceP_{t}(\tilde{\mathbf{X}}),
\label{eq:nc_nd}
\end{equation}
where we leverage input data with the current style and supervision over the new class set. The $\tilde{n}$ superscript indicates the use of self-stylized data on the \textit{new} domain.
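As a concrete illustration of the stylization mechanism of Sec.~\ref{sec:domain_style}, the following minimal NumPy sketch extracts the average amplitude window of a domain and applies a stored style to a new image. It is only indicative: the function names are ours, all images are assumed to share the same resolution, and window sizing, shift conventions and color handling are simplified with respect to any actual implementation.
\begin{verbatim}
import numpy as np

def average_style(images, beta=1e-2):
    # Average the central (low-frequency) amplitude window over a domain,
    # in the spirit of the definition of the average style bar{F}^A_t.
    # Assumes every image is a (H, W, C) float array of the same size.
    styles = []
    for img in images:
        amp = np.abs(np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)),
                                     axes=(0, 1)))
        h, w = img.shape[:2]
        bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
        cy, cx = h // 2, w // 2
        styles.append(amp[cy - bh:cy + bh, cx - bw:cx + bw])
    return np.mean(styles, axis=0)

def stylize(img, style_bar):
    # Replace the central amplitude window of `img` with a stored average
    # style, keep the phase unchanged, then invert the FFT.
    fft = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    amp, phase = np.abs(fft), np.angle(fft)
    h, w = img.shape[:2]
    bh, bw = style_bar.shape[0] // 2, style_bar.shape[1] // 2
    cy, cx = h // 2, w // 2
    amp[cy - bh:cy + bh, cx - bw:cx + bw] = style_bar
    out = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase),
                                        axes=(0, 1)), axes=(0, 1))
    return np.real(out)
\end{verbatim}
Self-stylization simply corresponds to calling the second function with the average style of the current domain itself.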
The purpose of self-stylization is twofold: first, it provides additional robustness and generalization capability to the prediction model, since the input data is supplied with more homogeneous low-level statistics across individual samples. Second, it forces the prediction model to experience domain statistics that will be stored and replayed in the future, acting as proxies for the no longer available previous domain statistics.
\begin{figure*}
\centering
\includegraphics[width=1.\linewidth]{images/arch/Arch5.pdf}
\caption{Model architecture: we decompose class and domain IL into simpler sub-problems, each addressed by a suitable objective (4 panels on the right side); to access no longer available old-domain data, we resort to stylization (left side).}
\label{fig:arch}
\end{figure*}
\subsection{Learning New Classes over Past Domains}
\label{sec:oldD_newC}
To compensate for the lack of available input data for past domains, we generate proxy datasets retaining low-level statistics resembling those of past domains. More precisely, for each style $\bar{F}^{A}_{k} \in \mathcal{M}^{F}_{0:t\shortminus1}$ of step $k<t$ we build $\tilde{\mathcal{D}}_{t\shortto k} \subset \tilde{\mathcal{X}}_{t\shortto k} \times \mathcal{Y}_{t}$ (as detailed in Sec.\ \ref{sec:domain_style}), \ie, an \textit{oldly-stylized} training dataset at step $t$, for which the domain-specific visual attributes of step $k<t$ have been applied on domain $\mathcal{X}_{t}$. Supervision on the newly introduced classes over the old domains is exploited by optimizing:
\begin{equation}
\mathcal{L}_{ce}^{\tilde{o}}(\mathcal{C}_{t},\mathcal{X}_{0:t\shortminus1}) = - \frac{1}{t} \sum_{k=0}^{t\shortminus1} \frac{1}{ |\tilde{\mathcal{D}}_{t\shortto k}| } \sum_{\tilde{\mathbf{X}}, \mathbf{Y} \in \tilde{\mathcal{D}}_{t\shortto k} } \!\!\!\! \mathbf{Y} \cdot \log \unceP_{t}(\tilde{\mathbf{X}}),
\label{eq:nc_od}
\end{equation}
where we leverage input data with past styles (\ie, with distributions supposedly close\footnote{The \textit{closeness} depends on what the style transfer mechanism is able to transfer in terms of statistical properties. The distribution gap is reduced in terms of low-level properties, while the semantic high-level distribution should already be similar across domains.} to those of no longer available former domains) and the supervision over the new class set. The superscript $\tilde{o}$ indicates the use of \textit{oldly}-stylized data. By concurrently learning the segmentation task at the present step over an augmented pool of input data distributions from the past, the prediction model should learn more general and shareable clues, overcoming the domain shift inherent in the domain continual learning paradigm.
\subsection{Adapting Old Classes to New Domains}
\label{sec:newD_oldC}
In the addressed \classincr learning scenario, at each new learning step all past class sets are assumed to lack any direct supervision. To recall previously acquired knowledge, we resort to the well-known knowledge distillation objective \cite{hinton2015distilling}.
Yet, differently from the standard \classincr learning problem as traditionally formalized in the literature \cite{rebuffi2017icarl}, we expect to encounter additional challenges: \\
(i) the input data of past domains (\ie, those experienced by the segmentation model when previous class sets were learned) are no longer available; \\
(ii) a distribution shift separates the current image data from that available at former steps. Thus, we no longer have access to data distributed as that experienced by the segmentation model saved from the past step, which, in principle, should be leveraged to distill knowledge of old classes. To replicate the image distribution of data from past steps, we resort to the stylization mechanism (Sec.~\ref{sec:domain_style}). Specifically, for each old domain $\mathcal{X}_{k}$, $k\!<\!t$, we build an oldly-stylized dataset $\tilde{\mathcal{D}}_{t\shortto k}$ starting from that of the current step $t$.
To access a form of supervision over the past classes we make use of pseudo-labeling via the prediction model from the previous step, which should retain useful knowledge of the semantic categories learned so far. However, said model might not distill knowledge effectively when fed with input data of an unseen distribution, \ie, originating from the newly introduced domain. Therefore, we exploit oldly-stylized data to enhance pseudo-labeling by mitigating domain shift. We denote with $P_{t\shortminus1}^{k}(\tilde{\mathbf{X}}) \in \mathbb{R}^{H \times W \times |\mathcal{C}_{t\shortminus1}|}$, $\tilde{\mathbf{X}} \in \tilde{\mathcal{X}}_{t\shortto k}$, the classification probability map from model $S_{t\shortminus1}$ over new-domain images with the style of step $k$. We then compute pseudo-labels following:
\begin{equation}
\hat{\mathbf{Y}}_{t\shortminus1}^{ \mathcal{K} }[x,y] = \argmax_{c \in \mathcal{C}_{0:t\shortminus1}} \max_{ k \in \mathcal{K} } P_{t\shortminus1}^{k}(\tilde{\mathbf{X}})[x,y],
\label{eq:pseudo}
\end{equation}
where we leverage old model predictions over past styles, \ie, we set $\mathcal{K} \!=\! \{0,...,t \!-\! 1\}$, while $\max_{ k \in \mathcal{K} } P_{t\shortminus1}^{k}(\tilde{\mathbf{X}})[x,y]$ indicates that for each spatial location $(x,y)$ we take the probability vector associated with the style with maximum peak value. We then refine the generated pseudo-labels at each spatial location (we will shorten $\hat{\mathbf{Y}}_{t\shortminus1}^{\mathcal{K}=\{0,...,t \shortminus 1\}}$ as $\hat{\mathbf{Y}}_{t\shortminus1}^{\smalless t}$ and drop the term $[x,y]$ for ease of notation) as:
\begin{equation}
\fullhatt[-0.3ex]{\mathbf{Y}}^{ \lower.75em\hbox{\smalless \scriptsize $t$} }_{t\shortminus1} =
\begin{cases}
\hat{\mathbf{Y}}_{t\shortminus1}^{\smalless t},& \text{if } \hat{\mathbf{Y}}_{t\shortminus1}^{\smalless t} \text{ confident } \wedge \mathbf{Y}_{t} = u \\
{u,}& { \text{if } {\mathbf{Y}}_{t} \neq u }\\
\mathrm{ignore},& \text{elsewhere} \\
\end{cases}
\end{equation}
where $\mathbf{Y}_{t} \in \mathcal{Y}_{t}$.
The hard pseudo-label $\hat{\mathbf{Y}}_{t\shortminus1}^{\smalless t}[x,y]$ (\ie, after the $\argmax$ operation in Eq.~\eqref{eq:pseudo}) is considered to provide a confident prediction if the peak probability value (of the probability map prior to the $\argmax$) exceeds a threshold $\tau$, or if that value is among the top-$K$ fraction of highest peaks for class $c \!=\! \hat{\mathbf{Y}}_{t\shortminus1}[x,y]$. We set $\tau \!=\! 0.9$ and $K \!=\! 0.66$ as advised in \cite{yang2020fda}. In addition, we leverage the ground-truth supervision on new classes to correct noisy estimations in pseudo-labels, by marking as \textit{unknown} (\ie, \textit{u}) all the pixels of newly introduced categories. We remark that the employed knowledge distillation is designed to provide insight into previous tasks (where the current new classes were assigned to the \textit{u} class), whereas we entrust Eq.~\eqref{eq:nc_nd} to instill understanding of the novel task. We experimentally verify that using separate objectives to train on new and old classes leads to improved results, as it forces the model to learn to better discriminate between different incremental class sets, part of which might coexist under the same \textit{unknown} group for one or more learning steps. This is especially true for autonomous driving datasets, where each image can contain several semantically diverse elements, for all of which we may not have supervision from the start of the training.
To infuse adapted information about past classes at the current step without direct access to ground-truth information, we resort to the following objective:
\begin{equation}
\mathcal{L}^{\tilde{n}}_{\lossname}(\mathcal{C}_{0:t\shortminus1},\mathcal{X}_{t}) = - \frac{1}{|\tilde{\mathcal{D}}_{t\shortto t}|} \sum_{{\tilde{\mathbf{X}} \in \tilde{\mathcal{D}}_{t\shortto t}}} \fullhatt[-0.3ex]{\mathbf{Y}}^{\lower.75em\hbox{\smalless \scriptsize $t$}}_{t\shortminus1} \cdot \log {\unkdP_{t}}({\tilde{\mathbf{X}}}),
\end{equation}
by which we distill knowledge of past tasks (\ie, recognition of classes in $\mathcal{C}_{0:t\shortminus1}$) over the new domain $\mathcal{X}_{t}$ via the pseudo-labels derived from the old model $S_{t\shortminus1}$. To account for the semantic shift suffered by the \textit{unknown} class of step $t - 1$ when moving to a new step $t \!>\! 0$, we group \textit{new} and \textit{unknown} class probability channels as follows:
\begin{equation}
\unkdP_{t}(\mathbf{X})[x,y,c] =
\begin{cases}
P_{t}(\mathbf{X})[x,y,c],& \text{if } c\neq u \\
\sum_{c' \in \mathcal{C}_{t}} P_{t}(\mathbf{X})[x,y,c'],& \text{if } c = u \\
\end{cases}
\label{eq:unb_ce}
\end{equation}
where $\unkdP_{t}(\mathbf{X}) \!\in\! \mathbb{R}^{H \times W \times |\mathcal{C}_{0:t\shortminus1}|}$. We opt for the use of hard labels in place of the more common soft labels in the distillation-like loss in order to avoid enforcing an uncertain behavior on $S_{t}$. Such behavior could originate from the mismatch between the training and inference input distributions undergone by the old model $S_{t-1}$, which has been trained over past domains and is now fed with new-domain data (the oldly-stylization operation reduces domain shift but is not guaranteed to remove it completely). Experimental data on the pseudo-labeling strategy is provided in Sec.~\ref{sec:abl_pseudo}.
\subsection{Preserving Old Classes on Old Domains}
\label{sec:oldD_oldC}
In Sec.~\ref{sec:newD_oldC} we focused on distilling old-task knowledge on the current novel domain.
Nonetheless, our ultimate target is to end up with a segmentation network capable of recognizing all the observed classes over all the experienced domains, that is, a prediction model robust to both domain and label distribution shifts. For this reason, at every novel incremental step it is required to preserve the task knowledge acquired in the past, that is, on past classes over past domains. To do so, we leverage the output-level knowledge distillation objective in its standard formulation \cite{hinton2015distilling}, where we force a student model (\ie, the current model) to mimic the predicted classification probability distribution of a teacher model (\ie, the model saved and kept frozen since the end of the previous step). We opted for the objective in its standard fashion \cite{hinton2015distilling}, as both image and label distributions ideally originate from previous steps, so no domain shift should, in principle, affect the distillation process. In practice, we cannot access former incremental datasets. Therefore, to retrieve the missing old-domain data, we resort once more to stylization (Sec.~\ref{sec:domain_style}), so that we can leverage oldly-stylized data as a proxy for the missing original images. The final objective is of the following form:
\begin{equation}
\mathcal{L}^{\tilde{o}}_{kd}(\mathcal{C}_{0:t\shortminus1},\mathcal{X}_{0:t\shortminus1}) = - \frac{1}{t} \sum_{k=0}^{t\shortminus1} \frac{1}{ |\tilde{\mathcal{D}}_{t\shortto k}| } \!\!\!\!\! \sum_{ \;\;\; \tilde{\mathbf{X}} \in \tilde{\mathcal{D}}_{t\shortto k} } \!\!\! \!\!\! P_{t\shortminus1}(\tilde{\mathbf{X}}) \cdot \log \unkdP_{t}(\tilde{\mathbf{X}}),
\label{eq:kd}
\end{equation}
where $\unkdP_{t}(\tilde{\mathbf{X}}) \in \mathbb{R}^{H \times W \times |\mathcal{C}_{0:t\shortminus1}|}$ refers to the modified probability distribution from Eq.~\eqref{eq:unb_ce}, for which the \textit{new} and \textit{unknown} categories are incorporated into a single output channel to address the label shift within the $u$ class. The overall objective is given by:
\begin{equation}
\mathcal{L}_{tot} = \mathcal{L}_{{ce}}^{\tilde{n}} + \lambda_{{ce}}^{\tilde{o}} \cdot \mathcal{L}_{{ce}}^{\tilde{o}} + \lambda_{{\lossname}}^{\tilde{n}} \cdot \mathcal{L}_{{\lossname}}^{\tilde{n}} + \lambda_{{kd}}^{\tilde{o}} \cdot \mathcal{L}_{{kd}}^{\tilde{o}}.
\label{eq:complete}
\end{equation}
\section{Related Works}
\noindent
\textbf{Semantic Segmentation.}
Under the impulse of deep learning, semantic segmentation has witnessed considerable advances in recent years \cite{minaee2021image}. Since the introduction of fully convolutional networks (FCNs) \cite{long2015fully}, which popularized the encoder-decoder architecture, huge research efforts have improved the state of the art. Dilated convolutions \cite{yu2016multi,chen2018deeplab} allow the network to retain sufficiently large receptive fields while limiting the growth in model size. Spatial \cite{zhao2017pyramid} and feature \cite{li2018pyramid} pyramid pooling extract and aggregate contextual information at different scales to acquire enriched representations for improved dense predictions. At the same time, considerable interest has been devoted to the design of lightweight architectures for practical applications, which are typically burdened by strict hardware constraints.
MobileNet architectures \cite{howard2017mobilenets,sandler2018mobilenetv2} are built upon the efficient depthwise separable convolution. ErfNet \cite{romera2018erfnet} resorts to factorized residual layers to provide accurate real-time segmentation. Recently, transformers have been applied in vision, even for dense prediction tasks such as semantic segmentation \cite{khan2021transformers}.
\noindent
\textbf{Class Incremental Learning} (CIL).
Continual learning in the form of incremental classification tasks has been the subject of growing research interest in the recent past \cite{delange2022continual}. Extensive literature can be found targeting image classification \cite{kirkpatrick2017overcoming,li2017learning,rebuffi2017icarl,hou2019learning,douillard2020podnet,yan2021der,zhu2021prototype,toldo2022bring,zhu2022self,wu2022class,xie2022general,tang2022learning,douillard2022dytox} and object detection tasks \cite{shmelkov2017incremental,peng2020faster,joseph2021towards,yang2022continualobject} under the incremental learning paradigm. Many of these works \cite{rebuffi2017icarl,douillard2020podnet,yan2021der,wu2022class,xie2022general,tang2022learning,douillard2022dytox} rely on exemplars, \ie, a small portion of training data is stored to be replayed in future steps. We instead place ourselves in a totally exemplar-free setup. Among the exemplar-free methods \cite{kirkpatrick2017overcoming,li2017learning,shmelkov2017incremental,peng2020faster,joseph2021towards,yang2022continualobject,zhu2021prototype,toldo2022bring,zhu2022self} we can identify regularization-based \cite{kirkpatrick2017overcoming,joseph2021towards,yang2022continualobject}, rehearsal-based \cite{li2017learning,shmelkov2017incremental,peng2020faster,zhu2021prototype,toldo2022bring} and structure-based \cite{zhu2022self} approaches. Even if many works propose techniques which could in principle be generalized to various vision tasks (such as the prosperous knowledge distillation mechanism \cite{hinton2015distilling,li2017learning,shmelkov2017incremental}), the semantic segmentation task introduces additional complexity which is not present in the case of whole-image classification or object detection \cite{michieli2022domain}. More limited literature can be found for incremental semantic segmentation \cite{michieli2019incremental,klingner2020class,cermelli2020modeling,douillard2021plop,maracani2021recall,michieli2021continual}, even though this field has experienced a recent surge of research interest \cite{yang2022uncertainty,yang2022continual,cermelli2022incremental,zhang2022representation,phan2022class}. A first direction of study has been oriented toward the adaptation of the knowledge distillation mechanism to incremental semantic segmentation \cite{michieli2019incremental,klingner2020class,cermelli2020modeling,douillard2021plop,phan2022class,yang2022continual,yang2022uncertainty}. Michieli \etal \cite{michieli2019incremental,michieli2021knowledge} were the first to introduce this technique in CIL for dense classification, proposing both feature- and output-level variants of the distillation objective. In \cite{cermelli2020modeling} the authors address the semantic shift of background regions by proposing a novel distillation formulation. Furthermore, \cite{douillard2021plop} improves feature-level distillation by pooling representations to capture spatial relationships.
Phan \etal \cite{phan2022class} introduce a measure of task similarity as a weighting factor in the distillation objective. Yang \etal \cite{yang2022continual} resort to a structured self-attention approach to preserve relevant knowledge. Finally, \cite{yang2022uncertainty} extends the popular contrastive learning paradigm to incremental semantic segmentation to improve class discriminability in the feature space. Nonetheless, none of the aforementioned works address the distribution shift that could be present across tasks within the input space. We propose to use a distillation objective which is robust to \domincr gaps and targets the preservation of old-task knowledge both on the current domain, by distilling through robust hard pseudo-labels, and on the past domains, by leveraging domain stylization to distill knowledge when experiencing old-domain input statistics. Targeting semantic discriminability of latent representations, a clustering-based objective built upon class prototypes is proposed in \cite{michieli2021continual}. Maracani \etal \cite{maracani2021recall} introduce a novel rehearsal approach based on the retrieval of training samples from external sources, \ie, via GAN-based generation or web-crawling. Cermelli \etal \cite{cermelli2022incremental} further show that it is possible to perform continual training with only image-level annotations in incremental steps and reach high accuracy in some CIL experimental setups. Nonetheless, this approach could be sensitive to the amount of dense supervision provided in the first learning step, and might not scale well to the segmentation of images containing objects of different classes. Zhang \etal \cite{zhang2022representation} devise a dynamic incremental framework to decouple the representation learning of old and new tasks. All the aforementioned works assume statistical homogeneity across learning steps in terms of input data distribution. On the other hand, we address the more realistic setup with both input and label spaces undergoing incremental shifts, and we show the superiority in this generalized setup of the proposed incremental approach compared to pure CIL competitors.
\noindent
\textbf{Domain Adaptation} (DA).
Deep models are known to suffer performance degradation when presented with varying input distributions between the training and testing phases \cite{ben2006analysis}. Domain adaptation has been extensively investigated to alleviate the aforementioned problem, by safely transferring learned knowledge from label-abundant source domains to label-scarce, or even unsupervised, target ones. Particularly flourishing has been unsupervised domain adaptation (UDA) for the semantic segmentation task \cite{toldo2020unsupervised,Saito2018MCD,vu2019advent,yang2020fda,zhang2021prototypical,hoyer2022daformer}, as supervision in terms of dense segmentation maps is usually very costly and time-consuming to collect for real-world data. In its standard form, UDA entails no continual learning, as the task at hand is the same on both the static source and target domains, which are concurrently available. We instead address a more realistic setup with dynamic task and domain evolution. More recently, different variations of static DA have been proposed, relaxing some of the original strict assumptions. One research direction involves distinct tasks between source and target domains, \ie, it allows source and target classes to be different.
Depending on the relationship between source and target class sets, partial \cite{tian2021partial}, open-set \cite{jing2021towards} and universal \cite{saito2021ovanet,ma2021active} domain adaptation setups have been proposed, even though most research has been confined to the image classification problem \cite{jing2021towards,saito2021ovanet,ma2021active}. Moreover, these works do not involve \classincr learning, as adaptation is performed with simultaneous access to source and target domains in a single learning phase. Another line of work has explored diverse setups in terms of domain availability. Some propose to handle multiple source \cite{he2021multi,gong2021mdalu} or target \cite{liu2020open,volpi2021continual,isobe2021multi,zhao2022sourcefree,volpi2022road,wang2022continualtest,marsden2023continual} domains. This can involve a single adaptation phase \cite{he2021multi,gong2021mdalu}, or multiple phases where different domains are experienced in different learning steps in an incremental fashion \cite{volpi2021continual,zhao2022sourcefree,volpi2022road,wang2022continualtest,marsden2023continual}, in effect undertaking continual learning under the domain adaptation perspective. Yet, all these works assume homogeneity of tasks across all the domains encountered, whereas the class and \domincr setup we propose deals with variable learning conditions along both task and domain progressions. Garg \etal \cite{garg2022multi} develop a multi-\domincr learning (MDIL) framework that involves classification tasks shifting across multiple domains experienced in an incremental fashion. However, \textit{total} supervision is available on all the domains encountered, leading to overlapping incremental class sets. We instead adhere to a stricter CIL setup, with disjoint groups of semantic categories incrementally introduced. It is possible to find a few works that address both task incremental and domain adaptation problems. Kalb \etal \cite{kalb2021continual} discuss class and \domincr learning, but each task is tackled individually by evaluating standard CIL and DA methods. In \cite{shenaj2022continual} coarse-to-fine continual learning is explored, but the proposed setup does not involve domain shift across learning steps, as source and target domains are kept fixed. Recently, Simon \etal \cite{simon2022generalizing} addressed continual learning with tasks and domains dynamically evolving. Still, they assume task supervision on all the considered domains at each task-incremental step, which may not be a realistic assumption in real-world applications. In addition, rehearsal of training exemplars is performed, and the method specifically targets image classification.
\section{Problem Setup}
In semantic segmentation we aim at labeling every individual spatial location of an image by associating it with a semantic class taken from a predefined collection of candidates $\mathcal{C}$. That is, given an RGB image $\mathbf{X} \in \mathcal{X} \subset \mathbb{R}^{H \times W \times 3}$, a segmentation network $S: \mathcal{X} \mapsto \mathcal{Y}$ is exploited to provide its segmentation map $\hat{\mathbf{Y}} \in \mathcal{Y} \subset \mathcal{C}^{H \times W}$. $\hat{\mathbf{Y}}$ should be an accurate prediction of the ground-truth map $\mathbf{Y}$, which is available only at training time. We follow an incremental learning protocol to optimize the segmentation network, as depicted in Fig.~\ref{fig:setup}.
Specifically, the predictor is trained in multiple steps $t \!=\! 0,...,T \!-\!1$ to recognize a progressively increasing set of semantic classes. At step $t$, a new class set $\mathcal{C}_{t}$ is introduced, along with training data $\mathcal{D}_{t} \!=\! \{ (\mathbf{X}_{t}, \mathbf{Y}_{t}) \} \!\subset\! \mathcal{X}_{t} \times \mathcal{Y}_{t}$ associated with that set, which is available on the current image domain $\mathcal{X}_{t}$. The supervision provided by $\mathcal{D}_{t}$ is restricted to $\mathcal{C}_{t}$, meaning that any pixel within $\mathcal{D}_{t}$ is tagged in $\mathcal{Y}_{t}$ with $c \in \mathcal{C}_{t}$. At the end of the step, all the currently accessible data is discarded and not reused again. The procedure is reiterated for multiple learning steps, with a new domain $\mathcal{X}_{t}$ and class set $\mathcal{C}_{t}$ being introduced and used for training at each step. \\
More formally, the objective is to train $S_{t}: \mathcal{X}_{0:t} \mapsto \mathcal{Y}_{0:t}$
\begin{itemize}
\item to recognize all the semantic classes observed up to the current step $t$:
\begin{equation}
\mathcal{Y}_{0:t} \subset \mathcal{C}_{0:t}^{H \times W}, \quad \mathcal{C}_{0:t} \!=\! \bigcup_{k=0}^{t}\mathcal{C}_{k},
\end{equation}
\item on all the image domains experienced so far:
\begin{equation}
\mathcal{X}_{0:t} = \bigcup_{k=0}^{t} \mathcal{X}_{k}.
\end{equation}
\end{itemize}
We remark that $\{ \mathcal{X}_{t} \}_{t=0}^{T\shortminus1}$ are characterized by diverse statistical properties, \ie, domain shift occurs between them, typically manifested through variable visual appearance of scene elements that nonetheless share the same semantics across domains. All $\mathcal{C}_{t}$ are disjoint sets, except for the \textit{unknown} ($u$) class, which belongs to each of them. Class $u$ at step $t$ contains all the past and future classes. In other words, $u$ undergoes a semantic shift across subsequent steps and, for this reason, demands special care when being handled {\cite{cermelli2020modeling}}.
\begin{figure}[!t]
\centering
\includegraphics[width=1.\linewidth]{images/setup/setup6.pdf}
\caption{Overview of the class and domain incremental setup. At each step, training data comes from a new domain and is labeled on a new class set. When testing, performance is measured on all domains and classes experienced so far.}
\label{fig:setup}
\end{figure}
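To make the supervision protocol concrete, the following minimal NumPy sketch shows the step-wise relabeling, in which pixels of classes outside $\mathcal{C}_t$ collapse to \textit{unknown}; the function name and the convention of encoding $u$ as label $0$ are our own illustrative choices.
\begin{verbatim}
import numpy as np

def remap_to_step_labels(gt_map, step_classes, unknown=0):
    # Step-t ground truth: keep only labels in C_t, map every other
    # (past or future) class to the `unknown` id.
    out = np.full_like(gt_map, unknown)
    for c in step_classes:
        out[gt_map == c] = c
    return out

# e.g., with C_t = {4, 7}:
# remap_to_step_labels(np.array([[4, 2], [7, 9]]), {4, 7})
# -> [[4, 0], [7, 0]]
\end{verbatim}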
\section{Introduction}\label{sec:intro}
Image generation or synthesis consists in the act of producing a novel image---representing a subject of interest or whatever else---from an input that could be a random noise matrix, another (real) image, or a combination of these two possibilities, possibly accompanied by a label or a condition that somehow controls the output. The required output should belong to a specific domain, or it should be obtained following a precise style. Conversely, in some cases the image domain or style might not be decided \emph{a priori}, and the developed system should perform multi-domain image generation. Thanks to the recent advances of deep learning training techniques and architectures for image generation, the image synthesis task has become more and more accessible and well understood. Examples of such techniques are the use of \emph{Generative Adversarial Networks} (\emph{GANs}) (in particular \emph{Deep Convolutional} GANs, called DCGANs) and \emph{conditional GANs} (\emph{cGANs}), which may employ different architectures, such as \emph{U-Nets} and \emph{StyleGAN}, and different training methods, like \emph{pix2pix}, \emph{cycleGAN}, and so on. A large number of specific approaches were developed to face the aforementioned variety of image generation tasks, leading to a vast literature for each specific sub-problem. A brief outline of the major methodologies is reported in the next section.
This paper focuses on the specific problem of image-to-image translation. Image-to-image translation is the act of transforming an arbitrary image into another, more useful, representation of the same data. Image colorization, semantic segmentation and style transfer are examples of image-to-image translations. In particular, this work approaches the task of transforming the input image over a range of so-called \textit{domains}, i.e. recognizable sets of similar images which share common characteristics. One of the goals of this work is to develop a single architecture that is able to handle many different domains, i.e. a multi-domain image-to-image generator system. More specifically, in such a system, the architecture is required to learn multiple mapping functions, that is, one for each domain. In our case, the multiple domains are represented by different facial attributes like ``blond hair'' or ``pale skin'', and the mapping functions have the objective of applying these attributes to any face passing through the architecture. Moreover, the idea is to make the model capable of learning new, unseen domains using only a few images for each new domain, in order to gain great flexibility and generalization capability and to also tackle those applications or domains where available data is scarce. Hence, the topic covered in this work is \emph{multi-domain image-to-image translation}, deeply intertwined with the concept of domain adaptation.
One interesting take on this topic is that many of these image-to-image transformations are linked by a common way of working. For example, changing hair color, e.g. switching the domain to the new one of ``people with blond hair'', requires correctly segmenting hair, in the same way as changing the domain to ``people with black hair'' does. Similarly, changing a face into its older version and adding glasses to a face both require correctly locating the subject's eyes. Yet, for a long time, a single neural network for each of these tasks had to be created, even if the tasks were quite similar.
A solution to this kind of problem has been proposed with StarGAN~\cite{choi2018stargan}, which had the intuition of bringing together multiple image-to-image transformations in the same network architecture. Another observation is that most of the existing approaches to image-to-image translation perform a full training with input-output examples of images to achieve high quality results. An evident drawback of this approach is that they need very large datasets to be trained. Datasets could be labeled or unlabeled, but usually the domain switch is controlled by a conditioning label, which indicates the target domain to transform the image to. Hence, image-to-image translation often requires a lot of labeled images, where the label denotes the domain(s). It is worth noting that an image could have more than one label: this is the reason for not addressing labels as classes.
Regarding the adaptation of the model to new domains with few images, a similar issue is the few-shot learning problem. Few-shot problems are often addressed by using \emph{meta-learning} techniques, thanks to their ability to switch among a distribution of tasks during training. Training in a meta-learning setting means creating a learning system that includes another learning sub-system: the sub-system trains a model (such as a neural network) on a single task sampled from the distribution, and the meta-learning system trains the sub-system, thus adapting the model to all tasks. Meta-learning methods have proved to be successful in classification and regression scenarios, but there are still few papers~\cite{liu2019few, zhang2018metagan} in the field of image generation.
Linked to domain adaptation, another known problem of traditional training settings is that once a new set of tasks emerges, e.g. a new domain is added to the target (or desired) outputs, a full retraining of the whole system is needed. This happens even if the new task is similar to tasks that the network has already learned. The full re-training includes incorporating the new domain in the input examples, and it often requires architectural changes as well.
The main proposal of this article and its principal contributions are:
\begin{itemize}
\item a system that consists in a single cGAN (i.e., two networks, a generator and a discriminator) performing image-to-image translation, trained on multiple domains;
\item neither network contains any reference to the label or domain of the input or output (\textit{label-less}), therefore allowing a much more flexible architecture;
\item the system is able to switch task with just a few examples of a new, unseen domain, by means of a meta-learning training algorithm. This was impossible in previous architectures, representing a great limitation;
\item the system uses knowledge accumulated at training time on well-known, largely represented classes to easily learn new, unknown tasks in a few iterations.
\end{itemize}
Taking into account all these contributions and the main proposed idea of fusing together meta-learning and GANs, we named our approach \textit{MetalGAN}.
The paper is organized as follows. Section \ref{sec:related} presents an extensive evaluation of the state of the art. Section \ref{sec:metalgan} introduces a complete overview of the system: main idea and notations, architecture of the network and algorithm. Section \ref{sec:experiments} describes the experimental results. Finally, Section \ref{sec:conclusions} presents our conclusions.
\section{Related Work}
\label{sec:related}
\paragraph{\bf Image-to-image translation}
The main topic of this paper, i.e. image-to-image translation, has become a hot topic in the machine learning research community after the introduction of encoder-decoder networks like U-Nets \cite{ronneberger2015u}, \emph{Fully Convolutional Networks} (\emph{FCNs}) \cite{long2015fully} and conditional GANs \cite{mirza2014conditional}. The GAN approach to image synthesis has achieved an unprecedented quality of output results, reaching photorealism in many domains, such as face synthesis. While traditional GANs~\cite{goodfellow2014generative} generate images from noise, conditional GANs (cGANs)~\cite{mirza2014conditional} in their many variations are able to generate images from labels or other input images, or both. To this extent, cGANs are often used to perform many different image-to-image translation tasks, like sketch colorization and texture generation~\cite{sangkloy2017scribbler,xian2018texturegan}, super-resolution of images~\cite{ledig2017photo}, or the generation of a photo-realistic image from a semantic label map~\cite{wang2018high, park2019semantic}. cGANs can be trained in either a paired \cite{isola2017image, zhu2017toward} or an unpaired way \cite{zhu2017unpaired, almahairi2018augmented, kim2017learning}. In our approach, we use a cGAN without a paired dataset and without labels, conditioned only on the input image, in order to maintain a great generalization capability of the generator network. Moreover, we introduce skip connections in the generator network, as in U-Nets.
\paragraph{\bf Multi-domain image-to-image translation}
A common trait of most of the image-to-image methods is that they are only able to produce outputs belonging to a single domain or class. Regarding multi-domain facial attribute transfer, our main work of reference is StarGAN~\cite{choi2018stargan}, though there exist other relevant works like \cite{he2019attgan,xiao2017dna, kim2017unsupervised}. StarGAN proposes a unified method for multi-domain image-to-image translation. It achieves great results in image synthesis, drawing strength from the multiple domain adaptations, and it learns multiple domains at the same time using only one underlying representation. The main differences between StarGAN and the proposed method are: in our approach the networks do not use label information (while StarGAN does); our training method relies on a small number of images per iteration; and a few-shot-like approach is employed when dealing with new domains during inference.
\paragraph{\bf Few-shot learning}
Few-shot problems are usually tackled with meta-learning techniques, since recent results show great performance of meta-learners on typical few-shot datasets and learning settings. There are many types of meta-learners. Some learn how to parameterize the optimizer of the network \cite{hochreiter2001learning,ravi2016optimization}, while others use a network as the optimizer \cite{li2017learning,andrychowicz2016learning, wichrowska2017learned}. Furthermore, using a recurrent neural network trained on the episodes of a set of tasks is one of the most general approaches~\cite{santoro2016meta, mishra2017simple, duan2016rl, wang1611learning}. For our work, the most relevant meta-learners are the ones based on hyper-parameterized gradient descent, such as Reptile~\cite{nichol2018first} and MAML~\cite{finn2017model}. In fact, we use the Reptile algorithm applied to a generation problem, where Reptile tasks are identified with our domains.
Reptile was already used in combination with GANs in \cite{clouatre2019figr}, in order to generate very simple black and white images (such as MNIST digits), and in \cite{zhang2018metagan}, which introduced an adversarial discriminator conditioned on tasks. Regarding few-shot image-to-image translation, a new method was recently introduced in \cite{liu2019few}, coupling an adversarial training scheme with a novel network design. Unlike our method, it does not use meta-learning and does not act as a proper domain transfer algorithm, but rather as a style transfer one: for example, in the case of the face image translation task, the translation output maintains the pose of the input content image, but the appearance is similar to the faces of the target person.
\section{Overview of the System}
\label{sec:metalgan}
\subsection{Idea and Notations}
\label{sec:idea}
As briefly outlined in the introduction, there are some key points from which our work originates, namely, the need for a \emph{few-shot} setting, the use of a \emph{single} GAN architecture, the \emph{absence of labels}, and the \emph{multi-domain} adaptation. All these key points require a proper definition. Starting from the most potentially ambiguous definition, we call a ``domain'' a set of images which share a well-defined common characteristic, clearly recognizable by using a single label or keyword: for example, ``black-hair'' in a dataset of faces denotes the domain of people with a black hair color. Given the example above, it is also clear that the type of dataset is important: if the dataset contains both dog and cat images, ``black-hair'' would have another meaning; if it contains only landscapes, ``black-hair'' would have no meaning at all. Moreover, domains are not mutually exclusive; rather, they could intersect each other. Closely related to the concept of domain, there is the concept of ``label''. Usually, when approaching multi-domain problems, labels are employed to identify which domains a certain image belongs to. This helps the networks in detecting a target domain and thus generating images belonging to such a domain. In our case, \textit{label-less} means that the domain of the input and target images has to be inferred from other information.
In a classic few-shot classification setting, from which we borrow the notations, there are $n$ classes $\{c_1, c_2, \dots, c_n\}$ and a certain number $k_i$ of input examples per class, e.g. $\{x_1, x_2, \dots, x_{k_i}\}$ are the inputs of the $i$-th class $c_i$. During training only $N$ classes per iteration are used over the total number of classes $n$, and for each of these $N$ classes only $K$ input examples over the total number of examples of a class are used, where $K \ll k_i$. Then, the trained $N$-way classifier has to classify a new example of a random class $\tilde{c}$. In our case, domains are treated as classes, and the generator and discriminator networks (from now on called $G$ and $D$) are trained on a single domain per meta-iteration ($N = 1$), in order to make $G$ and $D$ able to learn the domain they are working on, without labels. The number $K$ of examples per domain varies according to the type of experiment performed (see Section~\ref{sec:experiments}), but we chose to perform an almost full training and a few-shot inference, to allow $G$ to adequately learn the reconstruction of images and then to switch domain. The architecture of our multi-domain GAN is detailed in Section~\ref{sec:architecture}.
Finally, in the meta-learning nomenclature, we define a \emph{task} as a group of $K$ images that belong to the same domain, used for the inner iteration of the algorithm, explained in detail in Section~\ref{sec:algorithm}. Our approach uses a single GAN on different tasks. This forces the underlying weight structure of both the $G$ and $D$ networks to learn a general yet effective representation for describing all tasks. The $G$ and $D$ networks are conditioned on each task/domain through the use of a meta-learning algorithm. Other approaches, like StarGAN, instead need target labels that condition the output of both the $G$ and $D$ networks. In detail, the conditioning is implicitly provided by the task selection performed during meta-learning. For each meta-iteration, a single task is selected, and the network is trained on that single task for a number of internal iterations. In the next meta-iteration the training is performed on another, different but related, task. With this training algorithm the network learns, meta-iteration after meta-iteration, a representation that is good (but not optimal) at performing all tasks, and just needs a little final push (a few epochs of training) to be moved in the direction of the target task.
\subsection{Architecture of the Network}
\label{sec:architecture}
One of the strengths of our proposal is that it completely removes the need to provide specific labels for the data, because the network does not use one-hot labels or similar. If data are already labeled, labels are only useful in the preprocessing phase for dividing the dataset into domains, since the main algorithm works task-by-task. It is worth noting that such domains could overlap. On the other hand, unlabeled data have to be clustered into domains, a step that can be completely automated (in contrast with manual labeling), although when using a clustering method domains do not overlap. A clustering step followed by a meta-learning approach is shown in a preliminary work on colorization~\cite{fontanini2019metalgan}.
\begin{figure}[]
\centering
\includegraphics[width=\textwidth]{D}\\
\vspace{1cm}
\includegraphics[height=0.4\textwidth]{G}
\caption{Complete network architecture. First, the discriminator is trained to distinguish between fake images (a) and real ones (b) and between images belonging to the current domain (b) and images that do not belong to it (c). Then, the generator is trained to fool the discriminator by labeling its outputs as real and as belonging to the current domain (d). Finally, the reconstruction step is executed and its results are labeled as part of the current domain (e).}
\label{fig:gd}
\end{figure}
Our system is composed of a single cGAN. In particular, since the objective is to generate the face of a person with only a bunch of new attributes, without changing the peculiar traits of the person itself and without using labels, we condition the cGAN on the input face, in order to maintain the identity of the person while changing the target attributes. The generator network $G$ is the same as the StarGAN one, with the addition of skip connections (inspired by the classic U-Net), but input labels are removed. The introduction of skip connections in the generator architecture aims at enhancing the quality of the reconstruction; in other words, they are useful for keeping the content of the input image unchanged in the output image (for example, the face of a person remains the same despite the change of hair color).
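For illustration purposes only, the following PyTorch sketch shows the kind of skip connection we refer to, on a toy two-level encoder-decoder; depths, channel counts and normalization layers do not match the actual StarGAN-derived generator.
\begin{verbatim}
import torch
import torch.nn as nn

class SkipGenerator(nn.Module):
    # Toy encoder-decoder with a U-Net-style skip connection: encoder
    # features are concatenated to decoder features at the same scale.
    def __init__(self, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 4, 2, 1),
                                  nn.ReLU(True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1),
                                  nn.ReLU(True))
        self.dec1 = nn.ConvTranspose2d(2 * ch, 3, 4, 2, 1)  # dec2 + skip

    def forward(self, x):
        e1 = self.enc1(x)    # H/2
        e2 = self.enc2(e1)   # H/4
        d2 = self.dec2(e2)   # back to H/2
        return torch.tanh(self.dec1(torch.cat([d2, e1], dim=1)))
\end{verbatim}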
On the other hand, the $D$ structure is the PatchGAN from pix2pix~\cite{isola2017image}. Since during each task the network tunes itself on a single domain, there is no need for a domain classification output in our discriminator. Instead, we choose to classify images both as real or fake and as belonging or not to the current domain. By doing so, our discriminator has two outputs, $D_{adv}(x)$ and $D_{dom}(x)$, one for each probability distribution. Finally, we define a set of losses in order to train our architecture.
\paragraph{\bf Adversarial Loss}
For the discriminator, we use an adversarial loss to distinguish between real and generated images:
\begin{equation}\label{eqn:adv_loss}
\mathcal{L}_{\mathrm{adv}}(D, G) = \mathbb{E}_{y \sim p_{\tau}} \left[ \log{D_{adv}(y)} \right] + \mathbb{E}_{x \sim p_{\mathrm{data}}} \left[ \log{\left(1 - D_{adv}(G(x))\right)} \right],
\end{equation}
where $y$ is sampled over a distribution of current task images $p_{\tau}$ (real samples), and $x$ over the distribution of the whole dataset $p_{\mathrm{data}}$ ($G(x)$ are the generated samples). In particular, during each task, $D$ tries to classify whether an image (or a batch of images) $y$ belongs or not to the current domain distribution $\tau$ (all images in the batch must belong to the domain). For example, if the current task is to produce people with blond hair, the discriminator has to determine if an image contains a person with blond hair or not, while in classic adversarial settings it would simply decide if the image contains a face or not. The adversarial loss formula~\eqref{eqn:adv_loss} does not explicitly reflect this aspect, since the only difference is the nature of the input $y$: in our work, $y$ is not concatenated to any label.
\paragraph{\bf Domain Loss}
After we select a new task, during the training of the discriminator, we want images sampled from the current task to be classified as such and, on the other side, images sampled from the whole dataset to be classified as not belonging to the current task:
\begin{equation}\label{eqn:domain_loss_i}
\mathcal{L}'_{\mathrm{dom}}(D) = 2 \cdot \mathbb{E}_{y \sim p_{\tau}} \left[ \log{D_{dom}(y)} \right] + \mathbb{E}_{x \sim p_{\mathrm{data}}} \left[ \log{\left(1 - D_{dom}(x)\right)} \right],
\end{equation}
where the multiplicative factor before $\mathbb{E}_{y \sim p_{\tau}} \left[ \log{D_{dom}(y)} \right]$ is motivated by the fact that we need to take into account that an image $x$ may also belong to the domain identified by the current task, since it is drawn from the whole data distribution. For this reason, the first part of the equation strongly reinforces the classification of examples of the target domain, while the second part weakly penalizes every domain (that is, also the target one). During the generator training, instead, the goal is that all the generated images, even the ones obtained from the reconstruction of the input, are classified as belonging to the current task:
\begin{equation}\label{eqn:domain_loss_ii}
\mathcal{L}''_{\mathrm{dom}}(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}} \left[ \log{D_{dom}(G(x))} \right] + \mathbb{E}_{y \sim p_{\tau}} \left[ \log{D_{dom}(G(y))} \right].
\end{equation}
Adversarial and domain losses are visually described in Figure~\ref{fig:gd}.
\paragraph{\bf Reconstruction Loss}
This loss is crucial to guarantee that the generator maintains the content information of the source image. $G$ has to be already tuned on the current domain/task by the meta-learning training.
Since we completely removed the labels from our architecture and we tune the network on a new domain at each iteration, we cannot use a cycle consistency loss like in StarGAN, because $G$ is able to produce images of only one target domain per task. The solution we choose to adopt is to apply the reconstruction loss to the images $y$ belonging to the target domain. The reason is that if an image already belongs to the target domain, it should be left unchanged by $G$. The equation for the reconstruction loss is as follows: \begin{equation} \mathcal{L}_{\mathrm{rec}}(G) = \mathbb{E}_{y \sim p_{\tau}}[|| G(y) - y ||_1], \end{equation} where $||\cdot||_1$ denotes the $L_1$ norm in the space of target images. \paragraph{\bf Feature Matching Loss} In order to regularize the training, we also include a feature matching loss following the work of \citep{wang2018high} and \citep{liu2019few}. The feature matching loss stabilizes the training, since it requires the generator to produce natural statistics at multiple scales. We extract features from the discriminator layers located before the prediction layer. This feature extractor is called $D_{\mathrm{feat}}$. The definition of the feature matching loss is as follows: \begin{equation} \mathcal{L}_{\mathrm{feat}}(D_{\mathrm{feat}}, G) = \mathbb{E}_{x,y \sim p_{\mathrm{data}}, p_{\tau}}[||D_{\mathrm{feat}}(G(x)) - D_{\mathrm{feat}}(y)||_1]. \end{equation} \paragraph{\bf Full Objective} Finally, our full objective becomes: \begin{equation}\label{eqn:full_obj_D} \mathcal{L}_{D} = \mathcal{L}_{\mathrm{adv}} + \mathcal{L}'_{\mathrm{dom}}, \end{equation} \begin{equation}\label{eqn:full_obj_G} \mathcal{L}_{G} = w_{\mathrm{adv}} \mathcal{L}_{\mathrm{adv}} + w_{\mathrm{dom}}\mathcal{L}''_{\mathrm{dom}} + w_{\mathrm{rec}}\mathcal{L}_{\mathrm{rec}} + w_{\mathrm{feat}}\mathcal{L}_{\mathrm{feat}}, \end{equation} for the discriminator and the generator, respectively. Furthermore, $w_{\mathrm{adv}}$, $w_{\mathrm{dom}}$, $w_{\mathrm{rec}}$, $w_{\mathrm{feat}}$ are the weights assigned to the loss functions. The discriminator loss functions do not have weights assigned, since the adversarial and domain losses should contribute equally to the discriminator training in order to obtain balanced results. The choice of the weights is discussed in Section~\ref{sec:experiments}. \subsection{Algorithm} \label{sec:algorithm} Our approach relies on a meta-learning algorithm based on Reptile \cite{nichol2018first} and adapted to the image generation problem. The problem setting is as follows. A large dataset of images, called $\mathcal{D}$, is used to extract random input images. Let $\tau_j$ be a single task, where $j$ ranges over the number of chosen training domains, here called $N_\tau$. Each task dataset consists of a restriction of $\mathcal{D}$ to the images of a single domain, called $\mathcal{D} |_{\tau_j}$. Hyper-parameters of the algorithm are the inner learning rates of the $G$ and $D$ networks, respectively $\lambda_G$ and $\lambda_D$; the loss weights $w_{\mathrm{adv}}$, $w_{\mathrm{dom}}$, $w_{\mathrm{rec}}$, and $w_{\mathrm{feat}}$; two thresholds $t$ and $T$ for keeping the discriminator accuracy within a certain range (in order to neither over- nor under-train $D$); and a learning rate for the outer networks, i.e. a meta-learning rate $\lambda_{ML}$. Parameters of the networks, i.e. network weights and biases, are indicated as $\theta_G$ and $\theta_D$ for $G$ and $D$, respectively. The algorithm is divided into two phases, as illustrated in Figure \ref{fig:meta_train}.
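Before describing the two phases, the losses above can be made concrete with a short sketch (a minimal illustration under our own naming, assuming discriminator heads that output probabilities through a sigmoid; it is not the original implementation):

\begin{verbatim}
import torch
import torch.nn.functional as F

def discriminator_loss(d_adv_real, d_adv_fake, d_dom_task, d_dom_data,
                       eps=1e-8):
    """L_D = L_adv + L'_dom, written as a quantity to *minimize*
    (log-likelihoods with flipped sign)."""
    l_adv = -(torch.log(d_adv_real + eps).mean()
              + torch.log(1.0 - d_adv_fake + eps).mean())
    # the factor 2 strongly reinforces in-domain samples, since
    # x ~ p_data may itself belong to the current domain
    l_dom = -(2.0 * torch.log(d_dom_task + eps).mean()
              + torch.log(1.0 - d_dom_data + eps).mean())
    return l_adv + l_dom

def generator_loss(d_adv_gx, d_dom_gx, d_dom_gy, g_y, y, feat_gx, feat_y,
                   w_adv=1.0, w_dom=1.0, w_rec=10.0, w_feat=1.0, eps=1e-8):
    """L_G: weighted sum of adversarial, domain, reconstruction
    and feature-matching terms."""
    l_adv = -torch.log(d_adv_gx + eps).mean()            # fool D_adv
    l_dom = -(torch.log(d_dom_gx + eps).mean()
              + torch.log(d_dom_gy + eps).mean())        # L''_dom
    l_rec = F.l1_loss(g_y, y)                            # ||G(y) - y||_1
    l_feat = F.l1_loss(feat_gx, feat_y)                  # feature matching
    return w_adv * l_adv + w_dom * l_dom + w_rec * l_rec + w_feat * l_feat
\end{verbatim}

With the objectives in place, we now turn to the two phases of the algorithm.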
The first one is the training phase, and the second one is the inference phase. In the training phase, the $G$-$D$ network is trained repeatedly on a single task, randomly extracted at each epoch from the set of available tasks. During the inference phase, instead, a new task ${\tau}_I$ is used for a final few-shot training to adapt the network to the new domain. A detailed explanation of the training phase is given in the next section and in Figure \ref{fig:train}, and it is followed by another section devoted to the description of the inference phase. \begin{figure}[H] \vspace{0.5em} \centering \includegraphics[width=0.5\textwidth]{meta_train} \caption{A full overview of the system: during the training phase, the network is trained for $N$ epochs on a set of tasks, and then, during the inference phase, a new unknown task, i.e. not present in the training phase, is selected and added to the network.} \label{fig:meta_train} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{train_Metal} \caption{Scheme of a single training epoch. A task $\tau_i$ is sampled and $N_{\mathrm{meta\_iter}}$ inner iterations are performed on cloned networks. Then $\theta$ and $\widetilde{\theta}$ are used to update the network parameters using the Reptile equation.} \label{fig:train} \end{figure} \paragraph{\bf Training} \begin{algorithm}[h] \small \setstretch{1.2} \caption{MetalGAN algorithm}\label{alg:metalgan} \begin{algorithmic}[1] \Require $N_{\mathrm{epochs}}:$ number of epochs \Require $N_{\tau}:$ number of selected domains \Require $\lambda_{\mathrm{ML}}:$ meta-learning rate \State load the entire dataset: $\mathcal{D}$ \State load datasets restricted to each single task $\tau_j$ : $\mathcal{D} |_{\tau_j}$ for $j \in \{0, \dots, N_\tau\}$ \For{$epoch \in \{ 0, \dots, N_{\mathrm{epochs}} \} $} \State randomly extract $\tau_j$ \State clone $D$ into $\widetilde{D}$ of parameters $\theta_{\widetilde{D}}$ \label{lin:clone_d} \State clone $G$ into $\widetilde{G}$ of parameters $\theta_{\widetilde{G}}$ \label{lin:clone_g} \State {\bf Inner training loop} on $\tau_j$ \State $\theta_G \gets \theta_G + \lambda_{\mathrm{ML}} \left( \theta_{\widetilde{G}} - \theta_G \right)$ \Comment{updates generator parameters}\label{lin:upd_g} \State $\theta_D \gets \theta_D + \lambda_{\mathrm{ML}} \left( \theta_{\widetilde{D}} - \theta_D \right)$ \Comment{updates discriminator parameters}\label{lin:upd_d} \EndFor \end{algorithmic} \label{metal_alg} \end{algorithm} The algorithm for training consists of an outer and an inner loop, similar to Reptile. The outer loop is responsible for training the actual $G$-$D$ networks, updating their parameters epoch by epoch, as in a traditional learning algorithm. At each epoch, a task $\tau$ is randomly sampled from a distribution of tasks. Recall that, in our case, a task is a domain and the associated dataset is the few-shot subset of the set of images in the domain. Then, $G$ and $D$ networks are cloned into $\widetilde{G}$ and $\widetilde{D}$ networks of parameters $\theta_{\widetilde{G}}$ and $\theta_{\widetilde{D}}$, respectively. The cloned networks are trained in the inner training loop, where the traditional DCGAN training is performed by using task images, here indicated as $y$. This is needed in order to learn the current domain. Also a small portion of generic images from $\mathcal{D}$ is used, to teach the generator to perform the domain switch. These images are called $x$.
The generator is required to learn the transformation of a random image $x$ of the dataset into an output image ``similar'' to $y$, i.e., the generated image should belong to the extracted task. Finally, the obtained parameters are used to update $G$ and $D$ weights, with the Reptile rule (a sort of SGD step where the gradient is approximated by the difference between inner and outer weights). This baseline is illustrated in Figure~\ref{fig:train}. In detail, the outer training loop is shown in Algorithm~\ref{alg:metalgan}. Lines~\ref{lin:upd_g}--\ref{lin:upd_d} of Algorithm~\ref{alg:metalgan} are responsible for the parameter adaptation of networks $G$ and $D$, and such an operation is performed layer by layer. \begin{algorithm} \small \setstretch{1.2} \caption{Inner training loop}\label{alg:inner_loop} \begin{algorithmic}[1] \Require $\tau:$ extracted task in the outer loop \Require $N_{\mathrm{meta\_iter}}:$ number of inner epochs \Require $\lambda_D, \lambda_G:$ learning rates of $D$ and $G$ \Require $w_{\mathrm{adv}}, w_{\mathrm{dom}}, w_{\mathrm{rec}}, w_{\mathrm{feat}}:$ adversarial, domain, reconstruction, and feature weights \Require $t, T:$ minimum and maximum thresholds for discriminator accuracy \For {$i \in \{0, \dots, N_{\mathrm{meta\_iter}} \}$} \State sample $y$ from $\mathcal{D} |_\tau$ \State sample $x$ from $\mathcal{D}$ \State \(\triangleright\) {\bf Discriminator training:} \State $ \varepsilon_{D} \gets \nabla_{\theta_{\widetilde{D}}} \mathcal{L}_{\mathrm{adv}} ( \widetilde{D}, \widetilde{G}) $ \Comment $x$ is considered fake, $y$ real \State $ \varepsilon_{\mathrm{dom}} \gets \nabla_{\theta_{\widetilde{D}}} \mathcal{L}'_{\mathrm{dom}} (\widetilde{D}) $ \Comment $x$ is considered false, $y$ true \State calculate accuracy $a_{\widetilde{D}}$ of discriminator $\widetilde{D}$ \If {$ a_{\widetilde{D}} < T$} \State $ \theta_{\widetilde{D}} \gets \theta_{\widetilde{D}} - \lambda_D (\varepsilon_{D} + \varepsilon_{\mathrm{dom}}) $ \EndIf \If {$ a_{\widetilde{D}} > t $ {\bf or} $i = 0 $} \State \(\triangleright\) {\bf Generator training:} \State $ \varepsilon_G \gets \nabla_{\theta_{\widetilde{G}}} \mathbb{E}_{\widetilde{G} ( x ) \sim p_{\tau}} [ \log{ \widetilde{D} ( \widetilde{G} ( x ) ) } ] $ \Comment $\widetilde{G} ( x )$ is considered real \State $ \varepsilon_{\mathrm{rec}} \gets \nabla_{\theta_{\widetilde{G}}} \mathcal{L}_{\mathrm{rec}} ( \widetilde{G} ) $ \Comment the reconstruction is made with $y$ \State $ \varepsilon_{\mathrm{dom}} \gets \nabla_{\theta_{\widetilde{G}}} \mathcal{L}''_{\mathrm{dom}} (\widetilde{D}, \widetilde{G}) $ \Comment both $y$ and $\widetilde{G}(x)$ are considered true \State $ \varepsilon_{\mathrm{feat}} \gets \nabla_{\theta_{\widetilde{G}}} \mathcal{L}_{\mathrm{feat}}(\widetilde{D}_{\mathrm{feat}}, \widetilde{G}) $ \State $ \theta_{\widetilde{G}} \gets \theta_{\widetilde{G}} - \lambda_G ( w_{\mathrm{adv}} \varepsilon_G + w_\mathrm{rec} \varepsilon_{\mathrm{rec}} + w_{\mathrm{dom}} \varepsilon_{\mathrm{dom}} + w_{\mathrm{feat}} \varepsilon_{\mathrm{feat}}) $ \EndIf \EndFor \end{algorithmic} \end{algorithm} The inner training loop is illustrated in Algorithm~\ref{alg:inner_loop}. It is nothing more than a classic DCGAN training, but performed on the cloned networks. For each iteration, a small part of two datasets, that is the task dataset and the full training dataset, is used. The whole $\mathcal{D}$ is sampled randomly only for $N_{\mathrm{meta\_iter}}$ iterations, using only a few batches of images.
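The structure of Algorithms~\ref{alg:metalgan} and~\ref{alg:inner_loop} can be condensed into the following sketch (ours; the batch iterators, the step callbacks and the threshold defaults are illustrative, and the discriminator accuracy is computed before the update for simplicity):

\begin{verbatim}
import copy
import random
import torch

@torch.no_grad()
def reptile_update(net, net_tilde, meta_lr):
    """theta <- theta + lambda_ML * (theta_tilde - theta), layer by layer."""
    for p, p_t in zip(net.parameters(), net_tilde.parameters()):
        p.add_(meta_lr * (p_t - p))

def inner_training_loop(G_t, D_t, task_batches, data_batches,
                        d_accuracy, d_step, g_step,
                        n_iters=20, t=0.5, T=0.99):
    """Classic DCGAN-style adaptation on the cloned networks."""
    for i in range(n_iters):
        y = next(task_batches)        # few images of the current domain
        x = next(data_batches)        # few generic images from the dataset
        acc = d_accuracy(D_t, G_t, x, y)
        if acc < T:
            d_step(D_t, G_t, x, y)    # one step on L_adv + L'_dom
        if acc > t or i == 0:
            g_step(G_t, D_t, x, y)    # one step on the weighted G objective

def train_metalgan(G, D, tasks, data_batches, d_accuracy, d_step, g_step,
                   n_epochs=100_000, meta_lr=0.01):
    for _ in range(n_epochs):
        task_batches = random.choice(tasks)            # extract a task tau_j
        G_t, D_t = copy.deepcopy(G), copy.deepcopy(D)  # clone G and D
        inner_training_loop(G_t, D_t, task_batches, data_batches,
                            d_accuracy, d_step, g_step)
        reptile_update(G, G_t, meta_lr)                # Algorithm 1, line 8
        reptile_update(D, D_t, meta_lr)                # Algorithm 1, line 9
\end{verbatim}

Here the difference $\theta_{\widetilde{G}} - \theta_G$ plays the role of a gradient: each outer step moves the shared initialization towards the solution of the sampled task without fully committing to it.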
The chosen task dataset $\mathcal{D} |_{\tau}$ is used for extracting domain-specific images. The first part is the discriminator training. Domain loss and adversarial loss are computed as in Section~\ref{sec:architecture}, and $\widetilde{D}$ parameters are updated if the accuracy of the discriminator is under a certain threshold $T$. Conversely, the second part, that is the generator training, is executed only if the accuracy of the discriminator is above a certain threshold $t$. During this step, adversarial, reconstruction, domain, and feature losses are all employed to update $\widetilde{G}$ parameters. \paragraph{\bf Inference} The inference part is also a crucial one. In our work, we experiment with the use of a few images for adapting the trained model to new, unseen domains, directly during the inference phase. The idea is to feed the trained $G$-$D$ networks with images from new domains, moving the obtained parameters $\theta_G$ and $\theta_D$ in a new optimal direction that includes the new tasks. A sort of fine-tuning is performed, by showing to the model a few images from a new domain, then a few images from another new domain, and so on. \begin{algorithm}[h] \small \caption{Inference}\label{alg:inference} \begin{algorithmic}[1] \Require $\lambda_{\mathrm{ML}}:$ meta-learning rate for inference \Require $\mathcal{T} :$ set of {\bf new} tasks/domains \Require $N_{\mathrm{inf\_epochs}}, N_{\mathrm{inf\_train}}, N_{\mathrm{inf\_test}} :$ number of inference iterations \State load few-shot test dataset: $\mathcal{D}^{(\mathrm{test})}$ \State load few-shot restricted test dataset on each $\tau \in \mathcal{T} : \mathcal{D}|_{\tau}$ \State \(\triangleright\) {\bf Fine-tuning on unseen domains:} \For {$epoch \in \{0, \dots, N_{\mathrm{inf\_epochs}} \}$} \For {$\tau \in \mathcal{T}$} \State clone $G$-$D$ networks \State {\bf Inner training loop} on $\tau$, for $N_{\mathrm{inf\_train}}$ iterations \State update $\theta_G$ and $\theta_D$ with Reptile rule \EndFor \EndFor \State \(\triangleright\) {\bf Inference on unseen domains:} \For {$\tau \in \mathcal{T}$} \State clone $G$-$D$ networks \State {\bf Inner training loop} on $\tau$, for $N_{\mathrm{inf\_test}}$ iterations \For {$x \in \mathcal{D}^{(\mathrm{test})}$} \State generate output image $\widetilde{G}(x)$ \EndFor \State \Comment $G$-$D$ parameters are not updated anymore \EndFor \end{algorithmic} \label{alg:rec} \end{algorithm} The settings of the inference algorithm are the following. A set of unseen tasks (or domains) $\mathcal{T}$ is adopted. A test dataset $\mathcal{D}^{(\mathrm{test})}$ containing all domains, where $\mathcal{D}^{(\mathrm{test})} \cap \mathcal{D} = \emptyset $, is used. As for the training dataset, for each new domain to infer $\tau \in \mathcal{T}$, the adequate restriction of the dataset is used, $\mathcal{D}|_{\tau} \subset \mathcal{D}$, in order to avoid overlaps between test and training datasets. Algorithm~\ref{alg:inference} shows the inference method. It is divided into two main parts: a few-shot fine-tuning, and a test phase where unseen images are transformed into target domain images. In the first part, new domains are used to learn a new parameter adaptation, using a few images per class. Moreover, the meta-learning rate for inference is greater than the one used in training, in order to ensure a faster adaptation. The second part is used only for generating the results, and it resembles a more classic inference.
The inner training loop of the meta-learning algorithm is used, but the obtained parameters are not updated for the next domain. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{inference_theory_2.png} \caption{Inference algorithm in the case of one of the training domains, `Black Hair' (b) and in the case of an unseen domain, `Heavy Makeup' (d). The violet dots represent the domains used during training while the green dots are the unseen domains. The model $M$ can move towards a known model, i.e. from state (a) to state (b); the other option is fine-tuning towards a set of unseen domains (c), then converging to a specific one (d).} \label{fig:inference} \end{figure} An explanation of the inference algorithm is visually provided in Figure~\ref{fig:inference}, where two exemplar cases are shown: the top row of the figure (Figure \ref{fig:inference}(a) and \ref{fig:inference}(b)) shows the case of the seen domain `Black Hair', whereas the bottom row (Figure \ref{fig:inference}(c) and \ref{fig:inference}(d)) shows the case of the unseen domain `Heavy Makeup'. In the image, the model, called $M$, consists of the $G$-$D$ networks trained as in Algorithm~\ref{alg:metalgan} on five different example domains. In Figure \ref{fig:inference}(a), a na\"{\i}ve illustration of the domain space is provided. Given the dots as domains, and the violet dots as the seen domains, the model $M$ has learned, during training, a sub-optimal representation for the seen domains. Intuitively, such a representation permits the model to easily move towards the optimal one in a few steps of the inner training loop. In Figure \ref{fig:inference}(b), an example inference on the `Black Hair' domain is shown. Since `Black Hair' belongs to the trained domain set, the model is cloned and, with some steps of the inner training loop, the cloned model learns an optimal representation for the domain. Finally, parameters of the cloned model are neither saved nor used for updating the main model, so the situation returns to the one depicted in Figure \ref{fig:inference}(a), ready for another inference. When a domain, or a set of domains, is completely new to the generator and discriminator, a preliminary fine-tuning is needed. In Figure \ref{fig:inference}(c), green dots represent these unseen domains. The inference pre-training moves the (already trained) model to a sub-optimal position for both the trained domains and the new domains. In Figure \ref{fig:inference}(d), the inference on the updated model is shown. As Figure \ref{fig:inference}(d) shows, the fine-tuning for the unseen domains moves the model $M$ to a new ``position'' in the space, closer to the unseen domains. In this way, during the inference (Figure \ref{fig:inference}(d)) the `Heavy Makeup' domain is better learnt and more correct images are generated, even if the domain has not been seen during the training. Please note that the figure is purely illustrative, since it depicts domains in random positions, and does not take into account the intersection between them, nor their `real' positioning in an actual domain space (which is unknown). The idea of the model moving towards a sub-optimal yet effective representation during training epochs---minimizing the expected distance from all tasks---, and an informal proof of the idea using Euclidean distances in the manifold of optimal solutions of a task, is provided in the Reptile paper~\cite{nichol2018first}.
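In the same style as before, Algorithm~\ref{alg:inference} can be sketched as follows (again with our illustrative naming, reusing \texttt{reptile\_update} and \texttt{inner\_training\_loop} from the previous sketch):

\begin{verbatim}
import copy

def metalgan_inference(G, D, new_tasks, data_batches, test_batches,
                       d_accuracy, d_step, g_step,
                       n_inf_epochs=10, n_inf_train=20, n_inf_test=100,
                       meta_lr=0.1):
    # Part 1: few-shot fine-tuning on the unseen domains (updates G and D).
    for _ in range(n_inf_epochs):
        for task_batches in new_tasks:
            G_t, D_t = copy.deepcopy(G), copy.deepcopy(D)
            inner_training_loop(G_t, D_t, task_batches, data_batches,
                                d_accuracy, d_step, g_step,
                                n_iters=n_inf_train)
            reptile_update(G, G_t, meta_lr)
            reptile_update(D, D_t, meta_lr)

    # Part 2: specialize a clone per domain and generate the outputs;
    # the main G and D parameters are not updated anymore.
    results = []
    for task_batches in new_tasks:
        G_t, D_t = copy.deepcopy(G), copy.deepcopy(D)
        inner_training_loop(G_t, D_t, task_batches, data_batches,
                            d_accuracy, d_step, g_step,
                            n_iters=n_inf_test)
        results.append([G_t(x) for x in test_batches])
    return results
\end{verbatim}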
\section{Experimental Results}\label{sec:experiments} This section presents visual and quantitative results of the performed experiments of MetalGAN, compared with the StarGAN ones. All our experiments were conducted using the CelebA dataset \cite{liu2015faceattributes}, which is a large-scale face attributes dataset with more than 200k celebrity images, each with 40 attribute annotations. We decided to test our algorithm on this dataset for three main reasons: first of all, since it contains images of faces with all kinds of attributes, it is suitable for the multi-domain image-to-image task; secondly, it was used by StarGAN, so it allows a clear comparison between the results of the two different algorithms; and finally, even though our approach is completely label-less, it is very easy to automatically divide the dataset a priori into its different domains. Experiments are divided into two categories: test results on seen domains (i.e., tasks the $G$-$D$ networks were trained on), and results on unseen domains. In the case of the StarGAN experiments, since their algorithm requires labels, we trained their network on some domains with a small number of images (1000), and we call these ``unseen'' domains. This workaround permits us to compare StarGAN with our inference on unseen domains. It is worth noting that this comparison is unfair to us, since StarGAN is fully trained for each of these ``unseen'' domains, while we only perform a small inference step. This is due to the fact that we can choose to add new domains to our network at any time, while StarGAN needs to define all the domains at the training stage. \subsection{Results on Trained Domains} \label{sec:results_trained} \begin{table}[h] \caption{Hyper-parameters of MetalGAN training phase.} \begin{center} \begin{tabular}{|l|c|} \hline $N_{\mathrm{epochs}}$ & 100000 \\ \hline $\lambda_{\mathrm{ML}}$ & 0.01 \\ \hline $\lambda_{G}$, $\lambda_{D}$ (Adam) & 0.0001 \\ \hline $N_{\mathrm{meta\_iter}}$ & 20 \\ \hline batch size & 16 \\ \hline $w_{\mathrm{adv}}$ & 1 \\ \hline $w_{\mathrm{dom}}$ & 1 \\ \hline $w_{\mathrm{rec}}$ & 10 \\ \hline $w_{\mathrm{feat}}$ & 1 \\ \hline \end{tabular} \label{tab:exp_settings} \end{center} \end{table} We trained the $G$-$D$ networks model on 5 domains, namely `Eyeglasses', `Male', `Blond Hair', `Black Hair', and `Pale Skin' for $N_{\mathrm{epochs}} = 100000$ using the MetalGAN algorithm, and on the same domains for $200000$ epochs using StarGAN. \begin{figure}[h] \begin{subfigure}[t]{.53\linewidth} \includegraphics[height=0.44\textheight]{inference_metalgan.jpg} \caption{MetalGAN outputs.} \end{subfigure} \begin{subfigure}[t]{.3\linewidth} \includegraphics[height=0.44\textheight]{inference_stargan.jpg} \caption{StarGAN outputs.} \end{subfigure} \caption{Results on training classes. In the first column, the input images. From the second to the fifth column there are the outputs of the model moved towards the respective domain, in the case of MetalGAN, or labeled with the indication of the domain, in the case of StarGAN. The last column of the MetalGAN results is the output of the model without moving it from the sub-optimum.} \label{fig:training_results} \end{figure} Table \ref{tab:exp_settings} presents the main settings for our experiments. We set the Reptile learning rate $\lambda_{\mathrm{ML}}$ to 0.01 and optimized the generator and discriminator networks using Adam with a learning rate equal to 0.0001.
Furthermore, we set the number of meta-iterations $N_{\mathrm{meta\_iter}}$ during training equal to 20, since we empirically found that this value represents the best trade-off between speed and accuracy of the algorithm. For coherence with StarGAN, the batch size is set to 16 during training. Weights for the MetalGAN objective during training are set to 1, except for the reconstruction weight, which is set to 10, in order to obtain an accurate reconstruction of the image and gain more quality in the results. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{comparison_eyeglasses.jpg} \caption{Results on training domain Eyeglasses. The image triplets are composed of the input image, the MetalGAN output and the StarGAN output.} \label{fig:eyeglasses_inference} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{test_black_hair_2.jpg} \caption{Results on training domain Black Hair. The image triplets are composed of the input image, the MetalGAN output and the StarGAN output.} \label{fig:black_hair_inference} \end{figure} Figure~\ref{fig:training_results} shows some visual results on a batch of eight input images. Figure~\ref{fig:training_results}(a) contains the outputs of the MetalGAN algorithm, while Figure~\ref{fig:training_results}(b) shows the StarGAN outputs. In addition, a greater number of results on some of the training classes is shown in Figures \ref{fig:eyeglasses_inference} and \ref{fig:black_hair_inference}. Figure~\ref{fig:eyeglasses_inference} shows generated images for the `Eyeglasses' domain, where input images are put side-by-side with MetalGAN outputs and StarGAN outputs. In the same fashion, results on the `Black Hair' domain are reported in Figure~\ref{fig:black_hair_inference}. We decided to choose these two domains since they are very different in terms of features, and since our method performs very well on `Eyeglasses' and, on the contrary, is not so good on `Black Hair'. It is also worth noting that MetalGAN on `Eyeglasses' produces a great variability of examples compared to StarGAN, generating both simple glasses and sunglasses. Nevertheless, the image generation should be considered successful and visually close to the StarGAN one. As a matter of fact, we can see how our label-less approach produces results that are visually very similar to the ones produced by StarGAN. In particular, our algorithm is able to understand the different target domains just by seeing a few examples of them each epoch, and can correctly produce these domains from the input images even without labels or supervision. In addition, we performed a quantitative analysis of the produced results. As far as we know, no pure theoretical framework is available for a precise quantification of our model's contributions and advantages, in order to compare it to others, but there exist some relevant metrics that are suitable for a numerical placement of our proposal. Metrics considered in this work are FID (Fr\'echet Inception Distance)~\cite{heusel2017gans} and PRD (Precision and Recall for Distributions)~\cite{sajjadi2018assessing}, described below. We use FID to calculate the distribution matching between the original CelebA images for each training domain and our results, and we compare our score with the one obtained on StarGAN images. This comparison is presented in Figure \ref{fig:Training_graph}, with lower values indicating better scores.
Our method performs slightly better than StarGAN for the `Eyeglasses', `Male' and `Blond Hair' domains and slightly worse than StarGAN for `Black Hair' and `Pale Skin', confirming the visual evaluation of the images. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{Training_graph.png} \caption{FID score on training domains (the lower the better).} \label{fig:Training_graph} \end{figure} Another quantitative analysis is based on PRD for both the StarGAN and MetalGAN methods, using classes of images of CelebA as target datasets. Precision is a measure of the raw quality of generated images, and does not take into account the internal variability of the distribution, while recall measures how well the generated images resemble the ``class distribution'' of the target dataset. We choose to measure a single domain at a time. \begin{figure}[h] \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{eyeglasses94k.png} \caption{Eyeglasses} \end{subfigure}% \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{male94k.png} \caption{Male} \end{subfigure}% \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{blondhair94k.png} \caption{Blond Hair} \end{subfigure}% \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{blackhair94k.png} \caption{Black Hair} \end{subfigure}% \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{paleskin94k.png} \caption{Pale Skin} \end{subfigure} \caption{PRD on each training domain.} \label{fig:prd_training_graph} \end{figure} Results are shown in five different graphs, in Figure~\ref{fig:prd_training_graph}. As shown, the MetalGAN PRD curves on `Eyeglasses', `Blond Hair', and `Black Hair' are very similar to each other and close to the StarGAN results. The main difference between StarGAN and MetalGAN in the case of the hair domains is that StarGAN is usually more precise (it produces images with a better quality w.r.t. the target distribution), but it has a lower recall, meaning that the distribution of StarGAN-generated images is less varied than the MetalGAN one. Regarding `Male' and `Pale Skin', the precision of MetalGAN suffers from the fact that such domains require a significant change of the whole face in the input image, highlighting a weakness in the MetalGAN global reconstruction. On the other hand, the domain change is successful, as confirmed by the `Male' FID score. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{training12.png} \caption{Global PRD on training domains.} \label{fig:prd_alltraining_graph} \end{figure} In Figure~\ref{fig:prd_alltraining_graph}, the global PRD, computed on all training domains at once, reflects the previous considerations, showing a worse precision of the MetalGAN-generated distribution, but a similarly high recall for the StarGAN and MetalGAN distributions. It is worth emphasizing once more that MetalGAN achieves these results without labels, showing in any case comparable quantitative results and often better qualitative results.
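For reference, the FID used above is the Fr\'echet distance between two Gaussians fitted to Inception features of real and generated images; a standard NumPy/SciPy computation (independent of our pipeline) is the following:

\begin{verbatim}
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two (N, d) feature arrays:
    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
\end{verbatim}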
\subsection{Results on Unseen Domains} \label{sec:result_inference} \begin{table}[h] \caption{Hyper-parameters of MetalGAN inference phase.} \begin{center} \begin{tabular}{|l|c|} \hline $N_{\mathrm{epochs}}$ & 10 \\ \hline $\lambda_{\mathrm{ML}}$ & 0.1 \\ \hline $\lambda_{G}$, $\lambda_{D}$ (Adam) & 0.0001 \\ \hline $N_{\mathrm{inf\_train}}$ (train) & 20 \\ \hline $N_{\mathrm{inf\_test}}$ (test) & 100 \\ \hline batch size & 16 \\ \hline $w_{\mathrm{adv}}$ & 100 \\ \hline $w_{\mathrm{dom}}$ & 100 \\ \hline $w_{\mathrm{rec}}$ & 1 \\ \hline $w_{\mathrm{feat}}$ & 1 \\ \hline \end{tabular} \label{tab:inference_exp_settings} \end{center} \end{table} During the inference step, we modify the hyper-parameters of MetalGAN as shown in Table \ref{tab:inference_exp_settings}. In particular, in order to allow the network to quickly adapt to the new domains, we increase $\lambda_{\mathrm{ML}}$ to 0.1 and we set $w_{\mathrm{adv}}$ and $w_{\mathrm{dom}}$ to 100. Furthermore, since the network has already learned to reconstruct the content of the input images, we lower $w_{\mathrm{rec}}$ to 1. Finally, we tested the MetalGAN trained model on 6 unseen domains, namely `Big Lips', `Bushy Eyebrows', `Heavy Makeup', `Smiling', `Gray Hair', and `Mustache', using the MetalGAN inference. MetalGAN, trained on the 5 seen domains of Section~\ref{sec:results_trained}, performs 10 further outer iterations (on each new domain), each of them consisting of 20 inner iterations, where $320$ task images are seen for the first time. In this way, a fine-tuned model is obtained, as in Figure~\ref{fig:inference}(c). Then, images are generated by specializing the fine-tuned model on the chosen domain, as in Figure~\ref{fig:inference}(d). Such a specialization is done by performing 100 inner iterations per domain. On the other hand, we trained 6 \emph{new} StarGAN models with the same domains used during training, plus one unseen domain for each model, i.e. we obtained a StarGAN model specialized also in `Big Lips', another StarGAN model specialized also in `Bushy Eyebrows', and so on. This is necessary, since StarGAN uses image labels, so adding a new domain is possible only by retraining the model. All six new StarGAN models were fully trained for $N_{\mathrm{epochs}} = 200000$. For StarGAN, ``unseen'' means that only 1000 input images are selected for that domain, as already described at the beginning of Section~\ref{sec:experiments}. Visual qualitative results on unseen domains for both MetalGAN and StarGAN are presented in Figures \ref{fig:biglips_inference}, \ref{fig:heavymakeup_inference} and \ref{fig:mustache_inference}. \begin{figure}[p] \centering \includegraphics[width=\textwidth]{comparison_biglips_bushyeyebrows.jpg} \caption{Results on unseen domains Big Lips and Bushy Eyebrows. The image groups are composed of the input image, the MetalGAN output without fine-tuning, the MetalGAN output with fine-tuning, and the StarGAN output.} \label{fig:biglips_inference} \end{figure} In particular, for MetalGAN, the results produced without performing the fine-tuning iterations are also shown. Our algorithm is able to produce compelling images even in this case, and further improves the visual appearance of the images after the fine-tuning step. In addition, MetalGAN applies the unseen domains to the input images in a softer and more natural way than StarGAN. This is particularly evident in `Big Lips' and `Smiling', where StarGAN produced results that could be described as ``creepy''.
\begin{figure}[p] \centering \includegraphics[width=\textwidth]{comparison_heavymakeup_smiling.jpg} \caption{Results on unseen domains Heavy Makeup and Smiling. The image groups are composed of the input image, the MetalGAN output without fine-tuning, the MetalGAN output with fine-tuning, and the StarGAN output.} \label{fig:heavymakeup_inference} \end{figure} A further consideration is the fact that sometimes, during the unseen domain transfer, MetalGAN tends to apply unwanted features to the images. For example, in `Bushy Eyebrows' the network often changes the hair color to black or applies mustaches. This is due to the fact that people with bushy eyebrows generally have darker hair and facial hair. The same reasoning can be applied to `Gray Hair', where the network tends to produce older people, because people with gray hair are usually old. \begin{figure}[p] \centering \includegraphics[width=\textwidth]{comparison_grayhair_mustache.jpg} \caption{Results on unseen domains Gray Hair and Mustache. The image groups are composed of the input image, the MetalGAN output without fine-tuning, the MetalGAN output with fine-tuning, and the StarGAN output.} \label{fig:mustache_inference} \end{figure} The reason for this behavior is that, because of the lack of labels, the network has to infer which domain is to be transferred without any help. This is also a big advantage, because it gives the network a much greater flexibility and allows new domains to be added very easily. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{Inference_graph.png} \caption{FID score on inference domains (the lower the better).} \label{fig:Inference_graph} \end{figure} We calculated both FID and PRD for the inference domains as well, as in the previous section. In Figure~\ref{fig:Inference_graph}, the FID scores are reported. As for training, the FID scores depend heavily on the selected domain, but in general the StarGAN and MetalGAN scores are close to each other. In particular, MetalGAN performs better on `Big Lips', `Smiling', and `Bushy Eyebrows', confirming the visual evaluation of the results. In Figure~\ref{fig:prd_inference_graph}, the PRD graphs for each unseen domain are reported. All results show how both StarGAN and MetalGAN decrease their precision in this phase, as is reasonable. As we can see in Figures~\ref{fig:biglips_inference},~\ref{fig:heavymakeup_inference}, and~\ref{fig:mustache_inference}, the overall quality of the reconstruction is slightly worse than the one of the trained domains. However, despite the unfair comparison, the PRD curves for MetalGAN and StarGAN are quite similar. Looking at the global PRD, calculated on all six unseen domains at once (Figure~\ref{fig:prd_allinference_graph}), MetalGAN shows better performance, especially on distribution recall.
\begin{figure}[h] \centering \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{biglips94k.png} \caption{Big Lips} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{bushyeyebrows94k.png} \caption{Bushy Eyebrows} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{heavymakeup78k.png} \caption{Heavy Makeup} \end{subfigure}\\% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{smiling94k.png} \caption{Smiling} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{grayhair78k.png} \caption{Gray Hair} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{mustache94k.png} \caption{Mustache} \end{subfigure} \caption{PRD graphs on inference domains.} \label{fig:prd_inference_graph} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{inference12.png} \caption{Global PRD on inference domains.} \label{fig:prd_allinference_graph} \end{figure} \subsection{Additional Experiments} \subsubsection{Results on Radboud Faces Database} In order to further prove the effectiveness of our method, we also trained the $G$-$D$ network with the MetalGAN algorithm on another multi-domain dataset. Such a dataset is the Radboud Faces Database (RAFD) \citep{langner2010presentation}. RAFD is a set of pictures of 67 models displaying 8 emotional expressions. We trained the model for 20k iterations with MetalGAN on 5 different emotions (\textit{disgusted}, \textit{fearful}, \textit{happy}, \textit{sad} and \textit{surprised}), maintaining the same configuration used with the CelebA dataset. A batch of visual results is shown in Figure \ref{fig:rafd}(a), where only the trained domains are tested. Then, additional unseen domains were added to further test our method on this new dataset, as for CelebA. Such new domains are \emph{angry}, \emph{contemptuous} and \emph{neutral}. The inference configurations are the same as in Table \ref{tab:inference_exp_settings}. In Figure \ref{fig:rafd}(b), the visual results of MetalGAN inference on RAFD are reported. Results are comparable to the trained ones, even if they were obtained with few iterations on the trained model, and with few input images. The intuitive reason could be that changing the facial expression involves few attributes of the image; thus, switching from the input facial expression domain to an unseen one shares a lot of knowledge with the switching between the input and the trained domains. In other words, the main task is the same: changing the facial expression, and little differences between domains are handled easily by the inference steps. In addition to the qualitative comparison, we also trained a classification network in order to obtain a quantitative evaluation of our method. We chose ResNet-18 as classification network (following the StarGAN paper) and we produced classification results on the different emotions both for our architecture and for StarGAN trained on the same domains. Results can be seen in Table \ref{tab:rafd_class}. Following the considerations that were made for the CelebA results, our results for the RAFD dataset are in line with the StarGAN ones, but without the use of labels or supervision. The only exception is the \textit{sad} domain, where our network tends to only change the mouth, leaving the rest of the face almost unchanged.
Therefore, if the input image has another emotion strongly characterized by the eyes or by the eyebrows (such as \textit{surprised}), such features are not changed during the domain switch, leading to misclassification. \begin{table}[h] \centering \caption{Classification results on RAFD dataset.} \begin{tabular}{|l|l|l|l|l|l|} \hline & disgusted & fearful & happy & sad & surprised \\ \hline StarGAN & 98.6\% & 97.2\% & 98.6\% & 97.7\% & 97.1\% \\ \hline \textbf{MetalGAN (ours)}& 98.4\% & 93.1\% & 97.3\% & 69.7\% & 95.2\%\\ \hline \end{tabular} \label{tab:rafd_class} \end{table} \begin{figure}[H] \begin{subfigure}[t]{.53\linewidth} \centering \includegraphics[height=0.75\textheight]{rafd_trained.jpg} \caption{Training domains.} \end{subfigure} \begin{subfigure}[t]{.3\linewidth} \centering \includegraphics[height=0.75\textheight]{rafd_inference.jpg} \caption{Unseen domains.} \end{subfigure} \caption{Results on RAFD on trained and unseen domains. Columns in the first image represent the disgusted, fearful, happy, sad, and surprised domains. Columns in the second image represent the angry, contemptuous, and neutral domains.} \label{fig:rafd} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{nodoloss.jpg} \caption{Results on CelebA without the contribution of Domain Loss.} \label{fig:no_doloss} \end{figure} \subsubsection{Experiments without Domain Loss} The necessity of the Domain Loss in the MetalGAN algorithm is not self-evident: for this reason, we performed an experiment with the same configuration of the MetalGAN standard training (see Table \ref{tab:exp_settings}), but nullifying the domain weight, i.e. $w_{\mathrm{dom}} = 0$. It is worth noting that this setting still relies on meta-learning, which is the major boost for domain adaptation without labels. Nevertheless, visual results on CelebA for MetalGAN without Domain Loss show that the domain change loses quality, as can be seen in Figure \ref{fig:no_doloss}. \section{Conclusions}\label{sec:conclusions} We proposed a new architecture for multi-domain label-less image-to-image translation. Our system has many features that distinguish it from the state-of-the-art. First of all, instead of relying on labels for switching the domains, like other state-of-the-art architectures, we chose to use meta-learning, and in particular Reptile. Furthermore, getting rid of labels allowed the architecture to be more flexible, since there is no need to provide hard-coded vectors of labels. It is possible to arbitrarily change the number of domains, and to add a new one during inference. Such an approach was completely unfeasible in previous algorithms like StarGAN, which needs hard-coded labels at training time, a very serious limitation. Finally, besides the lack of labels, an immediate advantage of the meta-learning approach is that such a method has been used for few-shot learning. Not only can a new, unseen task be added, as highlighted above, but doing so requires just a few examples: neither tedious and long-lasting label annotation nor a full retraining of the model is needed. We proved the effectiveness of our approach with face attribute transfer using the CelebA dataset, and we evaluated it using both FID and PRD metrics. Moreover, we performed additional experiments on the RAFD dataset, and tested our approach by nullifying the contribution of the Domain Loss, showing its necessity.
Regarding future work, our first objective would be to explore more deeply the possibilities and limitations of meta-learning, in order to further improve our algorithm and to prove its effectiveness on other tasks like image generation and semantic segmentation. \bibliographystyle{elsarticle-harv}
\section{Introduction} For many decades a commonly accepted view was that the purely gravitational hair of a black hole contains no information besides its mass $M$ and angular momentum. This view is closely connected to another statement, that the Hawking spectrum \cite{Hawking} of black hole evaporation is {\it exactly} thermal. Both statements are based on semi-classical analysis, and in order to understand when and how they go wrong, we have to define the various regimes rigorously. In black hole physics the two important length-scales are: the gravitational (Schwarzschild) radius, $r_S \equiv 2G_NM$, and the Planck length, $L_P^2 \equiv \hbar G_N$, where $G_N$ and $\hbar$ are Newton's and Planck's constants respectively. In these notations, the quantum gravitational coupling of gravitons of wavelength $R$ is given by $\alpha \equiv {L_P^2 \over R^2}$. Then, the semi-classical black hole regime corresponds to the following double-scaling limit, \begin{equation} \hbar = {\rm finite}, ~ r_S = {\rm finite} ,~ G_N \rightarrow 0, ~ M \rightarrow \infty \, . \label{semilimit} \end{equation} In this limit, any quantum back-reaction on the geometry can be safely ignored. This is also apparent from the fact that in the above limit the quantum gravitational coupling vanishes, $\alpha \rightarrow 0$. Equation (\ref{semilimit}) describes the limit in which Hawking's famous calculation \cite{Hawking} has been done, and it reveals that the spectrum of emitted particles is exactly thermal. However, regarding the question of a potential loss of information, this analysis is inconclusive, because exactly in this limit the black hole is {\it eternal}. In order to conclude whether there is any information puzzle, we need to know what happens in the case of finite $M$ and $G_N$. \\ In \cite{Nportrait, gold} it was concluded that the gravitational hair of a black hole is capable of carrying its entire quantum information. This conclusion was reached within a particular microscopic theory, but the results that we shall use are very general. Namely, a black hole represents a system at the {\it quantum critical point}, characterized by the existence of many nearly-gapless collective modes, the so-called Bogoliubov modes. These modes are the carriers of the black hole quantum information and they exhibit the following features: \\ {\it 1) The number of distinct modes scales as the Bekenstein entropy, \begin{equation} N \, = \, {r_S^2 \over L_P^2} \,; \end{equation} 2) The energy gap for exciting each mode scales as \begin{equation} \Delta E = {1 \over N} {\hbar \over r_S}; \label{gap} \end{equation} 3) The gravitational coupling, evaluated for wave-length $r_S$, is $\alpha = { 1\over N}$. 4) Deviations from the thermal spectrum in Hawking radiation are of order ${1 \over N}$.}\footnote{This property can be proven in very general terms \cite{nonthermal}.} \\ Correspondingly, the hair-resolution time scales as $t_{hair} \sim N r_S$ and is of the order of the black hole half-life time. Notice that, both in the classical as well as in the semi-classical limit, we have $N \rightarrow \infty$. Thus, according to \cite{Nportrait,gold} the purely gravitational hair of a (semi)classical black hole in fact contains an {\it infinite} amount of information. However, the time-scale for resolving this information is also infinite. This fact makes it clear that the exact thermality of the Hawking spectrum in semi-classical theory is no ground for the information puzzle.
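To get a feeling for these scales, consider as an illustration a solar-mass black hole (a rough estimate of ours, with the factors of $c$ restored): $r_S \simeq 3$ km and $L_P \simeq 1.6\times 10^{-35}$ m give \begin{equation} N = {r_S^2 \over L_P^2} \sim 10^{76}\,, \qquad \Delta E = {1 \over N}{\hbar c \over r_S} \sim 10^{-106}~{\rm J}\,, \qquad t_{hair} \sim N\, {r_S \over c} \sim 10^{71}~{\rm s}\,, \end{equation} so that for any astrophysical black hole the hair is, for all practical purposes, unresolvable, consistently with the exact thermality of the limit (\ref{semilimit}).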
The determining factor is the relation between the relative deviations from the thermal spectrum (i.e., the strength of the black hole hair) and the black hole life-time. Deviations of order ${1 \over N}$ per emission time are fully sufficient both for storing the entire black hole information as well as for its consistent recovery \cite{Nportrait,gold}. \\ What is very important is to appreciate that the above features are completely independent of a particular microscopic theory and represent a parameterization of black hole properties obtained by superimposing well-accepted facts, such as the Bekenstein entropy \cite{Bent}, on quantum mechanics \cite{giamischa}. This is the approach we shall adopt in the present work. We shall not make any particular assumption about the microscopic theory, but firmly demand that it exists and is compatible with the rules of quantum mechanics. From the existence of the Bekenstein entropy, it then directly follows that black hole information-carrying degrees of freedom must satisfy the properties {\it 1)-3)} \cite{giamischa}. Indeed, in order to match the entropy, the number of such degrees of freedom must scale as $N$.\footnote{Attributing all $2^N$ states to a single (or a few) oscillators inevitably runs us into a problem. In this case the only label that distinguishes the states is energy. Then, because of the high degeneracy, the resolution time would be exponentially large. So, information {\it must} be stored in distinct degrees of freedom, in which case the states carry an extra label of oscillator species and everything is consistent.} Moreover, the energy needed for exciting the individual modes, $\Delta E$, is fixed by two constraints. First, the energy of bringing all the modes simultaneously into excited states should not exceed the energy of a single Hawking quantum, i.e., $ N \Delta E \lesssim {\hbar \over r_S}$. Secondly, the minimal time-scale for resolving the state of the individual modes, from quantum uncertainty, is $t_{hair} \sim {\hbar \over \Delta E}$. By consistency, the latter time-scale should not be much longer than the black hole life-time. These two constraints uniquely fix the gap as given by (\ref{gap}). Finally, the relevant coupling is simply the ordinary gravitational coupling evaluated for the wavelength $r_S$, which gives $\alpha = {1 \over N}$. Notice that, since $\alpha N =1$, quantum criticality comes out as an inevitable property, rather than an assumption. Because of this universal property, despite the fact that we abstract from a particular microscopic picture, we shall keep the terminology applicable to critical Bose-Einstein substances. Namely, we shall continue to refer to the information-carrying degrees of freedom as Bogoliubov modes and to the equations that these modes satisfy as the Bogoliubov-de Gennes equation. The classical geometric description is obtained as a double-scaling limit in which $N \rightarrow \infty$ and $\alpha N = 1$. \\ The question that we shall investigate in the present paper is the following: What is the geometric description of the black hole quantum hair? The answer to this question was already suggested in \cite{Dvali:2015rea}, where it was conjectured that the geometric description of the Bogoliubov modes is in terms of Goldstone bosons of a spontaneously broken BMS-type symmetry \cite{bms} acting on the black hole horizon. In the present paper we shall try to explicitly derive the broken symmetry transformations that are responsible for carrying the black hole information and entropy.
As a possible candidate we identify the part of the BMS symmetries that acts as the area-preserving diffeomorphisms on the black hole event horizon. The corresponding Goldstone bosons are then naturally identified as the geometric limit of the quantum Bogoliubov modes. The correspondence between the quantum and geometric descriptions is summarized in Table 1. \\ \begin{table} \begin{center} \begin{tabular}{|l | l|} \hline \textbf{Quantum Picture: $N=$finite} & \textbf{Geometric Picture: $N \rightarrow \infty$}\\ \hline Quantum Criticality & Existence of the Horizon \\ \hline Bogoliubov modes & ${\cal A} \equiv BMS^{\cal H}/BMS^-$ Goldstones \\ \hline Bogoliubov-de Gennes equation & Equation for small metric deformations \\ \hline Number of modes $N=$finite & Number of modes $N = \infty$ \\ \hline Coupling $\alpha = {1 \over N}=$finite & Coupling $\alpha = {1 \over N} \rightarrow 0$ \\ \hline \hline Hair resolution time $t_{hair} = N r_S$ & Hair resolution time $t_{hair} \rightarrow \infty$\\ \hline Energy gap $\Delta E = {1 \over N}{\hbar \over r_S} $ & Energy gap $\Delta E \rightarrow 0 $ \\ \hline \end{tabular} \end{center} \caption{Correspondence between the quantum and the classical-geometric pictures. \newline The latter corresponds to the double scaling limit with $N \rightarrow \infty,~ \alpha N = 1$ }\label{BH} \end{table} We shall now introduce the BMS side of the story. For many years, asymptotic symmetries of certain space-time geometries have played an important role in general relativity. So far, asymptotic symmetries have been constructed at the boundaries of certain space-time geometries. In particular, they play an important role on the AdS boundary in the context of the AdS/CFT correspondence and holography. E.g. three-dimensional AdS space possesses an infinite-dimensional, asymptotic $W$-symmetry \cite{Brown:1986nw}, which is holographically realized as the symmetry group of the two-dimensional conformal field theory that lives on the boundary of the AdS space. In four space-time dimensions, the BMS supertranslations were already introduced in 1962 by Bondi, van der Burg, Metzner and Sachs \cite{bms} (for more work on asymptotic symmetries in gravity see \cite{asym}). These infinite-dimensional BMS transformations describe the symmetries of asymptotically-flat space-times at future or past null-infinity, denoted by ${\mathscr I}^+$ and ${\mathscr I}^-$ respectively. Hence the BMS groups $BMS^\pm$ describe the symmetries on ${\mathscr I}^+$ and ${\mathscr I}^-$, but in general not in the interior of four-dimensional space-time. Furthermore, if one considers a gravitational scattering process (an S-matrix in Quantum Field Theory) on asymptotically flat spaces, there is a non-trivial intertwining between $BMS^+$ and $BMS^-$ (see the first reference in \cite{asym}). However, holography using black holes is independent of asymptotic boundaries and originates instead from considerations of the black hole horizon. This suggests that a more general set-up should be possible, extending the concept of symmetries also to the horizon of a black hole. Recently it was conjectured \cite{Strominger:2013jfa} (for other recent work on BMS symmetries, gravitational memory and the relation to soft theorems in scattering amplitudes see \cite{strom}) that the BMS symmetry will play an important role in resolving the so-called black hole information paradox by providing an (infinite-dimensional) hair, i.e. charges of the black hole, that carry the information about the collapsing matter before the black hole is formed.
Moreover, it was argued \cite{Hawking:2015qqa} that the BMS group can be extended as a symmetry group $BMS^{\cal H}$ to the horizon ${\cal H}$ of a Schwarzschild black hole. However, the relation between $BMS^{\cal H}$ and the standard BMS groups $BMS^\pm$ is still rather unclear. Also, the action of $BMS^{\cal H}$ on the mass of the black hole has so far not been discussed. In this paper we will explicitly construct the supertranslations $BMS^{\cal H}$ on the event horizon ${\cal H}$ of a Schwarzschild black hole with the metric written in Eddington-Finkelstein coordinates.\footnote{For additional recent work on infinite (BMS-like) symmetries on black hole horizons see \cite{bmshor}.} In addition, there are still the standard BMS transformations $BMS^-$ acting on the past null infinity surface ${\mathscr I}^-$ of the black hole geometry. Working out the relation between $BMS^{\cal H}$ and $BMS^-$, it is possible to entangle the transformations $BMS^{\cal H}$ and $BMS^-$ in a non-trivial way. Namely, we will show that the horizon supertranslations $BMS^{\cal H}$ contain a part that cannot be compensated by $BMS^-$ transformations. This part of the supertranslations on ${\cal H}$, which can be formally denoted by $\cal A \equiv BMS^{\cal H}/BMS^-$, acts only on the horizon of the black hole and takes the form of area-preserving diffeomorphisms on the black hole event horizon ${\cal H}$. We will show that these $\cal A$-supertranslations leave the ADM mass of the black hole invariant. Using this information we can explicitly construct the Bogoliubov/Goldstone-type modes that correspond to the $BMS^{\cal H}/BMS^-$ transformations of the Schwarzschild metric, and show in this way that these modes are classically gapless. As we shall argue, the transformations $BMS^{\cal H}/BMS^-$ are those that give rise to the microstates of the black hole that are relevant for the black hole entropy and eventually also for the solution of the black hole information puzzle. Hence, the supertranslations $BMS^{\cal H}/BMS^-$ provide a gravitational hair on the black hole horizon. Note that, as we will discuss, for infinite radius $r_S$ of the event horizon, the groups $BMS^{\cal H}$ and $BMS^-$ become identical to each other; in this limit we are dealing just with the BMS transformations of four-dimensional Minkowski space, since for $r_S\rightarrow\infty$ the Schwarzschild metric approaches the Minkowski metric (see the appendix of this paper). In the second part of the paper, we shall address the counting of the quantum information that is encoded in the states created by the transformations $BMS^{\cal H}/BMS^-$. Classically, these transformations connect the different vacuum states of the black hole to each other, and the corresponding classical Goldstone field is exactly gapless. Thus, classically the amount of entropy carried by these modes is {\it infinite}. This nicely matches the fact, originating from the microscopic theory \cite{Nportrait}, that in the classical theory $N$ is infinite and so must be the amount of information carried by the black hole hair. However, as we shall argue, in quantum theory the Goldstone field of the $BMS^{\cal H}/BMS^-$-transformations is no longer gapless and contains only a finite number of Bogoliubov modes that can contribute to the entropy. We shall show that the number of the eligible modes scales as $N$ and thus gives the correct scaling of the black hole entropy. We thus provide a crucial missing link between the geometric description of the pure gravitational hair of a black hole and its quantum entropy.
This result is in full agreement with the idea of \cite{Dvali:2015rea} about the possible geometric interpretation of Bogoliubov modes, but now we have an explicit candidate for it. Finally, we devote a section in explaining the crucial role of quantum criticality in obtaining the correct entropy counting, once we go from classical to quantum regimes. We explain, why naively dividing the horizon area into Planck-size pixels does not work for understanding the black hole entropy, and why one needs an input from quantum criticality \cite{gold,giamischa} for quantifying the amount of information carried by the black hole hair in the quantum world. \section{Supertranslations on event horizon of a Schwarzschild-black hole} \subsection{The standard BMS transformations on null infinity} First let us review the classical BMS transformations \cite{bms,strom} in more detail. As usual, in this context we perform a coordinate transformation to Bondi coordinates by introducing a retarded or advanced time $u$ and spherical coordinates $r,\theta,\phi$: \begin{eqnarray} u&=&t\mp r\,, \nonumber\\ r\, \cos\theta&=& x_1 \, , \nonumber\\ r\, \sin\theta e^{i\phi}&=& x_3+ix_2\, . \end{eqnarray} Instead of $\theta$ and $\phi$ one can also use complex coordinates $z,\bar z$ on the (conformal) sphere $S^2$: \begin{equation} z=\cot \biggl({\theta\over 2}\biggr)e^{i\phi}\, . \end{equation} In this coordinate system, future or past null infinity, denoted by ${\mathscr I}^+$ or by ${\mathscr I}^-$ respectively, are the two null surfaces at spatial infinity: \begin{equation} \label{skri} {\mathscr I}^\pm\, :\quad R^1_{r=\infty, u}\otimes S^2_{\phi,\theta}\, . \end{equation} The future (past) boundaries of ${\mathscr I}^\pm$ ($r=\infty, u=\pm \infty, z,\bar z$) are denoted by ${\mathscr I}^\pm_\pm$. Asymptotically flat metrics in retarded (advanced) time coordinates have an expansion around ${\mathscr I}^\pm$ with the following few terms: \begin{eqnarray}\label{bondimetric} ds^2&=&-du^2\mp dudr+2r^2\gamma_{z\bar z}dzd\bar z\nonumber\\ &~&{2m_B\over r}du^2+rC_{zz}^\pm dz^2+rC^\pm_{\bar z\bar z}d\bar z^2-2U^\pm_zdudz-2U^\pm_{\bar z}dud\bar z+\dots \, . \end{eqnarray} Here $\gamma_{z\bar z}$ is the round metric of the unit $S^2$ and \begin{equation} U^\pm_z=-{1\over 2}D^zC^+_{zz}\, . \end{equation} Furthermore, $m_B$ is the Bondi mass for gravitational radiation, the $C_{zz}^\pm$ are in general functions of $z,\bar z,u$ and the Bondi news $N_{zz}^\pm$ are characterizing outgoing (ingoing) gravitational waves: \begin{equation} N^+_{zz}=\partial_uC^+_{zz}\, . \end{equation} Gravitational vacua with zero radiation have $N^\pm_{zz}=0$. $BMS^\pm$ transformations are defined as the subgroup of diffeomorphisms that acts non-trivially on the metric around ${\mathscr I}^\pm$, but still preserve the asymptotic structure of the metric defined in eq.(\ref{bondimetric}). They include Lorentz transformations and an infinite family of supertranslations, which are generated by an infinite number of functions $g^\pm(z,\bar z)$. Specifically the $BMS^\pm$ transformations act on the Bondi coordinates in the following way: \begin{eqnarray}\label{bmstrans} u&\rightarrow& u-g^\pm(z,\bar z)\, ,\nonumber\\ z&\rightarrow &z+{1\over r}\gamma^{z\bar z}\partial_{\bar z}g^\pm(z,\bar z)\, ,\nonumber\\ \bar z&\rightarrow &\bar z+{1\over r}\gamma^{z\bar z}\partial_{ z}g^\pm(z,\bar z)\, ,\nonumber\\ r&\rightarrow &r-D^zD_zg^\pm(z,\bar z)\, . 
\end{eqnarray} One can introduce vector fields $\xi$, which generate infinitesimal $BMS^\pm$ supertranslations: \begin{equation} BMS^\pm \,:\quad \xi_g=g^\pm{\partial\over\partial u}+D^zD_zg^\pm{\partial\over \partial r}-{1\over r}(D^2g^\pm{\partial\over \partial z}+h.c.)\, . \end{equation} Then one can show that the supertranslations act on the Bondi mass and on $C_{zz}^\pm$ as \begin{eqnarray}\label{supertrans} {\cal L}_{\xi_g} m_B&=&g^\pm \partial_um_B\, ,\nonumber \\ {\cal L}_{\xi_g} C^\pm_{zz}&=&g^\pm\partial_u C^\pm_{zz}-2D^2_zg^\pm=g^\pm N^\pm_{zz} -2D^2_zg^\pm\, . \end{eqnarray} To each of these BMS transformations one can associate a corresponding generator $T(g^\pm)$. These supertranslation generators act on the data around ${\mathscr I}^\pm$ as \begin{eqnarray}\label{bmscharges} \lbrace T(g^\pm),N^\pm_{zz}\rbrace& =&g^\pm \partial_uN^\pm_{zz}\, ,\nonumber\\ \lbrace T(g^\pm),C^\pm_{zz}\rbrace& =&g^\pm \partial_uC^\pm_{zz} -2D^2_zg^\pm \, . \end{eqnarray} Among themselves they form an infinite-dimensional algebra, which is in fact closely related to the Virasoro algebra. It is important to emphasize that the BMS transformations are only asymptotic symmetries in the sense that the generators $T(g^\pm)$ correspond to global symmetries only on the asymptotic null surfaces ${\mathscr I}^\pm$. In the interior of space-time the BMS transformations have to be viewed as local gauge transformations, i.e., here they just act as local diffeomorphisms on the space-time metric. Hence the globally-conserved BMS charges only exist on ${\mathscr I}^\pm$ but not on the entire space-time. Eq.(\ref{supertrans}) also means that supertranslations are spontaneously broken in the vacuum. In field-theoretic language their action on the ground-state of the system leads to the generation of a Goldstone boson, namely the soft graviton. One can also view this action as the transition between different BMS-vacua, which are separated from each other by a radiation pulse. The initial and final states precisely differ by a BMS supertranslation, which is also known as the {\sl gravitational memory effect}. This is also in agreement with the fact that the BMS transformations on ${\mathscr I}^\pm$ are global symmetries of the field theory S-matrix that describes the scattering of soft gravitons. We can specialize the BMS transformations to Minkowski space-time. In terms of the Bondi coordinates, the Minkowski metric takes the following form: \begin{eqnarray}\label{bondimetricMin} ds^2=-du^2\mp dudr+2r^2\gamma_{z\bar z}dzd\bar z \, . \end{eqnarray} As seen from eq.(\ref{supertrans}), the supertranslations on Minkowski space do not generate a Bondi mass. However, even starting with a trivial metric with $C^\pm_{zz}=0$, the supertranslations in general generate a nontrivial $C^\pm_{zz}$: \begin{equation}\label{ctrans} C^\pm_{zz}\,\rightarrow \, C^\pm_{zz}-2D^2_zg^\pm\, . \end{equation} So the supertranslations do not act trivially on Minkowski space-time, but instead generate an infinite family of Minkowski metrics which are all flat, i.e., all describe Minkowski space-time: \begin{eqnarray}\label{bondimetricMina} ds^2=-du^2-dudr+2r^2\gamma_{z\bar z}dzd\bar z -2rD^2_zg^\pm dz^2-2rD^2_{\bar z}g^\pm d\bar z^2 \, . \end{eqnarray} The BMS charges $T(g^\pm)$ are again localized only on ${\mathscr I}^\pm$ of Minkowski space, and not in its interior. \subsection{The supertranslations on the black hole horizon} A review on the geometry and physics of null infinity can be found in \cite{Ashtekar:2014zsa}.
There, the BMS-group is defined intrinsically on (future or past) null infinity in a coordinate-independent manner. Since null infinity in a conformal completion shares many similarities with a black hole event horizon, it is tempting to extend this definition to the case of an event horizon. Indeed, in \cite{Hawking:2015qqa} the proposal was made that the group of supertranslations on the event horizon plays an important role in the resolution of the black hole information paradox. In this note, we wish to identify the right supertranslation group responsible for black hole information, and show that the corresponding Goldstone modes are indeed gapless and have the right properties to match the critical Bogoliubov modes of quantum theory. We show that the existence of these classically-gapless modes is intrinsically linked with the existence of the event horizon (and the associated supertranslation group). We hence establish the mapping between the existence of the horizon in the classical description and an underlying quantum critical state, as explained in the introduction. Although the reasoning is extendable to general horizons, for concreteness and simplicity we shall consider the case of an eternal Schwarzschild black hole. Consider the Schwarzschild metric in (infalling) Eddington-Finkelstein coordinates $(v,r,\theta,\phi)$ given by the coordinate transformation \begin{equation} v= t + r^* \end{equation} with \begin{equation} dr^* = (1-\frac{r_S}{r})^{-1} dr \,. \end{equation} This choice of Eddington-Finkelstein coordinates covers the exterior and the future-interior of a Schwarzschild black hole and the metric takes the form \begin{equation} ds^2 = -(1-\frac{r_S}{r})dv^2 + 2dvdr + r^2 d\Omega^2\,. \end{equation} On the event horizon $\cal H$ with $r=r_S$ we have a null vector $n^\mu=(1,0,0,0)$ with $g_{\mu \nu}n^{\mu}n^{\nu}=0.$ We define the group of supertranslations on the horizon ${\cal H}$ by its Lie algebra, in analogy with the way the group of supertranslations is defined on null infinity (see the definition in \cite{Ashtekar:2014zsa}). This Lie algebra consists of vector fields $\zeta^\mu$ on the entire space-time, which on the horizon are supposed to satisfy the condition $\zeta^\mu |_{r=r_S} = fn^\mu$, where $f$ is a real function on the horizon with vanishing Lie-derivative in the null direction, i.e., $L_n f=0$. The vector fields with $\zeta^\mu |_{r=r_S} = 0$ are divided out. In Eddington-Finkelstein coordinates, representatives in the Lie algebra therefore have the form \begin{equation} \zeta^\mu = (f,A,B,C)\, , \end{equation} with the components satisfying the boundary conditions \begin{align} f|_{r=r_S} = f(\theta, \phi)\, , \label{RB}\\ A|_{r=r_S} = B|_{r=r_S} = C|_{r=r_S} = 0. \label{Randbedingungen} \end{align} We now choose the representatives of the equivalence classes in the Lie algebra by specifying four coordinate conditions for the metrics we will consider in a moment: \begin{align} g_{10} &= 1\, ,\\ g_{1i} &= 0\, ,\qquad i=1,2,3 \,. \end{align} Remember that in a Hamiltonian formulation of the Einstein-Hilbert action, the coordinate conditions have to be imposed to fix the gauge freedom. The coordinate conditions help in eliminating redundant canonical coordinates/momenta in order to leave a minimal set of two pairs of canonically conjugate coordinates and momenta (the two helicities of the graviton). For a review of the Hamiltonian formulation of general relativity, see \cite{Arnowitt:1962hi}.
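Note that the Schwarzschild metric in Eddington-Finkelstein coordinates given above indeed satisfies these conditions: with the coordinate ordering $x^\mu=(v,r,\theta,\phi)$ one reads off \begin{equation} g_{10}=g_{rv}=1\, ,\qquad g_{11}=g_{rr}=0\, ,\qquad g_{12}=g_{r\theta}=0\, ,\qquad g_{13}=g_{r\phi}=0\, , \end{equation} so the gauge choice is adapted to the form of the metric we started with.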
By choosing these coordinate conditions we specify the coordinate system under consideration. We require that coordinate transformations induced by the concrete representatives $\zeta$ do not lead out of this particular choice of coordinates, i.e., we require \begin{equation} \delta_{\zeta} g_{1 \mu} = 0. \label{Bestimmungsgleichung} \end{equation} This requirement ensures that the induced field excitation $\delta_{\zeta} g_{\mu \nu}$ is physical. It is a shift in the phase space spanned by the minimal set of four canonically conjugate variables. For an observer in this coordinate system, $\delta_{\zeta} g_{\mu \nu}$ is a measurable excitation of the metric field rather than a redundancy. This observation is crucial in the argument showing that black holes do carry classical hair. Furthermore, precisely this requirement fixes the representative of each equivalence class in the Lie algebra. Eq.\eqref{Bestimmungsgleichung} yields the first-order differential equations for the components \begin{align} \frac{\partial f}{\partial r} &= 0\, ,\\ \frac{\partial A}{\partial r} &= 0\, ,\\ \frac{\partial}{\partial r}(r^2 B) + \frac{\partial f}{\partial \theta} - 2rB &=0\, ,\\ \frac{\partial}{\partial r}(r^2 \sin^2 (\theta) C) + \frac{\partial f}{\partial \phi} - 2r\sin^2 (\theta) C &= 0. \end{align} Together with the boundary conditions \eqref{RB}, \eqref{Randbedingungen} this fixes the representatives uniquely, and the supertranslations $BMS^{\cal H}$ on the horizon ${\cal H}$ are generated by the following vector fields $\zeta^\mu$, which can be labeled by a function $f=f(\theta, \phi)$ on the two-sphere $S^2$: \begin{equation} BMS^{\cal H}\, :\quad \zeta_f^\mu = \biggl(f(\theta,\phi), 0, \frac{\partial f}{\partial \theta}(\frac{1}{r}-\frac{1}{r_S}), \frac{1}{\sin^2 \theta} \frac{\partial f}{\partial \phi}(\frac{1}{r}-\frac{1}{r_S})\biggr). \label{Supertranslationen} \end{equation} \subsection{BMS-supertranslations of the Schwarzschild metric on ${\mathscr I}^-$ } The limit $r_S \to \infty$ takes the future horizon $r=r_S$ to past null infinity $\mathscr{I}^-.$ Therefore, taking this limit in \eqref{Supertranslationen} yields the BMS-supertranslations at null infinity $\mathscr{I}^-$ with respect to the particular coordinate conditions used in the previous section: \begin{equation} \eta_g^\mu = \biggl(g(\theta,\phi), 0, \frac{\partial g}{\partial \theta}\frac{1}{r}, \frac{1}{\sin^2 \theta} \frac{\partial g}{\partial \phi}\frac{1}{r}\biggr)\, . \label{BMS} \end{equation} Therefore, in addition to the supertranslations of eq.\eqref{Supertranslationen} on the event horizon ${\cal H}$, labeled by $f=f(\theta, \phi),$ there exist (for all isolated systems) the standard $BMS^-$-supertranslations at null infinity \eqref{BMS}, labeled by an independent function $g=g(\theta, \phi)$. Let us remark that in taking the limit $r_S \to \infty$ we are not led to consider the standard BMS transformations on ${\mathscr I}^+$. So for the black hole metric, we are still considering two {\it a priori} independent supertranslations, namely, $BMS^{\cal H}$ on the horizon and $BMS^-$ on past null infinity ${\mathscr I}^-$. Hence instead of the standard supertranslations $BMS^+$ we are dealing, in the case of the black hole, with $BMS^{\cal H}$, whereas the group $BMS^-$ still provides the asymptotic symmetries of the black hole at ${\mathscr I}^-$.
In other words, the dynamics of black hole formation is associated with the relation between the two BMS groups we are dealing with, namely $BMS^-$, describing the collapsing body before black hole formation, and $BMS^{\cal H}$, associated with the horizon of the black hole. \section{Bogoliubov-modes as classical black hole hair} Associated with the supertranslations \eqref{Supertranslationen} we can calculate the Bogoliubov-modes $\delta_\zeta g_{\mu \nu}$. Note that any representative of an equivalence class in the Lie algebra of supertranslations, i.e., any vector field satisfying the boundary conditions \eqref{RB}, \eqref{Randbedingungen}, would give rise to solutions $\delta_\zeta g_{\mu \nu}$ of the Bogoliubov-de Gennes equations. However, for an observer in a concrete coordinate system we have to specify the additional coordinate conditions (i.e., fix the gauge freedom). Bogoliubov modes which do not fulfill the coordinate conditions (called ghosts) are not observable excitations for an observer in this particular coordinate system. In that particular gauge, ghosts do not correspond to shifts in the phase space spanned by the gauge-fixed minimal set of canonical variables and are therefore unphysical. The supertranslations derived by us (both on null infinity as well as on the event horizon) induce, by construction, physical excitations (see also the discussion in the last section). For our coordinate conditions the supertranslations $BMS^{\cal H}$ have the form \eqref{Supertranslationen} and the induced physical Bogoliubov-excitations have the form \begin{align} \delta_{\zeta_f} g_{\mu \nu} = \begin{pmatrix} 0 & 0 & -(1-\frac{r_S}{r})\frac{\partial f}{\partial \theta} & -(1-\frac{r_S}{r})\frac{\partial f}{\partial \phi}\\ 0 & 0 & 0 & 0\\ * & * & 2r^2(\frac{1}{r}-\frac{1}{r_S})\frac{\partial^2 f}{\partial \theta^2} & 2r^2(\frac{1}{r}-\frac{1}{r_S})(\frac{\partial^2 f}{\partial \theta \partial \phi}-\cot \theta \frac{\partial f}{\partial \phi})\\ * & * & * & 2r^2(\frac{1}{r}-\frac{1}{r_S})(\frac{\partial^2 f}{\partial \phi^2} + \sin \theta \cos \theta \frac{\partial f}{\partial \theta}) \end{pmatrix} . \label{Bogoliubov} \end{align} On the other hand, the Bogoliubov excitations with respect to the standard $BMS^-$-supertranslations in eq.\eqref{BMS} of the Schwarzschild-metric in Eddington-Finkelstein coordinates are given by \begin{align} \delta_{\eta_g} g_{\mu \nu}= \begin{pmatrix} 0 & 0 & -(1-\frac{r_S}{r})\frac{\partial g}{\partial \theta} & -(1-\frac{r_S}{r})\frac{\partial g}{\partial \phi}\\ 0 & 0 & 0 & 0\\ * & * & 2r \frac{\partial^2 g}{\partial \theta^2} & 2r (\frac{\partial^2 g}{\partial \theta \partial \phi}-\cot \theta \frac{\partial g}{\partial \phi})\\ * & * & * & 2r (\frac{\partial^2 g}{\partial \phi^2} + \sin \theta \cos \theta \frac{\partial g}{\partial \theta}) \end{pmatrix}\, . \label{BMS-Bogoliubov} \end{align} \subsection{Microstates of Schwarzschild black hole} Comparing the Bogoliubov-excitations due to supertranslations on the event horizon \eqref{Supertranslationen} with the excitations due to ordinary BMS-supertranslations \eqref{BMS}, we see that the horizon supertranslations contain a part which cannot be compensated by standard BMS-supertranslations. The horizon supertranslations therefore provide new Bogoliubov-excitations which are intrinsically due to the presence of an event horizon.
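As an illustrative check of eqs.\eqref{Bogoliubov} and \eqref{BMS-Bogoliubov}, consider the $(v,\theta)$-component. Using $\delta_\zeta g_{\mu\nu}=\zeta^\alpha\partial_\alpha g_{\mu\nu}+g_{\alpha\nu}\partial_\mu\zeta^\alpha+g_{\mu\alpha}\partial_\nu\zeta^\alpha$, together with $g_{v\theta}=0$, the $v$-independence of $\zeta_f^\mu$ and $\zeta_f^r=0$, one finds \begin{equation} \delta_{\zeta_f} g_{v\theta}=g_{vv}\,\frac{\partial f}{\partial \theta}=-\Bigl(1-\frac{r_S}{r}\Bigr)\frac{\partial f}{\partial \theta}\, , \end{equation} in agreement with the first row of \eqref{Bogoliubov}; the same computation for $\eta_g^\mu$ reproduces the first row of \eqref{BMS-Bogoliubov}. The two sets of excitations thus differ only in their angular components, where the factor $(\frac{1}{r}-\frac{1}{r_S})$ of the horizon supertranslations is replaced by $\frac{1}{r}$.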
In order to factor out the part of the event horizon supertranslations which is not due to standard BMS-supertranslations, we define ``disentangled'' supertranslations with respect to the quotient space ${\cal A} \equiv BMS^{\cal H}/BMS^-$. These are defined by the vector field which can be formally seen as the difference of the two vector fields $\zeta$ and $\eta$, where we set the functions $g$ and $f$ equal to each other. In this way we obtain the vector field $\chi$ as \begin{eqnarray}\label{quotient} BMS^{\cal H}/BMS^-\, :\quad \chi_f^\mu& =& \zeta_f^\mu - \eta_{f}^\mu\nonumber\\ &=&(0, 0,-\frac{1}{r_S} \frac{\partial f}{\partial \theta}, -\frac{1}{r_S\sin^2 \theta} \frac{\partial f}{\partial \phi}) \,, \end{eqnarray} for an arbitrary real function $f=f(\theta, \phi)$. The corresponding Bogoliubov excitations are \begin{align} \delta_{\chi_f}g_{\mu \nu}= \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ * & * & -2r^2 \frac{1}{r_S} \frac{\partial^2 f}{\partial \theta^2} & -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \theta \partial \phi}-\cot \theta \frac{\partial f}{\partial \phi})\\ * & * & * & -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \phi^2} + \sin \theta \cos \theta \frac{\partial f}{\partial \theta}) \end{pmatrix} \, . \label{microstates} \end{align} Note that in the limit $r_S \to \infty$ the vector fields $\chi^\mu$ tend to zero and the horizon supertranslations become indistinguishable from the ordinary BMS-supertranslations. We shall refer to these additional Bogoliubov-modes \eqref{quotient}, \eqref{microstates}, which are due to the group $\cal A$, as $\cal A$-modes. Since the BMS-Bogoliubov modes, i.e., the modes due to BMS-supertranslations at null infinity $\mathscr I^-,$ are physical, they provide, already at the classical level, a hair to the black hole. In order to determine the point in phase space corresponding to the black hole state, the function $g$ in \eqref{BMS-Bogoliubov} has to be specified. However, this BMS-degeneracy is present for all isolated systems. Therefore, these BMS-Bogoliubov modes are not expected to contribute to the Bekenstein-Hawking entropy. However, there are additional physical excitations due to the $\cal A$-modes. These are expected to contribute to the Bekenstein-Hawking entropy and should be identified with the black hole micro-states, as the presence of $\cal A$ is intrinsically due to the presence of the event horizon. In the next section, we will show explicitly that the $\cal A$-modes are classically gapless, i.e., they do not change the value of the ADM-mass.\footnote{That the BMS-Bogoliubov-modes are gapless is well-known and follows from the fact that \eqref{BMS-Bogoliubov} decays as $O(\frac{1}{r})$ in Cartesian coordinates \cite{Arnowitt:1962hi}.} The group $\cal A$ provides additional hair to the black hole. One has to specify (independently of $g$) an additional function $f$ in \eqref{microstates} in order to determine the exact position of the black hole state in phase space. Therefore, we have two degeneracies providing classical black hole hair, which consistently collapse to the ordinary BMS-degeneracy in the limit $r_S \to \infty$ in which the group $\cal A$ disappears.
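The interpretation of the $\cal A$-modes as area-preserving deformations can be made quantitative at linearized order; a short check, using only the angular components of \eqref{microstates}, goes as follows. The change of the area element on a sphere of radius $r$ is \begin{equation} \delta\sqrt{g_{S^2}}={1\over 2}\sqrt{g_{S^2}}\, g^{ab}\,\delta_{\chi_f}g_{ab}=-{r^2\sin\theta\over r_S}\,\Delta_{S^2}f\, , \end{equation} where $\Delta_{S^2}f=\frac{\partial^2 f}{\partial \theta^2}+\cot\theta\frac{\partial f}{\partial \theta}+\frac{1}{\sin^2\theta}\frac{\partial^2 f}{\partial \phi^2}$ is the Laplacian on the unit two-sphere. Since the integral of $\Delta_{S^2}f$ over the closed sphere vanishes, the total area is preserved at this order for an arbitrary function $f(\theta,\phi)$.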
\subsection{Gaplessness of $\cal A$-modes - Quantum Criticality} In order to see that the Bogoliubov-excitations \eqref{microstates} are gapless, we translate them via $t=v-r^*$ to Schwarzschild coordinates $(t,r,\theta,\phi)$, which yields \begin{eqnarray} g_{\mu \nu} + \delta_{\chi_f} g_{\mu \nu} ={\tiny \begin{pmatrix} -(1-\frac{r_S}{r}) & 0 & 0 & 0\\ 0 & (1-\frac{r_S}{r})^{-1} & 0 & 0\\ 0 & 0 & r^2 -2r^2 \frac{1}{r_S} \frac{\partial^2 f}{\partial \theta^2} & -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \theta \partial \phi}-\cot \theta \frac{\partial f}{\partial \phi})\\ 0 & 0 & * & r^2 \sin^2 \theta -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \phi^2} + \sin \theta \cos \theta \frac{\partial f}{\partial \theta}) \end{pmatrix}} . \label{microstates2} \end{eqnarray} Next we compute the ADM-mass (in units where $G_N=1$) via a version of the Brown-York formula due to Brewin \cite{Brewin:2006qe}, \begin{equation} M_{ADM} = \frac{1}{8\pi} \lim_{S \to i^0} \left[ \Bigl(\frac{dA}{ds}\Bigr)_+ - \Bigl(\frac{dA}{ds}\Bigr)_- \right], \end{equation} where $S$ is the sphere $t=const., r=const.$ and $r$ is taken to infinity. In the expression above, outside the sphere (+) the space is taken to be flat and inside (-) the space is given by the actual metric. $A$ refers to the area of the sphere and $s$ to the proper length as measured with the line element. The induced metric on $S(r)$ is given by \begin{align} s_{ij}= \begin{pmatrix} r^2 -2r^2 \frac{1}{r_S} \frac{\partial^2 f}{\partial \theta^2} & -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \theta \partial \phi}-\cot \theta \frac{\partial f}{\partial \phi})\\ * & r^2 \sin^2 \theta -2r^2 \frac{1}{r_S} (\frac{\partial^2 f}{\partial \phi^2} + \sin \theta \cos \theta \frac{\partial f}{\partial \theta}) \end{pmatrix} . \end{align} Already from the form of $\chi^\mu$ it is clear that one can do a reparametrization in $(\theta, \phi)$ such that one obtains the line element of a Euclidean $S^2.$ One therefore has \begin{equation} A_+(r) = A_-(r) = 4\pi r^2. \end{equation} Furthermore, one has \begin{align} ds_+ &= dr\, ,\\ ds_- &= (1-\frac{r_S}{r})^{-\frac{1}{2}}dr = (1+\frac{r_S}{2r})dr + O(r^{-2})\, dr. \end{align} Altogether this yields \begin{equation} M_{ADM} = \frac{r_S}{2} = M. \end{equation} Therefore, for every Bogoliubov-excitation \eqref{microstates} given by a function $f=f(\theta, \phi)$ the ADM-mass is equal to the Schwarzschild-mass, and therefore these excitations are gapless. It might look strange that the Bogoliubov excitations look like angular reparametrizations. However, the important point is that precisely those angular reparametrizations which are of the form given by $\chi^\mu$ are the physical Bogoliubov excitations. Although arbitrary angular reparametrizations solve the Bogoliubov-de Gennes equations, they are in general unobservable. Indeed, they also solve the coordinate condition we have imposed. But since among our coordinate conditions we have fixed a time-component of the metric, one of our coordinate conditions fixes only a Lagrange parameter of the ADM formulation of general relativity. This Lagrange parameter constrains one of the other parameters, but does not eliminate a degree of freedom. However, the argument can be repaired by using coordinate conditions which fix four degrees of freedom, in such a way that the remaining gauge freedom is really due to physical excitations of the canonical variables.
One then carries out the same argument (with more involved equations), transforms back to our coordinate conditions and arrives at the same result: precisely the Bogoliubov excitations given above correspond to the physical gapless excitations. The presence of the physical gapless angular reparametrizations is due to the presence of the black hole event horizon. Thus, we have shown that the $\cal A$-modes are gapless. The presence of these additional gapless excitations is a highly non-trivial phenomenon and is the classical sign of the underlying quantum criticality of the Schwarzschild geometry. This phenomenon has several important consequences. In particular it is the key to the information storage capacity of the black hole and respectively to the existence of black hole hair and entropy. We shall quantify some of these properties in the next section. \section{Information Resolution and Entropy Counting} We have identified the black hole hair carried by the Bogoliubov excitation \eqref{microstates} given by a function $f=f(\theta, \phi)$. We have shown that for this form of excitations the ADM-mass is equal to the Schwarzschild-mass and therefore these excitations are classically-gapless. We now need to understand how much information can be encoded in these excitations. Notice that, up until now, our analysis was purely classical. That is, we have been working with finite $M$ and $G_N$, but zero $\hbar$. This is equivalent to saying that we were working with $L_P = 0$ and thus $N = \infty$. The results obtained in this limit pass the first consistency check that we are on the right track of understanding the origin of black hole entropy, since as we saw, in this limit an {\it arbitrary} deformation of the form $f=f(\theta, \phi)$ is exactly gapless. Thus, we can store an {\it infinite} amount of information in such deformations. We have thus correctly identified the source of infinite entropy in the limit of $\hbar = 0$. We now need to understand the effects of finite $L_P$ (i.e., finite $N$). For this we need to clarify the meaning that the transformation $f=f(\theta, \phi)$ acquires in the quantum theory. To this end, let us expand this function in spherical harmonics, \begin{equation} f(\theta, \phi) = \sum_{l=0}^{l=\infty} \sum_{m=-l}^{m=+l} \, b_{lm} Y_{lm}(\theta, \phi), \end{equation} where $b_{lm}$ are expansion coefficients. Classically, these coefficients are just $c$-numbers, but in quantum theory they acquire the meaning of (expectation values of) the occupation numbers of spherical momentum modes, $\hat{b}_{lm}$. Notice that, since the only scale entering in (\ref{quotient}) and (\ref{microstates}) is $r_S$, it is natural to think of the modes $b_{lm}$ as living on a sphere of radius $r_S$ and correspondingly to measure the momenta in units of ${\hbar \over r_S}$. Thus, by promoting the expansion coefficients $b_{lm}$ into the expectation values of operators, we establish the connection between the classical and quantum pictures. In classical theory, by performing a $BMS^{\cal H}/BMS^-$ transformation, parameterized by the function $f=f(\theta, \phi)$, we excite the classical Goldstone field. In quantum theory, the same transformation is equivalent to populating the system with different occupation numbers of the $\hat{b}_{lm}$-modes. Each momentum mode in quantum theory acts as a qubit that can store information in its occupation number, $n_{lm} \equiv \langle \hat{b}_{lm}^{+} \hat{b}_{lm} \rangle$.
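For the counting below it is useful to record the multiplicity of these modes: the number of independent harmonics with $l\leq l_{max}$ is \begin{equation} \sum_{l=0}^{l_{max}}(2l+1)=(l_{max}+1)^2\, , \end{equation} so any truncation at $l_{max}$ contains $\sim l_{max}^2$ qubits, dominated by the highest harmonics $l\sim l_{max}$.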
These quantum modes are nothing but the Bogoliubov modes appearing at the quantum critical point. The amount of the information carried by the black hole hair is determined by the information-storage capacity of these modes. In order to measure it we need to answer the following question: \\ {\it How many independent angular Bogoliubov qubits must be counted as the legitimate information carriers? } \\ Let $l_{max}$ be the maximal spherical harmonic number up to which we include modes in the entropy-counting. Then, the total number of included modes scales as $\sim l_{max}^2$ and the major contribution to this number comes from the highest harmonics, i.e., modes with $l \sim l_{max}$. The value of $l_{max}$ can be estimated from the requirement that the energy gap, $\Delta E$, in any given mode contributing to the entropy-counting must be bounded by \begin{equation} \Delta E \lesssim {1\over N} {\hbar \over r_S}\,. \label{boundE} \end{equation} The energy gap generated by the quantum effects in a given mode of momentum $l {\hbar \over r_S}$ is proportional to the strength of the gravitational self-coupling $\alpha(l) \equiv l^2 {L_P^2 \over r_S^2}$ as well as to the departure of the background from criticality, i.e., $(1-\alpha(r_S) N)$. Exactly at criticality we have $\alpha(r_S) = {1\over N}$. However, the quantum fluctuations in the occupation number cause a small departure from the critical relation. The effect can be estimated as $(1-\alpha(r_S) N) \sim {1\over N}$. We thus arrive at the following estimate, $\Delta E \sim {\hbar \over r_S} \alpha(l) (1 - \alpha(r_S) N) \sim {l^2\over N^2} {\hbar \over r_S}$. Inserting this into (\ref{boundE}) we get the estimate $l_{max} \lesssim \sqrt{N}$. Thus, the total number of Bogoliubov qubits contributing to the entropy scales as $\sim l_{max}^2 \sim N$. As discussed in the introduction, this is exactly the number of information carriers needed for accounting for the Bekenstein entropy! We thus conclude that in quantum theory, the amount of information carried by the black hole hair generated by $BMS^{\cal H}/BMS^-$ transformations is finite and is given by $N$. We would like to stress the striking similarity between the counting obtained above and the counting obtained using the so-called St\"uckelberg approach for accounting for the quantum information in the presence of a horizon \cite{stuckelberg}. This is probably not surprising, since the vector field $\chi_{\mu}$ and the corresponding Goldstone field $f(\theta,\phi)$ can be thought of as related to the St\"uckelberg field, discussed in \cite{stuckelberg}, needed for maintaining the gauge invariance from the point of view of the external observer in the presence of the information boundary. The role of the latter in the present case is played by the two-sphere that is subjected to $\cal A$-transformations. \section{Crucial Role of Quantum Criticality in Entropy Scaling} Although this is already clear from our previous analysis, we would like to explain why quantum criticality is absolutely crucial for obtaining the correct scaling of entropy, when we go from the classical ($\hbar =0,~ L_P=0$) to the quantum ($\hbar \neq 0,~ L_P\neq 0$) description. In particular, we would like to explain why accounting for the entropy by a simple division of the system into Planck-size pixels does not work. As we have seen, in the classical theory the gapless excitation is given by a Goldstone field $f(\theta,\phi)$.
Since $f$ is an arbitrary continuous function of spherical coordinates, we can encode an {\it infinite} amount of information in it at {\it zero} energy cost. Indeed, in classical theory we can encode a message by giving different values to $f$ for a set of coordinates, say, $\theta_1,\phi_1$ and $\theta_2,\phi_2$, that can be {\it arbitrarily} close. This is because in classical theory the arbitrarily small coordinate differences (i.e., $\Delta \theta = \theta_2 - \theta_1, ~~ \Delta\phi = \phi_2-\phi_1$) are resolvable at zero energy cost. This is why the ground-state of any classical system with a gapless field $f(\theta,\phi)$ has infinite entropy, from the information point of view. In quantum theory, there is no longer a free lunch, because the coordinate-resolution costs finite energy. The amount of entropy carried by the same system, once $\hbar$ is set to be non-zero, crucially depends on the energy-cost of the coordinate-resolution. This is why, {\it a priori}, it is not obvious what shall remain out of the infinite classical entropy in the quantum world. Without having a microscopic quantum theory that could tell us what is the energy-cost of coordinate-resolution, we can only make some guesses. For example, we can take a naive route and put a Planck-scale resolution cutoff on the coordinate-separation, demanding $r_S\Delta \theta > L_P, ~~r_S\Delta \phi > L_P$. That is, we decide to use in the information-count only the values of the function $f$ in the set of points that are separated by a distance of Planck length or larger. This way of entropy-counting is equivalent to dividing the area of the sphere into Planck-area pixels, each housing a single quantum degree of freedom (i.e., a qubit). Since the total number of pixels is $N = r_S^2/L_P^2$, it is obvious that the scaling of entropy obtained in this way shall match the Bekenstein entropy. This is essentially the approach originally introduced by 't Hooft \cite{thooft} and also adopted recently in \cite{Hawking:2016msc}. The problem, however, is that such a method, despite being an interesting prescription, cannot be considered, without an additional microscopic input, to be an explanation of entropy scaling, because of the reason that we shall now explain. In a generic quantum system, housing a degree of freedom (say a two-level quantum system) in a Planck-size pixel would cost the energy gap $\Delta E \sim {\hbar \over L_P}$. Since the total number of pixels is $N$, the resulting number of states, $2^N$, would naively account for the Bekenstein entropy. However, these states span an enormous total energy gap \begin{equation} \Delta E_{total} \sim N {\hbar \over L_P}\, , \label{totalGap} \end{equation} which is by a factor $\sqrt{N}$ larger than the black hole mass! This, of course, shows that such states cannot exist and cannot be counted in the entropy. So, in order to account for the entropy-counting, we must explain why the energy gap per pixel, instead of being $\Delta E \sim {\hbar \over L_P}$, is \begin{equation} \Delta E \sim {1 \over N^{{3\over 2}}}{\hbar \over L_P}\,. \label{truegap} \end{equation} The additional suppression factor $N^{-{3\over 2}}$ is enormous even for relatively-small black holes. For example, for a black hole of Earth's mass this factor would be $N^{-{3\over 2}}\sim 10^{-99}$. This enormous suppression is precisely what quantum criticality delivers, as explained in a series of papers \cite{gold,giamischa} and as outlined in the previous section.
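As a simple consistency check of \eqref{truegap}, note that the suppressed gap can be rewritten as $\Delta E \sim N^{-{3\over 2}}{\hbar \over L_P} = {1\over N}{\hbar \over \sqrt{N} L_P} = {1\over N}{\hbar \over r_S}$, which exactly saturates the bound \eqref{boundE}. Correspondingly, the total energy stored in all $\sim N$ such qubits is \begin{equation} \Delta E_{total} \sim N\, \Delta E \sim {\hbar \over r_S}\, , \end{equation} suppressed by a factor $1/N$ relative to the black hole mass $M\sim N{\hbar \over r_S}$, in sharp contrast with \eqref{totalGap}.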
For a detailed review of the role of quantum criticality for obtaining such cheap qubits, we refer the reader to \cite{giamischa} and references therein. \section{Gravitational vacua, graviton condensates and Goldstone modes} \subsection{Minkowski vacua} The action of BMS transformations on some gravitational metric $G$ reveals the existence of different gravitational vacua, which we can denote by $|g;G\rangle$, with $g$ being the BMS function. The BMS transformations $\xi_g$ normally act on the null surfaces of asymptotically flat metrics at spatial infinity. This applies also to Minkowski space-time, and hence there exist {\it infinitely many} distinct Minkowski vacua, $|g^-;Minkowski\rangle$, which correspond to the family of BMS-transformed metrics as given by equation (\ref{bondimetricMina}). In \cite{Dvali:2015rea} we have explained that this degeneracy of Minkowski vacua can be understood by describing Minkowski space as a coherent state of $N=\infty$ gravitons of {\it infinite} wavelength. This behavior naturally emerges in the framework of the black hole quantum N-portrait \cite{Nportrait}, in which a black hole is described as a state of $N$ soft gravitons (a coherent state or a Bose-Einstein condensate) at a quantum critical point, $\alpha N = 1$. On the other hand, one can show (see the appendix) that the {\it near horizon geometry} in the infinite mass limit of a Schwarzschild black hole is a Minkowski space. The value of this observation lies in relating the infinite number of Minkowski vacua we can define on null infinity with the black hole entropy in the limit of infinite mass. This is the limit in which the moduli of Minkowski vacua generated by asymptotic symmetries can match the expected entropy of an infinitely massive black hole. Namely, in this limit the horizon becomes the null infinity and both asymptotic symmetries $BMS^-$ and $BMS^{\cal H}$ become isomorphic. As also discussed in \cite{Dvali:2015rea}, the supertranslations are spontaneously broken in the gravitational vacuum of the theory, in particular by the vacuum state that corresponds to Minkowski space. So, acting with the $BMS^-$ supertranslations on the infinite-$N$ graviton state that corresponds to Minkowski space, the Goldstone equation has the following form: \begin{equation}\label{MinBMS} BMS^-\, :\qquad T(g^-)|Minkowski\rangle =|g^-;Minkowski\rangle\, . \end{equation} The corresponding Goldstone modes precisely correspond to the Bogoliubov excitations eq.\eqref{BMS-Bogoliubov} with $r_S\rightarrow\infty$ that are given with respect to the standard $BMS^-$ transformations on ${\mathscr I}^-$. Equation \eqref{MinBMS} means that Minkowski space is infinitely degenerate and is described by the infinite family of Minkowski metrics as given in eq.\eqref{bondimetricMina}. The states $|Minkowski\rangle$ and $|{g^-};Minkowski\rangle$ correspond to two Minkowski metrics that are related by a $BMS^-$ transformation, and the difference between the two metrics can be accounted for by two different functions $C^\pm_{zz}$, which are related by the $BMS^-$ transformation on $C^\pm_{zz}$, as shown in eq.\eqref{ctrans}. Note that what we learn from the classical analysis of the asymptotic symmetries is the existence of many classical vacua, interpreted as connections defined by $C$ functions that are total derivatives $D_z^2g$. In quantum theory the manifold of classically-inequivalent vacua corresponds to different quantum vacuum states. Each of these states is characterized by a BMS function $g$.
Quantum mechanically these vacua for different $g$'s are orthogonal. Moreover, the action of a broken symmetry transformation that connects two vacua is equivalent to the creation of soft gravitons that play the role of the Goldstone bosons for the large diffeomorphisms defining the asymptotic symmetry. When we talk about a quantum resolution of these vacuum states we really mean a coherent state representation of them in terms of quanta with infinite wavelength. In the case of the Minkowski vacua, this quantum resolution is automatically consistent with quantum mechanical stability because of the infinite $N$ and the infinite wavelength. This is not the case for the metrics which in the quantum picture give finite $N$, such as the finite mass black holes. Quantum mechanically, such states are not necessarily stable. \subsection{Black hole vacua} As we discussed, since the black hole horizon is also a null surface, we can extend the rules of asymptotic symmetries, vacua and Goldstone modes to this case too. The first important thing to be clarified is what is playing the role of the manifold of {\it vacua}. Following our previous discussion, it is very natural to identify this manifold with the space of states generated by the action of $\cal A \equiv BMS^{\cal H}/BMS^-$. In the quantum portrait the black hole state $|BH\rangle$ is given in terms of the state of $N$ gravitons at criticality, $\alpha N = 1$. This criticality manifests itself in the appearance of classically-gapless Bogoliubov modes that we have identified with the gapless modes obtained by acting with $\cal A$. The degeneracy of $|BH\rangle$ originates from the action of the supertranslations $\cal A$ on the horizon ${\cal H}$, where we have factored out the standard BMS transformations on ${\mathscr I}^-$. These supertranslations are defined in eq.\eqref{quotient} and act as follows: \begin{equation}\label{BHGold} {\cal A} \, :\qquad T(f)|BH\rangle =|f;BH\rangle\, . \end{equation} The corresponding Bogoliubov modes in \eqref{microstates} are shown to be classically-gapless and preserve the ADM mass of the black hole. Hence, we propose to identify the family of black hole vacua $|f;BH\rangle$ with the family of $\cal A$-transformed black hole metrics, as given in eq.(\ref{microstates2}), with the gapless Bogoliubov modes corresponding to the Goldstone field. For infinite $N$, which corresponds to the classical limit, the Goldstone field is exactly gapless. However, as discussed above, for finite $N$ an energy gap $\Delta E\sim {1 \over N}{\hbar \over r_S}$ is generated. Moreover, the effective number of modes that count in the entropy is finite and also set by $N$. Therefore we would like to stress that there are effects, captured by the quantum $N$-portrait, that are expected not to be visible in the classical BMS picture and whose account requires the finite-$N$ resolution. In particular, we expect that because of the interaction among different modes, at finite $N$ the different would-be vacua obtained by the action of $\cal A$ are no longer orthogonal. This overlap is a quantum ${1\over N}$ effect and is linked with black hole evaporation. \subsection{Some remarks on the relation to gravitational scattering amplitudes} It has been known from the early days of quantum field theory that in theories with long-range forces, such as QED or gravity, the asymptotic states needed to define an IR-finite S-matrix are not the ones diagonalizing the free part of the Hamiltonian, but instead dressed states with an infinite number of soft quanta.
Generically, these dressed states are coherent states. This phenomenon underlies the standard recipes, such as the Bloch-Nordsieck theorem in QED, for dealing with IR divergences. A nice way of understanding the role of asymptotic symmetries in Minkowski space-time is simply in terms of the symmetries governing the kinematics of these asymptotic dressed states. This is in essence the meaning of the relation between soft theorems and asymptotic symmetries. In the case of gravity we can consider an ultra-Planckian scattering with large center of mass energy $\sqrt{s}$ at small impact parameter. Under such conditions we expect to form a black hole in the final state (ignoring the subsequent evaporation). Within the classicalization scheme described in \cite{Dvali:2010jz}, the final state of such a scattering process can be formally represented as some quanta of matter (e.g., the ultra-Planckian scattering of two electrons) gravitationally dressed by $N$ soft gravitons with $N\sim sL_P^2$, i.e., \begin{equation} |f; coh(N)\rangle \, . \end{equation} In other words, the final state is gravitationally-dressed by $N$ soft gravitons of typical wavelength $r_S\sim \sqrt{s} L_P^2$. These $N$ gravitons are organized in the form of a coherent state that we have denoted $|f;coh(N)\rangle$, where $f$ denotes the soft graviton part. The in-state picture was conceived from the point of view of the standard kinematics, defining the S-matrix in an asymptotically flat space, and therefore the relevant kinematic asymptotic symmetry was $BMS^\pm$. However, if a black hole is formed, the gravitational dressing of the final state should be modified in order to take into account the degeneracy of final states obtained by the action of $\cal A$. That is, the S-matrix element must be schematically replaced by \begin{equation} \int_{\cal A} |\langle in | \hat{S}\, t\, |f; coh(N)\rangle|^2 \,, \end{equation} where the integral is taken over the quotient space $\cal A \equiv BMS^{\cal H}/BMS^-$, with $t$ representing the action of this quotient space on the IR gravitationally dressed state. This integral formally compensates the suppression factor $e^{-N}$ of the gravitational $2\rightarrow N$ part of the ultra-Planckian scattering. This formal picture provides a nice understanding of black hole formation in ultra-Planckian scattering \cite{Dvali:2014ila} in the following sense. First of all, the UV/IR non-Wilsonian structure of gravity leads us to a gravitational dressing by $N$ soft quanta. The suppression factor is, however, compensated by the difference between the asymptotic group $BMS^-$ and the group $BMS^{\cal H}$. From the microscopic point of view defined by the $N$-portrait, this {\it enhancement} of asymptotic symmetries simply reflects the quantum criticality of the many-body graviton system. Of course, the previous discussion simply intends to grasp the underlying structure. The key point is how the integral over $\cal A$ precisely compensates the suppression factor $e^{-N}$, or in other words, how we get the right black hole entropy. This was explained above, by showing that at finite $N$ the number of relevant Bogoliubov modes contributing to the entropy scales precisely as $N$. This ensures that for finite $N$ the integral over $\cal A$ gets reduced to a discrete sum over $\sim 2^N$ final micro-states. This provides the needed compensation for the suppression factor $e^{-N}$.
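For orientation, the scaling $N\sim sL_P^2$ used above follows directly from the relations of the $N$-portrait (in units $\hbar=c=1$, with $G_N=L_P^2$): the gravitational radius of the final state is $r_S\sim G_N\sqrt{s}=L_P^2\sqrt{s}$, so that \begin{equation} N={r_S^2\over L_P^2}\sim s\,L_P^2\, . \end{equation}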
\section{Conclusions} The goal of this paper was to explicitly identify a candidate symmetry that in the geometric description can account for the existence of the degenerate black hole (micro)states, which give rise to the black hole entropy. The information stored in these states is a manifestation of the black hole hair. From the microscopic quantum theory the existence of such a hair was concluded some time ago \cite{Nportrait, gold}, but its precise geometric description had not been found so far. In \cite{Dvali:2015rea}, using the view of Minkowski space as the infinite-mass limit of a black hole, this symmetry was identified with the BMS symmetry of Minkowski space. Minkowski space was described as a coherent state with infinite occupation number of infinite-wavelength gravitons. The symmetry that connects the inequivalent Minkowski vacua changes the occupation number of these soft gravitons in the coherent state. These gravitons play the role of Goldstone bosons that carry the infinite entropy of Minkowski space. In this paper, we have suggested that the right candidate for such a symmetry for a finite-size black hole is the transformation corresponding to the quotient $\cal A \equiv BMS^{\cal H}/BMS^-$. Geometrically, we have identified the supertranslations on the black hole horizon as being area-preserving transformations on the two-sphere. We have performed various consistency checks showing that this symmetry indeed passes all the tests for describing the geometric limit of an underlying quantum critical degeneracy of black hole micro-states. In particular, we showed that the variations of the black hole metric, $\delta_{\chi} g_{\mu \nu}$, induced by $\cal A$, are classically gapless, as they do not change the ADM mass of the metric. Classically, these gapless excitations can be interpreted as the Goldstone field (parameterized by an arbitrary function $f(\theta,\phi)$) that connects the degenerate black hole ground-states. In the quantum language, these excitations correspond to the Bogoliubov excitations that give rise to the black hole micro-states. In short, we have identified some clear hints for the quantum criticality of the Schwarzschild geometry. In the classical limit, the hair and the entropy carried by these gapless modes are infinite and the information cannot be resolved during any finite time. Thus, for the information-recovery the quantum effects are crucial. We then showed, confirming the suggestion of the previous work \cite{Dvali:2015rea}, that in quantum theory the gap is necessarily generated due to ${1 \over N}$-effects. Simultaneously, the number of independent modes that contribute to the entropy counting becomes finite and scales as $N$. This matches the black hole entropy scaling. We have argued that in gravity, boundary null-surfaces are equipped with data that qualify as classical ground-states. In the quantum picture these ground-states can be described as coherent states composed out of soft background modes and are related by the action of the asymptotic symmetry groups. Once the non-zero interaction among the background quanta is taken into account, the degeneracy is expected to be lifted and the majority of the ground-states become unstable. This will manifest itself in radiation coming from the null boundary in a perfectly unitary form. Of course, as argued in \cite{Dvali:2015rea}, this effect is absent for Minkowski space, since the constituent quanta have infinite wavelengths, and correspondingly, they have vanishing quantum gravitational coupling.
\\ Finally, let us make a comment that should help the reader to grasp the essence of the transformation $\cal A$ and the key difference of our result from the recent analysis in \cite{Hawking:2016msc}. For this, note that the analog of $\cal A$ in the case of a $U(1)$-gauge theory would be trivial. In this case one can easily observe that any large gauge transformation on $\cal H$ can be compensated by the corresponding one on $\mathscr I^-$, leading to a trivial $\cal A$, i.e., one equal to the identity. The obvious reason for this is that in the gauge case, contrary to what we have shown is happening in the case of gravity, the large gauge transformations have no explicit dependence on the geometrical data defining the horizon. In other words, in the gauge case the Goldstone bosons associated with large gauge transformations are always $S$-matrix soft photons, and the only possibility to use them as some form of horizon hair is by implementing (or implanting, in the sense of \cite{Hawking:2016msc}) soft photons on the horizon. This is due to the fact that for electromagnetic large gauge transformations $\cal A$ is trivial. For gravity and its BMS transformations the situation is dramatically different in two important senses. First, the quotient between the transformations on $\mathscr I^-$ and $\cal H$ is already non-trivial on the basis of the geometry, and secondly, this quotient defines physical transformations. Note again the important difference with the gauge case where, if we use two different gauge shift functions (denoted in \cite{Hawking:2016msc} by $\epsilon$) on $\cal H$ and $\mathscr I^-$, the difference becomes a {\it gauge redundancy}. As we have discussed at length throughout the paper, the non-triviality of $\cal A$ in the case of gravity is what allows us to identify the purely-gravitational candidates to account for the black hole entropy, or equivalently, the purely-gravitational hair. This hair is soft not relative to a notion of $S$-matrix softness, but relative to the intrinsic notion of softness (the existence of gapless modes) determined by the black hole criticality. In this sense, the $\cal A$-Goldstones (generated by the vector $\chi_{\mu}$) can be viewed as the St\"uckelberg fields \cite{stuckelberg} that manifest the physicality of this transformation. Note that in the case of charged black holes the explicit dependence of $\cal A$ on the charge (or charges) of the black hole appears through the geometric dependence of $\cal A$ on the gravitational radius. A potentially-important general lesson we can extract from our findings is that what characterizes the ability of a null-surface $\cal H$ to work as a holographic screen is the ``rank'' of the corresponding $\cal A(\cal H)$. \section*{Acknowledgements} We thank Ioannis Bakas and Alexander Gu\ss mann for many discussions. We also thank Nico Wintergerst and Sebastian Zell for discussions on related topics. The work of G.D. was supported by the Humboldt Foundation under the Alexander von Humboldt Professorship, by the European Commission under ERC Advanced Grant 339169 ``Selfcompletion'' and by TRR 33 ``The Dark Universe''. The work of C.G. was supported in part by the Humboldt Foundation and by Grants: FPA 2009-07908, CPAN (CSD2007-00042) and by the ERC Advanced Grant 339169 ``Selfcompletion''. The work of D.L. was supported by the ERC Advanced Grant 32004 ``Strings and Gravity'' and also by TRR 33.
\section*{Appendix: Minkowski space as the near horizon geometry of the Schwarzschild metric} Let us derive the near horizon limit of the Schwarzschild black hole. The well-known black hole metric has the form \begin{equation} ds^2=-(1-r_S/r)dt^2+(1-r_S/r)^{-1}dr^2+r^2d\Omega^2\,. \end{equation} We now introduce the coordinate $\epsilon=r-r_S$ and for small $\epsilon$ we obtain the metric in the near horizon limit \begin{equation} ds^2=-{\epsilon\over r_S}dt^2+{r_S\over\epsilon}d\epsilon^2+r_S^2d\Omega^2\,. \end{equation} Next we introduce two new coordinates, \begin{equation} \rho=2\sqrt{r_S\epsilon}\, ,\qquad \omega={t\over 2r_S}\, , \end{equation} and we take the large $N$ limit \begin{equation}\label{largeN} N\rightarrow\infty\, ,\quad r_S\rightarrow\infty\, , \quad M\rightarrow\infty\,, \end{equation} such that $\rho$ and $\omega$ stay finite for small $\epsilon$ resp. for large $t$. In this limit the entropy as well as the mass and the horizon become infinitely large. Then the near horizon metric finally becomes: \begin{equation} ds^2=-\rho^2d\omega^2+d\rho^2+\sum_{i=2,3}dx^idx_i\,. \end{equation} This is the metric of $M^{1,1} \times R^2$, where $M^{1,1}$ is the 2-dimensional Minkowski space in Rindler coordinates $\omega$ and $\rho$. These are related to the flat Minkowski coordinates as \begin{equation} t=\rho\,\sinh \omega\, ,\qquad x_1=\rho\,\cosh \omega\, . \end{equation} In Rindler space the horizon ${\cal H}$ of the black hole is at $\rho=0$, called the Rindler horizon, which corresponds to the two light cones, $x_1=\pm t$, of $M^{1,1}$: \begin{equation} \label{horizon} {\cal H}\, :\quad R^1_{\omega}\otimes R^2_{x_2,x_3}=R^1_{x_1=\pm t}\otimes R^2_{x_2,x_3}\, . \end{equation} In summary, in its Rindler version Minkowski space-time is just the near horizon geometry of a black hole with \begin{equation} N=M=r_S=\infty\, . \end{equation} As discussed before, in this limit the black hole horizon ${\cal H}$ agrees with ${\mathscr I}^-$. Note that Rindler space with its horizon ${\cal H}$ radiates; it is a thermal space with an associated Rindler temperature. However, the Rindler entropy is infinite, just as the entropy of an infinitely massive black hole. In summary, we obtain here a picture in close analogy with the emergence of $AdS$-geometry via the branes in superstring or M-theory: four-dimensional Minkowski space $M^{1,3}$, written as $M^{1,1} \times R^2$, arises as the near horizon geometry of a Schwarzschild black hole. In analogy to the $N$ coincident D3-branes of the $AdS_5\times S^5$ geometry, the microscopic picture of $M^{1,1} \times R^2$ is the bound state (a sort of Bose-Einstein condensate) of $N$ graviton particles. If we push this picture, we must note that since the Schwarzschild black hole is a non-extremal object, we are really dealing with a bound state, created by the attractive gravitational force among the $N$ graviton particles. This is in contrast to the BPS branes, which are just coincident, since the gravitational and the $p$-form forces among the branes are opposite and cancel each other. This picture finally raises the question of whether there exists a one-dimensional dual conformal theory living on the holographic screen of Rindler space, namely the one-dimensional Rindler cone of $M^{1,1}$, in analogy to the $SU(N)$ gauge theory on the boundary of $AdS_5$. This conformal theory should be closely related to the bound states of the $N$ gravitons at the quantum critical point, which is also expected to be described by a sort of conformal theory of gapless modes.
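As a final cross-check of the coordinate identifications used in this appendix, one can verify directly that the Rindler parametrization reproduces two-dimensional Minkowski space: from $t=\rho\sinh\omega$ and $x_1=\rho\cosh\omega$ one has $dt=\sinh\omega\, d\rho+\rho\cosh\omega\, d\omega$ and $dx_1=\cosh\omega\, d\rho+\rho\sinh\omega\, d\omega$, so that \begin{equation} -dt^2+dx_1^2=d\rho^2-\rho^2d\omega^2\, , \end{equation} with the cross terms cancelling identically.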
\section{Introduction} Algorithms to automate consequential decisions such as hiring \citep{hiring}, lending \citep{lending}, policing \citep{policing}, and criminal sentencing \citep{sentencing} are frequently suspected of being unfair or discriminatory. The suspicions are not hypothetical. The 2016 ProPublica study \citep{propublica} of the COMPAS Recidivism Algorithm (used to inform criminal sentencing decisions by attempting to predict recidivism) found that the algorithm was significantly more likely to incorrectly label black defendants as recidivism risks compared to white defendants, despite similar overall rates of prediction accuracy between populations. Since then, discoveries of ``algorithmic bias'' have proliferated, including a recent study of racial bias by algorithms that prioritize patients for healthcare \citep{sendhilScience}. Thus spurred, policymakers, regulators, and computer scientists have proposed that algorithms be designed to satisfy notions of fairness (see for instance \cite{mathbabe16,fairmlbook,ethicalalgorithm} for overviews). This raises a question: what measure(s) of fairness should designers be held to, and how do these constraints interact with the original objectives the algorithm was designed to target? The COMPAS case illustrates that the answer is not clear. ProPublica and Northpointe (the company that designed COMPAS) advocated for different measures of fairness. ProPublica argued that the algorithm's predictions did not maintain parity in false positive and false negative rates between white and black defendants,% \footnote{Northpointe's algorithm had differing Type-1 and Type-2 error rates across the two groups.} while Northpointe countered that their algorithm satisfied predictive parity.% \footnote{Roughly, the accuracy of COMPAS scores was the same for both groups at all risk levels.} Subsequent research identified hard trade-offs in the choice of fairness metrics: under some mild conditions, the two requirements above cannot simultaneously be satisfied (\cite{KMR16}, \cite{Chou17}). This inspired a literature proposing (or criticizing) notions of fairness on ethical/normative grounds. The literature evaluates algorithms on the basis of these measures, and/or proposes novel algorithms that better trade off the goals of the original designer (decision accuracy, algorithmic efficiency) against these fairness desiderata.% \footnote{See e.g. \cite{DworkFairness,HPS16,sharad1,sharad2,sharad3,impossibility,gerrymander,multicalibration,implicit} for a small sample of an enormous literature.} In general, the different proposed fairness measures are fundamentally at odds with one another. For example, in addition to the impossibility results due to \cite{KMR16,Chou17}, enforcing parity of false positive or false negative rates for e.g. parole decisions typically requires making parole decisions using different thresholds on the posterior probability that an individual will commit a crime for different groups. This has itself been identified by \citep{sharad1} as a potential source of ``unfairness''. This line of research is subject to two criticisms. The first, raised by e.g. \citep{sharad1}, is that these notions of fairness are disconnected from, and lead to unpalatable trade-offs with, other economic and social quantities and consequences one might care about. Second, the literature almost exclusively assumes that the agent types, which are relevant to the decision at hand, are exogenously determined, i.e.
unaffected by the decision rule that is selected. For instance, in the criminal justice application described, individual choices of whether to commit a crime or not, and therefore the overall crime rates, are fixed and not affected by policy decisions made at a societal level (e.g. what legal standards are used to convict, policing decisions, etc.). In settings like this, where agent decisions are exogenously fixed, \cite{sharad1} and \cite{implicit} observe that natural notions of welfare and accuracy (incarcerating the guilty, acquitting the innocent) are optimized by decision rules that select a uniform threshold on ``risk scores'' that are well calibrated --- for example, the posterior probability of criminal activity --- which tend \emph{not} to satisfy statistical notions of fairness that have been proposed in the literature. Does this mean that setting uniform thresholds on equally calibrated risk scores is better aligned with natural societal objectives than is asking for parity in terms of false positive and negative rates across populations? In this paper, we consider a setting in which agent decisions are endogenously determined and show that in this model, the answer is \emph{no}: in fact, parity of false positive and negative rates (sometimes known in this literature as \emph{equalized odds} \cite{HPS16}) is aligned with the natural objective of minimizing crime rates. Parity of positive predictive value and posterior threshold uniformity are {\em not}. Although the model need not be tied to any particular application, we develop it using the language of criminal justice. We treat agents as rational actors whose decisions about whether or not to commit crimes are {\em endogenously} determined as a function of the incentives given by the decision procedure society uses to punish crime. The possibility for unfairness arises because agents are ex-ante heterogeneous: their demographic group is correlated with their underlying incentives --- for example, each individual has a private \emph{outside option} value for not committing a crime, and the distribution of outside options differs across groups. Our key result is that policies that are optimized to minimize crime rates are compatible with a popular measure of demographic fairness --- equalizing false positive and negative rates across demographics --- and are generally incompatible with equalizing positive predictive value and uniform posterior thresholds. Thus, which of these notions of fairness is compatible with natural objectives hinges crucially on whether one believes that criminal behavior is responsive to policy decisions or not. Our results have direct implications for regulatory testing for unfairness. Often, in settings of interest, a regulator does not directly observe the decision rule used by an adjudicator. However, the regulator may wish to test whether the adjudicator is using a ``fair'' rule, i.e. whether the adjudicator's choices are biased towards or against some demographic group. Following a tradition starting with \cite{becker}, one standard used is called an outcome test, i.e. comparing, ex-post, the classification assigned by the adjudicator to observed outcomes. For instance, in a criminal justice setting, one may compare the judge's decision to the (somehow obtained) actual innocence or guilt of the defendants, or in a lending setting, compare the lender's decision on whom to extend loans to with the actual repayment outcomes of loan applicants, etc.
In this context, a given prescription on what constitutes a ``fair'' or non-discriminatory rule maps into a corresponding outcome test. In particular, a test that is popularly used by researchers and regulators corresponds to the common-posterior-threshold rule described above. As already mentioned, this is not the best test in our model. This test attempts to assess whether the adjudicator is using a common posterior threshold across groups by checking whether the marginal agents in each group have similar probabilities of different outcomes.\footnote{For a discussion of this in the context of evaluating the fairness of lending standards, see \cite{ferguson1995constitutes}.} However, implementing this test is difficult: identifying (and being sure that one has correctly identified) the marginal agent in each group is hard (this is roughly the infra-marginality problem; see e.g. \cite{simoiu2017problem}). For instance, there may be information observed by the decision maker but not by the regulator/econometrician (an oft-cited example is that police observe a suspect's demeanor and use this as a factor, but this cannot be quantified). By contrast, if our maintained assumptions are valid, then an adjudicator wishing to minimize crime should use a rule that equalizes false positive and false negative rates across demographic groups. This is easy to estimate and test: there is no need to identify a marginal agent.
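To illustrate how simple the latter test is to run, the following sketch estimates false positive and false negative rates by group from outcome-labeled data. The data-generating process here (differing base rates, Gaussian evidence, a uniform signal threshold) is entirely hypothetical and for illustration only; the point is that no marginal agent needs to be identified.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Entirely hypothetical outcome data: group label, true action (1 = crime),
# and the adjudicator's decision (1 = labeled guilty).
n = 10_000
group = rng.integers(0, 2, size=n)
crime = rng.binomial(1, np.where(group == 0, 0.20, 0.35))  # differing base rates
signal = rng.normal(loc=crime.astype(float), scale=1.0)    # noisy evidence
guilty = (signal >= 0.5).astype(int)                       # uniform signal threshold

for g in (0, 1):
    in_g = group == g
    fpr = guilty[in_g & (crime == 0)].mean()        # innocent, labeled guilty
    fnr = 1.0 - guilty[in_g & (crime == 1)].mean()  # criminal, acquitted
    print(f"group {g}: FPR = {fpr:.3f}, FNR = {fnr:.3f}")
\end{verbatim}
Because the rule above thresholds the raw signal uniformly, the two printed FPRs (and FNRs) agree up to sampling noise, despite the differing base rates.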
\subsection{Overview of Model and Results}\label{sec:overview} We first derive our results in an extremely simple baseline model to highlight the underlying intuition. We then show that our conclusions are robust to a number of elaborations and generalizations of the model. \subsubsection*{The Baseline Model} Our baseline model (in Section \ref{sec:model}) has a mass of agents who each belong to one of two demographic groups. Each agent has a single choice on the extensive margin: for instance, a binary choice of whether or not to commit a crime, or whether or not to acquire human capital. To fix ideas, in this paper we frame the matter as a decision about whether to commit a crime. An adjudicator has to classify each agent as guilty or innocent. This classification is based on a noisy signal that the adjudicator receives of each agent's choice; the distribution of this signal depends only on the agent's choice, and not on her group. Further, the adjudicator observes the group membership of each agent. The adjudicator commits ex-ante to a classification rule, i.e. how it will classify agents as a function of the signal received, and potentially the agent's group membership. Agents are expected payoff maximizers who enjoy a monetary benefit from crime but also incur a cost if declared guilty of the crime. In choosing whether to commit a crime, they compare their expected net benefit from the crime to an outside option. The costs and benefits are privately known to the agent, but not to the adjudicator (who only sees group membership). The only distinction between groups is that the distributions of costs and benefits may be different for different groups. For example, individuals from different groups might have different legal employment opportunities, different costs of incarceration (e.g. differences in stigma), etc. The model is flexible enough to allow (potentially different) fractions of the population in each group who are rigidly law-abiding (i.e. do not commit a crime regardless of circumstance) or hardened criminals (i.e. will commit a crime regardless of circumstance), and a variety of responses to incentives in between these two extremes. We do not model the source of this heterogeneity: it is exogenous, and the distribution is known to the adjudicator. Given these preferences, the adjudicator's decision rule determines agents' choices, which in turn determine the overall crime rate in each population. The adjudicator's objective is to minimize the overall crime rate, i.e. the total mass of agents that choose to commit a crime. While we model the adjudicator as knowing that the underlying groups are heterogeneous (i.e. knowing the above distributions that describe each group), the adjudicator is not biased for or against any group, nor is there any underlying preference for fairness. Our main result (see Section \ref{sec:baseline}) is that the classifier that minimizes the crime rate is fair according to a metric that has attracted attention in the literature: it sets \emph{different} thresholds on the posterior probability of crime for each group, so as to guarantee equality of false positive and false negative rates. This corresponds to setting the \emph{same} threshold on signals across groups. To dig a little deeper into this result, the equilibrium crime rate in each population can be viewed as the adjudicator's prior belief in equilibrium that an agent has committed a crime, given knowledge only of her group membership. Given the noisy signal, the adjudicator has a posterior belief that the agent has committed a crime. In a static environment, the optimal classification rule for an adjudicator who wishes to optimize classification accuracy will be a group-independent threshold on her posterior belief that an agent has committed a crime. Note that priors will generally differ between populations (because outside option distributions differ, crime rates in the two populations differ). Therefore policies corresponding to group-independent thresholds on posterior beliefs will typically correspond to applying group-dependent thresholds on signals, and vice versa. Equalizing false positive and false negative rates across groups then corresponds to setting identical thresholds on the raw \emph{signal} for each group. It can be viewed as a commitment to avoid conditioning on group membership, even when group membership information is ex-post informative for classification. The intuition is that if the adjudicator uses the same threshold on the posterior belief that an agent has committed a crime for each group, the decision rule is making use of information contained within each group's prior. Although this information is statistically informative of the decision made by the agent, it is not within the agent's control. Using this information therefore only distorts the (dis-)incentives to commit crime. On the other hand, if the adjudicator uses the same threshold on agents' signals for each group (and hence different thresholds on posterior beliefs), decisions are made only as a function of information under the control of the agents, and hence are more effective at discouraging crime. The equalization of false positive and false negative rates across the groups follows from this. \subsubsection*{Extensions} The main insights of our baseline model continue to hold even when many of its core assumptions are relaxed. We summarize them here. First, the baseline model assumes that a signal is observed by the adjudicator for every agent (or equivalently, at equal rates across populations).
There is significant empirical evidence, however, that this is often not the case: for example, arrest rates (and hence prosecution rates) are substantially higher in minority populations for certain drug offenses, despite evidence that the underlying prevalence is more uniform across groups \citep{drugdisparities}. Section \ref{subsec:hetobs} introduces an elaboration of the baseline model based on \cite{Persico02}. In this variant, the adjudicator must rely on an intermediary (police) to inspect agents and generate a signal. The adjudicator observes a signal from an individual only if the police inspect them, and individuals are punished only if they are inspected \emph{and} their signal crosses the adjudicator's threshold for establishing guilt. The police have their own objectives: to maximize the number of successful inspections (e.g. inspections that result in an arrest). Formally, we study the following game: the adjudicator commits to a decision rule on guilt/innocence based on signals. Then police and agents play a simultaneous move game: the police choose an \emph{inspection intensity} for each group to maximize their objectives, given the adjudicator's rule and crime rates in each group, subject to an overall capacity constraint. Agents of different groups commit crime based on both the adjudicator's rule and the police's inspection rate for their group. We show that similar results continue to hold in this model, i.e. the optimal rule for the adjudicator will continue to equalize the disincentive to commit crime across the groups. This will result in equalizing \emph{conditional} false positive and false negative rates across the groups, i.e. the rates conditional on being inspected. In Section \ref{sec:hetsignals}, we consider a setting in which the signal distribution depends not just on the action chosen by the agent (crime or no crime) but may also depend on their group. For example, the underlying signal generating process may be less noisy for agents from certain groups, and noisier for others. Of course, in general, the structure of the optimal solution is closely tied to the relationship between the signal distributions, and our results cannot carry over without further assumptions.\footnote{Consider, for example, the case in which there is a perfectly informative signal for one group but not for another. Then, the optimal solution will have zero error for the former group, whereas error will be inevitable for the latter, for whom we only have a noisy signal.} Nevertheless, the insights provided by our previous analysis allow us to study the tradeoff between the adjudicator's objective (minimizing overall crime) and various fairness notions. In particular, Theorem \ref{thm:het_eqInc} gives conditions under which our baseline insights continue to hold, i.e. conditions under which the adjudicator's optimal rule will continue to equalize the disincentive to commit crime across groups. Conversely, Theorem \ref{thm:het_lem_comparison} shows conditions under which rules that equalize false positive or false negative rates across groups outperform rules that equalize disincentives. Finally, note that the crime rate in our model (or what is referred to in the classification literature as the ``base rate'') is endogenous, whereas it is normally modeled as exogenously fixed. This allows us to consider an additional notion of fairness that is ill-defined in the existing literature: namely, classification rules that equalize base rates across groups.
In Section \ref{sec:equalbase}, we study when this may be better or worse than the other notions we considered above in terms of the adjudicator's objective (overall crime rate). \subsection{Additional Related Work} The economic literature starting from \cite{arrow1972some} has considered models of discrimination where agent decisions (e.g. to gain education) are endogenous and their incentives are determined by a principal's choice (e.g. an employer's hiring rule). \cite{CL93,FV92} study models in which individuals have a choice about how much effort to exert, and identical populations can have different outcomes (e.g. in hiring markets) because of asymmetric self-confirming equilibria. There has also been extensive interest in the design of affirmative action policies, for example in higher education. \cite{loury} makes the case that affirmative action may be necessary to correct historical inequity by constructing a dynamic model in which heterogeneity between two groups may persist if the principal uses a non-discriminatory rule going forward. The subsequent literature is too large to cite comprehensively; see \cite{fang2011theories} for a survey. There has also been substantial interest in evaluating outcome data for evidence of discrimination, using (or developing) an underlying theoretical prediction of how such discrimination would manifest: see e.g. \cite{knowles2001racial}, \cite{Persico02} or \cite{anwar2006alternative} in the context of policing/traffic stops. A large literature has studied lending data for evidence of discrimination against women and minorities: see e.g. \cite{ferguson1995constitutes} or \cite{ladd1998evidence} for overviews of both the debate on what measures of (un)fairness to use and the existing research. More recently, in the computer science literature, several papers consider effort-based models that are similar in spirit to \cite{CL93,FV92}. \cite{HC18} propose a two-stage model of a labor market with a ``temporary'' (i.e. internship) and a ``permanent'' stage, and study the equilibrium effects of imposing a fairness constraint (``statistical parity'', which corresponds to hiring from two populations at equal rates) on the temporary stage. \cite{fat20} consider a model of the labor market with higher dimensional signals, and study equilibrium effects of ``subsidy'' interventions which can lessen the cost of exerting effort. \cite{downstream} study the effects of admissions policies in a two-stage model of education and employment, in which a downstream employer makes rational decisions, but student types are exogenously determined. Two recent papers \cite{delayed,delayed2} study non-game-theoretic models by which classification interventions in an earlier stage can have effects on individual type distributions at later stages, and show that for many commonly studied fairness constraints (including several that we consider in this paper), their effects can be either positive or negative in the long term, depending on the functional form of the relationship between classification decisions and changes in the agent type distribution. \renewcommand{\Re}{\mathbb{R}} \section{Preliminaries} \label{sec:model} \subsubsection*{Baseline Model} Each agent belongs to a group $g \in \mathcal{G}$. A group corresponds to some observable characteristic of the agent, for instance race or gender. There are $N_g$ agents in group $g$.
For simplicity, assume just two groups, $\mathcal{G} = \{1,2\}$, though the results extend straightforwardly to any finite number. Each agent makes a single binary decision to either commit a crime ($c$) or remain innocent ($i$). Then, for each agent, the adjudicator observes a random signal $\sigVal \in \Re$ which is informative of the agent's guilt or innocence. The distribution of the signal depends only on whether the agent has committed a crime (and is therefore conditionally independent of their group). Criminals' signals are drawn according to the distribution $\crimeSigCDF$ (with pdf $\crimeSigPDF$) and innocents' signals are drawn from $\noncrimeSigCDF$ (with pdf $\noncrimeSigPDF$). It is without loss of generality (reordering signals if necessary) to assume that the signal distributions satisfy the Monotone Likelihood Ratio Property (MLRP), i.e. higher signals imply a higher likelihood of guilt. The adjudicator commits to a decision rule $\beta$, which labels an agent in group $g$ with signal $\sigVal$ as guilty with probability $\probPolicy_g(\sigVal)\in [0,1]$. Note that implicitly this means the adjudicator perfectly observes an agent's group membership, along with the signal, at the time of adjudication. We write $\guilty=1$ to indicate that the agent is labeled guilty and $\guilty=0$ otherwise. Now we describe agents' incentives to commit a crime in the first place. The agent receives a reward $\crimeReward$ when he commits a crime, but pays a penalty of $\crimeCost$ if he is labeled as guilty. An agent who does not commit a crime receives his outside option value $\outOptVal$. All three quantities are privately known only to the agent and are drawn independently from a distribution that may potentially differ across the groups. An agent in group $g$ commits a crime if his net utility from committing a crime is higher than from not doing so: \begin{align} &\crimeReward - \crimeCost \Pr ( \guilty =1 \mid c, \group) \geq \outOptVal - \crimeCost \Pr(\guilty =1 \mid i, \group), \label{eqn:ic}\\ \intertext{which can be written as} & \Pr ( \guilty =1 \mid c, \group) - \Pr ( \guilty =1 \mid i, \group) \leq \frac{\crimeReward - \outOptVal}{\crimeCost}, \label{eqn:ic2}\\ \intertext{where $\frac{\crimeReward-\outOptVal}{\crimeCost}$ is the \emph{marginal benefit} of committing a crime, normalized by the penalty. Define} & \margCostCrime_g = \Pr(\guilty=1 \mid c, \group) - \Pr(\guilty=1 \mid i,\group) \label{eqn:disincentive} \end{align} as the \emph{disincentive} for committing a crime: it is the group-specific additional probability of being found guilty having committed a crime, relative to not having done so. Then, the crime rate of group $g$ can be expressed in terms of $\outOptCdf_g$, the survivor function (i.e. one minus the CDF) associated with the quantity on the right-hand side of \eqref{eqn:ic2}, given the joint distribution of $\crimeReward$, $\crimeCost$ and $\outOptVal$: \begin{align} \crimeRate_g = \Pr\left( \margCostCrime_{g} \le \frac{\crimeReward - \outOptVal}{\crimeCost} \right) = \outOptCdf_g\left( \margCostCrime_{g} \right).\label{eqn:crimerate} \end{align} The adjudicator's objective is to minimize the overall crime rate, i.e. to solve \begin{align} \min_{\beta \in B} \sum_{g \in \mathcal{G}} \numPeople_g \crimeRate_g, \label{eqn:obj}\tag{OPT} \end{align} where, of course, $\beta$ determines $\margCostCrime_g$ by \eqref{eqn:disincentive} and therefore $\crimeRate_g$ as per \eqref{eqn:crimerate}. Here, $B$ is the set of all feasible policies for the adjudicator, i.e. $B=\{(\beta_g)_{g\in \mathcal{G}} : \beta_g: \Re \rightarrow [0,1]\}$.
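Before proceeding, a small numerical sketch may help fix ideas. Everything quantitative in it is an illustrative assumption on our part (Gaussian signal noise, a logistic outside-option survivor function, and the particular parameter values); the model itself imposes none of these. The sketch maps a signal threshold into the disincentive $\margCostCrime$ and the induced crime rate $\outOptCdf_g(\margCostCrime)$.
\begin{verbatim}
import numpy as np
from scipy.stats import logistic, norm

# Illustrative signal structure: s ~ N(0,1) if innocent, s ~ N(1,1) if criminal.
def disincentive(T):
    tpr = 1.0 - norm.cdf(T, loc=1.0)  # Pr(labeled guilty | crime)
    fpr = 1.0 - norm.cdf(T, loc=0.0)  # Pr(labeled guilty | innocent)
    return tpr - fpr

# Illustrative outside options: H_g is the survivor function of
# (rho - omega)/kappa, here logistic with group-specific location mu_g.
def crime_rate(T, mu_g):
    return logistic.sf(disincentive(T), loc=mu_g, scale=0.1)

for T in (0.0, 0.5, 1.0):
    print(f"T = {T}: Delta = {disincentive(T):.3f}, "
          f"CR_1 = {crime_rate(T, 0.2):.3f}, CR_2 = {crime_rate(T, 0.3):.3f}")
\end{verbatim}
In this specification the threshold $T=0.5$ maximizes the disincentive and hence minimizes both groups' crime rates simultaneously, anticipating the analysis of Section \ref{sec:baseline}.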
\subsection{Fairness Measures}\label{fairmeasures} We are interested in how decision rules $\beta$ respecting various notions of fairness perform relative to the optimal policy, and to each other. There are five main notions of fairness that we discuss throughout this paper, each of which corresponds to equalizing some statistical quantity across groups. Three of them have been considered both in the literature and in the popular press: equalizing false positive rates, false negative rates, and positive predictive value. These three notions of fairness are of particular interest to us because it has been shown that attaining all three measures simultaneously is impossible \citep{KMR16,Chou17}. Given a policy $\beta_g$ for group $g$, the true positive rate (TPR), false positive rate (FPR), false negative rate (FNR), and positive predictive value (PPV) are defined as \begin{align} &\TPR_g = \Pr(\guilty=1 \mid \crime, g)=\int_{\mathbb{R}}\crimeSigPDF(\sigVal)\probPolicy_g(\sigVal)d\sigVal \label{eqn:tpr}\tag{TPR}\\ &\FPR_g = \Pr(\guilty=1 \mid i,g ) =\int_{\mathbb{R}}\noncrimeSigPDF(\sigVal)\probPolicy_g(\sigVal)d\sigVal\label{eqn:FPR}\tag{FPR}\\ &\FNR_g = \Pr(\guilty=0 \mid c,g) =\int_{\mathbb{R}}\crimeSigPDF(\sigVal)(1-\beta_g(\sigVal))d\sigVal \,\, (= 1- \TPR_g) \label{eqn:FNR}\tag{FNR}\\ &\PPV_g = \Pr(\crime \mid \guilty=1, \group)=\frac{\crimeRate_g \TPR_g}{\crimeRate_g \TPR_g + (1- \crimeRate_g) \FPR_g} \label{eqn:PPV}\tag{PPV}\\ \intertext{Note that in light of these definitions, we can rewrite \eqref{eqn:disincentive} as:} &\margCostCrime_g = \TPR_g - \FPR_g \label{eqn:disincentive2} \tag{$\margCostCrime$} \end{align} Additionally, we propose two {\em new} notions of fairness: equalizing disincentives (denoted $\margCostCrime$) and equalizing crime rates (denoted $\crimeRate$). We say the policy $\beta$ achieves fairness notion $\fairnessNotion \in \{\FPR, \FNR, \PPV, \margCostCrime, \crimeRate\}$ if the resulting respective quantity is the same across the groups when the adjudicator chooses policy $\beta$. We write $\Beta_{\fairnessNotion}$ for the set of all policies that achieve fairness notion $\fairnessNotion$. Given this framework, we are interested in two questions. First, which of these fairness notions is compatible with the adjudicator's problem \eqref{eqn:obj}? Second, under what conditions is a particular fairness notion better than another in terms of the objective of minimizing the overall crime rate, i.e. when do we have, for fairness notions $\fairnessNotion, \fairnessNotion'$, \[ \min_{\beta \in \Beta_{\fairnessNotion}} \sum_{g \in \mathcal{G}} \numPeople_g \crimeRate_g \le \min_{\beta \in \Beta_{\fairnessNotion'}} \sum_{g \in \mathcal{G}} \numPeople_g \crimeRate_g? \] One final piece of notation will be useful. We denote by $\beta^\star$ the solution to the adjudicator's problem \eqref{eqn:obj}, i.e. the policy that minimizes crime overall. We sometimes refer to this as the optimal policy. Further, for fairness notion $\fairnessNotion$, we denote by $\beta^\star_\fairnessNotion$ the solution to the adjudicator's problem among all rules that satisfy fairness notion $\fairnessNotion$, i.e. the rule that solves \begin{align*} \min_{\beta \in \Beta_{\fairnessNotion}} \sum_{g \in \mathcal{G}} \numPeople_g \crimeRate_g. \end{align*}
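To make the tension between these measures concrete, the following sketch (continuing the illustrative Gaussian signal structure from the sketch above) evaluates FPR, FNR and PPV for a common signal threshold at two different crime rates: once equilibrium crime rates differ across groups, equal FPR and FNR force unequal PPV, exactly the tension formalized by \cite{KMR16,Chou17}.
\begin{verbatim}
from scipy.stats import norm

T = 0.5                               # a common signal threshold
tpr = 1.0 - norm.cdf(T, loc=1.0)      # identical across groups
fpr = 1.0 - norm.cdf(T, loc=0.0)
for g, cr in ((1, 0.10), (2, 0.25)):  # illustrative differing crime rates
    ppv = cr * tpr / (cr * tpr + (1.0 - cr) * fpr)
    print(f"group {g}: FPR = {fpr:.3f}, FNR = {1.0 - tpr:.3f}, PPV = {ppv:.3f}")
\end{verbatim}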
\section{Results in the Baseline Model} \label{sec:baseline} The main result we build around is that the solution to the adjudicator's problem \eqref{eqn:obj} is naturally ``fair'' in terms of three of the five measures above. It provides an interesting counterpoint to the impossibility results of \cite{KMR16} and \cite{Chou17}. Those results state that it is impossible to simultaneously equalize false negative rates, false positive rates, and positive predictive value across groups. This raises the question of which of the fairness measures should be preferred over the others. By endogenizing the base rate of criminal activity, we find that equalizing false positive rates and equalizing false negative rates are preferred to equalizing positive predictive value, in the sense that the former two are compatible with the optimal policy while the latter is not. Formally, \begin{theorem}\label{thm:opt} The adjudicator's optimal policy $\probPolicy^\star$ (i.e. the policy which solves \eqref{eqn:obj}) equalizes the disincentive to commit crime \eqref{eqn:disincentive2} across groups. As a result, it also equalizes the false negative rates \eqref{eqn:FNR} and false positive rates \eqref{eqn:FPR}. \end{theorem} \begin{proof} First note that because $\probPolicy_g$ can be set independently for each group $g$, minimizing the total crime rate is achieved by individually minimizing the crime rate within each group. Recall that the crime rate within a group is $\outOptCdf_g(\margCostCrime_g)$. This in turn is minimized by maximizing the disincentive to commit crime $\margCostCrime_g$, since $\outOptCdf_g$, being a survivor function, is non-increasing. Recall that \[ \margCostCrime_g = \int_{\sigSet} (\crimeSigPDF(\sigVal)-\noncrimeSigPDF(\sigVal))\beta_g(\sigVal)d\sigVal. \] Therefore the optimal $\probPolicy_g$ is independent of the (group-dependent) distribution over private values defining $H_g$, and is the same for all groups: \[ \probPolicy_g(\sigVal)=\begin{cases} 1\text{ if } \crimeSigPDF(\sigVal)\geq \noncrimeSigPDF(\sigVal),\\ 0\text{ if } \crimeSigPDF(\sigVal) < \noncrimeSigPDF(\sigVal). \end{cases} \] Since the disincentive to commit crime is a function only of $\probPolicy_g$, this results in the same disincentive to commit crime at the optimal solution. Finally, note that both $\FNR_g$ and $\FPR_g$ for each group are functions only of $\noncrimeSigPDF(\sigVal)$ and $\crimeSigPDF(\sigVal)$ (which are identical across groups) and the chosen policies $\beta_g(\sigVal)$, which we have shown will be identical across groups at the optimal solution. Hence, the adjudicator's optimal policy will equalize false positive rates and false negative rates across groups. \end{proof} Rather than thinking of an arbitrary function $\beta_g(\sigVal)$, it is more natural to think of the adjudicator as selecting a threshold $\sigThresh_g$ for each group $g$, so that any member of group $g$ whose signal $\sigVal$ exceeds $\sigThresh_g$ is labeled guilty. That is, \[ \probPolicy_g(\sigVal)=\begin{cases} 1\text{ if } \sigVal \ge \sigThresh_g,\\ 0\text{ if } \sigVal < \sigThresh_g. \end{cases} \] \begin{remark}\label{thresholdopt} Since $\crimeSigPDF$ and $\noncrimeSigPDF$ satisfy the MLRP (i.e. $\frac{\crimeSigPDF(\sigVal)}{\noncrimeSigPDF(\sigVal)}$ is non-decreasing in $\sigVal$), $\beta^\star$ is a threshold policy by observation. Note that if strict MLRP holds, then the optimal thresholds are unique. \end{remark}
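For a concrete instance of the optimal rule (the Gaussian specification here is purely illustrative), let $\noncrimeSigPDF(\sigVal) = \phi(\sigVal)$ and $\crimeSigPDF(\sigVal) = \phi(\sigVal-1)$, where $\phi$ is the standard normal density. Then
\[
\crimeSigPDF(\sigVal)\geq \noncrimeSigPDF(\sigVal) \iff (\sigVal-1)^2 \le \sigVal^2 \iff \sigVal \ge \tfrac{1}{2},
\]
so the optimal rule sets the common threshold $\sigThresh^* = \tfrac{1}{2}$ for every group, yielding the common disincentive $\margCostCrime_g = \Phi(\tfrac12)-\Phi(-\tfrac12) \approx 0.38$, regardless of the groups' outside option distributions.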
Under a threshold policy, group $g$'s true positive rate reduces to $\TPR_g = 1-\crimeSigCDF(\sigThresh_g)$ and the false positive rate to $\FPR_g = 1-\noncrimeSigCDF(\sigThresh_g)$. \subsection{Discussion} \subsubsection*{Posterior Thresholds} It is interesting to contrast the policy of setting equal thresholds on the signal, which we show to be optimal here, with setting equal thresholds on the `posterior' or another calibrated risk score. The latter is advocated in \cite{sharad2}, for example. In that paper, the authors consider a setting where crime choices are exogenously fixed, and study the choice of policy that minimizes weighted misclassification rates (i.e. acquittal of the guilty and incarceration of the innocent). They show that an optimal policy involves a common `threshold' on the posterior across groups: first, the adjudicator estimates the prior probability that an individual has committed a crime by considering the base rate of crime for the individual's group, and then uses the observed signal to update her prior probability to her posterior belief that the individual in question has committed a crime. Second, the individual is deemed guilty if the posterior probability of guilt exceeds some threshold. Of course, in our setting, the choice of crime is endogenous, and the planner's objective function is minimizing the crime rate rather than minimizing mislabeling costs. Nevertheless, it is interesting to inquire into the implications of equalizing the thresholds on the posterior in our setting. In our setting, the posterior after observing the signal $\sigVal$ and the group $g$ is \[ \Pr(c\mid \sigVal,g) = \frac{\crimeSigPDF(\sigVal)\crimeRate_g}{\crimeSigPDF(\sigVal) \crimeRate_g + \noncrimeSigPDF(\sigVal)(1-\crimeRate_g)}, \] which increases in $\sigVal$ when the signal structure satisfies the monotone likelihood ratio property. Thresholding the posterior corresponds to choosing a value $\pi_g\in [0,1]$ and classifying as guilty whenever $\Pr(c\mid \sigVal,g)\geq \pi_g$. Let $T^*$ be the threshold on the signal under the planner's optimal policy. The optimal policy classifies the defendant as guilty if the signal $\sigVal$ exceeds the threshold $T^*$. With the monotone likelihood ratio property, this implies a threshold rule on the posterior, which is to classify the defendant as guilty whenever the posterior exceeds \[ \pi_g = \Pr(c \mid T^*,g). \] By observation, the posterior thresholds are equalized across groups if and only if $\crimeRate_g = \crimeRate_{g'}$, i.e. if and only if crime rates are equalized under the optimal policy $T^*$. This in turn will only occur if $H_g( \margCostCrime) = H_{g'} (\margCostCrime)$, which of course will not obtain in general because $H_g$ need not be the same as $H_{g'}$.
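The wedge between the two kinds of thresholds is easy to see numerically (same illustrative Gaussian structure as before, with $T^* = 1/2$ as derived above): the sketch below computes the posterior threshold $\pi_g = \Pr(c \mid T^*, g)$ implied by the uniform signal threshold. With this symmetric specification the two signal densities coincide at $T^*$, so $\pi_g$ reduces exactly to group $g$'s crime rate, and the posterior thresholds differ precisely when crime rates do.
\begin{verbatim}
from scipy.stats import norm

T_star = 0.5
p_c = norm.pdf(T_star, loc=1.0)  # signal density at T* given crime
p_i = norm.pdf(T_star, loc=0.0)  # signal density at T* given innocence
for g, cr in ((1, 0.10), (2, 0.25)):  # illustrative equilibrium crime rates
    pi_g = p_c * cr / (p_c * cr + p_i * (1.0 - cr))
    print(f"group {g}: implied posterior threshold = {pi_g:.3f}")
\end{verbatim}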
\section{Heterogeneous Signal Observation}\label{subsec:hetobs} The baseline model assumes that when an agent commits a crime, the adjudicator observes the signal generated and adjudicates. What if the signals generated by members of each group are observed at different rates? This can happen if the adjudicator relies on an intermediary to record the signals. For example, in the crime application, the groups may be policed at different rates. Their (dis)incentives to commit crime will then differ as a result. Critically, we suppose that the police's incentives differ from the adjudicator's. The adjudicator's choice of rule therefore influences the police's choice of how to divide their manpower across different groups. Both the adjudicator's rule and the police's choice influence the incentives of agents to commit crime, and ultimately determine the overall crime levels in society. To model this, we build upon the framework of \cite{Persico02}. There is a continuum of police officers who choose inspection intensities for each group, $\{\inspecInsty_g\}_{g \in \mathcal{G}}$, given a search capacity $\searchCap$. The choice of inspection intensity determines the rate at which signals are observed from the two groups: upon inspection, a police officer observes a signal about whether a crime was committed or not. In \cite{Persico02}, the signal is assumed to be perfect. We depart from this assumption: in our setting, the observed signal is {\em noisy}, as in our baseline model. The adjudicator, as before, wishes to minimize the overall crime rate. As in \cite{Persico02}, the police have different incentives. Specifically, each police officer tries to maximize the number of `successful' inspections, i.e. inspections where the recorded signal exceeds the threshold set by the adjudicator. As in \cite{Persico02}, we motivate this incentive as driven by the career concerns of individual police officers (who are e.g. promoted if they have many successful arrests). The timing of the game is therefore: (1) the adjudicator chooses the function $\probPolicy$; (2) the police take this as given and choose inspection intensities $\inspecInsty_g\in [0,1]$ for each group, subject to the constraint $\numPeople_1 \inspecInsty_1 + \numPeople_2 \inspecInsty_2 \leq \searchCap$ (recall that $\numPeople_g$ is the number of agents in group $g$);\footnote{To make this model non-trivial, we assume that search capacity is limited, i.e., $\searchCap < \numPeople_1 + \numPeople_2$.} and then (3) given the adjudicator's choice $\probPolicy$ and inspection probability $\inspecInsty$, an agent of group $g$ with crime reward $\crimeReward$, cost of being found guilty $\crimeCost$ and outside option $\outOptVal$ commits a crime if (analogous to \eqref{eqn:ic}, but now taking into account also the probability of inspection): \begin{align*} &\crimeReward - \inspecInsty_g \crimeCost \Pr( \guilty=1 \mid c, g) \geq \outOptVal - \inspecInsty_g \crimeCost \Pr (\guilty=1 \mid i,g), \\ \text{i.e., whenever }\, & \inspecInsty_g \margCostCrime_g \leq \frac{\crimeReward - \outOptVal}{\crimeCost}, \end{align*} where $\margCostCrime_g$ is as defined in \eqref{eqn:disincentive}. By analogy with \eqref{eqn:crimerate}, an agent of group $g$ commits crime with probability $H_g (\inspecInsty_g \margCostCrime_g)$. As a benchmark, consider a setting where the adjudicator can choose both $\beta$ and $\inspecInsty$. Here the objective function is to minimize the overall crime rate just as in \eqref{eqn:obj}, and the additional constraint simply reflects that the choice of inspection rule must be feasible, i.e. the total level of inspection cannot exceed the total search capacity $\searchCap$. That is to say, the adjudicator's problem in this benchmark can be written as: \begin{equation} \begin{aligned} \min_{\{\beta, \inspecInsty\}} &\sum_g \numPeople_g \outOptCdf_g( \inspecInsty_g \margCostCrime_g) \\ \text{s.t. } &\sum_g \numPeople_g \inspecInsty_g \leq \searchCap. \end{aligned}\label{persico-first-best} \end{equation} We refer to the solution of (\ref{persico-first-best}) as the {\bf first-best} solution.
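A minimal numerical sketch of \eqref{persico-first-best} follows; the group sizes, capacity, common disincentive $\Delta$, and logistic survivor functions are all illustrative choices on our part. It searches along the binding capacity constraint for the inspection split that minimizes total crime.
\begin{verbatim}
import numpy as np
from scipy.stats import logistic

N1, N2, S, Delta = 1000, 1000, 800, 0.38
H = lambda x, mu: logistic.sf(x, loc=mu, scale=0.1)  # illustrative H_g

theta1 = np.linspace(0.0, 1.0, 2001)
theta2 = (S - N1 * theta1) / N2                      # capacity binds at optimum
feasible = (theta2 >= 0.0) & (theta2 <= 1.0)
crime = N1 * H(theta1 * Delta, 0.2) + N2 * H(theta2 * Delta, 0.3)
best = np.argmin(np.where(feasible, crime, np.inf))
print(f"theta = ({theta1[best]:.3f}, {theta2[best]:.3f}), crime = {crime[best]:.1f}")
\end{verbatim}
Because each $H_g$ is decreasing, spending the full capacity is weakly optimal, which is why the sketch only searches the binding constraint.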
Now return to our setting above, where the adjudicator chooses $\beta$ but not $\inspecInsty$. The police take $\beta$ as given and choose $\inspecInsty$ to maximize the number of successful inspections. Note that an inspection in group $g$ is successful with probability $H_g(\inspecInsty_g \margCostCrime_g)$. As in \cite{Persico02}, we assume an interior equilibrium solution, i.e., $\inspecInsty_g>0\ \forall g$.% \footnote{A corner solution entails the police completely ignoring a group. In this case, the setting is trivial because conditional false positive rates are undefined for the ignored group.} Intuitively, in this case, the optimal strategy for the police will equalize the crime rate between the two groups. Otherwise, police who are trying to maximize their successful inspection probability would exclusively search the group with the higher crime rate. Recognizing this, the adjudicator will solve the following problem: \begin{equation} \begin{aligned} \min_{\beta, \inspecInsty} &\sum_g \numPeople_g \outOptCdf_g(\inspecInsty_g \margCostCrime_g)\\ \text{s.t. } &\sum_g \numPeople_g \inspecInsty_g = \searchCap\\ &\outOptCdf_1( \inspecInsty_1 \margCostCrime_1)=\outOptCdf_2( \inspecInsty_2 \margCostCrime_2). \end{aligned}\label{persico-second-best} \end{equation} The solution to problem \eqref{persico-second-best} is the {\bf second-best} solution. We note that because groups will be inspected at different rates, the TPR and FPR as we have defined them should correctly be called the \emph{conditional} true and false positive rates respectively, i.e., the rates conditional on being inspected.\footnote{We observe that these conditional rates are implicitly what has been studied in the fairness in machine learning literature, because these are the rates that can be computed from the data.} Theorem \ref{thm:opt} now carries over mutatis mutandis, i.e. in both the first- and second-best solutions, the thresholds will be set so as to equalize the conditional false and true positive rates $\CFPR$ and $\CTPR$. Proofs for this and subsequent theorems are in the appendix. \begin{restatable}{theorem}{thmPersicoEquivalenceFirstSecond} \label{thm:persico-equivalence-first-second} The optimal solutions to both the first best (\ref{persico-first-best}) and the second best (\ref{persico-second-best}) equalize the $\CFPR$ and $\CTPR$ across groups. \end{restatable} While the optimal $\beta$ under the first- and second-best outcomes coincide, the optimal inspection intensities need not. In particular, the first- and second-best solutions coincide when $\outOptCdf$ is convex. However, when $\outOptCdf$ is concave, the inspection intensities under the second-best outcome \emph{maximize} the average number of crimes out of all possible search intensities given the optimal signal thresholds. \begin{restatable}{theorem}{thminspection} \label{inspection} Suppose that the $\outOptCdf_g$ belong to the same location family, i.e. $\outOptCdf_g(\sigVal)=\outOptCdf(\sigVal-\mu_g)$ for some $\mu_g$ for each $g \in \mathcal{G}$, and that $\outOptCdf$ is convex (concave). Then, the inspection intensities in the second-best solution minimize (maximize) the crime rate among all feasible inspection intensities, given thresholds that equalize the conditional false positive and conditional true positive rates $\CFPR$ and $\CTPR$. \end{restatable}
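The equilibrium condition in \eqref{persico-second-best} can likewise be traced numerically (same illustrative specification as the sketch above): bisection finds the capacity split at which the police are indifferent between groups, i.e. at which crime rates are equalized.
\begin{verbatim}
from scipy.optimize import brentq
from scipy.stats import logistic

N1, N2, S, Delta = 1000, 1000, 800, 0.38
H = lambda x, mu: logistic.sf(x, loc=mu, scale=0.1)  # illustrative H_g

def gap(theta1):
    theta2 = (S - N1 * theta1) / N2        # capacity binds
    return H(theta1 * Delta, 0.2) - H(theta2 * Delta, 0.3)

t1 = brentq(gap, 1e-6, S / N1 - 1e-6)      # interior equilibrium
t2 = (S - N1 * t1) / N2
print(f"theta = ({t1:.3f}, {t2:.3f}): equal crime rates across groups")
\end{verbatim}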
\section{Heterogeneous Signal Structure} \label{sec:hetsignals} We now examine the extent to which the conclusion of Theorem \ref{thm:opt} holds if we allow the signal structure $\sigStruct_g = (\crimeSigCDF_g,\noncrimeSigCDF_g)$ to differ across groups $g \in \mathcal{G}$. The signal structure $\sigStruct_g$ and the strategy $\probPolicy_g(\sigVal)$ matter to the extent that they discourage crime. Recall that the relevant sufficient statistic of a strategy $\probPolicy_g(\sigVal)$ is what we called the \emph{disincentive} to commit crime: \[\margCostCrime_g = \TPR_g - \FPR_g = \int_{\sigSet} (\crimeSigPDF_g(\sigVal)-\noncrimeSigPDF_g(\sigVal))\probPolicy_g(\sigVal)d\sigVal.\] The set of achievable disincentives given a signal structure is $[\underline \Delta_g, \overline \Delta_g]$, where \begin{align*} &\underline \Delta_g = \int_{S^-_g} (\crimeSigPDF_g(\sigVal)-\noncrimeSigPDF_g(\sigVal))d\sigVal, \;\text{where } S^-_g \equiv \{\sigVal: \crimeSigPDF_g(\sigVal)-\noncrimeSigPDF_g(\sigVal)<0\},\\ & \overline \Delta_g = \int_{S^+_g}(\crimeSigPDF_g(\sigVal)-\noncrimeSigPDF_g(\sigVal))d\sigVal, \;\text{where } S^+_g \equiv \{\sigVal: \crimeSigPDF_g(\sigVal)-\noncrimeSigPDF_g(\sigVal)>0\}. \end{align*} The relevant sufficient statistics for the signal structure $(\crimeSigCDF_g,\noncrimeSigCDF_g)$ of group $g$ are thus its minimal and maximal disincentives $\underline \Delta_g$ and $\overline \Delta_g$, which determine the range of disincentives a classification rule is able to provide. \subsection{General Analysis} In this section, we give some insight into what happens when the signal structure varies across groups, without making further assumptions on how it varies. First, as should be clear from the intuition developed previously, what really matters for our results in the baseline model is not that the signal distributions are identical across populations, but rather that the maximal disincentive $\overline \margCostCrime_g$ is the same across groups: if we have this, then the basic insight of Theorem \ref{thm:opt} (maximizing disincentives across groups) holds as before. This is summarized in Theorem \ref{thm:het_eqInc}. Note that since the signal structures are different, the implication of Theorem \ref{thm:opt}, i.e. that FPR/FNR will also be equalized across groups, will not hold in general. Further, if the maximal disincentives differ, then in general the result does not hold, as we show in Example \ref{ex:het_eqInc}. Theorem \ref{thm:het_lem_comparison} then provides conditions under which various ``natural'' fair policies are ranked under the adjudicator's objective of minimizing overall crime. Let us start by analyzing the optimal policy that minimizes average crime. As in Section \ref{sec:baseline}, average crime is minimized by maximizing the disincentive for crime in each group, which is attained by setting $\probPolicy_g(\sigVal)$ such that $\Delta_g = \overline \Delta_g$ for every $g$. Unlike in Section \ref{sec:baseline}, the optimal policy does not guarantee any of the fairness notions described in Section \ref{fairmeasures} --- equalizing disincentives, equalizing false positive rates, equalizing false negative rates or equalizing positive predictive value --- when the signal structures differ across the groups. An immediate observation is that when the signal structures have the same maximal disincentives $\overline \Delta_g$, the optimal policy equalizes disincentives. \begin{restatable}{theorem}{thmhetEqInc} \label{thm:het_eqInc} Suppose that $\overline \Delta_1 = \overline \Delta_2$. The adjudicator's optimal policy (i.e. the solution to \eqref{eqn:obj}) equalizes disincentives \eqref{eqn:disincentive2} across groups. \end{restatable}
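Numerically, $\overline\Delta_g$ is just the integral of the positive part of $\crimeSigPDF_g - \noncrimeSigPDF_g$, which makes the premise of Theorem \ref{thm:het_eqInc} easy to check for any pair of signal structures. The Gaussian structures below, one noisier than the other, are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

s = np.linspace(-10, 10, 20001)

def max_disincentive(p_crime, p_innocent):
    return np.trapz(np.clip(p_crime - p_innocent, 0.0, None), s)

# Group 1: unit-noise signals; group 2: the same shift but twice the noise.
print(max_disincentive(norm.pdf(s, 1.0, 1.0), norm.pdf(s, 0.0, 1.0)))  # ~0.383
print(max_disincentive(norm.pdf(s, 1.0, 2.0), norm.pdf(s, 0.0, 2.0)))  # ~0.197
\end{verbatim}
Here the maximal disincentives differ, so Theorem \ref{thm:het_eqInc} does not apply and, indeed, the optimal policy does not equalize disincentives.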
\end{restatable} Theorem \ref{thm:het_eqInc} follows from the core insight of Theorem \ref{thm:opt} that the optimal rule maximizes the disincentive to commit crime for each group. When the signal structures across the groups are identical as in Theorem \ref{thm:opt}, the signal structures have the same maximal disincentives, and therefore, the optimal policy equalizes disincentives. Equalizing disincentives coincides with equalizing false positive rates and equalizing false negative rates in this case. However when the distributions of signals are different, equalizing disincentives need not be be the same as equalizing false positive/ false negative rates. Indeed, equalizing disincentives may yield a strictly lower crime rate than equalizing false positive rates and equalizing false negative rates even when $ \overline \Delta_1=\overline \Delta_2 $. To see this, consider the following example. \begin{example}\label{ex:het_eqInc} Suppose that the signal for group $ g $, $ s_g $, is generated according to \[ s_g = \eta_g + 1_{c} \] where $\eta_g $ is a random variable that has pdf $ f_g(\eta) $ that is strictly log-concave and has full support on $ \mathbb{R} $, and $ 1_{c} $ is an indicator function that equals $ 1 $ if and only if the agent has committed a crime. In words, committing a crime produces a signal that exceeds the signal from not committing a crime by at least $ 1 $ on average, while the underlying distribution $ f_g $ may differ across the groups. Note that \begin{align*} \noncrimeSigPDF_g(\sigVal) = f_g(\sigVal)\quad\text{and}\quad \crimeSigPDF_g(\sigVal) = f_g(\sigVal-1) \end{align*} and the strict log-concavity guarantees that the strict Monotone Likelihood Ratio Property between $ \noncrimeSigPDF $ and $ \crimeSigPDF $ is satisfied, that is, $ \frac{\crimeSigPDF_g(s)}{\noncrimeSigPDF_g(s)} $ strictly increases in $ s $, and that $ f_g $ is unimodal. Finally, suppose that $ f_1(\eta) $ is asymmetric around its mode, and that $ f_2(\eta) = f_1(-\eta)$ is a horizontal reflection of $ f_1 $. For each threshold $ T_g $, the corresponding disincentives satisfy $ \Delta_g(T_g) = F_g(T_g) - F_g(T_g-1) $. The threshold $ T_g^* $ maximizes $ \Delta_g(T_g) $ if and only if it equalizes the pdfs at $ T_g^* $ and $ T_g^*-1 $: \[ f_g(T_g^*)=f_g(T_g^*-1). \] Graphically, $ T_g^* $ and $ T_g^*-1 $ are obtained as a pair of intersection points between the pdf $ f_g $ and a horizontal line, where the distance between the intersection points has to be $ 1 $ as in Figure \ref{fig:mirrored_sig_distr}. The maximal disincentive $ \overline\Delta_g = \Delta_g(T_g^*) $ is the white area under $ f_g $ between $ T_g^*-1 $ and $ T_g^* $. Since $ f_2 $ is merely a horizontal reflection of $ f_1 $, so is the maximal disincentives, and therefore, $ \overline \Delta_1 = \overline \Delta_2 $. By Theorem \ref{thm:het_eqInc}, the optimal policy $ T_g^* $ equalizes disincentives. The false positive rate and false negative rate for each group are colored in blue and red. Clearly, $ FPR_1 \neq FPR_2 $ and $ FNR_1\neq FNR_2 $. Consequently, equalizing false positive rates and equalizing false negative rates yield strictly higher crime rates than equalizing disincentives. This also implies $ PPV_1 \neq PPV_2 $ so that equalizing PPVs also yields a strictly higher crime rate in general. 
When the signal structures across the groups have different maximal disincentives, we identify conditions under which both equalizing false positive rates and equalizing false negative rates yield a strictly lower crime rate than equalizing disincentives. Without loss of generality, let us assume that $\overline\Delta_2 > \overline \Delta_1$. \begin{restatable}{theorem}{thmHetLemComparison} \label{thm:het_lem_comparison} Suppose that $\overline \margCostCrime_2 > \overline \margCostCrime_1$. Then, the following are equivalent: \begin{enumerate} \item The optimal policy subject to equalizing false positive rates ($\probPolicy_{\FPR}^\star$) attains a (weakly) lower crime rate than equalizing disincentives ($\probPolicy_{\margCostCrime}^\star$). \item The optimal policy subject to equalizing false negative rates ($\probPolicy_{\FNR}^\star$) attains a (weakly) lower crime rate than equalizing disincentives ($\probPolicy_{\margCostCrime}^\star$). \item $(\crimeSigCDF_{2})^{-1} \circ \crimeSigCDF_1 (\sigThresh_1^*) > (\geq) (\noncrimeSigCDF_2)^{-1} \circ \noncrimeSigCDF_1(\sigThresh_1^*)$, where $\sigThresh_g^*$ is the threshold under the optimal policy for group $g$. \end{enumerate} \end{restatable} In general, the optimal policy does not guarantee any of the fairness notions. Theorem \ref{thm:het_lem_comparison} provides a necessary and sufficient condition under which equalizing false positive rates and equalizing false negative rates attain lower crime rates than equalizing disincentives. However, condition 3 is hard to interpret. Further, it is unclear which of equalizing false positive rates and equalizing false negative rates would be better overall for the adjudicator. Without additional structure on the signal distributions, it is hard to proceed further. To explore these issues, we restrict attention to signal distributions that are members of location-scale families. \begin{definition}\label{def:location-scale} We say that the signal structure is from a location-scale family if each group's signal is a location-scale transformation of the same underlying random variable $\eta$, which has an absolutely continuous and log-concave density function $f$ with full support on the real line. Specifically, the signal $\sigVal_g$ for group $g$ is generated according to \begin{equation*} \sigVal_g = \mu_g + \sigma_g \eta + m_g 1_c, \end{equation*} where $\mu_g$ is a location shifter, $\sigma_g$ is a scale shifter, $m_g$ is the marginal effect of crime on the signal, and $1_c$ is an indicator function that equals $1$ if and only if the agent has committed a crime. Equivalently, the conditional pdfs of the signal $\sigVal$ for group $g$, conditional on being innocent and on having committed a crime, are \begin{equation} \begin{aligned} &\noncrimeSigPDF_g(\sigVal) = \frac{1}{\sigma_g}f\left(\frac{\sigVal-\mu_g}{\sigma_g}\right) \quad \text{and}\quad\crimeSigPDF_g(\sigVal) = \frac{1}{\sigma_g}f\left(\frac{\sigVal-\mu_g-m_g}{\sigma_g}\right). \end{aligned} \label{hetSig} \end{equation} \end{definition}
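The claim (established below) that only the ratio $m_g/\sigma_g$ matters for achievable disincentives can be checked directly; the logistic base density used in the sketch is an illustrative log-concave choice.
\begin{verbatim}
import numpy as np
from scipy.stats import logistic

F = logistic.cdf  # illustrative log-concave base distribution

def max_disincentive(mu, sigma, m):
    T = np.linspace(mu - 10 * sigma, mu + m + 10 * sigma, 40001)
    # Delta(T) = F((T - mu)/sigma) - F((T - mu - m)/sigma) for a threshold rule.
    return np.max(F((T - mu) / sigma) - F((T - mu - m) / sigma))

print(max_disincentive(0.0, 1.0, 1.0))  # m/sigma = 1
print(max_disincentive(3.0, 2.0, 2.0))  # same ratio   -> same maximal disincentive
print(max_disincentive(0.0, 2.0, 1.0))  # smaller ratio -> strictly smaller
\end{verbatim}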
Note that the underlying distribution $f$ is identical across the groups, in contrast to Example \ref{ex:het_eqInc}, where the underlying distribution $f_g$ differed across the groups. Combined with the functional form \eqref{hetSig}, log-concavity of $f$ is equivalent to the signal structure satisfying the monotone likelihood ratio property for each group $g$, which implies that the optimal $\beta$ is a threshold strategy. The log-concavity of $f$ also implies that $f$ is unimodal, which guarantees the uniqueness of the threshold that attains the optimal policy. There are many natural location-scale families of distributions satisfying log-concavity, including normal distributions, logistic distributions, and extreme value distributions. A property that makes location-scale families particularly tractable is that the disincentives engendered by a threshold depend only on $\frac{m_g}{\sigma_g}$, i.e. the ratio of the marginal effect of crime $m_g$ to the scale shift $\sigma_g$. For this class of distributions, we can say that it is \emph{always} preferable to equalize either false positive rates or false negative rates compared to equalizing disincentives. \begin{restatable}{theorem}{thmscale} \label{scale} Suppose the distributions across groups are from the location-scale family as defined in Definition \ref{def:location-scale}. Then \begin{enumerate} \item If $\frac{m_{1}}{\sigma_{1}} = \frac{m_{2}}{\sigma_{2}}$, then the optimal policy equalizes disincentives, false positive rates and false negative rates. \item Suppose $\frac{m_1}{\sigma_1} \neq \frac{m_2}{\sigma_2}$, and assume $\frac{m_2}{\sigma_2}$ is larger without loss of generality. Then $\overline \Delta_2 > \overline \Delta_1$. Further, the optimal policies subject to equalizing false positive rates ($\probPolicy_{\FPR}^\star$) and to equalizing false negative rates ($\probPolicy_{\FNR}^\star$) attain strictly lower crime rates than equalizing disincentives ($\probPolicy_{\Delta}^\star$), and all three attain strictly higher crime rates than the optimal policy ($\probPolicy^*$). \end{enumerate} \end{restatable} The formal proof is in the appendix. For some intuition, note that the maximal disincentive $\overline \Delta_g$ is determined by, and increasing in, $\frac{m_g}{\sigma_g}$. Intuitively, the larger the normalized marginal effect of crime on the signal, the better the adjudicator is able to distinguish between criminals and non-criminals based on the signal, and therefore the larger the disincentive to commit crime she can provide. If this ratio is equal across groups, then Theorem \ref{thm:het_eqInc} applies and part (1) follows as a corollary. Now, without loss of generality, suppose instead that $\frac{m_1}{\sigma_1} < \frac{m_2}{\sigma_2}$. Then $\overline \Delta_2 > \overline \Delta_1$. It can also be verified that condition $(3)$ in Theorem \ref{thm:het_lem_comparison} holds, so that equalizing false positive rates and equalizing false negative rates always yield a lower crime rate than equalizing disincentives. Furthermore, we can also verify that equalizing false positive rates and equalizing false negative rates never attain the crime rate under the optimal policy. A natural question to ask is whether one of equalizing false positive rates or equalizing false negative rates will have a lower crime rate than the other. This is a hard question to answer in general. When the underlying distribution $f$ is symmetric around $0$ ($0$ is without loss of generality, since the $\mu_g$'s can always be shifted if $f$ is symmetric around some other number), however, the set of feasible disincentive pairs $(\Delta_1, \Delta_2)$ is identical under equalizing false positive rates and under equalizing false negative rates, and therefore the two notions of fairness yield the same crime rate.
\begin{restatable}{theorem}{thmHetLocscaleFprfnrthesame} \label{het_locscale_fprfnrthesame} Suppose the signal structure is from the location-scale family as in Definition \ref{def:location-scale}, and $f$ is symmetric around $0$. Then, the optimal policy subject to equalizing false positive rates and that subject to equalizing false negative rates yield the same crime rate. \end{restatable} \section{Equalizing Crime Rates}\label{sec:equalbase} In our model, the crime rates are endogenously determined by agents' decisions in response to the policy implemented. This is unlike most of the fairness literature, which assumes that the underlying rates are fixed. This motivates us to study another fairness measure that previous papers could not have considered: equalizing crime rates. To understand the implications of equalizing crime rates, let us assume without loss of generality that group $2$ is `riskier' than group $1$. Specifically, assume that $\outOptCdf_2$ first-order stochastically dominates $\outOptCdf_1$; that is, $\outOptCdf_2(\margCostCrime) \ge \outOptCdf_1(\margCostCrime)$ for all $\margCostCrime$. In this section, we focus on the comparison between equalizing crime rates and equalizing disincentives, while allowing arbitrary signal structures $\sigStruct_g = (\crimeSigCDF_g, \noncrimeSigCDF_g)$. Equalizing disincentives is an appropriate fairness measure to compare against because it is attained by the optimal policy when $\overline \margCostCrime_1 = \overline \margCostCrime_2$. The first question to ask is whether equalizing crime rates can ever attain a lower crime rate than equalizing disincentives. We find that it does if and only if $\overline \Delta_2$ is sufficiently larger than $\overline \Delta_1$. \begin{restatable}{theorem}{thmEqBaseRatesEqIncen} \label{eqBaseRates_eqIncen} Suppose that $\outOptCdf_2$ first-order stochastically dominates $\outOptCdf_1$. Then, there is an $\epsilon>0$ such that equalizing crime rates attains a lower crime rate than equalizing disincentives if and only if $\overline\Delta_2\geq \overline\Delta_1+\epsilon$. \end{restatable} \input{equalizing_base_rates_figures.tex}
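Before turning to the diagrams, the cutoff in Theorem \ref{eqBaseRates_eqIncen} can be traced numerically. The logistic survivor functions and all parameter values below are illustrative, with group $2$ the riskier group: as $\overline\Delta_2$ grows past $\overline\Delta_1 + \epsilon$, total crime under equalizing crime rates drops below that under equalizing disincentives.
\begin{verbatim}
from scipy.stats import logistic

N1 = N2 = 1.0
H1 = lambda d: logistic.sf(d, loc=0.1, scale=0.2)  # group 1
H2 = lambda d: logistic.sf(d, loc=0.4, scale=0.2)  # group 2: riskier (FOSD)
D1 = 0.4                                           # maximal disincentive, group 1

for D2 in (0.4, 0.6, 0.8, 1.0):
    eq_dis = N1 * H1(min(D1, D2)) + N2 * H2(min(D1, D2))  # common disincentive
    common = max(H1(D1), H2(D2))  # lowest crime rate both groups can reach
    eq_cr = (N1 + N2) * common    # common crime rate imposed on both groups
    print(f"D2 = {D2}: eq. disincentives -> {eq_dis:.3f}, "
          f"eq. crime rates -> {eq_cr:.3f}")
\end{verbatim}
In this specification the preference flips between $\overline\Delta_2 = 0.4$ and $\overline\Delta_2 = 0.6$, consistent with the $\epsilon$ in the theorem.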
The theorem is best demonstrated using the diagrams, although the formal proof is provided in the appendix. For Theorem \ref{eqBaseRates_eqIncen}, we consider four cases: (i) $\overline{\margCostCrime}_1 + \epsilon \le \overline{\margCostCrime}_2$, (ii) $\overline{\margCostCrime}_1 < \overline{\margCostCrime}_2 < \overline{\margCostCrime}_1 + \epsilon$, (iii) $\overline{\margCostCrime}_1 =\overline{\margCostCrime}_2$, and (iv) $\overline{\margCostCrime}_1 > \overline{\margCostCrime}_2$. For initial illustration purposes, we focus on Figure \ref{eqBase_1}, which corresponds to the first case, $\overline{\margCostCrime}_1 + \epsilon \le \overline{\margCostCrime}_2$. The red and blue curves represent $\outOptCdf_1(\cdot)$ and $\outOptCdf_2(\cdot)$, respectively. Note that the `thicker' segments of each curve are the set of crime rates $\outOptCdf_g(\margCostCrime_g)$ that can be achieved by varying the disincentive $\margCostCrime_g \in [\underline \margCostCrime_g, \overline \margCostCrime_g]$. For each group, we denote the optimal policy by a triangle. The optimal policy subject to equalizing disincentives, denoted by `X', is obtained as the intersections of the black line and the outside option distribution functions. The optimal policy subject to equalizing crime rates, denoted by `o', is obtained as the intersections of the orange line and the outside option distribution functions. In Figure \ref{eqBase_1}, note that group $1$'s crime rate is the same under equalizing disincentives and under equalizing crime rates, while group $2$'s crime rate increases when one moves from the policy that equalizes crime rates to the one that equalizes disincentives. Therefore, the optimal policy subject to equalizing crime rates is preferred to the one subject to equalizing disincentives. Figure \ref{fig:eql_base_rates_combined} shows the other cases. Note that as we decrease $\overline{\margCostCrime}_2$ (or equivalently, increase the crime rate of group $2$), there comes a point, determined by $\numPeople_1$ and $\numPeople_2$, at which equalizing disincentives becomes preferred to equalizing crime rates. More specifically, in Figure \ref{eqBase_2}, imagine moving the rightmost blue triangle to the left and hence raising the orange line; as this happens, both `o' marks, which denote the optimal policy that equalizes crime rates, need to go up, while the optimal policy that equalizes disincentives stays the same. Therefore, depending on the ratio of the group sizes $\numPeople_1$ and $\numPeople_2$, there exists some $\epsilon$ such that equalizing crime rates attains a lower crime rate than equalizing disincentives if and only if $\overline{\margCostCrime}_2 \ge \overline{\margCostCrime}_1 + \epsilon$. It is also easy to see from Figures \ref{eqBase_3} and \ref{eqBase_4} that equalizing disincentives achieves a lower crime rate than equalizing crime rates in the corresponding cases. Together, these arguments imply that equalizing crime rates attains a lower crime rate than equalizing disincentives if and only if $\overline \margCostCrime_2$ is sufficiently larger than $\overline \margCostCrime_1$. \begin{figure} [h] \centering \resizebox{0.4\columnwidth}{!}{% \begin{tikzpicture} \begin{axis}[% xlabel=$\margCostCrime$, ylabel=$\outOptCdf_g(\margCostCrime_g)$, legend entries={$\outOptCdf_1$, $\outOptCdf_2$}, legend pos=south east] \addplot [smooth, red] {normcdf(x,2,2)}; \addplot [smooth, blue] {normcdf(x,0,2)}; \addplot [very thick, red, domain=-4:0] {normcdf(x,2,2)}; \addplot [very thick, blue, domain=-2.5:2] {normcdf(x,0,2)}; \addplot [thick, orange] {normcdf(0,2,2)}; \addplot[thick, black] coordinates {(-0,0)(0,1)}; \addplot [only marks, mark = triangle*, mark options={rotate=90},mark size=5pt, red] coordinates {(0, {normcdf(0,2,2)})}; \addplot [only marks, mark = triangle*, mark options={rotate=90},mark size=5pt, blue] coordinates {(2, {normcdf(2,0,2)})}; \end{axis} \end{tikzpicture} } \caption{$\outOptCdf_1(\overline \margCostCrime_1) = \outOptCdf_2 (\overline \margCostCrime_2)$} \label{eqBase_5} \end{figure} Having verified that equalizing crime rates can attain lower crime rates than equalizing disincentives, the next question is whether equalizing crime rates can ever attain a lower crime rate than any of the other fairness notions. The answer is positive, which we establish by finding a condition under which the optimal policy itself equalizes crime rates.
\begin{restatable}{theorem}{eqBaseRatesOpt} \label{eqBaseRates_opt} Suppose that $\outOptCdf_2$ first-order stochastically dominates $\outOptCdf_1$. When $\outOptCdf_1( \overline{\margCostCrime}_1) = \outOptCdf_2( \overline{\margCostCrime}_2)$, the optimal policy equalizes crime rates, but does not in general equalize false positive rates, false negative rates or disincentives. \end{restatable} Figure \ref{eqBase_5} depicts the case where $H_1(\overline \Delta_1) = H_2(\overline \Delta_2)$. By construction, the optimal policy equalizes crime rates. As can be seen from Figure \ref{eqBase_5}, equalizing disincentives attains a strictly higher crime rate than equalizing crime rates. Furthermore, it can be shown that the other notions of fairness --- equalizing false positive rates, false negative rates and positive predictive value --- are not satisfied in general. \section{Discussion and Conclusions} \label{sec:conclude} This paper gives a general model in which classification rules that equalize false positive and false negative rates can be compatible with natural objectives, \emph{in spite of failing to capitalize on statistically relevant information}. We derived the model using the language of criminal justice, but one could just as easily apply the base model to settings in which the principal makes some other binary decision based on partial information, such as a lending or employment decision. The underlying reason is that conditioning on demographic information, while statistically useful, leads to decision rules that incentivize different groups differently --- \emph{because demographic information is not under individual control}. Hence, in settings in which the underlying objective depends on the decisions of rational agents, the decision rule should explicitly commit \emph{not} to condition on information that relates to an individual's demographic group, and instead use only information that is affected by the choices of the individual. Abstracting away, the conditions necessary for our conclusions to hold are that: \begin{enumerate} \item The underlying base rates are rationally responsive to the decision rule deployed by the principal, \item Signals are observed by the adjudicator at the same rates across populations, and \item The signals that the adjudicator must use to make her decision are conditionally independent of an individual's group, conditioned on the individual's decision. \end{enumerate} Here, conditions (2) and (3) are unlikely to hold precisely in most situations, but we give settings under which they can be relaxed. More generally, if we are in a setting in which we believe that individual decisions are rationally made in response to the deployed classifier, and yet the deployed classifier does \emph{not} equalize false positive and negative rates, then this is an indication \emph{either} that the deployed classifier is sub-optimal (for the purpose of minimizing base rates), \emph{or} that one of conditions (2) and (3) fails to hold. Since in fairness-relevant settings the failure of conditions (2) and (3) is itself undesirable, this can be a diagnostic to highlight discriminatory conditions earlier in the pipeline than the adjudicator's decision rule. In particular, if conditions (2) or (3) fail to hold, then imposing technical fairness constraints on a deployed classifier may be premature, and attention should instead be focused on structural differences in the observations that are being fed into the deployed classifier. \bibliographystyle{plainnat}
\section{Proofs} \begin{proof}[Proof of Proposition~\ref{lem:nonsingular}] The first claim follows directly from the fact that $\ones$ is an eigenvector of $P_{X^{\perp}} + \lambda L$ with eigenvalue 1, since $\ones \in \col(X)^{\perp}$ and $L\ones = 0$. To show the second claim, note that the minimum eigenvalue of $P_{X^{\perp}} + \lambda L$ is the solution of the optimization problem $$\min_{\norm{\V{u}}=1} \V{u}^T(P_{X^{\perp}} + \lambda L)\V{u}.$$ Assume $\V{u} = \V{u}_1 + \V{u}_2,$ where $\V{u}_1 \in \col(X)^{\perp}$, $\V{u}_2 \in \col(X)$ and $\norm{\V{u}_1}^2+\norm{\V{u}_2}^2=1$. Then the objective function can be rewritten as $$\lambda \V{u}^TL\V{u} + \norm{\V{u}_1}^2.$$ This is zero if and only if $\norm{\V{u}_1}=0$ and $\V{u}^TL\V{u}=0$, but these two contradict Assumption~\ref{ass:nonSingular}. Hence the minimum eigenvalue is strictly positive and $P_{X^{\perp}} + \lambda L$ is invertible; as discussed in Section~\ref{penalizedLS}, this guarantees that the RNC estimate exists. \end{proof} One formula that will be used frequently later is the decomposition of the MSE of a vector estimator: $$\e\norm{\hat{\V{\theta}} - \V{\theta}}^2 = \norm{\e\hat{\V{\theta}}-\V{\theta}}^2 + \tr(\var(\hat{\V{\theta}})),$$ in which we call the second term the total variance of $\hat{\V{\theta}}$. We first derive the bias and variance of both the OLS and the RNC estimators. We use $b(\cdot)$ to denote the bias of an estimator. The bias, variance and MSE of the OLS estimator are standard. We state the MSE here for completeness without proof. \begin{lem}\label{lem:OLS-bias} For the OLS estimator given by $$\hat{\bbeta}_{OLS} = (X^TX)^{-1}X^T\V{Y},~ \hat{\balpha}_{OLS} = \bar{y}\mbone, $$ we have \begin{align*} \mathrm{MSE}(\hat{\balpha}_{OLS}) & = \norm{\bar{\balpha}\mbone - \balpha}^2 + \sigma^2, \\ \mathrm{MSE}(\hat{\bbeta}_{OLS}) & = \norm{(X^TX)^{-1}X^T\balpha}^2 + \sigma^2\tr((X^TX)^{-1}), \end{align*} $$ \e\norm{\hat{\V{Y}}_{OLS} - \e \V{Y}}^2 = \norm{(\frac{1}{n}\mbone\mbone^T+X(X^TX)^{-1}X^T)\V{\alpha}-\V{\alpha}}^2+\sigma^2\norm{\frac{1}{n}\mbone\mbone^T+X(X^TX)^{-1}X^T}_F^2.$$ \end{lem} \begin{lem}\label{lem:Net-bias} The bias of the RNC estimator is given by \begin{equation}\label{eq:Net-bias} b(\hat{\btheta}) = -\lambda(\tildeX^T\tildeX + \lambda M)^{-1}M\btheta. \end{equation} Equivalently, one can write it in the following decomposed form: \begin{equation}\label{eq:Net-bias2} b(\hat{\btheta}) = (b(\hat{\balpha})^T, (-(X^TX)^{-1}X^Tb(\hat{\balpha}))^T)^T, \end{equation} where $b(\hat{\balpha}) = -(\frac{1}{\lambda}P_{X^{\perp}}+L )^{-1}L\balpha$, and $P_{X} = X(X^TX)^{-1}X^T$ is the projection matrix onto $\col(X)$. The variance of the RNC estimator is given by $$\var(\hat{\btheta}) = \sigma^2(\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T\tildeX(\tildeX^T\tildeX + \lambda M)^{-1} \preceq \sigma^2(\tildeX^T\tildeX + \lambda M)^{-1}.$$ \end{lem} \begin{proof} For the bias term, \begin{align*} b(\hat{\btheta}) &= \e(\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^TY - \btheta\\ &= (\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T\tildeX\btheta - \btheta\\ &= -\lambda(\tildeX^T\tildeX + \lambda M)^{-1}M\btheta. \end{align*} Note that we have $M\btheta = \begin{bmatrix} L\balpha \\ 0 \\ \end{bmatrix}$. 
By the block matrix inverse formula, we have \begin{align*} &(\tildeX^T\tildeX + \lambda M)^{-1} =\begin{bmatrix} (P_{X^{\perp}} + \lambda L )^{-1}& -(P_{X^{\perp}} + \lambda L )^{-1}X(X^TX)^{-1} \\ -(X^TX)^{-1}X^T(P_{X^{\perp}} + \lambda L )^{-1} & (X^TX)^{-1} + (X^TX)^{-1}X^T(P_{X^{\perp}} + \lambda L)^{-1}X(X^TX)^{-1} \\ \end{bmatrix}. \end{align*} Then \eqref{eq:Net-bias2} follows directly from decomposing the bias vector into the $\balpha$ and $\bbeta$ parts. The variance can be calculated by the standard OLS formula taking $\tildeX$ as the design matrix. The upper bound on the variance follows from the fact that $$\tildeX^T\tildeX \preceq \tildeX^T\tildeX + \lambda M$$ whenever $M$ is positive semi-definite. \end{proof} From Lemma~\ref{lem:Net-bias} and the bias-variance decomposition, we can directly get the closed-form expressions for the MSE of RNC estimation. In particular, \begin{align}\label{eq:ExactMSE} \mse(\hat{\V{\theta}}) = &\norm{\lambda(P_{X^{\perp}} + \lambda L)^{-1}L\balpha}^2 + \norm{\lambda(X^TX)^{-1}X^T(P_{X^{\perp}}+ \lambda L )^{-1}L\balpha}^2 \notag \\ &+ \sigma^2\tr((\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T\tildeX(\tildeX^T\tildeX + \lambda M)^{-1}). \end{align} \begin{proof}[Proof of Theorem~\ref{thm:non-regularized-compare-OLS}] Note that $P_{X^{\perp}} + \lambda L \succeq \nu I$. Thus the squared bias term for $\balpha$ is $$\norm{\lambda(P_{X^{\perp}} + \lambda L)^{-1}L\balpha}^2 \le \frac{\lambda^2}{\nu^2}\norm{L\balpha}^2. $$ The total variance of $\hat{\balpha}$ can be upper bounded by $$\tr(\sigma^2(P_{X^{\perp}} + \lambda L)^{-1}) \le \frac{\sigma^2}{\nu}\tr(I) = \frac{n\sigma^2}{\nu}.$$ Thus the bound \eqref{th1:alpha-MSE} on MSE$(\hat{\balpha})$ follows. From Lemma~\ref{lem:Net-bias}, we have \begin{align} \norm{b(\hat{\bbeta})}^2 =& b(\hat{\balpha})^TX(X^TX)^{-1}(X^TX)^{-1}X^Tb(\hat{\balpha}) \notag \\ \le & \frac{1}{\mu}b(\hat{\balpha})^TX(X^TX)^{-1}(X^TX)(X^TX)^{-1}X^Tb(\hat{\balpha}) \notag \\ = & \frac{1}{\mu}b(\hat{\balpha})^TX(X^TX)^{-1}X^Tb(\hat{\balpha}) = \frac{1}{\mu}b(\hat{\balpha})^T(P_{X}b(\hat{\balpha})) \notag \\ = & \frac{1}{\mu}\norm{P_{X}b(\hat{\balpha})}^2 \le \frac{1}{\mu}\norm{b(\hat{\balpha})}^2 \le \frac{\lambda^2}{\nu^2\mu}\norm{L\balpha}^2. \label{eq:net-sq-bias} \end{align} By Lemma~\ref{lem:Net-bias} and the Schur complement, the covariance matrix of $\hat{\bbeta}$ is \begin{align}\label{eq:net-varianceupper} \var(\hat{\bbeta})& \preceq \sigma^2(X^TX)^{-1} + \sigma^2(X^TX)^{-1}X^T(P_{X^{\perp}} + \lambda L)^{-1}X(X^TX)^{-1} \notag \\ & \preceq \sigma^2(X^TX)^{-1} +\frac{\sigma^2}{\nu}(X^TX)^{-1}X^TX(X^TX)^{-1} = \sigma^2(\frac{1}{\nu}+1)(X^TX)^{-1}. \end{align} Combining the squared bias \eqref{eq:net-sq-bias} and variance \eqref{eq:net-varianceupper} gives the bound \eqref{th1:beta-MSE} on MSE$(\hat{\V{\beta}})$. The mean squared prediction error can be similarly derived. With $\hat{\V{V}} = \tildeX\hat{\V{\theta}}$, we have $$b(\hat{\V{V}}) = \tildeX b(\hat{\V{\theta}}) = -\lambda\tildeX(\tildeX^T\tildeX+\lambda M)^{-1}M\V{\theta},$$ and $$\var(\hat{\V{V}}) = \sigma^2\tildeX(\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T\tildeX(\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T.$$ Thus \begin{align*} \e\norm{\hat{\V{V}}-\e \V{Y}}^2 &= \norm{b(\hat{\V{V}}) }^2 + \tr(\var(\hat{\V{V}}))\\ & \le \lambda^2 (L\balpha)^T(P_{X^{\perp}} + \lambda L)^{-1}(L\balpha) + \sigma^2 \tr(S_{\lambda}^TS_{\lambda})\\ & \le \frac{\lambda^2}{\nu}\norm{L\balpha}^2 + \sigma^2\norm{S_{\lambda}}_F^2. 
\end{align*} This completes the proof of Theorem~\ref{thm:non-regularized-compare-OLS}.\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:sparsificationImpacts}] Denote $ \ell(\V{\alpha} + X\V{\beta}; \V{Y}) $ by $\ell(\V{\theta})$. Define $$ M = \begin{bmatrix} L & 0_{n\times p} \\ 0_{p\times n} & 0_{p\times p} \\ \end{bmatrix}.$$ The matrix $M^*$ is defined similarly. Then by the optimality of $\hat{\V{\theta}}^*$ under $f^*$, we have \begin{align}\label{eq:optimalitybound} \ell(\hat{\V{\theta}}^*) + \lambda\hat{\V{\theta}}^{*T}M^*\hat{\V{\theta}}^* & = f^*(\hat{\V{\theta}}^*) \\ \notag & \le f^*(\hat{\V{\theta}}) \\ \notag & = \ell(\hat{\V{\theta}}) + \lambda\hat{\V{\theta}}^{T}M^*\hat{\V{\theta}}\\ \notag & \le \ell(\hat{\V{\theta}}) + \lambda(1+\epsilon)\hat{\V{\theta}}^{T}M\hat{\V{\theta}}, \end{align} in which the last inequality can be easily derived from \eqref{eq:spectralapprox} by noticing that $M^*$ has all zeros except in the upper left corner. By Taylor expansion of $\ell$ at $\hat{\V{\theta}}$, we have \begin{align}\label{eq:taylorexpansion} \ell(\hat{\V{\theta}}^*) &= \ell(\hat{\V{\theta}}) + \grad \ell(\hat{\V{\theta}})^T(\hat{\V{\theta}}^*-\hat{\V{\theta}}) + \frac{1}{2}(\hat{\V{\theta}}^*-\hat{\V{\theta}})^T\grad^2\ell(\bar{\V{\theta}})(\hat{\V{\theta}}^*-\hat{\V{\theta}}) \notag\\ &= \ell(\hat{\V{\theta}}) + \grad \ell(\hat{\V{\theta}})^T(\hat{\V{\theta}}^*-\hat{\V{\theta}}) + \frac{1}{2}(\hat{\V{\theta}}^*-\hat{\V{\theta}})^T(\grad^2\ell(\bar{\V{\theta}})+2\lambda M)(\hat{\V{\theta}}^*-\hat{\V{\theta}}) - \lambda(\hat{\V{\theta}}^*-\hat{\V{\theta}})^T M(\hat{\V{\theta}}^*-\hat{\V{\theta}}) \notag\\ & \ge \ell(\hat{\V{\theta}}) + \grad \ell(\hat{\V{\theta}})^T(\hat{\V{\theta}}^*-\hat{\V{\theta}}) + \frac{m}{2}\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 - \lambda(\hat{\V{\theta}}^*-\hat{\V{\theta}})^T M(\hat{\V{\theta}}^*-\hat{\V{\theta}}). \end{align} In \eqref{eq:taylorexpansion}, $\bar{\V{\theta}}$ is some point between $\hat{\V{\theta}}$ and $\hat{\V{\theta}}^*$, and the last inequality comes from the strong convexity assumption on $f$. Substituting \eqref{eq:taylorexpansion} into \eqref{eq:optimalitybound} yields \begin{align}\label{eq:boundtocontrol} \frac{m}{2}\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 &\le -\grad \ell(\hat{\V{\theta}})^T(\hat{\V{\theta}}^*-\hat{\V{\theta}})+\lambda(\hat{\V{\theta}}^*-\hat{\V{\theta}})^T M(\hat{\V{\theta}}^*-\hat{\V{\theta}}) + \lambda(1+\epsilon)\hat{\V{\theta}}^{T}M\hat{\V{\theta}}-\lambda\hat{\V{\theta}}^{*T}M^*\hat{\V{\theta}}^* \notag \\ &= -\grad \ell(\hat{\V{\theta}})^T(\hat{\V{\theta}}^*-\hat{\V{\theta}})+\lambda(2+\epsilon)\hat{\V{\theta}}^{T}M\hat{\V{\theta}} + \lambda\hat{\V{\theta}}^{*T}M\hat{\V{\theta}}^*-\lambda\hat{\V{\theta}}^{*T}M^*\hat{\V{\theta}}^* -2\lambda\hat{\V{\theta}}^{T}M\hat{\V{\theta}}^*. \end{align} Since $\hat{\V{\theta}}$ is the minimizer of $f$, we have the stationary condition \begin{equation}\label{eq:stationary} \grad \ell(\hat{\V{\theta}}) + 2\lambda M\hat{\V{\theta}} = \V{0}. 
\end{equation} Substituting \eqref{eq:stationary} into \eqref{eq:boundtocontrol} gives \begin{align}\label{eq:boundtocontrol2} \frac{m}{2}\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 &\le 2\lambda\hat{\V{\theta}}^TM(\hat{\V{\theta}}^*-\hat{\V{\theta}})+\lambda(2+\epsilon)\hat{\V{\theta}}^{T}M\hat{\V{\theta}} + \lambda\hat{\V{\theta}}^{*T}M\hat{\V{\theta}}^*-\lambda\hat{\V{\theta}}^{*T}M^*\hat{\V{\theta}}^* -2\lambda\hat{\V{\theta}}^{T}M\hat{\V{\theta}}^* \notag \\ & = \epsilon\lambda\hat{\V{\theta}}^{T}M\hat{\V{\theta}} + \lambda\hat{\V{\theta}}^{*T}M\hat{\V{\theta}}^*-\lambda\hat{\V{\theta}}^{*T}M^*\hat{\V{\theta}}^* \notag \\ & \le \epsilon\lambda\hat{\V{\theta}}^{T}M\hat{\V{\theta}} + \epsilon\lambda\hat{\V{\theta}}^{*T}M\hat{\V{\theta}}^*\notag\\ & \le \epsilon\lambda\hat{\V{\theta}}^{T}M\hat{\V{\theta}} + \frac{\epsilon}{1-\epsilon}\lambda\hat{\V{\theta}}^{*T}M\hat{\V{\theta}}^* \notag\\ & = \epsilon\lambda\hat{\V{\alpha}}^TL\hat{\V{\alpha}} + \frac{\epsilon}{1-\epsilon}\lambda\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*, \end{align} where we use \eqref{eq:spectralapprox} again. This gives the bound we need. However, it would be better to have a bound with a dominant term that only depends on $\hat{\V{\alpha}}$ and $L$. Thus we rearrange the terms as \begin{align} \frac{m}{2}\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 &\le \epsilon\lambda\hat{\V{\alpha}}^TL\hat{\V{\alpha}} + \frac{\epsilon}{1-\epsilon}\lambda\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^* \notag\\ & \le \epsilon\lambda\hat{\V{\alpha}}^TL\hat{\V{\alpha}} + \epsilon(1+2\epsilon)\lambda\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^* \notag \\ & = \epsilon\lambda[2\hat{\V{\alpha}}^TL\hat{\V{\alpha}} + (\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*-\hat{\V{\alpha}}^TL\hat{\V{\alpha}}) + 2\epsilon\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*] \notag \\ & \le \epsilon\lambda[2\hat{\V{\alpha}}^TL\hat{\V{\alpha}} + |\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*-\hat{\V{\alpha}}^TL\hat{\V{\alpha}}| + 2\epsilon\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*], \end{align} in which the second inequality comes from the fact that $\frac{1}{1-\epsilon} <1 + 2\epsilon$ for $\epsilon < 1/2$. Note that we expect $|\hat{\V{\alpha}}^{*T}L\hat{\V{\alpha}}^*-\hat{\V{\alpha}}^TL\hat{\V{\alpha}}|$ to be negligible compared to the first term. We now proceed to prove the second bound that only involves $\norm{\hat{\V{\alpha}}}$. By Taylor expansion, we have, for any $\V{\theta}, \V{\theta}_0 \in \bR^{n+p}$, \begin{align*} f^*(\V{\theta}) &= f^*(\V{\theta}_0) + \grad f^*(\V{\theta}_0)^T(\V{\theta}-\V{\theta}_0) + \frac{1}{2}(\V{\theta}-\V{\theta}_0)^T\grad^2f^*(\tilde{\V{\theta}})(\V{\theta}-\V{\theta}_0)\\ & \ge f^*(\V{\theta}_0) + \grad f^*(\V{\theta}_0)^T(\V{\theta}-\V{\theta}_0) + \frac{m}{2}\norm{\V{\theta}-\V{\theta}_0}^2, \end{align*} where the inequality follows from strong convexity. 
In particular, taking $\V{\theta} = \hat{\V{\theta}}$ and $\V{\theta}_0 = \hat{\V{\theta}}^*$ and noticing that $ \grad f^*(\hat{\V{\theta}}^*) = \V{0}$, we get $$\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \le \frac{2}{m}(f^*(\hat{\V{\theta}}) - f^*(\hat{\V{\theta}}^*)).$$ Strong convexity also implies (equation (9.9) of \citealp{boyd2004convex}) that $$(f^*(\hat{\V{\theta}}) - f^*(\hat{\V{\theta}}^*)) \le \frac{1}{2m}\norm{\grad f^*(\hat{\V{\theta}})}^2.$$ Combining the two parts, we have \begin{equation}\label{eq:generalbound} \norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \le \frac{1}{m^2}\norm{\grad f^*(\hat{\V{\theta}})}^2 = \frac{1}{m^2}\norm{\grad f^*(\hat{\V{\theta}}) - \grad f({\hat{\V{\theta}}})}^2, \end{equation} in which the last equality comes from the fact that $\grad f({\hat{\V{\theta}}}) = \V{0}$. From \eqref{eq:glm-net-again}, the gradients of $f$ and $f^*$ are $$\grad f(\hat{\V{\theta}}) = \grad \ell + 2\lambda M\hat{\V{\theta}}, \ \ \grad f^*(\hat{\V{\theta}}) = \grad \ell + 2\lambda M^*\hat{\V{\theta}}.$$ Thus the difference between $\hat{\V{\theta}}^*$ and $\hat{\V{\theta}}$ can be bounded by \begin{equation}\label{eq:implicitcontrol} \norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \le \frac{1}{m^2}\norm{2\lambda(M-M^*)\hat{\V{\theta}}}^2. \end{equation} Finally, from \eqref{eq:spectralapprox}, we obtain \begin{align}\label{eq:spectralnorm} \norm{2\lambda(M-M^*)\hat{\V{\theta}}}^2 &= \norm{2\lambda(L-L^*)\hat{\V{\alpha}}}^2 \notag \\ & \le 4\lambda^2\norm{L-L^*}_2^2\norm{\hat{\V{\alpha}}}^2 \notag \\ &\le 4\lambda^2\epsilon^2\norm{L}_2^2\norm{\hat{\V{\alpha}}}^2. \end{align} Combining \eqref{eq:implicitcontrol} and \eqref{eq:spectralnorm} yields the second bound and completes the proof. \end{proof} \section{Complexity of solving the RNC estimator by block elimination}\label{sec:complexity} We calculate the complexity of computing the RNC estimator here, assuming the block elimination strategy described in Section~\ref{secsec:computation} is used. The first major part is solving an $n\times n$ sparse symmetric diagonally dominant system to obtain $(I+\lambda L)^{-1}X$ and $(I+\lambda L)^{-1}\V{b}_1$ in the estimator. In standard linear system notation, we want to solve $$A\V{x}=\V{b}$$ where $A = I +\lambda L$. Naively solving it by Cholesky decomposition, ignoring any special structure, would require $O(n^3)$ operations. When $A$ is sparse, as in a great many applications, we can first find a permutation matrix $P$ to permute $A$ and then compute a sparse factorization of the permuted matrix $$PAP^T = CC^T,$$ where $C$ denotes the lower-triangular Cholesky factor (we avoid the usual notation $L$, which is already used for the Laplacian). The operation count in this step depends on the heuristic algorithm used to find a good permutation, the number of nonzero elements in $A$ (which is $n+2|E|$ in our setting), and the positions of these nonzeros (determined by the network). Roughly speaking, it depends on $\sum_i d_i^2$ \cite{spielman2010algorithms}. Though a general complexity bound is not available, it is shown in \cite{lipton1979generalized} that the complexity for a network derived from a $\sqrt{n}\times\sqrt{n}$ grid is $O(n^{3/2})$, using an algorithm called George's nested dissection. Solving for both $(I+\lambda L)^{-1}X$ and $(I+\lambda L)^{-1}\V{b}_1$ thus requires $O(n^{3/2} + pn)$ operations, which reduces to $O(n^{3/2})$ when $p = O(\sqrt{n})$. We refer readers to \cite{lipton1979generalized} for details. Alternatively, one can solve the system approximately by iterative methods \cite{spielman2010algorithms,koutis2010approaching}. 
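To make both strategies concrete, the following minimal sketch in Python (a sketch only, using standard sparse linear algebra routines rather than the implementation used in this paper; the grid graph, $\lambda$ and right-hand side are illustrative choices) solves an $(I+\lambda L)\V{x} = \V{b}$ system both by a sparse direct factorization and by conjugate gradients, one such iterative method.

```python
# A minimal sketch (not the netcoh implementation): solve (I + lambda*L)x = b
# for a sqrt(n) x sqrt(n) grid graph, directly and by conjugate gradients.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import spsolve, cg

side = 30
n, lam = side * side, 1.0

# Adjacency matrix of the grid: the offset-1 diagonal links horizontal
# neighbors (zeroed at row ends), the offset-`side` diagonal vertical ones.
horiz = np.ones(n - 1)
horiz[np.arange(1, side) * side - 1] = 0
A = sp.diags([horiz, np.ones(n - side)], [1, side], shape=(n, n))
A = (A + A.T).tocsr()

L = laplacian(A)                          # unnormalized Laplacian D - A
M = (sp.identity(n) + lam * L).tocsc()    # the I + lambda*L system matrix
b = np.random.default_rng(0).standard_normal(n)

x_direct = spsolve(M, b)                  # sparse direct solve
x_iter, info = cg(M, b)                   # iterative solve; info == 0 means converged
print(info, np.max(np.abs(x_direct - x_iter)))
```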
In particular, \cite{koutis2010approaching} propose an iterative algorithm with preconditioning such that for any $n$-node network, an approximate solution $\hat{\V{x}}$ of accuracy $$\norm{\hat{\V{x}}-A^{-1}\V{b}}_A < \epsilon \norm{A^{-1}\V{b}}_A$$ can be computed in expected time $O(m\log^2n\log(1/\epsilon))$, where $m = n+2|E|$ and the $A$-norm is defined by $$\norm{\V{x}}_A = \sqrt{\V{x}^TA\V{x}}.$$ To solve both $(I+\lambda L)^{-1}X$ and $(I+\lambda L)^{-1}\V{b}_1$, this is expected to take $O(pm\log^2n\log(1/\epsilon))$ operations. Notice that even if $A$ is fully dense with $n^2$ nonzero entries, the cost is still much lower than that of naive solving. The remaining steps in the block elimination only involve matrix multiplications and solving a $p\times p$ symmetric positive definite system. The order is then $O(np^2+p^3)$, the same as for the OLS procedure. In summary, if one tries to compute the estimator exactly, the order depends on the network connecting the samples. When the network is from a $\sqrt{n}\times\sqrt{n}$ grid, the complexity is in the order of $O(n^{3/2}+pn+np^2+p^3)$. If approximate methods are used instead, the order is expected to be $O(p(n+2|E|)\log^3n + np^2+p^3)$ for general networks with high accuracy (taking approximation tolerance $\epsilon = O(1/n)$). Both dense and sparse Cholesky factorizations can be further parallelized on modern distributed systems \cite{bosilca2012dague,faverge2008dynamic,lacoste2014taking} when high computational performance is needed. The complexity in such settings depends heavily on the system. \end{appendix} \section{Applications}\label{sec:app} In this section, we use our method to incorporate network effects and improve prediction in two applications using the data from the National Longitudinal Study of Adolescent Health (the AddHealth study) \citep{harris2009national}. AddHealth was a major national longitudinal study of students in grades 7-12 during the school year 1994-1995, after which three further follow-ups were conducted in 1996, 2001-2002, and 2007-2008. We will only use Wave I data. In the Wave I survey, all students in the sample completed in-school questionnaires, and a follow-up in-home interview with more detailed questions was conducted for a subsample. There are questions in both the in-school survey and the in-home interview asking for friend nominations (up to 10), and we can construct friendship networks based on this information, ignoring the direction of nominations. The networks from the two surveys are different. We will consider two specific prediction tasks in this section. The first task, considered by \citet{bramoulle2009identification}, is predicting students' recreational activity from their demographic covariates and their friendship networks, accomplished via a network autoregressive model in \citet{bramoulle2009identification}, who used the in-school survey data. In order to compare with our method directly, we also use only the in-school data for this task. Our second application is to predict the first time of marijuana use, via Cox's proportional hazard model. Since the data on marijuana use are only available from the in-home interview records, in the second application we will use the friendship network constructed from the in-home interviews as well. \begin{comment} \subsection{Central America atmospheric data} This NASA dataset contains several atmospheric measurements for Jan 1995 to Dec 2000 on a $24\times 24$ grid covering central America \citep{murrel06expo}. 
The variables include elevation, ozone, surface temperature, air temperature, pressure and cloud coverage at three different altitudes (low, medium, or high). All the variables are monthly averages except elevation which is fixed, so the six years correspond to 72 measurements at each location on the grid. The cloud coverage at low altitudes has a lot of missing values at four locations; we test the methodology on filling in the missing values for this variable using other observations from the same time point and the information about the spatial grid. This is a regular task in spatial statistics, and thus as a benchmark for comparison we include the Bayesian CAR model mentioned in Remark \ref{rem:LMM}, as well as linear regression by OLS without taking any spatial information into account. \begin{figure}[H] \centering \begin{subfigure}[b]{0.445\textwidth} \includegraphics[width=\textwidth]{./GridPlot-elevation} \vspace{-1.6cm} \caption{Elevation} \label{fig:gull} \end{subfigure}% ~ \begin{subfigure}[b]{0.43\textwidth} \includegraphics[width=\textwidth]{./FourVar} \caption{Cloud coverage (low alt.), temperature, ozone, pressure} \label{fig:tiger} \end{subfigure} \caption{The distribution of atmospheric variables over the grid. Values correspond to point size, and red points indicate the four locations with many missing values. (a) Elevation (does not change over time). (b) Clockwise from top left: Cloud coverage at low altitudes, temperature, ozone, and pressure for Apr, 1995.} \end{figure} We use conditional mean imputation for the missing data, that is, we regress the variable with missing values on other variables using complete observations and then predict the missing values using the other variables as covariates. The cloud coverage at low altitudes is the variable with missing values we take to be the response, and all other variables as predictors. We compare the RNC estimator, OLS, and the Bayesian CAR model (as implemented in the R package \cite{CARBayes}) using the spatial grid as the network. The CAR model does not output predictions for new individual effects, so we need to estimate the predicted value. Using the equivalence discussed in Section \ref{sec:bayesian}, we compute this by taking the CAR estimates of $\sigma^2$ and $\tau^2$ and using $\lambda = \hat{\sigma}^2/\hat{\tau}^2$ in the RNC estimator to compute Henderson's predictor \citep{henderson53estimation}. This gives the empirical BLUP. Then combined with covariate coefficients estimated by MCMC we can make the prediction. We call this CAR-RNC. We evaluate the imputation errors at each of the four locations using time snapshots that have the cloud coverage at those locations observed, and thus we can calculate the true imputation errors. Table~\ref{tab:ImputatingMissingData} shows the average prediction squared errors. Both RNC and CAR are far better than OLS, which is expected. In particular, the prediction errors from RNC estimator are only about 20\% of the OLS errors, which indicates a significant improvement on the missing value imputation accuracy. At three locations, RNC gives better imputation while at one location, CAR gives better result. 
\begin{table}[H] \centering \begin{tabular}{rrrrr} \hline Location & 1 (40) & 2 (40) & 3 (57) & 4 (41) \\ \hline RNC & 0.11(0.020) & 0.32(0.060) & 0.44(0.061) & 0.20(0.050) \\ CAR-RNC & 0.37(0.065) & 0.35(0.070) & 0.15(0.028) & 0.24(0.038) \\ OLS & 1.11(0.184) & 1.12(0.185) & 1.73(0.344) & 1.17(0.187) \\ \hline \end{tabular} \caption{Average squared prediction errors (standard errors in parentheses) at the four locations. The averages for each location are taken over all time points that have no missing records (the number for each location given in parentheses in the header). } \label{tab:ImputatingMissingData} \end{table} \end{comment} \subsection{Recreational activity in adolescents: a linear model example} In \citet{bramoulle2009identification}, social effects were incorporated into ordinary linear regression via the auto-regressive model \begin{equation}\label{eq:ar} y_v = \frac{1}{|N(v)|}\sum_{u \in N(v)} (\gamma y_u + \V{x}_u^T\V{\tau}) + \V{x}_v^T\V{\beta} + \epsilon_v, v \in V. \end{equation} The authors called this the social interaction model (SIM). In econometric terminology, the local average of responses models endogenous effects, and the local averages of predictors are the exogenous effects. When there are known groups in the data, fixed effects can be added to this model \cite{lee2007identification}. In \cite{bramoulle2009identification}, SIM was applied to the AddHealth data to predict levels of recreational activity from a number of demographic covariates as well as the friendship network. The covariates are age, grade, sex, race, born in the U.S. or not, living with the mother or not, living with the father or not, mother's education, father's education, and parents' participation in the labor market. For some of the categorical variables, some levels were merged; refer to \citet{bramoulle2009identification} for details. The recreational activity was measured by the number of clubs or organizations of which the student is a member, with ``4 or more'' recorded as 4. The histogram as well as the mean and standard deviation of recreational activity are shown in Figure~\ref{fig:recreational}. We used exactly the same variables with the same level merging. \begin{figure}[H] \begin{center} \includegraphics[width=0.8\textwidth]{./recreationHistCut.png} \caption{Histogram of the response, recreational activity level, from the data set used in the linear regression example. The data have mean value 1.224 and standard deviation 1.231. } \label{fig:recreational} \end{center} \end{figure} We compare the performance of our proposed RNC method with the SIM model \eqref{eq:ar} from \cite{bramoulle2009identification} and with regular linear regression without network effects, implemented by ordinary least squares (OLS), with the same response and predictors as in \citet{bramoulle2009identification}. The null model again gave results nearly identical to OLS, so we omit it from the subsequent discussion. We use the largest school in the dataset, where, after deleting records with missing values for the variables we use, the network has 1995 students. To see the effect of additional predictors, we include the variables in the model one at a time following the standard forward selection algorithm with OLS. To avoid underestimating prediction errors, we use the largest connected component of the network as our prediction evaluation data, with 871 nodes and an average degree of 3.34. The remaining 1124 samples are used for variable selection to determine the order of variables to be added to the model. 
After deleting students with missing values, all the rest were living with both parents, so we omit those two variables from further analysis. To evaluate predictive performance, we randomly hold out 80 students from the largest connected component as test data, and fit all the models using the rest. The order in which the variables are added to the models is fixed in advance using the separate variable selection dataset and is the same for all models. The squared prediction errors on the 80 students are averaged over 50 independent random data splits into training and test sets, and the resulting root mean squared errors (RMSEs) over these 50 splits are shown in Table~\ref{tab:LinearRegression}. In each row, the differences between the three models are all statistically significant with a $p$-value of less than $10^{-4}$ using a paired $t$-test over the 50 random splits. It is clear that both SIM and RNC are able to improve prediction by using the network information, but RNC is more effective at this than SIM. Note that none of the predictors are very strong, and the network information is relatively more helpful: e.g., the RNC error using only the network cohesion correction and no predictors at all is lower than the error of {\em any} model fitted by either OLS or SIM. As with any other prediction task, adding unhelpful covariates tends to slightly degrade performance, and all models achieve their best performance using the first three variables (mother's education, born in the US, and race). \begin{table}[ht] \centering \begin{tabular}{l|rrr} model & OLS & SIM & RNC\\ \hline no covariates & 1.235 & 1.185 & 1.163\\ + mother's education & 1.231 & 1.183 &1.159 \\ + born in the US & 1.231 & 1.180 &1.162 \\ + race & 1.213 & 1.173 & 1.154\\ + father's education & 1.215 & 1.179 & 1.160\\ + sex & 1.214 & 1.178 & 1.158\\ + age & 1.215 & 1.176 & 1.157\\ + grade & 1.214 & 1.176 & 1.157\\ + parents in labor market & 1.215 & 1.177 & 1.158\\ \end{tabular} \caption{Root mean squared errors for predicting students' recreational activity level. The average is taken over 50 independent data splits, in each of which 80 samples are randomly chosen to be the test set. All differences across rows are statistically significant with a $p$-value $< 10^{-4}$ as measured by a paired $t$-test. The model in each row includes all the variables from previous rows. } \label{tab:LinearRegression} \end{table} \subsection{Predicting the risk of adolescent marijuana use} This application illustrates the benefits of network cohesion in the setting of survival analysis. While prediction of continuous or categorical responses on networks is common, there are settings where survival analysis is more appropriate. In the AddHealth survey, the students were asked ``How old were you when you tried marijuana for the first time?'', and the answer can be either an age (an integer up to 18) or ``never''. The students who say ``never'' should be treated as censored observations, and modeling the time until a student tries marijuana for the first time in a survival model is more appropriate than treating this as a continuous response in a linear model. Here we apply Cox's proportional hazard model with network cohesion regularization to the largest community in the dataset, with 1862 students from the Wave I in-home interview (this question was only asked in the in-home interviews). The friendship network is also based on friend nominations from the in-home data for consistency, and there are 2820 additional covariates on each student collected from the in-home surveys. 
In order to illustrate the benefits of network cohesion on concrete models, we first select a small subset of variables that can act as informative covariates. To do this, we split the data roughly into 2/3 for variable selection and 1/3 for fitting the proportional hazard model. Specifically, we randomly set aside 500 students, and took the largest connected component among the remaining 1362 students to fit the hazard model. This largest connected component consists of 668 nodes and has an average node degree of 2.83. All the remaining 1194 students were used for variable selection. The baseline we compare with is the regular Cox model, since the SIM model does not extend to the survival setting. Again, the null RNC gives results almost identical to the regular Cox model, so we omit it from comparisons. For variable selection, we first order the covariates by their $p$-values from fitting a univariate Cox model with each single covariate. Then we pick the five most significant covariates, with the requirements that each survey category (survey questions were grouped into categories) has no more than one variable selected and that the selected variable has no missing values in the 668 samples. We then use a regular forward selection algorithm to determine the order in which these five variables should be added to the model, and compare the five models chosen by forward selection. Note that with the network cohesion penalty, we can still fit the model with no covariates and individual hazards only, but this is not possible for the regular Cox model since the partial likelihood is not defined without covariates. Evaluating predictive performance in Cox's model is not straightforward since the nonparametric $h_0$ in \eqref{eq:coxhazard1} is not estimated and the partial log-likelihood is not separable. We use the metric of \citet{verweij1993cross, witten2009survival} to measure the prediction power. Suppose we have a training set with all quantities associated with it labeled (1), and a test set labeled (2). Let $\hat{\balpha}_{(1)},\hat{\bbeta}_{(1)}$ be the estimates of $\balpha$ and $\bbeta$ on the training set. The predictive partial log-likelihood (PPL) for the test set is calculated as $$\ell_{(1+2)}(\hat{\balpha}_{(1)},\hat{\bbeta}_{(1)}) - \ell_{(1)}(\hat{\balpha}_{(1)},\hat{\bbeta}_{(1)})$$ where $\ell_{(1+2)}$ is the partial log-likelihood evaluated on all samples (both training and test), and $\ell_{(1)}$ is the partial log-likelihood evaluated only on the training samples. When $\ell$ is a log-likelihood separable across individuals, this gives exactly the predictive log-likelihood in the usual sense. In our evaluation, we randomly select 60 nodes as the test set and use the remaining nodes and their induced sub-network as the training set. This is independently repeated 50 times and we use the average PPL of the 50 replications as the performance measure. For simplicity of comparisons, we fixed the tuning parameter at $\lambda = 0.005$ for all models, based on validation on a different school, and set $\gamma = 0.1$. This is a conservative approach to comparing our method with the regular Cox model, since tuning each model separately for RNC can only improve its performance. 
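As a concrete illustration of this criterion, the following minimal Python sketch computes the partial log-likelihood \eqref{eq:l-trans} and the PPL. It assumes the fitted linear predictors $\V{x}_v^T\hat{\bbeta}_{(1)} + \hat{\alpha}_v$ are already available for both training and test nodes (with test-node $\hat{\alpha}_v$'s obtained from the out-of-sample predictor); the variable names and toy numbers are purely illustrative, not from the AddHealth data.

```python
# A sketch of the predictive partial log-likelihood (PPL); toy data only.
import numpy as np

def cox_partial_loglik(eta, time, event):
    """sum_v delta_v * [eta_v - log(sum_{u: y_u >= y_v} exp(eta_u))]."""
    ll = 0.0
    for v in range(len(time)):
        if event[v] == 1:                  # delta_v = 1: event observed
            at_risk = time >= time[v]      # risk set {u : y_u >= y_v}
            ll += eta[v] - np.log(np.sum(np.exp(eta[at_risk])))
    return ll

def ppl(eta_tr, time_tr, event_tr, eta_all, time_all, event_all):
    """PPL = l_(1+2)(theta_hat_(1)) - l_(1)(theta_hat_(1))."""
    return (cox_partial_loglik(eta_all, time_all, event_all)
            - cox_partial_loglik(eta_tr, time_tr, event_tr))

rng = np.random.default_rng(1)             # toy example: 5 training, 2 test
eta = rng.normal(size=7)                   # fitted x_v' beta + alpha_v
time = rng.exponential(size=7)             # observed (possibly censored) times
event = rng.integers(0, 2, size=7)         # censoring indicators delta_v
print(ppl(eta[:5], time[:5], event[:5], eta, time, event))
```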
\begin{table}[H] \centering \begin{tabular}{l|ccl} model & Cox & RNC & $p$-value \\ \hline no covariates & -- & -156.40 & --\\ + received school suspension & -157.43 & -155.63 & $4.1 \times 10^{-11}$\\ + has driven a car & -158.05 & -156.03 & $7.5 \times10^{-10}$\\ + illegal drugs easily available at home & -157.82 & -155.97 & $5.3 \times10^{-9}$\\ + has a permanent tattoo & -156.68 & -155.46 & $2.7 \times10^{-5}$\\ + speaks English at home & -156.66 & -155.47 & $3.1 \times 10^{-5}$\\ \end{tabular} \caption{Average predictive partial log-likelihood (PPL) for the five models chosen by forward selection on a hold-out sample. The average is taken over 50 random splits of the data into 60 test samples and 608 training samples. The $p$-value corresponds to a paired $t$-test between the averages reported in the first two columns. The model in each row includes all the variables from previous rows. } \label{tab:Cox} \end{table} Table~\ref{tab:Cox} shows the average PPL after adding each variable to the model and the $p$-values from paired $t$-tests on the difference between the regular Cox model and the Cox model regularized by network cohesion. The model using the network information always does better than the same model without the network. RNC with no covariates is already somewhat better than the regular model with all the covariates, and RNC with just the first variable is significantly better than any of the regular models. Using the complete network, the predicted individual effects range from $-0.588$ to $1.22$. That means that for the subject with the highest potential risk of marijuana usage, the individual hazard is about 3.4 times the population baseline hazard. The estimated individual hazards $\exp(\hat{\alpha}_v)$ are shown in Figure~\ref{fig:AddHealthNet}, represented by node size, together with the friendship network and the observed age when one first tried marijuana, represented by node color. One can see the cohesion effect in both the data itself and in the estimated hazards. \begin{figure}[H] \vspace{-1cm} \begin{center} \includegraphics[width=0.8\textwidth]{AddHealthComm77} \vspace{-2cm} \caption{The friendship network of the data set for marijuana risk prediction. Node size represents the estimated individual hazard for using marijuana, and node color the observed age when the student first tried marijuana. } \label{fig:AddHealthNet} \end{center} \end{figure} \section{Discussion} We have proposed a general framework for introducing network cohesion effects into prediction problems, without losing the interpretability and meaning of the original prediction models and in a computationally efficient manner. In a regression setting, we have also demonstrated theoretically when this approach will outperform regular regression, and have shown that the proposed estimator is consistent. In general, we can view this setting as another example of the benefits of regularization when there are more parameters than one can estimate with the data available. Encouraging network cohesion implicitly reduces the number of free parameters that effectively need to be estimated, somewhat in the same spirit as the fused lasso penalty \citep{tibshirani2005sparsity}. There are important differences, however; here we have a computationally efficient way to use the available network data and can explicitly assess the trade-off in bias and variance that results from encouraging cohesion. Another direction to explore is understanding the behavior of network cohesion on different kinds of networks. 
This can be accomplished by leveraging the large literature on random graph models for networks: instead of treating the network as given and fixed, one can model it as a realization of a network model with certain structure. Alternatively, one could analyze the effects on cohesion of certain network properties (degree distribution, communities, etc.) implied by the properties of the graph Laplacian. While we focused on prediction in this paper, the cohesion penalty may also turn out to be useful in causal inference on networks when such inference is possible. \section{Introduction} Advances in data collection and social media have resulted in network data being collected in many applications, recording relational information between units of analysis; for example, information about friendships between adolescents is now frequently available in studies of health-related behaviors \citep{michell1996peer,pearson2000smoke,pearson2003drifting}. This information is often collected along with more traditional covariates on each unit of analysis; in the adolescent example, these may include variables such as age, gender, race, socio-economic status, academic achievement, etc. There is a large body of work extending over decades on predicting a response variable of interest from such covariates, via linear or generalized linear models, survival analysis, classification methods, and the like, which typically assume the training samples are independent and do not extend to situations where the samples are connected by a network. There is also now a large body of work focusing on analyzing the network structure implied by the relational data alone, for example, detecting communities; see \citep{fortunato2010community, goldenberg2010survey} for reviews. The more traditional covariates, if used at all in such network analyses, are typically used to help analyze the network itself, e.g., find better communities \citep{binkiewicz2014covariate, newman2015structure, zhang2015community}. There has not been much focus on developing a general statistical framework for using network data in prediction, although there are methods available for specific applications \citep{wolf2009predicting, asur2010predicting, vogelstein2013graph}. In the social sciences and especially in economics, on the other hand, there has been a lot of recent interest in causal inference on the relationship between a response variable and both covariates and network influences \citep{shalizi2011homophily, manski2013identification}. While in certain experimental settings such inference is possible \citep{rand2011dynamic, choi2014estimation, phan2015natural}, in most observational studies on networks establishing causality is substantially more difficult than in regular observational studies. While network cohesion (a generic term by which in this paper we mean linked nodes acting similarly) is a well-known phenomenon observed in numerous social behavior studies \citep{fujimoto2012social,haynie2001delinquent}, explaining it causally on the basis of observational data is very challenging. An excellent analysis of this problem can be found in \cite{shalizi2011homophily}, showing that it is in general impossible to distinguish network cohesion resulting from homophily (nodes become connected because they act similarly) from cohesion resulting from contagion (behavior spreads from node to node through the links), and to separate either from the effect of node covariates themselves. 
However, making good predictions of node behavior is an easier task than causal inference, and is often all we need for practical purposes. Our goal in this paper is to take advantage of the network cohesion phenomenon in order to better predict a response variable associated with the network nodes, using both node covariates and network information. While we do not attempt to make causal inferences, we focus on interpretable models where effects of individual variables can be explicitly estimated. Using network information in predictive models has not yet been well studied. Most classical predictive models treat the training data as independently sampled from one common population, and, unless explicitly modeled, network cohesion violates virtually all assumptions that provide performance guarantees. More importantly, cohesion is potentially helpful in making predictions, since it suggests pooling information from neighboring nodes. In certain specific contexts, regression with dependent observations has been studied. For example, in econometrics, following the concepts initially discussed in \citep{manski1993identification}, assuming some type of an auto-regressive model on the response variables is common, such as the basic autoregressive model in \citep{bramoulle2009identification} and its variants including group interactions and group fixed effects \citep{lee2007identification}. Such models assume specific forms of different types of network effects, namely, endogenous effects, exogenous effects and correlated effects, and most of this literature is focused on identifiability of such effects. In \citep{bramoulle2009identification, lin2010identifying}, these ideas were applied to the adolescent health data from the AddHealth study \citep{harris2009national} which we discuss in detail in Section \ref{sec:app}. However, these methods have mainly been used to identify social effects in a very specific format, without a focus on interpretability or good prediction performance. For instance, including neighbors' responses as covariates in linear regression makes interpretation of other covariate effects more difficult. In addition, they do not extend easily beyond linear regression (for example, to generalized linear models and Cox's proportional hazard model). Our approach is to introduce network cohesion into regression using the idea of fusion penalties \citep{land1997variable, tibshirani2005sparsity}, framing the problem as penalized regression. Fusion penalties based on a network have been used in variable selection \citep{li2008network, li2010variable, pan2010incorporating, kim2013network}, but this line of work is not directly relevant here since we are interested in using the network of observations, not variables. However, our approach can be viewed as a regression version of the point estimation problem discussed in \citep{sharpnack2012detecting, wang2014trend}. We show that our method gives consistent estimates of covariate effects and can be directly extended to generalized linear models and survival analysis; we also derive explicit conditions on when enforcing network cohesion in regression can be expected to perform better than ordinary least squares. In contrast to previous work, we assume no specific form for the cohesion effects and require no information about potential groups. 
We also derive a computationally efficient algorithm for implementing our approach, which is efficient for both sparse and dense networks, the latter with an extra sparsification step which we prove preserves the relevant network properties. To the best of our knowledge, this is the first proposal of a general regression framework with network cohesion among the observations. The rest of this paper is organized as follows. In Section~\ref{sec:model}, we introduce our approach in the setting of linear regression as a penalized least squares problem and demonstrate its Bayesian interpretation and the connection to linear mixed effects models. The idea is then extended to generalized linear models. Finite-sample and asymptotic properties are discussed in Section~\ref{sec:theory}. Simulation results demonstrating the theoretical bounds and the advantage over regression without using networks are presented in Section~\ref{sec:sim}. Section~\ref{sec:app} discusses two examples in detail, applying our method to predict levels of recreational activity and marijuana usage among teenagers. The algorithms in this paper are implemented in the R package \textbf{netcoh} \cite{li2016netcoh}, available on CRAN. Code for the examples in the paper can be found on the first author's webpage. \section{Regression with network cohesion}\label{sec:model} \subsection{Set-up and notation} We start by setting up notation. By default, all vectors are treated as column vectors. The data consist of $n$ observations $(y_1, \V{x}_1), (y_2, \V{x}_2), \cdots, (y_n, \V{x}_n)$, where $y_i \in \bR$ is the response variable and $\V{x}_i \in \bR^p$ is the vector of covariates for observation $i$. We write $\V{Y} = (y_1, y_2, \cdots, y_n)^T$ for the response vector, and $\M{X} = (\V{x}_1, \V{x}_2, \cdots, \V{x}_n)^T$ for the $n\times p$ design matrix. We treat $\M{X}$ as fixed and assume its columns have been standardized to have mean 0 and variance 1. We also observe the network connecting the observations, $\gcal = (V, E)$, where $V = \{1, 2, \cdots, n\}$ is the node set of the graph, and $E \subset V \times V$ is the edge set. We represent the graph by its adjacency matrix $A \in \bR^{n \times n}$, where $A_{uv} = 1$ if $ (u,v) \in E$ and 0 otherwise. We assume there are no self-loops, so $A_{vv} = 0$ for all $v\in V$, and the network is undirected, i.e., $A_{uv} = A_{vu}$. The (unnormalized) Laplacian of $\gcal$ is given by $L = D - A$, where $D ={\rm diag}(d_1, d_2, \cdots, d_n)$ is the degree matrix, with node degree $d_u$ defined by $d_u = \sum_{v \in V}A_{uv}$. \subsection{Linear regression with network cohesion}\label{penalizedLS} We start by discussing what we mean by the term ``cohesion''. This is a vague term which can be interpreted in several ways depending on whether it refers to the network itself or both the network and additional covariates. Cohesion defined on the network alone can be reflected in various properties, such as local density, connectivity and community structure; we refer the readers to Chapter 4 of \citet{kolaczyk2009statistical} for details. In the context of regression on networks which is the focus of this paper, two types of cohesion are commonly discussed: homophily (also known as assortative mixing) and contagion. Homophily means nodes similar in their characteristics tend to connect, with the implication of a causal direction from sharing individual characteristics to forming a connection. 
In contrast, contagion means that nodes tend to behave similarly to their neighbors, with a causal direction from having a connection to exhibiting similar characteristics. Distinguishing these two phenomena in an observational study without additional strong assumptions is not possible \cite{shalizi2011homophily}. Nonetheless, both of these indicate a correlation between network connections and node similarities, observed empirically by many social behavior studies \cite{haynie2001delinquent, pearson2003drifting, fujimoto2012social}, and that is all we need and assume in this paper. We use the generic term ``cohesion'' in order to cover both possibilities of homophily and contagion, which we do not need to distinguish. While the regularization idea for encouraging network cohesion is general, it is simplest to demonstrate in the context of linear regression, so we start with this setting. Assume that \begin{equation} \V{Y} = \V{\alpha} + \M{X} \V{\beta} + \V{\epsilon} \end{equation} where $\V{\alpha} = (\alpha_1, \alpha_2, \cdots, \alpha_n)^T \in \bR^n$ is the vector of individual node effects, and $\V{\beta} = (\beta_1, \beta_2, \cdots, \beta_p)^T \in \bR^p$ is the vector of regression coefficients. At this stage, no assumption on the distribution of the error $\beps$ is needed, but we assume $\e\V{\epsilon} = \V{0}$ and $\var(\V{\epsilon}) = \sigma^2I_n$, where $I_n$ is the $n \times n$ identity matrix. For simplicity, we further assume that $n>p$ and $X^TX$ is invertible. If instead $p > n$ and $X^TX$ is singular, the usual remedies such as a lasso penalty on $\V{\beta}$ can be applied; our focus here, however, is on regularizing the individual effects, so we do not pursue the additional regularization on $\V{\beta}$ that may be necessary. Including the individual node effects $\V{\alpha}$ instead of a common shared intercept turns out to be key to incorporating network cohesion. In general $\V{\alpha}$ and $\V{\beta}$, which add up to $n + p$ unknown parameters, cannot be estimated from the $n$ observations without additional assumptions. One well-known example of such assumptions is the simple fixed effects model (see, e.g., \citealp{searle2009variance}), where the $n$ samples come from known groups and individuals within each group share a common intercept. Here, we regularize the problem through a network cohesion penalty on $\V{\alpha}$ instead of making explicit assumptions about any structure in $\V{\alpha}$. The regression with network cohesion (RNC) estimator we propose is defined as the minimizer of the objective function \begin{equation}\label{net-linear:obj} L(\V{\alpha}, \V{\beta}) = \norm{\V{Y} - X\V{\beta} - \V{\alpha}}^2 + \lambda \V{\alpha}^TL\V{\alpha}, \end{equation} where $\| \cdot \|$ is the $L_2$ vector norm and $\lambda>0$ is a tuning parameter. An equivalent and more intuitive form of the penalty, which follows from a simple property of the graph Laplacian, is \begin{equation}\label{eq:cohesionDegree} \V{\alpha}^TL\V{\alpha} = \sum_{(u,v)\in E}(\alpha_u-\alpha_v)^2. \end{equation} Thus, we penalize differences between individual effects of nodes connected by an edge in the network. We call this term the {\em cohesion penalty} on $\V{\alpha}$. We assume that the effect of covariates $X$ is the same across the network; as with any linear regression, two nodes with similar covariates will have similar values of $\V{x}^T\V{\beta}$, and the cohesion penalty ensures that neighboring nodes have similar individual effects $\alpha$. 
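As a quick numerical check of \eqref{eq:cohesionDegree}, the following sketch builds $L = D - A$ for a small graph and verifies that the quadratic form matches the sum of squared differences over edges; the graph and $\V{\alpha}$ values are arbitrary toy choices.

```python
# Numerical check of the identity alpha' L alpha = sum over edges of
# (alpha_u - alpha_v)^2, for an arbitrary toy graph.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]     # a 4-cycle
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A               # unnormalized Laplacian L = D - A

alpha = np.array([0.5, -1.0, 2.0, 0.0])
quad_form = alpha @ L @ alpha
edge_sum = sum((alpha[u] - alpha[v]) ** 2 for u, v in edges)
print(quad_form, edge_sum)                   # both equal 15.5
```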
Note that this is different from imposing network homophily (which would require nodes with similar covariates to be more likely to be connected). The minimizer of \eqref{net-linear:obj} can be computed explicitly (if it exists) as \begin{equation}\label{eq:thetaNet} \hat{\V{\theta}} = (\hat{\V{\alpha}}, \hat{\V{\beta}}) = (\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T \V{Y}. \end{equation} Here, $\M{\tildeX} = (I_n, X)$ and $$ M = \begin{bmatrix} L & 0_{n\times p} \\ 0_{p\times n} & 0_{p\times p} \\ \end{bmatrix}$$ where $\M{0}_{a\times b}$ is an $a \times b$ matrix of all zeros. The estimator exists if $\tildeX^T\tildeX + \lambda M$ is invertible. Note that \begin{equation}\label{eq:matToinv} \tildeX^T\tildeX + \lambda M = \begin{bmatrix} I_n + \lambda L & X \\ X^T & X^TX \\ \end{bmatrix}, \end{equation} so it is positive definite if and only if the Schur complement $I_n + \lambda L - X(X^TX)^{-1}X^T =P_{X^{\perp}} +\lambda L$ is positive definite. From \eqref{eq:cohesionDegree}, we can see that $L$ is positive semi-definite but singular since $L\mbone_n = 0$ where $\mbone$ is the vector of all ones, and thus in principle the estimator may not be computable. In Section~\ref{sec:theory}, we will give an interpretable theoretical condition for the estimator to exist. In practice, a natural solution is to ensure numerical stability by replacing $L$ with the regularized Laplacian $L + \gamma I$, where $\gamma$ is a small positive constant. Then the estimator always exists, and in fact the regularized Laplacian may better represent certain network properties, as discussed by \cite{chaudhuri2012spectral,amini2013pseudo, le2015sparse} and others. The resulting penalty is \begin{equation}\label{eq:regularizedLaplacian} \sum_{(u,v)\in E}(\alpha_u-\alpha_v)^2 + \gamma\sum_v\alpha_v^2, \end{equation} which one can also interpret as adding a small ridge penalty on $\alpha$ for numerical stability. \begin{rem}\label{rem:emptygraph} The penalty \eqref{eq:regularizedLaplacian} suggests a natural baseline comparison to our model which can be used to assess whether cohesion is in fact present in the data. If the graph has no edges (i.e., no information about network connections can be used), the penalty (with $\gamma = 1$) reduces to a ridge penalty on the individual effects $\alpha$. Comparing the prediction error of this ``null'' model to that of RNC via cross-validation can provide qualitative evidence of cohesion. In the case of linear regression, it is easy to see from \eqref{eq:thetaNet} that the null model gives exactly the same estimate of $\V{\beta}$ as OLS. \end{rem} \begin{rem}\label{rem:fixedeffects} The fixed effects regression model with subjects divided into groups can be viewed as a special case of RNC. If the graph $\mathcal G$ represents the groups as cliques (everyone within the same group is connected), there are no connections between groups, and we let $\lambda \rightarrow \infty$, then all nodes in one group will share a common intercept. \end{rem} \subsection{Network cohesion for generalized linear models and Cox's proportional hazard model}\label{sec:glm} The RNC methodology extends naturally to generalized linear models and many other regression or classification models such as Cox's proportional hazard model \citep{cox1972regression} for survival problems and support vector machines \citep{vapnik2013nature} for classification using the formulation of \cite{wahba1999support}. 
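Before turning to these extensions, the following minimal sketch illustrates the closed-form linear estimator \eqref{eq:thetaNet} numerically, with the regularized Laplacian $L+\gamma I$ in place of $L$ as suggested above. The simulated graph, data, and tuning values are arbitrary illustrations, and dense linear algebra is used for simplicity; the block elimination strategy of Section~\ref{secsec:computation} would be used at realistic sizes.

```python
# A sketch of the closed-form linear RNC estimator (a toy illustration,
# not the netcoh implementation).
import numpy as np

rng = np.random.default_rng(2)
n, p, lam, gamma = 50, 3, 1.0, 0.01

A = (rng.random((n, n)) < 0.1).astype(float)   # Erdos-Renyi-style toy graph
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A

X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardized columns
alpha_true = rng.standard_normal(n)
beta_true = np.array([1.0, -2.0, 0.5])
Y = alpha_true + X @ beta_true + 0.5 * rng.standard_normal(n)

X_tilde = np.hstack([np.eye(n), X])            # tilde(X) = (I_n, X)
M = np.zeros((n + p, n + p))
M[:n, :n] = L + gamma * np.eye(n)              # regularized cohesion penalty
theta_hat = np.linalg.solve(X_tilde.T @ X_tilde + lam * M, X_tilde.T @ Y)
alpha_hat, beta_hat = theta_hat[:n], theta_hat[n:]
print(beta_hat)                                # estimated covariate effects
```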
For any generalized linear model with a link function $\phi(\e \V{Y}) = X\V{\beta}+\V{\alpha}$, where $\V{\alpha} \in \bR^n$ are the individual effects, suppose the log-likelihood (or partial log-likelihood) function is $\ell(\V{\alpha}, \V{\beta}; X, \V{Y})$. Then if the observations are linked by a network, to induce network cohesion one can fit the model by maximizing the penalized likelihood \begin{equation}\label{eq:glm-net} \ell(\V{\alpha} + X\V{\beta}; \V{Y}) - \lambda\V{\alpha}^T(L + \gamma I)\V{\alpha}. \end{equation} When $\ell$ is concave in $\V{\alpha}$ and $\V{\beta}$, which is the case for exponential families, the optimization problem can be solved via Newton-Raphson or another appropriate convex optimization algorithm. Note that the quadratic approximation to \eqref{eq:glm-net} is the quadratic approximation to the log-likelihood plus the penalty, and thus the problem can be efficiently solved by iteratively reweighted linear regression with network cohesion, just as a GLM is fitted by iteratively reweighted least squares. The ridge penalty term $\gamma I$ helps with numerical stability and avoids fitted probabilities of 0 and 1 for isolated nodes, which may cause the iterative algorithm to diverge; as discussed in the previous section, adding this term to the Laplacian also improves its representation of the underlying network structure. RNC can be similarly generalized to Cox's proportional hazard model \citep{cox1972regression}. In this setting, we observe times until some event occurs, called survival times, which may be censored (unobserved) if the event has not occurred for a particular node. Cox's model assumes the hazard function $h_v(y)$ for each individual $v$ is $$h_v(y) = h_0(y)\exp(\V{x}_v^T\bbeta), v\in V,$$ where $y$ is the survival time, $\V{x}_ v$ is the vector of $p$ observed covariates for individual $v$, $\V{\beta} \in \bR^p$ is the coefficient vector and $h_0$ is an unspecified baseline hazard function. When we have observations connected by a network, as in the RNC setting, we can also model the individual effects and then encourage network cohesion. Thus we will assume the hazard for each node $v$ is given by \begin{equation}\label{eq:coxhazard1} h_v(y) = h_0(y)\exp(\V{x}_v^T\bbeta + \alpha_v), v \in V, \end{equation} where $\alpha_v$ is the individual effect of node $v$. The appropriate loss function in terms of the parameters $\V{\theta} = (\V{\alpha}, \V{\beta})$ is the partial log-likelihood \begin{equation}\label{eq:l-trans} \ell(\V{\theta}; \V{y}) = \sum_{v}\delta_v \bigg[\V{x}_v^T\bbeta + \alpha_v - \log \big( \sum_{u: y_u \ge y_v} \exp(\V{x}_u^T\bbeta + \alpha_u ) \big) \bigg] \end{equation} where $y_v$ is the observed survival time for node $v$, and $\delta_v$ is the censoring indicator, which is 0 if the observation is right-censored and 1 otherwise. Note that the partial log-likelihood is invariant under a shift in $\V{\alpha}$ since such a shift can always be absorbed into $h_0$. Thus for identifiability, we require $\sum\alpha_v = 0$. For fixed covariates $\V{x}_v$, $\alpha_v$ is the individual deviation from the population average log hazard. The sum-to-zero constraint can be automatically enforced by replacing the network Laplacian $L$ in the network cohesion penalty with its regularized version $L + \gamma I$, or equivalently adding a ridge penalty on $\alpha$'s. 
\subsection{A Bayesian interpretation} \label{sec:bayesian} The RNC estimator can also be derived from a Bayesian regression model. Consider the model \begin{align}\label{net-linear:bayes} \V{Y}|\V{\alpha}, \V{\beta} \sim \ncal(\V{\alpha}+X\V{\beta}, \sigma^2I) , \ \ \V{\beta} \sim \pi_{\V{\beta}}(\phi) , \ \ \V{\alpha} \sim \pi_{\V{\alpha}}(\Phi) , \notag \end{align} where $ \pi_{\V{\beta}}(\phi)$ is the prior for $\V{\beta}$ with hyperparameter $\phi$, $\pi_{\V{\alpha}}(\Phi)$ is the prior for $\V{\alpha}$ with hyperparameter $\Phi$, and $\sigma^2$ is assumed to be known. Suppose we take $\pi_{\V{\beta}}(\phi)$ to be the non-informative Jeffreys prior, reflecting lack of prior knowledge about the coefficients, and set $\pi_{\V{\beta}}(\phi) \propto 1.$ For $\V{\alpha}$, assume a Gaussian Markov random field (GMRF) prior $ \pi_{\V{\alpha}} = \ncal_{\gcal}(\V{0}, \Phi)$, where $\Phi = \Omega^{-1} =\zeta^2(L+\gamma I)^{-1}$. Note that when $\gamma=0$, $\Omega$ is not invertible, and $ \pi_{\V{\alpha}}$ is an improper prior called an intrinsic GMRF \citep{rue2005gaussian}. If the posterior modes are used as the estimators for $\V{\alpha}$ and $\V{\beta}$, then this is equivalent to \eqref{net-linear:obj} with $\lambda = \sigma^2/\zeta^2$ and the Laplacian replaced by the regularized Laplacian $L+\gamma I$. Thus the estimator of \eqref{net-linear:obj} is the Bayes estimator with the improper intrinsic GMRF prior over the network on $\V{\alpha}$. Note that this Bayesian interpretation is also valid for generalized linear models. \subsection{Connection to other models}\label{secsec:connection} \paragraph{Mixed effects models. } The main difference between mixed effects models and our setting is that we do not have repeated observations. If we think of $\V{\alpha}$ as random effects in a mixed model, then instead of repeated measurements we can view the Bayesian interpretation of our method as inducing correlations between the random effects, $\V{\alpha} \sim \ncal_{\gcal}(0, \Phi)$. The estimator \eqref{eq:thetaNet} is then the mixed model equation of \citep{henderson1953estimation} for estimating fixed effects and predicting random effects simultaneously (see \citep{searle2009variance}). \paragraph{Spatial prediction.} In spatial problems, data points are typically indexed by their locations. Then a neighborhood weight matrix $A$ can be computed as a function of some distance between locations, and $A$ can be viewed as a weighted analogue of our network adjacency matrix. This leads to natural connections between RNC and methods used in spatial statistics. In particular, ignoring the covariates $X$, RNC reduces to the Laplacian smoothing point estimation procedure in \citep{sharpnack2012detecting, wang2014trend}, which is equivalent to kriging in spatial statistics \cite{cressie1990origins}. More generally, with covariates $X$ included, the Bayesian interpretation of RNC assumes the same Gaussian Markov random field distribution for $\alpha$ as the one assumed for spatial errors in the conditional autoregressive model (CAR) \cite{besag1974spatial} in spatial regression and its GLM generalization (Chapter 9 of \citep{waller2004applied}).
However, $\zeta^2$ and $\sigma^2$ in our Bayesian interpretation are treated as parameters in the CAR model, while $\lambda = \sigma^2/\zeta^2$ is treated as a tuning parameter in RNC. Further, the CAR model is fitted either by maximum likelihood, involving computationally expensive integration steps, or by posterior inference via Markov chain Monte Carlo after assuming a full Bayesian model (with additional priors on $\V{\beta}$, $\zeta^2$, etc.). Both approaches involve much heavier computations than RNC, especially for GLMs, where the Gaussian Markov random field is no longer a conjugate prior. More importantly, CAR models cannot be applied to general loss functions that do not correspond to a well-defined likelihood, such as those of Cox's model and the SVM. Also, CAR models suffer from conceptual difficulties in making out-of-sample predictions \citep{waller2004applied}. In contrast, RNC provides a universal strategy under general loss functions and comes with a natural out-of-sample predictor, discussed in Section \ref{secsec:predict}. \paragraph{Manifold embeddings.} Our Laplacian-based penalty has connections to the substantial literature on manifold embeddings. The general idea there is to embed data points equipped with some non-Euclidean similarity measure (e.g., an adjacency matrix) into a Euclidean space, and then use the Euclidean coordinates of the data points for the task at hand, for example clustering \cite{shi2000normalized} or visualization \cite{tenenbaum2000global}. In comparison, we assume the network is given instead of constructed from some original features. Perhaps the algorithm most closely related to ours is Laplacian Eigenmaps \cite{belkin2003laplacian}, which proposed using the $k$ eigenvectors of the constructed graph Laplacian $L$ corresponding to the smallest eigenvalues as a Euclidean embedding of the graph, in order to obtain a low-dimensional representation of the data. The early manifold literature focused on the unsupervised task of embedding the given data and cannot be generalized to new data. However, later extensions \citep{bengio2004out,cai2007spectral,vural2016out} were developed by assuming the embedding coordinates take certain specific forms as functions of the original data points, enabling out-of-sample embedding after estimating the functions. However, when only the network is given, without original data points in the sense of multivariate analysis, there are no out-of-sample extensions that we are aware of. In contrast, our proposed method results in natural out-of-sample predictions, discussed in the next section. Supervised variants of manifold embedding have also been proposed when class labels are available in training data, including for Laplacian Eigenmaps \cite{yang2011multi,raducanu2012supervised, vural2016out}. The basic idea is to learn a low-dimensional embedding of the data according to the constructed adjacency matrix that also corresponds to a good separation of classes, and then use the coordinates in this embedding as predictors instead of the original variables. For general response variables instead of class labels, there is no supervised variant of Laplacian Eigenmaps. More importantly, the embedding coordinates are typically complicated implicit functions of all the variables, and their coefficients cannot be interpreted in any meaningful way.
Our method, on the other hand, has the original variables as predictors in the model (and nothing else), and thus their regression coefficients are readily interpretable. \subsection{Prediction and choosing the tuning parameter}\label{secsec:predict} Since we have a different $\alpha_v$ for each node $v$, predicted individual effects, denoted by $\V{\alpha}_{1}$, are needed for predicting the responses of a group of $n'$ new samples. Note that now we have an enlarged network with $n+n'$ nodes. Assume the associated Laplacian for the enlarged network is $$ L' = \begin{bmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \\ \end{bmatrix},$$ where $L_{11}$ corresponds to the positions of the $n'$ test samples and $L_{22}$ corresponds to the positions of the original $n$ training samples. To predict the individual effects of the new samples given those estimated on the training set, we minimize the network cohesion penalty fixing $\hat{\V{\alpha}}$: $$\ourmin_{{\V{\alpha}}_{1}} ({\V{\alpha}}_{1}, \hat{\V{\alpha}})^T L' ({\V{\alpha}}_{1}, \hat{\V{\alpha}}).$$ This gives $$\hat{\V{\alpha}}_{1} = -L_{11}^{-1}L_{12}\hat{\V{\alpha}}.$$ This is equivalent to computing the RNC estimator \eqref{net-linear:obj} on the enlarged network with the training-set estimates held fixed. As the responses for the new samples are not observed, only the cohesion penalty is involved. The tuning parameter $\lambda$ can be selected by cross-validation. Randomly splitting or sampling from a network is not straightforward and how to do this is, in general, an open problem; however, we found that the usual ``naive'' cross-validation finds very good tuning parameters for our method, perhaps because it is fundamentally a regression problem and we are not attempting to make any inferences about the structure of the network. We tune using regular $10$-fold cross-validation, randomly splitting the samples into $10$ folds, leaving each fold out in turn, and training the model using the remaining nine folds and the corresponding induced subnetwork. The cross-validation error is computed as the average of the prediction errors on the fold that was left out, and the tuning parameter is picked to minimize the cross-validation error.
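The out-of-sample predictor has a one-line implementation. In the sketch below (our notation), the enlarged Laplacian is ordered with the $n'$ new nodes first, and $L_{11}$ is assumed invertible, which holds, for example, when every group of new nodes is connected to the training set.
\begin{verbatim}
import numpy as np

def predict_alpha(L_enlarged, alpha_hat, n_new):
    """Predict individual effects of n_new new nodes via
    alpha_1 = -L11^{-1} L12 alpha_hat (new nodes indexed first)."""
    L11 = L_enlarged[:n_new, :n_new]
    L12 = L_enlarged[:n_new, n_new:]
    return -np.linalg.solve(L11, L12 @ alpha_hat)
\end{verbatim}
The predicted response for a new node $v$ is then $\V{x}_v^T\hat{\V{\beta}}$ plus its predicted individual effect.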
\subsection{An efficient computation strategy}\label{secsec:computation} Computing the estimator \eqref{eq:thetaNet} involves solving an $(n+p) \times (n+p)$ linear system, so a naive implementation would require $O((n+p)^3)$ operations. For GLMs, such a system has to be solved in each Newton step. This computational burden can be reduced significantly by taking advantage of the fact that most networks in practice have sparse adjacency matrices as well as sparse Laplacians, which allows for using block elimination. A general description of this strategy can be found in many standard texts (see e.g.\ \cite{boyd2004convex}, Ch. 4). Here we give the details in our setting. The linear system we need to solve is $$(\tildeX^T\tildeX + \lambda M)\V{a} = \V{b}.$$ From \eqref{eq:matToinv}, we can rewrite this system with the following block structure: $$\begin{bmatrix} I + \lambda L & X \\ X^T & X^TX \\ \end{bmatrix} \begin{bmatrix} \V{a}_1 \\ \V{a}_2 \\ \end{bmatrix}= \begin{bmatrix} \V{b}_1 \\ \V{b}_2 \\ \end{bmatrix}.$$ The top row gives $$(I + \lambda L)\V{a}_1 = (\V{b}_1 - X \V{a}_2)$$ and substituting this into the bottom row, we have $$(X^TX - X^T(I + \lambda L)^{-1}X)\V{a}_2 = \V{b}_2 - X^T(I + \lambda L)^{-1}\V{b}_1.$$ Note that $I + \lambda L$ is a symmetric diagonally dominant (SDD) matrix, and is sparse most of the time in practice, so $(I + \lambda L)^{-1}\V{b}_1$ and $(I + \lambda L)^{-1}X$ can be efficiently computed \citep{koutis2010approaching, cohen2014solving}. The cost of this step is roughly $O(p(n+2|E|) (\log n)^{1/2})$, where $|E|$ is the number of edges in the network. The cost of the remaining computations is dominated by the cost of inverting the $p\times p$ matrix $X^TX - X^T(I + \lambda L)^{-1}X$, which is of the same order as the cost of solving a standard least squares problem. When $A$ and $L$ are dense matrices, with $|E| = O(n^2)$, the strategy above has a cost of $O(pn^2(\log n)^{1/2})$, which is still better than naively solving the system, but we do not gain anything from block elimination unless $L$ is sparse. However, we can first apply a graph sparsification algorithm to $A$ and use the sparsified $A^*$ as input for RNC. For instance, the algorithm of \cite{spielman2011spectral} can find $A^*$ with $O(\epsilon^{-2} n \log n)$ edges at the cost of $O(|E|\log^2 n)$ operations such that its sparsified Laplacian $L^*$ satisfies $$(1-\epsilon) L \preceq L^* \preceq (1+\epsilon) L, $$ for a given constant $\epsilon > 0$. After this sparsification step, the complexity of solving the linear system reduces to $O(pn\log^c n)$ for some absolute constant $c \le 3$. In Section~\ref{sec:theory}, we will provide theoretical guarantees for the accuracy of the RNC estimator based on $L^*$ compared to that based on $L$. Note that when the number of edges is on the order of $O(n^2)$, the sparsification step itself has complexity $O(n^2\log^c n)$, which is not necessarily cheaper than directly solving the original dense linear system using the SDD property. However, the advantage of sparsification becomes obvious when one has to iteratively solve the linear systems for the GLM or Cox's model, and/or compute a solution path for a sequence of $\lambda$ values. In such situations, sparsification only has to be done once and the average complexity of solving the linear system can be close to $O(n\log^c n)$ for the whole estimation procedure. Details of complexity calculations for the RNC are given in Appendix~\ref{sec:complexity}; a comprehensive discussion of the computational trade-off of sparsification can be found in \citep{sadhanala2016graph}.
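The block elimination above is straightforward to implement; the following sketch uses a sparse LU factorization of $I+\lambda L$ as a stand-in for the fast SDD solvers cited above (function names and library choices are ours).
\begin{verbatim}
import numpy as np
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import splu

def rnc_block_solve(L, X, Y, lam):
    """Solve the RNC system by block elimination: one sparse
    factorization of I + lam*L plus a p x p dense solve."""
    n, p = X.shape
    S = (identity(n) + lam * csr_matrix(L)).tocsc()  # I + lam*L, sparse SDD
    lu = splu(S)
    SinvX, SinvY = lu.solve(X), lu.solve(Y)
    # Schur complement system for a_2 = beta
    beta = np.linalg.solve(X.T @ X - X.T @ SinvX, X.T @ Y - X.T @ SinvY)
    alpha = lu.solve(Y - X @ beta)                   # back-substitution for a_1
    return alpha, beta
\end{verbatim}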
\title{\LARGE \bf Prediction models for network-linked data} \author{Tianxi Li, Elizaveta Levina, Ji Zhu \\ \normalsize Department of Statistics,\\ \normalsize University of Michigan, Ann Arbor \\} \maketitle \begin{abstract} Prediction problems typically assume the training data are independent samples, but in many modern applications samples come from individuals connected by a network. For example, in adolescent health studies of risk-taking behaviors, information on the subjects' social networks is often available and plays an important role through network cohesion, the empirically observed phenomenon of friends behaving similarly. Taking cohesion into account in prediction models should allow us to improve their performance. Here we propose a regression model with a network-based penalty on individual node effects to encourage similarity between predictions for linked nodes, and show that it performs better than traditional models both theoretically and empirically when network cohesion is present. The framework is easily extended to other models, such as the generalized linear model and Cox's proportional hazard model. Applications to predicting levels of recreational activity and marijuana usage among teenagers based on both demographic covariates and their friendship networks are discussed in detail and demonstrate the effectiveness of our approach. \end{abstract} \section*{Acknowledgements} This research uses data from Add Health, a program project designed by J. Richard Udry, Peter S. Bearman, and Kathleen Mullan Harris, and funded by a grant P01-HD31921 from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, with cooperative funding from 17 other agencies. Special acknowledgment is due Ronald R. Rindfuss and Barbara Entwisle for assistance in the original design. Persons interested in obtaining Data Files from Add Health should contact Add Health, The University of North Carolina at Chapel Hill, Carolina Population Center, 206 W. Franklin St., Chapel Hill, NC 27516-2524 (addhealth\_contracts\@unc.edu). No direct support was received from grant P01-HD31921 for this analysis. This research was partially supported by NSF grants DMS-1159005 and DMS-1521551 (to E. Levina), NSF grant DMS-1407698 and NIH grant R01GM096194 (to J. Zhu), and a Rackham International Student Fellowship (to T. Li). \section{Numerical performance evaluation}\label{sec:sim} In this section, we investigate the effects of including network cohesion on simulated data, using both linear regression and logistic regression as examples.
The networks are generated from the stochastic block model with $n = 300$ nodes and $K = 3$ blocks. Under the stochastic block model, the nodes are assigned to blocks independently by sampling from a multinomial distribution with parameters $(\pi_1, \dots, \pi_K)$. Then given block labels $c_i$ for $i = 1, \dots, n$, the edges $A_{ij}$, $1 \le i < j \le n$, are generated as independent Bernoulli variables with $P(A_{ij} = 1) = B_{c_i c_j}$, where the $K \times K$ symmetric matrix $B$ contains probabilities of within-block and between-block connections. We set $\pi_1 = \pi_2 = \pi_3 = 1/3$, $B_{kk} = p_w = 0.5$, $B_{kl} = p_b = 0.1$ for all $k \neq l$. As in Example \ref{example1}, the individual effects $\alpha_i$'s are generated independently from a normal distribution with the mean determined by the node's block, $\ncal(\eta_{c_i}, s^2)$, where $\eta_1 = -1$, $\eta_2 = 0$, $\eta_3 = 1$, and the parameter $s$ controls how close the $\alpha_i$'s within each block are. The smaller $s$ is, the more cohesion we expect in the network. The predictor coefficients $\V{\beta}$ are drawn independently from $\ncal(1,1)$. On top of the baseline method (OLS for continuous response and logistic regression for binary response), we include two additional methods for comparison: the ``null model'' discussed in Remark \ref{rem:emptygraph}, where the graph is empty and we simply add a ridge penalty on the individual effects, and a fixed effects model which uses the same $\alpha$ for all the nodes in the same block. Note that the latter is an oracle model in the sense that it uses the true block memberships, which are not known in practice. We use the relative improvement over OLS (or logistic regression) as the measure of performance, computing for each candidate model $$1-\mse/\mse_{\F{OLS}}$$ separately for $\V{\alpha}$ and $\V{\beta}$. We also report the relative improvement in the estimated mean squared prediction error (MSPE) $\e\norm{\hat{\V{Y}}-\e \V{Y}}^2/n$, measured by $1-\F{MSPE}/\F{MSPE}_{\F{OLS}}$. Figure~\ref{fig:HasBetween-All} shows the relative improvements of these three methods over OLS, for the MSE of $\alpha$, $\beta$, and the MSPE. As discussed in Remark~\ref{rem:emptygraph}, the null model always gives the same $\V{\beta}$ as OLS for linear regression. When we choose the tuning parameter by cross-validation, it turns out that the null model also has nearly the same MSE as OLS, so their two curves overlap. For $\V{\alpha}$ estimation, when $s$ is very small, the oracle fixed effects model performs best, since it is very close to the true model. The RNC comes close to the oracle fixed effects model, and as $s$ increases, it outperforms the fixed effects model, which is no longer close to the truth, whereas the RNC adapts to the amount of cohesion in the network through the cross-validated choice of tuning parameter. The same pattern holds for the MSPE. Eventually, as $s$ increases even more, the blocks lose their meaning and there is no longer cohesion present to take advantage of, with all methods approaching the performance of OLS. For estimating $\beta$, a very similar pattern holds, except that the oracle fixed effects model retains a small advantage over the RNC estimator over the whole range of $s$. While one might argue that the oracle block information could be replaced by estimated communities in the network, for which many methods are available, this would only help if the underlying model does indeed have communities. The RNC, on the other hand, does not require an assumption on communities and can adapt to cohesion over many different types of underlying graphs.
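For concreteness, the data generation just described can be coded as follows; the number of covariates, the covariate distribution, the noise level, and the seed are our illustrative choices, as they are not pinned down by the description above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, K, p, s = 300, 3, 5, 0.5
p_w, p_b = 0.5, 0.1

c = rng.integers(K, size=n)                   # block labels, pi_k = 1/3
B = np.full((K, K), p_b) + (p_w - p_b) * np.eye(K)
U = rng.random((n, n))
A = np.triu(U < B[c[:, None], c[None, :]], 1).astype(int)
A = A + A.T                                   # symmetric SBM adjacency

eta = np.array([-1.0, 0.0, 1.0])
alpha = rng.normal(eta[c], s)                 # cohesive individual effects
beta = rng.normal(1.0, 1.0, size=p)           # coefficients from N(1,1)
X = rng.normal(size=(n, p))                   # covariates (our choice)
Y = alpha + X @ beta + rng.normal(size=n)     # continuous response
\end{verbatim}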
Next, we use the same setting for generating the network, covariates, and parameters, but instead of taking $Y$ to be Gaussian, we generate $\V{Y}$ from the Bernoulli distribution with success probabilities given by the inverse logit of $X\bbeta + \balpha$. We then estimate the parameters by the usual logistic regression and by logistic regression with our proposed network cohesion penalty. We fix a small value of the ridge regularization tuning parameter, $\gamma=0.01$, as we only use this for numerical stability. We compare the methods by computing the MSE of $\balpha$, $\bbeta$, and of the vector of $n$ Bernoulli probabilities estimated as $$\hat{p}_i = \frac{\exp(\V{x}_i^T\hat{\bbeta} + \hat{\alpha}_i)}{1+\exp(\V{x}_i^T\hat{\bbeta} + \hat{\alpha}_i)}.$$ \begin{figure}[H] \begin{center} \includegraphics[width=\textwidth]{./RegressionEvaluation-Revision1} \caption{Relative improvement of three regression methods over OLS, for the MSE of $\V{\alpha}$, $\V{\beta}$, and mean squared prediction errors. } \label{fig:HasBetween-All} \end{center} \end{figure} Figure~\ref{fig:logistic-noBetween} shows the relative improvements over the usual logistic regression in the MSE of $\balpha$, $\bbeta$, and the estimated probabilities. The ridge penalty provides a small improvement for estimating $\V{\alpha}$ and $\V{p}$, and does not make any difference for $\V{\beta}$. Similarly to linear regression, the cohesion penalty using oracle groups makes the largest gains at small $s$, when the cohesion is highest. The logistic RNC also performs best for smaller $s$, and while it does not come as close to the oracle as it does in the linear case, it uniformly outperforms the other non-oracle methods, and in fact outperforms the oracle in estimation of the probabilities $p$ when $s$ is large, adapting to a lesser degree of cohesion which the oracle with its fixed groups cannot do. \begin{figure}[H] \begin{center} \includegraphics[width=\textwidth]{./LogisticEvaluation-Revision1} \caption{Relative improvement of the three logistic regression methods over standard logistic regression, for the MSE of $\V{\alpha}$, $\V{\beta}$, and ${\V{p}}$. } \label{fig:logistic-noBetween} \end{center} \end{figure} \begin{figure}[H] \vspace{0.2cm} \begin{center} \includegraphics[width=\textwidth]{./SparsificationSummary.pdf} \caption{Top left: the adjacency matrix of the sparsified network for $\epsilon=0.15$ (white indicates a nonzero entry, black is a zero entry); Top right: $\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2$ and the bound \eqref{eq:linearSparseDiff}; Bottom left: relative improvement of the sparsifed estimator $\V{\alpha}^*$ over the original estimator $\hat{\V{\alpha}}$, that is, $1-\mse_{\V{\alpha}^*}/\mse_{\hat{\V{\alpha}}}$; Bottom right: relative improvement of the sparsified estimator $\V{\beta}^*$ over the original estimator $\hat{\V{\beta}}$.} \label{fig:sparsification} \end{center} \end{figure} We conclude this section with a simple example illustrating the graph sparsification approach to dense networks. We generate a weighted network with $n=3000$ nodes, divided into three blocks of 1000 nodes each. All the within-block entries of the weighted adjacency matrix are 1 and the other entries are 0.1. Thus the network matrix is a fully dense matrix.
The other settings are the same as we used in the linear regression simulation, and we compare the linear RNC estimator computed using the original Laplacian $L$ to the one based on the sparsified $L^*$. Figure~\ref{fig:sparsification} shows the results for different values of the approximation accuracy $\epsilon$ defined in \eqref{eq:spectralapprox}. The top left plot shows the sparsified matrix corresponding to $\epsilon=0.1$, which has around 52\% of all elements set to 0. The top right plot shows the observed approximation error $\norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2$ and its theoretical upper bound \eqref{eq:linearSparseDiff}. The theoretical bound is conservative but follows the same trend. Finally, the bottom plots of the difference in estimation errors for $\V{\alpha}$ and $\V{\beta}$ show that the difference between the sparsified and the original estimators goes to 0 as $\epsilon \rightarrow 0$, as it should, and that for moderate values of $\epsilon$ the differences are small and go in either direction, which suggests an increase in variance but not much change in bias. Overall, in this example sparsification provides a reliable approximation to the original RNC estimator, and is a useful tool to save computational time for large dense networks. \section{Theoretical properties of the RNC estimator}\label{sec:theory} Recall that the RNC estimator is given by \begin{equation}\label{eq:nonregular} \hat{\btheta} = (\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T \V{Y}, \end{equation} where $$ M = \begin{bmatrix} L & 0 \\ 0 & 0 \\ \end{bmatrix}.$$ We continue to assume that $X$ has centered columns and full column rank. Intuitively, we expect the network cohesion effect to improve prediction only when the network provides ``new'' information that is not already contained in the predictors $X$. We formalize this intuition in the following assumption: \begin{ass}\label{ass:nonSingular} For any $\V{u} \neq 0$ in the column space of $X$, $\V{u}^TL\V{u} > 0$. \end{ass} This natural and fairly mild assumption is enough to ensure the existence of the RNC estimator. Write $\col(X)$ for the linear space spanned by the columns of $X$ and $\col(X)^{\perp}$ for its orthogonal complement. Then the projection matrix onto $\col(X)^{\perp}$ is $P_{X^{\perp}} = I_n - P_{X}$, where $P_{X} = X(X^TX)^{-1}X^T$. Write $\lambda_{\min}(M)$ for the minimum eigenvalue of any matrix $M$. Then we have the following proposition: \begin{prop}\label{lem:nonsingular} Whenever $\lambda > 0$, we have $0 \leq \nu = \lambda_{\min}(P_{X^{\perp}} + \lambda L) \leq 1$. Under Assumption~\ref{ass:nonSingular} the RNC estimator \eqref{eq:nonregular} exists. \end{prop} The proof of Proposition~\ref{lem:nonsingular}, given in the Appendix, also shows that when the network is connected and $X$ is centered, the RNC estimator always exists: in a connected graph, $L$ has rank $n-1$ and its null space is spanned by $\mbone$, which cannot lie in the column space of the centered $X$, so Assumption~\ref{ass:nonSingular} holds automatically.
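Numerically, Assumption~\ref{ass:nonSingular} and the existence of the estimator can be checked by computing $\nu$ directly for a given network and design matrix; a small sketch of ours:
\begin{verbatim}
import numpy as np

def nu(A, X, lam):
    """Smallest eigenvalue of P_{X^perp} + lam*L; the RNC estimator
    exists when this quantity is strictly positive."""
    n = X.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    return np.linalg.eigvalsh(P_perp + lam * L).min()
\end{verbatim}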
\begin{thm}\label{thm:non-regularized-compare-OLS} Under Assumption~\ref{ass:nonSingular}, the RNC estimator $\hat{\V{\theta}} = (\hat{\V{\alpha}}, \hat{\V{\beta}})$ defined by \eqref{eq:nonregular} satisfies \begin{eqnarray} \mse(\hat{\V{\alpha}} )& \le & \frac{\lambda^2}{\nu^2}\norm{L\V{\alpha}}^2 + \frac{n}{\nu}\sigma^2, \label{th1:alpha-MSE} \\ \mse(\hat{\bbeta}) & \le & \frac{\lambda^2}{\nu^2\mu}\norm{L\balpha}^2 + \sigma^2(\frac{1}{\nu}+1)\tr((X^TX)^{-1}), \label{th1:beta-MSE}\\ \e\norm{\hat{\V{Y}} - \e \V{Y}}^2 & \le & \frac{\lambda^2}{\nu}\norm{L\balpha}^2 + \sigma^2 \norm{S_{\lambda}}_F^2, \label{th1:prediction-error} \end{eqnarray} where the minimum eigenvalue of $X^TX$ is denoted by $\mu$ and $\norm{S_{\lambda}}_F$ is the Frobenius norm of the shrinkage matrix $S_{\lambda} = \tildeX(\tildeX^T\tildeX + \lambda M)^{-1}\tildeX^T$. Moreover, when $\norm{L\balpha}=0$, RNC is unbiased. \end{thm} The proof is given in the Appendix, where exact expressions for the errors are also available. Theorem \ref{thm:non-regularized-compare-OLS} applies to any fixed $n$. The asymptotic results as the size of the network $n$ grows are presented next in Theorem~\ref{thm:non-regularized-asymptotic}. We add the subscript $n$ to previously defined quantities to emphasize the asymptotic nature of this result. \begin{thm}\label{thm:non-regularized-asymptotic} If Assumption~\ref{ass:nonSingular} holds, $\mu_{n} = O(n)$, $\norm{L_n\balpha_n}^2 = o(n^c)$ for some constant $c<1$, and there exists a sequence of $\lambda_n$ and a constant $\rho>0$ such that $\liminf_n\nu_n > \rho$, then $$\mse(\hat{\bbeta}) \le O(\lambda_n^2n^{-(1-c)}) + O(n^{-1}).$$ Therefore if $\lambda_n^2 = o(n^{1-c})$, $\hat{\bbeta}$ is an $L_2$-consistent estimator of $\bbeta$. \end{thm} \begin{rem}\label{rem:gradient} Note that the quantity $L\balpha$ appearing in the assumptions is the gradient of the cohesion penalty with respect to $\balpha$, $ \nabla_{\balpha} \balpha^TL\balpha = 2L\balpha$. We call $L\balpha$ the cohesion gradient. In physics, the cohesion gradient is used to measure heat diffusion on graphs when $\balpha$ is a heat function: $$(L\balpha)_v = |\nei{v}| \left(\alpha_v - \frac{\sum_{u \in \nei{v}}\alpha_u}{|\nei{v}|}\right),$$ where $\nei{v}$ is the set of neighbors of $v$ defined by the graph. Thus $\norm{L\balpha}$ represents the difference between nodes' individual effects and the average of their neighbors' effects. The condition of Theorem~\ref{thm:non-regularized-asymptotic} requires that the norm of the vector $L\balpha \in \mathbb R^n$ grows slower than $O(\sqrt{n})$. \end{rem} It is instructive to compare the MSE of our estimator with the MSE of the ordinary least squares (OLS) estimator $$\hat{\bbeta}_{OLS} = (X^TX)^{-1}X^T \V{Y},~ \hat{\balpha}_{OLS} = \bar{y}\mbone, $$ which does not enforce network cohesion. Here $\hat{\balpha}_{OLS} $ is the common intercept. The RNC estimator reduces bias caused by the network-induced dependence among samples and as a trade-off increases variance; thus intuitively, one would expect that the signal-to-noise ratio and the degree of cohesion in the network will determine which estimator performs better.
From Theorem~\ref{thm:non-regularized-compare-OLS} and the basic properties of the OLS estimator (stated as Lemma~\ref{lem:OLS-bias} in the Appendix), it is easy to see that if \begin{equation}\label{eq:alphaCompare} \left( \frac{n}{\nu}-1\right)\sigma^2 \le V(\balpha) -\frac{\lambda^2}{\nu^2}\norm{L\balpha}^2 \end{equation} where $V(\balpha) = \sum_v(\alpha_v - \bar{\alpha})^2$, then the RNC estimator of the individual effects $\hat{\balpha}$ has a lower MSE than that of $\hat{\balpha}_{OLS}$. The left hand side of \eqref{eq:alphaCompare} represents the increase in variance induced by adding the network penalty, whereas the right hand side is the corresponding reduction in squared bias. When $\balpha$ is smooth enough over the network, $\norm{L\balpha}$ is negligible compared to other terms, and the condition essentially requires that the total variation of $\alpha_v$ around its average is larger than the total noise level. Similarly, for the coefficients $\beta$, if \begin{equation}\label{eq:betaCompare} \tr((X^TX)^{-1})\frac{\sigma^2}{\nu} \le \norm{(X^TX)^{-1}X^T\balpha}^2 - \frac{\lambda^2}{\mu}\norm{L\balpha}^2 \end{equation} then the RNC estimator $\hat{\bbeta}$ has a lower MSE than $\hat{\bbeta}_{OLS}$. Again, the two sides of the inequality represent the increase in variance and the reduction in squared bias, respectively. \begin{figure}[H] \begin{center} \includegraphics[width=\textwidth]{./Tradeoff} \caption{Mean squared prediction error $\e\norm{\hat{\V{Y}} - \e \V{Y}}^2/n$ and the bias-variance trade-off of the RNC estimator (based on the upper bound \eqref{th1:prediction-error} in Theorem~\ref{thm:non-regularized-compare-OLS}), in the setting of Example \ref{example1} with $\sigma=0.5$. } \label{fig:tradeoff} \end{center} \end{figure} \begin{ex} \label{example1} We illustrate the bias-variance trade-off on a simple example. Suppose we have a network with $n=300$ nodes which consists of three disconnected components $G_1$, $G_2$, $G_3$, of 100 nodes each. Each component is generated as an Erdos-Renyi graph, with each pair of nodes forming an edge independently with probability 0.05. Individual effects $\alpha_i$ are generated independently from $\ncal(\eta_{c_i}, 0.1^2)$, where $c_i \in \{1, 2, 3\}$ is the component to which node $i$ belongs, $\eta_1 = -1$, $\eta_2 = 0$, $\eta_3 = 1$. We set $\lambda=0.1$. Substituting the expectation $\e A$ for $A$, we have $\nu \approx 0.5$, $\norm{L\balpha}^2\approx 105$, and $V(\balpha)\approx 203$. Then as long as the noise level $\sigma < 0.57$, \eqref{eq:alphaCompare} will be satisfied. Similarly, $X^TX \approx nI_2$, and $\norm{X^T\balpha}^2 \approx 406$ in expectation. Thus \eqref{eq:betaCompare} holds and the RNC is beneficial if $\sigma < 0.54$ (approximately). The bias-variance trade-off in the mean squared prediction error (MSPE) can be demonstrated explicitly by varying $\lambda$; Figure~\ref{fig:tradeoff} shows this trade-off between bias and variance together with the OLS baseline when $\sigma=0.5$. Note that this calculation is based on conservative bounds, and in reality the RNC is going to be beneficial for a larger range of $\sigma$ values. \end{ex}
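The approximate values in Example~\ref{example1} and the threshold implied by \eqref{eq:alphaCompare} are easy to reproduce numerically. In the sketch below, the covariate draw ($p=2$ standard normal columns, so that $X^TX \approx nI_2$) and the seed are our assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, lam = 300, 0.1
c = np.repeat([0, 1, 2], 100)                 # three components of 100 nodes
same = c[:, None] == c[None, :]
A = np.triu((rng.random((n, n)) < 0.05) & same, 1)
A = (A + A.T).astype(int)                     # disjoint Erdos-Renyi components
L = np.diag(A.sum(axis=1)) - A

alpha = rng.normal(np.array([-1.0, 0.0, 1.0])[c], 0.1)
grad2 = np.sum((L @ alpha) ** 2)              # ||L alpha||^2, about 100
V = np.sum((alpha - alpha.mean()) ** 2)       # V(alpha), about 203

X = rng.normal(size=(n, 2))
P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
nu = np.linalg.eigvalsh(P_perp + lam * L).min()   # about 0.5

# largest sigma allowed by the condition on alpha
sigma_max = np.sqrt((V - lam**2 / nu**2 * grad2) / (n / nu - 1))
print(nu, grad2, V, sigma_max)                # sigma_max close to 0.57
\end{verbatim}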
\begin{rem} If we use \eqref{eq:regularizedLaplacian} and are willing to make strong assumptions about the distribution as in the Bayesian interpretation, then it is easy to show (see \citep{searle2009variance}, Ch.~7 for details) that $\hat{\balpha}$ is the best linear unbiased predictor (BLUP) of $\balpha$ and $\hat{\bbeta}$ is the best linear unbiased estimator (BLUE) of $\bbeta$. \end{rem} Finally, we investigate the effects of graph sparsification, proposed in Section~\ref{secsec:computation} to reduce computational cost, on the properties of the RNC estimator. For any $\epsilon >0 $, let $L^*$ be the Laplacian of a network on the same nodes satisfying \begin{equation}\label{eq:spectralapprox} (1-\epsilon) L \preceq L^* \preceq (1+\epsilon) L. \end{equation} In addition, let $\hat{\V{\theta}}$ be the minimizer of \begin{equation}\label{eq:glm-net-again} f(\V{\theta}) = \ell(\V{\alpha} + X\V{\beta}; \V{Y}) + \lambda\V{\alpha}^TL\V{\alpha}, \end{equation} and $\hat{\V{\theta}}^*$ be the minimizer of \begin{equation}\label{eq:glm-net-again2} f^*(\V{\theta}) = \ell(\V{\alpha} + X\V{\beta}; \V{Y}) + \lambda\V{\alpha}^TL^*\V{\alpha}, \end{equation} where $\ell$ can be a general loss function, such as the sum of squared errors in the linear model or the negative log-likelihood in the GLM. \begin{thm}\label{thm:sparsificationImpacts} Given two Laplacians $L$ and $L^*$ satisfying \eqref{eq:spectralapprox} for $0 < \epsilon < 1/2$, assume $\ell$ in \eqref{eq:glm-net-again} is twice differentiable and $f$ is strongly convex with parameter $m>0$, such that for any $\V{\theta} = (\V{\alpha}, \V{\beta})\in \bR^{n+p}$, $$\grad^2f(\V{\theta}) \succeq m I_{n+p}.$$ Then $\hat{\V{\theta}}$ and $\hat{\V{\theta}}^*$ minimizing \eqref{eq:glm-net-again} and \eqref{eq:glm-net-again2} respectively, with the same $\lambda$, satisfy \begin{equation}\label{eq:solution-diff} \norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \le \frac{2\epsilon\lambda}{m} \min\Big(2\hat{\V{\alpha}}^TL\hat{\V{\alpha}}+|\hat{\V{\alpha}}^TL\hat{\V{\alpha}}-\hat{\V{\alpha}}^{*T}L^*\hat{\V{\alpha}}^*| + 2\epsilon\hat{\V{\alpha}}^{*T}L^*\hat{\V{\alpha}}^*~,~ \frac{2\epsilon\lambda}{m} \lambda_1(L)^2\norm{\hat{\V{\alpha}}}^2\Big). \end{equation} \end{thm} The proof is given in the Appendix. Theorem~\ref{thm:sparsificationImpacts} is a generalization of the result of \cite{sadhanala2016graph} for point estimation by Laplacian smoothing (or kriging) for Gaussian and binary data. Our bound is slightly better than that of \cite{sadhanala2016graph}. \begin{rem}\label{rem:rem-dominating} The term $\hat{\V{\alpha}}^TL\hat{\V{\alpha}}$ is the cohesion penalty and is expected to be small for the estimated $\hat{\V{\alpha}}$. Further, we can expect both $|\hat{\V{\alpha}}^TL\hat{\V{\alpha}}-\hat{\V{\alpha}}^{*T}L^*\hat{\V{\alpha}}^*|$ and $\epsilon \hat{\V{\alpha}}^{*T}L^*\hat{\V{\alpha}}^*$ to be much smaller than $\hat{\V{\alpha}}^TL\hat{\V{\alpha}}$, and the first bound in \eqref{eq:solution-diff} is typically much smaller than the second. Therefore, the bound is essentially \begin{equation}\label{eq:essentialapproxbound} \norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \lesssim \frac{4\epsilon\lambda}{m}\hat{\V{\alpha}}^TL\hat{\V{\alpha}}. \end{equation} \end{rem} \begin{rem}\label{rem:linearsparse} The theorem shows that the squared estimation error with an $\epsilon$-approximated Laplacian decreases linearly in $\epsilon$. In particular, it is easy to check that for the linear regression case, we have $$\grad^2 f(\V{\theta}) = 2(\tildeX^T\tildeX + \lambda M).
$$ Strong convexity always holds whenever the RNC estimate exists, and the bound becomes \begin{equation}\label{eq:linearSparseDiff} \norm{\hat{\V{\theta}}^*-\hat{\V{\theta}}}^2 \lesssim \frac{2\epsilon\lambda\hat{\V{\alpha}}^TL\hat{\V{\alpha}}}{\lambda_{\min}(\tildeX^T\tildeX + \lambda M)}. \end{equation} \end{rem}
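As a quick sanity check of \eqref{eq:linearSparseDiff}, one can compare the estimators based on $L$ and on a perturbed Laplacian $L^*$. In the toy sketch below (entirely ours), $L^*$ reweights each edge by a factor in $[1-\epsilon, 1+\epsilon]$, which satisfies \eqref{eq:spectralapprox} but is only a stand-in for genuine spectral sparsification; since \eqref{eq:linearSparseDiff} drops lower-order terms, the observed error should fall near or below the printed bound rather than strictly below it.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, p, lam, eps = 100, 3, 1.0, 0.2

edges = np.argwhere(np.triu(rng.random((n, n)) < 0.1, 1))

def laplacian(w):                       # Laplacian with edge weights w
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L

L = laplacian(np.ones(len(edges)))
Ls = laplacian(rng.uniform(1 - eps, 1 + eps, len(edges)))

X = rng.normal(size=(n, p)); X -= X.mean(axis=0)   # centered covariates
Y = rng.normal(size=n)
Xt = np.hstack([np.eye(n), X])

def fit(Lap):
    M = np.zeros((n + p, n + p)); M[:n, :n] = Lap
    return np.linalg.solve(Xt.T @ Xt + lam * M, Xt.T @ Y)

theta, theta_s = fit(L), fit(Ls)
alpha = theta[:n]
M = np.zeros((n + p, n + p)); M[:n, :n] = L
bound = 2 * eps * lam * (alpha @ L @ alpha) \
        / np.linalg.eigvalsh(Xt.T @ Xt + lam * M).min()
print(np.sum((theta_s - theta) ** 2), bound)
\end{verbatim}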
\section{Introduction} For a positive integer $n$, a {\it partition} of $n$ is a sequence $\lambda = (\lambda_1, \ldots, \lambda_l)$ of integers with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_l \ge 1$ and $\sum_{i=1}^l \lambda_i =n$. A partition $\lambda$ is frequently represented by its Young diagram. A {\it Young tableau} of shape $\lambda$ is a bijective filling of the Young diagram of $\lambda$ by the integers in $[n]:=\{1,2, \ldots, n\}$. For example, the following is a tableau of shape $(4,2,1)$. \begin{equation}\label{ex of T} \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} 3 & 5 & 1& 7 \\ 6 & 2 \\ 4 \\ \end{ytableau} \end{equation} Let $\operatorname{Tab}(\lambda)$ be the set of Young tableaux of shape $\lambda$. Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring over a field $K$, and consider a tableau $T \in \operatorname{Tab}(\lambda)$ for a partition $\lambda$ of $n$. If the $j$-th column of $T$ consists of $j_1, j_2, \ldots, j_m$ in the order from top to bottom, then $$f_T (j) := \prod_{1 \le s < t \le m} (x_{j_s}-x_{j_t}) \in R.$$ The {\it Specht polynomial} $f_T$ of $T$ is given by $$f_T := \prod_{j=1}^{\lambda_1} f_T(j).$$ For example, if $T$ is the tableau \eqref{ex of T}, then $f_T=(x_3-x_6)(x_3-x_4)(x_6-x_4)(x_5-x_2).$ The symmetric group ${\mathfrak S}_n$ acts on the vector space spanned by $\{ \, f_T \mid T \in \operatorname{Tab}(\lambda) \}$. As an ${\mathfrak S}_n$-module, this vector space is isomorphic to the {\it Specht module} $V_\lambda$, which is very important in the representation theory of symmetric groups (cf. \cite{Sa}). Here we study the Specht {\it ideal} $$I^{\rm Sp}_\lambda :=(\, f_T \mid T \in \operatorname{Tab}(\lambda) )$$ of $R$. In the previous paper \cite{Y}, the second author showed the following. \begin{thm}[{\cite[Proposition~2.8 and Corollary~4.4]{Y}}] \label{prev paper main} If $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay, then one of the following conditions holds. \begin{itemize} \item[(1)] $\lambda =(n-d, 1, \ldots, 1)$, \item[(2)] $\lambda =(n-d,d)$, \item[(3)] $\lambda =(d,d,1)$. \end{itemize} If $\operatorname{char}(K)=0$, the converse is also true. \end{thm} The case (1) is treated in the joint paper \cite{WY} with J. Watanabe, where it is shown that a minimal free resolution of $R/I^{\rm Sp}_{(n-d,1,\ldots,1)}$ is given by the Eagon-Northcott complex of a ``Vandermonde-like'' matrix. Since $I^{\rm Sp}_{(n-1,1)}$ is a linear complete intersection, its free resolution is easy. The second author showed that $R/I^{\rm Sp}_{(n-2,2)}$ is a 2-dimensional Gorenstein ring (\cite[Proposition~5.2]{Y}). Similarly, when $\operatorname{char}(K)=0$, $R/I^{\rm Sp}_{(d,d,1)}$ is Cohen--Macaulay and has a linear free resolution. In the present paper, we will construct minimal free resolutions of $R/I^{\rm Sp}_{(n-2,2)}$ and $R/I^{\rm Sp}_{(d,d,1)}$. However, after an earlier version was submitted, the authors were informed that minimal free resolutions of $R/I^{\rm Sp}_{(n-d,d)}$ for $1 \le d \le n/2$ had been studied by Berkesch Zamaere, Griffeth, and Sam \cite{BSS}. More precisely, \cite{BSS} determined the ${\mathfrak S}_n$-module structure of $\operatorname{Tor}_i^R (K, R/I^{\rm Sp}_{(n-d,d)})$. (They called $I^{\rm Sp}_{(n-d,d)}$ the ``$(n-d+1)$-equal ideal''. Of course, this name comes from the decomposition \eqref{decomposition} below.) However, the paper \cite{BSS} does not give the differential maps of their resolutions, and uses highly advanced tools from representation theory (rational Cherednik algebras, Jack polynomials, etc.).
By contrast, the present paper describes the differential maps explicitly, and uses only the basic theory of Specht modules. To keep the exposition self-contained, we do not use results of \cite{BSS}. It is also noteworthy that, recently, many researchers have studied monomial ideals in $R$ on which the symmetric group ${\mathfrak S}_n$ naturally acts (cf. \cite{B+,MR}). However, their behavior is quite different from that of Specht ideals. For example, $\operatorname{Tor}_i^R (K, R/I^{\rm Sp}_{(n-2,2)})$ and $\operatorname{Tor}_i^R (K, R/I^{\rm Sp}_{(d,d,1)})$ are irreducible as ${\mathfrak S}_n$-modules (the same holds for $I^{\rm Sp}_{(n-d,d)}$ as shown in \cite{BSS}), but this is far from true for symmetric monomial ideals (cf. \cite{MR}). \section{Preliminaries and backgrounds} In this section, we briefly explain Specht modules and related notions. See \cite[Chapter 2]{Sa} for details. For a partition $\lambda=(\lambda_1,\ldots , \lambda_l)$, we sometimes use ``exponential notation''. For example, $(4,3^2,2,1^3)$ means $(4,3,3,2,1,1,1)$. If $\lambda = (\lambda_1, \ldots, \lambda_l)$, then $\operatorname{Tab}(\lambda)$ can be simply written as $\operatorname{Tab}(\lambda_1, \ldots, \lambda_l)$. We say a tableau $T$ is {\it standard} if all columns (resp. rows) are increasing from top to bottom (resp. from left to right). Let $\operatorname{SYT}(\lambda)$ be the set of standard Young tableaux of shape $\lambda$. Given any set $A$, let $S(A)$ be the set of all permutations of $A$. Suppose that $T\in \operatorname{Tab}(\lambda)$ has columns $C_1,\ldots ,C_k$. Then $C(T):=S(C_1)\times \cdots \times S(C_k)$ is the {\it column-stabilizer} of $T$. We say $T, T'\in \operatorname{Tab}(\lambda)$ are {\it row equivalent} if corresponding rows of $T$ and $T'$ contain the same elements. For $T\in \operatorname{Tab}(\lambda)$, the {\it tabloid} $\mbox{\boldmath $\{$}T\mbox{\boldmath $\}$}$ of $T$ is defined by $$\mbox{\boldmath $\{$}T\mbox{\boldmath $\}$}:=\{T'\in \operatorname{Tab}(\lambda)\, |\, T \, \, {\rm and} \, \, T' \, \, {\rm are\, \, row \, \, equivalent}\},$$ and the {\it polytabloid} of $T$ is defined by $$e(T):=\displaystyle \sum_{\pi\in C(T)}\operatorname{sgn}(\pi) \, \pi\mbox{\boldmath $\{$}T\mbox{\boldmath $\}$}.$$ It is easy to see that $e(T)=\operatorname{sgn}(\sigma) \, e(\sigma T)$ for $\sigma \in C(T)$. The vector space $$V_\lambda:=\displaystyle \sum_{T\in \operatorname{Tab}(\lambda)}Ke(T)$$ becomes an $\mathfrak{S}_n$-module in the natural way, and it is called the {\it Specht module} of $\lambda$. If $\operatorname{char}(K)=0$, the Specht modules $V_\lambda$ are irreducible, and $V_\lambda$ for partitions $\lambda$ of $n$ form a complete list of irreducible representations of $\mathfrak{S}_n$. In the previous section, we defined the Specht polynomial $f_T \in R=K[x_1, \ldots, x_n]$. Since $\mathfrak{S}_n$ acts on $R$, the vector subspace $$\sum_{T\in \operatorname{Tab}(\lambda)} Kf_T$$ is also an $\mathfrak{S}_n$-module. Moreover, the map \begin{equation}\label{Isom} V_{\lambda}\xrightarrow{\cong} \sum_{T\in \operatorname{Tab}(\lambda)} Kf_T, \, \, \, \, e(T) \longmapsto f_T \end{equation} is well-defined, and gives an isomorphism of $\mathfrak{S}_n$-modules.
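The correspondence $e(T) \longmapsto f_T$ in \eqref{Isom} is easy to experiment with on a computer. The following sketch (ours, using \texttt{sympy}) builds $f_T$ from a tableau given by its columns and reproduces the example \eqref{ex of T}.
\begin{verbatim}
from itertools import combinations
from sympy import symbols, expand

x = symbols("x1:8")                        # x1, ..., x7

def specht_poly(columns):
    """Specht polynomial f_T: for each column, multiply the
    differences x_a - x_b over entries a above b."""
    f = 1
    for col in columns:
        for a, b in combinations(col, 2):  # pairs in top-to-bottom order
            f *= x[a - 1] - x[b - 1]
    return f

# the tableau of shape (4,2,1) from the introduction, column by column
T = [(3, 6, 4), (5, 2), (1,), (7,)]
print(expand(specht_poly(T)))   # (x3-x6)(x3-x4)(x6-x4)(x5-x2), expanded
\end{verbatim}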
Note that $\{e(T)\, |\, T\in \operatorname{Tab}(\lambda)\}$ is linearly dependent, and there are relations called {\it Garnir relations}. Their definition for general $\lambda$ is lengthy, so we explain them through the examples relevant to us. See \cite[\S 2.6]{Sa} for the general case. For \begin{equation}\label{T for Gor} T= \ytableausetup {mathmode, boxsize=3em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots & c_{n-3-i} \\ a_2 & b_2 \\ \vdots \\ a_{i+1} \end{ytableau} \in \operatorname{Tab}(n-1-i,2, 1^{i-1}) \end{equation} and $A=\{a_1,\ldots, a_{i+1}\}$, $B=\{b_1\}$, set \begin{equation*} S_T(A,B):=\left\{ \sigma \in \mathfrak{S}_n \, \middle| \, \; \parbox[center]{13.3em}{$\sigma(k)=k$ for $k\notin A\cup B$,\\ $\sigma(a_1)<\sigma(a_2)<\cdots<\sigma(a_{i+1})$.} \right\} \end{equation*} (if there is no danger of confusion, we write $S(A,B)$ for $S_T(A,B)$) and \begin{equation}\label{G element} g_{A,B} :=\sum_{\sigma\in S(A,B)} \operatorname{sgn}(\sigma) \, \sigma. \end{equation} Here we regard $g_{A,B}$ as an element of the group ring of ${\mathfrak S}_n$. Then we have \begin{equation}\label{G relation} g_{A,B}e(T)=\sum_{\sigma\in S(A,B)} \operatorname{sgn}(\sigma)e(\sigma T)=0 \end{equation} by \cite[Proposition~2.6.3]{Sa}. Next, for $A=\{a_2,\ldots, a_{i+1}\}$, $B=\{b_1,b_2\}$, set \begin{equation*} S(A,B):=\left\{ \sigma \in \mathfrak{S}_n \, \middle| \, \; \parbox{20em}{\begin{center}$\sigma(k)=k$ for $k\notin A\cup B$, \\ $\sigma(a_2)<\sigma(a_3)<\cdots<\sigma(a_{i+1})$, $\sigma(b_1)<\sigma(b_2)$ \end{center}} \right\}. \end{equation*} Then $g_{A,B}$ is given in the same way as \eqref{G element}, and we have $g_{A,B}e(T)=0$ as in \eqref{G relation} again. The same is true for $A=\{b_1, b_2\}$, $B=\{c_1\}$, and for $A=\{c_l \}$, $B=\{c_{l+1}\}$. In these cases, $g_{A,B}$ is called the {\it Garnir element} associated with $A$ and $B$. It is a classical result that $\{e(T)\mid T\in \operatorname{SYT}(\lambda)\}$ is a basis of $V_\lambda$ (cf. \cite[Theorem 2.6.5]{Sa}), and the fact that $\{e(T)\mid T\in \operatorname{SYT}(\lambda)\}$ spans $V_\lambda$ is shown using Garnir relations. \begin{ex}\label{Garnir example} For $$ T= \ytableausetup {mathmode, boxsize=1.3em} \begin{ytableau} 2 &1 & 6\\ 3& 5 \\ 4 \end{ytableau}, $$ set $A=\{2,3,4\}$ and $B=\{1\}$, then $$ g_{A,B}e(T)=e(T) -e( \ytableausetup {mathmode, boxsize=1.3em} \begin{ytableau} 1 &2 & 6\\ 3& 5 \\ 4 \end{ytableau}) +e( \begin{ytableau} 1 &3 & 6\\ 2& 5 \\ 4 \end{ytableau}) -e( \begin{ytableau} 1 &4 & 6\\ 2& 5 \\ 3 \end{ytableau})\\ =0. $$ Next we consider the following tableau \begin{equation}\label{T example} T= \ytableausetup {mathmode, boxsize=1.3em} \begin{ytableau} 1 &2 & 6\\ 4& 3 \\ 5 \end{ytableau}. \end{equation} Set $A=\{4,5\}$ and $B=\{2,3\}$, then \begin{eqnarray*} g_{A,B}e(T)&=&e(T) -e( \ytableausetup {mathmode, boxsize=1.3em} \begin{ytableau} 1 &2 & 6\\ 3& 4 \\ 5 \end{ytableau}) + e(\begin{ytableau} 1 &2 & 6\\ 3& 5 \\ 4 \end{ytableau}) -e( \begin{ytableau} 1 &3 & 6\\ 2& 5 \\ 4 \end{ytableau})\\ &&+e( \begin{ytableau} 1 &3 & 6\\ 2& 4 \\ 5 \end{ytableau}) +e( \begin{ytableau} 1 &4 & 6\\ 2& 5 \\ 3 \end{ytableau})\\ &=&0. \end{eqnarray*} \end{ex}
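Under the isomorphism \eqref{Isom}, the first relation in Example~\ref{Garnir example} becomes an identity of Specht polynomials, which can be verified directly. The following \texttt{sympy} check is ours; the tableaux are listed by their columns, read top to bottom.
\begin{verbatim}
from itertools import combinations
from sympy import symbols, expand

x = symbols("x1:7")                    # x1, ..., x6

def f(columns):                        # Specht polynomial of a tableau
    poly = 1
    for col in columns:
        for a, b in combinations(col, 2):
            poly *= x[a - 1] - x[b - 1]
    return poly

# the four tableaux appearing in g_{A,B} e(T) = 0 above
T0 = [(2, 3, 4), (1, 5), (6,)]
T1 = [(1, 3, 4), (2, 5), (6,)]
T2 = [(1, 2, 4), (3, 5), (6,)]
T3 = [(1, 2, 3), (4, 5), (6,)]
print(expand(f(T0) - f(T1) + f(T2) - f(T3)))   # prints 0
\end{verbatim}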
\begin{prop}[cf. {\cite[Theorem 2.6.4]{Sa}}]\label{relation generated} Any linear relation among $\{e(T)\mid T\in \operatorname{Tab}(\lambda)\}$ is a linear combination of Garnir relations. That is, if \begin{equation}\label{linear relation} \displaystyle \sum_{i=1}^m a_i e(T_i)=0 \end{equation} in $V_\lambda$ for $T_1,\ldots, T_m\in \operatorname{Tab}(\lambda)$ and $a_i\in K$, then $\sum_{i=1}^m a_i T_i$ (this is a formal sum, and there is no relation among $T_1,\ldots, T_m$) is contained in the linear space $V$ spanned by $$ \left \{ \sum_{\sigma\in S(A,B)} \operatorname{sgn}(\sigma) \, \sigma T \ \middle| \ \text{$S(A,B)$ gives a Garnir element $g_{A,B}$} \right \}. $$ \end{prop} \begin{proof} Assume that \eqref{linear relation} holds. Each $e(T_i)$ can be rewritten as $$ e(T_i)=\sum_{T \in \operatorname{SYT}(\lambda)} b_{i,T}e(T) $$ for some $b_{i,T} \in K$ using only Garnir relations; see the proof of \cite[Theorem 2.6.4]{Sa}. Hence we have $$v_i:= T_i - \sum_{T \in \operatorname{SYT}(\lambda)} b_{i,T} T \in V$$ for each $i$. Note that $$\sum_{i=1}^m \sum_{T \in \operatorname{SYT}(\lambda)} a_i b_{i,T}e(T) =\sum_{i=1}^m a_i e(T_i)=0.$$ However, since $\{e(T)\mid T\in \operatorname{SYT}(\lambda)\}$ is a basis, we have $\sum_{i=1}^m a_i b_{i,T}=0$ for all $T \in \operatorname{SYT}(\lambda)$. Hence $$ \sum_{i=1}^m a_i T_i = \sum_{i=1}^m a_i v_i \in V.$$ \end{proof} In the rest of this section, we collect a few remarks on the Specht ideals $I^{\rm Sp}_{(n-d,d)}$ and $I^{\rm Sp}_{(d,d,1)}$. First, we have the decomposition \begin{equation}\label{decomposition} I^{\rm Sp}_{(n-d,d)}=\bigcap_{\substack{F \subset [n] \\ \# F = n-d +1}}(x_i -x_j \mid i, j \in F), \end{equation} and $I^{\rm Sp}_{(d,d,1)}$ (where $n=2d+1$) admits the analogous decomposition over the subsets $F$ with $\# F = d+1$. So these ideals can be seen as special cases of the ideals associated with subspace arrangements (cf. \cite{BCES,LL}). The second author (\cite{Y}) made much effort to show that $\sqrt{I^{\rm Sp}_{\lambda}}=I^{\rm Sp}_{\lambda}$ for $\lambda=(n-d,d), (d,d,1)$, but this follows directly from \cite[Corollary~3.2]{LL}. To prove the Cohen--Macaulay-ness of $I^{\rm Sp}_{(n-d,d)}$ and $I^{\rm Sp}_{(d,d,1)}$ in characteristic 0, the second author (\cite{Y}) cited a result in \cite{EGL}, which uses the representation theory of rational Cherednik algebras. Recently, McDaniel and Watanabe (\cite{MW}) gave a purely ring theoretic proof. Moreover, in the positive characteristic case, they showed that $R/I^{\rm Sp}_{(n-d,d)}$ (resp. $R/I^{\rm Sp}_{(d,d,1)}$) is Cohen--Macaulay if and only if $\operatorname{char}(K) \ge d$ (resp. $\operatorname{char}(K) \ge d+1$). As stated in \cite{BSS, SY}, $I^{\rm Sp}_{(d,d,1)}$ has a linear free resolution if it is Cohen--Macaulay. In any case, this is an easy consequence of \cite[Theorem 5.3.7]{V}.
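The containment of the generators in the right hand side of \eqref{decomposition} is easy to test on a computer: a Specht polynomial of shape $(n-d,d)$ is a product of $d$ differences over disjoint pairs, so by pigeonhole it vanishes once any $n-d+1$ coordinates are set equal. Below is a small \texttt{sympy} check for $n=6$ and $d=2$ (the particular tableaux are our choice, and only a few generators are tested).
\begin{verbatim}
from itertools import combinations
from sympy import symbols

x = symbols("x1:7")                    # n = 6, lambda = (4,2), d = 2
t = symbols("t")

def f(pairs):                          # product over the two height-2 columns
    poly = 1
    for a, b in pairs:
        poly *= x[a - 1] - x[b - 1]
    return poly

gens = [f([(1, 2), (3, 4)]), f([(1, 3), (2, 4)]), f([(1, 5), (2, 6)])]

# each generator vanishes once any n-d+1 = 5 coordinates are set equal
print(all(g.subs({x[i]: t for i in F}) == 0
          for g in gens for F in combinations(range(6), 5)))   # True
\end{verbatim}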
\section{The case $(n-2,2)$: construction} For $R/I^{\rm Sp}_{(n-2,2)}$, we define the chain complex \begin{equation}\label{Gor} {\mathcal F}_\bullet^{(n-2,2)}:0 \longrightarrow F_{n-2} \stackrel{\partial_{n-2}}{\longrightarrow} F_{n-3} \stackrel{\partial_{n-3}}{\longrightarrow} \cdots \stackrel{\partial_2}{\longrightarrow} F_1 \stackrel{\partial_1}{\longrightarrow} F_0 \longrightarrow 0 \end{equation} of graded free $R$-modules as follows. Here $F_0 =R$, $F_1 = V_{(n-2,2)} \otimes_K R(-2)$, $$ F_i = V_{(n-1-i,2, 1^{i-1})} \otimes_K R(-1-i) $$ for $1 \le i \le n-3$, and $F_{n-2}= V_{(1^n)} \otimes_K R(-n)$. For $T \in \operatorname{Tab}(n-2,2)$, set $\partial_1(e(T)\otimes 1) :=f_T \in R =F_0$. To describe $\partial_i$ for $ 2 \le i \le n-3$, we need some preparation. For the tableau $T \in \operatorname{Tab}(n-1-i,2, 1^{i-1})$ of \eqref{T for Gor} and $j$ with $1 \le j \le i+1$, set $$ T_j:= \ytableausetup {mathmode, boxsize=3em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots & c_{n-3-i} & a_j \\ a_2 & b_2 \\ \vdots \\ a_{j-1}\\ a_{j+1} \\ \vdots \\ a_{i+1} \end{ytableau} \in \operatorname{Tab}(n-i,2, 1^{i-2}). $$ Then we set $$\partial_i(e(T) \otimes 1):=\sum_{j=1}^{i+1} (-1)^{j-1} e(T_j) \otimes x_{a_j} \in V_{(n-i,2, 1^{i-2})} \otimes_K R(-i)=F_{i-1}.$$ Recall that $e(\sigma T)=\operatorname{sgn}(\sigma)e(T)$ for $\sigma \in C(T)$. It is easy to check that the construction of $\partial_i$ is compatible with this principle, that is, $\partial_i(e(\sigma T)\otimes 1)=\operatorname{sgn}(\sigma) \, \partial_i(e(T)\otimes 1)$ holds for $\sigma \in C(T)$. However, this is not enough. Since $\{e(T)\, |\, T\in \operatorname{Tab}(\lambda)\}$ is linearly dependent, the well-definedness of $\partial_i$ is still non-trivial. We will show this in Theorem~\ref{wdGor} below. Finally, we define the differential map $$\partial_{n-2}: V_{(1^n)} \otimes_K R(-n) \longrightarrow V_{(2,2,1^{n-4})} \otimes_K R(-n+2).$$ Since $\dim V_{(1^n)}=1$, it suffices to define $\partial_{n-2}(e(T))$ for \begin{equation}\label{T for Gor last} T:= \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} 1 \\ 2 \\ \vdots \\ n \end{ytableau} \in \operatorname{Tab}(1^n), \end{equation} and we do not have to worry about well-definedness here. For $j,k$ with $1 \le j < k \le n$, set $$ T_{j,k} := \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} \vdots & j\\ \vdots & k\\ \vdots \\ \vdots \end{ytableau} \in \operatorname{Tab}(2,2, 1^{n-4}), $$ where the first column is the ``transpose'' of $$\ytableausetup {boxsize=2.5em}\begin{ytableau} 1 & 2 & \cdots & j-1 & j+1 & \cdots & k-1 & k+1 &\cdots & n \end{ytableau}.$$ Then $$\partial_{n-2}(e(T) \otimes 1):=\sum_{1 \le j < k \le n}(-1)^{j+k-1} e(T_{j,k}) \otimes x_{j}x_{k} \in V_{(2,2, 1^{n-4})} \otimes_K R(-n+2) = F_{n-3}.$$ We define the $\mathfrak{S}_n$-module structure on $F_i=V_\lambda \otimes_K R(-j)$ (here $\lambda$ is a suitable partition of $n$ and $j$ is a suitable integer) by $\sigma(v\otimes f):=\sigma v \otimes \sigma f$ for $\sigma \in{\mathfrak S}_n$. By (\ref{Isom}), $\partial_1$ is an $\mathfrak{S}_n$-homomorphism. In fact, we have $$\partial_1(\sigma(e(T) \otimes g)) = \partial_1(\sigma(e(T)) \otimes \sigma g) = \sigma(f_T) \cdot \sigma g =\sigma(f_T \cdot g)=\sigma(\partial_1(e(T) \otimes g) )$$ for $T \in \operatorname{Tab}(n-2,2)$ and $g \in R$. For $i$ with $2\leq i \leq n-3$, $T\in \operatorname{Tab}(n-1-i,2, 1^{i-1})$ and $\sigma\in \mathfrak{S}_n$, we have $\sigma(T)_j=\sigma(T_j)$.
Hence $\partial_i(\sigma(e(T)\otimes g))=\sigma (\partial_i(e(T)\otimes g))$, that is, each $\partial_i$ is an $\mathfrak{S}_n$-homomorphism, and the same holds for $\partial_{n-2}$. \begin{ex}\label{(4,2)} Our minimal free resolution ${\mathcal F}_\bullet^{(4,2)}$ of $R/I^{\rm Sp}_{(4,2)}$ is of the form $$0 \longrightarrow V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} \\ \\ \\ \\ \\ \\ \end{ytableau}}\otimes_K R(-6) \stackrel{\partial_{4}}{\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & \\ {} & \\ \\ \\ \end{ytableau}} \otimes_K R(-4) \stackrel{\partial_{3}} {\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & \\ {} & \\ \\ \end{ytableau}} \otimes_K R(-3) \qquad \qquad \qquad \qquad \quad $$ \begin{equation}\label{F^(4,2)} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \stackrel{\partial_{2}}{\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & {} & \\ {} & \\ \end{ytableau}} \otimes_K R(-2) \stackrel{\partial_{1}}{\longrightarrow} R \longrightarrow 0. \end{equation} The differential maps are given by \begin{eqnarray*} \partial_4 ( e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \end{ytableau} \, ) \otimes 1) &=& e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 3 & 1\\ 4 & 2\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_1x_2 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 2 & 1\\ 4 & 3\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_1x_3 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 2 & 1\\ 3 & 4\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_1x_4\\ & & - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 2 & 1\\ 3 & 5\\ 4 \\ 6 \end{ytableau} \, ) \otimes x_1x_5 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 2 & 1\\ 3 & 6\\ 4 \\ 5 \end{ytableau} \, ) \otimes x_1x_6 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2\\ 4 & 3\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_2x_3 \\ & & - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2\\ 3 & 4\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_2x_4 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2\\ 3 & 5\\ 4 \\ 6 \end{ytableau} \, ) \otimes x_2x_5 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2\\ 3 & 6\\ 4 \\ 5 \end{ytableau} \, ) \otimes x_2x_6 \\ && + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 3\\ 2 & 4\\ 5 \\ 6 \end{ytableau} \, ) \otimes x_3x_4 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 3\\ 2 & 5\\ 4 \\ 6 \end{ytableau} \, ) \otimes x_3x_5 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 3\\ 2 & 6\\ 4 \\ 5 \end{ytableau} \, ) \otimes x_3x_6 \\ & & + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 4\\ 2 & 5\\ 3 \\ 6 \end{ytableau} \, ) \otimes x_4x_5 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 4\\ 2 & 6\\ 3 \\ 5 \end{ytableau} \, ) \otimes x_4x_6 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 5 \\ 2 & 6\\ 3 \\ 4 \end{ytableau} \, ) \otimes x_5x_6, \end{eqnarray*} $$\partial_3(e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 3 & 1\\ 4 & 2\\ 5 \\ 6 \end{ytableau} \, ) \otimes 1) = e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 4 & 1 & 3\\ 5 & 2\\ 6 \end{ytableau}) \otimes x_3 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 3 & 1 & 4\\ 5 & 2\\ 6 \end{ytableau}) \otimes x_4 + e ( \, \ytableausetup {mathmode, boxsize=1em}
\begin{ytableau} 3 & 1 & 5\\ 4 & 2\\ 6 \end{ytableau}) \otimes x_5 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 3 & 1 & 6\\ 4 & 2\\ 5 \end{ytableau}) \otimes x_6,$$ and $$ \partial_2(e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 4 & 1 & 3\\ 5 & 2\\ 6 \end{ytableau}) \otimes 1) = e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 5 & 1 & 3 & 4\\ 6 & 2\\ \end{ytableau}) \otimes x_4 - e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 4 & 1 & 3 & 5\\ 6 & 2\\ \end{ytableau}) \otimes x_5 + e ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 4 & 1 & 3 & 6\\ 5 & 2\\ \end{ytableau}) \otimes x_6. $$ \end{ex} \begin{thm}\label{1st thm} If $\operatorname{char}(K)=0$, the complex ${\mathcal F}_\bullet^{(n-2,2)}$ of \eqref{Gor} is a minimal free resolution of $R/I^{\rm Sp}_{(n-2,2)}$. \end{thm} \section{The case $(n-2,2)$: Proof} \begin{lem}\label{Gor Betti} We have $$\beta_i(R/I^{\rm Sp}_{(n-2,2)})=\beta_{i,i+1}(R/I^{\rm Sp}_{(n-2,2)})=\dim_K V_{(n-i-1,2,1^{i-1})}$$ for all $1\leq i\leq n-3$, and $$\beta_{n-2}(R/I^{\rm Sp}_{(n-2,2)})=\beta_{n-2, n}(R/I^{\rm Sp}_{(n-2,2)})=1=\dim_K V_{(1^n)}.$$ \end{lem} \begin{proof} By the hook formula (\cite[Theorem~3.10.2]{Sa}), for all $i$ with $1\leq i \leq n-3$, we have \begin{eqnarray*} \dim_K V_{(n-i-1,2,1^{i-1})} \!\! &=& \!\! \frac{n!}{(n-1)(n-i-1)(n-i-3)! \, (i+1) \, (i-1)! } \\ \!\! &=& \!\! \frac{n!(n-i-2)i}{(i+1)!(n-i-1)!(n-1)}, \end{eqnarray*} where the factors in the first denominator are the hook lengths of the cell $(1,1)$, the cell $(1,2)$, the remaining cells of the first row, the cell $(2,1)$, and the cells of the first column below the second row, respectively. (For instance, for $n=6$ and $i=2$, both expressions equal $\dim_K V_{(3,2,1)}=16$.) On the other hand, $R/I^{\rm Sp}_{(n-2,2)}$ is a Gorenstein ring with the Hilbert series $$\frac{1+(n-2)t+t^2}{(1-t)^2}$$ by \cite[Proposition~5.2]{Y} (see its proof for the Hilbert series), and we have \begin{eqnarray*} \beta_i (R/I^{\rm Sp}_{(n-2,2)}) &=&\beta_{i,i+1} (R/I^{\rm Sp}_{(n-2,2)})\\ &=& \binom{n-1}{i+1}\binom{i}{i-1} +\binom{n-1}{i}\binom{n-i-2}{1} -\binom{n-2}{i}\binom{n-2}{1}\\ &=& \frac{(n-1)!i}{(i+1)!(n-i-2)!}+\frac{(n-1)!(n-i-2)}{i!(n-1-i)!}-\frac{(n-2)!(n-2)}{i!(n-2-i)!}\\ &=& \frac{n!i}{(i+1)!(n-i-2)!n}+\frac{n!(n-i-2)}{i!(n-1-i)!n}-\frac{n!(n-2)}{i!(n-2-i)!\,n(n-1)}\\ &=&\frac{n!(n-i-2)i}{(i+1)!(n-i-1)!(n-1)} \end{eqnarray*} for all $i$ with $1\leq i \leq n-3$. Here we use \cite[Proposition~5.3.14]{V} (note that $I^{\rm Sp}_{(n-2,2)}$ is a Gorenstein ideal generated by quadrics and $\operatorname{ht}(I^{\rm Sp}_{(n-2,2)})=n-2$). So we get the first equation. The second one is easy. \end{proof} \begin{thm}\label{wdGor} The maps $\partial_i$ ($1 \le i \le n-3$) defined in the previous section are well-defined. \end{thm} The proof of this theorem is elementary, but (therefore?) long and technical. In keeping with the purpose of the present paper, we do not skip the details. Example~\ref{sigma-tau} below, which explains a few details of the proof, should be helpful for a better understanding. Note that an element $\varphi \in V_\lambda \otimes_K R_1$ is uniquely written as $\sum_{i=1}^n v_i \otimes x_i$ ($v_i \in V_\lambda$). We call $v_i \otimes x_i$ the {\it $x_i$-part} of $\varphi$. \begin{proof} The well-definedness of $\partial_1$ is nothing other than that of the map \eqref{Isom}. So we assume that $2 \le i \le n-3$. By Proposition~\ref{relation generated}, it suffices to show that \begin{equation}\label{quasi Garnir} \sum_{\sigma\in S_T(A,B)} \operatorname{sgn}(\sigma) \,\partial_i(e(\sigma T)\otimes 1)=0 \end{equation} for $T\in \operatorname{Tab}(n-1-i,2,1^{i-1})$. Let $T$ be as in \eqref{T for Gor}. Then there are three types of $S_T(A,B)$.
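Before entering the case analysis, we record the strategy (a short reformulation of the above, included for the reader's convenience): writing the left side of \eqref{quasi Garnir} as $\sum_{m=1}^n v_m \otimes x_m$ with $v_m \in V_{(n-i,2,1^{i-2})}$, the uniqueness of this expression gives
$$\sum_{\sigma\in S_T(A,B)} \operatorname{sgn}(\sigma) \,\partial_i(e(\sigma T)\otimes 1)=0 \quad \Longleftrightarrow \quad v_m=0 \ \text{ for all } m \text{ with } 1 \le m \le n,$$
so in each case it suffices to check the vanishing of every $x_m$-part separately.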
{\it Case 1.} When $A=\{b_1, b_2\}$ and $B=\{c_1\}$, or $A=\{c_l \}$ and $B=\{c_{l+1}\}$: The left side of \eqref{quasi Garnir} can be decomposed into the sum of the $x_{a_1}$-part, \ldots , and the $x_{a_{i+1}}$-part. Since $g_{A,B}$ is a Garnir element also for $T_j$, the $x_{a_j}$-part of the left side of \eqref{quasi Garnir} is \begin{eqnarray*} \displaystyle \sum_{\sigma\in S(A,B)} (-1)^{j-1}\operatorname{sgn}(\sigma) \, e((\sigma T)_j) \otimes x_{a_j}&=&\displaystyle (-1)^{j-1}\sum_{\sigma\in S(A,B)} \operatorname{sgn}(\sigma) \, e(\sigma(T_j)) \otimes x_{a_j}\\ &=&\displaystyle (-1)^{j-1}g_{A,B} e(T_j) \otimes x_{a_j}=0. \end{eqnarray*} Hence \eqref{quasi Garnir} holds in this case. {\it Case 2.} When $A=\{a_1,\ldots,a_{i+1}\}$ and $B=\{b_1\}$: The left side of \eqref{quasi Garnir} can be decomposed into the sum of the $x_{a_1}$-part, \ldots, the $x_{a_{i+1}}$-part, and the $x_{b_1}$-part. To treat the Garnir relation, we may assume that $b_1<a_1<a_2<\cdots<a_{i+1}$. Fix an integer $j$ with $1\le j \le i+1$, and set $A_j:=A\backslash \{a_j\}$. Note that \begin{eqnarray*} S(A,B)=\{ \sigma\in S(A,B) \, | \, \sigma(a_j)=a_{j}\} \! \! \! &\sqcup& \! \! \! \{\sigma\in S(A,B)\, |\, \sigma(a_{j+1})= a_{j}\} \\ & \sqcup& \! \! \! \{\sigma\in S(A,B)\, |\, \sigma(b_1)= a_{j}\}. \end{eqnarray*} For $\sigma\in S(A,B)$, $ \sigma(a_j)=a_{j}$ if and only if $\sigma(b_1)\leq a_{j-1}$. Similarly, $\sigma(a_{j+1})= a_{j}$ if and only if $\sigma(b_1)\geq a_{j+1}$. If $\sigma(a_j) =a_j$, then $\sigma$ also belongs to $S_{T_j}(A_j,B)$, and we have $$\{\sigma \in S(A,B)\mid \sigma(a_j)=a_j\}= \{\sigma \in S_{T_j}(A_j,B)\mid \sigma(b_1)\leq a_{j-1}\},$$ and $\sigma(T_j)=(\sigma T)_{j}$. (For notational simplicity, we will write $S(A_j, B)$ for $S_{T_j}(A_j,B)$ below.) If $\sigma(a_{j+1}) =a_j$, then we have $\tau:= (\sigma(a_j) \ a_j) \cdot \sigma \in S(A_j, B)$. In fact, $$\tau(k) = \begin{cases} a_j & (k=a_j) \\ \sigma(a_j) & (k=a_{j+1}) \\ \sigma(k) & (k \ne a_j, a_{j+1}), \end{cases}$$ and hence $\tau$ only moves elements in $A_j \cup B$, and $\tau(k) < \tau(l)$ for $k, l \in A_j$ with $k<l$. We also have $\tau(b_1)=\sigma(b_1) \ge a_{j+1}$, $\operatorname{sgn}(\tau)=-\operatorname{sgn}(\sigma)$, and \begin{equation}\label{tau-sigma} \tau(T_j)=(\sigma T)_{j+1}. \end{equation} (In Example~\ref{sigma-tau}~(1) below, we will check \eqref{tau-sigma} very carefully to get a feeling for it. From the next paragraph on, we will leave similar computations to the reader as easy exercises.) Moreover, the map $$ f : \{ \sigma \in S(A,B) \mid \sigma(a_{j+1}) = a_j\} \longrightarrow \{ \tau \in S(A_j,B) \mid \tau(b_1) \ge a_{j+1} \} $$ defined by the above operation is bijective. In fact, the inverse map is given by $S(A_j, B) \ni \tau \longmapsto (a_j \ \tau(a_{j+1})) \cdot \tau$. Hence the $x_{a_j}$-part of the left side of \eqref{quasi Garnir} is \begin{eqnarray*} && \left( \sum_{ \substack{\sigma\in S(A,B) \\ \sigma(a_j)=a_{j}}}(-1)^{j-1}\operatorname{sgn}(\sigma)e((\sigma T)_{j}) +\sum_{\substack{\sigma\in S(A,B)\\ \sigma(a_{j+1})= a_{j}} }(-1)^{j}\operatorname{sgn}(\sigma)e((\sigma T)_{j+1}) \right) \otimes x_{a_j}\\ &=& \left( \sum_{ \substack{\sigma \in S(A_j,B) \\ \sigma(b_1)\leq a_{j-1}}}(-1)^{j-1}\operatorname{sgn}(\sigma)e(\sigma (T_j)) +\sum_{\substack{\tau\in S(A_j,B) \\ \tau(b_1)\geq a_{j+1} } }(-1)^{j-1}\operatorname{sgn}(\tau)e(\tau (T_{j})) \right) \otimes x_{a_j}\\ &=&(-1)^{j-1}g_{A_j,B}e(T_j)\otimes x_{a_j} \\ &=& 0. \end{eqnarray*} Here the last equality follows from the fact that $g_{A_j,B}$ is a Garnir element for $T_j$. It remains to check the $x_{b_1}$-part of the left side of \eqref{quasi Garnir}. Consider the tableau $T':= ((a_1\, \, b_1)T)_1$, that is, $$ T'= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_2 & a_1 & c_1& c_2 & \cdots & b_1 \\ a_3 & b_2 \\ \vdots \\ a_{i+1} \end{ytableau}. $$ Set $A':=A\backslash \{a_1\}$ and $B':=\{a_1\}$, and consider the map $$ f':\{ \sigma \in S(A,B) \mid \sigma(a_1)=b_1\} \ni \sigma \longmapsto (\sigma(b_1) \ b_1)\sigma \in S_{T'}(A',B'). $$ Of course, we have to check that $\tau := f'(\sigma)$ actually belongs to $ S_{T'}(A',B')$, but it follows from the fact that $$ \tau(k) = \begin{cases} b_1 & (k=b_1) \\ \sigma(b_1) & (k=a_1) \\ \sigma(k) & (k \ne a_1, b_1). \end{cases} $$ Moreover, $f'$ is bijective. In fact, the inverse map is given by $ S_{T'}(A',B') \ni \tau \longmapsto (\tau(a_1) \ b_1) \, \tau$. We also remark that $\operatorname{sgn}(\sigma)=-\operatorname{sgn}(f'(\sigma))$ and $(\sigma T)_1=f'(\sigma)T'$. Hence the $x_{b_1}$-part of the left side of \eqref{quasi Garnir} is $$ \Bigl(\sum_{\substack{\sigma\in S(A,B)\\ \sigma(a_1)=b_1}}\operatorname{sgn}(\sigma)\,e((\sigma T)_{1}) \Bigr) \otimes x_{b_1} =-g_{A',B'}e(T')\otimes x_{b_1} =0.$$ Below, we will use bijections similar to $f$ and $f'$ repeatedly. Each time, we will define the bijections explicitly, but we will not check that they work, leaving this to the reader as easy exercises. {\it Case 3.} When $A:=\{a_2,\ldots,a_{i+1}\}$ and $B:=\{b_1, b_2\}$: Note that the left side of \eqref{quasi Garnir} can be decomposed into the sum of the $x_{a_1}$-part, \ldots, the $x_{a_{i+1}}$-part, the $x_{b_1}$-part and the $x_{b_2}$-part. Fix $j$ with $2 \le j \le i+1$, and set $A_j:=A\backslash \{a_j\}$. To treat the Garnir relation, we may assume that $b_1<b_2<a_2<\cdots<a_{i+1}$. First, we treat the $x_{a_j}$-part for $j \ge 2$. Set \begin{eqnarray*} G_1&:=&\{\sigma\in S(A,B) \mid a_j<\sigma(b_1) \}, \\ G_2&:=&\{\sigma\in S(A,B) \mid \sigma(b_1)<a_j<\sigma(b_2)\}, \\ G_3&:=&\{\sigma\in S(A,B) \mid a_j>\sigma(b_2)\}, \end{eqnarray*} and \begin{eqnarray*} G'_1&:=&\{\sigma\in S(A_j,B) \mid a_j<\sigma(b_1)\}, \\ G'_2&:=&\{\sigma\in S(A_j,B) \mid \sigma(b_1)<a_j<\sigma(b_2)\},\\ G'_3&:=&\{\sigma\in S(A_j,B) \mid a_j>\sigma(b_2)\}. \end{eqnarray*} Then we have $$\{ \sigma \in S(A,B) \mid \sigma(b_1), \sigma(b_2) \ne a_j \} = G_1\sqcup G_2\sqcup G_3$$ and $$S(A_j,B)=G'_1 \sqcup G'_2\sqcup G'_3.$$ For $\sigma\in S(A,B)$, $\sigma\in G_1$ if and only if $\sigma(a_{j+2})=a_j$. Similarly, if $\sigma \in G_2$ (resp. $\sigma \in G_3$), then $\sigma(a_{j+1})=a_j$ (resp. $\sigma(a_j)=a_j$). Hence we have $G_3=G'_3$. Moreover, we have the following bijections \begin{eqnarray*} && f_1: G_1 \ni \sigma \longmapsto (a_j \, \sigma(a_{j+1})\, \sigma(a_{j}))\sigma \in G'_1,\\ && f_2: G_2 \ni \sigma \longmapsto (a_j \, \sigma(a_{j}))\sigma \in G'_2. \end{eqnarray*} Clearly, $\operatorname{sgn}(f_1(\sigma))=\operatorname{sgn}(\sigma)$ and $\operatorname{sgn}(f_2(\sigma)) =-\operatorname{sgn}(\sigma)$. We also remark that $(\sigma T)_{j+2}= f_1(\sigma) (T_j)$ for $\sigma \in G_1$, and $(\sigma T)_{j+1}= f_2(\sigma) (T_j)$ for $\sigma \in G_2$. For simplicity, set $\tau:=f_k(\sigma)$ for $\sigma \in G_k$.
By these bijections, we see that the $x_{a_j}$-part of the left side of \eqref{quasi Garnir} for $j \ge 2$ is \begin{eqnarray*} &&\Bigl( \sum_{\sigma \in G_1}(-1)^{j+2-1}\operatorname{sgn}(\sigma)e((\sigma T)_{j+2}) +\sum_{\sigma\in G_2}(-1)^{j+1-1}\operatorname{sgn}(\sigma)e((\sigma T)_{j+1}) \\ && \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad +\sum_{\sigma\in G_3}(-1)^{j-1}\operatorname{sgn}(\sigma)e((\sigma T)_{j}) \Bigr) \otimes x_{a_j} \\ &=& (-1)^{j-1}\Bigl( \sum_{\tau\in G'_1}\operatorname{sgn}(\tau)e(\tau (T_j)) +\sum_{\tau\in G'_2}\operatorname{sgn}(\tau)e(\tau (T_j))\\ && \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad +\sum_{\sigma \in G'_3=G_3}\operatorname{sgn}(\sigma)e(\sigma (T_j)) \Bigr) \otimes x_{a_j} \\ &=&(-1)^{j-1}g_{A_j,B}e(T_j) \otimes x_{a_j}\\ &=&0. \end{eqnarray*} Next we check the $x_{a_1}$-part. We decompose the $x_{a_1}$-part of the left side of \eqref{quasi Garnir} as follows: \begin{equation}\label{a_1 part} \left( \sum_{\substack{\sigma\in S(A,B)\\ \sigma(b_2)=a_{i+1} }}\operatorname{sgn}(\sigma)e((\sigma T)_{1}) + \sum_{\substack{\sigma\in S(A,B)\\ \sigma(a_{i+1})=a_{i+1} }}\operatorname{sgn}(\sigma)e((\sigma T)_{1}) \right )\otimes x_{a_1}. \end{equation} First, we consider the former, that is, the case $\sigma(b_2)=a_{i+1}$. Set $\widetilde{A}=(A\cup \{b_2\})\backslash \{a_{i+1}\}$, $\widetilde{B}=\{b_1\}$, and $\widetilde{T}:=(a_{i+1} \, \, a_i\, \, a_{i-1}\, \, a_{i-2}\, \, \cdots a_3\, \, a_2\, \, b_2)(T_1).$ Note that \begin{equation*} \widetilde{T}= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} b_2 & b_1 & c_1& c_2 & \cdots & a_1 \\ a_2 & a_{i+1} \\ \vdots \\ a_{i} \end{ytableau}. \end{equation*} We have a bijection $$ \begin{array}{ccc} \widetilde{f}:\{\sigma\in S(A,B) \mid \sigma(b_2)=a_{i+1}\} & \longrightarrow &S_{\widetilde{T}}(\widetilde{A},\widetilde{B}) \\%[0pt] \rotatebox{90}{$\in$} & & \rotatebox{90}{$\in$} \\ \sigma & \longmapsto & (a_{i+1} \; \sigma(a_2)\; \sigma(a_3) \; \cdots \; \sigma(a_{i+1}))\sigma \end{array} $$ and one can easily check that $(\sigma T)_1=\widetilde{f}(\sigma) \widetilde{T}$. For simplicity, set $\tau:=\widetilde{f}(\sigma)$. Then we have $$\sum_{\substack{\sigma\in S(A,B)\\ \sigma(b_2)=a_{i+1} }}\operatorname{sgn}(\sigma)e((\sigma T)_{1}) =(-1)^i\sum_{\substack{\tau\in S_{\widetilde{T}}(\widetilde{A},\widetilde{B}) }}\operatorname{sgn}(\tau)e(\tau \widetilde{T}) =(-1)^i g_{\widetilde{A},\widetilde{B}}e(\widetilde{T})=0. $$ Next, we consider the case $\sigma(a_{i+1})=a_{i+1}$ in \eqref{a_1 part}. Set $\overline{A}=A\backslash \{a_{i+1}\}$ and $$ \overline{T}:= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_{i+1} & b_1 & c_1& c_2 & \cdots & a_1 \\ a_2 & b_2 \\ a_3\\ \vdots \\ a_{i} \end{ytableau}. $$ We have $$\{ \sigma \in S(A,B) \mid \sigma(a_{i+1})=a_{i+1}\}=S_{\overline{T}}(\overline{A},B),$$ and we have $$(\sigma T)_1 = (\sigma(a_2) \ \sigma(a_3) \ \cdots \sigma(a_i) \ a_{i+1}) \, \sigma \overline{T}$$ for $\sigma \in S(A,B)$ with $\sigma(a_{i+1})=a_{i+1}$. Hence $e((\sigma T)_1)=(-1)^{i-1} e( \sigma \overline{T})$ and \begin{eqnarray*} \sum_{\substack{\sigma\in S(A,B)\\ \sigma(a_{i+1})=a_{i+1} }}\operatorname{sgn}(\sigma)e((\sigma T)_{1}) &=&(-1)^{i-1}\sum_{\sigma \in S_{\overline{T}}(\overline{A},B)}\operatorname{sgn}(\sigma)e(\sigma \overline{T}) \\ &=&(-1)^{i-1} g_{\overline{A},B}e(\overline{T})\\ &=&0. \end{eqnarray*} It remains to check the $x_{b_1}$-part and the $x_{b_2}$-part of the left side of \eqref{quasi Garnir}.
Set $A':=A\backslash \{a_2\}$, $B':=\{b_2,a_2\}$ and $T':= ((b_1\, \, b_2\, \, a_2)T)_2$. Note that \begin{equation*} T'= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_2 & c_1& c_2 & \cdots & b_1 \\ a_3 & a_2 \\ \vdots \\ a_{i+1} \end{ytableau}. \end{equation*} We have a bijection $$f':\{\sigma\in S(A,B)\mid \sigma(a_2)=b_1 \}\ni \sigma \longmapsto (\sigma(b_2)\, \, \sigma(b_1)\, \, b_1)\sigma \in S_{T'}(A',B'),$$ and then $(\sigma T)_2 = f'(\sigma)(T')$. Hence the $x_{b_1}$-part of the left side of \eqref{quasi Garnir} is \begin{eqnarray*} \left(-\sum_{\substack{\sigma\in S(A,B)\\ \sigma(a_2)=b_1}}\operatorname{sgn}(\sigma)e((\sigma T)_{2}) \right) \otimes x_{b_1} =-g_{A',B'}e(T')\otimes x_{b_1}=0. \end{eqnarray*} Set $A'':=A\backslash \{a_2\}$, $B'':=\{b_1,a_2\}$ and $T'':= ((b_2\, \, a_2)T)_2$. Note that \begin{equation*} T''= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots & b_2 \\ a_3 & a_2 \\ \vdots \\ a_{i+1} \end{ytableau}. \end{equation*} Set \begin{eqnarray*} L_1&=& \{\sigma \in S(A,B) \, \mid \, \sigma(a_2)=b_2\},\\ L_2&=& \{\sigma \in S(A,B) \, \mid \, \sigma(a_3)=b_2\}, \end{eqnarray*} and \begin{eqnarray*} L'_1&=& \{\tau \in S_{T''}(A'',B'') \, \mid \, \tau(b_1)=b_1\},\\ L'_2&=& \{\tau \in S_{T''}(A'',B'') \, \mid \, \tau(a_3)=b_1\}. \end{eqnarray*} If $\sigma\in L_1$ (resp. $\sigma\in L_2$), then $\sigma(b_1)=b_1$ (resp. $\sigma(a_2)=b_1$). We also have $$\{\sigma \in S(A,B) \mid \sigma(b_1), \sigma(b_2)\neq b_2\}=L_1\sqcup L_2$$ and $$S_{T''}(A'',B'')=L'_1\sqcup L'_2.$$ We have bijections $$f_1'':L_1 \ni \sigma \longmapsto (\sigma(b_2)\, \, b_2)\sigma \in L'_1$$ and $$f_2'':L_2 \ni \sigma \longmapsto (\sigma(b_2)\, \, b_2\, \, b_1)\sigma \in L'_2.$$ Note that $(\sigma T)_2 = f''_1(\sigma)(T'')$ for $\sigma \in L_1$, and $(\sigma T)_3 = f_2''(\sigma)(T'')$ for $\sigma \in L_2$. As before, set $\tau:= f_k''(\sigma)$ for $\sigma \in L_k$. Then the $x_{b_2}$-part of the left side of \eqref{quasi Garnir} is \begin{eqnarray*} &&\left(\sum_{\substack{\sigma\in L_1}}(-1)^{2-1}\operatorname{sgn}(\sigma)e((\sigma T)_{2}) +\sum_{\substack{\sigma\in L_2}}(-1)^{3-1}\operatorname{sgn}(\sigma)e((\sigma T)_{3}) \right) \otimes x_{b_2}\\ &=&\left(\sum_{\substack{\tau\in L'_1}}\operatorname{sgn}(\tau)e(\tau T'') +\sum_{\substack{\tau\in L'_2 }}\operatorname{sgn}(\tau)e(\tau T'') \right) \otimes x_{b_2}\\ &=&g_{A'',B''}e(T'') \otimes x_{b_2}\\ &=&0, \end{eqnarray*} where the last equality holds since $g_{A'',B''}$ is a Garnir element for $T''$. So we are done. \end{proof} \begin{ex}\label{sigma-tau} (1) Here we will check \eqref{tau-sigma} step by step. For simplicity, set $j=2$ (so $\sigma(a_3)=a_2$ now).
Then we have $$ \tau(k)=\begin{cases} a_2 & \text{if $k=a_2$,} \\ \sigma(a_2) & \text{if $k=a_3$,} \\ \sigma(k) & \text{otherwise,} \end{cases} $$ $$T_2= \ytableausetup {mathmode, boxsize=1.8em} \begin{ytableau} a_1 & b_1 & c_1 & \cdots & a_2 \\ a_3 & b_2 \\ a_4\\ \vdots \\ \end{ytableau} $$ and $$ \tau(T_2)= \ytableausetup {mathmode, boxsize=2.7em} \begin{ytableau} \tau(a_1) & \tau(b_1) & \tau(c_1) & \cdots & \tau(a_2) \\ \tau(a_3) & \tau(b_2) \\ \tau(a_4)\\ \vdots \\ \end{ytableau} = \ytableausetup {mathmode, boxsize=2.7em} \begin{ytableau} \sigma(a_1) & \sigma(b_1) & c_1 & \cdots & a_2 \\ \sigma(a_2) & b_2 \\ \sigma(a_4)\\ \vdots \\ \end{ytableau}. $$ Since $$ \sigma T = \ytableausetup {mathmode, boxsize=2.7em} \begin{ytableau} \sigma(a_1) & \sigma(b_1) & c_1 & \cdots \\ \sigma(a_2) & b_2 \\ a_2 \\ \sigma(a_4)\\ \vdots \\ \end{ytableau}, $$ we have $\tau(T_2)=(\sigma T)_3$. (2) For the tableau $T$ of \eqref{T example} in Example~\ref{Garnir example}, we will check that the $x_1$-part of $$\sum_{\sigma \in S(A,B)}\operatorname{sgn}(\sigma)\,\partial_2(e(\sigma T)\otimes 1)$$ is 0 for $A=\{4,5\}$ and $B=\{2,3\}$. The $x_1$-part is $$ \Bigg( e(\ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} 4 &2 & 6& 1\\ 5 & 3 \\ \end{ytableau}) - e( \begin{ytableau} 3 &2 & 6& 1\\ 5 & 4 \\ \end{ytableau}) + e(\begin{ytableau} 3 &2 & 6& 1\\ 4 & 5 \\ \end{ytableau}) -e( \begin{ytableau} 2 &3 & 6 & 1\\ 4 & 5 \\ \end{ytableau})$$ $$\qquad \qquad \qquad \qquad+e( \begin{ytableau} 2 &3 & 6 & 1\\ 5& 4 \\ \end{ytableau}) +e( \begin{ytableau} 2 &4 & 6& 1\\ 3 & 5 \\ \end{ytableau}) \Biggr)\otimes x_1, $$ but we have $$ e(\begin{ytableau} 3 &2 & 6 & 1\\ 4 & 5 \\ \end{ytableau}) -e( \begin{ytableau} 2 &3 & 6 & 1\\ 4 & 5 \\ \end{ytableau}) +e( \begin{ytableau} 2 &4 & 6 & 1\\ 3 & 5 \\ \end{ytableau})=0 $$ and $$ e( \begin{ytableau} 4 &2 & 6 & 1\\ 5 & 3 \\ \end{ytableau}) -e( \begin{ytableau} 3 &2 & 6 & 1\\ 5 & 4 \\ \end{ytableau}) +e( \begin{ytableau} 2 &3 & 6 & 1\\ 5& 4 \\ \end{ytableau})=0. $$ In fact, we can get the former (resp. latter) by applying the Garnir element $g_{A,B}$ for $A=\{3,4\},B=\{2\}$ (resp. $A=\{4\},B=\{2,3\}$) to $$ \begin{ytableau} 3 &2 & 6 & 1\\ 4 & 5 \\ \end{ytableau} \qquad \qquad \Big(\, \text{resp.} \ \begin{ytableau} 5 &2 & 6 & 1\\ 4 & 3 \\ \end{ytableau} \, \Bigr).$$ Recall that $$e( \begin{ytableau} 4 &2 & 6 & 1\\ 5 & 3 \\ \end{ytableau}) =- e( \begin{ytableau} 5 &2 & 6 & 1\\ 4 & 3 \\ \end{ytableau} ), $$ and the same is true for the related tableaux. \end{ex} \noindent{\it The proof of Theorem~\ref{1st thm}.} First, we will show that ${\mathcal F}_\bullet^{(n-2,2)}$ is a chain complex. For the tableau $T$ of \eqref{T for Gor} and any permutation $\sigma$ of $\{c_1, c_2, \ldots \}$, we have $e(T)= e(\sigma(T))$. Hence it is easy to see that $\partial_{i-1} \partial_i=0$ holds for $3 \le i \le n-3$.
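To make the cancellation behind this explicit (a routine check, spelled out here for the reader's convenience): fix $j,k$ with $1 \le j<k \le i+1$. Removing $a_k$ from the first column of $T$ and then $a_j$ from that of $T_k$ contributes the sign $(-1)^{k-1}(-1)^{j-1}$, whereas removing $a_j$ first and then $a_k$ (whose position in the first column of $T_j$ has dropped to $k-1$) contributes $(-1)^{j-1}(-1)^{k-2}$. The two resulting tableaux agree except for the order in which $a_j$ and $a_k$ are appended to the first row; since both appended entries lie in columns of length one, the corresponding polytabloids coincide by the observation above, and the two contributions to the coefficient of $x_{a_j}x_{a_k}$ in $\partial_{i-1}\partial_i(e(T)\otimes 1)$ cancel:
$$\bigl((-1)^{k-1}(-1)^{j-1}+(-1)^{j-1}(-1)^{k-2}\bigr)\, e\bigl((T_k)_j\bigr)\otimes x_{a_j}x_{a_k}=0.$$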
For \begin{equation}\label{3 rows} T= \ytableausetup {mathmode, boxsize=1.5em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots \\ a_2 & b_2 \\ a_3 \end{ytableau} \end{equation} we have $$ T_1= \ytableausetup {mathmode, boxsize=1.5em} \begin{ytableau} a_2 & b_1 & c_1& c_2 & \cdots \\ a_3 & b_2 \end{ytableau}, \qquad T_2= \ytableausetup {mathmode, boxsize=1.5em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots \\ a_3 & b_2 \\ \end{ytableau}, \qquad T_3= \ytableausetup {mathmode, boxsize=1.5em} \begin{ytableau} a_1 & b_1 & c_1& c_2 & \cdots \\ a_2 & b_2 \\ \end{ytableau} $$ and \begin{eqnarray*} \partial_1 \partial_2(e(T) \otimes 1) &=& \partial_1(e(T_1) \otimes x_{a_1}-e(T_2) \otimes x_{a_2} + e(T_3) \otimes x_{a_3}) \\ &=& x_{a_1} f_{T_1} -x_{a_2}f_{T_2}+x_{a_3}f_{T_3} \\ &=& (x_{a_1}(x_{a_2}-x_{a_3})-x_{a_2}(x_{a_1}-x_{a_3})+x_{a_3}(x_{a_1}-x_{a_2}))(x_{b_1}-x_{b_2}) \\ &=& 0,\nonumber \end{eqnarray*} so $\partial_1 \partial_2=0$. Finally, we will show that $\partial_{n-3} \partial_{n-2}=0$. Let $T \in \operatorname{Tab}(1^n)$ be the tableau in \eqref{T for Gor last}. Then $\partial_{n-3} \partial_{n-2}(e(T) \otimes 1)$ equals $$ \sum_{1 \le j < k < l \le n} (-1)^{j+k+l}\biggl( e \Bigl ( \, \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} \vdots & j & l\\ \vdots & k\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) - e \Bigl(\, \begin{ytableau} \vdots & j & k\\ \vdots & l\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) + e \Bigl(\, \begin{ytableau} \vdots & k & j\\ \vdots & l\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) \biggr ) \otimes x_{j}x_{k} x_{l}, $$ where all of the first columns of the above three tableaux are the ``transpose'' of $$\ytableausetup {boxsize=2.5em}\begin{ytableau} 1 & 2 & \cdots & j-1 & j+1 & \cdots & k-1 & k+1 & \cdots & l-1 & l+1 &\cdots & n \end{ytableau}.$$ However, we have $$ \ytableausetup {boxsize=1.3em} e \Bigl ( \, \begin{ytableau} \vdots & j & l\\ \vdots & k\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) - e \Bigl(\, \begin{ytableau} \vdots & j & k\\ \vdots & l\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) + e \Bigl(\, \begin{ytableau} \vdots & k & j\\ \vdots & l\\ \vdots \\ \vdots \end{ytableau} \, \Bigr ) = 0$$ by the Garnir relation. Hence $\partial_{n-3} \partial_{n-2}(e(T) \otimes 1)=0$, and $\partial_{n-3} \partial_{n-2}=0$. So we have shown that ${\mathcal F}_\bullet^{(n-2,2)}$ is a chain complex. Since $\operatorname{Im} \partial_1=I^{\rm Sp}_{(n-2,2)}$, ${\mathcal F}_\bullet^{(n-2,2)}$ is a subcomplex of a minimal free resolution of $R/I^{\rm Sp}_{(n-2,2)}$. Recall that we regard $F_i$ as an ${\mathfrak S}_n$-module by $\sigma (v \otimes f) := \sigma v \otimes \sigma f \in V_\lambda \otimes_K R(-j) =F_i$, where $\lambda$ is a suitable partition of $n$ and $j$ is a suitable integer uniquely determined by $i$. In the previous section, we have shown that $\partial_i :F_i \to F_{i-1}$ is an ${\mathfrak S}_n$-homomorphism. The restriction $$[\partial_i]_j : [F_i]_j =V_\lambda \otimes_K [R(-j)]_j = V_\lambda \otimes_K K \longrightarrow V_{\lambda'} \otimes_K R_l = [F_{i-1}]_j$$ is also an ${\mathfrak S}_n$-homomorphism, where $l=1$ if $2 \le i \le n-3$, and $l=2$ if $i=1, n-2$. Since $V_\lambda \otimes_K K \cong V_\lambda$ is irreducible as an ${\mathfrak S}_n$-module and $[\partial_i]_j$ is clearly nonzero, we conclude that $[\partial_i]_j$ is injective.
Since $\mu(\operatorname{Ker} \partial_{i-1})=\beta_{i,j}(R/I^{\rm Sp}_{(n-2,2)}) =\dim V_\lambda = \dim_K [\operatorname{Im} \partial_i]_j $ for $i \ge 2$ by Lemma~\ref{Gor Betti}, ${\mathcal F}_\bullet^{(n-2,2)}$ is a (minimal) free resolution of $R/I^{\rm Sp}_{(n-2,2)}$. Here $\mu(-)$ denotes the minimal number of generators as an $R$-module. \qed \section{The case $(d,d,1)$: Construction} For $R/I^{\rm Sp}_{(d,d,1)}$, we define the chain complex \begin{equation}\label{linear} {\mathcal F}_\bullet^{(d,d,1)}:0 \longrightarrow F_d \stackrel{\partial_d}{\longrightarrow} F_{d-1} \stackrel{\partial_{d-1}}{\longrightarrow} \cdots \stackrel{\partial_2}{\longrightarrow} F_1 \stackrel{\partial_1}{\longrightarrow} F_0 \longrightarrow 0 \end{equation} of graded free $R$-modules as follows. Here $F_0 =R$ and $$ F_i = V_{(d, d-i+1, 1^i)} \otimes_K R(-d-i-1) $$ for $1 \le i \le d$. As before, set $\partial_1(e(T)\otimes 1) :=f_T \in R =F_0$. To describe $\partial_i$ for $i \ge 2$, we need some preparation. For \begin{equation}\label{T for linear} T= \ytableausetup {mathmode, boxsize=3.1em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-i+1}& b_{d-i+2}& \cdots & b_d \\ a_2 & c_2 & \cdots & c_{d-i+1} \\ \vdots \\ a_{i+2} \end{ytableau} \end{equation} in $\operatorname{Tab}(d, d-i+1, 1^i)$ and $j$ with $1 \le j \le i+2$, set $$ T_j := \ytableausetup {mathmode, boxsize=3.1em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-i+1} & b_{d-i+2} & b_{d-i+3} & \cdots & b_d \\ a_2 & c_2 & \cdots & c_{d-i+1} & a_j \\ \vdots \\ a_{j-1}\\ a_{j+1} \\ \vdots\\ a_{i+2} \end{ytableau} $$ in $\operatorname{Tab}(d, d-i+2, 1^{i-1})$. Then we set \begin{equation}\label{dif for linear} \partial_i(e(T) \otimes 1) := \sum_{j=1}^{i+2} \sum_{\sigma \in H} (-1)^{j-1} e(\sigma(T_j)) \otimes x_{a_j} \in V_{(d, d-i+2, 1^{i-1})} \otimes_K R(-d-i) =F_{i-1} \end{equation} for $3 \le i \le d-1$, where $H$ is the set of permutations of $\{ b_{d-i+2}, b_{d-i+3}, \ldots ,b_d \}$ satisfying $\sigma(b_{d-i+3}) < \sigma(b_{d-i+4}) < \cdots < \sigma(b_d)$, and $$\partial_2(e(T) \otimes 1) = \sum_{j=1}^{4} (-1)^{j-1} e({T_j}) \otimes x_{a_j} \in V_{(d, d, 1)} \otimes_K R(-d-2) =F_1$$ for $T \in \operatorname{Tab}(d, d-1, 1,1)$. That these $\partial_i$ are well-defined will be shown in Theorem~\ref{wd linear}. We are not sure whether $\partial_d$ can be defined in the same way, that is, the well-definedness is not clear in this case. However, for our purpose (i.e., to show $\partial_{d-1} \circ \partial_d=0$), it suffices to define $\partial_d(e(T) \otimes 1)$ for $T \in \operatorname{SYT}(d, 1^{d+1})$. Since $\{ e(T) \mid T \in \operatorname{SYT}(d, 1^{d+1})\}$ is a basis, we do not have to care about the well-definedness. In any case, we define $\partial_d(e(T) \otimes 1)$ for $T \in \operatorname{SYT}(d, 1^{d+1})$ by \eqref{dif for linear}.
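For orientation, we record the size of the index set $H$ in \eqref{dif for linear}: an element $\sigma \in H$ is uniquely determined by the value $\sigma(b_{d-i+2})$, since the images of $b_{d-i+3}, \ldots, b_d$ are then forced to be the remaining letters in increasing order. Hence
$$\# H = \#\{b_{d-i+2}, b_{d-i+3}, \ldots, b_d\} = i-1,$$
so the variable $x_{a_j}$ in \eqref{dif for linear} is multiplied by a sum of exactly $i-1$ polytabloids (up to the global sign $(-1)^{j-1}$). Note that this count is consistent with the formula for $\partial_2$, where $i-1=1$, and it is visible in Example~\ref{(4,4,1)} below, where $d=i=4$ and each coefficient of $\partial_4$ consists of three terms.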
\begin{ex}\label{(4,4,1)} Our minimal free resolution ${\mathcal F}_\bullet^{(4,4,1)}$ of $R/I^{\rm Sp}_{(4,4,1)}$ is of the form $$ 0 \longrightarrow V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & {} & {}\\ \\ \\ \\ \\ \\ \end{ytableau}}\otimes_K R(-9) \stackrel{\partial_{4}}{\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & {} & \\ {} & \\ \\ \\ \\ \end{ytableau}} \otimes_K R(-8) \stackrel{\partial_{3}}{\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & {} & \\ {} & & \\ \\ \\ \end{ytableau}} \otimes_K R(-7) \qquad \qquad \qquad \quad $$ $$ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \stackrel{\partial_{2}}{\longrightarrow} V_{\ytableausetup{boxsize=0.25em} \begin{ytableau} {} & {} & {} & \\ {} & {} & & \\ \\ \end{ytableau}} \otimes_K R(-6) \stackrel{\partial_{1}}{\longrightarrow} R \longrightarrow 0. $$ The differential maps are given by \begin{eqnarray*} \partial_4 \bigl ( e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 5 \\ 6 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) \otimes 1 \bigr) &=& \bigl ( \, e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 5 & 2 & 3 & 4\\ 6 & 1 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) + e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 5 & 3 & 2 & 4\\ 6 & 1 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) + e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 5 & 4 & 2 & 3\\ 6 & 1 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) \, \bigr ) \otimes x_1 \\ & & - \bigl ( \ e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 6 & 5 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr ) + e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 3 & 2 & 4\\ 6 & 5 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) + e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 4 & 2 & 3\\ 6 & 5 \\ 7 \\ 8 \\ 9 \end{ytableau} \, \bigr) \, \bigr) \otimes x_5 \\ & & \qquad \vdots \\ & & \qquad \vdots \\ & & - ( \ e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 5 & 9 \\ 6 \\ 7 \\ 8 \end{ytableau} \, \bigr ) + e \bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 3 & 2 & 4\\ 5 & 9 \\ 6 \\ 7 \\ 8 \end{ytableau} \, \bigr) + e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 4 & 2 & 3\\ 5 & 9 \\ 6 \\ 7 \\ 8 \end{ytableau} \, \bigr) \, \bigr ) \otimes x_9 \end{eqnarray*} and \begin{eqnarray*} \partial_2 \bigl( e\bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 5 & 6 & 7\\ 8 \\ 9 \end{ytableau} \, \bigr ) \otimes 1 \bigr ) &=& e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 5 & 2 & 3 & 4\\ 8 & 6 & 7 & 1\\ 9 \\ \end{ytableau} \, \bigr ) \otimes x_1 - e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 8 & 6 & 7 & 5\\ 9 \\ \end{ytableau} \, \bigr ) \otimes x_5\\ & & + e\bigl ( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 5 & 6 & 7 & 8\\ 9 \\ \end{ytableau} \, \bigr ) \otimes x_8 - e \bigl( \, \ytableausetup {mathmode, boxsize=1em} \begin{ytableau} 1 & 2 & 3 & 4\\ 5 & 6 & 7 & 9\\ 8 \\ \end{ytableau} \,\bigr ) \otimes x_9. \end{eqnarray*} \end{ex} \begin{thm}\label{2nd thm} If $\operatorname{char}(K)=0$, the complex ${\mathcal F}_\bullet^{(d,d,1)}$ of \eqref{linear} is a minimal free resolution of $R/I^{\rm Sp}_{(d,d,1)}$. 
\end{thm} \section{The case $(d,d,1)$: Proof} \begin{lem}\label{linear Betti} For all $i$ with $1\leq i \leq d$, we have $$\beta_{i,i+d+1}(R/I^{\rm Sp}_{(d,d,1)})= \dim_K V_{(d,d-i+1,1^{i})}. $$ \end{lem} \begin{proof} By the hook formula, we have \begin{eqnarray*} \dim_K V_{(d,d-i+1,1^{i})}&=& \frac{(2d+1)!}{(d+i+1)\frac{d!}{i}(d+1)(d-i)!i!}\\ &=&\frac{(2d+1)!}{(d+i+1)(d+1)!(d-i)!(i-1)!} \end{eqnarray*} for all $i$ with $1\leq i \leq d$. Since $I^{\rm Sp}_{(d,d,1)}$ is a Cohen-Macaulay ideal of codimension $d$ and has a $(d+2)$-linear resolution, we have $$\beta_i(R/I^{\rm Sp}_{(d,d,1)})=\beta_{i,i+d+1}(R/I^{\rm Sp}_{(d,d,1)})$$ for all $i \ge 1$, and \cite[Theorem~3.5.17]{V} implies that \begin{eqnarray*} \beta_i(R/I^{\rm Sp}_{(d,d,1)})&=& \prod_{j=1}^{i-1}\frac{d+1+j}{i-j} \prod_{j=i+1}^{d}\frac{d+1+j}{j-i} \\ &=&\frac{(d+i)!}{(i-1)!(d+1)!} \cdot \frac{(2d+1)!}{(d-i)!(d+i+1)!} \\ &=&\frac{(2d+1)!}{(d+i+1)(d+1)!(d-i)!(i-1)!}. \end{eqnarray*} So we are done. \end{proof} \begin{thm}\label{wd linear} The maps $\partial_i$ defined in the previous section are well-defined. \end{thm} \begin{proof} The well-definedness of $\partial_1$ is nothing other than that of \eqref{Isom}, and that of $\partial_d$ is clear by the construction. So we may assume that $2 \leq i \leq d-1$. By Proposition~\ref{relation generated}, it suffices to show that \begin{equation}\label{quasi Garnir2} \sum_{\sigma\in S_T(A,B)} \operatorname{sgn}(\sigma) \,\partial_i(e(\sigma T)\otimes 1) \end{equation} equals 0 for the tableau $T$ of \eqref{T for linear}. The cases $A=\{b_k, c_k\}$ and $B=\{b_{k+1}\}$ for $2 \le k \le d-i$ are easy. The non-trivial cases are \begin{itemize} \item[(1)] $A=\{a_1,\ldots,a_{i+2}\}, B=\{b_2 \}$, \item[(2)] $A=\{a_2,\ldots,a_{i+2}\}, B=\{b_2, c_2\}$, \item[(3)] $A=\{b_{d-i+1}, c_{d-i+1}\}$, $B=\{b_{d-i+2}\}$. \end{itemize} First, we treat the case (1). We may assume that $b_2<a_1<a_2<\cdots<a_{i+2}$. Fix $j$ with $1\leq j \leq i+2$, and set $A_j:=A\backslash \{a_j\}$. Since every $\tau \in H$ is a permutation of $\{ b_{d-i+2}, b_{d-i+3}, \ldots ,b_d \}$, the permutations $\sigma\in S(A_j,B)$ and $\tau \in H$ move disjoint sets of letters. Hence we have \begin{eqnarray*} (\, \text{the $x_{a_j}$-part of \eqref{quasi Garnir2}} \, )=(-1)^{j-1}\sum_{\tau \in H} g_{A_j,B} e(\tau(T_j)) \otimes x_{a_j}. \end{eqnarray*} For each $\tau$, we can show that $g_{A_j,B} e(\tau(T_j)) =0$ by an argument similar to the proof of Theorem \ref{wdGor}. That $(\, \text{the $x_{b_2}$-part of \eqref{quasi Garnir2}} \, )=0$, as well as the case (2), can be proved in a similar way. For the case (3), the tableau $$ \ytableausetup {mathmode, boxsize=4.3em} \begin{ytableau} a_1 & b_2 & \cdots & \sigma(b_{d-i+1})& b_k & \cdots &\sigma(b_{d-i+2})& \cdots \\ a_2 & c_2 & \cdots & \sigma(c_{d-i+1}) & a_j \\ \vdots \\ \end{ytableau} $$ appears in the $x_{a_j}$-part of \eqref{quasi Garnir2} for $d-i+3 \le k \le d$ (here we assume that $j \ge 3$ for notational simplicity). However, the above tableau and $$ \ytableausetup {mathmode, boxsize=4.3em} \begin{ytableau} a_1 & b_2 & \cdots & b_k &\sigma(b_{d-i+1})&\sigma(b_{d-i+2})& \cdots \\ a_2 & c_2 & \cdots & a_j & \sigma(c_{d-i+1}) \\ \vdots \\ \end{ytableau} $$ give the same polytabloid $e(-)$, so the previous argument also works. \end{proof} \medskip \noindent{\it The proof of Theorem~\ref{2nd thm}.} First, we will show that ${\mathcal F}^{(d,d,1)}_\bullet$ is a chain complex. It is easy to see that $\partial_{i-1}\partial_i =0$ for $3 \le i \le d$.
To show $\partial_1\partial_2 =0$, consider $$ T= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-1} & b_d \\ a_2 & c_2 & \cdots & c_{d-1} \\ a_3\\ a_4 \end{ytableau}. $$ Then we have $$T_1= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_2 & b_2 & \cdots & b_{d-1} & b_d \\ a_3 & c_2 & \cdots & c_{d-1} & a_1\\ a_4 \end{ytableau}, \qquad T_2= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-1} & b_d \\ a_3 & c_2 & \cdots & c_{d-1} & a_2\\ a_4 \end{ytableau} $$ $$T_3= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-1} & b_d \\ a_2 & c_2 & \cdots & c_{d-1} & a_3\\ a_4 \end{ytableau}, \qquad T_4= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} a_1 & b_2 & \cdots & b_{d-1} & b_d \\ a_2 & c_2 & \cdots & c_{d-1} & a_4\\ a_3 \end{ytableau} $$ in the notation of the previous section, and it holds that $$ \partial_1 \partial_2 (e(T) \otimes 1)= x_{a_1}f_{T_1}-x_{a_2}f_{T_2}+x_{a_3}f_{T_3}-x_{a_4}f_{T_4}. $$ One can show that this equals 0 by a direct computation (note that each term on the right-hand side is divisible by $\prod_{i=2}^{d-1}(x_{b_i} -x_{c_i})$). Now we have shown that ${\mathcal F}_\bullet^{(d,d,1)}$ is a chain complex. Moreover, it is a subcomplex of a minimal free resolution of $R/I^{\rm Sp}_{(d,d,1)}$, since $\operatorname{Im} \partial_1=I^{\rm Sp}_{(d,d,1)}$. The last step of the proof is the same as that of Theorem~\ref{1st thm}. As in the case of ${\mathcal F}_\bullet^{(n-2,2)}$, we can regard ${\mathcal F}_\bullet^{(d,d,1)}$ as a chain complex of ${\mathfrak S}_n$-modules. By the irreducibility of the ${\mathfrak S}_n$-modules $V_\lambda$, the ${\mathfrak S}_n$-homomorphism $$[\partial_i]_{d+i+1}: [F_i]_{d+i+1} = [V_{(d, d-i+1, 1^i)} \otimes_K R(-d-i-1)]_{d+i+1} \cong V_{(d, d-i+1, 1^i)} \longrightarrow [F_{i-1}]_{d+i+1} $$ is injective for $i \ge 1$. By Lemma~\ref{linear Betti}, we have $\operatorname{Ker} \partial_{i-1}=\operatorname{Im} \partial_i$ for $i \ge 2$, and hence ${\mathcal F}_\bullet^{(d,d,1)}$ is a (minimal) free resolution of $R/I^{\rm Sp}_{(d,d,1)}$. \qed \section*{Acknowledgements} The authors are grateful to Professor Satoshi Murai for stimulating discussion at the beginning of this project. They also thank Professors Steven V. Sam, Junzo Watanabe, Alexander Woo, and the anonymous referees of the previous version for valuable comments/information on \cite{BSS,LL} and related works.
\section{Introduction} \begin{table*} \caption{Parameters found in the literature of the stars used in this study.} \vspace{-2mm} \centering \tabcolsep=15pt \begin{tabular}{lcrrrrc} \hline\hline \multicolumn{1}{c}{KIC} & \multicolumn{1}{c}{\num} & \multicolumn{1}{c}{M} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{P$_{\rm rot}$} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{Ref.} \\ & \multicolumn{1}{c}{[$\mu$Hz]} & \multicolumn{1}{c}{[M$_\odot$]} & \multicolumn{1}{c}{[R$_\odot$]}& \multicolumn{1}{c}{[days]} & \multicolumn{1}{c}{[Gyr]}& \multicolumn{1}{c}{} \\ \hline 3241581$^\star$ & 2969$\pm$17& 1.04$\pm$0.02 & 1.08$\pm$0.10 & 26.3$\pm$2.0 & 3.8$\pm$0.6 & 1 \\ 3656476$^\star$ & 1947$\pm$78& 1.10$\pm$0.03 & 1.32$\pm$0.01 & 31.7$\pm$3.5 & 8.9$\pm$0.4 & 2 \\ 4914923$^\star$ & 1844$\pm$73& 1.04$\pm$0.03 & 1.34$\pm$0.02 & 20.5$\pm$2.8 & 7.0$\pm$0.5 & 2 \\ 5084157 & 1788$\pm$14& 1.06$\pm$0.13 & 1.36$\pm$0.08 & 22.2$\pm$2.8 & 7.8$\pm$3.4 & 3 \\ 5774694 & 3671$\pm$20& 1.06$\pm$0.05 & 1.00$\pm$0.03 & 12.1$\pm$1.0 & 1.9$\pm$1.8 & 3 \\ 6116048$^\star$ & 2098$\pm$84& 1.05$\pm$0.03 & 1.23$\pm$0.01 & 17.3$\pm$2.0 & 6.1$\pm$0.5 & 2 \\ 6593461 & 2001$\pm$18& 0.94$\pm$0.16 & 1.29$\pm$0.07 & 25.7$\pm$3.0 & 10.7$\pm$4.4 & 3 \\ 7296438$^\star$ & 1846$\pm$73& 1.10$\pm$0.02 & 1.37$\pm$0.01 & 25.2$\pm$2.8 & 6.4$\pm$0.6 & 2 \\ 7680114$^\star$ & 1709$\pm$58& 1.09$\pm$0.03 & 1.40$\pm$0.01 & 26.3$\pm$1.9 & 6.9$\pm$0.5 & 2 \\ 7700968 & 2010$\pm$25& 1.00$\pm$0.12 & 1.21$\pm$0.06 & 36.2$\pm$4.2 & 7.5$\pm$3.1 & 3 \\ 9049593 & 1983$\pm$13& 1.13$\pm$0.14 & 1.40$\pm$0.06 & 12.4$\pm$2.5 & 6.4$\pm$3.4 & 3 \\ 9098294$^\star$ & 2347$\pm$84& 0.98$\pm$0.02 & 1.15$\pm$0.01 & 19.8$\pm$1.3 & 8.2$\pm$0.5 & 2 \\ 10130724 & 2555$\pm$27& 0.85$\pm$0.12 & 1.08$\pm$0.05 & 32.6$\pm$3.0 & 13.8$\pm$5.0 & 3 \\ 10215584 & 2172$\pm$28& 0.99$\pm$0.13 & 1.12$\pm$0.05 & 22.2$\pm$2.9 & 6.8$\pm$3.5 & 3 \\ 10644253$^\star$ & 2892$\pm$157& 1.09$\pm$0.09& 1.09$\pm$0.02 & 10.9$\pm$0.9 & 0.9$\pm$0.3& 2 \\ 10971974 & 2231$\pm$6& 1.04$\pm$0.12 & 1.09$\pm$0.03 & 26.9$\pm$4.0 & 5.8$\pm$3.0 & 3 \\ 11127479 & 1983$\pm$7& 1.14$\pm$0.12 & 1.36$\pm$0.06 & 17.6$\pm$1.8 & 5.1$\pm$2.2 & 3 \\ 11971746 & 1967$\pm$23& 1.11$\pm$0.14 & 1.35$\pm$0.06 & 19.5$\pm$2.1 & 6.0$\pm$2.8 & 3 \\ \hline \end{tabular} \tablefoot{\Kepler input catalogue (KIC) number, central frequency of the oscillation power excess \num~by \cite{Chaplin2014}, stellar mass and radius in solar units from seismology, surface rotation period P$_{\rm rot}$ from \cite{Garcia2014b}, stellar age from seismic modelling and reference to the published seismic studies: [1]\,Garcia et al. \citep[in prep, see also][]{Beck2016}, [2]\,\cite{Creevey2016} and \hbox{[3]\,\cite{Chaplin2014}}. Stars for which the parameters were obtained through the detailed-modelling approach described in \Section{sec:sample} are flagged with an asterisk. } \label{tab:literatureParameters} \end{table*} In the last decade, numerous studies focused on the question of whether the rotation and chemical abundances of the Sun are typical of a solar-type star, i.e. a star of solar mass and age \citep[e.g.][]{Gustafsson1998, AllendePrieto2006, DelgadoMena2014, Datson2014, Ramirez2014, Carlos2016, dosSantos2016}. These studies compared the Sun with solar-like stars and were inconclusive due to relatively large systematic errors \citep{Gustafsson2008, Robles2008, Reddy2003}. The fragile element lithium is a distinguished tracer of mixing processes and of the loss of angular momentum inside a star \citep{Talon1998}.
Its abundance in stars changes considerably over their lifetimes. For low-mass stars on the main sequence, proton-capture reactions destroy most of the initial stellar Li content in the stellar interior. Only a small fraction of the original Li is preserved in the cool, outer convective envelopes. The solar photospheric Li abundance A(Li) is 1.05$\pm$0.10\,dex \citep{Asplund2009}. This value is substantially lower than the protosolar nebular abundance of A(Li) = 3.3 dex \citep{Asplund2009} measured from meteorites, and it illustrates that the lithium surface abundance does not reflect the star's original abundance. A comparison between the Sun and typical stars of one solar mass and solar metallicity in the thin galactic disc by \cite{Lambert2004} showed the Sun to be ``lithium-poor'' by a factor of 10. Because the temperature at the base of the solar convective zone is not hot enough to destroy lithium, this large depletion of the observed solar Li abundance relative to the meteoritic one, by a factor of 160, remains one of the biggest challenges for standard solar models. This is known as the solar Li problem \cite[e.g.][and references therein]{Maeder2009,Melendez2010}. There are two major challenges for understanding the Li abundance in stars similar to the Sun. First, for stars with masses M\,$\leq$\,1.1\,M\hbox{$_\odot$}\xspace, there is a strong dependence of A(Li) on mass and metallicity, which is governed by the depth of the convective envelope in these stars \citep{doNascimento2009,doNascimento2010,Baumann2010,Castro2016}. While the stellar chemical composition can be determined from optical spectroscopy, determining the mass of a star is a difficult task when the star does not belong to a cluster. In this context, solar analogues, following the classical definition of \cite{CayreldeStrobel1996}, constitute a homogeneous set of stars in which mass and metallicity are well constrained, with values close to the solar ones as defined before. Fortunately, asteroseismology is a tool that provides precise and accurate values for the mass, radius, and age of oscillating stars \citep*[e.g.][]{Aerts2010}. The highest level of accuracy of the parameters determined through seismology is reached when the models are constrained by individual frequencies and combined with results from high-resolution spectroscopy \citep[e.g.][]{Chaplin2011,Chaplin2014,LebretonGoupil2014}. The second challenge is the complex interplay between the various transport mechanisms inside the stellar interior, their efficiency, and stellar rotation. Standard models, which only include mixing through convective motion, fail to reproduce the general trend of the A(Li) evolution. This indicates that additional mixing processes have to be taken into account, such as microscopic diffusion \citep{Eddington1916,Chapman1917}, internal gravity waves \citep{GarciaLopez1991,Schatzman1996,Charbonnel2005} and the effects of stellar rotation. Rotation has a substantial impact on stellar evolution \citep[e.g.][and references therein]{Zahn1992,MaederZahn1998,Brun1999,Mathis2004,Maeder2009,Ekstroem2012} and can change the properties of solar-type stars by reducing the effects of atomic diffusion and inducing extra mixing. More specifically, observations of light-element abundances provide valuable constraints on the mixing and transport processes in stellar models \citep{Talon1998,Charbonnel1998,Pinsonneault2010IAUS,Somers2016}.
Numerous observational and theoretical studies have explored the Li surface abundance in the context of rotation, stellar evolution, age and angular momentum transport \citep[e.g.][and references therein]{vandenHeuvel1971, Skumanich1972, Rebolo1988, Zahn1992, Zahn1994, Charbonnel1994, Talon1998, Charbonnel1998, King2000, Clarke2004, Talon2005, Pinsonneault2010IAUS, Bouvier2016, Somers2016}. Recently, \cite{Bouvier2016} showed that a rotation-lithium relation exists already at an age of 5\,Myr and also exhibits a significant dispersion. Moreover, \cite{Bouvier2008} proposed a possible link between lithium depletion and the rotational history of exoplanet-host stars. Thus, a complete and coherent description of the influence of rotation on the lithium abundance of solar-type stars on the main sequence is still being sought. A particular challenge for studying A(Li) as a function of stellar rotation is the determination of the surface rotation rate. If it is determined from spectroscopy through the Doppler broadening of the absorption lines, only the projected surface velocity $v \sin i$\xspace can be measured, and the inclination of the rotation axis remains unknown. If the surface rotation rate is determined through the modulation of photometry or of activity proxies, one measures the angular velocity in terms of the surface rotation period, because a precise measure of the stellar radius is missing. \begin{figure} \centering \vspace{-3.25mm} \includegraphics[width=0.9\columnwidth]{histogramMass.eps} \caption{Distribution of the stellar masses from seismology for the 18 solar-analogue stars in the presented sample. The grey bars show the total distribution, while the dark shaded areas indicate the distribution of the stars with detailed seismic modelling.} \label{fig:histograms} \vspace{3mm} \end{figure} Understanding the evolution of the lithium abundance as a function of mass, metallicity and rotation, and explaining its dispersion in G~dwarfs, is critical to construct a comprehensive model of the Sun as a star \citep[e.g.][]{Pace2012,Castro2016}. Comparing the measured value of A(Li) in solar analogues with predictions from evolutionary models calibrated on the solar case will allow us to test the evolution of the Li dilution for typical 'Suns' at different ages. This gives us the possibility to test whether the mixing processes assumed to act in the Sun are peculiar, or whether the solar lithium value is normal. This paper is structured as follows: In \Section{sec:sample}, we present the set of stars to be studied; their properties and the new spectroscopic parameters are described. From the observations, the relation between lithium and rotation is discussed in \Section{sec:lithiumRotation}. In \Section{sec:discussion}, the measured A(Li) is confronted with theoretical predictions from the Toulouse-Geneva stellar evolution code \citep[TGEC,][]{HuiBonHoa2008, doNascimento2009}, and age estimates from seismology derived in previous studies using different approaches are compared. Conclusions of this work are summarised in \Section{sec:Conclusions}. \section{Data set and stellar parameters \label{sec:sample}} \begin{figure} \centering \vspace{-1.75mm} \includegraphics[width=1\columnwidth,height=120mm]{lithiumStrip.eps} \caption{Lithium doublet observed in the full dataset, sorted from strong to weak lithium lines (top to bottom, respectively). The solar spectrum (red) is shown at the bottom of the diagram.
The centres of the lithium and the neighbouring iron lines are marked by the vertical dashed line and the blue dotted line, respectively. The achieved S/N, the measured lithium abundances, and the upper limits of A(Li) are indicated on the right side.} \label{fig:lithiumObservationalSequence} \vspace{-3mm} \end{figure} The sample of solar analogues investigated in this study is composed of the 18 stars presented in \cite{Salabert2016Activity}. A summary of the main global properties of these stars is shown \hbox{in \Table{tab:literatureParameters}.} The stellar masses, radii and ages reported in the literature (\Table{tab:literatureParameters}) were obtained either by grid-modelling analysis of the global seismic parameters or by using individual frequencies and high-resolution spectroscopy (hereafter also referred to as \textit{detailed} modelling). For a specific discussion of the different modelling approaches, we refer the reader to \cite{LebretonGoupil2014}. Detailed modelling using individual frequencies with the \textit{Asteroseismic Modeling Portal} \citep[\textsc{amp},][]{Metcalfe2009} is available for the following stars: KIC\,3656476, KIC\,4914923, KIC\,6116048, KIC\,7296438, KIC\,7680114, KIC\,9098294, and KIC\,10644253, which were modelled by \cite{Mathur2012}, \cite{Metcalfe2014} and \cite{Creevey2016}. In this paper we use, whenever possible, the latest results of \cite{Creevey2016}. An additional star, KIC\,3241581, has been modelled by Garcia~et~al. \citep[in prep., see also][]{Beck2016}, using the \textit{Modules for Experiments in Stellar Astrophysics} \citep[MESA,][and references therein]{MESA2013}. For the remaining 10 stars, we adopted the masses and ages obtained by \cite{Chaplin2014} using global seismic parameters determined from 1-month-long \Kepler time series and constraints on temperature and metallicity from multicolour photometry. The mass distribution in \Figure{fig:histograms} shows that this sample mainly consists of stars with masses in the upper half of the allowed mass regime for solar analogues (1-1.15\,M\hbox{$_\odot$}\xspace). The dark shaded regions in \Figure{fig:histograms} depict the distribution of the stars for which detailed seismic modelling was performed \citep[Garcia et al., in prep.]{Mathur2012,Metcalfe2014,Creevey2016}. Whenever applicable, we distinguish in the diagrams presented in this paper the values originating from the two analysis approaches. Surface rotation periods (P$_{\rm rot}$), measured by \cite{Garcia2014b}, are reported in \Table{tab:literatureParameters}. Selecting oscillating targets with known rotation periods adds the constraint that these stars are magnetically active. However, they are not so active as to suppress oscillations.
This is another characteristic of our host star. \begin{table*} \vspace{-1mm} \caption{Summary of the seismic solar-analogue observations.} \vspace{-1mm} \centering \tabcolsep=8pt \begin{tabular}{rrrrrrrl} \hline\hline \multicolumn{1}{c}{KIC} & \multicolumn{1}{c}{V}& \multicolumn{1}{c}{N} & \multicolumn{1}{c}{ToT}& \multicolumn{1}{c}{$\Delta$T}& \multicolumn{1}{c}{$\overline{RV}$} & \multicolumn{1}{c}{$\Delta$RV} & \multicolumn{1}{c}{comment}\\ & \multicolumn{1}{c}{[mag]} & & \multicolumn{1}{c}{[hrs]} & \multicolumn{1}{c}{[days]} & \multicolumn{1}{c}{[km/s]} & \multicolumn{1}{c}{[km/s]} & \\ \hline 3241581$^\star$ & 10.35$\pm$0.04 &24 &9.3 & 709.1 &-30.68 & 0.96 &binary\\ 3656476 & 9.55$\pm$0.02 &6 &2.5 & 351.1 &-13.23 & 0.14 \\ 4914923 & 9.50$\pm$0.02 &7 &2.1 & 297.3 &-31.16 & 2.11 &binary\\ 5084157 & 11.56$\pm$0.12 &10 &5.2 & 300.1 &-19.66 & 0.21\\ 5774694 & 8.37$\pm$0.01 &7 &1.3 & 348.1 &-17.67 & 0.16 \\ 6116048 & 8.47$\pm$0.01 &5 &1.0 & 347.2 &-53.28 & 0.17 \\ 6593461 & 11.22$\pm$0.10 &9 &4.1 & 296.2 &-35.39 & [0.37] & large scatter\\ 7296438 & 10.13$\pm$0.03 &(+2) 4 &1.5 & 350.2 &-2.08 & 16.65 &binary (KOI\,364.01)\\ 7680114 & 10.15$\pm$0.04 &8 &3.4 & 351.0 &-58.96 & 0.18 \\ 7700968 & 10.37$\pm$0.04 &(+2) 4 &1.4 & 299.1 &+39.47 & 27.62 &binary\\ 9049593 & 10.35$\pm$0.04 &4 &1.5 & 299.1 &-21.02 & 0.24 \\ 9098294 & 9.91$\pm$0.03 &7 &2.6 & 346.1 &-55.78 & 41.35 &binary\\ 10130724 & 12.03$\pm$0.19 &7 &2.8 & 299.1 &-54.51 & 2.12 &binary\\ 10215584 & 10.62$\pm$0.05 &6 &2.7 & 337.0 &-11.02 & 0.25 \\ 10644253 & 9.26$\pm$0.02 &14 &5.9 & 416.0 &-19.01 & 0.18 \\ 10971974 & 11.05$\pm$0.07 &4 &1.6 & 300.1 &-34.58 & 0.27\\ 11127479 & 11.21$\pm$0.09 &5 &2.1 & 300.2 &-29.33 & 0.30 &KOI\,2792.01, large scatter \\ 11971746 & 11.00$\pm$0.07 &8 &3.6 & 301.0 &-44.20 & 0.22 \\ \hline \end{tabular} \tablefoot{The star's identifier in the \Kepler input catalogue (KIC), apparent magnitude in Johnson\,V, number $N$ of spectra (in brackets the number of additional spectra with S/N only high enough to determine the radial velocity), total accumulated time on target (ToT), the time base $\Delta$$T$ covered by the observations, the mean radial velocity ($\overline{RV}$), the difference between the positive and negative extrema of the measured RV values and their internal error, and a comment on the star/system. KOI stands for \textit{Kepler Object of Interest} and indicates planet-host star candidates.
} \medskip \label{tab:journalOfObservations} \caption{Fundamental parameters of the solar analogues from the spectroscopic analysis of \hermes data.} \centering \tabcolsep=10pt \begin{tabular}{rlrrrrrr} \hline\hline \multicolumn{1}{c}{KIC} & \multicolumn{1}{c}{$T_{\mathrm{eff}}$\xspace} & \multicolumn{1}{c}{\logg} & \multicolumn{1}{c}{$v_{\rm mic}$} & \multicolumn{1}{c}{$v \sin i$\xspace} & \multicolumn{1}{c}{[Fe/H]} & \multicolumn{1}{c}{A(Li)} & \multicolumn{1}{c}{S/N} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{[K]} & \multicolumn{1}{c}{[dex]} & \multicolumn{1}{c}{[km/s]} & \multicolumn{1}{c}{[km/s]} & \multicolumn{1}{c}{[dex]} & \multicolumn{1}{c}{[dex]} & \multicolumn{1}{c}{(Li)} \\ \hline 3241581 & 5685$\pm$59 & 4.3$\pm$0.1 & 1.0$\pm$0.2 & 4.0$\pm$0.6 & 0.22$\pm$0.04 & $\leq$0.31 & 180\\ 3656476 & 5674$\pm$50 & 4.2$\pm$0.1 & 1.1$\pm$0.1 & 4.1$\pm$0.7 & 0.25$\pm$0.04 & $\leq$0.51 & 120\\ 4914923 & 5869$\pm$74 & 4.2$\pm$0.1 & 1.2$\pm$0.1 & 5.0$\pm$0.6 & 0.12$\pm$0.04 & 2.02$\pm$0.02 & 175\\ 5084157 & 5907$\pm$60 & 4.2$\pm$0.1 & 1.1$\pm$0.1 & 4.8$\pm$0.7 & 0.24$\pm$0.04 & 2.31$\pm$0.01 & 85\\ 5774694 & 5962$\pm$59 & 4.6$\pm$0.1 & 1.0$\pm$0.2 & 5.5$\pm$0.7 & 0.10$\pm$0.03 & 2.60$\pm$0.01 & 190\\ 6116048 & 6129$\pm$97 & 4.3$\pm$0.2 & 1.3$\pm$0.2 & 5.8$\pm$0.6 & -0.18$\pm$0.05 & 2.61$\pm$0.01 & 150\\ 6593461 & 5803$\pm$126 & 4.4$\pm$0.2 & 1.3$\pm$0.3 & 4.8$\pm$0.8 & 0.25$\pm$0.09 & 1.85$\pm$0.02 & 70\\ 7296438 & 5854$\pm$64 & 4.3$\pm$0.1 & 1.2$\pm$0.2 & 4.5$\pm$0.8 & 0.24$\pm$0.05 & 1.89$\pm$0.03 & 80\\ 7680114 & 5978$\pm$107 & 4.3$\pm$0.2 & 1.2$\pm$0.3 & 4.4$\pm$0.7 & 0.15$\pm$0.07 & 1.57$\pm$0.02 & 135\\ 7700968 & 5992$\pm$144 & 4.4$\pm$0.3 & 1.4$\pm$0.4 & 5.3$\pm$0.7 & -0.18$\pm$0.08 & 2.19$\pm$0.02 & 100\\ 9049593 & 6009$\pm$151 & 4.3$\pm$0.3 & 1.5$\pm$0.3 & 7.2$\pm$0.6 & 0.20$\pm$0.08 & 2.82$\pm$0.02 & 95\\ 9098294 & 5913$\pm$67 & 4.4$\pm$0.1 & 1.0$\pm$0.2 & 4.7$\pm$0.7 & -0.14$\pm$0.04 & $\leq$0.86 & 140\\ 10130724 & 5649$\pm$95 & 4.3$\pm$0.2 & 0.9$\pm$0.3 & 4.4$\pm$0.9 & 0.27$\pm$0.09 & $\leq$1.10 & 65\\ 10215584 & 5888$\pm$67 & 4.3$\pm$0.1 & 1.1$\pm$0.2 & 5.1$\pm$0.6 & 0.05$\pm$0.04 & 2.25$\pm$0.05 & 105\\ 10644253 & 6117$\pm$64 & 4.4$\pm$0.2 & 0.9$\pm$0.2 & 4.3$\pm$0.6 & 0.11$\pm$0.04 & 2.99$\pm$0.02 & 215\\ 10971974 & 5895$\pm$114 & 4.4$\pm$0.2 & 1.2$\pm$0.3 & 4.8$\pm$0.8 & 0.02$\pm$0.07 & $\leq$1.62 & 55\\ 11127479 & 5884$\pm$116 & 4.4$\pm$0.3 & 1.5$\pm$0.3 & 6.1$\pm$0.7 & 0.11$\pm$0.08 & 2.44$\pm$0.02 & 50\\ 11971746 & 5953$\pm$63 & 4.3$\pm$0.1 & 1.2$\pm$0.2 & 4.5$\pm$0.8 & 0.18$\pm$0.04 & 2.19$\pm$0.02 & 105\\ \hline \end{tabular} \tablefoot{The star's identifier in the \Kepler input catalogue, the effective temperature $T_{\mathrm{eff}}$\xspace, the surface gravity \logg, the microturbulence $v_{\rm mic}$, the projected surface rotational velocity $v \sin i$\xspace, the stellar metallicity, and the abundance of lithium are given with their respective uncertainties. Upper limits of the measured A(Li) are indicated for stars with low lithium abundance, as a consequence of insufficient S/N in the spectra. The last column reports the signal-to-noise ratio around the lithium line.} \label{tab:fundamentalParameters} \vspace{-2mm} \end{table*}% \subsection{Spectroscopic observations} \begin{figure} \centering \vspace{-1mm} \includegraphics[width=\columnwidth,height=50mm]{mgTripletKIC7700966.eps} \caption{Mg triplet in the star KIC\,7700968.
The observed and synthetic spectra are represented by black dots and a solid line, respectively.} \label{fig:MgTriplet} \end{figure} To guarantee a homogeneous set of fundamental parameters, we obtained high-resolution spectra for each target with the \Hermes spectrograph \citep{Raskin2011,RaskinPhD}, mounted on the 1.2\,m \textsc{Mercator} telescope, La Palma, Canary Islands, Spain. The observations were performed in four observing runs of 3 to 6 days each. In 2015, spectroscopic data were obtained in June and July, while in 2016 observations were obtained in April and May. An overview of the observations is presented in Table\,\ref{tab:journalOfObservations}. In total, 53.1\,hrs of exposure time were collected. The \Hermes spectra cover a wavelength range between 375 and 900\,nm with a spectral resolution of R$\simeq$85\,000. The wavelength reference was obtained from emission spectra of Thorium-Argon-Neon reference frames taken in close proximity to the individual exposures. The spectral reduction was performed with the instrument-specific pipeline \citep{Raskin2011,RaskinPhD}. The radial velocity (RV) for each individual spectrum was determined from the cross correlation of the stellar spectrum in the wavelength range between 478 and 653\,nm\xspace with a standardised G2-mask provided by the \hermes pipeline toolbox. For \Hermes, the 3$\sigma$ level of the night-to-night stability in the observing mode described above is $\sim$300\,m/s, which is used as the classical threshold on RV variations to detect binarity. Following the methods of \cite{Beck2016}, we corrected the individual spectra for the Doppler shift before normalising and combining them. The signal-to-noise ratio (S/N) of each combined spectrum around 670\,nm\xspace is reported in Table\,\ref{tab:fundamentalParameters} and in \Figure{fig:lithiumObservationalSequence}. A solar flux spectrum was observed with the same \hermes instrument setup in reflected sunlight from the Jovian moon Europa \citep{Beck2016}. This spectrum has a S/N\,$\sim$\,450 (\Figure{fig:lithiumObservationalSequence}). \subsection{Fundamental parameters \label{sec:fundamentalParameters}} To determine the fundamental parameters, we started with the effective temperature $T_{\mathrm{eff}}$\xspace, surface gravity \logg, metallicity [Fe/H], microturbulence $v_{\rm mic}$, and projected surface rotational velocity $v \sin i$\xspace from an analysis with the \textit{Grid Search in Stellar Parameters} ({\sc gssp}\xspace\footnote{The GSSP package is available for download at https://fys.kuleuven.be/ster/meetings/binary-2015/gssp-software-package.}) software package \citep[][]{Lehmann2011,Tkachenko2012, Tkachenko2015}. The library of synthetic spectra was computed using the {\sc SynthV} radiative transfer code \citep{Tsymbal1996} based on the {\sc LLmodels} code \citep{Shulyak2004}. We then used the high-quality \Hermes spectra to determine the final stellar fundamental parameters ($T_{\mathrm{eff}}$\xspace, \logg, [Fe/H]) and the lithium abundance. For all stars in the sample, we determined these parameters by performing a differential excitation/ionisation equilibrium analysis of the abundances of Fe\,\textsc{i} and Fe\,\textsc{ii}, using the solar value as a reference, as described by \cite{Melendez2012}, \cite{Monroe2013}, \cite{Melendez2014a} and \cite{Ramirez2014}.
For all stars in the sample, this differential excitation/ionisation equilibrium was performed from the abundances of Fe\,\textsc{i} and Fe\,\textsc{ii}, using the solar value as a reference, as described by \cite{Melendez2012}, \cite{Monroe2013}, \cite{Melendez2014a} and \cite{Ramirez2014}. We combined Kurucz atmospheric models \citep{Castelli2004} with equivalent width (EW) measurements of Fe\,\textsc{i} and Fe\,\textsc{ii} and the 2014 version of the 1D LTE code MOOG \citep{Sneden1973}. The EWs were determined with the automated code ARES \citep{Sousa2007}. We applied the same method to all stars in our sample, considering the same regions of continuum. The final spectroscopic parameters for the stars are given in \Table{tab:fundamentalParameters}. Formal uncertainties of the stellar parameters were computed as in \cite{Epstein2010} and \cite{Bensby2014}. The median metallicity for the sample is~0.15\,dex. We note that we find a higher temperature for KIC\,10644253 compared to the previous findings of \cite{Salabert2016Mowgli}. We adopt this new value because the analysis was improved through increased observing time and by applying a differential analysis with the \Hermes solar spectrum. Throughout this work, we adopt the values in \Table{tab:fundamentalParameters}. For KIC\,3241581, we confirm the results reported previously by \cite{Beck2016}.

\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{lithiumComparisonBrunt_Sun.eps}
\caption{Comparison of the measured lithium abundances with values from the literature (see \Table{tab:LiLitComparison}). Black squares flag the four stars from our sample that overlap with the sample of \cite{Bruntt2012}. The red circle depicts the comparison of the solar lithium abundance derived from our spectrum with the canonical value by \cite{Asplund2009}. The blue line denotes the 1:1 ratio between the two datasets.} \label{fig:comparisonBrunt} \vspace{3mm} \end{figure}

The wings of Balmer and Mg lines in cool dwarf stars are highly sensitive to the temperature, \logg, and metallicity \citep{Gehren1981,Fuhrmann1993,Barklem2001}. These lines are formed in deep layers of the stellar atmosphere and are expected to be insensitive to non-LTE effects \citep{Barklem2007}, although they depend on convection \citep{Ludwig2009}. The comparison between the observed and synthetic spectra for the region between 516.0 and 518.8\,nm\xspace, containing the Mg triplet and a sufficient number of metal lines, is shown in \Figure{fig:MgTriplet}. The agreement of the line widths demonstrates the quality of our determined fundamental parameters ($T_{\mathrm{eff}}$\xspace, \logg, [Fe/H]).

\begin{figure*} \centering \includegraphics[width=\textwidth]{mainFigureRotation_limits.eps}
\caption{Lithium abundance versus rotation for the 18 seismic solar analogues. The left and the right panels compare A(Li) with the surface rotation period from space photometry and with the computed surface rotation velocity, respectively. Stars found to be located in binaries are shown as filled blue octagons in the left-hand panel and are removed from the right panel. In both panels, the full range covered by the differential solar rotation is represented by the horizontal error bar in the solar symbol.
The solid line in the left panel depicts the best-fitting relation between the rotation period and A(Li) for single stars with rotation periods shorter than the solar value. In the right panel, the source of the asteroseismic radius is indicated through the choice of symbols. Triangle markers indicate stars for which the radius has been determined through global-parameter modelling. Upper limits of the measured A(Li) are shown for stars with low lithium abundance or insufficient S/N in the spectra. The single stars with the radius from detailed modelling with \textsc{amp}, KIC\,10644253, KIC\,6116048, KIC\,7680114, and KIC\,3656476, are plotted as black octagon, yellow diamond, red square, and magenta pentagon, respectively. } \label{fig:lithiumRotation} \vspace{2mm} \end{figure*}

\begin{table} \caption{Comparison of lithium abundances with other values in the literature.} \centering
\begin{tabular}{rrrc} \hline\hline
\multicolumn{1}{c}{Star} & \multicolumn{2}{c}{A(Li) [dex]} & Literature \\
 & This work & Literature & reference\\ \hline
KIC\,4914923 & 2.02$\pm$0.02 & 2.1$\pm$0.2 & B12\\
KIC\,5774694 & 2.60$\pm$0.01 & 2.4$\pm$0.2 & B12\\
KIC\,6116048 & 2.61$\pm$0.01 & 2.4$\pm$0.2 & B12\\
KIC\,10644253 & 2.99$\pm$0.02 & 2.7$\pm$0.2 & B12\\\hline
Sun & 1.06$\pm$0.01 & 1.05$\pm$0.1 & A09\\ \hline
\end{tabular}
\tablefoot{The stellar identifier and the lithium abundance derived in this work as well as the literature value, provided by \cite{Bruntt2012} (B12) or \cite{Asplund2009} (A09). The comparison is depicted in \Figure{fig:comparisonBrunt}.}
\label{tab:LiLitComparison} \end{table}%

\subsection{Lithium abundance}

The A(Li) was derived from the Li\,\textsc{i} resonance doublet feature at 670.78\,nm, as depicted for all stars in our sample in Figure\,\ref{fig:lithiumObservationalSequence}. We used the 'synth' driver of the 2014 version of the code MOOG \citep{Sneden1973} and adopted A(Li)$_\odot$ = 1.05\,dex as the standard solar lithium abundance \citep{Asplund2009}. The atmosphere models used were interpolated from the new Kurucz grid \citep{Castelli2004} for the set of spectroscopic atmospheric parameters, $T_{\mathrm{eff}}$\xspace, \logg, [Fe/H], and micro turbulence, given in \Table{tab:fundamentalParameters}. We used the Fe\,\textsc{i} and Fe\,\textsc{ii} absorption lines as specified in \cite{Melendez2014}, and we neglected possible $^6$Li influences. Due to the vicinity of the Li doublet to a neighbouring Fe\,\textsc{i} line (blue dotted line in Figure\,\ref{fig:lithiumObservationalSequence}), strong Li or iron lines as well as fast rotation can lead to blended lines. Therefore, an accurate value of the iron abundance, the \logg, and the projected surface velocity is needed to correctly derive the lithium abundance. The main sources of error on the Li abundance are the uncertainties on the stellar parameters and the EW measurement; however, $T_{\mathrm{eff}}$\xspace is by far the dominant source of error. For the spectroscopic atmospheric parameters\footnote{In this work, we use the standard definitions: $[X/Y]$=$\log$(N$_X$/N$_Y$)-$\log$(N$_X$/N$_Y$)$_\odot$, and $A_X$=$\log$(N$_X$/N$_H$)+12, where $N_X$ is the number density of element X in the stellar photosphere.}, we determined lithium abundances in our sample ranging between 0.06 and 3.03\,dex.
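For orientation, the logarithmic scale defined in the footnote translates into number-density ratios via
\begin{eqnarray*}
N_{\rm Li}/N_{\rm H} &=& 10^{A({\rm Li})-12} \, ,
\end{eqnarray*}
so that, for example, the adopted solar value A(Li)$_\odot$\,=\,1.05\,dex corresponds to $N_{\rm Li}/N_{\rm H} \approx 1.1\times10^{-11}$, while the largest abundance in our sample, A(Li)\,=\,2.99\,dex for KIC\,10644253, corresponds to $\approx 10^{-9}$, almost two orders of magnitude more.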
For comparison, the solar lithium abundance was also derived from the \hermes solar spectrum (Figure\,\ref{fig:lithiumObservationalSequence}), collected from the reflected light of the Jovian moon Europa. We measured A(Li)$_\odot$=1.06$\pm$0.1\,dex\xspace, in agreement with \cite{Asplund2009}. The final values of A(Li) are listed in Table\,\ref{tab:fundamentalParameters} and shown in Figure\,\ref{fig:lithiumObservationalSequence}. \Figure{fig:lithiumObservationalSequence} also illustrates the sequence of spectral segments, containing the two lithium lines as well as two iron lines, for all stars in our sample, sorted by decreasing value of A(Li). For comparison, the solar spectrum obtained by \cite{Beck2016} is plotted at the bottom of the sequence. The comparison between our derived abundances and the values from \cite{Bruntt2012} is presented in \Figure{fig:comparisonBrunt}. A good agreement was found between the values for A(Li); the differences probably originate from NLTE effects in the hotter stars \citep{Lind2009}.

\subsection{Binarity occurrence}

The time span covered by our measurements, as well as the mean value and the dispersion of the radial velocities, are reported in Table\,\ref{tab:journalOfObservations} for each~star in the sample. The time spans range between 260 and 700 days. Based on an earlier analysis of \KIC{3241581}, \cite{Beck2016} confirmed this star to be a binary with an orbital period longer than 1.5\,years. Based on the first 35 days of the observations of this campaign, \cite{Salabert2016Activity} reported \KIC{4914923}, \KIC{7296438}, and \KIC{9098294} as binary candidates. Additional spectra were needed to confirm the binarity status of those systems. From the full dataset analysed in this paper, we confirm that the three above-mentioned systems are binaries, and we report that \KIC{10130724} and \KIC{7700968} are also binary systems. For none of these systems is an orbital period known yet, because we do not detect the signature of stellar binarity (eclipses or tidally induced flux modulation) in their light curves. The RV measurements are also too sparsely sampled to derive an orbital period from them. Therefore, no meaningful upper or lower limit can be placed on the orbital periods. The mean value reported in Table\,\ref{tab:journalOfObservations} will roughly resemble the systemic velocity of the binary system. Without information on the orbital parameters, the interpretation of A(Li) in the stellar components of the system is not reliable. Therefore, continuous RV monitoring is required to draw further constraints on the orbital parameters, such as the period or eccentricity.

\section{Lithium abundance and surface rotation \label{sec:lithiumRotation}}

There is a large number of observational works studying the connection between Li and rotation, searching for correlations between these parameters \citep[e.g.][]{Skumanich1972,Rebolo1988,King2000,Clarke2004,Bouvier2008,Bouvier2016}. Because of the difficulty of obtaining stable, well-sampled photometric follow-up observations, most of them have employed $v \sin i$\xspace measurements.
Due to the undetermined inclination angle~$i$, such values yield only a lower limit on the rotational velocity. The rotation period from the light-curve modulation, such as determined by \cite{Krishnamurthi1998} for the Pleiades or from the modulation of the emission in the core of the Ca\,H\&K lines \citep[e.g.][]{Choi1995}, is independent of the inclination. This, linked to the observational difficulty of resolving the tiny absorption line of lithium at 670.7\,nm\xspace with high S/N for solar-analogue stars, partially explains the difficulty of connecting the true rotation (rotational period) and the lithium abundance in low-mass, solar-analogue stars at different ages. In the left panel of \Figure{fig:lithiumRotation}, the lithium abundance is plotted as a function of the surface rotation period derived from the \Kepler light curves \citep{Garcia2014b}; the right panel shows it as a function of the surface rotation velocity. For comparison, the Sun is represented by the longitudinal average rotation period of 27\,days. The full range of solar differential rotation, spanning between 25\,days at the equator and 34\,days at the poles \citep[e.g.][]{Thompson1996}, is represented by the horizontal error bar in the solar symbol in \Figure{fig:lithiumRotation}. From this figure we can see that fast-rotating stars have high Li abundances. This confirms the well-known general trend for lithium and rotation found in the studies of clusters and single field stars mentioned above \citep[e.g.][]{Skumanich1972,Rebolo1988,King2000,Clarke2004,Bouvier2008,Bouvier2016}. There is a large scatter for rotation periods longer than the solar rotation period, which is also found in similar studies of clusters and of large samples of field stars. In this context, our sample is unique in the sense that for field stars we combine the existing information about the true rotation period, the lithium abundance, the seismic ages and masses, and the binarity status. In addition, stellar binarity can affect the measured lithium abundance, either through interactions of the components \citep[e.g.][]{Zahn1994} or due to observational biases. Given that an orbital solution is needed for a complete analysis of such systems, we concentrate on the single stars in this work. Restricting the analysis to single stars reduces the scatter in the A(Li)-P$_{\rm rot}$\xspace plane. In the rest of the analysis we only use single stars, and for the fit only those with P$_{\rm rot}$\xspace below 27\,days. For this subsample, the Li-rotation correlation follows a linear regression,
\begin{eqnarray}
{\rm A(Li)}~=~-(0.08\pm0.01)\,\times\,{\rm P}_{\rm rot}~+~(3.85\pm0.17)\,.\label{eq:LiProt}
\end{eqnarray}
This relation indicates that lithium appears to evolve similarly to the surface rotation for stars in this range of mass and metallicity \hbox{for P$_{\rm rot}$\xspace$\lesssim$\,27\,days}. We note that by fitting a trend in the P$_{\rm rot}$\xspace-Li plane, we do not explicitly take into account the age of the stars; implicitly, however, it is taken into account, as surface rotation is a proxy for age \citep{Barnes2007}. \cite{vanSaders2016} showed that stars, once they reach approximately the age of the Sun, do not slow down as much as predicted by the empirical relations between rotation and stellar age \citep[e.g.][]{Barnes2007,Gallet2013,Gallet2015}.
However, gyrochronology is still valid for stars younger than the Sun. Thus, a rotation rate higher than the solar value implies a star younger than the Sun. This is in agreement with the modelled evolution of Li, because the strongest depletion of the lithium surface abundance on the main sequence takes place in the early stages \citep[e.g.][and references therein; see also \Section{sec:Conclusions} in this paper]{Castro2016}. The evolution of the lithium abundance with the rotation period, as described by \Equation{eq:LiProt}, is the best fit of the trend for stars with masses 1.0\,$\lesssim$\,M/M$_\odot$\,$\lesssim$\,1.1 and [Fe/H] in the range 0.1 to 0.3\,dex (with the exceptions of KIC\,10971974 and KIC\,6116048, \Table{tab:fundamentalParameters}). The trend is well defined for rotation periods shorter than 27\,days. The bulk of the stars in our sample rotate with shorter rotation periods than the Sun. This could be due to a selection bias, in which longer periods are more difficult to detect. Moreover, more data seem to be necessary to extrapolate these results to the slow-rotator (long-period) regime. Even for two stars with the same angular velocity (as used in the left panel of \Figure{fig:lithiumRotation}), the rotational velocity can still be different, since it depends on the unknown stellar radius. We can overcome this degeneracy by using the asteroseismic radius (\Table{tab:literatureParameters}) to convert the rotation period into the unprojected rotational velocity $v_{\rm rot} = 2\pi R/P_{\rm rot}$ in kilometres per second; for the Sun, with $R=R_\odot$ and $P_{\rm rot}=27$\,days, this gives $v_{\rm rot}\approx1.9$\,km/s. In the right panel of \Figure{fig:lithiumRotation}, we plot the lithium abundance as a function of the rotational velocity computed in this way. Because spots can be found at a relatively wide range of latitudes, surface differential rotation might contribute to the scatter in the rotation-lithium relation. Furthermore, the flux modulation introduced by spots, in combination with an assumed solar or anti-solar differential rotation profile, will lead to an under- or overestimation of the rotational velocity at the equator, respectively \citep[Brun et al., subm.]{Brun2015}. From the right panel of \Figure{fig:lithiumRotation} we find that for stars with rotational velocities higher than 3\,km/s a linear trend is present in the Li-velocity relation. Between values of $v_{\rm rot}$ of 2 and 3\,km/s, a large dispersion in the measured values of lithium could be present (A(Li)$\lesssim$2\,dex), engulfing also the position of the Sun in this diagram. Comparing the right panel of \Figure{fig:lithiumRotation} to large-sample studies of $v \sin i$\xspace, such as \cite{Takeda2010}, shows a good agreement. In the asteroseismic approach, the complications introduced by the unknown inclination of the rotation axis, or by the assumptions on the turbulent velocity fields that could influence the line profile, are eliminated. Applying this asteroseismic approach to large samples should thus reduce the systematic scatter.

\section{Discussion \label{sec:discussion}}

Both observables, rotation and A(Li), are expected to evolve with time for stars of this mass range during the main sequence. This was suggested by \cite{Skumanich1972} from the observations of stars in the Hyades, Pleiades, Ursa Major, and the Sun, which were further investigated \citep[e.g.][]{Rebolo1988, King2000,Clarke2004}.
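As a rough illustration only (not a calibrated age relation), combining \Equation{eq:LiProt} with a Skumanich-type spin-down law, P$_{\rm rot}$\xspace$\propto t^{1/2}$, normalised to the solar values (P$_{\rm rot,\odot}$\,=\,27\,days at $t_\odot$\,=\,4.6\,Gyr), would give
\begin{eqnarray*}
{\rm A(Li)}(t) &\approx& 3.85 - 2.16\,\sqrt{t/t_\odot} \, ,
\end{eqnarray*}
i.e.\ A(Li)\,$\approx$\,2.8 at $t\approx1.2$\,Gyr and A(Li)\,$\approx$\,1.7 at the solar age. The latter value lies well above the measured solar abundance, consistent with the breakdown of the linear trend near and beyond the solar rotation period noted above.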
In general, the Li abundance is a function of the deepening of the convective envelope as the star ages on the main sequence \citep{doNascimento2009,Baumann2010}. It can also be the consequence of mixing below the convective zone \citep{Brun1999} or in the radiative core \citep{Talon2005,Charbonnel2005}. This shows that the age, the angular momentum history, the mass, and the metallicity are the governing physical parameters in the evolution of the lithium content. Recently, \cite{Castro2016} showed that in the cluster \object{M67} there is a relatively large scatter of the lithium abundance in main-sequence stars with the same effective temperature and the same age. The scatter is largest around the 1\,M\hbox{$_\odot$}\xspace range (0.5\,$\lesssim$\,A(Li)\,$\lesssim$\,2.0\,dex) and suggests that another, yet unknown process could have an influence on the lithium abundance. \cite{Somers2016} suggest that an intrinsically different mixing history than in other stellar clusters, such as a higher fraction of fast rotators or inhomogeneities of the initial rotation distribution \citep{Coker2016}, could explain the scatter in lithium depletion \hbox{for stars older than 100\,Myr.}

\subsection{Stellar ages}

In a cluster, all stars have the same age. This cannot be assumed for the field stars in our sample. As described in \hbox{\Section{sec:sample}}, there are two ways to use the seismic information to infer the age. When grid-modelling based on global-seismic parameters is used, the inferred ages have large uncertainties \citep{LebretonGoupil2014}. In our sample, several of these stars are rotating faster than the Sun \citep{Garcia2014b}, \hbox{indicating} that they should be younger \citep{Barnes2007, vanSaders2016}. However, some of the ages derived from global-parameter seismology are larger than the solar age. To avoid these inconsistencies, in the following analysis we only use ages inferred from detailed modelling, constrained by individual frequencies or frequency ratios. As described in \Section{sec:sample} and listed in \Table{tab:literatureParameters}, we have age and mass estimates through this approach for eight stars from the studies of \cite{Mathur2012}, \cite{Metcalfe2014}, \cite{Creevey2016}, and Garcia et\,al.\ (in\,prep.).

\subsection{Comparison with stellar models}

To compare the derived stellar ages and measured lithium abundances with stellar modelling predictions, a grid of models of the temporal evolution of A(Li) due to rotation-induced mixing was computed using the \textit{Toulouse-Geneva stellar evolution code} \citep[TGEC,][]{HuiBonHoa2008, doNascimento2009}. A description of the physics used for this grid is given in Appendix\,\ref{sec:TGEC}. For details on the calculation of the theoretical Li abundance we refer to \cite{Castro2009,Castro2016}, and \cite{doNascimento2009}. These models include the impact of rotation-induced mixing on chemicals, due to the combined actions of meridional circulation and shear-induced turbulence in stellar radiation zones, computed as prescribed by \cite{Zahn1992}, \cite{MaederZahn1998}, \cite{TheadoVauclair2003} and \cite{HuiBonHoa2008}.
This non-standard mixing, modelled as a vertical effective turbulent diffusion applied to the chemical elements \citep[][and Appendix\,\ref{sec:TGEC}]{Chaboyer1992}, is calibrated to reproduce the solar lithium abundance at the solar age in a solar model \citep[we refer the reader to][for a detailed description of the calibration]{Castro2016}. The calibration is then used for the other models with different masses and metallicities. In this framework, it is important to point out that we focus here only on the non-standard mixing for chemicals (we refer the reader to Appendix\,\ref{sec:TGEC} for more details). This rotational mixing and the resulting lithium abundance strongly depend on internal transport mechanisms \citep[e.g.][]{Maeder2009,Mathis2013LNP} and on angular momentum extraction by stellar winds \citep[][]{Skumanich1972,Mattetal2015}, as illustrated for example by the recent work of \cite{Amardetal2016}. These processes are calibrated here on the Sun and its rotational history, as explained in Appendix\,\ref{sec:TGEC}, through the effective vertical turbulent diffusion acting on chemicals. This may introduce a bias towards solar characteristics. However, this work constitutes a first step. In the near future, more realistic models will be computed in which the rotational and chemical evolution are treated coherently and simultaneously. These models will take into account all angular momentum transport processes and potentially different rotational histories. The setup of the input physics used in our models is compatible with the one used in the \textsc{amp} models. Our models are calculated from the zero-age main sequence (ZAMS) to helium burning in the core, for masses from 1.0 to 1.15\,M$_\odot$ with a step size of 0.05\,M\hbox{$_\odot$}\xspace. The main grid was calculated with a metallicity of [Fe/H]=$+$0.20. An additional model of the Li evolution was calculated for a 1.0\,M\hbox{$_\odot$}\xspace star with [Fe/H]=$-$0.20. For comparison, the evolutionary track of the solar model computed by \cite{doNascimento2009} is shown in \Figure{fig:LithiumAge}. We note that the grid contains only representative models, chosen for average mass and metallicity. The step size of 0.05\,M$_\odot$ is roughly the average uncertainty (7\%) of the detailed modelling approach found by \cite{LebretonGoupil2014}, accounting for the observational uncertainties and those of the model approaches. The authors also found a typical uncertainty of the seismic age estimate of $\sim$10\%. Therefore, this comparison provides a qualitative idea of whether the models and the measurements agree in general, but these model tracks are not calibrated to resemble specific stars on a case-by-case basis. By focusing on stars for which detailed modelling has been performed and which were found to be without a companion, our sample is narrowed down to four stars: KIC\,3656476 (age=8.9\,Gyr, [Fe/H]=+0.25\,dex), KIC\,6116048 (6.1\,Gyr, $-0.18$\,dex), KIC\,7680114 (6.9\,Gyr, +0.15\,dex), and KIC\,10644253 (0.9\,Gyr, +0.1\,dex). The measured lithium abundances are compared against the stellar ages from \textsc{amp} modelling for these four stars in \Figure{fig:LithiumAge}.
The three stars KIC\,10644253, KIC\,7680114, and KIC\,3656476 are of particular interest, as they form a sequence of constant mass of 1.1\,M\hbox{$_\odot$}\xspace (within their uncertainties) and of metallicity above the solar value, spanning stellar ages between 1 and 9\,Gyr. With its 1.05$\pm$0.03\,M\hbox{$_\odot$}\xspace, KIC\,6116048 is closer to the solar mass, but has a clearly sub-solar metallicity. In \Figure{fig:LithiumAge} the observed A(Li) is plotted as a function of the estimated age from asteroseismology and compared to the predictions of these quantities from the model tracks described above.

\begin{figure} \centering \includegraphics[width=1\columnwidth,height=60mm]{LithiumAge_limits.eps}
\caption{Lithium abundance of single stars with ages computed through detailed modelling with the \textsc{amp} code. The stars KIC\,10644253, KIC\,6116048, KIC\,7680114, and KIC\,3656476 are plotted as black hexagon, yellow diamond, red square, and magenta pentagon, respectively. For KIC\,3656476, the upper limit of the measured A(Li) is shown. TGEC evolutionary tracks shown in red and yellow represent the theoretical evolution of A(Li) for models of the indicated mass with [Fe/H]=+0.2 and $-0.2$\,dex, respectively. The black dashed evolutionary track depicts the evolution of Li calculated by \cite{doNascimento2009}, and the solar marker depicts the measured A(Li\hbox{$_\odot$}\xspace) and the age of the Sun.} \label{fig:LithiumAge} \end{figure}

\subsubsection{Case of KIC\,10644253}

The comparison depicted in \Figure{fig:LithiumAge} shows that the best agreement between measurements and models for age and lithium is found for KIC\,10644253. The measured activity levels as well as the short rotation period reported by \cite{Salabert2016Activity,Salabert2016Mowgli} further confirm that this is a young stellar object.\vspace{-1mm}

\subsubsection{Case of KIC\,7680114}

KIC\,7680114 complies reasonably well with the evolutionary track of a star of 1.1\,M\hbox{$_\odot$}\xspace with a metallicity of +0.2\,dex. The star has a rotation period of 26.3\,days, compatible with the solar one. The asteroseismic modelling places it at $\sim$7\,Gyr, about 2.5\,Gyr older than the Sun. \cite{vanSaders2016} show that with increasing mass or temperature the reduced efficiency of magnetic braking sets in earlier, which may explain this discrepancy.\vspace{-1mm}

\subsubsection{Case of KIC\,3656476}

The slowly rotating star KIC\,3656476 is confirmed, from stellar modelling and the comparison with the lithium abundance, to be an old object. Although the 1.1\,M\hbox{$_\odot$}\xspace evolutionary track of the TGEC modelling does not reach the age predicted by the \textsc{amp} model, this is not a worrying disagreement. Taking the typical uncertainty of $\sim$7\% on the seismic mass estimate, we find a good agreement with the Li evolution for a 1.05\,M\hbox{$_\odot$}\xspace star. Such an uncertainty corresponds to 2\,$\sigma$ of the mass uncertainty of the star's model reported by \cite{Creevey2016}. \vspace{-1mm}

\subsubsection{Case of KIC\,6116048}

For KIC\,6116048, the rotation period, $\sim$10\,days shorter than the solar value, suggests that this is a young star \citep{Garcia2014b}.
On the other hand, this star is one of the least active stars in our sample \citep{Salabert2016Activity}, and the seismic modelling suggests an age of 6\,Gyr \citep{Creevey2016}. In principle, it is possible that the rotation period from the literature is half of the actual value \citep{Garcia2014b}. In such a case, however, this star would fall out of the observed relation between P$_{\rm rot}$\xspace and A(Li). Despite the disagreement of the age indicators, we find a generally good agreement between the A(Li) measurement and the models. Within the conservative view on the uncertainty of the mass, the measured A(Li) is in good agreement with the theoretical Li evolution for a 1.0 and a 1.05\,M\hbox{$_\odot$}\xspace star. Because A(Li) remains relatively constant over a large range in time, Li cannot be used to distinguish between the two scenarios. KIC\,6116048 is a puzzling case which definitely needs further investigation. \vspace{-1mm}

\subsection{Results}

From the comparison of the observations with models calculated for the determined seismic mass, we find A(Li) in good agreement for all four single stars with available \textsc{amp} modelling. For the Sun it has been shown that gravity waves have to be included in order to reproduce the solar rotation and lithium profiles \citep{Charbonnel2005}. Although gravity waves were not explicitly included in the applied macroscopic hydrodynamical mixing modelling, they were implicitly taken into account through the calibration of the models to the Sun. The good agreement of measured and modelled A(Li) in \Figure{fig:LithiumAge} shows that the four stars share the same internal physics as is at work in the Sun.

\section{Conclusions \label{sec:Conclusions}}

In this work, we have presented the combined analysis of seismic, photometric, and spectroscopic values for a set of 18 solar-analogue stars. The sample is unique in that not only the lithium abundance and the rotation period are known, but also the mass, radius, and age estimates for all stars from seismology. The rotation periods and seismic parameters used in this study were determined in several earlier studies \citep[][Garcia et al. in prep.]{Mathur2012,Garcia2014b,Chaplin2014,Metcalfe2014,Creevey2016}. For an overview of the literature values and references, see \Table{tab:literatureParameters}. In a dedicated observing campaign, we obtained high-resolution spectroscopy with the \hermes \'echelle spectrograph, allowing us to determine a consistent set of spectroscopic fundamental parameters, including the Li surface abundance. From our spectroscopic observations, six binary systems were identified. The surface abundance of lithium of a star is very sensitive to its rotation rate, mass, and metallicity. Masses from asteroseismology allow us to select our targets accurately on the criterion of mass. Choosing a sample of stars within the very narrow mass range accepted as solar analogues enables us to study the interplay of these parameters. From the sample of single stars, we could quantify a linear relationship between the rotation period and A(Li) for rotation periods shorter than the solar one. Binary stars show a larger scatter in the parameter plane.
We demonstrated that this observational restriction can be overcome by calculating the actual rotational velocity $v_{\rm rot}$ using the asteroseismically determined radius of the star, which allows a better comparison with model predictions. By focusing on four single stars with available masses and robust ages from detailed modelling with the \textsc{amp} code and confronting them with TGEC evolutionary tracks for the lithium abundance, we confirm the high degree of sensitivity of A(Li) to the combination of stellar mass, metallicity, rotation rate, and age. For two of the 'massive' solar analogues ($\sim$1.1\,M\hbox{$_\odot$}\xspace) with detailed modelling, KIC\,10644253 and KIC\,7680114, the measured A(Li) and the stellar mass and age from asteroseismology agree well with the predicted Li abundance. For the third 'massive' solar analogue, KIC\,3656476, a good agreement is found within the conservative mass uncertainty of $\sim$7\%. A similar case is the solar analogue with sub-solar metallicity, KIC\,6116048. For this star we also find a good agreement with the modelled evolution of A(Li) within the conservative mass uncertainty. The measured value of A(Li) agrees with the plateau value found for a 1.0\,M\hbox{$_\odot$}\xspace star, but the rotation period indicates a young object, while seismology suggests a target older than the Sun. In principle, the rotation period could be underestimated by a factor of two, which, however, would lead to a strong outlier in the P$_{\rm rot}$\xspace-A(Li) relation depicted in \Figure{fig:lithiumRotation}. The comparison of A(Li) with age and the rotation rate demonstrates that gyrochronology is valid for stars younger than the Sun, up to the solar age. The small number of stars with individual-frequency modelling does not allow us to draw further conclusions on their evolution with age. A larger dataset will be required to confirm the conclusions outlined here. For these genuine solar analogues, a good agreement within the uncertainties is found between three independent approaches: the observed A(Li) from spectroscopy, the stellar age and mass from asteroseismology, and the stellar model prediction of A(Li) for representative TGEC models. Because the TGEC models for A(Li) were calibrated to reproduce the solar internal mixing, such consensus with the measured A(Li) in the solar analogues may indicate that the solar analogues share the same internal mixing as the Sun. In this light, the solar value of A(Li) is normal. \vspace{7mm}

\begin{acknowledgements} We thank the referee for a constructive report that allowed us to improve the article. We acknowledge the work of the team behind \Kepler and \textsc{Mercator}. PGB and RAG acknowledge the ANR (Agence Nationale de la Recherche, France) program IDEE (n$^\circ$ ANR-12-BS05-0008) "Interaction Des Etoiles et des Exoplanetes". PGB and RAG also received funding from the CNES grants at CEA. JDN, MC and TD acknowledge the CNPq and the PPGF/UFRN. PGB acknowledges the PPGF/UFRN for partially funding a scientific visit to the G3 team at UFRN, Natal, Brazil. DS and RAG acknowledge the financial support from the CNES GOLF and PLATO grants. DM acknowledges financial support from the Spanish Ministerio de Econom{\'i}a y Competitividad under grant AYA2014-54348- C3-3-R. StM acknowledges support by the ERC through ERC SPIRE grant No. 647383.
AT received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N$^{\circ}$670519: MAMSIE). SaM would like to acknowledge support from NASA grants NNX12AE17G and NNX15AF13G and NSF grant AST-1411685. The research leading to these results has received funding from the European Community's Seventh Framework Programme ([FP7/2007-2013]) under grant agreement No. 312844 (SPACEINN) and under grant agreement No. 269194 (IRSES/ASK). The observations are based on spectroscopy made with the \Mercator Telescope, operated on the island of La Palma by the Flemish Community, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof{\'i}sica de Canarias. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The microscopic accounting of black hole entropy for suitably simple (BPS) black holes \cite{StromingerVafa} is one of the triumphs of string theory. Hints of deep ties between this subject and natural objects in number theory and the theory of automorphic forms began to appear with the work of Dijkgraaf, Verlinde and Verlinde \cite{DVV}, where a degree two Siegel modular form -- the Igusa cusp form $\Phi_{10}$ -- appears and plays a central role in determining the microstate counts in ${\cal N}=4$ supersymmetric string theory on $K3 \times T^2$. A heuristic explanation of the appearance of a hidden genus two curve in this problem was provided by Gaiotto \cite{Gaiotto}, and important further developments are reviewed in, for instance, \cite{Sen, DMZ}. In a parallel line of development, it was soon realized that a given charge sector in supersymmetric string theory may also support multi-center BPS black hole configurations \cite{Denef:2000nb, Denef:2002ru, Bates:2003vx}. These play a crucial role in resolving various paradoxes with the attractor mechanism for BPS black holes \cite{FKS,mooreattractor}, and appear in the proper interpretation of the states the Igusa cusp form (or more properly $1/\Phi_{10}$) is enumerating \cite{Sen:2007vb,Dabholkar:2007vk,Sen:2007pg,Cheng:2007ch,Banerjee:2008yu,Sen:2008ht,Dabholkar:2009dq}. In this paper, we further tie these lines of development together by proposing that there is a preferred degree three Siegel form whose Fourier coefficients are counting the microstates of three-center BPS solutions in type II string theory on $K3 \times T^2$. It will not escape the attention of the reader that to the extent our conjecture holds, it suggests a family of conjectures capturing higher multi-center degeneracies as well. The organization of our exposition is as follows. In section \ref{sec:Siegel}, we introduce the hero of our story, the degree three Siegel form $\chi_{18}$. In section \ref{sec:conjecture-rough} we give a first formulation of our conjecture tying the Fourier coefficients of $1/\sqrt{\chi_{18}}$ to degeneracies of BPS states, ignoring subtleties related to wall crossing. In section \ref{sec:threeparticleboundstates}, we make some of the ingredients introduced here more precise, and review some relevant properties of three-center BPS bound states, including BPS degeneracies of three-node quivers corresponding to the Higgs branch of three-center ``scaling solutions'' found in earlier work \cite{Denef:2007vg,Bena:2012hf,Lee:2012sc,Manschot:2013sya}. In section \ref{sec:conjecture}, we formulate a more precise version of the conjecture, taking into account wall crossing ambiguities. Sections \ref{sec:leading-q-examples} and \ref{sec:higher-order} witness various tests of the conjecture -- with tests of wall-crossing appearing in \ref{sec:leading-q-examples}, and limits yielding $1/\Delta$ and $1/\Phi_{10}$ tested in section \ref{sec:higher-order}. We close with a discussion of (admittedly big) open questions in section \ref{sec:discussion}. \section{An interesting degree three Siegel form} \label{sec:Siegel} We begin with the observation that the known counting functions for BPS state degeneracies on $K3 \times T^2$ have simple relations to bosonic string partition functions. 
Recall that the counting function for $1/2$-BPS Dabholkar-Harvey states is given by $$Z_{\rm genus \,\, one} \equiv {1\over \Delta} = q^{-1} \prod_n {1 \over (1-q^n)^{24}}$$ where we have indicated that the inverse of $\Delta$ is also the (chiral) 1-loop bosonic partition function. Similarly, $$Z_{\rm genus\,\, two} \equiv {1\over \Phi_{10}}$$ also arises as the genus two chiral measure in the bosonic string. So suggestively, the genus one partition function counts 1/2-BPS objects, realized in supergravity as single-center black holes, while the genus two partition function counts bound states of two such $1/2$-BPS objects, realized in supergravity as black hole configurations with up to two centers. \medskip These facts motivate, as a natural guess, the analogous genus-three measure $$ Z_{\rm genus\,\, three} \equiv \frac{1}{\sqrt{\chi_{18}}} \equiv \frac{1}{\chi_9}$$ as the counting function for bound states of three 1/2-BPS objects, realized in supergravity as black hole configurations with up to three centers. This function was found to occur in the genus three partition function in \cite{BKMP}. It is most easily described as a product of theta functions with characteristic, in the following way. \medskip On a compact Riemann surface $X$ of genus $g$, one can choose a symplectic basis $a_1, \cdots, a_g, b_1, \cdots, b_g$ for $H_1(X,{\mathbb Z})$. Then a basis of holomorphic 1-differentials is determined by the conditions $$\int_{a_i} \omega_j = \delta_{ij}~.$$ The period matrix of the surface is then fixed by $$\int_{b_i} \omega_j = \tau_{ij},$$ with $\tau$ symmetric and ${\rm Im}(\tau) > 0$. $\tau$ gives the parametrization of Riemann surfaces $X$ by the Siegel upper half space. \medskip In terms of this data, recall that the genus $g$ theta functions are given by (see e.g.\ \cite{Beilinson}) $$\theta(z,\tau) = \sum_{m \in {\mathbb Z}^g} {\rm exp}\left( 2\pi i (m^t z + {1\over 2}m^t \tau m)\right)~.$$ Here, $\theta: {\mathbb C}^g \times H_g \to {\mathbb C}$ takes as arguments $z \in {\mathbb C}^g$ and the $g \times g$ period matrix $\tau$. \medskip The theta functions with characteristic can be similarly defined. Let $$\alpha = \epsilon + \tau \delta \in {\mathbb C}^g$$ with $\epsilon, \delta$ taking values in ${1\over 2}{\mathbb Z}^g$. With the Jacobian of $X$ being $J = {\mathbb C}^g / ({\mathbb Z}^g + \tau {\mathbb Z}^g)$, the class of $\alpha$ in $J$ is called a theta characteristic. We define the parity of $\alpha$ as $4 \epsilon \cdot \delta ~{\rm mod} ~2$. In particular, of the $4^g$ possible choices of characteristic, we then have $2^{g-1} (2^g - 1)$ odd ones and $2^{g-1} (2^g + 1)$ even ones. Now, define $$\theta[\alpha](z,\tau) = {\rm exp}\left(2\pi i(\delta^t (z + \epsilon) + {1\over 2} \delta^t \tau\delta) \right) \theta(z+\epsilon + \tau\delta,\tau)~.$$ These are the desired theta functions with characteristic. We will only need these evaluated at $z=0$, and denote $\theta[\alpha] \equiv \theta[\alpha](\tau) \equiv \theta[\alpha](z,\tau)|_{z=0}$.
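As a quick orientation, and using the standard fact that each theta constant $\theta[\alpha](\tau)$ transforms with modular weight $\frac{1}{2}$, the counting of even characteristics and the resulting weights of the products defined below work out as
\begin{align*}
g=1 &: \quad 2^{0}(2^{1}+1)=3 \ \text{ even characteristics} \quad \Rightarrow \quad \text{weight } 3\cdot 8\cdot \tfrac{1}{2}=12\,, \\
g=2 &: \quad 2^{1}(2^{2}+1)=10 \ \text{even characteristics} \quad \Rightarrow \quad \text{weight } 10\cdot 2\cdot \tfrac{1}{2}=10\,, \\
g=3 &: \quad 2^{2}(2^{3}+1)=36 \ \text{even characteristics} \quad \Rightarrow \quad \text{weight } 36\cdot \tfrac{1}{2}\cdot \tfrac{1}{2}=9\,,
\end{align*}
where the middle factor in each weight is the power to which each even theta constant appears in the product formulas below; these values match the familiar weights of $\Delta$, $\Phi_{10}$ and $\chi_9$.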
\medskip The automorphic forms $\Delta$, $\Phi_{10}$ and $\chi_{9}=\sqrt{\chi_{18}}$ all have simple definitions as products of genus $g=1,2,3$ theta functions with characteristics: \begin{align} \Delta &= 2^{-8} \prod_{\alpha \,\,{\rm even}} \theta[\alpha]^8 \qquad \,\,\, (g=1) \\ \Phi_{10} &= 2^{-12} \prod_{\alpha \,\,{\rm even}} \theta[\alpha]^2 \qquad \, (g=2) \\ \chi_{9} &= 2^{-14} \prod_{\alpha \,\,{\rm even}} \theta[\alpha]^{\frac{1}{2}} \qquad (g=3) \end{align} As there are 36 even theta functions with characteristic at genus three, $\chi_{18}$ indeed defines an automorphic form of weight 18. Our claim --- which we make more precise below --- is that $Z_{g=3} = 1/\chi_9$ gives the BPS counting function for bound states of three 1/2-BPS constituents on $K3 \times T^2$, corresponding to black hole configurations with up to three centers. \section{Conjecture} \label{sec:conjecture-rough} In this section we give a first formulation of our conjecture, without being precise about the moduli-dependence of bound state degeneracies, and without being precise about various sign ambiguities. We will likewise deliberately be vague about the distinction between BPS indices and absolute degeneracies. In section \ref{sec:conjecture} we will give a more precise version of the conjecture, and argue that the sign ambiguities and moduli-dependence are in fact closely related. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{g123.pdf} \end{center} \caption{D-brane configurations, quiver diagrams encoding intersection numbers, and bound state counting functions for $g=1$, $g=2$ and $g=3$. \label{g123}} \label{branes} \end{figure} We first review the interpretation of $Z_{g=1}$ and $Z_{g=2}$ as BPS bound state counting functions and then state our proposed generalization for $Z_{g=3}$. The basic idea and our notations are illustrated in fig.\ \ref{g123}. For concreteness we consider type IIB string theory compactified on $T^2 \times K3$. In a suitable duality frame, 1/2-BPS states are represented as D-branes wrapping a 1-cycle $\gamma$ on $T^2$ and an even-dimensional cycle $Q \in H_{\rm even}(K3) = H_0(K3) \oplus H_2(K3) \oplus H_4(K3)$. The number of BPS states $\Omega$ with charge $\Gamma = \gamma \times Q$ depends only on the duality invariant \begin{align} r \equiv \frac{1}{2} Q \cdot Q \, , \end{align} where the dot product denotes the signature $(4,20)$ intersection product on $H_{\rm even}(K3)$. The generating function for the degeneracies $\Omega(r)$ is $Z_{g=1} = 1/\Delta$, expanded in powers of \begin{align} x \equiv e^{2 \pi i \tau} \, , \end{align} that is to say, \begin{align} \label{Zg1} Z_{g=1}(x) = \frac{1}{\Delta(x)} = \sum_r \Omega(r) \, x^r = x^{-1} + 24 \, x^0 + 324 \, x + 3200 \, x^2 + \cdots \end{align} In the 4d ${\cal N}=4$ low energy effective supergravity theory, these 1/2-BPS states are realized as ``small'' single-center black holes. The minimal value of $r=\frac{1}{2}Q^2$ is $-1$. This corresponds to a rigid cycle $Q$ wrapped in K3, for example K3 itself or a supersymmetric 2-sphere. Higher $r$ correspond to wrapped branes with deformation moduli. A smooth genus $g$ supersymmetric 2-cycle has $r=g-1$. Next we consider two 1/2-BPS branes with charges $\Gamma_i = \gamma_i \times Q_i$, $i=1,2$, with $\gamma_1$ and $\gamma_2$ wrapped as in figure \ref{g123}. These can form 1/4-BPS bound states with each other. 
The brane configuration has the following duality invariants: \begin{align} r \equiv \frac{1}{2} Q_1^2 \, , \qquad s \equiv \frac{1}{2} Q_2^2 \, , \qquad a \equiv - Q_1 \cdot Q_2 \, . \end{align} The sign is chosen for consistency with conventions in later sections, but does not matter at the level of precision of this section. The generating function for the bound state degeneracies $\Omega(r,s,a)$ is $Z_{g=2} = 1/\Phi_{10}$ \cite{DVV}, expanded in powers of \begin{align} x \equiv e^{2 \pi i \tau_{11}} \, , \qquad y \equiv e^{2 \pi i \tau_{22}} \, , \qquad u \equiv e^{2 \pi i \tau_{12}} \, , \end{align} where $\tau_{ij}$ is the $g=2$ period matrix; that is to say, \begin{align} \label{Zg2} Z_{g=2}(x,y;u) = \frac{1}{\Phi_{10}(x,y;u)} = \sum_{rsa} \Omega(r,s,a) \, x^r y^s u^{\pm a} \, . \end{align} The sign ambiguity in the power of the expansion parameter $u$ depends on whether one views this as a large-$u$ or a small-$u$ expansion, and is related to ambiguities in the definition of $\Omega(r,s,a)$ due to background moduli dependence of the BPS spectrum, i.e.\ wall-crossing \cite{Sen:2007vb,Dabholkar:2007vk,Sen:2007pg,Cheng:2007ch,Banerjee:2008yu,Sen:2008ht}. There are other subtleties when the charges $Q_i$ are non-primitive \cite{Atishthree}. True to our promise, we ignore all of this here. In the 4d ${\cal N}=4$ low energy effective supergravity theory, these 1/4-BPS bound states are realized either as a single-center (large) black hole, or as a 2-center bound state of 1/2-BPS (small) black holes \cite{Sen:2007vb,Dabholkar:2007vk,Sen:2007pg,Cheng:2007ch,Sen:2008ht}. We are now ready to formulate our conjecture. To this end we consider {\it three} 1/2-BPS branes with charges $\Gamma_i = \gamma_i \times Q_i$, $i=1,2,3$, with the $\gamma_i$ wrapped as in figure \ref{g123}. The brane configuration has the following duality invariants: \begin{align} \label{rstabcdef} (r,s,t) = \bigl(\tfrac{1}{2} Q_1^2,\tfrac{1}{2} Q_2^2,\tfrac{1}{2} Q_3^2 \bigr) \, , \qquad (a,b,c) = -\bigl( Q_1 \!\cdot\! Q_2,\,Q_2 \!\cdot \! Q_3,\,Q_3\! \cdot\! Q_1 \bigr). \end{align} Although for generic $Q_i$ and at generic points in the $T^2 \times K3$ moduli space, these three branes will not form BPS bound states \cite{Dabholkar:2009dq}, it is nevertheless the case that for suitable charges and on suitable subloci of the moduli space, they {\it will} form BPS bound states. We conjecture that the generating function for the bound state degeneracies $\Omega(r,s,t,a,b,c)$ is $Z_{g=3} = 1/\chi_9$, expanded in powers of \begin{align} \label{xyzuvw} (x,y,z) \equiv (e^{2 \pi i \tau_{11}},e^{2 \pi i \tau_{22}},e^{2 \pi i \tau_{33}}) \, , \qquad (u,v,w) \equiv (e^{2 \pi i \tau_{12}},e^{2 \pi i \tau_{23}},e^{2 \pi i \tau_{13}}) \end{align} where $\tau_{ij}$ is the $g=3$ period matrix; that is to say, \begin{align} \label{Zg3} Z_{g=3}(x,y,z;u,v,w) = \frac{1}{\chi_9(x,y,z;u,v,w)} = \sum_{rstabc} \Omega(r,s,t,a,b,c) \, x^r y^s z^t u^{\pm a} v^{\pm b} w^{\pm c} \, . \end{align} As we will make precise in section \ref{sec:conjecture}, the sign ambiguities are again related to wall crossing ambiguities. In the 4d ${\cal N}=4$ low energy effective supergravity theory, these bound states are realized either as a single-center black hole, or as a 2-center bound state of a 1/2-BPS and a 1/4-BPS black hole, or as a 3-center bound state of 1/2-BPS black holes. The latter includes, in particular, scaling solutions.
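To make the bookkeeping concrete, consider for illustration three branes each wrapping a rigid cycle in K3, so that $Q_i^2 = -2$ and $(r,s,t)=(-1,-1,-1)$, with pairwise intersection invariants $(a,b,c)$ as in (\ref{rstabcdef}). Each constituent then supports a single BPS state, since the coefficient of $x^{-1}$ in (\ref{Zg1}) equals 1, and the conjecture assigns to this three-brane configuration the coefficient of $x^{-1} y^{-1} z^{-1} u^{\pm a} v^{\pm b} w^{\pm c}$ in the expansion (\ref{Zg3}).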
To state a more precise version of the conjecture and to subject it to tests, we need a more detailed description of these black hole configurations and their wall crossing properties. We turn to this next. \section{Black hole bound states} \label{sec:threeparticleboundstates} In this section we study in some detail the black hole configurations corresponding to bound states of three 1/2-BPS D-branes as in fig.\ \ref{g123} on the right. The level of detail is needed because several of our tests of the conjecture use wall-crossing in an essential way, so we will need to be precise about bound state stability conditions and various signs on which these conditions depend. We build up the relevant technology in steps. We begin by making a few things in our discussion above a little bit more precise. We then discuss single-, two- and three-center black hole realizations of the bound states of interest. In the final part we review known results \cite{Denef:2007vg,Bena:2012hf,Lee:2012sc,Manschot:2013sya} about the ground state degeneracy of 3-node quiver quantum mechanics. Although these quivers do not accurately describe the bound states of interest to us, they do have similar 3-particle bound state realizations, and their degeneracies exhibit striking qualitative features suggestively similar to the degeneracies extracted from $Z_{g=3} = 1/\chi_9$. \subsection{D-brane setup} \label{sec:Dbranesetup} We consider again the three 1/2-BPS D-branes wrapped on cycles $\Gamma_i = \gamma_i \times Q_i$ as depicted in fig.\ \ref{g123} on the right. More precisely, denoting the horizontal and vertical 1-cycles of the $T^2$ by $A$ and $B$, oriented such that the intersection product $\langle A, B \rangle_{T^2} = +1$, we have: \begin{align} \label{charges} \Gamma_1 = A \times Q_1 \, , \qquad \Gamma_2 = (B-A) \times Q_2 \, , \qquad \Gamma_3 = - B \times Q_3 \, , \end{align} where $Q_i \in H_{\rm even}(K3)$. The 1-cycles are chosen to have intersection product $\langle \gamma_i,\gamma_j\rangle_{T^2} = +1$ for all cyclically ordered pairs $(\gamma_i,\gamma_j)$. The intersection products of the $\Gamma_i$ are \begin{align} a = \langle \Gamma_2,\Gamma_1 \rangle = -Q_1 \cdot Q_2 \, , \quad b = \langle \Gamma_3,\Gamma_2 \rangle = -Q_2 \cdot Q_3 \, , \quad c = \langle \Gamma_1,\Gamma_3 \rangle = -Q_3 \cdot Q_1 \, , \end{align} as defined earlier in (\ref{rstabcdef}). We assume the charges $Q_i$ are chosen such that \begin{align} a,b,c > 0 \, , \end{align} and we assume there exists a locus in the $T^2 \times K3$ moduli space where the three branes are mutually supersymmetric. On general grounds \cite{Brunner:1999jq,Kachru:1999vj,Douglas:2000ah,Denef:2007vg}, supersymmetric bound states of these branes will form when moving away from this locus along suitable (though not arbitrary) directions in the moduli space. A simple analogous brane setup, where existence of such brane configurations is readily checked by elementary means, is given by a system of intersecting D3-branes on $T^2 \times T^2 \times T^2$ instead of $T^2 \times K3$, with $\Gamma_1 = A_1 \times A_2 \times A_3$, $\Gamma_2 = (B_1 - A_1) \times (B_2-A_2) \times (B_3-A_3)$, $\Gamma_3 = - B_1 \times B_2 \times B_3$. The intersection products in this case are easily computed to be $a=b=c=+1>0$. 
If we take the complex structure moduli of the $T^2$ factors to be $\tau_1 = \tau_2 = \tau_3 = e^{i \pi/3}$, the periods ${\cal Z}_i = \int_{\Gamma_i} \Omega$ of the holomorphic 3-form $\Omega = dz_1 \wedge dz_2 \wedge dz_3$ on $T^2 \times T^2 \times T^2$ are all equal to 1, so the branes are mutually supersymmetric at this point in the moduli space. Moving the complex structure moduli slightly away from $\tau_i = e^{i \pi/3}$ in the appropriate directions produces BPS bound states of these intersecting D-branes. Near the locus where the three constituents are mutually supersymmetric, the low energy dynamics of the D-brane system is captured by a quiver-like ${\cal N}=4$ supersymmetric quantum mechanics model, similar but not identical to the 3-node cyclic quiver model introduced in \cite{Denef:2007vg}. Counting BPS bound states amounts to counting the supersymmetric ground states of this system. This problem was solved for 3-node cyclic quivers with generic cubic superpotential in \cite{Denef:2007vg,Bena:2012hf,Lee:2012sc,Manschot:2013sya}. However those results are not directly applicable here. One difference is that the constituent branes necessarily have moduli in the case of interest, since there are always at least the translation and Wilson line moduli of the 1-cycles wrapped on $T^2$. This allows for more general superpotentials depending nontrivially on these moduli. For intersecting branes on $T^6$, including generalizations to more complicated wrappings with larger values of $a,b,c$, explicit expressions for the superpotential were obtained in \cite{Cremades:2003qj} in terms of theta functions. Related explicit models in IIA on $T^6$ were constructed in \cite{Chowdhury:2014yca}. One could at this point try to deduce the appropriate microscopic description for the $T^2 \times K3$-wrapped brane systems of interest, and to identify and count the appropriate collection of BPS states directly in this description. We will not attempt this here. Instead we will consider the 4d ${\cal N}=4$ low energy supergravity description of these bound states, and use this in combination with the interpretation of $Z_{g=1} = 1/\Delta$ and $Z_{g=2} = 1/\Phi_{10}$ as BPS counting functions to perform a number of rather nontrivial checks of our conjecture. \def\CQ{{\cal Q}} \def\CP{{\cal P}} \subsection{Single-center black hole solutions} A regular single-center BPS black hole solution with total charge $\Gamma = A \times \CQ + B \times \CP$ exists provided \cite{Cvetic:1995bj} $\CQ^2>0$, $\CP^2>0$, $\Delta > 0$, where $\Delta$ is called the discriminant, \begin{align} \Delta \equiv \CQ^2 \CP^2 - (\CQ \cdot \CP)^2 \, . \end{align} The black hole entropy is then given by \begin{align} S = \pi \sqrt{\Delta} \, . \label{SDelta} \end{align} When $\Delta=0$, the black hole becomes singular to leading order in the supergravity approximation, but may still have a finite size horizon when $\alpha'$-corrections are taken into account, giving rise to a small black hole. This is the case in particular for generic half-BPS charges, i.e.\ charges in the duality orbit of $\Gamma = A \times \CQ$ with $\CQ^2 \geq 0$. If $\Gamma = A \times \CQ$ with $\CQ^2=-2$, obtained for example by wrapping a D3 on a supersymmetric 2-sphere in the K3, we get an elementary BPS particle rather than a black hole. 
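As a simple illustration of these criteria (our own example, for orientation): a charge with $\CQ^2 = \CP^2 = 2N$ and $\CQ \cdot \CP = 0$ has discriminant $\Delta = 4N^2$ and thus entropy $S = \pi\sqrt{4N^2} = 2\pi N$, while for a half-BPS charge $\Gamma = A \times \CQ$ the vanishing of $\CP$ immediately forces $\Delta = 0$, consistent with the small black hole behavior just described.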
The total charge $\Gamma$ of the D-brane system considered above in section \ref{sec:Dbranesetup} is \begin{align} \Gamma = \Gamma_1 + \Gamma_2 + \Gamma_3 = A \times (Q_1-Q_2) + B \times (Q_2 - Q_3) \, , \end{align} so $\CQ = Q_1 - Q_2$ and $\CP = Q_2 - Q_3$. The corresponding duality invariants are \begin{align} \CQ^2 &= Q_1^2 + Q_2^2 - 2 Q_1 \cdot Q_2 = 2(r+s+a) \nonumber \\ \CP^2 &= Q_2^2 + Q_3^2 - 2 Q_2 \cdot Q_3 = 2(s+t+b) \nonumber \\ \CQ \cdot \CP &= - Q_1 \cdot Q_3 + Q_1 \cdot Q_2 + Q_2 \cdot Q_3 - Q_2^2 = c - a - b - 2 s \, . \label{CQCPCQCP} \end{align} Plugging in (\ref{CQCPCQCP}) and reworking things a bit, we get \begin{align} \Delta & = 2(ab + bc + ca) - (a^2+b^2+c^2) +4 (rs + st + tr) + 4(rb+sc+ta) \nonumber \\ &= (\sqrt{a}+\sqrt{b}+\sqrt{c})(\sqrt{a}+\sqrt{b}-\sqrt{c})(\sqrt{a}-\sqrt{b}+\sqrt{c})(-\sqrt{a}+\sqrt{b}+\sqrt{c}) \nonumber \\ & \quad +4 (rs + st + tr) + 4(rb+sc+ta) \, . \label{Deltaexplicit} \end{align} In the limit $a,b,c \gg r,s,t$, we have $\CQ^2 \gg 1$ and $\CP^2 \gg 1$, and $\Delta>0$ provided $\sqrt{a}$, $\sqrt{b}$, $\sqrt{c}$ satisfy the triangle inequalities, i.e.\ \begin{align} \sqrt{a}+\sqrt{b} > \sqrt{c} \, , \qquad \sqrt{b}+\sqrt{c} > \sqrt{a} \, , \qquad \sqrt{c}+\sqrt{a} > \sqrt{b} \, . \label{squareroottriangineq} \end{align} Indeed, by Heron's formula, the quartic factor in (\ref{Deltaexplicit}) equals sixteen times the squared area of a triangle with side lengths $\sqrt{a}$, $\sqrt{b}$, $\sqrt{c}$, so it is positive precisely when (\ref{squareroottriangineq}) holds. It can be checked that this remains a sufficient condition for existence of the BPS black hole even when the $a,b,c$ are not parametrically larger than $r,s,t$. \subsection{Two-center solutions} \label{sec:twocentersol} Under suitable conditions, 2-center black hole bound states of the same total charge $\Gamma$ exist. In general there may be a huge number of different ways of splitting up $\Gamma = \Gamma_A + \Gamma_B$ to form a 2-center bound state with charges $\Gamma_A,\Gamma_B$, but here we will only consider charges obtained by merging two out of the three constituent charges $\Gamma_1,\Gamma_2,\Gamma_3$ into a black hole, and binding this to the third one as a 2-center bound state. Consider first the split \begin{align} \Gamma_A=\Gamma_1+\Gamma_2 = A \times (Q_1-Q_2) + B \times Q_2, \qquad \Gamma_B=\Gamma_3 = - B \times Q_3 \, . \end{align} The quadratic invariants for $\Gamma_A$ are \begin{align} \CQ_A^2 = (Q_1-Q_2)^2 = 2(r+s+a) \, , \qquad \CP_A^2 = 2s \, , \qquad \CQ_A \cdot \CP_A = -(a+2s) \, , \end{align} so its discriminant is \begin{align} \Delta_A = \CQ_A^2 \CP_A^2 - (\CQ_A \cdot \CP_A)^2 = 4s(r+s+a) - (a+2s)^2 = 4 r s - a^2 \, . \end{align} Thus a regular single-center black hole of charge $\Gamma_A$ exists provided \begin{align} r,s,t > 0 \, , \qquad 4 r s > a^2 \, . \end{align} Its entropy is $S_A = \pi \sqrt{4rs-a^2}$. These two centers may form a bound state with equilibrium separation given by the BPS constraint \cite{Denef:2000nb,Bates:2003vx} \begin{align} \label{twocenterposition} \frac{\langle \Gamma_B,\Gamma_A \rangle}{|x_B-x_A|} = \theta_B = -\theta_A \, . \end{align} The constants $\theta_B$, $\theta_A$ are determined by the charges and background moduli. In ${\cal N}=2$ language, they are more specifically expressed in terms of the central charges $Z_A$ and $Z_B$, as $\theta_B = 2 \, {\rm Im}\bigl( e^{-i\alpha} Z_B \bigr)$, $\theta_A = 2 \, {\rm Im}\bigl( e^{-i\alpha} Z_A \bigr) = - \theta_B$, where $e^{i\alpha} \equiv \frac{Z}{|Z|}$, $Z \equiv Z_A + Z_B$. Since distances are positive, the bound state can only exist if $\langle \Gamma_B,\Gamma_A \rangle \theta_B > 0$.
In the case at hand, $\theta_B=\theta_3 = 2 \, {\rm Im}\bigl( e^{-i\alpha} Z_3 \bigr)$ and $\langle \Gamma_B,\Gamma_A \rangle = Q_3 \cdot (Q_1-Q_2) = b-c$, so the existence condition becomes $(b-c) \theta_3 > 0$. The BPS degeneracy associated with this configuration is $\Omega = |\langle \Gamma_B,\Gamma_A \rangle| \, \Omega_A \, \Omega_B$, where $\Omega_A$, $\Omega_B$ are the degeneracies of the two centers and $|\langle \Gamma_B,\Gamma_A \rangle| = |b-c|$ is an electromagnetic intrinsic angular momentum degeneracy \cite{Denef:2002ru}. The other charge splittings, namely $\Gamma_A = \Gamma_2 + \Gamma_3$, $\Gamma_B=\Gamma_1$ and $\Gamma_A=\Gamma_3+\Gamma_1$, $\Gamma_B=\Gamma_2$ can be treated analogously. To summarize, we get the following existence conditions and degeneracies for the three possible two-center bound states under consideration: \begin{align} (\Gamma_1+\Gamma_2,\Gamma_3): && r,s,t > 0 \, , \,\, 4rs > a^2 \, , \,\, (b-c) \theta_3 > 0 &\quad \leadsto& \Omega = |b-c| \, \Omega_{1+2} \, \Omega_3 \nonumber \\ (\Gamma_2+\Gamma_3,\Gamma_1): && r,s,t > 0 \, , \,\, 4st > b^2 \, , \,\, (c-a) \theta_1 > 0 &\quad \leadsto& \Omega = |c-a| \, \Omega_{2+3} \, \Omega_1 \nonumber \\ (\Gamma_3+\Gamma_1,\Gamma_2): && r,s,t > 0 \, , \,\, 4tr > c^2 \, , \,\, (a-b) \theta_2 > 0 &\quad \leadsto& \Omega = |a-b| \, \Omega_{3+1} \, \Omega_2 \label{Omfacttwo} \end{align} Marginal cases in which some of these inequalities are relaxed to equalities may exist as well, as discussed under (\ref{SDelta}). Note that in the limit $a,b,c \gg r,s,t$, none of these 2-center solutions exist at all. The exact single-center degeneracies $\Omega_i$ and $\Omega_{j+k}$ can be obtained from the generating functions $Z_{g=1}=1/\Delta$ and $Z_{g=2}=1/\Phi_{10}$, as in (\ref{Zg1}) and (\ref{Zg2}). \subsection{Three-center solutions} \label{sec:threecentersol} Likewise, three-center bound states with center charges $\Gamma_1,\Gamma_2,\Gamma_3$ may exist. The BPS position constraints generalizing (\ref{twocenterposition}) to three centers are \begin{align} \frac{\langle \Gamma_1,\Gamma_3 \rangle}{|x_1-x_3|} + \frac{\langle \Gamma_1,\Gamma_2 \rangle}{|x_1-x_2|} = \theta_1 \, , \quad \frac{\langle \Gamma_2,\Gamma_1 \rangle}{|x_2-x_1|} + \frac{\langle \Gamma_2,\Gamma_3 \rangle}{|x_2-x_3|} = \theta_2 \, , \quad \frac{\langle \Gamma_3,\Gamma_2 \rangle}{|x_3-x_2|} + \frac{\langle \Gamma_3,\Gamma_1 \rangle}{|x_3-x_1|} = \theta_3 \, , \nonumber \end{align} that is \begin{align} \frac{c}{|x_1-x_3|} - \frac{a}{|x_1-x_2|} = \theta_1 \, , \quad \frac{ a }{|x_2-x_1|} -\frac{b}{|x_2-x_3|} = \theta_2 \, , \quad \frac{b}{|x_3-x_2|} -\frac{c}{|x_3-x_1|} = \theta_3 \, \nonumber \, . \end{align} Here $\theta_i = 2 \, {\rm Im}(e^{-i \alpha} Z_i)$, $e^{i \alpha} = \frac{Z}{|Z|}$, $Z=Z_1+Z_2+Z_3$. Note that this implies $\theta_1+\theta_2+\theta_3 = 0$, so summing up the three equations above just gives $0=0$. These equations do not always have solutions. Assume for example $\theta_3>0$, $\theta_1 < 0$. Then the first equation implies $\frac{a}{|x_1-x_2|} > \frac{c}{|x_1-x_3|}$ and the third equation implies $\frac{b}{|x_3-x_2|} > \frac{c}{|x_1-x_3|}$. Combining these two inequalities implies $a+b > \frac{|x_1-x_2|+|x_3-x_2|}{|x_1-x_3|} \, c \geq c$, where the last inequality follows from the triangle inequalities for the $(x_1,x_2,x_3)$-triangle. Therefore if $a+b \leq c$ and $\theta_3 > 0$, $\theta_1 < 0$, the 3-center bound state does not exist. 
On the other hand, if $a+b > c$, $b+c > a$, $c+a > b$, that is to say, if the intersection products $(a,b,c)$ satisfy the triangle inequalities, a branch of 3-center solutions always exists. This branch is connected to so-called scaling solutions \cite{Denef:2007vg,Denef:2002ru,Bena:2006kb}, consisting of configurations for which the centers approach each other arbitrarily closely in coordinate space, $|x_i-x_j| \to 0$. In this limit, the position constraints reduce to the scale-invariant equations \begin{align} \frac{a}{|x_1-x_2|} = \frac{b}{|x_2-x_3|} = \frac{c}{|x_3-x_1|} \, . \end{align} Consistency with the triangle inequalities for the $(x_1,x_2,x_3)$-triangle then requires that $a,b,c$ themselves satisfy the triangle inequalities, i.e.\ \begin{align} a+b > c, \qquad b+c > a, \qquad c+a > b \, . \label{triangleineq} \end{align} (Marginal cases in which an inequality becomes an equality require a more careful discussion, as the earlier analysis of the $\theta_3>0$, $\theta_1<0$ case illustrates, but we will skip this.) Although the coordinate size of these configurations goes to zero in the limit, their physical size remains finite in the full supergravity solution \cite{Bena:2006kb}. In fact, in the scaling limit, the solution becomes indistinguishable to a distant observer from a single-center BPS black hole solution of total charge $\Gamma=\Gamma_1+\Gamma_2+\Gamma_3$. Consistency thus requires that the single-center discriminant $\Delta$ as given in (\ref{Deltaexplicit}) is positive whenever the triangle inequalities (\ref{triangleineq}) are satisfied. Happily, this is the case, as in general (\ref{triangleineq}) implies (\ref{squareroottriangineq}). The implication does not run the other way, so the existence of scaling solutions implies the existence of single-center solutions, but the converse is not true in general. If the triangle inequalities are {\it not} satisfied, scaling solutions do not exist, and by tuning the background moduli close to values where the three central charge phases line up (so $\theta_i \to 0$), the centers can be taken to be arbitrarily well-separated. In this case the BPS degeneracy of the configuration can be determined from wall crossing arguments \cite{Denef:2007vg} or by direct quantization of the system \cite{deBoer:2008zn,Manschot:2010qz,Manschot:2013sya,Manschot:2014fua}. Explicitly, \begin{align} \label{Omfactthree} \Omega = \Omega_c \, \Omega_1 \, \Omega_2 \, \Omega_3 \, , \end{align} where $\Omega_c$ is the ``configurational'' degeneracy (discussed below) and the $\Omega_i$ are the BPS degeneracies of the 1/2-BPS centers $\Gamma_i$. The exact single-center degeneracies $\Omega_i$ can be obtained from the generating functions $Z_{g=1}=1/\Delta$ as in (\ref{Zg1}). The configurational factor $\Omega_c$ is most easily obtained from wall crossing. Assuming $a+b \leq c$, we know from our earlier discussion above that $\Omega_c=0$ in the moduli space chamber $\theta_1<0$, $\theta_3>0$. When passing to other chambers, this may jump to nonzero values. However such jumps can only happen at walls where the size of a bound state diverges, i.e.\ when one of the centers is pushed out to or pulled in from infinity. If the first center goes to infinity, then $R \equiv |x_1-x_2| \approx |x_1-x_3| \to \infty$ while $r \equiv |x_2-x_3|$ remains finite. From the position constraint equations it then follows that $\frac{c-a}{R} \approx \theta_1$, $\frac{b}{r} \approx -\theta_2 \approx \theta_3$. 
Hence jumps of this kind occur when $\theta_3>0$, $\theta_2<0$ while $\theta_1$ passes through zero, with the bound state existing on the side with $(c-a) \theta_1 > 0$. Since we assumed $a+b \leq c$, we have in particular $c-a>0$ so the bound state exists when $\theta_1>0$. Near the transition, $\Gamma_1$ is loosely bound to a tighter bound state of $\Gamma_2$ and $\Gamma_3$. The latter has degeneracy $|\langle \Gamma_3,\Gamma_2\rangle|=b$, and binding this $(2+3)$-atom to the first center multiplies this degeneracy by a factor $|\langle \Gamma_1,\Gamma_2+\Gamma_3\rangle| = c-a$, resulting in a total configurational degeneracy $\Omega_c = b(c-a)$. Thus we conclude that in the region $\theta_1 > 0$, $\theta_2 < 0$, $\theta_3 > 0$, we have $\Omega_c = b(c-a)$. Similar arguments can be used to determine $\Omega_c$ in all chambers, as well as for the other possible violations of the triangle inequalities. We summarize the results for $\Omega_c$ in all of these cases in the following table: \begin{center} \begin{tabular}{|ccc|ccc|} \hline $\theta_1$&$\theta_2$&$\theta_3$&$a+b \leq c$&$b+c \leq a $& $c+a \leq b$ \\ \hline $-$ & $\pm$ & $+$ & $\Omega_c = 0$ & $b(a-c)$ & $a(b-c)$ \\ $+$ & $-$ & $\pm$ & $b(c-a)$ & $0$ & $c(b-a)$ \\ $\pm$ & $+$ & $-$ & $a(c-b)$ & $c(a-b)$ & $0$ \\ \hline \end{tabular} \end{center} Notice that the {\it difference} $\Delta \Omega_c$ between these rows is always the same, independent of which triangle inequality is violated, as dictated by the wall crossing formula. For example the difference between rows 2 and 1 is \begin{align} \label{Omegacjump} \Omega_{c,2} - \Omega_{c,1} = b(c-a) \end{align} for all three cases. Moreover, since the jumps are entirely determined by bound states of divergent size, as opposed to scaling solutions, the same $\Delta \Omega$ can be expected to apply even when the triangle inequalities {\it are} satisfied. This is borne out by explicit computations in microscopic quiver models \cite{Denef:2007vg,Bena:2012hf,Lee:2012sc,Manschot:2013sya}. To get the total degeneracy when scaling solutions {\it do} exist, i.e.\ when $(a,b,c)$ satisfy the triangle inequalities, one has to take into account condensation of light open strings stretched between the constituent branes in the scaling regime, also known as the ``Higgs branch'' of the system. A simple 3-node cyclic quiver model sharing this feature was considered in \cite{Denef:2007vg} and further analyzed in \cite{Bena:2012hf,Lee:2012sc,Manschot:2013sya}. As mentioned before at the end of section \ref{sec:Dbranesetup}, there is no reason to expect this to be a quantitatively accurate model for the bound states of interest to us. However it should nevertheless give a reasonable model for at least some qualitative features of the degeneracies. A generating function counting BPS states of this model was found in \cite{Bena:2012hf}. In the chamber $\theta_1<0$, $\theta_2<0$, $\theta_3>0$, it is given by \begin{align} \sum_{a,b,c} \Omega(a,b,c) \, u^a v^b w^c = \frac{uv(1-uv)}{(1-u)^2(1-v)^2(1-uv-vw-wu + 2 \, uvw)} \, . \end{align} For example for $(a,b)=(5,9)$, the degeneracies are given by \begin{center} \footnotesize \begin{tabular}{|r|rrrrrrrrrrrrrrrr|} \hline c&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15 \\ \hline $\Omega(5,9,c)$&45& 40& 35& 30& 25& 20& 141& --578& 1583& --2556& 2685& --1650& 495& 0& 0& 0 \\ \hline \end{tabular} \end{center} This agrees with the previous table for the range in which the triangle inequalities are {\it not} satisfied, $c + 5 \leq 9$ and $5 + 9 \leq c$. 
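As a quick consistency check, this table can be reproduced directly from the generating function; a minimal sympy sketch (our own, expanding first in $u$, then in $v$, then in $w$; the computation may take a little while at these orders):
\begin{verbatim}
# Sketch: recompute the Omega(5,9,c) row of the table above from the
# 3-node quiver generating function, by nested Taylor expansion.
import sympy as sp

u, v, w = sp.symbols('u v w')
F = u*v*(1 - u*v) / ((1 - u)**2 * (1 - v)**2
                     * (1 - u*v - v*w - w*u + 2*u*v*w))

a, b, c_max = 5, 9, 15
cu   = F.series(u, 0, a + 1).removeO().coeff(u, a)    # coeff of u^5
cuv  = cu.series(v, 0, b + 1).removeO().coeff(v, b)   # coeff of v^9
tail = sp.series(cuv, w, 0, c_max + 1).removeO()
print([tail.coeff(w, c) for c in range(c_max + 1)])
# expected: 45, 40, 35, 30, 25, 20, 141, -578, 1583, -2556,
#           2685, -1650, 495, 0, 0, 0
\end{verbatim}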
When they {\it are} satisfied, the degeneracies become exponentially large. Asymptotically for large $a$, $b$, $c$ \cite{Bena:2012hf}, \begin{align} \label{threenodequiverasymptotics} \Omega(\alpha N,\beta N,\gamma N) \sim \left(\frac{\alpha^\alpha \beta^\beta \gamma^\gamma \, 2^{\alpha+\beta+\gamma}}{(\alpha+\beta-\gamma)^{\alpha+\beta-\gamma} (\alpha+\gamma-\beta)^{\alpha+\gamma-\beta} (\beta+\gamma-\alpha)^{\beta+\gamma-\alpha} }\right)^N \, . \end{align} In particular when $a=b=c=N$, this becomes $\Omega(N,N,N) \sim 2^{3N}$. \section{A more precise conjecture} \label{sec:conjecture} From the discussion above it is clear that the 3-constituent bound-state degeneracies conjecturally counted by $1/\chi_9$ depend on the background moduli, and considerably more so than the 2-constituent bound states counted by $1/\Phi_{10}$. More specifically, the spectrum depends on the signs of the parameters $\theta_1,\theta_2,\theta_3$ introduced above. In the following section we will provide evidence that these wall crossing ambiguities are related to the sign ambiguities in our conjecture as formulated in (\ref{Zg3}). Based on this evidence, a more precise version of the conjecture appears to be \begin{align} \label{Zg3b} Z_{g=3}(x,y,z;u,v,w) = \frac{1}{\chi_9(x,y,z;u,v,w)} = \sum_{rstabc} \Omega(r,s,t,a,b,c) \, x^r y^s z^t u^{\rho a} v^{\sigma b} w^{\tau c} \, , \end{align} where $(\rho,\sigma,\tau)$ are the following $\theta$-dependent signs: \begin{center} \begin{tabular}{|ccc|ccc|} \hline $\theta_1$&$\theta_2$&$\theta_3$&$\rho$&$\sigma$& $\tau$ \\ \hline $-$ & $\pm$ & $+$ & $+$ & $+$ & $-$ \\ $+$ & $-$ & $\pm$ & $-$ & $+$ & $+$ \\ $\pm$ & $+$ & $-$ & $+$ & $-$ & $+$ \\ \hline \end{tabular} \end{center} An ambiguity still implicit in (\ref{Zg3b}) is that the Taylor expansion of $Z_{g=3}$ depends on the order in which $u$, $v$ and $w$ are expanded. Equivalently, if the coefficients are extracted by contour integration, the result depends on the contour, and more specifically on the relative sizes of $|u|$, $|v|$ and $|w|$. In the above it is understood that the variable with the negative sign is expanded last. So for example if $\theta_1<0$, $\theta_3>0$ (row 1), we first expand $u$, $v$ around zero (the order does not matter in this case), and next $w$ around zero. Alternatively this corresponds to picking a contour with $|u|,|v| \ll |w| < 1$. Another way of phrasing the above table is that in each chamber we have a particular ordering of $\{\Gamma_1,\Gamma_2,\Gamma_3 \}$, to wit \begin{center} \begin{tabular}{|ccc|c|} \hline $\theta_1$&$\theta_2$&$\theta_3$& ordering \\ \hline $-$ & $\pm$ & $+$ & $\Gamma_1 < \Gamma_2 < \Gamma_3$ \\ $+$ & $-$ & $\pm$ & $\Gamma_2 < \Gamma_3 < \Gamma_1$ \\ $\pm$ & $+$ & $-$ & $\Gamma_3 < \Gamma_1 < \Gamma_2$ \\ \hline \end{tabular} \end{center} Then the power $m_{ij}$ of the off-diagonal $e^{2 \pi i \tau_{ij}}$ in the expansion is fixed by \begin{align} m_{ij} = \langle \Gamma_j,\Gamma_i \rangle \quad \mbox{ if } \quad \Gamma_j > \Gamma_i . \end{align} The Taylor expansion order (or integration contour) is likewise specified by this ordering, with the expansion order determined by the induced pair ordering. For example if $\Gamma_2 < \Gamma_3 < \Gamma_1$, we first expand in $v=e^{2 \pi i \tau_{23}}$, then in $w=e^{2 \pi i \tau_{31}}$, and finally in $u=e^{2 \pi i \tau_{12}}$. 
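For bookkeeping purposes, the content of the last two tables can be condensed into a small lookup; a minimal Python sketch (our own, with a hypothetical function name):
\begin{verbatim}
# Sketch: given the signs of (theta_1, theta_2, theta_3), return the
# signs (rho, sigma, tau) of the conjecture and the expansion order
# (the last listed variable is expanded last, cf. the tables above).
def chamber_prescription(t1, t2, t3):
    if t1 < 0 and t3 > 0:          # Gamma_1 < Gamma_2 < Gamma_3
        return (+1, +1, -1), ['u', 'v', 'w']
    if t1 > 0 and t2 < 0:          # Gamma_2 < Gamma_3 < Gamma_1
        return (-1, +1, +1), ['v', 'w', 'u']
    if t2 > 0 and t3 < 0:          # Gamma_3 < Gamma_1 < Gamma_2
        return (+1, -1, +1), ['w', 'u', 'v']
    raise ValueError("sign pattern not covered by the tables")

print(chamber_prescription(-1.0, 0.5, 2.0))
# ((1, 1, -1), ['u', 'v', 'w'])
\end{verbatim}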
\section{Tests at leading order in the $q$-expansion of $1/\chi_9$} \label{sec:leading-q-examples} Recalling the definition (\ref{xyzuvw}) of the expansion parameters, let us explore the conjecture for the leading term in the $q$-expansion, i.e.\ the small $(x,y,z)$-expansion of $1/\chi_9$, that is \begin{align} \frac{1}{\chi_9(x,y,z;u,v,w)} = x^{-1} y^{-1} z^{-1} Z(u,v,w) + \cdots \, , \end{align} where \begin{align} Z(u,v,w) &= \frac{u v w}{(1-u)(1-v)(1-w)\sqrt{P(u,v,w)}} \nonumber \\ P(u,v,w) &= u^2+v^2+w^2-2(uv+vw+wu)-2\,uvw(u+v+w-4)+u^2v^2w^2 \nonumber \\ &=(\sqrt{u}+\sqrt{v}+\sqrt{w}+\sqrt{u v w})(\sqrt{u}-\sqrt{v}-\sqrt{w}+\sqrt{u v w}) \nonumber \\ & \quad \times (-\sqrt{u}+\sqrt{v}-\sqrt{w}+\sqrt{u v w}) (-\sqrt{u}-\sqrt{v}+\sqrt{w}+\sqrt{u v w}) \, . \end{align} Note that $Z(u,v^{-1},w^{-1}) = Z(u,v,w)$ and cyclic permutations thereof, but $Z(u,v,w^{-1}) \neq Z(u,v,w)$. According to the conjecture, the expansion of $Z$ in powers of $u,v,w$ should count bound states in the $(r,s,t) = (-1,-1,-1)$ sector of our D-brane setup. As recalled under (\ref{Zg1}), this means that each of the constituent 1/2-BPS D-branes wraps a rigid cycle $Q_i$. According to (\ref{Omfacttwo}), there are no 2-center bound states to consider in this sector. Therefore the degeneracies predicted by $1/\chi_9$ should count the three-particle bound states discussed in section~\ref{sec:threecentersol}. We asserted in section \ref{sec:conjecture} that the result of Taylor expanding $Z$ about $(u,v,w)=(0,0,0)$ depends on the order in which we expand, or equivalently on the contour we use to extract the coefficients. For example, if we first expand in $u$, then in $v$, and finally in $w$, we get, up to cubic order in $u$, $v$ and up to zeroth\footnote{All terms of positive order in $w$ have the same coefficient as the corresponding terms of zeroth order in $w$. For example the terms of order $O(u^2 v^3 w^n)$, $n \geq 0$, are $u^2 v^3(6+6 w + 6 w^2 + 6 w^3 + 6 w^4 + \cdots)$.} order in $w$: \begin{align} Z(u,v,w) &= u \bigl(v + v^2 (w^{-1}+ 2) + v^3 (w^{-2} + 2 \, w^{-1} + 3) \bigr) \nonumber \\ &\quad + u^2 \bigl(v (w^{-1} + 2) + v^2 (4 \, w^{-2} + 2 \, w^{-1} + 4) + v^3 (9 \, w^{-3} + 2 \, w^{-2} + 4 \, w^{-1} + 6) \bigr) \nonumber \\ &\quad + u^3 \bigl(v (w^{-2} + 2\, w^{-1} + 3) + v^2 (9 \,w^{-3} + 2 \,w^{-2} + 4 \,w^{-1} + 6) \nonumber \\ &\quad \quad \quad \,\, + v^3 (36 \,w^{-4} - 18 \,w^{-3} + 12 \,w^{-2} + 6\, w^{-1} + 9) \bigr) + \cdots \, . \end{align} While this is symmetric under exchange $u \leftrightarrow v$, it is evidently not symmetric under exchange of $u \leftrightarrow w$, nor under exchange of $u \leftrightarrow w^{-1}$. Denote the coefficient of $u^a v^b w^{-c}$ in this $|u|,|v| \ll |w| \ll 1$ expansion by $\Omega(a,b,c)$. An alternative expansion is obtained by first expanding in $u$ and $w$ and then in $v$, or equivalently $|u|, |w| \ll |v| \ll 1$. The analog of $\Omega(a,b,c)$ is now the coefficient of $u^a w^c v^{-b}$. Denote this coefficient by $\Omega'(a,b,c)$. Finally we can consider $|v|,|w| \ll |u| \ll 1$, with coefficients $\Omega''(a,b,c)$. The coefficients $\Omega(a,b,c)$, $\Omega'(a,b,c)$ and $\Omega''(a,b,c)$ are almost the same, but not quite. 
In the table below the coefficients are listed for $(a,b)=(5,9)$: \begin{center} \tiny \begin{tabular}{|r|rrrrrrrrrrrrrrrrr|} \hline c&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 \\ \hline $\Omega(5,9,c)$&45& 40& 35& 30& 25& 20& 15891& --144638& 569633& --1210896& 1451475& --925650& 245025& 0& 0& 0& 0 \\ $\Omega'(5,9,c)$&0& 0& 0& 0& 0& 0& 15876& --144648& 569628& --1210896& 1451480& --925640& 245040& 20& 25& 30& 35 \\ $\Omega''(5,9,c)$& 0& 4& 8& 12& 16& 20& 15900& --144620& 569660& --1210860& 1451520& --925596& 245088& 72& 81& 90& 99 \\ $\Omega'-\Omega$& --45& --40& --35& --30& --25& --20& --15& --10& --5& 0& 5& 10& 15& 20& 25& 30& 35 \\ $\Omega''-\Omega$&--45& --36& --27& --18& --9& 0& 9& 18& 27& 36& 45& 54& 63& 72& 81& 90& 99 \\ \hline \end{tabular} \end{center} This exhibits a few notable features. First, the coefficients become exponentially large in some range: roughly when $(a,b,c)$ satisfy the triangle inequalities. Second, although in the exponential regime, $\Omega$, $\Omega'$, $\Omega''$ are practically the same, there is a persistent difference throughout. In fact the difference always equals \begin{align} \Omega'(a,b,c) - \Omega(a,b,c) = a(c-b) \, , \qquad \Omega''(a,b,c) - \Omega(a,b,c) = b(c-a) \, . \end{align} Third, $a+b \leq c$ implies $\Omega(a,b,c)=0$ and $c+a \leq b$ implies $\Omega'(a,b,c)=0$. Similarly, though not visible in this example, $b+c \leq a$ implies $\Omega''(a,b,c)=0$. We see that this exactly matches the non-triangle degeneracies and wall crossing relations discussed in our analysis of 3-center configurations, if and only if we make the chamber/expansion identifications proposed in section \ref{sec:conjecture}. Here, we have just discussed some simple examples. More general proofs and generalizations of our checks to higher order terms in the $q$-expansion can be carried out, and will appear in \cite{Zimo}. One can also obtain the large $a$, $b$, $c$ asymptotics of $\Omega(a,b,c)$, similar to (\ref{threenodequiverasymptotics}), which we reproduce here (see \cite{Zimo} for further details): \begin{equation} \label{asymptoticformula} \Omega(\alpha N,\beta N,\gamma N) \sim \left(\frac{(\alpha+\beta+\gamma)^{\alpha+\beta+\gamma}}{(\alpha+\beta-\gamma)^{\alpha+\beta-\gamma}(\alpha+\gamma-\beta)^{\alpha+\gamma-\beta}(\beta+\gamma-\alpha)^{\beta+\gamma-\alpha}}\right)^N. \end{equation} This asymptotic formula is valid provided $\alpha$, $\beta$ and $\gamma$ satisfy the triangle inequalities $\alpha+\beta > \gamma, \beta + \gamma > \alpha, \gamma+ \alpha> \beta$. The degeneracy is exponentially large if these inequalities are satisfied and $N\gg 1$. Physical consistency requires that the degeneracy can only become exponentially large if scaling solutions exist. According to (\ref{triangleineq}), scaling solutions exist provided $(a,b,c)=N(\alpha,\beta,\gamma)$ satisfy the linear triangle inequalities, in striking agreement with what we find here from the behavior of the coefficients of $Z$. The expression (\ref{asymptoticformula}) is reminiscent of (\ref{threenodequiverasymptotics}), but is nevertheless different. In particular when $a=b=c=N$, the above becomes $\Omega(N,N,N) \sim 3^{3N}$, in contrast to $\Omega(N,N,N) \sim 2^{3N}$ one obtains from (\ref{threenodequiverasymptotics}). Still, the similarity is rather suggestive. Presumably the difference is due to differences in the microscopic model describing the bound states, in particular the superpotential. It would be very interesting to show this explicitly. 
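As an aside, the factorized form of $P(u,v,w)$ entering the leading-order function $Z(u,v,w)$ above is easy to verify symbolically; a minimal sympy sketch (our own check):
\begin{verbatim}
# Sketch: verify the factorization of P(u,v,w) by substituting
# u = p^2, v = q^2, w = r^2 and comparing polynomials in (p, q, r).
import sympy as sp

p, q, r = sp.symbols('p q r')
u, v, w = p**2, q**2, r**2
P = (u**2 + v**2 + w**2 - 2*(u*v + v*w + w*u)
     - 2*u*v*w*(u + v + w - 4) + u**2 * v**2 * w**2)
s = p*q*r   # plays the role of sqrt(u v w)
prod = ((p + q + r + s) * (p - q - r + s)
        * (-p + q - r + s) * (-p - q + r + s))
print(sp.expand(P - prod) == 0)   # True
\end{verbatim}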
\begin{figure}[h] \begin{center} \includegraphics[width=0.4\textwidth]{entropy-comparison.pdf} \end{center} \caption{Single-center black hole entropy $S(\alpha N,\beta N,N)$ (orange) is always larger than the 3-center entropy $\log \Omega(\alpha N,\beta N,N)$ (blue).} \label{entropy-comparison} \end{figure} Either way, according to our conjecture, the asymptotic formula (\ref{asymptoticformula}) should give the degeneracy of the Higgs branch of 3-particle scaling solutions in our setup, analogous to the quiver model asymptotics (\ref{threenodequiverasymptotics}). This degeneracy should correspond to a fraction of the total degeneracy of charge $\Gamma$. It can only be a fraction because there are many other ways of splitting up $\Gamma$, and the total degeneracy should sum over all of those. Hence the predicted $\log \Omega(a,b,c)$ should be bounded above by the single-center horizon area entropy. Another argument is the holographic principle: since scaling solutions can be squeezed into a region with surface area equal to the single-center black hole horizon area $A$, their entropy must be bounded above by $A/4G = S_{\rm BH}$: \begin{align} \log \Omega(a,b,c) \leq S_{\rm BH} = \pi \sqrt{\Delta(a,b,c)} \, , \end{align} with $\Delta(a,b,c)$ given by (\ref{Deltaexplicit}) with $r=s=t=-1$ and $a,b,c \gg 1$ satisfying the triangle inequalities. For example for $a=b=c=N$, this translates to the requirement $(3 \log 3) N \leq \pi \sqrt{3} N$, which is indeed satisfied as $3 \log 3 \approx 3.3$ and $\pi \sqrt{3} \approx 5.4$. Happily, the inequality persists for all values $(a,b,c)$, as can be seen from fig.\ \ref{entropy-comparison} combined with some simple considerations of asymptotics. \section{Higher order test and the appearance of $1/\Phi_{10}$} \label{sec:higher-order} In section \ref{sec:leading-q-examples} we gave examples illustrating that expansion ambiguities reproduce precisely the expected wall crossing formulae for 3-center configurations in the $(r,s,t)=(-1,-1,-1)$ sector. However this sector is quite insensitive to the detailed geometry of $K3 \times T^2$, because the constituents in this case are rigid and do not probe the internal space geometry. In particular, the same wall crossing formulae would be obtained for the simple 3-node quiver model with cubic superpotential (which however does not reproduce the correct degeneracies in the scaling regime). So this test, although it passes a number of nontrivial self-consistency checks, is still fairly weak in terms of singling out specifically $K3 \times T^2$ as the relevant compactification manifold. To unambiguously see the fingerprints of $K3 \times T^2$ in wall crossing formulae, we need to consider larger values of $(r,s,t)$ as well as values of $(a,b,c)$ such that {\it two}-center black hole bound states may form and play a role in wall crossing, not just three-center ones. Indeed for larger values of $(r,s,t)$, the individual $\Omega_i$ appearing in 3-center degeneracy formulae such as (\ref{Omfactthree}) are counted by the 1/2-BPS partition function $1/\Delta$ for $K3 \times T^2$ compactifications, and the $\Omega_{i+j}$ appearing in 2-center degeneracy formulae such as (\ref{Omfacttwo}) are counted by the 1/4-BPS partition function $1/\Phi_{10}$. Although $1/\Delta$ appears in many contexts in string theory, $1/\Phi_{10}$ is pretty much a smoking gun for $K3 \times T^2$ specifically. 
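Before turning to the expansions, we record that the inequality underlying fig.\ \ref{entropy-comparison} is elementary to scan numerically; a minimal Python sketch (our own, comparing the leading large-$N$ rates at $\gamma=1$, with the subleading $r,s,t$-dependent terms of (\ref{Deltaexplicit}) dropped):
\begin{verbatim}
# Sketch: check log Omega(aN, bN, N) <= pi*sqrt(Delta) at leading
# order in N, i.e. compare growth rates per unit N on a grid.
import math

def xlogx(x):
    return x * math.log(x) if x > 0 else 0.0

def log_rate(al, be, ga=1.0):     # log Omega / N from the asymptotics
    return (xlogx(al + be + ga) - xlogx(al + be - ga)
            - xlogx(al + ga - be) - xlogx(be + ga - al))

def bh_rate(al, be, ga=1.0):      # pi*sqrt(Delta)/N at leading order
    quartic = 2*(al*be + be*ga + ga*al) - (al**2 + be**2 + ga**2)
    return math.pi * math.sqrt(quartic)

gap = min(bh_rate(a, b) - log_rate(a, b)
          for a in (0.2 + 0.05*i for i in range(60))
          for b in (0.2 + 0.05*j for j in range(60))
          if a + b > 1 and b + 1 > a and a + 1 > b)  # triangle ineqs
print(gap > 0, gap)   # the gap stays positive on the grid
\end{verbatim}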
Recall that \begin{align} \label{oneoverDelta} \frac{1}{\Delta(x)} = \frac{1}{x} +24 +324 \, x+3200 \, x^2 + 25650 \, x^3 + \cdots \end{align} and {\begin{align} \frac{1}{\Phi_{10}(y,z;v)} &=\frac{1}{y z}(v+2 \, v^2+\cdots)+\left(\frac{1}{y}+\frac{1}{z}\right)(2 + 24 \, v + 48 \, v^2+\cdots)\nonumber\\ &+\left(\frac{z}{y}+\frac{y}{z}\right)\left( \frac{3}{v} +48+ 327 \, v + 648\, v^2+\cdots\right)+\left(\frac{24}{v} + 600\, v + 1152 \, v^2+\cdots\right)\nonumber\\ &+(y+z)\left( \frac{48}{v^2} + \frac{600}{v} -648 + 8376\, v + 15600\, v^2+\cdots\right)\nonumber\\ &+y z\left( \frac{327}{v^3} - \frac{648}{v^2} + \frac{25353}{v} -50064+ 130329 \, v + 209304\, v^2+\cdots\right)+\cdots \end{align}} Below we focus on a number of concrete examples, evident in these expansions, which strikingly confirm the $K3 \times T^2$ expectations, including the appearance of both $1/\Delta$ and $1/\Phi_{10}$: \begin{center} \tiny \captionof{table}{Degeneracies in the $(-1,-1,0)$ sector} \begin{tabular}{|r|rrrrrrrrrrrrr|} \hline c&1&2&3&4&5&6&7&8&9&10&11&12&13 \\ \hline $\Omega(5,7,c)$&720& 600& --1870& 74760& --631504& 2698992& --6550056& 9308496& --7467552& 2968704& --387000& 0& 0 \\ $\Omega'(5,7,c)$& 0&0&--2350&74400&--631744& 2698872& --6550056&9308616&--7467312&2969064&--386520& 600& 720 \\ $\Omega''(5,7,c)$&48&96&--2206&74592&--631504&2699160&--6549720&9309000&--7466880&2969544& --385992&1176&1344 \\ $\Omega'-\Omega$& --720& --600& --480& --360& --240& --120& 0& 120& 240& 360 &480& 600& 720 \\ $\Omega''-\Omega$&--672& --504& --336& --168& 0& 168& 336& 504& 672& 840& 1008& 1176& 1344 \\ \hline \end{tabular} \end{center} So in this sector, the relations of these degeneracies are \begin{equation} \Omega'(a,b,c)-\Omega(a,b,c)=24 \times a(c-b) \qquad \Omega''(a,b,c)-\Omega(a,b,c)=24 \times b(c-a) \end{equation} \begin{center} \tiny \captionof{table}{Degeneracies in the $(-1,-1,1)$ sector} \begin{tabular}{|r|rrrrrrrrrr|} \hline c&2&3&4&5&6&7&8&9&10&11 \\ \hline $\Omega(4,6,c)$&5552&--20106&399488& --1967682& 4729920& --5762306& 3336160& --675690& 37824& 0 \\ $\Omega'(4,6,c)$& 368&--23994&396896&--1968978&4729920&--5761010& 3338752&--671802&43008&6480 \\ $\Omega''(4,6,c)$&1664&--22050&399488&--1965738&4733808&--5756474&3343936&--665970&49488& 13608 \\ $\Omega'-\Omega$& --5184& --3888& --2592& --1296& 0& 1296& 2592& 3888& 5184& 6480\\ $\Omega''-\Omega$&--3888& --1944& 0& 1944& 3888& 5832& 7776& 9720& 11664& 13608 \\ \hline \end{tabular} \end{center} In this sector, the relations of these degeneracies are \begin{equation} \Omega'(a,b,c)-\Omega(a,b,c)=324 \times a(c-b) \qquad \Omega''(a,b,c)-\Omega(a,b,c)=324 \times b(c-a) \end{equation} \begin{center} \tiny \captionof{table}{Degeneracies in the $(-1,0,1)$ sector} \begin{tabular}{|r|rrrrrrrr|} \hline c&3&4&5&6&7&8&9&10\\ \hline $\Omega(3,5,c)$&--407472&4448168&--13869776& 19957200&--12621568& 3200704& --200250& 0 \\ $\Omega'(3,5,c)$& --454128&4424840&--13869776&19980528&--12574912&3270688& --106938&116640 \\ $\Omega''(3,5,c)$&--407472&4487048&--13792016&20073840&--12466048&3395104&33030&272160 \\ $\Omega'-\Omega$& --46656& --23328&0&23328& 46656& 69984& 93312& 116640\\ $\Omega''-\Omega$&0&38880& 77760& 116640& 155520& 194400& 233280& 272160 \\ \hline \end{tabular} \end{center} In this sector, the relations of these degeneracies are \begin{equation} \Omega'(a,b,c)-\Omega(a,b,c)=24\times 324 \times a(c-b) \qquad \Omega''(a,b,c)-\Omega(a,b,c)=24\times 324 \times b(c-a) \end{equation} Note that in the second and third tables we did not include some small values of 
$c$, because wall crossing at these charges is captured by $\Phi^{-1}_{10}\times\Delta^{-1}$ instead of $\Delta^{-1}\times\Delta^{-1}\times\Delta^{-1}$: in this regime, the bound state decay products include a 1/4-BPS black hole. Here are some numerical examples. Consider $(a,b,c)=(2,5,1)$ in the $(-1,0,1)$ sector, where $\Omega'(2,5,1)=0$ and $\Omega''(2,5,1)=23544$. Note that an integer factorization of 23544 is \begin{equation} 23544=3\times 24\times 327 \end{equation} where $3=b-a$, $24$ is the coefficient of $x^0$ in the Taylor expansion of $\Delta(x)^{-1}$ and $327$ is the coefficient of $w^1$ in the $(-1,1)$ sector of $\Phi_{10}^{-1}(y,z;w)$. For $(a,b,c)=(3,4,1)$ in the $(-1,1,1)$ sector, we have \begin{equation} \Omega''(3,4,1)-\Omega'(3,4,1)=105948=1\times 324\times 327 \end{equation} where $1=b-a$, $324$ is the coefficient of $x^1$ in the Taylor expansion of $\Delta(x)^{-1}$. Both of these examples show the following (schematic) pattern of wall-crossing \begin{equation}\label{wc} \Omega''-\Omega'=(b-a)\Omega_{1+3}\Omega_2 \end{equation} where $\Omega_{1+3}$ means the number of states in the $(1,3)$ subsystem and $\Omega_2$ is the appropriate expansion coefficient in (\ref{oneoverDelta}). Thus equation (\ref{wc}) exactly matches the structure of the 2-center wall crossing formula (\ref{Omfacttwo}), with $\Omega_{1+3}$ the correct degeneracy for the charge $\Gamma_1+\Gamma_3$ on $K3 \times T^2$! The following two tables are a short summary of more numerical results in the $(r,s,t)=(-1,1,1)$ sector. \begin{center} \captionof{table}{$(-1,1,1)$ sector wall-crossing from $(b-a)\Omega_2\Omega_{1+3}$} \begin{tabular}{|r|ccccc|} \hline &$\Omega''-\Omega'$& $b-a $ & $\Omega_2$ &$\Omega_{1+3}$& $(b-a)\times \Omega_2 \times \Omega_{1+3}$\\ \hline (2,4,1)&211896 & 2 & 324& 327&211896 \\ (3,4,1)&105948 & 1 & 324& 327&105948\\ (2,5,1)&317844 & 3 & 324& 327&317844\\ (3,5,1) &211896 & 2 & 324& 327&211896\\ \hline \end{tabular} \end{center} \begin{center} \captionof{table}{$(-1,1,1)$ sector wall-crossing from $(c-a)\Omega_1\Omega_{2+3}$} \begin{tabular}{|r|ccccc|} \hline &$\Omega''-\Omega$& $c-a $ & $\Omega_1$ &$\Omega_{2+3}$& $(c-a)\times \Omega_1 \times \Omega_{2+3}$\\ \hline (2,1,4)&260658 & 2 & 1& 130329&260658 \\ (2,2,4)&418608 & 2 & 1& 209304&418608\\ \hline \end{tabular} \end{center} where $130329$ and $209304$ are the coefficients of $yzv$ and $yzv^2$ in the expansion of $\Phi_{10}^{-1}$. \section{Discussion} \label{sec:discussion} In this paper we have put forward a conjecture for a precise counting function governing the three-center BPS solutions in type II string compactification on $K3 \times T^2$. Support for our conjecture comes from correct behavior under wall-crossing, and from the appearance of the known counting functions governing single and two-center solutions ($1/\Delta$ and $1/\Phi_{10}$) in appropriate degenerate limits. \medskip \noindent The paper raises a number of questions: \medskip \noindent $\bullet$ The objects we have described depend on a partition of the total charge $\Gamma$ into three 1/2-BPS charges $\Gamma_1,\Gamma_2,\Gamma_3$. The invariants $(r,s,t;a,b,c)$ defined in (\ref{rstabcdef}) depend on this partition. Thus the coefficients $\Omega(r,s,t;a,b,c)$ of $1/\chi_9$ do not count all BPS states with a given total charge $\Gamma$, but rather count a partition-dependent subset thereof. 
Our results suggest this subset is captured by a microscopic model characterized by $(r,s,t;a,b,c)$, akin to the simple cyclic 3-node quiver quantum mechanics models studied in \cite{Denef:2007vg,Bena:2012hf,Lee:2012sc,Manschot:2013sya}. More specifically this should be a supersymmetric quantum mechanics model describing the D-brane systems of section \ref{sec:Dbranesetup}, in the spirit of for example the explicit models of \cite{Chowdhury:2014yca} describing D-branes on $T^6$. What is the precise microscopic model appropriate for our setup? More generally, one could ask if there exists a model-independent way of characterizing $\Omega(\Gamma_1,\Gamma_2,\Gamma_3)$. A natural physical object depending on charge partitions is the S-matrix; perhaps this may provide such a characterization along the lines of \cite{Harvey:1996gc}. \medskip \noindent $\bullet$ We have loosely interpreted the coefficients of $1/\chi_9$ as BPS degeneracies, but did not provide a definition in terms of a protected index. The standard index counting 1/4-BPS states in ${\cal N}=4$ theories at generic points in the moduli space is the helicity supertrace $B_6$. However this cannot be the appropriate index counting the states of interest to us: with the exception of two-center bound states of 1/2-BPS black holes, multi-center bound states are BPS only on positive-codimension subspaces of the moduli space, and have too many fermionic zero modes to contribute to $B_6$ \cite{Sen:2008ht,Dabholkar:2009dq,Sen:2009md,Ashoke,Kachru:2017yda}. Does there exist an index interpretation of the coefficients $\Omega(r,s,t;a,b,c)$ on suitable subspaces of the moduli space, perhaps along the lines of \cite{Sen:2009md}? \medskip \noindent $\bullet$ Our analysis of wall crossing and its relation to contour choices did not reach the level of precision and generality of the prescriptions in \cite{Sen:2007vb,Sen:2007pg,Cheng:2007ch,Banerjee:2008yu} for extracting BPS degeneracies from $1/\Phi_{10}$ at a given point in the $T^2 \times K3$ moduli space. What is the analogous prescription for extracting moduli-dependent degeneracies from $1/\chi_9$? \medskip \noindent $\bullet$ Is there a natural geometric way of understanding the origin of the genus-three Riemann surface associated with $1/\chi_9$? A geometric origin of the genus-two Riemann surface associated with $1/\Phi_{10}$ was suggested in \cite{Gaiotto} and further clarified in \cite{Banerjee:2008yu}. Higher genus generalizations of this construction have appeared in counting higher-torsion dyons in ${\cal N}=4$ string theory \cite{Atishtwo,Atishthree} and BPS states in geometrically engineered quantum field theories \cite{Hollowood:2003cv,Braden:2003gv}. These higher-genus Riemann surfaces are non-generic, however, as they are holomorphically embedded in $T^4$, and thus correspond to a three-dimensional subspace of the higher-genus Siegel upper-half space. Understanding the relationship of our results with these constructions should be instructive. \medskip \noindent $\bullet$ The appearance of a degree three Siegel form counting three-center bound states suggests that there should be a higher genus generalization, with a degree four Siegel form counting four-center bound states and so forth. 
Indeed the number of duality invariants of a $g$-center configuration equals $g+{g \choose 2} = \frac{1}{2} g(g+1)$, which equals the dimension of the genus-$g$ Siegel upper-half space.\footnote{Note that this is strictly larger than the dimension $3g-3$ of the complex structure moduli space of genus-$g$ Riemann surfaces when $g > 3$, so it is important that the form extends over the full Siegel upper-half space.} At genus four we have a precise candidate, involving the Schottky form $J_8$ \cite{Morozov}. Can one make a uniform story capturing the physics at all genera? A natural conjecture is that it involves the chiral genus $g$ bosonic string partition function. \section*{Acknowledgements} We thank M.\ Douglas, C.\ Vafa and M.\ Zimet for helpful and stimulating discussions. S.K.\ and A.T.\ are grateful to the Aspen Center for Physics for hospitality while this work was in progress. The research of F.D.\ and Z.S.\ is supported in part by the Department of Energy under contract DOE DE-SC0011941. The research of S.K.\ is supported in part by a Simons Investigator Award and by the National Science Foundation under grant number PHY-1720397. The research of A.T.\ is supported by the National Science Foundation under NSF MSPRF grant number 1705008.
\section*{Acknowledgements} \vspace{-2mm} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$^{\circ}$ 725594 - time-data) and was supported by the Swiss National Science Foundation (SNSF) under grant number 200021\_178865 / 1. \bibliographystyle{abbrv} \section{Convergence Rate \label{sec:cvg rate}} This section presents the total iteration complexity of Algorithm~\ref{Algo:2} for finding first and second-order stationary points of problem \eqref{eq:minmax}. All the proofs are deferred to Appendix~\ref{sec:theory}. Theorem~\ref{thm:main} characterizes the convergence rate of Algorithm~\ref{Algo:2} for finding stationary points in the number of outer iterations. \begin{theorem}\textbf{\emph{(convergence rate)}} \label{thm:main} For integers $2 \le k_0\le k_1$, consider the interval $K=[k_0:k_1]$, and let $\{x_k\}_{k\in K}$ be the output sequence of Algorithm~\ref{Algo:2} on the interval $K$.\footnote{The choice of $k_1 = \infty$ is valid here too.} Let also $\rho:=\sup_{k\in K} \|x_k\|$.\footnote{If necessary, to ensure that $\rho<\infty$, one can add a small factor of $\|x\|^2$ to $\mathcal{L}_{\b}$ in \eqref{eq:Lagrangian}. Then it is easy to verify that the iterates of Algorithm \ref{Algo:2} remain bounded, provided that the initial penalty weight $\beta_0$ is large enough, $\sup_x \|\nabla f(x)\|/\|x\|< \infty$, $\sup_x \|A(x)\| < \infty$, and $\sup_x \|D A(x)\| <\infty$. } Suppose that $f$ and $A$ satisfy (\ref{eq:smoothness basic}) and let \begin{align} \lambda'_f = \max_{\|x\|\le \rho} \|\nabla f(x)\|,\qquad \lambda'_A = \max_{\|x\| \le \rho} \|DA(x)\|, \label{eq:defn_restricted_lipsichtz} \end{align} be the (restricted) Lipschitz constants of $f$ and $A$, respectively. With $\nu>0$, assume that \begin{align} \nu \|A(x_k)\| & \le \operatorname{dist}\left( -DA(x_k)^\top A(x_k) , \frac{\partial g(x_k)}{ \b_{k-1}} \right), \label{eq:regularity} \end{align} for every $k\in K$. We consider two cases: \begin{itemize}[leftmargin=*] \item If a first-order solver is used in Step~2, then $x_k$ is an $(\epsilon_{k,f},\b_k)$ first-order stationary point of (\ref{eq:minmax}) with \begin{align} \epsilon_{k,f} & = \frac{1}{\beta_{k-1}} \left(\frac{2(\lambda'_f+\lambda'_A y_{\max}) (1+\lambda_A' \sigma_k)}{\nu}+1\right) =: \frac{Q(f,g,A,\sigma_1)}{\beta_{k-1}}, \label{eq:stat_prec_first} \end{align} for every $k\in K$, where $y_{\max}(x_1,y_0,\sigma_1)$, which bounds the dual sequence, is specified in (\ref{eq:dual growth}). \item If a second-order solver is used in Step~2, then $x_k$ is an $(\epsilon_{k,f}, \epsilon_{k,s},\b_k)$ second-order stationary point of~(\ref{eq:minmax}) with $\epsilon_{k,f}$ specified above and with \begin{align} \epsilon_{k,s} &= \epsilon_{k-1} + \sigma_k \sqrt{m} \lambda_A \frac{ 2\lambda'_f +2 \lambda'_A y_{\max} }{\nu \b_{k-1}} =: \frac{Q'(f,g,A,\sigma_1)}{\beta_{k-1}}, \end{align} where the last definition uses $\epsilon_{k-1}=1/\b_{k-2}$ from Step 1 and the boundedness of the ratio $\b_{k-1}/\b_{k-2}$, as holds for the geometric penalty schedule of Corollaries \ref{cor:first} and \ref{cor:second} below. \end{itemize} \end{theorem} Theorem~\ref{thm:main} states that Algorithm~\ref{Algo:2} converges to a (first- or second-order) stationary point of \eqref{eq:minmax} at the rate of $1/\b_k$, further specified in Corollary \ref{cor:first} and Corollary \ref{cor:second}. A few remarks are in order about Theorem \ref{thm:main}. 
\paragraph{Regularity.} The key geometric condition in Theorem~\ref{thm:main} is \eqref{eq:regularity} which, broadly speaking, ensures that the primal updates of Algorithm \ref{Algo:2} reduce the feasibility gap as the penalty weight $\b_k$ grows. We will verify this condition for several examples in Section \ref{sec:experiments}. This condition in \eqref{eq:regularity} is closely related to those in the existing literature. In the special case where $g=0$ in~\eqref{prob:01}, it is easy to verify that \eqref{eq:regularity} reduces to the Polyak-Lojasiewicz (PL) condition for minimizing $\|A(x)\|^2$~\cite{karimi2016linear}. PL condition itself is a special case of Kurdyka-Lojasiewicz with $\theta = 1/2$, see \cite[Definition 1.1]{xu2017globally}. When $g=0$, it is also easy to see that \eqref{eq:regularity} is weaker than the Mangasarian-Fromovitz (MF) condition in nonlinear optimization \cite[Assumption 1]{bolte2018nonconvex}. {Moreover, {when $g$ is the indicator on a convex set,} \eqref{eq:regularity} is a consequence of the \textit{basic constraint qualification} in \cite{rockafellar1993lagrange}, which itself generalizes the MF condition to the case when $g$ is an indicator function of a convex set.} We may think of \eqref{eq:regularity} as a local condition, which should hold within a neighborhood of the constraint set $\{x:A(x)=0\}$ rather than everywhere in $\mathbb{R}^d$. There is a constant complexity algorithm in \cite{bolte2018nonconvex} to reach this so-called ``information zone'', which supplements Theorem \ref{thm:main}. Lastly, in contrast to most conditions in the nonconvex optimization literature, such as~\cite{flores2012complete}, the condition in~\eqref{eq:regularity} appears to be easier to verify, as we see in the sequel. \paragraph{Penalty method.} A classical algorithm to solve \eqref{prob:01} is the penalty method, which is characterized by the absence of the dual variable ($y=0$) in \eqref{eq:Lagrangian}. Indeed, ALM can be interpreted as an adaptive penalty or smoothing method with a variable center determined by the dual variable. It is worth noting that, with the same proof technique, one can establish the same convergence rate of Theorem \ref{thm:main} for the penalty method. However, while both methods have the same convergence rate in theory, we ignore the uncompetitive penalty method since it is significantly outperformed by iALM in practice. \paragraph{Computational complexity.} Theorem~\ref{thm:main} specifies the number of (outer) iterations that Algorithm~\ref{Algo:2} requires to reach a near-stationary point of problem~\eqref{eq:Lagrangian} with a prescribed precision and, in particular, specifies the number of calls made to the solver in Step~2. In this sense, Theorem~\ref{thm:main} does not fully capture the computational complexity of Algorithm~\ref{Algo:2}, as it does not take into account the computational cost of the solver in Step~2. To better understand the total iteration complexity of Algorithm~\ref{Algo:2}, we consider two scenarios in the following. In the first scenario, we take the solver in Step~2 to be the Accelerated Proximal Gradient Method (APGM), a well-known first-order algorithm~\cite{ghadimi2016accelerated}. In the second scenario, we will use the second-order trust region method developed in~\cite{cartis2012complexity}. We have the following two corollaries showing the total complexity of our algorithm to reach first and second-order stationary points. 
Appendix \ref{sec:opt_cnds} contains the proofs and more detailed discussion for the complexity results. \begin{corollary}[First-order optimality]\label{cor:first} For $b>1$, let $\beta_k =b^k $ for every $k$. If we use APGM from~\cite{ghadimi2016accelerated} for Step~2 of Algorithm~\ref{Algo:2}, the algorithm finds an $(\epsilon_f,\b_k)$ first-order stationary point of~\eqref{eq:minmax} after $T$ calls to the first-order oracle, where \begin{equation} T = \mathcal{O}\left( \frac{Q^3 \rho^2}{\epsilon^{3}}\log_b{\left( \frac{Q}{\epsilon} \right)} \right) = \tilde{\mathcal{O}}\left( \frac{Q^{3} \rho^2}{\epsilon^{3}} \right). \end{equation} \end{corollary} For Algorithm~\ref{Algo:2} to reach a near-stationary point with an accuracy of $\epsilon_f$ in the sense of \eqref{eq:inclu3} and with the lowest computational cost, we therefore need to perform only one iteration of Algorithm~\ref{Algo:2}, with $\b_1$ specified as a function of $\epsilon_f$ by \eqref{eq:stat_prec_first} in Theorem~\ref{thm:main}. In general, however, the constants in \eqref{eq:stat_prec_first} are unknown and this approach is thus not feasible. Instead, the homotopy approach taken by Algorithm~\ref{Algo:2} ensures achieving the desired accuracy by gradually increasing the penalty weight. This homotopy approach increases the computational cost of Algorithm~\ref{Algo:2} only by a factor logarithmic in $\epsilon_f$, as detailed in the proof of Corollary~\ref{cor:first}. \begin{corollary}[Second-order optimality]\label{cor:second} For $b>1$, let $\beta_k =b^k $ for every $k$. We assume that \begin{equation} \mathcal{L}_{\beta}(x_1, y) - \min_{x}\mathcal{L}_{\beta}(x, y) \leq L_{u},\qquad \forall \beta. \end{equation} If we use the trust region method from~\cite{cartis2012complexity} for Step~2 of Algorithm~\ref{Algo:2}, the algorithm finds an $\epsilon$-second-order stationary point of~\eqref{eq:minmax} in $T$ calls to the second-order oracle, where \begin{equation} T = \mathcal{O}\left( \frac{L_u Q'^{5}}{\epsilon^{5}} \log_b{\left( \frac{Q'}{\epsilon} \right)} \right) = \widetilde{\mathcal{O}}\left( \frac{L_u Q'^{5}}{\epsilon^{5}} \right). \end{equation} \end{corollary} \paragraph{Remark.} These complexity results are for first- and second-order stationarity with respect to~\eqref{eq:Lagrangian}. We note that these complexities match~\cite{cartis2018optimality} and~\cite{birgin2016evaluation}. However, the stationarity criteria and the definition of dual variable in these papers differ from ours. We include more discussion on this in the Appendix. \section{Conclusions} In this work, we have proposed and analyzed an inexact augmented Lagrangian method for solving nonconvex optimization problems with nonlinear constraints. We prove convergence to the first and second order stationary points of the augmented Lagrangian function, with explicit complexity estimates. Even though the relation of stationary points and global optima is not well-understood in the literature, we find that the algorithm has fast convergence behavior to either global minima or local minima in a wide variety of numerical experiments. \section{Complexity Results}\label{sec:opt_cnds} \subsection{First-Order Optimality \label{sec:first-o-opt}} Let us first consider the case where the solver in Step~2 is the first-order algorithm APGM, described in detail in~\cite{ghadimi2016accelerated}. 
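Before describing APGM itself, it may help to see how the pieces fit together. The following minimal Python sketch (our own hedged illustration, not the authors' reference implementation) mirrors the steps of Algorithm~\ref{Algo:2} as they are used in the proofs: the tolerance $\epsilon_k=1/\b_{k-1}$ of Step 1, the inexact primal solve of Step 2, the clipped dual step size of Step 4, and the dual update of Step 5; \texttt{inner\_solve} is a hypothetical callback for which APGM is one possible choice:
\begin{verbatim}
# Hedged sketch of the inexact ALM outer loop (not the reference code).
# inner_solve(x0, grad_L, eps) is any routine returning an
# eps-stationary point of x -> L_beta(x, y); APGM is one choice.
import numpy as np

def ialm(x0, y0, f_grad, A, DA, inner_solve, sigma1=1.0, b=2.0, K=20):
    x, y = x0.copy(), y0.copy()
    A1_norm = None
    for k in range(1, K + 1):
        beta = b ** k                    # beta_k = b^k
        eps = 1.0 / b ** (k - 1)         # Step 1: eps_k = 1/beta_{k-1}
        def grad_L(z):                   # gradient of the augmented Lagrangian
            return f_grad(z) + DA(z).T @ (y + beta * A(z))
        x = inner_solve(x, grad_L, eps)  # Step 2: inexact primal solve
        Ax = A(x)
        if A1_norm is None:
            A1_norm = np.linalg.norm(Ax) + 1e-12
        # Step 4: clip sigma_k so that sigma_k * ||A(x_k)|| is at most
        # ||A(x_1)|| log(2)^2 / (k log(k+1)^2), as used in the proof.
        cap = A1_norm * np.log(2)**2 / (k * np.log(k + 1)**2)
        sigma = min(sigma1, cap / (np.linalg.norm(Ax) + 1e-12))
        y = y + sigma * Ax               # Step 5: dual update
    return x, y
\end{verbatim}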
At a high level, APGM makes use of $\nabla_x \mathcal{L}_{\beta}(x,y)$ in \eqref{eq:Lagrangian}, the proximal operator $\text{prox}_g$, and the classical Nesterov acceleration~\cite{nesterov1983method} to reach first-order stationarity for the subproblem in~\eqref{e:exac}. Suppose that $g=\delta_\mathcal{X}$ is the indicator function on a bounded convex set $\mathcal{X}\subset \mathbb{R}^d$ and let \begin{align} \rho= \max_{x\in \mathcal{X}}\|x\|, \end{align} be the radius of a ball centered at the origin that includes $\mathcal{X}$. Then, adapting the results in~\cite{ghadimi2016accelerated} to our setup, APGM reaches $x_{k}$ in Step 2 of Algorithm~\ref{Algo:2} after \begin{equation} \mathcal{O}\left ( \frac{\lambda_{\beta_{k}}^2 \rho^{2} }{\epsilon_{k+1}} \right) \label{eq:iter_1storder} \end{equation} (inner) iterations, where $\lambda_{\beta_{k}}$ denotes the Lipschitz constant of $\nabla_x{\mathcal{L}_{\beta_{k}}(x, y)}$, bounded in~\eqref{eq:smoothness of Lagrangian}. For the clarity of the presentation, we have used a looser bound in \eqref{eq:iter_1storder} compared to~\cite{ghadimi2016accelerated}. Using \eqref{eq:iter_1storder}, we derive the following corollary, describing the total iteration complexity of Algorithm~\ref{Algo:2} in terms of the number of calls made to the first-order oracle in APGM. \begin{corollary}\label{cor:first_supp} For $b>1$, let $\beta_k =b^k $ for every $k$. If we use APGM from~\cite{ghadimi2016accelerated} for Step~2 of Algorithm~\ref{Algo:2}, the algorithm finds an $(\epsilon_f,\b_k)$ first-order stationary point, after $T$ calls to the first-order oracle, where \begin{equation} T = \mathcal{O}\left( \frac{Q^3 \rho^2}{\epsilon^{3}}\log_b{\left( \frac{Q}{\epsilon} \right)} \right) = \tilde{\mathcal{O}}\left( \frac{Q^{3} \rho^2}{\epsilon^{3}} \right). \end{equation} \end{corollary} \begin{proof} Let $K$ denote the number of (outer) iterations of Algorithm~\ref{Algo:2} and let $\epsilon_{f}$ denote the desired accuracy of Algorithm~\ref{Algo:2}, see~(\ref{eq:inclu3}). Recalling Theorem~\ref{thm:main}, we can then write that \begin{equation} \epsilon_{f} = \frac{Q}{\b_{K}}, \label{eq:acc_to_b} \end{equation} or, equivalently, $\b_{K} = Q/\epsilon_{f}$. We now count the number of total (inner) iterations $T$ of Algorithm~\ref{Algo:2} to reach the accuracy $\epsilon_{f}$. From \eqref{eq:smoothness of Lagrangian} and for sufficiently large $k$, recall that $\lambda_{\b_k}\le \lambda'' \b_k$ is the smoothness parameter of the augmented Lagrangian. Then, from \eqref{eq:iter_1storder} and by summing over the outer iterations, we bound the total number of (inner) iterations of Algorithm~\ref{Algo:2} as \begin{align}\label{eq: tk_bound} T &= \sum_{k=1}^K\mathcal{O}\left ( \frac{\lambda_{\beta_{k-1}}^2 \rho^2 }{\epsilon_k} \right) \nonumber\\ & = \sum_{k=1}^K\mathcal{O}\left (\beta_{k-1}^3 \rho^2 \right) \qquad \text{(Step 1 of Algorithm \ref{Algo:2})} \nonumber\\ & \leq \mathcal{O} \left(K\beta_{K-1}^3 \rho^2 \right) \qquad \left( \{\b_k\}_k \text{ is increasing} \right) \nonumber\\ & \le \mathcal{O}\left( \frac{K Q^{{3}} \rho^2}{\epsilon_{f}^{{3}}} \right). \qquad \text{(see \eqref{eq:acc_to_b})} \end{align} In addition, if we specify $\beta_k=b^k$ for all $k$, we can further refine $T$. Indeed, \begin{equation} \beta_K = b^K~~ \Longrightarrow~~ K = \log_b \left( \frac{Q}{\epsilon_f} \right), \end{equation} which, after substituting into~\eqref{eq: tk_bound} gives the final bound in Corollary~\ref{cor:first}. 
\end{proof} \subsection{Second-Order Optimality \label{sec:second-o-opt}} Let us now consider the second-order optimality case where the solver in Step~2 is the trust region method developed in~\cite{cartis2012complexity}. The trust region method minimizes a quadratic approximation of the function within a dynamically updated trust-region radius. The second-order trust region method that we consider in this section makes use of the Hessian (or an approximation of the Hessian) of the augmented Lagrangian in addition to first-order oracles. As shown in~\cite{nouiehed2018convergence}, finding approximate second-order stationary points of convex-constrained problems is in general NP-hard. For this reason, we focus in this section on the special case of~\eqref{prob:01} with $g=0$. Let us compute the total computational complexity of Algorithm~\ref{Algo:2} with the trust region method in Step~2, in terms of the number of calls made to the second-order oracle. By adapting the result in~\cite{cartis2012complexity} to our setup, we find that the number of (inner) iterations required in Step~2 of Algorithm~\ref{Algo:2} to produce $x_{k+1}$ is \begin{equation} \mathcal{O}\left ( \frac{\lambda_{\beta_{k}, H}^2 (\mathcal{L}_{\beta_{k}}(x_1, y) - \min_{x}\mathcal{L}_{\beta_k}(x, y))}{\epsilon_k^3} \right), \label{eq:sec_inn_comp} \end{equation} where $\lambda_{\beta, H}$ is the Lipschitz constant of the Hessian of the augmented Lagrangian, which is of the order of $\beta$, as can be proven similarly to Lemma~\ref{lem:smoothness}, and $x_1$ is the initial iterate of the given outer loop. In~\cite{cartis2012complexity}, the term $\mathcal{L}_{\beta}(x_1, y) - \min_{x}\mathcal{L}_{\beta}(x, y)$ is bounded by a constant independent of $\epsilon$. We assume a uniform bound for this quantity for every $ \beta_k$, instead of for one value of $\beta_k$ as in~\cite{cartis2012complexity}. Using \eqref{eq:sec_inn_comp} and Theorem~\ref{thm:main}, we arrive at the following: \begin{corollary}\label{cor:second_supp} For $b>1$, let $\beta_k =b^k $ for every $k$. We assume that \begin{equation} \mathcal{L}_{\beta}(x_1, y) - \min_{x}\mathcal{L}_{\beta}(x, y) \leq L_{u},\qquad \forall \beta. \end{equation} If we use the trust region method from~\cite{cartis2012complexity} for Step~2 of Algorithm~\ref{Algo:2}, the algorithm finds an $\epsilon$-second-order stationary point of~(\ref{prob:01}) in $T$ calls to the second-order oracle where \begin{equation} T = \mathcal{O}\left( \frac{L_u Q'^{5}}{\epsilon^{5}} \log_b{\left( \frac{Q'}{\epsilon} \right)} \right) = \widetilde{\mathcal{O}}\left( \frac{L_u Q'^{5}}{\epsilon^{5}} \right). \end{equation} \end{corollary} Before closing this section, we note that the remark after Corollary~\ref{cor:first} applies here as well. \subsection{Approximate optimality of \eqref{prob:01}.} Corollary \ref{cor:first} establishes the iteration complexity of Algorithm~\ref{Algo:2} to reach approximate first-order stationarity for the equivalent formulation of \eqref{prob:01} presented in \eqref{eq:minmax}. Unlike the exact case, approximate first-order stationarity in \eqref{eq:minmax} does not immediately lend itself to approximate stationarity in \eqref{prob:01}, and the study of approximate stationarity for the penalized problem (special case of our setting with dual variable set to $0$) has a precedent in~\cite{bhojanapalli2018smoothed}. 
However, it is not difficult to verify that, with the more aggressive regime of $\epsilon_{k+1}=1/\b_k^2$ in Step 1 of Algorithm~\ref{Algo:2}, one can achieve $\epsilon$-first-order stationarity for \eqref{prob:01} with the iteration complexity of $T=\tilde{O}(Q^3\rho^2/\epsilon^6)$ in Corollary~\ref{cor:first}. Note that this conversion is by a naive computation using loose bounds rather than using duality arguments for a tight conversion. For a precedent in convex optimization for relating the convergence in augmented Lagrangian to the constrained problem using duality, see~\cite{tran2018smooth}. For the second-order case, it is in general not possible to establish approximate second-order optimality for \eqref{eq:minmax} from Corollary~\ref{cor:second}, with the exception of linear constraints. \section{Proof of Theorem \ref{thm:main} \label{sec:theory}} For every $k\ge2$, recall from (\ref{eq:Lagrangian}) and Step~2 of Algorithm~\ref{Algo:2} that $x_{k}$ satisfies \begin{align} & \operatorname{dist}(-\nabla f(x_k) - DA(x_k)^\top y_{k-1} \nonumber\\ & \qquad - \b_{k-1} DA(x_{k})^\top A(x_k) ,\partial g(x_k) ) \nonumber\\ & = \operatorname{dist}(-\nabla_x \L_{\b_{k-1}} (x_k ,y_{k-1}) ,\partial g(x_k) ) \le \epsilon_{k}. \end{align} With an application of the triangle inequality, it follows that \begin{align} & \operatorname{dist}( -\b_{k-1} DA(x_k)^\top A(x_k) , \partial g(x_k) ) \nonumber\\ & \qquad \le \| \nabla f(x_k )\| + \| DA(x_k)^\top y_{k-1}\| + \epsilon_k, \end{align} which in turn implies that \begin{align} & \operatorname{dist}( -DA(x_k)^\top A(x_k) , \partial g(x_k)/ \b_{k-1} ) \nonumber\\ & \le \frac{ \| \nabla f(x_k )\|}{\b_{k-1} } + \frac{\| DA(x_k)^\top y_{k-1}\|}{\b_{k-1} } + \frac{\epsilon_k}{\b_{k-1} } \nonumber\\ & \le \frac{\lambda'_f+\lambda'_A \|y_{k-1}\|+\epsilon_k}{\b_{k-1}} , \label{eq:before_restriction} \end{align} where $\lambda'_f,\lambda'_A$ were defined in \eqref{eq:defn_restricted_lipsichtz}. We next translate \eqref{eq:before_restriction} into a bound on the feasibility gap $\|A(x_k)\|$. Using the regularity condition \eqref{eq:regularity}, the left-hand side of \eqref{eq:before_restriction} can be bounded below as \begin{align} & \operatorname{dist}( -DA(x_k)^\top A(x_k) , \partial g(x_k)/ \b_{k-1} ) \ge \nu \|A(x_k) \|. \qquad \text{(see (\ref{eq:regularity}))} \label{eq:restrited_pre} \end{align} By substituting \eqref{eq:restrited_pre} back into \eqref{eq:before_restriction}, we find that \begin{align} \|A(x_k)\| \le \frac{ \lambda'_f + \lambda'_A \|y_{k-1}\| + \epsilon_k}{\nu \b_{k-1} }. \label{eq:before_dual_controlled} \end{align} In words, the feasibility gap is directly controlled by the dual sequence $\{y_k\}_k$. We next establish that the dual sequence is bounded. Indeed, for every $k\in K$, note that \begin{align} \|y_k\| & = \| y_0 + \sum_{i=1}^{k} \sigma_i A(x_i) \| \quad \text{(Step 5 of Algorithm \ref{Algo:2})} \nonumber\\ & \le \|y_0\|+ \sum_{i=1}^k \sigma_i \|A(x_i)\| \qquad \text{(triangle inequality)} \nonumber\\ & \le \|y_0\|+ \sum_{i=1}^k \frac{ \|A(x_1)\| \log^2 2 }{ i \log^2(i+1)} \quad \text{(Step 4)} \nonumber\\ & \le \|y_0\|+ c \|A(x_1) \| \log^2 2 =: y_{\max}, \label{eq:dual growth} \end{align} where \begin{align} c \ge \sum_{k=1}^{\infty} \frac{1}{k \log^2 (k+1)}. 
\end{align} Substituting \eqref{eq:dual growth} back into \eqref{eq:before_dual_controlled}, we reach \begin{align} \|A(x_k)\| & \le \frac{ \lambda'_f + \lambda'_A y_{\max} + \epsilon_k}{\nu \b_{k-1} } \nonumber\\ & \le \frac{ 2\lambda'_f +2 \lambda'_A y_{\max} }{\nu \b_{k-1} } , \label{eq:cvg metric part 2} \end{align} where the second line above holds if $k_0$ is large enough, which in turn guarantees that $\epsilon_k=1/\b_{k-1}$ is sufficiently small since $\{\b_k\}_k$ is increasing and unbounded. It remains to control the first term in \eqref{eq:cvg metric}. To that end, after recalling Step 2 of Algorithm~\ref{Algo:2} and applying the triangle inequality, we can write that \begin{align} & \operatorname{dist}( -\nabla_x \L_{\b_{k-1}} (x_k,y_{k}), \partial g(x_{k}) ) \nonumber\\ & \le \operatorname{dist}( -\nabla_x \L_{\b_{k-1}} (x_k,y_{k-1}) , \partial g(x_{k}) ) \nonumber\\ & + \| \nabla_x \L_{\b_{k-1}} (x_k,y_{k})-\nabla_x \L_{\b_{k-1}} (x_k,y_{k-1}) \|. \label{eq:cvg metric part 1 brk down} \end{align} The first term on the right-hand side above is bounded by $\epsilon_k$, by Step 2 of Algorithm~\ref{Algo:2}. For the second term on the right-hand side of \eqref{eq:cvg metric part 1 brk down}, we write that \begin{align} & \| \nabla_x \L_{\b_{k-1}} (x_k,y_{k})-\nabla_x \L_{\b_{k-1}} (x_k,y_{k-1}) \| \nonumber\\ & = \| DA(x_k)^\top (y_k - y_{k-1}) \| \qquad \text{(see \eqref{eq:Lagrangian})} \nonumber\\ & \le \lambda'_A \|y_k- y_{k-1}\| \qquad \text{(see \eqref{eq:defn_restricted_lipsichtz})} \nonumber\\ & = \lambda'_A \sigma_k \|A (x_k) \| \qquad \text{(see Step 5 of Algorithm \ref{Algo:2})} \nonumber\\ & \le \frac{2\lambda'_A \sigma_k }{\nu \b_{k-1} }( \lambda'_f+ \lambda'_Ay_{\max}) . \qquad \text{(see \eqref{eq:cvg metric part 2})} \label{eq:part_1_2} \end{align} By combining (\ref{eq:cvg metric part 1 brk down},\ref{eq:part_1_2}), we find that \begin{align} & \operatorname{dist}( -\nabla_x \L_{\b_{k-1}} (x_k,y_{k}), \partial g(x_{k}) ) \nonumber\\ & \le \frac{2\lambda'_A \sigma_k }{\nu \b_{k-1} }( \lambda'_f+ \lambda'_Ay_{\max}) + \epsilon_k. \label{eq:cvg metric part 1} \end{align} By combining (\ref{eq:cvg metric part 2},\ref{eq:cvg metric part 1}), we find that \begin{align} & \operatorname{dist}( -\nabla_x \L_{\b_{k-1}}(x_k,y_k),\partial g(x_k)) + \| A(x_k)\| \nonumber\\ & \le \left( \frac{2\lambda'_A \sigma_k }{\nu \b_{k-1} }( \lambda'_f+ \lambda'_Ay_{\max}) + \epsilon_k \right) \nonumber\\ & \qquad + 2\left( \frac{ \lambda'_f + \lambda'_A y_{\max}}{\nu \b_{k-1} } \right). \end{align} Applying $\sigma_k\le \sigma_1$, we find that \begin{align} & \operatorname{dist}( -\nabla_x \L_{\b_{k-1}}(x_k,y_k),\partial g(x_k)) + \| A(x_k)\| \nonumber\\ & \le \frac{ 2\lambda'_A\sigma_1 + 2}{ \nu\b_{k-1}} ( \lambda'_f+\lambda'_A y_{\max}) + \epsilon_k. \end{align} For the second part of the theorem, we use Weyl's inequality and Step 5 of Algorithm~\ref{Algo:2} to write \begin{align}\label{eq:sec} \lambda_{\text{min}} &(\nabla_{xx} \mathcal{L}_{\beta_{k-1}}(x_k, y_{k})) \geq \lambda_{\text{min}} (\nabla_{xx} \mathcal{L}_{\beta_{k-1}}(x_k, y_{k-1})) \notag \\&- \sigma_k \| \sum_{i=1}^m A_i(x_k) \nabla^2 A_i(x_k) \|. \end{align} The first term on the right-hand side is lower bounded by $-\epsilon_{k-1}$ by Step 2 of Algorithm~\ref{Algo:2}. 
We next bound the second term on the right-hand side above as \begin{align*} & \sigma_k \| \sum_{i=1}^m A_i(x_k) \nabla^2 A_i(x_k) \| \\ &\le \sigma_k \sqrt{m} \max_{i} \| A_i(x_k)\| \| \nabla^2 A_i(x_k)\| \\ &\le \sigma_k \sqrt{m} \lambda_A \frac{ 2\lambda'_f +2 \lambda'_A y_{\max} }{\nu \b_{k-1} }, \end{align*} where the last inequality is due to~(\ref{eq:smoothness basic},\ref{eq:cvg metric part 2}). Plugging into~\eqref{eq:sec} gives \begin{align*} & \lambda_{\text{min}}(\nabla_{xx} \mathcal{L}_{\beta_{k-1}}(x_k, y_{k-1}))\nonumber\\ & \geq -\epsilon_{k-1} - \sigma_k \sqrt{m} \lambda_A \frac{ 2\lambda'_f +2 \lambda'_A y_{\max} }{\nu \b_{k-1} }, \end{align*} which completes the proof of Theorem \ref{thm:main}. \section{Nonconvex Slater's Condition \label{sec:slater}} The (convex) Slater's condition (CSC) plays a key role in convex optimization as a sufficient condition for strong duality. As a result, CSC guarantees the success of a variety of primal-dual algorithms for convex and constrained programming. As a visual example, in Program~\eqref{prob:01}, when $f=0$, $g=1_C$ is the indicator function of a convex set $C$, and $A$ is an affine operator, CSC removes any pathological cases by ensuring that the affine subspace is not tangent to $C$; see Figure~\ref{fig:convex_slater}. Likewise, to successfully solve Program~\eqref{prob:01} with nonlinear constraints, we require the following condition which, loosely speaking, extends CSC to the nonconvex setting, as clarified shortly afterwards. \begin{definition}\label{defn:nonconvex slater} \textbf{\emph{(Nonconvex Slater's condition)}} For $\rho,\rho',\b>0$ and a subspace $S\subset \mathbb{R}^{d}$, let \begin{align} \nu(g,A,S) := \begin{cases} \underset{v,x}{\min} \, \frac{\left\| \left( \operatorname{id} - P_S P_{\partial g(x)/\b} \right) ( DA(x)^\top v) \right\|}{\|v\|} \\ \|v\|\le \rho\\ \|x\|\le \rho', \end{cases} \label{eq:new slater defn} \end{align} where $\operatorname{id}$ is the identity operator, $P_{\partial g(x)/\b}$ projects onto the set $\partial g(x)/\b$, and $DA(x)$ is the Jacobian of $A$. We say that Program (\ref{prob:01}) satisfies the nonconvex Slater's condition if $\nu(g,A,S) >0$. \end{definition} A few remarks about the nonconvex Slater's condition (NSC) are in order. \paragraph{Jacobian $DA$.} As we will see later, $DA(x)^\top \overset{\operatorname{QR}}{=} Q(x) R(x)$ in NSC might be replaced with its orthonormal basis, namely, $Q(x)$. For simplicity, we will avoid this minor change and instead, whenever needed, assume that $DA(x)$ is nonsingular; otherwise a simple projection can remove any redundancy from $A(x)=0$ in Program~\eqref{prob:01}. \paragraph{Subspace $S$.} The choice of the subspace $S$ helps broaden the generality of NSC. In particular, when $S = \mathbb{R}^{d}$, \eqref{eq:new slater defn} takes the simpler form of \begin{align} \nu(g,A,S) := \begin{cases} \underset{v,x}{\min} \, \frac{ \operatorname{dist}( DA(x)^\top v , \partial g(x)/\b) }{\|v\|} \\ \|v\|\le \rho\\ \|x\|\le \rho', \end{cases} \label{eq:defn_nu_A} \end{align} where $\operatorname{dist}(\cdot,\partial g(x)/\b)$ returns the Euclidean distance to the set $\partial g(x)/\b$. 
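To make \eqref{eq:defn_nu_A} more concrete, note that in the special case $g=0$ the distance above reduces to $\|DA(x)^\top v\|$, and minimizing over $v$ yields the $m$th largest singular value of $DA(x)$, so that $\nu$ becomes $\min_{\|x\|\le\rho'}\eta_m(DA(x))$. The following minimal Python sketch is our own illustration, not part of the formal development; the sampled operator and the sampling scheme are assumptions made only for this example.
\begin{verbatim}
import numpy as np

def estimate_nu(jacobian, d, rho_prime, n_samples=10000, seed=0):
    """Monte-Carlo estimate of nu(g, A, S) for g = 0 and S = R^d.

    For g = 0, dist(DA(x)^T v, {0}) / ||v|| is minimized over v by the
    smallest singular value of DA(x)^T, i.e. the m-th singular value
    of the m-by-d Jacobian DA(x)."""
    rng = np.random.default_rng(seed)
    nu = np.inf
    for _ in range(n_samples):
        # sample x uniformly from the ball of radius rho_prime
        x = rng.standard_normal(d)
        x *= rho_prime * rng.random() ** (1.0 / d) / np.linalg.norm(x)
        J = jacobian(x)  # DA(x), shape (m, d)
        nu = min(nu, np.linalg.svd(J, compute_uv=False)[-1])
    return nu

# Example: A(x) = x^T B x - 1, so DA(x) = (2 B x)^T.  Here the infimum
# over a ball containing the origin is 0, which is why the corresponding
# condition verification requires the iterates to stay away from x = 0.
B = np.diag([1.0, 2.0, 3.0])
print(estimate_nu(lambda x: (2 * B @ x).reshape(1, -1), d=3, rho_prime=2.0))
\end{verbatim}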
\paragraph{Convex case.} To better parse Definition \ref{defn:nonconvex slater}, let us assume that $f:\mathbb{R}^d\rightarrow\mathbb{R}$ is convex, $g = 1_C:\mathbb{R}^d\rightarrow\mathbb{R}$ is the indicator function for a convex and bounded set $C$, and $A$ is a nonsingular linear operator represented with the full-rank matrix $A\in \mathbb{R}^{m\times d}$. We can now study the geometric interpretation of $\nu(\cdot)$. Assuming that $S=\mathbb{R}^d$ and using the well-known Moreau decomposition, it is not difficult to rewrite \eqref{eq:new slater defn} as \begin{align} \nu(g,A,S) & := \begin{cases} \underset{v,x}{\min} \, \, \frac{\left\| P_{T_C(x)} A^\top v \right\|}{\|v\|} \\ \|v\|\le \rho\\ \|x\|\le \rho' \end{cases} \nonumber\\ & = \begin{cases} \underset{x}{\min} \,\, \eta_{m}\left( P_{T_C(x)} A^\top \right) \\ \|x\|\le \rho', \end{cases} \label{eq:nu for cvx} \end{align} where $P_{T_C(x)}$ is the projection onto the tangent cone of $C$ at $x$, and $\eta_{m}(\cdot)$ returns the $m$th largest singular value of its input matrix. Intuitively then, NSC ensures that the row span of $A$ is not tangent to $C$, similar to CSC. This close relationship between NSC and CSC is formalized next and proved in Appendix~\ref{sec:proof of prop}. \begin{proposition}\label{prop:convex} In Program (\ref{prob:01}), suppose that \begin{itemize} \item $f:\mathbb{R}^d\rightarrow\mathbb{R}$ is convex, \item $g=1_C$ is the indicator on a convex and bounded set $C\subset\mathbb{R}^d$, \item $A:\mathbb{R}^d\rightarrow\mathbb{R}^m$ is a nonsingular linear operator, represented with the full-rank matrix $A\in \mathbb{R}^{m\times d}$.\footnote{As mentioned earlier, it is easy to remove the full-rank assumption by replacing $DA(x)$ in \eqref{eq:new slater defn} with its orthonormal basis. We assume $A$ to be full-rank for clarity at the cost of a simple projection to remove any ``redundant measurements'' from Program~\eqref{prob:01}.} \end{itemize} Assume also that Program (\ref{prob:01}) is feasible, namely, there exists $x\in C$ such that $Ax=0$. Then CSC holds if NSC holds. Moreover, suppose that $S$ is the affine hull of $C$. Then CSC holds if and only if NSC holds. \end{proposition} \paragraph{Boundedness of $C$.} Let us add that the boundedness of $C$ in Proposition~\ref{prop:convex} is necessary. For example, suppose that $A\in \mathbb{R}^{1\times d}$ is the vector of all ones and, for a small perturbation vector $\epsilon\in \mathbb{R}^d$, let $C=\{x\in \mathbb{R}^d: (A+\epsilon) x\ge 0\}$ be a half space. Then CSC holds but $\nu(g,A,S)$ (with $S=\mathbb{R}^d$) can be made arbitrarily small by making $\epsilon$ small. \section{Proof of Proposition \ref{prop:convex} \label{sec:proof of prop}} To prove the first claim of the proposition, suppose that CSC does not hold, namely, that \begin{equation} \operatorname{relint}(\operatorname{null}(A) \cap C) = \operatorname{null}(A)\cap \operatorname{relint}(C) = \emptyset, \label{eq:no feas} \end{equation} where $\operatorname{null}(A)$ and $\operatorname{relint}(C)$ denote the null space of the matrix $A$ and the relative interior of $C$, respectively. By assumption, there exists $x_0\in C$ such that $Ax_0=0$. It follows from \eqref{eq:no feas} that $x_0\in \text{boundary}(C)$ and that $\operatorname{null}(A)$ supports $C$ at $x_0$, namely, $A x\ge 0$, for every $x\in C$. (The inequality applies to each entry of the vector $Ax$.) Consequently, $\operatorname{null}(A) \cap T_C(x_0) \ne \{0\}$, where $T_C(x_0)$ is the tangent cone of the set $C$ at $x_0$. 
Equivalently, it holds that $\operatorname{row}(A)\cap N_C(x_0) \ne \{0\}$, where $\operatorname{row}(A)$ is the row space of the matrix $A$. That is, there exists a unit-norm vector $v$ such that $P_{T_C(x_0)}A^\top v=0$ and, consequently, $P_S P_{T_C(x_0)}A^\top v=0$. Let us take $\rho'=\|x_0\|$ in \eqref{eq:nu for cvx}. We then conclude that $\nu(g,A,S)=0$, namely, NSC also does not hold, which proves the first claim in Proposition \ref{prop:convex}. For the converse, suppose that NSC does not hold with $\rho'=0$, namely, there exists $x\in \mathbb{R}^d$ such that \begin{align*} \eta_m(P_S P_{T_C(x)} A^\top) & = \eta_m( P_{T_C(x)} A^\top) =0, \end{align*} \begin{align*} \quad A(x) = 0, \quad x\in C, \end{align*} where the first equality above holds because $S$ is the affine hull of $C$. Then, thanks to the boundedness of $C$, it must be the case that $x\in \text{boundary}(C)$. Indeed, if $x\in \text{relint}(C)$, we have that $T_C(x)=S$ and thus \begin{equation*} \eta_m(P_{S}P_{T_C(x)} A^\top)= \eta_m( P_S A^\top)>0, \end{equation*} where the inequality holds because, by assumption, $A$ is full-rank. Since $x\in\text{boundary}(C)$, it follows that $\text{dim}(T_C(x)) \ge m-1$. That is, $\text{row}(A)$ contains a unique direction orthogonal to $T_C(x)$. In particular, it follows that $\text{null}(A) \subset T_C(x)$ and, consequently, $\text{int}(\text{null}(A)\cap C) =\emptyset $, namely, CSC does not hold. This proves the second (and last) claim in Proposition \ref{prop:convex}. \section{Related Work \label{sec:related work}} ALM has a long history in the optimization literature, dating back to~\cite{hestenes1969multiplier, powell1969method}. In the special case of~\eqref{prob:01} with a convex function $f$ and a linear operator $A$, standard, inexact, and linearized versions of ALM have been extensively studied~\cite{lan2016iteration,nedelcu2014computational,tran2018adaptive,xu2017inexact}. Classical works on ALM focused on the general template of~\eqref{prob:01} with nonconvex $f$ and nonlinear $A$, under arguably stronger assumptions, and required exact solutions to the subproblems of the form \eqref{e:exac}, which appear in Step 2 of Algorithm~\ref{Algo:2}; see for instance \cite{bertsekas2014constrained}. A similar analysis was conducted in~\cite{fernandez2012local} for the general template of~\eqref{prob:01}. The authors considered inexact ALM and proved convergence rates for the outer iterates, under specific assumptions on the initialization of the dual variable. In contrast to our work, however, the authors did not analyze how to solve the subproblems inexactly and did not provide total complexity results with verifiable conditions. Problem~\eqref{prob:01} with assumptions similar to ours is also studied in~\cite{birgin2016evaluation} and~\cite{cartis2018optimality} for first-order and second-order stationarity, respectively, with explicit iteration complexity analysis. As we have mentioned in Section~\ref{sec:cvg rate}, our iteration complexity results match those of these theoretical works, while using a simpler algorithm and a simpler analysis. In addition, these algorithms require the final accuracies to be set a priori, since this information is used inside the algorithms, whereas our Algorithm~\ref{Algo:2} does not set accuracies a priori. \cite{cartis2011evaluation} also considers the same template~\eqref{prob:01} for first-order stationarity with a penalty-type method instead of ALM. 
Even though the authors show $\mathcal{O}(1/\epsilon^2)$ complexity, this result is obtained by assuming that the penalty parameter remains bounded. We note that such an assumption can also be used to improve our complexity results to match theirs. \cite{bolte2018nonconvex} studies the general template~\eqref{prob:01} with specific assumptions involving local error bound conditions for~\eqref{prob:01}. These conditions are studied in detail in~\cite{bolte2017error}, but their validity for general SDPs~\eqref{eq:sdp} has never been established. This work also lacks the total iteration complexity analysis presented here. Another work~\cite{clason2018acceleration} focused on solving~\eqref{prob:01} by adapting the primal-dual method of Chambolle and Pock~\cite{chambolle2011first}. The authors proved the convergence of the method and provided a convergence rate by imposing error bound conditions on the objective function that do not hold for standard SDPs. \cite{burer2003nonlinear, burer2005local} are the first works that propose the splitting $X=UU^\top$ for solving SDPs of the form~\eqref{eq:sdp}. Following these works, the literature on Burer-Monteiro (BM) splitting for the most part focused on using ALM for solving the reformulated problem~\eqref{prob:nc}. However, this proposal has a few drawbacks: First, it requires exact solutions in Step 2 of Algorithm~\ref{Algo:2} in theory, which in practice are replaced with inexact solutions. Second, their results only establish convergence without providing the rates. In this sense, our work provides a theoretical understanding of the BM splitting with inexact solutions to Step 2 of Algorithm~\ref{Algo:2} and complete iteration complexities. \cite{bhojanapalli2016dropping, park2016provable} are among the earliest efforts to show convergence rates for BM splitting, focusing on the special case of SDPs without any linear constraints. For these specific problems, they prove the convergence of gradient descent to global optima with convergence rates, assuming favorable initialization. These results, however, do not apply to general SDPs of the form~\eqref{eq:sdp} where the difficulty arises due to the linear constraints. Another popular method for solving SDPs is due to~\cite{boumal2014manopt, boumal2016global, boumal2016non}, focusing on the case where the constraints in~\eqref{prob:01} can be written as a Riemannian manifold after BM splitting. In this case, the authors apply the Riemannian gradient descent and Riemannian trust region methods for obtaining first- and second-order stationary points, respectively. They obtain~$\mathcal{O}(1/\epsilon^2)$ complexity for finding first-order stationary points and~$\mathcal{O}(1/\epsilon^3)$ complexity for finding second-order stationary points. While these complexities appear better than ours, the smooth manifold requirement in these works is indeed restrictive. In particular, this requirement holds for max-cut and generalized eigenvalue problems, but it is not satisfied for other important SDPs such as the quadratic assignment problem (QAP), optimal power flow and clustering with general affine constraints. In addition, as noted in~\cite{boumal2016global}, the per-iteration cost of their method for the max-cut problem is an astronomical~$\mathcal{O}({d^6})$. Lastly, there also exists a line of work for solving SDPs in their original convex formulation, in a storage-efficient way~\cite{nesterov2009primal, yurtsever2015universal, yurtsever2018conditional}. 
These works have global optimality guarantees by virtue of directly solving the convex formulation. On the downside, these works require the use of eigenvalue routines and exhibit significantly slower convergence as compared to nonconvex approaches~\cite{jaggi2013revisiting}. \section{Introduction}\vspace{-3mm} \label{intro} We study the nonconvex optimization problem \begin{equation} \label{prob:01} \underset{x\in \mathbb{R}^d}{\min}\,\, f(x)+g(x) \quad \text{s.t.} \quad A(x) = 0, \end{equation} where $f:\mathbb{R}^d\rightarrow\mathbb{R}$ is a {continuously-differentiable} nonconvex function and $A:\mathbb{R}^d\rightarrow\mathbb{R}^m$ is a nonlinear operator. We assume that $g:\mathbb{R}^d\rightarrow\mathbb{R}$ is a proximal-friendly convex function \cite{parikh2014proximal}. A host of problems in computer science \cite{khot2011grothendieck, lovasz2003semidefinite, zhao1998semidefinite}, machine learning \cite{mossel2015consistency, song2007dependence}, and signal processing~\cite{singer2011angular, singer2011three} naturally fall under the template~\eqref{prob:01}, including max-cut, clustering, generalized eigenvalue decomposition, as well as the quadratic assignment problem (QAP) \cite{zhao1998semidefinite}. To solve \eqref{prob:01}, we propose an intuitive and easy-to-implement {augmented Lagrangian} algorithm, and provide its total iteration complexity under an interpretable geometric condition. Before we elaborate on the results, let us first motivate~\eqref{prob:01} with an application to semidefinite programming (SDP): \paragraph{Vignette: Burer-Monteiro splitting.} A powerful convex relaxation for max-cut, clustering, and many others is provided by the SDP \begin{equation} \label{eq:sdp} \underset{X\in\mathbb{S}^{d \times d}}{\min} \langle C, X \rangle \quad \text{s.t.} \quad B(X) = b, \,\, X \succeq 0, \end{equation} where $C\in \mathbb{R}^{d\times d}$, $X$ is a positive semidefinite $d\times d$ matrix, and ${B}: \mathbb{S}^{d\times d} \to \mathbb{R}^m$ is a linear operator. If the unique-games conjecture is true, the SDP \eqref{eq:sdp} obtains the best possible approximation for the underlying discrete problem~\cite{raghavendra2008optimal}. Since $d$ is often large, many first- and second-order methods for solving such SDPs are immediately ruled out, not only due to their high computational complexity, but also due to their storage requirements, which are $\mathcal{O}(d^2)$. A contemporary challenge in optimization is therefore to solve SDPs using little space and in a scalable fashion. The recent homotopy conditional gradient method, which is based on linear minimization oracles (LMOs), can solve \eqref{eq:sdp} in a small space via sketching \cite{yurtsever2018conditional}. However, such LMO-based methods are extremely slow in obtaining accurate solutions. A different approach for solving \eqref{eq:sdp}, dating back to~\cite{burer2003nonlinear, burer2005local}, is the so-called Burer-Monteiro (BM) factorization $X=UU^\top$, where $U\in\mathbb{R}^{d\times r}$ and $r$ is selected according to the guidelines in~\cite{pataki1998rank, barvinok1995problems}, a choice which is known to be tight~\cite{waldspurger2018rank}. The BM factorization leads to the following nonconvex problem in the template~\eqref{prob:01}: \begin{equation} \label{prob:nc} \underset{U\in\mathbb{R}^{d \times r}}{\min} \langle C, UU^\top \rangle \quad \text{s.t.} \quad B(UU^\top) = b. \end{equation} The BM factorization does not introduce any extraneous local minima~\cite{burer2005local}. 
Moreover,~\cite{boumal2016non} establishes the connection between the local minimizers of the factorized problem~\eqref{prob:nc} and the global minimizers for~\eqref{eq:sdp}. To solve \eqref{prob:nc}, the inexact Augmented Lagrangian method (iALM) is widely used~\cite{burer2003nonlinear, burer2005local, kulis2007fast}, due to its cheap per iteration cost and its empirical success. Every (outer) iteration of iALM calls a solver to solve an intermediate augmented Lagrangian subproblem to near stationarity. The choices include first-order methods, such as proximal gradient descent \cite{parikh2014proximal}, or second-order methods, such as the trust region method and BFGS~\cite{nocedal2006numerical}.\footnote{BFGS is in fact a quasi-Newton method that emulates second-order information.} Unlike its convex counterpart~\cite{nedelcu2014computational,lan2016iteration,xu2017inexact}, the convergence rate and the complexity of iALM for~\eqref{prob:nc} are not well-understood, see Section~\ref{sec:related work} for a review of the related literature. Indeed, addressing this important theoretical gap is one of the contributions of our work. In addition: \\[2mm] $\triangleright$ We derive the convergence rate of iALM to first- or second-order optimality, and find the total iteration complexity of iALM using different solvers for the augmented Lagrangian subproblems. Our complexity bounds match the best theoretical results in optimization, see Section~\ref{sec:related work}. \\[2mm] $\triangleright$ Our iALM framework is future-proof in the sense that different subsolvers can be substituted. \\[2mm] $\triangleright$ We propose a geometric condition that simplifies the algorithmic analysis for iALM, and clarify its connection to the well-known Polyak-Lojasiewicz \cite{karimi2016linear} and Mangasarian-Fromovitz \cite{bertsekas1982constrained} conditions. We also verify this condition for key problems in Section~\ref{sec:experiments}. \section{Algorithm \label{sec:AL algorithm}} To solve the equivalent formulation of \eqref{prob:01} presented in \eqref{eq:minmax}, we propose the inexact ALM (iALM), detailed in Algorithm~\ref{Algo:2}. At the $k^{\text{th}}$ iteration, Step 2 of Algorithm~\ref{Algo:2} calls a solver that finds an approximate stationary point of the augmented Lagrangian $\L_{\b_k}(\cdot,y_k)$ to accuracy $\epsilon_{k+1}$, and this accuracy gradually improves in a controlled fashion. The increasing sequence of penalty weights $\{\b_k\}_k$ and the dual update (Steps 4 and 5) are responsible for continuously enforcing the constraints in~\eqref{prob:01}. The appropriate choice for $\{\b_k\}_k$ will be specified in Sections \ref{sec:first-o-opt} and \ref{sec:second-o-opt}. The particular choice of the dual step sizes $\{\sigma_k\}_k$ in Algorithm~\ref{Algo:2} ensures that the dual variable $y_k$ remains bounded, see~\cite{bertsekas1976penalty} in the ALM literature where a similar dual step size is considered. \begin{algorithm}[h!] \begin{algorithmic} \STATE \textbf{Input:} Non-decreasing, positive, unbounded sequence $\{\b_k\}_{k\ge 1}$, stopping thresholds $\tau_f, \tau_s > 0$. \vspace{2pt} \STATE \textbf{Initialization:} Primal variable $x_{1}\in \mathbb{R}^d$, dual variable $y_0\in \mathbb{R}^m$, dual step size $\sigma_1>0$. \vspace{2pt} \FOR{$k=1,2,\dots$} \STATE \begin{enumerate} \item \textbf{(Update tolerance)} $\epsilon_{k+1} = 1/\b_k$. 
\item \textbf{(Inexact primal solution)} Obtain $x_{k+1}\in \mathbb{R}^d$ such that \begin{equation*} \operatorname{dist}(-\nabla_x \L_{\beta_k} (x_{k+1},y_k), \partial g(x_{k+1}) ) \le \epsilon_{k+1} \end{equation*} for first-order stationarity, and additionally \begin{equation*} \lambda_{\text{min}}(\nabla _{xx}\mathcal{L}_{\beta_k}(x_{k+1}, y_k)) \ge -\epsilon_{k+1} \end{equation*} for second-order stationarity, if $g=0$ in \eqref{prob:01}. \item \textbf{(Update dual step size)} \begin{align*} \sigma_{k+1} & = \sigma_{1} \min\Big( \frac{\|A(x_1)\| \log^2 2 }{\|A(x_{k+1})\| (k+1)\log^2(k+2)} ,1 \Big). \end{align*} \item \textbf{(Dual ascent)} $y_{k+1} = y_{k} + \sigma_{k+1}A(x_{k+1})$. \item \textbf{(Stopping criterion)} If \begin{align*} & \operatorname{dist}(-\nabla_x \L_{\b_k}(x_{k+1},y_{k+1}),\partial g(x_{k+1})) + \|A(x_{k+1})\| \le \tau_f,\nonumber \end{align*} for first-order stationarity and if also $\lambda_{\text{min}}(\nabla _{xx}\mathcal{L}_{\beta_{k}}(x_{k+1}, y_k)) \geq -\tau_s$ for second-order stationarity, then quit and return $x_{k+1}$ as an (approximate) stationary point of \eqref{eq:minmax}. \end{enumerate} \ENDFOR \end{algorithmic} \caption{Inexact ALM} \label{Algo:2} \end{algorithm} \section{Numerical Evidence \label{sec:experiments}} We begin with a caveat: It is known that quasi-Newton methods, such as BFGS and lBFGS, might not converge for nonconvex problems~\cite{dai2002convergence, mascarenhas2004bfgs}. For this reason, we have used the trust region method as the second-order solver in our analysis in Section~\ref{sec:cvg rate}, which is well-studied for nonconvex problems~\cite{cartis2012complexity}. Empirically, however, BFGS and lBFGS are extremely successful, and we have therefore opted for these solvers in this section, since the choice of subroutine does not affect Theorem~\ref{thm:main} as long as the subsolver performs well in practice. \subsection{Clustering} Given data points $\{z_i\}_{i=1}^n $, the entries of the corresponding Euclidean distance matrix $D \in \mathbb{R}^{n\times n}$ are $ D_{i, j} = \left\| z_i - z_j\right\|^2 $. Clustering is then the problem of finding a co-association matrix $Y\in \mathbb{R}^{n\times n}$ such that $Y_{ij} = 1$ if points $z_i$ and $z_j$ are within the same cluster and $Y_{ij} = 0$ otherwise. In~\cite{Peng2007}, the authors provide an SDP relaxation of the clustering problem, specified as \begin{align} \underset{Y \in \mathbb{R}^{n\times n}}{\min} \text{tr}(DY) \quad \text{s.t.} \quad Y\mathbf{1} = \mathbf{1}, ~\text{tr}(Y) = s,~ Y\succeq 0,~Y \geq 0, \label{eq:sdp_svx} \end{align} where $s$ is the number of clusters and $Y$ is both positive semidefinite and entrywise nonnegative. Standard SDP solvers do not scale well with the number of data points~$n$, since they often require projection onto the semidefinite cone with complexity $\mathcal{O}(n^3)$. We instead use the BM factorization to solve \eqref{eq:sdp_svx}, sacrificing convexity to reduce the computational complexity. More specifically, we solve the program \begin{align} \label{eq:nc_cluster} \underset{V \in \mathbb{R}^{n\times r}}{\min} \text{tr}(DVV^{\top}) \quad \text{s.t.} \quad VV^{\top}\mathbf{1} = \mathbf{1},~~ \|V\|_F^2 \le s, ~~V \geq 0, \end{align} where $\mathbf{1}\in \mathbb{R}^n$ is the vector of all ones. Note that $Y \geq 0$ in \eqref{eq:sdp_svx} is replaced above by the much stronger but easier-to-enforce constraint $V \geq 0$ in \eqref{eq:nc_cluster}, see~\cite{kulis2007fast} for the reasoning behind this relaxation. 
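Before casting \eqref{eq:nc_cluster} into the template, we pause to note how simple the outer loop of Algorithm~\ref{Algo:2} is to implement. The following is a minimal Python sketch of ours, not the exact experimental code: the subsolver \texttt{subsolve} stands in for APGM or lBFGS applied to the augmented Lagrangian subproblem, the oracle \texttt{A} evaluates the constraint map, and the stopping criterion is omitted.
\begin{verbatim}
import numpy as np

def ialm(x, y, A, subsolve, sigma1=1.0, beta=lambda k: float(k), iters=50):
    """Sketch of Algorithm 2 (iALM): inexact primal solves plus dual ascent.

    subsolve(x0, y, beta_k, eps) is assumed to return an eps-approximate
    first-order stationary point of L_{beta_k}(., y) + g, e.g. via APGM."""
    normA1 = np.linalg.norm(A(x))              # ||A(x_1)||, used by Step 3
    for k in range(1, iters + 1):
        eps = 1.0 / beta(k)                    # Step 1: eps_{k+1} = 1/beta_k
        x = subsolve(x, y, beta(k), eps)       # Step 2: inexact primal solution
        Ax = A(x)
        nAx = max(np.linalg.norm(Ax), 1e-12)   # guard against division by zero
        sigma = sigma1 * min(                  # Step 3: dual step size
            normA1 * np.log(2) ** 2 / (nAx * (k + 1) * np.log(k + 2) ** 2), 1.0)
        y = y + sigma * Ax                     # Step 4: dual ascent
    return x, y
\end{verbatim}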
Now, we can cast~\eqref{eq:nc_cluster} as an instance of~\eqref{prob:01}. Indeed, for every $i\le n$, let $x_i \in \mathbb{R}^r$ denote the $i$th row of $V$. We next form $x \in \mathbb{R}^d$ with $d = nr$ by expanding the factorized variable $V$, namely, $ x := [x_1^{\top}, \cdots, x_n^{\top}]^{\top} \in \mathbb{R}^d, $ and then set \begin{align*} f(x) =\sum_{i,j=1}^n D_{i, j} \left\langle x_i, x_j \right\rangle, \qquad g = \delta_C, \qquad A(x) = [x_1^{\top}\sum_{j=1}^n x_j -1, \cdots, x_n^{\top}\sum_{j=1}^n x_j-1]^{\top}, \end{align*} where $C$ is the intersection of the positive orthant in $\mathbb{R}^d$ with the Euclidean ball of radius $\sqrt{s}$. In Appendix~\ref{sec:verification2}, we verify that Theorem~\ref{thm:main} applies to~\eqref{prob:01} with $f,g,A$ specified above. In our simulations, we use two different solvers for Step~2 of Algorithm~\ref{Algo:2}, namely, APGM and lBFGS. APGM is a solver for nonconvex problems of the form~\eqref{e:exac} with convergence guarantees to first-order stationarity, as discussed in Section~\ref{sec:cvg rate}. lBFGS is a limited-memory version of the BFGS algorithm~\cite{fletcher2013practical} that approximately leverages the second-order information of the problem. We compare our approach against the following convex methods: \begin{itemize} \item HCGM: Homotopy-based Conditional Gradient Method in~\cite{yurtsever2018conditional} which directly solves~\eqref{eq:sdp_svx}. \item SDPNAL+: A second-order augmented Lagrangian method for solving SDP's with nonnegativity constraints~\cite{yang2015sdpnal}. \end{itemize} As for the dataset, our experimental setup is similar to that described by~\cite{mixon2016clustering}. We use the publicly-available fashion-MNIST data in \cite{xiao2017/online}, which is released as a possible replacement for the MNIST handwritten digits. Each data point is a $28\times 28$ gray-scale image, associated with a label from ten classes, labeled from $0$ to $9$. First, we extract the meaningful features from this dataset using a simple two-layer neural network with a sigmoid activation function. Then, we apply this neural network to 1000 test samples from the same dataset, which gives us a vector of length $10$ for each data point, where each entry represents the posterior probability for each class. Then, we form the $\ell_2$ distance matrix ${D}$ from these probability vectors. The solution rank for the template~\eqref{eq:sdp_svx} is known and is equal to the number of clusters $s$ \cite[Theorem~1]{kulis2007fast}. As discussed in~\cite{tepper2018clustering}, setting the rank $r>s$ leads to more accurate reconstruction at the expense of speed. Therefore, we set the rank to 20. The results are depicted in Figure~\ref{fig:clustering}. We implemented the first three algorithms in MATLAB and used the software package for SDPNAL+, which contains mex files; we expect that the performance of our nonconvex approach would improve further by using mex files. \begin{figure}[] \begin{center} {\includegraphics[width=.4\columnwidth]{figs/clustering_fig4_times_linearbeta_last.pdf}} {\includegraphics[width=.4\columnwidth]{figs/cluster_feas_time.pdf}} \caption{Clustering running time comparison. } \label{fig:clustering} \end{center} \end{figure} \subsection{Additional demonstrations} We provide several additional experiments in Appendix \ref{sec:adexp}. Section \ref{sec:bp} discusses a novel nonconvex relaxation of the standard basis pursuit template which performs comparably to state-of-the-art convex solvers. 
In Section \ref{sec:geig}, we provide fast numerical solutions to the generalized eigenvalue problem. In Section \ref{sec:gan}, we give a contemporary application to which our template applies, namely, denoising with generative adversarial networks. Finally, we provide improved bounds for sparse quadratic assignment problem instances in Section \ref{sec:qap}. \section{Preliminaries \label{sec:preliminaries}}\vspace{-3mm} \paragraph{\textbf{Notation.}} We use the notation $\langle\cdot ,\cdot \rangle $ and $\|\cdot\|$ for the standard inner product and the norm on $\mathbb{R}^d$. For matrices, $\|\cdot\|$ and $\|\cdot\|_F$ denote the spectral and the Frobenius norms, respectively. For the convex function $g:\mathbb{R}^d\rightarrow\mathbb{R}$, the subdifferential set at $x\in \mathbb{R}^d$ is denoted by $\partial g(x)$ and we will occasionally use the notation $\partial g(x)/\b = \{ z/\b : z\in \partial g(x)\}$. When presenting iteration complexity results, we often use $\widetilde{O}(\cdot)$ which suppresses the logarithmic dependencies. We denote by $\delta_\mathcal{X}:\mathbb{R}^d\rightarrow\mathbb{R}$ the indicator function of a set $\mathcal{X}\subset\mathbb{R}^d$. The distance function from a point $x$ to $\mathcal{X}$ is denoted by $\operatorname{dist}(x,\mathcal{X}) = \min_{z\in \mathcal{X}} \|x-z\|$. For integers $k_0 \le k_1$, we use the notation $[k_0:k_1]=\{k_0,\ldots,k_1\}$. For an operator $A:\mathbb{R}^d\rightarrow\mathbb{R}^m$ with components $\{A_i\}_{i=1}^m$, $DA(x) \in \mathbb{R}^{m\times d}$ denotes the Jacobian of $A$, where the $i$th row of $DA(x)$ is the vector $\nabla A_i(x) \in \mathbb{R}^d$. \paragraph{Smoothness.} We assume smooth $f:\mathbb{R}^d\rightarrow\mathbb{R}$ and $A:\mathbb{R}^d\rightarrow \mathbb{R}^m$; i.e., there exist $\lambda_f,\lambda_A\ge 0$ such that \begin{align} \| \nabla f(x) - \nabla f(x')\| \le \lambda_f \|x-x'\|, \quad \| DA(x) - DA(x') \| \le \lambda_A \|x-x'\|, \quad \forall x, x' \in \mathbb{R}^d . \label{eq:smoothness basic} \end{align} \paragraph{Augmented Lagrangian method (ALM).} ALM is a classical algorithm, which first appeared in~\cite{hestenes1969multiplier, powell1969method} and was extensively studied afterwards in~\cite{bertsekas1982constrained, birgin2014practical}. For solving \eqref{prob:01}, ALM suggests solving the problem \begin{equation} \min_{x} \max_y \,\,\mathcal{L}_\beta(x,y) + g(x), \label{eq:minmax} \end{equation} where, for penalty weight $\b>0$, $\mathcal{L}_\b$ is the corresponding augmented Lagrangian, defined as \begin{align} \label{eq:Lagrangian} \mathcal{L}_\beta(x,y) := f(x) + \langle A(x), y \rangle + \frac{\beta}{2}\|A(x)\|^2. \end{align} The minimax formulation in \eqref{eq:minmax} naturally suggests the following algorithm for solving \eqref{prob:01}: \begin{equation}\label{e:exac} x_{k+1} \in \underset{x}{\operatorname{argmin}} \,\, \mathcal{L}_{\beta}(x,y_k)+g(x), \end{equation} \begin{equation*} y_{k+1} = y_k+\sigma_k A(x_{k+1}), \end{equation*} where $\{\sigma_k\}_k$ denotes the dual step sizes. However, computing $x_{k+1}$ above requires solving the nonconvex problem~\eqref{e:exac} to optimality, which is typically intractable. Instead, it is often easier to find an approximate first- or second-order stationary point of~\eqref{e:exac}. Hence, we argue that by gradually improving the stationarity precision and increasing the penalty weight $\b$ above, we can reach a stationary point of the main problem in~\eqref{eq:minmax}, as detailed in Section~\ref{sec:AL algorithm}. 
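In code, the augmented Lagrangian \eqref{eq:Lagrangian} and its gradient take a particularly simple form; the gradient expression below is derived explicitly in Appendix~\ref{sec:proof of smoothness lemma}. The following is a minimal Python sketch of ours, in which \texttt{f}, \texttt{grad\_f}, \texttt{A} and \texttt{DA} are user-supplied oracles.
\begin{verbatim}
import numpy as np

def aug_lagrangian(x, y, beta, f, A):
    """L_beta(x, y) = f(x) + <A(x), y> + (beta/2) ||A(x)||^2."""
    Ax = A(x)
    return f(x) + Ax @ y + 0.5 * beta * (Ax @ Ax)

def aug_lagrangian_grad(x, y, beta, grad_f, A, DA):
    """grad_x L_beta(x, y) = grad f(x) + DA(x)^T (y + beta * A(x))."""
    return grad_f(x) + DA(x).T @ (y + beta * A(x))
\end{verbatim}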
\paragraph{{\textbf{Optimality conditions.}}} {First-order necessary optimality conditions} for \eqref{prob:01} are well-studied. {Indeed, $x\in \mathbb{R}^d$ is a first-order stationary point of~\eqref{prob:01} if there exists $y\in \mathbb{R}^m$ such that \begin{align} -\nabla_x \mathcal{L}_\beta(x,y) \in \partial g(x),\qquad A(x) = 0, \label{e:inclu2} \end{align} which is in turn the necessary optimality condition for \eqref{eq:minmax}.} Inspired by this, we say that $x$ is an $(\epsilon_f,\b)$ first-order stationary point of \eqref{eq:minmax} if {there exists a $y \in \mathbb{R}^m$} such that \begin{align} \operatorname{dist}(-\nabla_x \mathcal{L}_\beta(x,y), \partial g(x)) \leq \epsilon_f, \qquad \| A(x) \| \leq \epsilon_f, \label{eq:inclu3} \end{align} for $\epsilon_f\ge 0$. In light of \eqref{eq:inclu3}, a metric for evaluating the stationarity of a pair $(x,y)\in \mathbb{R}^d\times \mathbb{R}^m$ is \begin{align} \operatorname{dist}\left(-\nabla_x \mathcal{L}_\beta(x,y), \partial g(x) \right) + \|A(x)\| , \label{eq:cvg metric} \end{align} which we use as the first-order stopping criterion. As an example, for a convex set $\mathcal{X}\subset\mathbb{R}^d$, suppose that $g = \delta_\mathcal{X}$ is the indicator function on $\mathcal{X}$. Let also $T_\mathcal{X}(x) \subseteq \mathbb{R}^d$ denote the tangent cone to $\mathcal{X}$ at $x$, and with $P_{T_\mathcal{X}(x)}:\mathbb{R}^d\rightarrow\mathbb{R}^d$ we denote the orthogonal projection onto this tangent cone. Then, for $u\in\mathbb{R}^d$, it is not difficult to verify that \begin{align}\label{eq:dist_subgrad} \operatorname{dist}\left(u, \partial g(x) \right) = \| P_{T_\mathcal{X}(x)}(u) \|. \end{align} When $g = 0$, a first-order stationary point $x\in \mathbb{R}^d$ of \eqref{prob:01} is also second-order stationary if \begin{equation} \lambda_{\text{min}}(\nabla _{xx} \mathcal{L}_{\beta}(x,y))\ge 0, \end{equation} where $\nabla_{xx}\mathcal{L}_\b$ is the Hessian of $\mathcal{L}_\b$ with respect to $x$, and $\lambda_{\text{min}}(\cdot)$ returns the smallest eigenvalue of its argument. Analogously, $x$ is an $(\epsilon_f, \epsilon_s,\b)$ second-order stationary point if, in addition to \eqref{eq:inclu3}, it holds that \begin{equation}\label{eq:sec_opt} \lambda_{\text{min}}(\nabla _{xx} \mathcal{L}_{\beta}(x,y)) \ge -\epsilon_s, \end{equation} for $\epsilon_s\ge 0$. Naturally, for second-order stationarity, we use $\lambda_{\text{min}}(\nabla _{xx} \mathcal{L}_{\beta}(x,y))$ as the stopping criterion. \paragraph{{\textbf{Smoothness lemma.}}} This next result controls the smoothness of $\L_\b(\cdot,y)$ for a fixed $y$. The proof is standard but nevertheless is included in Appendix~\ref{sec:proof of smoothness lemma} for completeness. \begin{lemma}[\textbf{smoothness}]\label{lem:smoothness} For fixed $y\in \mathbb{R}^m$ and $\rho,\rho'\ge 0$, it holds that \begin{align} \| \nabla_x \mathcal{L}_{\beta}(x, y)- \nabla_x \mathcal{L}_{\beta}(x', y) \| \le \lambda_\b \|x-x'\|, \end{align} for every $x,x' \in \{ x'': \|x''\|\le \rho, \|A(x'') \|\le \rho'\}$, where \begin{align} \lambda_\beta \le \lambda_f + \sqrt{m}\lambda_A \|y\| + (\sqrt{m}\lambda_A\rho' + d \lambda'^2_A )\b =: \lambda_f + \sqrt{m}\lambda_A \|y\| + \lambda''(A,\rho,\rho') \b. \label{eq:smoothness of Lagrangian} \end{align} Above, $\lambda_f,\lambda_A$ were defined in (\ref{eq:smoothness basic}) and \begin{align} \lambda'_A := \max_{\|x\|\le \rho}\|DA(x)\|. 
\end{align} \end{lemma} \section{Proof of Lemma \ref{lem:smoothness}\label{sec:proof of smoothness lemma}} \begin{proof} Note that \begin{align} \mathcal{L}_{\beta}(x,y) = f(x) + \sum_{i=1}^m y_i A_i (x) + \frac{\b}{2} \sum_{i=1}^m (A_i(x))^2, \end{align} which implies that \begin{align} & \nabla_x \mathcal{L}_\beta(x,y) \nonumber\\ & = \nabla f(x) + \sum_{i=1}^m y_i \nabla A_i(x) + \b \sum_{i=1}^m A_i(x) \nabla A_i(x) \nonumber\\ & = \nabla f(x) + DA(x)^\top y + \b DA(x)^\top A(x), \end{align} where $DA(x)$ is the Jacobian of $A$ at $x$. By taking another derivative with respect to $x$, we reach \begin{align} \nabla^2_x \mathcal{L}_\beta(x,y) & = \nabla^2 f(x) + \sum_{i=1}^m \left( y_i + \b A_i(x) \right) \nabla^2 A_i(x) \nonumber\\ & \qquad +\b \sum_{i=1}^m \nabla A_i(x) \nabla A_i(x)^\top. \end{align} It follows that \begin{align} & \|\nabla_x^2 \mathcal{L}_\beta(x,y)\|\nonumber\\ & \le \| \nabla^2 f(x) \| + \max_i \| \nabla^2 A_i(x)\| \left (\|y\|_1+\b \|A(x)\|_1 \right) \nonumber\\ & \qquad +\beta\sum_{i=1}^m \|\nabla A_i(x)\|^2 \nonumber\\ & \le \lambda_f+ \sqrt{m} \lambda_A \left (\|y\|+\b \|A(x)\| \right) + \b \|DA(x)\|^2_F. \end{align} For every $x$ such that $\|x\|\le \rho$ and $\|A(x)\|\le \rho'$, we conclude that \begin{align} \|\nabla_x^2 \mathcal{L}_\beta(x,y)\| & \le \lambda_f + \sqrt{m}\lambda_A \left(\|y\| + \b\rho' \right)+ \b \max_{\|x\|\le \rho}\|DA(x)\|_F^2, \end{align} which completes the proof of Lemma \ref{lem:smoothness}. \end{proof} \section{Clustering \label{sec:verification2}} We only verify the condition in~\eqref{eq:regularity} here. Note that \begin{align} A(x) = VV^\top \mathbf{1}- \mathbf{1}, \end{align} \begin{align} DA(x) & = \left[ \begin{array}{cccc} w_{1,1} x_1^\top & \cdots & w_{1,n} x_{1}^\top\\ \vdots\\ w_{n,1}x_{n}^\top & \cdots & w_{n,n} x_{n}^\top \end{array} \right] \nonumber\\ & = \left[ \begin{array}{ccc} V & \cdots & V \end{array} \right] + \left[ \begin{array}{ccc} x_1^\top & \\ & \ddots & \\ & & x_n^\top \end{array} \right], \label{eq:Jacobian clustering} \end{align} where $w_{i,i}=2$ and $w_{i,j}=1$ for $i\ne j$. In the last line above, $n$ copies of $V$ appear and the last matrix above is block-diagonal. For $x_k$, define $V_k$ accordingly and let $x_{k,i}$ be the $i$th row of $V_k$. Consequently, \begin{align} DA(x_k)^\top A(x_k) & = \left[ \begin{array}{c} (V_k^\top V_k - I_n) V_k^\top \mathbf{1}\\ \vdots\\ (V_k^\top V_k - I_n) V_k^\top \mathbf{1} \end{array} \right] \nonumber\\ & \qquad + \left[ \begin{array}{c} x_{k,1} (V_k V_k^\top \mathbf{1}- \mathbf{1})_1 \\ \vdots \\ x_{k,n} (V_k V_k^\top \mathbf{1}- \mathbf{1})_n \end{array} \right], \end{align} where $I_n\in \mathbb{R}^{n\times n}$ is the identity matrix. Let us make a number of simplifying assumptions. First, we assume that $\|x_k\|< \sqrt{s}$ (which can be enforced in the iterates by replacing $C$ with $(1-\epsilon)C$ for a small positive $\epsilon$ in the subproblems). Under this assumption, it follows that \begin{align} (\partial g(x_k))_i = \begin{cases} 0 & (x_k)_i > 0\\ \{a: a\le 0\} & (x_k)_i = 0, \end{cases} \qquad i\le d. \label{eq:exp-subgrad-cluster} \end{align} Second, we assume that $V_k$ has nearly orthonormal columns, namely, $V_k^\top V_k \approx I_n$. This can also be enforced in each iterate of Algorithm~\ref{Algo:2} and naturally corresponds to well-separated clusters. While a more fine-tuned argument can remove these assumptions, they will help us simplify the presentation here. 
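As an aside, with $\partial g(x_k)$ as in \eqref{eq:exp-subgrad-cluster}, the distance appearing in \eqref{eq:regularity} is easy to evaluate numerically: coordinates with $(x_k)_i>0$ contribute their full magnitude, while coordinates with $(x_k)_i=0$ contribute only their positive part. The following minimal Python sketch is our own illustration of this computation.
\begin{verbatim}
import numpy as np

def dist_to_subdiff(u, x):
    """dist(u, dg(x)) for g the indicator of the positive orthant, following
    (eq:exp-subgrad-cluster): a subgradient has zero i-th entry where
    x_i > 0 and any nonpositive i-th entry where x_i = 0."""
    d2 = np.sum(u[x > 0] ** 2)                     # must be matched by 0 exactly
    d2 += np.sum(np.maximum(u[x == 0], 0.0) ** 2)  # only the positive part remains
    return np.sqrt(d2)
\end{verbatim}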
Under these assumptions, the (squared) right-hand side of \eqref{eq:regularity} becomes \begin{align} & \operatorname{dist}\left( -DA(x_k)^\top A(x_k) , \frac{\partial g(x_k)}{ \b_{k-1}} \right)^2 \nonumber\\ & = \left\| \left( -DA(x_k)^\top A(x_k) \right)_+\right\|^2 \qquad (a_+ = \max(a,0)) \nonumber\\ & = \left\| \left[ \begin{array}{c} x_{k,1} (V_k V_k^\top \mathbf{1}- \mathbf{1})_1 \\ \vdots \\ x_{k,n} (V_k V_k^\top \mathbf{1}- \mathbf{1})_n \end{array} \right] \right\|^2 \qquad (x_k\in C \Rightarrow x_k\ge 0) \nonumber\\ & = \sum_{i=1}^n \| x_{k,i}\|^2 (V_kV_k^\top \mathbf{1}-\mathbf{1})_i^2 \nonumber\\ & \ge \min_i \| x_{k,i}\|^2 \cdot \sum_{i=1}^n (V_kV_k^\top \mathbf{1}-\mathbf{1})_i^2 \nonumber\\ & = \min_i \| x_{k,i}\|^2 \cdot \| V_kV_k^\top \mathbf{1}-\mathbf{1} \|^2. \label{eq:final-cnd-cluster} \end{align} Therefore, given a prescribed $\nu$, ensuring $\min_i \|x_{k,i}\| \ge \nu$ guarantees \eqref{eq:regularity}. When the algorithm is initialized close enough to the constraint set, there is indeed no need to separately enforce \eqref{eq:final-cnd-cluster}. In practice, often $n$ exceeds the number of true clusters and a more intricate analysis is required to establish \eqref{eq:regularity} by restricting the argument to a particular subspace of $\mathbb{R}^n$. \section{Additional Experiments}{\label{sec:adexp}} \subsection{Basis Pursuit}{\label{sec:bp}} Basis Pursuit (BP) finds the sparsest solution of an underdetermined system of linear equations by solving \begin{align} \min_{z} \|z\|_1 \quad \text{s.t.} \quad Bz = b, \label{eq:bp_main} \end{align} where $B \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^{n}$. Various primal-dual convex optimization algorithms are available in the literature to solve BP, including~\cite{tran2018adaptive,chambolle2011first}. We compare our algorithm against state-of-the-art primal-dual convex methods for solving \eqref{eq:bp_main}, namely, Chambolle-Pock~\cite{chambolle2011first}, ASGARD~\cite{tran2018smooth} and ASGARD-DL~\cite{tran2018adaptive}. Here, we take a different approach and cast~(\ref{eq:bp_main}) as an instance of~\eqref{prob:01}. Note that any $z \in \mathbb{R}^d$ can be decomposed as $z = z^+ - z^-$, where $z^+,z^-\in \mathbb{R}^d$ are the positive and negative parts of $z$, respectively. Then consider the change of variables $z^+ = u_1^{\circ 2}$ and $z^-= u_2^{\circ 2} \in \mathbb{R}^d$, where $\circ$ denotes element-wise power. Next, we concatenate $u_1$ and $u_2$ as $x := [ u_1^{\top}, u_2^{\top} ]^{\top} \in \mathbb{R}^{2d}$ and define $\overline{B} := [B, -B] \in \mathbb{R}^{n \times 2d}$. Then, \eqref{eq:bp_main} is equivalent to \eqref{prob:01} with \begin{align} f(x) =& \|x\|^2, \quad g(x) = 0,\quad A(x) = \overline{B}x^{\circ 2}- b. \label{eq:bp-equiv} \end{align} We draw the entries of $B$ independently from a zero-mean and unit-variance Gaussian distribution. For a fixed sparsity level $k$, the support of $z_*\in \mathbb{R}^d$ and its nonzero amplitudes are also drawn from the standard Gaussian distribution. Then the measurement vector is created as $b = Bz_* + \epsilon$, where $\epsilon$ is the noise vector with entries drawn independently from the zero-mean Gaussian distribution with variance $\sigma^2=10^{-6}$. The results are compiled in Figure~\ref{fig:bp1}. Clearly, the performance of Algorithm~\ref{Algo:2} with a second-order solver for BP is comparable to the rest. 
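For concreteness, the oracles of the reformulation \eqref{eq:bp-equiv}, with Jacobian $DA(x)=2\overline{B}\,\mathrm{diag}(x)$ (see \eqref{eq:jacob-bp} below), can be written in a few lines. The sketch below is our own and the identifier names are illustrative.
\begin{verbatim}
import numpy as np

def bp_oracles(B, b):
    """Oracles for (eq:bp-equiv): x = [u1; u2] in R^{2d}, f(x) = ||x||^2,
    A(x) = Bbar (x o x) - b, with Bbar = [B, -B] and o the entrywise product."""
    d = B.shape[1]
    Bbar = np.hstack([B, -B])
    f       = lambda x: x @ x
    grad_f  = lambda x: 2.0 * x
    A       = lambda x: Bbar @ (x * x) - b
    DA      = lambda x: 2.0 * Bbar * x            # equals 2 * Bbar @ diag(x)
    recover = lambda x: x[:d] ** 2 - x[d:] ** 2   # z = z+ - z-
    return f, grad_f, A, DA, recover
\end{verbatim}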
It is indeed interesting to see that this type of nonconvex relaxation recovers the solution of the convex problem, and that first-order methods succeed. \begin{figure}[] \centering {\includegraphics[width=.4\columnwidth]{figs/bp_all_obj.pdf}} {\includegraphics[width=.4\columnwidth]{figs/bp_all_feas.pdf}} \caption{Basis Pursuit} \label{fig:bp1} \end{figure} \paragraph{Discussion:} The true potential of our reformulation is in dealing with more structured norms other than $\ell_1$, where computing the proximal operator is often intractable. One such case is the latent group lasso norm~\cite{obozinski2011group}, defined as \begin{align*} \|z\|_{\Omega} = \sum_{i=1}^I \| z_{\Omega_i} \|, \end{align*} where $\{\Omega_i\}_{i=1}^I$ are (not necessarily disjoint) index sets of $\{1,\cdots,d\}$. Although not studied here, we believe that the nonconvex framework presented in this paper can serve to solve more complicated problems, such as the latent group lasso. We leave this research direction for future work. \paragraph{Condition verification:} In the sequel, we verify that Theorem~\ref{thm:main} indeed applies to~\eqref{prob:01} with the above $f,A,g$. Note that \begin{align} DA(x) = 2 \overline{B} \text{diag}(x), \label{eq:jacob-bp} \end{align} where $\text{diag}(x)\in\mathbb{R}^{2d\times 2d}$ is the diagonal matrix formed by $x$. The left-hand side of \eqref{eq:regularity} then reads as \begin{align} & \text{dist} \left( -DA(x_k)^\top A(x_k) , \frac{\partial g(x_k)}{\b_{k-1}} \right) \nonumber\\ & = \text{dist} \left( -DA(x_k)^\top A(x_k) , \{0\} \right) \qquad (g\equiv 0)\nonumber\\ & = \|DA(x_k)^\top A(x_k) \| \nonumber\\ & =2 \| \text{diag}(x_k) \overline{B}^\top ( \overline{B}x_k^{\circ 2} -b) \|. \qquad \text{(see \eqref{eq:jacob-bp})} \label{eq:cnd-bp-pre} \end{align} To bound the last line above, let $x_*$ be a solution of~\eqref{prob:01} and note that $\overline{B} x_*^{\circ 2} = b $ by definition. Let also $z_k,z_*\in \mathbb{R}^d$ denote the vectors corresponding to $x_k,x_*$. Corresponding to $x_k$, also define $u_{k,1},u_{k,2}$ naturally and let $|z_k| = u_{k,1}^{\circ 2} + u_{k,2}^{\circ 2} \in \mathbb{R}^d$ be the vector of amplitudes of $z_k$. To simplify matters, let us assume also that $B$ is full-rank. We then rewrite the norm in the last line of \eqref{eq:cnd-bp-pre} as \begin{align} & \| \text{diag}(x_k) \overline{B}^\top ( \overline{B}x_k^{\circ 2} -b) \|^2 \nonumber\\ & = \| \text{diag}(x_k) \overline{B}^\top \overline{B} (x_k^{\circ 2} -x_*^{\circ 2}) \|^2 \qquad (\overline{B} x_*^{\circ 2} = b) \nonumber\\ & = \| \text{diag}(x_k)\overline{B}^\top B (z_k - z_*) \|^2 \nonumber\\ & = \| \text{diag}(u_{k,1})B^\top B (z_k - z_*) \|^2 \nonumber\\ & \qquad + \| \text{diag}(u_{k,2})B^\top B (z_k - z_*) \|^2 \nonumber\\ & = \| \text{diag}(u_{k,1}^{\circ 2}+ u_{k,2}^{\circ 2}) B^\top B (z_k - z_*) \|^2 \nonumber\\ & = \| \text{diag}(|z_k|) B^\top B (z_k - z_*) \|^2 \nonumber\\ & \ge \eta_n ( B \text{diag}(|z_k|) )^2 \| B(z_k - z_*) \|^2 \nonumber\\ & = \eta_n ( B \text{diag}(|z_k|) )^2 \| B z_k -b \|^2 \qquad ( Bz_* = \overline{B} x^{\circ2}_* = b) \nonumber\\ & \ge \min_{|T|=n}\eta_n(B_T)\cdot |z_{k,(n)}|^2 \|Bz_k - b\|^2, \end{align} where $\eta_n(\cdot)$ returns the $n$th largest singular value of its argument. In the last line above, $B_T$ is the restriction of $B$ to the columns indexed by $T$ of size $n$. Moreover, $z_{k,(n)}$ is the $n$th largest entry of $z_k$ in magnitude. 
Given a prescribed $\nu$, \eqref{eq:regularity} therefore holds if \begin{align} |z_{k,(n)} | \ge \frac{\nu}{2 \sqrt{\min_{|T|=n} \eta_n(B_T)}} , \label{eq:final-bp-cnd} \end{align} for every iteration $k$. If Algorithm \ref{Algo:2} is initialized close enough to the solution $z_*$ and the entries of $z_*$ are sufficiently large in magnitude, there will be no need to directly enforce \eqref{eq:final-bp-cnd}. \subsection{Generalized Eigenvalue Problem}{\label{sec:geig}} \begin{figure}[!h] \begin{tabular}{l|l|l} ~~~~~~~~~(i) $C:$ Gaussian iid & ~~~~~(ii) $C:$ Polynomial decay & ~~~~~(iii) $C:$ Exponential decay \\ \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case1_convergence.pdf}& \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case2_convergence.pdf}& \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case3_convergence.pdf}\\ \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case1_eigenvalues.pdf} & \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case2_eigenvalues.pdf} & \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case3_eigenvalues.pdf} \\ \hline ~~~~~~~~~~~~~~~~~~~~(iv) & ~~~~~~~~~~~~~~~~~~~~(v) & ~~~~~~~~~~~~~~~~~~~(vi) \\ \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case4_convergence.pdf}& \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case6_convergence.pdf}& \includegraphics[width=.3\columnwidth]{figs/gen_eig/gen_eig_case7_convergence.pdf}\\ \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case4_eigenvalues.pdf} & \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case6_eigenvalues.pdf} & \includegraphics[width=.31\columnwidth]{figs/gen_eig/gen_eig_case7_eigenvalues.pdf} \end{tabular} \caption{{\it{(Top)}} Objective convergence for calculating top generalized eigenvalue and eigenvector of $B$ and $C$. {\it{(Bottom)}} Eigenvalue structure of the matrices. For (i),(ii) and (iii), $C$ is positive semidefinite; for (iv), (v) and (vi), $C$ contains negative eigenvalues. {[(i): Generated by taking symmetric part of iid Gaussian matrix. (ii): Generated by randomly rotating diag($1^{-p}, 2^{-p}, \cdots, 1000^{-p}$)($p=1$). (iii): Generated by randomly rotating diag($10^{-p}, 10^{-2p}, \cdots, 10^{-1000p}$)($p=0.0025$).]} } \label{fig:geig1} \end{figure} The generalized eigenvalue problem has extensive applications in machine learning, statistics and data analysis~\cite{ge2016efficient}. The well-known nonconvex formulation of the problem~\cite{boumal2016non} is given by \begin{align} \begin{cases} \underset{x\in\mathbb{R}^n}{\min} x^\top C x \\ x^\top B x = 1, \end{cases} \label{eq:eig} \end{align} where $B, C \in \mathbb{R}^{n \times n}$ are symmetric matrices and $B$ is positive definite, namely, $B \succ 0$. The generalized eigenvector computation is equivalent to performing principal component analysis (PCA) of $C$ in the norm induced by $B$. It is also equivalent to computing the top eigenvector of the symmetric matrix $S = B^{-1/2}CB^{-1/2}$ and multiplying the resulting vector by $B^{-1/2}$. However, for large values of $n$, computing $B^{-1/2}$ is extremely expensive. The natural convex SDP relaxation for~\eqref{eq:eig} involves lifting $Y = xx^\top$ and removing the nonconvex rank$(Y) = 1$ constraint, namely, \begin{align} \begin{cases} \underset{Y \in \mathbb{R}^{n \times n}}{\min} \text{tr}(CY)\\ \text{tr}(BY) = 1, \quad Y \succeq 0. 
\end{cases} \label{eq:eig-sdp} \end{align} Here, however, we opt to directly solve~\eqref{eq:eig} because it fits into our template with \begin{align} f(x) =& x^\top C x, \quad g(x) = 0,\nonumber\\ A(x) =& x^\top B x - 1. \label{eq:eig-equiv} \end{align} We compare our approach against three different methods: the manifold-based Riemannian gradient descent and Riemannian trust region methods in \cite{boumal2016global}, and the linear system solver in~\cite{ge2016efficient}, abbreviated as GenELin. We have used the Manopt software package \cite{manopt} for the manifold-based methods. For GenELin, we have utilized Matlab's backslash operator as the linear solver. The results are compiled in Figure \ref{fig:geig1}. \paragraph{Condition verification:} Here, we verify the regularity condition in \eqref{eq:regularity} for problem \eqref{eq:eig}. Note that \begin{align} DA(x) = (2Bx)^\top. \label{eq:jacobian-gen-eval} \end{align} Therefore, \begin{align} \operatorname{dist}\left( -DA(x_k)^\top A(x_k) , \frac{\partial g(x_k)}{ \b_{k-1}} \right)^2 & = \operatorname{dist}\left( -DA(x_k)^\top A(x_k) , \{0\} \right)^2 \qquad (g\equiv 0) \nonumber\\ & = \| DA(x_k)^\top A(x_k) \|^2 \nonumber\\ & = \|2Bx_k (x_k^\top Bx_k - 1)\|^2 \qquad \text{(see \eqref{eq:jacobian-gen-eval})} \nonumber\\ & = 4 (x_k^\top Bx_k - 1)^2\|Bx_k\|^2 \nonumber\\ & = 4\|Bx_k\|^2 \|A(x_k)\|^2 \qquad \text{(see \eqref{eq:eig-equiv})} \nonumber \\ & \ge \eta_{\min}(B)^2\|x_k\|^2 \|A(x_k)\|^2, \end{align} where $\eta_{\min}(B)$ is the smallest eigenvalue of the positive definite matrix $B$. Therefore, for a prescribed $\nu$, the regularity condition in \eqref{eq:regularity} holds with $\|x_k\| \ge \nu /\eta_{\min}(B)$ for every $k$. If the algorithm is initialized close enough to the constraint set, there will be again no need to directly enforce this latter condition. \subsection{$\ell_\infty$ Denoising with a Generative Prior}{\label{sec:gan}} The authors of \cite{Samangouei2018,Ilyas2017} have proposed to project onto the range of a Generative Adversarial network (GAN) \cite{Goodfellow2014}, as a way to defend against adversarial examples. For a given noisy observation $x^* + \eta$, they consider a projection in the $\ell_2$ norm. We instead propose to use our augmented Lagrangian method to denoise in the $\ell_\infty$ norm, a much harder task: \begin{align} \begin{array}{lll} \underset{x, z}{\text{min}} & & \|x^* + \eta - x\|_\infty \\ \text{s.t. } && x=G(z). \end{array} \end{align} \begin{figure}[!h] \begin{center} {\includegraphics[scale=0.9]{figs/example_denoising_fab.pdf}} \caption{Augmented Lagrangian vs Adam and Gradient descent for $\ell_\infty$ denoising} \label{fig:comparison_fab} \end{center} \end{figure} We use a pretrained generator for the MNIST dataset, given by a standard deconvolutional neural network architecture \cite{Radford2015}. We compare the successful optimizer Adam \cite{Kingma2014} and gradient descent against our method. Our algorithm involves two forward and one backward pass through the network, as opposed to Adam, which requires only one forward/backward pass. For this reason we let our algorithm run for 2000 iterations, and Adam and GD for 3000 iterations. Both Adam and gradient descent generate a sequence of feasible iterates $x_t=G(z_t)$. For this reason we plot the objective evaluated at the point $G(z_t)$ versus the iteration count in Figure~\ref{fig:comparison_fab}. Our method successfully minimizes the objective value, while Adam and GD do not. 
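For illustration only (this is our own sketch, not the exact experimental setup): with a differentiable generator, one concrete augmented Lagrangian realization alternates (sub)gradient steps on $(x,z)$ with dual ascent on the multiplier of the constraint $x=G(z)$. Below, a linear map stands in for the generator; a real GAN would supply $G$ and its Jacobian-transpose products via automatic differentiation.
\begin{verbatim}
import numpy as np

def linf_denoise(y_obs, G, Gt, z0, beta=10.0, sigma=1.0,
                 inner=200, outer=20, lr=1e-2):
    """min_{x,z} ||y_obs - x||_inf  s.t.  x = G(z), via augmented Lagrangian.
    Gt(v) must return the Jacobian-transpose product of G applied to v."""
    z, x, lam = z0.copy(), G(z0), np.zeros_like(y_obs)
    for _ in range(outer):
        for _ in range(inner):                 # inexact primal (sub)gradient steps
            r = y_obs - x
            i = int(np.argmax(np.abs(r)))
            g_obj = np.zeros_like(x)
            g_obj[i] = -np.sign(r[i])          # subgradient of ||y_obs - x||_inf
            resid = lam + beta * (x - G(z))
            x = x - lr * (g_obj + resid)
            z = z - lr * (-Gt(resid))
        lam = lam + sigma * (x - G(z))         # dual ascent
    return x, z

# Toy usage with a random linear "generator" W z (a stand-in for a GAN):
rng = np.random.default_rng(0)
W = rng.standard_normal((30, 10))
y_obs = W @ rng.standard_normal(10) + 0.1 * rng.standard_normal(30)
x_hat, z_hat = linf_denoise(y_obs, lambda z: W @ z, lambda v: W.T @ v,
                            np.zeros(10))
\end{verbatim}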
\subsection{Quadratic assignment problem}{\label{sec:qap}} Let $K$, $L$ be $n \times n$ symmetric matrices. QAP in its simplest form can be written as \begin{align} \max \text{tr}(KPLP), \,\,\,\,\,\,\, \text{subject to}\,\, P \,\,\text{being a permutation matrix}. \label{eq:qap1} \end{align} A direct approach for solving \eqref{eq:qap1} involves a combinatorial search. To get the SDP relaxation of \eqref{eq:qap1}, we will first lift the QAP to a problem involving a larger matrix. Observe that the objective function takes the form \begin{align*} \text{tr}((K\otimes L)\, \text{vec}(P)\text{vec}(P)^\top), \end{align*} where $\otimes$ denotes the Kronecker product. Therefore, we can recast \eqref{eq:qap1} as \begin{align} \text{tr}((K\otimes L) Y) \,\,\,\,\text{subject to} \,\,\, Y = \text{vec}(P)\text{vec}(P)^\top, \label{eq:qapkron} \end{align} where $P$ is a permutation matrix. We can relax the equality constraint in \eqref{eq:qapkron} to a semidefinite constraint and write it in an equivalent form as \begin{align*} X = \begin{bmatrix} 1 & \text{vec}(P)^\top\\ \text{vec}(P) & Y \end{bmatrix} \succeq 0 \,\,\, \text{for a symmetric } X \in \mathbb{S}^{(n^2+1) \times (n^2+1)}. \end{align*} We now introduce the following constraints such that \begin{equation} B_k(X) = {\bf{b_k}}, \,\,\,\, {\bf b_k} \in \mathbb{R}^{m_k}\,\, \label{eq:qapcons} \end{equation} to ensure that $X$ has the proper structure. Here, $B_k$ is a linear operator on $X$ and the total number of constraints is $m = \sum_{k} m_k.$ Hence, the SDP relaxation of the quadratic assignment problem takes the form \begin{align} \max\,\,\, & \langle C, X \rangle \nonumber \\ \text{subject to} \,\,\,& P1 = 1, \,\, 1^\top P = 1,\,\, P\geq 0 \nonumber \\ & \text{trace}_1(Y) = I, \,\,\, \text{trace}_2(Y) = I \nonumber \\ & \text{vec}(P) = \text{diag}(Y) \nonumber \\ & \text{trace}(Y) = n, \,\, \begin{bmatrix} 1 & \text{vec}(P)^\top\\ \text{vec}(P) & Y \end{bmatrix} \succeq 0, \label{eq:qap_sdp} \end{align} where $\text{trace}_1(.)$ and $\text{trace}_2(.)$ are partial traces satisfying $$ \text{trace}_1(K \otimes L) = \text{trace}(K)L \,\,\,\,\,\, \text{and} \,\,\,\,\, \text{trace}_2(K\otimes L)= K \text{trace}(L) $$ $$ \text{trace}_1^*(T) = I \otimes T \,\,\,\,\,\, \text{and} \,\,\,\,\, \text{trace}_2^*(T)= T \otimes I $$ The first set of equalities is due to the fact that permutation matrices are doubly stochastic. The second set of equalities ensures that permutation matrices are orthogonal, i.e., $PP^\top = P^\top P=I$. The third set of equalities enforces that every individual entry of the permutation matrix takes either the value $0$ or $1$, i.e., $X_{1, i} = X_{i,i}$ for all $i \in [1, n^2+1]$. The trace constraint in the last line bounds the problem domain. By concatenating the $B_k$'s in \eqref{eq:qapcons}, we can rewrite \eqref{eq:qap_sdp} in standard SDP form as \begin{align} \max\,\,\, & \langle C, X \rangle \nonumber \\ \text{subject to} \,\,\,& B(X) = {\bf{b},\,\,\, b} \in \mathbb{R}^m \nonumber \\ & \text{trace}(X) = n+1 \nonumber \\ & X_{ij} \geq 0, \,\,\,\, (i,j) \in \mathcal{G}\nonumber \\ & X\succeq0, \end{align} where $\mathcal{G}$ represents the index set for which we introduce the nonnegativities. When $\mathcal{G}$ covers the whole set of indices, we get the best approximation to the original problem. However, it becomes computationally undesirable as the problem dimension increases. 
Hence, we remove the redundant nonnegativity constraints and enforce them only for the indices where the Kronecker product between $K$ and $L$ is nonzero. We penalize the nonnegativity constraints and add them to the augmented Lagrangian objective, since the projection-onto-the-positive-orthant approach in the low-rank space, which we used for clustering, does not work here. We take \cite{ferreira2018semidefinite} as the baseline; this is an SDP-based approach for solving QAP problems containing a sparse graph. We compare against the best feasible upper bounds reported in \cite{ferreira2018semidefinite} for the given instances. Here, the optimality gap is defined as $$ \% \text{Gap} = \frac{|\text{bound} - \text{optimal}|}{\text{optimal}} \times 100 $$ We used a (relatively) sparse graph data set from the QAP library. We ran our low-rank algorithm for different rank values. $r_m$ in each instance corresponds to the smallest integer satisfying the Pataki bound~\cite{pataki1998rank, barvinok1995problems}. Results are shown in Table \ref{tb:qap}. The primal feasibility values, except for the last instance $esc128$, are less than $10^{-5}$, and we obtained bounds at least as good as the ones reported in \cite{ferreira2018semidefinite} for these problems. For $esc128$, the primal feasibility is $\approx 10^{-1}$; hence, we could not obtain a good optimality gap. \begin{table}[] \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \multicolumn{1}{l|}{} & \multicolumn{6}{c|}{Optimality Gap ($\%$)} \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Data}} & \multicolumn{1}{c|}{\multirow{2}{*}{Optimal Value}} & \multicolumn{1}{c|}{\multirow{2}{*}{Sparse QAP~\cite{ferreira2018semidefinite}}} & \multicolumn{5}{c|}{iAL} \\ \cline{4-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & $r=10$ & $r=25$ & $r=50$ & $r=r_m$ & $r_m$ \\ \hline esc16a & 68 & 8.8 & 11.8 & $\mathbf{0}$ & $\mathbf{0}$ & 5.9 & 157 \\ \hline esc16b & 292 & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 224 \\ \hline esc16c & 160 & 5 & 5.0 & 5.0 & $\mathbf{2.5}$ & 3.8 & 177 \\ \hline esc16d & 16 & 12.5 & 37.5 & $\mathbf{0}$ & $\mathbf{0}$ & 25.0 & 126 \\ \hline esc16e & 28 & 7.1 & 7.1 & $\mathbf{0}$ & 14.3 & 7.1 & 126 \\ \hline esc16g & 26 & $\mathbf{0}$ & 23.1 & 7.7 & $\mathbf{0}$ & $\mathbf{0}$ & 126 \\ \hline esc16h & 996 & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 224 \\ \hline esc16i & 14 & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 14.3 & $\mathbf{0}$ & 113 \\ \hline esc16j & 8 & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 106 \\ \hline esc32a & 130 & 93.8 & 129.2 & 109.2 & 104.6 &$\mathbf{83.1}$ & 433 \\ \hline esc32b & 168 & 88.1 & 111.9 & 92.9 & 97.6 & $\mathbf{69.0}$ & 508 \\ \hline esc32c & 642 & 7.8 & 15.6 & 14.0 & 15.0 & $\mathbf{4.0}$ & 552 \\ \hline esc32d & 200 & 21 & 28.0 & 28.0 & 29.0 &$\mathbf{17.0}$ & 470 \\ \hline esc32e & 2 & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 220 \\ \hline esc32g & 6 & $\mathbf{0}$ & 33.3 &$\mathbf{0}$ & $\mathbf{0}$ & $\mathbf{0}$ & 234 \\ \hline esc32h & 438 & 18.3 & 25.1 & 19.6 & 25.1 & $\mathbf{13.2}$ & 570 \\ \hline esc64a & 116 & 53.4 & 62.1 & 51.7 & 58.6 & $\mathbf{34.5}$ & 899 \\ \hline esc128 & 64 & $\mathbf{175}$ & 256.3 & 193.8 & 243.8 & 215.6 & 2045 \\ \hline \end{tabular} \caption{Comparison between upper bounds on the problems from the QAP library with (relatively) sparse $L$.} \label{tb:qap} \end{table}
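As an implementation note (our own sketch, with hypothetical identifiers): the partial traces $\text{trace}_1$ and $\text{trace}_2$ appearing in \eqref{eq:qap_sdp} act on the $n^2\times n^2$ block $Y$ and can be realized with a reshape, which also lets one verify the defining identities numerically.
\begin{verbatim}
import numpy as np

def trace1(Y, n):
    """trace_1: contract the first Kronecker factor, so that
    trace_1(K kron L) = trace(K) * L."""
    T = Y.reshape(n, n, n, n)        # T[i, k, j, l] = Y[i*n + k, j*n + l]
    return np.einsum('ikil->kl', T)

def trace2(Y, n):
    """trace_2: contract the second factor, so trace_2(K kron L) = trace(L) * K."""
    T = Y.reshape(n, n, n, n)
    return np.einsum('ikjk->ij', T)

# Sanity check on a Kronecker product:
rng = np.random.default_rng(0)
n = 4
K, L = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Y = np.kron(K, L)
assert np.allclose(trace1(Y, n), np.trace(K) * L)
assert np.allclose(trace2(Y, n), np.trace(L) * K)
\end{verbatim}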
\section{Introduction} Zero-knowledge (ZK for short) proofs, proofs that reveal nothing but the validity of the assertion, were put forward in the seminal paper of Goldwasser, Micali and Rackoff \cite{GMR}. Since their introduction, and especially after the generality demonstrated in \cite{GMW}, ZK proofs have become a fundamental tool in the design of cryptographic protocols. In recent years, research has moved towards extending this security to cope with more malicious communication environments. In particular, Dwork et al. \cite{DNS} introduced the concept of concurrent zero knowledge, and initiated the study of the effect of executing ZK proofs concurrently in realistic asynchronous networks like the Internet. Though concurrent zero-knowledge protocols have wide applications, they unfortunately require a logarithmic number of rounds for languages outside $\mathcal{BPP}$ in the plain model in the black-box case \cite{CKPR}, and are therefore round-inefficient. In the Common Reference String model, Damgaard \cite{D1} showed that 3-round concurrent zero knowledge can be achieved efficiently. Surprisingly, using non-black-box techniques, Barak \cite{B} constructed a constant-round non-black-box bounded-concurrent zero-knowledge protocol, though it is very inefficient. Motivated by applications in which the prover (such as the user of a smart card) may encounter resetting attacks, Canetti et al. \cite{CGGM} introduced the notion of resettable zero knowledge (rZK for short). rZK formalizes security in a scenario in which the verifier is allowed to reset the prover in the middle of a proof to any previous stage. Obviously the notion of resettable zero knowledge is stronger than that of concurrent zero knowledge, and therefore we cannot construct a constant-round black-box rZK protocol in the plain model for non-trivial languages. To get constant-round rZK, the work \cite{CGGM} also introduced a very attractive model, the bare public-key (BPK) model. In this model, each verifier deposits a public key $pk$ in a public file and stores the associated secret key $sk$ before any interaction with the prover begins. Note that no protocol needs to be run to publish $pk$, and no authority needs to check any property of $pk$. Consequently the BPK model is considered a very weak set-up assumption compared to previous models such as the common reference string model and the PKI model. However, as Micali and Reyzin \cite{MR1} pointed out, the notion of soundness in this model is more subtle. There are four distinct notions of soundness: one-time, sequential, concurrent and resettable soundness, each of which implies the previous one. Moreover, they also pointed out that there is NO black-box rZK protocol satisfying resettable soundness for non-trivial languages, and that the original rZK arguments in the BPK model of \cite{CGGM} do not seem to be concurrently sound. The 4-round (optimal) rZK argument with concurrent soundness in the bare public-key model was proposed by Di Crescenzo et al. in \cite{DPV2} and also appeared in \cite{YZ}. All the above rZK arguments in the BPK model need cryptographic primitives secure against sub-exponential-time adversaries, which is not a standard assumption in cryptography. Using non-black-box techniques, Barak et al.
obtained a constant-round rZK argument of knowledge assuming only collision-free hash functions secure against superpolynomial-time algorithms\footnote{Using an idea from \cite{BG}, this result also holds under the standard assumption that there exist hash functions that are collision-resistant against all polynomial-time adversaries.}, but their protocol enjoys only sequential soundness. The existence of constant-round rZK arguments with concurrent soundness in the BPK model under only polynomial-time hardness assumptions is an interesting open problem. \bigskip\noindent\textbf{Our results.} In this paper we resolve the above open problem by presenting a constant-round rZK argument with concurrent soundness in the BPK model for $\mathcal{NP}$ under the standard assumption that there exist hash functions collision-resistant against \emph{polynomial-time} adversaries. We note that our protocol is an argument of knowledge and therefore the non-black-box technique is inherently used. In our protocol, we use the resettably-sound non-black-box zero-knowledge argument as a building block in a manner different from that in \cite{BGGL}: instead of using it for the verifier to prove knowledge of its secret key, the verifier uses it in order to prove that a challenge matches the one he committed to in a previous step. This difference is crucial in the concurrent soundness analysis of our protocol: we just need to simulate \emph{only one execution} among all concurrent executions of the resettably-sound zero-knowledge argument in order to justify concurrent soundness, instead of simulating all these concurrent executions. \section{Preliminaries} In this section we recall some definitions and tools that will be used later. In the following we say that a function $f(n)$ is negligible if for every polynomial $q(n)$ there exists an $N$ such that for all $n\geq N$, $f(n)\leq 1/q(n)$. We denote by $\delta\leftarrow_{\small{R}}\Delta$ the process of picking a random element $\delta$ from $\Delta$. \bigskip\noindent\textbf{The BPK Model.} The bare public-key model (BPK model) assumes that: \begin{itemize} \item A public file $F$ that is a collection of records, each containing a verifier's public key, is available to the prover. \item An (honest) prover $P$ is an interactive deterministic polynomial-time algorithm that is given as inputs a security parameter $1^n$, an $n$-bit string $x\in L$, an auxiliary input $y$, a public file $F$ and a random tape $r$. \item An (honest) verifier $V$ is an interactive deterministic polynomial-time algorithm that works in two stages. In stage one, on input a security parameter $1^n$ and a random tape $w$, $V$ generates a key pair $(pk, sk)$ and stores $pk$ in the file $F$. In stage two, on input $sk$, an $n$-bit string $x$ and a random string $w$, $V$ performs the interactive protocol with a prover, and outputs ``accept $x$'' or ``reject $x$''. \end{itemize} \begin{definition} We say that the protocol $<P,V>$ is complete for a language $L$ in $\mathcal{NP}$ if, for all $n$-bit strings $x\in L$ and any witness $y$ such that $(x,y)\in R_L$, where $R_L$ is the relation induced by $L$, the probability that $V$, interacting with $P$ on input $y$, outputs ``reject $x$'' is negligible in $n$. \end{definition} \textbf{Malicious provers and their attacks in the BPK model}. Let $s$ be a positive polynomial and $P^*$ be a probabilistic polynomial-time algorithm on input $1^n$.
$P^*$ is an \emph{$s$-concurrent malicious} prover if, on input a public key $pk$ of $V$, it performs at most $s$ interactive protocols as follows: 1) if $P^*$ is already running $i-1$ interactive protocols, $1\leq i-1\leq s$, it can output a special message ``Starting $x_i$'' to start a new protocol with $V$ on a new statement $x_i$; 2) at any point it can output a message for any of its interactive protocols, then immediately receives the verifier's response and continues. A concurrent attack of an \emph{$s$-concurrent malicious} prover $P^*$ is executed in this way: 1) $V$ runs on input $1^n$ and a random string and then obtains the key pair $(pk,sk)$; 2) $P^*$ runs on input $1^n$ and $pk$. Whenever $P^*$ starts a new protocol choosing a statement, $V$ is run on input the new statement, a new random string and $sk$. \begin{definition} $<P,V>$ satisfies \emph{concurrent soundness} for a language $L$ if, for all positive polynomials $s$ and all \emph{$s$-concurrent malicious} provers $P^*$, the probability that in an execution of a concurrent attack $V$ ever outputs ``accept $x$'' for $x\notin L$ is negligible in $n$. \end{definition} The notion of resettable zero knowledge was first introduced in \cite{CGGM}. The notion gives a verifier the ability to rewind the prover to a previous state (after rewinding the prover uses the same random bits), and the \emph{malicious} verifier can generate an arbitrary file $F$ with several entries, each of which contains a public key generated by the malicious verifier. We refer readers to that paper for the intuition behind the notion. Here we just give the definition. \begin{definition} An interactive argument system $<P,V>$ in the BPK model is black-box resettable zero-knowledge if there exists a probabilistic polynomial-time algorithm $S$ such that for any probabilistic polynomial-time algorithm $V^*$, for any polynomials $s$, $t$, for any $x_i\in L$ with $|x_i| = n$, $i=1,...,s(n)$, where $V^*$ runs in at most $t$ steps, the following two distributions are indistinguishable: \begin{enumerate} \item the view of $V^*$ that generates $F$ with $s(n)$ entries and interacts (even concurrently) a polynomial number of times with each $P(x_i,y_i,j,r_k,F)$, where $y_i$ is a witness for $x_i\in L$, $r_k$ is a random tape and $j$ is the identity of the session being executed at present, for $1\leq i,j,k \leq s(n)$; \item the output of $S$ interacting with $V^*$ on input $x_1,...,x_{s(n)}$. \end{enumerate} \end{definition} \bigskip\noindent\textbf{$\Sigma\textbf{-protocols}$.} A protocol $<P,V>$ is said to be a $\Sigma$-protocol for a relation $R$ if it is of 3-move form and satisfies the following conditions: \begin{enumerate} \item \emph{Completeness}: for all $(x,y)\in R$, if $P$ has the witness $y$ and follows the protocol, the verifier always accepts. \item \emph{Special soundness}: Let $(a,e,z)$ be the three messages exchanged by the prover $P$ and the verifier $V$. From any statement $x$ and any pair of accepting transcripts $(a,e,z)$ and $(a,e',z')$ where $e\neq e'$, one can efficiently compute $y$ such that $(x,y)\in R$. \item \emph{Special honest-verifier ZK}: There exists a polynomial-time simulator $M$, which on input $x$ and a random $e$ outputs an accepting transcript of the form $(a,e,z)$ with the same probability distribution as a transcript between the honest $P$ and $V$ on input $x$. \end{enumerate} Many known efficient protocols, such as those in \cite{GQ} and \cite{S}, are $\Sigma$-protocols.
Furthermore, there is a $\Sigma$-protocol for the language of Hamiltonian graphs \cite{B}, assuming that one-way permutation families exist; if the commitment scheme used by the protocol in \cite{B} is implemented using the scheme in \cite{N} from any pseudorandom generator family, then the assumption can be reduced to the existence of one-way function families, at the cost of adding one preliminary message from the verifier. Note that adding one message does not affect the properties of $\Sigma$-protocols: assuming the new protocol is of the form $(f,a,e,z)$, given the challenge $e$ it is easy to indistinguishably generate a real transcript of the form $(f,a,e,z)$; and given two accepting transcripts $(f,a,e,z)$ and $(f,a,e',z')$, where $e\neq e'$, we can extract a witness easily. We can thus claim that any language in $\mathcal{NP}$ admits a 4-round $\Sigma$-protocol under the existence of any one-way function family (or under an appropriate number-theoretic assumption), or a $\Sigma$-protocol under the existence of any one-way permutation family. Though the following OR-proof refers only to 3-round $\Sigma$-protocols, readers should keep in mind that the way to construct the OR-proof also applies to 4-round $\Sigma$-protocols. Interestingly, $\Sigma$-protocols can be composed to prove the OR of atomic statements, as shown in \cite{DDPY,CDS}. Specifically, given two protocols $\Sigma_0$, $\Sigma_1$ for two relations $R_0$, $R_1$, respectively, we can efficiently construct a $\Sigma_{OR}$-protocol for the relation $R_{OR}=\{((x_0,x_1),y): (x_0,y)\in R_0 \ \text{or} \ (x_1,y)\in R_1\}$, as follows. Let $(x_b,y)\in R_b$, where $y$ is the private input of $P$. $P$ computes $a_b$ according to the protocol $\Sigma_b$ using $(x_b, y)$. $P$ chooses $e_{1-b}$, feeds the simulator $M$ guaranteed by $\Sigma_{1-b}$ with $e_{1-b}$, $x_{1-b}$, runs it and gets the output $(a_{1-b},e_{1-b},z_{1-b})$. $P$ sends $a_b$, $a_{1-b}$ to $V$ in the first step. In the second step, $V$ picks $e\leftarrow_{\small{R}} \mathbb{Z}_q$ and sends it to $P$. Last, $P$ sets $e_b=e\oplus e_{1-b}$, and computes the last message $z_b$ to the challenge $e_b$ using $x_b$, $y$ as witness according to the protocol $\Sigma_b$. $P$ sends $(e_b, z_b)$ and $(e_{1-b}, z_{1-b})$ to $V$. $V$ checks that $e=e_b\oplus e_{1-b}$ and that the two transcripts $(a_b, e_b, z_b)$ and $(a_{1-b}, e_{1-b}, z_{1-b})$ are accepting. The resulting protocol turns out to be witness indistinguishable: the verifier cannot tell from the transcript of a session which witness the prover used. In our rZK argument, the verifier uses a 3-round witness-indistinguishable proof of knowledge to prove knowledge of one of the two secret keys associated with his public key. As required in \cite{DV}, we need a \emph{partial-witness-independence} property from the above proof of knowledge: the message sent in its first round should have a distribution independent of any witness for the statement to be proved. We can obtain such a protocol using \cite{S,DDPY}.
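For concreteness, the following toy sketch (ours; the group parameters, challenge length and variable names are illustrative assumptions, not part of the protocol specification above) instantiates the $\Sigma_{OR}$ composition with Schnorr's protocol for knowledge of one of two discrete logarithms.
\begin{verbatim}
import secrets

# Toy parameters: p = 2q+1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4
T = 8                                 # challenge bit-length (toy value)

def keygen():
    w = secrets.randbelow(q - 1) + 1  # witness (a discrete log)
    return w, pow(g, w, p)

w_b, y_known = keygen()               # P knows the witness for y_known
_, y_other = keygen()                 # ... but not for y_other
b = 0
y = [y_known, y_other]                # statements (y_0, y_1), with b = 0

# Step 1: simulate branch 1-b with the HVZK simulator M, run branch b honestly.
e_sim = secrets.randbits(T)
z_sim = secrets.randbelow(q)
a_sim = (pow(g, z_sim, p) * pow(y[1 - b], -e_sim, p)) % p
r = secrets.randbelow(q)
a = [pow(g, r, p), a_sim]             # first messages (a_0, a_1)

# Step 2: verifier's challenge.
e = secrets.randbits(T)

# Step 3: split the challenge and answer the real branch.
e_b = e ^ e_sim
z_b = (r + e_b * w_b) % q
es, zs = [e_b, e_sim], [z_b, z_sim]

# Verifier: the challenges XOR to e and both transcripts accept.
assert e == es[0] ^ es[1]
for i in range(2):
    assert pow(g, zs[i], p) == (a[i] * pow(y[i], es[i], p)) % p
\end{verbatim}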
\bigskip\noindent\textbf{Commitment scheme.} A commitment scheme is a two-phase (committing phase and opening phase) two-party (a sender $S$ and a receiver $R$) protocol which has the following properties: 1) hiding: two commitments (here we view a commitment as a variable indexed by the value that the sender committed to) are computationally indistinguishable for every probabilistic polynomial-time (possibly malicious) $R^*$; 2) binding: after sending the commitment to a value $m$, any probabilistic polynomial-time (possibly malicious) sender $S^*$ cannot open this commitment to another value $m'\neq m$ except with negligible probability. Under the assumption of the existence of any one-way function family (using the scheme from \cite{N} and the result from \cite{HILL}) or under number-theoretic assumptions (e.g., the scheme from \cite{P}), we can construct a scheme in which the first phase consists of 2 messages. Assuming the existence of one-way permutation families, a well-known non-interactive (in the committing phase) construction of a commitment scheme can be given (see, e.g., \cite{G}). \emph{A statistically-binding commitment scheme (with computational hiding)} is a commitment scheme with a stronger requirement on the binding property: even an all-powerful sender $S^*$ (without running-time restriction) cannot open a valid commitment to two different values except with exponentially small probability. We refer readers to \cite{G,N} for the details of constructing statistically-binding commitments. \emph{A perfect-hiding commitment scheme (with computational binding)} is one with a stronger requirement on the hiding property: the distributions of the commitments are indistinguishable even for an all-powerful receiver $R^*$. As far as we know, all perfect-hiding commitment schemes require interaction in the committing phase (see also \cite{P,NOVM}). \begin{definition} \cite{G}. Let $d,r: N\rightarrow N$. We say that \[\{f_s: \{0,1\}^{d(|s|)}\rightarrow \{0,1\}^{r(|s|)}\}_{s \in \{0,1\}^*}\] is a pseudorandom function ensemble if the following two conditions hold: \begin{enumerate} \item Efficient evaluation: There exists a polynomial-time algorithm that on input $s$ and $x\in\{0,1\}^{d(|s|)}$ returns $f_s(x)$; \item Pseudorandomness: for every probabilistic polynomial-time oracle machine $M$, every polynomial $p(\cdot)$, and all sufficiently large $n$'s, \[|\Pr[M^{F_n}(1^n)=1]-\Pr[M^{H_n}(1^n)=1]|<1/p(n),\] where $F_n$ is a random variable uniformly distributed over the multi-set $\{f_s\}_{s \in \{0,1\}^n}$, and $H_n$ is uniformly distributed among all functions mapping $d(n)$-bit-long strings to $r(n)$-bit-long strings. \end{enumerate} \end{definition} \section{A Simple Observation on Resettably-sound Zero Knowledge Arguments} A resettably-sound zero-knowledge argument is a zero-knowledge argument with a stronger soundness property: for every probabilistic polynomial-time prover $P^*$, even if $P^*$ is allowed to reset the verifier $V$ to a previous state (after resetting, the verifier $V$ uses the same random tape), the probability that $P^*$ makes $V$ accept a false statement $x\notin L$ is negligible. In \cite{BGGL}, Barak et al. transform a constant-round public-coin zero-knowledge argument $<P,V>$ for an $\mathcal{NP}$ language $L$ into a constant-round resettably-sound zero-knowledge argument $<P,W>$ for $L$ as follows: equip $W$ with a collection of pseudorandom functions, and let $W$ emulate $V$ except that it generates each round's message by applying a pseudorandom function to the transcript so far.
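The mechanics of this transformation can be sketched as follows (our illustration; HMAC-SHA256 merely stands in for a pseudorandom function family):
\begin{verbatim}
import hmac, hashlib, secrets

seed = secrets.token_bytes(32)   # W's PRF key (HMAC-SHA256 stands in
                                 # for the pseudorandom function family)

def verifier_message(transcript: bytes, nbytes: int = 32) -> bytes:
    # W's "public coins" are a deterministic function of the transcript,
    # so resetting W and replaying the same prover messages yields the
    # same verifier messages.
    return hmac.new(seed, transcript, hashlib.sha256).digest()[:nbytes]

t = b"round 1: prover message"
assert verifier_message(t) == verifier_message(t)
\end{verbatim}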
We will use a resettably-sound zero-knowledge argument as a building block in which the verifier proves to the prover that a challenge matches the one that he committed to in a previous stage. The simulation of such sub-protocols plays an important role in our security reduction, but there is a subtlety in the simulation itself. In the scenario considered in this paper, in which the prover (i.e., the verifier of the underlying sub-protocol) can interact with many copies of the verifier and schedule all sessions at its wish, the simulation seems problematic, because we do not know how to simulate all the concurrent executions of Barak's protocol described below\footnote{Barak also presented a constant-round bounded-concurrent ZK argument; hence we can obtain a constant-round resettably-sound bounded-concurrent ZK argument by applying the same transformation technique to the bounded-concurrent ZK argument. We stress that in this paper we do not require the bounded-concurrent zero-knowledge property to hold for the resettably-sound ZK argument.} (and therefore of the resettably-sound zero-knowledge argument). Fortunately, however, it is not necessary to simulate all the concurrent executions of the underlying resettably-sound zero-knowledge argument. Indeed, in order to justify concurrent soundness, we just need to simulate \emph{only one execution} among all concurrent executions of the resettably-sound zero-knowledge argument. We call this property \emph{one-many simulatability}. We note that Pass and Rosen \cite{PR} made a similar observation (in a different context) that enables the analysis of the concurrent non-malleability of their commitment scheme. Now we recall Barak's constant-round public-coin zero-knowledge argument \cite{B}, show that this protocol satisfies \emph{one-many simulatability}, and conclude that so does the resettably-sound zero-knowledge argument transformed from it. Informally, Barak's protocol for an $\mathcal{NP}$ language $L$ consists of two subprotocols: a general protocol and a WI universal argument. A real execution of the general protocol generates an instance that is unlikely to lie in a certain properly defined language, and in the WI universal argument the prover proves that the statement $x\in L$ holds or the instance generated above is in the properly defined language. Let $n$ be the security parameter, let $\{\mathcal{H}_n\}_{n\in \mathbb{N}}$ be a collection of hash functions where a hash function $h\in \mathcal{H}_n$ maps $\{0,1\}^*$ to $\{0,1\}^n$, and let $\textsf{C}$ be a statistically-binding commitment scheme. We define a language $\Lambda$ as follows.
We say a triplet $(h,c,r)\in \mathcal{H}_n\times \{0,1\}^n\times \{0,1\}^n$ is in $\Lambda$ if there exist a program $\Pi$ and a string $s\in \{0,1\}^{poly(n)}$ such that $c=\textsf{C}(h(\Pi),s)$ and $\Pi(c)=r$ within superpolynomial time (i.e., $n^{\omega(1)}$).\\ \noindent\textbf{Barak's Protocol} \cite{B}\\ \textbf{Common input:} an instance $x\in L$ ($|x|=n$)\\ \textbf{Prover's private input:} a witness $w$ such that $(x,w)\in R_L$\\ $V\rightarrow P$: Send $h\leftarrow_{\small{R}}\mathcal{H}_n$;\\ $P\rightarrow V$: Pick $s\leftarrow_{\small{R}}\{0,1\}^{poly(n)}$ and send $c=\textsf{C}(h(0^{3n}),s)$;\\ $V\rightarrow P$: Send $r\leftarrow_{\small{R}}\{0,1\}^n$;\\ $P\Leftrightarrow V$: A WI universal argument in which $P$ proves $x\in L$ or $(h,c,r)\in \Lambda$. \medskip\noindent\textbf{Fact 1.} Barak's protocol enjoys \emph{one-many simulatability}. That is, for every malicious probabilistic polynomial-time algorithm $V^*$ that interacts with (arbitrary) polynomially many, say $s$, copies of $P$ on true statements $\{x_i\}$, $1\leq i\leq s$, and for every $j\in \{1,2,...,s\}$, there exists a probabilistic polynomial-time algorithm $\textsf{S}$, taking as input $V^*$ and all witnesses but the one for $x_j$, such that the output of $\textsf{S}(V^*,\{(x_i,w_i)\}_{1\leq i\leq s, i\neq j},x_j)$ (where $(x_i,w_i)\in R_L$) and the view of $V^*$ are indistinguishable. \medskip We can construct a simulator $\textsf{S}=(\textsf{S}_{real},\textsf{S}_j)$ as follows: $\textsf{S}_{real}$, taking as inputs $\{(x_i,w_i)\}_{1\leq i\leq s, i\neq j}$, does exactly what the honest provers do on these statements and outputs the transcript of all but the $j$th session (in the $j$th session $x_j\in L$ is to be proven), and $\textsf{S}_j$ acts the same as the simulator associated with Barak's protocol in the session in which $x_j\in L$ is to be proven, except that when $\textsf{S}_j$ is required to send a committed value (the second-round message in Barak's protocol), it commits to the hash value of the \textbf{joint} residual code of $V^*$ and $\textsf{S}_{real}$ at this point instead of committing to the hash value of the residual code of $V^*$ alone (that is, we treat $\textsf{S}_{real}$ as a subroutine of $V^*$ that interacts with $V^*$ internally). We note that the next message of the joint residual code of $V^*$ and $\textsf{S}_{real}$ is determined solely by the commitment message from $\textsf{S}_j$, so, as shown in \cite{B}, $\textsf{S}_j$ works. On the other hand, $\textsf{S}_{real}$'s behavior is identical to that of the honest provers. Thus, the whole simulator $\textsf{S}$ satisfies our requirement. When we transform a constant-round public-coin zero-knowledge argument into a resettably-sound zero-knowledge argument, the transformation itself does not influence the simulatability (zero knowledge) of the latter argument, because the zero-knowledge requirement does not refer to the honest verifier (as pointed out in \cite{BGGL}). Thus, the same simulator described above also works for the resettably-sound zero-knowledge argument in concurrent settings. So we have: \medskip\noindent\textbf{Fact 2.} The resettably-sound zero-knowledge arguments in \cite{BGGL} enjoy \emph{one-many simulatability}. \section{rZK Argument with Concurrent Soundness for $\mathcal{NP}$ in the BPK model Under Standard Assumptions} In this section we present a constant-round rZK argument with concurrent soundness in the BPK model for all $\mathcal{NP}$ languages without assuming any subexponential hardness.
For the sake of readability, we give some intuition before describing the protocol formally. We construct the argument in the following way: we build a concurrent zero-knowledge argument with concurrent soundness and then transform this argument into a resettable zero-knowledge argument with concurrent soundness. Concurrent zero knowledge with concurrent soundness was presented in \cite{DV} under standard assumptions (without using ``complexity leveraging''). For the sake of simplicity, we modify the \emph{flawed} construction presented in \cite{Z2} to get a concurrent zero-knowledge argument with concurrent soundness. Consider the following two-phase argument in the BPK model. Let $n$ be the security parameter, and let $f$ be a one-way function that maps $\{0,1\}^{\kappa(n)}$ to $\{0,1\}^n$ for some function $\kappa: \mathbb{N}\rightarrow\mathbb{N}$. The verifier chooses two random numbers $x_0,x_1\in \{0,1\}^{\kappa(n)}$, computes $y_0=f(x_0)$, $y_1=f(x_1)$, then publishes $(y_0, y_1)$ as his public key and keeps $x_0$ or $x_1$ secret. In phase one of the argument, the verifier proves to the prover that he knows one of $x_0$, $x_1$ using a \emph{partial-witness-independent} witness-indistinguishable proof of knowledge protocol $\Pi_v$. In phase two, the prover proves that the statement to be proven is true or he knows one of the preimages of $y_0$ and $y_1$, via a witness-indistinguishable argument of knowledge protocol $\Pi_p$. Note that in phase two we use an \emph{argument} of knowledge; this means we restrict the prover to be a probabilistic polynomial-time algorithm, and therefore our whole protocol is an argument (not a proof). Though the above two-phase argument does not enjoy concurrent soundness \cite{DV}, it is still a good starting point, and we can use essentially the same technique as in \cite{DV} to fix the flaw: in phase two, the prover uses a commitment scheme\footnote{In contrast to \cite{DV}, we prove that a computationally-binding commitment scheme suffices to achieve concurrent soundness. In fact, the statistically-binding commitment scheme in \cite{DV} could also be replaced with a computationally-binding one without violating concurrent soundness.} $\textsf{COM}_1$ to compute a commitment to a random string $s$, $c=\textsf{COM}_1(s,r)$ ($r$ is a random string needed by the commitment scheme), and then the prover proves that the statement to be proven is true or he committed to a preimage of $y_0$ or $y_1$. We can prove that the modified argument is a concurrent zero-knowledge argument with concurrent soundness using a technique similar to that in \cite{DV}. Given the above (modified) concurrent zero-knowledge argument with concurrent soundness, we can transform it into a resettable zero-knowledge argument with concurrent soundness in this way: 1) using a statistically-binding commitment scheme $\textsf{COM}_0$, the verifier computes a commitment $c_e=\textsf{COM}_0(e,r_e)$ ($r_e$ is a random string needed by the scheme) to a random string $e$ in phase one, and then he sends $e$ (note that the verifier does not send $r_e$, namely, he does not open the commitment $c_e$) as the second message (i.e., the challenge) of $\Pi_p$ and proves that $e$ is the string he committed to in the first phase using a resettably-sound zero-knowledge argument; 2) we equip the prover with a pseudorandom function, and whenever random bits are needed in an execution, the prover applies the pseudorandom function to everything he has seen so far to generate them. Let us consider the concurrent soundness of the above protocol.
Imagine that a malicious prover convinces an honest verifier of a false statement in some session (we call it a cheating session) of an execution of a concurrent attack with non-negligible probability. Then we can use this session to break some hardness assumption: after the first run of this session, we rewind it to the point where the verifier is required to send a challenge, choose an arbitrary (different) challenge, and run the simulator for the underlying resettably-sound zero-knowledge argument. At the end of the second run of this session, we can extract one of the preimages of $y_0$ and $y_1$ from the two different transcripts, and this contradicts either the witness indistinguishability of $\Pi_v$ or the binding property of the commitment scheme $\textsf{COM}_1$. Note that in the above reduction we just need to simulate the single execution of the resettably-sound zero-knowledge argument in the cheating session, and we do not care about the other sessions initiated by the malicious prover (in those sessions we play the role of the honest verifier). We showed in the last section that the simulation in this special concurrent setting can be done in a simple way. \medskip\noindent{\textbf{The Protocol (rZK argument with concurrent soundness in the BPK model)}} \medskip Let $\{prf_r: \{0,1\}^*\rightarrow \{0,1\}^{d(n)}\}_{r\in\{0,1\}^n}$ be a pseudorandom function ensemble, where $d$ is a polynomial function, let $\textsf{COM}_0$ be a \emph{statistically-binding} commitment scheme, and let $\textsf{COM}_1$ be a general commitment scheme (either statistically binding or computationally binding\footnote{If the computationally-binding scheme is perfectly hiding, then it requires stronger assumptions; see also \cite{P,NOVM}.}). Without loss of generality, we assume that both the preimage size of the one-way function $f$ and the message size of $\textsf{COM}_1$ equal $n$. \textbf{Common input:} the public file $F$, an $n$-bit string $x\in L$, an index $i$ that specifies the $i$-th entry $pk_i=(f,y_0,y_1)$ ($f$ is a one-way function) of $F$. \textbf{$P$'s private input:} a witness $w$ for $x\in L$, and a fixed random string $(r_1,r_2)\in {\{0,1\}^{2n}}$. \textbf{$V$'s private input:} a secret key $\alpha$ (where $y_0=f(\alpha)$ or $y_1=f(\alpha)$). \medskip\noindent\textbf{Phase 1:} $V$ proves knowledge of $\alpha$ and sends a committed challenge to $P$. \begin{enumerate} \item $V$ and $P$ run the 3-round \emph{partial-witness-independent} witness-indistinguishable protocol ($\Sigma_{OR}$-protocol) $\Pi_v$ in which $V$ proves knowledge of $\alpha$, a preimage of one of $y_0$ and $y_1$. The random bits used by $P$ equal $r_1$; \item $V$ computes $c_e=\textsf{COM}_0(e,r_e)$ for a random $e$ ($r_e$ is a random string needed by the scheme), and sends $c_e$ to $P$. \end{enumerate} \noindent\textbf{Phase 2:} $P$ proves $x\in L$. \begin{enumerate} \item $P$ checks that the transcript of $\Pi_v$ is accepting. If so, it goes to the following step.
\item $P$ chooses a random string $s$, $|s|=n$, and computes $c=\textsf{COM}_1(s, r_s)$ by picking a random string $r_s$; $P$ forms a new relation $R'=\{((x,y_0,y_1,c),w')\mid (x,w')\in R_L \vee (w'=(w{''},r_{w{''}})\wedge y_0=f(w{''})\wedge c=\textsf{COM}_1(w{''},r_{w{''}})) \vee (w'=(w{''},r_{w{''}})\wedge y_1=f(w{''})\wedge c=\textsf{COM}_1(w{''},r_{w{''}}))\}$; $P$ invokes the 3-round witness-indistinguishable argument of knowledge ($\Sigma_{OR}$-protocol) $\Pi_p$ in which $P$ proves knowledge of $w'$ such that $((x,y_0,y_1,c),w')\in R'$, and computes and sends the first message $a$ of $\Pi_p$.\\ All random bits used in this step are obtained by applying the pseudorandom function $prf_{r_2}$ to everything $P$ has seen so far, including the common inputs, the private inputs and all messages sent by both parties so far. \item $V$ sends $e$ to $P$, and executes a resettably-sound zero-knowledge argument with $P$ in which $V$ proves to $P$ that $\exists$ $r_e$ s.t. $c_e=\textsf{COM}_0(e,r_e)$. Note that this subprotocol costs several (constant) rounds. Again, the randomness used by $P$ is generated by applying the pseudorandom function $prf_{r_2}$ to everything $P$ has seen so far. \item $P$ checks that the transcript of the resettably-sound zero-knowledge argument is accepting. If so, $P$ computes the last message $z$ of $\Pi_p$ and sends it to $V$. \item $V$ accepts if and only if $(a,e,z)$ is an accepting transcript of $\Pi_p$. \end{enumerate} \textbf{Theorem 1.} Let $L$ be a language in $\mathcal{NP}$. If there exist hash functions collision-resistant against any polynomial-time adversary, then there exists a constant-round rZK argument with concurrent soundness for $L$ in the BPK model. \medskip\textbf{Remark on the complexity assumption.} We prove this theorem by showing that the protocol described above is an rZK argument with concurrent soundness. Indeed, our protocol requires collision-resistant hash functions and one-way \emph{permutations}, because the 3-round $\Sigma$-protocol (and therefore the $\Sigma_{OR}$-protocol) for $\mathcal{NP}$ assumes one-way permutations, and the resettably-sound zero-knowledge argument assumes collision-resistant hash functions. However, we can build a 4-round $\Sigma$-protocol (and therefore a $\Sigma_{OR}$-protocol) for $\mathcal{NP}$ assuming only the existence of one-way functions by adding one message (see the discussion of $\Sigma$-protocols in Section 2), and our security analysis also applies to this variant. We also note that collision-resistant hash functions imply one-way functions, which suffice to build a statistically-binding commitment scheme \cite{N} (and therefore a computationally-binding one); thus, once we prove that our protocol is an rZK argument with concurrent soundness, Theorem 1 follows. Here we adopt the 3-round $\Sigma_{OR}$-protocol just for the sake of simplicity. \bigskip\emph{Proof.} \textbf{Completeness.} Straightforward. \textbf{Resettable (\emph{black-box}) Zero Knowledge.} The analysis is very similar to that presented in \cite{CGGM,DPV2}. Here we omit the tedious proof and just provide some intuition. As usual, we can construct a simulator $\textsf{Sim}$ that extracts from $\Pi_v$ all secret keys corresponding to the public keys registered by the malicious verifier and then uses them as witnesses in executions of $\Pi_p$; $\textsf{Sim}$ can complete the simulation in expected polynomial time.
We first note that when a malicious verifier resets an honest prover, it cannot send two different challenges for a fixed commitment sent in Phase 1, because of the statistically-binding property of $\textsf{COM}_0$ and the resettable soundness of the underlying sub-protocol used by the verifier to prove that the challenge matches the value it committed to in Phase 1. To prove the rZK property, we need to show that the output of $\textsf{Sim}$ is indistinguishable from the real interactions. This can be done by constructing a non-uniform hybrid simulator $\textsf{HSim}$ and showing that the output of $\textsf{HSim}$ is indistinguishable from both the output of $\textsf{Sim}$ and the real interaction. $\textsf{HSim}$ runs as follows. Taking as inputs all the secret keys and all the witnesses of the statements in the interactions, $\textsf{HSim}$ computes commitments exactly as $\textsf{Sim}$ does but executes $\Pi_p$ using the same witness of the statement as used by the honest prover. It is easy to see that the output of the hybrid simulator is indistinguishable from both the transcripts of the real interactions (because of the computational-hiding property of $\textsf{COM}_1$) and the output of $\textsf{Sim}$ (because of the witness indistinguishability of $\Pi_p$); therefore, we have proved that the output of $\textsf{Sim}$ is indistinguishable from the real interactions. \textbf{Concurrent Soundness.} The proof proceeds by contradiction. Assume that the protocol does not satisfy the concurrent soundness property; then there is an $s$-concurrent malicious prover $P^*$ that, concurrently interacting with $V$, makes the verifier accept a false statement $x\notin L$ in the $j$th session with non-negligible probability $p$. We now construct an algorithm $\textsf{B}$ that takes the code (with randomness hardwired in) of $P^*$ as input and breaks the one-wayness of $f$ with non-negligible probability. $\textsf{B}$ runs as follows. On input the challenge $(f,y)$ (i.e., given the description of a one-way function, $\textsf{B}$ must find a preimage of $y$), $\textsf{B}$ randomly chooses $\alpha\in \{0,1\}^n$ and $b\in {\{0,1\}}$, and guesses a session number $j\in{\{1,...,s\}}$ (a session in which $P^*$ will successfully cheat the verifier on a false statement $x$; note that this guess is correct with probability $1/s$). Then $\textsf{B}$ registers $pk=(f,y_0,y_1)$ as the public key, where $y_b=f(\alpha)$ and $y_{1-b}=y$. For convenience we let $x_b=\alpha$, and denote by $x_{1-b}$ one of the preimages of $y_{1-b}$ ($y_{1-b}=y=f(x_{1-b})$). Our goal is to find one preimage of $y_{1-b}$. We write $\textsf{B}$ as $\textsf{B}=(\textsf{B}_{real},\textsf{B}_j)$. $\textsf{B}$ interacts with $P^*$ as an honest verifier (note that $\textsf{B}$ knows the secret key $\alpha$ corresponding to the public key $pk$) in all but the $j$th session. Specifically, $\textsf{B}$ employs the following extraction strategy: \begin{enumerate} \item $\textsf{B}$ acts as the honest verifier in this stage. That is, it completes $\Pi_v$ using $\alpha=x_b$ as the secret key, commits to $e$ via $c_e=\textsf{COM}_0(e,r_e)$ in Phase 1, and then runs the resettably-sound ZK argument in Phase 2 using $(e, r_e)$ as the witness. In particular, $\textsf{B}$ uses $\textsf{B}_j$ to play the role of the verifier in the $j$th session, and uses $\textsf{B}_{real}$ to play the role of the verifier in all other sessions.
At the end of the $j$th session, if $\textsf{B}$ gets an accepting transcript $(a,e,z)$ of $\Pi_p$, it enters the following rewinding stage; otherwise, $\textsf{B}$ halts and outputs $\bot$. \item $\textsf{B}_j$ rewinds $P^*$ to the beginning of step 3 of Phase 2 in the $j$th session, chooses a random string $e'\neq e$, and simulates the underlying resettably-sound ZK argument in the way shown in Section 3: it commits to the hash value of the joint residual code of $P^*$ and $\textsf{B}_{real}$ in the second round of the resettably-sound ZK argument (note that this subprotocol is transformed from Barak's protocol) and uses it as the witness to complete the proof of the following \emph{false} statement: $\exists$ $r_e$ s.t. $c_e=\textsf{COM}_0(e',r_e)$. If this rewind incurs some other rewinds in other sessions, $\textsf{B}_{real}$ always acts as an honest verifier. When $\textsf{B}$ gets another accepting transcript $(a,e',z')$ of $\Pi_p$ at step 5 of Phase 2 in the $j$th session, it halts, computes the witness from the two transcripts and outputs it; otherwise, $\textsf{B}$ plays step 3 of the $j$th session again. \end{enumerate} We denote this extraction procedure by $\emph{Extra}$. We first note that $\textsf{B}$'s simulation of $P^*$'s view differs from $P^*$'s view in a real interaction with an honest verifier only in the following: in the second run of $\Pi_p$ in the $j$th session, $\textsf{B}$ proves a \emph{false} statement to $P^*$ via the resettably-sound zero-knowledge argument instead of executing this sub-protocol honestly. We will show that this difference is computationally indistinguishable to $P^*$, using the technique presented in the analysis of the resettable zero-knowledge property; otherwise we could use $P^*$ to violate the zero-knowledge property of the underlying resettably-sound zero-knowledge argument or the statistically-binding property of the commitment scheme $\textsf{COM}_0$. We also note that if the simulation is successful, $\textsf{B}$ gets an accepting transcript of $\Pi_p$ in stage 1 with probability negligibly close to $p$, and once $\textsf{B}$ enters the rewinding stage (stage 2) it will obtain another accepting transcript in expected polynomial time, because $p$ is non-negligible. In other words, $\textsf{B}$ outputs a valid witness with probability negligibly close to $p$ in the above extraction. Now assume $\textsf{B}$ outputs a valid witness $w'$ such that $((x,y_0,y_1,c),w')\in R'$; moreover, since $x\notin L$, the witness $w'$ must satisfy $w'=(w{''},r_{w{''}})$ with $y_b=f(w{''})$ or $y_{1-b}=f(w{''})$. If $y_{1-b}=f(w{''})$, we break the one-wayness of $f$ (we find a preimage of $y_{1-b}$); otherwise (i.e., $w{''}$ satisfies $y_b=f(w{''})$), we fail. Next we claim that $\textsf{B}$ succeeds in breaking the one-wayness of $f$ with non-negligible probability. Assume otherwise: with at most negligible probability $q$, $\textsf{B}$ outputs a preimage of $y_{1-b}$. Then we can construct a non-uniform algorithm $\textsf{B'}$ (incorporating the code of $P^*$) to break the witness indistinguishability of $\Pi_v$ or the computational binding of the commitment scheme $\textsf{COM}_1$. The non-uniform algorithm $\textsf{B'}$ takes as auxiliary input $(y_0, y_1, x_0, x_1)$ (i.e., both secret keys) and interacts with $P^*$ under the public key $(y_0, y_1)$. It performs the following experiment: \begin{enumerate} \item \emph{Simulation} (\emph{until} $\textsf{B'}$ \emph{receives the first message $a$ of $\Pi_p$ in the $j$th session}).
$\textsf{B'}$ acts exactly as $\textsf{B}$ does. Without loss of generality, let $\textsf{B'}$ use $x_0$ as witness in all executions of $\Pi_v$ completed before step 2 of Phase 2 of the $j$th session. Once $\textsf{B'}$ receives the first message $a$ of $\Pi_p$ in the $j$th session, it splits this experiment and continues independently in the following games: \item \emph{Extracting Game 0}. $\textsf{B'}$ continues the above simulation and uses the same extraction strategy as $\textsf{B}$. In particular, it runs as follows. 1) Continuing to simulate: $\textsf{B'}$ uses $x_0$ as witness in all executions of $\Pi_v$ that take place during this game; 2) extracting: if $\textsf{B'}$ obtains an accepting transcript $(a, e_0, z_0)$ at the end of the first run of $\Pi_p$ in the $j$th session, it rewinds to the beginning of step 3 of Phase 2 in the $j$th session and replays this round, sending another random challenge $e'\neq e$, until it gets another accepting transcript $(a, e_0', z_0')$ of $\Pi_p$, and then $\textsf{B'}$ outputs a valid witness; otherwise it outputs $\bot$. \item \emph{Extracting Game 1}: $\textsf{B'}$ repeats Extracting Game 0, but $\textsf{B'}$ uses $x_1$ as witness in all executions of $\Pi_v$ during this game (i.e., those executions of $\Pi_v$ completed after step 2 of Phase 2 in the $j$th session). At the end of this game, $\textsf{B'}$ either obtains two accepting transcripts $(a, e_1, z_1)$, $(a, e_1', z_1')$ and outputs a valid witness, or outputs $\bot$. Note that an execution of $\Pi_v$ taking place during this game means that at least the last (third) message of $\Pi_v$ in that execution has not been sent before step 2 of Phase 2 in the $j$th session. Since $\Pi_v$ is a \emph{partial-witness-independent} $\Sigma$-protocol (so we can decide which witness to use at the last (third) step of $\Pi_v$), $\textsf{B'}$ can choose the witness at its desire to complete that execution of $\Pi_v$ after step 2 of Phase 2 in the $j$th session. \end{enumerate} We denote by $\emph{EXP}_0$ the \emph{Simulation} in stage 1 described above together with its first continuation \emph{Extracting Game 0}; similarly, we denote by $\emph{EXP}_1$ the same \emph{Simulation} with its second continuation \emph{Extracting Game 1}. Note that $P^*$'s view in $\emph{EXP}_0$ is identical to its view in $\emph{Extra}$ when $\textsf{B}$ uses $x_0$ ($b=0$) as witness in all executions of $\Pi_v$, so the output of $\textsf{B'}$ at the end of $\emph{EXP}_0$ is distributed identically to the output of $\textsf{B}$ taking $x_0$ as the secret key in $\emph{Extra}$; that is, with non-negligible probability $p$, $\textsf{B'}$ outputs a preimage of $y_0$, and with negligible probability $q$ it outputs a preimage of $y_1$. Consider $\textsf{B}$'s behavior in $\emph{Extra}$ when it uses $x_1$ ($b=1$) as the secret key. The behavior of $\textsf{B}$ differs from the behavior of $\textsf{B'}$ in $\emph{EXP}_1$ only in those executions of $\Pi_v$ completed before step 2 of Phase 2 in the $j$th session: $\textsf{B'}$ uses $x_0$ as witness in all those executions, while $\textsf{B}$ uses $x_1$. However, $P^*$ cannot tell these apart, because $\Pi_v$ is witness indistinguishable and none of those executions of $\Pi_v$ is rewound during either $\emph{Extra}$ or $\emph{EXP}_1$ (note that $\textsf{B'}$ does not rewind past step 2 of Phase 2 in the $j$th session in the whole experiment).
Thus, we can claim that at the end of $\emph{EXP}_1$, $\textsf{B'}$ outputs a preimage of $y_1$ with probability negligibly close to $p$, and it outputs a preimage of $y_0$ with probability negligibly close to $q$. In the above experiment conducted by $\textsf{B'}$, the first message $a$ sent by $P^*$ in the $j$th session contains a commitment $c$, and this message $a$ (therefore $c$) remains unchanged during the whole experiment. Clearly, with probability negligibly close to $p^2$ (note that $q$ is negligible), $\textsf{B'}$ will output two valid witnesses $w_0'=(w_0{''},r_{w_0{''}})$ and $w_1'=(w_1{''},r_{w_1{''}})$ (note that $w_0{''}\neq w_1{''}$ except with very small probability) from the above two games such that the following hold: $y_0=f(w_0{''})$, $y_1=f(w_1{''})$, $c=\textsf{COM}_1(w_0{''},r_{w_0{''}})$ and $c=\textsf{COM}_1(w_1{''},r_{w_1{''}})$. This contradicts the computational-binding property of the scheme $\textsf{COM}_1$. In sum, we have proved that if $\textsf{COM}_1$ enjoys computational binding and $\Pi_v$ is a witness-indistinguishable protocol with the \emph{partial-witness-independence} property, then $\textsf{B}$ succeeds in breaking the one-wayness of $f$ with non-negligible probability. In other words, if the one-wayness assumption on $f$ holds, it is infeasible for $P^*$ to cheat an honest verifier in concurrent settings with non-negligible probability. $\Box$\\ \bigskip\noindent\textbf{Acknowledgments.} Yi Deng thanks Giovanni Di Crescenzo, Rafael Pass, Ivan Visconti and Yunlei Zhao for many helpful discussions and clarifications.
\section{Introduction} Quadratic forms on a free supertropical module, and their bilinear companions, were introduced and classified in \cite{QF1,QF2}, and studied further in \cite{Quasilinear,VR1,UB}. These objects establish a version of tropical trigonometry, in which the CS-ratio takes over the role of the Cauchy-Schwarz inequality, which is not always applicable. (``CS'' is an acronym for ``Cauchy-Schwarz''.) With the notion of the CS-ratio, the space of equivalence classes of a suitable equivalence relation, whose classes are termed rays, provides a framework which carries a type of convex geometry. The study of this geometry was initiated in \cite{Quasilinear}, focusing on the so-called \emph{quasilinear stars}. The present paper proceeds to develop this theory, employing mostly special characteristic functions, called CS-functions, that emerge from the CS-ratio on ray spaces. These CS-functions provide a useful tool for convex analysis, which is of much help in understanding the variety of quasilinear stars in the ray space. Supertropical modules are modules over supertropical semirings, which carry a rich algebraic structure \cite{zur05TropicalAlgebra,nualg,IzhakianKnebuschRowen2010Linear,IR1,IR2,IR3}, and are at the heart of our framework. A supertropical semiring (\Qdefref{0.3}) is a semiring $R$ with an idempotent element $e:=1+1$ (i.e., $e+e = e$) such that, for all~ $a,b\in R$, $a+b\in\{a,b\}$ whenever $ea \ne eb$, and $a+b=ea$ otherwise. The ideal $eR$ of~$R$ is a bipotent semiring (with unit element $e$), i.e., $a+b$ is either $a$ or $b$ for any $a,b\in eR$. The total ordering \begin{equation*}\label{eq:0.5} a\le b \dss \Leftrightarrow a+b=b \end{equation*} of $eR$, together with the ghost map $\nu: a\mapsto ea$, induces the $\nu$-ordering \begin{equation}\label{eq:nuorderring} a <_\nu b \dss \ \Leftrightarrow \ ea < eb \end{equation} and the $\nu$-equivalence \begin{equation}\label{eq:nucong} a \cong_\nu b \dss\ \Leftrightarrow \ ea = eb \end{equation} on the entire semiring $R$, which determine the addition of~$R$: \begin{equation*}\label{eq:0.6} a+b =\begin{cases} b&\ \text{if}\ a <_\nu b,\\ a&\ \text{if}\ a>_\nu b,\\ eb&\ \text{if}\ a \cong_\nu b. \end{cases} \end{equation*} Consequently, $ea=0 \Rightarrow a =0$, and the zero $0 = e0$ is regarded mainly as a ghost. The set $\mathcal T:=R\setminus(eR)$ consists of the \bfem{tangible} elements of $R$, while the ideal $\mathcal G :=(eR)\setminus\{0\}$ contains the \bfem{ghost} elements. The semiring $R$ itself is said to be \bfem{tangible} if $e\mathcal T=\mathcal G$, i.e., $R$ is generated by $\mathcal T$ as a semiring. Then, for $\mathcal T \neq \emptyset$, $R':=\mathcal T \cup e\mathcal T \cup\{0\}$ is the largest tangible sub-semiring of $R$. An $R$-module $V$ over a commutative supertropical semiring $R$ is defined in the familiar way. A \bfem{quadratic form} on $V$ is a function $q: V\to R$ satisfying \begin{equation*} q(ax)=a^2q(x) \end{equation*} for any $a\in R$, $x\in V$, for which there exists a symmetric bilinear form $b:V\times V\to R$, called a \bfem{companion} of $q$, such that \begin{equation*} q(x+y)= q(x)+q(y)+b(x,y) \end{equation*} for any $x,y\in V$. ($q$ may have several companions.) The pair $(q,b)$ is called a \bfem{quadratic pair}. It is called \textbf{balanced}, and $b$ is said to be a \textbf{balanced companion} of $q$, if $b(x, x) = e q(x)$ for any $x \in V$.
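The following toy implementation (ours, purely illustrative) realizes these axioms over the max-plus scale: each element carries a $\nu$-value and a tangible/ghost flag, addition returns the $\nu$-larger summand and ghosts out ties, and multiplication adds $\nu$-values.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class ST:
    val: float              # the nu-value ea on the max-plus scale
    tangible: bool = True   # tangible vs. ghost

    def __add__(self, other):
        if self.val > other.val:
            return self
        if self.val < other.val:
            return other
        return ST(self.val, False)   # nu-equivalent summands ghost out

    def __mul__(self, other):
        # max-plus multiplication adds nu-values; ghosts absorb.
        return ST(self.val + other.val, self.tangible and other.tangible)

def e(x):                      # the ghost map nu: a -> ea
    return ST(x.val, False)

a = ST(3.0)
assert a + ST(3.0) == e(a)     # a + b = ea when a is nu-equivalent to b
assert a + ST(5.0) == ST(5.0)  # a + b = b when a <_nu b
assert e(a) + e(a) == e(a)     # idempotence of addition on ghosts
\end{verbatim}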
In our version of ``\textit{tropical trigonometry}'' the familiar formula $ \cos(x,y) = \frac {\langle x,y\rangle} {\| x\| \; \| y\| } $ of Euclidean geometry is replaced by the CS-ratio \begin{equation}\label{eq:0.10} \operatorname{CS}(x,y):=\frac{eb(x,y)^2}{eq(x)q(y)}\in eR \end{equation} of \textbf{anisotropic} vectors $x,y \in V$, i.e., vectors with $q(x)\ne0,$ $q(y)\ne 0$. (As for any supertropical semiring the map $\lambda \mapsto \lambda^2$ is an injective endomorphism, there is no loss of information in squaring $\operatorname{CS}(x,y)$ \cite[Proposition 0.5]{QF1}.) The function $x\mapsto \operatorname{CS}(x,w)$ is subadditive for any anisotropic vector $w \in V$ (Theorem \ref{thm:II.2.10}). In this setting, features of non-Euclidean geometry arise, since, unlike in Euclidean geometry, the CS-ratio $\operatorname{CS}(x,y)$ may take values larger than $e$. These features are closely related to excessiveness \cite[Definition 2.8]{QF2}. When $eR$ is densely ordered, a pair $(x,y)$ is excessive if $\operatorname{CS}(x,y)>e.$ When $eR$ is discrete, $(x,y)$ is excessive if either $\operatorname{CS}(x,y)>c_0,$ where $c_0$ is the smallest element of $eR$ larger than $e,$ or $\operatorname{CS}(x,y)=c_0$ and $q(x)$ or $q(y)$ is tangible. A pair $(x,y)$ is \textbf{exotic quasilinear} if $\operatorname{CS}(x,y) = c_0$ and both $q(x)$ and $q(y)$ are ghost \cite[~Theorems ~2.7 and 2.14]{QF2}. A pair of vectors $(x,y)$ is called \textbf{$\nu$-excessive} (resp. \textbf{$\nu$-quasilinear}) if the pair $(ex,ey)$ is excessive (resp. quasilinear). Since $\operatorname{CS}(x,y) = \operatorname{CS}(ex,ey)$, it is often simpler to work with $\nu$-excessiveness and $\nu$-quasilinearity. To wit, $(x,y)$ is $\nu$-quasilinear if $$ q(x+y) \cong_\nu q(x)+q(y),$$ and is $\nu$-excessive otherwise. When $eR$ is dense, $(x,y)$ is $\nu$-quasilinear iff $\operatorname{CS}(x,y) \leq e$, while for $eR$ discrete it is $\nu$-quasilinear iff $\operatorname{CS}(x,y) \leq c_0$; it is exotic quasilinear iff $\operatorname{CS}(x,y) = c_0$. The CS-ratio obeys important subadditivity rules, involving $\nu$-excessiveness as well as $\nu$-quasilinearity, which are utilized in this paper. \begin{thm}[{\cite[Subadditivity Theorem 3.6]{QF2}}]\label{thm:II.2.10} Let $x,y,w \in V $ be anisotropic vectors. \begin{enumerate} \item[a)] $ \operatorname{CS}(x+y,w)\ds\le\operatorname{CS}(x,w)+\operatorname{CS}(y,w).$ \item[b)] If $(x,y)$ is $\nu$-excessive and $\operatorname{CS}(x,w)+\operatorname{CS}(y,w)\ne0,$ then $$ \operatorname{CS}(x+y,w)\ds <\operatorname{CS}(x,w)+\operatorname{CS}(y,w).$$ \item[c)] If $(x,y)$ is $\nu$-quasilinear and either $q(x)\operatorname{CS}(y,w)=q(y)\operatorname{CS}(x,w)$, or $\operatorname{CS}(x,w)=\operatorname{CS}(y,w)$, or $q(x)\cong_\nu q(y),$ then $$\operatorname{CS}(x+y,w)\ds=\operatorname{CS}(x,w)+\operatorname{CS}(y,w).$$ \end{enumerate} \end{thm}
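On the level of $\nu$-values, the subadditivity rule a) can be checked numerically in the max-plus semifield (our own sketch; the encoding of a balanced pair $(q,b)$ by a single symmetric matrix is an illustrative choice, with tropical product $+$, tropical sum $\max$, and division $-$ on the log scale):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4
Bm = rng.uniform(-2, 2, (n, n)); Bm = np.maximum(Bm, Bm.T)

def q(x):     return np.max(Bm + x[:, None] + x[None, :])   # nu-value of q
def b(x, y):  return np.max(Bm + x[:, None] + y[None, :])   # balanced companion
def CS(x, y): return 2 * b(x, y) - q(x) - q(y)              # CS-ratio, log scale

for _ in range(1000):
    x, y, w = rng.uniform(-5, 5, (3, n))
    lhs = CS(np.maximum(x, y), w)   # x + y is the entrywise tropical sum
    rhs = max(CS(x, w), CS(y, w))   # the semiring sum in eR is max
    assert lhs <= rhs + 1e-9        # subadditivity rule a) on nu-values
\end{verbatim}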
On an $R$-module $V$ we use the equivalence relation: $x \sim y$ iff $\lambda x = \mu y$ for some $\lambda, \mu \in R \setminus \{ 0 \}$ (where~$\lambda, \mu$ need not be invertible, in contrast to the usual projective equivalence), whose classes $X$ are called \textbf{rays}. It delivers a projective version of the theory on $V \setminus \{ 0 \}$, cf. \cite[\S6]{QF2}. When~$x$ and $y$ are anisotropic, the CS-ratio $\operatorname{CS}(x,y)$ depends only on the rays $X,Y$ containing $x,y$, and provides a well-defined CS-ratio $\operatorname{CS}(X,Y)$ for anisotropic rays $X,Y$, i.e., rays $X,Y$ in $V \setminus q^{-1}(0).$ Subadditivity of rays occurs on \textbf{intervals} $[X,Y]$ with endpoints $X,Y$, cf.\ \S\ref{sec:1}, as a consequence of Theorem~\ref{thm:II.2.10}. A comparison of $\operatorname{CS}(Z,W)$ with $\operatorname{CS}(X,W)+\operatorname{CS}(Y,W)$ for an anisotropic ray $Z \in [X,Y]$ and arbitrary $W$ is given by \cite[Theorem~7.7]{QF2}, and uniqueness of the boundary of~$[X,Y]$ by Theorem \ref{thm:II.8.8}. The \textbf{ray space} $\operatorname{Ray}(V)$ of $V$ consists of all rays and carries a natural notion of convexity: a subset $A \subset \operatorname{Ray}(V)$ is \textbf{convex} if $[X,Y] \subset A$ for any $X,Y \in A$. Basic structures of rays and convexity in $\operatorname{Ray}(V)$ are reviewed in \S\ref{sec:1}. Relying on a fine, detailed analysis of the monotonicity behavior of the CS-functions $$\operatorname{CS}(W,-): \operatorname{Ray}(V) \longrightarrow eR$$ on a fixed interval $[Y_1, Y_2]$ in $\operatorname{Ray}(V)$, given in \S\ref{sec:6}, CS-profiles on intervals are defined in \S\ref{sec:7}. This fine analysis enlarges the scope of the results in \cite{Quasilinear} and determines a partition of $\operatorname{Ray}(V)$ into convex subsets (Theorem \ref{thm:8.6}) according to the monotonicity behavior of $\operatorname{CS}(W,-)$ on the intervals $[Y_i, Y_j]$, $1 \leq i < j \leq m$, for a given finite set of rays $\{ Y_1, \dots, Y_m\}$ and $W$ running through $\operatorname{Ray}(V)$. A {pair $(X,Y)$ of rays} in $V$ is \textbf{quasilinear} (with respect to $q$) if the restriction $q |_{ Rx + Ry}$ is quasilinear for any $x \in X$, $y \in Y$. A {subset} $C \subset \operatorname{Ray}(V)$ is \textbf{quasilinear} if all pairs $(X,Y)$ in $C$ are quasilinear. Quasilinearity is governed by the \textbf{QL-stars} $\operatorname{QL}(X)$ of rays $X$. $\operatorname{QL}(X)$ is the set of all $Y \in \operatorname{Ray}(V)$ for which the pair $(X,Y)$ is quasilinear\footnote{$\operatorname{QL}(X)$ is not necessarily quasilinear.}; equivalently, the interval $[X,Y]$ is quasilinear. \S\ref{sec:9} presents the \textbf{downset} of a QL-star, that is, the set of all QL-stars contained in $\operatorname{QL}(X)$, while \S\ref{sec:10} introduces the \textbf{median} on an interval and links it to convexity properties (Corollary \ref{cor:10.6}). The study of medians leads in \S\ref{sec:11} to an inquiry into the existence of extrema of the CS-functions $\operatorname{CS}(W,-)$. Theorem \ref{thm:11.1} specifies a condition under which, and the place where, an $eR$-valued function attains a minimum, while Theorem \ref{thm:11.4} provides an upper bound in terms of generators for CS-functions over finitely generated $eR$-modules. Inquiring after the minima of a CS-function is then a natural question. An intriguing issue is that the minimum of $\operatorname{CS}(W,-)$ over the convex hull $\operatorname{conv}(Y_1, \dots , Y_n)$ of $\{Y_1, \dots, Y_n \}$ can be smaller than $\min\limits_{i} \operatorname{CS}(W,Y_i)$. The rays for which this holds compose the ``\textbf{glen}'' of $Y_1, \dots, Y_n$, which is discussed in detail in \S\ref{sec:12}. Glens extend to intervals, and establish a useful correspondence with CS-functions (Theorem~\ref{thm:12.3}).
Given a finitely generated convex set $C$ in a ray space $\operatorname{Ray}(V)$, we may ask whether there exists a quadratic pair $(q,b)$ on $V$ with $q$ anisotropic on $V$. In case $(q,b)$ exists, we can move a ray~$W$ around and examine the minima of $\operatorname{CS}(W,-)$ on $C$. In the easiest case, where $q$ is quasilinear on $C$, the following holds: every ray $Z$ which is an isolated minimum of a CS-function $\operatorname{CS}(W,-)$, i.e., such that this function is not constant on any interval emanating from~$Z$, is an ``indispensable generator'' of $C$, i.e., $Z$ occurs in every set of generators of $C$. If $C= \operatorname{conv}( S)$ is the convex hull of some finite set $S$ for which certain pairs $(Z,Z')$ in $S$ are $\nu$-excessive, then the situation is more involved, since $\operatorname{CS}(W,-)$ can be non-monotonic on $[ Z,Z']$ and the minimum can be attained at the $W$-median. Nevertheless, this gives a constraint on, say, the minimal sets of generators of $C$. An intriguing phenomenon is that one can choose~$W$ nearly arbitrarily. Motivated by this phenomenon, \S\ref{sec:13} explores the set ${\operatorname{Min}}\operatorname{CS}(W,C)$ of minima of a CS-function $\operatorname{CS}(W,-)$ on the convex hull $C$ of a finite set $S$ of rays in $\operatorname{Ray}(V)$. Theorem~\ref{thm:13.1} characterizes properties of ${\operatorname{Min}}\operatorname{CS}(W,C)$, linking these properties to medians. A $Z$-polar of a subset~$P \subset Z^\uparrow = \{ Y \ds | \operatorname{CS}(W,Y) > \operatorname{CS}(W,Z) \}$ is a set of rays (Definition \ref{def:13.6}) for which there exists $X \in P$ with $Y \in Z^\uparrow$ such that $M_W(X,Y) =Z$. This subset is closed under taking convex hulls (Theorem~\ref{thm:13.9}), is compatible with the ordering induced by $Z$ (Theorem~\ref{thm:13.10}), and induces the $Z$-equivalence relation on $Z^\uparrow$. The next step of this study is then to describe the classes of this equivalence relation (Problem \ref{prob:13.14}). In this paper we give only a partial description in terms of convex hulls (Theorem \ref{thm:13.13}), but, by introducing the notion of \textbf{median stars} in \S\ref{sec:14}, we lay out a possible machinery for addressing this problem. \section{Convex sets in the ray space}\label{sec:1} We review our setup as laid out in \cite{QF2,Quasilinear}, in which $V$ denotes an $R$-module, where $R$ is a supertropical semiring such that $eR$ is a (bipotent) semifield and $R \setminus \{ 0 \}$ is closed under multiplication, i.e., $\lambda \mu =0 \ds \Rightarrow \lambda =0 \ds{\text{or}} \mu =0, \ \text{for any } \lambda, \mu \in R.$ $V$ is assumed to have the property that $\lambda x = 0 \ds \Rightarrow \lambda =0 \ds{\text{or}} x =0, \ \text{for any } x \in V.$ These properties hold when $eR = \mathcal G \cup \{ 0 \}$ is a semifield. Vectors $x,y \in V$ are \textbf{ray-equivalent}, written $x \sim_{\operatorname{r}} y,$ if $\lambda x = \mu y$ for some $\lambda, \mu \in R \setminus \{ 0 \}$. This is the finest equivalence relation $E$ on $V$ with $x \sim_E \lambda x$ for any $\lambda \in R \setminus \{ 0 \}$, and it exhibits $V \setminus \{ 0 \}$ as a union of ray-equivalence classes. The \textbf{rays} in $V$ are the ray-equivalence classes $\neq \{ 0 \}$. The \textbf{ray} $\operatorname{ray}_V(x)$ of a vector $x \in V \setminus \{ 0 \}$ is its ray-equivalence class, written $\operatorname{ray}(x)$ when $V$ is clear from the context. The \textbf{ray space} $\operatorname{Ray}(V)$ of $V$ is the set of all rays in~$V$.
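At the level of $\nu$-values this equivalence is easy to visualize (our toy sketch over the max-plus semifield, where the scalar action is entrywise addition of a constant; genuine supertropical rays are coarser, since ghost scalars are also allowed):
\begin{verbatim}
import numpy as np

# Over max-plus nu-values, lambda * x is the entrywise shift lambda + x,
# so x ~_r y iff x and y differ by a constant; normalizing by the maximal
# entry picks one canonical representative per ray.
def ray_rep(x):
    return x - np.max(x)

x = np.array([1.0, 3.0, -2.0])
assert np.allclose(ray_rep(x), ray_rep(x + 7.5))   # same ray
\end{verbatim}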
The set $X_0 := X \cup \{ 0 \} $ is the smallest submodule of~$V$ containing the ray $X$. $X_0 + Y_0$ is the smallest submodule of $V$ containing both rays $X$ and $Y$. It is a disjoint union of subsemigroups of $(V,+)$ as follows \begin{equation}\label{eq:II.5.3} X_0 + Y_0 = (X + Y) \ds{ \cup} X \ds{ \cup} Y \ds{ \cup} \{ 0 \}. \end{equation} The \bfem{closed interval} $[X,Y]$ consists of all rays $Z$ in the submodule $X_0+Y_0$ of $V$, generated by $X\cup Y$. The \bfem{open interval} $\, ]X,Y[$ consists of all rays $$Z\subset X+Y:=\{x+y \ds | x\in X,y\in Y\}.$$ Thus, $[X,Y] \ds =\, ]X,Y[ \ds \cup\{X,Y\}.$ The \textbf{half open intervals} are $$[X,Y[\ds{:=} ]X,Y[\ds \cup\{X\},\qquad ]X,Y] \ds{:=} ]X,Y[ \ds \cup\{Y\}.$$ \begin{schol}[{\cite[Scholium 7.6]{QF2}}]\label{schol:II.6.6} Set $X=\operatorname{ray}(x),$ $Y=\operatorname{ray}(y)$ for $x,y\in V$. For any ray~ $Z$ in $V,$ the following hold: \begin{enumerate} \setlength{\itemsep}{2pt} \item[a)] $Z \ds\in\, ]X,Y[$ iff $Z=\operatorname{ray}(\lambda x+\mu y)$ with $\lambda,\mu\in R\setminus\{0\}$ iff $Z=\operatorname{ray}(\lambda x+ y)$ with $\lambda\in R\setminus\{0\}.$ \vskip 1.5mm \noindent \item[b)] $Z\ds\in ]X,Y]$ iff $Z=\operatorname{ray}(\lambda x+\mu y)$ with $\lambda\in R,$ $\mu\in R\setminus\{0\}$ iff $Z=\operatorname{ray}(\lambda x+y)$ with $\lambda\in R.$\vskip 1.5mm \noindent \item[c)] $Z\in[X,Y]$ iff $Z=\operatorname{ray}(\lambda x+\mu y)$ with $\lambda,\mu\in R.$\end{enumerate} $R$ and $R\setminus\{0\}$ may be replaced respectively by $eR$ and $\mathcal G$ everywhere. \end{schol} \begin{thm}[{\cite[Theorem 8.8]{QF2}}]\label{thm:II.8.8} Let $X, Y , X_1, Y_1$ be rays in $V$ with $[X,Y] = [X_1, Y_1]$. \begin{enumerate} \item[a)] Either $X= X_1,$ $Y = Y_1$ or $X = Y_1, $ $Y = X_1.$ \vskip 1.5mm \noindent \item[b)] If $[X,Y]$ is not a singleton, i.e., $X \neq Y,$ then $X= X_1,$ $Y = Y_1$ iff $[X, X_1] \neq [X, Y_1].$ \end{enumerate} \end{thm} A subset $M \subset \operatorname{Ray}(V)$ is \textbf{convex} (in $\operatorname{Ray} (V)$), if for any two rays $X, Y \in M$ the closed interval $[X, Y]$ is contained in $M$. The \textbf{convex hull} $\operatorname{conv} (S)$ of a nonempty set $S \subset \operatorname{Ray} (V)$ is the smallest convex subset of $\operatorname{Ray} (V)$ containing $S$. When $S = \{ X_1, \dots, X_n \}$ is finite, $ \operatorname{conv} (S) $ is written $ \operatorname{conv} (X_1, \dots, X_n) $, for short. Clearly $[X, Y] = \operatorname{conv} (X, Y)$, and by \cite[Proposition 8.1]{QF2} all the intervals $] X, Y [ \, , ] X, Y ], [ X, Y [ \, , [ X, Y ]$ are convex sets, for any rays $X, Y$ in $\operatorname{Ray}(V)$. A subset $U \subset V$ is \textbf{\emph{ray-closed}} in $V$, if $U \setminus \{ 0 \}$ is a union of rays of~ $V$. \begin{prop}[{\cite[Proposition 2.6]{Quasilinear}}]\label{prop:1.3} $ $ \begin{enumerate} \etype{\alph} \setlength{\itemsep}{2pt} \item If $U_1, \dots, U_n$ are ray-closed subsets of $V \setminus \{ 0 \}$, then the set $U_1 + \dots + U_n$ is again ray-closed in $V$, consisting of all rays $\operatorname{ray}_V (\lambda_1 u_1 + \dots + \lambda_n u_n)$ with $u_i \in U_i$, $\lambda_i \in R \setminus \{ 0 \}$. In particular, for any rays $X_1, \dots, X_n$ in $V$ the set $X_1 + \dots + X_n$ is ray-closed in $V$. \item The convex hull of a finite set of rays $\{ X_1, \dots, X_n \}$ has the disjoint decomposition \[ \operatorname{conv} (X_1, \dots, X_n) = \bigcup\limits_{i_1 < \dots < i_r} \operatorname{Ray} (X_{i_1} + \dots + X_{i_r}) \] with $r \leq n$, $1 \leq i_1 < \dots < i_r \leq n$. 
\end{enumerate} \end{prop} We denote by $\widetilde{A}$ the set of all sums of finitely many members of a subset $A \subset \operatorname{Ray} (V)$. As the convex hull of $A$ is the union of all sets $\operatorname{conv} (X_1, \dots, X_r)$ with $r \in \mathbb{N}$, $X_1, \dots, X_r \in A$, we obtain the following. \begin{cor}[{\cite[Corollary 2.8]{Quasilinear}}]\label{cor:1.5} Let $C$ denote the convex hull of $A_1 \cup \dots \cup A_n$, where $A_1, \dots, A_n$ are convex subsets of $\operatorname{Ray} (V)$. \begin{enumerate} \etype{\alph} \setlength{\itemsep}{2pt} \item $C$ is the union of all convex hulls $\operatorname{conv} (X_1, \dots, X_n)$ with $X_i \in A_i$, $1 \leq i \leq n$. \item $\widetilde{C}$ is the union of all sets $\widetilde{A}_{i_1} + \dots + \widetilde{A}_{i_r}$ with $r \leq n$, $1 \leq i_1 < \dots < i_r \leq n$. \end{enumerate} \end{cor} Proposition \ref{prop:1.3} and Corollary \ref{cor:1.5} can be inferred from the next observation. \begin{prop}[{\cite[Proposition 2.9]{Quasilinear}}]\label{prop:1.6} The convex subsets $A$ of $\operatorname{Ray} (V)$ correspond uniquely to the ray-closed submodules $W$ of $V$ via $ W = \widetilde{A} \cup \{ 0 \} , \ A = \operatorname{Ray} (W). $ \end{prop} \section{The function $\operatorname{CS} (X_1, - )$ on $[X_2, X_3]$}\label{sec:6} In this section $R$ denotes a supertropical semiring whose ghost ideal $eR = \{ 0 \} \cup \mathcal G$ is a nontrivial (bipotent) semifield, and $V$ stands for an $R$-module equipped with a fixed quadratic pair $(q, b)$ with $q$ anisotropic on $V$, i.e., $q^{-1}(0) = \{0\}$. Assuming that $X_1, X_2, X_3$ are three rays in~$V$, we explicitly analyze the monotonicity behavior of the function $\operatorname{CS} (X_1, -)$ on the interval $[X_2, X_3]$ with $X_2 \ne X_3$. For vectors $\varepsilon_i \in X_i$, $i = 1, 2, 3$, we employ the following six parameters \[ \alpha_i = q (\varepsilon_i) \ne 0, \quad \alpha_{ij} = b (\varepsilon_i, \varepsilon_j) = \alpha_{ji}, \qquad i, j \in \{ 1, 2, 3 \}, \ i < j. \] As our computations take place in the semifield $eR$, we can write $\leq, <$ instead of $\leq_{\nu}, <_{\nu}$. But the forthcoming formulas are to be used later in a supertropical context without further ado; otherwise we could simply assume that the parameters $\alpha_i, \alpha_{ij}$ belong to $eR$. Our analysis is performed in terms of the function \[ f (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3) \] with $\lambda \in eR \cup \{ \infty \} = \{ 0 \} \cup \mathcal G \cup \{ \infty \}$. Here $\lambda = \infty$ corresponds to $\mu = 0$ for $\mu = \lambda^{-1}$, and $\operatorname{CS} (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3) = \operatorname{CS} (\varepsilon_1, \mu \varepsilon_2 + \varepsilon_3)$, thus $f(\infty) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$.
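To make the six parameters tangible, we fix a small numerical sample to accompany this section; the semifield and the parameter values are our own illustrative choices and are not taken from \cite{QF2}. Work in the max-times semifield $eR = (\mathbb{R}_{\geq 0}, \max, \,\cdot\,)$, where $e = 1$ and sums are maxima, and set \[ \alpha_1 = \alpha_2 = \alpha_3 = 1, \qquad \alpha_{12} = \tfrac{1}{4}, \qquad \alpha_{13} = 1, \qquad \alpha_{23} = 2. \] Then $\operatorname{CS}(\varepsilon_1, \varepsilon_2) = \frac{\alpha_{12}^2}{\alpha_1 \alpha_2} = \tfrac{1}{16}$, $\operatorname{CS}(\varepsilon_1, \varepsilon_3) = \frac{\alpha_{13}^2}{\alpha_1 \alpha_3} = 1$, $\operatorname{CS}(\varepsilon_2, \varepsilon_3) = \frac{\alpha_{23}^2}{\alpha_2 \alpha_3} = 4 > e$, and, using $q(\varepsilon_2 + \lambda \varepsilon_3) = \alpha_2 + \lambda \alpha_{23} + \lambda^2 \alpha_3$ (cf.\ \eqref{eq:6.9} below), \[ f(\lambda) = \frac{\max\big(\tfrac{1}{16},\, \lambda^2\big)}{\max(1,\, 2\lambda,\, \lambda^2)}, \qquad f(0) = \tfrac{1}{16}, \quad f(\infty) = 1. \] We return to this sample after the general computation.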
We have \[ b (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3) = \alpha_{12} + \lambda \alpha_{13}, \] and so \begin{equation}\label{eq:6.2} f (\lambda) = e \frac{\alpha_{12}^2 + \lambda^2 \alpha_{13}^2}{\alpha_1 \; q (\varepsilon_2 + \lambda \varepsilon_3)} \in eR,\end{equation} which decomposes as \begin{equation}\label{eq:6.3} f (\lambda) = f_1 (\lambda) + f_2 (\lambda) = \max (f_1 (\lambda), f_2 (\lambda)) \end{equation} with \begin{equation}\label{eq:6.4} {\displaystyle f_1 (\lambda) = e \frac{\alpha_{12}^2}{\alpha_1 \; q (\varepsilon_2 + \lambda \varepsilon_3)} , \qquad f_2 (\lambda) = e \frac{\lambda^2 \alpha_{13}^2}{\alpha_1 \; q (\varepsilon_2 + \lambda \varepsilon_3)}}.\footnote{In the following we omit the factor $e = 1_{eR}$, reading all formulas in $eR$.} \end{equation} We proceed by analysing the monotonicity behavior of $f_1, f_2$ on $[0, \infty]$. Without loss of generality we assume that\begin{equation}\label{eq:6.5} \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3). \end{equation} (Otherwise interchange $X_2$ and $X_3$.) If $\operatorname{CS} (\varepsilon_1, \varepsilon_3) = 0$, then $\alpha_{12} = \alpha_{13} = 0$, and thus $f_1 = 0$, $f_2 = 0$, $f = 0$. \textit{Discarding this trivial case, we assume that} $\operatorname{CS} (\varepsilon_1, \varepsilon_3) > 0$. We rewrite the functions $f_1, f_2$ as follows: \begin{equation}\label{eq:6.6} f_1 (\lambda) = \frac{\alpha_{12}^2}{\alpha_1 \alpha_2} \; \frac{\alpha_2}{q (\varepsilon_2 + \lambda \varepsilon_3)} = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \frac{q (\varepsilon_2)}{q (\varepsilon_2 + \lambda \varepsilon_3)}, \end{equation} \begin{equation}\label{eq:6.7} f_2 (\lambda) = \frac{\alpha_{13}^2}{\alpha_1 \alpha_3} \; \frac{\lambda^2 \alpha_3}{q (\varepsilon_2 + \lambda \varepsilon_3)} = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{q (\lambda \varepsilon_3)}{q (\varepsilon_2 + \lambda \varepsilon_3)}. \end{equation} These formulas imply that \begin{equation}\label{eq:6.8} f_1 (\lambda) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_2), \quad f_2 (\lambda) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3), \qquad \text{for all } \lambda \in [0, \infty]. \end{equation} More explicitly, \begin{equation}\label{eq:6.9} q (\varepsilon_2 + \lambda \varepsilon_3) = \alpha_2 + \lambda \alpha_{23} + \lambda^2 \alpha_3, \end{equation} and so \begin{equation}\label{eq:6.10} f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \frac{\alpha_2}{\alpha_2 + \lambda \alpha_{23} + \lambda^2 \alpha_3}, \end{equation} \begin{equation}\label{eq:6.11}f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{\alpha_3}{\alpha_3 + \lambda^{-1} \alpha_{23} + \lambda^{-2} \alpha_2}. \end{equation} We conclude from these formulas that $f_1$ decreases (monotonically) on $[0, \infty]$ from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to zero, while $f_2$ increases (monotonically) from zero to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$. Moreover, we infer from~\eqref{eq:6.4} that $f_1 (\lambda) = f_2 (\lambda)$ precisely when $\alpha^2_{12} = \lambda^2 \alpha^2_{13}$, whence $\lambda^2 = \frac{\alpha_{12}^2}{\alpha_{13}^2}$. So $f_1 (\lambda) = f_2 (\lambda)$ holds at the unique argument $\xi$, that is, \begin{equation}\label{eq:6.12} \xi = \frac{\alpha_{12}}{\alpha_{13}}. \end{equation} It follows that $f$ coincides with $f_1$ on $[0, \xi]$ and with $f_2$ on $[\xi, \infty]$.
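Continuing our numerical sample (same illustrative parameters as above), $\xi = \frac{\alpha_{12}}{\alpha_{13}} = \tfrac{1}{4}$. Indeed, $f_1(\lambda) = \frac{1/16}{\max(1,\,2\lambda,\,\lambda^2)}$ is constant of value $\tfrac{1}{16}$ for $\lambda \leq \tfrac{1}{2}$, while $f_2(\lambda) = \lambda^2$ there, so $f_1(\tfrac14) = f_2(\tfrac14) = \tfrac{1}{16}$; accordingly $f = f_1 \equiv \tfrac{1}{16}$ on $[0, \tfrac14]$ and $f = f_2$ on $[\tfrac14, \infty]$, e.g., $f(2) = \frac{\max(1/16,\, 4)}{\max(1,\, 4,\, 4)} = 1 = \operatorname{CS}(\varepsilon_1, \varepsilon_3)$.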
Furthermore $f(\xi) = f_1 (\xi) = f_2 (\xi)$ is the minimal value attained by the function $f$ on $[0, \infty]$. In other words, $f(\xi)$ is the minimal value of $\operatorname{CS} (X_1, Z)$ for $Z$ running over $[X_2, X_3]$. The argument $\xi$ corresponds to the ray \begin{equation}\label{eq:6.13} M : = \operatorname{ray} \bigg(\varepsilon_2 + \frac{\alpha_{12}}{\alpha_{13}} \varepsilon_3\bigg) = \operatorname{ray} (\alpha_{13} \varepsilon_2 + \alpha_{12} \varepsilon_3), \end{equation} which we call the $X_1$-\textbf{median} of the interval $[X_2, X_3]$. This important ray $M$ will be studied in detail later. So far we have obtained an outline of the monotonicity behavior of $f_1, f_2, f$. This picture will now be refined. We start with the case that \begin{equation}\label{eq:6.14} \alpha_2 \alpha_3 \leq \alpha_{23}^2, \end{equation} which, except in the border case $\alpha_2 \alpha_3 = \alpha_{23}^2$, implies that the interval $[X_2, X_3]$ is excessive or exotic quasilinear \cite[Definition 2.8]{QF2}. In particular $\alpha_{23} \ne 0$. We determine the subsets of $[0, \infty]$ where the decreasing function $f_1$ takes its maximal value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ and the increasing function $f_2$ takes its maximal value $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ as follows. When $\operatorname{CS} (\varepsilon_1, \varepsilon_2) \ne 0$, we read off from Formulas \eqref{eq:6.6} and \eqref{eq:6.10} that $f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2)$ precisely when the summand $\alpha_2$ in $q (\varepsilon_2 + \lambda \varepsilon_3)$ is $\nu$-dominant, i.e., $\alpha_2 \geq \lambda \alpha_{23}$, $\alpha_2 \geq \lambda^2 \alpha_3$; equivalently, \[ \lambda^2 \leq \frac{\alpha_2^2}{\alpha_{23}^2}, \qquad \lambda^2 \leq \frac{\alpha_2}{\alpha_3}. \] From \eqref{eq:6.14} we infer that $\frac{\alpha_2}{\alpha_3} \geq \frac{\alpha_2^2}{\alpha_{23}^2}$, and therefore the condition $\lambda^2 \leq \frac{\alpha_2}{\alpha_3}$ can be dismissed. Thus \begin{equation}\label{eq:6.15} f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \dss\ \Leftrightarrow \ \lambda \leq \frac{\alpha_2}{\alpha_{23}}. \end{equation} The case $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = 0$ is degenerate; in it $f_1 = 0$, $f_2 = f$ (and $\xi = 0$). Concerning $f_2$, we read off from \eqref{eq:6.7} and \eqref{eq:6.11} that $f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ iff $\lambda \ne 0$ and the term $\alpha_3$ in the sum $\alpha_3 + \lambda^{-1} \alpha_{23} + \lambda^{-2} \alpha_2$ is $\nu$-dominant, which means that $\lambda^{-1} \alpha_{23} \leq \alpha_3$, $\lambda^{-2} \alpha_2 \leq \alpha_3$; equivalently, \[ \frac{\alpha^2_{23}}{\alpha_3^2} \leq \lambda^2, \quad \frac{\alpha_2}{\alpha_3} \leq \lambda^2. \] We conclude from \eqref{eq:6.14} that $\frac{\alpha_{23}^2}{\alpha_3^2} \geq \frac{\alpha_2}{\alpha_3}$, and thus \begin{equation}\label{eq:6.16} f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \dss \ \Leftrightarrow \ \lambda \geq \frac{\alpha_{23}}{\alpha_3}.
\end{equation} (Recall that we assume throughout that $\operatorname{CS} (\varepsilon_1, \varepsilon_3) \ne 0$.) We have seen that the intervals $[ 0, \frac{\alpha_2}{\alpha_{23}} ]$ and $[ \frac{\alpha_{23}}{\alpha_3}, \infty ]$ are the sets where the terms $\alpha_2$ and $\lambda^2 \alpha_3$ in the sum \eqref{eq:6.9} are $\nu$-dominant, and conclude that $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ is the interval in which the middle term $\lambda \alpha_{23}$ is $\nu$-dominant. Note that in the border case $\alpha_2 \alpha_3 = \alpha_{23}^2$ this interval retracts to the single point $\frac{\alpha_2}{\alpha_{23}} = \frac{\alpha_{23}}{\alpha_3}$. We infer from \eqref{eq:6.7} and \eqref{eq:6.11} that in the interval $[0, \frac{\alpha_2}{\alpha_{23}}]$ \begin{equation}\label{eq:6.17} f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{\lambda^2 \alpha_3}{\alpha_2}, \end{equation} and that in $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ \begin{equation}\label{eq:6.18} f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{\lambda \alpha_3}{\alpha_{23}}. \end{equation} Thus $f_2$ strictly increases on $[0, \frac{\alpha_2}{\alpha_{23}} ]$ from zero to \[ f_2 \left(\frac{\alpha_2}{\alpha_{23}}\right) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{\alpha_3}{\alpha_2} \left(\frac{\alpha_2}{\alpha_{23}}\right)^2 = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}, \] and then strictly increases on $[\frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ from this value to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$. Furthermore, we infer from \eqref{eq:6.6} and \eqref{eq:6.10} that in the interval $[\frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ \begin{equation}\label{eq:6.19} f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \frac{\alpha_2}{\lambda \alpha_{23}}, \end{equation} while in $[\frac{\alpha_{23}}{\alpha_3}, \infty ]$ \begin{equation}\label{eq:6.19.b} f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \frac{\alpha_2}{\lambda^2\alpha_3}. \end{equation} Thus $f_1$ strictly decreases on $[\frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ from the value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to \[ f_1 \left(\frac{\alpha_{23}}{\alpha_3}\right) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \frac{\alpha_2}{\alpha_3} \left(\frac{\alpha_3}{\alpha_{23}}\right)^2 = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_2)}{\operatorname{CS} ( \varepsilon_2, \varepsilon_3)} \] and then strictly decreases again on $[\frac{\alpha_{23}}{\alpha_3}, \infty ]$ from this value to zero. Note that the arguments $\lambda = \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3}$ correspond to the rays \begin{equation}\label{eq:6.20} X_{23} : = \operatorname{ray} (\alpha_{23} \varepsilon_2 + \alpha_2 \varepsilon_3), \qquad X_{32} : = \operatorname{ray} (\alpha_3 \varepsilon_2 + \alpha_{23} \varepsilon_3), \end{equation} which in the case $\alpha_2 \alpha_3 <_{\nu} \alpha_{23}^2$ are the \textbf{critical rays} of $[ X_2, X_3 ]$ (cf. \cite{QF2}). Summarizing the above study we obtain: \begin{prop}\label{prop:6.1} Assume that $0 \leq \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and that $\alpha_2 \alpha_3 \leq_{\nu} \alpha^2_{23}$.
\begin{itemize} \item[a)] The function $f_1$ is constant on $[0, \frac{\alpha_2}{\alpha_{23}} ]$ with value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ and strictly decreases on $[ \frac{\alpha_2}{\alpha_{23}}, \infty ]$ from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to zero, with the intermediate value \[ f_1 \left( \frac{\alpha_{23}}{\alpha_3} \right) = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_2)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}, \] provided that $\operatorname{CS} (\varepsilon_1, \varepsilon_2) \ne 0$. Otherwise $f_1 = 0$, whence $f_2 = f$ on $[0, \infty ]$. \item[b)] The function $f_2$ strictly increases on $[0, \frac{\alpha_{23}}{\alpha_3} ]$ from zero to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and remains constant on $[\frac{\alpha_{23}}{\alpha_3}, \infty ]$, with intermediate value \[ f_2 \left( \frac{\alpha_2}{\alpha_{23}} \right) = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}, \] provided that $\operatorname{CS} (\varepsilon_1, \varepsilon_3) \ne 0$. Otherwise $f_1 = f_2 = f = 0$ on $[ 0, \infty ]$. \end{itemize} \end{prop} Since $\xi$ is the unique argument $\lambda \in [0, \infty]$ with $f_1 (\lambda) = f_2 (\lambda)$, it follows from Proposition~\ref{prop:6.1} that $f_1 \geq f_2$ on $[0, \xi]$ and $f_1 \leq f_2$ on $[\xi, \infty]$, whence \begin{equation}\label{eq:6.22} f = f_1 \; \mbox{on} \; [0, \xi] \; \dss{\mbox{and}} \; f = f_2 \; \mbox{on} \; [\xi, \infty]. \end{equation} As seen below, the monotonicity behavior of $f$ is determined by the location of $\xi$ with respect to the interval $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3}]$. Since $f_2 = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \geq f_1$ on $[\frac{\alpha_{23}}{\alpha_3}, \infty]$ (cf. \eqref{eq:6.8}), it is clear that always \begin{equation}\label{eq:6.23} \xi \leq \frac{\alpha_{23}}{\alpha_3}. \end{equation} We have $\xi = 0$ iff $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = 0$, and then $f = f_2$ strictly increases on $[0, \frac{\alpha_{23}}{\alpha_3}]$ to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and remains constant on $[ \frac{\alpha_{23}}{\alpha_3}, \infty ]$. Assuming that $\operatorname{CS} (\varepsilon_1, \varepsilon_2) > 0$, if $\xi \leq \frac{\alpha_2}{\alpha_{23}}$, then $f$ has the constant value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \xi]$. Thus, as follows from Proposition~\ref{prop:6.1}, $f$ strictly increases on $[\xi, \frac{\alpha_2}{\alpha_{23}} ]$ from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to $\frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}$, and it strictly increases on $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ from this value to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$. Finally, $f$ remains constant on $[\frac{\alpha_{23}}{\alpha_3}, \infty ]$. The graph of the function $f$ is illustrated as follows.
\begin{figure}[h]\label{fig:1} \resizebox{0.5\textwidth}{!}{ \begin{tikzpicture} \draw[thick] (0,0) node[below]{0} --(14,0)node[below]{$\lambda$} ; \draw[thick] (0,0) -- (0,11)node[left]{} node[above]{f}; \draw[blue, thick] (0,8) node[left]{$CS(\varepsilon_1,\varepsilon_3)$}--(13,8); \draw[blue, thick] (0,5) node[left]{$\frac{CS(\varepsilon_1,\varepsilon_3)}{CS(\varepsilon_2,\varepsilon_3)}$}--(13,5); \draw[blue, thick] (0,2) node[left]{${CS(\varepsilon_1,\varepsilon_2)}$}--(13,2); \draw[blue, thick](10,0)node[below]{$\frac{\alpha_{23}}{\alpha_{3}}$}--(10,10); \draw[blue, thick](2,0)node[below]{$\xi$}--(2,10); \draw[blue, thick](3,0)node[below]{$\frac{\alpha_{2}}{\alpha_{23}}$}--(3,10); \draw[help lines] (0,0) grid (13,10); \draw[red, line width=0.5mm] (0,0) to[out=0, in=-120] (2,2) to [out=70, in=-100] (3,5); \draw[red, line width=0.5mm] (3,5) node[left]{}--(10,8); \draw[red, line width=0.5mm] (10,8) --(13,8) node[right]{$\bf f_2$}; \draw[red, line width=0.5mm] (0,2) node[left]{}--(3,2); \draw[red, line width=0.5mm] (3,2) to[out=-30, in=180] (13,0.1) node[right]{$\bf f_1$}; \end{tikzpicture}} \caption{The case $\xi \leq \frac{\alpha_2}{\alpha_{23}}$.} \end{figure} We read off from this analysis that \begin{equation}\label{eq:6.24} \xi < \frac{\alpha_2}{\alpha_{23}} \dss \ \Leftrightarrow \ \operatorname{CS} (\varepsilon_1, \varepsilon_2) < \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}, \end{equation} \begin{equation}\label{eq:6.25} \xi = \frac{\alpha_2}{\alpha_{23}} \dss \ \Leftrightarrow \ \operatorname{CS} (\varepsilon_1, \varepsilon_2) = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} .\end{equation} In the remaining case that \[\frac{\alpha_2}{\alpha_{23}} < \xi \leq \frac{\alpha_{23}}{\alpha_{3}} \] we conclude by Proposition \ref{prop:6.1} and \eqref{eq:6.22} that $f$ has the constant value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \frac{\alpha_2}{\alpha_{23}} ]$, then strictly decreases on $[\frac{\alpha_2}{\alpha_{23}}, \xi ]$ to a value $\rho : = f_1 (\xi) = f_2 (\xi)$ which we compute below. Then $f$ strictly increases on $[\xi, \frac{\alpha_{23}}{\alpha_3} ]$ from the value $\rho$ to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$, and finally remains constant on $[\frac{\alpha_{23}}{\alpha_{3}}, \infty ]$. This implies that \begin{equation}\label{eq:6.23p} \xi < \frac{\alpha_{23}}{\alpha_3}, \tag{\ref{eq:6.23}'}\end{equation} improving \eqref{eq:6.23}. By \eqref{eq:6.12} and \eqref{eq:6.18} we have $\rho = f_2 (\xi) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \frac{\alpha_{12}}{\alpha_{13}} \frac{\alpha_{3}}{\alpha_{23}}$, yielding \begin{equation}\label{eq:6.25b} \rho = \frac{\alpha_{12} \alpha_{13}}{\alpha_1 \alpha_{23}}, \end{equation} whose square gives \[ \rho^2 = \frac{\alpha_{12}^2 \alpha_{13}^2}{\alpha_1^2 \alpha_{23}^2} = \frac{\alpha_{12}^2}{\alpha_1 \alpha_{2}} \frac{\alpha_{13}^2}{\alpha_1 \alpha_{3}} \frac{\alpha_{2} \alpha_3}{\alpha_{23}^2}, \] i.e., \begin{equation}\label{eq:6.26} \rho^2 = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_2) \operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}.
\end{equation} It is now clear that $\xi > \frac{\alpha_2}{\alpha_{23}}$ iff $f_2 (\xi) < \operatorname{CS} (\varepsilon_1, \varepsilon_2)$ iff \[ \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_2) \operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} < \operatorname{CS} (\varepsilon_1, \varepsilon_2)^2, \] and thus \begin{equation}\label{eq:6.27} \xi > \frac{\alpha_2}{\alpha_{23}} \dss\ \Leftrightarrow \ \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} < \operatorname{CS} (\varepsilon_1, \varepsilon_2). \end{equation} Recall that we have assumed (cf. \eqref{eq:6.5}) that $0 < \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3)$. Assuming further that \begin{equation}\label{eq:6.28.a} \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} < \operatorname{CS} (\varepsilon_1, \varepsilon_2), \end{equation} we still need to distinguish the cases $\operatorname{CS} (\varepsilon_1, \varepsilon_2) < \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ (where \eqref{eq:6.28.a} holds automatically). This means that either $f(0) < f (\infty)$ or $f(0) = f (\infty)$, which we regard as a difference in the monotonicity behavior of $f$. The graph of $f$ is illustrated in Figure 2. \begin{figure}[h!]\label{fig:2} \resizebox{1\textwidth}{!}{ \begin{tikzpicture}[] \draw[thick] (0,0) node[below]{0} --(14,0)node[below]{$\lambda$} ; \draw[thick] (0,0) -- (0,11)node[left]{} node[above]{f}; \draw[blue, thick] (0,8) node[left]{$CS(\varepsilon_1,\varepsilon_3)$}--(13,8); \draw[blue, thick] (0,3) node[left]{$\frac{CS(\varepsilon_1,\varepsilon_3)}{CS(\varepsilon_2,\varepsilon_3)}$}--(13,3); \draw[blue, thick] (0,4.9) node[left]{$\rho$}--(13,4.9); \draw[blue, thick] (0,6) node[left]{${CS(\varepsilon_1,\varepsilon_2)}$}--(13,6); \draw[blue, thick](9,0)node[below]{$\frac{\alpha_{23}}{\alpha_{3}}$}--(9,10); \draw[blue, thick](4.6,0)node[below]{$\xi$}--(4.6,10); \draw[blue, thick](2,0)node[below]{$\frac{\alpha_{2}}{\alpha_{23}}$}--(2,10); \draw[help lines] (0,0) grid (13,10); \draw[red, line width=0.5mm] (0,0) to[out=0, in=-100] (2,3) ; \draw[red, line width=0.5mm] (2,3) node[left]{}--(9,8); \draw[red, line width=0.5mm] (9,8) --(13,8) node[right]{$\bf f_2$}; \draw[red, line width=0.5mm] (0,6) node[left]{}--(2,6); \draw[red, line width=0.5mm] (2,6) node[left]{}--(9,3); \draw[red, line width=0.5mm] (9,3) to[out=-60, in=180] (13,0.1) node[right]{$\bf f_1$}; \end{tikzpicture} \begin{tikzpicture}[] \draw[thick] (0,0) node[below]{0} --(14,0)node[below]{$\lambda$} ; \draw[thick] (0,0) -- (0,11)node[left]{} node[above]{f}; \draw[blue, thick] (0,8) node[left]{$CS(\varepsilon_1,\varepsilon_2)$}--(13,8); \draw[blue, thick] (0,3) node[left]{$\frac{CS(\varepsilon_1,\varepsilon_3)}{CS(\varepsilon_2,\varepsilon_3)}$}--(13,3); \draw[blue, thick] (0,5.5) node[left]{$\rho$}--(13,5.5); \draw[blue, thick](9,0)node[below]{$\frac{\alpha_{23}}{\alpha_{3}}$}--(9,10); \draw[blue, thick](5.5,0)node[below]{$\xi$}--(5.5,10); \draw[blue, thick](2,0)node[below]{$\frac{\alpha_{2}}{\alpha_{23}}$}--(2,10); \draw[help lines] (0,0) grid (13,10); \draw[red, line width=0.5mm] (0,0) to[out=0, in=-100] (2,3) ; \draw[red, line width=0.5mm] (2,3) node[left]{}--(9,8); \draw[red, line width=0.5mm] (9,8) --(13,8) node[right]{$\bf f_2$}; \draw[red, line width=0.5mm] (0,8) node[left]{}--(2,8); \draw[red, line width=0.5mm] (2,8) node[left]{}--(9,3); \draw[red, line width=0.5mm] (9,3) to[out=-60, in=180] (13,0.1) node[right]{$\bf f_1$}; \end{tikzpicture} } \caption{A. $(\operatorname{CS} (\varepsilon_1, \varepsilon_2) < \operatorname{CS} (\varepsilon_1, \varepsilon_3))$, \qquad B. $(\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)).$} \end{figure} We summarize the above analysis of $f(\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3)$ for $\operatorname{CS} (\varepsilon_2, \varepsilon_3) > e$ as follows. \begin{thm}\label{thm:6.2} Assume that $0 \leq \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and that $\alpha_2 \alpha_3 <_{\nu} \alpha^2_{23}$. \begin{itemize} \item[a)] If $\operatorname{CS} (\varepsilon_1, \varepsilon_3) = 0$, then $f = 0$. If $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = 0$ and $\operatorname{CS} (\varepsilon_1, \varepsilon_3) > 0$, then $f = f_2$ strictly increases on $[0, \frac{\alpha_{23}}{\alpha_3} ]$ from zero to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and finally remains constant on $[\frac{\alpha_{23}}{\alpha_3}, \infty ]$. \item[b)] Assume now that \[ 0 < \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3). \] In the case\\ \begin{equation} \operatorname{CS} (\varepsilon_1, \varepsilon_2) < \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} < \operatorname{CS} (\varepsilon_1, \varepsilon_3)\tag{A} \end{equation} the function $f$ increases on $[0, \infty ]$ monotonically from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$. More precisely, $\frac{\alpha_{12}}{\alpha_{13}} < \frac{\alpha_2}{\alpha_{23}}$, and $f$ has constant value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \frac{\alpha_{12}}{\alpha_{13}} ]$, then it strictly increases on $[\xi, \frac{\alpha_2}{\alpha_{23}} ]$ to the value $\frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}$ and strictly increases again on $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ to the value $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$, at which it remains constant on $[ \frac{\alpha_{23}}{\alpha_3}, \infty ]$. In the border case \begin{equation} \operatorname{CS} (\varepsilon_1, \varepsilon_2) = \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}, \tag{$\partial A$}\end{equation} we have $\frac{\alpha_{12}}{\alpha_{13}} = \frac{\alpha_2}{\alpha_{23}}$, where $f$ is constant of value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \frac{\alpha_2}{\alpha_{23}} ]$, strictly increases on $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_3} ]$ from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$, and finally is constant of value $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ on $[ \frac{\alpha_{23}}{\alpha_3}, \infty ]$.
In the remaining case \begin{equation} \frac{\operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)} < \operatorname{CS} (\varepsilon_1, \varepsilon_2) < \operatorname{CS} (\varepsilon_1, \varepsilon_3) \tag{B} \end{equation} and its border case \begin{equation} 0 < \operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \tag{$\partial B$}\end{equation} the function $f$ attains its minimal value $\rho$ at the unique point $\lambda = \xi = \frac{\alpha_{12}}{\alpha_{13}}$, where \[ \frac{\alpha_2}{\alpha_{23}} < \xi < \frac{\alpha_{23}}{\alpha_3}. \] Explicitly, $f$ has the constant value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \frac{\alpha_2}{\alpha_{23}} ]$, it strictly decreases on $[ \frac{\alpha_2}{\alpha_{23}}, \xi ]$ to the value \[ \rho : = \frac{\alpha_{12} \alpha_{13}}{\alpha_1 \alpha_{23}} = \sqrt{\frac{\operatorname{CS} (\varepsilon_1, \varepsilon_2) \operatorname{CS} (\varepsilon_1, \varepsilon_3)}{\operatorname{CS} (\varepsilon_2, \varepsilon_3)}}, \] then it strictly increases on $[ \xi, \frac{\alpha_{23}}{\alpha_3} ]$ to the value $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ and remains constant on ~$[\frac{\alpha_{23}}{\alpha_3}, \infty ]$. \end{itemize} \end{thm} Finally we discuss the behavior of $f(\lambda)$ in the easier case that $\alpha_{23}^2 \leq_{\nu} \alpha_2 \alpha_3$, assuming as before that $0 \leq \operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3)$. Formula \eqref{eq:6.9} for $q (\varepsilon_2 + \lambda \varepsilon_3)$ simplifies to \[ q (\varepsilon_2 + \lambda \varepsilon_3) = \alpha_2 + \lambda^2 \alpha_3, \] and so \begin{equation}\label{eq:6.28} f_1 (\lambda) = \frac{\alpha_{12}^2}{\alpha_1 (\alpha_2 + \lambda^2 \alpha_3)}, \qquad f_2 (\lambda) = \frac{\lambda^2 \alpha_{13}^2}{\alpha_1 (\alpha_2 + \lambda^2 \alpha_3)}. \end{equation} If $\operatorname{CS} (\varepsilon_1, \varepsilon_3) = 0$, i.e., $\alpha_{13} = 0$, then $f_1 = 0$, $f_2 = 0$, $f = 0$. Henceforth we assume that $\operatorname{CS} (\varepsilon_1, \varepsilon_3) > 0$ (but allow that $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = 0$). We read off from \eqref{eq:6.28} that, if $\lambda^2 \leq \frac{\alpha_2}{\alpha_3}$, then \begin{equation}\label{eq:6.29} f_1 (\lambda) = \frac{\alpha_{12}^2}{\alpha_1 \alpha_2} = \operatorname{CS} (\varepsilon_1, \varepsilon_2), \qquad f_2 (\lambda) = \frac{\lambda^2 \alpha_{13}^2}{\alpha_1 \alpha_2} = \lambda^2 \frac{\alpha_3}{\alpha_2} \operatorname{CS} (\varepsilon_1, \varepsilon_3), \end{equation} while, if $\lambda^2 \geq \frac{\alpha_2}{\alpha_3}$, then \begin{equation}\label{eq:6.30} f_1 (\lambda) = \frac{\alpha_{12}^2}{\lambda^2 \alpha_1 \alpha_3} = \lambda^{-2} \frac{\alpha_2}{\alpha_3} \operatorname{CS} (\varepsilon_1, \varepsilon_2), \qquad f_2 (\lambda) = \frac{\alpha_{13}^2}{\alpha_1 \alpha_3} = \operatorname{CS} (\varepsilon_1, \varepsilon_3). \end{equation} For the unique point $\lambda = \frac{\alpha_{12}}{\alpha_{13}} = \xi$ where $f_1 (\lambda) = f_2 (\lambda)$ we have \begin{equation}\label{eq:6.31} \xi^2 = \frac{\alpha_{12}^2}{\alpha_{13}^2} \leq \frac{\alpha_2}{\alpha_3}.
\end{equation} The case $\xi^2 = \frac{\alpha_2}{\alpha_3}$ means that $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$; in this case, for $\lambda^2 \leq \frac{\alpha_2}{\alpha_3}$ we have \[ f_1 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2) \geq f_2 (\lambda), \] and for $\lambda^2 \geq \frac{\alpha_2}{\alpha_3}$ we have \[ f_2 (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_3) \geq f_1 (\lambda). \] Thus $f = \max (f_1, f_2)$ has constant value $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$ on $[0, \infty]$. We are left with the case \[ 0 \leq \operatorname{CS} (\varepsilon_1, \varepsilon_2) < \operatorname{CS} (\varepsilon_1, \varepsilon_3). \] Let $\sqrt{\frac{\alpha_2}{\alpha_3}}$ denote the square root of $\frac{\alpha_2}{\alpha_3}$ in the ordered abelian group $\mathcal G^{\frac{1}{2}} \supset \mathcal G$. We learn from~\eqref{eq:6.29} that $f_1 (\lambda) \geq f_2 (\lambda)$ if $0 \leq \lambda \leq \xi$, while $f_1 (\lambda) \leq f_2 (\lambda)$ if $\xi \leq \lambda \leq \infty$,\footnote{Actually we have known this all along; cf. the arguments following \eqref{eq:6.12}.} and thus obtain \begin{equation}\label{eq:6.32} f (\lambda) = \left\{ \begin{array}{ll} \operatorname{CS} (\varepsilon_1, \varepsilon_2) & 0 \leq \lambda \leq \xi, \\ \lambda^2 \frac{\alpha_3}{\alpha_2} \operatorname{CS} (\varepsilon_1, \varepsilon_3) & \xi \leq \lambda \leq \sqrt{ \frac{\alpha_2}{\alpha_3}}, \\ \operatorname{CS} (\varepsilon_1, \varepsilon_3) & \sqrt{ \frac{\alpha_2}{\alpha_3}} \leq \lambda \leq \infty. \end{array} \right. \end{equation} We summarize all this as follows. \begin{thm}\label{thm:6.3} Assume that $\alpha_{23}^2 \leq_{\nu} \alpha_2 \alpha_3$ and $\operatorname{CS} (\varepsilon_1, \varepsilon_2) \leq \operatorname{CS} (\varepsilon_1, \varepsilon_3)$. \begin{itemize} \item[a)] If $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$, then $f$ is constant on $[0, \infty]$ with value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$. \item[b)] If $\operatorname{CS} (\varepsilon_1, \varepsilon_2) < \operatorname{CS} (\varepsilon_1, \varepsilon_3)$, then $\xi < \sqrt{\frac{\alpha_2}{\alpha_3}}$. If $\sqrt{\frac{\alpha_2}{\alpha_3}} \in \mathcal G$, then $f$ is constant with value $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ on $[0, \xi]$ (in particular $\xi = 0$ if $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = 0$), further increases strictly from $\operatorname{CS} (\varepsilon_1, \varepsilon_2)$ to $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ on $\big[\xi, \sqrt{\frac{\alpha_2}{\alpha_3}} \big]$, and then remains constant of value $\operatorname{CS} (\varepsilon_1, \varepsilon_3)$ on $\big[ \sqrt{\frac{\alpha_2}{\alpha_3}}, \infty \big]$. If $\sqrt{\frac{\alpha_2}{\alpha_3}} \not\in \mathcal G$, this holds again after replacing these intervals by $\big[\xi, \sqrt{\frac{\alpha_2}{\alpha_3}} \big[ \; : = \big \{ \lambda \in \mathcal G \ds \vert \xi \leq \lambda < \sqrt{\frac{\alpha_2}{\alpha_3}} \big\}$ and \ $ \big ] \sqrt{\frac{\alpha_2}{\alpha_3}}, \infty \big ]:= \big\{ \lambda \in \mathcal G \ds \vert \sqrt{\frac{\alpha_2}{\alpha_3}} < \lambda < \infty \big\} \cup \{ \infty \}$. \end{itemize} \end{thm} In the case $\sqrt{\frac{\alpha_2}{\alpha_3}} \in \mathcal G$ the graph of $f(\lambda)$ with respect to the variable $\lambda^2$ looks as follows.
\begin{figure}[h]\label{fig:3} \resizebox{0.5\textwidth}{!}{ \begin{tikzpicture}[] \draw[thick] (0,0) node[below]{0} --(14,0)node[below]{$\lambda^2$} ; \draw[thick] (0,0) -- (0,11)node[left]{} node[above]{f}; \draw[blue, thick] (0,8) node[left]{$CS(\varepsilon_1,\varepsilon_3)$}--(13,8); \draw[blue, thick] (0,4) node[left]{${CS(\varepsilon_1,\varepsilon_2)}$}--(13,4); \draw[blue, thick](9,0)node[below]{$\frac{\alpha_{2}}{\alpha_{3}}$}--(9,10); \draw[blue, thick](4.5,0)node[below]{$\frac{\alpha_{12}^2}{\alpha_{13}^2}$}--(4.5,10); \draw[help lines] (0,0) grid (13,10); \draw[red, line width=0.5mm] (0,0) node[left]{}--(9,8); \draw[red, line width=0.5mm] (9,8) --(13,8) node[right]{$\bf f_2$}; \draw[red, line width=0.5mm] (0,4) node[left]{}--(9,4); \draw[red, line width=0.5mm] (9,4) to[out=-60, in=180] (13,0.1) node[right]{$\bf f_1$}; \end{tikzpicture} } \caption{The case of $\sqrt{\frac{\alpha_2}{\alpha_3}} \in \mathcal G.$} \end{figure} \section{The CS-profiles on a ray interval}\label{sec:7} As before we assume that $eR$ is a (bipotent) semifield. Given anisotropic rays $Y_1, Y_2, W$ in the $R$-module $V$ (i.e., $Y_1, Y_2, W \in \operatorname{Ray} (V_{{\operatorname{an}}})$, where $V_{{\operatorname{an}}} : = \{ x \in V \ds \vert q (x) \ne 0 \} \cup \{ 0 \}$) with $Y_1 \ne Y_2$, we are interested in the $\operatorname{CS}$-\textit{profile of $W$ on the interval} $[Y_1, Y_2]$, by which we mean the monotonicity behavior of the function $\operatorname{CS} (W, -)$ on $[Y_1, Y_2]$ with respect to the total ordering $\leq_{Y_1}$,\footnote{This also includes information about the zero set of this function.} as studied in \S\ref{sec:6}. (There we labeled $W = X_1$, $Y_1 = X_2$, $Y_2 = X_3$.) More succinctly we denote the set $[Y_1, Y_2]$, equipped with the total ordering $\leq_{Y_1}$, by $\overrightarrow{[Y_1, Y_2]}$, and call it an \textbf{oriented closed ray interval}. Often we use the shorter term ``$W$-\textbf{profile on} $\overrightarrow{[Y_1, Y_2]}$'' instead of ``$\operatorname{CS}$-profile of $W$ on $\overrightarrow{[Y_1, Y_2]}$'', whenever it is clear from the context that we are dealing with $\operatorname{CS}$-ratios. \begin{defn}\label{def:7.1} Let $W, Y_1, Y_2 \in \operatorname{Ray} (V_{{\operatorname{an}}})$ be anisotropic, and assume that $Y_1 \ne Y_2$. \begin{itemize}\setlength{\itemsep}{2pt} \item[a)] We call a $W$-profile on $\overrightarrow{[Y_1, Y_2]}$ \textbf{ascending}, if $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$, and \textbf{descending} if $\operatorname{CS} (W, Y_1) \geq \operatorname{CS} (W, Y_2)$. \item[b)] We say that a $W$-profile on $\overrightarrow{[Y_1, Y_2]}$ is \textbf{monotone}, if it is either increasing\footnote{In basic terms, it means that $\operatorname{CS} (W, Z_1) \leq \operatorname{CS} (W, Z_2)$ for all $Z_1, Z_2 \in [Y_1, Y_2]$ with $[Y_1, Z_1] \subset [Y_1, Z_2]$.} or decreasing, and then usually speak of a ``monotone $W$-profile on $[Y_1, Y_2]$'', omitting the arrow indicating orientation, since it is irrelevant. \item[c)] We say that a $W$-profile on $\overrightarrow{[Y_1, Y_2]}$ is \textbf{positive}, if $\operatorname{CS} (W, Y_1) > 0$, $\operatorname{CS} (W, Y_2) > 0$, and so $\operatorname{CS} (W, Z) > 0$ for all $Z \in [Y_1, Y_2]$, and we say that the $W$-profile is \textbf{non-positive} (or ``attains zero''), if $\operatorname{CS} (W, Y_1) = 0$ or $\operatorname{CS} (W, Y_2) = 0$.
\end{itemize} \end{defn} \begin{rem}\label{rem:7.2} It is clear from \S\ref{sec:6} that if, say, $\operatorname{CS} (W, Y_1) = 0$ and $\operatorname{CS} (W, Y_2) > 0$, then $\operatorname{CS} (W, Z) > 0$ for all $Z \ne Y_1$ in $[Y_1, Y_2]$, while, if $\operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2) = 0$, then $\operatorname{CS} (W, Z) = 0$ for all $Z \in [Y_1, Y_2]$. \end{rem} We learned in \S\ref{sec:6} (cf. Theorems \ref{thm:6.2} and \ref{thm:6.3}) that the monotonicity behavior of the function $\operatorname{CS} (W, -)$ on $\overrightarrow{[Y_1, Y_2]}$ is essentially determined\footnote{In the case $\operatorname{CS} (Y_1, Y_2) \leq e$ we also need the information whether the square class $eq (Y_1) q(Y_2) \subset \mathcal G$ of $eR$ is trivial or not (cf. Theorem \ref{thm:6.3}).} by the three ratios $\operatorname{CS} (Y_1, Y_2)$, $\operatorname{CS} (W, Y_1)$, $\operatorname{CS} (W, Y_2)$, more precisely by certain strict inequalities ($<$) and equalities (=) involving these $\operatorname{CS}$-ratios. Accordingly, we classify the $\operatorname{CS}$-profiles on $\overrightarrow{[Y_1, Y_2]}$ by sorting them into ``basic types'', each given by a conjunction $T$ of inequalities involving these $\operatorname{CS}$-ratios and zero. From Theorems \ref{thm:6.2} and \ref{thm:6.3} we gain the following list of ``\textbf{basic ascending types}'', such that every ascending $W$-profile on $\overrightarrow{[Y_1, Y_2]}$ belongs to exactly one type $T$, and the condition~$T$ completely encodes the monotonicity behavior of the function $\operatorname{CS} (W,-)$ on $[Y_1, Y_2]$.\footnote{The reader may argue that our notion of basic type lacks a precise definition. We can remedy this by \textit{defining} the basic types on $[Y_1, Y_2]$ as all the conditions $A$, $\partial A$, $B$, $\dots$ appearing in Tables \ref{table:7.3}, \ref{table:7.4} and Scholium \ref{schol:7.5} below.} \begin{tabl}\label{table:7.3} Assume that $\operatorname{CS} (Y_1, Y_2) > e$. \begin{itemize} \item[a)] The positive ascending basic types (i.e., types of positive ascending $W$-profiles) on $\overrightarrow{[Y_1, Y_2]}$ are \begin{itemize}\setlength{\itemsep}{2pt} \item[$\phantom{\partial} A$:] \quad $0 < \operatorname{CS} (W, Y_1) < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}$, \item[$\partial A$:] \quad $0 < \operatorname{CS} (W, Y_1) = \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}$, \item[$\phantom{\partial} B$:] \quad $0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$, \item[$\partial B$:] \quad $0 < \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$. \end{itemize} (Note that $\partial B$ implies $0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1)$. So we could also write \\ $\partial B$: \quad $0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$.) \item[b)] The non-positive ascending basic types are \begin{itemize} \item[$\phantom{\partial} E$:] \quad $0 = \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$, \item[$\partial E$:] \quad $0 = \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$. \end{itemize} \end{itemize} All these types are increasing, i.e., determine increasing $W$-profiles, except $B$ and $\partial B$. \end{tabl} \begin{tabl}\label{table:7.4} Assume that $\operatorname{CS} (Y_1, Y_2) \leq e$ and $Y_1 \ne Y_2$.
We have the following list of ascending basic types on $\overrightarrow{[Y_1, Y_2]}$. \begin{itemize} \item[a)] The positive ascending basic types\\ $\phantom{\partial} C$: \quad $0 < \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$, \\ $\partial C$: \quad $0 < \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$. \item[b)] The non-positive ascending basic types\footnote{Although $D$ and $\partial D$ are the same sentences as $E$ and $\partial E$ in Table~\ref{table:7.3}, we use a different letter ``$D$'', since we include in the type the information whether $\operatorname{CS} (Y_1, Y_2) >e$ or $\operatorname{CS} (Y_1, Y_2) \leq e$.}\\ $\phantom{\partial} D$: \quad $0 = \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$,\\ $\partial D$: \quad $0 = \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$. \end{itemize} Note that all these types are increasing. \end{tabl} For each basic type $T$ there is a \textbf{reverse type} $T'$, obtained by interchanging $Y_1$ and $Y_2$ in condition $T$. Thus the reverses of the types listed in Tables~\ref{table:7.3} and \ref{table:7.4} exhaust all basic types of descending $\operatorname{CS}$-profiles on $\overrightarrow{[Y_1, Y_2]}$. We obtain the following list of such types. \begin{schol}\label{schol:7.5} $ $ \begin{itemize} \item[a)] When $\operatorname{CS} (Y_1, Y_2) > e$, the descending basic types are $A'$, $\partial A' : = (\partial A)'$, $B'$, $\partial B' : = (\partial B)' = \partial B$, $E'$, $\partial E' : = (\partial E)' = \partial E$. \item[b)] When $\operatorname{CS} (Y_1, Y_2) \leq e$, the descending basic types are $C'$, $\partial C' : = (\partial C)' = \partial C$, $D'$, $\partial D' : = (\partial D)' = \partial D$. \end{itemize} All these types are decreasing except $B'$ and $\partial B'$, which are not monotone. \end{schol} In later sections additional conditions on $\operatorname{CS} (Y_1, Y_2)$, $\operatorname{CS} (W, Y_1)$, $\operatorname{CS} (W, Y_2)$ will come into play, which arise from a basic type $T$ by relaxing the strict inequality sign $<$ to $\leq$ at one or several places. We call such a condition $U$ a \textbf{relaxation} of $T$, and call all the arising relaxations the \textbf{composed} $\operatorname{CS}$-\textbf{types} on $[Y_1, Y_2]$. The reason for the latter term is that such a relaxation $U$ is a disjunction \begin{equation}\label{eq:7.1} U = T_1 \vee \dots \vee T_r \end{equation} of several basic types $T_i$, as will be seen (actually with $r \leq 4$). Since every $W$-profile on $\overrightarrow{[Y_1, Y_2]}$ belongs to exactly one basic type $T$, it is then obvious that the $T_i$ in \eqref{eq:7.1} are uniquely determined by $U$ up to permutation, $T$ being one of them. We call the $T_i$ the \textbf{components} of the relaxation $U$. We extend part of the terminology of basic types to their relaxations in the obvious way. The \textbf{reverse type} $U'$ of $U$ arises by interchanging $Y_1$ and $Y_2$ in the condition $U$. The composed type $U$ is \textbf{ascending} (resp. \textbf{descending}), if the sentence $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$ (resp. $\operatorname{CS} (W, Y_1) \geq \operatorname{CS} (W, Y_2)$) is a consequence of $U$, and $U$ is \textbf{positive}, if $U$ implies $0 < \operatorname{CS} (W, Y_i)$ for $i = 1,2$. We now list all relaxations of all ascending basic types on $\overrightarrow{[Y_1, Y_2]}$, first in the case $\operatorname{CS} (Y_1, Y_2)> e$, and then in the case $\operatorname{CS} (Y_1, Y_2) \leq e$.
It will turn out that all these relaxations are again ascending. \begin{schol}\label{schol:7.6} Assume that $\operatorname{CS} (Y_1, Y_2) > e$. \begin{itemize} \item[a)] The basic type $E : 0 = \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$ has only one relaxation \[ \overline{E} : \quad 0 = \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2), \] for which $\overline{E} = E \vee \partial E$. \item[b)] The basic type $A : 0 < \operatorname{CS} (W, Y_1) < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}$ has the relaxations $$\begin{array}{rll} A_0: & 0 \leq \operatorname{CS} (W, Y_1) < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}, \\[2mm] \overline{A}: & 0 < \operatorname{CS} (W, Y_1) \leq \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)},\\[2mm] \overline{A}_0: & 0 \leq \operatorname{CS} (W, Y_1) \leq \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}. \end{array}$$ For these relaxations we have $$\begin{array}{lll} A_0 &= A \vee (0 = \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)) = A \vee E\\[2mm] \overline{A} &= A \vee (0 < \operatorname{CS} (W, Y_1) = \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}) = A \vee \partial A\\[2mm] \overline{A}_0 &= \overline{A} \vee (0 = \operatorname{CS} (W, Y_1) \leq \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} ( Y_1, Y_2)}) = \overline{A} \vee \overline{E} = A \vee \partial A \vee E \vee \partial E. \end{array}$$ \item[c)] The positive relaxations of $B : 0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$ are $$\begin{array}{rll} \overline{B}: & 0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} \leq \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2),\\ \widetilde{B}:& 0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2),\\ \widetilde{\overline{B}}: & 0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} \leq \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2). \end{array}$$ We have $\widetilde{\overline{B}} = \overline{B} \vee \widetilde{B}$, since $\operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$ implies $\frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1)$, and $\frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} = \operatorname{CS} (W, Y_1)$ implies $\operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$. Also \begin{equation}\label{eq:7.2} \widetilde{B} = B \vee (0 < \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)) = B \vee \partial B, \end{equation} since $0 < \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$ implies $0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1)$,\\ and \begin{equation}\label{eq:7.3} \overline{B} = B \vee (0 < \operatorname{CS} (W, Y_1) = \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}) = B \vee \partial A. \end{equation} Finally \begin{equation}\label{eq:7.4} \widetilde{\overline{B}} = \overline{B} \vee \widetilde{B} = B \vee \partial B \vee \partial A .
\end{equation} We obtain the non-positive relaxations of $B$ by replacing in all these sentences $B, \overline{B}, \widetilde{B}, \widetilde{\overline{B}}$ the part $\big(0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}\big)$ by $\big(0 \leq \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}\big) = \big(0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}\big) \vee \big (0 = \operatorname{CS} (W, Y_2)\big)$. Since all 4 sentences $B, \overline{B}, \widetilde{B}, \widetilde{\overline{B}}$ have the consequence $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$, we get 4 non-positive relaxations of $B$, namely \begin{equation}\label{eq:7.5} \begin{array}{lll} B_0 & = B \vee (0 = \operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)) = B \vee \partial E,\\[1mm] \overline{B}_0 & = \overline{B} \vee \partial E, \\[1mm] \widetilde{B}_0 & = \widetilde{B} \vee \partial E, \\ \widetilde{\overline{B}}_0 &= \widetilde{\overline{B}} \vee \partial E = B \vee \partial B \vee \partial A \vee \partial E.\end{array} \end{equation} \end{itemize} \end{schol} \begin{rem}\label{rem:7.7} Assume that $\operatorname{CS} (Y_1, Y_2) > e$. \begin{itemize} \item[a)] We have $$ \begin{array}{lll} A_0 \wedge \overline{A} &= (A \vee E) \wedge (A \vee \partial A) = A, \\[2mm] \overline{B} \wedge \widetilde{B} & = (B \vee \partial A) \wedge (B \vee \partial B) = B,\end{array}$$ since different basic types are incompatible (= contradictory). \item[b)] The condition $$\rm Asc : = 0 \leq \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$$ is the disjunction of all ascending basic types, \begin{equation}\label{eq:7.9} \rm Asc = A \vee \partial A \vee B \vee \partial B \vee E \vee \partial E, \end{equation} since every $W \in \operatorname{Ray} (V)$ fulfills exactly one of the basic type sentences.\\ The condition $$\rm Asc^+ : = 0 < \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$$ is the disjunction of all positive strictly ascending basic types, \begin{equation}\label{eq:7.10} \rm Asc^+ = A \vee \partial A \vee B. \end{equation} \end{itemize} Note that both $\rm Asc$ and $\rm Asc^+$ are not relaxations of basic types, and thus are not regarded as composed types. \end{rem} The table of composed types in the case $\operatorname{CS} (Y_1, Y_2) \leq e$ is much simpler. \begin{schol}\label{schol:7.8} Assume that $\operatorname{CS} (Y_1, Y_2) \leq e$. The basic type $$D : 0 = \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$$ has only one relaxation: \[ \overline{D} = D \vee \partial D : 0 = \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2). \] The basic type $C : 0 < \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2)$ has three relaxations, namely \[ \begin{array}{lcl} \overline{C} & : = & C \vee \partial C : 0 < \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2),\\[2mm] C_0 & : = & C \vee D : 0 \leq \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2),\\[2mm] \overline{C}_0 & := & \overline{C} \vee \overline{D} : 0 \leq \operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2). \end{array} \] $\overline{C}_0$ is the disjunction of all increasing basic types on $\overrightarrow{[Y_1, Y_2]}$. \end{schol} \section{A convexity lemma for linear CS-inequalities, and first applications}\label{sec:8} As before we assume that $eR$ is a semifield and $(q, b)$ is a quadratic pair on an $R$-module ~$V$ with $q$ anisotropic.
\begin{lem}\label{lem:8.1} Given $w_1, \dots, w_n \in V \setminus \{ 0 \}$ and $\lambda_1, \dots, \lambda_n \in \mathcal G$, let $w : = \sum\limits_{i = 1}^n \lambda_i w_i$. For any $y \in V \setminus \{ 0 \}$ the following holds \begin{equation}\label{eq:8.1} \operatorname{CS} (w, y) = \sum\limits_{i = 1}^n \operatorname{CS} (w_i, y) \alpha_i \end{equation} with $\alpha_i \in \mathcal G$, namely \begin{equation}\label{eq:8.2} \alpha_i : = \frac{q (\lambda_i w_i)}{q(w)}, \end{equation} for which $0 < \alpha_i \leq e$. \end{lem} \begin{proof} Since $b (w, y) = \sum\limits_{i = 1}^n b (\lambda_i w_i, y)$, we have \[ \operatorname{CS} (w, y) = \frac{b (w, y)^2}{q(w) q(y)} = \sum\limits_{i = 1}^n \frac{b (\lambda_i w_i, y)^2}{q(w) q(y)} = \sum\limits_{i = 1}^n \frac{b (\lambda_i w_i, y)^2}{q(\lambda_i w_i) q(y)} \; \frac{q(\lambda_i w_i)}{q(w)}\] \[ \; = \sum\limits_{i = 1}^n \operatorname{CS} (\lambda_i w_i, y) \frac{q(\lambda_i w_i)}{q(w)} = \sum\limits_{i = 1}^n \operatorname{CS} (w_i, y) \frac{q (\lambda_i w_i)}{q(w)}. \] \end{proof} \begin{lem}\label{lem:8.2} Given $x_1, \dots, x_m$, $y_1, \dots, y_m$, $\alpha_1, \dots, \alpha_m$ in $eR$ such that \begin{equation} \alpha_i x_i < \alpha_i y_i \qquad \mbox{for} \; 1 \leq i \leq m, \tag{$*$} \end{equation} then \begin{equation} \sum\limits_{i = 1}^m \alpha_i x_i < \sum\limits_{i = 1}^m \alpha_i y_i. \tag{$**$} \end{equation} \end{lem} \begin{proof} Choose $r \in \{ 1, \dots, m \}$ such that $\alpha_r x_r = \mathop{\rm Max}\limits_{1 \leq i \leq m} \{\alpha_i x_i\}$, then \[ \sum\limits_{i = 1}^m \alpha_i x_i = \alpha_r x_r < \alpha_r y_r \leq \sum\limits_{i=1}^m \alpha_i y_i. \] \end{proof} \begin{rem}\label{rem:8.3} If $(*)$ holds with the strict inequality $<$ replaced by the weak inequality $\leq$ (respectively the equality sign =) everywhere, then $(**)$ holds with $\leq$ (respectively =) everywhere. This is trivial. \end{rem} We are ready to prove a convexity lemma for linear $\operatorname{CS}$-inequalities, which will play a central role in the rest of the paper. \begin{lem}[CS-Convexity Lemma]\label{lem:8.4} Let $\Box$ be one of the symbols $<, \leq, =$. Given rays $W_1, \dots, W_m$, $Y_1, \dots, Y_n$ in $V$ and scalars $\gamma_1, \dots, \gamma_n$, $\delta_1, \dots, \delta_n$ in $eR$ such that \begin{equation} \sum\limits_{j = 1}^n \gamma_j \operatorname{CS} (W_i, Y_j) \; \ds\Box \; \sum\limits_{j = 1}^n \delta_j \operatorname{CS} (W_i, Y_j), \qquad \text{for } i = 1, \dots, m. \tag{$*$} \end{equation} Then, for every $W \in \operatorname{conv} (W_1, \dots, W_m)$, \begin{equation} \sum\limits_{j = 1}^n \gamma_j \operatorname{CS} (W, Y_j) \; \ds \Box \; \sum\limits_{j = 1}^n \delta_j \operatorname{CS} (W, Y_j). \tag{$**$} \end{equation} \end{lem} \begin{proof} We verify the assertion for $<$. By Lemma \ref{lem:8.1} we have scalars $\alpha_1, \dots, \alpha_m \in \mathcal G$ such that \[ \operatorname{CS} (W, Y_j) = \sum\limits_{i = 1}^m \alpha_i \operatorname{CS} (W_i, Y_j) \] for $j = 1, \dots, n$.
Using Lemma \ref{lem:8.2} and the fact that the $\alpha_i$ are units of $eR$, we obtain \begin{align*} \sum\limits_{j = 1}^n \gamma_j \operatorname{CS} (W, Y_j) & = \sum\limits_{j = 1}^n \gamma_j \sum\limits_{i = 1}^m \alpha_i \operatorname{CS} (W_i, Y_j) = \sum\limits_{i = 1}^m \alpha_i \sum\limits_{j = 1}^n \gamma_j \operatorname{CS} (W_i, Y_j) \\ & < \sum\limits_{i = 1}^m \alpha_i \sum\limits_{j = 1}^n \delta_j \operatorname{CS} (W_i, Y_j) = \sum\limits_{j = 1}^n \delta_j \sum\limits_{i = 1}^m \alpha_i \operatorname{CS} (W_i, Y_j) = \sum\limits_{j = 1}^n \delta_j \operatorname{CS} (W, Y_j). \end{align*} For the other signs $\leq,$ $ =$ the argument is analogous, using Remark~\ref{rem:8.3} instead of Lemma~\ref{lem:8.2}. (Here it does not matter that the $\alpha_i$'s are units of $eR$.) \end{proof} We start with an application of the CS-Convexity Lemma~\ref{lem:8.4} to subsets of the ray space $\operatorname{Ray} (V)$ related to the CS-profile types introduced in \S\ref{sec:7}. \begin{defn}\label{def:8.5} Given a pair $(Y_1, Y_2)$ of different rays in $V$ and a basic type $T$ (as listed in Tables \ref{table:7.3}, \ref{table:7.4} and Scholium \ref{schol:7.5}), we define the $T$-\textbf{locus of} $\overrightarrow{[Y_1, Y_2]}$ as the set of all rays~$W$ in $V$ having a $W$-profile of type $T$ on $\overrightarrow{[Y_1, Y_2]}$, and denote this subset of $\operatorname{Ray} (V)$ by $\rm Loc_T (Y_1, Y_2)$. \end{defn} It is understood that, if $T$ is defined under the condition, say, $\operatorname{CS} (Y_1, Y_2) > e$,\footnote{Taken up to interchanging $Y_1, Y_2$, the type $T$ is listed in Table \ref{table:7.3}.} the sentence ($\operatorname{CS} (Y_1, Y_2) > e$) is part of the sentence $T$, and thus $\rm Loc_T (Y_1, Y_2) = \emptyset$ when actually $\operatorname{CS} (Y_1, Y_2) \leq e$. So, using standard notation from logic, we may write \begin{equation}\label{eq:8.3} \rm Loc_T (Y_1, Y_2) = \{ W \in \operatorname{Ray} (V) \ds \vert W \models T \} \end{equation} for any pair $(Y_1, Y_2)$ of different rays in $V$ and any basic type $T$. We call these subsets $\rm Loc_T (Y_1, Y_2)$ of $\operatorname{Ray} (V)$ the \textbf{basic loci} of the interval $[Y_1, Y_2]$. \begin{thm}\label{thm:8.6} The family of nonempty basic loci of any interval $[Y_1, Y_2]$ in $\operatorname{Ray} (V)$ is a partition of $\operatorname{Ray} (V)$ into convex subsets. \end{thm}\begin{proof} a) The family of basic loci of $[Y_1, Y_2]$ is a partition of $\operatorname{Ray} (V)$, as for a given $W \in \operatorname{Ray} (V)$ the function $\operatorname{CS} (W, -)$ on $\overrightarrow{[Y_1, Y_2]}$ has a profile of type $T$ for exactly one basic type $T$.\vskip 1.5mm \noindent b) Given a basic type $T$ for $\overrightarrow{[Y_1, Y_2]}$ it is a straightforward consequence of Lemma~\ref{lem:8.4} (with $m = 2$) that $\rm Loc_T (Y_1, Y_2)$ is convex. We show this in the case that $\operatorname{CS} (Y_1, Y_2) > e$ and $T$ is the condition $B$ in Table~\ref{table:7.3}. Let $W_1, W_2 \in \rm Loc_B (Y_1, Y_2)$ and $W \in [W_1, W_2]$ be given. Then \begin{equation} 0 < \frac{\operatorname{CS} (W_i, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W_i, Y_1) < \operatorname{CS} (W_i, Y_2), \qquad \text{for } i = 1,2. \tag{$*$} \end{equation} Applying the CS-Convexity Lemma~\ref{lem:8.4} to the inequality on the right (with $n = 2$, $\gamma_1 = e$, $\gamma_2 = 0$, $\delta_1 = 0$, $\delta_2 = e$), we obtain \[ \operatorname{CS} (W, Y_1) < \operatorname{CS} (W, Y_2), \] and thus also $0 < \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}$.
Applying the lemma to the inequality in the middle of ($\ast$) (with $n = 2$, $\gamma_1 = 0$, $\gamma_2 = \operatorname{CS} (Y_1, Y_2)^{-1}$, $\delta_1 = e$, $\delta_2 = 0$) we obtain \[ \frac{\operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)} < \operatorname{CS} (W, Y_1). \] This proves condition $B$ for $(W, Y_1, Y_2)$. \end{proof} \begin{defn}\label{def:8.7} Given a composite type $U$ of CS-profiles (cf. \S\ref{sec:7}), in analogy to \eqref{eq:8.3} we define the \textbf{$U$-locus} of $\overrightarrow{[Y_1, Y_2]}$ as \begin{equation}\label{eq:8.4} \rm Loc_U (Y_1, Y_2) : = \{ W \in \operatorname{Ray} (V) \ds \vert W \models U \}. \end{equation} \end{defn} \begin{thm}\label{thm:8.8} The subset $\rm Loc_U (Y_1, Y_2)$ is convex in $\operatorname{Ray} (V)$ for every pair $(Y_1, Y_2)$ of different rays in $V$. \end{thm} \begin{proof} $U$ is by definition a relaxation of a basic type $T$, and so the inequalities in $U$ are obtained by replacing in $T$ the strict inequality $<$ by $\leq$ at several places. The CS-Convexity Lemma~\ref{lem:8.4}, taken now for weak inequalities, gives the claim. \end{proof} \begin{rem}\label{rem:8.9} As stated in \eqref{eq:7.1}, $U$ is a disjunction of finitely many basic types, \[ U = T_1 \vee T_2 \vee \dots \vee T_r. \] It follows from \eqref{eq:8.4} that \begin{equation}\label{eq:8.5} \rm Loc_U (Y_1, Y_2) = \bigcup\limits_{i = 1}^r \rm Loc_{T_i} (Y_1, Y_2). \end{equation} \end{rem} \section{Downsets of restricted $\operatorname{QL}$-stars}\label{sec:9} Recall that the \textbf{QL-star} $\operatorname{QL}(X)$ of a ray $X$ (with respect to $q$) is the set of all $Y \in \operatorname{Ray}(V)$ for which the pair $(X,Y)$ is quasilinear; equivalently, the interval $[X,Y]$ is quasilinear \cite[Definition 4.5]{Quasilinear}. The QL-stars determine the quasilinear behavior of $q$ on the ray space. Given a $\operatorname{QL}$-star $\operatorname{QL}(X)$ we investigate the \textbf{downset of} $\operatorname{QL}(X)$, i.e., the set of all $\operatorname{QL}$-stars $\operatorname{QL}(Y) \subset \operatorname{QL}(X)$, partially ordered by inclusion. We translate this problem into the language of rays by considering the downset $\{ Y \in \operatorname{Ray} (V) \ds \vert Y \preceq_{\operatorname{QL}} X \}$ with respect to the quasiordering $\preceq_{\operatorname{QL}}$ given in~(4.1). (Recall from \cite[\S5]{Quasilinear} that a $\operatorname{QL}$-star $\operatorname{QL}(Y)$ corresponds uniquely to the equivalence class of $Y$ with respect to $\sim_{\operatorname{QL}}$.) More generally, fixing a nonempty set $D \subset \operatorname{Ray} (V)$, for any $X \in \operatorname{Ray} (V)$ we define \[ \operatorname{QL}_D (X) : = \operatorname{QL} (X) \cap D, \] and explore the downsets of this family of sets, ordered by inclusion. To do so, at no extra cost, we pass to a coarsening $\preceq_D$ of the quasiordering $\preceq_{\operatorname{QL}}$, defined as \[ X \preceq_D X' \dss \ \Leftrightarrow \ \operatorname{QL}_D (X) \subset \operatorname{QL}_D (X'), \] with associated equivalence relation \[ X \sim_D X' \dss \ \Leftrightarrow \ \operatorname{QL}_D (X) = \operatorname{QL}_D (X'). \] We call the set $\operatorname{QL}_D (X)$ the \textbf{restriction of the $\operatorname{QL}$-star of $X$ to $D$.} We use the monotone $\operatorname{CS}$-profiles on an interval $[Y_1, Y_2]$, whenever they occur, to investigate these downsets.
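As a simple illustration of these notions, we record two extreme cases, both immediate from the definitions. For $D = \operatorname{Ray} (V)$ the quasiordering $\preceq_D$ coincides with $\preceq_{\operatorname{QL}}$, while for a singleton $D = \{ W \}$ we have \[ \operatorname{QL}_D (X) = \left\{ \begin{array}{lcl} \{ W \} & \mbox{if} & W \in \operatorname{QL} (X), \\[1mm] \emptyset & \mbox{if} & W \notin \operatorname{QL} (X), \end{array} \right. \] so that $\sim_D$ has at most two equivalence classes, namely the set of all rays $X$ whose $\operatorname{QL}$-star contains $W$, and its complement.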
For this we need a criterion for quasilinearity of pairs of anisotropic rays from \cite{QF2} (established under a stronger assumption on $R$ than before), which for the present paper reads as follows: \begin{thm}[{\cite[Theorems 6.7 and 6.11]{QF2}}]\label{thm:9.1} Assume that $R$ is a \textbf{nontrivial tangible supersemifield}, i.e., $R$ is a supertropical semiring in which both $\mathcal G = eR \setminus \{ 0 \}$ and $\mathcal{T} = R \setminus (eR)$ are abelian groups under multiplication, $e \mathcal{T} = \mathcal G$, and $\mathcal G \ne \{ e \}$. Then a pair $(W, Z)$ of anisotropic rays on $V$ is quasilinear iff either $\operatorname{CS} (W, Z) \leq e$, or $\mathcal G$ is discrete, $\operatorname{CS} (W, Z) = c_0$\footnote{$c_0$ denotes the smallest element $> e$ in $\mathcal G$. It exists since $\mathcal G$ is discrete.}, and both $W$ and $Z$ are $g$-isotropic (i.e., $q(w)$, $q(z) \in \mathcal G$ for all $w \in W$, $z \in Z$); in the latter case we say that $(W, Z)$ is \textbf{exotic quasilinear}. \end{thm} \begin{thm}\label{thm:9.2} Assume that $R$ is a nontrivial tangible supersemifield and that the quadratic form $q$ is anisotropic on $V$. Given a (nonempty) subset $D$ of $\operatorname{Ray} (V)$ let $X, Y_1, Y_2$ be rays in~$V$ with $Y_1 \preceq_D X$ and $Y_2 \preceq_D X$, and assume that the $\operatorname{CS}$-profile of every $W \in D$ on $[Y_1, Y_2]$ is monotone. Then the following holds. \begin{itemize}\setlength{\itemsep}{2pt} \item[i)] If $\mathcal G$ is dense, then $Y \preceq_D X$ for every $Y \in [Y_1, Y_2]$. \item[ii)] If $\mathcal G$ is discrete, then $Y \preceq_D X$ for every $g$-anisotropic $Y \in [Y_1, Y_2]$. \item[iii)] If $\mathcal G$ is discrete and at least one of the rays $Y_1, Y_2$ is $g$-isotropic, then $Y \preceq_D X$ for every $Y \in [Y_1, Y_2]$. \end{itemize} \end{thm}\begin{proof} The study in \S\ref{sec:6} of the functions $\operatorname{CS} (W, -)$ on closed intervals reveals that for a given $W \in \operatorname{Ray} (V)$ this function is monotone, i.e., increasing or decreasing on $\overrightarrow{[Y_1, Y_2]}$, iff \begin{equation}\label{eq:9.1} \forall Z \in [Y_1, Y_2]: \quad \operatorname{CS} (W, Z) \geq \min (\operatorname{CS} (W, Y_1), \operatorname{CS} (W, Y_2)). \end{equation} In the following we only rely on this property. Let $Y \in [Y_1, Y_2]$ and $W \in \operatorname{QL} (Y) \cap D$ be given. We prove that $W \in \operatorname{QL} (X)$ under conditions i) -- iii), and then we will be done. \\ a) Suppose that $\operatorname{CS} (W, Y) \leq e$. Since the $W$-profile on $[Y_1, Y_2]$ is monotonic, we conclude from \eqref{eq:9.1} that $\operatorname{CS} (W, Y_i) \leq e$ for $i = 1$ or 2, whence $W \in \operatorname{QL}_D (Y_i)$, and so $W \in \operatorname{QL}_D (X)$ since $Y_i \preceq_D X$. This settles claim i) of the theorem, as well as the other claims in the case $\operatorname{CS} (W, Y) \leq e$.\vskip 1.5mm \noindent b) There remains the case that $\mathcal G$ is discrete and $\operatorname{CS} (W, Y) = c_0$. The pair $(W, Y)$ is exotic quasilinear, since $W \in \operatorname{QL}(Y)$, and thus $W$ and $Y$ are $g$-isotropic. But, under the assumption in ii) of the theorem this does not hold, whence this case cannot occur.\vskip 1.5mm \noindent c) We are left with a proof of part iii). If $\operatorname{CS} (W, Y_1) \leq e$ or $\operatorname{CS} (W, Y_2) \leq e$, the same argument as in a) gives that $W \in \operatorname{QL} (Y_1)$ or $W \in \operatorname{QL} (Y_2)$, and so $W \in \operatorname{QL} (X)$.
Henceforth we assume that $\operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2) = c_0$. If, say, $Y_1$ is $g$-isotropic, then the pair $(W, Y_1)$ is exotic quasilinear, whence $W \in \operatorname{QL} (Y_1)$, and so $W \in \operatorname{QL} (X)$, as desired. \end{proof} We list several cases where a given $\operatorname{CS}$-profile is monotonic, now using in detail the profile analysis from \S\ref{sec:6}. (Here it suffices to assume that $eR$ is a semifield.) \begin{schol}\label{schol:9.3} As before we assume that $q$ is anisotropic on $V$. \begin{itemize}\setlength{\itemsep}{2pt} \item[a)] If $[Y_1, Y_2]$ is $\nu$-quasilinear, then $[Y_1, Y_2]$ has a monotonic $W$-profile for every $W \in \operatorname{Ray} (V)$. \item[b)] If $[Y_1, Y_2]$ is $\nu$-excessive, with critical rays $Y_{12}$ (near $Y_1$) and $Y_{21}$ (near $Y_2$), then both $[Y_1, Y_{12}]$ and $[Y_{21}, Y_2]$ have a monotonic $W$-profile for every $W$. \item[c)] Given $W \in \operatorname{Ray} (V)$ let $M = M (W, Y_1, Y_2)$ denote the $W$-median of $[Y_1, Y_2]$. Assume that the $W$-profile of $[Y_1, Y_2]$ is not monotone. Then $[Y_1, M]$ and $[M, Y_2]$ are the maximal closed subintervals of $[Y_1, Y_2]$ with a monotone $W$-profile. \item[d)] Of course, if $[Y_1, Y_2]$ has a monotonic $W$-profile for a given ray $W$, then the same holds for every closed subinterval of $[Y_1, Y_2]$. \end{itemize} \end{schol} We next search for rays $Z \preceq_D Y$ in a given interval $[X, Y]$. Here and elsewhere it is convenient to extend our notion of $\operatorname{QL}$-stars and related objects from rays to vectors in a trivial way as follows (assuming only that $eR$ is a semifield). \begin{notation}\label{notat:9.4} Given $x, y \in V \setminus \{ 0 \}$ we define $$ \operatorname{QL} (x) : = \operatorname{QL} (\operatorname{ray} (x)), $$ and set \[ x \preceq_{\operatorname{QL}} y \dss \ \Leftrightarrow \ \operatorname{ray} (x) \preceq_{\operatorname{QL}} \operatorname{ray} (y), \] equivalently \[ x \preceq_{\operatorname{QL}} y \dss \ \Leftrightarrow \ \operatorname{QL} (x) \subset \operatorname{QL} (y). \] Consequently we define \[ \begin{array}{lclll} x \sim_{\operatorname{QL}} y & \ \Leftrightarrow \ & \operatorname{QL} (x) = \operatorname{QL} (y) & \ \Leftrightarrow \ & \operatorname{ray} (x) \sim_{\operatorname{QL}} \operatorname{ray} (y). \end{array} \] More generally, if a set $D \subset \operatorname{Ray} (V)$ is given, we put \[ \begin{array}{lcl} \operatorname{QL}_D (x) & = & \operatorname{QL}_D (\operatorname{ray} (x)) ,\\[2mm] x \preceq_D y & \ \Leftrightarrow \ & \operatorname{QL}_D (x) \subset \operatorname{QL}_D (y), \\[2mm] x \sim_D y & \ \Leftrightarrow \ & \operatorname{QL}_D (x) = \operatorname{QL}_D (y). \end{array} \] \end{notation} As before we assume that $q$ is anisotropic on $V$, and that $R$ is a nontrivial tangible supersemifield. \begin{lem}\label{lem:9.5} Let $x, y \in V \setminus \{ 0 \}$ and assume that $q(x+y)\cong_{\nu} q(y)$. \begin{enumerate} \item[a)] For any $w \in V \setminus \{ 0 \}$ \[ \operatorname{CS} (w, y) \leq \operatorname{CS} (w, x+y). \] \item[b)] If $q(y) \in \mathcal G$, then $q(x+y) = q(y)$.
\end{enumerate} \end{lem} \begin{proof} a): As in the proof of Lemma \ref{lem:8.1}, we have \[ \operatorname{CS} (w, x+y) = \operatorname{CS} (w, x) \frac{q(x)}{q(x+y)} + \operatorname{CS} (w, y) \frac{q(y)}{q (x+y)} \] which implies that $\operatorname{CS} (w, y) \leq \operatorname{CS} (w, x+y)$, since $\frac{q(y)}{q(x+y)} \cong_{\nu} e$. \vskip 1.5mm \noindent b): A priori we have $q(x+y) = q(x) + q(y) + b (x,y)$, and so $q(y) \leq q(x+y)$ in the minimal ordering on $R$. Since by assumption $eq (y) = eq (x+y)$ and $q(y) \in \mathcal G$, it follows that $ q(x+y) = q (y)$.\footnote{Note that $\alpha \cong_{\nu} \beta $, $\alpha \in \mathcal G \Rightarrow \beta \leq \alpha$ for any $\alpha, \beta \in R$.} \end{proof} \begin{thm}\label{thm:9.6} Assume that $q (x+y) \cong_{\nu} q(y)$ for given $x, y \in V \setminus \{ 0 \}$. When $\mathcal G$ is discrete, assume also that $q(y) \in \mathcal G$. \begin{enumerate}\setlength{\itemsep}{2pt} \item[a)] $x+y \preceq_{\operatorname{QL}} y$ (and so $x+y \preceq_D y$ for any $D \subset \operatorname{Ray} (V)$). \item[b)] The interval $[ \operatorname{ray} (x+y), \operatorname{ray} (y)]$ has a monotone $W$-profile for every ray $W$ in $V$. \end{enumerate} \end{thm} \begin{proof} a): Let $w \in V \setminus \{ 0 \}$ be given with $w \in \operatorname{QL} (x+y)$; we verify that $w \in \operatorname{QL} (y)$. By Lemma~\ref{lem:9.5}.a we know that \begin{equation} \operatorname{CS} (w, y) \leq \operatorname{CS} (w, x+y). \tag{$*$}\end{equation} Assume first that $\mathcal G$ is dense; then $\operatorname{CS} (w, x+y) \leq e$, since $w \in \operatorname{QL} (x+y)$. From ($\ast$) it follows that $\operatorname{CS} (w, y) \leq e$, and so $w \in \operatorname{QL} (y)$. Assume next that $\mathcal G$ is discrete. If $\operatorname{CS} (w, y)\leq e$, then certainly $w \in \operatorname{QL} (y)$. There remains the case that \begin{equation} \operatorname{CS} (w, y) \geq c_0. \tag{$**$}\end{equation} Now ($\ast$) tells us that $\operatorname{CS} (w, x+y) \geq c_0$. Since $w \in \operatorname{QL} (x+y)$, we conclude by Theorem~\ref{thm:9.1} that $\operatorname{CS} (w, x+y) = c_0$ and $q(w) \in \mathcal G$, $q(x+y) \in \mathcal G$. From ($\ast$) and ($\ast \ast$) we infer that $\operatorname{CS} (w, y) = c_0$. Thus, again by Theorem~\ref{thm:9.1}, $w \in \operatorname{QL} (y)$.\vskip 1.5mm \noindent b): Let $Z : = \operatorname{ray} (x+y)$, $Y : = \operatorname{ray} (y)$. We proved that $Z \preceq_{\operatorname{QL}} Y$, i.e., $\operatorname{QL} (Z) \subset \operatorname{QL} (Y)$. This implies that $Z \in \operatorname{QL} (Y)$, i.e., that $[Y, Z]$ is quasilinear. All the more $[Y, Z]$ is $\nu$-quasilinear, and Scholium~\ref{schol:9.3}.a confirms that it has a monotone $W$-profile for any $W \in \operatorname{Ray} (V)$. \end{proof} \section{The medians of a closed ray-interval}\label{sec:10} For the moment we only assume that $(q, b)$ is a quadratic pair on an $R$-module $V$ where~$R$ is a supertropical semiring without zero divisors, and $\lambda x \neq 0$ for nonzero $\lambda \in R$ and all nonzero $x\in V $, cf. \cite[\S6]{QF2}. Recall that two vectors $x, x' \in V$ are said to be \textbf{ray-equivalent}, written $x \sim_r x'$, if there exist scalars $\lambda, \lambda' \in eR \setminus \{ 0 \} = \mathcal G$ with $\lambda x = \lambda' x'$; the ray-equivalence class of $x \ne 0 $ is denoted $\operatorname{ray} (x)$.
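As a simple illustration of ray-equivalence (with an ad hoc choice of $R$), let $R = eR$ be the max-plus semifield $(\mathbb R \cup \{ - \infty \}, \max, +)$, written additively, and $V = R^n$. Scalar multiplication by $\lambda \in \mathcal G = \mathbb R$ reads \[ \lambda \cdot (x_1, \dots, x_n) = (\lambda + x_1, \dots, \lambda + x_n), \] and so two vectors with all entries in $\mathbb R$ are ray-equivalent iff they differ by a common additive shift of all their coordinates; the rays spanned by such vectors thus correspond to the points of the tropical projective torus $\mathbb R^n / \mathbb R (1, \dots, 1)$.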
We introduce a map \[ m : V \times V \times V \longrightarrow V \] by the rule \begin{equation}\label{eq:10.1} m (w, x, y) : = b (w, y) x + b (w, x) y. \end{equation} Obviously this map is $R$-trilinear, and so is compatible with ray-equivalence, i.e., if $w \sim_r w'$, $x \sim_r x'$, $y \sim_r y'$ then $m (w, x, y) \sim_r m (w', x', y')$. Thus for any three rays $W = \operatorname{ray} (w)$, $X = \operatorname{ray} (x)$, $Y = \operatorname{ray} (y)$, where not both $b (w, x) = 0$ and $b (w, y) = 0$, we obtain a well defined ray \begin{equation}\label{eq:10.2} M(W, X, Y) = \operatorname{ray} ( m (w, x, y) )\in [X, Y]. \end{equation} In fact, at least one of the vectors $b(w, y) x$, $b (w, x)y$ is not zero, and so $$m (w, x, y) \in (R x + Ry) \setminus \{ 0 \}.$$ Here the notion of the ``polar'' of a subset $C$ of $\operatorname{Ray} (V)$ comes into play, defined as follows. \begin{defn}\label{def:10.1} Given a (nonempty) subset $C$ of $\operatorname{Ray} (V)$, the \textbf{polar} $C^{\perp}$ is the set of all $W \in \operatorname{Ray} (V)$ with $b (w, x) = 0$ for all $w \in W$ and $x \in V \setminus \{ 0 \}$ with $\operatorname{ray} (x) \in C$. \end{defn} It is immediate from Definition \ref{def:10.1} that any polar $C^{\perp}$ is convex and also that a set $C \subset \operatorname{Ray}( V)$ and its convex hull $\operatorname{conv} (C)$ have the same polar, \begin{equation}\label{eq:10.3} C^{\perp} = \operatorname{conv} (C)^{\perp}. \end{equation} Thus it suffices most often to consider polars of convex sets. If $C$ is convex, then we can characterize both $C$ and $C^{\perp}$ by the ray closed subsets $\widetilde{C}$ and $(C^{\perp})^{\sim}$ of $V \setminus \{ 0 \}$, associated to~$C$ and $C^{\perp}$ (cf. \cite[Notation 2.4]{Quasilinear}), as follows: \begin{equation}\label{eq:10.4} (C^{\perp})^{\sim} = \{ w \in V \setminus \{ 0 \} \ds\vert b (w, x) = 0 \ \mbox{for every} \; x \in \widetilde{C} \}. \end{equation} For the ray-closed submodules \[ T = \widetilde{C} \cup \{ 0 \}, \quad U = (C^{\perp})^{\sim} \cup \{ 0 \} \] (cf. \cite[Remark 2.6]{Quasilinear}) we conclude that \begin{equation}\label{eq:10.5} U = \{ w \in V \ds\vert b (w, t) = 0 \ \mbox{for every} \; t \in T \}.\end{equation} We now take a look at the complement of a polar in $\operatorname{Ray} (V)$. \begin{prop}\label{prop:10.2} Assume that $C$ is a subset of $\operatorname{Ray} (V)$ and that $W_1$ is a ray in $V$ where $W_1 \not\in C^{\perp}$. Then for any $W_2 \in \operatorname{Ray} (V)$ the half-open interval $[W_1, W_2[$ \footnote{ The overall assumption that $eR$ is a semifield is not necessary, cf. \cite[Definition 7.5]{QF2}.} is disjoint from~$C^{\perp}$. \end{prop} \begin{proof}Writing $X^{\perp} : = \{ X \}^{\perp}$ for $X \in \operatorname{Ray} (V)$ it is obvious that \begin{equation}\label{eq:10.6} C^{\perp} = \bigcap\limits_{X \in C} X^{\perp}.\end{equation} Thus, there exists some $X \in C$ such that $W_1 \not\in X^{\perp}$. Picking vectors $w_1 \in W_1$, $w_2 \in W_2$, $x \in X$, then $b(w_1, x) \ne 0$, which implies that for any scalars $\lambda_1 \in R \setminus \{ 0 \}$, $\lambda_2 \in R$ \[ b (\lambda_1 w_1 + \lambda_2 w_2, x) = \lambda_1 b (w_1, x) + \lambda_2 b (w_2, x) \ne 0. \] Since such vectors $\lambda_1 w_1 + \lambda_2 w_2$ represent all rays in $[W_1, W_2 [$ , we conclude that $[W_1, W_2 [ \, \cap \, X^{\perp} = \emptyset$. All the more $[W_1, W_2 [ \, \cap \, C^{\perp} = \emptyset$.
\end{proof} \begin{cor} \label{cor:10.3.} Given any subset $C$ of $\operatorname{Ray} (V)$, both sets $C^{\perp}$ and $\operatorname{Ray} (V) \setminus C^{\perp}$ are convex.\end{cor} \begin{proof} Convexity of $C^{\perp}$ had been observed above. The convexity of $\operatorname{Ray} (V) \setminus C^{\perp}$ follows from Proposition~\ref{prop:10.2} by taking $W_2$ in $\operatorname{Ray} (V) \setminus C^{\perp}$. \end{proof} We are ready for the key definition of this section. \begin{defn} \label{def:10.4} Given three rays $W, X, Y$ in $V$ with $W \not\in [X, Y]^{\perp} = \{ X, Y \}^{\perp}$, the ray $M (W, X, Y)$ from \eqref{eq:10.2} is called the $W$-\textbf{median} of the pair $(X, Y)$.\end{defn} We denote this ray most often by $M_W (X, Y)$ instead of $M (W, X, Y)$. This notation emphasizes the fact, obvious from \eqref{eq:10.1} and \eqref{eq:10.2}, that \begin{equation}\label{eq:10.7} M_W (X, Y) = M_W (Y, X). \end{equation} The assignment $W \mapsto M_W (X, Y)$ has convexity properties as follows. \begin{thm} \label{thm:10.5} Assume that $W_1, W_2, X, Y$ are rays in $V$ with $W_1 \not\in \{ X, Y \}^{\perp}$, $W_2 \not\in \{ X, Y \}^{\perp}$. \begin{enumerate}\setlength{\itemsep}{2pt} \item[a)] $[W_1, W_2] \cap \{ X, Y \}^{\perp} = \emptyset$ and so $M_W (X, Y)$ is defined for every $W \in [W_1, W_2 ]$. \item[b)] For any $W \in [W_1, W_2 ]$ \begin{equation}\label{eq:10.8} M_W (X, Y) \in [M_{W_1} (X, Y), M_{W_2} (X, Y) ]. \end{equation} \end{enumerate} \end{thm} \begin{proof} a): Clear from Proposition~\ref{prop:10.2}, applied to $C = \{ X, Y \}$.\vskip 1.5mm \noindent b): We may assume that $W \ne W_1$, $W \ne W_2$. For vectors $w_1 \in W_1$, $w_2 \in W_2$, $x \in X$, $y \in Y$, there exist scalars $\lambda_1, \lambda_2 \in R \setminus \{ 0 \}$ such that $w = \lambda_1 w_1 + \lambda_2 w_2 \in W$. Then $m (w, x, y) = \lambda_1 m (w_1, x, y) + \lambda_2 m (w_2, x, y)$, and so $M_W (X, Y) = \operatorname{ray} (\lambda_1 m (w_1, x, y) + \lambda_2 m (w_2, x, y) ) \in [ M_{W_1} (X, Y), M_{W_2} (X, Y) ]$. \end{proof} We state an immediate consequence of this theorem. \begin{cor} \label{cor:10.6} Let $X, Y$ be (different) rays in $V$. Assume that $S$ is a convex subset of the closed interval $[X, Y]$. Then the set of all rays $W$ in $V$ with $W \not\in \{ X, Y \}^{\perp}$ and $M_W (X, Y) \in ~S$ is convex in $\operatorname{Ray} (V)$. \end{cor} Assume now again, as mostly from \S\ref{sec:6} onward, that $eR$ is a semifield. Then we know from \cite[Theorem 8.8]{QF2} that the ``border rays'' $X, Y$ of $[X, Y]$ are uniquely determined by $[X, Y]$ up to permutation. Thus we are entitled to call $M_W (X, Y)$ the $W$-\textbf{median of} $[X, Y]$. The $W$-median $M_W (X, Y)$ has already appeared in our analysis of the function $f(\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3)$ in \S\ref{sec:6} under the labeling $X_1, X_2, X_3$ instead of $W, X, Y$, with $\varepsilon_i \in X_i$ and $\varepsilon_1, \varepsilon_2, \varepsilon_3$ instead of $w, x, y$. From the analysis in \S\ref{sec:6} we can read off the following important facts about $M_W (X, Y)$. \begin{thm} \label{thm:10.7} Assume that $eR$ is a semifield and $W, X, Y$ are rays in $V$ where $W \not\in [X, Y]^{\perp} = \{ X, Y \}^{\perp}$, so that the $W$-median $M_W (X, Y)$ is defined. Assume also that the rays $W, X, Y$ are anisotropic (i.e., none of the sets $q(W)$, $q(X)$, $q(Y)$ contains zero), so that the $\operatorname{CS}$-ratio $\operatorname{CS} (W, Z)$ is defined for every $Z \in [X, Y]$.
\begin{enumerate}\setlength{\itemsep}{2pt} \item[a)] The function \[ Z \longmapsto \operatorname{CS} (W, Z), \quad [X, Y] \longrightarrow eR \] attains its minimal value at \[ M : = M_W (X, Y). \] \item[b)] If the function $\operatorname{CS} (W, -)$ is monotone on $[X, Y]$ (with respect to $\leq_X$), then \begin{equation}\label{eq:10.8b} \operatorname{CS} (W, M) = \min (\operatorname{CS} (W, X), \operatorname{CS} (W, Y)) \end{equation} is this minimal value. \item[c)] $\operatorname{CS} (W, -)$ is monotone on $[X, Y]$ iff either $\operatorname{CS} (X, Y) \leq e$ or $\operatorname{CS} (X, Y) > e$ and $M \not\in \, ] X^*, Y^* [$ where $X^*, Y^*$ denote the critical rays of $[X, Y]$ near $X$ and $Y$ respectively (cf. \cite[\S9]{QF2}). \item[d)] If $\operatorname{CS} (W, -)$ is not monotone (and so $\operatorname{CS} (X, Y) > e$, $M \in \, ] X^*, Y^*[$ ), the minimal value of $\operatorname{CS} (W, -)$ on $[X, Y]$ is \begin{equation}\label{eq:10.9} \operatorname{CS} (W, M) = \sqrt{\frac{\operatorname{CS} (W, X) \operatorname{CS} (W, Y)}{\operatorname{CS} (X, Y)}} < \min (\operatorname{CS} (W, X), \operatorname{CS} (W, Y)). \end{equation} Furthermore, $Z = M$ is the \textbf{only ray in} $[X, Y]$ \textbf{where the minimum is attained.} \end{enumerate} \end{thm} \begin{proof} As pointed out, all proofs have been done in \S\ref{sec:6}. The ray $M$ corresponds to $\lambda = \xi : = \frac{\alpha_{12}}{\alpha_{13}}$, cf. \eqref{eq:6.12} and \eqref{eq:6.13}. Claim a) is evident by the argument following \eqref{eq:6.12}. Claim b) then is trivial. The other two claims c) and d) follow from the description of the monotonic behavior of $f(\lambda)$ in Proposition~\ref{prop:6.1} and Theorems~\ref{thm:6.2} and \ref{thm:6.3}. Note that there, when $\operatorname{CS} (X_2, X_3) > e$, the critical rays of $[X_2, X_3]$ near $X_2$ and $X_3$ (as defined in \cite[\S9]{QF2}) correspond to $\lambda = \frac{\alpha_2}{\alpha_{23}}$ and $\lambda = \frac{\alpha_{23}}{\alpha_3}$ respectively, cf. \eqref{eq:6.20}. \end{proof} \begin{rem} \label{rem:10.8} In the terminology of \S\ref{sec:7} the function $\operatorname{CS} (W,-)$ on $[X, Y]$ is not monotone iff the $\operatorname{CS}$-profile of $W$ on $[X, Y]$ is of type $B, B'$ or $\partial B \ (= \partial B')$, cf. Tables~\ref{table:7.3} and \ref{table:7.4} and Scholium~\ref{schol:7.5}. The ray $W$ is in the polar $\{ X, Y \}^{\perp}$ iff $\operatorname{CS} (W, -)$ is zero everywhere on $[X, Y]$, which means that the $\operatorname{CS}$-profile of $W$ on $[X, Y]$ is of type $\partial D$ or $\partial E$. \end{rem} \section{On maxima and minima of $\operatorname{CS} (W,-)$ on finitely generated convex sets in the ray space}\label{sec:11} We call a convex subset $C$ of $\operatorname{Ray} (V)$ \textbf{finitely generated}, if $C$ is the convex hull of a finite set of rays $\{ Y_1, \dots, Y_n \}$, and call $\{ Y_1, \dots, Y_n \}$ a \textbf{set of generators} of $C$. Note that then the sum $(Y_1)_0 + \dots + (Y_n)_0$ of the submodules $(Y_i)_0 = Y_i \cup \{ 0 \}$ of $V$ is the ray-closed submodule $U$ of $V$ with $\operatorname{Ray} (U) = C$, cf. \S\ref{sec:1}. Assume again, as previously, that the ghost ideal $eR$ of the supertropical semiring $R$ is a semifield and $(q, b)$ is a quadratic pair on the $R$-module $V$. Assume also that $q$ is anisotropic on $V$. (Otherwise we replace $V$ by $V_{{\operatorname{an}}}$). Then the $\operatorname{CS}$-ratio $\operatorname{CS} (W, Z)$ is defined for any two rays $W, Z$ in $V$.
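Before proceeding we illustrate Theorem~\ref{thm:10.7} by a small computation, with ad hoc data that are not needed in the sequel. Take for $eR$ the max-plus semifield $(\mathbb R \cup \{ - \infty \}, \max, +)$, written additively, so that $e = 0$ and squaring and taking square roots amount to doubling and halving. On $V = (eR)^3$ with base $\varepsilon_1, \varepsilon_2, \varepsilon_3$ take the quadratic pair determined (in the notation of \S\ref{sec:6}) by \[ \alpha_1 = \alpha_2 = \alpha_3 = 0, \qquad \alpha_{12} = 1, \quad \alpha_{13} = 2, \quad \alpha_{23} = 3, \] which is anisotropic, and put $W = \operatorname{ray} (\varepsilon_1)$, $X = \operatorname{ray} (\varepsilon_2)$, $Y = \operatorname{ray} (\varepsilon_3)$. Then, additively, $\operatorname{CS} (W, X) = 2$, $\operatorname{CS} (W, Y) = 4$ and $\operatorname{CS} (X, Y) = 6 > e$. The $W$-median is \[ M_W (X, Y) = \operatorname{ray} (\alpha_{13} \varepsilon_2 + \alpha_{12} \varepsilon_3) = \operatorname{ray} (\varepsilon_2 + (-1) \varepsilon_3), \] corresponding to $\lambda = \xi = \alpha_{12} - \alpha_{13} = -1$, which lies strictly between the critical arguments $\alpha_2 - \alpha_{23} = -3$ and $\alpha_{23} - \alpha_3 = 3$. Hence $\operatorname{CS} (W, -)$ is not monotone on $[X, Y]$, and \eqref{eq:10.9} predicts the minimal value $\frac{1}{2} (2 + 4 - 6) = 0$. Indeed, $q (\varepsilon_2 + (-1) \varepsilon_3) = \max (0, -2, 2) = 2$ and $b (\varepsilon_1, \varepsilon_2 + (-1) \varepsilon_3) = \max (1, 1) = 1$, whence $\operatorname{CS} (W, M_W (X, Y)) = 2 \cdot 1 - 0 - 2 = 0 < 2 = \min (\operatorname{CS} (W, X), \operatorname{CS} (W, Y))$, in accordance with Theorem~\ref{thm:10.7}.d.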
Assume finally that $C$ is a finitely generated convex subset of $\operatorname{Ray} (V)$ and $(Y_1, \dots, Y_n)$ is a fixed sequence of generators of $C$. Given a ray $W$ on $V$, we enquire whether the function $\operatorname{CS} (W, -)$ on $C$ has a minimal value, and then where on $C$ this minimal value is attained.\footnote{We leave aside the important problem whether $C$ has a \textit{unique minimal} set of generators. It would take us too far afield.} To get a precise hold on the function $Y \mapsto \operatorname{CS} (W, Y)$, $C \rightarrow eR$, we use the following notation. Choose vectors $y_i \in e Y_i$ $(1 \leq i \leq n)$, and, given a ray $W \in \operatorname{Ray} (V)$ and a ray $Y \in C$, choose vectors $w \in eW$, $y \in e Y$; thus $Y_i = \operatorname{ray} (y_i)$, $W = \operatorname{ray} (w)$, $Y = \operatorname{ray} (y)$. We have a presentation \begin{equation}\label{eq:11.1} y = \lambda_1 y_1 + \dots + \lambda_n y_n \end{equation} with a sequence $(\lambda_1, \dots, \lambda_n) \in e R^n$, not all $\lambda_i = 0$. Let $\alpha_i : = q (y_i)$, $\beta_{ij} : = b (y_i, y_j)$. For $i, j \in \{ 1, \dots, n \}$ we define \[ \gamma_{ij} = \left\{ \begin{array}{lcl} \alpha_i & \mbox{if} & i = j, \\[1mm] \beta_{ij} & \mbox{if} & i \ne j, \\ \end{array} \right. \] and write \begin{equation}\label{eq:11.2} q (y) = \sum\limits_{1 \leq i \leq j \leq n} \gamma_{ij} \lambda_i \lambda_j. \end{equation} By a computation as in the proof of Lemma~\ref{lem:8.1} we obtain a useful formula for $\operatorname{CS} (W, Y) = \operatorname{CS} (w, y)$, interchanging there the arguments $w, y$, namely \[ \operatorname{CS} (w, y) = \sum\limits_{i = 1}^n \operatorname{CS} (w, \lambda_i y_i) \frac{q (\lambda_i y_i)}{q (y)}, \] and so \begin{equation}\label{eq:11.3} \operatorname{CS} (W, Y) = \sum\limits_{i = 1}^n \operatorname{CS} (W, Y_i) \frac{\lambda_i^2 \alpha_i}{q (y)}. \end{equation} We now are ready for the central result of this section. \begin{thm}\label{thm:11.1} Let $W, Y_1, \dots, Y_n \in \operatorname{Ray} (V)$ be given and $C : = \operatorname{conv} (Y_1, \dots, Y_n)$. Then the $eR$-valued function $\operatorname{CS} (W, -)$ on $C$ has a minimum. It is attained at $Y_r$ for some $r \in \{ 1, \dots, n \}$ or at the $W$-median $M_W (Y_r, Y_s)$ of some interval $[Y_r, Y_s]$, $1 \leq r < s \leq n$, on which $\operatorname{CS} (W, -)$ is not monotone. \end{thm} \begin{proof} Without loss of generality we assume that $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_i)$ for $2 \leq i \leq n$. We distinguish two cases. \begin{description}\setlength{\itemsep}{2pt} \item[\textbf{Case A}] $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y)$ for every $Y \in C$. Now the minimum is attained at $Y_1$. \item[\textbf{Case B}] There exists some $Y \in C$ with $\operatorname{CS} (W, Y) < \operatorname{CS} (W, Y_1)$. \end{description} We choose $(\lambda_1, \dots, \lambda_n) \in eR^n$ such that \eqref{eq:11.1} and \eqref{eq:11.2} hold for $W$ and $Y$, and so does \eqref{eq:11.3}. We further choose a dominant term $\gamma_{rs} \lambda_r \lambda_s$, $r \leq s$, in the sum on the right of \eqref{eq:11.2}. Then $q (y) = \gamma_{rs} \lambda_r \lambda_s$, and so \[ \operatorname{CS} (W, Y) = \sum\limits_{i = 1}^n \operatorname{CS} (W, Y_i) \frac{\lambda_i^2 \alpha_i}{\gamma_{rs} \lambda_r \lambda_s}. \] Clearly \begin{equation}\label{eq:str} \sum\limits_{i \in \{ r, s \}} \operatorname{CS} (W, Y_i) \frac{\alpha_i \lambda_i^2}{\gamma_{rs} \lambda_r \lambda_s} \ds \leq \operatorname{CS} (W, Y) \ds < \operatorname{CS} (W, Y_1).
\tag{$*$} \end{equation} Suppose that $r = s$. Then we obtain that $\operatorname{CS} (W, Y_r) < \operatorname{CS} (W, Y_1)$, contradicting our initial assumption that $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_i)$ for all $i \in \{ 1, \dots, n \}$. Thus $r < s$. Let \[ Z : = \operatorname{ray} (\lambda_r y_r + \lambda_s y_s) \in [Y_r, Y_s]. \] Formula \eqref{eq:11.3} for this ray $Z$ tells us that the sum on the left in ($\ast$) equals $\operatorname{CS} (W, Z)$. Thus \begin{equation*}\label{eq:str.2} \operatorname{CS} (W, Z) \leq \operatorname{CS} (W, Y) < \operatorname{CS} (W, Y_1). \end{equation*} It follows that $\operatorname{CS} (W, Z)$ is smaller than both $\operatorname{CS} (W, Y_r)$ and $\operatorname{CS} (W, Y_s)$. By our analysis of the minimal value of $\operatorname{CS} (W, - )$ on closed intervals (Theorem~\ref{thm:10.7}), we conclude that $\operatorname{CS} (W, - )$ is not monotone on $[Y_r, Y_s]$. So, the minimum of $\operatorname{CS} (W, -)$ on $[Y_r, Y_s]$ is attained at the $W$-median $M_W (Y_r, Y_s)$ and only at this ray. It is now evident that the minimum of $\operatorname{CS} (W, -)$ on~$C$ exists and is attained at one of the finitely many $W$-medians $M_W (Y_i, Y_j)$ with $\operatorname{CS} (W, -)$ not monotone on $[Y_i, Y_j]$. \end{proof} Given a finitely generated convex set $C$ in $\operatorname{Ray} (V)$, a sequence $(Y_1, \dots, Y_n)$ of generators of $C$, and a ray $W$ on $V$, we define \begin{equation}\label{eq:11.4} \mu_W (Y_1, \dots, Y_n) : = \mu_W (C) : = \min\limits_{Z \in C} \operatorname{CS} (W, Z). \end{equation} \begin{thm}\label{thm:11.2} In this setting assume for a fixed $W$ that \[ \mu_W (C) < \min\limits_{1 \leq i \leq n} \operatorname{CS} (W, Y_i), \] that $Y \in C$ is a ray with $\operatorname{CS} (W, Y) = \mu_W (C)$, and that a presentation $Y = \operatorname{ray} (\lambda_1 y_1 + \dots + \lambda_n y_n)$ is given with $y_i \in e Y_i$, $\lambda_i \in eR$. Then for every dominant term $\gamma_{rs} \lambda_r \lambda_s$ in the sum~\eqref{eq:11.2} above we have $r < s$. The ray \[ Z_{rs} : = \operatorname{ray} (\lambda_r y_r + \lambda_s y_s) \in [Y_r, Y_s] \] is the $W$-median of $[Y_r, Y_s]$, and \begin{equation}\label{eq:11.5} \mu_W (C) = \mu_W (Y_r, Y_s) = \operatorname{CS} (W, Z_{rs}) = \sqrt{\frac{\operatorname{CS} (W, Y_r) \operatorname{CS} (W, Y_s)}{\operatorname{CS} (Y_r, Y_s)}}. \end{equation} Moreover, $Z_{rs}$ is the unique ray $Z$ in $[Y_r, Y_s]$ for which $\operatorname{CS} (W, Z) = \mu_W (C)$. \end{thm} \begin{proof} By the arguments in the proof of Theorem \ref{thm:11.1}, for Case B, we have \[ \operatorname{CS} (W, Z_{rs}) \leq \operatorname{CS} (W, Y) = \mu_W (C). \] Trivially \[ \mu_W (C) \leq \mu_W (Y_r, Y_s) \leq \operatorname{CS} (W, Z_{rs}). \] We conclude that equality holds here everywhere. Since $\mu_W (Y_r, Y_s) = \mu_W (C)$ is smaller than both $\operatorname{CS} (W, Y_r)$ and $\operatorname{CS} (W, Y_s)$, the function $\operatorname{CS} (W, -)$ on $[Y_r, Y_s]$ is certainly not monotone. We conclude by Theorem~\ref{thm:10.7}.d that the rightmost equality in \eqref{eq:11.5} holds, as well as the last assertion of the theorem. \end{proof} \begin{cor}\label{cor:11.3} Assume that $Y_1, \dots, Y_n, W, W'$ are rays in $V$ with $\operatorname{CS} (W, Y_i) = \operatorname{CS} (W', Y_i)$ for $1 \leq i \leq n$. Then \[ \mu_W (Y_1, \dots, Y_n) = \mu_{W'} (Y_1, \dots, Y_n). \] \end{cor} \begin{proof} Let $C = \operatorname{conv} (Y_1, \dots, Y_n)$.
We shall infer from \S\ref{sec:7}, \S\ref{sec:8}, and Theorem~\ref{thm:11.1} that the value $\mu_W (C) = \mu_W (Y_1, \dots, Y_n)$ is uniquely determined by the quantities $\operatorname{CS} (W, Y_i)$, $1 \leq i \leq n$. This is trivial for $n = 1$, while for $n = 2$, \[ \mu_W (C) = \min (\operatorname{CS} (W, Y_1), \operatorname{CS} (W, Y_2)) \] except in the case that the profile of $\operatorname{CS} (W, -)$ on $[Y_1, Y_2]$ is not monotone. This property only depends on the values $\operatorname{CS} (W, Y_1)$ and $\operatorname{CS} (W, Y_2)$ (cf. Table \ref{table:7.3}). In the latter case \[ \mu_W (C) = \sqrt{\frac{\operatorname{CS} (W, Y_1) \operatorname{CS} (W, Y_2)}{\operatorname{CS} (Y_1, Y_2)}}. \] When $n \geq 3$ it follows from Theorem \ref{thm:11.1} (and more explicitly from Theorem~\ref{thm:11.2}) that $\mu_W (C)$ is determined by the values $\operatorname{CS} (W, Y_i)$, $1 \leq i \leq n$, and those values $\mu_W (Y_r, Y_s)$, $1 \leq r < s \leq n$, which are smaller than $\operatorname{CS} (W, Y_r)$ and $\operatorname{CS} (W, Y_s)$. Thus in all cases $\mu_W (C)$ remains unchanged if we replace $W$ by $W'$. \end{proof} Formula \eqref{eq:11.3} has been the main new ingredient for proving Theorems~\ref{thm:11.1} and \ref{thm:11.2}. We state another (immediate) consequence of this formula. \begin{thm}\label{thm:11.4} Assume the $eR$-module $eV$ is free with base $y_1, \dots, y_n$. Let $y \in eV$, $Y = \operatorname{ray}(y)$, and $Y_i = \operatorname{ray}(y_i)$. Then for any ray $W$ in $V$ \begin{equation}\label{eq:11.6} \operatorname{CS} (W, Y) \leq \frac{q_{\operatorname{QL}} (y)}{q (y)} \cdot \max\limits_{1 \leq i \leq n} \operatorname{CS} (W, Y_i). \end{equation} \end{thm} \noindent This theorem is a sharpening of \cite[Theorem~7.9.a]{QF2} in the free case. Conversely it is immediate to deduce the quoted result in \cite{QF2} from \eqref{eq:11.6} by pulling back the quadratic pair $(eq,eb)$ to a free module. \vskip 1.5mm \noindent We now study the minimal values of $\operatorname{CS} (W, - )$ on the convex hulls of subsets of $\{ Y_1, \dots, Y_n \}$. \begin{defn}\label{def:11.5} Given a finite set $S = \{ Y_1, \dots, Y_n\}$ of rays in $V$ and a ray $W$ in $V$, we define the subset \[ \overrightarrow{\mu}_W (S) : = \overrightarrow{\mu}_W (Y_1, \dots, Y_n) \] of $e R$ as follows: \begin{equation}\label{eq:11.7} \overrightarrow{\mu}_W (S) : = \{ \mu_W (T) \ds \vert T \subset S, T \ne \emptyset \}, \end{equation} i.e., $\overrightarrow{\mu}_W (S)$ is the set of minimal values of $\operatorname{CS} (W, - )$ on the convex hulls of all nonempty subsets of $S$. We call $\overrightarrow{\mu}_W (S)$ the $\operatorname{CS}$-\textbf{spectrum} of $W$ on the set of rays~$S$. \end{defn} We list the finite set $\overrightarrow{\mu}_W (S)$ as a sequence \begin{equation}\label{eq:11.8} \mu^0_W (S) < \mu^1_W (S) < \dots < \mu^m_W (S) \end{equation} in $eR$. Here $\mu^0_W (S)$ and $\mu^m_W (S)$ are the minimum and the maximum, respectively, of $\operatorname{CS} (W, - )$ on the convex hull $C = \operatorname{conv} (S)$ of $S$. Notice that the other values $\mu^{i}_W (S)$ will often depend on the set of generators $S$ of the convex set $C$ rather than on $C$ alone. As a consequence of Theorem \ref{thm:11.1} we have the following fact. \begin{schol} \label{schl:11.6} Let $S = \{ Y_1, \dots, Y_n \}$.
The elements of $\overrightarrow{\mu}_W (S)$ are the values $\operatorname{CS} (W, Y_i)$, $1 \leq i \leq n$, and $\operatorname{CS} (W, M_W (Y_r, Y_s))$ where $(Y_r, Y_s)$ runs through all pairs in $S$ such that $\operatorname{CS} (W, -)$ is not monotone on $[Y_r, Y_s]$. \end{schol} \begin{prop} \label{prop:11.7} Assume that $P$ and $Q$ are subsets of $S$, such that all intervals $[Y, Z]$ with $Y \in P$, $Z \in Q$ have a monotone $W$-profile. Then \begin{equation}\label{eq:11.9} \overrightarrow{\mu}_W (P \cup Q) = \overrightarrow{\mu}_W (P) \cup \overrightarrow{\mu}_W (Q). \end{equation} In particular, this holds \textbf{for every} $W \in \operatorname{Ray} (V)$, if the quadratic form $eq$ is quasilinear on these intervals $[Y, Z]$. \end{prop} \begin{proof} This is evident from the preceding description of CS-spectra. \end{proof} \section{The glens and the glen locus of a finite set of rays}\label{sec:12} As previously, we assume that the ghost ideal $eR$ of the supertropical semiring $R$ is a semifield and that the quadratic form $q$ on $V$ is anisotropic. \begin{defn}\label{def:12.1} The \textbf{glen} of a finite sequence of rays $Y_1, \dots, Y_n$ in $V$ \textbf{at a ray} $W$ in $V$ is the set of all $Z \in \operatorname{conv} (Y_1, \dots, Y_n)$ such that \[ \operatorname{CS} (W, Z) < \min\limits_{1 \leq i \leq n} \operatorname{CS} (W, Y_i). \] We denote this set by $\rm Glen_W (Y_1, \dots, Y_n)$, and call it the $W$-\textbf{glen of} $(Y_1, \dots, Y_n)$, for short. \end{defn} For notational reasons we do not exclude the case $n = 1$. Then, of course, all $W$-glens are empty. \begin{prop}\label{prop:12.2} $\rm Glen_W(Y_1,\dots, Y_n)$ is a convex subset of $\operatorname{Ray}(V)$ (perhaps empty). \end{prop} \begin{proof} Given three rays $Z_1, Z_2, Z$ in $\operatorname{conv}(Y_1, \dots, Y_n)$ with $Z \in [Z_1, Z_2]$ and $\operatorname{CS} (W, Z_1) < \operatorname{CS} (W, Y_i)$, $\operatorname{CS} (W, Z_2) < \operatorname{CS} (W, Y_i)$ for all $i \in [1, n]$, we infer from Theorem~\ref{thm:11.4} that $\operatorname{CS} (W, Z) < \operatorname{CS} (W, Y_i)$ for all $i \in [1, n]$. \end{proof} For $n = 2$ and $Y_1 \ne Y_2$ the rays $Y_1$ and $Y_2$ are uniquely determined, up to permutation, by the closed interval $[Y_1, Y_2]$, as we know. Thus we are entitled to define the $W$-\textbf{glen of} $[Y_1, Y_2]$ as \[ \rm Glen_W [Y_1, Y_2] : = \rm Glen_W (Y_1, Y_2). \] From the analysis of the function $\operatorname{CS} (W, - )$ on closed intervals in \S\ref{sec:6} we infer the following statement, which justifies the use of the name ``glen'' at least for $n = 2$. See also Figure~\ref{fig:4} below. \begin{schol} \label{schl:12.2} $\rm Glen_W [Y_1, Y_2]$ is not empty iff the $W$-profile on $[Y_1, Y_2]$ is not monotonic, and thus is of type $B$ or $B'$ or $\partial B$ (cf. \S\ref{sec:7}). \end{schol} Relying on \S\ref{sec:6}, we give an explicit description of the $W$-glen of $[Y_1, Y_2]$ in the case of type $B$ or $\partial B$, i.e., when $\operatorname{CS} (W, - )$ is not monotonic on $[Y_1, Y_2]$ and $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$. Choosing vectors $\varepsilon_1, \varepsilon_2, \varepsilon_3 \in V$ such that $W = \operatorname{ray} (\varepsilon_1)$, $Y_1 = \operatorname{ray} (\varepsilon_2)$, $Y_2 = \operatorname{ray} (\varepsilon_3)$, we have the following illustration of the function $f (\lambda) = \operatorname{CS} (\varepsilon_1, \varepsilon_2 + \lambda \varepsilon_3) = f_1 (\lambda) + f_2 (\lambda)$, using the notations from \S\ref{sec:6}.
\begin{figure}[h]\label{fig:4} \resizebox{0.5\textwidth}{!}{ \begin{tikzpicture}[] \draw[thick] (-4,0) --(0,0) ; \draw[thick] (0,0) node[below]{$\frac{\alpha_2}{\alpha_{23}}$} --(15,0)node[below]{$\lambda$} ; \draw[blue,thick] (0,0) -- (0,10); \draw[blue, thick] (0,8) node[left]{$CS(\varepsilon_1,\varepsilon_3)$}--(13,8); \draw[blue, thick] (0,6) --(13,6)node[right]{${CS(\varepsilon_1,\varepsilon_2)}$}; \draw[blue, thick](9,0)node[below]{$\zeta$}--(9,10); \draw[blue, thick](13,0)node[below]{$\frac{\alpha_{23}}{\alpha_3}$}--(13,10); \draw[blue, thick](4.5,0)node[below]{$\xi$}--(4.5,10); \draw[help lines] (0,0) grid (13,10); \draw[red, line width=0.5mm] (0,2) node[left]{}--(13,8); \draw[red, line width=0.5mm] (13,8) --(15,8) node[right]{$\bf f_2$}; \draw[red, line width=0.5mm] (-3,6) node[left]{$f_1$}--(0,6); \draw[red, line width=0.5mm] (0,6) node[left]{}--(4.5,4); \end{tikzpicture} } \caption{ $\xi=\frac{\alpha_{12}}{\alpha_{13}}$. } \end{figure} The $W$-glen of $[Y_1, Y_2]$ is contained in the open interval $] Y_{12}, Y_{21} [$, where $Y_{12}$ is the critical ray of $[Y_1, Y_2]$ near $Y_1$ and $Y_{21}$ is the critical ray near $Y_2$. It starts at the argument $\lambda = \frac{\alpha_2}{\alpha_{23}}$ corresponding to $Y_{12}$ and ends at the argument $\zeta \in \, ] \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_{3}} [$ with $f_2 (\zeta) = \operatorname{CS} (\varepsilon_1, \varepsilon_2)$. In the interval $[ \frac{\alpha_2}{\alpha_{23}}, \frac{\alpha_{23}}{\alpha_{3}}]$ the function $f_2$ reads \[ f_2 (\lambda) = \frac{\lambda^2 \alpha^2_{13}}{\alpha_1 \cdot \lambda \alpha_{23}} = \frac{\lambda \alpha^2_{13}}{\alpha_1 \cdot \alpha_{23}} \] (cf. \eqref{eq:6.3}), since here the term $\lambda \alpha_{23}$ is dominant in the formula \eqref{eq:6.8} for $q (\varepsilon_2 + \lambda \varepsilon_3)$. Thus we have to solve \[ \frac{\zeta \alpha_{13}^2}{\alpha_1 \alpha_{23}} = \frac{\alpha_{12}^2}{\alpha_1 \alpha_2}, \] and obtain \begin{equation}\label{eq:12.1} \zeta = \frac{\alpha_{12}^2}{\alpha_{13}^2} \cdot \frac{\alpha_{23}}{\alpha_2}. \end{equation} In the subcase $\operatorname{CS} (\varepsilon_1, \varepsilon_2) = \operatorname{CS} (\varepsilon_1, \varepsilon_3)$, i.e., $\frac{\alpha_{12}^2}{\alpha_1 \alpha_{2}} = \frac{\alpha_{13}^2}{\alpha_1 \alpha_3}$, we obtain \begin{equation}\label{eq:12.2} \zeta = \frac{\alpha_{23}}{\alpha_3}. \end{equation} Introducing the ray \[ Z_{21} : = \operatorname{ray} (\varepsilon_2 + \zeta \varepsilon_3) = \operatorname{ray} (\alpha_{13}^2 \alpha_2 \varepsilon_2 + \alpha_{12}^2 \alpha_{23} \varepsilon_3), \] we summarize our study of the $W$-glen of $[Y_1, Y_2]$ as follows. \begin{thm}\label{thm:12.3} Assume that $\operatorname{CS} (W, Y_1) \leq \operatorname{CS} (W, Y_2)$ and $\rm Glen_W [Y_1, Y_2] \ne \emptyset$. Then \begin{enumerate} \etype{\alph} \setlength{\itemsep}{2pt} \item $\rm Glen_W [Y_1, Y_2] =\, ] Y_{12}, Z_{21} [\, \subset \, ] Y_{12}, Y_{21} [$, \item $\rm Glen_W [Y_1, Y_2] = \, ] Y_{12}, Y_{21} [$ iff $\operatorname{CS} (W, Y_1) = \operatorname{CS} (W, Y_2)$. \end{enumerate}\end{thm} \begin{rem} \label{rem:12.4} Note that \[ \sqrt{ \frac{\alpha_2}{\alpha_{23}} \cdot \frac{\alpha^2_{12}}{\alpha_{13}^2} \cdot \frac{\alpha_{23}}{\alpha_2}} = \frac{\alpha_{12}}{\alpha_{13}} = \xi. \] Thus the median $M_W (Y_1, Y_2)$ may be viewed as a kind of geometric mean of the rays $Y_{12}$ and~$Z_{21}$. \end{rem} We now look at the set of rays $W$ where a nonempty glen of $(Y_1, \dots, Y_n)$ occurs.
\begin{defn}\label{def:12.5} The \textbf{glen-locus} of $(Y_1, \dots, Y_n)$ is the set \[ \begin{array}{ll} \rm Loc_{\rm glen} (Y_1, \dots, Y_n) & : = \{ W \in \operatorname{Ray} (V) \ds \vert \rm Glen_W (Y_1, \dots, Y_n) \ne \emptyset \}\\[1mm] &\; = \big\{ W \in \operatorname{Ray} (V) \ds \vert \mu_W (Y_1, \dots, Y_n) < \min\limits_{1 \leq i \leq n} \operatorname{CS} (W, Y_i) \big \}. \end{array} \] \end{defn} For $n = 2$, $Y_1 \ne Y_2$, we define \[ \rm Loc_{\rm glen} [Y_1, Y_2] : = \rm Loc_{\rm glen} (Y_1, Y_2), \] which makes sense, since the rays $Y_1, Y_2$ are uniquely determined, up to permutation, by the interval $[Y_1, Y_2]$. Theorem~\ref{thm:11.2} translates into the following statement, where we use the definition of loci of basic and composite profile types from \S\ref{sec:8} (Definitions~\ref{def:8.5} and \ref{def:8.7}). \begin{schol} \label{schl:12.6} $\rm Loc_{\rm glen} [Y_1, Y_2]$ is the disjoint union of the basic loci $\rm Loc_B [\overrightarrow{Y_1, Y_2}]$, $\rm Loc_{B'} [\overrightarrow{Y_1, Y_2}]$, and $\rm Loc_{\partial B} [Y_1, Y_2]$, which are disjoint convex subsets of $\operatorname{Ray} (V)$. It is also the union of the composite loci $\rm Loc_{\overline{B}} [\overrightarrow{Y_1, Y_2}]$, $\rm Loc_{\overline{B'}} [\overrightarrow{Y_1, Y_2}]$, which are again convex, and have the intersection $\rm Loc_{\partial B} [Y_1, Y_2]$ (cf. Theorems~\ref{thm:8.6} and \ref{thm:8.8}). \end{schol} Of course it may happen that $\rm Loc_{\rm glen} [Y_1, Y_2]$ is empty. In particular this occurs if $[Y_1, Y_2]$ is $\nu$-quasilinear. Given a set $\{ Y_1, \dots, Y_n \}$ of rays in $V$ with $n > 2$, we have the important fact, in consequence of Theorem~\ref{thm:11.1} (cf. Scholium~\ref{schl:11.6}), that $\rm Loc_{\rm glen} (Y_1, \dots, Y_n)$ is contained in the union of all sets $\rm Loc_{\rm glen} (Y_r, Y_s)$ with $1 \leq r < s \leq n$, such that $[Y_r, Y_s]$ is $\nu$-excessive \cite[Definition~7.3]{QF2}, equivalently, $\rm Loc_{\rm glen} [Y_r, Y_s] \ne \emptyset$. If there are $u = u (Y_1, \dots, Y_n)$ such pairs~$(r, s)$, then $\rm Loc_{\rm glen} (Y_1, \dots, Y_n)$ is a union of at most $2 u$ convex subsets of $\operatorname{Ray} (V)$. \section{Explicit description of the set of minima of a CS-function on a finitely generated convex set in the ray space}\label{sec:13} \emph{In the whole section $R$ is a supertropical semiring, $eR$ is a nontrivial semifield, and $(q,b)$ is a quadratic pair on an $R$-module $V$ with $q$ anisotropic}. Given a finite subset $S \subset \operatorname{Ray}(V)$ and a fixed ray $W$ in $V$, we explore the set of minima of $\operatorname{CS}(W,-)$ on the convex hull $C$ of~$S$ in $\operatorname{Ray}(V)$, denoted by ${\operatorname{Min}}\operatorname{CS}(W,C)$. We already know that ${\operatorname{Min}}\operatorname{CS}(W,C)$ is nonempty. Our first goal is to prove that ${\operatorname{Min}}\operatorname{CS}(W,C)$ is again a finitely generated convex subset of $\operatorname{Ray}(V)$, and to determine a set of generators of ${\operatorname{Min}}\operatorname{CS}(W,C)$ starting from $S$. \begin{thm}\label{thm:13.1} $ $ \begin{enumerate} \etype{\alph} \setlength{\itemsep}{2pt} \item ${\operatorname{Min}}\operatorname{CS}(W,C)$ is convex. \item If $X \in {\operatorname{Min}}\operatorname{CS}(W,C)$ and $Y \in C \setminus {\operatorname{Min}}\operatorname{CS}(W,C)$, then $[X,Y] \cap {\operatorname{Min}}\operatorname{CS}(W,C) = [X, M_W(X,Y)]$. \item If $\operatorname{CS}(W,-)$ is constant on $S$, then $\operatorname{CS}(W,-)$ is constant on $C$.
\item Assume that $\operatorname{CS}(W,-)$ is not constant on $S$, and let $S^*$ denote the set of all medians $M_W(X,Y)$ with $X,Y \in S$ and $\operatorname{CS}(W,M_W(X,Y) ) = \mu_W(S)$,\footnote{Recall that $\mu_W(S) = \mu_W(C)$ denotes the minimal value of $\operatorname{CS}(W,-)$ on $C$, cf.~\eqref{eq:11.4}.} which may be empty. Let $$ \begin{array}{ll} P := & (S \cup S^*) \cap {\operatorname{Min}}\operatorname{CS}(W,C), \\[1mm] Q := & S \setminus P = (S \cup S^*) \setminus P. \end{array}$$ Then ${\operatorname{Min}}\operatorname{CS}(W,C)$ is the convex hull of the finite set $P \cup M_W(P,Q)$, where $$ M_W(P,Q) := \{ M_W(X,Y) \ds | X \in P, Y \in Q\}. $$ \end{enumerate} \end{thm} \begin{proof} (a): If $X,Y$ are rays in ${\operatorname{Min}}\operatorname{CS}(W,C)$, then $\operatorname{CS}(W,X) = \operatorname{CS}(W,Y)$, from which we conclude that $\operatorname{CS}(W,-)$ is constant on $[X,Y]$, since no $W$-glen is possible on $[X,Y]$. Thus $[X,Y] \subset {\operatorname{Min}}\operatorname{CS}(W,C)$. \vskip 1.5mm \noindent (b): $\operatorname{CS}(W,-)$ attains its minimal value on $[X,Y]$ at $X$. Thus it is clear from our analysis of $\operatorname{CS}(W,-)$ on $[\overrightarrow{X,Y}]$ in \S\ref{sec:6} that the set of rays $Z$ in $[X,Y]$ with $\operatorname{CS}(W,Z) = \operatorname{CS}(W,X)$ is $[X, M_W(X,Y)]$, cf. \eqref{eq:6.12} and \eqref{eq:6.13}. \vskip 1.5mm \noindent (c): Assume that $\operatorname{CS}(W,-)$ is constant on $S$, then $\operatorname{CS}(W,-)$ has no glens at all, and we conclude as in (a) that $\operatorname{CS}(W,-)$ is constant on $C$. \vskip 1.5mm \noindent (d): It follows from (b) that $M_W(P,Q)$ is contained in ${\operatorname{Min}}\operatorname{CS}(W,C)$ and then by (a) that $$\operatorname{conv}(P \cup M_W(P,Q)) \subset {\operatorname{Min}}\operatorname{CS}(W,C).$$ We now verify that any given ray $Y \in {\operatorname{Min}}\operatorname{CS}(W,C)$ is contained in the convex hull of $P \cup M_W(P,Q)$. Let $S = \{Y_1,\dots,Y_n\}$. We choose a minimal subset $\{Y_k \ds| k \in K \}$ of $S$, $K \subset \{1, \dots, n\}$, such that \begin{equation}\label{eq:13.1} Y \in \operatorname{conv}(P \cup \{Y_k \ds| k \in K \}). \end{equation} If $K= \emptyset$, then $Y\in\operatorname{conv}(P)$, and we are done. In the case that $K \neq \emptyset$ we choose a minimal set of rays $\{Z_j \ds | j \in J\}$ in $P$ such that \begin{equation}\label{eq:13.2} Y \in \operatorname{conv}(\{Z_j \ds | j \in J\} \cup \{Y_k \ds | k \in K\}), \end{equation} so that $Y \in \operatorname{conv}(A \cup B)$ where $$ A = \operatorname{conv}(\{Z_j \ds | j \in J\}), \qquad B = \operatorname{conv}( \{Y_k \ds | k \in K\}).$$ Then $A \subset \operatorname{conv}(P) \subset {\operatorname{Min}}\operatorname{CS}(W,C)$ while $B$ is disjoint from $P$ due to the minimality of the set $\{Y_k \ds | k \in K\}$ in \eqref{eq:13.1}. Since $Y \in {\operatorname{Min}}\operatorname{CS}(W,C)$, we conclude by assertion (b) that $Y = M_W(Z,T)$ for some rays $Z \in A$, $T \in B$. Choosing vectors $z_j \in e Z_j$, $y_k \in e Y_k$, $w \in eW$ we have $Y = \operatorname{ray} (y)$, with \begin{equation}\label{eq:13.3} y = m_w \bigg( \sum_{j\in J} \mu_j z_j, \sum_{k\in K} \lambda_k y_k\bigg) \end{equation} and nonzero coefficients $\mu_j, \lambda_k \in eR$. Thus (cf.
\eqref{eq:10.1} and \eqref{eq:10.2}) \begin{align*} y & = b\bigg(w,\sum_{k\in K} \lambda_k y_k\bigg) \sum_{j\in J} \mu_j z_j + b\bigg(w, \sum_{j\in J} \mu_j z_j\bigg) \sum_{k\in K} \lambda_k y_k \\ & = \sum_{j\in J, k\in K } \mu_j \lambda_k \big( b(w,y_k) z_j + b(w,z_j) y_k\big), \end{align*} which proves that \begin{equation}\label{eq:13.4} Y \in \operatorname{conv} \big(\{ M_W(Z_j, Y_k) \ds | j \in J, k \in K \} \big) \subset \operatorname{conv}(M_W(P,Q)). \end{equation} \end{proof} It is to be expected from Theorem \ref{thm:13.1} that usually many more rays are needed to generate the convex set ${\operatorname{Min}}\operatorname{CS}(W,C)$ than to generate $C$. But now we exhibit cases where ${\operatorname{Min}}\operatorname{CS}(W,C) = {\operatorname{Min}}\operatorname{CS}(W,S)$ can be generated by very few rays. \begin{prop}\label{prop:13.2} If $\mu_W(S) = 0$, then ${\operatorname{Min}}\operatorname{CS}(W,C)$ is the convex hull of $$ W^\perp \cap S = \{Z \in S \ds | \operatorname{CS}(W,Z) = 0 \}. $$ \end{prop} \begin{proof} By the results in \S\ref{sec:10} there are no rays $X,Y$ in $V$ with $\operatorname{CS}(W,X) > 0 $, $\operatorname{CS}(W,Y) > 0 $, $\operatorname{CS}(W,M_W(X,Y)) = 0 $ (cf. e.g. \eqref{eq:10.9}). Thus $S^* = \emptyset$, and we conclude from Theorem \ref{thm:13.1} that ${\operatorname{Min}}\operatorname{CS}(W,C) = \operatorname{conv}(W^\perp \cap S)$. \end{proof} \begin{examp}\label{exmp:13.3} Assume that $S = \{X_1,\dots,X_n\}$, $n \geq 2$, is a finite set of rays in $V$ such that there exists a ray $Z$ with $0 < \operatorname{CS}(W,Z) <\operatorname{CS}(W,X_i)$ for every $i \in \{1,\dots, n\}$ and $M_W(X_i,X_j)=Z$ for $1 \leq i < j \leq n$. Then $S^* = \{ Z \}$, and $M_W(X_i, X_j) = Z$ implies that $\operatorname{CS}(W,-)$ is strictly increasing on $[\overrightarrow{Z,X_i}]$ (and $[\overrightarrow{Z,X_j}]$), cf. \S\ref{sec:6}, whence $M_W(Z,X_i) = Z$ for every $i$. Thus ${\operatorname{Min}}\operatorname{CS}(W,S) = \{ Z \}$. \end{examp} \begin{defn}\label{def:13.4} We call a set $S $ as described in Example \ref{exmp:13.3} a \textbf{median cluster for~$W$}, or $W$-median cluster, \textbf{with apex $Z$}. \end{defn} \begin{examp}\label{exmp:13.5} Assume that $S = P_1 \cup P_2$ is a disjoint union of two $W$-median clusters $P_1$ and $ P_2$ with apices $Z_1$ and $Z_2$. Assume further that all pairs $X,Y$ with $X \in P_1$, $Y \in P_2$ have the same median $M_W(X,Y) = Z_{12}$ and that $$ \operatorname{CS}(W,Z_{1})= \operatorname{CS}(W,Z_{2}) = \operatorname{CS}(W,Z_{12}).$$ Then we conclude from Theorem \ref{thm:13.1} that \begin{equation}\label{eq:13.5} {\operatorname{Min}}\operatorname{CS}(W,S) = \operatorname{conv}(Z_1, Z_2, Z_{12}). \end{equation} Indeed, in the notation there $ P = S^* = \{ Z_1, Z_2\}$, $Q= S = P_1 \cup P_2,$ and so $M_W(P,Q) = \{ Z_1, Z_2, Z_{12} \}$. In the case $Z_1 = Z_2$ we obtain \begin{equation}\label{eq:13.6} {\operatorname{Min}}\operatorname{CS}(W,S) = [Z, Z_{12}] \end{equation} with $Z:= Z_1 = Z_2$. \end{examp} Given a ray $Z$ in $V$, we now focus on the set of all $W$-median clusters in $\operatorname{Ray}(V)$ with apex~$Z$. We assume that $\operatorname{CS}(W,Z)>0$, since otherwise it is clear from Proposition \ref{prop:13.2} that there are no median clusters with apex $Z$. The next lemma, a simplification of an argument in the proof of Theorem \ref{thm:13.1}.d (\eqref{eq:13.1}--\eqref{eq:13.4}), will be of help. \begin{lem}[$M_W$-Convexity Lemma]\label{lem:13.6} Let $Z \in \operatorname{Ray}(V)$.
\footnote{Here it is not necessary to assume that $\operatorname{CS}(Z,W)>0$.} Assume that $P = \{ Y_j \ds | j \in J\}$ and $Q = \{ Y_k \ds | k \in K\}$ are disjoint sets of rays with $M_W(Y_j, Y_k) = Z$ for any $Y_j \in P$ and $Y_k \in Q$. Then also $M_W(Y,T) = Z$ for any $Y \in \operatorname{conv}(P)$ and $T \in \operatorname{conv}(Q)$. \end{lem} \begin{proof} Given $Y \in \operatorname{conv}(P)$, $T \in \operatorname{conv}(Q)$ we choose vectors $y_j \in e Y_j$, $y_k \in e Y_k$, $z \in eZ$, $t \in eT$. Then $Y = \operatorname{ray}(y)$, $T = \operatorname{ray}(t)$ with $y = \sum\limits_{j \in J} \lambda_j y_j$, not all $\lambda_j = 0$, and $t = \sum\limits_{k \in K} \mu_k y_k$, not all $\mu_k = 0$. Since $M_W(Y_j,Y_k) = Z$ for $j \in J $, $k \in K$, we have $$ b(w,y_k) y_j + b(w,y_j) y_k = m_w(y_j, y_k) = \alpha_{jk} z,$$ for these indices $j,k$, with $\alpha_{jk}\neq 0$. Thus \begin{align*} b(w,t)y + b(w,y)t & = \sum_{k \in K } \mu_{k} b(w,y_k) \sum_{j\in J } \lambda_j y_j + \sum_{j\in J } \lambda_j b(w,y_j) \sum_{k\in K } \mu_k y_k \\ & = \sum_{j\in J, k\in K} \lambda_j \mu_k [b(w,y_k) y_j + b(w,y_j)y_k] \\ & =\bigg( \sum_{j\in J, k\in K} \alpha_{jk} \lambda_j \mu_k \bigg)z. \end{align*} Since $\sum\limits_{j\in J, k\in K} \alpha_{jk} \lambda_j \mu_k \neq 0$, this proves that $M_W(Y,T) = Z$. \end{proof} Given a ray $Z$ in $V$ with $\operatorname{CS}(W,Z) >0$, we introduce the ray set \begin{equation}\label{eq:13.7} Z^\uparrow := \{ X \in \operatorname{Ray}(V) \ds | \operatorname{CS}(W,X) > \operatorname{CS}(W,Z) \} . \end{equation} This set contains every $W$-median cluster having apex $Z$. Note that typically the set $Z^\uparrow$ is not convex. \begin{defn}\label{def:13.6} Let $P \subset Z^\uparrow$, $P \neq \emptyset$. The \textbf{$Z$-polar of $P$ for $W$} (or $W$-$Z$-polar of $P$) is the set \begin{equation}\label{eq:13.8} \widecheck{P} = P^\vee := \{ Y \in Z^\uparrow \ds | \forall X \in P : M_W(X,Y) = Z \} . \end{equation} \end{defn} Note that \begin{equation}\label{eq:13.9} P_1 \subset P_2 \subset Z^\uparrow \dss \Rightarrow \widecheck{P}_2 \subset \widecheck{P}_1, \end{equation} and that \begin{equation}\label{eq:13.10} \bigg( \bigcup_{\lambda \in \Lambda } P_\lambda \bigg)^\vee = \bigcap_{\lambda \in \Lambda } \widecheck{P}_\lambda \end{equation} for any family $(P_\lambda \ds | \lambda \in \Lambda )$ of subsets $P_\lambda$ of $Z^\uparrow$. \begin{remarks}\label{rem:13.8} Let $P \subset Z^\uparrow$, $P \neq \emptyset$. \begin{enumerate}\etype{\alph} \setlength{\itemsep}{2pt} \item Then $P$ and $\widecheck{P}$ are disjoint, since $M_W(X,X) = X \neq Z$ for every $X \in P$. \item If $\widecheck{P} \neq \emptyset$, then $P \subset P^{\vee \vee}$. This implies in the usual way that \begin{equation}\label{eq:13.11} P^{\vee\vee\vee} = P^{\vee}. \end{equation} \end{enumerate} \end{remarks} We define for $P \subset Z^\uparrow$ the set \begin{equation}\label{eq:13.12} \operatorname{conv}_0(P) := \operatorname{conv}(P) \cap Z^\uparrow. \end{equation} \begin{thm}\label{thm:13.9} Let $P \subset Z^\uparrow$, $P \neq \emptyset$. Then $$\widecheck{P} = \operatorname{conv}_0(\widecheck{P}) = \operatorname{conv}_0(P)^\vee.$$ \end{thm} \begin{proof} We have $P \cap \widecheck{P} = \emptyset$ and $M_W(X,Y) = Z$ for $X \in P $, $Y \in \widecheck{P}$. By the $M_W$-Convexity Lemma \ref{lem:13.6}, this implies $M_W(X',Y') = Z$ for $X' \in \operatorname{conv}(P)$ and $Y'\in \operatorname{conv}(\widecheck{P})$. Thus $\operatorname{conv}_0(\widecheck{P}) \subset \operatorname{conv}_0(P)^\vee $.
We further infer from $P \subset \operatorname{conv}_0(P)$ that $\operatorname{conv}_0(P)^\vee \subset \widecheck{P}$, and so $\operatorname{conv}_0(\widecheck{P}) \subset \widecheck{P}$. Since trivially $\widecheck{P} \subset \operatorname{conv}_0(\widecheck{P})$, this proves that $\widecheck{P} = \operatorname{conv}_0(\widecheck{P}) = \operatorname{conv}_0(P)^\vee.$ \end{proof} We now employ the partial ordering $\leq_Z$ on $\operatorname{Ray}(V)$, given by $$ Y' \leq_Z Y \dss \Leftrightarrow [Z,Y'] \subset [Z,Y],$$ the basics of which can be found in \cite[\S8]{VR1}. This ordering extends the total ordering on the oriented intervals $[\overrightarrow{Z,Y}]$ used in the previous sections. \begin{thm}\label{thm:13.10} For any nonempty subset $P$ of $Z^\uparrow$ the $Z$-polar $\widecheck{P}$ is compatible with $\leq_Z$ in the following sense. If $Y,Y' \in Z^\uparrow$ and $Y \leq_Z Y'$, then \begin{equation}\label{eq:13.13} Y \in \widecheck{P} \dss \Leftrightarrow Y' \in \widecheck{P}. \end{equation} \end{thm} \begin{proof} This follows from the fact that for any $X \in P$ the CS-function $\operatorname{CS}(W,-)$ is not monotonic on $[X,Y]$ iff it is not monotonic on $[X,Y']$, and then $\operatorname{CS}(W,- )$ attains its unique minimum at $M_W(X,Y) = M_W(X,Y')$, as is clear from \S\ref{sec:6} and \S\ref{sec:7}, cf. Figures 1--3 in \S\ref{sec:6}. \end{proof} We describe a procedure to build up clusters with apex $Z$, based on some more terminology. For any ray $X \in Z^\uparrow$, we write $\widecheck{X} = X^\vee = \{ X\} ^\vee $ for short. \begin{defn}\label{def:13.10} We say that $X$ is \textbf{$Z$-polar} if $\widecheck{X} \neq \emptyset$, and so $X$ is in the $Z$-polar of the set $\widecheck{X}$. More explicitly, $X$ is $Z$-polar if $M_W(X,Y) = Z$ for some $Y \in Z^\uparrow$. \end{defn} Note that for any set $P \subset Z^\uparrow$ we have \begin{equation}\label{eq:13.14} \widecheck{P} = \bigcap_{X \in P} \widecheck{X}. \end{equation} If $Z^\uparrow$ does not contain $Z$-polar rays, then, of course, there do not exist median clusters with apex $Z$. Otherwise we choose $X_1,X_2 \in Z^\uparrow$ with $M_W(X_1,X_2) = Z$. If $\{ X_1, X_2 \}^\vee = X_1^\vee \cap X_2^\vee \neq \emptyset$, we choose a ray $X_3 \in Z^\uparrow$ with $X_3 \in \{ X_1, X_2 \}^\vee$. Proceeding in this way we obtain a sequence of rays $X_1, \dots, X_r$ in $Z^\uparrow$ with $r \geq 2$ and \begin{equation}\label{eq:13.15} X_{i+1} \in \{X_1, \dots, X_i \}^\vee \qquad \text{for $1 \leq i < r$.} \end{equation} There are two cases. \begin{description} \item[\underline{Case A}] We reach a set $S = \{X_1, \dots, X_r \}$ with $\{X_1, \dots, X_r \}^\vee = \widecheck{X}_1 \cap \cdots \cap \widecheck{X}_r = \emptyset$. Then $S$ is a maximal median cluster with apex $Z$. \item[\underline{Case B}] We obtain infinite sets $S \subset Z^\uparrow$, such that every finite subset $T \subset S$, $|T| \geq 2$, is a $W$-median cluster with apex $Z$. We call such a set $S$ a \textbf{generalized $W$-median cluster} with apex $Z$ (or generalized $W$-$Z$-median cluster). More specifically, using mild set theory, we obtain by a transfinite induction procedure a sequence of rays $\{ X_i \ds | 1 \leq i \leq \lambda \} $ with an ordinal $\lambda \geq \omega$ which is a \textbf{maximal generalized $W$-$Z$-median cluster}. \end{description} \section{The equal polar relation}\label{sec:14} Let $W$ and $Z$ be any rays in $V$. Given $X_1, X_2 \in Z^\uparrow$, cf.
\eqref{eq:13.7}, we say that $X_1$ and $X_2$ are \textbf{$W$-$Z$-equivalent} (or $Z$-equivalent for short), and write $X_1 \sim_Z X_2$, if $\widecheck{X}_1 = \widecheck{X}_2$. We call this equivalence relation on $Z^\uparrow$ the \textbf{equal polar relation} for $W$ and $Z$ (or the \textbf{$W$-$Z$-equivalence relation}). For this relation, the equivalence class of a ray $X \in Z^\uparrow$ is denoted by $$ [X] := [X]_Z := [X]_{W,Z}.$$ Note that, if $\widecheck{X}_1 \neq \emptyset$, then $X_1 \sim_Z X_2$ iff $X_1^{\vee\vee} = X_2^{\vee\vee}$, cf. \eqref{eq:13.11}. We then abbreviate $X^{\vee \vee} = \widetilde{X}$. For most problems concerning $Z$-polars of rays, and in particular all problems appearing in \S\ref{sec:13}, only the class $[X]_{W,Z}$ matters. For example, in a (generalized, maximal) median cluster $P$ with apex $Z$ we may replace any $X \in P$ by a $Z$-equivalent ray $X'$, and have again a (generalized, maximal) median cluster $P'$ with apex $Z$. Therefore, understanding the $W$-$Z$-equivalence is a very basic goal, which we first pose vaguely as follows. \begin{problem}\label{prob:13.14} Describe the pattern of the $Z$-equivalence classes $[X] \subset Z^\uparrow$. \end{problem} To approach this problem, so far we only know the following: \begin{enumerate} \etype{\alph} \setlength{\itemsep}{2pt} \item All rays $X$ with $\widecheck{X} = \emptyset$ are in one equivalence class -- the class of non-polar rays (Definition \ref{def:13.10}). This is trivial. We denote this class by $C_\emptyset$: $$C_\emptyset = \{ X \in Z^\uparrow \ds | \forall Y \in Z^\uparrow: M_W(X,Y) \neq Z\}.$$ Perhaps it is best to discard $C_\emptyset$ from $Z^\uparrow$. \item $[X]_Z \subset \widetilde{X}$. Indeed, if $X_1^{\vee} = X_2^{\vee}$, then $X_1 \in X_1^{\vee\vee} = X_2^{\vee\vee}$. \item The relation $\sim_Z$ is compatible with the partial ordering $\leq_Z$ on $Z^\uparrow$, i.e., if $X_1$ and $X_2$ are comparable under $\leq_Z$, then $X_1 \sim_Z X_2$, cf. Theorem \ref{thm:13.10}. \end{enumerate} We can now point more precisely at the type of questions arising from Problem \ref{prob:13.14}. If~ $\widecheck{X} \neq \emptyset$, then $\widetilde{X}$ is a convex subset of $\operatorname{Ray}(V)$ contained in $Z^\uparrow$, with $[X] \subset \widetilde{X}$ by (b). If~ $T \in \widetilde{X}$, then $[T] \subset \widetilde{T} \subset \widetilde{X}$, and so \emph{$\widetilde{X}$ is the disjoint union of all classes $[T]$ contained in ~$\widetilde{X}$}. Furthermore, since $\widetilde{T}$ is convex, also the convex hull $\operatorname{conv}([T])$ of the set $[T]$ is contained in~ $\widetilde{X}$. This leads to the next two intriguing questions. \begin{enumerate} \item[A)] Is $\operatorname{conv}([T])$ also a union of $Z$-equivalence classes? \item[B)] When is a class $[T]$ by itself convex? \end{enumerate} Due to (c) the whole pattern of classes $[T]$ is compatible with the partial ordering $\leq_Z$. Concerning question B), so far we have only a partial answer. \begin{thm}\label{thm:13.13} If $X$ is a $Z$-polar ray in $Z^\uparrow$ (i.e., $\widecheck{X} \neq \emptyset$), then (cf. \eqref{eq:13.13}) $$ [X] = \operatorname{conv}_0\big([X]\big) = \operatorname{conv}\big([X]\big) \cap Z^\uparrow.$$ \end{thm} \begin{proof} We need to prove the following. If $X_1, X_2 \in Z^\uparrow$, $[X_1, X_2] \subset Z^\uparrow$, and $\widecheck{X}_1= \widecheck{X}_2 \neq \emptyset,$ then $\widecheck{X}_1 = \widecheck{T}$ for any $T \in[X_1,X_2]$. 
We have to verify for any $Y \in Z^\uparrow$ that $$ M_W(X_1,Y) = Z \dss \Leftrightarrow M_W(T,Y) = Z .$$ \vskip 1.5mm \noindent $(\Rightarrow)$: If $ M_W(X_1,Y) = Z$, then $M_W(X_2,Y) = Z$, since $\widecheck{X}_1 = \widecheck{X}_2$, whence by the $M_W$-Convexity Theorem: $M_W(T,Y) = Z$ for any $T\in [X_1,X_2]$. \vskip 1.5mm \noindent $(\Leftarrow)$: Let $S = \{X_1, X_2, Z \} $. The CS-function $\operatorname{CS}(W,-)$ is strictly decreasing on $[\overrightarrow{X_1,Z}]$ and on $[\overrightarrow{X_2,Z}]$, furthermore $\operatorname{CS}(W,T) > \operatorname{CS}(W,Z)$ and $\operatorname{CS}(W,Y) > \operatorname{CS}(W,Z)$. We conclude from this that $\operatorname{CS}(W,-)$ is not monotonic on $[\overrightarrow{T,Y}]$ and attains there the minimum value $\operatorname{CS}(W,Z)$. This implies $M_W(T,Y) = Z$. \end{proof} We introduce two more notations around $Z$-equivalence. Recall that $X \sim_Z T \ \Leftrightarrow \ \widetilde{X} = \widetilde{T}$ (provided that $\widecheck{X} \neq \emptyset$). \begin{defn}\label{def:13.14} A \textbf{path} in a class $[X]_Z$ is a sequence of rays $X_0, \dots, X_r $ in $[X]_Z$ where $[X_{i-1}, X_i] \subset Z^\uparrow$ for $0 < i \leq r$, $r \geq 1$. \end{defn} Note that, in consequence of Theorem \ref{thm:13.13}, \begin{equation}\label{eq:13.16} \bigcup_{i = 1} ^r [X_{i-1}, X_i] \subset [X]_Z. \end{equation} This gives us an obvious notion of \textbf{path components} of $[X]_Z$. More generally, we may define paths and path components in any subset of $Z^\uparrow$. \begin{defn}\label{def:13.15} Given rays $X \in Z^\uparrow$ and $T\in [X]_Z$, we define the \textbf{median star} (=$W$-$Z$-median star) $\operatorname{st}_T(X)$ as the set of all rays $T' $ with $[T, T'] \subset [X]_Z$. \end{defn} In other words, $\operatorname{st}_T(X)$ is the union of all intervals $[T,T']$ contained in $[X]_Z$. \begin{rem}\label{rem:13.16} If $T',T'' \in \operatorname{st}_T(X)$, then perhaps $[T',T''] \not\subset Z^\uparrow$. But, if $[T',T''] \subset Z^\uparrow$, then $\operatorname{conv}(T,T', T'' ) \subset [X]_Z$, and so $\operatorname{conv}(T,T', T'' ) \subset \operatorname{st}_T(X)$. Note also that \begin{equation}\label{eq:13.16a} \operatorname{st}_T(X) \subset \widetilde{T} \subset \widetilde{X}. \end{equation} \end{rem} Every $Z$-equivalence class $[X]_Z$ is the disjoint union of the path components contained in ~$[X]_Z$. If $A$ and $B$ are such path components, then obviously every interval $[Y_1, Y_2]$ with $Y_1 \in A$, $Y_2 \in B$ has a ``\textbf{deep glen}'' with respect to $Z$, i.e., the median $M_W(Y_1,Y_2)$ is not contained in $Z^\uparrow$. We can refine Problem \ref{prob:13.14} to a description of the pattern of path components of the $W$-$Z$-equivalence classes, which we call the \textbf{refined version of~ Problem~ \ref{prob:13.14}}. This seems to be natural and easier than Problem \ref{prob:13.14} above. Note also that every such path component is the union of all median stars $\operatorname{st}_T(X)$ contained in it. We have gained a very rough view of the family of path components of $Z$-equivalence classes as follows. For simplicity, we assume that $eR= \{ 0 \} \cup \mathcal G$ is a nontrivial bipotent semifield which is \textbf{square-root closed}, i.e., the injective endomorphism $x \mapsto x^2$ is also surjective, and so is an order preserving automorphism of $eR$. This setup can be reached for any (nontrivial) bipotent semifield by a canonical extension involving only square-roots, cf.~ \cite[\S7]{QF1}. 
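For example, the real max-plus semifield, where $\mathcal G = (\mathbb{R},+)$ and the multiplication of $eR$ restricts to classical addition on $\mathcal G$, is square-root closed, since there $x^2 = 2x$ classically and $x \mapsto 2x$ is bijective on $\mathbb{R}$; its integral analogue $\{0\} \cup (\mathbb{Z},+)$ is not square-root closed, since doubling is not surjective on $\mathbb{Z}$.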
Assume that $A$ and $B$ are two distinct path components of $Z$-equivalence classes different from $ C_\emptyset$. Then for any $T_1\in A$ and $T_2 \in B$ the interval $[T_1, T_2]$ has a deep glen, and so we have a decomposition of $[T_1, T_2]$ into subintervals \begin{equation}\label{eq:13.17} [T_1, T_2] = [T_1, T_{12} [ \ds \cup [T_{12}, T_{21}] \ds \cup ] T_{21}, T_2] \end{equation} such that \begin{equation}\label{eq:13.18} \begin{array}{cc} [T_1, T_2] \cap \operatorname{st}_{T_1} (A) & = [T_1, T_{12} [ \; , \\[2mm] [T_1, T_2] \cap \operatorname{st}_{T_2} (B) & = \ ] T_{21}, T_{2} ], \end{array} \end{equation} with \begin{equation}\label{eq:13.19} M_W(T_1, T_2) = M_W(T_{12}, T_{21}) \notin Z^\uparrow, \end{equation} \begin{equation}\label{eq:13.20} \operatorname{CS}(W,T_{12}) = \operatorname{CS}(W,Z) = \operatorname{CS}(W,T_{21}). \end{equation} This subdivision can be deduced from the defining formula \eqref{eq:0.10} of a CS-ratio and the formulas for $M_W(T_1,T_2)$ in \S\ref{sec:6} in the case that $\operatorname{CS}(W,-)$ is not monotone on $[T_1,T_2]$, and the formulas of the glen of $[T_1, T_2]$ in \S\ref{sec:12}. These formulas show that square roots suffice for the above subdivision. We omit the details. To record the facts \eqref{eq:13.17}--\eqref{eq:13.20}, we say that the set of all path components of $Z$-equivalence classes $ \neq C_\emptyset$ is the \textbf{$W$-$Z$-archipelago} in $\operatorname{Ray}(V)$ (for given rays $W$ and $Z$ in $V$ with $\operatorname{CS}(W,Z) > 0$), and that these path components are the \textbf{$W$-$Z$-islands} in $\operatorname{Ray}(V)$, having proved that $Z^\uparrow$ is the disjoint union of all $W$-$Z$-islands and the set $\{ X \in Z^\uparrow \ds | \widecheck{X} = \emptyset \} $, and that, for any two islands $A,B$ and rays $T_1 \in A$, $T_2 \in B$, the interval $[T_1, T_2]$ has glen $[T_{12}, T_{21}]$ in the ``deep sea'' $$ \operatorname{Ray}(V ) \setminus Z^\uparrow = \{ X \in \operatorname{Ray}(V) \ds | \operatorname{CS}(W,X) \leq \operatorname{CS}(W,Z)\} $$ while $[T_1, T_{12} [ \subset A$, $]T_{21}, T_2] \subset B$. A further study is needed to describe the sets of $W$-$Z$-islands which constitute the $Z$-equivalence classes in $Z^\uparrow$ different from the useless class of non-polar rays. This study is left for future work.
\section{Real-time conic optimization with infeasibility detection} \label{sec: PIPG} The key step in Algorithm~\ref{alg: trigger} is to solve optimization~\eqref{opt: trigger} if it is feasible, and prove that it is infeasible otherwise. Such a problem is also known as \emph{infeasibility detection} in constrained optimization. In this section we introduce an infeasibility detection method customized for optimization~\eqref{opt: trigger}. This method is based on the proportional-integral projected gradient method (PIPG), a primal-dual conic optimization method \cite{yu2020proportional,yu2021proportionalA,yu2021proportionalB,yu2022extrapolated}. \subsection{Reformulating a trajectory optimization as a conic optimization} \label{subsec: conic} Conic optimization is the minimization of a convex objective function subject to conic constraints. In the following, we will reformulate the trajectory optimization problem in \eqref{opt: trigger} as a special case of conic optimization. To this end, we need to rewrite the objective function and constraints in optimization~\eqref{opt: trigger} in a more compact form as follows. First, we introduce the following trajectory variable: \begin{equation}\label{eqn: traj var} x\coloneqq \begin{bmatrix} r_{[0, t]}^\top & v_{[0, t]}^\top & u_{[0, t]}^\top & w_{[0, t-1]}^\top \end{bmatrix}^\top, \end{equation} where \(w_k\coloneqq u_{k+1}-u_k\) for all \(k\in[0, t-1]\). With this variable, we can rewrite the quadratic objective function in optimization~\eqref{opt: trigger} as follows: \begin{equation}\label{eqn: quad obj} \begin{aligned} \frac{1}{2}x^\top \underbrace{\mathop{\rm diag}\left(\begin{bmatrix} 0_{6(t+1)}^\top & 1_{3(t+1)}^\top & \omega 1_{3t}^\top \end{bmatrix}^\top \right)}_Px. \end{aligned} \end{equation} Second, we define the following submatrices: \begin{equation}\label{eqn: H blocks} \begin{aligned} &H_{11}=\begin{bmatrix} 0_{3t\times 3} & I_{3t} \end{bmatrix}-\begin{bmatrix} I_{3t} & 0_{3t\times 3} \end{bmatrix}, \\ &H_{12}=-\Delta \begin{bmatrix} I_{3t} & 0_{3t\times 3} \end{bmatrix},\enskip H_{14}=0_{3t\times 3t},\\ & \textstyle H_{13}=-\frac{\Delta^2}{3m}\begin{bmatrix} I_{3t} & 0_{3t\times 3} \end{bmatrix}-\frac{\Delta^2}{6m}\begin{bmatrix} 0_{3t\times 3} & I_{3t} \end{bmatrix},\\ & H_{21}=0_{3t\times 3(t+1)}, \enskip H_{22}=H_{11},\enskip H_{24}=0_{3t\times 3t},\\ &\textstyle H_{23}=-\frac{\Delta}{2m}\begin{bmatrix} 0_{3t\times 3} & I_{3t} \end{bmatrix}-\frac{\Delta}{2m}\begin{bmatrix} I_{3t} & 0_{3t\times 3} \end{bmatrix}, \\ & H_{31}=H_{32}=0_{3t\times 3(t+1)}, \enskip H_{33}=H_{11},\\ & H_{34}=I_{3t},\enskip H_{41}=H_{42}=0_{(t+1)\times 3(t+1)}, \\ &H_{43}=I_{t+1}\otimes \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, \enskip H_{44}=0_{(t+1)\times 3t}. 
\end{aligned} \end{equation} With the definitions in \eqref{eqn: traj var} and \eqref{eqn: H blocks}, we can rewrite the linear equality and inequality constraints in optimization~\eqref{opt: trigger}--which include the linear dynamics constraints and the linear lower bound constraints on the thrust vectors--equivalently as follows: \begin{equation}\label{eqn: linear constr} \underbrace{\begin{bmatrix} H_{11} & H_{12} & H_{13} & H_{14}\\ H_{21} & H_{22} & H_{23} & H_{24}\\ H_{31} & H_{32} & H_{33} & H_{34}\\ H_{41} & H_{42} & H_{43} & H_{44}\\ \end{bmatrix}}_{H}x-\underbrace{\begin{bmatrix} \frac{\Delta^2}{2}\,1_t\otimes g\\ \Delta\, 1_t\otimes g\\ 0_{3t}\\ \underline{\gamma}1_{t+1} \end{bmatrix}}_b\in \underbrace{\{0_{9t}\}\times \mathbb{R}_+^{t+1}}_{\mathbb{K}}. \end{equation} Note that the first \(6t\) entries of \(b\) collect the constant gravity terms in the dynamics \eqref{sys: DT}, and \(\underline{\gamma}\in\mathbb{R}_+\) is the thrust lower bound introduced in \eqref{eqn: u lower bound}. Third, we define the following closed convex set \begin{equation}\label{eqn: D_i} \mathbb{D}_i=\mathbb{H}_i\times \mathbb{V}\times\mathbb{U}_a\times \mathbb{W}, \end{equation} for all \(i=1, 2, \ldots, l\), where the sets \(\mathbb{H}_i\), \(\mathbb{V}\), and \(\mathbb{W}\) are given in \eqref{eqn: cylinder}, \eqref{eqn: v ball}, and \eqref{eqn: w ball}, respectively; set \(\mathbb{U}_a\) is given by \eqref{eqn: u icecream}. Notice that the only difference between set \(\mathbb{U}_a\cap\mathbb{U}_b\) and set \(\mathbb{U}_a\) is that the latter does not include the linear lower bound constraint in \(\mathbb{U}_b\); this constraint is already included in the last \(t+1\) linear inequality constraints in \eqref{eqn: linear constr}. With these sets, we can compactly rewrite the second-order-cone constraints in optimization~\eqref{opt: trigger}--which include those for position, velocity, thrust, and thrust rate vectors--as follows: \begin{equation}\label{eqn: SOC constr} x\in\underbrace{(\mathbb{D}_1)^{\tau_1}\times (\mathbb{D}_2)^{\tau_2}\times \cdots \times (\mathbb{D}_l)^{\tau_l}}_{\mathbb{D}}, \end{equation} where \((\mathbb{D}_i)^{\tau_i}\) is the Cartesian product of \(\tau_i\) copies of set \(\mathbb{D}_i\). With the above definitions, we can now rewrite optimization~\eqref{opt: trigger} equivalently as optimization~\eqref{opt: conic}, where matrix \(P\) is given in \eqref{eqn: quad obj}; matrix \(H\), vector \(b\), cone \(\mathbb{K}\) are given in \eqref{eqn: linear constr}; set \(\mathbb{D}\) is given in \eqref{eqn: SOC constr}. \vspace{1em} \noindent\fbox{ \centering \parbox{0.94\linewidth}{ {\bf{Conic optimization}} \begin{equation}\label{opt: conic} \begin{array}{ll} \underset{x}{\mbox{minimize}} & \frac{1}{2} x^\top P x\\ \mbox{subject to} & Hx-b\in\mathbb{K}, \enskip x\in\mathbb{D}. \end{array} \end{equation} } } \vspace{1em} \begin{figure}[!ht] \centering \includegraphics[width=0.35\linewidth]{PIPG/figs/pattern} \caption{The sparsity pattern of matrix \(H\) in \eqref{eqn: linear constr}. Each zero and nonzero entry corresponds to a black and white pixel, respectively.} \label{fig: sparse} \end{figure} Optimization~\eqref{opt: conic} has two salient features: the sparsity pattern of matrix \(P\) and \(H\), and the geometric structure of set \(\mathbb{D}\). First, matrix \(P\) is diagonal, and matrix \(H\) has many zero elements; see Fig.~\ref{fig: sparse} for an illustration. The presence of these zero elements is because the dynamics constraints in \eqref{sys: DT} only apply to variables corresponding to adjacent time steps. 
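Concretely, the following minimal sketch (in Python with NumPy/SciPy; it is an illustration rather than the solver released with this work, and the parameter values are placeholders) assembles the matrix \(P\) in \eqref{eqn: quad obj}, the matrix \(H\) in \eqref{eqn: H blocks}, and the vector \(b\) in \eqref{eqn: linear constr}:
\begin{verbatim}
# A minimal sketch (not the released solver) of assembling P, H, b
# from the definitions above; parameter values are placeholders.
import numpy as np
import scipy.sparse as sp

t, Delta, m, omega, gamma_lo = 30, 0.2, 0.35, 1.0, 2.0
g = np.array([0.0, 0.0, -9.81])

# P: diagonal weights on the stacked variable [r; v; u; w]
P = sp.diags(np.concatenate([np.zeros(6*(t + 1)),
                             np.ones(3*(t + 1)),
                             omega*np.ones(3*t)]))

I3t = sp.identity(3*t, format="csr")
Z3 = sp.csr_matrix((3*t, 3))
S0 = sp.hstack([I3t, Z3], format="csr")   # selects entries 0..t-1
S1 = sp.hstack([Z3, I3t], format="csr")   # selects entries 1..t

H11 = S1 - S0                             # first differences
H12 = -Delta*S0
H13 = -(Delta**2/(3*m))*S0 - (Delta**2/(6*m))*S1
H23 = -(Delta/(2*m))*(S0 + S1)
H43 = sp.kron(sp.identity(t + 1), np.array([[0.0, 0.0, 1.0]]))

Z = lambda r, c: sp.csr_matrix((r, c))    # zero block of given shape
H = sp.bmat([[H11, H12, H13, Z(3*t, 3*t)],
             [Z(3*t, 3*(t + 1)), H11, H23, Z(3*t, 3*t)],
             [Z(3*t, 3*(t + 1)), Z(3*t, 3*(t + 1)), H11,
              sp.identity(3*t)],
             [Z(t + 1, 3*(t + 1)), Z(t + 1, 3*(t + 1)), H43,
              Z(t + 1, 3*t)]])

# b: gravity offsets for the dynamics rows, then the thrust bound
b = np.concatenate([np.tile((Delta**2/2)*g, t), np.tile(Delta*g, t),
                    np.zeros(3*t), gamma_lo*np.ones(t + 1)])

assert H.shape == (10*t + 1, 12*t + 9) and b.size == 10*t + 1
\end{verbatim}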
Second, set \(\mathbb{D}\) is a Cartesian product of many simple sets, such as a cylinder, a ball, or the intersection of an icecream cone and a ball. See Fig.~\ref{fig: sets} for an illustration. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.49\columnwidth} \centering \input{quasicvx/figs/cylinder} \caption{Cylinder.} \label{fig: cylinder} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} \centering \input{quasicvx/figs/ball} \caption{Ball.} \label{fig: ball} \end{subfigure} \begin{subfigure}[b]{\columnwidth} \centering \input{quasicvx/figs/icecream} \caption{The intersection of an icecream cone and a ball.} \label{fig: icecream} \end{subfigure} \caption{An illustration of the geometric structure of the simple sets that constitute the set \(\mathbb{D}\) in \eqref{eqn: SOC constr}. } \label{fig: sets} \end{figure} \subsection{Proportional-integral projected gradient method} \label{subsec: PIPG} To exploit the salient features of optimization~\eqref{opt: conic}, we propose to use the proportional-integral projected gradient method (PIPG). PIPG is a state-of-the-art first-order primal-dual optimization method that combines the idea of the projected gradient method with proportional-integral feedback of constraint violation; such a combination was first introduced in distributed optimization \cite{yu2020mass,yu2020rlc} and later extended to optimal control problems \cite{yu2020proportional,yu2021proportionalA,yu2021proportionalB,yu2022extrapolated}. Algorithm~\ref{alg: PIPG} is the pseudocode implementation of PIPG with extrapolation \cite{yu2022extrapolated}, where \(\Pi_{\mathbb{D}}\) and \(\Pi_{\mathbb{K}^\circ}\) denote the Euclidean projection maps onto set \(\mathbb{D}\) and onto the polar cone of cone \(\mathbb{K}\), respectively; these projection maps will be discussed in detail later. The if-clause between line~\ref{alg: inf start} and line~\ref{alg: inf end} determines whether optimization~\eqref{opt: conic} is infeasible by monitoring the difference between two consecutive iterates \cite{yu2022extrapolated}. \begin{algorithm}[!ht] \caption{PIPG for convex trajectory optimization} \begin{algorithmic}[1] \Require Parameters in optimization~\eqref{opt: conic}, number of iterations \(j_{\max}\), step sizes \(\alpha, \beta, \lambda\), feasibility tolerance \(\epsilon\)\label{alg: pipg start} \State Randomly initialize \(x,\overline{x}\in\mathbb{R}^{12t+9}\), \(y,\overline{y}\in\mathbb{R}^{10t+1}\). \For{\(j=1, 2, \ldots, j_{\max}\)} \State{\(y^-\gets y\)} \State{\(x\gets\Pi_{\mathbb{D}}[\overline{x}-\alpha(P\overline{x}+H^\top \overline{y})]\)}\label{alg: proj D} \State{\(y\gets \Pi_{\mathbb{K}^\circ}[\overline{y}+\beta(H(2x-\overline{x})-b)]\)}\label{alg: proj K polar} \State{\(\overline{x}\gets(1-\lambda)\overline{x}+\lambda x\)} \State{\(\overline{y}\gets(1-\lambda)\overline{y}+\lambda y\)} \EndFor\label{alg: pipg end} \If{\(\frac{\norm{y-y^-}}{\beta\lambda \norm{x}}\leq \epsilon\)}\label{alg: inf start} \State{\Return{\(x\)}} \Else \State{\Return{``Infeasible"}} \EndIf \label{alg: inf end} \Ensure \(x\) or ``Infeasible". \end{algorithmic} \label{alg: PIPG} \end{algorithm} Compared with other numerical methods for optimization~\eqref{opt: conic}, PIPG has the following advantages. First, PIPG does not compute the inverse of any matrices or solve any linear equation systems, making it suitable for real-time implementation with light digital footprints \cite{yu2021proportionalB}. 
Second, compared with other first-order methods, PIPG achieves the fastest convergence rates in terms of both the primal-dual gap and constraint violation \cite{yu2021proportionalA}. Third, PIPG automatically generates proof of infeasibility if possible \cite{yu2021proportionalB,yu2022extrapolated}. When solving optimal control problems, PIPG is much faster than many state-of-the-art optimization solvers in numerical experiments \cite{yu2022extrapolated}. \subsection{Implementation} \label{subsec: implementation} In order to implement Algorithm~\ref{alg: PIPG}, we need to determine several algorithm parameters, and efficiently compute the projections in line~\ref{alg: proj D} and line~\ref{alg: proj K polar} of Algorithm~\ref{alg: PIPG}. We will discuss these implementation details in the following. \subsubsection{Parameter selection} \paragraph{Step sizes} The iterates of PIPG converge if parameters \(\alpha\) and \(\beta\) satisfy the following constraint, which is a special case of those in \cite[Rem. 1]{yu2022extrapolated}: \begin{equation} 0<\alpha=\beta<\frac{2}{\sqrt{\mnorm{P}^2+4\mnorm{H}^2}}. \end{equation} By using the definition of matrix \(P\) in \eqref{eqn: quad obj}, one can verify that \(\mnorm{P}=\max\{1, \omega\}\), where \(\omega\) is the weighting parameter in the objective function in optimization~\eqref{opt: trigger}. As for the value of \(\mnorm{H}^2\), we compute an approximation of it using the \emph{power iteration algorithm} \cite{kuczynski1992estimating}, summarized in Algorithm~\ref{alg: power}. \begin{algorithm}[!ht] \caption{The power iteration method \cite{kuczynski1992estimating} } \begin{algorithmic}[1] \Require Matrix \(H\in\mathbb{R}^{(10t+1)\times (12t+9)}\), accuracy tolerance \(\epsilon\). \State Randomly initialize \(x\in\mathbb{R}^{12t+9}\); let \(\sigma=\norm{x}, \sigma^-=\epsilon+\sigma\). \While{\(|\sigma-\sigma^-|\geq \epsilon\)} \State{\(\sigma^-\gets \sigma\)} \State{\(y\gets \frac{1}{\sigma}Hx\)} \State{\(x\gets H^\top y\)} \State{\(\sigma\gets \norm{x}\)} \EndWhile \Ensure \(\sigma\). \end{algorithmic} \label{alg: power} \end{algorithm} As for parameter \(\lambda\)--which denotes the step length of extrapolation in PIPG \cite{yu2022extrapolated}--numerical experiments show that values between \(1.6\) and \(1.9\) lead to the best convergence performance in practice \cite{yu2022extrapolated}. In our implementation, we let \(\lambda=1.9\). \paragraph{Maximum number of iterations and feasibility tolerance} As a first-order method, PIPG tends to converge within hundreds of iterations. In the implementation of Algorithm~\ref{alg: PIPG}, we set \(j_{\max}=10^4\) and \(\epsilon=10^{-3}\). \subsubsection{Computing the projections} We now provide explicit formulas for computing the projections onto the closed convex sets that constitute the set \(\mathbb{D}\) in \eqref{eqn: SOC constr}; see Fig.~\ref{fig: sets} for an illustration. For projection formulas of other closed convex sets, such as the cone \(\mathbb{K}\) in \eqref{eqn: linear constr}, we refer the interested readers to \cite[Chp. 29]{bauschke2017convex}. \paragraph{Cylinder} Given a position vector \(r\in\mathbb{R}^3\), the projection of \(r\) onto the set \(\mathbb{H}_i\) in \eqref{eqn: cylinder} is given as follows \cite[Exe. 29.1]{bauschke2017convex}: \begin{equation} \Pi_{\mathbb{H}_i}[r]=c_i+\max(-\eta_i, \min(\eta_i, r_a))\,d_i+\frac{\rho_i}{\max(\norm{r_b}, \rho_i)}r_b, \end{equation} where \begin{equation} r_a=\langle d_i, r-c_i\rangle, \enskip r_b = (r-c_i)-\langle d_i, r-c_i\rangle d_i. 
\end{equation} \paragraph{Ball} Given a velocity vector \(v\in\mathbb{R}^3\), the projection of \(v\) onto the set \(\mathbb{V}\) in \eqref{eqn: v ball} is given as follows \cite[Prop. 29.10]{bauschke2017convex}: \begin{equation}\label{eqn: vel ball proj} \Pi_{\mathbb{V}}[v]=\frac{\xi}{\max(\xi, \norm{v})}v. \end{equation} The projection onto the set \(\mathbb{W}\) in \eqref{eqn: w ball} is similar. \paragraph{The intersection of a ball and an icecream cone} Computing a projection onto the intersection of an icecream cone and a ball is the same as first computing a projection onto the icecream cone and then computing a projection onto the ball \cite[Thm. 7.1]{bauschke2018projecting}. In particular, given a thrust vector \(u\in\mathbb{R}^3\), the projection of \(u\) onto the set \(\mathbb{U}_a\) in \eqref{eqn: u icecream} is given by \begin{equation}\label{eqn: thrust ball proj} \Pi_{\mathbb{U}_a}[u]=\frac{\overline{\gamma}}{\max(\overline{\gamma}, \norm{u_a})}u_a, \end{equation} where \begin{equation}\label{eqn: thrust cone proj} u_a=\begin{cases} u, \quad \text{if } \cos\theta\norm{u}\leq [u]_3,\\ 0, \quad \text{if } \sin\theta\norm{u}\leq -[u]_3,\\ \langle u, u_b\rangle u_b, \quad \text{otherwise,} \end{cases} \end{equation} and \begin{equation} u_b=\begin{bmatrix} 0\\ 0\\ \cos\theta \end{bmatrix}+\frac{\sin\theta}{\sqrt{([u]_1)^2+([u]_2)^2}}\begin{bmatrix} [u]_1\\ [u]_2\\ 0 \end{bmatrix}, \end{equation} where \(\theta\) is the maximum tilting angle in \eqref{eqn: u icecream}. The formula in \eqref{eqn: thrust ball proj} is similar to that in \eqref{eqn: vel ball proj}. The formula in \eqref{eqn: thrust cone proj} is a special case of the projection formula of an icecream cone \cite[Exe. 29.12]{bauschke2017convex}. \section{Conclusion} \label{sec: conclusion} We introduce a novel bisection method that approximates the nonconvex corridor constraints using time-triggered convex corridor constraints, and develop a customized implementation of this method that enables real-time trajectory optimization subject to second-order cone constraints. Our results provide a novel benchmark solution approach for trajectory optimization, which is about 50--200 times faster than mixed integer programming in numerical experiments. Future directions include onboard implementation and extensions to trajectory optimization with nonlinear dynamics models, such as six-degree-of-freedom rigid body dynamics for space vehicles \cite{malyuta2021advances}. \section{Numerical simulation and indoor flight experiments} \label{sec: experiment} We demonstrate the efficiency of Algorithm~\ref{alg: PIPG} by comparing its computation time against state-of-the-art optimization solvers, and demonstrate the effectiveness of the trajectories computed by Algorithm~\ref{alg: PIPG} using indoor flight experiments with a custom quadrotor. \subsection{Numerical simulation with randomly generated corridors} We first evaluate the efficiency of the algorithms developed in Section~\ref{sec: trigger} and Section~\ref{sec: PIPG} using instances of optimization~\eqref{opt: nonconvex} with randomly generated corridors as follows. First, we let \begin{equation} \overline{r}_0=\overline{v}_0=\overline{v}_f=\begin{bmatrix} 0 & 0 & 0\end{bmatrix}^\top, \enskip \overline{u}_f=-g. \end{equation} Second, we set the scalar parameters in optimization~\eqref{opt: nonconvex} using the values listed in Table~\ref{tab: quadrotor}. Third, we generate 100 random sequences of corridors; see Fig.~\ref{fig: dataset} for an illustration of the center lines of these corridor sequences. Each sequence contains 7 corridors. 
Each sequence starts at the origin, and each corridor is uniquely characterized by four scalar parameters: its radius, its length, and two angles that define its direction in spherical coordinates--the azimuthal angle and the elevation angle. Each scalar parameter is sampled from a uniform distribution over an interval; see Table~\ref{tab: corridor} for the interval bounds of these parameters. Finally, we vary the number of corridors traversed by the trajectory by setting the final position \(\overline{r}_f\) to be the end point of different corridors in each sequence. \begin{figure}[!ht] \centering \includegraphics[width=0.45\columnwidth]{experiment/figs/dataset.png} \caption{The center lines of the 100 random sequences of corridors. } \label{fig: dataset} \end{figure} \begin{table}[!ht] \centering \caption{The parameter values in optimization~\eqref{opt: nonconvex} (all units are omitted for simplicity)} \begin{tabular}{ c|c|c|c|c|c|c|c } \hline \(m\) & \(\Delta\) & \(\omega\) & \(\xi\) & \(\underline{\gamma}\) & \(\overline{\gamma}\) & \(\theta\) & \(\delta\)\\ \hline \hline 0.35 & 0.20 & 1.00 & 3.00 & 2.00 & 5.00 & \(\frac{\pi}{4}\) & 3.00\\ \hline \end{tabular} \label{tab: quadrotor} \end{table} \begin{table}[!ht] \centering \caption{The interval bounds of the corridor parameters.} \begin{tabular}{ m{7em} | m{5em} } \hline parameter & interval\\ \hline \hline radius & \([0.10, 0.50]\) \\ length & \([1.00, 4.00]\) \\ azimuthal angle & \([\frac{\pi}{4}, \frac{3\pi}{4}]\)\\ elevation angle & \([-\frac{\pi}{4}, \frac{\pi}{4}]\)\\ \hline \end{tabular} \label{tab: corridor} \end{table} We demonstrate the performance of Algorithm~\ref{alg: trigger} using the aforementioned random instances of optimization~\eqref{opt: nonconvex}, where we use Algorithm~\ref{alg: PIPG} for infeasibility detection and optimizing a trajectory with time-varying corridor constraints. We implement the combination of Algorithm~\ref{alg: trigger} and Algorithm~\ref{alg: PIPG} in C++; see \url{https://github.com/Kartik-Nagpal/PIPG-Cpp} for details. We choose the values of the time sequences \(\overline{\tau}_{[1, l]}\) and \(\underline{\tau}_{[1, l]}\) in Algorithm~\ref{alg: trigger} using the length of each corridor, the quadrotor's maximum speed, given by \(\xi\), and a coarse estimate of its minimum speed, given by \(\xi/2\). Fig.~\ref{fig: time & cost} shows the computation time and solution quality of Algorithm~\ref{alg: trigger} combined with Algorithm~\ref{alg: PIPG}, and compares them against the performance of various combinations of Algorithm~\ref{alg: trigger}, mixed integer programming (MIP), the off-the-shelf parser YALMIP \cite{lofberg2004yalmip}, the commercial conic optimization solver GUROBI \cite{gurobi}, and the open-source conic optimization solver ECOS \cite{domahidi2013ecos}. All numerical experiments are executed on a desktop computer equipped with an AMD Ryzen 9 5900X 12-core processor. Overall the combination of Algorithm~\ref{alg: trigger} and Algorithm~\ref{alg: PIPG} is about 50--200 times faster than the MIP approach as well as the combination of Algorithm~\ref{alg: trigger} and off-the-shelf solvers, at the price of at most a \(10\%\) increase in the cost function value. 
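For reference, the following minimal Python sketch mirrors the core iteration of Algorithm~\ref{alg: PIPG} together with the step-size rule and the power iteration of Algorithm~\ref{alg: power}; it is an illustration rather than the released C++ implementation, and the projection callbacks \texttt{proj\_D} and \texttt{proj\_K\_polar} stand for the projection formulas in Section~\ref{subsec: implementation}:
\begin{verbatim}
# A minimal sketch of the PIPG iteration with extrapolation; not the
# released C++ code. proj_D and proj_K_polar are user-supplied
# projection callbacks onto D and onto the polar cone of K.
import numpy as np

def power_iteration(H, tol=1e-6, rng=np.random.default_rng(1)):
    x = rng.standard_normal(H.shape[1])
    sigma, sigma_prev = np.linalg.norm(x), np.inf
    while abs(sigma - sigma_prev) >= tol:
        sigma_prev = sigma
        x = H.T @ (H @ x / sigma)   # one power step on H^T H
        sigma = np.linalg.norm(x)
    return sigma                    # approximates ||H||^2

def pipg(P, H, b, proj_D, proj_K_polar, j_max=10_000, lam=1.9,
         eps=1e-3, rng=np.random.default_rng(0)):
    norm_P = np.abs(P.diagonal()).max()   # P is diagonal here
    # step-size rule: alpha = beta < 2 / sqrt(||P||^2 + 4||H||^2)
    alpha = beta = 1.9/np.sqrt(norm_P**2 + 4*power_iteration(H))
    x = xbar = rng.standard_normal(H.shape[1])
    y = ybar = rng.standard_normal(H.shape[0])
    for _ in range(j_max):
        y_prev = y
        x = proj_D(xbar - alpha*(P @ xbar + H.T @ ybar))
        y = proj_K_polar(ybar + beta*(H @ (2*x - xbar) - b))
        xbar = (1 - lam)*xbar + lam*x
        ybar = (1 - lam)*ybar + lam*y
    # infeasibility test on the difference of consecutive dual iterates
    if np.linalg.norm(y - y_prev) <= eps*beta*lam*np.linalg.norm(x):
        return x
    return "Infeasible"
\end{verbatim}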
\begin{figure}[!ht] \centering \begin{subfigure}{0.45\columnwidth} \includegraphics[trim=0.1cm 0.1cm 0.1cm 0.1cm,width=\textwidth]{experiment/figs/time.png} \caption{Computation time}\label{fig: time} \end{subfigure} \begin{subfigure}{0.45\columnwidth} \includegraphics[trim=0.1cm 0.1cm 0.1cm 0.1cm,width=\textwidth]{experiment/figs/cost.png} \caption{Trajectory cost divided by trajectory length.} \label{fig: C matrix} \end{subfigure} \caption{The comparison of the computation time and the average state cost--which equals the objective function in optimization~\eqref{opt: nonconvex} divided by the trajectory length \(t\)--of the trajectories computed by different solvers for optimization~\eqref{opt: nonconvex}, averaged over 100 randomly generated scenarios. The error bar shows the maximum and minimum value. } \label{fig: time & cost} \end{figure} \subsection{Indoor flight experiments with hoop obstacles} We demonstrate the application of Algorithm~\ref{alg: trigger} and Algorithm~\ref{alg: PIPG} using the quadrotor platform in the Autonomous Control Laboratory (see \url{https://depts.washington.edu/uwacl/}). This platform contains a custom-made quadrotor equipped with a 2200-milliAmp-hour lithium-polymer battery; accelerometers and gyroscopes that measure the acceleration and the angular velocity, respectively, at a 100-1000 Hz rate; a 500 MHz dual-core Intel Edison and a 1.7 GHz quad-core Intel Joule processor; and an IEEE 802.11n compliant WiFi communication link. See Fig.~\ref{fig: acl quad} for an illustration. The platform also includes a 4-meter by 7-meter by 3-meter indoor flight space, equipped with an OptiTrack motion capture system that can measure the attitude and position of a quadrotor at a 50-150 Hz rate. We conduct the quadrotor flight experiments using the trajectories computed by Algorithm~\ref{alg: trigger} and Algorithm~\ref{alg: PIPG} as reference guidance. We also use hoop obstacles to mark out the boundary of each flight corridor. Fig.~\ref{fig: flight exp} shows the reference trajectories and experiment trajectories in three different corridor scenarios\footnote{To ensure flight safety, we use a reduced hoop radius (about 20\% of the actual size) when computing the flight trajectories.}. These experiments demonstrate how to use the proposed approach in actual flight experiments in cluttered environments. \begin{figure}[!ht] \centering \input{experiment/figs/quad} \caption{The custom quadrotor in the Autonomous Control Laboratory.} \label{fig: acl quad} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{experiment/figs/acl_lab.png} \caption{The indoor flight environment with hoop obstacles.} \label{fig: acl lab} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[trim=0.1cm 0.1cm 0.1cm 0.1cm,width=\textwidth]{experiment/figs/s1.png} \caption{Scenario 1.} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[trim=0.1cm 0.1cm 0.1cm 0.1cm,width=\textwidth]{experiment/figs/s2.png} \caption{Scenario 2.} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[trim=0.1cm 0.1cm 0.1cm 0.1cm,width=\textwidth]{experiment/figs/s3.png} \caption{Scenario 3.} \end{subfigure} \caption{Reference trajectory computed by Algorithm~\ref{alg: trigger} and Algorithm~\ref{alg: PIPG} and the measured flight trajectories in experiments. 
For each scenario, we showcase the measured trajectories in five separate flight experiments.} \label{fig: flight exp} \end{figure} \section{Quadrotor feedback control} \label{sec: feedback2} This section is based on \cite[Chp. 6]{szmuk2019successive}. To achieve high-performance agile flight experiments with our quadrotor drones, we combine both feed-forward guidance and feedback control. The feed-forward guidance is produced via our real-time trajectory generation algorithms, which are both forward-looking and allow us to consider both optimality and constraints when planning the vehicle's motion. However, feed-forward methods have limitations as they are sensitive to model uncertainties and external disturbances. To counteract this, the control inputs generated by our trajectory generation algorithms are not applied directly. Instead, the generated optimal time history of quadrotor states, including Cartesian and angular position, velocity, and acceleration, is passed as a reference trajectory into a high-order on-board feedback controller. Feedback control is robust to model uncertainties and external disturbances, but is myopic in nature and does not consider state constraints. In this way, we combine the optimality and constraint satisfaction benefits of feed-forward guidance with the robustness and performance guarantees of feedback control to execute aggressive trajectory tracking on-board the quadrotor drones in real-time. The use of a high-order feedback controller on-board the quadrotor flight computer also enables the use of a convex simplified dynamic model to constrain the optimal control problem. This simplified model assumes three-degree-of-freedom (3DoF) translational motion of the drone. However, in reality the 6-DoF dynamics of the vehicle have a coupling between the translational and attitude state evolution. The feedback controllers on-board account for this discrepancy, and this coupling will become important later in the allocation step of the underlying controller. First, the high-level controller is commanded by a trajectory specified through five inputs: inertial position, a specified force, a reference attitude, an angular velocity, and a torque. This high-level controller also receives feedback on the inertial position and attitude via the motion-capture system, as well as acceleration measured by an accelerometer on-board. This controller then passes a series of four outputs as inputs to the mid-level controller: a commanded force along the negative vehicle Z-axis, a commanded body-frame attitude and attitude rate, and a commanded body-frame torque. In principle, these inputs can be in any arbitrary frame denoted as the ``mid-level frame'', but in practice this frame is the body-frame. This mid-level controller also receives feedback on the vehicle attitude via the motion-capture systems, as well as the body rates from an on-board gyroscope. Finally, this mid-level controller computes four commands that are passed as inputs to the low-level controller in the body-frame: a force of thrust along the vehicle's negative Z-axis, a feed-forward torque, a feed-forward body rate, and a feedback body rate. This low-level controller also receives feedback on the body rates and acceleration from on-board sensors. This low-level controller then produces a single vector output: a requested wrench, which consists of a force and three torques about the vehicle body-frame axes. 
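As a rough illustration of the high-level layer just described--this is a sketch under simplifying assumptions, not the on-board flight code, and the gains and interfaces are illustrative--the position and velocity errors and the feed-forward acceleration can be combined as follows:
\begin{verbatim}
# A rough sketch of the high-level step: PD feedback on position and
# velocity plus feed-forward acceleration yields a commanded force;
# its magnitude is the body-Z thrust command and its direction the
# commanded attitude. Gains are illustrative; integral action,
# saturation, and the mid/low-level loops are omitted.
import numpy as np

g = np.array([0.0, 0.0, -9.81])   # gravity, inertial frame

def high_level(r, v, r_ref, v_ref, a_ref, m, kp=6.0, kd=4.0):
    a_cmd = a_ref + kp*(r_ref - r) + kd*(v_ref - v)  # commanded accel
    f_cmd = m*(a_cmd - g)              # commanded force (m dv/dt = u + m g)
    thrust = np.linalg.norm(f_cmd)     # sent to the body-Z channel
    z_cmd = f_cmd/max(thrust, 1e-9)    # commanded thrust direction
    return thrust, z_cmd
\end{verbatim}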
Due to the aforementioned coupling between the vehicle's translational and attitude dynamics, the requested wrench cannot necessarily be achieved instantaneously; the vehicle must adjust its attitude in the inertial frame in order to change how the body-frame forces and torques drive the vehicle's motion in the inertial frame. Due to this, an arbitrary set of attitudes and positions (and their rates) cannot be achieved instantaneously at any point along the trajectory. To account for this, an allocator produces a series of projections that prioritize the requested torques and force, before producing a dynamically feasible wrench that is the closest possible to the requested wrench. The difference between the requested wrench and this allocated wrench is then passed as an additional feedback channel to the low-level controller, and over time is driven to zero. More details on how this allocated wrench is produced are described in \cite[Chp. 6]{szmuk2019successive}. The allocator also produces a force vector from each of the actuators, which is then passed into a converter at the lowest software-layer of the high-order controller. This converter models the relationship between the requested set of rotor forces and the input PWM signal that will produce this set of forces for the physical actuators. Finally, the converter outputs the necessary PWM signals to the rotors, which controls the vehicle along the desired trajectory in the closest dynamically feasible motion to the requested motion. The discrepancy between the translational motion and attitude coupling is negligible when conducting flight experiments. We note two practical points about this hierarchy (the actual implementation may not adhere exactly to the interfaces described above). First, given a position-velocity-acceleration reference trajectory, the high-level controller applies proportional-integral-derivative gains to the position and velocity errors, adds the feed-forward acceleration, and thereby obtains a commanded acceleration. The magnitude of the corresponding force vector is sent into the body-Z thrust channel, while its normalized direction, projected into the inertial frame, defines the commanded attitude. Since the attitude will in general not be exactly as commanded due to the closed-loop time constant, a decision must be made about the commanded force when the attitude deviates from its command; to obtain aggressive tracking close to the ground, one can use the current attitude and choose the commanded acceleration that preserves the commanded vertical acceleration. Second, the low-level controller rejects disturbances and errors at its own level, but if the vehicle does not follow exactly the path that was commanded, errors are incurred at the other levels of the hierarchy: if the attitude is off from the plan, then the position will drift, and so will the velocity, so all levels have to play cleanup. The high-level controller therefore has to augment the error correction of the low-level control; in particular, the integration of accumulated error has to be addressed at the high level. \subsection{Flight experiments with hoop obstacles} The multi-rotors we consider are capable of generating only one force and three torques, and thus cannot simultaneously control the six degrees-of-freedom along which they evolve. \input{experiment/figs/quad} \subsubsection{Hardware} The hardware of the quadrotor includes a set of actuators, sensors, and processors. \paragraph{Actuators} Our quadrotor has four actuators, each consisting of a variable-speed brushless direct current motor and a fixed-pitch variable-revolutions-per-minute propeller. Each motor is powered by a 2200-milliAmp-hour lithium-polymer battery that provides a flight time of 15-30 minutes. 
Each propeller generates a thrust, produced by the lifting force of the propeller blades, and a torque, produced by the drag force of the propeller blades. \paragraph{Sensors} Our quadrotor has access to the following two sets of sensors. \begin{enumerate} \item On-board sensors: The quadrotor has two sets of on-board sensors: accelerometers and gyroscopes, which measure the acceleration and the angular velocity, respectively, at a 100-1000 Hertz (Hz) rate. \item Off-board sensors: The quadrotor also has access to an OptiTrack motion capture system that measures attitude and position at a 50-150 Hz rate. \end{enumerate} \paragraph{Processors} The on-board electronics include an inertial measurement unit, a 500 MHz dual-core Intel Edison, a 1.7 GHz quad-core Intel Joule, and an IEEE 802.11n compliant WiFi communication link. \subsubsection{Software} \input{experiment/figs/diagram} We developed a feedback-control software architecture, illustrated in Fig.~\ref{fig: diagram}, to track the trajectory computed by Algorithm~\ref{alg: trigger}. Such an architecture contains four different blocks: controller, allocator \& converter, actuator, and sensor. We will explain the function of each block in the following. \paragraph{Controller} The controller block takes as input the trajectory computed by Algorithm~\ref{alg: trigger}, which is in the inertial frame of reference, and outputs a \emph{wrench trajectory} in the body frame of reference. This trajectory is composed of a collection of wrench vectors; each wrench vector includes three torques and one force. The controller contains three separate layers of proportional-integral-derivative controllers. These layers address several technical challenges including the time-varying mismatch between the inertial frame and body frame of reference, the bias and time-delays of the sensor measurements, actuator saturation, and external force and torque disturbances. The design of these controllers is based on the nonlinear dynamic compensator and loop-shaping techniques \cite[Chp. 6]{szmuk2019successive}. \paragraph{Allocator \& converter} The allocator and converter block takes the output of the controller block as input, and outputs the command signals for the onboard motors of the quadrotor. In particular, the allocator computes a physically realizable wrench command that most closely matches the output of the controller. Such computation is based on solving a sequence of linear programs in a lexicographic ordering that ensures a collection of prioritization properties \cite[Sec. 6.4]{szmuk2019successive}. The converter converts each wrench vector computed by the allocator to pulse-width-modulated signals that are subsequently sent to the on-board motors. The conversion is based on a quadratic formula relating motor force to brushless DC electric motor commands \cite[Sec. 6.3]{szmuk2019successive}. \section{Introduction} \label{sec: intro} One of the keys to flying quadrotors in a dynamically changing environment is to optimize their trajectories subject to dynamics and collision-avoidance constraints in real-time \cite{elmokadem2021towards,lan2021survey}. Along such a trajectory, the position of the quadrotor needs to stay within a set of \emph{collision-free corridors}. 
Each corridor is a bounded convex flight space; the union of all these corridors forms a nonconvex pathway connecting the quadrotor's current position to its target position \cite{ioan2019obstacle,ioan2020navigation}; see Fig.~\ref{fig: corridor} for a simple illustration. To avoid collisions with obstacles whose positions change rapidly or are uncertain, it is critical to update these corridors in real-time. As a result, one needs to optimize trajectories subject to nonconvex corridor constraints in real-time: the faster the optimization, the faster the quadrotor can react to sudden changes of the obstacles. \input{intro/figs/network} Since the flight space defined by the union of the corridors is nonconvex, optimizing the trajectories for the quadrotor is computationally challenging. One standard solution approach is \emph{mixed integer programming} \cite{grossmann2002review,richards2005mixed,ioan2020mixed}, which first uses binary variables to describe the union of all corridors, then optimizes quadrotor trajectories together with these binary variables \cite{richards2002aircraft,mellinger2012mixed,tang2015mixed,landry2016aggressive}. However, the worst-case computation time of this approach increases exponentially as the number of binary variables increases. As a result, even with the state-of-the-art solvers--such as GUROBI \cite{gurobi}--real-time quadrotor trajectory optimization via mixed integer programming is still difficult, if at all possible. Alternatively, one can model the corridor constraints as smooth nonconvex constraints and solve the resulting trajectory optimization using the successive convexification method \cite{mao2018successive}. But this approach suffers from slow computation speed \cite{szmuk2019real}, and requires careful parameter tuning to ensure the desired algorithm convergence \cite{malyuta2021convex}. Recently, there has been an increasing interest in approximating the nonconvex corridor constraints with time-triggered constraints, where each convex corridor is activated only within one time interval \cite{mellinger2011minimum,yu2014energy,deits2015efficient,watterson2015safe,liu2016high,janevcek2017optiplan,liu2017planning,mohta2018fast,gao2018optimal}. These approximations make the resulting trajectory optimization convex and thus computationally more tractable. However, the existing results have the following limitations. First, they only consider polytopic constraints on trajectory variables, such as elementwise upper and lower bounds on the velocity and acceleration of the quadrotor. These polytopic constraints do not accurately capture the geometric structure of many practical operational constraints--such as the magnitude and pointing-direction constraints of the thrust vector \cite{szmuk2017convexification,szmuk2018real,szmuk2019real}--and flight corridors with nonpolytopic boundaries--such as cylindrical or spherical corridors. Second, to the best of our knowledge, none of the existing methods explicitly tests whether the resulting trajectory optimization is feasible. Consequently, the resulting trajectory optimization can be close to infeasible, in which case a numerical solver will fail to provide a solution; or the trajectory optimization can be far away from being infeasible, which can cause conservative trajectories with an unnecessarily long time of flight. 
We introduce a novel bisection method that approximates the nonconvex corridor constraints using time-triggered convex corridor constraints, and develop a customized implementation of this method that enables real-time quadrotor trajectory optimization subject to general second-order cone constraints. Our contributions are as follows. \begin{enumerate} \item We theoretically prove that nonconvex corridor constraints are equivalent to time-varying convex corridor constraints, provided that an optimal triggering time for each corridor is known. \item We propose a novel bisection method to estimate the optimal triggering time via repeated infeasibility detection in conic optimization. This method systematically reduces the trajectory length while ensuring that the resulting trajectory optimization is feasible up to a given tolerance. The estimated triggering time reduces a nonconvex trajectory optimization problem to a sequence of convex ones. \item We develop a customized C++ trajectory optimization solver based on the bisection method. This solver automatically detects infeasibility and exploits the sparsity and geometric structure of trajectory optimization by implementing the proportional-integral projected gradient method (PIPG), an efficient first-order primal-dual conic optimization method. \item We demonstrate the application of the proposed bisection method using numerical simulation and indoor flight experiments. Compared with mixed integer programming, the proposed bisection method and C++ solver show 50--200 times speedups at the price of an increase in the cost function value of less than 10\%. \end{enumerate} The implications of our work are threefold. First, our work sets a new benchmark for real-time quadrotor trajectory optimization, which significantly improves the mixed integer programming approach in terms of computation time. Second, our work provides a fresh perspective on dealing with nonconvexity in collision avoidance for general autonomous vehicles using bisection search and infeasibility detection. Third, our work demonstrates the potential of PIPG--and in general, first-order optimization methods--in solving nonconvex optimal control problems via not only numerical simulation but also flight experiments. \paragraph*{Notation} Given a real number \(\alpha\in\mathbb{R}\), we let \(\lfloor \alpha\rfloor\) denote the largest integer lower bound of \(\alpha\), and \(\lceil\alpha \rceil\) denote the smallest integer upper bound of \(\alpha\). Given a vector \(r\) and a matrix \(M\), we let \(\norm{r}\) denote the \(\ell_2\)-norm of vector \(r\), \([r]_j\) denote the \(j\)-th element of vector \(r\), and \(\mnorm{M}\) denote the largest singular value of matrix \(M\). We let \(1_n\) and \(0_n\) denote the \(n\)-dimensional vector whose entries are all 1's and all 0's, respectively. We let \(0_{m\times n}\) denote the \(m\times n\) zero matrix, and \(I_n\) denote the \(n\times n\) identity matrix. Given a closed convex cone \(\mathbb{K}\), we let \(\mathbb{K}^\circ\) denote its polar cone. Given \(i, j\in\mathbb{N}\) with \(i< j\) and \(a_i, a_{i+1}, \ldots, a_{j-1}, a_j\in\mathbb{R}^n\), we let \(a_{[i, j]}\coloneqq \begin{bmatrix} a_i^\top & a_{i+1}^\top & \ldots & a_{j-1}^\top & a_j^\top \end{bmatrix}^\top\). We say a constrained optimization is \emph{feasible} if its constraints can be satisfied, and \emph{infeasible} otherwise. 
\section*{Nomenclature} {\renewcommand\arraystretch{1.0} \noindent\begin{longtable*}{@{}l @{\quad=\quad} l@{}} \multicolumn{2}{@{}l}{Sets}\\ \(\mathbb{N}\) & the set of positive integers \\ \(\mathbb{R}, \mathbb{R}_+\) & the set of real and non-negative real numbers \\ \(\mathbb{H}_i\) & the set of feasible position vectors in the \(i\)-th corridor \\ \(\mathbb{V}\) & the set of feasible velocity vectors\\ \(\mathbb{U}_a\) & the set of thrust vectors with pointing direction and magnitude upper bound constraints\\ \(\mathbb{U}_b\) & the set of thrust vectors with magnitude lower bound\\ \(\mathbb{W}\) & the set of feasible thrust rate\\ \multicolumn{2}{@{}l}{Parameters}\\ \(m\) & quadrotor mass\\ \(g\) & acceleration vector caused by gravity\\ \(\Delta\) & sampling time period\\ \(\omega\) & weighting parameter for thrust rates\\ \(c_i, d_i, \rho_i, \eta_i\) & center coordinates, direction vector, radius, and half-length of the \(i\)-th cylindrical corridor\\ \(\xi\) & maximum speed\\ \(\underline{\gamma}, \overline{\gamma}, \theta\) & minimum thrust magnitude, maximum thrust magnitude, and maximum tilting angle\\ \(\delta\) & maximum thrust rate\\ \(\overline{r}_0, \overline{v}_0\) & initial position and initial velocity of the quadrotor\\ \(\overline{r}_f, \overline{v}_f, \overline{u}_f\) & final position, final velocity, and final thrust of the quadrotor\\ \multicolumn{2}{@{}l}{Variables}\\ \(r_k, v_k, u_k\) & position, velocity, thrust of the quadrotor at time \(k\Delta\)\\ \(\tau_i\) & the length of the trajectory segment for the \(i\)-th corridor\\ \(t\) & total length of trajectory\\ \(b_{ik}\) & binary variable, takes value \(1\) if the quadrotor is in the \(i\)-th corridor at time \(k\Delta\) \end{longtable*}} \input{intro/intro} \input{quadrotor/quadrotor} \input{trigger/trigger} \input{quasicvx/quasicvx} \input{PIPG/PIPG} \input{experiment/simulation} \input{conclusion/conclusion} \section{Three-degree-of-freedom dynamics model for quadrotors} \label{subsec: quad model} Trajectory optimization for a dynamical system requires a mathematical model that predicts the future state of the system given its current state and input. We introduce a quadrotor dynamics model with three degrees of freedom (3DoF), along with various constraints on the position, velocity, thrust, and thrust rate of the quadrotor. This model lays the foundation for the trajectory optimization problem in the next section. \subsection{Three degree-of-freedom dynamics} We consider a 3DoF dynamics model for a quadrotor. In particular, at time \(s\in\mathbb{R}_+\), we let \(r(s)\in\mathbb{R}^3\) and \(v(s)\in\mathbb{R}^3\) denote the position and velocity of the center of mass of the quadrotor, and \(u(s)\in\mathbb{R}^3\) denote the total thrust force provided by the propellers. Furthermore, we let \(m\in\mathbb{R}_+\) and \(g=\begin{bmatrix}0 & 0 & -9.81\end{bmatrix}^\top\) denote the mass of the quadrotor and the acceleration vector caused by gravity, respectively. The continuous-time 3DoF model of the quadrotor dynamics is described by the following set of differential equations: \begin{equation}\label{sys: CT} \begin{aligned} \frac{d}{ds}r(s)&=v(s),\\ \frac{d}{ds}v(s)&=\frac{1}{m}u(s)+g. \end{aligned} \end{equation} We discretize the above continuous-time differential equation using a first-order-hold scheme. In particular, we let \(\Delta\in\mathbb{R}_+\) denote the discretization step size. 
Let \begin{equation} r_k\coloneqq r(k\Delta),\enskip v_k\coloneqq v(k\Delta),\enskip u_k\coloneqq u(k\Delta), \end{equation} for all \(k\in\mathbb{N}\). We apply an input thrust that is piecewise linear within each time interval of length \(\Delta\), {\it i.e.}, \begin{equation} u(s)=\left(k+1-\frac{s}{\Delta}\right)u_k+\left(\frac{s}{\Delta}-k\right)u_{k+1}, \end{equation} for all \(k\Delta\leq s\leq (k+1)\Delta\). Under this assumption, the equations in \eqref{sys: CT} are equivalent to the following: \begin{equation}\label{sys: DT} \begin{aligned} r_{k+1}=& r_k+\Delta v_k+\frac{\Delta^2}{3 m}\left(u_k+\frac{1}{2}u_{k+1}\right)+\frac{\Delta^2}{2}g,\\ v_{k+1}=& v_k+\frac{\Delta}{2m}(u_k+u_{k+1})+\Delta g, \end{aligned} \end{equation} for all \(k\in\mathbb{N}\). \subsection{Position, velocity, and thrust constraints} The position, velocity, and thrust vector of the quadrotor are subject to the following constraints. \subsubsection{Position} The quadrotor's position is constrained within the union of a set of three-dimensional cylinders, or \emph{corridors}. We let \(l\in\mathbb{N}\) denote the total number of corridors. For the \(i\)-th corridor, we let \(c_i\in\mathbb{R}^3\) denote its center, \(d_i\in\mathbb{R}^3\) with \(\norm{d_i}=1\) denote its direction vector, \(\eta_i\in\mathbb{R}_+\) and \(\rho_i\in\mathbb{R}_+\) denote its half-length and radius, respectively. See Fig.~\ref{fig: cylinder corridor} for an illustration. We define the \(i\)-th corridor as follows: \begin{equation}\label{eqn: cylinder} \mathbb{H}_i\coloneqq \{c_i+r\in\mathbb{R}^3|\norm{r-\langle d_i, r\rangle d_i}\leq \rho_i, |\langle d_i, r\rangle|\leq \eta_i\}. \end{equation} \begin{figure}[!ht] \centering \input{quadrotor/corridor} \caption{An illustration of the set \(\mathbb{H}_i\) in \eqref{eqn: cylinder}.} \label{fig: cylinder corridor} \end{figure} \subsubsection{Velocity} The quadrotor's speed is upper bounded by \(\xi\in\mathbb{R}_+\). The set of feasible velocity vectors is as follows: \begin{equation}\label{eqn: v ball} \mathbb{V}\coloneqq \{v\in\mathbb{R}^3| \norm{v}\leq \xi\}. \end{equation} \subsubsection{Thrust} The thrust vectors of the quadrotor are subject to the following two different sets of constraints: magnitude constraints and direction constraints. \paragraph{Magnitude constraints} The Euclidean norm of the thrust vector is upper bounded by \(\overline{\gamma}\in\mathbb{R}_+\), and the thrust along the direction opposite to the gravity is lower bounded by \(\underline{\gamma}\in\mathbb{R}_+\). \paragraph{Direction constraints} The direction of the thrust vector is constrained as follows: the angle between the thrust direction and the direction opposite to the gravity is no more than a fixed angle \(\theta\in[0, \frac{\pi}{2}]\). The above constraints on the thrust magnitude and direction ensure that the on-board motors can provide the thrust needed, and that the tilting angle of the quadrotor is upper bounded. See Fig.~\ref{fig: tilting} for an illustration of the tilting angle. 
\begin{figure}[!ht] \centering \input{trigger/figs/tilting} \caption{Tilting angle \(\theta\) of the quadrotor.} \label{fig: tilting} \end{figure} By combining the aforementioned constraints, we define the set of feasible thrust vectors as the intersection of the following two sets: \begin{subequations}\label{eqn: u muffin} \begin{align} \mathbb{U}_a \coloneqq & \{u\in\mathbb{R}^3| \norm{u}\leq \overline{\gamma}, \cos\theta \norm{u}\leq [u]_3\},\label{eqn: u icecream}\\ \mathbb{U}_b \coloneqq & \{u\in\mathbb{R}^3| [u]_3\geq \underline{\gamma}\}.\label{eqn: u lower bound} \end{align} \end{subequations} \subsubsection{Thrust rate} The difference between two consecutive thrust vectors, termed a \emph{thrust rate vector}, is subject to an upper bound of \(\delta\in\mathbb{R}_+\) on its Euclidean norm. The set of all feasible thrust rate vectors is as follows: \begin{equation}\label{eqn: w ball} \mathbb{W}\coloneqq \{w\in\mathbb{R}^3| \norm{w}\leq \delta\}. \end{equation} The constraint in \eqref{eqn: w ball} prevents large changes in the thrust vector within a \(\Delta\)-seconds time interval, hence ensuring the smoothness of the thrust trajectory. \subsection{Computing the triggering time via bisection method} \label{subsec: min time} In this section, we introduce a numerical algorithm for optimization \eqref{opt: trigger} using an approximate triggering time sequence \(\tau_{[1, l]}\). To this end, we make the following assumption about optimization~\eqref{opt: trigger}. \begin{assumption}\label{asp: bound} There exist \(\underline{\tau}_1, \underline{\tau}_2, \ldots, \underline{\tau}_l\) and \(\overline{\tau}_1, \overline{\tau}_2, \ldots, \overline{\tau}_l\) with \(\underline{\tau}_j\leq \overline{\tau}_j\) for all \(j=1, 2, \ldots, l\), such that 1) optimization~\eqref{opt: trigger} is feasible if \(\tau_{[1, l]}=\overline{\tau}_{[1, l]}\) and \(t=\sum_{j=1}^l\overline{\tau}_j\), and 2) optimization~\eqref{opt: trigger} is infeasible if \(\tau_{[1, l]}=\underline{\tau}_{[1, l]}\) and \(t=\sum_{j=1}^l\underline{\tau}_j\). \end{assumption} \begin{remark} Assumption~\ref{asp: bound} implies that optimization~\eqref{opt: trigger} is feasible if we allocate a sufficient amount of time for each corridor, and infeasible otherwise. Using the length of each corridor and upper and lower bounds on the average speed of the quadrotor, we can obtain such an interval estimate for each corridor. \end{remark} Given lower and upper bound sequences that satisfy Assumption~\ref{asp: bound}, we introduce a heuristic method, summarized in Algorithm~\ref{alg: trigger}. The idea is to first use a bisection search method to tighten the interval bounds for each corridor, one at a time, and then use these tightened upper bounds to solve optimization~\eqref{opt: trigger}. \begin{algorithm}[!ht] \caption{Trajectory optimization with time-triggered corridor constraints} \begin{algorithmic}[1] \Require Two time sequences \(\overline{\tau}_{[1, l]}\) and \(\underline{\tau}_{[1, l]}\) that satisfy Assumption~\ref{asp: bound}, positive accuracy tolerance \(\epsilon\). \For{\(i=1, 2, \ldots, l\)}\label{alg: trigger start} \While{\(\overline{\tau}_i-\underline{\tau}_i>\epsilon\)}\label{alg: bisec start} \State{\(\hat{\tau}_j=\begin{cases} \lfloor \frac{1}{2}(\overline{\tau}_i+\underline{\tau}_i)\rfloor, & \text{if \(j=i\).}\\ \overline{\tau}_j, & \text{otherwise.} \end{cases}\)} \State{Let \(t=\sum_{j=1}^l\hat{\tau}_j\) and \(\tau_{[1, l]}=\hat{\tau}_{[1, l]}\) in optimization~\eqref{opt: trigger}. 
} \If{optimization \eqref{opt: trigger} is infeasible} \State{\(\underline{\tau}_i\gets \hat{\tau}_i\)} \Else \State{\(\overline{\tau}_i\gets \hat{\tau}_i\)} \EndIf \EndWhile\label{alg: bisec end} \EndFor\label{alg: trigger end} \State{Let \(\tau_{[1, l]}=\overline{\tau}_{[1, l]}\) and \(t=\sum_{j=1}^l\overline{\tau}_j\) in optimization \eqref{opt: trigger}, then solve for the optimal trajectory \(u^\star_{[0, t]}\).}\label{alg: solve trigger} \Ensure \(u^\star_{[0, t]}\) \end{algorithmic} \label{alg: trigger} \end{algorithm} We note that the upper bound sequence \(\overline{\tau}_{[1, l]}\) computed by the for-loop between line~\ref{alg: trigger start} and line~\ref{alg: trigger end} in Algorithm~\ref{alg: trigger} is not necessarily the same as the sequence in Assumption~\ref{asp: bound}. Consequently, the instance of optimization~\eqref{opt: trigger} solved in line~\ref{alg: solve trigger} is merely an \emph{approximation} of optimization~\eqref{opt: nonconvex}. However, such an approximation has the following attractive properties. First, a feasible solution is guaranteed to exist by construction, and each convex corridor constraint is active within the corresponding time interval. Second, up to the accuracy tolerance \(\epsilon\), each element of the upper bound sequence is reduced greedily until optimization~\eqref{opt: trigger} becomes infeasible, which reduces the conservativeness of the initial estimates. \section{Trajectory optimization with time-triggered constraints} \label{sec: trigger} We will introduce the quadrotor trajectory optimization with time-triggered corridor constraints. To this end, we will first consider the trajectory optimization with nonconvex corridor constraints, then propose an approximate problem that replaces these nonconvex corridor constraints with convex ones. \subsection{Trajectory optimization with nonconvex corridor constraints} We will introduce a trajectory optimization problem subject to nonconvex corridor constraints. In this problem, we use the quadrotor dynamics in \eqref{sys: DT}. We let \(t\in\mathbb{N}\) denote the total length of the trajectory. We let \(\overline{r}_0, \overline{v}_0\in\mathbb{R}^3\) denote the known initial position and initial velocity of the quadrotor, respectively. Similarly, we let \(\overline{r}_f, \overline{v}_f, \overline{u}_f\in\mathbb{R}^3\) denote the known final position, final velocity, and final thrust of the quadrotor, respectively. We will use the sets \(\mathbb{V}\), \(\mathbb{U}_a\cap\mathbb{U}_b\), and \(\mathbb{W}\) defined in \eqref{eqn: v ball}, \eqref{eqn: u muffin}, and \eqref{eqn: w ball}, respectively. We let \(\{\mathbb{H}_1, \mathbb{H}_2, \ldots, \mathbb{H}_l\}\) denote a sequence of corridors, where \(\mathbb{H}_i\) is defined by \eqref{eqn: cylinder} for all \(i\in[1, l]\). We now introduce the following quadrotor trajectory optimization with nonconvex state constraints, where \(\omega\in\mathbb{R}_+\) is a weight scalar for the cost of the thrust rates: by changing the value of \(\omega\), one can obtain different trade-offs between the cost of the thrust and that of the thrust rates.
\vspace{1em} \noindent\fbox{% \centering \parbox{0.96\linewidth}{% {\bf{Trajectory optimization with nonconvex corridor constraints}} \begin{equation}\label{opt: nonconvex} \begin{array}{ll} & \underset{u_{[0, t]}}{\mbox{minimize}} \enskip \frac{1}{2}\sum_{k=0}^{t} \norm{u_k}^2+\frac{\omega}{2}\sum_{k=0}^{t-1} \norm{u_{k+1}-u_k}^2\\ &\mbox{subject to} \\ &r_{k+1}=r_k+\Delta v_k+\frac{\Delta^2}{3 m}(u_k+\frac{1}{2}u_{k+1})+\frac{\Delta^2}{2}g,\\ &v_{k+1}=v_k+\frac{\Delta}{2m}(u_k+u_{k+1})+\Delta g, \enskip \forall k\in[0, t-1],\\ & u_{k+1}-u_k\in\mathbb{W}, \enskip \forall k\in[0, t-1],\\ &u_k\in\mathbb{U}_a\cap \mathbb{U}_b,\enskip v_k\in\mathbb{V}, \enskip r_k\in\bigcup_{i=1}^{l}\mathbb{H}_i,\enskip \forall k\in[0, t],\\ & r_0=\overline{r}_0, \enskip v_0=\overline{v}_0,\enskip r_t=\overline{r}_f, \enskip v_t=\overline{v}_f, \enskip u_t=\overline{u}_f. \end{array} \end{equation} }% } \vspace{1em} Optimization \eqref{opt: nonconvex} is equivalent to a mixed-integer optimization problem. To see this equivalence, notice that optimization~\eqref{opt: nonconvex} contains the following constraints: \begin{equation}\label{eqn: noncvx corridor} r_k\in\bigcup\limits_{i=1}^{l}\mathbb{H}_i, \enskip \forall k\in[0, t]. \end{equation} The constraints in \eqref{eqn: noncvx corridor} are equivalent to the following set of constraints with binary variables: \begin{subequations}\label{eqn: mixed-integer corridor} \begin{align} &\norm{(r_k-c_i)-\langle d_i, r_k-c_i\rangle d_i}\leq b_{ik}\rho_i+\mu(1-b_{ik}),\label{eqn: big M cylinder 1}\\ & |\langle d_i, r_k-c_i\rangle|\leq b_{ik}\eta_i+\mu(1-b_{ik}),\label{eqn: big M cylinder 2}\\ & b_{ik}\in\{0, 1\}, \enskip \sum_{i=1}^{l} b_{ik}\geq 1, \enskip \forall k\in[0, t], i\in[1, l],\label{eqn: binary} \end{align} \end{subequations} where \(\mu\in\mathbb{R}_+\) denotes a very large positive scalar. Indeed, if \(b_{ik}=0\), then the constraints in \eqref{eqn: big M cylinder 1} and \eqref{eqn: big M cylinder 2} become redundant, since \(\mu\) is very large. On the other hand, if \(b_{ik}=1\), then the constraints in \eqref{eqn: big M cylinder 1} and \eqref{eqn: big M cylinder 2} imply that \(r_k\in\mathbb{H}_i\). Finally, the constraints in \eqref{eqn: binary} ensure that there exists \(i\in[1, l]\) such that \(b_{ik}=1\), hence \(r_k\in\mathbb{H}_i\) for some \(i\in[1, l]\). Therefore, the constraints in \eqref{eqn: noncvx corridor} and \eqref{eqn: mixed-integer corridor} are equivalent. Since optimization \eqref{opt: nonconvex} is equivalent to a mixed-integer optimization, the computation time for solving optimization \eqref{opt: nonconvex} increases exponentially with the number of integer variables, which in this case is jointly determined by the trajectory length \(t\) and the number of corridors \(l\). Consequently, a real-time solution method is only possible if the values of \(t\) and \(l\) are both sufficiently small. \subsection{Trajectory optimization with time-triggered corridor constraints} We will show that optimization~\eqref{opt: nonconvex} takes a simpler form if we know \emph{a priori} the sequence of corridors that the optimal trajectory traverses. To this end, we start with the following assumption on the ordering of the corridor sequence \(\{\mathbb{H}_1, \mathbb{H}_2, \ldots, \mathbb{H}_l\}\). \begin{assumption}\label{asp: trigger} Suppose optimization~\eqref{opt: nonconvex} is feasible.
Let \(u_{[0, t]}\) be an optimal thrust trajectory for optimization \eqref{opt: nonconvex}, and let \(r_{[0, t]}\) and \(v_{[0, t]}\) be the corresponding position and velocity trajectories, which together with \(u_{[0, t]}\) satisfy the constraints in optimization \eqref{opt: nonconvex}. There exist \(\tau_1, \tau_2, \ldots, \tau_l\in\mathbb{R}_{+}\) such that \(t=\sum_{j=1}^l\tau_j\) and \(r_k\in\mathbb{H}_i\) for all \(k\in[\sum_{j=1}^{i-1}\tau_j, \sum_{j=1}^i\tau_j]\) and \(i\in[1, l]\), where \(\sum_{j=1}^0\tau_j\coloneqq 0\). \end{assumption} For Assumption~\ref{asp: trigger} to hold, we need to know \emph{a priori} the order in which the optimal trajectory traverses the corridors. Many corridor generating algorithms, such as convex lifting, can provide such an ordered sequence of corridors; see \cite{ioan2019obstacle,ioan2020navigation} for some recent examples. Assumption~\ref{asp: trigger} also implies that no corridor appears more than once along the optimal corridor path. Since reentering the same corridor would increase the value of the objective function in optimization~\eqref{opt: nonconvex}, such an implication typically holds in practice. Under Assumption~\ref{asp: trigger}, it is tempting to replace the nonconvex corridor constraints in \eqref{eqn: noncvx corridor} with time-varying constraints. After this replacement, optimization~\eqref{opt: nonconvex} becomes optimization~\eqref{opt: trigger} below. \vspace{1em} \noindent\fbox{% \centering \parbox{0.96\linewidth}{ {\bf{Trajectory optimization with time-triggered corridor constraints}} \begin{equation}\label{opt: trigger} \begin{array}{ll} & \underset{u_{[0, t]}}{\mbox{minimize}} \enskip \frac{1}{2}\sum_{k=0}^t \norm{u_k}^2+\frac{\omega}{2}\sum_{k=0}^{t-1} \norm{u_{k+1}-u_k}^2\\ &\mbox{subject to} \\ &r_{k+1}=r_k+\Delta v_k+\frac{\Delta^2}{3 m}(u_k+\frac{1}{2}u_{k+1})+\frac{\Delta^2}{2}g,\\ &v_{k+1}=v_k+\frac{\Delta}{2m}(u_k+u_{k+1})+\Delta g, \enskip \forall k\in[0, t-1],\\ & u_{k+1}-u_k\in\mathbb{W}, \enskip \forall k\in[0, t-1],\\ &u_k\in\mathbb{U}_a\cap\mathbb{U}_b,\enskip v_k\in\mathbb{V},\enskip \forall k\in[0, t],\\ & r_k\in\mathbb{H}_i, \enskip \forall k\in[\sum_{j=1}^{i-1}\tau_j, \sum_{j=1}^i\tau_j], \enskip i\in[1, l],\\ & r_0=\overline{r}_0, \enskip v_0=\overline{v}_0,\enskip r_t=\overline{r}_f, \enskip v_t=\overline{v}_f, \enskip u_t=\overline{u}_f. \end{array} \end{equation} }% } \vspace{1em} The following proposition shows that, under Assumption~\ref{asp: trigger}, solving optimization~\eqref{opt: trigger} is equivalent to solving optimization~\eqref{opt: nonconvex}. \begin{proposition}\label{prop: trigger} Suppose that Assumption~\ref{asp: trigger} holds. If \(u^\star_{[0, t]}\) is an optimal thrust trajectory for optimization~\eqref{opt: trigger}, then \(u^\star_{[0, t]}\) is an optimal thrust trajectory for optimization~\eqref{opt: nonconvex}. \end{proposition} \begin{proof} Since Assumption~\ref{asp: trigger} holds, optimization~\eqref{opt: nonconvex} has at least one optimal solution, and so does optimization~\eqref{opt: trigger}. Let \(u^\star_{[0, t]}\) be an optimal solution for optimization~\eqref{opt: trigger}, \(u_{[0, t]}\) be an optimal solution for optimization \eqref{opt: nonconvex}, \(\phi(u_{[0, t]})=\frac{1}{2}\sum_{k=0}^t \norm{u_k}^2+\frac{\omega}{2}\sum_{k=0}^{t-1} \norm{u_{k+1}-u_k}^2\), and \(\phi(u^\star_{[0, t]})=\frac{1}{2}\sum_{k=0}^t \norm{u_k^\star}^2+\frac{\omega}{2}\sum_{k=0}^{t-1} \norm{u_{k+1}^\star-u_k^\star}^2\).
First, since trajectory \(u_{[0, t]}\) also satisfies the constraints in \eqref{opt: trigger} and \(u^\star_{[0, t]}\) is optimal for optimization~\eqref{opt: trigger}, we must have \(\phi(u^\star_{[0, t]})\leq \phi(u_{[0, t]})\). Second, Assumption~\ref{asp: trigger} implies that \(u^\star_{[0, t]}\) also satisfies the constraints in optimization~\eqref{opt: nonconvex}. Combining this fact with the assumption that \(u_{[0, t]}\) is optimal for optimization~\eqref{opt: nonconvex}, we conclude that \(\phi(u_{[0, t]})\leq \phi(u^\star_{[0, t]})\). Therefore, we conclude that \(u^\star_{[0, t]}\) satisfies the constraints in optimization~\eqref{opt: nonconvex} and \(\phi(u^\star_{[0, t]})=\phi(u_{[0, t]})\). Hence \(u^\star_{[0, t]}\) is also optimal for optimization~\eqref{opt: nonconvex}. \end{proof} Proposition~\ref{prop: trigger} provides valuable insights into solving optimization~\eqref{opt: nonconvex}: rather than determining the values of the \((t+1)l\) binary variables in \eqref{eqn: mixed-integer corridor}, we only need to determine the values of the \(l\) integers \(\tau_1, \tau_2, \ldots, \tau_l\) that determine the triggering times in optimization~\eqref{opt: trigger}. Although computing the exact value of this sequence is as difficult as solving optimization~\eqref{opt: nonconvex} itself, one can compute a good approximation very efficiently, as we will show next.
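For completeness, the following Python sketch mirrors the greedy bisection search of Algorithm~\ref{alg: trigger}; it is an illustration under stated assumptions rather than a definitive implementation, and the feasibility oracle \texttt{is\_feasible} is a hypothetical placeholder for a call to a convex solver on optimization~\eqref{opt: trigger}:
\begin{verbatim}
import math

# Greedy per-corridor bisection over the time budgets tau_1, ..., tau_l,
# following Algorithm (alg: trigger). tau_lo and tau_hi are lists that
# satisfy Assumption (asp: bound); is_feasible(taus) sets t = sum(taus)
# and tau_[1,l] = taus in optimization (opt: trigger) and reports
# whether the resulting convex program is feasible.
def tighten_triggering_times(tau_lo, tau_hi, is_feasible, eps=1):
    for i in range(len(tau_hi)):
        while tau_hi[i] - tau_lo[i] > eps:
            trial = list(tau_hi)
            trial[i] = math.floor(0.5 * (tau_hi[i] + tau_lo[i]))
            if is_feasible(trial):
                tau_hi[i] = trial[i]   # tighten the upper bound
            else:
                tau_lo[i] = trial[i]   # raise the lower bound
    return tau_hi   # budgets used to solve (opt: trigger) one last time
\end{verbatim}
The returned budgets correspond to line~\ref{alg: solve trigger} of Algorithm~\ref{alg: trigger}, where optimization~\eqref{opt: trigger} is solved once more to obtain \(u^\star_{[0, t]}\).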
\section{Observations of Diffuse Light} The study of the intracluster light (ICL) began with Zwicky's (1951) claimed discovery of an excess of light between galaxies in the Coma Cluster. Its low surface brightness ($\mu_B > 28$ mag arcsec$^{-2}$) makes it difficult to study the ICL systematically (Oemler 1973; Thuan \& Kormendy 1977; Bernstein et al. 1995; Gregg \& West 1998; Gonzalez et al. 2000). The presence of diffuse light can be revealed by tails, arcs or plumes, which are narrow ($\sim 2$ kpc) and extended ($\sim 50 - 100$ kpc), or by a halo of light on cluster scales, present in the 200 - 700 kpc radial range. In the Coma cluster, Adami et al. (2005) have searched for small ICL features using a wavelet analysis and reconstruction technique. They identified 4 extended sources, with diameters of 50 to 100 kpc and V-band magnitudes in the range 14.5 - 16.0. Quantitative, large scale measurements of the diffuse light in the Virgo cluster were recently attempted by Mihos et al. (2005): these deep observations show the intricate and complex structure of the ICL in Virgo. On cluster scales, the presence of diffuse light can be revealed when the whole distribution of stars in clusters is analysed in a way similar to Schombert's (1986) photometry of brightest cluster galaxies (BCGs). When this component is present, the surface-brightness profiles centred on the BCG turn strongly upward in a $(\mu,R^{1/\alpha})$ plot for radii from 200 to 700 kpc. This approach to ICL low surface brightness measurements was taken by Zibetti et al. (2005), who studied the spatial distribution and colors of the ICL in 683 clusters of galaxies at $z\simeq 0.25$ by stacking their images, after rescaling them to the same metric size and masking out resolved sources. In nearby galaxy clusters, intracluster planetary nebulae (ICPNe) can be used as tracers of the ICL; this has the advantages that ICPNe can be detected with deep narrow-band images and that their radial velocities can be measured to investigate the dynamics of the ICL component. ICPN candidates have been identified in Virgo (Arnaboldi et al. 1996, 2002, 2003; Feldmeier et al. 2003, 2004a) and Fornax (Theuns \& Warren 1997), with significant numbers of ICPN velocities beginning to become available (Arnaboldi et al. 2004). The overall amount of the ICL in galaxy clusters is still a matter of debate. However, there is now observational evidence that it may depend on the physical parameters of clusters, with rich galaxy clusters containing 20\% or more of their stars in the intracluster component (Gonzalez et al. 2000; Gal-Yam et al. 2003), while the Virgo Cluster has a fraction of $\sim 10$\% in the ICL (Ferguson et al. 1998; Durrell et al. 2002; Arnaboldi et al. 2002, 2003; Feldmeier et al. 2004a), and the fraction of detected intragroup light (IGL) is 1.3\% in the M81 group (Feldmeier et al. 2004b) and less than 1.6\% in the Leo I group (Castro-Rodr\'iguez et al. 2003). Recent hydrodynamical simulations of galaxy cluster formation in a $\Lambda$CDM cosmology have corroborated this observational evidence: in these simulated clusters, the fraction of the ICL increases from $\sim$ 10\% $-$ 20\% in clusters with $10^{14} M_\odot$ to up to 50\% for very massive clusters with $10^{15} M_\odot$ (Murante et al. 2004). A strong correlation between ICL fraction and cluster mass is also predicted by semi-analytical models of structure formation (Lin \& Mohr 2004).
The mass fraction and physical properties of the ICL, and their dependence on cluster mass, are related to the mechanisms by which the ICL is formed. Theoretical studies predict that if most of the ICL is removed from galaxies because of their interaction with the galaxy cluster potential or in fast encounters with other galaxies, the amount of the ICL should be a function of the galaxy number density (Richstone \& Malumuth 1983; Moore et al. 1996). Early theoretical studies of the origin and evolution of the ICL suggested that it might account for between 10\% and 70\% of the total cluster luminosity (Richstone \& Malumuth 1983; Malumuth \& Richstone 1984; Miller 1983; Merritt 1983, 1984). These studies were based on analytic estimates of tidal stripping or simulations of individual galaxies orbiting in a smooth gravitational potential. Nowadays, cosmological simulations allow us to study in detail the evolution of galaxies in cluster environments (see, e.g., Moore et al. 1996; Dubinski 1998; Murante et al. 2004; Willman et al. 2004; Sommer-Larsen et al. 2005). Napolitano et al. (2003) investigated the ICL for a Virgo-like cluster in one of these hierarchical simulations, predicting that the ICL in such clusters should be unrelaxed in velocity space and show significant substructures; spatial substructures have been observed in one field in the ICPNe identified with [O III] and H$\alpha$ (Okamura et al. 2002). \section{Diffuse light in clusters from cosmological simulations} Cosmological simulations of structure formation facilitate studies of the diffuse light and its expected properties. Dubinski (1998) constructed compound models of disk galaxies and placed them into a partially evolved simulation of cluster formation, allowing an evolutionary study of the dark matter and stellar components independently. Using an empirical method to identify stellar tracer particles in high-resolution cold dark matter (CDM) simulations, Napolitano et al. (2003) studied a Virgo-like cluster, finding evidence of a young dynamical age of the intracluster component. The main limitation of these approaches is the restriction to collisionless dynamics. Murante et al. (2004) analyzed for the first time the ICL formed in a cosmological hydrodynamical simulation including a self-consistent model for star formation. In this method, no assumptions about the structural properties of the forming galaxies need to be made, and the gradual formation process of the stars, as well as their subsequent dynamical evolution in the non-linearly evolving gravitational potential, can be seen as a direct consequence of the $\Lambda$CDM initial conditions. Murante et al. (2004) identified 117 clusters in a large volume of $192^3\, h^{-3}{\rm Mpc}^3$, and analyzed the correlations of the properties of the diffuse light with, e.g., cluster mass and X-ray temperature. Galaxies at the centers of these clusters have surface-brightness profiles which turn strongly upward in a $(\mu,R^{1/\alpha})$ plot. This light excess can be explained as IC stars orbiting in the cluster potential. When its density distribution is integrated along the line-of-sight (LOS), the slopes from the Murante et al. (2004) simulations are in agreement with those observed for the surface brightness profiles of the diffuse light in nearby clusters. At large cluster radii, the surface brightness profile of the ICL appears more centrally concentrated than the surface brightness profile of cluster galaxies (see Figure~\ref{fig1}).
The prediction that the ICL is more centrally concentrated than the galaxy cluster light has been tested observationally. Zibetti et al. (2005) have presented surface photometry from the stacking of 683 clusters of galaxies imaged in the g-, r-, and i-bands in the SDSS. They have been able to measure surface brightness as deep as $\mu_r \sim 32$ mag arcsec$^{-2}$ for the ICL and $\mu_r \sim 29.0$ mag arcsec$^{-2}$ for the total light, out to 700 kpc from the BCG. They find that the ICL is significantly more concentrated than the total light. From the simulations carried out by Murante et al. (2004), they also obtained the redshifts $z_{form}$ at which the stars formed: those in the IC component have a $z_{form}$ distribution which differs from that of the stars in cluster galaxies, see Figure~\ref{fig2}. The ``unbound'' stars are formed earlier than the stars in galaxies. The prediction of an old age for the stars in the diffuse component agrees with the HST observation of the IC RGB stars in the Virgo IC field, i.e. $t> 2$ Gyr (Durrell et al. 2002), and points toward early tidal interactions as the preferred formation process for the ICL. The different age and spatial distribution of the stars in the diffuse component indicate that it is a stellar population that is not a random sampling of the stellar populations in cluster galaxies. \begin{figure} \includegraphics[height=.3\textheight]{arnaboldi.fig1.eps} \caption{Schombert-like analysis of the {\it stacked} 2D radial density profile (BCG + ICL) of clusters in the Murante et al. (2004) simulation (triangles). The light excess is evident at large cluster radii. The solid line shows the function $ \log \Sigma(r) = \log \Sigma_e -3.33 [(r/r_e)^{1/\alpha} -1]$, with best-fit parameters $\log \Sigma_e = 20.80$, $r_e = 0.005$, $\alpha = 3.66$ to the BCG inner stellar light. Also shown are the averaged 2D density profiles of stars in galaxies (dotted line) and in the field (dashed line). In the insets, results are shown from the same analysis for the most luminous clusters with $T>4$ keV (left panel), and for less luminous ones with $0<T<2$ keV (right panel). The resulting best-fit parameters are respectively $\log \Sigma_e = 16.47$, $r_e = 0.11$, $\alpha = 1.24$ and $\log \Sigma_e = 23.11$, $r_e = 0.00076$, $\alpha = 4.37$. In the main plot and in the insets the unit $(R/R_{200})^{1/\alpha}$ refers to the $\alpha$ values given by each Sersic profile. From Murante et al. (2004).}\label{fig1} \end{figure} Murante et al. (2004) studied the correlation between the fraction of stellar mass in the diffuse component and the clusters' total mass in stars, based on their statistical sample of 117 clusters. This fraction is $\sim 0.1$ for cluster masses $M > 10^{14}h^{-1}M_\odot$ and it increases with cluster mass: the more massive clusters have the largest fraction of diffuse light, see Figure~\ref{fig2}. For $M \sim 10^{15}h^{-1}M_\odot$, the simulations predict as many stars in the diffuse component as in cluster galaxies. \begin{figure} \includegraphics[height=.3\textheight]{arnaboldi.fig2.eps} \caption{Left: Fraction of stellar mass in diffuse light vs.\ cluster mass. Dots are for clusters in the simulated volume; asterisks show the average values of this fraction in 9 mass bins, with error bars. Right: histograms of clusters over the mean formation redshift of their respective bound (dashed) and IC (solid line) star particles. Mean formation redshifts are evaluated for each cluster as the average of the formation redshifts of its star particles. From Murante et al.
(2004).} \label{fig2} \end{figure} \subsection{Predicted dynamics of the ICL} In the currently favored hierarchical clustering scenario, fast encounters and tidal interactions within the cluster potential are the main drivers of the morphological evolution of galaxies in clusters. Fast encounters and tidal stirring cause a significant fraction of the stellar component of individual galaxies to be stripped and dispersed within the cluster in a few dynamical times. If the timescale for significant phase-mixing is of the order of a few cluster internal dynamical times, then a fraction of the ICL should still be located in long streams along the orbits of the parent galaxies. Detections of substructures in phase space would be a clear sign of late infall and harassment as the origin of the ICL. A high resolution simulation of a Virgo-like cluster in a $\Lambda$CDM cosmology was used to predict the velocity and clustering properties of the diffuse stellar component in the intracluster region at the present epoch (Napolitano et al. 2003). The simulated cluster builds up hierarchically, and tidal interactions between member galaxies and the cluster potential produce a diffuse stellar component free-flying in the intracluster medium. The simulations are able to predict the radial velocity distribution expected in spectroscopic follow-up surveys: they find that at $z=0$ the intracluster stellar light is mostly dynamically unmixed and clustered in structures on scales of about 50 kpc at a radius of $400-500$ kpc from the cluster center. \begin{figure} \includegraphics[height=.3\textheight]{arnaboldi.fig3.eps} \caption{Projected phase-space diagram for a simulated ICPN sample in an N-body simulation of a Virgo-like cluster. From Napolitano et al. (2003).} \label{fig3} \end{figure} Willman et al. (2004) and Sommer-Larsen et al. (2005) have studied the dynamics of the ICL in cosmological hydrodynamical simulations of the formation of a rich galaxy cluster. In a Coma-like rich cluster, Willman et al. (2004) find that the ICL shows significant substructure in velocity space, tracing separate streams of stripped IC stars. Evidence is given that, despite an unrelaxed distribution, IC stars are useful mass tracers when LOS velocities are measured in several fields over a range of radii. According to Sommer-Larsen et al. (2005), IC stars are colder than cluster galaxies. This is to be expected because the diffuse light is more centrally concentrated than the cluster galaxies, as found in cosmological simulations (see Murante et al. 2004) and confirmed by observations of intermediate redshift clusters (Zibetti et al. 2005), and both the ICL and the galaxies are in equilibrium with the same cluster potential. \section{Intracluster planetary nebulae in the Virgo cluster: the projected phase space distribution} Intracluster planetary nebulae (ICPNe) have several unique features that make them ideal for probing the ICL. The diffuse envelope of a PN re-emits 15\% of the UV light of the central star in one bright optical emission line, the green [OIII]$\lambda 5007$ \AA\ line. PNe can therefore readily be detected in external galaxies out to distances of 25 Mpc, and their velocities can be determined from moderate resolution $(\lambda /\Delta \lambda \sim 5000)$ spectra: this enables kinematical studies of the IC stellar population. PNe trace stellar luminosity and therefore provide an estimate of the total IC light.
Also, through the [OIII] $\lambda 5007$ \AA\ planetary nebula luminosity function (PNLF), PNe are good distance indicators, and the observed shape of the PNLF provides information on the LOS distribution of the IC starlight. Therefore ICPNe are useful tracers with which to study the spatial distribution, kinematics, and metallicity of the diffuse stellar population in nearby clusters. \subsection{Current narrow band imaging surveys} Several groups (Arnaboldi et al. 2002, 2003; Aguerri et al. 2005; Feldmeier et al. 2003, 2004a) have embarked on narrow-band [OIII] imaging surveys in the Virgo cluster, with the aim of determining the radial density profile of the diffuse light and gaining information on the velocity distribution via subsequent spectroscopic observations of the obtained samples. Given the use of the PNLF as a distance indicator, one can also obtain valuable information on the 3D shape of the Virgo cluster from these ICPN samples. \begin{figure} \includegraphics[width=.6\textwidth]{arnaboldi.fig4.eps} \caption{Aguerri et al. (2005) surveyed fields in the Virgo cluster core. The CORE field was observed with the ESO MPI 2.2m telescope, and the SUB field with the Suprime Cam at the 8.2m Subaru telescope. The FCJ field is from the prime focus camera of the Kitt Peak 4m telescope, and the lower right field, LPC, was observed at the La Palma INT. From Aguerri et al. (2005).} \label{fig4} \end{figure} Wide-field mosaic cameras, such as the WFI on the ESO MPI 2.2m telescope and the Suprime Cam on the Subaru 8.2m, allow us to identify the ICPNe associated with the extended ICL (Arnaboldi et al. 2002, 2003; Okamura et al. 2002; Aguerri et al. 2005). These surveys require the use of data reduction techniques suited to mosaic images, and also the development and refinement of selection criteria based on color-magnitude diagrams from photometric catalogs produced with SExtractor (Bertin \& Arnouts 1996). The data analysed by Aguerri et al. (2005) constitute a sizable sample of ICPNe in the Virgo core region, constructed homogeneously and according to rigorous selection criteria; a layout of the pointings is shown in Figure~\ref{fig4}. From the study of five wide fields they conclude that the number density plot in Figure~\ref{fig6} shows no clear trend with distance from the cluster center at M87, except that the value in the innermost FCJ field is high. However, the spectroscopic results of Arnaboldi et al. (2004) have shown that 12/15 PNe in this field have a low velocity dispersion of 250 km s$^{-1}$, i.e., they in fact belong to the outer halo of M87, which thus extends to at least 65 kpc radius. In the SUB field, 8/13 PNe belong to the similarly cold, extended halo of M84, while the remaining PNe are observed at velocities close to the systemic velocities of M86 and NGC 4388, the two other large galaxies in or near this field. It is possible that in a cluster as young and unrelaxed as Virgo, a substantial fraction of the ICL is still bound to the extended halos of galaxies, whereas in denser and older clusters these halos might already have been stripped. If so, it is not inappropriate to already count the luminosity in these halos as part of the ICL. However, in Figure~\ref{fig6} the PN number density as a function of radius is also shown for the case in which the PNe in the outer halos of M87 and M84 are removed from the FCJ and SUB samples.
In this case, the resulting number density is even more nearly flat with radius, but there are still significant field-to-field variations; in particular, the remaining number densities in SUB and LPC are low. When one wishes to compare the luminosity of the ICL at the positions of the Aguerri et al. (2005) fields with the luminosity from the Virgo galaxies, further uncertainties enter, because the luminosities of nearby Virgo galaxies depend very much on the location and field size surveyed in the Virgo Cluster. Aguerri et al. (2005) therefore consider their reported intervals in surface brightness to be their primary result, while the relative fractions of the ICL with respect to the Virgo galaxy light are evaluated for comparison with previous ICPN works and are considered to be more uncertain. From the study of four wide fields in the Virgo core, Aguerri et al. (2005) obtain a mean surface luminosity density of $2.7 \times 10^6$ L$_{B\odot}$ arcmin$^{-2}$, rms = $2.1 \times 10^6$ L$_{B\odot}$ arcmin$^{-2}$, and a mean surface brightness of $\mu_B$ = 29.0 mag arcsec$^{-2}$. Their best estimate of the ICL fraction with respect to the light in galaxies in the Virgo core is $\sim 5\%$. However, there are significant field-to-field variations. The fraction of the ICL versus total light ranges from $\sim 8\%$ in the CORE and FCJ fields to less than 1\% in the LPC field, which in its low ICL fraction is similar to low-density environments (Castro-Rodr\'iguez et al. 2003). This latter field corresponds to the lowest luminosity density in the mosaic image of the Virgo core region from Mihos et al. (2005). \begin{figure} \includegraphics[height=.3\textheight]{arnaboldi.fig7.eps} \caption{Number density of PNe (top) and surface brightness (bottom) in our surveyed fields. In the top panel, circles show the measured number densities from Table 3 of Aguerri et al. (2005), and error bars denote the Poisson errors. For the LPC field our upper limit is given. For the RCN1 field, at the largest distance from M87, the uncertainty from the correction for Ly$\alpha$ emitters is substantial and is included in the error bar. The large stars with Poisson error bars show the number densities of PNe in the FCJ and SUB fields not including PNe bound to the halos of M87 and M84. In the lower panel, circles show the surface brightness inferred with the average value of $\alpha$ in Table 4 of Aguerri et al. (2005), and error bars show the range of values implied by the Poisson errors and the range of adopted $\alpha$ values. Triangles represent the measurements of the ICL from RGB stars; error bars indicate uncertainties in the metallicity, age, and distance of the parent population, as discussed in Durrell et al. (2002). The stars indicate the surface brightness associated with the ICPNe in the FCJ and SUB fields that are not associated with the M87 or M84 halos but are free flying in the Virgo Cluster potential (Arnaboldi et al. 2004). The dashed line and diamonds show the B-band luminosity of Virgo galaxies averaged in rings (Binggeli et al. 1987). Distances are relative to M87. The ICL shows no trend with cluster radius out to 150 arcmin. From Aguerri et al. (2005).} \label{fig6} \end{figure} \section{Spectroscopic follow-up} ICPNe are the only component of the ICL whose kinematics can be measured at this time.
This is important since high-resolution N-body and hydrodynamical simulations predict that the ICL is unrelaxed, showing significant substructure in its spatial and velocity distributions in clusters similar to Virgo. The spectroscopic follow-up with FLAMES of the ICPN candidates selected from three survey fields in the Virgo cluster core was carried out by Arnaboldi et al. (2004). Radial velocities of 40 ICPNe in the Virgo cluster were obtained with the new multi-fiber FLAMES spectrograph on UT2 at the VLT. The spectra were taken for a homogeneously selected sample of ICPNe, previously identified in three $\sim 0.25$ deg$^2$ fields in the Virgo cluster core. For the first time, the $\lambda$ 4959 \AA\ line of the [OIII] doublet is seen in a large fraction (40\%) of the ICPN spectra, and a large fraction of the photometric candidates with m(5007) $ < 27.2$ is spectroscopically confirmed. \subsection{The LOS velocity distributions of ICPNe in the Virgo cluster core.} With these data, Arnaboldi et al. (2004) were able for the first time to determine radial velocity distributions of ICPNe and use these to investigate the dynamical state of the Virgo cluster. Figure~\ref{fig5} shows an image of the Virgo cluster core with the positions of the imaged fields. The radial velocity distributions obtained from the FLAMES spectra in three of these fields are also displayed in Figure~\ref{fig5}. Clearly the velocity distribution histograms for the three pointings are very different. In the FCJ field, the ICPN distribution is dominated by the halo of M87. There are 3 additional outliers, 2 at low velocity, which are also in the brightest PNLF bin and therefore may be in front of the cluster. The surface brightness of the ICL associated with the 3 outliers, i.e.\ the genuine ICPNe in the FCJ field, amounts to $\mu_B \simeq 30.63$ mag arcsec$^{-2}$, in agreement with the surface brightness measurements of Ferguson et al. (1998) and Durrell et al. (2002) of the intracluster red giant stars. The M87 peak of the FCJ velocity distribution contains 12 velocities with $\bar{v}_{p} = 1276\pm 71$ km s$^{-1}$ and $\sigma_{p} = 247\pm 52$ km s$^{-1}$. The average velocity is consistent with that of M87, $v_{sys} = 1258$ km s$^{-1}$. The distance of the center of the FCJ field from the center of M87 is $15.'0\simeq\,65$ kpc for an assumed M87 distance of $15$ Mpc. The value of $\sigma_p$ agrees very well with the stellar velocity dispersion profile extrapolated outwards from $\simeq 150''$ in Figure~5 of Romanowsky \& Kochanek (2001) and falls in the range spanned by their dynamical models for the M87 stars. The main result from our measurement of $\sigma_p$ is that M87 has a stellar halo in approximate dynamical equilibrium out to at least $65$ kpc. In the CORE field, the distribution of ICPN LOS velocities is clearly broader than in the FCJ field. It has $\bar{v}_{C} = 1491\pm 290$ km s$^{-1}$ and $\sigma_{C} = 1000\pm 210$ km s$^{-1}$. The CORE field is in a region of Virgo devoid of bright galaxies, but contains 7 dwarfs and 3 low luminosity E/S galaxies near its S/W borders. None of the confirmed ICPNe lies within a circle of three times half the major axis diameter of any of these galaxies, and there are no correlations of their velocities with the velocities of the nearest galaxies where these are known. Thus in this field there is a true IC stellar component.
The mean velocity of the ICPNe in this field is consistent with that of the 25 Virgo dE and dS0 galaxies within 2$^\circ$ of M87, $<v_{\rm dE,M87}> = 1436\pm108$ km s$^{-1}$ (Binggeli et al.\ 1987), and with that of the 93 dE and dS0 Virgo members, $<v_{\rm dE,Virgo}> = 1139\pm67$ km s$^{-1}$ (Binggeli et al.\ 1993). However, the velocity dispersion of these galaxies is smaller, $\sigma_{\rm dE,M87}=538\pm 77$ km s$^{-1}$ and $\sigma_{\rm dE,Virgo}=649\pm 48$ km s$^{-1}$. \begin{figure} \includegraphics[height=.6\textheight,angle=-90]{arnaboldi.fig5.eps} \caption{ICPN radial velocity distributions in the three pointings (FCJ, CORE, and SUB) from Aguerri et al. (2005). In the FCJ panel, the blue dashed line shows a Gaussian with $\bar{v}_{rad} = 1276$ km s$^{-1}$ and $\sigma_{rad} = 247$ km s$^{-1}$. In the CORE panel, the green dashed line shows a Gaussian with $\bar{v}_{rad} = 1436$ km s$^{-1}$ and $\sigma_{rad} = 538$ km s$^{-1}$, for the Virgo cluster dE and dS0 galaxies within 2$^\circ$ of M87 (from Binggeli et al. 1987). In the SUB panel, the dashed histogram shows radial velocities from the TNG spectroscopic follow-up (Arnaboldi et al. 2003), and the dashed red line shows a Gaussian with $\bar{v}_{rad} = 1080$ km s$^{-1}$ and $\sigma_{rad} = 286$ km s$^{-1}$. The dash-dotted lines show the SUB FLAMES spectra, including the spectra of HII regions with radial velocities in the M84 \& NGC~4388 redshift ranges.} \label{fig5} \end{figure} The inferred luminosity from the ICPNe in the CORE field is $1.8\times 10^9 L_{B,\odot}$. This is about three times the luminosity of all dwarf galaxies in this field, $5.3\times 10^8 L_{B,\odot}$, but an order of magnitude less than the luminosities of the three low-luminosity E/S galaxies near the field borders. Using the results of Nulsen \& B\"ohringer (1995) and Matsushita et al.\ (2002), Arnaboldi et al. (2004) estimate the mass of the M87 subcluster inside 310 kpc (the projected distance $D$ of the CORE field from M87) as $4.2\times 10^{13} M_\odot$, and compute a tidal parameter $T$ for all these galaxies as the ratio of the mean density within the CORE field to the mean density of the galaxy. They find $T=0.01-0.06$, independent of galaxy luminosity. Since $T\sim D^{-2}$, any of these galaxies whose orbit {\sl now} comes closer to M87 than $\sim 60$ kpc would be subject to severe tidal mass loss. Based on the evidence so far, a tantalizing possibility is that the ICPN population in the CORE field could be debris from the tidal disruption of small galaxies on nearby orbits in the M87 halo. In the SUB field the velocity distribution from the FLAMES spectra is again different from those in the CORE and FCJ fields. The histogram of the LOS velocities shows substructures related to M86, M84 and NGC 4388, respectively, and the corresponding projected phase space is shown in Figure~\ref{fig5}. The association with the three galaxies is strengthened when we plot the LOS velocities of the 4 HII regions (see Gerhard et al.\ 2002) detected with FLAMES in this pointing. The substructures in this distribution are highly correlated with the galaxy systemic velocities. The highest peak in the distribution coincides with M84, and even more so when we add the LOS velocities obtained previously at the TNG (Arnaboldi et al.\ 2003). The 10 TNG velocities give $\bar{v}_{\rm M84} = 1079\pm 103$ km s$^{-1}$ and $\sigma_{\rm M84} = 325\pm75$ km s$^{-1}$ within a square of $4 R_e \times 4 R_e$ of the M84 center. The 8 FLAMES velocities, going out to larger radii, give $\bar{v}_{\rm M84} = 891\pm 74$ km s$^{-1}$ and $\sigma_{\rm M84} = 208\pm54$ km s$^{-1}$.
Note that this includes the over-luminous PNe not previously attributed to M84. The combined sample of 18 velocities gives $\bar{v}_{\rm M84} = 996\pm 69$ km s$^{-1}$ and $\sigma_{\rm M84} = 293\pm50$ km s$^{-1}$. Most likely, all these PNe belong to a very extended halo around M84 (see the deep image in Arnaboldi et al.\ 1996). The somewhat low velocity with respect to M84 may be a sign of tidal stripping by M86. \section{Future prospects and Conclusions} The observations indicate that the diffuse light is important for understanding cluster evolution, the star formation history, and the enrichment of the intracluster medium. Measuring the projected phase space distribution of the IC stars constrains how and when this light originated, and the ICPNe are the only abundant stellar component of the ICL whose kinematics can be measured at this time. These measurements are not restricted to clusters within a distance of 25 Mpc: by using a technique similar to those adopted for studies of Ly$\alpha$ emitting galaxies at very high redshift, Gerhard et al. (2005) were able to detect PNe associated with the diffuse light in the Coma cluster, at 100 Mpc distance, in a field which was previously studied by Bernstein et al. (1995); see also O. Gerhard's contribution, this conference. It has now become possible to study ICL kinematics also in denser environments like the Coma cluster, and we can explore the effect of environments with different densities on galaxy evolution. \begin{theacknowledgments} M.A. would like to thank the organizing committee of the Conference on Planetary Nebulae as Astronomical Tools (Gdansk, Poland, 28 June--2 July 2005) for the invitation to give this review. This work has been done in collaboration with Ortwin E. Gerhard, Kenneth C. Freeman, J. Alfonso Aguerri, Massimo Capaccioli, Nieves Castro-Rodriguez, John Feldmeier, Fabio Governato, Rolf-P. Kudritzki, Roberto Mendez, Giuseppe Murante, Nicola R. Napolitano, Sadanori Okamura, Maurilio Pannella, and Naoki Yasuda. M.A. wishes to thank ESO for the support of this project and for the observing time allocated at the La Silla and Paranal telescopes. M.A. also wishes to thank the National Astronomical Observatory of Japan for the observing time allocated at the Subaru Telescope. This work has been supported by INAF and the Swiss National Foundation. \end{theacknowledgments}
\section{Introduction} We are interested in developing efficient numerical methods for high frequency wave propagation. For simplicity and clarity we take the following linear scalar wave equation to present the idea, \begin{equation}\label{eq:wave} \partial_t^2 u - c^2(\bd{x}) \Delta u = 0,\qquad \bd{x}\in\mathbb{R}^d, \end{equation} with WKB initial conditions, \begin{equation}\label{eq:WKBini} \begin{cases} u_0(\bd{x}) = A_0(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \\ \partial_t u_0(\bd{x}) = \frac{1}{\varepsilon} B_0(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \end{cases} \end{equation} where $u$ is the wave field, $d$ is the dimensionality and $\imath=\sqrt{-1}$ is the imaginary unit. We assume that the local wave speed $c(\bd{x})$ is a smooth function. The small parameter $\varepsilon \ll 1$ characterizes the high frequency nature of the wave. The proposed method can be generalized to other types of wave equations \cite{LuYang:MMS}. Numerical computation of high frequency wave propagation is an important problem arising in many applications, such as electromagnetic radiation and scattering, and seismic and acoustic wave propagation, to name just a few. It is a two-scale problem. The {\it large} length scale comes from the characteristic size of the medium, while the {\it small} length scale is the wavelength. The disparity between the two length scales makes direct numerical computations extremely hard. In order to achieve accurate results, the mesh size has to be chosen comparable to the wavelength or even smaller. On the other hand, the domain size is large, so that a huge number of grid points is needed. In order to compute high frequency wave propagation efficiently, algorithms based on asymptotic analysis have been developed. One of the most famous examples is geometric optics. In this method, it is assumed that the solution has the form \begin{equation}\label{eq:GO} u(t,\bd{x})=A(t,\bd{x})e^{\imath S(t,\bd{x})/\varepsilon}. \end{equation} To the leading order, the phase function $S(t,\bd{x})$ satisfies the eikonal equation, \begin{equation}\label{eq:eikonal} \abs{\partial_tS}^2-c^2(\bd{x}) \abs{\nabla_{\bd{x}}S}^2=0, \end{equation} and the amplitude $A(t,\bd{x})$ satisfies the transport equation, \begin{equation*} \partial_tA-c^2(\bd{x})\frac{\nabla_{\bd{x}}S}{\partial_t S}\cdot\nabla_{\bd{x}}A+\frac{\bigl(\partial_t^2S-c^2(\bd{x})\Delta S\bigr)}{2\partial_t S}A=0. \end{equation*} The merit of geometric optics is that it only solves for the macroscopic quantities $S(t,\bd{x})$ and $A(t,\bd{x})$, which are $\varepsilon$-independent. Computational methods based on geometric optics are reviewed in \cites{EnRu:03, Ru:07}. However, since the eikonal equation \eqref{eq:eikonal} is of Hamilton-Jacobi type, its solution becomes singular after the formation of caustics. At caustics, the approximate solution of geometric optics is invalid since the amplitude $A(t,\bd{x})$ blows up. To overcome this problem, Popov introduced the Gaussian beam method in \cite{Po:82}.
The single beam solution of the Gaussian beam method has a form similar to geometric optics, \[ u(t,\bd{x})=A(t,\bd{y})e^{\imath \tilde{S}(t,\bd{x},\bd{y})/\varepsilon}.\] The difference lies in the fact that the Gaussian beam method uses a {\it complex} phase function, \begin{equation}\label{eq:GBphs} \tilde{S}(t,\bd{x},\bd{y})=S(t,\bd{y})+\bd{p}(t,\bd{y})\cdot(\bd{x}-\bd{y})+ \frac{1}{2}(\bd{x}-\bd{y})\cdot M(t,\bd{y})(\bd{x}-\bd{y}), \end{equation} where $S\in\mathbb{R},\;\bd{p}\in\mathbb{R}^d,\;M\in\mathbb{C}^{d\times d}$. The imaginary part of $M$ is chosen to be positive definite so that the solution decays exponentially away from $\bd{x}=\bd{y}$, where $\bd{y}$ is called the beam center. This makes the solution a Gaussian function, and hence the method was named the Gaussian beam method. If the initial wave is not in the form of a single beam, one can approximate it by using a number of Gaussian beams. The validity of this construction at caustics was analyzed by Ralston in \cite{Ra:82}. Recently, there have been a series of numerical studies, including both the Lagrangian type \cites{TaQiRa:07, Ta:08, MoRu:app, QiYi:app1, QiYi:app2} and the Eulerian type \cites{LeQiBu:07, LeQi:09, JiWuYa:08, JiWuYaHu:10, JiWuYa:10, JiWuYa:11, LiRa:09, LiRa:10}. The construction of the Gaussian beam approximation is based on truncating the Taylor expansion of $\tilde{S}$ around the beam center $\bd{y}$ at the quadratic term; hence it loses accuracy when the width of the beam becomes large, \textit{i.e.}, when the imaginary part of $M(t, \bd{y})$ in \eqref{eq:GBphs} becomes small, so that the Gaussian function is no longer localized. This happens, for example, when the solution of the wave equation spreads (the opposite situation to the formation of caustics). This is a severe problem in general, as shown by the examples in Section \ref{sec:numer}. One could overcome the problem of beam spreading by reinitializing the beams once in a while, see \cites{QiYi:app1, QiYi:app2}. This increases the computational complexity, especially when beams spread quickly. Therefore a method that works in both scenarios, spreading and caustics, is needed. The main idea of the method proposed in the current work is to use Gaussian functions with fixed widths, instead of ones that might spread over time, to approximate the wave solution. This is why this type of method is called the frozen Gaussian approximation (FGA). Despite its superficial similarity with the Gaussian beam method (GBM), it is different at a fundamental level. FGA is based on phase plane analysis, while GBM is based on the asymptotic solution to a wave equation with Gaussian initial data. In FGA, the solution to the wave equation is approximated by a superposition of Gaussian functions living in phase space, and each function is {\it not} necessarily an asymptotic solution, while GBM uses Gaussian functions (named beams) in physical space, with each individual beam being an asymptotic solution to the wave equation. The main advantage of FGA over GBM is that the problem of beam spreading no longer exists.\footnote{Divergence is still an issue for the Lagrangian approach; one needs to work in the Eulerian framework to completely solve the problem, which is considered in \cite{LuYang:MMS}.} Besides, we observe numerically that FGA has better accuracy than GBM when the same order of terms is kept in the asymptotic series. On the other hand, the solution given by FGA is asymptotically accurate around caustics, where geometric optics breaks down.
Our work is motivated by the chemistry literature on the propagation of the time dependent Schr\"odinger equation, where the spreading of solutions is a common phenomenon, for example in the dynamics of a free electron. In \cite{He:81}, Heller introduced frozen Gaussian wavepackets to deal with this issue, but the approach only worked for short time propagation of order $\mathcal{O}(\hbar)$, where $\hbar$ is the Planck constant. To make it valid for longer times of order $\mathcal{O}(1)$, Herman and Kluk proposed in \cite{HeKl:84} to change the weights of the Gaussian packets by adding the so-called Herman-Kluk prefactor. Integral representations and higher order approximations were developed by Kay in \cite{Ka:94} and \cite{Ka:06}. Recently, the semiclassical approximation underlying the method was analyzed rigorously by Swart and Rousse in \cite{SwRo:09} and also by Robert in \cite{Ro:09}. We generalize their ideas to the propagation of high frequency waves, aiming at developing an efficient computational method. We decompose waves into several branches of propagation, each of which is approximated using Gaussian functions on the phase plane. The centers of the Gaussians follow different Hamiltonian dynamics for the different branches. Their weight functions, which are analogous to the Herman-Kluk prefactor, satisfy new evolution equations derived from asymptotic analysis. The rest of the paper is organized as follows. In Section \ref{sec:formu}, we state the formulation and numerical algorithm of the frozen Gaussian approximation. In Section \ref{sec:asymder}, we provide asymptotic analysis to justify the formulation introduced in Section \ref{sec:formu}. Numerical examples are given in Section \ref{sec:numer} to verify the accuracy and to compare the frozen Gaussian approximation (FGA) with the Gaussian beam method (GBM). In Section \ref{sec:conclusion}, we discuss the efficiency of FGA in comparison with GBM and higher order GBM, with some comments on the phenomenon of error cancellation, and we give some concluding remarks at the end. \section{Formulation and algorithm}\label{sec:formu} In this section we present the basic formulation and the main algorithm of the frozen Gaussian approximation (FGA), and leave the derivation to the next section. \subsection{Formulation} FGA approximates the solution to the wave equation \eqref{eq:wave} by the integral representation, \begin{equation}\label{eq:ansatz} \begin{aligned} u^{\mathrm{FGA}}(t, \bd{x}) & = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_+(t, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_+(t, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_-(t, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_-(t, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{aligned} \end{equation} where $u_{\pm, 0}$ are determined by the initial value, \begin{equation}\label{eq:ini_2branch} u_{\pm,0}(\bd{x}) = A_{\pm}(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \end{equation} with \begin{equation*} A_{\pm}(\bd{x}) = \frac{1}{2} \biggl( A_0(\bd{x}) \pm \frac{\imath B_0(\bd{x})}{ c(\bd{x}) \abs{ \partial_{\bd{x}} S_0(\bd{x})}} \biggr). \end{equation*} The equation \eqref{eq:ansatz} implies that the solution consists of two branches (``$\pm$'').
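As a side remark, the branch decomposition \eqref{eq:ini_2branch} can be evaluated directly from the initial data; the following one-dimensional Python sketch (a minimal illustration with user-supplied callables, not part of the formulation itself) computes the two branches on a grid of $\bd{y}$:
\begin{verbatim}
import numpy as np

# Branch decomposition (eq:ini_2branch) of the WKB initial data in 1D.
# A0, B0, S0, dS0, c are user-supplied callables (dS0 is the derivative
# of S0); eps is the small parameter; y is a numpy array of grid points.
def initial_branches(y, A0, B0, S0, dS0, c, eps):
    correction = 1j * B0(y) / (c(y) * np.abs(dS0(y)))
    phase = np.exp(1j * S0(y) / eps)
    u_plus = 0.5 * (A0(y) + correction) * phase
    u_minus = 0.5 * (A0(y) - correction) * phase
    return u_plus, u_minus
\end{verbatim}
We now specify the remaining ingredients of \eqref{eq:ansatz}.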
In \eqref{eq:ansatz}, $\Phi_{\pm}$ are the phase functions given by \begin{multline}\label{eq:phi} \Phi_{\pm}(t, \bd{x}, \bd{y}, \bd{q}, \bd{p}) = \bd{P}_{\pm}(t, \bd{q}, \bd{p}) \cdot ( \bd{x} - \bd{Q}_{\pm}(t, \bd{q}, \bd{p})) - \bd{p} \cdot ( \bd{y} - \bd{q} ) \\ + \frac{\imath}{2} \abs{\bd{x} - \bd{Q}_{\pm}(t, \bd{q}, \bd{p})}^2 + \frac{\imath}{2} \abs{\bd{y} - \bd{q}}^2. \end{multline} Given $\bd{q}$ and $\bd{p}$ as parameters, the evolution of $\bd{Q}_{\pm}$ and $\bd{P}_{\pm}$ is given by the equations of motion corresponding to the Hamiltonian $H_{\pm} = \pm c(\bd{Q}_{\pm}) \abs{\bd{P}_{\pm}}$, \begin{equation}\label{eq:characline} \begin{cases} \displaystyle \frac{\,\mathrm{d} \bd{Q}_{\pm}}{\,\mathrm{d} t} = \partial_{\bd{P}_{\pm}} H_{\pm} = \pm c \frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}},\\ \displaystyle \frac{\,\mathrm{d} \bd{P}_{\pm}}{\,\mathrm{d} t} = - \partial_{\bd{Q}_{\pm}} H_{\pm} = \mp \partial_{\bd{Q}_{\pm}} c \abs{\bd{P}_{\pm}}, \end{cases} \end{equation} with the initial conditions $\bd{Q}_{\pm}(0, \bd{q}, \bd{p}) = \bd{q}$ and $\bd{P}_{\pm}(0, \bd{q}, \bd{p}) = \bd{p}$. The evolution equation of $a_{\pm}$ is given by \begin{equation}\label{eq:partialta} \begin{aligned} \frac{\,\mathrm{d} a_{\pm}}{\,\mathrm{d} t} &= \pm\frac{a_{\pm}}{2} \Bigl( \frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}}\cdot \partial_{\bd{Q}_{\pm}} c - \frac{(d-1)\imath}{\abs{\bd{P}_{\pm}}} c \Bigr) \\ &\pm \frac{a_{\pm}}{2} \tr\biggl( Z_{\pm}^{-1} \partial_{\bd{z}} \bd{Q}_{\pm} \Bigl( 2 \frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}} \otimes \partial_{\bd{Q}_{\pm}} c \\&\hspace{6em} - \frac{\imath c}{\abs{\bd{P}_{\pm}}} \Bigl( \frac{\bd{P}_{\pm}\otimes \bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}^2} - I \Bigr) - \imath \abs{\bd{P}_{\pm}}\partial_{\bd{Q}_{\pm}}^2 c \Bigr) \biggr) \end{aligned} \end{equation} with the initial condition \begin{equation*} a_{\pm}(0, \bd{q}, \bd{p}) = 2^{d/2}. \end{equation*} In \eqref{eq:partialta}, $\bd{P}_{\pm}$ and $\bd{Q}_{\pm}$ are evaluated at $(t, \bd{q}, \bd{p})$, $c$ and $\partial_{\bd{Q}_{\pm}} c$ are evaluated at $\bd{Q}_{\pm}$, $I$ is the identity matrix, and we have introduced the shorthand notations \begin{equation}\label{eq:op_zZ} \partial_{\bd{z}}=\partial_{\bd{q}}-\imath\partial_{\bd{p}}, \qquad Z_{\pm}=\partial_{\bd{z}}(\bd{Q}_{\pm}+\imath\bd{P}_{\pm}). \end{equation} The evolution of the weight $a_{\pm}$ is analogous to that of the Herman-Kluk prefactor \cite{HeKl:84}. \begin{remark} 1. The equation \eqref{eq:partialta} can be reformulated as \begin{equation}\label{eq:a_formu} \frac{\,\mathrm{d} a_{\pm}}{\,\mathrm{d} t}=\pm a_{\pm}\frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}}\cdot \partial_{\bd{Q}_{\pm}} c+\frac{a_{\pm}}{2} \tr\left(Z_{\pm}^{-1}\frac{\,\mathrm{d} Z_{\pm}}{\,\mathrm{d} t} \right). \end{equation} When $c$ is constant, \eqref{eq:a_formu} has the analytical solution $a_{\pm}=(\det Z_{\pm})^{1/2}$, with the branch of the square root determined continuously in time by the initial value. 2.
$\partial_{\bd{z}}\bd{Q}_{\pm}$ and $\partial_{\bd{z}}\bd{P}_{\pm}$ satisfy the following evolution equations: \begin{align}\label{eq:dzQ} &\frac{\,\mathrm{d} (\partial_{\bd{z}}\bd{Q}_{\pm})}{\,\mathrm{d} t}=\pm\partial_{\bd{z}}\bd{Q}_{\pm} \frac{\partial_{\bd{Q}_{\pm}}c\otimes\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}}\pm c\partial_{\bd{z}}\bd{P}_{\pm}\left(\frac{I}{\abs{\bd{P}_{\pm}}}- \frac{\bd{P}_{\pm}\otimes\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}^3} \right), \\ &\frac{\,\mathrm{d} (\partial_{\bd{z}}\bd{P}_{\pm})}{\,\mathrm{d} t}=\mp\partial_{\bd{z}}\bd{Q}_{\pm}\partial_{\bd{Q}_{\pm}}^2c\abs{\bd{P}_{\pm}} \mp\partial_{\bd{z}}\bd{P}_{\pm} \frac{\bd{P}_{\pm}\otimes\partial_{\bd{Q}_{\pm}}c}{\abs{\bd{P}_{\pm}}}.\label{eq:dzP} \end{align} One can solve \eqref{eq:dzQ}-\eqref{eq:dzP} to get $\partial_{\bd{z}}\bd{Q}_{\pm}$ and $\partial_{\bd{z}}\bd{P}_{\pm}$ in \eqref{eq:partialta}. This increases the computational cost, but avoids the errors of using divided differences to approximate the derivatives. \end{remark} Notice that \eqref{eq:ansatz} can be rewritten as \begin{equation}\label{eq:recons} \begin{aligned} u^{\mathrm{FGA}}(t, \bd{x}) & = \int_{\mathbb{R}^{2d}} \frac{a_+}{(2\pi \varepsilon)^{3d/2}} \psi_+ e^{\frac{\imath}{\varepsilon}\bd{P}_+\cdot(\bd{x} - \bd{Q}_+) - \frac{1}{2\varepsilon} \abs{\bd{x} - \bd{Q}_+}^2} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \int_{\mathbb{R}^{2d}} \frac{a_-}{(2\pi \varepsilon)^{3d/2}} \psi_- e^{\frac{\imath}{\varepsilon} \bd{P}_-\cdot(\bd{x} - \bd{Q}_-) - \frac{1}{2\varepsilon} \abs{\bd{x} - \bd{Q}_-}^2} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{aligned} \end{equation} where \begin{equation}\label{eq:psi} \psi_{\pm}(\bd{q}, \bd{p}) = \int_{\mathbb{R}^d} u_{\pm, 0}(\bd{y}) e^{- \frac{\imath}{\varepsilon} \bd{p}\cdot(\bd{y} - \bd{q}) - \frac{1}{2\varepsilon} \abs{\bd{y} - \bd{q}}^2} \,\mathrm{d}\bd{y}. \end{equation} Therefore, the method first decomposes the initial wave into a number of Gaussian functions in phase space, and then propagates the centers of these functions along the characteristic lines while keeping the widths of the Gaussians fixed. This vividly explains the name frozen Gaussian approximation. The formulation above gives the leading order frozen Gaussian approximation, with an error of $\mathcal{O}(\varepsilon)$. It is not hard to obtain higher order approximations by the asymptotics presented in Section \ref{sec:asymder}. We will focus mainly on the leading order approximation in this paper and leave the higher order corrections and a rigorous numerical analysis to future works. \subsection{Algorithm} We first give a description of the overall algorithm. To construct the frozen Gaussian approximation on a mesh of $\bd{x}$, one needs to compute the integral \eqref{eq:recons} numerically with a mesh of $(\bd{q},\bd{p})$. This in turn requires the numerical computation of \eqref{eq:psi} with a mesh of $\bd{y}$. Hence three different meshes are needed in the algorithm. Moreover, the stationary phase approximation implies that, for the WKB initial conditions \eqref{eq:WKBini} and small $\varepsilon$, $\psi_{\pm}$ in \eqref{eq:psi} is localized around the submanifold $\bd{p}=\nabla_{\bd{q}}S_0(\bd{q})$ on the phase plane. This means we only need to put the mesh grids of $\bd{p}$ around $\nabla_{\bd{q}}S_0(\bd{q})$ initially to get a good approximation of the initial value. A one-dimensional example is given to illustrate this localization property of $\psi_{\pm}$ in Figure \ref{fig:meshpq} (left).
The associated mesh grids are shown in Figure \ref{fig:meshpq} (right). \begin{figure}[htp] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{pqcurve.eps}} & \resizebox{2.3in}{!}{\includegraphics{pqmesh.eps}} \end{tabular} \caption{Left: an illustration of the localization of $\psi_+$ on the $(q,p)$ domain for $u_{+,0}(y)=\exp\left(\imath\frac{\sin(6y)}{12\varepsilon}\right)$, $\varepsilon=1/128$; the black solid curve is $p=\cos(6q)/2$. Right: the corresponding mesh grids of $(q,p)$.}\label{fig:meshpq} \end{figure} Next we describe in detail all the meshes used in the algorithm. \begin{enumerate} \item Discrete mesh of $(\bd{q},\bd{p})$ for initializing $\bd{Q},\bd{P}$. Denote $\bd{\delta q}=(\delta q_1,\cdots,\delta q_d)$ and $\bd{\delta p}=(\delta p_1,\cdots,\delta p_d)$ as the mesh size. Suppose $\bd{q}^0=(q^0_1,\cdots,q^0_d)$ is the starting point; then the mesh grids $\bd{q^k}$, $\bd{k}=(k_1,\cdots,k_d)$, are defined as \[\bd{q^k}=\bigl(q^0_1+(k_1-1)\delta q_1,\cdots, q^0_d+(k_d-1)\delta q_d\bigr),\] where $k_j=1,\cdots,N_q$ for each $j\in\{1,\cdots,d\}$. The mesh grids $\bd{p^{k,\ell}}$, $\bd{\ell}=(\ell_1,\cdots,\ell_d)$, are defined in association with the mesh grids $\bd{q^k}$, \[\bd{p^{k,\ell}}=\bigl(\partial_{q_1}S_0(\bd{q^k})+\ell_1\delta p_1,\cdots, \partial_{q_d}S_0(\bd{q^k})+\ell_d \delta p_d\bigr),\] where $\ell_j=-N_p,\cdots,N_p$ for each $j\in\{1,\cdots,d\}$. \item Discrete mesh of $\bd{y}$ for evaluating $\psi_{\pm}$ in \eqref{eq:psi}. $\bd{\delta y}=(\delta y_1,\cdots,\delta y_d)$ is the mesh size. Denote $\bd{y}^0=(y^0_1,\cdots,y^0_d)$ as the starting point. The mesh grids $\bd{y^m}$ are, $\bd{m}=(m_1,\cdots,m_d)$, \[\bd{y^m}=\bigl(y^0_1+(m_1-1)\delta y_1,\cdots, y^0_d+(m_d-1)\delta y_d\bigr),\] where $m_j=1,\cdots,N_y$ for each $j\in\{1,\cdots,d\}$. \item Discrete mesh of $\bd{x}$ for reconstructing the final solution. $\bd{\delta x}=(\delta x_1,\cdots,\delta x_d)$ is the mesh size. Denote $\bd{x}^0=(x^0_1,\cdots,x^0_d)$ as the starting point. The mesh grids $\bd{x^n}$ are, $\bd{n}=(n_1,\cdots,n_d)$, \[\bd{x^n}=\bigl(x^0_1+(n_1-1)\delta x_1,\cdots, x^0_d+(n_d-1)\delta x_d\bigr),\] where $n_j=1,\cdots,N_x$ for each $j\in\{1,\cdots,d\}$. \end{enumerate} With the preparation of the meshes, we introduce the algorithm as follows. \begin{itemize} \item[Step 1.] Decompose the initial conditions \eqref{eq:WKBini} into two branches of waves according to \eqref{eq:ini_2branch}. \item[Step 2.] Compute the weight function $\psi_{\pm}$ by \eqref{eq:psi} for $(\bd{Q},\bd{P})$ initialized at $(\bd{q^{k}},\bd{p^{k,\ell}})$, \begin{multline}\label{eq:discpsi} \psi_{\pm}(\bd{q^{k}},\bd{p^{k,\ell}})=\sum_{\bd{m}} e^{\frac{\imath}{\varepsilon}(- \bd{p^{k,\ell}} \cdot ( \bd{y^m} - \bd{q^{k}} ) +\frac{\imath}{2} \abs{\bd{y^{m}} - \bd{q^{k}}}^2)}\\\times u_{\pm,0}(\bd{y^m}) r_{\theta}(\abs{\bd{y^{m}} - \bd{q^{k}}})\delta y_1\cdots \delta y_d, \end{multline} where $r_{\theta}$ is a cutoff function such that $r_{\theta} = 1$ in the ball of radius $\theta>0$ centered at the origin and $r_{\theta} = 0$ outside the ball. \item[Step 3.] Solve \eqref{eq:characline}-\eqref{eq:partialta} with the initial conditions \begin{align*} & \bd{Q}_{\pm}(0,\bd{q^{k}},\bd{p^{k,\ell}}) = \bd{q^{k}},\qquad \bd{P}_{\pm}(0,\bd{q^{k}},\bd{p^{k,\ell}}) = \bd{p^{k,\ell}},\\ & a_{\pm}(0,\bd{q^{k}},\bd{p^{k,\ell}})=2^{d/2}, \end{align*} by a standard numerical ODE integrator, for example the fourth-order Runge-Kutta scheme.
Denote the numerical solutions as $(\bd{Q}_{\pm}^{\bd{k,\ell}},\bd{P}_{\pm}^{\bd{k,\ell}})$ and $a_{\pm}^{\bd{k,\ell}}$. \item[Step 4.] Reconstruct the solution by \eqref{eq:recons}, \begin{equation}\label{eq:discFGB} \begin{aligned} u^{\mathrm{FGA}}(t, \bd{x^{n}}) & = \sum_{\bd{k,\ell}} \biggl( \frac{a_+^{\bd{k,\ell}}r_{\theta}^+} {(2\pi \varepsilon)^{3d/2}} \psi_+(\bd{q^{k}}, \bd{p^{k,\ell}}) e^{\frac{\imath}{\varepsilon} \bd{P}^{\bd{k,\ell}}_+\cdot(\bd{x}^{\bd{n}} - \bd{Q}^{\bd{k,\ell}}_+) - \frac{1}{2\varepsilon} \abs{\bd{x}^{\bd{n}} - \bd{Q}^{\bd{k,\ell}}_+}^2} \\ &\qquad + \frac{a_-^{\bd{k,\ell}}r_{\theta}^-}{(2\pi \varepsilon)^{3d/2}} \psi_-(\bd{q^{k}}, \bd{p^{k,\ell}}) e^{\frac{\imath}{\varepsilon} \bd{P}^{\bd{k,\ell}}_-\cdot(\bd{x}^{\bd{n}} - \bd{Q}^{\bd{k,\ell}}_-) - \frac{1}{2\varepsilon} \abs{\bd{x}^{\bd{n}} - \bd{Q}^{\bd{k,\ell}}_-}^2} \biggr) \\ & \qquad \times \delta q_1\cdots \delta q_d \delta p_1\cdots \delta p_d, \end{aligned} \end{equation} where $r_{\theta}^\pm=r_{\theta}(\abs{\bd{x}^{\bd{n}} - \bd{Q}^{\bd{k,\ell}}_{\pm}})$. \end{itemize} \begin{remark} 1. In setting up the meshes, we assume that the initial condition \eqref{eq:WKBini} either has compact support or decays sufficiently fast to zero as $\bd{x}\rightarrow \infty$ so that we only need a finite number of mesh points in physical space. 2. The role of the truncation function $r_{\theta}$ is to save computational cost, since although a Gaussian function is not compactly supported, it decays quickly away from its center. In practice we take $\theta=\mathcal{O}(\sqrt{\varepsilon})$, the same order as the width of each Gaussian, when we evaluate \eqref{eq:discpsi} and \eqref{eq:discFGB} numerically. 3. There are two types of errors present in the method. The first type comes from the asymptotic approximation to the wave equation. This error can {\it not} be reduced unless one includes higher order corrections. The other type is the numerical error which comes from two sources: one is from the ODE numerical integrator; the other is from the discrete approximations of the integrals \eqref{eq:recons} and \eqref{eq:psi}. It {\it can} be reduced by either taking smaller mesh sizes and time steps or using higher order numerical methods. 4. Note that the assumption that the initial conditions are either compactly supported or decay quickly implies that the values on the boundary are zero (or close to zero). Then \eqref{eq:discpsi} and \eqref{eq:discFGB} are the trapezoidal rules to approximate \eqref{eq:psi} and \eqref{eq:recons}. Notice that, due to the Gaussian factor, the integrand functions in \eqref{eq:psi} and \eqref{eq:recons} are exponentially small unless $\bd{x-Q}$ and $\bd{y-q}$ are in the order of $\mathcal{O}({\varepsilon}^{1/2})$, which implies their derivatives with respect to $\bd{y},\;\bd{q},\;\bd{p}$ are of the order $\mathcal{O}(\varepsilon^{-1/2})$. This suggests $\bd{\delta y},\; \bd{\delta q},\;\bd{\delta p}$ should be taken of the size of $\mathcal{O}(\sqrt{\varepsilon})$. Hence $N_y$ and $N_q$ are of order $\mathcal{O}(\varepsilon^{-d/2})$. As illustrated in Figure \ref{fig:meshpq}, $N_p$ is usually taken as $\mathcal{O}\left(\frac{\sqrt{\varepsilon}}{\min_j \;\delta p_j}\right)$, which is of order $\mathcal{O}(1)$. $N_x$ is not constrained by $\varepsilon$, and is only determined by how well resolved one wants the final solution to be. 5. Steps $2$ and $4$ can be expedited by making use of the discrete fast Gaussian transform, as in \cites{QiYi:app1, QiYi:app2}.
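6. To make Steps 2--4 concrete, the following is a minimal one-dimensional Python sketch for the ``$+$'' branch (an illustration only: the parameter values, the hand-rolled RK4 stepping, and the omission of the cutoff $r_{\theta}$ are simplifying assumptions of this sketch, not part of the method). It uses the one-dimensional reductions of \eqref{eq:characline}, \eqref{eq:dzQ}--\eqref{eq:dzP} and \eqref{eq:a_formu}, in which the terms proportional to $I-\bd{P}\otimes\bd{P}/\abs{\bd{P}}^2$ vanish.
\begin{verbatim}
import numpy as np

# Minimal 1D FGA sketch, "+" branch; illustrative parameters (cf. Example 1)
eps = 1.0 / 64
c, dc, d2c = (lambda x: x**2), (lambda x: 2*x), (lambda x: 2.0 + 0*x)
A0 = lambda y: np.exp(-100*(y - 0.5)**2)              # WKB amplitude
S0, dS0 = (lambda y: y), (lambda y: np.ones_like(y))  # WKB phase, gradient
u0 = lambda y: A0(y)*np.exp(1j*S0(y)/eps)             # u_{+,0}

dq = dp = dy = 1.0/2**5                 # O(sqrt(eps)) mesh sizes
q = np.arange(0.0, 1.0, dq)
ell = np.arange(-8, 9)                  # p-mesh indices around dS0(q)
y = np.arange(-0.5, 1.5, dy)
Q0 = np.repeat(q, ell.size)
P0 = (dS0(q)[:, None] + dp*ell[None, :]).ravel()

# Step 2: weights psi(q,p), trapezoidal sum as in (discpsi), cutoff omitted
psi = np.array([np.sum(u0(y)*np.exp(-1j*p0*(y - q0)/eps
                - (y - q0)**2/(2*eps)))*dy for q0, p0 in zip(Q0, P0)])

# Step 3: RK4 for (Q, P, dzQ, dzP, a); assumes P stays away from 0 on [0,T]
def rhs(s):
    Q, P, dzQ, dzP, a = s
    Qr, sg, Pm = Q.real, np.sign(P.real), np.abs(P)
    ddzQ = dzQ*dc(Qr)*sg                         # (dzQ) in d=1
    ddzP = -dzQ*d2c(Qr)*Pm - dzP*dc(Qr)*sg       # (dzP) in d=1
    da = a*sg*dc(Qr) + 0.5*a*(ddzQ + 1j*ddzP)/(dzQ + 1j*dzP)  # (a_formu)
    return np.array([c(Qr)*sg + 0j, -dc(Qr)*Pm + 0j, ddzQ, ddzP, da])

s = np.array([Q0 + 0j, P0 + 0j, np.ones_like(Q0) + 0j,
              -1j*np.ones_like(Q0), np.sqrt(2.0)*np.ones_like(Q0) + 0j])
dt, T = 1.0/2**9, 0.5
for _ in range(round(T/dt)):
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
Qt, Pt, a = s[0].real, s[1].real, s[4]

# Step 4: reconstruction on an x-mesh as in (discFGB), d=1
x = np.linspace(0.0, 2.0, 512)
G = np.exp(1j*Pt[None, :]*(x[:, None] - Qt[None, :])/eps
           - (x[:, None] - Qt[None, :])**2/(2*eps))
uFGA = (G*(a*psi)[None, :]).sum(axis=1)*dq*dp/(2*np.pi*eps)**1.5
\end{verbatim}
Note that the initial data $\partial_{\bd{z}}\bd{Q}=1$, $\partial_{\bd{z}}\bd{P}=-\imath$ give $Z(0)=2$, consistent with $a(0)=2^{d/2}$ for $d=1$.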
\end{remark} \section{Asymptotic derivation}\label{sec:asymder} We now derive the formulation shown in Section \ref{sec:formu} using asymptotic analysis. We start with the following ansatz for the wave equation \eqref{eq:wave}, \begin{equation}\label{eq:ansatz1} \begin{aligned} u(t, \bd{x}) & = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_+(t, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_+(t, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_-(t, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_-(t, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{aligned} \end{equation} where $\Phi_{\pm}$ are given by \begin{multline}\label{eq:phi1} \Phi_{\pm}(t, \bd{x}, \bd{y}, \bd{q}, \bd{p}) = S_{\pm}(t, \bd{q}, \bd{p})+\bd{P}_{\pm}(t, \bd{q}, \bd{p}) \cdot ( \bd{x} - \bd{Q}_{\pm}(t, \bd{q}, \bd{p})) - \bd{p} \cdot ( \bd{y} - \bd{q} ) \\ + \frac{\imath}{2} \abs{\bd{x} - \bd{Q}_{\pm}(t, \bd{q}, \bd{p})}^2 + \frac{\imath}{2} \abs{\bd{y} - \bd{q}}^2. \end{multline} The initial conditions are taken as \begin{equation}\label{eq:ini1} \begin{aligned} & \bd{Q}_{\pm}(0,\bd{q},\bd{p}) = \bd{q},\qquad \bd{P}_{\pm}(0,\bd{q},\bd{p}) = \bd{p},\\ & S_{\pm}(0,\bd{q},\bd{p})=0,\qquad a_{\pm}(0,\bd{q},\bd{p})=2^{d/2}. \end{aligned} \end{equation} The subscript $\pm$ indicates the two branches that correspond to two different Hamiltonians, \begin{equation} H_+(\bd{Q_+}, \bd{P_+}) = c(\bd{Q_+}) \abs{\bd{P_+}}, \qquad H_-(\bd{Q_-}, \bd{P_-}) = -c(\bd{Q_-}) \abs{\bd{P_-}}. \end{equation} $\bd{P}_{\pm}$ and $\bd{Q}_{\pm}$ satisfy the equations of motion given by the Hamiltonians $H_{\pm}$, \begin{equation}\label{eq:characline1} \begin{cases} \displaystyle \partial_t \bd{Q}_{\pm} = \partial_{\bd{P}_{\pm}} H_{\pm} = \pm c \frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}},\\ \displaystyle \partial_t \bd{P}_{\pm} = - \partial_{\bd{Q}_{\pm}} H_{\pm} = \mp \partial_{\bd{Q}_{\pm}} c \abs{\bd{P}_{\pm}}. \end{cases} \end{equation} By plugging \eqref{eq:ansatz1} into \eqref{eq:wave}, the leading order terms show that the evolution of $S_{\pm}$ simply satisfies \begin{equation}\label{eq:Spm} {\partial_t S_{\pm}}=0. \end{equation} This implies $S_{\pm}(t, \bd{q}, \bd{p})=0$. Hence we omit the terms $S_{\pm}$ in Section \ref{sec:formu} and later calculations. Before proceeding further, let us state some lemmas that will be used. \begin{lemma}\label{lem:FBI} For $u \in L^2(\mathbb{R}^d)$, it holds that \begin{equation}\label{eq:FBI} u(\bd{x}) = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} 2^{d/2} e^{\frac{\imath}{\varepsilon} \Phi_{\pm}(0, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u(\bd{y}) \,\mathrm{d} \bd{y}\,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}. \end{equation} \end{lemma} \begin{proof} By the initial conditions \eqref{eq:ini1}, \begin{equation} \Phi_{\pm}(0, \bd{x}, \bd{y}, \bd{q}, \bd{p}) = \bd{p} \cdot ( \bd{x} - \bd{q} ) - \bd{p} \cdot ( \bd{y} - \bd{q} ) + \frac{\imath}{2} \abs{ \bd{x} - \bd{q} }^2 + \frac{\imath}{2} \abs{ \bd{y} - \bd{q} }^2. \end{equation} Therefore, \eqref{eq:FBI} is just the standard wave packet decomposition in disguise (see for example \cite{Fo:89}). \end{proof} The proof of the following important lemma follows that of Lemma $3$ in \cite{SwRo:09}.
\begin{lemma}\label{lem:veps1} For any vector $\bd{a}(\bd{y},\bd{q},\bd{p})$ and matrix $M(\bd{y},\bd{q},\bd{p})$ in Schwartz class viewed as functions of $(\bd{y},\bd{q},\bd{p})$, we have \begin{equation}\label{eq:con1} \bd{a}(\bd{y},\bd{q},\bd{p}) \cdot (\bd{x} - \bd{Q}) \sim - \varepsilon \partial_{z_k} ( a_j Z_{jk}^{-1} ), \end{equation} and \begin{equation}\label{eq:con2} (\bd{x} - \bd{Q})\cdot M(\bd{y},\bd{q},\bd{p}) (\bd{x} - \bd{Q}) \sim \varepsilon \partial_{z_l} Q_j M_{jk} Z_{kl}^{-1} + \varepsilon^2 \partial_{z_m} \bigl( \partial_{z_l} (M_{jk} Z_{kl}^{-1} ) Z_{jm}^{-1} \bigr), \end{equation} where Einstein's summation convention has been used. Moreover, for any multi-index $\alpha$ with $\abs{\alpha} \geq 3$, \begin{equation}\label{eq:con3} (\bd{x} - \bd{Q})^{\alpha} \sim \mathcal{O}(\varepsilon^{\abs{\alpha}-1}). \end{equation} Here we use the notation $ f \sim g $ to mean that \begin{equation} \int_{\mathbb{R}^{3d}} f e^{\frac{\imath}{\varepsilon} \Phi_{\pm}} \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} = \int_{\mathbb{R}^{3d}} g e^{\frac{\imath}{\varepsilon} \Phi_{\pm}} \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}. \end{equation} \end{lemma} \begin{proof} Since the proof is exactly the same for the cases of $\Phi_+$ and $\Phi_-$, we omit the subscript $\pm$ for simplicity. As $\bd{a}$ and $M$ are in Schwartz class, all the manipulations below are justified. Observe that at $t=0$, \begin{equation*} -(\partial_{\bd{q}}\bd{Q}) \bd{P}+\bd{p}=0, \qquad (\partial_{\bd{p}}\bd{Q}) \bd{P}=0. \end{equation*} Using \eqref{eq:characline1}, we have \begin{align*} \partial_t\bigl(-(\partial_{\bd{q}}\bd{Q}) \bd{P}+\bd{p}\bigr)&=-\partial_{\bd{q}}\left(\partial_t \bd{Q} \right)\bd{P}-\partial_{\bd{q}}\bd{Q} \partial_t \bd{P}\\ &=-\partial_{\bd{q}}\left(c\frac{\bd{P}}{\abs{\bd{P}}} \right)\bd{P}+\partial_{\bd{q}}\bd{Q}\partial_{\bd{Q}}c\abs{\bd{P}}\\ &=0. \end{align*} Analogously we have $\displaystyle \partial_t\bigl((\partial_{\bd{p}}\bd{Q}) \bd{P}\bigr)=0$. Therefore for all $t>0$, \[-(\partial_{\bd{q}}\bd{Q}) \bd{P}+\bd{p}=0,\qquad (\partial_{\bd{p}}\bd{Q}) \bd{P}=0.\] Then straightforward calculations yield \begin{align*} & \partial_{\bd{q}}\Phi=(\partial_{\bd{q}}\bd{P} -\imath\partial_{\bd{q}}\bd{Q})(\bd{x}-\bd{Q})-\imath (\bd{y}-\bd{q}), \\ & \partial_{\bd{p}}\Phi=(\partial_{\bd{p}}\bd{P} -\imath\partial_{\bd{p}}\bd{Q})(\bd{x}-\bd{Q})- (\bd{y}-\bd{q}), \end{align*} which implies that \begin{equation} \imath\partial_{\bd{z}}\Phi=Z(\bd{x}-\bd{Q}), \end{equation} where $\partial_{\bd{z}}$ and $Z$ are defined in \eqref{eq:op_zZ}. Note that $Z$ can be rewritten as \begin{equation*} Z = \partial_{\bd{z}}( \bd{Q} + \imath \bd{P} ) = \begin{pmatrix} \imath I & I \end{pmatrix} \begin{pmatrix} \partial_{\bd{q}} \bd{Q} & \partial_{\bd{q}} \bd{P} \\ \partial_{\bd{p}} \bd{Q} & \partial_{\bd{p}} \bd{P} \end{pmatrix} \begin{pmatrix} - \imath I \\ I \end{pmatrix}, \end{equation*} where $I$ stands for the $d \times d$ identity matrix.
Therefore, define \begin{equation*} F = \begin{pmatrix} \partial_{\bd{q}} \bd{Q} & \partial_{\bd{q}} \bd{P} \\ \partial_{\bd{p}} \bd{Q} & \partial_{\bd{p}} \bd{P} \end{pmatrix}, \end{equation*} then \begin{equation*} \begin{aligned} Z Z^{\ast} & = \begin{pmatrix} \imath I & I \end{pmatrix} F \begin{pmatrix} I & -\imath I \\ \imath I & I \end{pmatrix} F^{\mathrm{T}} \begin{pmatrix} - \imath I \\ I \end{pmatrix} \\ & = \begin{pmatrix} \imath I & I \end{pmatrix} F F^{\mathrm{T}} \begin{pmatrix} - \imath I \\ I \end{pmatrix} + \begin{pmatrix} \imath I & I \end{pmatrix} F \begin{pmatrix} 0 & -\imath I \\ \imath I & 0 \end{pmatrix} F^{\mathrm{T}} \begin{pmatrix} - \imath I \\ I \end{pmatrix} \\ & = \begin{pmatrix} \imath I & I \end{pmatrix} F F^{\mathrm{T}} \begin{pmatrix} - \imath I \\ I \end{pmatrix} + 2I. \end{aligned} \end{equation*} In the last equality, we have used the fact that \begin{equation*} F \begin{pmatrix} 0 & -\imath I \\ \imath I & 0 \end{pmatrix} F^{\mathrm{T}} = \begin{pmatrix} 0 & -\imath I \\ \imath I & 0 \end{pmatrix}, \end{equation*} due to the Hamiltonian flow structure. Therefore $ZZ^{\ast}$ is positive definite for all $t$, which implies $Z$ is invertible and \begin{equation}\label{eq:dzPhi} (\bd{x} - \bd{Q}) = \imath Z^{-1} \partial_{\bd{z}} \Phi. \end{equation} Using \eqref{eq:dzPhi}, one has \begin{align*} \int_{\mathbb{R}^{3d}} \bd{a} \cdot (\bd{x} - \bd{Q}) e^{\frac{\imath}{\varepsilon} \Phi} \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} &=\varepsilon\int_{\mathbb{R}^{3d}} {a}_j Z^{-1}_{jk} \left(\frac{\imath}{\varepsilon}\partial_{z_k}\Phi\right) e^{\frac{\imath}{\varepsilon} \Phi} \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ &=-\varepsilon \int_{\mathbb{R}^{3d}} \partial_{z_k}\big({a}_j Z^{-1}_{jk} \big) e^{\frac{\imath}{\varepsilon} \Phi} \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{align*} where the last equality is obtained from integration by parts. This proves \eqref{eq:con1}. Making use of \eqref{eq:con1} twice produces \eqref{eq:con2} \begin{align*} (\bd{x} - \bd{Q})\cdot M (\bd{x} - \bd{Q}) & = (x-Q)_jM_{jk}(x-Q)_k \\ & \sim -\varepsilon \partial_{z_l}\bigl((x-Q)_jM_{jk}Z^{-1}_{kl} \bigr) \\ & = \varepsilon \partial_{z_l} Q_j M_{jk} Z_{kl}^{-1} -\varepsilon(x-Q)_j\partial_{z_l}(M_{jk}Z^{-1}_{kl}) \\ & \sim \varepsilon \partial_{z_l} Q_j M_{jk} Z_{kl}^{-1} + \varepsilon^2 \partial_{z_m} \bigl( \partial_{z_l} (M_{jk} Z_{kl}^{-1} ) Z_{jm}^{-1} \bigr). \end{align*} By induction it is easy to see that \eqref{eq:con3} is true. \end{proof} \subsection{Initial value decomposition} By \eqref{eq:phi1} and \eqref{eq:characline1} we obtain that \begin{equation}\label{eq:partialtPhi} \begin{aligned} \partial_t \Phi_{\pm} & = - \bd{P}_{\pm} \cdot \partial_t \bd{Q}_{\pm} + (\partial_t \bd{P}_{\pm} - \imath \partial_t \bd{Q}_{\pm}) \cdot (\bd{x} - \bd{Q}_{\pm}) \\ & = \mp c \abs{\bd{P}_{\pm}} \mp (\bd{x} - \bd{Q}_{\pm}) \cdot \Bigl( \abs{\bd{P}_{\pm}} \partial_{\bd{Q}_{\pm}} c + \imath \frac{\bd{P}_{\pm}}{\abs{\bd{P}_{\pm}}} c \Bigr), \end{aligned} \end{equation} and in particular for $t = 0$, \begin{equation} \partial_t \Phi_{\pm}(0, \bd{x}, \bd{y}, \bd{q}, \bd{p}) = \mp c \abs{\bd{p}} \mp (\bd{x} - \bd{q}) \cdot \Bigl( \abs{\bd{p}} \partial_{\bd{q}} c + \imath \frac{\bd{p}}{\abs{\bd{p}}} c \Bigr). 
\end{equation} The ansatz \eqref{eq:ansatz1} shows that \begin{equation}\label{eq:initialu} \begin{aligned} u(0, \bd{x}) & = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_+(0, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_+(0, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} a_-(0, \bd{q}, \bd{p}) e^{\frac{\imath}{\varepsilon} \Phi_-(0, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{aligned} \end{equation} and \begin{equation}\label{eq:initialv} \begin{aligned} \partial_t u(0, \bd{x}) & = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \bigl(\partial_t a_+ + \frac{\imath a_+}{\varepsilon} \partial_t \Phi_+\bigr) e^{\frac{\imath}{\varepsilon} \Phi_+(0, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \bigl(\partial_t a_- + \frac{\imath a_-}{\varepsilon} \partial_t \Phi_-\bigr) e^{\frac{\imath}{\varepsilon} \Phi_-(0, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}. \end{aligned} \end{equation} We take \begin{align} & a_{\pm}(0, \bd{q}, \bd{p}) = 2^{d/2}, \label{eq:a0}\\ & u_{+,0}(\bd{x}) = A_+(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \\ & u_{-,0}(\bd{x}) = A_-(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \end{align} with \begin{equation}\label{eq:Apm} A_{\pm}(\bd{x}) = \frac{1}{2} \biggl( A_0(\bd{x}) \pm \frac{\imath B_0(\bd{x})}{ c(\bd{x}) \abs{ \partial_{\bd{x}} S_0(\bd{x})}} \biggr). \end{equation} We next show that this will approximate the initial condition to the leading order in $\varepsilon$. Substituting \eqref{eq:a0}-\eqref{eq:Apm} into \eqref{eq:initialu} and using Lemma~\ref{lem:FBI}, we easily confirm that \begin{equation*} u(0, \bd{x}) = u_{+,0}(\bd{x}) + u_{-,0}(\bd{x}) = A_0(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}. \end{equation*} For the initial velocity, we substitute \eqref{eq:a0}-\eqref{eq:Apm} into \eqref{eq:initialv} and keep only the leading order terms in $\varepsilon$. According to Lemma~\ref{lem:veps1}, only the term $\mp c\abs{\bd{p}}$ in $\partial_t \Phi_{\pm}$ will contribute to the leading order, since the other terms that contain $(\bd{x} - \bd{q})$ are $\mathcal{O}(\varepsilon)$. Hence, \begin{equation*} \begin{aligned} \partial_t u(0, \bd{x}) = & - \frac{2^{d/2}}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \frac{\imath}{\varepsilon} c(\bd{q})\abs{\bd{p}} e^{\frac{\imath}{\varepsilon} \Phi_+(0, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ & + \frac{2^{d/2}}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \frac{\imath}{\varepsilon} c(\bd{q}) \abs{\bd{p}} e^{\frac{\imath}{\varepsilon} \Phi_-(0, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} + \mathcal{O}(1).
\end{aligned} \end{equation*} Consider the integral \begin{equation*} \int_{\mathbb{R}^d} c(\bd{q}) \abs{\bd{p}} e^{- \frac{\imath}{\varepsilon} \bd{p} \cdot (\bd{y} - \bd{q}) - \frac{1}{2\varepsilon} \abs{\bd{y} - \bd{q}}^2} A_{\pm}(\bd{y}) e^{\frac{\imath}{\varepsilon} S_0(\bd{y})} \,\mathrm{d} \bd{y} = \int_{\mathbb{R}^d} c(\bd{q}) \abs{\bd{p}} A_{\pm}(\bd{y}) e^{\frac{\imath}{\varepsilon} \Theta(\bd{y}, \bd{q}, \bd{p})} \,\mathrm{d} \bd{y}. \end{equation*} The phase function $\Theta$ is given by \begin{equation*} \Theta(\bd{y}, \bd{q}, \bd{p}) = - \bd{p} \cdot (\bd{y} - \bd{q}) + \frac{\imath}{2} \abs{\bd{y} - \bd{q}}^2 + S_0(\bd{y}). \end{equation*} Clearly, $\Im \;\Theta \geq 0$ and $\Im \;\Theta = 0$ if and only if $ \bd{y} = \bd{q}$. The derivatives of $\Theta$ with respect to $\bd{y}$ are \begin{align*} & \partial_{\bd{y}} \Theta = - \bd{p} + \partial_{\bd{y}} S_0(\bd{y}) + \imath (\bd{y} - \bd{q}), \\ & \partial_{\bd{y}}^2 \Theta = \partial_{\bd{y}}^2 S_0(\bd{y}) + \imath I. \end{align*} Hence, the first derivative vanishes only when $\bd{y} = \bd{q}$ and $\bd{p} = \partial_{\bd{y}} S_0(\bd{y})$, and $\det \partial_{\bd{y}}^2 \Theta \not = 0$. Therefore, we can apply the stationary phase approximation with complex phase (see for example \cite{Ho:83}) to conclude, for $(\bd{q}, \bd{p}) \in \mathbb{R}^{2d}$, \begin{multline*} \int_{\mathbb{R}^d} c(\bd{q}) \abs{\bd{p}} e^{- \frac{\imath}{\varepsilon} \bd{p} \cdot (\bd{y} - \bd{q}) - \frac{1}{2\varepsilon} \abs{\bd{y} - \bd{q}}^2} A_{\pm}(\bd{y}) e^{\frac{\imath}{\varepsilon} S_0(\bd{y})} \,\mathrm{d} \bd{y} \\ = \int_{\mathbb{R}^d} c(\bd{y}) \abs{\partial_{\bd{y}} S_0(\bd{y})} e^{- \frac{\imath}{\varepsilon} \bd{p} \cdot (\bd{y} - \bd{q}) - \frac{1}{2\varepsilon} \abs{\bd{y} - \bd{q}}^2} A_{\pm}(\bd{y}) e^{\frac{\imath}{\varepsilon} S_0(\bd{y})} \,\mathrm{d} \bd{y} + \mathcal{O}(\varepsilon). \end{multline*} Therefore, \begin{multline}\label{eq:dtu0} \partial_t u(0, \bd{x}) = - \frac{2^{d/2}}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \frac{\imath}{\varepsilon} c(\bd{y}) \abs{\partial_{\bd{y}} S_0(\bd{y})} e^{\frac{\imath}{\varepsilon} \Phi_+(0, \bd{x}, \bd{y}, \bd{q}, \bd{p})} u_{+,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} \\ + \frac{2^{d/2}}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \frac{\imath}{\varepsilon} c(\bd{y}) \abs{\partial_{\bd{y}} S_0(\bd{y})} e^{\frac{\imath}{\varepsilon} \Phi_-(0, \bd{x}, \bd{y}, \bd{q} , \bd{p})} u_{-,0}(\bd{y}) \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q} + \mathcal{O}(1). \end{multline} Substituting \eqref{eq:Apm} into \eqref{eq:dtu0} and using Lemma~\ref{lem:FBI} then gives \begin{equation*} \partial_t u(0, \bd{x}) = \frac{1}{\varepsilon} B_0(\bd{x}) e^{\frac{\imath}{\varepsilon} S_0(\bd{x})}, \end{equation*} which agrees with \eqref{eq:WKBini}. \subsection{Derivation of the evolution equation of $a_{\pm}$} In order to derive the evolution equation for the weight function $a$, we carry out the asymptotic analysis of the wave equation \eqref{eq:wave} using the ansatz \eqref{eq:ansatz1} in this subsection. As the equation \eqref{eq:wave} is linear, we can deal with the two branches separately. In the following, we only deal with the ``$+$'' branch that corresponds to $H_+$, and the other is completely analogous. For simplicity, we drop the subscript ``$+$'' in the notations.
Substituting \eqref{eq:ansatz1} into the equation \eqref{eq:wave} (keeping only the ``$+$'' branch) gives \begin{equation*} \partial_t^2 u = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \Bigl( \partial_t^2 a + 2 \frac{\imath}{\varepsilon} \partial_t a \partial_t \Phi + \frac{\imath}{\varepsilon} a \partial_t^2 \Phi - \frac{1}{\varepsilon^2} a(\partial_t\Phi)^2 \Bigr) e^{\imath\Phi/\varepsilon} u_0 \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}, \end{equation*} and \begin{equation*} \Delta u = \frac{1}{(2\pi \varepsilon)^{3d/2}} \int_{\mathbb{R}^{3d}} \Bigl( \frac{\imath}{\varepsilon} \Delta \Phi - \frac{1}{\varepsilon^2} (\partial_{\bd{x}}\Phi \cdot \partial_{\bd{x}} \Phi ) \Bigr) a e^{\imath\Phi/\varepsilon} u_0 \,\mathrm{d} \bd{y} \,\mathrm{d} \bd{p} \,\mathrm{d} \bd{q}. \end{equation*} Squaring both sides of \eqref{eq:partialtPhi} yields \begin{multline} (\partial_t \Phi)^2 = c^2 \abs{\bd{P}}^2 + \Bigl( (\bd{x} - \bd{Q}) \cdot \Bigl( \abs{\bd{P}} \partial_{\bd{Q}} c + \imath c \frac{\bd{P}}{\abs{\bd{P}}} \Bigr)\Bigr)^2 \\ + 2 c \abs{\bd{P}} (\bd{x} - \bd{Q}) \cdot \Bigl( \abs{\bd{P}} \partial_{\bd{Q}} c + \imath c \frac{\bd{P}}{\abs{\bd{P}}} \Bigr). \end{multline} Differentiating \eqref{eq:partialtPhi} with respect to $t$, one has \begin{equation} \begin{aligned} \partial_t^2 \Phi & = - \partial_t(\abs{\bd{P}} c) + \partial_t \bd{Q} \cdot \Bigl( \abs{\bd{P}}\partial_{\bd{Q}} c + \imath \frac{\bd{P}}{\abs{\bd{P}}} c \Bigr) \\ & - ( \bd{x} - \bd{Q} ) \cdot \Bigl( \partial_{\bd{Q}} c \frac{\bd{P}\cdot \partial_t \bd{P}}{\abs{\bd{P}}} + \partial_{\bd{Q}}^2 c \cdot \partial_t \bd{Q} \abs{\bd{P}} \\ & \hspace{6em} + \imath c \frac{\partial_t \bd{P}}{\abs{\bd{P}}} - \imath c \bd{P} \frac{\bd{P} \cdot \partial_t \bd{P}}{\abs{\bd{P}}^3} + \imath \frac{\bd{P}}{\abs{\bd{P}}} \partial_{\bd{Q}} c \cdot \partial_t \bd{Q} \Bigr). \end{aligned} \end{equation} We simplify the last equation using \eqref{eq:characline}, \begin{equation} \begin{aligned} \partial_t^2 \Phi & = c \bd{P} \cdot \partial_{\bd{Q}} c + \imath c^2 \\ & - (\bd{x} - \bd{Q}) \cdot \Bigl(- \partial_{\bd{Q}} c \bd{P} \cdot \partial_{\bd{Q}} c + c \partial_{\bd{Q}}^2 c \cdot \bd{P} \\ & \hspace{6em} - \imath c \partial_{\bd{Q}} c + 2\imath c \bd{P} \frac{\bd{P} \cdot \partial_{\bd{Q}} c} {\abs{\bd{P}}^2}\Bigr). \end{aligned} \end{equation} Taking derivatives with respect to $\bd{x}$ produces \begin{equation} \partial_{\bd{x}} \Phi = \bd{P} + \imath (\bd{x} - \bd{Q}), \end{equation} \begin{equation} \partial_{\bd{x}} \Phi \cdot \partial_{\bd{x}} \Phi = \abs{\bd{P}}^2 + 2 \imath \bd{P} \cdot (\bd{x} - \bd{Q}) - \abs{ \bd{x} - \bd{Q} }^2, \end{equation} and \begin{equation} \Delta \Phi = d \imath. \end{equation} We next expand $c(\bd{x})$ around the point $\bd{Q}$, \begin{equation} c(\bd{x}) = c + \partial_{\bd{Q}} c \cdot ( \bd{x} - \bd{Q} ) + \frac{1}{2} (\bd{x} - \bd{Q}) \cdot \partial_{\bd{Q}}^2 c (\bd{x} - \bd{Q}) + \mathcal{O}(\abs{\bd{x} - \bd{Q}}^3), \end{equation} and \begin{multline} c^2(\bd{x}) = c^2 + 2 c \partial_{\bd{Q}} c \cdot ( \bd{x} - \bd{Q} ) + (\partial_{\bd{Q}} c \cdot (\bd{x} - \bd{Q}))^2 \\ + c (\bd{x} - \bd{Q}) \cdot \partial_{\bd{Q}}^2 c (\bd{x} - \bd{Q}) + \mathcal{O}(\abs{\bd{x} - \bd{Q}}^3). \end{multline} The terms $c$, $\partial_{\bd{Q}} c$ and $\partial_{\bd{Q}}^2 c$ on the right hand sides are all evaluated at $\bd{Q}$.
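It is worth noting in advance that, when these expansions are substituted into \eqref{eq:wave}, the potentially dangerous $\mathcal{O}(\varepsilon^{-2})$ terms that are linear in $\bd{x}-\bd{Q}$ cancel exactly (dropping the common factor $au$),
\begin{equation*}
- \frac{2c}{\varepsilon^2}\,(\bd{x}-\bd{Q})\cdot\bigl(\abs{\bd{P}}^2\partial_{\bd{Q}}c + \imath c \bd{P}\bigr) + \frac{2c\abs{\bd{P}}^2}{\varepsilon^2}\,\partial_{\bd{Q}}c\cdot(\bd{x}-\bd{Q}) + \frac{2\imath c^2}{\varepsilon^2}\,\bd{P}\cdot(\bd{x}-\bd{Q}) = 0,
\end{equation*}
so that, by Lemma \ref{lem:veps1}, only a quadratic form in $\bd{x}-\bd{Q}$ survives the reorganization carried out below.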
Substituting all the above into the wave equation \eqref{eq:wave} and keeping only the leading order terms give \begin{equation*} \begin{aligned} & 2 \frac{\imath}{\varepsilon} \partial_t a (- c \abs{\bd{P}} ) u + \frac{\imath}{\varepsilon} a ( c \bd{P}\cdot \partial_{\bd{Q}} c + \imath c^2 ) u \\ & - \frac{1}{\varepsilon^2} a \Bigl( 2 c (\bd{x} - \bd{Q}) \cdot (\abs{\bd{P}}^2 \partial_{\bd{Q}} c + \imath c \bd{P}) + \bigl( (\bd{x} - \bd{Q}) \cdot (\abs{\bd{P}} \partial_{\bd{Q}}c + \imath c \bd{P} / \abs{\bd{P}}) \bigr)^2 \Bigr) u \\ & - c^2 \Bigl( - \frac{1}{\varepsilon} ad - \frac{2\imath}{\varepsilon^2} a \bd{P}\cdot (\bd{x} - \bd{Q}) + \frac{1}{\varepsilon^2} a \abs{\bd{x} - \bd{Q}}^2\Bigr) u \\ & + \frac{2}{\varepsilon^2} a c \partial_{\bd{Q}} c \cdot (\bd{x} - \bd{Q}) (\abs{\bd{P}}^2 + 2\imath \bd{P} \cdot ( \bd{x} - \bd{Q} )) u \\ & + \frac{1}{\varepsilon^2} a \abs{\bd{P}}^2 \bigl( (\partial_{\bd{Q}}c \cdot (\bd{x} - \bd{Q}))^2 u + c (\bd{x} - \bd{Q}) \cdot \partial_{\bd{Q}}^2 c (\bd{x} - \bd{Q}) \bigr) u \sim \mathcal{O}(1). \end{aligned} \end{equation*} After reorganizing the terms, we get \begin{equation} 2 \frac{\imath}{\varepsilon} c \abs{\bd{P}} \partial_t a u \sim \frac{\imath}{\varepsilon} a ( c\bd{P}\cdot \partial_{\bd{Q}} c - (d - 1)c^2 \imath) u - \frac{1}{\varepsilon^2} a (\bd{x} - \bd{Q}) \cdot M (\bd{x} - \bd{Q}) u, \end{equation} where \begin{multline} M = (\abs{\bd{P}} \partial_{\bd{Q}}c - \imath c \bd{P}/ \abs{\bd{P}}) \otimes (\abs{\bd{P}} \partial_{\bd{Q}}c - \imath c \bd{P}/ \abs{\bd{P}}) + c^2 I \\ - \abs{\bd{P}}^2 \partial_{\bd{Q}} c \otimes \partial_{\bd{Q}} c - \abs{\bd{P}}^2 c \partial_{\bd{Q}}^2 c. \end{multline} Lemma \ref{lem:veps1} shows that \begin{equation}\label{eq:quadform} a (\bd{x} - \bd{Q}) \cdot M (\bd{x} - \bd{Q}) u \sim \varepsilon a \tr(Z^{-1} \partial_{\bd{z}} \bd{Q} M) u + \mathcal{O}(\varepsilon^2). \end{equation} Therefore, to the leading order, we obtain the evolution equation of $a$, \begin{multline}\label{eq:aeqn} \partial_t a = \frac{a}{2} \Bigl( \frac{\bd{P}}{\abs{\bd{P}}}\cdot \partial_{\bd{Q}} c - \frac{(d-1)\imath}{\abs{\bd{P}}} c \Bigr) \\ + \frac{a}{2} \tr\biggl( Z^{-1} \partial_{\bd{z}} \bd{Q} \Bigl( 2 \frac{\bd{P}}{\abs{\bd{P}}} \otimes \partial_{\bd{Q}} c - \frac{\imath c}{\abs{\bd{P}}} \Bigl( \frac{\bd{P}\otimes \bd{P}}{\abs{\bd{P}}^2} - I \Bigr) - \imath \abs{\bd{P}}\partial_{\bd{Q}}^2 c \Bigr) \biggr). \end{multline} Notice that \begin{align*} &\frac{\,\mathrm{d} Z}{\,\mathrm{d} t}=\partial_{\bd{z}}\left( \frac{\,\mathrm{d} \bd{Q}}{\,\mathrm{d} t}+\imath\frac{\,\mathrm{d} \bd{P}}{\,\mathrm{d} t} \right)=\partial_{\bd{z}}\left(c\frac{\bd{P}}{\abs{\bd{P}}}-\imath \partial_{\bd{Q}}c\abs{\bd{P}}\right) \\ &\qquad =\partial_{\bd{z}}\bd{Q}\frac{ \partial_{\bd{Q}}c\otimes \bd{P}}{\abs{\bd{P}}}+c\partial_{\bd{z}}\bd{P} \Bigl( \frac{I}{\abs{\bd{P}}}-\frac{\bd{P}\otimes \bd{P}}{\abs{\bd{P}}^3} \Bigr)-\imath\partial_{\bd{z}}\bd{Q}\partial^2_{\bd{Q}}c\abs{\bd{P}}- \imath\partial_{\bd{z}}\bd{P}\frac{\bd{P}\otimes \partial_{\bd{Q}}c}{\abs{\bd{P}}},\end{align*}and \begin{equation*}-\frac{(d-1)\imath}{\abs{\bd{P}}}c=\tr\bigg(Z^{-1}(\partial_{\bd{z}}\bd{Q} +\imath\partial_{\bd{z}}\bd{P})\frac{\imath c}{\abs{\bd{P}}}\Big(\frac{\bd{P}\otimes\bd{P}}{\abs{\bd{P}}^2}-I\Big) \bigg). 
\end{equation*} Since $M$ enters \eqref{eq:quadform} only through the quadratic form $(\bd{x}-\bd{Q})\cdot M (\bd{x}-\bd{Q})$, it can be replaced there by its transpose, which gives \[\tr\bigg(Z^{-1}\partial_{\bd{z}}\bd{Q}\frac{\bd{P}}{\abs{\bd{P}}} \otimes\partial_{\bd{Q}}c\bigg)=\tr\bigg(Z^{-1}\partial_{\bd{z}}\bd{Q} \frac{\partial_{\bd{Q}}c}{\abs{\bd{P}}} \otimes\bd{P}\bigg).\] Hence \eqref{eq:aeqn} can be reformulated as \begin{equation*} \frac{\,\mathrm{d} a}{\,\mathrm{d} t}=a\frac{\bd{P}}{\abs{\bd{P}}}\cdot \partial_{\bd{Q}} c+\frac{a}{2}\tr\left(Z^{-1}\frac{\,\mathrm{d} Z}{\,\mathrm{d} t} \right). \end{equation*} This completes the asymptotic derivation. We remark that in the case of the time-dependent Schr\"odinger equation, the asymptotics have been made rigorous in \cites{SwRo:09, Ro:09}. \section{Numerical examples}\label{sec:numer} In this section, we give both one and two dimensional numerical examples to justify the accuracy of the frozen Gaussian approximation (FGA). Without loss of generality, we only consider the wave propagation determined by the ``$+$'' branch of \eqref{eq:ansatz}, which implies that $B_0(\bd{x})=-\imath c(\bd{x})\abs{\nabla_{\bd{x}}S_0(\bd{x})}A_0(\bd{x})$ in \eqref{eq:WKBini}. \subsection{One dimension} Using one-dimensional examples in this section, we compare FGA with the Gaussian beam method (GBM) in both accuracy and performance when beams spread in GBM. We denote the solution of GBM as $u^{\mathrm{GBM}}$, and summarize its discrete numerical formulation (only the ``$+$'' branch) as follows for readers' convenience (\cites{Ra:82,Ta:08,LiRa:09}), \begin{multline*} u^{\mathrm{GBM}}(t,x)=\sum_{j=1}^{N_{y_0}}\left(\frac{1}{2\pi\varepsilon}\right)^\frac{1}{2} r_{\theta}(\abs{x-y_j})A(t,y_j)\\ \times \exp\left(\frac{\imath}{\varepsilon} (S(t,y_j)+\xi(t,y_j)(x-y_j)+M(t,y_j)(x-y_j)^2/2) \right)\delta y_0, \end{multline*} and $y_j,\;\xi,\;S,\;M,\;A$ satisfy \begin{align*} & \frac{\,\mathrm{d} y_j}{\,\mathrm{d} t}=c(y_j)\frac{\xi}{\abs{\xi}},\qquad y_j(0)=y_0^j, \\ & \frac{\,\mathrm{d} \xi}{\,\mathrm{d} t}=-\partial_{y_j} c(y_j)\abs{\xi},\qquad \xi(0)=\partial_{y_j}S_0(y_j),\\ & \frac{\,\mathrm{d} S}{\,\mathrm{d} t}=0,\qquad S(0)=S_0(y_j), \\ & \frac{\,\mathrm{d} M}{\,\mathrm{d} t}=-2\partial_{y_j}c(y_j)\frac{\xi}{\abs{\xi}}M-\partial^2_{y_j}c(y_j) \abs{\xi},\qquad M(0)=\partial_{y_j}^2S_0(y_j)+\imath, \\ & \frac{\,\mathrm{d} A}{\,\mathrm{d} t}=\frac{1}{2}\partial_{y_j}c(y_j)\frac{\xi}{\abs{\xi}}A,\qquad A(0)=A_0(y_j), \end{align*} where $r_{\theta}$ is the cutoff function, the $y_0^j$'s are the equidistant mesh points, $\delta y_0$ is the mesh size and $N_{y_0}$ is the total number of the beams initially centered at $y_0^j$. \begin{example}\label{exa:1} The wave speed is $c(x)=x^2$. The initial conditions are \begin{align*} &u_0=\exp\bigl(-100(x-0.5)^2\bigr)\exp\left(\frac{\imath x}{\varepsilon}\right),\\ &\partial_t u_0=-\frac{\imath x^2}{\varepsilon}\exp\bigl(-100(x-0.5)^2\bigr)\exp\left(\frac{\imath x}{\varepsilon}\right). \end{align*} \end{example} The final time is $T=0.5$. We plot the real part of the wave field obtained by FGA compared with the true solution in Figure \ref{fig:ex1_real} for $\varepsilon=1/64,\;1/128,\;1/256$. The true solution is computed by the finite difference method using the mesh size of $\delta x=1/2^{12}$ and the time step of $\delta t=1/2^{18}$ on the domain $[0,2]$. Table \ref{tab:ex1_err} shows the $\ell^\infty$ and $\ell^2$ errors of both the FGA solution $u^{\mathrm{FGA}}$ and the GBM solution $u^{\mathrm{GBM}}$.
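The convergence orders quoted below are obtained from Table \ref{tab:ex1_err} by fitting $\ln(\mathrm{error})$ against $\ln\varepsilon$; for three values of $\varepsilon$ in geometric progression this reduces to the endpoint ratio, e.g., for the $\ell^\infty$ error of FGA,
\begin{equation*}
\frac{\ln\bigl(1.12\times10^{-1}/2.51\times10^{-2}\bigr)}{\ln\bigl(2^{8}/2^{6}\bigr)} = \frac{\ln 4.46}{\ln 4} \approx 1.08.
\end{equation*}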
The convergence orders in $\varepsilon$ of the $\ell^\infty$ and $\ell^2$ norms are $1.08$ and $1.17$ respectively for FGA, and $0.54$ and $0.57$ for GBM. We observe a better accuracy order for FGA than for GBM. We choose $\delta t=1/2^{11}$ for solving the ODEs and $\delta x=1/2^{12}$ to construct the final solution in both FGA and GBM. In FGA, we take $\delta q=\delta p=\delta y=1/2^7,\;N_q=128,\;N_p=45$ for $\varepsilon=1/128,\;1/256$ and $\delta q=\delta p=\delta y=1/2^5,\;N_q=32,\;N_p=33$ for $\varepsilon=1/64$. In GBM, we take $\delta y_0=1/2^7,\;N_{y_0}=128$ for $\varepsilon=1/128,\;1/256$ and $\delta y_0=1/2^5,\;N_{y_0}=32$ for $\varepsilon=1/64$. We remark that in this example the mesh sizes of $p$ and $q$ have been taken very small and $N_p$ large enough to make sure that the error of FGA mostly comes from the asymptotic expansion, rather than from the initial value decomposition, the numerical integration of the ODEs, and so on. Such a choice of fine mesh is not necessary for the accuracy of FGA, as one can see in Example \ref{exa:3}. \begin{table} \caption{Example \ref{exa:1}, the $\ell^\infty$ and $\ell^2$ errors for FGA and GBM.}\label{tab:ex1_err} \begin{tabular}{cccc} \hline \\[-1em] $\varepsilon$ & ${1}/{2^6}$ & ${1}/{2^7}$ & ${1}/{2^8}$ \\ \hline \\[-1em] $\norm{u-u^{\mathrm{FGA}}}_{\ell^\infty}$ & $1.12\times 10^{-1}$ & $6.18\times10^{-2}$ & $2.51\times10^{-2}$\\ \hline \\[-1em] $\norm{u-u^{\mathrm{FGA}}}_{\ell^2}$& $6.05\times 10^{-2}$ & $2.96\times10^{-2}$ & $1.19\times10^{-2}$ \\ \hline \\[-1em] $\norm{u-u^{\mathrm{GBM}}}_{\ell^\infty}$ & $7.15\times 10^{-1}$ & $5.08\times10^{-1}$ & $3.36\times10^{-1}$\\ \hline \\[-1em] $\norm{u-u^{\mathrm{GBM}}}_{\ell^2}$& $3.26\times 10^{-1}$ & $2.28\times10^{-1}$ & $1.47\times10^{-1}$ \\ \hline \end{tabular} \end{table} \begin{figure}[h t p] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{Ex1/veps64_real.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex1/err64_real.eps}} \\ \multicolumn{2}{c}{(a) $\varepsilon=\frac{1}{64}$} \\[3mm] \resizebox{2.3in}{!}{\includegraphics{Ex1/veps128_real.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex1/err128_real.eps}} \\ \multicolumn{2}{c}{(b) $\varepsilon=\frac{1}{128}$} \\[3mm] \resizebox{2.3in}{!}{\includegraphics{Ex1/veps256_real.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex1/err256_real.eps}} \\ \multicolumn{2}{c}{(c) $\varepsilon=\frac{1}{256}$} \end{tabular} \caption{Example \ref{exa:1}, the comparison of the true solution (solid line) and the solution by FGA (dashed line). Left: the real part of the wave field; right: the errors between them.}\label{fig:ex1_real} \end{figure} \begin{example}\label{exa:2} The wave speed is $c(x)=x^2$. The initial conditions are \begin{align*} & u_0=\exp\left(-\frac{(x-0.55)^2}{2\varepsilon}\right) \exp\left(\frac{\imath x}{\varepsilon} \right),\\ & \partial_t u_0=-\frac{\imath x^2}{\varepsilon}\exp\left(-\frac{(x-0.55)^2}{2\varepsilon}\right) \exp\left(\frac{\imath x}{\varepsilon}\right). \end{align*} \end{example} We use this example to illustrate the performances of FGA and GBM when the beams spread in GBM. The final time is $T=1.0$ and $\varepsilon=1/256$. Remark that the initial condition is chosen as a single beam on purpose so that one can apply GBM without introducing any initial errors. The true solution is provided by the finite difference method using $\delta x=1/2^{11}$ and $\delta t=1/2^{17}$ on the domain $[0,4]$. We take $\delta q=\delta p=\delta y=1/2^7,\;N_q=128,\;N_p=45$ in FGA to make sure that the error in the initial value decomposition of FGA is very small.
The time step is $\delta t=1/2^{10}$ for solving the ODEs and the mesh size is chosen as $\delta x=1/2^{11}$ to construct the final solution in both FGA and GBM. Figure \ref{fig:ex2_compare} compares the amplitudes of the wave field computed by FGA and GBM, and the true solution. One can see that the beam has spread severely in GBM. The results confirm that FGA performs well even when the beam spreads, while GBM does not. Moreover, it does not help to improve the accuracy if one uses more Gaussian beams to approximate the initial condition in GBM, as shown in Figure \ref{fig:ex2_multi}, where $N_{y_0}=128$ beams are used initially and $\delta y_0=1/2^7$. Remark that GBM can still give a good approximation around the beam center, where the Taylor expansion does not introduce large errors. This can be seen around $x=1.2$ in Figure \ref{fig:ex2_compare}. \begin{figure}[h t p] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{Ex2/amp_comp.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex2/err_comp.eps}} \end{tabular} \caption{Example \ref{exa:2}, the comparison of the true solution (solid line), the solution by FGA (dashed line) and the solution by GBM (dots) for $\varepsilon=\frac{1}{256}$. Left: the amplitude of the wave field; right: the error between them (dashed line for FGA, dots for GBM).}\label{fig:ex2_compare} \end{figure} \begin{figure}[h t p] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{Ex2/multiGB_comp.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex2/multiGBerr_comp.eps}} \end{tabular} \caption{Example \ref{exa:2}, the comparison of the true solution (solid line) and the solution by GBM using a multiple Gaussian initial representation (dots) for $\varepsilon=\frac{1}{256}$. Left: the amplitude of the wave field; right: the error between them.}\label{fig:ex2_multi} \end{figure} \subsection{Two dimensions} \begin{example}\label{exa:3} The wave speed is $c(x_1,x_2)=1$. The initial conditions are \begin{align*} & u_0=\exp\bigl(-100(x_1^2+x_2^2)\bigr) \exp\left(\frac{\imath}{\varepsilon}(-x_1+\cos(2x_2))\right),\\ & \partial_t u_0=-\frac{\imath}{\varepsilon}\sqrt{1+4\sin^2(2x_2)} \exp\bigl(-100(x_1^2+x_2^2)\bigr) \exp\left(\frac{\imath}{\varepsilon}(-x_1+\cos(2x_2))\right). \end{align*} \end{example} This example presents the cusp caustics shown in Figure \ref{fig:ex3_caustic}. The final time is $T=1.0$. The true solution is given by the spectral method using the mesh $\delta x_1=\delta x_2=1/512$ on the domain $[-1.5,0.5]\times[-1,1]$. We take $\delta q_1=\delta q_2=\delta p_1=\delta p_2=\delta y_1=\delta y_2=1/32$, $N_{q}=32,\;N_{p}=8$ in FGA, and use $\delta x_1=\delta x_2=1/128$ to reconstruct the solution. Figure \ref{fig:ex3_amp} compares the wave amplitude of the true solution and the one by FGA for $\varepsilon=1/128$ and $1/256$. The $\ell^\infty$ and $\ell^2$ errors of the wave amplitude are $1.98\times10^{-1}$ and $4.42\times10^{-2}$ for $\varepsilon=1/128$, and $1.07\times10^{-1}$ and $2.20\times10^{-2}$ for $\varepsilon=1/256$. This shows a linear convergence in $\varepsilon$ of the method.
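Indeed, the linear rate can be read off directly from these two runs:
\begin{equation*}
\log_2\frac{1.98\times10^{-1}}{1.07\times10^{-1}}\approx 0.89, \qquad \log_2\frac{4.42\times10^{-2}}{2.20\times10^{-2}}\approx 1.01,
\end{equation*}
for the $\ell^\infty$ and $\ell^2$ norms respectively, both consistent with first order accuracy in $\varepsilon$.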
\begin{figure}[h t p] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{Ex3/charline2d_x1.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex3/charline2d_x2.eps}} \end{tabular} \caption{Example \ref{exa:3}, a set of the characteristic lines develops the cusp caustic.}\label{fig:ex3_caustic} \end{figure} \begin{figure}[h t p] \begin{tabular}{cc} \resizebox{2.3in}{!}{\includegraphics{Ex3/veps128_HK.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex3/veps256_HK.eps}} \\ \multicolumn{2}{c}{(a) Frozen Gaussian approximation} \\[3mm] \resizebox{2.3in}{!}{\includegraphics{Ex3/veps128_SP.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex3/veps256_SP.eps}} \\ \multicolumn{2}{c}{(b) True solution} \\[3mm] \resizebox{2.3in}{!}{\includegraphics{Ex3/veps128_err.eps}} & \resizebox{2.3in}{!}{\includegraphics{Ex3/veps256_err.eps}} \\ \multicolumn{2}{c}{(c) Errors} \end{tabular} \caption{Example \ref{exa:3}, the comparison of the true solution and the solution by FGA. Left: wave amplitude of $\varepsilon=\frac{1}{128}$; right: wave amplitude of $\varepsilon=\frac{1}{256}$.}\label{fig:ex3_amp} \end{figure} \section{Discussion and Conclusion}\label{sec:conclusion} We first briefly compare the efficiency of the frozen Gaussian approximation (FGA) with the Gaussian beam method (GBM). GBM uses only one Gaussian function for each grid point in physical space, while FGA requires more Gaussians per grid point with different initial momenta to capture the behavior of focusing or spreading of the solution. However, the stationary phase approximation suggests that the number of Gaussians is only increased by a small constant multiple of the number used in GBM. In addition, in GBM one has to solve the Riccati equation, which is a coupled nonlinear ODE system in high dimension, to get the dynamics of the Hessian matrix for each Gaussian, while in FGA the Hessian matrix is determined initially and has no dynamics. Therefore, the overall efficiency of FGA is comparable to GBM. Admittedly, higher order GBM gives better asymptotic accuracy, and only requires solving a constant number of additional ODEs as in FGA. The numerical cost of higher order GBM is comparable to FGA. However, higher order GBM has its drawbacks: the imaginary part of the higher order (larger than two) tensor function does not preserve positive definiteness in time evolution, which may destroy the decay property of the ansatz of higher order GBM. This is even more severe when beams spread. Moreover, the ODEs in higher order GBM are in the form of a coupled nonlinear system in high dimension. This raises numerical difficulties caused by stability issues. We also note that the numerical integration of the ODEs in FGA can be easily parallelized since the Hamiltonian flow \eqref{eq:characline} is independent for different initial $(\bd{q},\bd{p})$, while it is not so trivial for higher order tensors in GBM. From the accuracy point of view, our numerical examples show that the first order FGA method has asymptotic accuracy $\mathcal{O}(\varepsilon)$. The existing rigorous analysis (\cites{CoRo:97, BoAkAl:09, LiRuTa:10}) proves that the $k$-th order GBM has an accuracy of $\mathcal{O}(\varepsilon^{k/2})$. Hence, at the first order, FGA has better asymptotic accuracy than GBM. We note that, however, there has been numerical evidence presenting $\mathcal{O}(\varepsilon)$ asymptotic accuracy order for first order GBM, for example in \cites{JiWuYa:08,JiWuYa:11,MoRu:app,LiRuTa:10}. This phenomenon is usually attributed to error cancellation between different beams.
To the best of our knowledge, the mechanism of error cancellation in GBM has not been systematically understood yet. With the gain of half an order in asymptotic accuracy due to cancellation, the first order GBM has the same accuracy order as FGA (of course GBM still loses accuracy when beams spread). Remark that the gain in asymptotic accuracy order depends on the choice of norm. For example, the first order GBM has half-order convergence in the $\ell^\infty$ norm, first order convergence in the $\ell^2$ norm and $3/2$-order convergence in the $\ell^1$ norm in Example $1$ of \cite{JiWuYa:08}. Moreover, the error cancellation seems not to be easily observed in numerics unless $\varepsilon$ is very small. For instance, the convergence order of GBM in Example \ref{exa:1} is only a bit better than $1/2$ for $\varepsilon$ down to $1/256$. In FGA, by contrast, we numerically observe first order asymptotic accuracy in both the $\ell^2$ and $\ell^{\infty}$ norms. Actually the accuracy of FGA can also be understood from a viewpoint of error cancellation. Note that the equalities \eqref{eq:con1}, \eqref{eq:con2} and \eqref{eq:con3} in Lemma \ref{lem:veps1} play the role of determining the accuracy of FGA. In \eqref{eq:con1}, the term $\bd{x}-\bd{Q}$ is of order $\mathcal{O}(\sqrt{\varepsilon})$ due to the Gaussian factor, but after integration with respect to $\bd{q}$ and $\bd{p}$, which is similar to the beam summation in GBM, it becomes $\mathcal{O}(\varepsilon)$. A similar improvement of order also happens in \eqref{eq:con2} and \eqref{eq:con3}. Integration by parts along with \eqref{eq:dzPhi} explains the mechanism of this type of error cancellation. We conclude the paper as follows. In this work, we propose the frozen Gaussian approximation (FGA) for computation of high frequency wave propagation, motivated by the Herman-Kluk propagator in the chemistry literature. This method is based on asymptotic analysis and constructs the solution using Gaussian functions with fixed widths that live on the phase plane. It not only provides an accurate asymptotic solution in the presence of caustics, but also resolves the problem in the Gaussian beam method (GBM) when beams spread. These merits are justified by numerical examples. Additionally, numerical examples also show that FGA exhibits better asymptotic accuracy than GBM. These advantages make FGA quite competitive for computing high frequency wave propagation. For the purpose of presenting the idea simply and clearly, we only describe the method for the linear scalar wave equation using the leading order approximation. The method can be generalized for solving other hyperbolic equations and systems with a character of high frequency. The higher order approximation can also be derived. Since the method is of Lagrangian type, the issue of divergence still remains, which will be resolved in an Eulerian framework. We present these results in the subsequent paper \cite{LuYang:MMS}. \FloatBarrier \bibliographystyle{amsalpha}
\section{Introduction} Despite the long history of quantum mechanics, the quantum measurement problem continues to attract interest, motivated mostly by the counter-intuitive features of the ``wavefunction reduction''. Although the dynamics of any measurement set-up is governed by the Schr\"{o}dinger equation with an appropriate Hamiltonian, a full description of the measurement process cannot be obtained without accounting for the changes in the wavefunction of the measured system caused by the random process of selection of one specific outcome of measurement out of the range of possible outcomes. This selection process is trivial in the case of classical dynamics, when all possible outcomes of measurement are ``orthogonal'' and the observation of the measured system in one particular state does not imply any changes in the system beyond the statement that it occupies this and not any other state. For a quantum system, however, the existence of non-commuting observables implies that selection of one particular outcome of measurement can change the wavefunction of the system in a highly non-trivial way. Such a reduction of the wavefunction appears as an evolution principle additional to the Schr\"{o}dinger equation. Moreover, the changes in the wavefunction of the measured system induced by it can violate the basic features of dynamics which follow from the Schr\"{o}dinger equation, despite the fact that the measurement process as a whole is governed by this equation. The best known example of this situation is the case of EPR correlations \cite{r1} between two spatially separated spins, which violate the no-action-at-a-distance principle as quantified by Bell's inequalities \cite{r2}. From the perspective of the wavefunction reduction, the EPR correlations appear as a result of selection of one random specific outcome of the local spin measurement. On average, there is no action-at-a-distance in the sense that the correlations by themselves cannot lead to information transfer between the points where the spins are located. Current interest in solid-state quantum information processing (see, e.g., the reviews \cite{l3,l4,l5}) motivates the development of mesoscopic solid-state structures that can serve both as simple quantum systems, e.g., qubits or harmonic oscillators, and as detectors. Although in experiments mesoscopic detectors have not yet reached the stage where they can be used to look into the basic questions of quantum measurement theory (which requires quantum-limited detection), one can expect this to happen quite soon. A new element introduced by the mesoscopic structures in the discussion of quantum measurements is the fact that the wavefunction reduction is not necessarily caused by interaction of a ``microscopic'' measured system with a ``macroscopic'' detector. In mesoscopic structures, the measured systems and the detectors are similar in many respects (including dimensions and typical dynamics) and are frequently interchangeable: a measured system in one context can act as the detector in another, and vice versa. This shows that the boundary at which the quantum coherent dynamics should be complemented with the wavefunction reduction is not universal. The aim of this work is to provide a quantitative discussion of models and measurement dynamics of mesoscopic detectors.
The discussion emphasizes the interplay between the dynamic and information sides of the measurement process and can serve as an introduction to the problem of wavefunction reduction in mesoscopic structures. \section{Measurement dynamics of ballistic mesoscopic detectors} The majority of mesoscopic detectors use as their operating principle the ability of a measured system to control the transport of some particles between two reservoirs. The information about the state of the system is then contained in the magnitude of the particle current between the reservoirs, which serves as the detector output. In the most direct form, this principle is implemented in the quantum point contact (QPC) detector~\cite{q1,q2}, which presently is the main detector used for measurements of quantum dot qubits~\cite{q3,q4,q5,q6,q7}. In the QPC detector (Fig.~\ref{fm2}), the propagating particles are electrons which move ballistically through a short one-dimensional constriction formed between the two electrodes of the QPC. The electrodes can be viewed as reservoirs of independent and effectively non-interacting electrons. The measured system creates an electrostatic potential that makes the scattering potential $U_j(x)$ for electrons in the constriction dependent on the state $|j\rangle$ of the system, and in this way controls the transmission probability of the QPC. The output of the QPC detector is the electric current $I$ driven by the voltage difference $V$ between the electrodes. The current depends on the electron transmission probability and, as a result, contains information about the state $|j\rangle$. Since the interaction between the QPC electrons and the measured system is dominated by the electrostatic potential, the QPC acts as a charge detector. Another example of a ballistic mesoscopic detector is the magnetic analog of the QPC based on the ballistic motion of magnetic flux quanta (fluxons) through a one-dimensional channel, the role of which is played by the Josephson transmission line (JTL)~\cite{q8}. The scattering potential $U_j(x)$ for the fluxons in the JTL is created by the magnetic flux or current, and the JTL detector can be used for measurements of superconducting flux qubits. In the JTL detector, the fluxons can be injected into the JTL individually, providing control over the individual scattering events. \begin{figure} \hspace{4cm} \epsfxsize=5cm \epsfbox{fig2.eps} \caption{Schematic of the QPC detector measuring a charge qubit. The two qubit states $|j\rangle$, $j=1,2$, are localized on the opposite sides of a tunnel barrier and are coherently coupled by tunneling across this barrier with coupling strength $\Delta$. Transfer of the qubit charge between the states $|j\rangle$ changes the electrostatic potential in the scattering region of the QPC, affecting the current $I$ through it that is driven by the applied voltage $V$.} \label{fm2} \end{figure} The detector model in which the output information is contained in the transport current flowing between the two reservoirs applies to many of the mesoscopic detectors (see Sec. 4). There are several reasons for this. One is the strong (in the tunnel limit, exponential) dependence of the scattering amplitudes on the parameters of the scattering potential, which leads to a sufficiently large sensitivity of the detector to the measured system. Another, more important, is the fact that the scattering dynamics contains strongly divergent transmitted and reflected trajectories that create easily detectable different outcomes of measurement.
This feature of scattering is not easily reproducible in other types of dynamics~\cite{q9}. Finally, the transport between large reservoirs makes it possible to repeat scattering events at a certain rate, thereby amplifying the result of the scattering of one particle. \begin{figure} \hspace{2.5cm} \epsfxsize=8cm \epsfbox{fig1.eps} \caption{Measurement dynamics of a ballistic mesoscopic detector. The wavepacket of a particle with momentum $k$ is scattered by the potential $U_j(x)$ controlled by the measured system. The scattering potential and the transmission/reflection amplitudes $t_j(k)$, $r_j(k)$ contain information about the state $|j\rangle$ of the system.} \label{fm1} \end{figure} In general, the process of quantum measurement can be understood as the creation of an entangled state of the measured system and the detector as a result of interaction between them. The states of the detector are classical and suppress quantum superposition of different outcomes of measurement. The two consequences of this process are the acquisition of information about the system by the detector, and the ``back-action'' dephasing of the measured system; see, e.g., \cite{q10,q11}. Because of the system-detector entanglement, finding a given detector output provides some indication of what state the measured system is in. On the other hand, the same entanglement means that quantum coherence among the states of the measured system is suppressed. This implies that there is a close connection between information acquisition and back-action dephasing. In the optimal situation, the rate $W$ with which the detector obtains information about the system and the dephasing rate $\Gamma$ are the same. Of course, the detector can always introduce some parasitic dephasing into the system dynamics, so that in general $W\leq \Gamma$. In view of this inequality, a detector with $W= \Gamma$ is called ``ideal'' or ``quantum limited''. If the detector is far from being quantum-limited, it destroys quantum coherence in the measured system long before it provides information that can be used to select specific outcomes of measurement. Because of this, only detectors that are close to being quantum-limited can give rise to non-trivial wavefunction reduction. \subsection{Back-action dephasing rate} Measurement dynamics with a ballistic mesoscopic detector is illustrated in Fig.~\ref{fm1}. For the ballistic detector, the detector-system entanglement arises as a result of scattering (Fig.~\ref{fm1}), and the rates of information acquisition and back-action dephasing can be expressed in terms of the scattering amplitudes \cite{q12}. To do this, we consider the evolution of the density matrix $\rho$ of the measured system in the scattering of one particle. For simplicity, the Hamiltonian of the system itself is assumed to be zero (e.g., $\Delta=0$ in the example of Fig.~\ref{fm2}), and the system evolution is caused only by the interaction with the detector. The evolution of $\rho$ is then obtained from the time dependence of the total wavefunction of the scattered particle and the stationary wavefunction $\sum_j c_j |j\rangle$ of the measured system: \begin{equation} \psi(x,t=0) \cdot \sum_j c_j|j\rangle \rightarrow \sum_j c_j\,\psi_j(x,t) \cdot |j\rangle \, . \label{e11} \end{equation} Here $\psi(x,t=0)$ is the initial wavefunction of the particle injected into the scattering region from the reservoir, and its time evolution $\psi_j(x,t)$ depends on the realization $U_j(x)$ of the potential created by the measured system.
Tracing over the detector, i.e., the scattering wavefunction, one gets from Eq.~(\ref{e11}): \begin{equation} \rho_{ij} =c_ic_j^* \rightarrow c_ic_j^* \int dx \psi_i(x,t) \psi_j^*(x,t) \, . \label{e12} \end{equation} Qualitatively, the time evolution in (\ref{e11}) describes the propagation of the initial wavepacket towards the scattering potential and its subsequent separation in coordinate space into the transmitted and reflected parts that are well-localized on the opposite sides of the scattering region. At time $t>t_{sc}$, where $t_{sc}$ is the characteristic scattering time, the separated wavepackets move in the region free from the $j$-dependent potential, and the unitarity of the quantum-mechanical evolution of $\psi_j(x,t)$ implies that the overlap of the scattered wavefunctions in Eq.~(\ref{e12}) becomes independent of $t$. This overlap can be found directly in the momentum representation: \begin{equation} \int dx \psi_i(x,t) \psi_j^*(x,t)= \int dk |b(k)|^2[t_i(k)t_j^*(k)+r_i(k) r_j^*(k)]\, , \label{e13} \end{equation} where $b(k)$ is the probability amplitude for the injected particle to have momentum $k$ in the initial state. Equations (\ref{e12}) and (\ref{e13}) show that the diagonal elements of the density matrix $\rho$ do not change in the scattering process: \begin{equation} \int dk |b(k)|^2[|t_i(k)|^2+|r_i(k)|^2]=1 \, , \label{e14} \end{equation} while the off-diagonal elements are suppressed by the factor \begin{equation} \left|\int dk |b(k)|^2 [t_i(k)t_j^*(k)+ r_i(k)r_j^*(k)] \right|\leq 1 \, .\label{e14*} \end{equation} The inequality in this relation follows from the Schwarz inequality for the scalar product in the Hilbert space of ``vectors'' $|b(k)|\cdot \{t_j(k),r_j(k)\}$ of the unit length (\ref{e14}). Suppression of the off-diagonal elements of $\rho$ is a manifestation of the back-action dephasing of the measured system by the detector. Assuming that the particles are injected from the reservoir with frequency $f$ and combining the suppression factors (\ref{e14*}) for the successive scattering events, we obtain the dephasing rate as \[ \Gamma_{ij} = - f \ln \left|\int dk |b(k)|^2 [t_i(k)t_j^*(k)+ r_i(k)r_j^*(k)] \right| \, . \] If the scattering amplitudes do not depend on momentum $k$ in the range of momenta limited by $|b(k)|^2$, the back-action dephasing rate becomes independent of the form of the initial wavepacket of the injected particles: \begin{equation} \Gamma_{ij} = - f \ln \left| t_it_j^*+ r_ir_j^* \right| \, . \label{e15} \end{equation} Equation (\ref{e15}) is the general expression for the back-action dephasing rate of a ballistic mesoscopic detector with momentum- (and energy-) independent scattering amplitudes. It is valid, in particular, for the QPC detector at low temperatures $T\ll eV$, if the injection frequency $f$ is taken to be equal to the ``attempt frequency'' $eV/h$ with which electrons are incident on the scattering region~\cite{q13}. In the linear-response regime, when the changes in the scattering amplitudes with $j$ are small, the limiting form of Eq.~(\ref{e15}) was obtained in \cite{l6,l7,b9,l8,l9}. In the tunnel limit $t_j\rightarrow 0$, Eq.~(\ref{e15}) reduces to \begin{equation} \Gamma_{ij} = (f/2)| \bar{t}_i-\bar{t}_j |^2 \, , \label{e151} \end{equation} where \begin{equation} \bar{t}_j \equiv |t_j|e^{i\phi_j} , \;\; \phi_j \equiv \mbox{arg}(t_j /r_j) \, . \label{e152} \end{equation} When the phases $\phi_j$ can be neglected, Eq.~(\ref{e151}) reproduces earlier results \cite{gur} for the dephasing by the QPC detector in the tunnel limit.
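As a simple numerical illustration, the dephasing rate (\ref{e15}) and its tunnel-limit form (\ref{e151}) can be compared directly; the following minimal Python sketch assumes NumPy and uses illustrative amplitude values that satisfy unitarity, $|t_j|^2+|r_j|^2=1$, with equal phases $\phi_j$:
\begin{verbatim}
# Sketch: back-action dephasing rate, Eq. (e15), vs its tunnel limit,
# Eq. (e151). All parameter values are illustrative; the injection
# frequency f is set to 1 (for the QPC, f = eV/h).
import numpy as np

def dephasing_rate(t_i, r_i, t_j, r_j, f=1.0):
    # Eq. (e15): Gamma_ij = -f ln|t_i t_j^* + r_i r_j^*|
    return -f * np.log(np.abs(t_i * np.conj(t_j) + r_i * np.conj(r_j)))

T1, T2 = 0.01, 0.02                         # tunnel limit: T_j << 1
t1, r1 = np.sqrt(T1), 1j * np.sqrt(1 - T1)  # phases chosen so that
t2, r2 = np.sqrt(T2), 1j * np.sqrt(1 - T2)  # phi_j = arg(t_j/r_j) is equal

gamma_exact = dephasing_rate(t1, r1, t2, r2)
gamma_tunnel = 0.5 * (np.sqrt(T1) - np.sqrt(T2))**2  # Eq. (e151), equal phases

print(gamma_exact, gamma_tunnel)  # ~8.7e-04 vs ~8.6e-04: agree to O(T_j^2)
\end{verbatim}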
As we will see in Sec.~4, Eq.~(\ref{e151}) can be obtained in this limit under more general assumptions, and describes a large number of different mesoscopic detectors. \subsection{Information acquisition rate} As was discussed above, the back-action dephasing is only one part of the measurement process. The other part is the acquisition by the detector of information about the state of the measured system. In the case of a ballistic detector, the information is contained in the scattering amplitudes of the incident particles, and the rate of its acquisition depends on the specific characteristics of the amplitudes recorded by the detector. One of the simplest possibilities in this respect, realized, e.g., in the QPC detectors, is to record the changes in the transmission probabilities which determine the magnitude of the particle current through the scattering region. (Alternatively, one could modify the scattering scheme by forcing the scattered particles to interfere, and in this way use the phase information \cite{q12} in the scattering amplitudes.) The rate of information extraction from the current magnitude, i.e., the rate of increase of the confidence level in distinguishing different states $|j\rangle$, can be calculated simply by starting with the transmission/reflection probabilities $T_j=|t_j|^2$ and $R_j=1-T_j$ when the measured system is in the state $|j\rangle$. Since successive scattering events are independent, the probability $p_j(n)$ of having $n$ out of $N$ incident particles transmitted is given by the binomial distribution $p_j(n)=C_N^n T_j^nR_j^{N-n}$. The task of distinguishing different states $|j\rangle$ of the measured system is transformed by the detector into distinguishing the probability distributions $p_j(n)$ for different $j$. Since the number $N=ft$ of scattering attempts increases with time $t$, the distributions $p_j(n)$ become progressively more strongly peaked around the corresponding average numbers $T_jN$ of transmitted particles. The states with different probabilities $T_j$ can then be distinguished with increasing certainty. The rate of increase of this certainty can be characterized quantitatively by some measure of the overlap of the distributions $p_j(n)$. While in general there are different ways to characterize the overlap of different probability distributions \cite{q14}, the characteristic which is appropriate in the quantum measurement context \cite{q15,q11} is closely related to ``fidelity'' in quantum information \cite{q14}: $\sum_n [p_i(n)p_j(n)]^{1/2}$. The rate of information acquisition can then be defined naturally as \cite{q12}: \begin{equation} W_{ij} = - (1/t) \ln \sum_n [p_i(n) p_j(n)]^{1/2} \, . \label{e17} \end{equation} Using the binomial distribution in this expression we get: \begin{equation} W_{ij} = - f \ln [(T_iT_j)^{1/2}+(R_iR_j)^{1/2} ] \, . \label{e18} \end{equation} Equation (\ref{e18}) gives the information acquisition rate of ballistic mesoscopic detectors. Comparing Eqs.~(\ref{e15}) and (\ref{e18}) we see that for this type of detector, in accordance with the general understanding of quantum measurements, the back-action dephasing rate and the information acquisition rate satisfy the inequality \begin{equation} W_{ij} \leq \Gamma_{ij} \, . \label{e21} \end{equation} Equality in this relation gives the condition of the quantum-limited operation of the ballistic mesoscopic detector under the assumption of energy-independent scattering amplitudes.
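The structure of Eqs.~(\ref{e17})--(\ref{e21}) is easy to check numerically. The following Python sketch (the transmission probabilities and the relative phase $\chi$ are illustrative assumptions) confirms that the binomial sum in Eq.~(\ref{e17}) collapses exactly to the closed form (\ref{e18}), and that $\Gamma_{ij}$ of Eq.~(\ref{e15}) exceeds $W_{ij}$ as soon as the transmission amplitudes acquire different phases:
\begin{verbatim}
# Sketch: information rate, Eqs. (e17)-(e18), vs dephasing rate, Eq. (e15).
# Transmission probabilities and the phase chi are illustrative values.
import numpy as np
from scipy.stats import binom

def W_rate(T_i, T_j, N, f=1.0):
    # Eq. (e17) with binomial p_j(n), per scattering attempt (t = N/f)
    n = np.arange(N + 1)
    fid = np.sum(np.sqrt(binom.pmf(n, N, T_i) * binom.pmf(n, N, T_j)))
    return -f * np.log(fid) / N

T_i, T_j = 0.8, 0.4
W18 = -np.log(np.sqrt(T_i * T_j) + np.sqrt((1 - T_i) * (1 - T_j)))  # Eq. (e18)
print(W_rate(T_i, T_j, 50), W18)   # identical: binomial theorem collapses (e17)

# Dephasing rate, Eq. (e15), with a relative phase chi between the
# transmission amplitudes (chi = 0 satisfies the ideality condition):
for chi in (0.0, 0.3):
    t_i, r_i = np.sqrt(T_i), np.sqrt(1 - T_i)
    t_j, r_j = np.sqrt(T_j) * np.exp(1j * chi), np.sqrt(1 - T_j)
    Gamma = -np.log(np.abs(t_i * np.conj(t_j) + r_i * np.conj(r_j)))
    print(chi, Gamma, W18)   # Gamma = W at chi = 0; Gamma > W for chi != 0
\end{verbatim}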
The equality holds if \begin{equation} \phi_j = \phi_i \, , \label{e19} \end{equation} where the phases $\phi_j$ are defined in Eq.~(\ref{e152}). Condition (\ref{e19}) has a simple interpretation as the statement that there is no information on the states $|j\rangle$ in the phases of the scattering amplitudes. Deviations from Eq.~(\ref{e19}) mean that the phases contain information about the measured system which is lost in a detection scheme sensitive only to the transmission probabilities $T_j$. In this case, the information loss in the detector prevents it from being quantum-limited. In practical terms, the simplest way to satisfy condition (\ref{e19}) is to make the scattering potential symmetric, $U_j(-x)=U_j(x)$, for all states $|j\rangle$. The unitarity of the scattering matrix for this potential then implies that $\phi_j =\pi/2$ for any $j$, and Eq.~(\ref{e19}) is automatically satisfied. If the detector is quantum-limited, it can demonstrate non-trivial wavefunction reduction. \subsection{Conditional evolution} A quantitative description of the wavefunction reduction due to interaction with a detector can be formulated as ``conditional'' evolution, in which the dynamics of the measured system is conditioned on the observation of a particular outcome of measurement. In the axiomatic approach, wavefunction reduction is formalized together with the dynamic evolution as a ``quantum operation''~\cite{r3}, an arbitrary linear transformation of the system density matrix satisfying physically motivated axioms. In this approach, a detector is characterized by a set of positive operators which correspond to all possible outcomes of measurements with this detector, or a ``positive operator valued measure'' (POVM)~\cite{r4}. In practice, for any real specific detector, it is clear what the possible classical outcomes of measurements are, and the emphasis is then on the development of dynamic equations that would describe the evolution of the measured system conditioned on a given detector output. Since the different outcomes of the detector evolution are classically distinguishable, it is meaningful to ask how the measured system evolves for a given output. Such a conditional evolution of the measured system describes quantitatively the wavefunction reduction in the measurement process with a particular detector (see, e.g., \cite{q16,q17,q18,q11}). In the case of ballistic mesoscopic detectors, each act of particle scattering represents an elementary measurement process. Since the particle trajectories that correspond to the different outcomes of scattering -- transmission through or reflection from the scattering region -- are strongly separated, these outcomes should be considered as non-interfering classical events. Although the absence of quantum coherence between these two outcomes is an assumption of the conditional approach, this assumption is very natural. Propagation of the scattered particles in the different reservoirs of the detector entangles them with different environments, a process that very efficiently suppresses their mutual quantum coherence \cite{r5}. While this ``common-sense'' assumption of the absence of quantum coherence between different outputs of a realistic detector is sometimes considered unsatisfactory from an abstract point of view \cite{r6,l1}, it can be given a fairly rigorous description in terms of decoherence in open quantum systems -- see, e.g., \cite{r7}.
Quantitatively, conditional equations are obtained by separating in the total wavefunction the terms that correspond to a specific classical outcome of measurement and renormalizing this part of the wavefunction so that it corresponds to the total probability of 1~\cite{q20,q11,q8}. In the ballistic detector, there are two classically different outcomes of scattering, transmission and reflection, for each injected particle. This means that the wavefunction of the measured system should be conditioned on the observation of either a transmitted or a reflected particle in each elementary cycle of measurement. The evolution of the total wavefunction ``detector+measured system'' as a result of the scattering of one particle is described by Eq.~(\ref{e11}). Under the assumption of energy-independent scattering amplitudes, the momentum and coordinate dependence of the states of the scattered particles in the detector is the same for different states $|j\rangle$, and can be factored out from the total wavefunction. The evolution of the measured system can then be conditioned on the transmission/reflection of a particle simply by keeping in Eq.~(\ref{e11}) the terms that correspond to the actual outcome of scattering in the form of the appropriate scattering amplitudes. If the particle is transmitted through the scattering region or reflected from it in a given measurement cycle, the amplitudes $c_j$ for the system to be in the state $|j\rangle$ then change, respectively, as follows: \begin{equation} c_j \rightarrow t_j c_j \big/ \left[\Sigma_j |c_j |^2 T_j\right]^{1/2}, \;\;\;\; c_j \rightarrow r_j c_j \big/ \left[\Sigma_j |c_j |^2 R_j \right]^{1/2} . \label{e32} \end{equation} We see that the expansion coefficients $c_j$ of the system's wavefunction change in the conditional evolution despite the initial assumption that the system Hamiltonian is zero. This is unusual from the point of view of the Schr\"{o}dinger equation, and provides a quantitative expression of the reduction of the wavefunction in the measurement process. Also, it should be noted that the transformations (\ref{e32}) do not decohere the measured system despite the back-action dephasing by the detector discussed above. To understand this, one should note that, as in the case of any dephasing, the back-action dephasing can be viewed as a loss of information. For the quantum-limited detection, the overall evolution of the detector and the measured system is quantum-coherent, and the only possible source of the information loss is averaging over the detector evolution. This means that the back-action dephasing arises as the result of averaging over different measurement outcomes~\cite{q19}, and specifying a definite outcome as done in the conditional dynamics removes all losses of information and eliminates the dephasing. Equations (\ref{e32}) can be applied directly to the detectors which provide control over the scattering of individual particles, e.g., to the JTL detector~\cite{q8}, where it makes sense to discuss changes in the wavefunction of the measured system induced by one scattering event. In some detectors, however, such control over individual scattering events is not fully possible. For instance, in the QPC detector, the picture of individual scattering events leading to Eqs.~(\ref{e15}) and (\ref{e18}) for the back-action dephasing and information rates is, strictly speaking, valid only on the relatively large time scale $t \gg h/eV$, when the typical number of electron scattering attempts is larger than 1.
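As a minimal illustration of Eq.~(\ref{e32}), the following Python sketch unravels a sequence of elementary measurement cycles; the outcome of each cycle is sampled with the transmission probability $\sum_j |c_j|^2 T_j$ implied by the normalization in Eq.~(\ref{e32}), and the transparencies are illustrative. The sketch also checks that the final state depends on the record only through the number $n$ of transmissions, anticipating the generalization, Eq.~(\ref{e33}), derived in the next paragraph:
\begin{verbatim}
# Sketch: conditional evolution, Eq. (e32), iterated over N scattering
# events, and its collapsed form, Eq. (e33). Parameters are illustrative.
import numpy as np

def condition(c, t, r, transmitted):
    a = t if transmitted else r        # amplitudes for the observed outcome
    new_c = a * c
    return new_c / np.linalg.norm(new_c)

rng = np.random.default_rng(0)
T = np.array([0.8, 0.4])               # T_j for the two system states
t, r = np.sqrt(T), np.sqrt(1 - T)      # real amplitudes (ideal detector)
c0 = np.array([1.0, 1.0]) / np.sqrt(2)

c, n, N = c0.copy(), 0, 50
for _ in range(N):
    p_trans = np.sum(np.abs(c)**2 * T) # probability of a transmission
    transmitted = rng.random() < p_trans
    n += transmitted
    c = condition(c, t, r, transmitted)

# Eq. (e33): the same final state from n alone, independent of ordering
c33 = t**n * r**(N - n) * c0
c33 /= np.sqrt(np.sum(np.abs(c0)**2 * T**n * (1 - T)**(N - n)))
print(np.abs(c)**2, np.abs(c33)**2)    # identical; the state has localized
\end{verbatim}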
One can generalize Eq.~(\ref{e32}) to this situation by considering the time interval $t$ which includes a number $N=ft>1$ of scattering attempts, where for the QPC $f=eV/h$. Combining the transformations (\ref{e32}) one can see that the observation of any sequence of transmission/reflection events that includes $n$ transmissions and $N-n$ reflections changes the wavefunction as: \begin{equation} c_j \rightarrow t_j^n r_j^{(N-n)} c_j \Big/ \left[\Sigma_j |c_j|^2 T_j^n R_j^{(N-n)}\right]^{1/2}, \label{e33} \end{equation} regardless of the specific ordering of these events. This equation includes as a particular case Eq.~(\ref{e32}), which follows when $N=1$. Since the wavefunction obtained as a result of the transformation (\ref{e33}) is the same for all $C_N^n$ sequences with the same total number $n$ of transmitted particles, one can distinguish all scattering outcomes only by $n$. For each of the $N+1$ outcomes with different $n$ the wavefunction is transformed according to Eq.~(\ref{e33}). This means that the wavefunction reduction has the form (\ref{e33}) independently of whether the detector suppresses quantum coherence between all the sequences of scattering outcomes or only between the states with different total numbers of transmissions/reflections. Transformations (\ref{e33}) can be used to study quantitatively unusual manifestations of the wavefunction reduction in the mesoscopic solid-state qubits. \section{Tunneling without tunneling: wavefunction reduction in a mesoscopic qubit} Probably the simplest example of the counter-intuitive features of the wavefunction reduction in mesoscopic qubits arises from the question of whether a quantum particle can tunnel through a barrier which has vanishing transparency. The immediate answer to this question is ``no'', as follows from the elementary properties of the Schr\"{o}dinger equation. It seems basically a tautology to say that if the tunneling amplitude is zero (e.g., the barrier is infinitely high) the tunneling is suppressed. Of course, a somewhat more careful consideration reminds us that evolution according to the Schr\"{o}dinger equation is not the only way for the state of a quantum particle to change in time. Changes in the particle state can also be caused by the wavefunction reduction, which, as discussed also in the Introduction, can in principle violate any feature of the dynamic evolution of a quantum system. This can be expressed quantitatively through ``Bell'' inequalities generalizing the classic Bell inequalities, which quantify violation of the no-action-at-a-distance principle. For measurements of a mesoscopic qubit (Fig.~\ref{fm2}), the peculiarities of the quantum dynamics of the system originate from the possibility of quantum-coherent uncertainty in the position of the charge or flux between the two basis states $|j\rangle$ of the qubit. In the regime of coherent oscillations of the qubit ($\Delta \neq 0$), this uncertainty gives rise to several ``temporal'' Bell inequalities \cite{q21,q22,q23,q24}. In this Section, we discuss a sequence of quantum transformations that is centered around the qubit dynamics with suppressed tunneling, $\Delta=0$. The transformations lead to the associated Bell-type inequality, which quantifies violation of the fundamental intuition of many solid-state physicists: charge or magnetic flux cannot tunnel through an infinitely large barrier.
The sequence of transformations includes a measurement done on the qubit and shows that the wavefunction reduction can indeed violate this Schr\"{o}dinger-equation-based intuition, and a particle can be transferred through an ``impenetrable'' barrier in the process of quantum measurement. The required manipulations of the qubit state are close to those in current experiments on coherent oscillations and more complex dynamics of mesoscopic qubits. Although the discussion in this Section applies in general to all types of qubits, the physics content of the wavefunction reduction is more striking for the semiconductor \cite{q4,q6,q7} or superconductor \cite{b1,l10,b2,b3,b4} charge qubits, or for the flux qubits \cite{l11,b5,b6}. In this case, the basic set-up is equivalent to the one shown in Fig.~\ref{fm2}. The two basis states $|j\rangle $, $j=1,2$, of the qubit differ by some amount of magnetic flux or by an individual charge (the electron charge $e$ in a semiconductor quantum dot qubit, or the Cooper-pair charge $2e$ in a superconductor qubit) localized on the opposite sides of a tunnel barrier. The states are coupled by the tunnel amplitude $\Delta >0$. At the point of resonance, when the bias energy $\epsilon$ between the two basis states vanishes, the qubit Hamiltonian reduces to \begin{equation} H=-\Delta \sigma_x \, , \label{e34} \end{equation} and describes quantum coherent oscillations with frequency $2\Delta$. The tunnel amplitude $\Delta$ and the bias energy $\epsilon$ are assumed to be controlled externally. The main element of the sequence of qubit transformations consists in preparing a superposition (for simplicity, the symmetric superposition, $\sigma_x=1$) of the qubit basis states, then switching off the tunnel amplitude $\Delta$ and performing a quantum-limited but weak measurement of the qubit position in the $\sigma_z$ basis. The idea is to prove that the measurement-induced transfer of the qubit wavefunction between the two qubit states at $\Delta=0$ produces not only changes of the probability (representing our knowledge about the qubit position) but also an actual transfer of charge or flux through the infinitely large barrier. In order to do this, we include the measurement-induced wavefunction transfer as a part of a cyclic transformation, the other part of which is known to transfer charge or flux. In the ideal situation, when the operations are precise and there is no intrinsic decoherence, the cycle should return the qubit precisely to the initial state $\sigma_x=1$, making it possible to conclude that the measurement part of the cycle transferred the qubit state through the barrier with vanishing tunneling amplitude. In the presence of external perturbations, there will be a probability $p^{(-)}$ of finding the qubit not in the initial state. One can, however, derive a condition in the form of an inequality on $p^{(-)}$, violation of which shows that this observation cannot be explained within the assumption of some initial classical probability distribution over the qubit basis states. This means that an explanation of the observed violation should necessarily involve the transfer of the qubit state through the suppressed barrier by the process of the wavefunction reduction.
In detail, the starting point of the sequence of transformations is the ground state of the Hamiltonian (\ref{e34}) \begin{equation} |\psi_0 \rangle=(|1 \rangle+ |2 \rangle)/\sqrt{2} \, , \label{e35} \end{equation} in which the qubit will find itself because of the unavoidable, but assumed to be weak, relaxation processes, if the Hamiltonian (\ref{e34}) is kept stationary for some time. Starting from this state, the tunnel amplitude $\Delta$ is abruptly switched off, $\Delta=0$. The rate of this process is not important in the case of the Hamiltonian (\ref{e34}), since the final state will coincide with (\ref{e35}) regardless of how slowly or quickly $\Delta$ is switched off. However, in the presence of some parasitic residual bias $\epsilon$, the rate of variation of $\Delta$ should be much larger than $\epsilon$ to preserve the state (\ref{e35}) at the end of the switching process. Next, as the first step of the actual transformations of the qubit state (\ref{e35}), a weak quantum-limited measurement of the $\sigma_z$ operator is performed on this state. The result of this operation does not depend on the specific model of measurement, as long as it is quantum-limited. For such a measurement, we know that specifying the detector output $n$ leaves the qubit in a pure state which is obtained from (\ref{e35}) by an increased degree of localization in the $\sigma_z$ basis because of the information about $\sigma_z$ provided by the measurement: \begin{equation} |\psi_1 \rangle=\alpha_n |1 \rangle+ \beta_n |2 \rangle \, . \label{e36} \end{equation} As a suitable model of this measurement one can use the measurement with a ballistic detector discussed above. In this case, the detector output is the number $n$ of transmitted particles in $N=ft$ scattering attempts. Then, according to Eq.~(\ref{e33}), \[ \alpha_n = \frac{t_1^nr_1^{(N-n)}}{\left[T_1^n R_1^{(N-n)} + T_2^n R_2^{(N-n)} \right]^{1/2}}\, , \;\;\; \beta_n = \frac{ t_2^n r_2^{(N-n)} }{ \left[T_1^n R_1^{(N-n)} + T_2^n R_2^{(N-n)} \right]^{1/2}} \, .\] Using the condition (\ref{e19}) of the detector ideality one can see that the relative phase $\xi$ of the coefficients $\alpha_n$ and $\beta_n$ is independent of the detector output $n$: $\xi=N[\arg(t_1)-\arg(t_2)] $. It can then be viewed as a renormalization of the qubit bias $\epsilon$ due to the detector-qubit coupling, and can be compensated for by the bias shift $\delta \epsilon = f [\arg(t_1)-\arg(t_2)]$ during the period of measurement. The coefficients $\alpha, \beta$ then have the following form: \begin{equation} \alpha_n = \left[ \frac{w_1^{(n)}}{w_1^{(n)} + w_2^{(n)}} \right]^{1/2} , \;\;\; \beta_n = \left[\frac{ w_2^{(n)}}{ w_1^{(n)} + w_2^{(n)} } \right]^{1/2} , \label{e37} \end{equation} where \[ w_j^{(n)}= T_j^n R_j^{(N-n)} \] can be interpreted as the relative probability for the qubit to be in the state $|j\rangle$ for a given detector outcome $n$. In the situation when the detector provides no information on $\sigma_z$, e.g., if $T_1=T_2$, the coefficients (\ref{e37}) are unchanged from their initial values (\ref{e35}). Otherwise, the probability amplitude is shifted in the direction of the more probable state: the amplitude (\ref{e37}) of one qubit state is increased in comparison with (\ref{e35}) if the observed $n$ is closer to the value $n_j=T_jN$ characteristic of this state than to that of the other state. As an illustration, Fig.~\ref{fm3} shows the amplitude $\alpha_n$ for $N=10$, $T_1=0.8$, and $T_2=0.4$, as a function of $n$.
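A minimal Python sketch that evaluates Eq.~(\ref{e37}) for these parameters reproduces the behavior plotted in Fig.~\ref{fm3}:
\begin{verbatim}
# Sketch: amplitude alpha_n of Eq. (e37) for N = 10, T_1 = 0.8, T_2 = 0.4,
# the parameters of Fig. 3.
import numpy as np

N, T1, T2 = 10, 0.8, 0.4
n = np.arange(N + 1)
w1 = T1**n * (1 - T1)**(N - n)       # relative weight of state |1>
w2 = T2**n * (1 - T2)**(N - n)       # relative weight of state |2>
alpha = np.sqrt(w1 / (w1 + w2))
print(np.round(alpha, 3))
# alpha_n crosses its initial value 1/sqrt(2) roughly midway between
# n_2 = 4 and n_1 = 8, falls towards 0 for smaller n, approaches 1 above.
\end{verbatim}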
One can see that $\alpha_n$ maintains its original value $1/\sqrt{2}$ from Eq.~(\ref{e35}) for $n$'s roughly in the middle between the two characteristic values $n_1=8$ and $n_2=4$. For $n$ smaller than or close to $n_2$, $\alpha_n$ decreases from its original value; for $n$ close to or larger than $n_1$, $\alpha_n$ approaches 1. Such a shift due to the wavefunction reduction is the central part of the transformation cycle. \begin{figure} \hspace{3.5cm} \epsfxsize=5cm \epsfbox{fig3.eps} \caption{Probability amplitude $\alpha_n$ \protect (\ref{e37}) of finding the qubit in the state $|1\rangle$ as a function of the observed number $n$ of particles transmitted in $N=10$ attempts through the ballistic detector with transmission probabilities $T_1=0.8$ and $T_2=0.4$. The detector measures the state (\ref{e35}). } \label{fm3} \end{figure} The remaining steps of the cycle aim at returning the qubit to its initial state (\ref{e35}). To do this, one needs to transfer back the charge or flux that was transferred in the wavefunction reduction process leading to the state (\ref{e37}). This is achieved by creating for some time a non-vanishing tunneling amplitude, i.e., realizing a fraction of a period of the regular coherent oscillations in which the charge or flux goes back and forth between the two qubit basis states. In the most direct way, this can be done if the qubit structure makes it possible to create a non-vanishing phase of the tunnel amplitude $\Delta'(t)$ (e.g., in the superconducting qubits, where the tunnel amplitude can be controlled through quantum interference, producing any complex value of this amplitude). In this case, the state (\ref{e37}) can be returned directly to the initial form (\ref{e35}) if $\arg{\Delta'}=\pi/2$. In the diagram (Fig.~\ref{fm4}a) in which the qubit states are represented in the language of spin-1/2, i.e., \[ |\psi_1 \rangle=\cos (\theta_n/2) |1 \rangle+ \sin (\theta_n/2) |2 \rangle \, , \] such a tunneling amplitude corresponds to a rotation around the $y$ axis. The diagram in Fig.~\ref{fm4}a then shows directly that the rotation around the $y$ axis turning $|\psi_1 \rangle$ into $|\psi_0 \rangle$ should have the magnitude: \begin{equation} \int |\Delta'(t)|dt/\hbar= (\pi/2- \theta_n)/2 \, , \label{e38} \end{equation} where \[\theta_n = 2 \tan^{-1}(\beta_n/\alpha_n) = 2 \tan^{-1}\left[(T_2/T_1)^{n/2}(R_2/R_1)^{(N-n)/2}\right] .\] If the qubit structure allows only for a real tunnel amplitude $\Delta$ (a situation that can be expected, e.g., in semiconductor quantum dot qubits), the $y$-axis rotation $R_y= \exp \{-i \sigma_y \int |\Delta'(t)|dt/\hbar \}$ (\ref{e38}) can be simulated in three steps in which the rotation $R_x= \exp \{-i \sigma_x \int \Delta(t)dt/\hbar\}$ around the $x$ axis of the same magnitude (\ref{e38}) is preceded and followed by rotations around the $z$-axis: \begin{equation} R_y=U^{-1} R_x U\, , \;\;\; U=\exp \{i \sigma_z \pi/4 \}\, . \label{e39} \end{equation} The $z$-axis rotations can be created by pulses of the qubit bias: $\int \epsilon (t)dt/\hbar = \pm \pi/4$. The three-step sequence (\ref{e39}) can be simplified into two steps (Fig.~\ref{fm4}b) by changing the order of rotations: first, the $x$-axis rotation by $\pi/4$ (opening the tunneling $\Delta(t)$ for an appropriate interval of time) followed by one $z$-axis rotation: \begin{equation} \int \Delta(t)dt/\hbar= \pi/4 \, , \;\;\;\;\; \int \epsilon (t)dt/\hbar = (\pi/2- \theta_n)/2 \, .
\label{e40} \end{equation} \begin{figure} \hspace{2.4cm} \epsfxsize=8cm \epsfbox{fig4.eps} \caption{Diagram of the two possible transformations of the qubit state in the spin representation returning the state $|\psi_1 \rangle$ to $|\psi_0 \rangle$ after the measurement-induced state reduction $|\psi_0 \rangle \rightarrow |\psi_1 \rangle$. (a) Direct one-step $y$-axis rotation \protect (\ref{e38}). (b) Projection on the $z-y$ plane of the two-step transformation (\ref{e40}) with the same end result. } \label{fm4} \end{figure} All these versions of the transformation cycle bring the qubit to its initial state (\ref{e35}). In all cases, completion of the cycle that started with a shift of the wavefunction amplitudes due to the state reduction involves a part of a period of coherent oscillations which reverses this shift. Coherent qubit oscillations are known to actually transfer the charge or flux between the two qubit states. Since the cycle as a whole is closed, this fact shows that the changes in the qubit state caused by the wavefunction reduction cannot be interpreted only as changes in our knowledge of the probabilities of the qubit state, but involve an actual transfer of the charge or flux in the absence of the tunneling amplitude. To see this more quantitatively, one can derive a Bell-type inequality, violation of which shows that an understanding of the state reduction solely in terms of probability changes cannot be correct. The inequality is obtained by assuming that the process of switching off the tunneling amplitude $\Delta$ at the beginning of the transformation cycle does not leave the state (\ref{e35}) unchanged but instead localizes the qubit in one of the basis states. This means that the process produces an incoherent mixture of the qubit states with some, in general unspecified, probability $p$ of being in the state $|1\rangle$. This process would then provide an alternative, classical description of the evolution during the measurement process. In this description, the qubit state is ``objectively'' well defined, but is unknown to us, and the measurement gradually provides information about this unknown state. The measurement would only change the probabilities we ascribe to the two qubit states, but not the state itself, and in particular would not transfer the charge or flux. One should then see how well this classical description can mimic the quantum result of the transformation cycle described above. A convenient way of making this comparison is provided by the probability of ending the cycle in the wrong state. The unperturbed quantum evolution should end up in the initial state $\sigma_x=1$, whereas the same transformation cycle performed on the classical initial state will always have a finite probability $p^{(-)}$ of ending in the state $\sigma_x=-1$. This probability is found by applying the transformations not to the state (\ref{e35}) but to the incoherent state with the density matrix \[ \rho_0 =p|1\rangle \langle 1| + (1-p)|2\rangle \langle 2|\, . \] In this case, the measurement changes only the probability $p$ in this expression. Similarly to Eq.~(\ref{e37}), if the detector gives the output $n$, the density matrix of the system is \begin{equation} \rho_1= \rho_0 \big|_{p\rightarrow \bar{p}}\, , \;\;\;\; \bar{p}= \frac{pw_1^{(n)}}{pw_1^{(n)} + (1-p)w_2^{(n)} }\, .
\label{e41} \end{equation} All versions (\ref{e38}) -- (\ref{e40}) of the transformation cycle produce the same probability $p^{(-)}$ of being in the state $\sigma_x=-1$, when applied to the density matrix $\rho_1$ (\ref{e41}). For instance, one can see directly that in the density matrix $R_y^{-1}\rho_1R_y$ obtained from $\rho_1$ by the rotation (\ref{e38}), the probability $p^{(-)}$ is: \[ p^{(-)}=\frac{w_1w_2}{(w_1 + w_2)(pw_1 + (1-p)w_2) } \, .\] Minimizing this expression with respect to $p$, we see that the minimum probability of finding the state $\sigma_x=-1$ in the classical case is \begin{equation} p^{(-)}= \frac{\mbox{min} \{ w_1\, ,w_2\} }{w_1 + w_2 }\, . \label{e42} \end{equation} Instead of looking for the minimum with respect to $p$, one can adopt the natural additional assumption that when the tunneling amplitude is switched off, the qubit localization process can only be symmetric, since there is no reason to prefer one qubit state to another. In this case, $p=1/2$, and we obtain a somewhat different expression for the probability $p^{(-)}$ with qualitatively similar properties: \begin{equation} p^{(-)}= \frac{2 w_1w_2 }{(w_1 + w_2)^2 }\, . \label{e43} \end{equation} This expression would also be obtained if the qubit wavefunction were reduced to the density matrix $\rho_1$ (\ref{e41}) during the measurement, and not in the process of suppression of the tunneling amplitude. Equations (\ref{e42}) and (\ref{e43}) show that in order to distinguish between the quantum coherent evolution (for which $p^{(-)}=0$) and the incoherent evolution with non-vanishing probability $p^{(-)}$, it is important to employ a weak measurement. If the measurement is projective, i.e., if one of the probabilities $w_j$ is zero so that the measurement completely reduces the qubit state to one of the basis states, then $p^{(-)}=0$ and it is impossible to distinguish the two types of evolution. This conclusion should be independent of the specific form of the employed transformation cycle, since a projective measurement is always expected to fully separate different components of the initial state of the measured system and completely suppress quantum coherence between them. The discussion above means that observation of a probability of the state $\sigma_x=-1$ smaller than $p^{(-)}$, \begin{equation} p(\sigma_x=-1) < p^{(-)} \label{e44} \end{equation} at the end of the transformation cycle proves that all transformations in this cycle, including the wavefunction reduction, are quantum coherent. Combined with the non-vanishing transfer of charge or flux during the ``oscillation'' step [(\ref{e38}) -- (\ref{e40})] of the cycle, this fact implies that the wavefunction reduction induces a similar transfer across the tunnel barrier separating the qubit basis states even if the corresponding tunnel amplitude is zero. It is important to note that this counter-intuitive feature of the wavefunction reduction does not contradict the fact that all dynamic properties of the measurement, including the back-action dephasing and information rates, can be calculated from the dynamic detector model without any reference to the wavefunction reduction. While the dynamic properties are average characteristics of the detector, the wavefunction reduction appears if one considers individual outcomes of measurement separately. In the transformations discussed above, this separation is achieved by introducing feedback: operations on the measured system which depend on the specific measurement outcome.
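These statements can be verified numerically. The following Python sketch (using SciPy for the matrix exponentials; the outcome $n$ is an illustrative choice) checks the identity (\ref{e39}), confirms that the quantum cycle returns the state exactly to $|\psi_0 \rangle$, so that $p(\sigma_x=-1)=0$, and evaluates the classical probabilities (\ref{e42}) and (\ref{e43}):
\begin{verbatim}
# Sketch: one run of the transformation cycle for an illustrative outcome n,
# with the classical probabilities of Eqs. (e42)-(e43) for comparison.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

N, T1, T2, n = 10, 0.8, 0.4, 7
w1 = T1**n * (1 - T1)**(N - n)
w2 = T2**n * (1 - T2)**(N - n)
theta = 2 * np.arctan(np.sqrt(w2 / w1))
phi = (np.pi / 2 - theta) / 2                  # rotation angle, Eq. (e38)

Ry, Rx = expm(-1j * sy * phi), expm(-1j * sx * phi)
U = expm(1j * sz * np.pi / 4)
print(np.allclose(Ry, np.linalg.inv(U) @ Rx @ U))   # Eq. (e39): True

psi1 = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
minus_x = np.array([1, -1]) / np.sqrt(2)
print(abs(minus_x @ (Ry @ psi1))**2)           # quantum: p(sigma_x=-1) = 0

print(min(w1, w2) / (w1 + w2))                 # classical minimum, Eq. (e42)
print(2 * w1 * w2 / (w1 + w2)**2)              # symmetric p = 1/2, Eq. (e43)
\end{verbatim}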
In ballistic detectors, the measurement outcomes are distinguished by the number $n$ of transmitted particles, and to see the wavefunction reduction one needs to distinguish individual particles, as is done, e.g., in the electron counting experiments \cite{q25}. For the detectors for which distinguishing individual transmitted particles can be problematic (e.g., the QPC detector), the allowed uncertainty in $n$ should be smaller than the width $\delta n$ of the transition region in the $n$-dependence of the wavefunction amplitudes of the measured qubit -- see Fig.~\ref{fm3}. Since the width of this region can be estimated roughly as $\delta n \simeq 1/\ln(T_1R_2/T_2R_1)$, the uncertainty in $n$ can be compensated for by making the difference between the transmission probabilities $T_j$ smaller, thus increasing $\delta n$. The limit to this increase is set by the decoherence processes in the measured system which make it impossible to increase the measurement time of the detector beyond the coherence time of the system without losing the non-trivial character of the measurement dynamics. \section{Tunneling detectors} So far, the discussion has been based on the ballistic model of the mesoscopic detector, in which the measured system controls the ballistic motion of some particles between the two reservoirs (Fig.~\ref{fm1}). If one assumes that the particle transmission probabilities are small, $T_j \ll 1$, and the transfer processes between the reservoirs can be described in the tunneling approximation, the specific nature of the detector transport becomes irrelevant. In this case, the range of applicability of the detector model can be extended significantly to include the detectors in which it is not possible to identify regions of ballistic transport, but which are still based on a very similar dynamic principle: control by the measured system of the transport between the two reservoirs. Examples of such ``tunneling'' detectors include the superconducting SET electrometer \cite{b7,b8,b9}, the normal SET electrometer in the co-tunneling regime \cite{b14}, or the dc SQUID magnetometer (see, e.g., \cite{b10,b11,b12,b13}) used for measurements of superconducting qubits. The aim of this Section is to show briefly that the measurement properties of this type of mesoscopic detectors coincide in essence with those of the ballistic detectors. Since the measured system controls the tunneling amplitude $\hat{t}$ of particles in the detector, this amplitude should be treated as a non-trivial operator acting on the measured system. The detector tunneling can then be described with the standard tunnel Hamiltonian, whose transfer terms are split into a product of operators of the measured system and of the detector. In this case, the tunnel Hamiltonian describes the detector-system coupling and can be written as \begin{equation} H_T=\hat{t} \xi+\hat{t}^{\dagger} \xi^{\dagger} \, , \label{e1} \end{equation} where $\xi, \xi^{\dagger}$ are the operators that describe the detector part of the tunneling dynamics, e.g., the creation of excitations in the detector reservoirs when a particle tunnels, respectively, forward and backward between them. The inclusion of the operators $\xi, \xi^{\dagger}$ means, therefore, that the Hamiltonian (\ref{e1}) makes it possible to describe the detectors in which the tunneling processes are strongly inelastic. This fact, however, does not prevent a correct account of elastic transport in the case of ballistic detectors. The qualitative reason for this can be seen easily using the QPC detector as an example.
While the scattering of individual electrons in the QPC is elastic, in the tunnel limit, electron transfer between the two electrodes of the QPC can also be viewed as the creation of an electron-hole excitation in the electrodes, with an electron removed from a state below the Fermi level in one electrode, and transferred to a state above the Fermi level in the other. Accordingly, as we will see later, the Hamiltonian (\ref{e1}) leads to evolution equations that coincide in the tunnel limit with those obtained above in the ballistic case. Under the assumption that the detector tunneling is weak, the precise form of the internal detector Hamiltonian is not important and the dynamics of measurement is defined by the correlators of the operators $\xi, \xi^{\dagger}$: \begin{equation} \gamma_+=\int_{0}^{\infty} dt\langle \xi(t) \xi^{\dagger}\rangle \, , \;\;\; \gamma_-=\int_{0}^{\infty}dt\langle \xi^{\dagger}(t) \xi\rangle \, . \label{e3} \end{equation} Here the angled brackets denote averaging over the detector reservoirs, which are taken to be in a stationary state with some fixed number of particles in them and the density matrix $\rho_D$: $\langle ... \rangle = \mbox{Tr}_D \{ ... \rho_D \}$. The correlators (\ref{e3}) set the scale $\Gamma_{\pm}\equiv 2\mbox{Re} \gamma_{\pm}$ of the forward and backward tunneling rates in the detector. A reasonable tunneling detector should satisfy some additional assumptions related to the fact that its output should be classical in order to provide a complete measurement dynamics. Similarly to the ballistic detector, the output information in the tunneling case is contained in the number $n$ of the particles transmitted between the detector reservoirs. For this number to behave classically, the correlators $\langle \xi(t) \xi \rangle$, $\langle \xi^{\dagger} (t) \xi^{\dagger} \rangle$ that do not conserve the number of tunneling particles should vanish. Another consequence of the assumption of a classical detector output is that the energy bias $\Delta E$ for tunneling through the detector should be much larger than the typical energies of the measured system. In this case, one can neglect the quantum fluctuations of the detector current in the relevant frequency range that corresponds to the frequencies of evolution of the measured system. If one assumes in addition that all other characteristic frequencies of the detector tunneling are also much larger than those of the measured system, the functions $\xi(t)$, $\xi^{\dagger}(t)$ in Eq.~(\ref{e3}) are effectively $\delta$-correlated on the time scale of the system. The vanishing correlation time in the correlators (\ref{e3}) makes it possible to write down simple evolution equations for the density matrix $\rho$ of the measured system. To describe the system dynamics conditioned on a particular outcome of measurement, we also keep in the evolution equation the number $n$ of particles transferred through the detector. Since the correlators that do not conserve $n$ vanish, only the terms diagonal in $n$ are important. In the interaction representation with respect to the tunnel Hamiltonian (\ref{e1}), the density matrix $\rho(t)$ is given by the standard expression: \begin{equation} \rho(t) = \langle S \rho \rho_D S^{\dagger} \rangle , \;\; S = T \exp \{-i\int^tdt' H_T (t') \} .
\label{e5} \end{equation} For $\delta$-correlated operators in Eq.~(\ref{e3}), one can see that the full perturbation expansion of Eq.~(\ref{e5}) in $H_T$ is equivalent to the evolution equation for $\rho(t)$ that follows from the lowest-order perturbation theory. Keeping track of the number $n$ of particles transferred through the detector, we get: \begin{equation} \;\;\;\; \dot{\rho}^{(n)}= \Gamma_+\, \hat{t}^{\dagger}\rho^{(n-1)} \hat{t}+\Gamma_- \, \hat{t} \rho^{(n+1)} \hat{t}^{\dagger} -(\gamma_+\hat{t}\hat{t}^{\dagger} + \gamma_-\hat{t}^{\dagger}\hat{t})\rho^{(n)} - \rho^{(n)}(\gamma^*_+\hat{t} \hat{t}^{\dagger}+ \gamma^*_- \hat{t}^{\dagger}\hat{t}) \, . \label{e4} \end{equation} Since the tunneling amplitude $\hat{t}$ is a function of some observable of the measured system, there is a system of eigenstates $|j\rangle$ common to the operators $\hat{t}$ and $\hat{t}^{\dagger}$: \[ \hat{t}|j\rangle = t_j |j\rangle\, , \;\;\;\; \hat{t}^{\dagger}|j\rangle = t_j^* |j\rangle\, , \] where $t_j$ is the detector tunneling amplitude when the measured system is in the state $|j\rangle$. It is convenient to write the evolution equation (\ref{e4}) in the basis of states $|j\rangle$: \begin{eqnarray} \dot{\rho}^{(n)}_{ij}= -(1/2)(\Gamma_+ +\Gamma_-)(|t_i|^2+ |t_j|^2) \rho^{(n)}_{ij} + \Gamma_-\, t_it_j^*\rho^{(n+1)}_{ij} \label{e6} \\ + \Gamma_+ \, t_i^*t_j \rho^{(n-1)}_{ij} - i[\delta H, \rho^{(n)}]_{ij} \, , \nonumber \end{eqnarray} where \begin{equation} \delta H = \mbox{Im}(\gamma_+ +\gamma_-) \sum_j |t_j|^2\, |j\rangle \langle j| \label{ren} \end{equation} is the renormalization of the Hamiltonian of the measured system due to its coupling to the detector. If one omits the term $\delta H$, which can be combined with the internal Hamiltonian of the measured system, Eq.~(\ref{e6}) can be solved in $n$ by noticing that it coincides in essence with a recurrence relation for the modified Bessel functions $I_n$~\cite{q26}. In order to interpret $n$ as the number of particles transferred through the detector during the time interval $t$, we solve this equation with the initial condition $\rho^{(n)}_{ij}(t=0)= \rho_{ij}(0) \delta_{n,0}$. The corresponding solution is: \begin{equation} \;\;\; \frac{\rho^{(n)}_{ij}(t)}{\rho_{ij}(0)}= \left( \frac{\Gamma_+}{\Gamma_-}\right)^{n/2} I_n (2t|t_it_j|\sqrt{\Gamma_+\Gamma_-}) \exp \{ -\frac{\Gamma_+ +\Gamma_-}{2} (|t_i|^2+|t_j|^2) t -in \varphi_{ij} \} \, , \label{e7} \end{equation} where $\varphi_{ij} \equiv \arg(t_it_j^*)$. As discussed above, the qubit density matrix conditioned on the particular measurement outcome $n$ follows from Eq.\ (\ref{e7}) if one selects in this equation the terms with the given $n$ and normalizes the resulting reduced density matrix back to 1. If one of the tunneling rates $\Gamma_+$ or $\Gamma_-$ vanishes, Eq.\ (\ref{e7}) reduces to the usual Poisson distribution characteristic of tunneling in one direction. A conditional description of measurement in this case was developed in \cite{q11}. When both rates are non-vanishing, specifying the total number $n$ of the transferred particles does not uniquely specify the evolution of the detector, since the same $n$ results from the balance between different numbers of particles transferred forward and backward. This means that some information is lost in this regime and the detector is not quantum-limited (see the discussion below).
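The conditional statistics contained in Eq.~(\ref{e7}) are easy to explore numerically; the following Python sketch (SciPy provides the modified Bessel function; the rates and transparency are illustrative values) checks the normalization and the mean of the distribution $p_j(n)=\rho^{(n)}_{jj}$:
\begin{verbatim}
# Sketch: conditional output statistics of the tunneling detector, the
# diagonal part of Eq. (e7). Rates and transparency are illustrative.
import numpy as np
from scipy.special import iv           # modified Bessel function I_n

def p_n(n, Tj, Gp, Gm, t):
    # n < 0 corresponds to a net backward transfer of |n| particles
    return ((Gp / Gm)**(n / 2) * iv(n, 2 * t * Tj * np.sqrt(Gp * Gm))
            * np.exp(-(Gp + Gm) * Tj * t))

Gp, Gm, Tj, t = 1.0, 0.3, 0.4, 20.0
n = np.arange(-30, 60)
p = p_n(n, Tj, Gp, Gm, t)
print(p.sum())                              # ~1: normalized over integer n
print((n * p).sum(), (Gp - Gm) * Tj * t)    # mean net transfer: both ~5.6
\end{verbatim}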
In contrast to the situation with the quantum-limited detectors considered in the previous Sections, the conditional dynamics that follows from Eq.~(\ref{e7}) in this case \cite{q27} does not preserve the purity of the quantum state of the measured system. The evolution of the density matrix $\rho$ averaged over the measurement outcomes $n$ can be obtained either by simply disregarding the index $n$ in Eq.~(\ref{e6}), or by directly taking the sum over $n$ in (\ref{e7}) with the help of a summation formula \cite{q26} for the Bessel functions. In both cases, the equation for the measurement-induced evolution of $\rho$ is: \begin{equation} \dot{\rho}_{ij}= -\Gamma_{ij}\rho_{ij} -i (\Gamma_+ -\Gamma_-)|t_it_j| \sin \varphi_{ij}\rho_{ij}\, , \label{e8} \end{equation} where \begin{equation} \Gamma_{ij} \equiv (1/2)(\Gamma_+ +\Gamma_-)|t_i-t_j|^2 \label{e9} \end{equation} is the back-action dephasing rate of the tunneling detector. The last term in Eq.~(\ref{e8}) can be viewed as another contribution to the renormalization of the energy difference between the states $|i\rangle$ and $|j\rangle$, although in general it cannot be reduced to energy shifts of individual states [in contrast to the renormalization term (\ref{ren})]. The back-action dephasing rate (\ref{e9}) coincides with that of the ballistic detectors in the tunneling limit given by Eq.~(\ref{e151}), if we take into account that our discussion of ballistic detectors assumed for simplicity that particles are incident on the scattering region from only one electrode. It can be seen directly that the difference in the phases [see Eq.~(\ref{e152})] of the tunneling amplitudes in the two expressions is not essential. The reason for this difference is that the tunnel Hamiltonian (\ref{e1}) describes explicitly only the effects associated with the actual transfer of particles across the detector. It is assumed that the other effects of the detector-system coupling, e.g., the renormalization of the system energy due to reflection processes in the detector, are already accounted for. In terms of the scattering amplitudes, this implies that the reflection amplitudes $r$ and $\bar{r}$ for particles incident on the tunnel barrier from the two electrodes of the detector should satisfy the conditions $\arg(r_ir_j^*)=0$ and $\arg(\bar{r}_i\bar{r}_j^*)=0$. In this case, Eq.~(\ref{e151}) coincides with (\ref{e9}) even with the extra phases (\ref{e152}) taken into account. This means that the model of the tunnel detector considered in this Section is equivalent to that of the ballistic detector in the appropriate small-transparency limit. To calculate the information rate $W_{ij}$ of the tunneling detector in the situation when the rates of both forward and backward tunneling are non-vanishing (in contrast to the scattering from one direction discussed for the ballistic detectors), one needs to use in Eq.~(\ref{e17}) the diagonal part of Eq.~(\ref{e7}), which gives the probabilities $p_j(n) = \rho^{(n)}_{jj}$ for $n$ particles to tunnel when the system is in the state $|j\rangle$. An example of the rate $W_{ij}$ defined in this way is shown in Fig.~\ref{fm5}. This figure shows that in general $W_{ij}$ is time-dependent and approaches a constant value only after a transition period. If the tunneling probabilities $T_j=|t_j|^2$ do not differ very strongly, this transition period is shorter than the back-action dephasing time $\Gamma_{ij}^{-1}$.
\begin{figure} \hspace{3.5cm} \epsfxsize=5.5cm \epsfbox{fig5.eps} \caption{The information acquisition rate $W_{ij}$ of the tunneling detector normalized to the dephasing rate $\Gamma_{ij}$ \protect (\ref{e9}) (for $\varphi_{ij}=0$) as a function of time $t$ for several ratios of the forward and backward tunneling rates. The time $t$ is normalized to the typical forward tunneling rate $\gamma=\Gamma_+\, (T_i+T_j)/2$. The curves are plotted for the detector transparencies $T_i=0.2$, $T_j=0.4$. The dashed lines show the corresponding asymptotic values (\ref{e51}). For $\Gamma_+/\Gamma_-=100$, the dashed line overlaps with the main curve. } \label{fm5} \end{figure} The constant asymptotic values of the information rate can be obtained from the asymptotic behavior of the Bessel functions $I_n(z)$. Using the standard integral representation for $I_n(z)$ in Eqs.~(\ref{e17}) and (\ref{e7}), and making use of the fact that the constant rates $W_{ij}$ are determined by the exponential behavior of the integrals at large time $t$, we find directly \begin{equation} W_{ij}=(\Gamma_+ +\Gamma_-)(T_i+T_j)/2 - \left[(T_i^2+T_j^2)\Gamma_+\Gamma_- + T_iT_j (\Gamma_+^2 +\Gamma_-^2)\right]^{1/2} \, . \label{e51} \end{equation} If the particles tunnel only in one direction, e.g., $\Gamma_- =0$, Eq.~(\ref{e51}) reduces to the previously known result \cite{gur,q19} $W_{ij}=\Gamma_+ (\sqrt{T_i}-\sqrt{T_j})^2/2$, in which the information and back-action dephasing rates coincide when the phases of the tunneling amplitudes satisfy the appropriate ideality condition $\varphi_{ij}=0$. In the other limit of a small difference $2\Delta T$ between the transmission probabilities $T_{i,j}= T\pm \Delta T $, Eq.~(\ref{e51}) reduces to \begin{equation} W_{ij}=\frac{(\Delta T)^2(\Gamma_+ - \Gamma_-)^2}{2 T (\Gamma_+ +\Gamma_-)} \, . \label{e52} \end{equation} This equation agrees with the general theory of linear measurements (see, e.g., \cite{q10}), in which it can be interpreted as the rate with which one can distinguish the difference $\Delta T(\Gamma_+ - \Gamma_-)$ between the two detector currents in the presence of the current noise with spectral density $T (\Gamma_+ +\Gamma_-)$. In the most typical situation, the two rates $\Gamma_{\pm}$ are both non-vanishing because of the finite temperature $\Theta$ of the detector electrodes. The electrodes can be in equilibrium even when a non-vanishing current is driven between them by a finite energy difference $\Delta E$ created for the tunneling particles. In this case, the tunneling rates $\Gamma_{\pm}$ are related by the detailed balance relation and can be written as $\Gamma_{\pm}= \Gamma_0 \exp (\pm \Delta E/2\Theta)$, where $\Gamma_0$ is the typical tunneling rate, which can also depend on the temperature and the energy bias. The information rate (\ref{e51}) is then \begin{equation} W_{ij}/\Gamma_0 =(T_i+T_j)\cosh (\Delta E/2\Theta )- \left[ T_i^2+T_j^2+ 2T_iT_j \cosh (\Delta E/\Theta )\right]^{1/2} \, . \label{e53} \end{equation} Comparison of Eqs.~(\ref{e53}) or (\ref{e51}) with Eq.~(\ref{e9}) for the back-action dephasing rate (see also Fig.~\ref{fm5}) shows that the detector with temperature $\Theta \sim \Delta E$, which creates non-vanishing rates $\Gamma_{\pm}$, is not quantum-limited, $W_{ij} < \Gamma_{ij}$, even if the phases of the tunneling amplitudes satisfy the ideality condition $\varphi_{ij}=0$. Equation (\ref{e53}) can be used to establish a quantitative condition on the detector temperature necessary for the desired degree of the detector ideality.
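The limiting behavior of Eq.~(\ref{e51}) can be verified with a short numerical sketch (Python; the transparencies follow Fig.~\ref{fm5}, while the detailed-balance rates are illustrative):
\begin{verbatim}
# Sketch: asymptotic information rate, Eq. (e51), against the dephasing
# rate, Eq. (e9) (phases phi_ij = 0). T_i, T_j follow Fig. 5; the rates
# use detailed balance, Gpm = G0 exp(+-dE/2Theta), with G0 = 1.
import numpy as np

def W_asym(Ti, Tj, Gp, Gm):            # Eq. (e51)
    return ((Gp + Gm) * (Ti + Tj) / 2
            - np.sqrt((Ti**2 + Tj**2) * Gp * Gm + Ti * Tj * (Gp**2 + Gm**2)))

Ti, Tj = 0.2, 0.4
for dE_over_Theta in (20.0, 4.0, 1.0):
    Gp, Gm = np.exp(dE_over_Theta / 2), np.exp(-dE_over_Theta / 2)
    Gamma = 0.5 * (Gp + Gm) * (np.sqrt(Ti) - np.sqrt(Tj))**2   # Eq. (e9)
    print(dE_over_Theta, W_asym(Ti, Tj, Gp, Gm) / Gamma)
# The ratio approaches 1 (quantum-limited operation) only for dE >> Theta,
# i.e., when the backward rate Gm is exponentially suppressed.
\end{verbatim}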
\section{Conclusion} Two general models of realistic mesoscopic solid-state detectors have been described in this paper. The detectors are based on the ability of the measured system to control the transport current between two particle reservoirs. The models enable a detailed analysis of the dynamics of the measurement process. Wavefunction reduction is introduced in this dynamics through the assumption of suppressed quantum coherence between the particle states in different reservoirs. This procedure is very natural and can be justified qualitatively within the general approach to decoherence in quantum systems. The main element of the justification is the increased level of difficulty of maintaining quantum coherence between the states of progressively more complex systems. This fact makes the boundary between the quantum and classical domains not very sharp and dependent on the details of the specific measurement set-up. This leads to the interesting question of whether it is possible to formulate more general and self-consistent conditions defining the boundary between the quantum and classical behaviors of dynamic systems. Mesoscopic solid-state structures provide a convenient setting for further studies of this question. \vspace{6ex} \acknowledgments Part of the discussion in this paper is based on the work done in collaboration with A. di Lorenzo, A.N. Korotkov, K. Rabenstein, R. Ruskov, V.K. Semenov, E.V. Sukhorukov, and W. Mao. The author would like to thank them, and also G.~Benenti, R.~Fazio, J.W. Lee, F.~Plastina, and D.L.~Shepelyansky, for collaboration and discussions of quantum measurements. This work was supported by the NSF under grant \# DMR-0325551.
\section{Introduction} Measuring high-order intensity correlation functions is the key tool for reconstructing the object information in ghost imaging with thermal light (GITL) \cite{r0,r2,r3,r4,r5,r6,wtj1}. In all scenarios reported so far, the correlation orders have been natural numbers. The most commonly used order is $2$, and the second-order correlation functions in GITL have been widely investigated both in theory and in practice. Since the source plays the role of a conjugate mirror \cite{wtj1}, ghost imaging with thermal light can be implemented without lenses \cite{wtj2,wtj3}. The ghost images in computational GITL can be formed with bucket signals measured by only a single pixel detector \cite{cgi1}. GITL has now been applied in remote sensing \cite{remote}, lidar \cite{remote2}, image encryption \cite{encryp}, and biomedical imaging \cite{bio}. Multi-color GITL has been investigated to discriminate wavelength information \cite{color1}, and to reconstruct the RGB information of a color object \cite{color2}. Higher-order correlation functions were used to enhance the visibility degree \cite{3gi2} and to improve the contrast-to-noise ratio \cite{3gi3,3gi4} of the ghost images. Third-order GITL was also applied to construct two ghost images \cite{3gi1}. Recent investigations showed that ghost images can be formed in first-order correlation measurements with thermal light \cite{1gi1}. Nevertheless, the orders of natural numbers are relatively coarse parameters in applications. Besides integer-order moments, fractional-order moments have been used with great success in such processes as truncated L\'{e}vy flights \cite{levy} or atmospheric laser scintillations \cite{scill}. In this paper, we report a GITL experiment in which the fractional-order moments of the stochastic bucket and reference intensity signals are calculated. That is, the object information is reconstructed by measuring the fractional-order moments of the bucket and reference signals. In calculating the fractional-order moments, the orders of the reference signals are set positive to avoid divergence, while the orders of the bucket signals can be positive or negative numbers. We find that negative (positive) ghost images can be obtained with negative (positive) orders of the bucket signals. In theory, an elaborate analysis based on probability theory is provided. The crucial step is to determine the precise relation between the bucket signals and the reference signals. The reference signals can be regarded as an array of independent stochastic variables, each of which obeys a negative exponential distribution \cite{3gi3}. So the probability density function of the bucket signals, as well as the joint probability density function between the bucket and reference signals, can be obtained, since the bucket signals can be regarded as a linear sum of the reference signals. Also the visibility degrees and signal-to-noise ratios of the ghost images are analyzed according to our theory. The experimental results and numerical simulations are in good agreement with our theory. Our paper is organised as follows. Section II gives the theory of the joint probability density function between the bucket and reference signals. Section III shows the theoretical analysis and experimental results of the fractional-order moments for binary objects. Section IV shows the numerical simulations of the fractional-order moments for a complicated object. The conclusions and discussions are shown in Sec. V.
\section{Joint probability density function between the bucket and reference signals} Figure 1 shows the sketch of our experimental setup of GITL. The two sets of correlated random speckles are the thermal light fields in the object plane and the reference plane. In the object arm, the bucket detector D$_{\text{B}}$ converts the total optical intensity from the object, depicted by the letter \textquotedblleft A\textquotedblright , into the bucket signal $I_{B}$, while the reference detector D$_{\text{r}}$ scans and converts the local intensity into the reference signal $I_{r}$. The bucket detector and the reference detector are two charge coupled devices (CCDs) in the experiment. The correlator, which is in fact a computer, is used to measure the fractional-order moment functions $\left\langle I_{B}^{\mu }I_{r}^{\nu }\right\rangle $, where $\mu $ and $\nu $ are fractional numbers. The fractional-order moment function retrieves the object information, which is shown on the screen. \begin{figure}[tbh] \centering \includegraphics[clip,width=7.5cm]{setup1.eps} \caption{Sketch of the experimental setup. The random speckles are two identical pseudo-thermal light beams. The upper one is the object beam and the lower one is the reference beam. $D_{B}$ is the bucket detector, and the pixel detector $D_{r}$ registers the reference beam. The fractional moments $\left\langle I_{B}^{\protect\mu }I_{r}^{\protect\nu }\right\rangle $ are computed by the correlator.} \end{figure} The resolving power of GITL is inversely proportional to the coherence length of the optical fields in the object and reference planes. Throughout the paper we consider the case of perfect GITL, in which the GITL system has the ability to completely reconstruct the object information. For mathematical simplicity, as shown in Fig. 1, we synchronously divide the object and reference planes into $n$ small units such that (i) the details of the object are maintained, and (ii) the thermal fields in all the units are statistically independent from each other. From the viewpoint of probability theory, the reference signals, i.e., the thermal light intensities in all the units, can be regarded as a set of stochastic variables $I_{r}=\{I_{1},I_{2},\cdots ,I_{n}\}$. Each element of the reference signal $I_{i}$ ($i=1,2,\cdots ,n$) obeys the negative exponential probability distribution $p(I_{i})=\frac{1}{I_{0}}e^{-\frac{I_{i}}{I_{0}}}$, where the constant $I_{0}$ represents the intensity average. The fractional moment of the reference signal is $\left\langle I_{i}^{\nu }\right\rangle =\Gamma (1+\nu )I_{0}^{\nu }$ for any fractional number $\nu $, where the Gamma function is $\Gamma (1+\nu )=\int_{0}^{\infty }t^{\nu }\exp [-t]dt$. Since the probability density of the reference signal is largest at zero, $p(0)\geqslant p(I_{i})$, we usually set $\nu $ positive ($\nu >0$) in experiment to avoid divergence. The bucket detector D$_{\text{B}}$, which collects the variables scattered from the object, outputs the bucket signals \begin{equation} I_{B}=I_{r}T=\sum\nolimits_{i=1}^{n}I_{i}t_{i}, \label{a1} \end{equation} where $0\leq t_{i}\leq 1$ is the transmittance or reflectivity of the $i$th unit in the object $T=\{t_{1},t_{2},\cdots ,t_{n}\}^{\prime }$ (the prime denotes matrix transposition). Obviously, the bucket signal $I_{B}$ in Eq.~(\ref{a1}) is a linear sum of $n$ independent variables, each of which fulfills the negative exponential probability density function with weight $t_{i}$.
The probability density function for the variable from the $i$th object unit becomes $\frac{1}{t_{i}}p(I_{i}/t_{i})=\frac{1}{I_{0}t_{i}}e^{-\frac{I_{i}}{I_{0}t_{i}}}$, and its Laplace transformation is \begin{equation} \widetilde{p}_{i}(s)=\int_{0}^{\infty }\frac{1}{t_{i}}p(I/t_{i})\,e^{-sI}dI=\frac{1}{1+s\,I_{0}t_{i}}. \label{a3} \end{equation} We can see that each variable from the object unit also fulfills a negative exponential distribution, with a modified average $I_{0}t_{i}$. After some algebra, the probability density function of the bucket signal is calculated out as \begin{equation} P_{B}(I_{B})=\underset{s\rightarrow I_{B}}{\mathcal{L}^{-1}}\left[ \prod\nolimits_{i=1}^{n}\widetilde{p}_{i}(s)\right] , \label{a2} \end{equation} where $\underset{s\rightarrow I_{B}}{\mathcal{L}}^{-1}$ denotes the inverse Laplace transformation from variable $s$ to $I_{B}$. We note that $P_{B}(I_{B})$ in Eq. (\ref{a2}) does not depend on the detailed object structure: the probability density function $P_{B}(I_{B})$ can be calculated as long as the histogram of the object is known. A remarkable feature of the bucket probability density function is that $P_{B}(0)=0$ if the object is composed of at least two nonzero units. So the fractional moment of the bucket signal $\left\langle I_{B}^{\mu }\right\rangle =\int_{0}^{\infty }I_{B}^{\mu }P_{B}(I_{B})dI_{B}$, where $\mu $ is a fractional number, is tenable for both $\mu >0$ and $\mu <0$. Since the reference signal $I_{i}$ is one constituent of the bucket signal $I_{B}$, we combine Eqs. (\ref{a1}) and (\ref{a2}), and write the joint probability density function between the bucket signal $I_{B}$ and the reference signal $I_{i}$ as \begin{equation} P_{2}(I_{B},I_{i})=P_{B}^{\prime }(I_{B}-I_{i}t_{i})\times p(I_{i}), \label{a4} \end{equation} where $P_{B}^{\prime }(x)=\underset{s\rightarrow x}{\mathcal{L}^{-1}}\left[ \prod\nolimits_{j=1,j\neq i}^{n}\widetilde{p}_{j}(s)\right] $, and $I_{B}\geq I_{i}t_{i}$. The object information can be reconstructed by computing the fractional-order moments $\left\langle I_{B}^{\mu }I_{r}^{\nu }\right\rangle $ in experiment. According to probability theory, the fractional-order moment function is \begin{equation} \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle =\int_{0}^{\infty }dI_{B}\int_{0}^{I_{B}/t_{i}}dI_{i}\times I_{B}^{\mu }I_{i}^{\nu }P_{2}(I_{B},I_{i}). \label{a5} \end{equation} As mentioned above, the fractional numbers in Eq. (\ref{a5}) meet $\nu >0$ and $\mu \neq 0$. What deserves particular attention are the fractional-order moments $\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}$ and $\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}$ for $t_{i}=0$ and $1$, respectively. The former defines the background of the ghost images, $\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}=\left\langle I_{B}^{\mu }\right\rangle \left\langle I_{i}^{\nu }\right\rangle $. However, the latter satisfies \begin{equation} \begin{array}{c} \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}>\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0},\quad (\mu >0,\nu >0) \\ \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}<\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}.\quad (\mu <0,\nu >0) \end{array} \label{a6} \end{equation} It is clear that the ghost image is above its background when $\mu >0$, and is below its background when $\mu <0$. That is, negative ghost images can be obtained for negative fractional orders $\mu $.
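The sign behavior in Eq. (\ref{a6}) can be checked directly by Monte Carlo. Continuing the sketch above (the orders below are arbitrary test values), we estimate the normalized moment $\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle /\left( \left\langle I_{B}^{\mu }\right\rangle \left\langle I_{i}^{\nu }\right\rangle \right) $ at a transmitting unit and at an opaque unit:
\begin{verbatim}
def norm_moment(I_B, I_i, mu, nu):
    # normalized fractional-order moment <I_B^mu I_i^nu>/(<I_B^mu><I_i^nu>)
    return (I_B**mu * I_i**nu).mean() / ((I_B**mu).mean() * (I_i**nu).mean())

nu = 0.5
for mu in (-1.414, 0.618):
    g1 = norm_moment(I_B, I_r[:, 0], mu, nu)    # unit with t_i = 1
    g0 = norm_moment(I_B, I_r[:, -1], mu, nu)   # unit with t_i = 0
    print(f"mu = {mu:+.3f}:  g1 = {g1:.4f},  g0 = {g0:.4f}")

# g0 stays close to 1 (background), while g1 > g0 for mu > 0
# and g1 < g0 for mu < 0, in accordance with Eq. (a6)
\end{verbatim}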
Consequently, the visibility degree and peak SNR of the ghost images in fractional-order moments are defined as \begin{equation} \begin{array}{c} V=\frac{\left\vert \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}-\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}\right\vert }{\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}+\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}}, \\ R_{p}=\frac{\sqrt{N}\left\vert \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}-\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}\right\vert }{\sqrt{\left\vert \left\langle I_{B}^{2\mu }I_{i}^{2\nu }\right\rangle _{1}-\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}^{2}\right\vert }}, \end{array} \label{a8} \end{equation} where $N$ is the number of samplings in experiment \cite{3gi3}. In the following, we show the experimental results of the fractional-order moments for binary objects in ghost imaging, and then show the numerical simulations of the fractional moments for a complicated object in ghost imaging. \section{Experiment results of binary objects} To explicitly exhibit the probability theory method and to illustrate the characteristics of ghost images from fractional-order moments in GITL, we consider the case of binary objects, for which the values of the object units are $t_{i}=0$ or $1$. From Eq. (\ref{a2}), the bucket signals meet a Gamma distribution with probability density function \begin{equation} P_{B}(I_{B})=\frac{I_{B}^{m-1}}{(m-1)!I_{0}^{m}}\exp [-\frac{I_{B}}{I_{0}}], \label{b1} \end{equation} where $m$ is the number of the effective object units (whose values are $t_{i}=1$). The subplot in Fig. 1 shows the probability density functions of Eq. (\ref{b1}): $p(I_{r})$ for $m=1$ with a solid black line, $P_{B}(I_{B})$ for $m=2$ with a dashed blue line, and $P_{B}(I_{B})$ for $m=5$ with a dotted red line, respectively. We can find that $p(0)\geq p(I_{r})$ for a single-unit object and $P_{B}(0)\leq P_{B}(I_{B})$ for a complex object. The joint probability density function between the reference and bucket signals is \begin{equation} P_{2}(I_{B},I_{i})=\left\{ \begin{array}{c} \frac{(I_{B}-I_{i})^{m-2}}{(m-2)!I_{0}^{m}}\exp [-\frac{I_{B}}{I_{0}}],\quad (t_{i}=1) \\ \frac{I_{B}^{m-1}}{(m-1)!I_{0}^{m+1}}\exp [-\frac{I_{B}+I_{i}}{I_{0}}],\quad (t_{i}=0) \end{array}\right. \label{b2} \end{equation} where $I_{i}\leq I_{B}$ must be considered. The ensemble average of the reference signals is $\left\langle I_{i}\right\rangle =I_{0}$, and the ensemble average of the bucket signals is $\left\langle I_{B}\right\rangle =mI_{0}$. The fractional-order moments are calculated out as \begin{equation} \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{0}=\frac{\Gamma (m+\mu )\Gamma (1+\nu )}{\Gamma (m)}I_{0}^{\mu +\nu }, \label{b3} \end{equation} \begin{equation} \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}=\frac{\Gamma (m+\mu +\nu )\Gamma (1+\nu )}{\Gamma (m+\nu )}I_{0}^{\mu +\nu }, \label{b4} \end{equation} for $t_{i}=0$ and $t_{i}=1$, respectively.
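For the reader's convenience we sketch the computation behind Eq. (\ref{b4}). Inserting the upper branch of Eq. (\ref{b2}) into Eq. (\ref{a5}) and substituting $I_{i}=xI_{B}$, the inner integral becomes a Beta integral, \begin{equation*} \int_{0}^{I_{B}}I_{i}^{\nu }(I_{B}-I_{i})^{m-2}\,dI_{i}=I_{B}^{m+\nu -1}\,\frac{\Gamma (1+\nu )\Gamma (m-1)}{\Gamma (m+\nu )}\, , \end{equation*} and the remaining integral over $I_{B}$ is a Gamma integral (convergent for $m+\mu +\nu >0$), \begin{equation*} \left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle _{1}=\frac{\Gamma (1+\nu )}{\Gamma (m+\nu )\,I_{0}^{m}}\int_{0}^{\infty }I_{B}^{m+\mu +\nu -1}e^{-I_{B}/I_{0}}\,dI_{B}=\frac{\Gamma (m+\mu +\nu )\Gamma (1+\nu )}{\Gamma (m+\nu )}\,I_{0}^{\mu +\nu }\, , \end{equation*} where $\Gamma (m-1)/(m-2)!=1$ has been used. Equation (\ref{b3}) follows in the same way from the lower branch of Eq. (\ref{b2}), since for $t_{i}=0$ the bucket and reference signals are independent.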
The visibility degree and peak SNR of the ghost images are \begin{equation} V=\left\vert \frac{\Gamma (m+\mu +\nu )\Gamma (m)-\Gamma (m+\mu )\Gamma (m+\nu )}{\Gamma (m+\mu +\nu )\Gamma (m)+\Gamma (m+\mu )\Gamma (m+\nu )}\right\vert , \label{b5} \end{equation} \begin{equation} R_{p}=\frac{\left\vert \frac{\Gamma (m+\mu +\nu )\Gamma (1+\nu )}{\Gamma (m+\nu )}-\frac{\Gamma (m+\mu )\Gamma (1+\nu )}{\Gamma (m)}\right\vert }{\sqrt{\frac{\Gamma (m+2\mu +2\nu )\Gamma (1+2\nu )}{N\cdot \Gamma (m+2\nu )}-\frac{\Gamma ^{2}(m+\mu +\nu )\Gamma ^{2}(1+\nu )}{N\cdot \Gamma ^{2}(m+\nu )}}}, \label{b6} \end{equation} respectively, for binary objects. In experiment, the pseudo-thermal light source is obtained by projecting a laser beam (laser diode: $\lambda =650\,$nm) onto a rotating ground glass plate \cite{3gi2} (which is not shown in Fig. 1). We set the diameter of the laser spot on the glass plate to $d=4.50\,$mm, and the distance between the object and the glass plate to $L=8.50\,$cm. The coherence length of the random laser speckles in the object plane is about $\frac{1.22\lambda L}{d}\simeq 14.98\,\mu $m, which is smaller than the pixel pitch of the CCD, $20.0\,\mu $m. This ensures well-performed GITL, and fulfills the two assumptions proposed above. \begin{figure}[tbh] \centering\includegraphics[clip,width=3.5cm,angle=-90]{imagetu.eps} \caption{Experimental results of ghost images from fractional-order moments with binary objects. The fractional orders are set $\protect\mu =-2.7183$ in (b), $\protect\mu =-1.414$ in (c), $\protect\mu =-0.618$ in (d), $\protect\mu =0.618$ in (e), $\protect\mu =1.414$ in (f), and $\protect\mu =2.7183$ in (g). The parameter $\protect\nu =0.5$ is fixed in all the ghost images. The binary object is shown in (a).} \end{figure} Figure 2 shows our experimental results of measuring the fractional-order moments in GITL with binary objects over $N=120,000$ samplings. The object is depicted in Fig. 2(a). Figures 2(b)-2(g) show the normalized fractional-order moments $\frac{\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle }{\left\langle I_{B}^{\mu }\right\rangle \left\langle I_{i}^{\nu }\right\rangle }$. The ghost images from the fractional-order moments are depicted in Figs. 2(b), 2(c), 2(d), 2(e), 2(f), and 2(g), for $\mu =-2.7183$, $-1.414$, $-0.618$, $0.618$, $1.414$, and $2.7183$, respectively. The parameter $\nu =0.5$ is fixed in all the ghost images. We can see that negative ghost images are obtained for negative orders $\mu <0$ in Figs. 2(b), 2(c), and 2(d), while positive ghost images are obtained for positive orders $\mu >0$ in Figs. 2(e), 2(f), and 2(g). We find that the greater the absolute order $\left\vert \mu \right\vert $ is, the higher the visibility degree of the ghost image becomes. In general, the negative ghost images have better visibility than the positive ones. But the behavior of the signal-to-noise ratio (SNR) differs greatly from that of the visibility degree: the SNRs decrease when $\left\vert \mu \right\vert $ increases. The visibility degrees are $V=6.77\times 10^{-3}$, $3.45\times 10^{-3}$, $1.49\times 10^{-3}$, $1.47\times 10^{-3}$, $3.32\times 10^{-3}$, $6.26\times 10^{-3}$, and the peak SNRs (defined in Eq. (\ref{a8})) are $R_{p}=2.118$, $2.126$, $2.130$, $2.132$, $2.133$, $2.131$ for Figs. 2(b), 2(c), 2(d), 2(e), 2(f), and 2(g), respectively. The orders $\mu $ and $\nu $ determine the quality of the ghost images. Therefore, fractional orders provide finer parameters than integer orders to adjust or modify the quality of the ghost images in practice.
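Equations (\ref{b5}) and (\ref{b6}) are easy to evaluate numerically. The sketch below is a minimal illustration (the parameter values are chosen to match the discussion above, and log-Gamma functions are used to avoid overflow); note that $I_{0}$ cancels in both quantities, so it is set to $1$ without loss of generality:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def visibility(m, mu, nu):
    # Eq. (b5): V = |A - B| / (A + B) with Gamma-function products A, B
    A = np.exp(gammaln(m + mu + nu) + gammaln(m))
    B = np.exp(gammaln(m + mu) + gammaln(m + nu))
    return abs(A - B) / (A + B)

def rel_peak_snr(m, mu, nu):
    # relative peak SNR R_p/sqrt(N) from Eq. (b6), with I0 = 1
    m1 = np.exp(gammaln(m + mu + nu) + gammaln(1 + nu) - gammaln(m + nu))
    m0 = np.exp(gammaln(m + mu) + gammaln(1 + nu) - gammaln(m))
    m2 = np.exp(gammaln(m + 2*mu + 2*nu) + gammaln(1 + 2*nu) - gammaln(m + 2*nu))
    return abs(m1 - m0) / np.sqrt(m2 - m1**2)

for mu in (-2.7183, -0.618, 0.618, 2.7183):
    print(mu, visibility(20, mu, 0.5), rel_peak_snr(20, mu, 0.5))

# the visibility grows with |mu| (faster on the negative side), while
# R_p/sqrt(N) is non-monotonic in mu, cf. Fig. 3
\end{verbatim}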
We now compare the experimental results with the theory of the fractional-order moments in GITL. The visibility $V$ in Eq. (\ref{b5}) and the relative peak SNR $R_{p}/\sqrt{N}$ in Eq. (\ref{b6}) versus the fractional orders are plotted in Fig. 3. The number of effective object units is $m=20$ in Figs. 3(a) and 3(b), and $m=30$ in Figs. 3(c) and 3(d), respectively. We can see from Figs. 3(a) and 3(c) that the visibility degree increases as the absolute values of the correlation orders $\mu $ and $\nu $ increase. An evident feature is that the visibility degree of the negative image ($\mu <0$) increases faster than that of the positive image ($\mu >0$). Also, a greater number of effective object units degrades the visibility degree. The relative peak SNR is plotted in Figs. 3(b) and 3(d). The peak SNR first increases and then decreases as the correlation orders $\left\vert \mu \right\vert $ and $\nu $ increase. We can also find that the maximum value of the peak SNR of the negative image ($\mu <0$) is greater than that of the positive image ($\mu >0$). We can conclude that the image quality of the negative ghost images can be better than that of the positive ghost images for opposite fractional orders. Furthermore, the visibility degree and SNR of the ghost image vary continuously with the fractional (continuous) orders, so we can adjust the visibility degree and SNR at will by choosing appropriate experimental parameters. \begin{figure}[htb] \centering\includegraphics[clip,width=5.5cm,angle=-90]{V_SNR.eps} \caption{Visibility degrees $V$ (a,c) and relative peak SNR $R_{p}/\protect\sqrt{N}$ (b,d) of binary objects. The object unit number (for $t_{i}=1$) is $m=20$ in (a,b), and $m=30$ in (c,d).} \end{figure} \section{Numerical simulations of complex objects} So far we have presented the theory of fractional ghost imaging with thermal light, and have illustrated ghost images in fractional-order moments in experiment with binary objects. Our proposal is of course suitable for more complicated objects. In this section we show the numerical simulations of the fractional-order moments in GITL with a more complicated object. The object is a part of the cameraman image (size: $64\times 64$ pixels). Figure 4 shows the results of the numerical simulations obtained by calculating the normalized fractional-order moments $\frac{\left\langle I_{B}^{\mu }I_{i}^{\nu }\right\rangle }{\left\langle I_{B}^{\mu }\right\rangle \left\langle I_{i}^{\nu }\right\rangle }$. The reference signals are stochastic numbers created by a computer. The number of samplings is $N=200,000$. Other parameters are the same as in Fig. 2. We again obtain negative ghost images for $\mu <0$ in Figs. 4(a), 4(b) and 4(c). The positive ghost images for $\mu >0$, in Figs. 4(d), 4(e), and 4(f), are also obtained. The visibility degrees are $V=7.84\times 10^{-4}$, $3.97\times 10^{-4}$, $1.76\times 10^{-4}$, $1.76\times 10^{-4}$, $4.06\times 10^{-4}$, $7.72\times 10^{-4}$, for Figs. 4(a), 4(b), 4(c), 4(d), 4(e), and 4(f), respectively. Again the visibility increases when the absolute value of the order increases. The corresponding peak SNRs for all the ghost images are $R_{p}=5.27$, $5.57$, $5.44$, $6.95$, $5.65$, and $6.18$, respectively. The peak SNRs of the ghost images vary with the fractional orders. \begin{figure}[tbh] \centering \includegraphics[clip,width=5.0cm,angle=-90]{image3b.eps} \caption{Numerical simulations of ghost images from fractional moments with a gray object ($64\times 64$ pixels). The number of samplings is $N=200,000$. Other parameters are the same as in Fig.
2.} \end{figure} \section{Conclusion} In conclusion, we have investigated the fractional-order moments in a GITL experiment. The reference signals have been regarded as an array of stochastic variables, while the bucket signals have been regarded as a linear sum of these stochastic variables. We then have deduced the joint probability density function between the bucket signals and reference signals according to probability theory. The object information can be retrieved through fractional-order moments. We have found that negative (positive) ghost images can be obtained if the orders of the bucket signals are smaller (greater) than zero. Ghost imaging with fractional-order moments has been implemented in experiments with a binary object and in numerical simulations with a more complicated object. The visibility degrees and peak SNRs of the ghost images vary with the fractional correlation orders. So we have the chance to carefully adjust the image quality by choosing appropriate fractional orders in GITL. Our technique can provide abundant ghost images, and has the potential to work in complex environments. This work benefited from financial support by the National Natural Science Foundation of China under Grant Nos. 11674273, 11304016, and 11204062.
\section{Introduction} A sequence $(z_j)$ of points in a domain $G$ of the complex plane ${\mathbb{C}}$ is called the zero set of an analytic function $f: G \to {\mathbb{C}}$, if $f$ vanishes precisely on this set $(z_j)$. This means that $f(z)\not=0$ for $z\in G \backslash (z_j)$ and if the point $\xi \in G$ occurs $m$ times in the sequence $(z_j)$, then $f$ has a zero at $\xi$ of precise order $m$. By definition, the critical set of a nonconstant analytic function is the zero set of its first derivative. There is an extensive literature on critical sets. In particular, there are many interesting results on the relation between the zeros and the critical points of analytic and harmonic functions. A classical reference for this is the book of Walsh \cite{Wal1950}. \smallskip In this paper we study the problem of describing the critical sets of analytic self--maps of the open unit disk ${\mathbb{D}}$ of the complex plane ${\mathbb{C}}$. For this purpose the classical characterization of the zero sets of bounded analytic functions due to Jensen \cite{Jen1899}, Blaschke \cite{Bla1915} and F.~and R.~Nevanlinna \cite{Nevs1922} serves as a kind of model. \begin{satz}\label{thm:classical_zero} Let $ (z_j)$ be a sequence in ${\mathbb{D}}$. Then the following statements are equivalent. \begin{itemize} \item[(a)] There is an analytic self--map of ${\mathbb{D}}$ with zero set $ (z_j)$. \item[(b)] There is a Blaschke product $B$ with zero set $ (z_j)$, i.\;\!e.~$B(z)=\displaystyle \prod \limits_{j=1}^{\infty}\frac{\overline{z_j}}{|z_j|}\, \frac{z_j -z}{1-\overline{z_j}z}$. \item[(c)] The sequence $(z_j)$ fulfills the Blaschke condition, i.\!\;e.~$ \sum \limits_{j=1}^{\infty} \big(1-|z_j|\big) < + \infty\, . $ \item[(d)] There is a function in the Nevanlinna class ${\cal N}$ with zero set $(z_j)$. \end{itemize} \end{satz} Let us recall that a function $f$ analytic in ${\mathbb{D}}$ belongs to ${\cal N}$ if and only if the integrals \begin{equation*} \int \limits_{0}^{2\pi} \log^+|f(re^{it})|\, dt \end{equation*} remain bounded as $r\to 1$. \medskip For the special case of a {\it finite} sequence a result related to Theorem \ref{thm:classical_zero} but for critical sets instead of zero sets can be found in work of Heins \cite[\S29]{Hei62}, Wang \& Peng \cite{WP79}, Zakeri \cite{Z96} and Stephenson \cite{Ste2005}: For every finite sequence ${\cal C}=(z_j)$ in ${\mathbb{D}}$ there is always a {\it finite} Blaschke product whose critical set coincides with ${\cal C}$. A recent first generalization of this result to {\it infinite} sequences is discussed in \cite{KR}. There it is shown that every Blaschke sequence $(z_j)$ is the critical set of an infinite Blaschke product. However, the converse to this, known as the Bloch--Nevanlinna conjecture \cite{Dur1969}, is false. There do exist Blaschke products, whose critical sets fail to satisfy the Blaschke condition, see \cite[Theorem 3.6]{Col1985}. Thus the critical sets of bounded analytic functions are not just the Blaschke sequences and the situation for critical sets seems more subtle than for zero sets. \smallskip The main result of this paper is the following counterpart of Theorem \ref{thm:classical_zero} for critical sets of bounded analytic functions. \begin{theorem}\label{thm:main} Let $ (z_j)$ be a sequence in ${\mathbb{D}}$. Then the following statements are equivalent. \begin{itemize} \item[(a)] There is an analytic self--map of ${\mathbb{D}}$ with critical set $ (z_j)$. 
\item[(b)] There is an indestructible Blaschke product with critical set $ (z_j)$. \item[(c)] There is a function in the weighted Bergman space $ {\cal A}^2_1$ with zero set $ (z_j)$. \item[(d)] There is a function in the Nevanlinna class ${\cal N}$ with critical set $(z_j)$. \end{itemize} \end{theorem} We note that a Blaschke product $B$ is said to be indestructible if $T \circ B$ is a Blaschke product for {\it every} unit disk automorphism $T$, see \cite[p.~51]{Col1985}. The weighted Bergman space $ {\cal A}^2_1$ consists of all functions analytic in ${\mathbb{D}}$ for which \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|z|^2) \, |f(z)|^{2}\, d\sigma_z< + \infty\, , \end{equation*} where $\sigma_z$ denotes two--dimensional Lebesgue measure with respect to $z$, see for instance \cite[p.~2]{HKZ}. \medskip A few remarks are in order. First, implication (d) $\Rightarrow$ (a) of Theorem \ref{thm:main} is an old result by Heins \cite[\S 30]{Hei62}. However, as part of this paper we provide a new and different approach to this result. \smallskip Second, the simple geometric characterization of the zero sets of bounded analytic functions via the Blaschke condition (c) in Theorem \ref{thm:classical_zero} has not yet found an explicit counterpart for critical sets. However, condition (c) of Theorem \ref{thm:main} might be seen as an implicit substitute. The zero sets of (weighted) Bergman space functions have been studied intensively in the 1970s and 1990s by Horowitz \cite{Hor1974,Hor1977}, Korenblum \cite{Kor1975} and Seip \cite{Seip1994, Seip1995}. As a result, quite sharp necessary as well as sufficient conditions for a sequence to be the zero set of a Bergman space function are available. In view of Theorem \ref{thm:main} all these results about zero sets of Bergman space functions now carry over to the critical sets of bounded analytic functions and vice versa. Unfortunately, a {\it geometric} characterization of the zero sets of (weighted) Bergman space functions is still unknown, ``and it is well known that this problem is very difficult'', cf.~\cite[p.~133]{HKZ}. \medskip Although at first sight Theorem \ref{thm:main} appears to be a result exclusively in the realm of complex analysis, the ideas of its proof have their origin in differential geometry and partial differential equations. We now give a brief account of the relevant interconnections. \subsection*{Conformal metrics and associated analytic functions} The proof of implication (a) $\Rightarrow$ (b) of Theorem \ref{thm:main} relies on conformal pseudometrics with Gauss curvature bounded above by $-4$. The key steps are the following. Suppose $f$ is a nonconstant analytic self--map of ${\mathbb{D}}$ with critical set ${\cal C}=(z_j)$. Then the pullback of the Poincar\'e metric \begin{equation*} \lambda_{{\mathbb{D}}}(w)\,|dw| = \frac{1}{1-|w|^2}\, |dw| \end{equation*} for ${\mathbb{D}}$ with constant curvature $-4$ via $f$, i.\!\;e.~ \begin{equation*} \lambda(z)\, |dz|=\frac{|f'(z)|}{1-|f(z)|^2}\,|dz|\, , \end{equation*} induces a conformal pseudometric of constant curvature $-4$ which vanishes on ${\cal C}$ (cf.~Definition \ref{def:zeroset}). This allows us to apply a version of Perron's method\footnote{See Subsection \ref{sec:perron}, in particular Theorem \ref{thm:perron2}.} for conformal pseudometrics which guarantees a unique {\bf maximal} conformal pseudometric $\lambda_{max}(z)\, |dz|$ on ${\mathbb{D}}$ with constant curvature $-4$ which vanishes precisely on ${\cal C}$.
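Before proceeding, a concrete instance of the pullback construction may be helpful. For the self--map $f(z)=z^2$, whose critical set is ${\cal C}=(0)$, the pullback of the Poincar\'e metric is \begin{equation*} \lambda(z)\, |dz|=\frac{|f'(z)|}{1-|f(z)|^2}\,|dz|=\frac{2\,|z|}{1-|z|^4}\,|dz|\, , \end{equation*} a conformal pseudometric of constant curvature $-4$ on ${\mathbb{D}}$ which vanishes exactly to first order at the origin, i.\!\;e.~its zero set is precisely ${\cal C}$.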
Now an extension of Liouville's representation theorem (Theorem \ref{thm:liouville}) says that every conformal pseudometric with constant curvature $-4$ and zero set ${\cal C}$ can be represented as the pullback of the Poincar\'e metric $\lambda_{{\mathbb{D}}}(w)\,|dw|$ via an analytic self--map of ${\mathbb{D}}$. In particular, \begin{equation*} \lambda_{max}(z)\, |dz|= \frac{|F'(z)|}{1-|F(z)|^2}\,|dz| \end{equation*} for some analytic self--map $F$ of ${\mathbb{D}}$. We call $F$ a {\bf maximal function} for ${\cal C}$ since it is a ``developing map'' of the maximal conformal pseudometric with constant curvature $-4$ and zero set ${\cal C}$. Now, roughly speaking, the maximality of $\lambda_{max}(z)\, |dz|$ forces its developing map $F$ to be maximal. More precisely, we have the following result. \begin{theorem}\label{thm:0} Every maximal function is an indestructible Blaschke product. \end{theorem} We note that in case of a finite sequence ${\cal C}$ the maximal functions are just the finite branched coverings of ${\mathbb{D}}$. One is therefore inclined to consider maximal functions for infinite branch sets as ``infinite branched coverings'': \begin{center} \setlength{\extrarowheight}{6pt} \begin{tabular}{|l|p{4.3cm}|l|} \hline {\bf critical set} & { \bf maximal function} & {\bf mapping properties }\\[2mm] \hline ${\cal C}=\emptyset$ & automorphism of ${\mathbb{D}}$& unbranched covering of ${\mathbb{D}}$\!\;; \\[-2mm] &&conformal self--map of ${\mathbb{D}}$\\[2mm] \hline ${\cal C}$ finite &finite Blaschke product &finite branched covering of ${\mathbb{D}}$\\[2mm] \hline ${\cal C}$ infinite & indestructible infinite \,\, Blaschke product& ``\!\;infinite branched covering of ${\mathbb{D}}$\!\;''\\[7mm] \hline \end{tabular} \end{center} \smallskip Hence maximal conformal pseudometrics of constant negative curvature and their associated maximal functions are of special interest for function theoretic con\-sider\-ations. The class of maximal conformal pseudometrics and their corresponding maximal functions have already been studied by Heins in \cite[\S25 \& \S 26]{Hei62}. Heins discusses some necessary as well as sufficient conditions for maximal functions regarding their topological properties. He also posed the problem of characterizing maximal functions, cf.~\cite[\S 26, \S 29]{Hei62}. Theorem \ref{thm:0} gives a partial answer to Heins' question. \subsection*{The Gauss curvature PDE and the Berger--Nirenberg problem} Heins' proof of implication (d) $\Rightarrow$ (a) of Theorem \ref{thm:main} relies on conformal pseudometrics with curvature bounded above by $-4$. Our approach is more general and will be based on the Gauss curvature equation and an extension of Liouville's representation theorem, see Theorem \ref{thm:liouville}. It uses the following idea. Suppose $g$ is a nonconstant holomorphic function on ${\mathbb{D}}$. We consider the Gauss curvature equation \begin{equation}\label{eq:gauss} \Delta u = 4 \,|g'(z)|^2 e^{2u} \end{equation} on ${\mathbb{D}}$. Here $\Delta$ denotes the standard Laplacian. If we can guarantee the existence of a realvalued $C^2$--solution $u: {\mathbb{D}} \to {\mathbb{R}}$ to this PDE, then $\lambda(z)\, |dz |:=|g'(z)|\,e^{u(z)}\, |dz|$ turns out to be a conformal pseudometric of constant curvature $-4$ on ${\mathbb{D}}$ which vanishes on the critical set of $g$. 
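Indeed, off the zeros of $g'$ the function $\log |g'|$ is harmonic, so any solution $u$ of (\ref{eq:gauss}) yields \begin{equation*} \Delta \log \lambda =\Delta \big( \log|g'| + u \big)=\Delta u = 4\, |g'(z)|^2 e^{2u}=4\, \lambda(z)^2\, , \end{equation*} i.\!\;e.~$\kappa_{\lambda}\equiv -4$ there, while at a zero of $g'$ of order $m$ the density $\lambda=|g'|\, e^{u}$ vanishes to exactly the same order $m$.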
Hence by Theorem \ref{thm:liouville}, \begin{equation*} \lambda(z)\, |dz|=\frac{|f'(z)|}{1-|f(z)|^2}\, |dz| \end{equation*} for some analytic self--map $f$ of ${\mathbb{D}}$ and so the critical set of $f$ agrees with the critical set of $g$. In short, if for an analytic function $g$ in ${\mathbb{D}}$ the equation (\ref{eq:gauss}) has a solution on ${\mathbb{D}}$, then there exists an analytic self--map $f$ of ${\mathbb{D}}$ such that the critical sets of $g$ and $f$ coincide. Thus the main point is to characterize those holomorphic functions $g: {\mathbb{D}} \to {\mathbb{C}}$ for which the PDE (\ref{eq:gauss}) has a solution on ${\mathbb{D}}$. In fact this problem is a special case of the well--known Berger--Nirenberg problem from differential geometry, i.\!\;e.~the question whether for a Riemann surface $R$ and a given function $\kappa: R \to {\mathbb{R}}$ there exists a conformal metric on $R$ with Gauss curvature $\kappa$. \smallskip We note that the Berger--Nirenberg problem is well--understood for the projective plane, see \cite{Mos1973}, and has been extensively studied for compact Riemann surfaces, see for instance \cite{Aub1998,Chang2004,Kaz1985,Str2005} as well as for the complex plane \cite{Avi1986, CN1991, Ni1989}. However, much less is known for proper subdomains $G$ of the complex plane, see \cite{BK1986, HT92, KY1993}. In this situation the Berger--Nirenberg problem reduces to the question whether for a given function $k:G \to {\mathbb{R}}$ the Gauss curvature equation \begin{equation}\label{eq:gauss1} \Delta u= k(z)\, e^{2u} \end{equation} has a solution on $G$. We just note that $k$ is the negative of the curvature $\kappa$ of the conformal metric $e^{u(z)}\, |dz|$. In Theorem \ref{thm:sol1} we give some necessary as well as sufficient conditions for the solvability of the Gauss curvature equation (\ref{eq:gauss1}) only in terms of the function $k$. For instance, we shall see that the Gauss curvature equation (\ref{eq:gauss1}) has a solution on ${\mathbb{D}}$ if $k$ is a nonnegative locally H\"older continuous function on ${\mathbb{D}}$ and \begin{equation}\label{eq:sufcon} \iint \limits_{{\mathbb{D}}} (1-|z|^2)\, k(z) \, d\sigma_z <+ \infty\, . \end{equation} \smallskip Let us make some remarks. First, this result generalizes previous results by Kalka $\&$ Yang in \cite{KY1993}. There the authors find a family of nonnegative locally H\"older continuous functions $k_j$ on ${\mathbb{D}}$ tending uniformly to $+\infty$ at the boundary of ${\mathbb{D}}$ such that if $k$ is an essentially positive function (see page \pageref{page:ess}) which satisfies $k\le k_j$ on ${\mathbb{D}}$ for some $j$, then (\ref{eq:gauss1}) has a solution on ${\mathbb{D}}$. We note that all $k_j$ are radially symmetric and fulfill (\ref{eq:sufcon}), see Example \ref{ex:k1} as well as the comments following Example \ref{ex:k1}. \smallskip Secondly, although condition (\ref{eq:sufcon}) is not necessary for the existence of a solution to (\ref{eq:gauss1}) on ${\mathbb{D}}$, it is strong enough to deduce a necessary and sufficient condition for the solvability of the Gauss curvature equation of the particular form (\ref{eq:gauss}): \begin{theorem}\label{thm:d} Let $g:{\mathbb{D}} \to {\mathbb{C}}$ be an analytic function. Then the Gauss curvature equation (\ref{eq:gauss}) has a solution on ${\mathbb{D}}$ if and only if $g'$ has a representation as a product of an ${\cal A}_1^2$ function and a nonvanishing analytic function in ${\mathbb{D}}$.
\end{theorem} We note that Theorem \ref{thm:d} solves the Berger--Nirenberg problem for the special case $R={\mathbb{D}}$ and curvature functions of the form $\kappa(z)=-|\varphi(z)|^2$ where $\varphi$ is analytic in ${\mathbb{D}}$. \medskip A second observation is that Theorem \ref{thm:d} leads to a characterization of the class of all holomorphic functions $g: {\mathbb{D}} \to {\mathbb{C}}$ whose critical sets coincide with the critical sets of the class of bounded analytic function, see Section \ref{sec:5} for more details. \medskip Finally, let us return to the implication (d) $\Rightarrow$ (a) of Theorem \ref{thm:main}. Suppose $g$ is a function in the Nevanlinna class. Then it turns out that $g'$ is the product of an ${\cal A}_1^2$ function and a nonvanishing function, see the proof of Theorem \ref{thm:main} in Section \ref{sec:5}. Consequently, by Theorem \ref{thm:d} the Gauss curvature equation (\ref{eq:gauss}) has a solution which as we have seen implies that there exists a bounded analytic function such that the critical sets of these two functions agree. \bigskip We now will give a brief outline of the paper and record in passing some further results, which might be of interest in their own right. In Section \ref{sec:Metrics} we discuss the theory of conformal pseudometrics with curvature bounded above by $-4$ as far as it is needed for this paper. We begin with some introductory material in Subsection \ref{sec:pseudometrics}. Subsection \ref{sec:liouville} is devoted to Liouville's theorem and some of its extensions. We then relate in Subsection \ref{sec:boundary} the growth of a conformal pseudometric with constant negative curvature on ${\mathbb{D}}$ with inner functions. This leads for instance to the following result, which might be viewed as an extension of Heins' characterization of finite Blaschke products \cite{Hei86}\footnote{Cf. Theorem \ref{thm:boundary} in Subsection \ref{sec:boundary}.}. \begin{corollary} \label{cor:5} Let $f : {\mathbb{D}} \to {\mathbb{D}}$ be an analytic function. Then the following statements are equivalent. \begin{itemize} \item[(a)] $\angle \lim \limits_{z \to \zeta} \left( 1-|z|^2 \right) \displaystyle \frac{|f'(z)|}{1-|f(z)|^2}=1$ for a.\!\;e.~$\zeta \in \partial {\mathbb{D}}$. \item[(b)] $f$ is an inner function with finite angular derivative at almost every point of $\partial {\mathbb{D}}$. \end{itemize} \end{corollary} Here, $\angle \lim $ denotes the nontangential limit. \smallskip Subsection \ref{sec:sk-metrics} will focus on some properties of conformal pseudometrics with curvature bounded above by $-4$. Finally, in Subsection \ref{sec:perron} we apply Perron's method to guarantee the existence of maximal conformal pseudometrics with constant curvature $-4$ and preassigned zeros. \smallskip In Section \ref{sec:Berger_Nirenberg_problem} we turn to the Berger--Nirenberg problem for planar domains. Subsection \ref{sec:results} contains the main results, illustrative examples and remarks. In particular, we establish some necessary and some sufficient conditions for the solvability of the Gauss curvature equation only in terms of the curvature function and the domain in Theorem \ref{thm:sol1} and Corollary \ref{cor:solutions1}. As a consequence of these conditions, we obtain Theorem \ref{thm:d}. The proofs of the results are deferred to Subsection \ref{sec:proofs1} and Subsection \ref{sec:proofs2}. \smallskip Section \ref{sec:maximal} treats maximal functions. 
We begin with the proof of Theorem \ref{thm:0} as well as of the equivalence of (a) and (b) in Theorem \ref{thm:main}. We then discuss in more detail maximal functions whose critical sets form finite and Blaschke sequences, respectively. For instance, maximal functions with finite critical sets are finite Blaschke products and vice versa. This together with Theorem \ref{thm:0} implies a series of refined Schwarz--Pick type inequalities: \begin{corollary}\label{cor:3} Let $f: {\mathbb{D}} \to {\mathbb{D}}$ be a nonconstant analytic function with critical set ${\cal C}$ and let ${\cal C}^*$ be a subsequence of ${\cal C}$. Then there exists an (indestructible) Blaschke product $F$ with critical set ${\cal C}^*$ such that \begin{equation*} \frac{|f'(z)|}{1-|f(z)|^2} \le \frac{|F'(z)|}{1-|F(z)|^2}\, ,\quad z \in {\mathbb{D}}\!\;. \end{equation*} If ${\cal C}^*$ is finite, then $F$ is a finite Blaschke product. \smallskip Furthermore, $f=T \circ F$ for some automorphism $T$ of ${\mathbb{D}}$ if and only if \begin{equation*} \lim_{z \to w} \frac{|f'(z)|}{1-|f(z)|^2} \, \frac{1-|F(z)|^2}{|F'(z)|}=1 \end{equation*} for some $w \in {\mathbb{D}}$. \end{corollary} We note that for {\it finite} sequences ${\cal C}^*$ Corollary \ref{cor:3} is a classical result of Nehari \cite{Neh1946}. Since {\it infinite} sequences are explicitly allowed in Corollary \ref{cor:3}, it generalizes Nehari's result. In addition, for ${\cal C}^*={\cal C} \not= \emptyset$ we get a best possible sharpening \begin{equation*} \frac{|f'(z)|}{1-|f(z)|^2} \le \frac{|F'(z)|}{1-|F(z)|^2} < \frac{1}{1-|z|^2} \end{equation*} of the Schwarz--Pick inequality \begin{equation*} \frac{|f'(z)|}{1-|f(z)|^2} \le \frac{1}{1-|z|^2} \, . \end{equation*} We end Section \ref{sec:maximal} with a criterion for maximal functions with finite and Blaschke sequences as critical sets. \smallskip In a short final Section \ref{sec:5} we prove Theorem \ref{thm:main} and conclude with some additional remarks. \bigskip Before beginning with the details it is worth making some comments about notation. The action in this paper takes place on domains in the complex plane. The letters $D$ and $G$ exclusively denote planar domains and will be used without further explanation. \section{Conformal metrics and pseudometrics} \label{sec:Metrics} The following subsections discuss some selected topics on conformal pseudometrics with negative curvature. For more information we refer to \cite{BM2007, Hei62, KL2007, KR2008, Smi1986}. \subsection{Conformal metrics and pseudometrics} \label{sec:pseudometrics} We begin our brief account of conformal metrics and pseudometrics with some basic definitions and results. \medskip \begin{definition} A nonnegative continuous function $\lambda$ on $G$, $\lambda: G \to [0, + \infty)$, $\lambda \not\equiv 0$, is called a conformal density on $G$, and the corresponding quantity \label{def:leng} $\lambda(z) \, |dz|$ a conformal pseudometric on $G$. If $\lambda(z)>0$ for all $z \in G$, we say $\lambda(z) \, |dz|$ is a conformal metric on $G$. We call a conformal pseudometric $\lambda(z) \, |dz|$ regular on $G$, if $\lambda$ is of class $C^2$ in $\{z \in G \, : \, \lambda(z)>0\}$\footnote{$C^2(G)$ denotes the set of realvalued twice continuously differentiable functions on $G$.}. \end{definition} We wish to emphasize that, according to our definition, $\lambda\equiv 0$ is {\bf not} a conformal density (of a conformal pseudometric). \smallskip A second remark is that some authors call a nonnegative upper semicontinuous function a conformal density.
For our applications however it suffices to ask for continuity. \smallskip A geometric quantity associated with a conformal pseudometric is its Gauss curvature. \begin{definition}[Gauss curvature]\label{def:curvature} Let $\lambda (z) \, |dz|$ be a regular conformal pseudometric on $G$. Then the (Gauss) curvature $\kappa_{\lambda}$ of $\lambda(z) \, |dz|$ is defined by \begin{equation*} \kappa_{\lambda}(z):=-\frac{\Delta (\log \lambda)(z)}{\lambda(z)^2} \end{equation*} for all points $z \in G$ where $\lambda(z) > 0$. \end{definition} An important property of the Gauss curvature is its conformal invariance. It is based on the following definition. \begin{definition}[Pullback of conformal pseudometrics] Let $\lambda(w) \, |dw|$ be a conformal pseudometric on $D$ and $w=f(z)$ be a nonconstant analytic map from $G$ to $D$. Then the conformal pseudometric \begin{equation*} (f^*\lambda)(z) \, |dz|:=\lambda(f(z)) \, |f'(z)| \, |dz| \end{equation*} defined on $G$, is called the pullback of $\lambda(w) \, |dw|$ under the map $f$. \end{definition} \begin{theorem}[Theorema Egregium] For every analytic map $w=f(z)$ and every regular conformal pseudometric $\lambda(w)\,|dw|$ the relation \begin{equation*} \kappa_{f^*\lambda}(z)=\kappa_{\lambda}(f(z)) \end{equation*} is satisfied provided $\lambda(f(z)) \, |f'(z)|>0$. \end{theorem} Definition \ref{def:curvature} shows that if $\lambda(z)\, |dz|$ is a regular conformal metric with curvature $\kappa_{\lambda}=\kappa$ on $G$, then the function $u:= \log \lambda$ satisfies the partial differential equation \begin{equation}\label{eq:pde1} \Delta u=-\kappa(z) \, e^{2\;\!u} \end{equation} on $G$. If, conversely, a $C^2$--function $u$ fulfills (\ref{eq:pde1}) on $G$, then $\lambda(z):=e^{u(z)}$ induces a regular conformal metric on $G$ with curvature $\kappa_{\lambda}=\kappa$. \medskip The ubiquitous example of a conformal metric is the Poincar\'{e} or hyperbolic metric $\lambda_{{\mathbb{D}}}(z)\, |dz|$ for the unit disk ${\mathbb{D}}$ with constant curvature $-4$. It has the following important property. \begin{theorem}[Fundamental Theorem]\label{thm:ahlfors} Let $\lambda(z)\,|dz|$ be a regular conformal pseudometric on ${\mathbb{D}}$ with curvature bounded above by $-4$. Then $\lambda(z) \le \lambda_{{\mathbb{D}}}(z)$ for every $z \in {\mathbb{D}}$. \end{theorem} Theorem \ref{thm:ahlfors} is due to Ahlfors \cite{Ahl1938} and it is usually called Ahlfors' lemma. However, in view of its relevance Beardon and Minda proposed to call Ahlfors' lemma the {\bf fundamental theorem}. We will follow their suggestion in this paper. \subsection{Liouville's Theorem} \label{sec:liouville} Conformal pseudometrics of constant curvature $-4$ have a special nature. First, they give us via the pullback a means of constructing conformal pseudometrics with various prescribed properties without changing their curvature. Second, every conformal pseudometric of constant curvature $-4$ can locally be represented by a holomorphic function. This is Liouville's theorem. In order to give a precise statement we begin with a formal definition. \begin{definition}[Zero set]\label{def:zeroset} Let $\lambda(z)\, |dz|$ be a conformal pseudometric on $G$. We say $\lambda(z)\, |dz|$ has a zero of order $m_0>0$ at $z_0 \in G$ if \begin{equation*} \lim_{z \to z_0} \frac{\lambda(z)}{|z-z_0|^{m_0}} \quad\text{ exists and } \not=0 \, . 
\end{equation*} \smallskip We will call a sequence ${\cal C}=(\xi_j) \subset G$ \begin{equation*} (\xi_j):=(\underbrace{z_1, \ldots, z_1}_{m_1 -\text{times}},\underbrace{z_2, \ldots, z_2}_{m_2-\text{times}} , \ldots )\,, \, \, z_k \not=z_n \text{ if } k\not=n, \end{equation*} the zero set of a conformal pseudometric $\lambda(z) \, |dz|$, if $\lambda(z)>0$ for $z \in G\backslash {\cal C}$ and if $\lambda(z)\, |dz|$ has a zero of order $m_k \in {\mathbb{N}}$ at $z_k$ for all $k$. \end{definition} \begin{theorem}[Liouville's Theorem] \label{thm:liouville} Let ${\cal C}$ be a sequence of points in a simply connected domain $G$ and let $\lambda(z) \, |dz|$ be a regular conformal pseudometric on $G$ with constant curvature $-4$ on $G$ and zero set ${\cal C}$. Then $\lambda(z) \, |dz|$ is the pullback of the hyperbolic metric $\lambda_{{\mathbb{D}}}(w)\, |dw|$ under some analytic map $f:G \to {\mathbb{D}}$, i.\!\;e. \begin{equation}\label{eq:liouville} \lambda(z) =\frac{|f'(z)|}{1-|f(z)|^2}\, , \quad z \in G. \end{equation} If $g:G \to \mathbb{D}$ is another analytic function, then $\lambda(z)=(g^*\lambda_{{\mathbb{D}}})(z)$ for all $z\in G$ if and only if $g=T\circ f$ for some automorphism $T$ of ${\mathbb{D}}$. \end{theorem} A holomorphic function $f$ with property (\ref{eq:liouville}) will be called a {\bf developing map} for $\lambda(z) \, |dz|$. \medskip Note that the critical set of each developing map coincides with the zero set of the corresponding conformal pseudometric. \medskip For later applications we wish to mention the following variant of Theorem \ref{thm:liouville}. \begin{remark}\label{rem:liouville} Let $G$ be a simply connected domain and let $\varphi:G \to {\mathbb{C}}$, $\varphi \not \equiv 0$, be an analytic map. If $\lambda(z)\, |dz|$ is a regular conformal metric with curvature $-4 \, |\varphi(z)|^2$, then there exists a holomorphic function $f:G \to {\mathbb{D}}$ such that \begin{equation*} \lambda(z) =\frac{1}{|\varphi(z)|}\frac{|f'(z)|}{1-|f(z)|^2}\, , \quad z \in G. \end{equation*} Moreover, $f$ is uniquely determined up to postcomposition with a unit disk automorphism. \end{remark} Liouville \cite{Lio1853} stated Theorem \ref{thm:liouville} for the special case that $\lambda(z) \, |dz|$ is a regular conformal metric. We therefore like to refer to Theorem \ref{thm:liouville} as well as to Remark \ref{rem:liouville} as Liouville's theorem. \smallskip Theorem \ref{thm:liouville} and in particular the special case that $\lambda(z) \, |dz|$ is a conformal metric has a number of different proofs, see for instance \cite{Bie16, CW94, CW95, Min, Nit57, Yam1988}. Remark \ref{rem:liouville} is discussed in \cite{KR}. \subsection{Boundary behavior of developing maps} \label{sec:boundary} By Liouville's theorem it is perhaps not too surprising that there is some relation between the boundary behavior of a conformal pseudometric and the boundary behavior of a corresponding developing map. The next result illustrates this relation. \smallskip \label{page1} \begin{satz}[\text{\small cf.~\cite{Hei86, KRR06}\!\;}] \label{thm:boundary} Let $f: {\mathbb{D}} \to {\mathbb{D}}$ be an analytic function and $I \subset \partial {\mathbb{D}}$ some open arc. Then the following are equivalent. 
\begin{itemize} \item[(a)] $ \lim \limits_{z \to \zeta} \left( 1-|z|^2 \right) \displaystyle \frac{|f'(z)|}{1-|f(z)|^2}=1 \qquad \text{ for every } \zeta \in I \, , $ \item[(b)]$ \lim \limits_{z \to \zeta} \displaystyle \frac{|f'(z)|}{1-|f(z)|^2}=+\infty \qquad \text{ for every } \zeta \in I \, , $ \item[(c)] $f$ has a holomorphic extension across the arc $I$ with $f(I) \subset \partial {\mathbb{D}}$. \end{itemize} In particular, if $I=\partial {\mathbb{D}}$, then $f$ is a finite Blaschke product. \end{satz} \medskip In fact similar results can be derived when the unrestricted limits are replaced by angular limits. \begin{lemma} \label{lem:lemma2} Let $f : {\mathbb{D}} \to {\mathbb{D}}$ be an analytic function and $I$ some subset of $\partial{\mathbb{D}}$. \begin{itemize} \item[(1)] If \begin{equation*} \angle \lim \limits_{z \to \zeta} \left( 1-|z|^2 \right) \displaystyle \frac{|f'(z)|}{1-|f(z)|^2}=1 \qquad \text{ for every } \zeta \in I \, , \end{equation*} then $f$ has a finite angular derivative\footnote{see \cite[p.~57]{Sha1993}.} at a.\!\;e.~$\zeta \in I$. In particular, \begin{equation*} \angle \lim \limits_{z \to \zeta} |f(z)|=1 \qquad \text{ for a.\!\;e. } \zeta \in I \, . \end{equation*} \item[(2)] If $f$ has a finite angular derivative (and $\angle \lim_{z \to \zeta} |f(z)|=1$) at some $\zeta \in I$, then \begin{equation*} \angle \lim \limits_{z \to \zeta} \left( 1-|z|^2 \right) \displaystyle \frac{|f'(z)|}{1-|f(z)|^2}=1 \, . \end{equation*} \end{itemize} \end{lemma} \medskip Now, if $I=\partial {\mathbb{D}}$, then as a direct consequence of Lemma \ref{lem:lemma2} we obtain Corollary \ref{cor:5} in the Introduction. \medskip {\bf Proof of Lemma \ref{lem:lemma2}.} \\ (1) Let $\zeta \in I$ and assume that $\liminf \limits_{z \to \zeta} \displaystyle \frac{1-|f(z)|}{1-|z|}=+\infty$. Since \begin{equation*} |f'(z)|=\left( 1-|z|^2 \right) \frac{|f'(z)|}{1-|f(z)|^2} \frac{1-|f(z)|^2}{1-|z|^2} \, , \end{equation*} we deduce \begin{equation*} \angle \lim \limits_{z \to \zeta} |f'(z)|=+\infty \, . \end{equation*} By Privalov's theorem, cf.~\cite[p.~47]{Pri1956}, this is only possible for $\zeta$ from a nullset $I' \subset \partial {\mathbb{D}}$. Thus \begin{equation*} \liminf \limits_{z \to \zeta} \displaystyle \frac{1-|f(z)|}{1-|z|}<+\infty \qquad \text{ for a.\!\;e. } \zeta \in I \, . \end{equation*} Therefore, $f$ has a finite angular derivative $f'(\zeta)$ at a.\!\;e.~$\zeta \in I$, see \cite[p.~57]{Sha1993}. \medskip (2) For convenience we may assume $\zeta=1$ and $f(1):=\angle \lim_{z \to 1} f(z)=1$. Thus $f'(1):=\angle \lim_{z \to 1} f'(z)>0$, see \cite[p.~57]{Sha1993}. We define for $z \in {\mathbb{D}}$ \begin{equation*} \varrho(z):= \frac{f(z)-1}{z-1} -f'(1)\, . \end{equation*} Then $\angle \lim_{z \to 1}\varrho(z)=0$ and $f(z)=1+f'(1)\, (z-1) + \varrho(z)\, (z-1)$ for $z \in {\mathbb{D}}$. Hence we can write \begin{equation*} 1-|f(z)|^2= 2\, f'(1)\, \mathop{{\rm Re}}(1-z)+ \mathop{{\rm Re}}(1-z)\,r(z)\, , \end{equation*} where $\angle \lim_{z \to 1}r(z)=0$. This yields \begin{equation*} \left( 1-|z|^2 \right) \frac{|f'(z)|}{1-|f(z)|^2}= \frac{1-|z|^2}{2\;\!\mathop{{\rm Re}}(1-z)}\, \frac{|f'(z)|}{f'(1)}\, \frac{1}{1+\frac{r(z)}{2\, f'(1)}}\, . \end{equation*} If we now choose $z \in S_{\delta}:=\{\, 1+r\, e^{i\, \alpha} \in {\mathbb{D}} \, : \, r>0, \, \, \alpha\in [ \pi/2+ \delta, 3\pi/2 - \delta]\, \} $ for $\delta >0$, then we have $1-|z|^2=-2\;\! r \cos \alpha -r^2$ and $\mathop{{\rm Re}}(1-z)=-r\, \cos \alpha$. 
Hence \begin{equation*} \angle \lim \limits_{z \to 1}\frac{1-|z|^2}{2\;\!\mathop{{\rm Re}}(1-z)}=1+ \angle \lim \limits_{z \to 1} \frac{r}{2\cos\alpha}\ge 1+ \lim \limits_{r \to 0} \frac{r}{2\, \cos(\frac{\pi}{2} +\delta)}=1\, . \end{equation*} So we can conclude that \begin{equation*} \angle \lim \limits_{z \to \zeta} \left( 1-|z|^2 \right) \frac{|f'(z)|}{1-|f(z)|^2}\ge 1\,. \end{equation*} Combining this with the Schwarz--Pick lemma gives the desired result. \hfill{$\blacksquare$} \medskip Finally, there is a counterpart of implication (b) $\Rightarrow$ (c) of Theorem \ref{thm:boundary}. \begin{lemma} \label{lem:lemma} Let $f : {\mathbb{D}} \to {\mathbb{D}}$ be an analytic function. If \begin{equation}\label{eq:innerfunction} \angle \lim \limits_{z \to \zeta} \frac{|f'(z)|}{1-|f(z)|^2}=+\infty \, \qquad \text{ for a.\!\;e. } \zeta \in \partial {\mathbb{D}} \, , \end{equation} then $f$ is an inner function. \end{lemma} {\bf Proof.} Assume $f$ is not inner. Then the angular limit $f(\zeta)$ of $f$ exists and belongs to ${\mathbb{D}}$ for every $\zeta \in I$ for a set $I \subseteq \partial {\mathbb{D}}$ of positive measure. Now, in view of (\ref{eq:innerfunction}) the angular limit of $f'$ would be $\infty$ for a set $I'\subseteq \partial {\mathbb{D}}$ of positive measure, contradicting Privalov's theorem. \hfill{$\blacksquare$} \bigskip It would be interesting to see an example of an inner function for which (\ref{eq:innerfunction}) is not true. We further note that conditions (1) and (2) in Lemma \ref{lem:lemma2} do not complement each other. Therefore we may ask if an analytic self--map $f$ of ${\mathbb{D}}$ which satisfies \begin{equation*} \angle\lim \limits_{z \to 1} \frac{|f'(z)|}{1-|f(z)|^2}\, (1-|z|^2)=1 \end{equation*} does have an angular limit or even a finite angular derivative at $z=1$; this might then be viewed as a converse of the Julia--Wolff--Carath\'eodory theorem, see \cite[p.~57]{Sha1993}. \subsection{SK--metrics} \label{sec:sk-metrics} In the next two subsections we take a brief look at Heins' theory of SK--metrics\footnote{In particular \S 2, \S 3, \S 7, \S 10, \S 12 and \S 13 in \cite{Hei62}; see also \cite{KR2008}.}, refine and extend it in order to give a self--contained overview of the results which are needed for this paper. An SK--metric in the sense of Heins is a conformal pseudometric whose ``generalized curvature'' is bounded above by $-4$. More precisely, this ``generalized curvature'' is obtained by replacing the standard Laplacian in Definition \ref{def:curvature} by the generalized lower Laplace operator, which is defined for a continuous function $u$ by \begin{equation*} \Delta u(z)= \liminf_{ r \to 0} \, \frac{4}{r^2} \left( \frac{1}{2 \pi} \int \limits_{0}^{2 \pi} u(z +r e^{it}) \, dt - u(z) \right) \, . \end{equation*} We note that in case $u$ is a $C^2$--function the generalized lower Laplace operator coincides with the standard Laplace operator. Hence one can assign to an arbitrary conformal pseudometric $\lambda(z)\, |dz|$ a Gauss curvature $\kappa_{\lambda}$ in a natural way. \begin{definition}[SK--metric]\label{def:sk_metric} A conformal pseudometric $\lambda(z)\, |dz|$ on $G$ is called {SK--metric} on $G$ if $\kappa_{\lambda}(z)\le -4$ for all $z \in G$ where $\lambda(z)>0$. \end{definition} Note that every SK--metric is a subharmonic function. Second, if the curvature of an SK--metric $\lambda(z)\, |dz|$ is locally H\"older continuous on $G$, then $\lambda$ is regular on $G$ by elliptic regularity (cf.~Theorem \ref{thm:existence2}). 
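A simple explicit example (included here for illustration) is the pseudometric $\lambda(z)\, |dz|=\frac{|z|}{1-|z|^2}\, |dz|$ on ${\mathbb{D}}$: away from the origin $\log |z|$ is harmonic, so \begin{equation*} \kappa_{\lambda}(z)=-\frac{\Delta (\log \lambda)(z)}{\lambda(z)^2}=-\frac{\Delta (\log \lambda_{{\mathbb{D}}})(z)}{\lambda(z)^2}=-4\, \frac{\lambda_{{\mathbb{D}}}(z)^2}{\lambda(z)^2}\le -4\, , \end{equation*} since $\lambda \le \lambda_{{\mathbb{D}}}$. This is the special case $s(z)=\log|z|$ of Lemma \ref{lem:new_sk} below.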
\smallskip We now record some basic but essential properties of SK--metrics. \begin{lemma} [\text{\small cf.~\cite[\S 10]{Hei62}}\!\;] \label{lem:maxmetrics} Let $\lambda(z) \, |dz|$ and $\mu(z) \, |dz|$ be SK--metrics on $G$. Then $\sigma(z) := \max \{\lambda(z), \mu(z)\}$ induces an SK--metric on $G$. \end{lemma} \begin{lemma}[\text{\small cf.~\cite[Lemma 3.7]{KR2008}}\!\; (Gluing Lemma)] \label{lem:gluing} Let $\lambda(z)\, |dz|$ be an SK--metric on $G$ and let $\mu(z)\, |dz|$ be an SK--metric on a subdomain $D$ of $G$ such that the ``gluing condition'' \begin{equation*} \limsup \limits_{ D \ni z \to \zeta} \mu(z) \le \lambda(\zeta) \end{equation*} holds for all $\zeta \in \partial D \cap G$. Then $\sigma(z)\, |dz|$ defined by \begin{equation*} \sigma(z):=\begin{cases} \, \max \{\lambda(z), \mu(z)\}\, , & \hspace{2mm} \, \text{if } \, z \in D, \\[2mm] \, \lambda(z) \, , & \hspace{2mm} \, \text{if } \, z \in G \backslash D, \end{cases} \end{equation*} is an SK--metric on $G$. \end{lemma} \begin{lemma}[\text{\small cf.~\cite[\S 10]{Hei62}}\!\;]\label{lem:new_sk} Let $\lambda(z)\, |dz|$ be an SK--metric on $G$ and let $s$ be a nonpositive subharmonic function on $G$. Then $\mu(z)\, |dz|:=e^{s(z)}\, \lambda(z)\, |dz|$ is an SK--metric on $G$. \end{lemma} \medskip \begin{theorem}[\text{\small cf.~\cite[\S 2]{Hei62}}\!\; (Generalized Maximum Principle)]\label{thm:gmp} Let $\lambda(z)\, |dz|$ be an SK--metric on $G$ and $\mu(z)\, |dz|$ be a regular conformal metric on $G$ with constant curvature $-4$. If \begin{equation*} \limsup\limits_{z \to \zeta} \frac{\lambda(z)}{\mu(z)} \le 1 \quad \text{for all } \zeta \in \partial_{\infty} G\, \footnote{$\partial_{\infty} G$ means the boundary of $G$ in ${\mathbb{C}} \cup \{ \infty\}$.}\, , \end{equation*} then $\lambda(z) \le \mu(z)$ for $z \in G$. \end{theorem} The following lemma provides a converse to the generalized maximum principle. It also might be viewed as an alternative definition of an SK--metric. \begin{lemma}[\text{\small cf.~\cite[\S 3]{Hei62}}\!\;]\label{lem:char_metrics} Let $\lambda$ be a continuous function on $G$. Then the following are equivalent. \begin{itemize} \item[(a)] $\lambda(z) \, |dz|$ is an SK--metric on $G$. \item[(b)] Whenever $D$ is a relatively compact subdomain of $G$, and $\mu(z) \, |dz|$ is a regular conformal metric with constant curvature $ -4$ on $D$ satisfying \begin{equation*} \limsup_{z \to \zeta} \frac{\lambda(z)}{\mu(z)} \le 1 \end{equation*} for all $\zeta \in \partial D$, then $\lambda(z) \le \mu(z)$ for $z \in D$. \end{itemize} \end{lemma} We now apply Lemma \ref{lem:char_metrics} to prove a removable singularity theorem for SK--metrics. \begin{lemma}[Removable singularities]\label{lem:hebsing} Let $\lambda$ be a continuous function on $G$ which induces an SK--metric on $G\backslash \{ z_0\}$. Then $\lambda(z) \, |dz|$ is an SK--metric on $G$. \end{lemma} {\bf Proof.} Let $D$ be a relatively compact subdomain of $G$ which contains $z_0$ and let $\mu(z)\, |dz|$ be a regular conformal metric with constant curvature $-4$ on $D$ such that \begin{equation*} \limsup_{z \to \zeta} \frac{\lambda(z)}{\mu(z)} \le 1 \quad \text{for all } \zeta \in \partial D\, . \end{equation*} Then the nonnegative function \begin{equation*} s(z):= \log^+\left( \frac{\lambda(z)}{\mu(z)} \right)= \max\left\{ 0,\, \log\left( \frac{\lambda(z)}{\mu(z)} \right) \right\}\, , \quad z \in D\,, \end{equation*} is subharmonic on $D \backslash \{z_0 \}$. To see this, let $s(z_*)>0$ at some point $z_*\in D \backslash \{z_0 \} $.
Thus $s(z)>0$ in a neighborhood of $z_*$ and consequently $\Delta s (z) \ge 4\, (\lambda(z)^2-\mu(z)^2) >0$ in this neighborhood, i.\!\;e.~$s$ is subharmonic there. If $s(z_*)=0$ for some $z_* \in D \backslash \{z_0 \}$ then $s$ satisfies the submean inequality \begin{equation*} s(z_*)=0\le \frac{1}{2\,\pi} \int \limits_0^{2\pi} s(z_*+re^{it})\, dt \end{equation*} for all small $r$. We note that $s$ has a subharmonic extension to $D$, since $s$ is bounded near $z_0$. By hypothesis $\limsup_{z \to \zeta } s(z)=0$ for all $\zeta \in \partial D$ and so the maximum principle for subharmonic functions implies that $s\equiv 0$, i.\!\;e.~$\lambda \le \mu$ in $D$. Finally, by Lemma \ref{lem:char_metrics}, $\lambda(z) \, |dz|$ is an SK--metric on $G$.~\hfill{$\blacksquare$} \subsection{Perron's method} \label{sec:perron} Perron's method for subharmonic functions \cite{Per1923} is used to treat the classical Dirichlet problem in arbitrary bounded domains. One attractive feature of this method is that it separates the interior existence problem from that of the boundary behavior of the solution. In addition, the solution is characterized by a maximality property. Perron's method, for instance, can be imitated to ensure the existence of solutions to fairly general elliptic PDEs, cf.~\cite[Chapter 6.3]{GT}. We apply Perron's method to guarantee the existence of maximal conformal SK--metrics with prescribed zeros. In passing let us quickly recall the definition of a Perron family. For more information regarding Perron families for SK--metrics, see \cite[\S 12 \& \S 13]{Hei62} and \cite[Section 3.2]{KR2008}. \begin{lemma}[Modification]\label{lem:mod} Let $\lambda(z)\, |dz|$ be an SK--metric on $G$ and let $K$ be an open disk which is compactly contained in $G$. Then there exists a unique SK--metric $M_K\lambda(z)\, |dz|$ on $G$, called the modification of $\lambda$ on $K$, with the following properties: \begin{itemize} \item[(i)] $M_K\lambda(z) =\lambda(z)$ for every $z \in G\backslash K$ and $M_K\lambda(z) \ge \lambda(z)$ for every $z \in K$, \item[(ii)] $M_K\lambda(z)\, |dz| $ is a regular conformal metric on $K$ with constant curvature $-4$. \end{itemize} \end{lemma} \begin{definition}[Perron family] A family $\Phi$ of (densities of) SK--metrics on $G$ is called a Perron family if the following conditions are satisfied: \begin{itemize} \item[(i)] If $\lambda \in \Phi$ and $\mu \in \Phi$, then $\sigma \in \Phi$, where $\sigma(z) := \max \{\lambda(z), \mu(z)\}$ for $z \in G$. \item[(ii)] If $\lambda \in \Phi$, then $M_K\lambda \in \Phi$ for any open disk $K$ compactly contained in~$G$. \end{itemize} \end{definition} \begin{theorem} \label{thm:perron0} Let $\Phi$ be a Perron family of SK--metrics on $G$. If $\Phi \not= \emptyset$, then \begin{equation*} \lambda_{\Phi}(z):=\sup_{\lambda \in \Phi} \lambda(z), \quad z \in G\, , \end{equation*} induces a regular conformal metric with constant curvature $-4$ on $G$. \end{theorem} We remark that if $\Phi$ is the Perron family of all SK--metrics on $G$, then $\lambda_{\Phi }(z)\, |dz|$ is the unique maximal conformal metric with curvature $\le -4$, i.\:\!e.~the hyperbolic metric $\lambda_G(z)\, |dz|$ for $G$. In particular, it follows that $\lambda_G(z)\, |dz| \le \lambda_D(z)\,|dz|$ for $z \in D$, if $D\subseteq G$. \smallskip \begin{theorem}\label{thm:perron2} Let $E=(z_j)$ be a sequence of pairwise distinct points in $G$ and let $(m_j)$ be a sequence of positive integers.
Let \begin{equation*} \Phi:= \left\{ \lambda : \lambda(z)\, |dz| \text{ is an SK--metric on $G$ and } \limsup \limits_{z \to z_j} \frac{\lambda(z)}{|z-z_j|^{m_j}} < + \infty \text{ for all } j \right\} \, . \end{equation*} If $\Phi \not= \emptyset$, then \begin{equation*} \lambda_{\Phi}(z):= \sup_{\lambda \in \Phi} \lambda(z)\, , \quad z \in G\, , \end{equation*} induces a regular conformal pseudometric on $G$ with $\kappa_{\lambda_{\Phi}}(z)=-4$ for all $z \in G \backslash E$. Furthermore, $\lambda_{\Phi}(z)\, |dz|$ has a zero of order $m_j$ at $z_j$ for all $j$. \end{theorem} Theorem \ref{thm:perron2} guarantees the existence of a unique maximal conformal pseudometric $\lambda_{\Phi}(z)\, |dz|$ on $G$ with preassigned zeros. In other words, every conformal pseudometric $\lambda(z)\, |dz|$ on $G$ with curvature bounded above by $-4$ which vanishes to at least the prescribed order $m_j$ at each $z_j$ of $E$ is dominated by $\lambda_{\Phi}$, i.\!\;e.~$\lambda\le \lambda_{\Phi}$ in $G$. This just means $\lambda_{\Phi}(z)\, |dz|$ takes the r\^{o}le of the hyperbolic metric and Theorem \ref{thm:perron2} is a refined version of the fundamental theorem respecting zeros. \medskip Since Theorem \ref{thm:perron2} plays an important r\^{o}le later, we include a proof for the convenience of the reader. \medskip {\bf Proof of Theorem \ref{thm:perron2}.} We first note that $\Phi$ is a Perron family of SK--metrics on $G\backslash E$. Thus $\lambda_{\Phi}(z)$ is well--defined on $G$ and induces a regular conformal pseudometric on $G$ with $\kappa_{\lambda_{\Phi}}(z)=-4$ for $z \in G \backslash E$, see Theorem \ref{thm:perron0}. \smallskip Pick $z_j \in E$ and choose an open disk $K:=K_{r_j}(z_j)=\{z : |z-z_j|< r_j\}$ such that $K$ is compactly contained in $G$ and $K \cap (E\backslash \{z_j\}) = \emptyset$. \smallskip To prove that $\lambda_{\Phi}$ has a zero of order $m_j$ at $z_j$ we first show that $\lambda_{\Phi} \in \Phi$. Note that there exists by the fundamental theorem some constant $c$ such that $\lambda(z) \le c,\, z \in K, $ for all $\lambda \in \Phi$. We now define on $K$ the function \begin{equation*} \sigma(z):=c\, \left( \frac{|z-z_j|}{r_j} \right)^{m_j}\, . \end{equation*} For a fixed $\lambda \in \Phi$ we consider the nonnegative function $s(z):=\log^+\left(\lambda(z)/\sigma(z)\right)$ on $K \backslash \{z_j\}$. Observe that $s$ is subharmonic on $K\backslash \{z_j\}$ and since $s$ is bounded at $z_j$ it has a subharmonic extension to $K$. By construction $ \limsup _{z \to \zeta} s(z)=0$ for all $\zeta \in \partial K$. Hence, $\lambda(z) \le \sigma(z) $ for $z \in K$. As this holds for every $\lambda \in \Phi$ we obtain $\lambda_{\Phi}(z) \le \sigma(z)$ for $z \in K$. Thus we conclude that $\lambda_{\Phi} \in \Phi$. \smallskip By Theorem \ref{thm:existence2} there exists a regular conformal metric $\mu(z)\, |dz|$ on $K$ with curvature \begin{equation*} \kappa_{\mu}(z)=-4\, \left( \frac{|z-z_j|}{r_j} \right)^{2m_j} \end{equation*} such that $\mu$ is continuous on the closure $\overline{K}$ and $\mu\equiv \lambda_{\Phi}$ on $\partial K$. Thus \begin{equation*} \nu(z):= \left( \frac{|z-z_j|}{r_j} \right)^{m_j}\, \mu(z) \end{equation*} induces a regular conformal pseudometric on $K$ with constant curvature $-4$, $\nu$ is continuous on $\overline{K}$ and $\nu\equiv \lambda_{\Phi}$ on $\partial K$. \smallskip We are now going to show that $\lambda_{\Phi} (z)= \nu(z)$ for $z \in K$. To do this, we define $\tilde{s}(z):= \log^+( \lambda_{\Phi}(z)/\nu(z) )$ for $z \in K$.
Arguing as above, we conclude that $\tilde{s}$ is a nonnegative subharmonic function on $K$. The boundary condition on $\nu$ implies $\lim_{z \to \zeta} \tilde{s}(z)=0$ for all $\zeta \in \partial K$. So $\lambda_{\Phi}(z) \le \nu(z)$ for $z \in K$. On the other hand, since $\lambda_{\Phi} \in \Phi$, the gluing lemma (Lemma \ref{lem:gluing}) guarantees that \begin{equation*} \tau(z):= \begin{cases} \max\{\lambda_{\Phi}(z), \nu(z) \}\, , & z \in K\, ,\\[2mm] \lambda_{\Phi}(z)\, , & z \in G\backslash K, \end{cases} \end{equation*} belongs to $\Phi$. Thus $\nu(z)\le \lambda_{\Phi}(z)$ for $z \in K$. Consequently, $\lambda_{\Phi}(z) \, |dz|$ has a zero of order $m_j$ at $z_j$, which completes the proof. \hfill{$\blacksquare$} \medskip We conclude this section with a result, similar to the case of equality in the fundamental theorem, that is a ``strong version of Ahlfors' lemma'', see \cite[\S 7]{Hei62} and \cite{Chen2001,KR2008,Min1987,Roy1986}. \begin{lemma}\label{lem:gen_max_3} Let ${\cal C}$ be a sequence of points in $G$ and let $\lambda(z)\, |dz|$ and $\mu(z)\, |dz|$ be conformal pseudometrics on $G$ with constant curvature $-4$. Suppose ${\cal C}$ is the zero set of $\mu(z)\, |dz|$ and $\lambda(z)\le \mu(z)$ for all $z \in G$. If \begin{equation}\label{eq:equal} \lim_{z\to z_0} \frac{\lambda(z)}{\mu(z)}=1 \end{equation} for some $z_0 \in G$, then $\lambda\equiv \mu$. \end{lemma} \smallskip {\bf Proof.} We observe that we can proceed as in the case of ${\cal C}=\emptyset$ if (\ref{eq:equal}) is fulfilled for $z_0 \in G \backslash {\cal C}$ as well as if (\ref{eq:equal}) is valid for some $z_0\in {\cal C}$ provided that the function $z \mapsto \lambda(z)/\mu(z)$ has a twice continuously differentiable extension to a neighborhood of $z_0$. \smallskip To see this let $\nu(z)\, |dz|$ be a conformal pseudometric on some open disk $K:=K_r(z_0)$ which has constant curvature $-4$ on $K \backslash \{z_0\}$ and a zero of order $m_0$ at $z_0$. We can further suppose that $\nu$ is continuous on $\overline{K}$. Now let $\tilde{\nu}$ be the continuous extension of $\nu(z)\,|z-z_0|^{-m_0}$ to $\overline {K}$. We will show that $\tilde{\nu}$ is twice continuously differentiable on $K$. For this we note that $\tilde{\nu}$ induces a regular conformal metric on $K\backslash \{z_0\}$ with curvature $\kappa_{\tilde{\nu}}(z)=-4 \, |z-z_0|^{2m_0}$. By Theorem \ref{thm:existence2} there exists a regular conformal metric $\tau(z)\, |dz|$ on $K$ with curvature $\kappa_{\tau}(z) = -4\, |z-z_0|^{2m_0}$ which is continuous on $\overline{K}$ and satisfies $\tau(\zeta)=\tilde{\nu}(\zeta)$ for all $\zeta \in \partial K$. Then the nonnegative function $s(z):=\log^+(\tilde{\nu}(z)/\tau(z))$ is subharmonic on $K \backslash \{z_0\}$ and, because $s$ is bounded near $z_0$, it extends to a subharmonic function on $K$. Since, by construction, $\limsup_{z \to \zeta} s(z)=0$ for all $\zeta \in \partial K$ we deduce that $\tilde{\nu}(z) \le \tau(z)$ for all $z \in K$. Switching the r\^{o}les of $\tilde{\nu}$ and $\tau$, we get $\tilde{\nu}\equiv \tau$. \smallskip Now suppose that (\ref{eq:equal}) holds for some $z_0 \in {\cal C}$. If $m_0$ denotes the multiplicity of $z_0$ in ${\cal C}$, then (\ref{eq:equal}) implies that $\lambda(z)\, |dz|$ and $\mu(z)\, |dz|$ have a zero of order $m_0$ at $z_0$.
Thus $\lambda(z)\, |dz|$ and $\mu(z)\, |dz|$ enjoy the same properties as $\nu(z)\, |dz|$ and the desired result follows.\hfill{$\blacksquare$} \section{On the Berger--Nirenberg problem for planar domains} \label{sec:Berger_Nirenberg_problem} \subsection{Results} \label{sec:results} Suppose $D$ is a regular\footnote{i.\!\;e.~there exists Green's function for $D$ which vanishes continuously on $\partial D$.} and bounded domain and $k$ a nonnegative, bounded and (locally) H\"older continuous function. Under these assumptions it is well--known that there is always a solution\footnote{A function $u: D \to {\mathbb{R}}$ is called a solution to (\ref{eq:curvature1}) on $D$ if $u \in C^2(D)$ and $u$ satisfies (\ref{eq:curvature1}) in $D$.} to the Gauss curvature equation \begin{equation}\label{eq:curvature1} \Delta u= k(z)\, e^{2u} \end{equation} on $D$. On the other hand, if $D$ or $k$ is unbounded, there might be no solution to (\ref{eq:curvature1}). For example, take $D={\mathbb{C}}$ and $k(z) = 4 \,|f(z)|^2$ for some entire function $f\not\equiv 0$. By Remark \ref{rem:liouville}, any solution $u$ would be of the form \begin{equation*} u(z)=\log \left(\frac{1}{|f(z)|}\, \frac{|g'(z)|}{1-|g(z)|^2} \right)\,, \quad z \in {\mathbb{C}}\,\! , \end{equation*} for some analytic function $g: {\mathbb{C}} \to {\mathbb{D}}$. The fact that a bounded entire function is constant would then imply that $u \equiv -\infty$, violating the fact that $u$ is a solution to (\ref{eq:curvature1}). \medskip In our first result we give for regular and bounded domains $D$ necessary as well as sufficient conditions on the function $k$ for the existence of a solution to (\ref{eq:curvature1}) on $D$. In the following $g_D$ denotes Green's function for $D$. \begin{theorem}\label{thm:sol1} Let $D$ be a bounded and regular domain and let $k$ be a nonnegative locally H\"older continuous function on $D$. \begin{itemize} \item[(1)] If for some (and therefore for every) $z_0 \in D$ \begin{equation*} \iint \limits_{D} g_D(z_0, \xi)\, k(\xi) \, d\sigma_{\xi} < + \infty\, , \end{equation*} then (\ref{eq:curvature1}) has a solution $u : D \to {\mathbb{R}}$, which is bounded from above. \item[(2)] If (\ref{eq:curvature1}) has a solution $u: D \to {\mathbb{R}}$ which is bounded from below and has a harmonic majorant, then \begin{equation*} \iint \limits_{D} g_D(z, \xi) \, k(\xi) \, d\sigma_{\xi} < + \infty \end{equation*} for all $z \in D$. \item[(3)] There exists a bounded solution $u:D \to {\mathbb{R}}$ to (\ref{eq:curvature1}) if and only if \begin{equation*} \sup \limits_{z \in D} \iint \limits_{D} g_D(z, \xi) \, k(\xi)\, d\sigma_{\xi} < + \infty\, . \end{equation*} \end{itemize} \end{theorem} Now let $D= {\mathbb{D}}$ in Theorem \ref{thm:sol1}. Then using the elementary estimate \begin{equation*} \frac{1-|\xi|^2}{2} \le \log \frac{1}{|\xi|} \le \frac{1-|\xi|^2}{2\,|\xi|}\, , \quad 0< |\xi| <1, \end{equation*} for $g_{{\mathbb{D}}}(0,\xi)=-\log|\xi|$ leads to the following equivalent formulation of Theorem \ref{thm:sol1}. \begin{corollary}\label{cor:solutions1} Let $k$ be a nonnegative locally H\"older continuous function on ${\mathbb{D}}$. \begin{itemize} \item[(1)] If \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|\xi|^2 )\, k(\xi) \, d\sigma_{\xi} < + \infty\, , \end{equation*} then (\ref{eq:curvature1}) has a solution $u : {\mathbb{D}} \to {\mathbb{R}}$, which is bounded from above.
\item[(2)] If (\ref{eq:curvature1}) has a solution $u: {\mathbb{D}} \to {\mathbb{R}}$ which is bounded from below and has a harmonic majorant, then \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|\xi|^2 )\, k(\xi) \, d\sigma_{\xi} < + \infty\, . \end{equation*} \item[(3)] There exists a bounded solution $u: {\mathbb{D}} \to {\mathbb{R}}$ to (\ref{eq:curvature1}) if and only if \begin{equation*} \sup \limits_{z \in {\mathbb{D}}} \iint \limits_{{\mathbb{D}}} \log\left| \frac{1- \overline{\xi} z}{z - \xi} \right|\, k(\xi)\, d\sigma_{\xi} < + \infty\, . \end{equation*} \end{itemize} \end{corollary} \bigskip It might be worth making some remarks on Theorem \ref{thm:sol1} and Corollary \ref{cor:solutions1}. Neither Theorem \ref{thm:sol1} nor Corollary \ref{cor:solutions1} is best possible, because (\ref{eq:curvature1}) may indeed have solutions, even if \begin{equation*} \iint \limits_{D} g_{D}(z, \xi)\, k(\xi) \, d\sigma_{\xi} =+ \infty \end{equation*} for some (and therefore for all) $z \in D$. Here is an explicit example. \begin{example} \label{ex:0} For $\alpha \ge 3/2$ define \begin{equation*} \varphi(z)= \frac{1}{(z-1)^{\alpha}} \end{equation*} for $z \in {\mathbb{D}}$ and set $k(z)=4\, |\varphi(z)|^2$ for $z \in {\mathbb{D}}$. Then an easy computation yields \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|z|^2 )\, k(z) \, d\sigma_{z}= +\infty\, . \end{equation*} On the other hand, a straightforward check shows that for every analytic and locally univalent self--map $f$ of ${\mathbb{D}}$ the function \begin{equation*} u_f(z):= \log \left( \frac{1}{|\varphi(z)|} \, \, \frac{|f'(z)|}{1-|f(z)|^2} \right) \end{equation*} is a solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$. \end{example} Observe that in Example \ref{ex:0} the function $k$ is the squared modulus of a holomorphic function. Thus Theorem \ref{thm:d} applies and shows that (\ref{eq:curvature1}) does have solutions on ${\mathbb{D}}$. \bigskip A second remark is that Theorem \ref{thm:sol1} (3) characterizes those functions $k$ for which (\ref{eq:curvature1}) has at least one bounded solution. In particular, when $D={\mathbb{D}}$ and $k=4\,|f|^2$ for some holomorphic function $f$ in ${\mathbb{D}}$, then we have the following connection. \begin{remark}\label{rem:boundedsol} Let $\varphi: {\mathbb{D}} \to {\mathbb{C}}$ be analytic and $k(z)=4 \, |\varphi'(z)|^2$. Then there exists a bounded solution to (\ref{eq:curvature1}) if and only if $\varphi \in BMOA$, where \begin{equation*} BMOA=\left\{ \varphi: {\mathbb{D}} \to {\mathbb{C}} \text{ analytic} \, : \, \sup_{ z \in {\mathbb{D}}} \iint \limits_{{\mathbb{D}}} g_{{\mathbb{D}}}(z, \xi)\, |\varphi'(\xi)|^2 \, d\sigma_{\xi} < + \infty \right \} \end{equation*} is the space of analytic functions of bounded mean oscillation on ${\mathbb{D}}$, see \cite[p.~314/315]{BF2008}. \end{remark} We further note that in Theorem \ref{thm:sol1} and Corollary \ref{cor:solutions1} condition (1) does not imply condition (3). The Gauss curvature equation (\ref{eq:curvature1}) may indeed have solutions but none of the solutions is bounded. For example, choose $\varphi \in H^2\backslash BMOA$, where $H^2$ denotes the Hardy space consisting of the functions $f$ analytic in ${\mathbb{D}}$ for which the integrals \begin{equation*} \int \limits_0^{2\pi} |f(re^{it})|^2\, dt \end{equation*} remain bounded as $r \to 1$. By Littlewood--Paley's identity, cf.~Remark \ref{rem:little_paley}, it follows that $\varphi' \in {\cal A}_1^2$. Now set $k(z)=4\, |\varphi'(z)|^2$.
Then by Theorem \ref{thm:d} the Gauss curvature equation (\ref{eq:curvature1}) does have solutions and, according to Remark \ref{rem:boundedsol}, every solution to (\ref{eq:curvature1}) must be unbounded. \medskip Finally, suppose $k$ is a nonnegative, locally H\"older continuous and radially symmetric function on ${\mathbb{D}}$. Then Corollary \ref{cor:solutions1} allows us to characterize those functions $k$ for which the Gauss curvature equation (\ref{eq:curvature1}) has a solution on ${\mathbb{D}}$ with a harmonic majorant. \begin{corollary}\label{cor:radialsol} Let $k$ be a nonnegative locally H\"older continuous function on ${\mathbb{D}}$ such that $k(\xi)=k(|\xi|)$ for all $\xi \in {\mathbb{D}}$. Then (\ref{eq:curvature1}) has a solution $u:{\mathbb{D}} \to {\mathbb{R}}$ with a harmonic majorant if and only if \begin{equation}\label{eq:k_radsym} \iint \limits_{{\mathbb{D}}} (1-|\xi|^2 )\, k(\xi) \, d\sigma_{\xi} < + \infty\, . \end{equation} \end{corollary} \medskip To illustrate the use of Corollary \ref{cor:solutions1} and Corollary \ref{cor:radialsol}, here is an example. \begin{example}\label{ex:k1} Let $\gamma \in {\mathbb{R}}$, $\gamma \ge 1$, and define for $z \in {\mathbb{D}}$ \begin{alignat*}{1} k_1^{\gamma}(z)= & \frac{1}{(1-|z|^2)^2}\, \frac{1}{\left[\log \left( \frac{e}{1-|z|^2} \right)\right]^{\gamma}}\\[2mm] k_2^{\gamma}(z) = & \frac{1}{(1-|z|^2)^2}\, \frac{1}{\log \left( \frac{e}{1-|z|^2} \right)}\, \frac{1}{\left[\log \left(e \log \left( \frac{e}{1-|z|^2} \right)\right)\right]^{\gamma}}\\[2mm] k_3^{\gamma}(z)= &\frac{1}{(1-|z|^2)^2}\, \frac{1}{\log \left( \frac{e}{1-|z|^2} \right)}\, \frac{1}{\log \left(e \log \left( \frac{e}{1-|z|^2} \right)\right)}\, \frac{1}{\left[\log\left( e \log \left(e \log \left( \frac{e}{1-|z|^2} \right)\right)\right)\right]^{\gamma}}\\[2mm] \text{etc.} \quad \, \,& \end{alignat*} \begin{itemize} \item[(a)] If a nonnegative locally H\"older continuous function $k$ on ${\mathbb{D}}$ satisfies $k(z) \le k_j^{\gamma}(z)$ for all $z \in {\mathbb{D}}$ and some $\gamma > 1$ and $j \in {\mathbb{N}}$, then the Gauss curvature equation (\ref{eq:curvature1}) has a solution on ${\mathbb{D}}$. \item[(b)] If $k$ is a continuous function on ${\mathbb{D}}$ such that $k(z) \ge k_j^1(z)$ for all $z \in {\mathbb{D}}$ and some $j \in {\mathbb{N}}$, then there is no solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$. \end{itemize} \end{example} \smallskip A few comments are in order. First, part (a) of Example \ref{ex:k1} is a consequence of Corollary \ref{cor:solutions1} (1). In fact, a straightforward computation gives \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|z|^2)\, k_j^{\gamma}(z)\, d\sigma_z < +\infty \end{equation*} for every $\gamma >1$ and $j \in {\mathbb{N}}$. \smallskip For the special case that $k$ is an essentially positive\footnote{A locally H\"older continuous function is called essentially positive if there is a strictly increasing sequence $(G_n)$ of relatively compact subdomains $G_n$ of ${\mathbb{D}}$ such that ${\mathbb{D}}=\cup_{n} G_n$ and $k(\zeta)>0$ for $\zeta \in \partial G_n$ for all $n\in {\mathbb{N}}$.}\label{page:ess} function, Example \ref{ex:k1} (a) has been discussed by Kalka and Yang \cite[Theorem 3.1]{KY1993}. This additional hypothesis on $k$ guarantees a solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$ which even tends to $+ \infty$ at the boundary of ${\mathbb{D}}$. For the proof Kalka and Yang use a generalized Perron method.
In particular, the existence of a subsolution\footnote{A function $u: G \to {\mathbb{R}}$ is said to be a subsolution to (\ref{eq:curvature1}) on $G$ if $u \in C^2(G)$ and $\Delta u \ge k(z) \, e^{2\!\; u}$ on $G$.} to the Gauss curvature equation (\ref{eq:curvature1}) on ${\mathbb{D}}$ guarantees a solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$. The authors give for each $\gamma >1$ and $j \in {\mathbb{N}}$ an explicit subsolution to the PDE $\Delta u =k_j^{\gamma}(z)\, e^{2u}$ on ${\mathbb{D}}$. \smallskip Second, statement (b) of Example \ref{ex:k1} is due to Kalka and Yang \cite[Theorem 3.1]{KY1993}. The key step in their proof consists in showing that for each $j \in {\mathbb{N}}$ there is no solution to the Gauss curvature equation $\Delta u =k^1_{j}(z)\, e^{2u}$ on ${\mathbb{D}}$. The assertion of Example \ref{ex:k1} (b) then follows directly by employing their generalized Perron method. For the key step, Kalka and Yang give a quite intricate argument, which relies heavily on Yau's celebrated maximum principle for complete metrics \cite{Yau1975, Yau1978}. A much simpler, almost elementary proof that there is no solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$ for $k(z)=k^1_{j}(z)$, $j \in {\mathbb{N}}$, can be found in \cite{Kra2011b}. This approach also has other ramifications, which are discussed in \cite{Kra2011b}. \smallskip Third, we note that conditions (a) and (b) in Example \ref{ex:k1} do not complement each other. For example, choose $k(z)=|z+1|^{-2} + |z-1|^{-2}$ for $z \in {\mathbb{D}}$. Since $k$ fulfills the hypothesis of Corollary \ref{cor:solutions1} (1) we conclude that (\ref{eq:curvature1}) has a solution on ${\mathbb{D}}$, but condition (a) of Example \ref{ex:k1} is not applicable. Hence Theorem \ref{thm:sol1}, Corollary \ref{cor:solutions1} and Corollary \ref{cor:radialsol} generalize the results of Kalka and Yang. \smallskip Finally, an easy computation shows that \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|\xi|^2 )\, k^1_j(\xi) \, d\sigma_{\xi} = + \infty\, \end{equation*} for $j \in {\mathbb{N}}$. On the other hand, by Example \ref{ex:k1} (b) no solution to the Gauss curvature equation (\ref{eq:curvature1}) on ${\mathbb{D}}$ is possible for $k(z)=k_j^{1}(z)$, $j \in {\mathbb{N}}$. Thus one is inclined to ask whether for a radially symmetric, locally H\"older continuous function $k: {\mathbb{D}} \to [0, +\infty)$ equation (\ref{eq:curvature1}) has no solution on ${\mathbb{D}}$ if and only if \begin{equation*} \iint \limits_{{\mathbb{D}}} (1-|\xi|^2 )\, k(\xi) \, d\sigma_{\xi} = + \infty\, . \end{equation*} \subsection{Proof of Theorem \ref{thm:sol1}} \label{sec:proofs1} To prove Theorem \ref{thm:sol1} we need two results. The first is the solvability of the Dirichlet problem for the Gauss curvature equation (\ref{eq:curvature1}) and the second is a Harnack--type theorem for solutions to (\ref{eq:curvature1}). \begin{theorem}\label{thm:existence2} Let $D$ be a bounded and regular domain, let $k$ be a bounded and nonnegative locally H\"older continuous function on $D$ and let $\tau: \partial D \to {\mathbb{R}}$ be a continuous function. \begin{itemize} \item[(a)] There exists a (unique) function $u \in C(\overline{D}) \cap C^2(D)$ which solves the boundary value problem \begin{equation}\label{eq:ex1} \begin{array}{rclll} \Delta u&=&k(z) \, e^{2 u} &\text{in } & D\!\;,\\[2mm] u &\equiv & \tau & \text{on } & \partial D\!\;.
\end{array} \end{equation} In particular, \begin{equation}\label{eq:ex2} u(z)=h(z)- \frac{1}{2\pi}\iint \limits_{D} g_D(z, \xi)\, k(\xi)\, e^{2\!\;u(\xi)}\, d\sigma_{\xi}\, , \quad z \in D\, , \end{equation} where $h$ is harmonic in $D$ and continuous on $\overline{D}$ satisfying $h \equiv \tau$ on $\partial D$. \item[(b)] If, conversely, a bounded and integrable function $u$ on $D$ satisfies (\ref{eq:ex2}), then $u$ belongs to $C(\overline{D}) \cap C^2(D)$ and solves (\ref{eq:ex1}). \end{itemize} \end{theorem} For a proof of Theorem \ref{thm:existence2} we refer the reader to \cite[p.~286]{Cou1968} and \cite[p.~53--55 $\&$ p.~304]{GT}. \medskip \begin{lemma} \label{cor:monosequence} Let $k$ be a nonnegative locally H\"older continuous function on $G$ and $(u_n)$ be a monotonically decreasing sequence of solutions to (\ref{eq:curvature1}) on $G$. If $\lim_{n \to \infty} u_n(z_0) =-\infty$ for some $z_0 \in G$, then $(u_n)$ converges locally uniformly in $G$ to $-\infty$, otherwise $(u_n)$ converges locally uniformly in $G$ to $u:=\lim_{n \to \infty} u_n$ and $u$ is a solution to (\ref{eq:curvature1}) on $G$. \end{lemma} A proof of Lemma \ref{cor:monosequence} for the special case $k \equiv 4$ can be found in \cite[\S 11]{Hei62}. The proof for the more general situation needs only slight modifications and will therefore be omitted. See also \cite[Proposition 4.1]{MT2002}. \medskip {\bf Proof of Theorem \ref{thm:sol1}.}\\ (1) Let $(D_n)$ be a sequence of relatively compact regular subdomains of $D$ such that $z_0 \in D_1\subset D_2 \subset D_3 \subset \ldots$ and $D =\cup_n \, D_n$. Pick a constant $c \in {\mathbb{R}}$. For each $n$, let $u_n: D_n\to {\mathbb{R}}$ denote the solution to the boundary value problem \[ \begin{array}{rclll} \Delta u&=&k(z) \, e^{2 u} &\text{in } & D_n\, ,\\[2mm] u &\equiv & c & \text{on } & \partial D_n\, . \end{array} \] Thus we can write \begin{equation*} u_n(z)=c-\frac{1}{2 \pi} \iint \limits_{D_n}g_{D_n}(z, \xi)\, k(\xi) \, e^{2u_n(\xi)} \, d\sigma_{\xi}\, , \quad \, z \in D_n\, . \end{equation*} Since $u_n$ is subharmonic on $D_n$, so that $u_n \le c$ on $D_n$ by the maximum principle, and since $g_{D_n}(z, \xi)\le g_D(z, \xi)$ for all $z, \xi \in D_n$, we obtain \begin{equation}\label{eq:1} \begin{split} u_n(z_0)&=c-\frac{1}{2 \pi} \iint \limits_{D_n} g_{D_n}(z_0, \xi) \, k(\xi) \, e^{2u_n(\xi)} \, d\sigma_{\xi}\\[2mm] &\ge c- e^{2c } \, \iint \limits_{D} g_D(z_0, \xi) \, k(\xi) \, d\sigma_{\xi} \ge \tilde{c} \end{split} \end{equation} for some finite constant $\tilde{c}$. Letting $n \to \infty$ yields \begin{equation*} \liminf\limits_{n \to \infty} u_n(z_0) > - \infty\, . \end{equation*} Note that the boundary condition on $u_n$, together with the comparison principle, implies that $(u_n)$ is a monotonically decreasing sequence of solutions to $\Delta u=k(z)\, e^{2\!\;u}$. Thus Lemma \ref{cor:monosequence} applies and \begin{equation*} u(z):= \lim_{n \to \infty} u_n(z)\, , \quad z \in D\,, \end{equation*} is a solution to (\ref{eq:curvature1}) on $D$, which is bounded above by construction. \smallskip (2) Let $u : D \to {\mathbb{R}}$ be a solution to (\ref{eq:curvature1}) which is bounded from below and has a harmonic majorant. Then by the Poisson--Jensen formula \begin{equation*} u(z)=h(z)-\frac{1}{2 \pi} \iint \limits_{D}g_D(z, \xi)\, k(\xi)\, e^{2u(\xi)} \, d\sigma_{\xi} \end{equation*} for $z \in D$, where $h$ is the least harmonic majorant of $u$ on $D$.
As $u(z)>c> -\infty $ for $z \in D$, we get for fixed $z \in D$ \begin{equation*} \begin{split} \frac{1}{2 \pi} \iint \limits_{D}g_D(z, \xi)\, k(\xi) \, d\sigma_{\xi} &\le e^{-2 c} \, \frac{1}{2 \pi} \iint \limits_{D}g_D(z, \xi)\, k(\xi)\, e^{2 u(\xi)} \, d\sigma_{\xi}\\[2mm] &= e^{-2 c}\, \big(h(z)-u(z)\big ) < + \infty\, , \end{split} \end{equation*} as desired. \smallskip (3) Let $u: D \to {\mathbb{R}}$ be a bounded solution to (\ref{eq:curvature1}), i.\!\;e.~$|u(z)| \le c$ for $z \in D$, where $c$ is some positive constant. Hence \begin{equation*} \begin{split} \frac{1}{2 \pi} \iint \limits_{D} g_D(z, \xi) \, k(\xi)\, d\sigma_{\xi} &\le e^{2 c}\, \frac{1}{2 \pi} \iint \limits_{D} g_D(z, \xi) \, k(\xi)\, e^{2 u(\xi)}\, d\sigma_{\xi}\\[2mm]& = e^{2c} \, \big( h(z) -u(z)\big) \le 2 c\, e^{2c} \end{split} \end{equation*} for $z \in D$, where $h$ is the least harmonic majorant of $u$ on $D$. \smallskip Conversely, suppose that \begin{equation}\label{eq:char1} \sup \limits_{z \in D} \iint \limits_{D} g_D(z, \xi) \, k(\xi)\, d\sigma_{\xi} < + \infty\, . \end{equation} Let $(u_n)$ be the sequence constructed in (1). Then \begin{equation*} u(z):=\lim \limits_{n \to \infty} u_n(z)\, , \quad z \in D, \end{equation*} is a solution to (\ref{eq:curvature1}), which is bounded from above. Inequality (\ref{eq:1}) combined with (\ref{eq:char1}) shows that there is a constant $c_1$ such that $u_n(z) \ge c_1$ for all $ z \in D$ and all $n$, so $u$ is also bounded from below. \hfill{$\blacksquare$} \subsection{Proof of Theorem \ref{thm:d} and Corollary \ref{cor:radialsol}} \label{sec:proofs2} The proof of Theorem \ref{thm:d} relies on Liouville's theorem (Remark \ref{rem:liouville}) and the Littlewood--Paley identity \cite[p.~178]{Sha1993}, which we first recall. \begin{remark}[Littlewood--Paley's identity]\label{rem:little_paley} Let $\varphi: {\mathbb{D}} \to {\mathbb{C}}$ be a holomorphic function. Then we have \begin{equation*} \lim_{r \to 1}\, \frac{1}{2\pi} \int \limits_0^{2\pi} |\varphi(re^{it})|^2\, dt= |\varphi(0)|^2 + \frac{2}{\pi} \iint \limits_{{\mathbb{D}}} \log \frac{1}{|z|} \, |\varphi'(z)|^2\, d\sigma_{z}\, . \end{equation*} This in particular shows that \begin{equation*} {\cal A}_1^2=\{\varphi': \varphi \in H^2 \}\, . \end{equation*} \end{remark} \bigskip {\bf Proof of Theorem \ref{thm:d}.} We will show that for an analytic function $\varphi$ on ${\mathbb{D}}$ the Gauss curvature equation \begin{equation}\label{eq:curvature_holomorph} \Delta u = 4\, |\varphi(z)|^2 e^{2u} \end{equation} has a solution $u:{\mathbb{D}} \to {\mathbb{R}}$ if and only if $\varphi(z)=\varphi_1(z)\, \varphi_2(z)$ for some function $\varphi_1 \in {\cal A}_1^2$ and a nonvanishing analytic function $\varphi_2: {\mathbb{D}} \to {\mathbb{C}}$. \medskip We first note that it suffices to consider the case $\varphi \not \equiv 0$. Now suppose $\varphi \not \equiv 0$ and $u: {\mathbb{D}} \to {\mathbb{R}}$ is a solution to (\ref{eq:curvature_holomorph}). Then, by Liouville's theorem, \begin{equation*} u(z)=\log \left( \frac{1}{|\varphi(z)|}\, \frac{|f'(z)|}{1-|f(z)|^2} \right)\,, \quad z \in {\mathbb{D}}\, , \end{equation*} for some analytic self--map $f$ of ${\mathbb{D}}$. Since $\varphi$ and $f'$ have the same zeros, we can write $\varphi(z)=f'(z)\, \varphi_2(z)$, where $\varphi_2$ is analytic and zerofree in ${\mathbb{D}}$. From Remark \ref{rem:little_paley} it follows that $f' \in {\cal A}_1^2$.
\smallskip Conversely, let $\varphi(z)=\varphi_1(z)\, \varphi_2(z)$, where $\varphi_1 \in {\cal A}_1^2$, $\varphi_1 \not \equiv 0$, and $\varphi_2: {\mathbb{D}} \to {\mathbb{C}}\backslash \{ 0 \}$ is an analytic function. Then Corollary \ref{cor:solutions1} (1) ensures a solution $u$ to $\Delta u= 4\, |\varphi_1(z)|^2\, e^{2\!\;u}$ on ${\mathbb{D}}$. By Liouville's theorem there exists an analytic function $f:{\mathbb{D}} \to {\mathbb{D}}$ such that \begin{equation*} u(z)=\log \left(\frac{1}{|\varphi_1(z)|}\, \frac{|f'(z)|}{1-|f(z)|^2}\right)\,, \quad z \in {\mathbb{D}}\, . \end{equation*} Hence the function $f'/\varphi_1$ is analytic and zerofree in ${\mathbb{D}}$. So, \begin{equation*} \tilde{u}(z):= \log \left(\frac{1}{|\varphi(z)|}\, \frac{|f'(z)|}{1-|f(z)|^2} \right)\,, \quad z \in {\mathbb{D}}\, , \end{equation*} is well--defined and a solution to $\Delta u= 4\, |\varphi(z)|^2\, e^{2\!\;u}$ on ${\mathbb{D}}$.\hfill{$\blacksquare$} \medskip If $k$ is a nonnegative, locally H\"older continuous and radially symmetric function on ${\mathbb{D}}$, then the mere existence of a solution to equation (\ref{eq:curvature1}) on ${\mathbb{D}}$ with a harmonic majorant already yields (\ref{eq:k_radsym}). Thus, in this special case the hypothesis in part (2) of Corollary \ref{cor:solutions1} can be slightly relaxed. \medskip {\bf Proof of Corollary \ref{cor:radialsol}.} By Corollary \ref{cor:solutions1} (1) condition (\ref{eq:k_radsym}) ensures a solution to (\ref{eq:curvature1}) with a harmonic majorant. \smallskip For the converse, let $u$ be a solution to (\ref{eq:curvature1}) on ${\mathbb{D}}$ with a harmonic majorant. Then it follows by Green's theorem and Jensen's inequality that the function \begin{equation*} v(z):=\frac{1}{2\pi} \int \limits_{0}^{2\pi} u(|z|e^{it})\, dt\, ,\quad z \in {\mathbb{D}}\, , \end{equation*} is a subsolution to (\ref{eq:curvature1}), i.\!\;e.~$v \in C^2({\mathbb{D}})$ and satisfies $\Delta v \ge k(z) \, e^{2v}$ on ${\mathbb{D}}$. Note that $v$ has a harmonic majorant, see \cite[Chapter I, Theorem 6.7]{Gar2007}. As $v$ is subharmonic in ${\mathbb{D}}$, we apply the Poisson--Jensen formula and get with the help of Jensen's inequality \begin{equation*} \begin{split} h(0)-v(0)&= \frac{1}{2\pi} \iint \limits_{{\mathbb{D}}}\log\frac{1}{|\xi|}\, \Delta v(\xi) \, d \sigma _{\xi} \ge \frac{1}{2\pi} \iint \limits_{{\mathbb{D}}} \log\frac{1}{|\xi|}\, k(\xi)\, e^{2\!\;v(\xi)} \, d \sigma _{\xi}\\[2mm] & \ge e^{2\!\;v(0)} \, \frac{1}{2\pi} \iint \limits_{{\mathbb{D}}} \log\frac{1}{|\xi|}\, k(\xi) \, d \sigma _{\xi}\, , \end{split} \end{equation*} where $h$ is the least harmonic majorant of $v$ on ${\mathbb{D}}$. Hence (\ref{eq:k_radsym}) holds. \hfill{$\blacksquare$} \section{Maximal conformal pseudometrics and maximal functions} \label{sec:maximal} Let ${\cal C}=(z_1, \ldots, z_1, z_2, \ldots, z_2, \ldots)$, $z_j \not=z_k$ for $j\not=k$, be a sequence in ${\mathbb{D}}$ and denote by $m_j$ the multiplicity of $z_j$ in ${\cal C}$. Suppose that the family $\Phi_{\cal C}$ of all SK--metrics $\lambda(z)\, |dz|$ which vanish at least on ${\cal C}$, i.\!\;e.~ \begin{equation*} \limsup_{z\to z_j} \frac{\lambda(z)}{|z-z_j|^{m_j}}<+ \infty \end{equation*} for all $j$, is not empty. Then, by Theorem \ref{thm:perron2}, \begin{equation*} \lambda_{max}(z)\, |dz|:= \sup_{\lambda \in \Phi_{\cal C} } \lambda(z) \,|dz|\, , \quad z \in {\mathbb{D}}\, , \end{equation*} defines the unique maximal conformal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$.
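\medskip We note in passing that this is consistent with the extremal case ${\cal C}=\emptyset$, in which $\lambda_{max}(z)\, |dz|$ is just the hyperbolic metric $\lambda_{{\mathbb{D}}}(z)\, |dz|$ with $\lambda_{{\mathbb{D}}}(z)=1/(1-|z|^2)$: writing, as usual, the Gauss curvature of a conformal metric $\lambda(z)\, |dz|$ as $\kappa_{\lambda}(z)=-\lambda(z)^{-2}\, \Delta \log \lambda(z)$, a direct computation gives \begin{equation*} \Delta \log \frac{1}{1-|z|^2}= \frac{4}{(1-|z|^2)^2}\, , \end{equation*} so that indeed $\kappa_{\lambda_{{\mathbb{D}}}}\equiv -4$.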
Even though the existence of $\lambda_{max}(z)\, |dz|$ is guaranteed if ${\cal C}$ is the zero set of an ${\cal A}_1^2$ function, see Theorem \ref{thm:main}, we lack explicit examples. One way round this problem is Liouville's theorem. In particular, we are interested in the developing map $F$ of a maximal conformal metric $\lambda_{max}(z)\, |dz|$, i.\!\;e. \begin{equation*} \lambda_{max}(z)= \frac{|F'(z)|}{1-|F(z)|^2}\, ,\quad z \in {\mathbb{D}}\, , \end{equation*} for some analytic function $F: {\mathbb{D}} \to {\mathbb{D}}$. The analytic functions which represent maximal conformal metrics are of some interest in their own right as they are natural generalizations of the unit disk automorphisms, i.\!\;e.~the developing maps of the Poincar\'e metric $\lambda_{{\mathbb{D}}}(z)\, |dz|$. Thus the following definition might be appropriate. \begin{definition}\label{def:maximal-function} Let ${\cal C}$ be a sequence in ${\mathbb{D}}$ and assume that $\lambda_{max}(z)\, |dz|$ is the maximal conformal pseudometric for ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$. Then every developing map $F$ of $\lambda_{max}(z)\, |dz|$ is called maximal function with critical set ${\cal C}$. \end{definition} Note that by Liouville's theorem a maximal function is uniquely determined by its critical set up to postcomposition with a unit disk automorphism. \medskip As an immediate consequence of Theorem \ref{thm:0}, i.\!\;e.~every maximal function is an indestructible Blaschke product, we obtain the equivalence of statements (a) and (b) in Theorem \ref{thm:main}, which we now restate for convenience of reference. \medskip \begin{corollary} \label{cor:1} Let ${\cal C}$ be a sequence of points in ${\mathbb{D}}$. Then the following are equivalent. \begin{itemize} \item[(a)] There exists an analytic self--map of ${\mathbb{D}}$ with critical set ${\cal C}$. \item[(b)] There exists an indestructible Blaschke product with critical set ${\cal C}$. \end{itemize} \end{corollary} {\bf Proof.} \\ (a) $\Rightarrow$ (b): Let $f :{\mathbb{D}} \to {\mathbb{D}}$ be analytic with critical set ${\cal C}$. Then \begin{equation*} \lambda(z)\, |dz|= \frac{|f'(z)|}{1-|f(z)|^2}\, |dz| \end{equation*} is a conformal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$. So $\lambda \in \Phi_{\cal{C}}$\footnote{Recall $\Phi_{\cal C}$ denotes the family of all SK--metrics which vanish at least on ${\cal C}$.} and $\Phi_{\cal C}$ is not empty. This legitimizes the use of Theorem \ref{thm:perron2} which gives a maximal conformal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$. Thus there is a maximal function $F:{\mathbb{D}} \to {\mathbb{D}}$ with critical set ${\cal C}$, which is an indestructible Blaschke product by Theorem \ref{thm:0}. \smallskip (b) $\Rightarrow$ (a): This implication is trivial, since every Blaschke product is an analytic self--map of ${\mathbb{D}}$. \hfill{$\blacksquare$} \medskip \label{page:2} Theorem \ref{thm:0} and Corollary \ref{cor:1} merit some comment. First, if ${\cal C} \subset {\mathbb{D}}$ is a finite sequence, then Heins observed that the maximal functions for ${\cal C}$ are precisely the finite Blaschke products with critical set ${\cal C}$, cf.~\cite[\S 29]{Hei62}. Heins' proof is purely topological and splits into two parts. In the first he shows that an analytic self--map of ${\mathbb{D}}$ which has constant finite valence, i.\!\;e.~a finite Blaschke product, is a maximal function.
In the second he establishes the following theorem: \smallskip \begin{satz}\label{thm:finite_blaschke} Let ${\cal C}$ be a finite sequence in ${\mathbb{D}}$ that contains $n$ points. Then there exists a finite Blaschke product $F$ with critical set ${\cal C}$. $F$ is unique up to postcomposition with a unit disk automorphism. Moreover, $F$ has degree $m=n+1$. \end{satz} \smallskip The uniqueness statement in Theorem \ref{thm:finite_blaschke} follows easily from Nehari's generalization of Schwarz' lemma \cite[Corollary to Theorem 1]{Neh1946}. To settle the existence part Heins showed that the set of critical points of all finite Blaschke products of degree $n+1$, which is clearly closed, is also open in the polydisk ${\mathbb{D}}^n$ by applying Brouwer's fixed point theorem. Similar proofs of Theorem \ref{thm:finite_blaschke} can also be found in papers by Wang \& Peng \cite{WP79} and Zakeri \cite{Z96}. A completely different approach to Theorem \ref{thm:finite_blaschke} via Circle Packing is due to Stephenson, see \cite[Lemma 13.7 and Theorem 21.1]{Ste2005}. Stephenson builds discrete finite Blaschke products with prescribed branch set and shows that under refinement these discrete Blaschke products converge locally uniformly in ${\mathbb{D}}$ to a classical Blaschke product with the desired critical points. \medskip \label{page:3} A further remark is that if ${\cal C}$ is finite, then Corollary \ref{cor:1} does not directly imply the existence statement of Theorem \ref{thm:finite_blaschke}. However, as a consequence of Theorem \ref{thm:0} we can deduce that if ${\cal C}$ is finite, then every maximal function for ${\cal C}$ is a finite Blaschke product with critical set ${\cal C}$, see Theorem \ref{thm:finiteC} (a). Now applying Nehari's uniqueness result we also arrive at Heins' characterization of maximal functions for finite sequences ${\cal C}$, but in a completely different way. \medskip Finally, Corollary \ref{cor:1} is discussed in \cite[Theorem 2.1]{KR} for the case that ${\cal C} \subset {\mathbb{D}}$ is a Blaschke sequence. There the idea of the proof is the following. In a first step the existence of a solution $u: {\mathbb{D}} \to {\mathbb{R}}$ to the boundary value problem \begin{alignat*}{2} \Delta u &= |B(z)|^2\, e^{2u} & \quad &\text{ in } {\mathbb{D}}\, ,\\ \lim_{z \to \zeta} u(z)&=+ \infty & \quad &\text{ for every } \zeta \in \partial {\mathbb{D}}\, , \end{alignat*} is guaranteed, where $B$ is a Blaschke product whose {\it zero set} consists precisely of the set ${\cal C}$. Then as a consequence of Liouville's theorem there is an analytic self--map $f$ of ${\mathbb{D}}$ with critical set ${\cal C}$ which represents the solution $u$. The boundary condition on $u$ then implies that $f$ is an inner function and a result by Frostman (cf.~\cite[Chapter II, Theorem 6.4]{Gar2007}) gives the desired Blaschke product with critical set ${\cal C}$. \smallskip All methods which were employed to prove Theorem \ref{thm:0} and Corollary \ref{cor:1} for finite and Blaschke sequences ${\cal C}$ seem too restrictive to be adaptable to the general situation, since they heavily rely on the special choice of ${\cal C}$. Our approach to Theorem \ref{thm:0} is exclusively based on conformal pseudometrics with curvature bounded above by $-4$, i.\!\;e.~SK--metrics.
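\medskip Before giving the proof of Theorem \ref{thm:0}, it may be helpful to record the simplest explicit example, which the reader can keep in mind below. For the one--point sequence ${\cal C}=(0)$, Heins' observation shows that the maximal functions are precisely the finite Blaschke products with critical set $(0)$; up to postcomposition with a unit disk automorphism this is $F(z)=z^2$, so that \begin{equation*} \lambda_{max}(z)=\frac{|F'(z)|}{1-|F(z)|^2}=\frac{2\, |z|}{1-|z|^4}\, , \quad z \in {\mathbb{D}}\, . \end{equation*} In particular, \begin{equation*} \frac{\lambda_{max}(z)}{\lambda_{{\mathbb{D}}}(z)}=\frac{2\, |z|}{1+|z|^2} \to 1 \quad \text{ as } |z| \to 1\, , \end{equation*} in accordance with the boundary behavior of maximal conformal pseudometrics with finite zero sets which occurs in the proof of Theorem \ref{thm:finiteC} (a) below.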
\medskip {\bf Proof of Theorem \ref{thm:0}.} Let $F: {\mathbb{D}} \to {\mathbb{D}}$ be a maximal function with critical set \begin{equation*} {\cal C}=(\underbrace{z_1, \ldots, z_1}_{m_1 -\text{times}},\underbrace{z_2, \ldots, z_2}_{m_2 -\text{times}} , \ldots ) \end{equation*} and let \begin{equation*} \lambda_{max}(z) \, |dz|= \frac{|F'(z)|}{1-|F(z)|^2} \, |dz|\, , \quad z \in {\mathbb{D}}\, , \end{equation*} be the maximal conformal pseudometric of constant curvature $-4$ with zero set ${\cal C}$. In particular, if $\lambda(z)\, |dz|$ is a conformal pseudometric on ${\mathbb{D}}$ with curvature bounded above by $-4$ which satisfies \begin{equation*} \limsup_{z \to z_j} \frac{\lambda(z)}{|z-z_j|^{m_j}} < + \infty \end{equation*} for all $j$, then $\lambda(z)\le \lambda_{max}(z)$ for $z \in {\mathbb{D}}$. \smallskip To show that $F$ is a Blaschke product, we use its canonical factorization, i.\!\;e.~$F=B\, S\, O$, where $B$ is a Blaschke product, $S$ a singular function and $O$ an outer function, see \cite[Chapter II, Corollary 5.7]{Gar2007}. \smallskip Assume first that $S\not \equiv \eta$ for every constant $\eta$ with $|\eta|=1$. Then by a result due to Frostman \cite[Chapter II, Theorem 6.2]{Gar2007} there is some $\zeta \in \partial {\mathbb{D}}$ such that the angular limit \begin{equation*} \angle \lim_{z \to \zeta} S(z)=0\, . \end{equation*} In particular, $\angle \lim_{z \to \zeta} F(z)=0$. Choose $ \alpha \in (-1,0)$ and let \begin{equation*} \lambda_{\alpha}(w)\, |dw|:= \frac{(\alpha +1)}{|w|^{|\alpha|}} \, \frac{ 1}{1-|w|^{2\,(\alpha +1)}}\, |dw|\, . \end{equation*} We note that $\lambda_{\alpha}(w)\, |dw|$ is a conformal metric of constant curvature $-4$ on ${\mathbb{D}}':={\mathbb{D}} \backslash \{ 0 \}$. Thus the pullback of $\lambda_{\alpha}(w)\, |dw|$ via $F$ restricted to ${\mathbb{D}} \backslash {\cal C}$ defines a conformal metric on ${\mathbb{D}}\backslash {\cal C}$ with constant curvature $-4$, i.\!\;e. \begin{equation*} \nu(z)\, |dz|:=\frac{(\alpha +1)}{|F(z)|^{|\alpha|}}\, \frac{|F'(z)|}{1-|F(z)|^{2( \alpha +1)}}\, |dz|\, . \end{equation*} By Lemma \ref{lem:new_sk} we conclude that \begin{equation*} \begin{split} \sigma(z)\, |dz|&:=\nu(z) \, |B(z)|^{|\alpha|} \,|O(z)|^{|\alpha|}\,|dz|\\[2mm] & =\frac{(\alpha +1)}{|S(z)|^{|\alpha|}}\, \frac{|F'(z)|}{1-|F(z)|^{2( \alpha +1)}} \, |dz| \end{split} \end{equation*} is a conformal metric on ${\mathbb{D}} \backslash {\cal C}$ with curvature bounded above by $-4$. Further, by construction $\sigma$ is a continuous function on ${\mathbb{D}}$. Now Lemma \ref{lem:hebsing} guarantees that $\sigma(z)\, |dz|$ is a conformal pseudometric on ${\mathbb{D}}$ with curvature bounded above by $-4$. Obviously, ${\cal C}$ is the zero set of $\sigma(z)\, |dz|$. This immediately implies that \begin{alignat*}{2} \sigma (z) &\le \lambda_{max}(z) &\text{for } z\in {\mathbb{D}} \phantom{\, .} \intertext{ and therefore} \frac{(\alpha +1)}{|S(z)|^{|\alpha|}}\, \frac{1}{1-|F(z)|^{2( \alpha +1)}} &\le \frac{1}{1-|F(z)|^{2}}\, & \quad \text{for } z\in {\mathbb{D}}\, . \end{alignat*} Now letting $z \stackrel{\angle}{\to} \zeta$, we get \begin{equation*} +\infty\stackrel{z \stackrel{\angle}{\to} \zeta}{\longleftarrow} \frac{(\alpha +1)}{|S(z)|^{|\alpha|}}\, \frac{1}{1-|F(z)|^{2( \alpha +1)}} \le \frac{1}{1-|F(z)|^{2}}\stackrel{z \stackrel{\angle}{\to} \zeta}{\longrightarrow} 1 \, , \end{equation*} the desired contradiction. Hence $F(z)=\eta \, B(z)\, O(z)$ for some constant $\eta$ with $|\eta|=1$.
\smallskip Since $B$ and $O$ are bounded analytic functions, there exists by Fatou's theorem a subset $A \subset \partial {\mathbb{D}}$ of full Lebesgue measure such that \begin{equation*} \angle \lim_{ z \to \zeta} B(z) \in \partial {\mathbb{D}} \quad \text{ and } \quad \angle \lim_{ z \to \zeta} O(z) \in \overline{{\mathbb{D}}} \end{equation*} exist for all $\zeta \in A$. Pick a point $\zeta \in A$ and assume $\angle \lim_{ z \to \zeta} O(z)= \beta$ where $0\le |\beta| <1$. Thus $\angle \lim_{ z \to \zeta} |F(z)|= |\beta| $. We now consider the hyperbolic metric $\lambda_{{\mathbb{D}}'}(w)\, |dw|$ on ${\mathbb{D}}'$ with constant curvature $-4$, that is \begin{equation*} \lambda_{{\mathbb{D}}' }(w) \, |dw| = \frac{1}{2} \, \frac{1}{|w| \, \log \frac{1}{|w|}}\, |dw|\, . \end{equation*} Then Lemma \ref{lem:new_sk} and Lemma \ref{lem:hebsing} imply that \begin{equation*} \mu(z)\, |dz|=\frac{1}{2} \, \frac{|F'(z)|}{|F(z)| \, \log \frac{1}{|F(z)|}}\, |B(z)|\, |dz|= \frac{1}{2} \, \frac{|F'(z)|}{|O(z)| \, \log \frac{1}{|F(z)|}}\, |dz| \end{equation*} is an SK--metric on ${\mathbb{D}}$. We further note that \begin{equation*} \limsup_{z \to z_j} \frac{\mu(z)}{|z-z_j|^{m_j}}<+ \infty \end{equation*} for all $j$. Hence $\mu(z) \le \lambda_{max}(z)$ for $z \in {\mathbb{D}}$ and therefore \begin{equation*} \frac{1}{2} \, \frac{1}{|O(z)| \, \log \frac{1}{|F(z)|}} \le \frac{1}{1-|F(z)|^2}\,\quad \text{for } z\in {\mathbb{D}}\, . \end{equation*} Letting $z \stackrel{\angle}{\to} \zeta$ yields \begin{equation*} \frac{1}{2} \, \frac{1}{|\beta| \, \log \frac{1}{|\beta|}} \le \frac{1}{1-|\beta|^2}\,, \end{equation*} violating $\lambda_{{\mathbb{D}}'} (z) > \lambda_{{\mathbb{D}}}(z)$ for all $z \in {\mathbb{D}}'$. Hence $ \angle \lim_{z \to \zeta} |O(z)|=1$ for a.\!\;e.~$\zeta \in \partial {\mathbb{D}}$ and therefore $O\equiv c$ for some constant $c$, $|c|=1$. Thus $F$ is a Blaschke product, as required. \medskip It remains to show that $F$ is an indestructible Blaschke product. We have seen that {\it each} developing map $F$ of $\lambda_{max}(z)\, |dz|$ is a Blaschke product. Thus the result follows by Liouville's theorem (Theorem \ref{thm:liouville}). \hfill{$\blacksquare$} \medskip We next consider maximal functions whose critical sets form finite and Blaschke sequences, respectively. \begin{theorem}\label{thm:finiteC} \begin{itemize} \item[(a)] Let ${\cal C}$ be a finite sequence of $n$ points in ${\mathbb{D}}$. Then every maximal function for $\mathcal{C}$ is a finite Blaschke product of degree $m=n+1$. \item[(b)] If ${\cal C}$ is a Blaschke sequence in ${\mathbb{D}}$ then every maximal function for ${\cal C}$ is an indestructible Blaschke product which has a finite angular derivative at almost every point of $\partial {\mathbb{D}}$. \end{itemize} \end{theorem} {\bf Proof.} Suppose ${\cal C}$ is a finite or Blaschke sequence. Let $F: {\mathbb{D}} \to {\mathbb{D}}$ be a maximal function for ${\cal C}$ and $\lambda_{max}(z)\, |dz|= F^*\lambda_{{\mathbb{D}}}(z)\, |dz|$ be the maximal conformal pseudometric with constant curvature $-4$ and zero set ${\cal C}$. By Theorem \ref{thm:0} the maximal function $F$ is an indestructible Blaschke product. Now, let $B$ be a Blaschke product with zero set ${\cal C}$. Then using Lemma \ref{lem:new_sk} we see that \begin{equation*} \lambda(z)\, |dz| :=|B(z)| \, \lambda_{{\mathbb{D}}}(z) \, |dz| \end{equation*} is a conformal pseudometric on ${\mathbb{D}}$ with curvature bounded above by $-4$ and zero set ${\cal C}$.
Thus \begin{equation*} \lambda(z) \le \lambda_{max}(z) \quad \text{ for } z \in {\mathbb{D}} \end{equation*} and consequently \begin{equation}\label{eq:ab1} |B(z)| \le \frac{\lambda_{max}(z)}{\lambda_{{\mathbb{D}}}(z)} \quad \text{ for all } z \in {\mathbb{D}} \, . \end{equation} \medskip (a) If $B$ is a finite Blaschke product, we deduce from (\ref{eq:ab1}) and the Schwarz--Pick lemma that \begin{equation*} \lim_{z \to \zeta} \frac{\lambda_{max}(z)}{\lambda_{{\mathbb{D}}}(z)}=\lim_{z \to \zeta} \frac{|F'(z)|}{1-|F(z)|^2}\, (1-|z|^2) =1 \quad \text{ for all } \zeta \in \partial {\mathbb{D}} \, . \end{equation*} Applying Theorem \ref{thm:boundary} (see Subsection \ref{sec:boundary}) shows that $F$ is a finite Blaschke product. The branching order of $F$, viewed as a rational function on the Riemann sphere, is clearly $2\!\;n$. Thus, according to the Riemann--Hurwitz formula, see \cite[p.~140]{For1999}, the Blaschke product $F$ has degree $m=n+1$. \medskip (b) Suppose now that ${\cal C}$ is a Blaschke sequence. Since $\angle \lim_{z \to \zeta} |B(z)|=1$ for a.\!\;e.~$\zeta \in \partial {\mathbb{D}}$, it follows from (\ref{eq:ab1}) and the Schwarz--Pick lemma that \begin{equation*} \angle \lim_{z \to \zeta} \frac{\lambda_{max}(z)}{\lambda_{{\mathbb{D}}}(z)}=\angle\lim_{z \to \zeta} \frac{|F'(z)|}{1-|F(z)|^2}\, (1-|z|^2) =1 \quad \text{ for a.\!\;e. } \zeta \in \partial {\mathbb{D}} \, . \end{equation*} Hence, by Corollary \ref{cor:5}, we conclude that $F$ has a finite angular derivative at almost every boundary point of $ {\mathbb{D}}$. \hfill{$\blacksquare$} \medskip As already remarked, Theorem \ref{thm:finiteC} (a) combined with Nehari's uniqueness result \cite[Corollary to Theorem 1]{Neh1946} characterizes maximal functions with finite critical sets. Part (b) of Theorem \ref{thm:finiteC} raises the question of whether Nehari's result extends to indestructible Blaschke products with Blaschke sequences as critical sets and a finite angular derivative at almost every boundary point of ${\mathbb{D}}$. \medskip Combining Theorem \ref{thm:0} and Theorem \ref{thm:finiteC} leads to Corollary \ref{cor:3} which we are going to prove now. \medskip {\bf Proof of Corollary \ref{cor:3}.} The uniqueness assertion is a direct consequence of Lemma \ref{lem:gen_max_3} and Liouville's theorem. \smallskip For the existence part we consider $\lambda(z)\, |dz|:= f^*\lambda_{{\mathbb{D}}}(z)\, |dz|$. Then $\lambda(z)\, |dz|$ is a con\-formal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$. Thus if ${\cal C}^* \subseteq {\cal C}$, then by Theorem \ref{thm:perron2} there exists a maximal conformal pseudometric $\lambda_{max}(z)\, |dz|$ on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}^*$. In particular, \begin{equation*} \frac{|f'(z)|}{1-|f(z)|^2} = \lambda(z) \le \lambda_{max}(z)= \frac{|F'(z)|}{1-|F(z)|^2}\, ,\quad z \in {\mathbb{D}} \,. \end{equation*} Now the critical set of the maximal function $F$ is ${\cal C}^*$ and $F$ is an indestructible Blaschke product by Theorem \ref{thm:0}. If ${\cal C}^*$ is finite then Theorem \ref{thm:finiteC} (a) shows that $F$ is a finite Blaschke product. \hfill{$\blacksquare$} \bigskip We conclude this section by giving an intrinsic characterization of maximal conformal pseudo\-metrics with constant curvature $-4$ whose zero sets form finite or Blaschke sequences. It might be of interest to see whether this condition generalizes to all maximal conformal pseudometrics with constant curvature $-4$.
\begin{theorem} Let ${\cal C}$ be a finite or Blaschke sequence in ${\mathbb{D}}$. A conformal pseudometric $\lambda(z)\, |dz|$ on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$ is the maximal conformal pseudometric $\lambda_{max}(z)\, |dz|$ on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$ if and only if \begin{equation}\label{eq:boundary2} \lim_{ r\to 1} \, \int \limits_0^{2\pi} \log \frac{\lambda(re^{it})}{\lambda_{{\mathbb{D}}}(re^{it})}\,dt=0\, . \end{equation} \end{theorem} {\bf Proof.} Let $B$ be a Blaschke product with zero set ${\cal C}$. Applying Lemma \ref{lem:new_sk}, we see that \begin{equation*} \sigma(z)\, |dz| :=|B(z)| \, \lambda_{{\mathbb{D}}}(z) \, |dz| \end{equation*} defines a conformal pseudometric on ${\mathbb{D}}$ with curvature bounded above by $-4$ and zero set ${\cal C}$. Thus $\sigma(z) \le \lambda_{max}(z)$ for $z \in {\mathbb{D}}$, and since also $\lambda_{max}(z) \le \lambda_{{\mathbb{D}}}(z)$ by the fundamental theorem, we obtain \begin{equation*} \log |B(z)| \le \log \frac{\lambda_{max}(z)}{\lambda_{{\mathbb{D}}}(z)} \le 0 \quad \text{ for all } z \in {\mathbb{D}} \, . \end{equation*} Since $\lim_{ r\to 1} \, \int \limits_0^{2\pi} \log|B(re^{it})|\, dt=0$, see \cite[Chapter II, Theorem 2.4]{Gar2007}, we deduce that \begin{equation*} \lim_{ r\to 1} \, \int \limits_0^{2\pi} \log \frac{\lambda_{max}(re^{it})}{\lambda_{{\mathbb{D}}}(re^{it})}\,dt=0\, , \end{equation*} as desired. \smallskip Conversely, let $\lambda(z)\, |dz|$ be a conformal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}$ which satisfies (\ref{eq:boundary2}). We now consider the nonnegative subharmonic function $z \mapsto \log (\lambda_{max}(z)/\lambda(z))$ on ${\mathbb{D}}$ and note that \begin{equation*} \lim_{ r\to 1} \, \int \limits_0^{2\pi} \log \frac{\lambda_{max}(re^{it})}{\lambda(re^{it})}\,dt= \lim_{ r\to 1} \, \int \limits_0^{2\pi} \log \frac{\lambda_{max}(re^{it})}{\lambda_{{\mathbb{D}}}(re^{it})}\,dt - \lim_{ r\to 1} \, \int \limits_0^{2\pi} \log \frac{\lambda(re^{it})}{\lambda_{{\mathbb{D}}}(re^{it})}\,dt= 0\, . \end{equation*} Hence $\lambda(z)\, |dz|=\lambda_{max}(z)\, |dz|$.{\hfill{$\blacksquare$}} \section{Proof of Theorem \ref{thm:main}} \label{sec:5} We now combine our previous results to complete the proof of Theorem \ref{thm:main}. \medskip {\bf Proof of Theorem \ref{thm:main}.}\\ (a) $\Rightarrow$ (b): This is Corollary \ref{cor:1}. \smallskip (b) $\Rightarrow$ (c): By Remark \ref{rem:little_paley} we have $B' \in {\cal A}_1^2$. \smallskip (c) $\Rightarrow$ (d): Again using Remark \ref{rem:little_paley}, we see that if $\varphi \in {\cal A}_1^2$, then \begin{equation*} z \mapsto \int \limits_0^z \varphi(\xi) \, d\xi\, , \quad z \in {\mathbb{D}}, \end{equation*} belongs to $H^2$ and therefore to the Nevanlinna class ${\cal N}$. \smallskip \label{page5} (d) $\Rightarrow$ (a): Let $\varphi \in {\cal N}$. Then $\varphi= \varphi_1/\varphi_2$ is the quotient of two analytic self--maps of ${\mathbb{D}}$, see for instance \cite[Theorem 2.1]{Dur2000}. W.\!\;l.\!\;o.\!\;g.~we may assume $\varphi_2$ is zerofree. Differentiation of $\varphi$ yields \begin{equation*} \varphi'(z)=\frac{1}{\varphi_2(z)^2} \, \big( \varphi_1'(z) \, \varphi_2 (z)- \varphi_1(z)\, \varphi_2'(z) \big)\, . \end{equation*} Since $\varphi'_1, \varphi_2' \in {\cal A}_1^2$ and ${\cal A}_1^2$ is a vector space, it follows that the function $\varphi_1' \, \varphi_2 - \varphi_1\, \varphi_2'$ belongs to ${\cal A}_1^2$.
Thus Theorem \ref{thm:d} ensures the existence of a solution $u : {\mathbb{D}} \to {\mathbb{R}}$ to \begin{equation*} \Delta u= 4\, \left| \varphi'(z) \right|^2 \, e^{2u}\, . \end{equation*} By Liouville's theorem \begin{equation*} u(z) =\log \left(\frac{1}{|\varphi'(z)|}\, \frac{|f'(z)|}{1-|f(z)|^2} \right)\,, \quad z \in {\mathbb{D}}\, , \end{equation*} for some analytic self--map $f$ of ${\mathbb{D}}$. Hence the critical set of $\varphi$ coincides with the critical set of $f$, as required.\hfill{$\blacksquare$} \medskip We wish to point out that Theorem \ref{thm:main} characterizes the class of analytic functions whose critical sets agree with the critical sets of the class of bounded analytic functions. In fact, the critical set ${\cal C}_{g}$ of an analytic function $g: {\mathbb{D}} \to {\mathbb{C}}$ coincides with the critical set ${\cal C}_f$ of an analytic function $f: {\mathbb{D}} \to {\mathbb{D}}$ if and only if $g'=\varphi_1\, \varphi_2$ for some function $\varphi_1 \in {\cal A}_1^2$ and a nonvanishing analytic function $\varphi_2: {\mathbb{D}} \to {\mathbb{C}}$. \medskip A final remark is that the list of equivalent statements in Theorem \ref{thm:main} can be extended. \begin{remark}\label{rem:(e)} Let $ (z_j)$ be a sequence in ${\mathbb{D}}$. Then the following statements are equivalent. \begin{itemize} \item[(a)] There is an analytic self--map of $ {\mathbb{D}}$ with critical set $(z_j)$. \item[(e)] There is a meromorphic function $g$ on ${\mathbb{D}}$ with critical set $(z_j)$, where $g$ is the quotient of two analytic self--maps of ${\mathbb{D}}$. \end{itemize} \end{remark} \smallskip {\bf Proof.} Since the implication (a) $\Rightarrow$ (e) is obvious, it suffices to prove (e) $\Rightarrow$ (a). To this end we adapt the idea already used in the proof of implication (d) $\Rightarrow$ (a) of Theorem \ref{thm:main}. Let $(z_j)$ be the critical set of the meromorphic function $g= \varphi_1/\varphi_2$, where $\varphi_1$ and $\varphi_2$ are analytic self--maps of ${\mathbb{D}}$. We may assume that $\varphi_1$ and $\varphi_2$ have no common zeros. Since $g'(z) \, \varphi_2(z)^2$ belongs to ${\cal A}_1^2$, Theorem \ref{thm:d} guarantees a solution $u: {\mathbb{D}} \to {\mathbb{R}}$ to \begin{equation*} \Delta u= 4\, \left| g'(z) \, \varphi_2(z)^2 \right|^2 \, e^{2u}\, . \end{equation*} Now Liouville's theorem shows that \begin{equation*} u(z) =\log \left(\frac{1}{|g'(z)\, \varphi_2(z)^2|}\, \frac{|f'(z)|}{1-|f(z)|^2} \right)\,, \quad z \in {\mathbb{D}}\, , \end{equation*} for some analytic self--map $f$ of ${\mathbb{D}}$. Thus the critical set of $f$ consists of the critical set of $g$ together with the zero set of $\varphi_2^2$. Now Lemma \ref{lem:subset} below tells us that every subset of a critical set of a bounded analytic function is again the critical set of another bounded analytic function. Hence there exists an analytic self--map of ${\mathbb{D}}$ with critical set $(z_j)$, as desired. \hfill{$\blacksquare$} \medskip \begin{lemma}\label{lem:subset} Every subset of a critical set of a bounded analytic function on ${\mathbb{D}}$ is the critical set of another bounded analytic function on ${\mathbb{D}}$. \end{lemma} \medskip {\bf Proof.} It suffices to consider nonconstant analytic self--maps of ${\mathbb{D}}$. \smallskip The statement, using Theorem \ref{thm:main}, follows from the corresponding result about the zero sets of functions in ${\cal A}_1^2$, see \cite[Theorem 7.9]{Hor1974} and \cite[Corollary 4.36]{HKZ}. \smallskip Here, we would like to give an alternative proof. Suppose $f$ is a nonconstant analytic self--map of ${\mathbb{D}}$ with critical set ${\cal C}$.
Now let ${\cal C}^*$ be a subsequence of ${\cal C}$. Since $f^*\lambda_{{\mathbb{D}}}(z)\, |dz|$ is an SK--metric which vanishes at least on ${\cal C}^*$, Theorem \ref{thm:perron2} applies and yields a regular conformal pseudometric on ${\mathbb{D}}$ with constant curvature $-4$ and zero set ${\cal C}^*$. Applying Liouville's theorem gives the desired analytic self--map $g$ of ${\mathbb{D}}$ with critical set ${\cal C}^*$.~\hfill{$\blacksquare$}
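\medskip For instance, $f(z)=z^3$ is an analytic self--map of ${\mathbb{D}}$ whose critical set is the point $0$ taken with multiplicity two; removing one copy of $0$ leaves the one--point sequence $(0)$, which is the critical set of the analytic self--map $g(z)=z^2$ of ${\mathbb{D}}$, in accordance with Lemma \ref{lem:subset}.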
\section{Introduction} In this paper we study the low Mach number limit of local smooth solutions to the following full compressible magnetohydrodynamic (MHD) equations with general initial data in the whole space $\mathbb{R}^3$ (see \cite{Hu87,JT64,KL,LL}): \begin{align} &\partial_t\rho +{\rm div }(\rho\u)=0, \label{haa} \\ &\partial_t(\rho\u)+{\rm div }\left(\rho\u\otimes\u\right)+ {\nabla P} =\frac{1}{4\pi}({\rm curl\, } \H)\times \H+{\rm div }\Psi(\u), \label{hab} \\ &\partial_t\H-{\rm curl\, }(\u\times\H)=-{\rm curl\, }(\nu\,{\rm curl\, }\H),\quad {\rm div }\H=0, \label{hac}\\ &\partial_t{\mathcal E}+{\rm div }\left(\u({\mathcal E}'+P)\right) =\frac{1}{4\pi}{\rm div }((\u\times\H)\times\H)\nonumber\\ & \qquad \qquad \qquad\qquad\qquad\, \, +{\rm div }\Big(\frac{\nu}{4\pi} \H\times({\rm curl\, }\H)+ \u\Psi(\u)+\kappa\nabla\theta\Big). \label{had} \end{align} Here the unknowns $\rho$, $\u=(u_1,u_2,u_3)\in {\mathbb R}^3$, $\H=(H_1,H_2,H_3)\in {\mathbb R}^3$, and $\theta$ denote the density, velocity, magnetic field, and temperature, respectively; $\Psi(\u)$ is the viscous stress tensor given by \begin{equation*} \Psi(\u)=2\mu \mathbb{D}(\u)+\lambda{\rm div }\u \;\mathbf{I}_3 \end{equation*} with $\mathbb{D}(\u)=(\nabla\u+\nabla\u^\top)/2$, $\mathbf{I}_3$ the $3\times 3$ identity matrix, and $\nabla \u^\top$ the transpose of the matrix $\nabla \u$; ${\mathcal E}$ is the total energy given by ${\mathcal E}={\mathcal E}'+|\H|^2/({8\pi})$ and ${\mathcal E}'=\rho\left(e+|\u|^2/2 \right)$ with $e$ being the internal energy, $\rho|\u|^2/2$ the kinetic energy, and $|\H|^2/({8\pi})$ the magnetic energy. The viscosity coefficients $\lambda$ and $\mu$ of the flow satisfy $\mu>0$ and $2\mu+3\lambda>0$. The parameter $\nu>0$ is the magnetic diffusion coefficient, and $\kappa>0$ the heat conductivity. For simplicity, we assume that $\mu,\lambda,\nu$ and $\kappa$ are constants. The equations of state $P=P(\rho,\theta)$ and $e=e(\rho,\theta)$ relate the pressure $P$ and the internal energy $e$ to the density $\rho$ and the temperature $\theta$ of the flow. Multiplying \eqref{hab} by $\u$ and \eqref{hac} by $\H/({4\pi})$ and adding the resulting equations, one finds that \begin{align}\label{haaz} &\partial_t\Big(\frac{1}{2}\rho|\u|^2+\frac{1}{8\pi}|\H|^2\Big) +\frac{1}{2}{\rm div }\left(\rho|\u|^2\u\right)+\nabla P\cdot\u \nonumber\\ &\quad ={\rm div }\Psi\cdot \u+\frac{1}{4\pi}({\rm curl\, }\H)\times\H\cdot\u +\frac{1}{4\pi}{\rm curl\, }(\u\times\H)\cdot\H\nonumber\\ & \, \, \qquad\qquad -\frac{\nu}{4\pi}{\rm curl\, }({\rm curl\, }\H)\cdot\H. \end{align} Due to the identities \begin{eqnarray} && {\rm div }(\H\times({\rm curl\, }\H)) =|{\rm curl\, }\H|^2-{\rm curl\, }({\rm curl\, }\H)\cdot\H , \nonumber \\ && {\rm div }((\u\times\H)\times\H) =({\rm curl\, }\H)\times\H\cdot\u+{\rm curl\, }(\u\times\H)\cdot\H, \label{nae} \end{eqnarray} one can subtract \eqref{haaz} from \eqref{had} to rewrite the energy equation \eqref{had} in terms of the internal energy as \begin{equation}\label{hagg} \partial_t (\rho e)+{\rm div }(\rho\u e)+({\rm div }\u)P=\frac{\nu}{4\pi}|{\rm curl\, }\H|^2+\Psi(\u):\nabla\u+\kappa \Delta \theta, \end{equation} where $\Psi(\u):\nabla\u$ denotes the scalar product of two matrices: \begin{equation*} \Psi(\u):\nabla\u=\sum^3_{i,j=1}\frac{\mu}{2}\left(\frac{\partial u^i}{\partial x_j} +\frac{\partial u^j}{\partial x_i}\right)^2+\lambda|{\rm div }\u|^2= 2\mu|\mathbb{D}(\u)|^2+\lambda|\mbox{tr}\mathbb{D}(\u)|^2.
\end{equation*} To establish the low Mach number limit for the system \eqref{haa}--\eqref{hac} and \eqref{hagg}, in this paper we shall focus on ionized fluids obeying the following perfect gas relations \begin{align} P=\mathfrak{R}\rho \theta,\quad e=c_V\theta,\label{hpg} \end{align} where the parameters $\mathfrak{R}>0$ and $ c_V\!>\!0$ are the gas constant and the heat capacity at constant volume, respectively, both of which will be set equal to one for simplicity of presentation. We also ignore the coefficient $1/(4\pi)$ in the magnetic terms. Let $\epsilon$ be the (dimensionless) Mach number. Consider the system \eqref{haa}--\eqref{hac}, \eqref{hagg} in the physical regime: \begin{gather*} P \sim P_0+O(\epsilon), \quad \u\sim O(\epsilon), \quad \H\sim O(\epsilon),\quad \nabla\theta\sim O(1), \end{gather*} where $P_0>0$ is a certain given constant which is normalized to be $P_0= 1$. Thus we consider the case when the pressure $P$ is a small perturbation of the constant state $1$, while the temperature $\theta$ has a variation of order one. As in \cite{A06}, we introduce the following transformation to ensure positivity of $P $ and $\theta$ \begin{align} & \ P(x,t)= e^{\epsilon p^\epsilon(x,\epsilon t)}, \quad \theta(x,t)=e^{\theta^\epsilon(x,\epsilon t)}, \label{transa} \end{align} where the longer time scale $t=\tau/\epsilon$ (we still denote $\tau$ by $t$ below for simplicity) is introduced in order to capture the evolution of the fluctuations. Note that \eqref{hpg} and \eqref{transa} imply that $ \rho(x,t)=e^{\epsilon p^\epsilon(x,\epsilon t)-\theta^\epsilon(x,\epsilon t)}$ since $ \mathfrak{R}\equiv c_V\equiv 1$. Set \begin{align}\label{transb} {\H} (x,t)=\epsilon \H^\epsilon(x,\epsilon t), \quad {\u} (x,t)=\epsilon \u^\epsilon(x,\epsilon t), \end{align} and \begin{align}\nonumber \mu=\epsilon \mu^\epsilon, \quad \lambda=\epsilon \lambda^\epsilon, \quad \nu=\epsilon \nu^\epsilon, \quad \kappa=\epsilon \kappa^\epsilon.
\end{align} Under these changes of variables and coefficients, the system \eqref{haa}--\eqref{hac}, \eqref{hagg} with \eqref{hpg} takes the following equivalent form: \begin{align} &\partial_t p^\epsilon +(\u^\epsilon \cdot\nabla)p^\epsilon +\frac{1}{\epsilon}{\rm div }(2\u^\epsilon-\kappa^\epsilon e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon)\nonumber\\ & \quad \quad \qquad =\epsilon e^{-\epsilon p^\epsilon} [\nu^\epsilon|{\rm curl\, }\H^\epsilon|^2 +\Psi^\epsilon(\u^\epsilon):\nabla\u^\epsilon]+\kappa^\epsilon e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla p^\epsilon \cdot \nabla \theta^\epsilon, \label{hnaaa} \\ &e^{-\theta^\epsilon}[\partial_t\u^\epsilon+(\u^\epsilon\cdot\nabla)\u^\epsilon] +\frac{\nabla p^\epsilon}{\epsilon} =e^{- \epsilon p^\epsilon}[({\rm curl\, } \H^\epsilon)\times \H^\epsilon +{\rm div }\Psi^\epsilon(\u^\epsilon)], \label{hnabb} \\ &\partial_t\H^\epsilon-{\rm curl\, }(\u^\epsilon\times\H^\epsilon) -\nu^\epsilon\Delta \H^\epsilon=0,\quad {\rm div }\H^\epsilon=0,\label{hnacc} \\ &\partial_t \theta^\epsilon +(\u^\epsilon\cdot\nabla) \theta^\epsilon+ {\rm div }\u^\epsilon\nonumber\\ & \quad \quad \qquad =\epsilon^2e^{-\epsilon p^\epsilon}[\nu^\epsilon|{\rm curl\, }\H^\epsilon|^2 +\Psi^\epsilon(\u^\epsilon):\nabla\u^\epsilon]+\kappa^\epsilon e^{-\epsilon p^\epsilon}{\rm div } (e^{\theta^\epsilon}\nabla \theta^\epsilon), \label{hnadd} \end{align} where $\Psi^\epsilon(\u^\epsilon)=2\mu^\epsilon \mathbb{D}(\u^\epsilon)+\lambda^\epsilon{\rm div }\u^\epsilon\,\mathbf{I}_3$, and the identity ${\rm curl\, } ({\rm curl\, } \H^\epsilon)=\nabla {\rm div }\H^\epsilon-\Delta \H^\epsilon$ and the constraint that ${\rm div }\H^\epsilon=0$ have been used. We shall study the limit as $\epsilon\to 0$ of solutions to the system \eqref{hnaaa}--\eqref{hnadd}. Formally, as $\epsilon$ goes to zero, if the sequence $(p^\epsilon, \u^\epsilon,\H^\epsilon, \theta^\epsilon)$ converges strongly to a limit $(0,{\mathbf w},\t,\vartheta)$ in some sense, and $(\mu^\epsilon,\lambda^\epsilon,\nu^\epsilon,\kappa^\epsilon)$ converges to a constant vector $(\bar\mu,\bar\lambda,\bar\nu,\bar\kappa)$, then passing to the limit in \eqref{hnaaa}--\eqref{hnadd}, we have \begin{align} &{\rm div }(2{\mathbf w} -\bar{\kappa}\, e^\vartheta\nabla \vartheta)=0, \label{hna1} \\ &e^{-\vartheta}[\partial_t{\mathbf w}+({\mathbf w}\cdot\nabla){\mathbf w}]+\nabla \pi =({\rm curl\, } \t)\times \t+{\rm div }\Phi({\mathbf w}), \label{hna2} \\ &\partial_t\t -{\rm curl\, }({\mathbf w}\times\t)-\bar \nu\Delta\t=0,\quad {\rm div }\t=0, \label{hna3} \\ &\partial_t\vartheta +({\mathbf w}\cdot\nabla)\vartheta +{\rm div } {\mathbf w}=\bar \kappa \, {\rm div } (e^{\vartheta}\nabla \vartheta), \label{hna4} \end{align} with some function $\pi$, where $\Phi({\mathbf w})$ is defined by \begin{equation}\nonumber \Phi({\mathbf w})=2\bar \mu \mathbb{D}({\mathbf w})+\bar\lambda{\rm div }{\mathbf w} \,\mathbf{I}_3. \end{equation} The purpose of this paper is to establish the above limit process rigorously. For this purpose, we supplement the system \eqref{hnaaa}--\eqref{hnadd} with the following initial conditions \begin{align} (p^\epsilon,\u^\epsilon,\H^\epsilon, \theta^\epsilon)|_{t=0} =(p^\epsilon_{\rm in}(x),\u^\epsilon_{\rm in}(x),\H^\epsilon_{\rm in}(x), \theta^\epsilon_{\rm in}(x)), \quad x \in \mathbb{R}^3. \label{hnas} \end{align} For simplicity of presentation, we shall assume that $\mu^\epsilon \equiv \bar\mu>0$, $\nu^\epsilon \equiv \bar\nu>0$, $\kappa^\epsilon \equiv \bar\kappa>0$, and $\lambda^\epsilon \equiv \bar\lambda$.
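For the reader's convenience, we sketch how the divergence constraint \eqref{hna1} emerges formally from the pressure equation. Multiplying \eqref{hnaaa} by $\epsilon$ and letting $\epsilon\to 0$, all the terms except the singular one vanish, so that
\begin{align*}
0=\lim_{\epsilon\to 0}\,{\rm div }\big(2\u^\epsilon-\kappa^\epsilon e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon\big) ={\rm div }\big(2{\mathbf w}-\bar{\kappa}\, e^{\vartheta}\nabla \vartheta\big),
\end{align*}
which is exactly \eqref{hna1}. Similarly, the boundedness of the singular term $\epsilon^{-1}\nabla p^\epsilon$ in \eqref{hnabb} forces $\nabla p^\epsilon\to 0$, and the gradient $\nabla\pi$ in \eqref{hna2} accounts for the limit of $\epsilon^{-1}\nabla p^\epsilon$, playing the role of a Lagrange multiplier associated with the constraint \eqref{hna1}.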
The general case $\mu^\epsilon\rightarrow \bar\mu>0$, $\nu^\epsilon\rightarrow\bar\nu>0$, $\kappa^\epsilon\rightarrow\bar\kappa>0$ and $\lambda^\epsilon\rightarrow\bar\lambda$ simultaneously as $\epsilon \rightarrow 0$ can be treated by slightly modifying the arguments presented here. As in \cite{A06}, we will use the notation $\|v\|_{H_\eta^\sigma}:=\|v\|_{H^{\sigma-1}}+\eta\|v\|_{H^\sigma}$ for any $\sigma\in \mathbb{R}$ and $\eta\geq 0$. For each $\epsilon>0$, $t\geq 0$ and $s\geq 0$, we will also use the following norm: \begin{align*} &\|(p^\epsilon,\u^\epsilon, \H^\epsilon,\theta^\epsilon-\bar \theta)(t)\|_{s,\epsilon} \nonumber\\ &\qquad := \sup_{\tau\in [0,t]}\big\{\|(p^\epsilon,\u^\epsilon,\H^\epsilon)(\tau)\|_{H^{s}} +\|(\epsilon p^\epsilon,\epsilon\u^\epsilon, \epsilon\H^\epsilon, \theta^\epsilon-\bar \theta)(\tau)\|_{H_\epsilon^{s+2}}\big\}\nonumber\\ & \qquad \ \ \quad + \Big\{\int^t_0[\|\nabla(p^\epsilon,\u^\epsilon,\H^\epsilon)\|^2_{H^s} +\|\nabla (\epsilon\u^\epsilon,\epsilon\H^\epsilon,\theta^\epsilon)\|^2_{H_\epsilon^{s+2}}](\tau)d\tau\Big\}^{1/2}. \end{align*} Then, the main result of this paper reads as follows. \begin{thm}\label{main} Let $s\geq 4$. Assume that the initial data $ (p^\epsilon_{\rm in},\u^\epsilon_{\rm in},\H^\epsilon_{\rm in}, \theta^\epsilon_{\rm in})$ satisfy \begin{align}\label{initial} \|(p^\epsilon_{\rm in},\u^\epsilon_{\rm in},\H^\epsilon_{\rm in})\|_{H^{s}} +\|(\epsilon p^\epsilon_{\rm in},\epsilon\u^\epsilon_{\rm in}, \epsilon\H^\epsilon_{\rm in}, \theta^\epsilon_{\rm in}-\bar \theta)\|_{H_\epsilon^{s+2}} \leq L_0 \end{align} for all $\epsilon \in (0,1]$ and two given positive constants $\bar \theta$ and $L_0$. Then there exist positive constants $T_0$ and $\epsilon_0<1$, depending only on $L_0$ and $\bar \theta$, such that the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} has a unique solution $(p^\epsilon,\u^\epsilon,\H^\epsilon, \theta^\epsilon)$ satisfying \begin{align}\label{conda} \|(p^\epsilon,\u^\epsilon,\H^\epsilon, \theta^\epsilon-\bar \theta)(t)\|_{s,\epsilon}\leq L, \qquad \forall \,\, t\in [0,T_0], \ \forall\,\epsilon \in (0,\epsilon_0], \end{align} where $L$ depends only on $L_0$, $\bar \theta$ and $T_0$. Moreover, assume further that the initial data satisfy the following conditions \begin{gather} |\theta^\epsilon_{\rm in}(x)-\bar{\theta} |\leq N_0 |x|^{-1-\zeta}, \quad |\nabla \theta^\epsilon_{\rm in}(x)|\leq N_0 |x|^{-2-\zeta}, \quad \forall \, \epsilon \in (0,1],\label{dacay}\\ \!\!\!\!\big(p^\epsilon_{\rm in},{\rm curl\, }(e^{-\theta^\epsilon_{\rm in}}\u^\epsilon_{\rm in}), \H^\epsilon_{\rm in}, \theta^\epsilon_{\rm in}-\bar \theta\big)\rightarrow (0,{\mathbf w}_0,\t_0, \vartheta_0-\bar \theta)\;\mbox{ in } H^{s}(\mathbb{R}^3) \label{conver} \end{gather} as $\epsilon\rightarrow 0$, where $N_0$ and $\zeta$ are fixed positive constants. Then the solution sequence $(p^\epsilon,\u^\epsilon,\H^\epsilon, \theta^\epsilon-\bar\theta)$ converges weakly in $L^\infty(0,T_0; H^s({\mathbb R}^3))$ and strongly in $L^2(0,T_0; H^{s_2}_{\mathrm{loc}}({\mathbb R}^3))$ for all $0\leq s_2<s$ to the limit $(0,{\mathbf w},\t, \vartheta-\bar\theta)$, where $({\mathbf w},\t, \vartheta)$ satisfies the system \eqref{hna1}--\eqref{hna4} with initial data $({\mathbf w},\t, \vartheta)|_{t=0} =({\mathbf w}_0,\t_0, \vartheta_0)$. \end{thm} We now give some comments on the proof of Theorem \ref{main}.
The key point in the proof is to establish the uniform estimates in Sobolev norms for the acoustic components of the solutions, which are propagated by wave equations whose coefficients are functions of the temperature. Our main strategy is to bound the norm of $(\nabla p^\epsilon, {\rm div } \u^\epsilon)$ in terms of the norm of $(\epsilon \partial_t) (p^\epsilon,\u^\epsilon, \H^\epsilon)$ and $(\epsilon p^\epsilon,\epsilon \u^\epsilon,\epsilon \H^\epsilon, \theta^\epsilon )$ through the density and the momentum equations. This approach is motivated by the previous works on the compressible Navier-Stokes equations due to Alazard \cite{A06}, and Levermore, Sun and Trivisa \cite{LST}. It should be pointed out that the analysis for \eqref{hnaaa}--\eqref{hnadd} is complicated and difficult due to the strong coupling of the hydrodynamic motion and the magnetic field. Moreover, it is observed that the terms $({\rm curl\, }{\H^\epsilon}) \times {\H^\epsilon}$ in the momentum equations, ${\rm curl\, }({\u^\epsilon} \times {\H^\epsilon})$ in the magnetic field equation, and $|{\rm curl\, } {\H^\epsilon}|^2$ in the temperature equation substantially change the structure of the system. More effort has to be spent on the estimates involving these terms, in particular on the estimates of higher-order spatial derivatives. We shall exploit the special structure of the system to obtain tame estimates on higher-order derivatives, so that we can close our estimates and obtain the uniform boundedness of the solutions. Once the uniform boundedness of the solutions has been established, one can obtain the convergence result in Theorem \ref{main} by applying the compactness arguments and the dispersive estimates on the acoustic wave equations in the whole space developed in \cite{MS01}. \begin{rem} The positivity of the coefficients $\mu$, $\nu$ and $\kappa$ plays a fundamental role in the proof of Theorem \ref{main}. The arguments given in this paper cannot be applied to the case when one of them vanishes. Recently, Jiang, Ju and Li \cite{JJL4} have studied the incompressible limit of the compressible non-isentropic ideal MHD equations with general initial data in the whole space $\mathbb{R}^d$ ($d=2,3$) when the initial data belong to $H^s(\mathbb{R}^d)$ with $s$ being an even integer. We emphasize that the restriction that the Sobolev index $s$ be even plays a crucial role in the proof there, since in this case the nonstandard highest-order derivative operators applied to the momentum equations are not intertwined with the pressure equation, and thus the same operators can be applied to the magnetic field equations to close the estimates on $\u$ and $\H$. On the other hand, the proof presented in \cite{JJL4} fully exploits the structure of the ideal MHD equations and cannot be directly extended to the full compressible MHD equations studied in the current paper, where the heat conductivity is positive. \end{rem} We point out that the low Mach number limit is an interesting topic in fluid dynamics and applied mathematics. Now we briefly review some related results on the Euler, Navier-Stokes and MHD equations. In \cite{S86}, Schochet obtained the convergence of the non-isentropic compressible Euler equations to the incompressible non-isentropic Euler equations in a bounded domain for local smooth solutions and well-prepared initial data.
As mentioned above, in \cite{MS01} M\'{e}tivier and Schochet proved rigorously the incompressible limit of the compressible non-isentropic Euler equations in the whole space with general initial data; see also \cite{A05, A06, LST} for further extensions. In \cite{MS03} M\'{e}tivier and Schochet showed the incompressible limit of the one-dimensional non-isentropic Euler equations in a periodic domain with general data. For compressible heat-conducting flows, Hagstrom and Lorenz established in \cite{HL02} the low Mach number limit under the assumption that the variation of the density and temperature is small. In the case without heat conductivity, Kim and Lee \cite{KL05} investigated the incompressible limit of the non-isentropic Navier-Stokes equations in a periodic domain with well-prepared data, while Jiang and Ou \cite{JO} investigated the incompressible limit in three-dimensional bounded domains, also for well-prepared data. The justification of the low Mach number limit for the non-isentropic Euler or Navier-Stokes equations with general initial data in bounded domains or multi-dimensional periodic domains is still open. We refer the interested reader to \cite{BDGL} for formal computations for viscous polytropic gases, and to \cite{MS03,BDG} for the study of the acoustic waves of the non-isentropic Euler equations in periodic domains. Compared with the non-isentropic case, the description of the propagation of oscillations in the isentropic case is simpler and there are many articles on this topic (isentropic flows) in the literature, see, for example, Ukai \cite{U86}, Asano \cite{As87}, Desjardins and Grenier \cite{DG99} in the whole space case; Isozaki \cite{I87,I89} in the case of exterior domains; Iguchi \cite{Ig97} in the half space case; Schochet \cite{S94} and Gallagher \cite{Ga01} in the case of periodic domains; and Lions and Masmoudi \cite{LM98}, and Desjardins et al. \cite{DGLM} in the case of bounded domains. For the compressible isentropic MHD equations, the justification of the low Mach number limit has been established in several settings. In \cite{KM} Klainerman and Majda studied the low Mach number limit of the compressible isentropic MHD equations in the spatially periodic case with well-prepared initial data. Recently, the low Mach number limit of the compressible isentropic viscous (including both viscosity and magnetic diffusivity) MHD equations with general data was studied in \cite{HW3,JJL1,JJL2}. In \cite{HW3} Hu and Wang obtained the convergence of weak solutions of the compressible viscous MHD equations in bounded domains, periodic domains and the whole space. In \cite{JJL1} Jiang, Ju and Li employed the modulated energy method to verify the limit of weak solutions of the compressible MHD equations in the torus to the strong solution of the incompressible viscous or partially viscous MHD equations (zero shear viscosity but with magnetic diffusion), while in \cite{JJL2} the convergence of weak solutions of the viscous compressible MHD equations to the strong solution of the ideal incompressible MHD equations in the whole space was established by using the dispersion property of the wave equation, as both the shear viscosity and the magnetic diffusion coefficients go to zero. For the full compressible MHD equations, the incompressible limit in the framework of the so-called variational solutions was established in \cite{K10,KT,NRT}.
Recently, the low Mach number limit for the full compressible MHD equations with small entropy or temperature variation was justified rigorously in \cite{JJL3}. Besides the references mentioned above, the interested reader can refer to the monograph \cite{FN} and the survey papers \cite{Da05,M07,S07} for more related results on the low Mach number limit of fluid models. We also mention that there are a lot of articles in the literature on other topics related to the compressible MHD equations due to their physical importance, complexity, rich phenomena, and mathematical challenges, see, for example, \cite{BT02,CW02,CW03,Hu87,LY11,DF06,FJN,FS,HT,LL,HW1,HW2,ZJX} and the references cited therein. This paper is arranged as follows. In Section 2, we describe some notation, recall basic facts and present commutator estimates. In Section 3 we first establish a priori estimates on $(\H^\epsilon,\theta^\epsilon)$, $(\epsilon p^\epsilon, \epsilon \u^\epsilon, \epsilon \H^\epsilon, \theta^\epsilon)$ and on $(p^\epsilon, \u^\epsilon)$. Then, with the help of these estimates, we establish the uniform boundedness of the solutions and prove the existence part of Theorem \ref{main}. Finally, in Section 4 we study the local energy decay for the acoustic wave equations and prove the convergence part of Theorem \ref{main}. \section{Preliminaries} In this section, we give some notation and recall basic facts which will be frequently used throughout the paper. We also present some commutator estimates introduced in \cite{LST} and state the results on local solutions to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas}. We denote by $\langle\cdot,\cdot\rangle$ the standard inner product in $L^2({\mathbb R}^3)$, so that $\langle f,f\rangle=\|f\|^2_{L^2}$, and by $H^k$ the standard Sobolev space $W^{k,2}$ with norm $\|\cdot\|_{H^k}$. The notation $\|(A_1,\dots, A_k)\|_{L^2}$ means the summation of $\|A_i\|_{L^2}$, $i=1,\dots,k$, and it also applies to other norms. For a multi-index $\alpha = (\alpha_1,\alpha_2,\alpha_3)$, we denote $\partial^\alpha =\partial^{\alpha_1}_{x_1}\partial^{\alpha_2}_{x_2}\partial^{\alpha_3}_{x_3}$ and $|\alpha|=\alpha_1+\alpha_2+\alpha_3$. We will omit the spatial domain ${\mathbb R}^3$ in integrals for convenience. We use $l_i>0$ ($i\in\mathbb{N}$) to denote given constants. We also use the symbol $K$ or $C_0$ to denote generic positive constants, and $C(\cdot)$ to denote a smooth function which may vary from line to line. For a scalar function $f$ and vector functions $\mathbf{a}$ and $\mathbf{b}$, we have the following basic vector identities: \begin{align} {\rm div }(f \mathbf{a}) & = f {\rm div } \mathbf{a} +\nabla f\cdot \mathbf{a}, \label{vbd}\\ {\rm curl\, }(f \mathbf{a}) & =f\, {\rm curl\, } \mathbf{a} +\nabla f\times \mathbf{a}, \label{vbc}\\ {\rm div } (\mathbf{a}\times \mathbf{b})& =\mathbf{b}\cdot{\rm curl\, } \mathbf{a} -\mathbf{a}\cdot{\rm curl\, }\mathbf{b},\label{va}\\ {\rm curl\, }(\mathbf{a}\times\mathbf{b})& =(\mathbf{b}\cdot\nabla)\mathbf{a} - (\mathbf{a}\cdot\nabla)\mathbf{b} + \mathbf{a} ({\rm div } \mathbf{b}) - \mathbf{b} ({\rm div } \mathbf{a}),\label{vb}\\ \nabla (\mathbf{a}\cdot \mathbf{b}) &=(\mathbf{a}\cdot \nabla )\mathbf{b}+(\mathbf{b}\cdot \nabla)\mathbf{a} +\mathbf{a}\times ({\rm curl\, } \mathbf{b})+\mathbf{b}\times ({\rm curl\, } \mathbf{a}). \label{vv} \end{align} Below we recall some results on commutator estimates. \begin{lem}\label{Lb} Let $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ be a multi-index such that $|\alpha|=k$.
Then, for any $\sigma\geq 0$, there exists a positive constant $C_0$, such that for all $f,g\in H^{k+\sigma}({\mathbb R}^3)$, \begin{align}\label{comc} \|[f,\partial^\alpha] g\|_{H^{\sigma}}\leq & C_0(\|f\|_{W^{1,\infty}}\| g\|_{H^{\sigma+k-1}} +\| f\|_{H^{\sigma+k}}\|g\|_{L^{\infty}}). \end{align} \end{lem} \begin{lem}\label{Lc} Let $s>5/2$. Then there exists a positive constant $C_0$, such that for all $\epsilon\in (0,1]$, $T>0$ and multi-indices $\beta=(\beta_1,\beta_2,\beta_3)$ satisfying $0\leq |\beta|\leq s-1$, and any $f,g\in C^\infty([0,T],H^{s}({\mathbb R}^3))$, it holds that \begin{align}\label{comd} \|[f,\partial^\beta(\epsilon\partial_t)] g\|_{L^2}\leq & \epsilon C_0 (\|f\|_{H^{s-1}}\|\partial_t g\|_{H^{s-2}} +\|\partial_t f\|_{H^{s-1}}\| g\|_{H^{s-1}}). \end{align} \end{lem} Since the system \eqref{haa}--\eqref{hac}, \eqref{hagg}, \eqref{hpg} is hyperbolic-parabolic, the classical result of Vol'pert and Hudiaev \cite{VK} implies the following local existence result. \begin{prop}\label{hPa} Let $s\geq 4$. Assume that the initial data $(\rho_0,\u_0 ,\H_0,\theta_0)$ satisfy \begin{align*} \|(\rho_0 -\underline\rho, \u_0,\H_0,\theta_0 -\underline\theta)\|_{H^s}\leq C_0 \end{align*} for some positive constants $\underline\rho$, $\underline\theta$ and $C_0$. Then there exists a $\tilde T>0$, such that the system \eqref{haa}--\eqref{hac}, \eqref{hagg}, and \eqref{hpg} with these initial data has a unique classical solution $(\rho,\u,\H,\theta)$ satisfying $\rho-\underline\rho\in C([0,\tilde T],H^s({\mathbb R}^3))$, $(\u,\H,\theta-\underline\theta)\in C([0,\tilde T], H^s({\mathbb R}^3))\cap L^2(0,\tilde{T};H^{s+1}({\mathbb R}^3))$, and \begin{align*} \sup_{0\leq t\leq\tilde T}\|(\rho-\underline\rho,\u,\H,\theta-\underline\theta)\|^2_{H^s} &+ \int^{\tilde T}_0 \Big\{\mu \|\mathbb{D}(\u)\|^2_{H^{s}}+\lambda\|{\rm div } \u\|^2_{H^{s}} \nonumber\\ & \qquad \quad +\nu \|\nabla \H\|^2_{H^{s}}+\kappa \|\nabla \theta\|^2_{H^{s}} \Big\}(\tau)d\tau\leq 4C_0^2. \end{align*} \end{prop} It follows from Proposition \ref{hPa} and the transformations \eqref{transa} and \eqref{transb} that there exists a $T_\epsilon>0$, depending on $\epsilon$ and $L_0$, such that for each fixed $\epsilon$ and any initial data \eqref{hnas} satisfying \eqref{initial}, the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} has a unique solution $(p^\epsilon,\u^\epsilon,\H^\epsilon,\theta^\epsilon)$ satisfying $(p^\epsilon,\u^\epsilon,\H^\epsilon,\theta^\epsilon-\bar\theta)\in C([0,T_\epsilon),H^{s}({\mathbb R}^3))$ and $(\u^\epsilon,\H^\epsilon, \theta^\epsilon-\bar\theta)\in L^2(0,T_\epsilon;H^{s+1}({\mathbb R}^3))$. Moreover, if the maximal existence time $T_\epsilon^*$ of this smooth solution is finite, then one has \begin{align*} {\underset{t\rightarrow T_\epsilon^*} {\lim \sup}}\, \left\{ \|(p^\epsilon,\u^\epsilon,\H^\epsilon,\theta^\epsilon)(t)\|_{W^{1,\infty}} +\|(\u^\epsilon,\H^\epsilon,\theta^\epsilon)(t)\|_{W^{2,\infty}}\right\}=\infty. \end{align*} Therefore, we shall see by the same argument as in \cite{MS01} that the existence part of Theorem \ref{main} is a consequence of the above assertion and the following key a priori estimates, which will be shown in the next section. \begin{prop}\label{hPb} For any given $s\geq 4$ and fixed $\epsilon>0$, let $(p^\epsilon,\u^\epsilon,\H^\epsilon, \theta^\epsilon)$ be the classical solution to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd} and \eqref{hnas}.
Denote \begin{align} \mathcal{O}(T):=&\|(p^\epsilon,\u^\epsilon,\H^\epsilon,\theta^\epsilon-\bar \theta)(T)\|_{s,\epsilon}, \nonumber\\% \label{Oa}\\ \mathcal{O}_0:=&\|(p^\epsilon_{\rm in},\u^\epsilon_{\rm in},\H^\epsilon_{\rm in})\|_{H^{s}}+ \|(\epsilon p^\epsilon_{\rm in},\epsilon\u^\epsilon_{\rm in}, \epsilon\H^\epsilon_{\rm in}, \theta^\epsilon_{\rm in}-\bar \theta)\|_{H_\epsilon^{s+2}}.\nonumber \end{align} Then there exist positive constants $\hat{T}_0$ and $\epsilon_0<1$, and an increasing positive function $C(\cdot)$, such that for all $T\in [0,\hat T_0]$ and $\epsilon \in (0,\epsilon_0]$, \begin{align*} \mathcal{O}(T) \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \end{align*} \end{prop} \section{Uniform estimates} In this section we shall establish the uniform bounds of the solutions to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd} and \eqref{hnas} stated in Proposition \ref{hPb} by modifying the approaches developed in \cite{MS01,A06,LST} and making careful use of the special structure of the system \eqref{hnaaa}--\eqref{hnadd}. In the rest of this section, we will drop the superscripts $\epsilon$ of the variables in the Cauchy problem and denote \begin{equation*} \Psi(\u)=2\bar \mu \mathbb{D}(\u)+\bar \lambda{\rm div }\u \;\mathbf{I}_3. \end{equation*} Recall that it has been assumed that $\mu^\epsilon \equiv \bar\mu>0$, $\nu^\epsilon \equiv \bar\nu>0$, $\kappa^\epsilon \equiv \bar\kappa>0$, and $\lambda^\epsilon \equiv \bar\lambda$, all independent of $\epsilon$. \subsection{$H^s$-estimates on $(\H,\theta)$ and $(\epsilon p,\epsilon\u)$} To prove Proposition \ref{hPb}, we first give some estimates derived directly from the system \eqref{hnaaa}--\eqref{hnadd}. Denoting \begin{align} \mathcal{Q}:& =\|( p, \u, \H, \theta-\bar \theta) \|_{H^{s}} +\|(\epsilon p,\epsilon\u,\epsilon\H, \theta-\bar \theta) \|_{H_\epsilon^{s+2}}, \nonumber\\% \label{Rna}\\ \mathcal{S}:& =\| (\nabla\u, \nabla p,\nabla\H)\|_{H^s} +\|\nabla (\epsilon\u,\epsilon\H,\theta)\|_{H_\epsilon^{s+2}}, \nonumber \end{align} one has \begin{lem}\label{HES} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be a solution to the problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. There exists an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{T_1,1\}$, it holds that \begin{align} &\sup_{\tau\in[0,t]}\{\|(\H,\theta)(\tau)\|_{H^s}+\|\epsilon\H(\tau)\|_{H_\epsilon^{s+1}}\}\nonumber\\ &\quad +\Big\{\int^t_0\big(\|\nabla(\H,\theta)(\tau)\|_{H^{s}}^2+ \|\nabla(\epsilon\H)(\tau)\|^2_{H_\epsilon^{s+1}}\big)d\tau\Big\}^{1/2} \nonumber\\ & \qquad \leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \nonumber \end{align} \end{lem} \begin{proof} For any multi-index $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ satisfying $|\alpha|\leq s$, let $\H_\alpha=\partial^\alpha\H$. Then \begin{align*} \partial_t\H_\alpha+(\u\cdot\nabla)\H_\alpha-\bar{\nu}\Delta\H_\alpha=-[\partial^\alpha,\u]\cdot\nabla\H -\partial^\alpha(\H{\rm div }\u)+\partial^\alpha((\H\cdot\nabla)\u). \end{align*} Taking the inner product of the above equations with $\H_\alpha$ and integrating by parts, we have \begin{align*} \frac{1}{2}\frac{d}{dt}\|\H_\alpha\|_{L^2}^2+\bar{\nu}\|\nabla\H_\alpha\|_{L^2}^2=&-\langle(\u\cdot\nabla) \H_\alpha,\H_\alpha\rangle-\langle[\partial^\alpha,\u]\cdot\nabla\H,\H_\alpha\rangle\nonumber\\ &-\langle\partial^\alpha(\H{\rm div }\u),\H_\alpha\rangle+ \langle\partial^\alpha((\H\cdot\nabla)\u),\H_\alpha\rangle.
\end{align*} By integration by parts we obtain \begin{align*} -\langle(\u\cdot\nabla) \H_\alpha,\H_\alpha\rangle=\frac{1}{2}\int {\rm div } \u|\H_\alpha|^2 dx\leq C(\mathcal{Q})\| \H_\alpha\|^2_{L^2}. \end{align*} It follows from the commutator inequality \eqref{comc} that $$ \|[\partial^\alpha,\u]\cdot\nabla\H\|_{L^2}\leq C_0(\|\u\|_{W^{1,\infty}}\|\nabla\H\|_{H^{s-1}}+\|\u\|_{H^{s}}\|\nabla\H\|_{L^\infty}) \leq C(\mathcal{Q}). $$ By Sobolev's inequality, one gets $$ -\langle\partial^\alpha(\H{\rm div }\u),\H_\alpha\rangle + \langle\partial^\alpha((\H\cdot\nabla)\u),\H_\alpha\rangle \leq C_0\|\H\|_{H^s}^2\|\u\|_{H^{s+1}}\leq C(\mathcal{Q})\mathcal{S}. $$ Thus, we conclude that \begin{align*} & \ \ \ \sup_{\tau\in[0,t]}\|\H(\tau)\|_{H^s}+\bar{\nu}\Big\{\int^t_0\|\nabla\H\|^2_{H^s}d\tau\Big\}^{1/2}\\ &\leq C(\mathcal{O}_0)+C(\mathcal{O}(t))t+C(\mathcal{O}(t))\int_0^t\mathcal{S}(\tau)d\tau \\ &\leq C(\mathcal{O}_0)+C(\mathcal{O}(t))t+C(\mathcal{O}(t))\sqrt{t} \\ &\leq C(\mathcal{O}_0)+C(\mathcal{O}(T))\sqrt{T} \\ &\leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align*} Now denote $\hat{\H}=\epsilon \H$ and $\hat{\H}_\alpha=\partial^\alpha(\epsilon\H)$ for $|\alpha|= s+1$. Then, $\hat{\H}_\alpha$ satisfies \begin{align}\label{hhs} \partial_t\hat{\H}_\alpha+(\u\cdot\nabla)\hat{\H}_\alpha-\bar{\nu}\Delta\hat{\H}_\alpha=-\epsilon[\partial^\alpha,\u]\cdot\nabla\H -\epsilon\partial^\alpha(\H{\rm div }\u)+\epsilon\partial^\alpha((\H\cdot\nabla)\u). \end{align} The commutator inequality \eqref{comc} implies that $$ \|\epsilon[\partial^\alpha,\u]\cdot\nabla\H\|_{L^2}\leq C_0(\|\u\|_{W^{1,\infty}}\|\nabla\hat{\H}\|_{H^s} +\|\epsilon\u\|_{H^{s+1}}\|\nabla\H\|_{L^\infty}) \leq C(\mathcal{Q}), $$ while an integration by parts and Sobolev's inequality lead to \begin{align*} -\langle\epsilon\partial^\alpha(\H{\rm div }\u),\hat{\H}_\alpha\rangle +\langle\epsilon\partial^\alpha((\H\cdot\nabla)\u),\hat{\H}_\alpha\rangle & \leq \frac{\bar{\nu}}{2}\|\nabla\hat{\H}_\alpha\|_{L^2}^2 +C_0\|\H\|_{H^s}^2\|\epsilon\u\|_{H^{s+1}}^2\\ & \leq \frac{\bar{\nu}}{2}\|\nabla\hat{\H}_\alpha\|_{L^2}^2+C(\mathcal{Q}). \end{align*} Hence, we obtain \begin{align*} \sup_{\tau\in[0,t]}\|\epsilon\H(\tau)\|_{H^{s+1}}+\Big\{\int^t_0\bar{\nu}\|\nabla(\epsilon\H)(\tau)\|^2_{H^{s+1}}d\tau\Big\}^{1/2}&\leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align*} Similarly, we can obtain \begin{align*} \sup_{\tau\in[0,t]}\epsilon^2\|\H(\tau)\|_{H^{s+2}}+\epsilon^2\Big\{\int^t_0\bar{\nu}\|\nabla\H(\tau)\|^2_{H^{s+2}}d\tau\Big\}^{1/2}&\leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align*} Next, we estimate $\theta$. Using Sobolev's inequality, one finds that, for any $|\alpha|\leq s$, \begin{align*} & \|\partial^\alpha(\epsilon^2e^{-\epsilon p}[\bar{\nu}|{\rm curl\, }\H|^2 +\Psi(\u):\nabla\u])\|_{L^2}\nonumber\\ & \qquad\qquad\qquad \leq C(\|\epsilon p\|_{H^s})(\|\epsilon\nabla\H\|^2_{H^{s}}+\|\epsilon \nabla\u\|^2_{H^s})\leq C(\mathcal{Q}). \end{align*} Employing arguments similar to those used for $\H$, we can obtain \begin{align*} \sup_{\tau\in[0,t]}\|\theta(\tau)\|_{H^{s}}+\Big\{\int^t_0\bar{\kappa}\|\nabla\theta(\tau)\|^2_{H^{s}}d\tau\Big\}^{1/2}&\leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align*} Thus, the lemma is proved. \end{proof} \begin{lem}\label{EP1} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be a solution to the problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$.
Then there exists an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{T_1,1\}$, it holds that \begin{align} &\sup_{\tau\in [0,t]}\|(\epsilon p,\epsilon\u)(\tau)\|_{H^s}+\Big\{\int^t_0\bar{\mu} \|\nabla (\epsilon\u )(\tau)\|^2_{H^{s}}d\tau\Big\}^{1/2} \leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}.\nonumber \end{align} \end{lem} \begin{proof} Let $\check{p}=\epsilon p$, and $\check{p}_\alpha=\partial^\alpha(\epsilon p)$ for any multi-index $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ satisfying $|\alpha|\leq s$. Then \begin{align} \partial_t \check{p}_\alpha+(\u\cdot\nabla)\check{p}_\alpha=&-[\partial^\alpha,\u] \cdot(\nabla\check{p})-\partial^\alpha[{\rm div }(2\u-\bar\kappa a(\epsilon p)b(\theta)\nabla \theta)]\nonumber\\ & +\partial^\alpha \{a(\epsilon p) [\bar\nu|{\rm curl\, }(\epsilon\H)|^2 +\Psi(\epsilon\u):\nabla(\epsilon\u)]\}\nonumber\\ &+\bar\kappa \partial^\alpha\{ a(\epsilon p)b(\theta)\nabla(\epsilon p)\cdot\nabla\theta\}\nonumber\\ :=&h_1+h_2+h_3+h_4, \label{ee0} \end{align} where, for simplicity of presentation, we have set $$ a(\epsilon p): = e^{-\epsilon p}, \quad b(\theta): =e^{\theta}.$$ It is easy to see that the energy estimate for \eqref{ee0} gives \begin{align}\label{ee1} \frac{1}{2}\frac{d}{dt}\|\check{p}_\alpha\|_{L^2}^2=-\langle(\u\cdot\nabla)\check{p}_\alpha, \check{p}_\alpha\rangle+\langle h_1+h_2+h_3+h_4, \check{p}_\alpha\rangle , \end{align} where we have to estimate each term on the right-hand side of \eqref{ee1}. First, an integration by parts yields \begin{align*} -\langle (\u\cdot\nabla)\check{p}_\alpha, \check{p}_\alpha\rangle=\frac{1}{2}\int {\rm div } \u|\check{p}_\alpha|^2 dx\leq C(\mathcal{Q})\| \check{p}_\alpha\|^2_{L^2}, \end{align*} while the commutator inequality leads to \begin{align*} \|h_1\|_{L^2}\leq C_0(\|\u\|_{W^{1,\infty}}\|\nabla\check{p}\|_{H^{s-1}}+ \|\u\|_{H^s}\|\nabla\check{p}\|_{L^\infty})\leq C(\mathcal{Q}). \end{align*} Consequently, \begin{align*} \langle h_1,\check{p}_\alpha\rangle\leq \|\check{p}_\alpha\|_{L^2}\|h_1\|_{L^2}\leq C(\mathcal{Q}). \end{align*} From Sobolev's inequality one gets $$ \|h_2\|_{L^2}\leq C_0\|\u\|_{H^{s+1}}+C(\mathcal{Q})\|\theta\|_{H^{s+2}} \leq C(\mathcal{Q})(1+ \mathcal{S}), $$ whence, \begin{align*} \langle h_2, \check{p}_\alpha\rangle \leq& C(\mathcal{Q})(1+\mathcal{S}). \end{align*} Similarly, one can prove that \begin{align*} \langle( h_3+h_4), \check{p}_\alpha\rangle \leq& C(\mathcal{Q}). \end{align*} Hence, we conclude that \begin{align*} \sup_{\tau\in [0,t]}\|\epsilon p (\tau)\|_{H^s}\leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align*} In a similar way, we can estimate $\u$. Thus the proof of the lemma is completed. \end{proof} Next, we control the term $\|(\u,p)\|_{H^s}$. The idea is to bound the norm of $({\rm div } \u,$ $ \nabla p)$ in terms of suitable norms of $(\epsilon\u, \epsilon p, \epsilon\H,\theta)$ and $\epsilon(\partial_t\u,\partial_t p)$ by making use of the structure of the system. To this end, we first estimate $\|(\epsilon\u,\epsilon p,\theta)\|_{H^{s+1}}$. \subsection{$H^{s+1}$-estimates on $(\epsilon\u,\epsilon p,\epsilon \H,\theta)$} Following \cite{A06}, we set \begin{align*} (\hat{p}, \hat \u,\hat \H,\hat \theta):= (\epsilon p-\theta, \epsilon \u, \epsilon\H, \theta-\bar \theta).
\end{align*} A straightforward calculation shows that $(\hat p, \hat \u,\hat\H,\hat \theta)$ solves the following system: \begin{align} &\partial_t \hat{p} +(\u \cdot\nabla)\hat{p} +\frac{1}{\epsilon}{\rm div } \hat \u =0, \label{slowf} \\ &b(-\theta)[\partial_t\hat \u +(\u\cdot\nabla)\hat\u ]+\frac{1}{\epsilon} (\nabla \hat p+\nabla \hat \theta)\nonumber\\ & \qquad \qquad \qquad \qquad \qquad =a( \epsilon p)[({\rm curl\, } \H)\times \hat\H+{\rm div }\Psi (\hat \u )], \label{slowg} \\ &\partial_t\hat \H +\u\cdot\nabla\hat \H+\H{\rm div }\hat\u-\H\cdot\nabla\hat\u- \bar\nu \Delta\hat \H = 0,\quad {\rm div }\hat \H =0,\label{slowh} \\ &\partial_t \hat \theta +(\u\cdot\nabla)\hat \theta + \frac{1}{\epsilon}{\rm div }\hat \u = \epsilon a(\epsilon p)[ \bar \nu\, {\rm curl\, }\H :{\rm curl\, }\hat \H +\Psi (\u ):\nabla\hat \u ]\nonumber\\ & \qquad \qquad \qquad \qquad \qquad\ \quad +\bar \kappa a(\epsilon p){\rm div } (b(\theta)\nabla \hat \theta ). \label{slowi} \end{align} We have \begin{lem}\label{LLb} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be a solution to the problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. Then there exist a constant $l_1>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{T_1,1\}$, it holds that \begin{align}\label{slowao} & \sup_{\tau\in [0,t]}\|(\epsilon p,\epsilon\u, \theta-\bar \theta)(\tau)\|_{H^{s+1}} +l_1\Big\{\int^t_0\|\nabla (\epsilon\u,\theta)\|^2_{H^{s+1}}(\tau)d\tau\Big\}^{1/2}\nonumber\\ & \leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}. \end{align} \end{lem} \begin{proof} Let $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ be a multi-index such that $|\alpha|= s+1$. Set \begin{align*} (\hat{p}_\alpha,\hat{\u}_\alpha,\hat{\H}_\alpha,\hat {\theta}_\alpha):= \left(\partial^\alpha(\epsilon p-\theta), \partial^\alpha (\epsilon \u),\partial^\alpha(\epsilon\H), \partial^\alpha (\theta-\bar\theta)\right).
\end{align*} Then, $\hat{\H}_\alpha$ satisfies \eqref{hhs} and $(\hat{p}_\alpha,\hat{\u}_\alpha,\hat\theta_\alpha)$ solves \begin{align} &\partial_t \hat{p}_\alpha +(\u \cdot\nabla)\hat{p}_\alpha +\frac{1}{\epsilon}{\rm div } \hat{\u}_\alpha =g_1, \label{slowap} \\ &b(-\theta )[\partial_t\hat{\u}_\alpha +(\u \cdot\nabla)\hat{\u}_\alpha ] +\frac{1}{\epsilon}(\nabla \hat{p}_\alpha+\nabla \hat{\theta}_\alpha)\nonumber\\ & \quad \quad \qquad \quad = a( \epsilon p )({\rm curl\, } \H )\times \hat{\H}_\alpha +a( \epsilon p ){\rm div }\Psi (\hat{\u}_\alpha )+g_2, \label{slowaq} \\ &\partial_t \hat\theta_\alpha +(\u \cdot\nabla)\hat\theta_\alpha + \frac{1}{\epsilon}{\rm div }\hat{\u}_\alpha =\epsilon a(\epsilon p )[ \bar \nu\, {\rm curl\, }\H :{\rm curl\, }\hat\H_\alpha +\Psi (\u ):\nabla\hat\u_\alpha] \nonumber\\ & \quad \quad \qquad \quad +\bar \kappa a(\epsilon p ){\rm div } (b(\theta )\nabla \hat \theta_\alpha )+g_3, \label{slowas} \end{align} with initial data \begin{align} (\hat{p}_\alpha ,\hat{\u}_\alpha ,\hat{\H}_\alpha , \hat\theta_\alpha )|_{t=0} :=\big( & \partial^\alpha(\epsilon p_{\rm in}(x) - \theta_{\rm in}(x)),\partial^\alpha(\epsilon\u_{\rm in}(x)),\nonumber\\ & \partial^\alpha(\epsilon\H_{\rm in}(x)), \partial^\alpha(\theta_{\rm in}(x)-\bar \theta)\big),\label{slowat} \end{align} where \begin{align} g_1:=& -[\partial^\alpha, \u]\cdot \nabla(\epsilon p-\theta),\nonumber\\%\label{slowba}\\ g_2:=& -[\partial^\alpha, b(-\theta)]\partial_t(\epsilon \u) -[\partial^\alpha, b(-\theta)\u]\cdot \nabla (\epsilon \u)\nonumber\\ & +[\partial^\alpha, a(\epsilon p){\rm curl\, } (\epsilon\H)]\times \H +[\partial^\alpha, a(\epsilon p)]{\rm div } \Psi(\epsilon \u), \nonumber\\% \label{slowbb}\\ g_3:=& -[\partial^\alpha, \u]\cdot \nabla \theta +\bar \nu\,[\partial^\alpha, a(\epsilon p ) {\rm curl\, }(\epsilon\H)] :{\rm curl\, }(\epsilon\H)\nonumber\\ & +\epsilon[\partial^\alpha, a(\epsilon p)\Psi (\u )]:\nabla(\epsilon\u) +\bar \kappa \partial^\alpha\big ( a(\epsilon p ){\rm div } (b(\theta )\nabla \theta) \big)\nonumber\\ & -\bar \kappa a(\epsilon p) {\rm div } (b(\theta)\nabla \hat \theta_\alpha). \nonumber \end{align} It follows from Proposition \ref{hPa} and the positivity of $a(\cdot)$ and $b(\cdot)$ that $a(\epsilon p)$ and $b(-\theta)$ are bounded away from $0$ uniformly with respect to $\epsilon$, i.e.
\begin{align} a(\epsilon p)\geq \underline{a}>0,\;\;\;b(-\theta)\geq \underline{b}>0.\label{ab1} \end{align} The standard $L^2$-energy estimates for \eqref{slowap}, \eqref{slowaq} and \eqref{slowas} yield that \begin{align} &\frac{1}{2}\frac{d}{dt}\big(\|\hat{p}_\alpha\|_{L^2}^2+\langle b(-\theta)\hat{\u}_\alpha, \hat{\u}_\alpha\rangle+ \|\hat \theta_\alpha\|_{L^2}^2\big)\nonumber\\ & \leq \frac{1}{2}\langle \partial_t b(-\theta)\hat\u_\alpha,\hat\u_\alpha\rangle-\langle (\u \cdot\nabla)\hat{p}_\alpha,\hat{p}_\alpha\rangle-\langle b(-\theta)(\u \cdot\nabla)\hat{\u}_\alpha, \hat{\u}_\alpha\rangle-\langle (\u \cdot\nabla)\hat \theta_\alpha,\hat \theta_\alpha \rangle\nonumber\\ &\quad +\langle a( \epsilon p )({\rm curl\, } \H )\times \hat{\H}_\alpha, \hat\u_\alpha\rangle+\langle a( \epsilon p ){\rm div }\Psi (\hat{\u}_\alpha ), \hat\u_\alpha\rangle\nonumber\\ &\quad +\langle\epsilon a(\epsilon p )[ \bar \nu\, {\rm curl\, }\H :{\rm curl\, }\hat\H_\alpha +\Psi (\u ):\nabla\hat\u_\alpha],\hat\theta_\alpha\rangle+\langle \bar \kappa a(\epsilon p ){\rm div } (b(\theta )\nabla \hat\theta_\alpha ),\hat\theta_\alpha\rangle\nonumber\\ &\quad +\langle g_1,\hat{p}_\alpha\rangle+\langle g_2,\hat{\u}_\alpha\rangle+\langle g_3,\hat\theta_\alpha\rangle . \label{3.21} \end{align} It follows from equation \eqref{hnadd} and the definition of $\mathcal{Q}$ and $\mathcal{S}$ that \begin{align*} \|\partial_t b(-\theta)\|_{L^\infty}\leq C_0\|b(-\theta)\|_{H^s}\|\partial_t\theta\|_{H^s}\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} Therefore, \begin{align*} \frac{1}{2}\langle \partial_t b(-\theta)\hat\u_\alpha,\hat\u_\alpha\rangle \leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} On the other hand, it is easy to see that \begin{align*} -\langle (\u \cdot\nabla)\hat{p}_\alpha,\hat{p}_\alpha\rangle-\langle b(-\theta)(\u \cdot\nabla)\hat{\u}_\alpha, \hat{\u}_\alpha\rangle-\langle (\u \cdot\nabla)\hat\theta_\alpha,\hat\theta_\alpha \rangle \leq C(\mathcal{Q}) \end{align*} and \begin{align*} \langle a( \epsilon p )({\rm curl\, } \H )\times \hat{\H}_\alpha, \hat\u_\alpha\rangle\leq C(\mathcal{Q}). \end{align*} By integration by parts we have \begin{align}\label{slowt} -\langle a(\epsilon p) {\rm div } \Psi(\hat \u_\alpha), \hat \u_\alpha\rangle = & \int a(\epsilon p)[\bar\mu|\nabla\hat \u_\alpha|^2+(\bar\mu+\bar\lambda) |{\rm div } \hat \u_\alpha|^2]dx\nonumber\\ & + \bar\mu\langle(\nabla a(\epsilon p)\cdot\nabla)\hat\u_\alpha, \hat \u_\alpha\rangle\nonumber\\ & + (\bar\mu+\bar\lambda)\langle \nabla a(\epsilon p)\,{\rm div } \hat \u_\alpha, \hat \u_\alpha\rangle\nonumber\\ \equiv &\, d_1+d_2+d_3. \end{align} Thanks to the assumption that $\bar \mu>0$ and $2\bar \mu +3\bar \lambda>0$, there exists a positive constant $\xi_1$, such that \begin{align}\label{slowu} d_1& \geq {\underline a}\,\xi_1\int|\nabla\hat\u_\alpha|^2dx , \end{align} while Cauchy-Schwarz's inequality implies \begin{align}\label{slowv} | d_2|+|d_3|\leq C(\mathcal{Q})\mathcal{S}. \end{align} Similarly, we can obtain \begin{align}\label{sloww} -\langle \bar \kappa a(\epsilon p){\rm div } (b(\theta)\nabla \hat\theta_\alpha ),\hat\theta_\alpha\rangle \geq \bar \kappa {\underline a}\,{\underline b}\|\nabla \hat \theta_\alpha\|^2_{L^2}-C(\mathcal{Q})\mathcal{S}. \end{align} Moreover, one easily gets \begin{align*} |\langle\epsilon a(\epsilon p )[ \bar \nu\, {\rm curl\, }\H :{\rm curl\, }\hat\H_\alpha +\Psi (\u ):\nabla\hat\u_\alpha],\hat\theta_\alpha\rangle|\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} It remains to estimate $\langle g_1,\hat{p}_\alpha\rangle$, $\langle g_2,\hat{\u}_\alpha\rangle$ and $\langle g_3,\hat\theta_\alpha\rangle$ in \eqref{3.21}.
First, an application of H\"{o}lder's inequality gives \begin{align*} |\langle \hat{p}_\alpha, g_1\rangle|\leq \|\hat{p}_\alpha\|_{L^2}\|g_1\|_{L^2}, \end{align*} where $\|g_1\|_{L^2}$ can be bounded, by using \eqref{comc}, as follows \begin{align*} \|g_1\|_{L^2}=& \|[\partial^\alpha, \u]\cdot \nabla(\epsilon p-\theta)\|_{L^2}\\ \leq & C_0(\|\u\|_{W^{1,\infty}}\| \nabla(\epsilon p-\theta)\|_{H^{s}} +\|\u\|_{H^{s+1}}\|\nabla(\epsilon p-\theta)\|_{L^{\infty}}). \end{align*} It follows from the definition of $\mathcal{Q}$ and Sobolev's inequalities that \begin{align*} \| \nabla(\epsilon p,\theta)\|_{H^{s}}\leq \mathcal{Q},\;\;\;\;\;\|\nabla(\epsilon p,\theta)\|_{L^\infty}\leq \mathcal{Q}. \end{align*} Therefore, we obtain $\|g_1\|_{L^2}\leq C(\mathcal{Q})(1+\mathcal{S})$, and \begin{align}\label{slowbe} |\langle \hat p_\alpha, g_1\rangle|\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align} Next, we turn to the term $|\langle\hat\u_\alpha ,g_2\rangle|$. Due to the equation \eqref{hnabb}, one has \begin{align}\label{slowbf} -[\partial^\alpha, b(-\theta)]\partial_t(\epsilon \u) =& [\partial^\alpha, b(-\theta)] \big((\u\cdot \nabla)(\epsilon \u)\big) + [\partial^\alpha, b(-\theta)]\big(b(\theta)\nabla p\big)\nonumber\\ & -[\partial^\alpha, b(-\theta)]\big(b(\theta) a(\epsilon p)(({\rm curl\, } \H)\times (\epsilon\H))\big)\nonumber\\ &-[\partial^\alpha, b(-\theta)]\big(b(\theta) a(\epsilon p){\rm div } \Psi(\epsilon \u)\big). \end{align} The inequality \eqref{comc} implies that \begin{align*} & \left|\left\langle \hat\u_\alpha,[\partial^\alpha, b(-\theta)] \big((\u\cdot \nabla)(\epsilon \u)\big)\right\rangle\right| \nonumber \\ & \leq C_0\| \hat\u_\alpha\|_{L^2}\|[\partial^\alpha, b(-\theta)] \big(\u\cdot \nabla(\epsilon \u)\big)\|_{L^2}\nonumber\\ & \leq C(\mathcal{Q})\big(\|b(-\theta)\|_{W^{1,\infty}}\|\u\cdot \nabla(\epsilon \u)\|_{H^s}+\|b(-\theta)\|_{H^{s+1}}\|\u\cdot \nabla(\epsilon \u)\|_{L^\infty}\big)\nonumber\\ & \leq C(\mathcal{Q}), \end{align*} and \begin{align*} & \left|\left\langle \hat\u_\alpha,[\partial^\alpha, b(-\theta)] \big(b(\theta)\nabla p\big)\right\rangle\right| \nonumber \\ & \leq C_0\| \hat\u_\alpha\|_{L^2}\|[\partial^\alpha, b(-\theta)] \big(b(\theta)\nabla p\big)\|_{L^2}\nonumber\\ & \leq C(\mathcal{Q})\big(\|b(-\theta)\|_{W^{1,\infty}}\|b(\theta)\nabla p\|_{H^s}+\|b(-\theta)\|_{H^{s+1}}\|b(\theta)\nabla p\|_{L^\infty}\big)\nonumber\\ & \leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} The third term on the right-hand side of \eqref{slowbf} can be treated in a similar manner, and we obtain \begin{align*} \left|\left\langle \hat\u_\alpha, [\partial^\alpha, b(-\theta)]\big(b(\theta) a(\epsilon p)(({\rm curl\, } \H)\times (\epsilon\H))\big)\right\rangle\right|\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} To bound the last term on the right-hand side of \eqref{slowbf}, we use \eqref{comc} to deduce that \begin{align*} & \left\langle \hat\u_\alpha,[\partial^\alpha, b(-\theta)]\big(b(\theta) a(\epsilon p){\rm div } \Psi(\epsilon \u)\big)\right\rangle\nonumber\\ & \leq C_0\|\hat\u_\alpha\|_{L^2}\|[\partial^\alpha, b(-\theta)]\big(b(\theta) a(\epsilon p){\rm div } \Psi(\epsilon \u)\big)\|_{L^2}\nonumber\\ & \leq C(\mathcal{Q})(\|b(-\theta)\|_{W^{1,\infty}}\|b(\theta) a(\epsilon p){\rm div } \Psi(\epsilon \u)\|_{H^s} \\ &\quad +\|b(-\theta)\|_{H^{s+1}}\|b(\theta) a(\epsilon p){\rm div } \Psi(\epsilon \u)\|_{L^\infty})\nonumber\\ & \leq C(\mathcal{Q})(1+\mathcal{S}). \end{align*} Hence, it holds that \begin{align}\label{slowbh} \left|\left\langle \hat\u_\alpha, g_2\right\rangle\right| \leq C(\mathcal{Q})(1+\mathcal{S}).
\end{align} Since $g_3$ is similar to $g_1$ in structure, we easily get \begin{align}\label{slowbk} \big|\big\langle \hat\theta_\alpha, g_3\big\rangle\big|\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align} Therefore, it follows from \eqref{slowbe}, \eqref{slowbh}--\eqref{slowbk}, the positivity of $b(-\theta)$, and the definition of $\mathcal{O}$, $\mathcal{O}_0$, $\mathcal{Q}$ and $\mathcal{S}$, that there exists a constant $l_1>0$, such that for $t\in [0,T]$ and $T=\min\{T_1,1\}$, \begin{align*} & \sup_{\tau \in [0,t]}\|(\hat{p}_\alpha, \hat\u_\alpha, \hat \theta_\alpha )(\tau)\|^2_{L^2} +l_1\int^t_0 \| \nabla(\hat\u_\alpha, \hat\theta_\alpha )\|^2_{L^2}(\tau)d\tau\nonumber\\ & \leq C(\mathcal{O}_0)+C(\mathcal{O}(t))t +C(\mathcal{O}(t))\int^t_0\mathcal{S}(\tau)d \tau\nonumber\\ & \leq C(\mathcal{O}_0)+C(\mathcal{O}(t))\sqrt{t} \nonumber\\ & \leq C(\mathcal{O}_0)\exp\{\sqrt{T}\, C(\mathcal{O}(T))\}. \end{align*} Summing up the above estimates for all $\alpha$ with $0\leq |\alpha|\leq s+1$, we obtain the desired inequality \eqref{slowao}. \end{proof} In a way similar to the proof of Lemma \ref{LLb}, we can show that \begin{lem}\label{LLb1} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be a solution to \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. Then there exist a constant $l_2>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{T_1,1\}$, it holds that \begin{align}\nonumber & \sup_{\tau\in [0,t]}\|(\epsilon^2 p,\epsilon^2\u, \epsilon(\theta-\bar \theta))(\tau)\|_{H^{s+2}} +l_2\Big\{\int^t_0\|\nabla (\epsilon^2\u,\epsilon\theta)\|^2_{H^{s+2}}(\tau)d\tau\Big\}^{1/2}\nonumber\\ & \leq C(\mathcal{O}_0)\exp\{\sqrt{T}C(\mathcal{O}(T))\}.\nonumber \end{align} \end{lem} Recalling Lemma \ref{Lc} and the definition of $\mathcal{Q}$ and $\mathcal{S}$, we find that \begin{align} & \|\partial_t(\epsilon p,\epsilon \u,\epsilon\H,\theta)\|_{H^{s-1}}\leq C(\mathcal{Q}),\label{t11}\\ &\|\partial_t(\epsilon p,\epsilon \u,\epsilon\H,\theta)\|_{H^{s}}\leq C(\mathcal{Q})(1+\mathcal{S}),\label{t111} \\ & \epsilon\|\partial_t(\epsilon p,\epsilon \u,\epsilon\H,\theta)\|_{H^{s}}\leq C(\mathcal{Q}). \label{t12} \end{align} Moreover, it follows easily from Lemmas \ref{HES}--\ref{LLb1} and the equation \eqref{hnadd} that for some constant $l_3>0$, \begin{align} \sup_{\tau\in [0,t]}\|\epsilon\partial_t\theta\|_{H^s}^2+ l_3\int_0^t\|\nabla((\epsilon\partial_t)\theta)\|_{H^s}^2(\tau)d\tau\leq C(\mathcal{O}_0) \exp\{\sqrt{T}\, C(\mathcal{O}(T))\}. \label{theta1} \end{align} \subsection{$H^{s-1}$-estimates on $({\rm div } \u,\nabla p)$} To establish the estimates for $p$ and the acoustic part of $\u$, we first control the term $(\epsilon\partial_t)(p,\u)$. To this end, we start with an $L^2$-estimate for the linearized system; indeed, applying the operator $\partial^\beta(\epsilon\partial_t)$ to \eqref{hnaaa}--\eqref{hnadd} yields a system of exactly this form with commutator source terms (see \eqref{fastbb}--\eqref{fastbe} below).
For a given state $(p_0,\u_0,\H_0,\theta_0)$, consider the following linearized system of \eqref{hnaaa}--\eqref{hnadd}: \begin{align} &\partial_t p +(\u_0 \cdot\nabla)p +\frac{1}{\epsilon}{\rm div }(2\u -\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \theta ) =\epsilon a(\epsilon p_0) [\bar \nu \, {\rm curl\, }\H_0 :{\rm curl\, }\H]\nonumber\\ & \quad \quad \qquad \quad\quad \ \ +\epsilon a(\epsilon p_0)\Psi(\u_0 ):\nabla\u +\bar \kappa a(\epsilon p_0) b(\theta_0)\nabla p_0 \cdot \nabla \theta +f_1, \label{slowa} \\ &b(-\theta_0)[\partial_t\u +(\u_0 \cdot\nabla)\u ]+\frac{\nabla p }{\epsilon} =a( \epsilon p_0)[({\rm curl\, } \H_0)\times \H+{\rm div }\Psi (\u )]+f_2, \label{slowb} \\ &\partial_t\H -{\rm curl\, }(\u_0 \times\H )-\bar\nu \Delta\H =f_3,\quad {\rm div }\H =0,\label{slowc} \\ &\partial_t \theta +(\u_0 \cdot\nabla)\theta + {\rm div }\u =\epsilon^2 a(\epsilon p_0) [\bar \nu\, {\rm curl\, }\H_0 :{\rm curl\, }\H +\Psi (\u_0 ):\nabla\u ]\nonumber\\ & \quad \quad \qquad \quad\quad \ \ +\bar \kappa a(\epsilon p_0){\rm div } (b(\theta_0)\nabla \theta )+f_4, \label{slowd} \end{align} where we have added the source terms $f_i$ ($1\leq i\leq 4$) on the right-hand sides of \eqref{slowa}--\eqref{slowd} for later use, and used the following notations: $$ a(\epsilon p_0): = e^{-\epsilon p_0}, \quad b(\theta_0): =e^{\theta_0}.$$ The system \eqref{slowa}--\eqref{slowd} is supplemented with initial data \begin{align} (p ,\u ,\H , \theta )|_{t=0} =(p_{\rm in}(x),\u_{\rm in}(x),\H_{\rm in}(x), \theta_{\rm in}(x)), \quad x \in \mathbb{R}^3. \label{slowe} \end{align} \begin{lem}\label{Lefast} Let $( p , \u , \H , \theta )$ be a solution to the Cauchy problem \eqref{slowa}--\eqref{slowe} on $[0,\hat T]$. Then there exist a constant $l_4>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{\hat T,1\}$, it holds that \begin{align}\label{fasta} & \sup_{\tau \in [0,t]}\|( p , \u , \H)(\tau)\|^2_{L^2} +l_4\int^t_0 \| \nabla( \u , \H)\|_{L^2}^2(\tau)d\tau\nonumber\\ & \leq e^{TC(R_0)}\|( p , \u , \H )(0)\|^2_{L^2} +C(R_0)e^{TC(R_0)}\sup_{\tau \in [0,T]}\|\nabla\theta(\tau)\|^2_{L^2}\nonumber\\ &\quad +C(R_0)\int_0^T\|\nabla(\epsilon\u,\epsilon\H)\|_{L^2}^2d\tau+ C(R_0) \int^T_0 \|\nabla \theta\|^2_{H^1}(\tau)d \tau\nonumber\\ & \quad + C(R_0)\int^T_0\left\{\|f_1\|^2_{L^2}+\|f_2\|^2_{L^2}+\|f_3\|^2_{L^2}+ \|\nabla f_4\|_{L^2}^2\right\}(\tau)d \tau, \end{align} where \begin{align}\label{R00} R_0=\sup_{\tau\in [0,T]}\{\|\partial_t\theta_0(\tau)\|_{L^\infty}, \|(p_0, \u_0,\H_0, \theta_0)(\tau)\|_{W^{1,\infty}}\}. \end{align} \end{lem} \begin{proof} Set \begin{align*} (\tilde p,\tilde \u, \tilde \H,\tilde \theta) =(p, 2\u-\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \theta, \H,\theta). \end{align*} Then $\tilde p$ and $\tilde \H$ satisfy \begin{align} \partial_t \tilde p &+(\u_0 \cdot\nabla)\tilde p +\frac{1}{\epsilon}{\rm div } \tilde\u = \epsilon a(\epsilon p_0) [\bar \nu \, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H] +\frac{\epsilon}{2} a(\epsilon p_0)\Psi(\u_0 ):\nabla \tilde\u\nonumber\\ & + \frac{\epsilon}{2} a(\epsilon p_0)\Psi(\u_0 ):\nabla(\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla\tilde \theta) +\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla p_0 \cdot \nabla \tilde \theta +f_1 \label{fastb} \end{align} and \begin{align} &\partial_t\tilde \H -{\rm curl\, }(\u_0 \times\tilde \H)-\bar\nu \Delta\tilde \H = f_3,\quad {\rm div }\tilde \H =0, \label{fastbbb} \end{align} respectively.
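Before proceeding, let us point out the role of the modified velocity $\tilde\u=2\u-\bar\kappa a(\epsilon p_0)b(\theta_0)\nabla\theta$: with this unknown, the singular term in \eqref{fastb} is exactly $\epsilon^{-1}{\rm div }\tilde\u$, while the equation for $\tilde\u$ derived below carries the singular term $\epsilon^{-1}\nabla\tilde p$. These two terms pair antisymmetrically in the $L^2$-energy estimate, since an integration by parts gives
\begin{align*}
\frac{1}{\epsilon}\langle {\rm div }\tilde\u,\tilde p\rangle +\frac{1}{\epsilon}\langle \nabla\tilde p,\tilde\u\rangle=0,
\end{align*}
which is precisely the cancellation of the singular terms used in \eqref{fastf} below.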
To derive the equation for $\tilde \u$, we first apply the operator $\nabla$ to \eqref{slowd} to obtain \begin{align} \label{fastc} & \partial_t \nabla\tilde \theta +(\u_0 \cdot\nabla)\nabla \tilde \theta + \frac12 \nabla {\rm div }\tilde \u + \frac12 \nabla {\rm div }(\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde \theta)\nonumber\\ & = \nabla\left\{\epsilon^2 a(\epsilon p_0) [\bar \nu\, {\rm curl\, }\H_0 :{\rm curl\, }\tilde\H]\right\} +\frac12 \nabla\left\{\epsilon^2 a(\epsilon p_0)\Psi (\u_0 ):\nabla\tilde\u \right\}\nonumber\\ & \quad +\frac12 \nabla\left\{\epsilon^2 a(\epsilon p_0) \Psi (\u_0 ):\nabla(\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde \theta) \right\}\nonumber\\ & \quad +\nabla\left\{\bar \kappa a(\epsilon p_0){\rm div } (b(\theta_0)\nabla \tilde \theta )\right\} +[\nabla,\u_0]\cdot \nabla \tilde \theta +\nabla f_4. \end{align} Multiplying \eqref{fastc} by $\frac{1 }{2}\bar \kappa a(\epsilon p_0)$ and rewriting the time-derivative term, we get \begin{align} \label{fastd} & \frac12 b(-\theta_0)\left\{\partial_t( \bar \kappa a(\epsilon p_0)b(\theta_0) \nabla\tilde \theta) +(\u_0 \cdot\nabla)[\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde \theta]\right\} \nonumber\\ & = \frac{\bar \kappa}{2} b(-\theta_0)\partial_t\{a(\epsilon p_0) b(\theta_0)\} \nabla\tilde \theta +\frac{\bar \kappa}{2} b(-\theta_0)\left\{\u_0\cdot \nabla [a(\epsilon p_0)b(\theta_0)] \nabla\tilde \theta\right\}\nonumber\\ & \quad - \frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla {\rm div } \tilde \u - \frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla {\rm div } (\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde \theta)\nonumber\\ & \quad +\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla\left\{\epsilon^2 a(\epsilon p_0) [\bar \nu\, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H]\right\} +\frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla\left\{\epsilon^2 a(\epsilon p_0) \Psi (\u_0 ):\nabla \tilde\u \right\} \nonumber\\ & \quad +\frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla\left\{\epsilon^2 a(\epsilon p_0) \Psi (\u_0 ):\nabla(\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde\theta)\right\}\nonumber\\ & \quad +\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla\left\{\bar \kappa a(\epsilon p_0){\rm div } (b(\theta_0)\nabla \tilde \theta )\right\}\nonumber\\ &\quad +\frac{1 }{2} \bar \kappa a(\epsilon p_0) [\nabla,\u_0]\cdot \nabla \tilde \theta +\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla f_4.
\end{align} Subtracting \eqref{fastd} from \eqref{slowb} yields \begin{align}\label{faste} \frac{1}{2} & b(-\theta_0)[\partial_t\tilde \u+\u_0\cdot \nabla \tilde\u] +\frac{\nabla \tilde p}{\epsilon}\nonumber\\ = & -\frac{\bar \kappa}{2} b(-\theta_0)\partial_t\{a(\epsilon p_0)b(\theta_0)\} \nabla\tilde \theta -\frac{\bar \kappa}{2} b(-\theta_0)\left\{\u_0\cdot \nabla [a(\epsilon p_0)b(\theta_0)] \nabla\tilde \theta\right\}\nonumber\\ &+ \frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla {\rm div } \tilde \u + \frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla {\rm div } ( \bar \kappa a(\epsilon p_0)b(\theta_0) \nabla\tilde \theta ) \nonumber\\ &-\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla\left\{\epsilon^2 a(\epsilon p_0) [\bar \nu\, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H]\right\}\nonumber\\ &-\frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla\left\{\epsilon^2 a(\epsilon p_0) \Psi (\u_0 ):\nabla\tilde \u \right\}\nonumber\\ & -\frac{1 }{4} \bar \kappa a(\epsilon p_0) \nabla\left\{ \epsilon^2 a(\epsilon p_0)\Psi (\u_0 ):\nabla( \bar \kappa a(\epsilon p_0)b(\theta_0) \nabla\tilde \theta ) \right\}\nonumber\\ & -\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla\left\{\bar \kappa a(\epsilon p_0){\rm div } (b(\theta_0) \nabla\tilde\theta)\right\}+ a(\epsilon p_0) [\bar \nu\, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H]\nonumber\\ & + \frac{1}{2}a(\epsilon p_0) [\Psi (\u_0 ):\nabla\tilde \u ] + \frac{1}{2}a(\epsilon p_0) [\Psi (\u_0 ):\nabla ( \bar \kappa a(\epsilon p_0)b(\theta_0) \nabla\tilde \theta ) ]\nonumber\\ & -\frac{1 }{2} \bar \kappa a(\epsilon p_0) [\nabla,\u_0]\cdot \nabla \tilde \theta +a( \epsilon p_0)[({\rm curl\, } \H_0)\times \tilde\H] \nonumber\\ & +\frac{1}{2} a( \epsilon p_0){\rm div }\Psi (\bar \kappa a(\epsilon p_0)b(\theta_0)\nabla \tilde\theta) +\frac{1}{2} a( \epsilon p_0){\rm div }\Psi (\tilde \u )-\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla f_4+f_2\nonumber\\ :=& \sum^{14}_{i=1}h_i +\frac{1}{2} a( \epsilon p_0){\rm div }\Psi (\tilde\u ) -\frac{1 }{2} \bar \kappa a(\epsilon p_0) \nabla f_4+f_2. 
\end{align} Taking the $L^2(\mathbb{R}^3)$ inner product of \eqref{fastb} with $\tilde p$, of \eqref{fastbbb} with $\tilde \H$, and of \eqref{faste} with $\tilde \u$, respectively, and summing up the resulting identities, we deduce that \begin{align}\label{fastf} \frac{d}{dt} & \Big\{\frac12 \langle \tilde p, \tilde p\rangle +\frac14 \langle b(-\theta_0) \tilde \u, \tilde \u\rangle +\frac12 \langle \tilde \H,\tilde\H \rangle \Big\}+ \bar \nu \|\nabla \tilde\H\|^2_{L^2} \nonumber\\ = & -\langle (\u_0\cdot \nabla) \tilde p, \tilde p \rangle +\frac{1}{4}\langle \partial_t b(-\theta_0) \tilde \u, \tilde \u\rangle -\frac12 \langle b(-\theta_0)(\u_0\cdot \nabla) \tilde \u, \tilde \u\rangle\nonumber\\ &+\langle \epsilon a(\epsilon p_0) [\bar \nu \, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H], \tilde p\rangle +\frac{\epsilon}{2}\langle a(\epsilon p_0)\Psi(\u_0 ):\nabla \tilde\u, \tilde p\rangle\nonumber\\ & + \frac{\epsilon}{2}\langle a^2(\epsilon p_0)b(\theta_0)\Psi(\u_0 ):\nabla(\nabla \tilde\theta), \tilde p\rangle +\langle \bar \kappa a(\epsilon p_0)b(\theta_0)\nabla p_0 \cdot \nabla \tilde \theta, \tilde p\rangle\nonumber\\ & +\sum^{14}_{i=1}\left\langle h_i, \tilde \u \right\rangle +\frac12 \langle a(\epsilon p_0) {\rm div } \Psi(\tilde\u),\tilde\u\rangle\nonumber\\ & - \frac{1 }{2}\left \langle \bar \kappa a(\epsilon p_0) \nabla f_4, \tilde \u \right\rangle +\left\langle f_2, \tilde \u \right\rangle + \langle f_3, \tilde \H \rangle +\langle f_1,\tilde p\rangle, \end{align} where the singular terms have been canceled out. Now, the terms on the right-hand side of \eqref{fastf} can be estimated as follows. First, it follows from the regularity of $(p_0,\u_0,\H_0,\theta_0)$, an integration by parts and Cauchy-Schwarz's inequality that \begin{align} & \frac{1}{4}\left|\langle \partial_t b(-\theta_0) \tilde \u,\tilde \u\rangle\right| \leq \frac{1}{4}\| \partial_t b(-\theta_0)\|_{L^\infty}\|\tilde \u\|^2_{L^2}\leq C(R_0)\|\tilde \u\|^2_{L^2},\nonumber\\ &|\langle (\u_0\cdot \nabla)\tilde p, \tilde p \rangle| =\frac{1}{2}\left|\int({\rm div } \u_0)|\tilde p|^2dx\right| \leq C(R_0)\|\tilde p\|_{L^2}^2,\nonumber\\ &\frac12 |\langle b(-\theta_0)(\u_0\cdot \nabla)\tilde \u, \tilde \u\rangle| \leq C(R_0)\|\tilde \u\|_{L^2}^2,\nonumber\\ & |\langle \epsilon a(\epsilon p_0) [\bar \nu \, {\rm curl\, }\H_0 :{\rm curl\, }\tilde \H], \tilde p\rangle| \leq C(R_0)(\|\epsilon\nabla \tilde\H\|^2_{L^2}+\|\tilde p\|^2_{L^2}),\nonumber\\ & \frac{\epsilon}{2}|\langle a(\epsilon p_0)\Psi(\u_0 ):\nabla \tilde\u, \tilde p\rangle| \leq C(R_0)(\|\epsilon \nabla \tilde\u\|^2_{L^2}+\|\tilde p\|^2_{L^2}), \nonumber\\ & \frac{\epsilon}{2}|\langle a^2(\epsilon p_0)b(\theta_0)\Psi(\u_0 ) :\nabla(\nabla \tilde\theta), \tilde p\rangle| \leq C(R_0)\|\tilde p\|^2_{L^2} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad +G_1(\epsilon p_0,\theta_0)\sum_{|\alpha|=2} \|\partial^\alpha(\epsilon \tilde\theta)\|^2_{L^2}, \nonumber \\% \label{fasto}\\ & |\langle \bar \kappa a(\epsilon p_0)b(\theta_0)\nabla p_0 \cdot \nabla \tilde \theta, \tilde p\rangle| \leq C(R_0)(\| \nabla \tilde\theta\|^2_{L^2}+\|\tilde p\|^2_{L^2}),\nonumber \end{align} where $G_1(\cdot,\cdot)$ is a smooth function. Similarly, one can bound the terms involving $h_i$ in \eqref{faste} as follows.
\begin{align} \sum^{14}_{i=1} |\langle h_i, \tilde \u\rangle| \leq & \frac{\bar \nu}{8}\|\nabla\tilde \H\|^2_{L^2}+\frac{\underline a \bar \mu}{8}\|\nabla\tilde \u\|^2_{L^2} +\frac{\underline a \bar \mu}{8}\|{\rm div }\tilde \u\|^2_{L^2}\nonumber\\ &+ C(R_0) \|\tilde \u\|^2_{L^2}+ C(R_0) \|\nabla (\epsilon \tilde \u,\tilde \theta)\|^2_{L^2} + G_2(\epsilon p_0,\theta_0)\|\Delta\tilde \theta\|^2_{L^2},\nonumber \end{align} where $G_2(\cdot,\cdot)$ is a smooth function. For the dissipative term $\frac12 \langle a(\epsilon p_0) {\rm div } \Psi( \tilde \u),\tilde\u\rangle $, we can employ arguments similar to those used in the estimate of the slow motion in \eqref{slowt}--\eqref{slowv} to obtain that \begin{align}\nonumber -\frac12\langle a(\epsilon p_0) {\rm div } \Psi(\tilde \u), \tilde \u\rangle\geq \frac{\underline a \bar \mu}{4} (\|\nabla \tilde \u\|^2_{L^2}+ \|{\rm div } \tilde \u\|^2_{L^2}) -C(R_0)\|\tilde \u\|^2_{L^2}. \end{align} Finally, putting all the estimates above into \eqref{fastf} and applying Cauchy-Schwarz's and Gronwall's inequalities, we get \eqref{fasta}. \end{proof} In the next lemma we utilize Lemma \ref{Lefast} to control $\big((\epsilon \partial_t)p, (\epsilon \partial_t)\u,(\epsilon \partial_t) \H\big)$. \begin{lem}\label{LLfast} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be the solution to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. Set $$(p_\beta, \u_\beta,\H_\beta,\theta_\beta):=\partial^\beta\big((\epsilon\partial_t)p, (\epsilon\partial_t)\u,(\epsilon\partial_t)\H ,(\epsilon\partial_t)\theta\big), $$ where $1\leq |\beta|\leq s-1$. Then there exist a constant $l_5>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T]$, $T=\min\{ T_1,1\}$, it holds that \begin{align} \label{fastba} \sup_{\tau \in [0,t]}\|(p_\beta, \u_\beta,\H_\beta)(\tau)\|^2_{L^2} +& l_5\int^t_0 \| \nabla( \u_\beta,\H_\beta)\|_{L^2}^2(\tau)d\tau \nonumber\\ & \leq C(\mathcal{O}_0)\exp\{\sqrt{T}\,C(\mathcal{O}(T))\}.
\end{align} \end{lem} \begin{proof} An application of the operator $\partial^\beta(\epsilon\partial_t)$ to the system \eqref{hnaaa}--\eqref{hnadd} leads to \begin{align} &\partial_t p_\beta +(\u \cdot\nabla)p_\beta +\frac{1}{\epsilon}{\rm div }(2\u_\beta -\bar \kappa a(\epsilon p )b(\theta )\nabla \theta_\beta ) =\epsilon a(\epsilon p )[\bar \nu \, {\rm curl\, }\H :{\rm curl\, }\H_\beta] \nonumber\\ & \quad \quad \qquad +\epsilon a(\epsilon p )\Psi(\u ):\nabla\u_\beta +\bar \kappa a(\epsilon p )b(\theta )\nabla p \cdot \nabla \theta_\beta + \tilde g_1, \label{fastbb} \\ &b(-\theta )[\partial_t\u_\beta +(\u \cdot\nabla)\u_\beta ]+\frac{\nabla p_\beta }{\epsilon} =a( \epsilon p )[({\rm curl\, } \H )\times \H_\beta +{\rm div }\Psi (\u_\beta )]+\tilde g_2, \label{fastbc} \\ &\partial_t\H_\beta -{\rm curl\, }(\u \times\H_\beta )-\bar\nu \Delta\H_\beta =\tilde g_3,\quad {\rm div }\H_\beta =0,\label{fastbd} \\ &\partial_t \theta_\beta +(\u \cdot\nabla)\theta_\beta + {\rm div }\u_\beta =\epsilon^2 a(\epsilon p ) [\bar \nu\, {\rm curl\, }\H :{\rm curl\, }\H_\beta]\nonumber\\ & \quad \quad \qquad +\epsilon^2 a(\epsilon p )\Psi (\u ):\nabla\u_\beta +\bar \kappa a(\epsilon p ) {\rm div } (b(\theta )\nabla \theta_\beta )+\tilde g_4, \label{fastbe} \end{align} where \begin{align*} \tilde g_1:=& -[\partial^\beta(\epsilon \partial_t), \u]\cdot \nabla p +\frac{1}{\epsilon}[\partial^\beta(\epsilon \partial_t),(\bar \kappa a(\epsilon p ) b(\theta ))] \Delta \theta \nonumber\\ & +\frac{1}{\epsilon} [\partial^\beta(\epsilon \partial_t), \nabla (\bar \kappa a(\epsilon p ) b(\theta ))]\cdot \nabla \theta +\epsilon\bar \nu\,[ \partial^\beta(\epsilon \partial_t), a(\epsilon p ) {\rm curl\, }\H] :{\rm curl\, }\H\nonumber\\ & +\epsilon[\partial^\beta(\epsilon \partial_t), a(\epsilon p)\Psi (\u )]:\nabla\u +[\partial^\beta(\epsilon \partial_t),\bar \kappa a(\epsilon p )b(\theta )\nabla p]\cdot\nabla\theta, \\% \label{fastbf}\\ \tilde g_2:=& -[\partial^\beta(\epsilon \partial_t), b(-\theta)]\partial_t \u -[\partial^\beta(\epsilon \partial_t), b(-\theta)\u]\cdot \nabla \u\nonumber\\ & +[\partial^\beta(\epsilon \partial_t), a(\epsilon p){\rm curl\, } \H]\times \H +[\partial^\beta(\epsilon \partial_t), a(\epsilon p)]{\rm div } \Psi( \u), \\% \label{fastbg}\\ \tilde g_3:=& \, \partial^\beta(\epsilon \partial_t)\big({\rm curl\, }(\u\times \H)\big) -{\rm curl\, }(\u\times \H_\beta), \\% \label{fastbh}\\ \tilde g_4:=& -[\partial^\beta(\epsilon \partial_t), \u]\cdot \nabla \theta +\epsilon^2\bar \nu\,[ \partial^\beta(\epsilon \partial_t), a(\epsilon p ) {\rm curl\, }\H] :{\rm curl\, } \H \nonumber\\ & +\epsilon^2[\partial^\beta(\epsilon \partial_t), a(\epsilon p)\Psi (\u )]:\nabla \u \nonumber\\ & +\bar \kappa \partial^\beta(\epsilon \partial_t)\big ( a(\epsilon p ){\rm div } (b(\theta ) \nabla \theta) \big) -\bar \kappa a(\epsilon p) {\rm div } (b(\theta )\nabla \theta_\beta ).
\\% \label{fastbi} \end{align*} It follows from the linear estimate \eqref{fasta} that for some $l_4>0$, \begin{align} \label{fastbq} & \sup_{\tau \in [0,t]}\|(p_\beta, \u_\beta , \H_\beta)(\tau)\|^2_{L^2} +l_4\int^t_0 \| \nabla( \u_\beta , \H_\beta)\|_{L^2}^2(\tau)d\tau\nonumber\\ & \leq e^{TC(R)}\|(p_\beta, \u_\beta , \H_\beta)(0)\|^2_{L^2} +C(R)e^{TC(R)}\sup_{\tau \in [0,T]}\|\nabla\theta_\beta(\tau)\|^2_{L^2}\nonumber\\ & \quad +TC(R)\sup_{\tau \in [0,T]}\|\nabla( \epsilon \u_\beta, \epsilon \H_\beta)(\tau)\|^2_{L^2}+C(R)\int^T_0 \|\nabla \theta_\beta\|^2_{H^1}(\tau)d \tau\nonumber\\ & \quad + C(R)\int^T_0\left \{\|\tilde{g}_1\|^2_{L^2}+\|\tilde{g}_2\|^2_{L^2} +\|\tilde{g}_3\|^2_{L^2} + \|\nabla \tilde{g}_4\|_{L^2}^2\right\}(\tau)d \tau , \end{align} where $R$ is defined as $R_0$ in \eqref{R00} with $(p_0,\u_0,\H_0, \theta_0)$ replaced with $(p,\u,\H,\theta)$. It remains to control the terms $\|\tilde g_1\|_{L^2}^2$, $\|\tilde g_2\|_{L^2}^2$, $\|\tilde g_3\|_{L^2}^2$, and $\|\nabla \tilde g_4\|_{L^2}^2$. The first term of $\tilde g_1$ can be bounded as follows. \begin{align} \left\|[\partial^\beta(\epsilon \partial_t), \u]\cdot \nabla p\right\|_{L^2} \leq & \epsilon C_0 (\|\u\|_{H^{s-1}}\|(\epsilon \partial_t) \nabla p\|_{H^{s-2}} +\|(\epsilon\partial_t) \u\|_{H^{s-1}}\|\nabla p \|_{H^{s-1}})\nonumber\\ \leq & C(\mathcal{Q}). \nonumber \end{align} Similarly, the second term of $\tilde g_1$ can be bounded as follows: \begin{align} & \frac{1}{\epsilon}\|[\partial^\beta(\epsilon \partial_t),(\bar \kappa a(\epsilon p ) b(\theta ))] \Delta \theta\|_{L^2}\nonumber\\ & \leq C_0 \big(\| a(\epsilon p ) b(\theta )\|_{H^{s-1}}\|\partial_t\Delta \theta \|_{H^{s-2}} + \|\partial_t( a(\epsilon p ) b(\theta ))\|_{H^{s-1}} \|\Delta \theta\|_{H^{s-1}}\big) \nonumber\\ & \leq C(\mathcal{Q})(1+\mathcal{S}). \nonumber \end{align} The other four terms in $\tilde g_1$ can be treated similarly and hence can be bounded from above by $C(\mathcal{Q})(1+\mathcal{S})$. For the first term of $\tilde g_2$, one has by the equation \eqref{hnabb} that \begin{align} \label{fastbl} [\partial^\beta(\epsilon \partial_t), b(-\theta)]\partial_t \u= & [\partial^\beta(\epsilon \partial_t), b(-\theta)]\{(\u \cdot\nabla)\u\}\nonumber\\ &+\frac{1}{\epsilon} [\partial^\beta(\epsilon \partial_t), b(-\theta)]\{b^{-1}(-\theta)\nabla p \}\nonumber\\ & - [\partial^\beta(\epsilon \partial_t), b(-\theta)]\{b^{-1}(-\theta)a( \epsilon p)[({\rm curl\, } \H)\times \H]\}\nonumber\\ & - [\partial^\beta(\epsilon \partial_t), b(-\theta)]\{b^{-1}(-\theta)a( \epsilon p){\rm div }\Psi (\u ) \} . \end{align} Note that the terms on the right-hand side of \eqref{fastbl} have a similar structure to that of $\tilde g_1$. Thus, we see that \begin{align} \nonumber \|[\partial^\beta(\epsilon \partial_t), b(-\theta)]\partial_t \u\|_{L^2}\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align} Similarly, the other three terms of $\tilde g_2$ can be bounded from above by $C(\mathcal{Q})(1+\mathcal{S})$. Next, by the identity \eqref{vb}, one can rewrite $\tilde g_3$ as \begin{align}\nonumber \tilde g_3 =& -[\partial^\beta(\epsilon \partial_t), {\rm div } \u] \H - [\partial^\beta(\epsilon \partial_t),\u]\cdot \nabla \H +\sum_{i=1}^3 [\partial^\beta(\epsilon \partial_t), \nabla \u_i] \H . \end{align} Following a process similar to that in the estimate of $\tilde g_1$, one gets \begin{align} \nonumber \|\tilde g_3\|_{L^2}\leq C(\mathcal{Q})(1+\mathcal{S}).
\end{align} Analogously, \begin{align} \nonumber \|\nabla \tilde g_4\|_{L^2}\leq C(\mathcal{Q})(1+\mathcal{S}). \end{align} We proceed to control the other terms on the right-hand side of \eqref{fastbq}. It follows from \eqref{theta1} that $$ C(R)e^{TC(R)}\sup_{\tau \in [0,T]}\|\nabla\theta_\beta(\tau)\|^2_{L^2}\leq C(\mathcal{O}(T)) \exp\{\sqrt{T}\, C(\mathcal{O}(T))\} $$ and $$ \int^T_0 \|\Delta \theta_\beta\|^2_{L^2}(\tau)d \tau\leq \int^T_0 \|(\epsilon \partial_t) \theta \|^2_{H^{s+1}}(\tau)d \tau \leq C(\mathcal{O}_0) \exp\{\sqrt{T} C(\mathcal{O}(T))\}. $$ Thanks to \eqref{t12}, one has \begin{align} TC(R)\sup_{\tau \in [0,T]}\|\nabla( \epsilon \u_\beta, \epsilon \H_\beta)(\tau)\|^2_{L^2} & \leq TC(\mathcal{O}(T))\sup_{\tau \in [0,T]}\|(\epsilon \partial_t)( \epsilon \u, \epsilon \H)(\tau)\|^2_{H^{s}} \nonumber \\ &\leq TC(\mathcal{O}(T)).\nonumber% \end{align} Then, the desired inequality \eqref{fastba} follows from the above estimates and the inequality \eqref{fastbq}. \end{proof} Now we are in a position to estimate the Sobolev norm of $({\rm div } \u,\nabla p)$ based on Lemma \ref{LLfast}. \begin{lem}\label{LLfastb} Let $s\geq 4$ and $(p,\u,\H,\theta)$ be the solution to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. Then there exist a constant $l_6>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T_1]$, $T=\min\{ T_1,1\}$, it holds that \begin{align} \label{fastca} \sup_{\tau \in [0,t]}\left\{\|p(\tau)\|_{H^s}+\|{\rm div } \u(\tau)\|_{H^{s-1}}\right\} & + l_6\int^t_0\left\{\|\nabla p\|^2_{H^s}+\|\nabla{\rm div }\u\|^2_{H^{s-1}}\right\}(\tau)d\tau\nonumber\\ & \leq C(\mathcal{O}_0)\exp\big\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\big\}. \end{align} \end{lem} \begin{proof} Rewrite the equations \eqref{hnaaa} and \eqref{hnabb} as \begin{align} {\rm div } \u = & -\frac{1}{2}(\epsilon \partial_t) p -\frac{\epsilon}{2}(\u \cdot\nabla)p+ \frac{1}{2}{\rm div }(\bar \kappa a(\epsilon p)b(\theta)\nabla \theta)+\frac{\epsilon^2\bar \nu }{2} a(\epsilon p) |{\rm curl\, }\H|^2\nonumber\\ & +\frac{\epsilon^2}{2} a(\epsilon p)\Psi(\u):\nabla\u +\frac{\epsilon\bar \kappa}{2} a(\epsilon p)b(\theta)\nabla p \cdot \nabla \theta, \label{fastcb} \\ {\nabla p }= & - b(-\theta) (\epsilon\partial_t)\u -{\epsilon} b(-\theta)(\u \cdot\nabla)\u \nonumber\\ & +\epsilon a( \epsilon p)[({\rm curl\, } \H)\times \H] +\epsilon a( \epsilon p){\rm div }\Psi (\u ). \label{fastcc} \end{align} Then, \begin{align} \|{\rm div } \u\|_{H^{s-1}}\leq &C_0\|(\epsilon \partial_t) p\|_{H^{s-1}} +C_0 \epsilon\,\|\u\|_{H^{s-1}}\|\nabla p\|_{H^{s-1}}\nonumber\\ & +C_0 \|{\rm div }(\bar \kappa a(\epsilon p)b(\theta)\nabla \theta)\|_{H^{s-1}}+C_0 \| a(\epsilon p)\|_{L^\infty} \|\epsilon\,{\rm curl\, }\H\|_{H^{s-1}}^2\nonumber\\ & +C_0\| a(\epsilon p)\|_{L^\infty}\|\Psi(\epsilon\u):(\epsilon\nabla\u)\|_{H^{s-1}} \nonumber\\ & +C_0\| a(\epsilon p)b(\theta)\|_{L^\infty}\|(\epsilon\nabla p)\|_{H^{s-1}}\| \nabla \theta\|_{H^{s-1}}.
\label{fastcd} \end{align} It follows from Lemmas \ref{EP1}--\ref{LLb1} and \ref{LLfast}, and the inequalities \eqref{t11}--\eqref{theta1} that \begin{align*} \|(\epsilon \partial_t) p\|_{H^{s-1}}\leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \\ \epsilon\,\|\u\|_{H^{s-1}}\|\nabla p\|_{H^{s-1}}\leq & \epsilon C(\mathcal{O}), \\ \|{\rm div }(\bar \kappa a(\epsilon p)b(\theta)\nabla \theta)\|_{H^{s-1}} \leq & C_0 \|\Delta \theta\|_{H^{s-1}}+C_0\|\nabla \theta\|_{H^{s-1}}\\ \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \\ \|\epsilon\,{\rm curl\, }\H\|_{H^{s-1}}\leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \\ \|\Psi(\epsilon\u):(\epsilon\nabla\u)\|_{H^{s-1}} \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \\ \|(\epsilon\nabla p)\|_{H^{s-1}}\| \nabla \theta\|_{H^{s-1}} \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \nonumber \end{align*} These bounds together with \eqref{fastcd} imply that \begin{align}\nonumber \sup_{\tau \in [0,t]} \|{\rm div } \u(\tau)\|_{H^{s-1}} \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \end{align} Similar arguments applied to the equation \eqref{fastcc} for $\nabla p$ yield \begin{align}\label{fastcf} \sup_{\tau \in [0,t]} \|p\|_{H^s} + l_6\int^t_0 \|\nabla p\|^2_{H^s}(\tau)d\tau \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\} \end{align} for some constant $l_6>0$. To obtain the desired inequality \eqref{fastca}, we shall establish the following estimate \begin{align} \label{fastcg} \int^T_0 \| \nabla {\rm div } \u\|^2_{H^{s-1}} (\tau)d\tau \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \end{align} In fact, for any multi-index $\alpha$ satisfying $1\leq |\alpha|\leq s$, one can apply the operator $\partial^\alpha$ to \eqref{fastcb} and then take the inner product with $\partial^\alpha {\rm div } \u$ to obtain \begin{align}\label{fastch} \int^T_0 \|\partial^\alpha {\rm div } \u\|^2_{L^2}(\tau)d \tau = & -\frac{1}{2}\int^T_0 \langle \partial^\alpha(\epsilon \partial_t) p, \partial^\alpha {\rm div } \u\rangle (\tau)d\tau\nonumber\\ & +\int^T_0\langle \Xi,\partial^\alpha {\rm div } \u\rangle (\tau)d\tau, \end{align} where \begin{align*} \Xi : =& -\frac{\epsilon}{2}(\u \cdot\nabla)p+ \frac{1}{2}{\rm div }(\bar \kappa a(\epsilon p)b(\theta)\nabla \theta)+\frac{\epsilon^2\bar \nu }{2} a(\epsilon p) |{\rm curl\, }\H|^2\nonumber\\ & +\frac{\epsilon^2}{2} a(\epsilon p)\Psi(\u):\nabla\u +\frac{\epsilon\bar \kappa}{2} a(\epsilon p)b(\theta)\nabla p \cdot \nabla \theta. \end{align*} It thus follows from \eqref{fastba} and similar arguments to those for \eqref{fastcf}, that for all $1\leq |\alpha|\leq s$, \begin{align*} \int^T_0 \|\partial^\alpha\Xi (\tau)\|^2_{L^2}d\tau \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \end{align*} whence, \begin{align} &\int^T_0\left|\langle \Xi,\partial^\alpha {\rm div } \u\rangle\right| (\tau)d\tau\nonumber\\ & \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\} \left\{\int^T_0\| \partial^\alpha {\rm div } \u\|^2_{L^2} (\tau)d\tau \right\}^{1/2} \nonumber\\ & \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\} +\frac14 \int^T_0\| \partial^\alpha {\rm div } \u\|^2_{L^2} (\tau)d\tau .
\nonumber \end{align} For the first term on the right-hand side of \eqref{fastch}, one gets by integration by parts that \begin{align*} -\frac{1}{2}\int^T_0 \langle \partial^\alpha(\epsilon \partial_t) p, \partial^\alpha {\rm div } \u\rangle (\tau)d\tau = & -\frac{1}{2} \langle \partial^\alpha p, \epsilon\partial^\alpha {\rm div } (\epsilon\u)\rangle\Big|^T_0 \nonumber\\ & +\frac{1}{2}\int^T_0\langle \partial^\alpha\nabla p,\partial^\alpha (\epsilon\partial_t) \u\rangle (\tau)d\tau. \end{align*} By virtue of the estimate \eqref{slowao} on $(\epsilon q,\epsilon\u,\theta-\bar \theta)$ and \eqref{fastcf}, we find that \begin{align} \left| \frac{1}{2} \langle \partial^\alpha p, \epsilon\partial^\alpha {\rm div } (\epsilon\u)\rangle\Big|^T_0\right| \leq & \sup_{\tau\in [0,T]}\{\|p\|_{H^s}\|\epsilon\u\|_{H^{s+1}}\}\nonumber\\ \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}, \nonumber\\ \frac{1}{2} \bigg|\int^T_0 \langle \partial^\alpha \nabla p, \partial^\alpha (\epsilon \partial_t) \u\rangle (\tau)d\tau\bigg| \leq &\frac{1}{2} \bigg\{\int^T_0 \|\partial^\alpha\nabla p \|^2_{L^2} (\tau)d\tau\bigg\}^{1/2}\nonumber\\ & \times \bigg\{\int^T_0 \|\partial^\alpha (\epsilon \partial_t) \u\|^2_{L^2} (\tau)d\tau\bigg\}^{1/2} \nonumber\\ \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \nonumber \end{align} These bounds, together with \eqref{fastch}, yield the desired estimate \eqref{fastcg}. This completes the proof. \end{proof} \subsection{$H^{s-1}$-estimate on ${\rm curl\, } \u$} Another key point in obtaining a uniform bound for $\u$ is the following estimate on ${\rm curl\, }\u$. \begin{lem}\label{LLT} Let $s>5/2$ be an integer and $(p,\u,\H,\theta)$ be the solution to the Cauchy problem \eqref{hnaaa}--\eqref{hnadd}, \eqref{hnas} on $[0,T_1]$. Then there exist a constant $l_7>0$ and an increasing function $C(\cdot)$ such that, for any $\epsilon \in (0,1]$ and $t\in [0,T_1]$, $T=\min\{ T_1,1\}$, it holds that \begin{align} \label{exta} & \sup_{\tau \in [0,t]}\left\{\|{\rm curl\, }(b({-\theta})\u)(\tau)\|^2_{H^{s-1}} +\|{\rm curl\, } \H (\tau)\|^2_{H^{s-1}}\right\}\nonumber\\ & + l_7\int^t_0\left\{\|\nabla{\rm curl\, }(b({-\theta})\u)\|^2_{H^{s-1}} +\|\nabla{\rm curl\, } \H \|^2_{H^{s-1}}\right\}(\tau)d \tau \nonumber\\ \leq & C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}.
\end{align} \end{lem} \begin{proof} Applying the operator \emph{curl} to the equations \eqref{hnabb} and \eqref{hnacc}, using the identities \eqref{vbd} and \eqref{vbc}, and the fact that ${\rm curl\, } \nabla =0$, one infers \begin{align} & \partial_t ({\rm curl\, }(b(-\theta)\u)) +(\u \cdot\nabla)({\rm curl\, }(b(-\theta)\u))\nonumber\\ & \quad \ \ = {\rm curl\, }\{a( \epsilon p) ({\rm curl\, } \H)\times \H\} +\bar\mu {\rm div }\{a(\epsilon p)b(\theta)\nabla ({\rm curl\, }(b(-\theta)\u))\}+ \Upsilon_1, \label{extb} \\ &\partial_t({\rm curl\, } \H) -{\rm curl\, }[{\rm curl\, }(\u\times\H )]-\bar\nu \Delta({\rm curl\, }\H) = 0,\label{extc} \end{align} where $\Upsilon_1$ is defined by \begin{align*} \Upsilon_1:= &\bar \mu {\rm div }\big(a(\epsilon p)(\nabla b(\theta))\otimes {\rm curl\, }(b(-\theta)\u)\big) -\bar \mu \nabla a(\epsilon p)\cdot \nabla (b(\theta){\rm curl\, }(b(-\theta)\u))\nonumber\\ & - \bar \mu a(\epsilon p)\Delta((\nabla b(\theta))\times (b(-\theta)\u)) -\nabla a(\epsilon p)\times ( \bar \mu \Delta \u+(\bar \mu +\bar \lambda)\nabla {\rm div } \u)\nonumber\\ & +{\rm curl\, }(b(-\theta)\u \partial_t \theta)+[{\rm curl\, }, \u]\cdot \nabla (b(-\theta)\u) +{\rm curl\, }(b(-\theta)\u(\u\cdot\nabla \theta)). \end{align*} For any multi-index $\alpha$ satisfying $0\leq |\alpha|\leq s-1$, we apply the operator $\partial^\alpha$ to \eqref{extb} and \eqref{extc} to obtain \begin{align} & \partial_t \partial^\alpha ({\rm curl\, }(b(-\theta)\u)) +(\u \cdot\nabla)[\partial^\alpha({\rm curl\, }(b(-\theta)\u))]\nonumber\\ & \quad\quad\quad \ \ = \partial^\alpha {\rm curl\, }\{a( \epsilon p) ({\rm curl\, } \H)\times \H\}\nonumber\\ & \quad \quad\quad\quad\ \ +\bar\mu {\rm div }\{a(\epsilon p)b(\theta)\nabla [ \partial^\alpha ({\rm curl\, }(b(-\theta)\u))]\} + \partial^\alpha \Upsilon_1 + \Upsilon_2 , \label{extd} \\ &\partial_t \partial^\alpha ({\rm curl\, } \H) -\partial^\alpha{\rm curl\, }[{\rm curl\, }(\u\times\H )]-\bar\nu \Delta\partial^\alpha({\rm curl\, }\H) = 0,\label{exte} \end{align} where \begin{align*} \Upsilon_2: = & -[\partial^\alpha, \u]\cdot \nabla({\rm curl\, }(b(-\theta)\u))\nonumber\\ & +\bar\mu\, {\rm div }\big\{[\partial^\alpha, a(\epsilon p)b(\theta)]\nabla({\rm curl\, }(b(-\theta)\u))\big\}. \end{align*} Multiplying \eqref{extd} by $\partial^\alpha ({\rm curl\, }(b(-\theta)\u))$ and \eqref{exte} by $\partial^\alpha ({\rm curl\, } \H)$ respectively, summing up, and integrating over $\mathbb{R}^3$, we deduce that \begin{align}\label{extf} & \frac{1}{2}\frac{d}{dt} \{\|\partial^\alpha ({\rm curl\, }(b(-\theta)\u))\|^2_{L^2} +\|\partial^\alpha ({\rm curl\, } \H)\|^2_{L^2} \} +\bar \nu \|\nabla\partial^\alpha ({\rm curl\, } \H)\|^2_{L^2}\nonumber\\ & + \bar\mu\langle a(\epsilon p)b(\theta)\nabla [ \partial^\alpha ({\rm curl\, }(b(-\theta)\u))],\nabla [ \partial^\alpha ({\rm curl\, }(b(-\theta)\u))] \rangle \nonumber\\ =&-\langle (\u \cdot\nabla)[\partial^\alpha({\rm curl\, }(b(-\theta)\u))], \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\rangle \nonumber\\ & - \langle \partial^\alpha {\rm curl\, }\{a( \epsilon p) ({\rm curl\, } \H)\times \H\}, \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\rangle \nonumber\\ & + \langle \partial^\alpha{\rm curl\, }[{\rm curl\, }(\u\times\H )], \partial^\alpha ({\rm curl\, } \H)\rangle +\langle \partial^\alpha \Upsilon_1 + \Upsilon_2, \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\rangle \nonumber\\ := & \mathcal{J}_1+ \mathcal{J}_2+ \mathcal{J}_3+ \mathcal{J}_4, \end{align} where $\mathcal{J}_i$ ($i=1,\cdots,4$) will be bounded as follows.
An integration by parts leads to \begin{align*} |\mathcal{J}_1|\leq \frac12\|{\rm div }\u\|_{L^\infty}\| \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\|^2_{L^2}. \end{align*} By virtue of \eqref{va}, the Cauchy-Schwarz and Moser-type inequalities (see \cite{KM}), the term $\mathcal{J}_2$ can be bounded as follows. \begin{align*} |\mathcal{J}_2 |\leq & | \langle \partial^\alpha\{a( \epsilon p) ({\rm curl\, } \H)\times \H\}, \partial^\alpha {\rm curl\, } ({\rm curl\, }(b(-\theta)\u))\rangle|\nonumber\\ \leq & \|\partial^\alpha\{a( \epsilon p) ({\rm curl\, } \H)\times \H\}\|_{L^2} \|\partial^\alpha \nabla ({\rm curl\, }(b(-\theta)\u))\|_{L^2}\nonumber\\ \leq& \eta_2\|\partial^\alpha \nabla ({\rm curl\, }(b(-\theta)\u))\|_{L^2}^2\nonumber\\ & + C_{\eta}\{\|{\rm curl\, } \H\|^2_{L^\infty}\|a( \epsilon p)\H\|_{H^{s-1}}^2 +\|a( \epsilon p)\H\|^2_{L^\infty}\|{\rm curl\, } \H\|^2_{H^{s-1}}\}, \end{align*} where $\eta_{2}>0$ is a sufficiently small constant independent of $\epsilon$. If we integrate by parts, make use of \eqref{va} and the fact that ${\rm curl\, } {\rm curl\, } \mathbf{a} =\nabla \,{\rm div } \,\mathbf{a} -\Delta \mathbf{a}$ and ${\rm div } \H=0$, we see that the term $\mathcal{J}_3$ can be rewritten as \begin{align*} \mathcal{J}_3 = \left\langle \partial^\alpha {\rm curl\, }(\u\times \H), \partial^\alpha\Delta \H\right \rangle ,\nonumber \end{align*} which, together with the Moser-type inequality, implies that \begin{align*} | \mathcal{J}_3|\leq C(\mathcal{S})+\eta_{3} \|\H(\tau)\|^2_{H^{s+1}}, \end{align*} where $\eta_{3}>0$ is a sufficiently small constant independent of $\epsilon$. To handle $\mathcal{J}_4$, we note that the leading order terms in $\Upsilon_1$ are of third-order in $\theta$ and of second-order in $\u$, and the leading order terms in $\Upsilon_2$ are of order $s+1$ in $\u$ and of order $s + 1$ in $(\epsilon p, \theta)$. Then it follows that \begin{eqnarray*} |\mathcal{J}_4| &\leq & C_0 ( \|\partial^\alpha \Upsilon_1\|_{L^2} + \|\Upsilon_2\|_{L^2})\| \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\|_{L^2} \\ &\leq& C(\mathcal{S})\| \partial^\alpha ({\rm curl\, }(b(-\theta)\u))\|_{L^2}. \end{eqnarray*} Putting the above estimates into \eqref{extf}, choosing $\eta_2$ and $\eta_3$ sufficiently small, summing over $\alpha$ for $0\leq |\alpha|\leq s-1$, and then integrating the result on $[0,t]$, we conclude \begin{align*} & \sup_{\tau \in [0,t]}\left\{\|{\rm curl\, }(b({-\theta})\u)(\tau)\|^2_{H^{s-1}} +\|{\rm curl\, } \H (\tau)\|^2_{H^{s-1}}\right\}\nonumber\\ & \quad + l_7\int^t_0\left\{\|\nabla{\rm curl\, }(b({-\theta})\u)\|^2_{H^{s-1}} +\|\nabla{\rm curl\, } \H \|^2_{H^{s-1}}\right\}(\tau)d \tau \nonumber\\ & \leq C_0 \big \{\|{\rm curl\, }(b({-\theta})\u)(0)\|^2_{H^{s-1}}+\|{\rm curl\, } \H (0)\|^2_{H^{s-1}}\big\} \nonumber\\ & \quad + C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}\nonumber\\ & \leq C(\mathcal{O}_0)\exp\{(\sqrt{T}+\epsilon)C(\mathcal{O}(T))\}. \end{align*} \end{proof} \begin{proof} [Proof of Proposition \ref{hPb}] By virtue of the definition of the norm $\|\cdot\|_{s,\epsilon}$ and the fact that \begin{align*} \|\v\|_{H^{m+1}}\leq K\big(\|{\rm div } \v\|_{H^{m}}+\|{\rm curl\, } \v\|_{H^{m}}+\|\v\|_{H^{m}} \big),\quad \forall \, \, \v\in H^{m+1}(\mathbb{R}^3), \end{align*} Proposition \ref{hPb} follows directly from Lemmas \ref{HES}, \ref{LLb}, \ref{LLfastb} and \ref{LLT}.
\end{proof} Once Proposition \ref{hPb} is established, the existence part of Theorem \ref{main} can be proved by directly applying the same arguments as in \cite{A06,MS01}, and hence we omit the details here. \section{Decay of the local energy and zero Mach number limit} In this section, we shall prove the convergence part of Theorem \ref{main} by modifying the arguments developed by M\'{e}tivier and Schochet \cite{MS01}; see also some extensions in \cite{A05,A06,LST}. \begin{proof}[Proof of the convergence part of Theorem \ref{main}] The uniform estimate \eqref{conda} implies that \begin{align*} \sup_{\tau\in [0,T_0]} \|(p^\epsilon,\u^\epsilon,\H^\epsilon)(\tau)\|_{H^{s}} + \sup_{\tau\in [0,T_0]} \| \theta^\epsilon-\bar\theta\|_{H^{s+1}}<+\infty. \end{align*} Thus, after extracting a subsequence, one has \begin{align} & (p^\epsilon, \u^\epsilon) \rightharpoonup (\bar p, {\mathbf w} ) & \text{weakly-}\ast \ \text{in} & \quad\quad L^\infty(0,T_0; H^s(\mathbb{R}^3)), \label{heata}\\ & \H^\epsilon \rightharpoonup \t & \text{weakly-}\ast \ \text{in} & \qquad L^\infty(0,T_0; H^s(\mathbb{R}^3)), \label{heataa}\\ & \theta^\epsilon-\bar\theta \rightharpoonup \vartheta-\bar\theta \!\!\!\!\!\! & \text{weakly-}\ast \ \text{in} & \qquad L^\infty(0,T_0; H^{s+1}(\mathbb{R}^3)). \label{heatb} \end{align} It follows from the equations for $\H^\epsilon$ and $\theta^\epsilon$ that \begin{align}\label{heatbb} \partial_t \H^\epsilon,\, \partial_t\theta^\epsilon \in C([0,T_0],H^{s-2}(\mathbb{R}^3)). \end{align} \eqref{heataa}--\eqref{heatbb} imply, after further extracting a subsequence, that for all $s'<s$, \begin{align} & \H^\epsilon \rightarrow \t \!\!\!\!\!\!\!\!\!\!\!\!\! & \text{strongly in} & \quad C([0,T_0],H^{s'}_{\mathrm{loc}}(\mathbb{R}^3)),\label{heatc}\\ & \theta^\epsilon -\bar \theta \rightarrow \vartheta -\bar\theta \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\! & \text{strongly in}& \quad C([0,T_0], H^{s'+1}_{\mathrm{loc}}(\mathbb{R}^3)), \label{heatd} \end{align} where the limit $\t \in C([0,T_0], H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))\cap L^\infty(0,T_0;H^{s}_{\mathrm{loc}}(\mathbb{R}^3))$ and $(\vartheta -\bar \theta) \in C([0,T_0], H^{s'+1}_{\mathrm{loc}}(\mathbb{R}^3))\cap L^\infty(0,T_0;H^{s+1}_{\mathrm{loc}}(\mathbb{R}^3))$. Similarly, from \eqref{exta} we get \begin{align} & {\rm curl\, } \big(e^{ -\theta^\epsilon} \u^\epsilon\big) \rightarrow {\rm curl\, } \big(e^{-\vartheta} {\mathbf w} \big) \quad \text{strongly in} \quad C([0,T_0],H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3)) \label{heate} \end{align} for all $ s'<s$. In order to obtain the limit system, one needs to show that the limits in \eqref{heata} hold in the strong topology of $L^2(0,T_0;H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))$ for all $s'<s$. To this end, we first show that $\bar p=0$ and ${\rm div }(2{\mathbf w} - \bar \kappa e^\vartheta\nabla \vartheta)=0$. In fact, the equations \eqref{hnaaa} and \eqref{hnabb} can be rewritten as \begin{align} & \epsilon \, \partial_t p^\epsilon+ {\rm div }(2\u^\epsilon-\bar \kappa e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon) = \epsilon f^\epsilon,\label{heatf}\\ &\epsilon\, e^{-\theta^\epsilon} \partial_t\u^\epsilon+ \nabla p^\epsilon = \epsilon\, \mathbf{g}^\epsilon. \label{heatg} \end{align} By virtue of \eqref{conda}, $f^\epsilon$ and $\mathbf{g}^\epsilon$ are uniformly bounded in $C([0,T_0],H^{s-1}(\mathbb{R}^3))$.
Passing to the weak limit in \eqref{heatf} and \eqref{heatg}, respectively, we see that $\nabla \bar p=0$ and ${\rm div }(2{\mathbf w} -\bar \kappa e^\vartheta\nabla \vartheta)=0$. Since $\bar p\in L^\infty (0,T_0;H^s(\mathbb{R}^3))$, we infer that $\bar p=0$. Notice that by virtue of \eqref{heate}, the strong compactness for the incompressible component of $e^{ -\theta^\epsilon} \u^\epsilon$ holds. So, it is sufficient to prove the following proposition on the acoustic components in order to get the strong convergence of $\u^\epsilon$. \begin{prop}\label{LC} Suppose that the assumptions in Theorem \ref{main} hold. Then, $p^\epsilon$ converges to $0$ strongly in $L^2(0,T_0; H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))$ and ${\rm div }(2\u^\epsilon-\bar \kappa e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon)$ converges to $0$ strongly in $L^2(0,T_0; H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3))$ for all $s'<s$. \end{prop} The proof of Proposition \ref{LC} is based on the following dispersive estimates on the wave equation obtained by M\'{e}tivier and Schochet \cite{MS01} and reformulated in \cite{A06}. \begin{lem} {\rm (\cite{MS01,A06})}\label{LD} Let $T>0$ and $v^\epsilon$ be a bounded sequence in $C([0,T],H^2(\mathbb{R}^3))$ such that \begin{align*} \epsilon^2\partial_t(a^\epsilon \partial_t v^\epsilon)-\nabla\cdot (b^\epsilon \nabla v^\epsilon)=c^\epsilon, \end{align*} where $c^\epsilon$ converges to $0$ strongly in $L^2(0,T;L^2(\mathbb{R}^3))$. Assume further that for some $s> 3/2+1$, the coefficients $(a^\epsilon,b^\epsilon)$ are uniformly bounded in $C([0,T],H^s(\mathbb{R}^3))$ and converge in $C([0,T],H^s_{\mathrm{loc}}(\mathbb{R}^3))$ to a limit $(a,b)$ satisfying the decay estimates \begin{gather*} |a(x,t)-\hat a|\leq C_0 |x|^{-1-\zeta}, \quad |\nabla_x a(x,t)|\leq C_0 |x|^{-2-\zeta}, \\ |b(x,t)-\hat b|\leq C_0 |x|^{-1-\zeta}, \quad |\nabla_x b(x,t)|\leq C_0 |x|^{-2-\zeta}, \end{gather*} for some positive constants $\hat a$, $\hat b$, $C_0$ and $\zeta$. Then the sequence $v^\epsilon$ converges to $0$ strongly in $L^2(0,T; L^2_{\mathrm{loc}}(\mathbb{R}^3))$. \end{lem} \begin{proof}[Proof of Proposition \ref{LC}] We first show that $p^\epsilon$ converges to $0$ strongly in $L^2(0,T_0; \linebreak H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))$ for all $s'<s$. Applying $\epsilon^2 \partial_t$ to \eqref{hnaaa}, we find that \begin{align} \label{heath} &\epsilon^2\partial_t \{\partial_t p^\epsilon +(\u^\epsilon \cdot\nabla)p^\epsilon\} + {\epsilon}\partial_t\{{\rm div }(2\u^\epsilon-\bar \kappa e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon)\}\nonumber\\ & \quad \quad =\epsilon^3\partial_t\{ e^{-\epsilon p^\epsilon} [\bar\nu |{\rm curl\, }\H^\epsilon|^2 +\Psi(\u^\epsilon):\nabla\u^\epsilon]\} +\epsilon^2 \partial_t\{\bar \kappa e^{-\epsilon p^\epsilon +\theta^\epsilon}\nabla p^\epsilon \cdot \nabla \theta^\epsilon\}.
\end{align} Dividing \eqref{hnabb} by $e^{-\theta^\epsilon}$ and then applying the operator \emph{div} to the resulting equation, one gets \begin{align} \label{heati} \epsilon\partial_t{\rm div }\u^\epsilon+ {\rm div } \big(e^{\theta^\epsilon} {\nabla p^\epsilon}\big) = & -\epsilon {\rm div }\{(\u^\epsilon\cdot\nabla)\u^\epsilon\} \nonumber\\ & + \epsilon {\rm div }\big\{ e^{- \epsilon p^\epsilon+\theta^\epsilon} [({\rm curl\, } \H^\epsilon)\times \H^\epsilon+{\rm div }\Psi^\epsilon(\u^\epsilon)]\big\}. \end{align} Subtracting \eqref{heati} from \eqref{heath}, we have \begin{align}\label{heatj} \epsilon^2\partial_t \Big(\frac12\partial_t p^\epsilon\Big) -{\rm div } \big(e^{\theta^\epsilon} {\nabla p^\epsilon}\big) = \epsilon F^\epsilon(p^\epsilon, \u^\epsilon, \H^\epsilon,\theta^\epsilon), \end{align} where $F^\epsilon(p^\epsilon, \u^\epsilon, \H^\epsilon,\theta^\epsilon)$ is a smooth function in its variables with $F^\epsilon(0)=0$. By the uniform boundedness of $(p^\epsilon, \u^\epsilon, \H^\epsilon,\theta^\epsilon)$ one infers that \begin{align*} \epsilon F^\epsilon(p^\epsilon, \u^\epsilon, \H^\epsilon,\theta^\epsilon) \rightarrow 0 \quad \text{strongly in} \quad L^2(0,T_0; L^2 (\mathbb{R}^3)). \end{align*} By the strong convergence of $\theta^\epsilon$, the initial conditions \eqref{dacay}, and the arguments in Section 8.1 in \cite{A06}, one can easily prove that the coefficients in \eqref{heatj} satisfy the conditions in Lemma \ref{LD}. Therefore, we can apply Lemma \ref{LD} to obtain \begin{align*} p^\epsilon \rightarrow 0 \quad \text{strongly in} \quad L^2(0,T_0; L^2_{\mathrm{loc}}(\mathbb{R}^3)). \end{align*} Since $p^\epsilon$ is bounded uniformly in $C([0,T_0], H^s(\mathbb{R}^3))$, an interpolation argument gives \begin{align*} p^\epsilon \rightarrow 0 \quad \text{strongly in} \quad L^2(0,T_0; H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))\ \ \text{for all} \ \ s'<s. \end{align*} Similarly, we can obtain the strong convergence of ${\rm div }(2\u^\epsilon -\bar \kappa e^{-\epsilon p^\epsilon+\theta^\epsilon}\nabla \theta^\epsilon)$. This completes the proof. \end{proof} We continue our proof of Theorem \ref{main}. It follows from Proposition \ref{LC} and \eqref{heatd} that \begin{align*} {\rm div }\,\u^\epsilon \rightarrow {\rm div }\,{\mathbf w}\quad \text{strongly in} \quad L^2(0,T_0; H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3)). \end{align*} Thus, using \eqref{heate}, one obtains \begin{align*} \u^\epsilon \rightarrow {\mathbf w}\quad \text{strongly in} \quad L^2(0,T_0; H^{s'}_{\mathrm{loc}}(\mathbb{R}^3))\qquad\mbox{for all }s'<s. \end{align*} By \eqref{heatc}, \eqref{heatd}, and Proposition \ref{LC}, we find that \begin{equation*} \begin{array}{lcl} \nabla \u^\epsilon \rightarrow \nabla {\mathbf w} & \text{strongly in}\quad & L^2(0,T_0; H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3));\\ \nabla \H^\epsilon \rightarrow \nabla \t \qquad & \text{strongly in}\quad & L^2(0,T_0; H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3));\\ \nabla \theta^\epsilon \rightarrow \nabla \vartheta & \text{strongly in}\quad & L^2(0,T_0; H^{s'-1}_{\mathrm{loc}}(\mathbb{R}^3)).
\end{array} \end{equation*} Passing to the limits in the equations for $p^\epsilon$, $\H^\epsilon$, and $\theta^\epsilon$, one sees that the limit $(0, {\mathbf w},\t, \vartheta)$ satisfies, in the sense of distributions, that \begin{align} &{\rm div }(2{\mathbf w} -\bar{\kappa}\, e^\vartheta\nabla \vartheta)=0, \label{heatk} \\ &\partial_t\t -{\rm curl\, }({\mathbf w}\times\t)-\bar \nu\Delta\t=0,\quad {\rm div }\t=0, \label{heatl} \\ &\partial_t\vartheta +({\mathbf w}\cdot\nabla)\vartheta +{\rm div } {\mathbf w} =\bar \kappa \, {\rm div } (e^{\vartheta}\nabla \vartheta).\label{heatm} \end{align} On the other hand, applying \emph{curl} to the momentum equation \eqref{hnabb}, using the equations \eqref{hnaaa} and \eqref{hnadd} on $p^\epsilon$ and $\theta^\epsilon$, and then passing to the limit in the resulting equations, we deduce that \begin{align*} {\rm curl\, }\left\{\partial_t \big(e^{-\vartheta} {\mathbf w}\big)+{\rm div }\big({\mathbf w} e^{-\vartheta} \otimes {\mathbf w}\big)-({\rm curl\, } \t)\times \t-{\rm div }\Phi({\mathbf w})\right\}=0 \end{align*} holds in the sense of distributions. Therefore it follows from \eqref{heatk}--\eqref{heatm} that \begin{align} e^{-\vartheta}[\partial_t{\mathbf w}+({\mathbf w}\cdot\nabla){\mathbf w}]+\nabla \pi =({\rm curl\, } \t)\times \t+{\rm div }\Phi({\mathbf w}), \label{heatn} \end{align} for some function $\pi$. Following the same arguments as those in the proof of Theorem 1.5 in \cite{MS01}, we conclude that $({\mathbf w}, \t, \vartheta)$ satisfies the initial condition \begin{align}\label{heato} ({\mathbf w},\t, \vartheta)|_{t=0} =({\mathbf w}_0,\t_0, \vartheta_0) . \end{align} Moreover, the standard iterative method shows that the system \eqref{heatk}--\eqref{heatn} with initial data \eqref{heato} has a unique solution $({\mathbf w}^*, \t^*, \vartheta^*-\bar \theta)\in C([0,T_0],H^s(\mathbb{R}^3))$. Thus, the uniqueness of solutions to the limit system \eqref{heatk}--\eqref{heatn} implies that the above convergence holds for the full sequence of $(p^\epsilon, \u^\epsilon, \H^\epsilon, \theta^\epsilon)$. This completes the proof. \end{proof} \bigskip \noindent {\bf Acknowledgements:} This work was partially done when Li was visiting the Institute of Mathematical Sciences, CUHK. He would like to thank the institute for hospitality. Jiang was supported by the National Basic Research Program under the Grant 2011CB309705 and NSFC (Grant No. 11229101). Ju was supported by NSFC (Grant No. 11171035). Li was supported by NSFC (Grant No. 11271184, 10971094), NCET-11-0227, PAPD, and the Fundamental Research Funds for the Central Universities. Xin was supported in part by the Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research Grants CUHK4042/08P and CUHK4041/11p.
\section{Splining Strings} \label{sec:triespline} \sparagraph{Problem Statement.} Given an immutable, lexicographically sorted array of strings, we want to build an index on top which supports two key operations. First, the index should support equality lookups: if the string is present, return its index, and if not, return NULL. Second, the index must support lower bound queries: given a string, find the index of the smallest string greater than or equal to the provided one. These two operations are important in column stores that use dictionary encoding. For equality queries (i.e.,\@\xspace WHERE str = X) one needs to support equality lookups on the dictionary, and for prefix queries (i.e.,\@\xspace WHERE str LIKE 'A\%') lower bound lookups are required (to find the first string in the dictionary that is lexicographically greater than or equal to 'A'). \sparagraph{Method.} Our method consists of two parts. First, we describe the RadixStringSpline (RSS), a compact and efficient adaptation of RadixSpline to the string domain \cite{radixspline}. This approach provides both fast queries and bounded error. Second, we provide an optional add-on hash corrector which makes use of this bounded error to, at the cost of 12 bits per element, further improve equality lookup performance. \sparagraph{RadixStringSpline.} One useful perspective of the order-preserving indexing problem is that of a compressive mapping. If a million keys use a 64-bit integer type, the overall density of information is quite low relative to the capacity of the type. Indexing transforms these million keys into a contiguous range, effectively compacting the data into the last 20 bits. \begin{figure}[H] \centering \includegraphics[width=.75\linewidth]{rss-diagram.png} \caption{A sample RSS tree structure, for the toy data and settings indicated. The root node corresponds to the entire data (bounds [0,8]) and indexes the first two bytes of the data (K=2). The RadixSpline in the root indexes the only two keys which do not have collisions in the first two bytes (bc and ef). The remaining two-byte prefixes (ab and cd) have collisions and are redirected to child nodes.} \label{fig:rssdiagram} \end{figure} The core difference between string indexing and integer indexing is that integer keys have much higher entropy in the first few bytes than strings do because integer keys must be entirely distinct in this range. By contrast, string keys can (and in most practical applications frequently) share long prefixes, requiring examination of more data to fully distinguish them. There are several (not necessarily exclusive) approaches one might take to ameliorate this. For example, one can actually directly compact the data, as per \cite{hope}, and this certainly does help. In our approach, our goal is to have the model quickly operate on the minimum amount of data to get the job done. An RSS is a tree, in which each node contains (see the sketch below): \begin{itemize} \item The bounds of the range over which it operates. \item A redirector map. This contains the keys for which the current node cannot satisfy the error bound, and pointers to context-aware nodes for each of those keys. \item A RadixSpline model operating on K bytes, with an error-bound of $E$. \end{itemize} We illustrate a sample RSS in Figure \ref{fig:rssdiagram}. The RSS indexes the indicated data in the bottom left from 0 to 8, with each node operating on two-byte chunks and a maximum allowable error of 0.
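To make the node layout concrete, the following C++ sketch shows one possible realization of these contents. It is illustrative rather than our exact implementation: all identifiers are hypothetical, we fix K=8 (a 64-bit chunk) for brevity even though our experiments use K=16, and the spline model is stubbed out. \begin{verbatim}
// Illustrative layout of an RSS node (K = 8, i.e. 64-bit chunks).
// SplineModel stands in for the per-node RadixSpline; Lookup() returns
// an error-bounded half-open range [begin, end) into the string array.
#include <cstddef>
#include <cstdint>
#include <map>
#include <memory>

struct SearchBound { std::size_t begin, end; };

struct SplineModel {
  std::size_t begin = 0, end = 0;  // stub; real code evaluates the spline
  SearchBound Lookup(std::uint64_t /*chunk*/) const { return {begin, end}; }
};

struct RSSNode {
  std::size_t lo, hi;      // bounds of the range this node operates on
  SplineModel spline;      // model over the next K bytes, max error E
  // K-byte prefixes that violate the error bound, each redirected to a
  // context-aware child node (a sorted, binary-searched array in the
  // actual design; an ordered map keeps this sketch short).
  std::map<std::uint64_t, std::unique_ptr<RSSNode>> redirector;
};
\end{verbatim} Parent and child nodes differ only in which byte offset their spline and redirector operate on, which is what lets lookups reuse the same logic at every level.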
(We note that practical RSS's usually have a greater allowable error but these are harder to intuitively illustrate.) All searches begin at the root node (top) and simultaneously traverse the tree and the string until the partial key can no longer be found in the redirector array. At that point, the local RadixSpline at that node is guaranteed to provide a bounded-error prediction. To build an RSS, one begins by building a RadixSpline on the first K bytes of every string in the dataset (this is the root node), including all duplicates. Then, we iterate through the unique K-byte prefixes, and check if the estimated position is within the prescribed error bounds for both the first and last appearance of the prefix. This will always be the case for unique prefixes but might not be for prefixes which have duplicates, and cannot be the case for prefixes which have $>2E$ duplicates -- then, even predicting the median instance can't satisfy the extrema. For each prefix which fails the test, we add it to the redirector table, and build a new RSS over just the range of the problematic prefix and starting at byte K instead of byte 0. This process continues recursively until every key is satisfied. We note that the radix table in the RadixSpline should be adjusted depending on where in the tree one is. Near the root, the radix table should be large; near the leaves we often use just 6 bits to save memory. Practically we have found K=8 or K=16 and E=127 to be good settings; in our experiments we use K=16 (via gcc's builtin \_\_uint128\_t type) since it is more robust to sparser data. For simplicity we keep these parameters fixed; pragmatically there are probably reasonable improvements to be had in both memory and query time by switching to K=8 for smaller, easier-to-model datasets. Note that a fanout spanning 16 bytes is generally far higher than the maximum fanout of ART (1 byte, i.e.,\@\xspace 256 keys, per level) or HOT (32 keys per level). Querying an RSS is simple. One begins by extracting the first K bytes of the string, and conducting a binary search of the redirector to try to find the prefix. If it is found, then one follows the redirect to the new RSS node and the process begins again, only operating on the next K bytes. If it is not found in the redirector, then that means the key is guaranteed to be within acceptable bounds for the RadixSpline, so one queries the RadixSpline at the current node with the appropriate substring and returns the result. As an example, suppose we want to query the string ``cdeg'' from the RSS in Figure 1. We begin by extracting the first two bytes of the string, in this case ``cd'', which are packed into a 16-bit integer. We then search the redirector of the root node for this key. Since we find it in the second slot, we follow the pointer at that slot to the next node of the RSS. Now, we extract the next two bytes, ``eg'' and check the node's redirector. This time, we fail to find it, so we know it is correctly indexed by the local RadixSpline. So, we query the RadixSpline and return the result as our prediction. Finally, we execute a local binary search in the data to either find the string and its index or else to validate its non-existence. Analogously, suppose we wanted to conduct a lower bound query for the string ``defg'', not found in the data. We again search the root node's redirector for ``de'', and, failing to find it, execute the root's RadixSpline on that prefix. 
We again perform a binary search, only this time after failing to find it we return the left bound. One additional benefit of our approach is that if it is appropriately constructed (requiring a synchronization of the predictions of different layers of the tree) it can also be made perfectly monotonic, which may prove useful in future work on accelerating the last-mile search. We believe that one should not undervalue the importance of the model being error-bounded for two reasons. First, in a string setting, even with relatively low errors and an optimized last-mile search, the last-mile search still turns out to be the dominant cost. A bounded error means one only needs to conduct a binary search rather than an exponential search. Second, it enables a memory-efficient hash corrector, to be described below. \sparagraph{Hash Corrector.} Often, while index structures certainly need to support lower bound queries as described above, the optimization of the actual direct lookup of known string keys is equally important. To this end, we provide an auxiliary data structure which can improve performance for this problem at the cost of a small amount of additional memory. Essentially, one stores a contiguous array of signed int8 offsets, with -128 reserved as empty. To build the HC, for each string in the dataset one runs the RSS and computes the difference between the predicted and true values, which is guaranteed to fit in the range -127 to 127. Then, one hashes the string into these slots several times (up to a predetermined number) to find an empty slot. If one is found, we insert the offset at that slot. Compared to traditional Cuckoo hashing, this technique trades false positives (i.e.,\@\xspace the string at the offset does not match the lookup key) for memory efficiency. When one queries a string, one again hashes the string to a few of these slots and tries each offset. If it's a match, the expensive binary search is avoided. Otherwise, the bounded binary search is a reasonable fall-back (this is also required for negative lookups, although we deem them to be less important in practice, e.g.,\@\xspace in the dictionary encoding scenario). To have some benefit from false positive lookups, our implementation uses the keys it finds at these offsets to at least reduce the bounds of the binary search, and these reduced bounds can also help us rapidly reject wrong offsets (which are out of the current bounds). So, each query to the underlying data is guaranteed to provide at least some benefit. In our implementation, we use a 128-bit MurmurHash3 hash \cite{murmurhash3, murmurhash3code}, giving us 4 attempts, and we set the load factor to be 2/3. This then provides a speed boost to $>$95\% of lookup queries at the cost of 12 bits per key. Further tuning this time-space trade-off might lead to additional gains. For example, suppose we want to look up a string S in a database of N strings. We first execute the RSS to get an error-bounded index prediction $p$. We then hash S into 4 positions in range [0, 3N/2) -- $h_1$, $h_2$, $h_3$, $h_4$. For each position $h_i$, if offsets[$h_i$] == -128 or exceeds the bounds, we immediately know it is invalid and skip it. Otherwise, we compare S to the string at $p$+offsets[$h_i$]. If they match, we return; if S is larger, we set the location as a left bound, and if S is smaller we set the location as a right bound. Finally, if we try four times and still have not found it, we execute a binary search between the left and right bounds.
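The following C++ sketch puts the pieces together for equality lookups -- RSS traversal, up to four corrector probes, then the bounded binary-search fallback. It reuses the RSSNode sketch from earlier and is again illustrative: the probe-position helper stands in for splitting one 128-bit MurmurHash3 digest into four slot indices, and the point prediction is assumed to be the centre of the error-bounded range. \begin{verbatim}
// Equality lookup: RSS traversal, hash-corrector probes, bounded search.
#include <cstring>
#include <functional>
#include <string>
#include <vector>

constexpr std::int8_t kEmpty = -128;  // reserved "empty slot" marker

// Bytes [offset, offset + 8) of s as a big-endian integer, zero-padded.
static std::uint64_t Chunk(const std::string& s, std::size_t offset) {
  unsigned char buf[8] = {0};
  if (offset < s.size()) {
    std::size_t n = s.size() - offset;
    std::memcpy(buf, s.data() + offset, n < 8 ? n : 8);
  }
  std::uint64_t v = 0;
  for (int i = 0; i < 8; ++i) v = (v << 8) | buf[i];
  return v;
}

// Descend while the partial key is found in a redirector; once it is
// not, the local spline is guaranteed to satisfy the error bound.
static SearchBound Predict(const RSSNode* node, const std::string& key) {
  for (std::size_t offset = 0;; offset += 8) {
    std::uint64_t chunk = Chunk(key, offset);
    auto it = node->redirector.find(chunk);
    if (it == node->redirector.end()) return node->spline.Lookup(chunk);
    node = it->second.get();
  }
}

// Stand-in for four probe slots taken from one 128-bit hash of the key.
static void ProbePositions(const std::string& key, std::size_t slots,
                           std::size_t out[4]) {
  std::hash<std::string> h;
  for (int i = 0; i < 4; ++i)
    out[i] = h(key + static_cast<char>('0' + i)) % slots;
}

long Lookup(const std::vector<std::string>& data,
            const std::vector<std::int8_t>& offsets,  // corrector, ~3N/2
            const RSSNode* root, const std::string& key) {
  SearchBound b = Predict(root, key);     // error-bounded RSS prediction
  long lo = static_cast<long>(b.begin);
  long hi = static_cast<long>(b.end) - 1;
  long pred = (lo + hi) / 2;              // centre of the bounded range
  std::size_t h[4];
  ProbePositions(key, offsets.size(), h);
  for (int i = 0; i < 4; ++i) {
    std::int8_t off = offsets[h[i]];
    long pos = pred + off;
    if (off == kEmpty || pos < lo || pos > hi) continue;  // cheap reject
    int cmp = key.compare(data[pos]);
    if (cmp == 0) return pos;             // hit: binary search avoided
    if (cmp > 0) lo = pos + 1; else hi = pos - 1;  // tighten the bounds
  }
  while (lo <= hi) {                      // bounded binary-search fallback
    long mid = lo + (hi - lo) / 2;
    int cmp = key.compare(data[mid]);
    if (cmp == 0) return mid;
    if (cmp > 0) lo = mid + 1; else hi = mid - 1;
  }
  return -1;                              // key absent (negative lookup)
}
\end{verbatim} On a hit, the query touches the data array exactly once; on a miss, every probe still shrinks the final search interval, which is the guarantee described above.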
The data structure can and should be entirely ignored for lower bound queries; it does not currently accelerate them. \pagebreak \section{Conclusions} \label{sec:conclusions} In this work, we have introduced a novel learned string index for the purposes of primary indexing or dictionary encoding which is competitive with existing string index structures in speed and superior in size. We evaluated the learned index on datasets that vary in both distribution and size, and found that our method works especially well in conjunction with string compression schemes. \sparagraph{Future Work.} There exist many interesting future directions: First, better compression techniques could further improve the performance of both RSS and other index structures. Second, the internal redirector of RSS could be improved to be more efficient for large datasets with common prefixes (like URL). Third, RSS currently only uses splines as the main model. However, other types of models could provide significant benefits. In fact, one might see the tree and redirector structure of RSS as a new model for generating error-bounded models out of unbounded components.\footnote{RadixSpline is usually error bounded but is not in this scenario due to the possibility of many duplicate partial keys.} Consequently, this general approach might also be applied to numerical keys to achieve greater space-efficiency. Finally, we also believe there is likely considerable further tuning (and auto-tuning) which could be done on RSS to further improve performance. \section{Evaluation} \label{sec:evaluation} We evaluate RSS against ART and HOT as baselines on four datasets: \begin{itemize} \item Wiki: $>$13M unique Wikipedia URL tails. \cite{wiki} \item TwitterSentiment: 1.6M tweets, meant to provide representative natural language. \cite{twittersentiment} \item Examiner: 3M headlines from the Examiner. \cite{examiner} \item URL: Approximately 100M URLs from a 2007 web crawl; approximately 10GB total. \cite{url} \end{itemize} \begin{table}[] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|l|l|l|l} \textbf{Build, ns/item} & Wiki & Twitter & Examiner & URL \\ \hline ART & 131 & 147 & 143 & 197 \\ \hline HOT & 207 & 209 & 217 & 243 \\ \hline RSS & \textbf{42} & \textbf{41} & \textbf{37} & \textbf{94} \\ \hline RSS+HC & 171 & 200 & 184 & 383 \\ \hline \hline \makecell{\textbf{Lookup, ns} \\ \textbf{(LowerBound)}} & Wiki & Twitter & Examiner & URL \\ \hline ART & 785 & 530 & 666 & 1592 \\ \hline HOT & 494 (\textbf{584}) & \textbf{365} (472) & \textbf{394} (\textbf{514}) & \textbf{786} (\textbf{920}) \\ \hline RSS & 629 & 452 & 554 & 1733 \\ \hline RSS+HC & \textbf{477} (629) & 378 (\textbf{452}) & 427 (554) & 1314 (1733) \\ \hline \hline \textbf{Memory, MB} & Wiki & Twitter & Examiner & URL \\ \hline ART & 1219.8 & 223.5 & 374.7 & 15,573.0 \\ \hline HOT & 205.9 & 24.7 & 48.3 & 1,372.3 \\ \hline RSS & \textbf{3.6} & \textbf{1.1} & \textbf{1.6} & \textbf{198.8} \\ \hline RSS+HC & 23.8 & 3.4 & 6.1 & 350.7 \end{tabular}} \caption{Results for ART, HOT, and the RadixStringSpline with and without its add-on Hash Corrector. Note that for the second sub-table, parenthetical values are for lower bound queries if they are appreciably different from lookup queries.} \label{tab:mainresult} \end{table} We summarize the results in Table~\ref{tab:mainresult}. The RSS is generally faster than ART but slower than HOT, while being far smaller (7-70$\times$) than either of them.
With its hash corrector, lookup speed is usually comparable to HOT and the data structure remains smaller, although memory usage increases somewhat. The RSS is also extremely fast to construct compared to existing structures, with a speed boost of 2-3$\times$, although this is at the expense of not supporting inserts. However, the fast construction time emphasizes that RSS is particularly useful for bulk-loading and delta-updates. We take note of RSS's poor performance on the URL dataset; we discuss this below. The performance character of RSS is distinguished from traditional trie-like data structures in that its compound nodes have unlimited fanout; what decides the cost of the model is the depth of the data required, since each inner node requires another local redirector query to find a new node with the additional context. In this regard, the URL dataset is practically an adversarial scenario (as would be a filesystem) -- virtually all strings share a relatively small number of long prefixes, which lowers the discriminatory capacity of the local spline. To validate this understanding, we run the same datasets, only this time encoded by HOPE's two-gram approach in order to concentrate more information at the start of the key. The result is a considerable improvement in performance and usually also memory, as can be seen in Table 2. With more aggressive compression schemes, this would likely improve further.\footnote{For the sake of completeness, we would have liked to benchmark ART and HOT too on these compressed datasets, but we were unable to easily adapt the libraries for this purpose. We believe our errors were due to HOPE outputting null characters which interfere with cstring functions in internal use. We do not believe that this is inherently impossible to fix, though.} \begin{table}[] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|l|l|l|l} \textbf{Build, ns/item} & Wiki & Twitter & Examiner & URL \\ \hline RSS & 44 & 41 & 42 & 67 \\ \hline RSS+HC & 152 & 163 & 163 & 317 \\ \hline \hline \makecell{\textbf{Lookup, ns} \\ \textbf{(LowerBound)}} & Wiki & Twitter & Examiner & URL \\ \hline RSS & 513 & 406 & 482 & 1428 \\ \hline RSS+HC & 375 (513) & 292 (406) & 351 (482) & 1095 (1428) \\ \hline \hline \textbf{Memory, MB} & Wiki & Twitter & Examiner & URL \\ \hline RSS & 2.6 & 1.4 & 1.6 & 123.0 \\ \hline RSS+HC & 22.8 & 3.7 & 6.1 & 278.8 \\ \end{tabular}} \caption{Results for the RadixStringSpline with and without its add-on Hash Corrector, operating on HOPE-compressed datasets.} \end{table} We also wish to note that these comparisons, while accurate to the current state-of-the-art, also take our competitors somewhat out of their element. Both ART and HOT are designed as secondary indexes, storing considerable additional information in leaf nodes which are not used in this primary index / dictionary encoding scenario. Consequently, it is probably possible to streamline both of these data structures for the reduced task at hand. We suspect that doing this would yield a reasonable improvement in memory consumption, but probably not one of the same order as can be provided by a learned model leveraging last-mile search on the underlying data. \section{Introduction} Learned indexing as introduced by \cite{learnedindexes} has brought a new perspective to indexing by reframing it as a cumulative distribution function (CDF) modeling problem. Despite its nascence, the burgeoning field has already yielded many opportunities and efficiencies.
However, most work in this area has focused on efficiently indexing numerical keys. There are several reasons why building learned indexes for strings is usually harder than for numerical keys: \begin{itemize} \item Strings are variable-length, complicating both model inference and memory layout. \item Strings are much larger objects which are expensive to store, compare, and manipulate. \item Strings tend to have even worse distributional properties than integer keys due to the prevalence of common prefixes and substrings, making them exceedingly hard to model accurately and compactly. \end{itemize} We also distinguish between two indexing cases: (1) primary index: the data is sorted according to the key and the index is used to more efficiently find the data inside the sorted data, and (2) secondary index: the data itself is not sorted and the index has to store pairs of keys and tuple identifiers (TIDs) in the leaf nodes. Commonly-used string-key secondary index structures include B-trees and their variants \cite{btree}, ART \cite{art}, HOT \cite{hot}, and the highly space-efficient FST \cite{surf}; learned indexes are absent. Recent learned primary index structures include SIndex~\cite{sindex} (also operating on strings), ALEX~\cite{alex}, and Hist-Tree~\cite{cht}. In our view, the most significant problem with learned indexes for strings is the last-mile search which corrects the learned model's prediction. Last-mile search is usually implemented as an exponential search around the prediction which expands out until a bound is found, followed by a binary search inwards to find the true index. This search is especially expensive in string scenarios for two reasons. First, average model error in these scenarios is often quite high, due to difficulty in modeling real-world datasets. Because many real-world datasets have both long shared prefixes and relatively low discriminative content per byte, the CDF appears to be almost step-wise from afar, which is difficult for traditional learned models to accurately predict or capture. Second, actually conducting the last-mile search is slow. Each comparison is expensive and requires potentially many sequential operations, and the strings' large size decreases the number of keys which fit in cache. Modern column stores typically store strings in fixed-length sections (first few bytes) and variable-length sections (remainder of the string) \cite{umbra}. Increasing the size of the fixed-length section can waste memory, and searching the variable-length section will incur expensive cache misses. One clear improvement, which forms the basis of this research, comes from having bounded error (i.e.,\@\xspace the model outputs a bounded interval in which the sought key will lie, if present). Then, one can replace the exponential search with binary search and skip many string comparisons. We use this to ameliorate the cost of the last-mile search for lower bound lookups (i.e.,\@\xspace finding the key that is equal to or larger than the lookup key), and then, for equality lookups, we bypass it (more later). In this work, we emphasize increasing the lookup performance and decreasing the size for the primary-index scenario. For example, such an index could be used for global dictionary encoding \cite{carstenpaper}. We introduce the RadixStringSpline (RSS), a learned string index consisting of a tree of the learned index structure RadixSpline (RS) \cite{radixspline, sosd-neurips, sosd-vldb}.
RS is a learned index that consists of an error-bounded spline which is in turn indexed in a radix lookup table. Similar to nodes in ART, each RSS node indexes a fixed partial key (e.g.,\@\xspace 8 bytes). But unlike ART, our nodes may have extremely high fan-out due to the incorporation of a learned model at each node. To increase the fan-out of a given byte-prefix, we encode the input data using HOPE \cite{hope}. HOPE eliminates common prefixes and increases information density. Like the original string data, HOPE-encoded data has variable length. For a popular URL dataset, HOPE-encoded data is on average 1.6$\times$ smaller than the original string data. Often, the first 8 bytes of the HOPE encoding are sufficient to distinguish between most of the string keys. RSS only requires a single node to index these keys. However, there are typically also many collisions among the 8-byte prefixes, which requires RSS to recurse on the following 8 bytes. While we do not discuss updates, techniques such as those proposed in ALEX~\cite{alex} are generally applicable to our approach. Compared to ART and HOT, RSS is faster to build, similarly fast to query, and consumes much less memory (7-70$\times$). RSS particularly benefits from a high information density in the most significant bytes. Some data distributions, like the Twitter Sentiment140 dataset, satisfy this well, and correspondingly RSS has high performance. For other data distributions (e.g.,\@\xspace URLs) that require many low-information bytes to distinguish keys, RSS needs many levels and hence may have low performance.
\section{Introduction} In the conventional model of binary stars, there is no consideration of spin and tidal effects (Eggleton 1971,1972,1973; Hofmeister, Kippenhahn \& Weigert, 1964; Kippenhahn et al. 1967; etc.); however, rotation and tide have been regarded as two important physical factors in recent years, so they need to be considered for a better understanding of the evolution of massive close binaries (e.g., Heger, Langer \& Woosley 2000a; Meynet \& Maeder 2000). The structure and evolution of rotating single stars have been studied by many investigators (Kippenhahn \& Thomas \cite{kippenhahn70}; Endal \& Sofia \cite{endal76}; Pinsonneault et al. \cite{pinsonneaul89}; Meynet \& Maeder \cite{meynet97}; Langer 1998, 1999; Huang 2004a). However, it is also very important to study the evolution of rotating binary stars (Jackson 1970; Chan \& Chau 1979; Langer 2003; Huang 2004b; Petrovic et al. 2005a,b; Yoon et al. 2006). The effect of spin on structure equations has been investigated (e.g., the present version of Eggleton's stellar evolution code; Li et al. 2004a,b, 2005; K$\ddot{a}$hler 2002). These studies adopted the lowest-order approximate analysis in which the two components were treated as spherical stars. In fact, with the joint effects of spin and tide, the structure of a star changes from spherically symmetric to non-spherically symmetric. Then, the stellar structure equations become three-dimensional. Theory distinguishes two components in the tide, namely the equilibrium tide (Zahn 1966) and the dynamical tide (Zahn 1975). Then, the dissipation mechanisms acting on those tides, namely the viscous friction for the equilibrium tide and the radiative damping for the dynamical tide, have been identified (Zahn 1966, 1975, 1977). The distortion throughout the outer regions of the two components is not small in short-period binary systems. The higher-order terms in the external gravitational field should not be ignored (Jackson 1970). It is a very complex process to determine the equilibrium structure of the two components. Therefore, approximate methods have been widely adopted for studying these effects. In 1933, the theory of distorted polytropes was introduced by Chandrasekhar. Kopal (1972,1974) developed the concept of Roche equipotential and of Roche coordinates to analyse the problem of rotationally and tidally distorted stars in a binary system. Bur$\check{s}$a (1989a,1988) used the high-order perturbing potential describing rotational and tidal deformations to discuss the figures and dynamic parameters of synchronously orbiting satellites in the solar system. The equilibrium structure of the two components was treated as two non-symmetric rotational ellipsoids with two different semi-major axes $a_{1}$ and $a_{2}$ ($a_{1}>a_{2}$) by Huang (2004b). Importantly, Kippenhahn $\&$ Thomas (1970) introduced a method that simplifies the two-dimensional model with conservative rotation and allows the structure equations for a one-dimensional star to incorporate the hydrostatic effect of rotation. This method has been adopted by Endal \& Sofia (1976) and Meynet $\&$ Maeder (1997), who applied it to the case of a shellular rotation law (Zahn 1992). In this case, the rotation rate takes the simplified form of $\Omega=\Omega (r)$. It was demonstrated by Meynet $\&$ Maeder (1997) that the shape of an isobar in the case of the shellular rotation law is identical to that of an equipotential in the conservative case.
At the semi-detached stage, both mass transfer between the components and a luminosity change of the secondary exist due to the release of accretion energy, which is correlated with the external potential of the two components. When the joint effect of rotation and tide is considered, the potentials of the two components are different from those in non-rotational cases. Therefore, the luminosity due to the release of accretion energy, as well as the irradiation energy, can significantly alter the structure and evolution of the secondary. In a rotating star, meridional circulation and shear turbulence exist, both of which can drive the transport of chemical elements. These effects are significant and have already been studied by many authors (Endal $\&$ Sofia 1978; Pinsonneault et al. \cite{pinsonneaul89}; Chaboyer $\&$ Zahn 1992; Zahn 1992; Meynet $\&$ Maeder 1997; Maeder 1997; Maeder $\&$ Zahn 1998; Maeder $\&$ Meynet 2000; Denissenkov et al. 1999; Talon et al. 1997; Decressin et al. 2009). In this paper, the expression for the amplitude of the radial component of the meridional circulation velocity $U(r)$ includes the effect of the tidal force, which may be important in a massive close binary system. This paper is divided into four main sections. In section 2, the structure equations of rotating binary stars are presented, and the material diffusion equations and boundary conditions are provided. Then, the accretion luminosity, including gravitational energy, heat energy, and radiation energy, is deduced. In section 3, the results of the numerical calculation are described and discussed in detail. In section 4, conclusions are drawn. \section{Model for rotating binary stars} \subsection{Potential of rotating binary stars} It is well known that the rotation of a component is synchronous with the orbital motion of the system owing to the strong tidal effect. Such synchronous rotation also exists inside the component (Giuricin et al. \cite{Giur84}; Van Hamme \& Wilson \cite {van90}); therefore, conventional theories usually assume that the two components rotate synchronously and revolve in circular orbits (Kippenhahn \& Weigert \cite{kippenhahn67}; De Loore \cite{de80}; Huang \& Taam \cite{huang90}; Vanbeveren \cite{vanbeveren}; De Greve \cite {de93}). A coordinate system rotating with the orbital angular velocity of the stars is introduced. The mass centre of the primary is regarded as the origin, the z-axis is taken perpendicular to the orbital plane, and the positive x-axis penetrates the mass centre of the secondary. The gravitational potential at any point $P(r,\theta,\varphi)$ of the surface of the primary can be approximately expressed as \begin{equation} \Psi=V+\frac{1}{3}\Omega^{2}r^{2}(1-P_{2}(\cos\theta))+V_{t}, \end{equation} where $V$ is the gravitational potential, given by Bur$\check{s}$a (1989a,1988), \begin{equation} V=\frac{GM_{1}}{r_{p}}[\frac{r_{p}}{r}+(\frac{r_{p}}{r})^{3}J_{2}^{(0)}P_{2}^{0}(\cos\theta) +(\frac{r_{p}}{r})^{3}J_{2}^{(2)}P_{2}^{2}(\cos\theta)\cos 2\varphi]. \end{equation} Here, $V_{t}$ is the tidal potential (Bur$\check{s}$a 1989a) \begin{equation} V_{t}=\frac{GM_{2}}{D}(\frac{r}{D})^{2}[-\frac{1}{2}P_{2}^{0}(\cos\theta)+\frac{1}{4}P_{2}^{2}(\cos\theta)\cos 2\varphi], \end{equation} where, for convenience of calculation, the mean equatorial radius is assumed to equal that of the equivalent sphere in the above equation.
$M_{1}$ and $M_{2}$ are the masses of the primary and the secondary, respectively, $r_{p}$ represents the equivalent radius of each shell inside the star, $P_{2}^{0}(\cos\theta)$ and $P_{2}^{2}(\cos\theta)$ are the associated Legendre functions ($P_{2}^{0}(\cos\theta)=\frac{3}{2}\cos^{2}\theta-\frac{1}{2}$, $P_{2}^{2}(\cos\theta)=3\sin^{2}\theta $), $D$ is the distance between the two components, and $\Omega $ is the orbital angular velocity of the system. It can be represented by \begin{equation} \Omega ^2=G(M_1+M_2)/D^3. \end{equation} The quantities $J_{2}^{(0)}$ and $J_{2}^{(2)}$ appearing in Eq. (2) are dimensionless Stokes parameters. If $M_{1}$ is negligible compared to $M_{2}$, the Stokes parameters can be expressed as (Bur$\check{s}$a 1989a,1988) \begin{equation} J_{2}^{(0)}=-[\frac{1}{3}k_{s}+\frac{1}{2}k_{t}]q(\frac{r_{p}}{D})^{3}=-J_{2}, \end{equation} \begin{equation} J_{2}^{(2)}=\frac{1}{4}k_{t}q(\frac{r_{p}}{D})^{3}, \end{equation} where $k_{s}$ is the secular Love number, a measure of the yield of the body to centrifugal deformation, and $k_{t}$ is an analogous parameter introduced to describe the secular tidal deformations. The response of the body to its centrifugal acceleration and to the tidal perturbing potential is different in the usual case; therefore, the yield to centrifugal deformation is not equal to the yield to tidal deformation. If the body under investigation is regarded as an ideal elastic body, the two yields are equal, $k_{s}=k_{t}$. In the ideal static equilibrium, $k_{s}=k_{t}=1$ (Bur$\check{s}$a 1989a). We assume ideal static equilibrium in this paper. $q$ is the mass ratio of the secondary to the primary ($q=\frac{M_{2}}{M_{1}}$). Combining Eqs. (2) and (3) with Eq. (1), the potential of the primary is obtained as \begin{eqnarray} \Psi_{P}&=&\frac{GM_{1}}{D}\{\frac{D}{r_{1}}+\frac{1}{2} \frac{D}{r_{1}}\left(\frac{r_{p}}{r_{1}}\right)^{2} [-J_{2}(3\cos^{2}\theta-1)\\ && \nonumber+6J_{2}^{(2)}\sin^{2}\theta\cos2\varphi] +\frac{1}{2}(1+q)(\frac{r_{1}}{D})^{2}\sin^{2}\theta \\&&+\frac{1}{4}q(\frac{r_{1}}{D})^{2}[3\sin^{2}\theta(1+\cos2\varphi)-2]\}. \nonumber \end{eqnarray} The potential of the secondary is deduced by substituting $M_{2}$ for $M_{1}$ and $\frac{1}{q}$ for $q$. The isobar defined by the equation $P=const$ is assumed to be a triaxial ellipsoid with three semi-major axes: $a$, $b$, and $c$. The shortest axis, defined by $c$, is identical to the rotational axis and perpendicular to the orbital plane. The longest axis, defined by $a$, is identical to the $x$-axis. \subsection{Stellar structure equations with spin and tidal effects} The spin of the two components is rigid-body rotation, which is conservative. The definition of the equivalent sphere is adopted in practical calculations; therefore, the triaxial ellipsoid model is simplified to a one-dimensional model. The structure equations are presented as \begin{equation} \frac{\partial r_{P} }{\partial M_{P}}=\frac 1{4\pi r_{P}^2\rho}, \end{equation} \begin{equation} \frac{\partial P}{\partial M_{P} }=-\frac{GM_{P}}{4\pi r_{P} ^4}f_P, \end{equation} \begin{equation} \frac{\partial L_{P}}{\partial M_{P} }=\varepsilon _N-\varepsilon _\nu +\varepsilon _g +\sigma_{ac}, \end{equation} where $\sigma_{ac}$ is the energy source per unit mass caused by mass overflow and irradiation.
Because the accretion luminosity originates from energy sources in the gainer's outermost layer, we have \begin{equation} \sigma_{ac}\Delta m =\Delta L_{acc}, \end{equation} where $\Delta m$ is the photosphere mass of the secondary. The surface temperature of the secondary may be approximated by the formula $L_{2}+\Delta L_{acc}=4\pi R_{2}^{2}\sigma T_{eff}^{4}$, where $L_{2}$ is the luminosity coming to the photosphere from the stellar interior, and $\sigma$ is the Stefan-Boltzmann constant: \begin{equation} \frac{d\ln T}{d\ln P}=\begin{cases}\nabla_{\mathrm{R}}\,f_{\mathrm{T}}/f_{\mathrm{P}} & \mbox{(radiative zones)}\\[1mm] \nabla_{\mathrm{con}} & \mbox{(convective zones)}\end{cases} \end{equation} \begin{equation} f_P=\frac{4\pi r_{p}^4}{GM_{p} S_{p} }\frac 1{<g_{eff}^{-1}>} , \end{equation} \begin{equation} f_T=\frac{4\pi r_{p} ^2}{S_{p}} \frac {1}{<g_{eff}><g_{eff}^{-1}>} , \end{equation} \begin{equation} \nabla _R=\frac{3}{16\pi acG}\frac{\kappa LP}{M_{P} T^4}, \end{equation} where $<g_{eff}>$ and $<g_{eff}^{-1}>$ are the mean values of the effective gravity and of its inverse over the isobar surface, and $\nabla _R$ is the radiative temperature gradient. The factors $f_P$ and $f_T$ depend on the shape of the isobars. \subsection{Calculation of quantities $f_P$ and $f_T$} \subsubsection{Shape and gravitational acceleration of the triaxial ellipsoid} To obtain the factors $f_P$ and $f_T$, the mean values $<g_{eff}>$ and $<g_{eff}^{-1}>$ over the isobar surface have to be calculated; therefore, the shape of the isobars must be specified first. The relations connecting the semi-major axes $a$, $b$, and $c$ to the radius of the equivalent sphere $r_{P}$ can be obtained from Eq. (7) as \begin{eqnarray} \frac{4\pi abc}{3} =\frac{4\pi r_{P}^{3}}{3} , \end{eqnarray} \begin{eqnarray} &\frac{GM_1}D[\frac{D}{a}+\frac 12 \frac{D}{a}(\frac{r_{p}}{a})^{2}(J_{2}+6J_{2}^{(2)})+ \\ & \nonumber\frac{1}{2}(1+q)(\frac{a}{D})^{2}+q(\frac{a}{D})^{2}] =\frac{GM_1}D[ \frac{D}{c}+\frac 12 \frac{D}{c}(\frac{r_{p}}{c})^{2}(-2J_{2})\\ &-\frac{1}{2}q(\frac{c}{D})^{2}], \nonumber \end{eqnarray} \begin{eqnarray} &\frac{GM_1}{D}[\frac{D}{b}+\frac{1}{2} \frac{D}{b}(\frac{r_{p}}{b})^{2}(J_{2}-6J_{2}^{(2)})+\frac{1}{2}(1+q) (\frac{b}{D})^{2}\\&\nonumber-\frac{1}{2}q(\frac{b}{D})^{2}]=\frac{GM_1}D[ \frac{D}{c}+\frac 12 \frac{D}{c}(\frac{r_{p}}{c})^{2}(-2J_{2})\\& \nonumber-\frac{1}{2}q(\frac{c}{D})^{2}]. &\nonumber \end{eqnarray} The left-hand side of Eq. (17) corresponds to $\theta=\frac{\pi}{2}$ and $\varphi=0$, while that of Eq. (18) corresponds to $\theta=\frac{\pi}{2}$ and $\varphi=\frac{\pi}{2}$. The three semi-major axes $a$, $b$, and $c$ of the triaxial ellipsoid can be obtained numerically by solving Eqs. (16), (17), and (18).
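As an illustration of this numerical step, a minimal sketch (ours, not the authors' code; the mass ratio, the value of $r_{p}/D$, and the choice $k_{s}=k_{t}=1$ are illustrative) solves Eqs. (16)--(18) with a standard root finder:
\begin{verbatim}
# Minimal sketch: semi-axes a, b, c of an isobar from Eqs. (16)-(18).
# Units of D = 1; q and r_p are illustrative, k_s = k_t = 1 is assumed.
from scipy.optimize import fsolve

q, rp = 6.0/9.0, 0.3                      # mass ratio M2/M1 and r_p/D
J2  = (1.0/3.0 + 1.0/2.0)*q*rp**3         # Eq. (5): J2 = -J2^(0), k_s = k_t = 1
J22 = 0.25*q*rp**3                        # Eq. (6)

def F_a(a):   # lhs of Eq. (17): theta = pi/2, phi = 0 (factor G*M1/D dropped)
    return 1/a + 0.5*(1/a)*(rp/a)**2*(J2 + 6*J22) + 0.5*(1+q)*a**2 + q*a**2

def F_b(b):   # lhs of Eq. (18): theta = pi/2, phi = pi/2
    return 1/b + 0.5*(1/b)*(rp/b)**2*(J2 - 6*J22) + 0.5*(1+q)*b**2 - 0.5*q*b**2

def F_c(c):   # rhs of Eqs. (17) and (18): theta = 0 (rotation axis)
    return 1/c - (1/c)*(rp/c)**2*J2 - 0.5*q*c**2

def system(v):
    a, b, c = v
    return [a*b*c - rp**3, F_a(a) - F_c(c), F_b(b) - F_c(c)]   # Eqs. (16)-(18)

a, b, c = fsolve(system, [rp, rp, rp])
print(f"a = {a:.4f}, b = {b:.4f}, c = {c:.4f}")   # expect a > b > c
\end{verbatim}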
From Eq. (7), the quantities $g_{r}$, $g_{\theta}$, and $g_{\varphi}$ at the surface of the two components take the forms \begin{eqnarray} &g_{r}=-\frac{\partial \Psi}{\partial r}=\frac{GM_{1}}{D^{2}}\{(\frac{D }{r})^{2}+\frac{3}{2}\frac{D^{2}}{r^{2}}(\frac{r_{p}}{r})^{2}\\ &\nonumber[-J_{2}(3\cos^{2}\theta-1)+6J_{2}^{(2)}\sin^{2}\theta\cos2\varphi] -\frac{r}{D}(1+q)\sin^{2}\theta\\ &\nonumber-\frac{r}{D}q(3\sin^{2}\theta\cos^{2}\varphi-1)\},& \end{eqnarray} \begin{eqnarray} &g_{\theta}=-\frac{1}{r}\frac{\partial \Psi}{\partial \theta}=\frac{GM_{1}}{D^{2}}\{-\frac{D^{2}}{r^{2}} (\frac{r_{p}}{r})^{2}[3J_{2}+6J_{2}^{(2)}\\ &\nonumber(2\cos^{2}\varphi-1)]-(1+q)\frac{r}{D} -3q\frac{r}{D}\cos^{2}\varphi \}\sin\theta \cos\theta &,\nonumber \end{eqnarray} \begin{eqnarray} &g_{\varphi}=-\frac{1}{r\sin \theta}\frac{\partial \Psi}{\partial \varphi}\\&\nonumber=\frac{GM_{1}}{D^{2}}[12(\frac{D}{r})^{2}(\frac{r_{p}}{r})^{2}J_{2}^{(2)} +3q\frac{r}{D}]\sin\theta \cos\varphi \sin\varphi. \end{eqnarray} However, the total potential in the stellar interior (to first-order approximation) is composed of four parts (Kopal 1959, 1960, 1974; Endal \& Sofia 1976; Landin 2009): $\psi_{s}$, the spherically symmetric part of the gravitational potential; $\psi_{r}$, the cylindrically symmetric potential due to rotation; $\psi_{t}$, the non-symmetric potential due to the tidal force; and $\psi_{d}$, the non-symmetric part of the gravitational potential due to the distortion of the component by the rotational and tidal effects. Therefore, the total potential at $P(r,\theta,\varphi)$ is \begin{eqnarray} \Psi&=&\psi_{s}+\psi_{r}+\psi_{t}+\psi_{d}\nonumber\\&&= \frac{GM_{\psi}}{r}+\frac{1}{2}\omega^{2}r^{2}\sin^{2}\theta +\frac{GM_{2}}{D}[1+\sum_{j=2}^{4}(\frac{r_{0}}{D})^{j}P_{j}(\sin\theta\cos\varphi)]\nonumber\\ &&-\frac{4\pi}{3r^{3}}P_{2}(\cos\theta)\int_{0}^{r_{0}}\rho\frac{ r_{0}'^{7}}{M_{\psi}}\Omega^{2}\frac{5+\eta_{2}}{2+\eta_{2}}dr_{0}'\\ &&+4\pi GM_{2}\sum_{j=2}^{4}\frac{P_{j}(\sin\theta\cos\varphi)}{(rD)^{j+1}}\int_{0}^{r_{0}}\rho\frac{ r_{0}'^{2j+3}}{M_{\psi}}\frac{j+3+\eta_{j}}{j+\eta_{j}}dr_{0}'. \nonumber \end{eqnarray} The quantity $\eta_{j}$ can be evaluated by numerically integrating Radau's equation (cf. Kopal 1959) \begin{equation} r_{0}\frac{d\eta_{j}}{dr_{0}}+6\frac{\rho(r_{0})}{\overline{\rho}(r_{0})} (\eta_{j}+1)+\eta_{j}(\eta_{j}-1)=j(j+1), \end{equation} for $j=2,3,4$, with boundary condition $\eta_{j}(0)=j-2$. The quantity $r_{0}$ is the mean radius of the corresponding isobar. The local effective gravity is given by differentiation of the total potential and is written as \begin{equation} g_{i}=[(\frac{\partial \psi}{\partial r})^{2}+(\frac{1}{r}\frac{\partial \psi}{\partial \theta})^{2}+(\frac{1}{r\sin\theta}\frac{\partial \psi}{\partial \varphi})^{2}]^{\frac{1}{2}}, \quad (i=1,2). \end{equation} The integrals in the above equations and their derivatives must be evaluated numerically. The mean values of $g_{effi}$ and $g_{effi}^{-1}$ over the surfaces of the triaxial ellipsoids can be obtained as \begin{equation} <g_{effi}>=\frac 1{S_{P}}\int_0^{\pi}\int_0^{2 \pi}g_{effi} r_{i}^2\sin \theta d\theta d \varphi, \quad (i=1,2) \end{equation} \begin{equation} <g_{effi}^{-1}>=\frac 1{S_{P}}\int_0^{\pi}\int_0^{2 \pi}g_{effi}^{-1} r_{i}^2\sin \theta d\theta d \varphi, \quad (i=1,2). \end{equation} According to Eqs. (13) and (14), the values of $f_P$ and $f_T$ can be obtained when the mean values $<g_{eff}>$ and $<g_{eff}^{-1}>$ are known.
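For instance, the following minimal sketch (ours; the parabolic density profile is a toy model chosen so that $\rho/\overline{\rho}$ has a closed form, not one of the paper's stellar models) integrates Radau's equation outwards:
\begin{verbatim}
# Minimal sketch: integrate Radau's equation (23) for j = 2, 3, 4,
# assuming the toy profile rho(r) = rho_c*(1 - x^2) with x = r/R,
# for which rho/rho_bar = (1 - x^2)/(1 - 3x^2/5) in closed form.
from scipy.integrate import solve_ivp

def rho_over_mean(x):
    return (1.0 - x**2)/(1.0 - 0.6*x**2)

def radau_rhs(x, eta, j):
    # x*deta/dx = j(j+1) - 6*(rho/rho_bar)*(eta+1) - eta*(eta-1)
    return (j*(j + 1) - 6.0*rho_over_mean(x)*(eta + 1.0)
            - eta*(eta - 1.0))/x

for j in (2, 3, 4):
    sol = solve_ivp(radau_rhs, (1e-6, 1.0), [j - 2.0], args=(j,),
                    rtol=1e-10, atol=1e-12)
    print(f"j = {j}: eta_j at the surface = {sol.y[0, -1]:.6f}")
# Check: for a homogeneous star rho/rho_bar = 1 and eta_j = j - 2 exactly,
# which also supplies the regular boundary value eta_j(0) = j - 2.
\end{verbatim}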
$r_{1}$ and $r_{2}$ are the distances between the centres of the components and the surfaces of the two triaxial ellipsoids. They are \begin{equation} r_{i}^{2}=\frac{a_{i}^{2}b_{i}^{2}c_{i}^{2}}{b_{i}^{2} c_{i}^{2} \sin^{2}\theta \cos^{2}\varphi+a_{i}^{2} c_{i}^{2}\sin^{2}\theta \sin^{2}\varphi+a_{i}^{2}b_{i}^{2} \cos^{2}\theta}\;,\quad (i=1,2). \end{equation} The surface area $S_{p}$ of the isobar can be expressed as \begin{equation} S_{p} =\frac{4\pi}{3}(a^{2}+b^{2}+c^{2}) . \end{equation} \subsection{Element diffusion process} Meridional circulation can drive the transport of chemical elements and angular momentum in rotating stars. For components in solid-body rotation, no differential rotation exists that can cause shear turbulence. According to Endal \& Sofia (1978) and Pinsonneault (1989), the transport of chemical composition is treated as a diffusion process. The equation takes the form (Chaboyer \& Zahn 1992) \begin{eqnarray} \left( \frac{\partial y_\alpha }{\partial t}\right) &=&\frac{1}{\rho r^{2}} \frac\partial {\partial r}\left[\rho r^{2} D_{dif}\left( \frac{\partial y_\alpha }{\partial r}\right)\right] +\left( \frac{\partial y_\alpha }{\partial t}\right) _{nuc}, \end{eqnarray} where $(\frac{\partial y_\alpha }{\partial t})_{nuc}$ is a source term from nuclear reactions, and $y_{\alpha}$ is the relative abundance of the $\alpha$-th nuclide. The diffusion coefficient $D_{dif}$ given by Heger, Langer \& Woosley (2000a) can be expressed as \begin{equation} D_{dif}\equiv \min\{d_{inst}, H_{v,ES}\}\,U(r), \end{equation} where $d_{inst}$ and $H_{v,ES}$ denote the extent of the instability and the velocity scale height, respectively. The expression for the amplitude of the radial component of the meridional circulation velocity $U(r)$ (derived from Kippenhahn $\&$ Weigert 1990) has been modified to take into account the effects of radiation pressure and tidal force, which are important in a massive close binary system. It is given by \begin{eqnarray} &U(r)=\frac{8}{3}(\frac{\Omega^{2}r^{3}}{GM_{r}}+\frac{M_{2}r^{3}}{M_{r}D^{3}}) \frac{L_{r}}{g_{r}M_{r}}\frac{\gamma-1}{\gamma}\frac{1} {\nabla_{ad}-\nabla}&\\& \nonumber (1-\frac{\Omega^{2}}{2\pi G\rho}-\frac{\varepsilon}{\varepsilon_{m}})& \end{eqnarray} where the term $\frac{\Omega^{2}r^{3}}{GM_{r}}+\frac{M_{2}r^{3}}{M_{r}D^{3}}$ is the local ratio of the centrifugal plus tidal force to gravity, $\gamma$ is the ratio of the specific heats $\frac{C_{p}}{C_{v}}$, $L_{r}$ represents the luminosity at radius $r$, $M_{r}$ is the mass enclosed within a sphere of radius $r$, $\nabla$ is the actual temperature gradient, $\nabla_{ad}$ is the adiabatic temperature gradient, $\varepsilon_{m}=\frac{L}{m}$ gives the mean energy production rate, and $\varepsilon$ is the local nuclear-energy generation rate. There is no source or sink at the inner and outer boundaries of the two components; therefore, the boundary conditions are \begin{equation} (\frac{\partial y_\alpha}{\partial r})_{i=1}=0=(\frac{\partial y_\alpha}{\partial r})_{i=M}, \end{equation} where the subscript $i$ denotes different layers inside the stars. The initial abundance equals the one at the zero-age main sequence; therefore, the initial condition is \begin{equation} (y_\alpha)_{i}|_{t=0}=(y_\alpha)_{i}|_{init}. \end{equation} \subsection{Accretion luminosity} In the case where the joint effect of rotation and tide is ignored, the two components are spherically symmetric. The star fills its Roche lobe and begins to transfer matter to the companion.
However, when the effects of rotation and tide are considered, the components are triaxial ellipsoids. The condition for mass overflow through the Roche lobe should then be revised to $a_1=r_{lobe}$ (Huang 2004b). It is assumed that the transferred mass is distributed within a thin shell at the surface of the primary before the transfer, and within a thin shell at the surface of the secondary after the transfer. Three forms of energy (potential energy, heat energy, and radiative energy) are transferred to the secondary. The mass transfer rate is $\dot{m}$. Two different cases are considered: a) If the joint effect of rotation and tide is ignored, the accretion luminosity can be expressed directly in terms of the Roche lobe potential at the inner Lagrangian point, $\Psi_{L_{1}}$, and at the surface of the secondary, $\Psi_{s}$ (Han \& Webbink 1999): \begin{eqnarray} &\triangle L_{P}= \dot{m}(\Psi_{L_{1}}-\Psi_{S})=\frac{G\dot{m}M_1}{D} [\frac{1}{X_{L_{1}}/D}+\frac{q}{1-X_{L_{1}}/D}+\frac{1+q}{2}(\frac{X_{L_{1}}}{D}\nonumber\\ &-\frac{q}{1+q})^{2}-\frac{q}{R_{2}/D}-\frac{1}{1-R_{2}/D} -\frac{1+q}{2}(1-\frac{R_{2}}{D}-\frac{q}{1+q})^{2}],& \end{eqnarray} where $X_{L1}$ is the distance between the primary and $L_{1}$, and $R_{2}$ is the radius of the secondary. b) If the joint effect of rotation and tide is considered, the equilibrium structures of the two components are treated as triaxial ellipsoids. The release of potential energy due to accretion at the rate $\dot{m}$ onto the secondary is given by \begin{eqnarray} &\triangle L_{P}= \dot{m}(\Psi_{L_{1}}-\Psi_{S})=\frac{G\dot{m}M_1}{D}[\frac{1} {X_{L_{1}}/D}+\frac{q}{1-X_{L_{1}}/D}+\frac{1+q}{2}(\frac{X_{L_{1}}}{D}\nonumber\\ &-\frac{q}{1+q})^{2}]-\frac{GM_2\dot{m}}{D}[ \frac{ D}{c_{2}}+\frac 12\frac{D}{c_{2}}(\frac{r_{p}}{c_{2}})^{2}(2J_{2}^{(0)})-\frac{1}{2}q (\frac{c_{2}}{D})^{2}], \end{eqnarray} where $\Psi_{s}$ is the potential of the secondary. Similarly, as the two components have different temperatures, the transmitted thermal energy is \begin{equation} \triangle L_{T}=\dot{m}(\frac{3kT_{eff1}}{2\mu_{1}m_{p}}-\frac{3kT_{eff2}}{2\mu_{2}m_{p}}), \end{equation} where $T_{eff1}$ and $T_{eff2}$ represent the effective temperatures of the primary and the secondary, respectively, and $\mu_{1}$ and $\mu_{2}$ are the mean molecular weights of the primary and the secondary, respectively. $m_{p}$ refers to the proton mass. The energy gained by the primary and the secondary owing to irradiation is given by (Huang \& Taam 1990) \begin{equation} \triangle L_{r,1,2}= \frac{1}{2}[1-(D^{2}-R_{1,2}^{2})^{\frac{1}{2}}/D]L_{2,1}, \end{equation} where $R_{1}$ and $R_{2}$ are the radii of the primary and the secondary, and $L_{1}$ and $L_{2}$ are the luminosities of the primary and the secondary, respectively. The total accretion luminosity is \begin{equation} \triangle L_{acc}=\beta \triangle L_{tot}=\beta(\Delta L_{P}+\Delta L_{T}+\Delta L_{r}). \end{equation} Because a part of the total energy may be dissipated dynamically, $\beta$ is assumed to range from $0.1$ to $0.5$ (Huang 1993). A value $\beta=0.3$ is adopted. \section{Results of numerical calculation} The structure and evolution of the binary system were traced with a modified version of the stellar structure program developed by Kippenhahn et al. (1967), updated to include mass and energy transfer processes.
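Returning for a moment to Eq. (34): a minimal sketch (ours; $q$ and $R_{2}/D$ are illustrative numbers, not the model values) locates $X_{L_{1}}$ as the stationary point of the dimensionless potential on the line joining the stars and evaluates the potential drop entering the case-(a) luminosity:
\begin{verbatim}
# Minimal sketch: X_L1 and the bracketed potential difference of Eq. (34).
from scipy.optimize import brentq

def psi(u, q):      # dimensionless potential (units G*M1/D) at x/D = u
    return 1.0/u + q/(1.0 - u) + 0.5*(1.0 + q)*(u - q/(1.0 + q))**2

def dpsi(u, q):     # d(psi)/du, monotonic in (0, 1): unique zero at L1
    return -1.0/u**2 + q/(1.0 - u)**2 + (1.0 + q)*u - q

q, r2 = 6.0/9.0, 0.25                       # M2/M1 and R2/D (illustrative)
uL1 = brentq(dpsi, 1e-3, 1.0 - 1e-3, args=(q,))
drop = psi(uL1, q) - psi(1.0 - r2, q)       # Psi_L1 - Psi_S (secondary surface)
print(f"X_L1/D = {uL1:.4f}; |Delta L_P| = {abs(drop):.4f} * G*mdot*M1/D")
\end{verbatim}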
The calculation method is based on the technique of Kippenhahn and Thomas (1970) and takes advantage of the concept of isobars (Zahn 1992; Meynet and Maeder 1997). Both components of the binary are calculated simultaneously. The initial masses of the components are set at 9$M_{\odot}$ and 6$M_{\odot}$. The initial chemical composition X=0.70 and Z=0.02 is adopted for both components. The initial orbital separation between the two components for all sequences is set to 20.771$R_{\odot}$, so that mass transfer via Roche lobe overflow occurs in case A (during the central hydrogen-burning phase of the primary). Two evolutionary sequences are calculated, with the joint effect of rotation and tide either ignored or considered. The sequence denoted case 1 represents the evolution without the effects of rotation and tide, while the sequence denoted case 2 represents the evolution with these effects included. The calculation of the Roche lobe is taken from the study by Huang \& Taam (1990). Non-conservative evolution is considered in both cases. Because the local flux at colatitude $\theta$ is proportional to the effective gravity $g_{e}$ according to the von Zeipel theorem (Maeder 1999), the mass-loss rate due to stellar winds intensified by tidal, rotational, and irradiative effects is obtained according to Huang \& Taam (1990) (cf. Table 1). The angular velocity of the system and the orbital separation between the two components change due to a number of physical processes as the binary system evolves, including the loss of mass and angular momentum via stellar winds, mass transfer via Roche lobe overflow, the exchange of angular momentum between the rotation of the components and the orbital motion of the system caused by the tidal effect, and changes in the moments of inertia of the components. The changes in the angular velocity of the system and the orbital separation between the two components are calculated according to Huang \& Taam (1990), and the results are listed in Table 1. Other parameters are treated in the same way for the two sequences. \begin{table*}[tbp] \caption{Parameters at different evolutionary points a, b, c, d, e, and f in the sequences of cases 1 and 2.
} \label{table1} \begin{flushleft} \begin{tabular}{ccccccccccccc} \hline\hline Sequence & $Time $ & $P$ & $M _{1} $ & $M _{2} $ & $log L_{1}/L_{\odot}$ & $log T_{1,eff}$ & $log L_{2}/L_{\odot}$ & $log T_{2,eff}$ & $Y_{1}(c)$ & $Y_{1}$ & $V_{rot,1}$ & $V_{rot,2}$\\ & $10^{7}yr$ & $day$ & $M_{\odot}$ & $M_{\odot}$ & & & & & & & $km/sec$ & $km/sec$\\ \hline a & & & & & & & & & & & & \\ Case 1 & 0.0000 & 2.777 & 9.000 & 6.000 & 3.639 & 4.385 & 3.077 & 4.287 & 0.2800 & 0.2800 & & \\ Case 2 & 0.0000 & 2.776 & 9.000 & 6.000 & 3.629 & 4.381 & 3.055 & 4.280 & 0.2800 & 0.2800 & 68.66 & 56.39\\ \hline b & & & & & & & & & & & & \\ Case 1 & 2.6725 & 2.737 & 8.929 & 5.994 & 3.959 & 4.285 & 3.110 & 4.267 & 0.8744 & 0.2800 & & \\ Case 2 & 2.6267 & 2.743 & 8.939 & 5.995 & 3.914 & 4.295 & 3.085 & 4.265 & 0.8247 & 0.2800 & 143.69 & 63.39\\ \hline c & & & & & & & & & & & & \\ Case 1 & 2.6854 & 3.190 & 5.314 & 9.608 & 3.457 & 4.120 & 3.811 & 4.396 & 0.8799 & 0.2801 & & \\ Case 2 & 2.6329 & 3.192 & 5.318 & 9.616 & 3.264 & 4.135 & 3.834 & 4.406 & 0.8267 & 0.2801 & 121.81 & 67.59\\ \hline d & & & & & & & & & & & & \\ Case 1 & 3.0784 & 11.743 & 2.727 & 12.168 & 3.776 & 4.128 & 4.145 & 4.461 & 0.9800 & 0.5892 & & \\ Case 2 & 3.3567 & 7.213 & 3.388 & 11.497 & 3.442 & 4.162 & 4.123 & 4.405 & 0.9800 & 0.3481 & 58.41 & 41.79\\ \hline e & & & & & & & & & & & & \\ Case 1 & 3.0941 & 32.534 & 1.797 & 13.082 & 3.959 & 4.028 & 4.276 & 4.539 & 0.9799 & 0.8775 & & \\ Case 2 & & & & & & & & & & & & \\ \hline f & & & & & & & & & & & & \\ Case 1 & 3.1484 & 42.382 & 1.572 & 13.245 & 3.186 & 4.775 & 4.308 & 4.549 & 0.8856 & 0.8793 & & \\ Case 2 & 3.5470 & 32.531 & 1.714 & 13.037 & 3.280 & 4.686 & 4.308 & 4.419 & 0.9800 & 0.8261 & 0.97 & 10.74 \\ \hline\hline \end{tabular} \end{flushleft} \end{table*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig1.eps} \includegraphics[width=7.5cm,clip]{11144fig2.eps} \includegraphics[width=7.5cm,clip]{11144fig3.eps} \includegraphics[width=7.5cm,clip]{11144fig4.eps} \caption{Surface rotational velocity distribution of the primary varying with time. The four panels (a), (b), (c), and (d) correspond to periods of $2.776$, $2.760$, $2.746$, and $2.628$ days; the corresponding evolutionary times are 0, $2.3386\times 10^{7}$, $2.6194\times 10^{7}$, and $2.6287\times 10^{7}$ yr, respectively. } \label{appfig} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig5.eps} \includegraphics[width=7.5cm,clip]{11144fig6.eps} \includegraphics[width=7.5cm,clip]{11144fig7.eps} \includegraphics[width=7.5cm,clip]{11144fig8.eps} \includegraphics[width=7.5cm,clip]{11144fig9.eps} \includegraphics[width=7.5cm,clip]{11144fig10.eps} \caption{Variation of the relative gravitational accelerations at the surface of the primary as functions of the coordinates $\theta$ and $\varphi$ as mass overflow begins. The quantities $g_{r}$, $g_{\theta}$, and $g_{\varphi}$ are the three components of the gravitational acceleration $g_{tot}$. The quantity $g$ equals the gravitational acceleration of the corresponding equivalent sphere ($g=\frac{GM_{1}}{r_{p}^{2}}$).} \label{appfig} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig11.eps} \includegraphics[width=7.5cm,clip]{11144fig12.eps} \caption{Time variation of the relative accretion luminosity at the semi-detached stage. Panel (a) represents case 1 and panel (b) represents case 2.
The solid, dotted, dashed, and dot-dashed curves correspond to the relative accretion luminosity associated with the total, thermal, potential, and irradiative energies, respectively.} \label{appfig} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig13.eps} \includegraphics[width=7.5cm,clip]{11144fig14.eps} \caption{ Panel (a): variation in the total H-burning energy-generation rate in the two cases. Panel (b): time variation in the total H-burning energy-generation rate in case 2 after the main sequence. The solid curve represents case 2 and the dashed curve represents case 1. } \label{appfig} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig15.eps} \includegraphics[width=7.5cm,clip]{11144fig16.eps} \caption{ Time-dependent variation in the luminosity and equivalent radius of the primary in the two cases. The solid and dotted curves have the same meaning as in Fig. 4. } \label{appfig} \end{figure*} \begin{figure*} \centering \includegraphics[width=7.5cm,clip]{11144fig17.eps} \caption{Time-dependent variation in the surface helium abundance in the two cases. The solid and dotted curves have the same meaning as in Fig. 4.} \label{appfig} \end{figure*} The evolution of the binary system proceeded as follows (cf. Table 1). The evolutionary time, orbital period, masses of the two stars, luminosities and effective temperatures of the two stars, central and surface helium mass fractions of the primary, and mean equatorial rotational velocities of the two stars are listed in Table 1. Points a, b, c, d, e, and f denote the zero-age main sequence, the beginning of the mass transfer stage, the beginning of H-shell burning, the end of central hydrogen burning, the beginning of the central helium-burning stage, and the end of the calculation, respectively. At the beginning of mass exchange, the luminosity and effective temperature of the primary component decrease rapidly. The secondary accretes $6.174M_{\odot}$ for case 1 and $5.502M_{\odot}$ for case 2 during the case A mass transfer. Because of this mass gain, the luminosity and the temperature of the secondary increase. When mass is transferred from the more massive star to the less massive one, the separation between the centres of the two components as well as the orbital period of the system decrease. Some orbital angular momentum is transformed into spin angular momentum of both components, and this process is crucial for modelling the spin-up of the accreting star. With continuing mass overflow, the mass of the primary becomes less than that of the secondary. When mass is transferred from the less massive star to the more massive one, the separation between the centres of the two components as well as the orbital period of the system increase. Some of the spin angular momentum of both components is transformed into orbital angular momentum. This physical process prolongs the subsequent evolution after mass transfer. The equilibrium configuration deviates from spherical symmetry because of the centrifugal and tidal forces, and the deviation mainly lies in the outer layers of the star. In fact, the distorted stellar surface takes the shape of a triaxial ellipsoid. A distorted isobar surface can be expressed as \begin{equation} r=r_{p}[1+f(r)P_{2}(\cos\theta)+g(r)P_{2}^{2}(\cos\theta)\cos 2\varphi], \end{equation} which corresponds to the form of the disturbing potential (Zahn 1992).
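Evaluating Eq. (39) along the three principal directions ($\theta=\pi/2$, $\varphi=0$; $\theta=\pi/2$, $\varphi=\pi/2$; $\theta=0$), with $P_{2}(0)=-\frac{1}{2}$ and $P_{2}^{2}(\cos\theta)=3\sin^{2}\theta$, gives the semi-axes explicitly (a short worked step we add here): \begin{equation} a=r_{p}\Big(1-\frac{f}{2}+3g\Big),\qquad b=r_{p}\Big(1-\frac{f}{2}-3g\Big),\qquad c=r_{p}\big(1+f\big). \end{equation} Since $f(r)<0$ and $g(r)>0$ with the definitions given below, one finds $a>b$ and $c<r_{p}$, and the ratio $\frac{b}{a}=\frac{v_{b}}{v_{a}}$ quoted in the discussion of Fig. 1 follows directly.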
The coefficients $f(r)$ and $g(r)$ can be defined as $f(r)=-\frac{C_1\Omega^{2}}{\pi G\varrho}-\frac{C_2 M_{2}}{\pi\varrho D^{3}}$ and $g(r)=\frac{C_2 M_{2}}{2\pi\varrho D^{3}}$. The quantity $\varrho$ is the mean density of a star with the mass of $M_{1}$. It was found that at the central hydrogen-burning phase, the two parameters $C_1$ and $C_2$ in Eq. (39) take the values $0.703\pm 0.125$ and $0.491\pm 0.102$, respectively. This formula indicates that the shapes of the two components vary with the potentials of the centrifugal force and the tidal force. The radial deformation is inversely proportional to the mean density of the component. In order to describe the distortion, the distribution of the surface rotational velocities of the primary is illustrated in Fig. 1. The four panels (a), (b), (c), and (d) correspond to the evolutionary times of 0, $2.3386\times 10^{7}$, $2.6194\times 10^{7}$, and $2.6287\times 10^{7}$ years and the corresponding periods of $2.776$, $2.760$, $2.746$, and $2.628$ days, respectively. The ratios of the rotational velocities at the ends of the semi-major axes $b$ and $a$ are $\frac{b}{a}=\frac{v_{b}}{v_{a}}=0.9867, 0.9401, 0.8814,$ and $0.8664$ in the four panels, respectively. The results show that the surface deformation is intensified with the evolution and volume expansion of the primary. The distortion throughout the outer region of the primary is considerable. Detailed theoretical models that focus on the outer regions deviate somewhat from the Roche model. The high-order perturbing potential is required for studying the structure and evolution of short-period binary systems. Matthews \& Mathieu (1992) examined 62 spectroscopic binaries with A-type primaries and orbital periods less than 100 days. They concluded that all systems with orbital periods less than or equal to three days have circular or nearly circular orbits. Zahn (1977) and Rieutord \& Zahn (1997) have shown how binary synchronization and circularization result from tidal dissipation. Based on smoothed particle hydrodynamics (SPH) simulations, Renvoiz$\acute{e}$ et al. (2002) have quantified the geometrical distortion effect due to the tidal and rotational forces acting on the polytropic secondaries of semi-detached binaries. They suggest that the tidal and rotational distortion of the secondary may not be negligible, since it may reach observable levels of $\sim 10\%$ in the radius for specific cases of polytropic index and mass ratio. Georgy et al. (2008) display various effects of rotation on the surface of a 20$M_{\odot}$ star at a metallicity of $10^{-5}$ rotating at $\sim 95\%$ of the critical velocity. They point out that the star becomes oblate, with an equatorial-to-polar radius ratio $\frac{R_{eq}}{R_{pol}}\simeq 1.3$. These results agree closely with ours. The variations of the relative gravitational accelerations, the tidal force, and the ratio $f_{cen}/f_{tid}$ over the surface of the primary, as functions of the coordinates $\theta$ and $\varphi$ at the beginning of mass overflow, are shown in Fig. 2. The quantities $g_{r}$, $g_{\theta}$, and $g_{\varphi}$ are the three components of the gravitational acceleration. The six panels (a), (b), (c), (d), (e), and (f) represent the distributions of $g_{r}/g$, $g_{\theta}/g$, $g_{\varphi}/g$, $g_{tot}/g$, $f_{tid}$, and $f_{cen}/f_{tid}$, respectively. The quantity $g$ equals the gravitational acceleration of the corresponding equivalent sphere ($g=\frac{GM_{1}}{r_{p}^{2}}$).
When the joint effect of rotation and tide is considered, the gravitational accelerations are different from those in the conventional model, and the gravitational acceleration generally has three components. Panel (a) shows that the relative quantity $\frac{g_{r}}{g}$ reaches its maximum value of $1.048$ at the two polar points and drops to its minimum value of $0.6987$ on the equatorial plane, because the inward tidal force acts on the primary and causes the polar radius to become shorter, while the tidal and centrifugal forces pull the primary outwards and change the gravitational acceleration greatly on the equatorial plane. On the equatorial plane itself, the maximum value is $0.9486$ and the minimum value is $0.6987$; the lower values occur at the end of the longest axis $a$ and the higher values at the end of axis $b$. The relative quantity $\frac{g_{\theta}}{g}$ reaches its maximum value of $0.20399$ at the points $\theta=\frac{k\pi}{2}+\frac{\pi}{4}; \varphi=k \pi$, $k=0,1$ and vanishes at the two polar points and on the equatorial plane in panel (b). A second maximum of $0.10214$ exists at the points $\theta=\frac{k\pi}{2}+\frac{\pi}{4}; \varphi=k \pi+\frac{\pi}{2}$, $k=0,1$. The relative quantity $\frac{g_{\varphi}}{g}$ reaches its maximum value of $0.1050$ at the points $\theta=\frac{\pi}{2}; \varphi=\frac{k\pi}{2}+\frac{\pi}{4}$, $k=0,1,2,3$ and decreases to zero at the points $\varphi=\frac{k\pi}{2}$, $k=0,1,2,3$ in panel (c). The total gravitational acceleration at the surface of the primary is shown in panel (d). Its distribution is similar to that of $\frac{g_{r}}{g}$ because the radial component dominates. It is noticed that, as expected, the average gravitational acceleration of the rotating model is less than that of the non-rotating model. The tidal force reaches its highest value of $340.05\ cm/s^{2}$ at the points $\theta=\frac{\pi}{2}; \varphi=k\pi$, $k=0,1$ and decreases to its lowest value of $136.24\ cm/s^{2}$ at the two polar points. The quantity $f_{cen}/f_{tid}$ reaches its maximum value of $2.4905$ at the points $\theta=\frac{\pi}{2}; \varphi=k\pi+\frac{\pi}{2}$, $k=0,1$ and a second maximum of $1.2445$ at the points $\theta=\frac{\pi}{2}; \varphi= k\pi$, $k=0,1$. The results show that, on the equatorial plane, the effect produced by tidal distortion is smaller than that produced by rotational distortion. However, as mass transfer changes the mass ratio, the opposite situation can emerge. It is concluded that tidal distortions are related to the mass ratio of the secondary to the primary. These results suggest that rotation and tide have strong influences on the stellar surface: they modify the gravity and change the spherically symmetric shape into a triaxial ellipsoid. Furthermore, the stellar structure equations are substantially modified by the distribution of these relative quantities in the outer region. According to the von Zeipel theorem, the mass loss due to stellar winds should be proportional to the local effective gravity. Polar ejection is intensified by the tidal effect. The higher gravity at the end of axis $b$ makes it hotter. The ejection of an equatorial ring may be favoured by both the opacity effect and the higher temperature at the end of semi-axis $b$. This effect is called the $g_{e}(\theta,\varphi)$-effect in this paper.
It is predicted that the $g_{e}(\theta,\varphi)$-effect is as important as the $g_{e}$-effect suggested by Maeder (1999) and Maeder \& Desjacques (2001). The shapes of planetary nebulae that deviate from spherical symmetry (axisymmetrical ones in particular) are often ascribed to rotation or tidal interaction (Soker 1997). Frankowski and Tylenda (2001) suggest that a mass-losing star can be noticeably distorted by tidal forces, so that the wind exhibits an intrinsic directivity and may be globally intensified. Interestingly enough, the group of B[e] stars shows a two-component stellar wind, with a hot, highly ionized, fast wind at the poles and a slow, dense, disk-like wind at the equator (Zickgraf 1999). Maeder and Desjacques (2001) have noticed that the polar lobes and skirt in $\eta$ Carinae and other LBV stars may naturally result from the $g_{eff}$ and $\kappa$-effects. Langer et al. (1999) have shown that giant LBV outbursts depend on the initial rotation rate. Tout and Eggleton (1988) proposed a formula according to which the tidal torque would enhance the mass-loss rate by a factor of $1+B\times(\frac{R}{R_{RL}})^{6}$, where $B$ is a free parameter (ranging from $5\times10^{2}$ to $10^{4}$). Mass loss and the associated loss of angular momentum are anisotropic in rotating binary stars. The theories describing mass loss and angular momentum loss through stellar winds should therefore be partly revised in future work. The time variation of the relative accretion luminosity at the semi-detached stage is shown in Fig. 3. The two panels (a) and (b) correspond to cases 1 and 2, respectively. The figure shows that the release of transferred thermal energy approaches zero, which indicates that the transferred thermal energy can be ignored in the two cases. The irradiation energy plays an important role in the early stage of mass overflow and attenuates at the subsequent stage, which can be explained by the rapid decrease in the luminosity of the primary and the gradual increase in the luminosity of the secondary during mass transfer. The transferred potential energy can exceed the irradiation energy as the mass transfer rate grows. The total accretion luminosity in case 2 is higher than that in case 1 because the potentials in the two cases are different. From panel (b), it can be seen that the curve of the accretion luminosity fluctuates, indicating that the mass transfer process is unstable. The total H-burning energy-generation rates of the primary in the two cases are shown in Fig. 4. Panel (b) shows the H-burning energy-generation rate in case 2 after the main sequence. From the difference between the curves in panel (a), it can be seen that rotation lowers the total H-burning energy-generation rate. As a result, the main-sequence lifetime becomes longer (cf. Table 1). Moreover, the larger fuel supply and lower initial luminosity of rotating stars help to prolong the time that they spend on the main sequence (Heger \& Langer 2000b). The main-sequence lifetime extension in rotating binary stars can also be illustrated by the results of Suchkov (2001). Their results show that the age-velocity relation (AVR) for F stars in binary systems is different from that for ``truly single'' F stars. The discrepancy between the two AVRs indicates that the putative binaries are, on average, older than similar normal single F stars at the same effective temperature and luminosity.
It is speculated that this peculiarity arises from the impact on stellar evolution of the interaction of the components in a tight pair, which results in a prolonged main-sequence lifetime for the primary F star. Moreover, no central helium-burning stage exists for case 2 (cf. Table 1). From panel (b), it can be seen that the energy-generation rate of the primary oscillates during the H-shell burning stage in case 2. These facts suggest that H-shell burning is unstable in case 2. The reason lies in the centrifugal force reducing the effective gravity at the stellar envelope, so that the luminosity and surface temperature there decrease (Kippenhahn 1977; Langer 1998; Meynet and Maeder 1997). Thus, the shell source becomes cooler, thinner, and more degenerate as the He core mass increases. As the hydrogen shell becomes unstable, the relative thickness $\frac{D}{r_{s}}$ and surface temperature are $\sim0.203$ and $1.1885\times10^{4}K$, respectively. This physical condition leads to thermal instability (Yoon et al. 2004), and the H-shell source experiences slight oscillations. It is well known that the energy-generation rate is proportional to temperature and density ($\varepsilon \propto \rho T^{n}$); therefore, the curve of the H-shell energy-generation rate fluctuates. The time-dependent variations in the luminosity and the equivalent radius of the primary in the two cases are illustrated in Fig. 5. Because the rotating star has a lower energy-generation rate, the luminosity of the primary is lower, which is the consequence of the decreased central temperature in rotating models due to the decreased effective gravity (Meynet and Maeder 1997). The primary therefore expands more slowly in case 2. It is observed that case 1 reaches point b at $t=2.6725\times10^{7}yr$, while case 2 reaches point b at $t=2.6267\times10^{7}yr$. The initiation time of mass transfer for case 2 is thus advanced by $\sim 1.71\%$. Similarly, the numerical calculation by Petrovic et al. (2005b) shows that the radius of the rotating primary increases faster than that of the non-rotating primary due to the influence of centrifugal forces. Their results also show that case A mass transfer starts earlier in a rotating binary system, which is consistent with ours. If the rotating star were still treated as a spherical star, the initiation time of mass overflow would be later than that in the non-rotational case; in fact, because of the distortion by rotation and tide, mass overflow is advanced. Therefore, it is very important to investigate distortion in close binary systems. The time-dependent variation in the helium composition at the surface of the primary is illustrated in Fig. 6. The H-shell burning begins at $t=2.6854\times10^{7}yr$ in case 1 and at $t=2.6329\times10^{7}yr$ in case 2 (cf. Table 1). Therefore, the initiation time of H-shell burning is advanced by $5.25\times10^{5}yr$. Moreover, the helium composition at the surface of the primary is $0.280051$ at point c, suggesting that the diffusion process progresses slowly in a rotating star. Cantiello et al. (2007) also indicate that rotationally induced mixing before the onset of mass transfer is negligible, in contrast to typical single $O$ stars; hence, the alteration of surface compositions depends on both the initial mass and the rotation rate. The sample of OB-type binaries with orbital periods ranging from one to five days of Hilditch et al. (2005) shows N abundances enhanced by up to $0.4$ dex. Langer et al.
(2008) have found that for the same binary system, but with an initial period of six days instead of three days, the mass gainer is accelerated to a rotational velocity of nearly $500\ km\ s^{-1}$, which raises the nitrogen enrichment from more than a factor of two to about 1 dex in total. Because there is no central helium-burning phase for case 2, the diffusion process can be neglected in the interior region of the primary after the main sequence. \section{Conclusions} The main achievements of this study may be summarised as follows. (a) The distortion throughout the outer layer of the primary is considerable. Detailed theoretical models that investigate the outer regions of the two components deviate somewhat from the lowest approximation of the Roche model. The high-order perturbing potential is required especially in the investigation of the evolution of short-period binary systems. (b) The equilibrium structures of distorted stars are actually triaxial ellipsoids. A formula describing rotationally and tidally distorted stars is presented. The shape of the ellipsoid is related to the mean density of the component and to the potentials of the centrifugal and tidal forces. (c) The radial components of the centrifugal force and the tidal force cause the variation in gravitation. The tangential components of the centrifugal force and the tidal force cannot be balanced and, instead, change the shapes of the components from perfect spheres to triaxial ellipsoids. Mass loss and the associated angular momentum loss are anisotropic in rotating binary stars. Ejection is intensified by the tidal effect. The ejection of an equatorial ring may be favoured by both the opacity effect and the higher temperature at the end of semi-axis $b$. This effect is called the $g_{e}(\theta,\varphi)$-effect in this paper. (d) The rotating star has an unstable H-burning shell after the main sequence. The components expand slowly due to their lower luminosity. If the components are still treated as spherical stars, some important physical processes would be overlooked. \begin{acknowledgements} We are grateful to Professor Norbert Langer and Dr. St$\acute{e}$phane Mathis for their valuable suggestions and insightful remarks, which have improved this paper greatly. We also thank Professor Norbert Langer for his kind help in improving our English. \end{acknowledgements}
2,869,038,156,003
arxiv
\section{Introduction} $O(N)$ spinning particles \cite{Gershun:1979fb,Howe:1988ft,Kuzenko:1995mg} have been useful to describe higher spin fields in first quantization \cite{Siegel:1999ew,Bastianelli:2007pv}. Similarly, $U(N)$ spinning particles \cite{Marcus:1994em, Marcus:1994mm} have been instrumental in discovering a new class of higher spin field equations which possess a novel type of gauge invariance \cite{Bastianelli:2009vj}. To investigate the quantum properties of these equations in their worldline formulation, it is important to study the related quantum mechanics. It is the purpose of this paper to discuss these quantum mechanics, which in the most general case take the form of nonlinear sigma models. First we shall discuss linear sigma models, i.e. models with flat complex space $\mathbb{C}^d$ as target space. These sigma models exhibit a $U(N)$ extended supersymmetry on the worldline. They define ``spinning particle'' models once the extended supersymmetry is made local. It is useful, and almost effortless, to extend these models by adding extra bosonic coordinates. This extension produces $U(N|M)$ sigma models, by which we mean sigma models with a worldline extended supersymmetry characterized by supercharges transforming in the fundamental representation of $U(N|M)$ (i.e. $U(N|M)$ is the $R$-symmetry group of the supersymmetry algebra). This extension may be useful for constructing wider classes of spinning particles, as happened in the case of the $OSp(N|2M)$ extension \cite{Hallowell:2007qk} of the standard $O(N)$ supersymmetric quantum mechanics, used for example in \cite{Campoleoni:2008jq,Alkalaev:2008gi,Bastianelli:2009eh,Cherney:2009vg} to describe higher spin fields. We present these quantum mechanical models and their symmetry algebra in section 2. In section 3 we consider sigma models with generic K\"ahler manifolds as target spaces. The symmetry algebra gets modified by the geometry, so that it will not always be possible to gauge the extended supersymmetry to obtain spinning particles and corresponding higher spin equations. This signals the difficulties of coupling higher spin fields to generic backgrounds, not to mention the even more difficult problem of constructing nonlinear field equations. However, on special backgrounds one can find a deformed $U(N|M)$ susy algebra that becomes first class, so that it can be gauged to produce consistent spinning particles. An example is the case of K\"ahler manifolds with constant holomorphic sectional curvature. No restrictions apply to the special cases of $U(1|0)$ and $U(2|0)$, whose susy algebra can be gauged to produce nontrivial field equations on any K\"ahler space, in analogy with standard $N=1$ and $N=2$ susy quantum mechanics on arbitrary riemannian manifolds (i.e. $O(1)$ and $O(2)$ quantum mechanics in the language used above). Nevertheless, before gauging, the $U(N|M)$ quantum mechanics constructed here are perfectly consistent on any K\"ahler manifold, and even possess conserved supercharges when the Riemann tensor obeys a locally symmetric space condition (again in close analogy with the riemannian case \cite{Hallowell:2007qk}). Thus, in section 4 we work with an arbitrary K\"ahler manifold and compute the quantum mechanical transition amplitude in euclidean time (i.e. the heat kernel) in the limit of short propagation time, using operatorial methods.
This last result is going to be particularly useful for obtaining an unambiguous construction of the corresponding path integral, which is needed when considering worldline applications. This is indeed one of our future aims, namely using worldline descriptions of higher spin fields to obtain useful and computable representations of their one-loop effective actions, as done in \cite{Ba:2005vk} for the $O(2)$ spinning particle. In that case a worldline representation allowed one to compute at a single stroke the first few heat kernel coefficients and to prove various duality relations for massless and massive $p$-forms in arbitrary dimensions. Finally, we present our conclusions and outlook in section 5, and confine the details of our calculations to the appendices. \section{Linear $U(N|M)$ sigma model} We introduce here the $U(N|M)$ extended supersymmetric quantum mechanics. In the simplest case it describes the motion of a particle in $\mathbb{C}^d$, the flat complex space of $d$ complex dimensions with coordinates ($x^\mu$, $\bar x^{\bar\mu}$), $\mu=1,...,d$. The flat metric in these complex coordinates is simply $\delta_{\mu\bar\nu}$, and we use it to raise and lower indices. In addition, the particle carries extra degrees of freedom described by worldline Dirac fermions ($\psi^\mu_a$, $\bar\psi^a_\mu$) and complex bosons ($z^\mu_\alpha$, $\bar z^\alpha_\mu$), where $a=1,...,N$ and $\alpha=1,...,M$ are indices in the $U(N)$ and $U(M)$ subgroups of $U(N|M)$, respectively. These extra degrees of freedom can be interpreted as worldline superpartners of the coordinates ($x^\mu$, $\bar x^{\bar\mu}$). Of course, when the superpartners have bosonic character one finds a kind of ``bosonic'' supersymmetry, which generalizes the usual concept. With these degrees of freedom at hand the phase space lagrangian defining our model has the standard form ${\cal L}\sim p\dot q - H$, namely \begin{equation}\label{U(N|M) flat lagrangian} \mathcal{L}=p_\mu\dot x^\mu+\bar p_{\bar\mu}\dot{\bar x}^{\bar\mu}+i\bar\psi_\mu^a\dot\psi^\mu_a+i\bar z_\mu^\alpha\dot z^\mu_\alpha-p_\mu\bar p^\mu\;. \end{equation} This model enjoys a $U(N|M)$ extended supersymmetry, which we are going to describe directly in the quantum case. The fundamental (anti)-commutators are easily read off from \eqref{U(N|M) flat lagrangian} \begin{equation}\label{CCR} \begin{split} & [x^\mu, p_\nu]=i\hbar\delta^\mu_\nu\;,\qquad\quad\; [\bar x^{\bar\mu}, \bar p_{\bar\nu}]=i\hbar\delta^{\bar\mu}_{\bar\nu}\\[1.2mm] & \{\psi_a^\mu, \bar\psi^b_\nu\}=\hbar\delta_a^b\delta^\mu_\nu\;, \qquad [z_\alpha^\mu, \bar z^\beta_\nu]=\hbar\delta_\alpha^\beta\delta^\mu_\nu \; . \end{split} \end{equation} The $U(N|M)$ charges are readily constructed from the worldline operators \begin{equation} \begin{split} J^a_b &= \frac12[\bar\psi^a_\mu, \psi^\mu_b] - c\hbar \delta^a_b =\bar\psi^a_\mu \psi^\mu_b -m\hbar\delta^a_b \qquad\text{$U(N)$ subgroup},\\ J^\alpha_\beta &= \frac12\{\bar z^\alpha_\mu, z_\beta^\mu\} + c\hbar \delta^\alpha_\beta = \bar z^\alpha_\mu z_\beta^\mu + m \hbar\delta^\alpha_\beta \qquad\text{$U(M)$ subgroup},\\ J_b^\alpha &= \bar z^\alpha_\mu\psi_b^\mu\;,\qquad \ \; J^a_\beta=\bar\psi^a_\mu z_\beta^\mu\qquad\qquad\qquad\ \text{$U(N|M)$ fermionic generators,} \end{split} \end{equation} where $m= c+\frac{d}{2}$.
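As a quick consistency check of these definitions, the following minimal numerical sketch (ours; it assumes $\hbar=1$, $d=1$, $N=2$, $M=0$ and an arbitrary value of $c$) realizes the fermionic oscillators by Jordan-Wigner matrices and verifies the $U(2)$ commutation relations displayed in the next equation:
\begin{verbatim}
# Minimal sketch: verify [J^a_b, J^c_d] = d^c_b J^a_d - d^a_d J^c_b
# for d = 1, N = 2, M = 0, hbar = 1 (Jordan-Wigner realization).
import numpy as np

I2 = np.eye(2)
sm = np.array([[0., 1.], [0., 0.]])          # sigma^- (annihilator)
sz = np.array([[1., 0.], [0., -1.]])
psi = [np.kron(sm, I2), np.kron(sz, sm)]     # psi_1, psi_2 on the Fock space
psibar = [p.conj().T for p in psi]

c = 0.3                                      # arbitrary ordering constant
m = c + 0.5                                  # m = c + d/2 with d = 1
J = [[psibar[a] @ psi[b] - m*np.eye(4)*(a == b) for b in range(2)]
     for a in range(2)]

def comm(A, B): return A @ B - B @ A

ok = all(np.allclose(comm(J[a][b], J[e][f]),
                     (b == e)*J[a][f] - (a == f)*J[e][b])
         for a in range(2) for b in range(2)
         for e in range(2) for f in range(2))
print("U(2) relations satisfied:", ok)
\end{verbatim}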
These charges obey the $U(N|M)$ algebra \begin{equation}\label{U(N|M) internal superalgebra flat} \begin{split} [J^a_b, J^c_d] &= \hbar\,(\delta^c_bJ^a_d-\delta^a_dJ^c_b)\\ [J^\alpha_\beta, J^\gamma_\delta] &= \hbar\,(\delta^\gamma_\beta J^\alpha_\delta-\delta^\alpha_\delta J^\gamma_\beta)\\ [J^a_b, J^\alpha_c] &= -\hbar\,\delta^a_c J^\alpha_b\;,\quad [J^a_b, J^c_\alpha]=\hbar\,\delta^c_b J^a_\alpha\\ [J^\alpha_\beta, J^\gamma_a] &= \hbar\,\delta^\gamma_\beta J^\alpha_a\;,\quad [J^\alpha_\beta, J^a_\gamma]=-\hbar\,\delta^\alpha_\gamma J^a_\beta\\ \{J_a^\alpha, J_\beta^b\} &=\hbar\,(\delta_a^bJ^\alpha_\beta+\delta^\alpha_\beta J_a^b)\;. \end{split} \end{equation} In the definition of these charges we have used a ``graded symmetric'' ordering prescription, modified by an arbitrary central charge $c$ that parametrizes the different orderings allowed by the symmetry algebra. The possibility of inserting the central charge is related to the algebraic fact that $U(N|M) = U(1)\times SU(N|M)$. All these charges commute with the hamiltonian $H=p_\mu\bar p^\mu$ and are conserved. Other conserved quantities are the supersymmetric charges involving the space momenta: there are $2N$ fermionic supercharges $Q_a=\psi_a^\mu\, p_\mu$, $\bar Q^a=\bar\psi^a_\mu\, \bar p^\mu$, and $2M$ bosonic charges $Q_\alpha=z_\alpha^\mu\, p_\mu$, $\bar Q^\alpha=\bar z^\alpha_\mu\, \bar p^\mu$. All these operators form the $U(N|M)$ extended superalgebra that, together with the $U(N|M)$ internal algebra \eqref{U(N|M) internal superalgebra flat}, is given by the following relations \begin{equation} \begin{split} [J^a_b, Q_c] &= -\hbar\,\delta^a_c\,Q_b\;,\quad\; [J^a_b, \bar Q^c] = \hbar\,\delta^c_b\,\bar Q^a \\ [J^\alpha_\beta,Q_\gamma] &= -\hbar\,\delta^\alpha_\gamma\,Q_\beta\;,\quad [J^\alpha_\beta, \bar Q^\gamma] =\hbar\,\delta^\gamma_\beta\,\bar Q^\alpha \\ [J^\alpha_a, Q_\beta] &=-\hbar\,\delta^\alpha_\beta\,Q_a\;,\quad [J_\alpha^b, \bar Q^\beta] =\hbar\,\delta_\alpha^\beta\,\bar Q^b\\ \{J^a_\alpha, Q_b\} &= \hbar\,\delta^a_b\,Q_\alpha\;,\quad\ \ \{J^\alpha_b, \bar Q^c\}=\hbar\,\delta^c_b\,\bar Q^\alpha\\[2mm] \{Q_a, \bar Q^b\}&=\hbar\,\delta_a^b\,H\;,\qquad[Q_\alpha, \bar Q^\beta]=\hbar\,\delta_\alpha^\beta\,H\;. \end{split} \end{equation} The (anti)-commutators needed to close the algebra and not explicitly reported vanish. All these relations can be written in a more covariant way. In order to exhibit the full supergroup structure, let us introduce the superindex $A=(a,\alpha)$ and the $U(N|M)$ metrics \begin{equation} \delta^A_B=\left(\begin{array}{cc}\delta^a_b & 0\\ 0 &\delta^\alpha_\beta \end{array}\right)\;, \qquad \epsilon^A_B=\left(\begin{array}{cc}-\delta^a_b & 0\\ 0 &\delta^\alpha_\beta \end{array}\right)\;. \end{equation} The internal fermions and bosons are grouped into the fundamental and anti-fundamental representations of the supergroup, $Z_A^\mu=(\psi_a^\mu, z_\alpha^\mu)$, $\bar Z^A_\mu=(\bar\psi^a_\mu, \bar z^\alpha_\mu)$. The fundamental (anti)-commutation relations can be written as $[Z^\mu_A, \bar Z_\nu^B\}=\hbar\,\delta_A^B\, \delta^\mu_\nu $, or equivalently as $[\bar Z_\nu^B, Z^\mu_A\}=-\hbar\,\epsilon_A^B\, \delta^\mu_\nu $. Here the graded commutator is used: $[A,B\}$ is defined as an anti-commutator when $A$ and $B$ are both fermionic, and as a commutator otherwise. Then we collect all the $U(N|M)$ generators in \begin{equation} J^A_B=\left(\begin{array}{cc}J^a_b & J^a_\beta \\ J^\alpha_b & J^\alpha_\beta \end{array}\right)= \bar Z^A_\mu Z_B^\mu + m\hbar\, \epsilon^A_B \;.
\end{equation} With these notations at hand, the entire superalgebra \eqref{U(N|M) internal superalgebra flat} is packaged into the single relation \begin{equation}\label{U(N|M) supergroup notation} [J^A_B, J^C_D\}=\hbar\,(\delta^C_B\, J^A_D\pm\delta^A_D\, J^C_B)\;, \end{equation} where the plus sign refers to the case with $J^A_B$ and $J^C_D$ both fermionic, and the minus sign to the other possibilities. By means of this supergroup notation, the supercharges are written as $Q_A=(Q_a, Q_\alpha)$ and $\bar Q^A=(\bar Q^a, \bar Q^\alpha)$, and the above superalgebra is summarized by \begin{equation}\label{U(N|M) complete superalgebra flat} \begin{split} [J^A_B, Q_C\} &= \pm\hbar\,\delta^A_C\,Q_B\; , \qquad [J^A_B, \bar Q^C\} = \hbar\,\delta^C_B\,\bar Q^A\\ [Q_A, \bar Q^B\} &= \hbar\,\delta_A^B\,H\;, \end{split} \end{equation} where the sign is plus when $J^A_B$ and $Q_C$ are both fermionic, and minus otherwise. All these quantum mechanical operators have simple geometrical meanings in terms of differential operators living on $\mathbb{C}^d$. Let us give a brief description. Generic wave functions of the Hilbert space can be represented by functions of the coordinates $(x,\bar x, \psi, z)$. Expanding them in $\psi^\mu$ and $z^\mu$ shows how they contain all possible tensors with $N+M$ blocks of holomorphic indices. Each of the first $N$ blocks of indices is totally antisymmetric, while each of the last $M$ blocks of indices is totally symmetric. In formulae \begin{equation}\label{general expansion} \begin{split} \phi(x,\bar x, \psi, z) \sim & \sum_{A_i=0}^d \sum_{B_i=0}^\infty \phi_{[\mu_1^1..\,\mu_{A_1}^1],\ldots,\,[\mu_1^N ..\,\mu_{A_N}^N], (\nu_1^1..\,\nu_{B_1}^1),\ldots,\,(\nu_1^M ..\,\nu_{B_M}^M)} (x,\bar{x})\\ & \times \Big (\psi_1^{\mu_1^1}..\,\psi_1^{\mu_{A_1}^1}\Big ) \ldots \Big (\psi_N^{\mu_1^N}..\,\psi_N^{\mu_{A_N}^N}\Big ) \Big (z_1^{\nu_1^1}..\,z_1^{\nu_{B_1}^1}\Big ) \ldots \Big (z_M^{\nu_1^M}..\, z_M^{\nu_{B_M}^M}\Big ) \; . \end{split} \end{equation} The quantum mechanical operators take the form of differential operators acting on these tensors. The hamiltonian is proportional to the standard laplacian $H\sim \de_\mu\bar\de^{\mu} = \delta^{\mu\bar{\nu}}\,\de_\mu\bar\de_{\bar{\nu}}$. The supercharge $Q_a$ acts as the Dolbeault operator $\de$ restricted to the antisymmetric indices of block ``$a$", and $\bar Q^a$ as its adjoint $\de^\dagger$. Similarly the ``bosonic'' supercharge $Q_\alpha$ is realized as a symmetrized gradient acting on the symmetric indices of block ``$\alpha$", and $\bar Q^\alpha$ is its adjoint, taking the form of a divergence. The action of the $U(N|M)$ operators, i.e. the $J^A_B$ charges, is also amusing: they perform certain (anti)-symmetrizations on the tensor indices, and we leave it to the interested reader to work them out explicitly. The algebra of these differential/algebraic operators, as encoded in the susy algebra, is only valid in flat space. In the next section we will see how this algebra extends to generic K\"ahler manifolds. \section{Nonlinear $U(N|M)$ sigma model} We now extend the previous construction to nonlinear sigma models with generic K\"{a}hler manifolds as target spaces. On K\"{a}hler manifolds, in holomorphic coordinates, the only non vanishing components of the metric are $g_{\mu\bar\nu}=g_{\bar\nu\mu}$, and similarly $\Gamma^\mu_{\nu\lambda}$ and $\Gamma^{\bar\mu}_{\bar\nu\bar\lambda}$ are the only non vanishing components of the connection.
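As a reminder (a standard fact of K\"ahler geometry that we recall here for completeness), these data descend from a K\"ahler potential $K$ via \begin{equation} g_{\mu\bar\nu}=\partial_\mu\partial_{\bar\nu}K\;,\qquad \Gamma^\mu_{\nu\lambda}=g^{\mu\bar\sigma}\,\partial_\nu g_{\lambda\bar\sigma}\;,\qquad \Gamma^{\bar\mu}_{\bar\nu\bar\lambda}=g^{\bar\mu\sigma}\,\partial_{\bar\nu} g_{\bar\lambda\sigma}\;. \end{equation}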
We use the following conventions for curvatures \begin{equation} R^\mu_{\ph\mu\nu\bar\sigma\lambda}=\de_{\bar\sigma}\Gamma^\mu_{\nu\lambda}\;,\quad R^\mu_\nu=-g^{\bar\sigma\lambda}\,R^\mu_{\ph\mu\nu\bar\sigma\lambda}\;,\quad R=R^\mu_\mu \;, \end{equation} and denote by $g=\det(g_{\mu\bar\nu})$ the determinant of the metric, as standard in K\"ahler geometry. The classical phase space lagrangian with a minimally covariantized hamiltonian becomes \begin{equation}\label{U(N|M) curved phase space lagrangian} \mathcal{L}=p_\mu\dot x^\mu+\bar p_{\bar\mu}\dot{\bar x}^{\bar\mu} +i\bar Z_\mu^A \dot Z^\mu_A -g^{\mu\bar \nu} (p_\mu -i\Gamma^\lambda_{\mu\sigma}{\bar Z^A_\lambda} Z^\sigma_A ) \bar p_{\bar \nu} \end{equation} though, for future applications, it will be useful to consider more general hamiltonians. The corresponding configuration space lagrangian is the typical one for nonlinear sigma models \begin{equation}\label{U(N|M) curved conf space lagrangian} \mathcal{L}=g_{\mu\bar \nu} \dot x^\mu \dot{\bar x}^{\bar\nu} +i\bar Z_\mu^A \frac{D Z^\mu_A}{dt} \end{equation} where the covariant time derivative is given by $ \frac{DZ^\mu_A}{d t}=\dot Z^\mu_A+\dot x^\nu\,\Gamma^\mu_{\nu\sigma} \,Z^\sigma_A $. In the quantum case, it will be crucial to resolve ordering ambiguities by demanding target space covariance. Before discussing the quantum operators, let us make a few comments. We treat the $\bar Z_\mu^A$ fields as momenta; as such, they carry a natural lower holomorphic curved index. In this situation there is no real advantage in introducing a vielbein, so we will not introduce one. Also, the holonomy group of a K\"{a}hler manifold of complex dimension $d$ is $U(d)$, and it will be convenient to define the $U(d)$ generators \begin{equation} M_\mu^\nu=\frac12[\bar\psi^a_\mu, \psi_a^\nu]+\frac12\{\bar z^\alpha_\mu, z^\nu_\alpha\} -k\hbar\delta^\nu_\mu \end{equation} where $k$ is a central charge parametrizing the different orderings allowed by the $U(d)=U(1)\times SU(d)$ symmetry. These generators can equivalently be written as \begin{equation}\label{s charge} M_\mu^\nu= \bar Z^A_\mu Z^\nu_A -s\hbar\delta^\nu_\mu \end{equation} with $s=k+\frac{N-M}{2}$. They satisfy the correct $U(d)$ algebra \begin{equation} [M^\mu_\nu,M^\rho_\sigma]=\hbar\,\delta^\mu_\sigma\, M^\rho_\nu-\hbar\,\delta^\rho_\nu\, M^\mu_\sigma\;. \end{equation} We are now ready to discuss the covariantization of the quantum operators belonging to the $U(N|M)$ extended supersymmetry algebra. As we shall see, not all of the charges generate symmetries on generic K\"ahler manifolds: some of them do not commute with the hamiltonian and thus are not conserved. It is easiest to start with the generators of $U(N|M)$. They are left unchanged, as the metric does not enter their definition: $J^A_B=\bar Z^A_\mu Z_B^\mu+m\hbar \epsilon^A_B$. They satisfy the same $U(N|M)$ symmetry algebra given in eq. \eqref{U(N|M) supergroup notation}. Now we consider the $Q$ supercharges. To covariantize them we introduce the covariant momenta \begin{equation} \bar\pi_{\bar\mu}=g^{1/2}\,\bar p_{\bar\mu}\,g^{-1/2}\;, \qquad \pi_\mu=g^{1/2}\,\big(p_\mu-i\,\Gamma^\lambda_{\mu\sigma}\,M^\sigma_\lambda\big)\,g^{-1/2}\;, \end{equation} and write down the covariantized supercharges as \begin{equation} Q_A=Z_A^\mu\,\pi_\mu\;,\qquad\bar Q^A=\bar Z_\mu^A\,g^{\mu\bar\nu}\,\bar\pi_{\bar\nu}\;.
\end{equation} Similarly, the covariant hamiltonian operator is given by \begin{equation}\label{H_0} H_0=g^{\bar\mu\nu}\bar\pi_{\bar\mu}\pi_\nu=g^{1/2}\, g^{\bar\mu\nu} \bar p_{\bar\mu}\,\big(p_\nu-i\,\Gamma^\lambda_{\nu\sigma}\,M^\sigma_\lambda\big)\,g^{-1/2}\;. \end{equation} At this stage it is worthwhile to spend a few words on the hermiticity properties of our operators: since the $\bar Z^A_\mu$ fields are defined as independent variables with lower holomorphic indices, but hermitian conjugation of vector indices naturally sends holomorphic into anti-holomorphic indices, and vice versa, the natural definition of the adjoint of $Z_A^\mu$ is $(Z_A^\mu)^\dagger=\bar Z_\nu^A\,g^{\nu\bar\mu}$. In this way, hermitian conjugation of the momentum is nontrivial: if $[p_\mu,Z_A^\nu]=0$, it must hold that $[(p_\mu)^\dagger,(Z^\nu_A)^\dagger]=[(p_\mu)^\dagger, \bar Z^A_\lambda\,g^{\lambda\bar\nu}]=0$ as well. Requiring this property we find \begin{equation} \big(p_\mu\big)^\dagger=\bar p_{\bar\mu}-i\,\Gamma^{\bar\lambda}_{\bar\mu\bar\sigma}\,M^\lambda_\sigma\,g^{\sigma\bar\sigma}g_{\lambda\bar\lambda}\;. \end{equation} Now, if we define the supercharges in the natural way written above, namely $Q_A=Z_A^\mu\,\pi_\mu$ and $\bar Q^A=\bar Z^A_\mu\,g^{\mu\bar\nu}\,\bar\pi_{\bar\nu}$, then one finds that $(Q_A)^\dagger=\bar Q^A$ and $H_0^\dagger=H_0$. Note that the power of the metric determinant entering the various operators is necessary for verifying the hermiticity properties. Let us now consider their algebra. The first line of \eqref{U(N|M) complete superalgebra flat} simply states that $Q_A$ and $\bar Q^A$ belong to the fundamental and anti-fundamental representations of $U(N|M)$, and one can check that these relations remain unchanged even in curved space, \begin{equation}\label{JQ} [J^A_B, Q_C\} = \pm\hbar\,\delta^A_C\,Q_B\; , \qquad [J^A_B, \bar Q^C\} = \hbar\,\delta^C_B\,\bar Q^A \;. \end{equation} On the other hand the last relation becomes \begin{equation}\label{QbarQ curved space} [Q_A, \bar Q^B\}=\hbar\,\delta_A^B\,H_0+\hbar\,Z_A^\mu\bar Z^B_\nu\,R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}M^\sigma_\lambda\; . \end{equation} The minimal covariant hamiltonian $H_0$, emerging from this commutator as the term multiplying $\delta_A^B$ and already given in \eqref{H_0}, does not conserve the supercharges except in flat space; in fact the commutator between $H_0$ and $Q$ does not vanish and reads \begin{equation}\label{[Q,H_0], U(N|M)} \begin{split} [Q_A, H_0]&=\hbar\,Z_A^\mu\,R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M_\lambda^\sigma\,\pi_\nu +\hbar^2\,Z_A^\mu\,R^\nu_\mu\,\pi_\nu\\ [\bar Q^A, H_0]&\equiv-[Q_A, H_0]^\dagger\;. \end{split} \end{equation} $H_0$ is a central operator only in flat space. Finally, it is simple to verify that \begin{equation}\label{QQ} [Q_A, Q_B\}= [\bar Q^A, \bar Q^B\}= 0 \; . \end{equation} Relations \eqref{JQ}, \eqref{QbarQ curved space}, \eqref{[Q,H_0], U(N|M)} and \eqref{QQ}, together with \eqref{U(N|M) supergroup notation}, describe the deformation of the $U(N|M)$ supersymmetry algebra realized by our quantum nonlinear sigma model on a K\"ahler manifold. Supersymmetry is broken, as the supercharges are not conserved. Only in flat space does the hamiltonian $H_0$ become central, so that the supercharges are conserved. Given this state of affairs, one may try to redefine the hamiltonian in an attempt to make it central on more general backgrounds, thus recovering conserved supercharges.
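Before doing so, let us record a quick consistency check (a sketch using only the formulas above): in flat space one has $g_{\mu\bar\nu}=\delta_{\mu\bar\nu}$, $\Gamma^\lambda_{\mu\sigma}=0$ and $g=1$, so that
\begin{equation*}
\pi_\mu=p_\mu\;,\qquad \bar\pi_{\bar\mu}=\bar p_{\bar\mu}\;,\qquad
Q_A=Z_A^\mu\, p_\mu\;,\qquad H_0=p_\mu\bar p^\mu\;,
\end{equation*}
the curvature terms in \eqref{QbarQ curved space} and \eqref{[Q,H_0], U(N|M)} drop out, and one recovers the flat superalgebra \eqref{U(N|M) complete superalgebra flat}.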
For this purpose, we add to $H_0$ several non minimal couplings \begin{equation}\label{H with cs} H=H_0+c_1R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M^\mu_\nu\,M^\sigma_\lambda+c_2\,\hbar\,R^\mu_\nu\,M^\nu_\mu+c_3\,\hbar^2R\;. \end{equation} With these generic couplings \eqref{[Q,H_0], U(N|M)} becomes \begin{equation}\label{[Q,H], U(N|M) generic couplings} \begin{split} [Q_A, H]&=\hbar\,(1+2c_1)\,Z_A^\mu\,R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M^\sigma_\lambda\,\pi_\nu+\hbar^2\,(1+c_1+c_2)\,Z^\mu_A\,R^\nu_\mu\,\pi_\nu\\ &-i\hbar c_1Z^\rho_A\nabla_\rho R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M^\mu_\nu\,M^\sigma_\lambda-i\hbar^2 c_2\,Z^\sigma_A\nabla_\sigma R^\mu_\nu\,M^\nu_\mu-i\hbar^3 c_3\,Z^\mu_A\nabla_\mu R\;. \end{split} \end{equation} We see that for the choice $c_1=-\frac12$, $c_2=-\frac12$ and generic $c_3$, the terms in the first line proportional to the covariant momentum $\pi_\nu$ vanish and, choosing $c_3=0$ for simplicity, we identify a canonical hamiltonian $H_{(c)}$ so that eq. \eqref{[Q,H], U(N|M) generic couplings} reduces to \begin{equation}\label{QH commutator} [Q_A,H_{(c)}]=\frac{i\hbar}{2}\,Z^\rho_A\,\nabla_\rho R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M^\mu_\nu\,M^\sigma_\lambda+ \frac{i\hbar^2}{2}\,Z^\sigma_A\,\nabla_\sigma R^\mu_\nu\,M^\nu_\mu\;, \end{equation} showing that $H_{(c)}$ is central on locally symmetric spaces. Of course, the graded commutator \eqref{QbarQ curved space} also changes and becomes \begin{equation}\label{QbarQ anticomm} [Q_A, \bar Q^B\}=\hbar\,\delta_A^B\,H_{(c)}+\hbar\,R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\left(Z^\mu_A\bar Z_\nu^B+\frac12\,\delta_A^B\,M^\mu_\nu\right)M^\sigma_\lambda+\frac12\,\hbar^2\,\delta_A^B\,R^\mu_\nu\,M^\nu_\mu\;. \end{equation} Thus one concludes that, with the redefinition of the hamiltonian given above, the supercharges are conserved on locally symmetric K\"ahler manifolds. One of the most interesting applications of the nonlinear sigma models discussed so far is to use them to construct spinning particles and related higher spin equations. This is achieved by gauging the extended susy algebra identified by the charges $(H, Q_A, \bar Q^A, J_A^B)$, possibly with a suitable redefinition of the hamiltonian. Unfortunately, we see that on generic K\"ahler manifolds the $U(N|M)$ extended susy algebra is not first class, as additional independent operators appear on the right hand sides, as is evident for example in eqs. (\ref{QH commutator}) and (\ref{QbarQ anticomm}). However, there are special cases, namely the $U(1|0)$ and $U(2|0)$ quantum mechanics, which generate first class superalgebras with a central hamiltonian on any K\"ahler background. In fact, for the $U(1|0)\equiv U(1)$ model the algebra reduces to \begin{equation} \{Q, \bar Q\}=\hbar\,H\;,\quad [Q,H]=0 \end{equation} where the hamiltonian is now defined by \begin{equation} H=H_0-\frac{\hbar}{2}\,R^\mu_\nu\,M^\nu_\mu+\frac{\hbar^2}{4}\,R=H_0^{sym}+\frac{\hbar^2}{4}\,R\;, \end{equation} with $H_0^{sym}=\frac12 g^{\mu\bar\nu}(\pi_\mu\bar\pi_{\bar\nu}+\bar\pi_{\bar\nu}\pi_\mu)$. For the $U(2|0)\equiv U(2)$ model the choice of the hamiltonian is the canonical one, \emph{i.e.} the one in \eqref{H with cs} with $c_1=c_2=-\frac12$ and $c_3=0$, and the superalgebra closes as \begin{equation} \{Q_a, \bar Q^b\}=\hbar\,\delta_a^b\,H\;,\quad [Q_a,H]=0\;. \end{equation} For the general $U(N|M)$ extended susy algebras one cannot achieve such generality.
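For later reference, the canonical hamiltonian used here reads explicitly (direct substitution of $c_1=c_2=-\frac12$, $c_3=0$ in \eqref{H with cs}):
\begin{equation*}
H_{(c)}=H_0-\frac12\,R^{\nu\ph\mu\lambda}_{\ph\nu\mu\ph\lambda\sigma}\,M^\mu_\nu\,M^\sigma_\lambda-\frac{\hbar}{2}\,R^\mu_\nu\,M^\nu_\mu\;.
\end{equation*}
With these values the coefficients $1+2c_1$ and $1+c_1+c_2$ in \eqref{[Q,H], U(N|M) generic couplings} vanish, while $-i\hbar c_1=\frac{i\hbar}{2}$ and $-i\hbar^2 c_2=\frac{i\hbar^2}{2}$ reproduce the right hand side of \eqref{QH commutator}.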
Nevertheless, one may look for special backgrounds that make \eqref{QH commutator} and \eqref{QbarQ anticomm} first class. A nontrivial class of K\"ahler manifolds where the first class property can be achieved is that of manifolds with constant holomorphic sectional curvature. On these manifolds, the Riemann and Ricci tensors take the form \begin{equation} R_{\mu\bar\nu\sigma\bar\lambda} = -\frac{R}{d(d+1)}(g_{\mu \bar\nu} g_{\sigma\bar\lambda}+ g_{\sigma \bar\nu} g_{\mu\bar\lambda}) \;, \qquad R_{\mu\bar\nu} = \frac{R}{d}g_{\mu \bar\nu} \end{equation} where $R$ is the constant scalar curvature. Substituting these relations into the algebra, one notices that the metric tensor gets contracted with the $Z$ and $\bar Z$ operators, producing additional charges $J_A^B$ on the right hand side, so that with a suitable redefinition of the hamiltonian one obtains a first class algebra for generic $m$, $s$, $c_1$ and $c_2$, while $c_3$ gets fixed to a unique value. There is no loss of generality in choosing $c_1$ and $c_2$ equal to their canonical values, $c_1=c_2=-\frac{1}{2}$, when using the algebra as a first class constraint algebra. In this case \begin{equation} \begin{split} c_3 &= -\frac{m}{2d(d+1)}\Big ((N-M)^2 +(N-M)(4d-3m -2s+1) +2 (m-d)\Big) \\ &+ \frac{s}{2}\Big (1+ \frac{2(d-m)}{d} -\frac{s}{d+1} \Big ) \end{split} \end{equation} and the algebra can be cast in the following form \begin{equation}\label{algebra in constant curvature background} \begin{split} [Q_A, \bar Q^B\}&=\hbar\delta_A^B\,H-\frac{\hbar\,R}{d(d+1)}\,\Big\{(-)^{(A+B)C}J_A^CJ_C^B+(-)^{AB}J_A^BJ+(-)^{AB}\hbar k_1J_A^B \\ &+\delta_A^B \Big(\frac12 J^C_D\epsilon^D_E J^E_C+\frac12 J^2+\hbar k_2J\Big) \Big\}\;,\\ [Q_A, H]&=0 \end{split} \end{equation} where \begin{equation} \begin{split} k_1 &= d- s(d+1) + m(N-M-2)\\ k_2 &= d-s(d+1) -\Big(m+\frac12 \Big )(N-M) +\frac12\;. \end{split} \end{equation} We have denoted $J\equiv J^A_A$ and used the notation $(-)^{A}$ with $A=0$ for a bosonic index and $A=1$ for a fermionic one. Gauging this first class algebra produces ``$U(N|M)$ spinning particles'' on K\"ahler manifolds with constant holomorphic curvature, in a way analogous to the coupling of the standard ``$O(N)$ spinning particles'' to (A)dS spaces constructed in \cite{Kuzenko:1995mg}. One may recall that K\"ahler spaces with constant holomorphic sectional curvature are a subclass of spaces with vanishing Bochner tensor. The latter is a sort of complex analogue of the riemannian Weyl tensor, introduced in \cite{Bochner} and defined by \begin{equation} \begin{split} B_{\mu\bar\nu\sigma\bar\lambda} &= R_{\mu\bar\nu\sigma\bar\lambda} +\frac{1}{d+2}( g_{\mu \bar\nu} R_{\sigma\bar\lambda}+ g_{\sigma\bar\lambda} R_{\mu\bar\nu} + g_{\sigma \bar\nu} R_{\mu\bar\lambda} + g_{\mu\bar\lambda} R_{\sigma \bar\nu} ) \\ & -\frac{R}{(d+1)(d+2)}(g_{\mu \bar\nu} g_{\sigma\bar\lambda}+ g_{\sigma \bar\nu} g_{\mu\bar\lambda}) \;. \end{split} \end{equation} It satisfies the nice property of being traceless, $g^{\mu\bar\nu}B_{\mu\bar\nu\sigma\bar\lambda} =0$. It seems likely that on spaces with vanishing Bochner tensor one may obtain a first class algebra; indeed, it is relatively easy to verify this at the classical level, but we do not wish to pursue the detailed quantum analysis here. \section{Transition amplitude} Up to now we have discussed nonlinear sigma models with $U(N|M)$ extended supersymmetry, broken at times by the target space geometry, and used them to analyze algebraic properties of differential operators defined on K\"ahler manifolds.
The aim of this section is the explicit computation of the transition amplitude in euclidean time, that is $\Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}$, in the limit of short propagation time and using operatorial methods. Such a calculation was presented for standard nonlinear sigma models with one, two or no supersymmetries in \cite{Peeters:1993vu}, see also \cite{Bastianelli:2006rx}, with the main purpose of identifying a benchmark against which to compare path integral evaluations of the same heat kernel. As we wish to be able to master path integrals for $U(N|M)$ sigma models, and eventually use them to address quantum properties of higher spin equations on K\"ahler manifolds, we compute here the heat kernel using the operatorial formulation of quantum mechanics. To achieve sufficient generality and allow diverse applications, we compute the heat kernel for the general hamiltonian \eqref{H with cs} containing three arbitrary couplings $(c_1, c_2, c_3)$ to the background curvature plus a fourth one, the charge $s$, hidden in the $U(1)$ part of the connection, see eq. \eqref{s charge}. Before starting the actual computation, we shall review our setup. We work on a $2d$ real dimensional K\"{a}hler manifold as target space. Holomorphic and anti-holomorphic vector indices will often be grouped into a riemannian index $i=(\mu,\bar\mu)$ for the sake of brevity. The metric in holomorphic coordinates factorizes as follows \begin{equation} g_{ij}=\left(\begin{array}{cc} 0 & g_{\mu\bar\nu} \\ g_{\bar\mu\nu} & 0 \end{array}\right)\;. \end{equation} For determinants we use the conventions $g=\det(g_{\mu\bar\nu})$ and $G=\abs{\det(g_{ij})}=\abs{g}^2$. The dynamical variables of the $U(N|M)$ supersymmetric quantum mechanics consist of the following operators: target space coordinates $(x^\mu,\bar x^{\bar\mu})=x^i$, conjugate momenta $p_i$, and graded vectors $Z^\mu_A$ and $\bar Z_\nu^A$. Their fundamental (anti)-commutation relations are given in \eqref{CCR}. For computational convenience we recast the full quantum hamiltonian \eqref{H with cs} in a way that directly shows the dependence on the $Z$ operators \begin{equation}\label{full quantum H} \begin{split} H &= H_0+\Delta H\quad\text{with}\\ H_0 &= g^{\bar\mu\nu}\,g^{1/2}\,\bar p_{\bar\mu}\,\big(p_\nu-i\,\Gamma^\lambda_{\nu\sigma}\,M^\sigma_\lambda\big)\,g^{-1/2}\\ \Delta H &= a_1\,R_{\mu\ph\nu\rho}^{\ph\mu\nu\ph\rho\sigma}\,\bar Z_\nu\cdot Z^\mu\,\bar Z_\sigma\cdot Z^\rho+a_2\,\hbar\,R^\mu_\nu\,\bar Z_\mu\cdot Z^\nu+a_3\,\hbar^2\,R\;, \end{split} \end{equation} where the $a$ couplings are related to the $c$ couplings by \begin{equation} a_1=c_1\;,\quad a_2=c_2+2sc_1\;,\quad a_3= c_3-sc_2-s^2c_1 \;. \end{equation} Finally, it is useful to recall that the final answer for the heat kernel will contain the exponential of the classical action, suitably Wick-rotated to euclidean time $\tau$ ($t\to -i\tau$), which in phase space takes the form \begin{equation} S=\int_{-\beta}^0 d\tau\,\Big[-ip_\mu\dot x^\mu-i\bar p_{\bar\mu}\dot{\bar x}^{\bar\mu}+\bar Z_\mu^A\dot Z^\mu_A+H_{cl}\Big] \end{equation} where $H_{cl}$ is the classical hamiltonian, a function possibly modified by suitable quantum corrections depending on $\hbar$.
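As an example of this dictionary (simple arithmetic from the relations above), the canonical couplings $c_1=c_2=-\frac12$, $c_3=0$ of the previous section correspond to
\begin{equation*}
a_1=-\frac12\;,\qquad a_2=-\frac12-s\;,\qquad a_3=\frac{s}{2}\,(1+s)\;,
\end{equation*}
showing how the $U(1)$ charge $s$ explicitly enters the non minimal couplings once the hamiltonian is written in terms of the $Z$ operators.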
Now we are ready for the explicit computation of the transition amplitude, through order $\beta$ (relative to the leading free particle propagator), between position eigenstates and coherent states for the internal degrees of freedom, \emph{i.e.} \begin{equation}\label{transition amplitude definition} \Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}\;, \end{equation} where $Z^\mu_A\ket{\xi}=\xi^\mu_A\ket{\xi}$ and $\bra{\bar\eta}\bar Z_\mu^A=\bra{\bar\eta}\bar\eta_\mu^A$. Of course, $\ket{x}$ and $\ket{y}$ denote eigenvectors of the position operator $x^i$ as usual, $\ket{y\,\xi} \equiv \ket{y}\otimes \ket{\xi}$, and so on. For convenience in the normalization of the coherent states, from now on we rescale the $Z$ fields by a factor of $\sqrt{\hbar}$, so that $[Z^\mu_A, \bar Z_\nu^B\}=\delta^\mu_\nu\,\delta_A^B$. We are going to insert in \eqref{transition amplitude definition} a complete set of momentum eigenstates, and as an intermediate stage we need to compute \begin{equation}\label{transition x-p} \Braket{x\,\bar\eta}{p\,\xi}{e^{-\frac{\beta}{\hbar} H}} \;, \end{equation} pushing all $p$'s and $Z$'s to the right, all $x$'s and $\bar Z$'s to the left, taking into account all (anti)-commutators and then substituting these operators with the corresponding eigenvalues. Let us focus on the evaluation of \eqref{transition x-p}; clearly we have \begin{equation}\label{sum} \Braket{x\,\bar\eta}{p\,\xi}{e^{-\frac{\beta}{\hbar} H}}=\sum_{k=0}^\infty\frac{(-)^k}{k!}\,\left(\frac{\beta}{\hbar}\right)^k\,\Braket{x\,\bar\eta}{p\,\xi}{H^k}\;. \end{equation} It is well known that, in the case of a nonlinear sigma model, it is not sufficient to expand the exponent to first order, \emph{i.e.} $e^{-\beta H/\hbar}\sim 1-\frac{\beta}{\hbar}H$, to obtain the correct transition amplitude to order $\beta$, see \cite{Peeters:1993vu,Bastianelli:2006rx}. Contributions for all $k$ must be retained in the sum \eqref{sum}, but taking into account at most two $[x,p]$ commutators. Let us see this in more detail. In a factor of $H^k$, pushing all $p$'s to the right by repeated use of the $[x,p]$ commutator, one obtains, remembering that each $H$ can give at most two $p$ eigenvalues, \begin{equation}\label{Bcoeff} \Braket{x\,\bar\eta}{p\,\xi}{H^k}=\sum_{l=0}^{2k}B^k_l(x,\bar\eta,\xi)\,p^l\,\braket{x\,\bar\eta}{p\,\xi}\;, \end{equation} where $p^l$ stands for a homogeneous polynomial in $p$ of degree $l$. For the position eigenstates we use the normalization $\braket{x}{x'}=g^{-1/2}(x)\delta^{2d}(x-x')$, while the standard normalization is employed for $p$-eigenstates. In this way the completeness relations read \begin{equation} \mathbf1=\int d^{2d}p\,\ket{p}\bra{p}\quad,\quad\mathbf1=\int d^{2d}x\,g\,\ket{x}\bra{x}\;, \end{equation} while the plane waves are given by $\braket{x}{p}=(2\pi\hbar)^{-d}g^{-1/2}(x)e^{ip\cdot x}$, with $p\cdot x\equiv p_ix^i=p_\mu x^\mu+\bar p_{\bar\mu}\bar x^{\bar\mu}$. Finally, coherent states are normalized as $\braket{\bar\eta}{\xi}=e^{\bar\eta\cdot\xi}$. Having set our normalizations, we expand the transition amplitude as follows \begin{equation} \begin{split} &\Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}=(2\pi\hbar)^{-d}\,g^{-1/2}(y) \int d^{2d}p\,e^{-\frac{i}{\hbar}p\cdot y}\,\Braket{x\,\bar\eta}{p\,\xi}{e^{-\beta H/\hbar}}\\ &= (2\pi\hbar)^{-2d}\,[g(x)g(y)]^{-1/2}\int d^{2d}p\,e^{\frac{i}{\hbar}p\cdot(x-y)}\,e^{\bar\eta_\mu\cdot\xi^\mu}\,\sum_{k=0}^\infty\left(-\frac{\beta}{\hbar}\right)^k\frac{1}{k!} \sum_{l=0}^{2k}B_l^k(x,\bar\eta,\xi)\,p^l\;.
\end{split} \end{equation} Now, to make the $\beta$ dependence explicit, we rescale momenta as $p_i=\sqrt{\hbar/\beta}q_i$ and obtain \begin{equation}\label{beta dependence} \begin{split} \Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}} &= (4\pi^2\hbar\beta)^{-d}[g(x)g(y)]^{-1/2}e^{\bar\eta_\mu\cdot\xi^\mu}\int d^{2d}q\,e^{iq\cdot(x-y)/\sqrt{\beta\hbar}} \\ &\times\sum_{k=0}^\infty\frac{(-)^k}{k!} \sum_{l=0}^{2k}\left(\frac{\beta}{\hbar}\right)^{k-l/2}B^k_l(x,\bar\eta,\xi)\,q^l\;. \end{split} \end{equation} After momentum integration, in configuration space the leading term in $(x-y)$ will be of the form $\exp[-(x-y)^2/2\beta\hbar]$, showing that effectively $(x-y)\sim\mathcal{O}(\beta^{1/2})$. Then, looking at \eqref{beta dependence}, we see that $q\sim\mathcal{O}(\beta^0)$ and so in the sum over $l$ only $B^k_{2k}$, $B^k_{2k-1}$ and $B^k_{2k-2}$ will contribute, for all $k$, to the order $\beta$ amplitude, as anticipated\footnote{Note that in $B_l^k$ at most $2k-l$ $[x,p]$ commutators are taken into account.}. The $B^k_l$ coefficients are explicitly derived in appendix \ref{computation of B coefficients}, and inserting \eqref{B2k-1} and \eqref{B2k-2} into \eqref{beta dependence}, one can see that the sum in $k$ can be immediately performed, producing the gaussian exponential $\exp[-q^2/2]$. The transition amplitude \eqref{beta dependence} then becomes \begin{equation}\label{transition amplitude with momenta} \begin{split} &\Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}= (4\pi^2\hbar\beta)^{-d}[g(x)g(y)]^{-1/2}e^{\bar\eta_\mu\cdot\xi^\mu}\int d^{2d}q\,e^{-q^2/2-iq\cdot\Delta/\sqrt{\beta\hbar}}\,\Big\{1+\sqrt{\beta\hbar}\,\Big[ \frac{i}{2}g^jq_j\\ &-\frac{i}{4}g^{klj}\,q_j\,q_k\,q_l+ig^{\bar\mu\nu}\,\Gamma^\lambda_{\nu\sigma}\, (\bar\eta_\lambda\cdot\xi^\sigma)'\bar q_{\bar\mu}\Big]+\beta\hbar\,\Big[-\frac{1}{32}\ln G_i\ln G^i-\frac18\ln G_i^i-\frac18g^i\ln G_i\\ &-\Big(\frac14\de^jg^l+\frac18g^jg^l+\frac18g^kg^{jl}_k+\frac18g^{jlk}_k\Big)\,q_j\,q_l+\Big(\frac{1}{12} g^{mnkl}+\frac18g^{klm}g^n+\frac{1}{12}g^{ikl}g^{mn}_i\\ &+\frac{1}{24}g^{kl}_ig^{mni}\Big)\,q_k\,q_l\,q_m\,q_n-\Big(\frac{1}{32}g^{klj}g^{pqm}\Big)\,q_j\,q_k\,q_l\, q_m\,q_p\,q_q-\frac12g^{ij}\de_j\Big(g^{\mu\bar\nu}\,\Gamma^\lambda_{\mu\sigma}\Big) (\bar\eta_\lambda\cdot\xi^\sigma)'q_i\,\bar q_{\bar\nu}\\ &-\frac12g^{\bar\mu\nu}\,\Gamma^\rho_{\nu\sigma}\, (\bar\eta_\rho\cdot\xi^\sigma)'\Big(\de_{\bar\mu}g^{\lambda\bar\sigma}\,q_\lambda\,\bar q_{\bar\sigma}+g^jq_j\,\bar q_{\bar\mu}-\frac12g^{klj}\,q_j\,q_k\,q_l\,\bar q_{\bar\mu}+g^{\lambda\bar\sigma}\de_{\bar\mu}g_{\lambda\bar\sigma}\Big)\\ &-a_1\,R^{\ph\mu\nu\ph\rho\sigma}_{\mu\ph\nu\rho} \,\bar\eta_\nu\cdot\xi^\mu\bar\eta_\sigma\cdot\xi^\rho-(a_2-a_1+1)\,R^\mu_\nu\,\bar\eta_\mu\cdot\xi^\nu- \Big(a_3-s\Big)\,R\\ &-\frac12g^{\bar\mu\nu}\Gamma^\mu_{\nu\tau}g^{\lambda\bar\sigma}\Gamma^\rho_{\lambda\sigma} \,\bar q_{\bar\mu}\,\bar q_{\bar\sigma}\,\big[(\bar\eta_\mu\cdot\xi^\tau)'(\bar\eta_\rho\cdot\xi^\sigma)'+\delta_\rho^\tau\,\bar\eta_\mu\cdot \xi^\sigma\big]\Big]\Big\}\;, \end{split} \end{equation} where $\Delta^i=y^i-x^i$ and $(\bar\eta_\lambda\cdot\xi^\sigma)'=(\bar\eta_\lambda\cdot\xi^\sigma-s\,\delta_\lambda^\sigma)$. In order to lighten the formulae we have used the following compact notation \begin{equation}\nonumber \begin{split} \de_i...\de_mg^{jk} &= g^{jk}_{i...m}\;,\quad g^{ij}g^{kl}_j=g^{kli}\;,\quad g^{ij}_j=g^i \\ g^{jk}\de_kg^{lm}_m &=\de^jg^l\;,\quad\de_i\ln G=\ln G_i\;,\quad g^{ij}\de_i\de_j\ln G=\ln G_i^i \;. 
\end{split} \end{equation} Now we can complete squares in the exponent of \eqref{transition amplitude with momenta}, shift integration variables and perform the gaussian integral over momenta. The transition amplitude, up to order $\beta$, is then given by \begin{equation}\label{transition amplitude expanded} \begin{split} &\Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}=(2\pi\hbar\beta)^{-d}\,\big[g(x)/g(y)\big]^{1/2}\,e^{-\frac{1}{2\beta\hbar}g_{ij}\Delta^i\Delta^j}\, e^{\bar\eta_\mu\cdot\xi^\mu}\Big\{1+\Delta^i\,g^{-1/2}\,\de_i\,g^{1/2}\\ &-\frac{1}{4\beta\hbar}\,\de_kg_{ij}\,\Delta^i\Delta^j\Delta^k+\frac12\,\Delta^i\Delta^j\,g^{-1/2}\,\de_i\de_jg^{1/2} -\frac{1}{4\beta\hbar}\,\Delta^ig^{-1/2}\de_ig^{1/2}\de_kg_{mn}\,\Delta^k\Delta^m\Delta^n\\ &+\frac12\,\Big[\frac{1}{4\beta\hbar}\,\de_kg_{ij}\,\Delta^i\Delta^j\Delta^k\Big]^2-\frac{1}{12\beta\hbar} \,\Big[\de_k\de_lg_{ij}-\frac12g_{mn}\,\Gamma^m_{ij}\,\Gamma^n_{kl}\Big]\Delta^i\Delta^j\Delta^k\Delta^l+\frac{1}{6}\, R_{\mu\bar\nu}\,\Delta^\mu\bar\Delta^{\bar\nu}\\ &+\Delta^\nu\,\Gamma^\lambda_{\nu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)'+ \Big[\Delta^\nu\Gamma^\lambda_{\nu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)'\Big]\Big[\Delta^ig^{-1/2}\de_ig^{1/2}\Big] +\frac12\Big[\Delta^\nu\Gamma^\lambda_{\nu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)'\Big]^2\\ &-\frac{1}{4\beta\hbar} \,\de_jg_{kl}\,\Delta^j\Delta^k\Delta^l\,\Big(\Delta^\nu\Gamma^\mu_{\nu\sigma}\,(\bar\eta_\mu\cdot\xi^\sigma)'\Big) +\frac12\Delta^i\Delta^\mu\de_i\Gamma^\lambda_{\mu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)'\\ &+\frac12\,\Delta^\nu\Delta^\lambda\,\Gamma^\mu_{\nu\sigma}\Gamma^\sigma_{\lambda\rho}\,\bar\eta_\mu\cdot\xi^\rho -a_1\,\beta\hbar\,R_{\mu\ph\nu\rho}^{\ph\mu\nu\ph\rho\sigma} \,\bar\eta_\nu\cdot\xi^\mu\,\bar\eta_\sigma\cdot\xi^\rho+\Big(a_1-a_2-\frac12\Big)\,\beta\hbar\,R^\mu_\nu\,\bar\eta_\mu\cdot\xi^\nu\\ &+\Big(\frac{1}{6}+\frac{s}{2}-a_3\Big)\,\beta\hbar\,R+\mathcal{O}(\beta^{3/2})\Big\}\;. \end{split} \end{equation} All functions in \eqref{transition amplitude expanded}, if not specified otherwise, are evaluated at point $x$. 
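For reference, the momentum integrations leading from \eqref{transition amplitude with momenta} to \eqref{transition amplitude expanded} involve only standard gaussian moments: with $q^2\equiv g^{ij}q_iq_j$ (as implied by the leading exponential $e^{-\frac{1}{2\beta\hbar}g_{ij}\Delta^i\Delta^j}$ produced by completing the square) and with $\langle\cdots\rangle$ denoting the normalized gaussian average, the odd moments vanish while, e.g.,
\begin{equation*}
\langle q_i\,q_j\rangle=g_{ij}\;,\qquad
\langle q_i\,q_j\,q_k\,q_l\rangle=g_{ij}\,g_{kl}+g_{ik}\,g_{jl}+g_{il}\,g_{jk}\;.
\end{equation*}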
Keeping in mind that the transition amplitude is a bi-scalar, and that in a semiclassical expansion the classical action evaluated on-shell should appear in the exponent, we factorize and exponentiate, up to order $\beta$, four terms \begin{equation}\label{transition amplitude factorized} \begin{split} &\Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}=(2\pi\hbar\beta)^{-d}\,g(y)^{-1/2}\Big[g^{1/2}+\Delta^i\de_ig^{1/2}+\frac12\,\Delta^i\Delta^j\de_i\de_jg^{1/2}\Big]\\ &\exp\Big\{\!\!-\frac{1}{\beta\hbar}\,\Big[\frac12\,g_{ij}\,\Delta^i\Delta^j+\frac14\,\de_ig_{jk}\,\Delta^i\Delta^j\Delta^k+\frac{1}{12}\, \Big(\de_k\de_lg_{mn}-\frac12\,g_{ij}\,\Gamma^i_{kl}\,\Gamma^j_{mn}\Big)\,\Delta^k\Delta^l\Delta^m\Delta^n\Big]\Big\}\\ &\exp\Big\{\bar\eta_\mu\cdot\xi^\mu+\Delta^\nu\,\Gamma^\lambda_{\nu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)' +\frac12\Delta^i\Delta^\mu\de_i\Gamma^\lambda_{\mu\sigma}\,(\bar\eta_\lambda\cdot\xi^\sigma)' +\frac12\,\Delta^\nu\Delta^\lambda\,\Gamma^\mu_{\nu\sigma}\Gamma^\sigma_{\lambda\rho}\,\bar\eta_\mu\cdot\xi^\rho\\ &-a_1\,\beta\hbar\,R_{\mu\ph\nu\rho}^{\ph\mu\nu\ph\rho\sigma} \,\bar\eta_\nu\cdot\xi^\mu\,\bar\eta_\sigma\cdot\xi^\rho-a_2\,\beta\hbar\,R^\mu_\nu\,\bar\eta_\mu\cdot\xi^\nu -a_3\,\beta\hbar\,R\Big\}\\ &\Big[1+\frac16\,R_{\mu\bar\nu}\,\Delta^\mu\bar\Delta^{\bar\nu}+\Big(a_1-\frac12\Big)\,\beta\hbar\, R^\mu_{\nu}\,\bar\eta_\mu\cdot\xi^\nu+\Big(\frac16+\frac{s}{2}\Big)\,\beta\hbar\,R\Big]\;. \end{split} \end{equation} The first term contains the Taylor expansion around $x$ of $g(y)^{1/2}$, which cancels the $g(y)^{-1/2}$ factor. The second and third terms should be the expansions of the exponential of the classical action, and the fourth is evidently covariant. The detailed study of the expansion of the on-shell action is deferred to appendix \ref{classical action appendix}. Comparing the result \eqref{classical action expanded} for the classical on-shell action $\tilde S_{os}$ with the expansion \eqref{transition amplitude factorized}, we see that, as expected, the transition amplitude can finally be cast in an explicitly covariant form \begin{equation}\label{transition amplitude covariant} \begin{split} \Braket{x\,\bar\eta}{y\,\xi}{e^{-\frac{\beta}{\hbar} H}}&=(2\pi\hbar\beta)^{-d}\,e^{-\tilde S_{os}/\hbar}\,\Big[1+\frac16\,R_{\mu\bar\nu}\,\Delta^\mu\bar\Delta^{\bar\nu}+\Big(a_1-\frac12\Big)\,\beta\hbar\, R^\mu_{\nu}\,\bar\eta_\mu\cdot\xi^\nu\\ &+\Big(\frac16+\frac{s}{2}\Big)\,\beta\hbar\,R + {\cal O}(\beta^2)\Big] \end{split} \end{equation} where the coordinate displacements $\Delta^\mu$ are considered of order $\sqrt{\beta}$. \section{Conclusions and outlook} In this paper we have introduced and studied the quantum properties of a class of quantum mechanical models with $U(N|M)$ extended supersymmetry on the worldline. These models take the form of nonlinear sigma models with K\"ahler manifolds as target spaces, and can be interpreted as describing the motion of a particle with extra degrees of freedom, carried by graded complex vectors $Z^\mu_A$, on K\"ahler spaces. When the K\"ahler space is flat, the model has conserved charges satisfying precisely a $U(N|M)$ extended supersymmetry algebra on the worldline. On curved K\"ahler spaces, the charges get modified by the geometry, as does the corresponding quantum algebra, which generically fails to be first class, though a symmetry under the supergroup $U(N|M)$ is always present. Conserved supercharges can be defined on locally symmetric K\"ahler manifolds, i.e.
K\"ahler manifolds with covariantly constant curvature tensors, while a truly first class algebra can be obtained on K\"ahler manifolds with constant holomorphic sectional curvature. The latter case is particularly interesting, as one can gauge the symmetry charges to obtain higher spin equations with peculiar gauge symmetries, as studied in flat space for the $U(N|0)$ models in \cite{Bastianelli:2009vj}. In the second part of the paper we have computed the heat kernel for our quantum mechanical models in a perturbative expansion. The computation was performed with operatorial methods on arbitrary K\"ahler manifolds and with a general hamiltonian containing four arbitrary couplings. The calculation turned out to be somewhat tedious for a rather simple final result. One possible application of this result is to use it as a benchmark for path integral calculations, which are often simpler and more flexible, but need to be defined precisely, with predetermined regularization schemes and corresponding counterterms. Indeed the operatorial calculation of ref. \cite{Peeters:1993vu} was useful to identify the correct time slicing regularization of path integrals in curved spaces \cite{DeBoer:1995hv}. Correctness of the alternative but equivalent mode \cite{Bastianelli:1991be} and dimensional \cite{Kleinert:1999aq, Ba:2005vk} regularizations has then been checked against time slicing, and the full consistency of these three schemes have been instrumental in putting the method of path integration on curved manifolds on solid foundations \cite{Bastianelli:2006rx}. In future works we plan to construct regularized path integrals for the $U(N|M)$ quantum mechanics, use them to study effective actions induced by higher spin fields and compute higher order heat kernel coefficients. \acknowledgments{We wish to thank Andrew Waldron for discussions. This work was supported in part by the Italian MIUR-PRIN contract 20075ATT78.} \vskip 1cm
\section{Introduction} There has been renewed interest in muon colliders operating at high energies in the multi-TeV range~\cite{Delahaye:2019omf,Han:2020uid,Long:2020wfp}. This would offer great physics opportunities, opening unprecedented energy thresholds for new physics and providing precision measurements in the clean environment of leptonic collisions~\cite{Capdevilla:2021rwo,Liu:2021jyc,Huang:2021nkl,Yin:2020afe,Buttazzo:2020eyl,Capdevilla:2020qel,Han:2020pif,Han:2020uak,Costantini:2020stv}. Recent studies have indeed demonstrated the impressive physics potential in exploring the electroweak sector, including precision Higgs boson coupling measurements~\cite{Han:2020pif}, electroweak dark matter detection~\cite{Han:2020uak}, and the discovery of other beyond the Standard Model (BSM) heavy particles~\cite{Costantini:2020stv}. In this note, we summarize the results on the discovery potential for non-SM heavy Higgs bosons at a high-energy muon collider in the framework of 2HDMs~\cite{Branco:2011iw}. We adopt the four commonly studied categories according to the assignments of a discrete $\mathbb{Z}_2$ symmetry, which dictates the pattern of the Yukawa couplings. We take a conservative approach, working in the alignment limit for the mixing parameter so that there are no large corrections to SM Higgs physics. We consider the benchmark energies for the muon colliders \cite{Delahaye:2019omf} in the range of $\sqrt s=3-30$ TeV, with the integrated luminosity scaled as \begin{align} \label{eq:luminosity} \mathcal{L}= \left({\frac{\sqrt s}{10\ {\rm TeV}}}\right)^2 \times 10^4\ {\rm fb}^{-1}. \end{align} We study both heavy Higgs boson pair production and single production in association with two heavy fermions. Both $\mu^+\mu^-$ annihilation channels and Vector Boson Fusion (VBF) processes are considered, which are characteristically different. We also analyze the radiative return $s$-channel production of a heavy Higgs boson in $\mu^+\mu^-$ annihilation, given the possible enhancement of the muon Yukawa coupling in certain models. Combining the production channels with the decay patterns, we also show how the four different types of 2HDMs can be distinguished. \section{Higgs pair production} \begin{figure}[!thb] \centering \includegraphics[width=0.8\textwidth]{figure/xsec_Ecm_HH_ANN_VBF.pdf} \caption{Pair production cross sections versus the c.m.~energy $\sqrt{s}$ for annihilation (left panel) and the VBF process (right) in the alignment limit $\cos(\beta-\alpha)=0$. The vertical axis on the right shows the corresponding event yields for a 10 ${\rm ab}^{-1}$ integrated luminosity.} \label{fig:Pair_CS} \end{figure} The cross sections of the pair productions in the alignment limit $\cos(\beta-\alpha)=0$ via annihilation and VBF are shown in~\autoref{fig:Pair_CS}. For annihilation, we see the $p$-wave threshold behavior for scalar pair production, $\sigma\sim \beta^3$ with $\beta = \sqrt{1-4m_H^2/s}$. Well above the threshold, the cross section asymptotically approaches $\sigma\sim \alpha^2/s$, which is insensitive to the heavy Higgs mass. The VBF processes become increasingly important at high c.m.~energies. We see the logarithmic enhancement with energy, $\log^2(s/m_\mu^2)$ (or $\log^2(s/m_V^2)$). Unlike production via annihilation, the cross sections for the VBF processes are very sensitive to the heavy Higgs masses. In general, the annihilation process yields more events than the VBF process, except for $H^+H^-$ production.
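For orientation (simple arithmetic from \eqref{eq:luminosity}), the assumed integrated luminosity at the benchmark energies is
\begin{equation*}
\mathcal{L}\simeq 0.9,\ 10,\ 90\ {\rm ab}^{-1}\qquad {\rm at}\qquad \sqrt s=3,\ 10,\ 30\ {\rm TeV}\;,
\end{equation*}
consistent with the reference value of 10 ${\rm ab}^{-1}$ used for the event yields in \autoref{fig:Pair_CS}.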
The different decay channels of the scalars, which have different branching fractions in the different types of 2HDM, can be used to help distinguish the four types. The leading signal channels for different cases are listed in~\autoref{tab:pair} and several observations can be made: \begin{itemize} \item{} For low values of $\tan\beta <5$, the four models cannot be distinguished since the dominant decay channels are the same: $H/A \rightarrow t\bar{t}$, $H^\pm \rightarrow tb$. \item{} For large values of $\tan\beta>10$, the decay modes $H/A \rightarrow \tau^+\tau^-$, $H^\pm \rightarrow \tau\nu$ become substantial in Type-L, which can be used to separate Type-L from the other three types of 2HDMs. \item{} For $\tan\beta>5$, the enhancement of the bottom Yukawa coupling with $\tan\beta$ in Type-II/F leads to a growing, and eventually dominant, $H/A\to b\bar b$ branching fraction, which can be used to separate Type-II/F from the Type-I 2HDM. \item{} Type-II and Type-F cannot be distinguished based on the leading channel for any range of $\tan\beta$, since the leptonic decay modes are always sub-dominant compared to decays into top or bottom quarks. Full discrimination is only possible at $\tan\beta>10$, if the sub-leading $H^\pm\to \tau\nu$ and $H/A\to \tau^+\tau^-$ decays in Type-II, which have branching fractions of about $10\%$, can be detected. \end{itemize} \begin{table}[!bht] \centering \begin{tabular}{|c|c|c|c|c|c|}\hline & production &Type-I & Type-II & Type-F & Type-L \\ \hline \multirow{3}{*}{small $\tan\beta<5$}&$H^+H^-$&\multicolumn{4}{c|} {$t\bar b, \bar t b$} \\ &$HA/HH/AA$&\multicolumn{4}{c|}{$t\bar t, t\bar t$} \\ &$H^\pm H/A$&\multicolumn{4}{c|}{$tb, t\bar t$} \\ \hline \multirow{4}{*}{intermediate $\tan\beta$}&$H^+H^-$&\multicolumn{3}{c|}{$t\bar b, \bar t b$}&$tb, \tau\nu_\tau$ \\ \cline{3-5} &$HA/HH/AA$&$t\bar t, t\bar t$&\multicolumn{2}{c|}{$t\bar t, b\bar b$}&$t\bar t,\tau^+\tau^-$\\ &$H^\pm H/A$&$tb, t\bar t$&\multicolumn{2}{c|}{$tb, t\bar t$;\ $tb,b\bar b$}&$tb, t\bar t$;\ $tb,\tau^+\tau^-$; \\ & & & \multicolumn{2}{c|}{}& $\tau\nu_\tau, t\bar t$;\ \ $\tau\nu_\tau, \tau^+\tau^-$ \\ \hline \multirow{3}{*}{large $\tan\beta>10$}&$H^+H^-$& {$t\bar b, \bar t b$}& $tb, t b(\tau\nu_\tau)$&{$t\bar b, \bar t b$}&$\tau^+\nu_\tau, \tau^- \nu_\tau$\\ &$HA/HH/AA$&$t\bar t, t\bar t$&$b\bar{b},b\bar{b}(\tau^+\tau^-)$& {$b\bar b, b\bar b$}&$\tau^+\tau^-, \tau^+ \tau^-$\\ &$H^\pm H/A$&$tb, t\bar t$&$tb(\tau\nu_\tau),b\bar b(\tau^+\tau^-)$& $tb,b\bar b$&$\tau^\pm \nu_\tau,\tau^+ \tau^- $\\ \hline \end{tabular} \caption{Leading signal channels of Higgs pair production for the various 2HDMs in different regions of small, intermediate and large $\tan\beta$. Channels in parentheses are the sub-leading channels. } \label{tab:pair} \end{table} The reach of heavy Higgs pair production via annihilation at a muon collider is also summarized in~\autoref{fig:95reach}, together with a comparison to the hadron collider reach for the Type-II 2HDM. \begin{figure}[!tb] \centering \includegraphics[width=0.8\textwidth]{figure/Exclusion_Ann.pdf} \caption{95\% C.L. exclusion contours at a muon collider with center-of-mass energy $\sqrt{s}=14$ (dashed curves) and 30 (dotted curves) TeV for different types of 2HDM, from pair production channels with the annihilation contribution only. For the Type-II 2HDM, the 95\% C.L.
exclusion limits from the High-Luminosity Large Hadron Collider (HL-LHC) with 3 ab$^{-1}$ as well as the 100 TeV $pp$ collider with 30 ab$^{-1}$ are also shown (taken from Ref.~\cite{Craig:2016ygr}).} \label{fig:95reach} \end{figure} \section{Higgs boson associated production with a pair of heavy fermions} \begin{figure}[!thb] \centering \includegraphics[width=0.8\textwidth]{figure/xsec_Ecm_QQH_ANN_VBF.pdf} \caption{Cross sections versus the c.m.~energy $\sqrt{s}$ for single heavy Higgs production in association with a pair of fermions via annihilation (left) and VBF (right) for $t_\beta=1$. Acceptance cuts of $p_{T}^f>50$ GeV and $10^\circ<\theta_f<170^\circ$ are imposed on all outgoing fermions. A veto cut of $0.8m_\Phi<m_{f f^\prime}<1.2m_\Phi$ is applied to the associated fermions to remove contributions from resonant Higgs decays. The vertical axis on the right shows the corresponding event yields for a 10 ab$^{-1}$ integrated luminosity.} \label{fig:com_ffH} \end{figure} Heavy Higgs bosons can also be abundantly produced in association with a pair of heavy fermions at a muon collider. The cross sections of such processes via annihilation or VBF for $t_\beta = 1$ and $\cos({\beta-\alpha})=0$ are shown in~\autoref{fig:com_ffH}. Unlike the pair production case, the production cross sections for a single heavy Higgs boson produced in association with a pair of heavy fermions depend sensitively on $\tan\beta$. Combining these with the different decay channels of the heavy scalar, we can again try to distinguish the different types of 2HDM. In~\autoref{tab:fermion} we summarize the leading signal channels of Higgs production in association with fermions for the four types of 2HDMs, in different regimes of $\tan\beta$. Several observations can be made: \begin{itemize} \item In the small $\tan\beta<5$ region, all six production channels have sizable production cross sections. However, it is hard to distinguish the different types of 2HDMs, since they all lead to the same final states. \item In the large $\tan\beta>10$ region, all the production channels for Type-I are suppressed, while Type-II/F have sizable production in the $tbH^\pm$, $bbH^\pm$, $bbH/A$ and $tbH/A$ channels. Type-II and Type-F can be further separated by studying the sub-dominant decay channels $H^\pm \rightarrow \tau \nu_\tau$ and $H/A \rightarrow \tau^+\tau^-$ in Type-II. The same Type-II final states can also be obtained via $\tau\nu_\tau H^\pm$, $\tau^+\tau^- H^\pm$, $\tau^+\tau^- H/A$ and $\tau\nu_\tau H/A$ production. \item The intermediate range of $\tan\beta$ is the most difficult region for all types of 2HDMs, since the top Yukawa coupling is reduced while the bottom Yukawa coupling is not yet large enough to compensate, resulting in a rather low signal production rate. A rich set of final states, however, is available, given the various competing decay modes of $H^\pm$ and $H/A$. \item At very large values of $\tan\beta>50$, the tau-associated productions $\tau\nu_\tau H^\pm$, $\tau^+\tau^- H^\pm$, $\tau^+\tau^- H/A$ and $\tau\nu_\tau H/A$ would be sizable for Type-L.
\end{itemize} \begin{table}[tb] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & production &Type-I & Type-II & Type-F & Type-L \\ \hline \multirow{6}{*}{small $\tan\beta<5$} & $tbH^\pm$ & \multicolumn{4}{c|}{$tb, tb$} \\ & $t\bar{t}H^\pm$ & \multicolumn{4}{c|}{$t\bar{t}, tb$} \\ & $b\bar{b}H^\pm$ & \multicolumn{4}{c|}{$b\bar{b}, tb$} \\ &$t\bar{t}H/A$ & \multicolumn{4}{c|}{$t\bar t, t \bar{t}$}\\ &$b\bar{b}H/A$ & \multicolumn{4}{c|}{$b\bar b, t \bar{t}$}\\ &$tbH/A$ & \multicolumn{4}{c|}{$tb, t \bar{t}$}\\ \hline \multirow{6}{*}{intermediate $\tan\beta$} &$tbH^\pm$ &\multicolumn{3}{c|}{$tb,tb$}&$tb,tb$; $tb,\tau\nu_\tau$ \\ \cline{3-6} &$t\bar{t}H^\pm$ &\multicolumn{3}{c|}{$t\bar{t},tb$}&$t\bar{t},tb$; $t\bar{t},\tau\nu_\tau$ \\ \cline{3-6} &$b\bar{b}H^\pm$ &\multicolumn{3}{c|}{$b\bar{b},tb$}&$b\bar{b},tb$; $b\bar{b},\tau\nu_\tau$ \\ \cline{3-6} &$t\bar{t}H/A$& $t\bar{t}, t\bar{t}$&\multicolumn{2}{c|}{$t\bar{t},t\bar{t}$; $t\bar{t}, b\bar{b}$} &$t\bar{t}, t\bar{t}$; $t\bar{t},\tau^+\tau^-$ \\ &$b\bar{b}H/A$& $b\bar{b}, t\bar{t}$&\multicolumn{2}{c|}{$b\bar{b}, t\bar{t}$; $b\bar{b}, b\bar{b}$} &$b\bar{b}, t\bar{t}$; $b\bar{b},\tau^+\tau^-$\\ &$tbH/A$& $tb,t\bar{t}$&\multicolumn{2}{c|}{$tb,t\bar{t}$; $tb,b\bar{b}$} &$tb,t\bar{t}$; $tb,\tau^+\tau^-$ \\ \hline \multirow{4}{*}{large $\tan\beta>10$}& $tbH^\pm$ & $-$ & {$tb,tb (\tau\nu_\tau)$} & {$tb,tb$} & $-$\\ \cline{4-5} & $bbH^\pm$ &$-$ & {$bb,tb (\tau\nu_\tau)$} & {$bb,tb$} & $-$\\ \cline{4-5} &$b\bar{b}H/A$&$-$ & {$b\bar{b}, b\bar{b} (\tau^+\tau^-)$}&{$b\bar{b}, b\bar{b}$}& $-$\\ \cline{4-5} &$t\bar{b}H/A$&$-$ & {$t\bar{b}, b\bar{b} (\tau^+\tau^-)$}&{$t\bar{b}, b\bar{b}$}& $-$\\ \hline \multirow{4}{*}{very large $\tan\beta>50$} &$\tau\nu_\tau H^\pm$& \multicolumn{3}{c|}{$-$}& $\tau\nu_\tau, \tau\nu_\tau$\\ &$\tau^+\tau^- H^\pm$& \multicolumn{3}{c|}{$-$}& $\tau^+\tau^-, \tau\nu_\tau$\\ &$\tau^+\tau^- H/A$& \multicolumn{3}{c|}{$-$}& $\tau^+\tau^-, \tau^+\tau^-$\\ &$\tau\nu_\tau H/A$& \multicolumn{3}{c|}{$-$}& $\tau \nu_\tau, \tau^+\tau^-$\\ \hline \end{tabular} \caption{Leading signal channels of single Higgs production in association with a pair of fermions for the various 2HDMs in different regions of small, intermediate and large $\tan\beta$. Channels in parentheses are the sub-leading channels. } \label{tab:fermion} \end{table} \section{Radiative return} While the cross sections for heavy Higgs pair production are unsuppressed in the alignment limit, they have a kinematic threshold cutoff at $m_\Phi\sim \sqrt{s}/2$. Resonant production of a single heavy Higgs boson may further extend the coverage to about $m_\Phi\sim \sqrt{s}$, as long as the coupling to $\mu^+\mu^-$ is large enough. The drawback of resonant production is that the collider energy would have to be tuned close to the mass of the heavy Higgs, which is less feasible at a future muon collider. A promising mechanism is to take advantage of initial state radiation (ISR), which reduces the colliding energy to a lower value suitable for resonant production, a mechanism dubbed the ``radiative return''~\cite{Chakrabarty:2014pja}. The cross section of the ``radiative return'' process is calculated in two ways: (1) $\mu^+\mu^-\to \gamma H$ and (2) $\mu^+\mu^-\to H$ with the ISR spectrum. The results are given in~\autoref{fig:gammaH}, where the left panel shows the dependence on the c.m.~energy while the right panel shows the $\tan\beta$ dependence. We can see that the cross section of this process increases as the heavy Higgs mass approaches the collider c.m.~energy.
On the other hand, although the cross section is much smaller than those of the production channels of the previous sections at moderate $\tan\beta$, it can be greatly enhanced at large (small) $\tan\beta$ for Type-II/L (Type-I/F). It could even be the dominant production mode for the heavy Higgs in the large $\tan\beta$ region of Type-L, when pair production is kinematically forbidden and quark-associated production is suppressed. \section{Executive summary} Motivated by the recent interest in future high-energy muon colliders, we have explored the physics reach of heavy Higgs boson studies in 2HDMs at such facilities. We point out that the pair production of the non-SM Higgs bosons via the universal gauge interactions is the dominant mechanism once above the kinematic threshold. In contrast, single Higgs boson production associated with a pair of heavy fermions is important in the parameter region with enhanced Yukawa couplings. As such, $\mu^+\mu^-$ annihilation channels and Vector Boson Fusion processes are complementary for discovery and detailed property studies. The radiative return mechanism in the $s$-channel production can extend the mass coverage close to the collider c.m.~energy. The different types of 2HDMs can also be distinguished for moderate and large values of $\tan\beta$. \begin{figure}[!tb] \centering \includegraphics[width=0.8\textwidth]{figure/xsec_Ecm_tanb_gammaH.pdf} \caption{Cross sections for single heavy Higgs $H$ production through the radiative return. The left panel shows $m_H=1$, 2 and 15 TeV at $\tan\beta=1$, versus the c.m.~energy $\sqrt s$, with the solid curves for the convoluted ISR spectrum and the dashed curves for single photon radiation $\mu^+\mu^- \rightarrow H \gamma$ with $10^\circ<\theta_\gamma<170^\circ$. The right panel shows the $\tan\beta$ dependence of the cross section for $\sqrt{s}=14$ TeV and $m_H=12$ TeV. The vertical axis on the right shows the corresponding event yields for a 10 ab$^{-1}$ integrated luminosity.} \label{fig:gammaH} \end{figure} \bibliographystyle{JHEP}
\section{Introduction} The complete simply connected K\"ahler manifolds of nonzero constant holomorphic curvature are the complex space forms $\C\PP^n$ and $\C{\mathrm H}^n$. Takagi \cite{takagi1975} for $\C\PP^n$, and Montiel \cite{montiel} for $\C{\mathrm H}^n$, catalogued a specific list of real hypersurfaces, which may be characterized as the homogeneous Hopf hypersurfaces. Other characterizations of these hypersurfaces have been derived over the years, both in terms of extrinsic information (such as properties of the shape operator) and intrinsic information (such as properties of the curvature tensor). In both cases, the interaction of these geometric objects with the complex structure has played an important role. Occurring as a real hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$ places significant restrictions on the geometry of a Riemannian manifold $M$ and on the way it is immersed. For example, it is known that such an $M$ cannot be Einstein (an intrinsic condition) or umbilic (an extrinsic condition). In fact, neither the Ricci tensor nor the shape operator can be parallel. Nevertheless, elements of the lists of Takagi and Montiel enjoy many nice properties, and geometers have been successful in characterizing them in terms of these properties. Recently, the {\it structure Jacobi operator} has been an object of study, and various non-existence and classification results are now known for $n \ge 3$. Unfortunately, the methods of proof used in establishing these results do not carry over to the case $n=2$. In this paper, we obtain corresponding results for $\C\PP^2$ and $\C{\mathrm H}^2$ using the method of moving frames, along with the theory of exterior differential systems. In what follows, all manifolds are assumed connected and all manifolds and maps are assumed smooth $(C^\infty)$ unless stated otherwise. Basic notation and historical information for hypersurfaces in complex space forms may be found in \cite{nrsurvey}. For more on moving frames and exterior differential systems, see the monograph \cite{BCG3} or the textbook \cite{cfb}. \subsection{Hypersurfaces in Complex Space Forms} Throughout this paper, we will take the holomorphic sectional curvature of the complex space form in question to be $4c$. The curvature operator $\widetilde \RR$ of the space form satisfies \begin{equation}\label{ambientcurvature} \widetilde \RR(X,Y) = c (X \wedge Y + JX \wedge JY + 2\langle X, J Y\rangle J) \end{equation} for tangent vectors $X$ and $Y$ (cf. Theorem 1.1 in \cite{nrsurvey}), where $X\wedge Y$ denotes the skew-adjoint operator defined by $$(X \wedge Y) Z = \langle Y, Z\rangle X - \langle X, Z\rangle Y.$$ We will denote by $r$ the positive number such that $c = \pm 1/r^2$. This is the same convention as used in (\cite{nrsurvey}, p. 237). A real hypersurface $M$ in $\C\PP^n$ or $\C{\mathrm H}^n$ inherits two structures from the ambient space. First, given a unit normal $\xi$, the {\em structure vector field} $W$ on $M$ is defined so that $$\JJ W = \xi, \qquad W \in TM,$$ where $\JJ$ is the complex structure.
This gives an orthogonal splitting of the tangent space $$TM = \operatorname{span} \{W\} \oplus W^\perp.$$ Second, on the tangent space we define a linear operator $\varphi$ which is the complex structure $\JJ$ followed by projection onto $TM$: $$\varphi X = \JJ X - \langle X,W\rangle \xi, \qquad \varphi : TM \to TM.$$ Recall that, for a tangent vector field $V$ on a Riemannian manifold, the {\em Jacobi operator} $R_V$ is a tensor field of type $(1,1)$ satisfying $$R_V(X) = \RR(X,V)V,$$ where $\RR$ denotes the Riemannian curvature tensor of type $(1,3)$. Note that, because of the symmetries of the curvature tensor, $R_V$ is self-adjoint and $R_V V =0$. For a real hypersurface in a complex space form in particular, with $V=W$ (the structure vector), $R_W$ is called the {\em structure Jacobi operator}. In this paper, we will characterize certain hypersurfaces in $\C\PP^2$ and $\C{\mathrm H}^2$ in terms of the structure Jacobi operator. Some of the results we will state involve the notion of {\em Hopf hypersurfaces}. A hypersurface $M$ in a complex space form is said to be a Hopf hypersurface if the structure vector $W$ is a principal vector (i.e., $AW = \alpha W$, where $A$ is the shape operator). It is a non-obvious fact (proved by Y. Maeda \cite{YMaeda} for $\C\PP^n$ and by Ki and Suh \cite{KiSuh} for $\C{\mathrm H}^n$) that the principal curvature $\alpha$ is (locally) constant. We refer to $\alpha$ as the {\it Hopf principal curvature}, following Martins \cite{martins}. For an arbitrary oriented hypersurface in a complex space form, we define the function $$\alpha = \langle A\,W, W\rangle. $$ \noindent Of course, $\alpha$ need not be constant in general. We also recall the notion of {\em pseudo-Einstein hypersurfaces}. A real hypersurface $M$ in a complex space form is said to be pseudo-Einstein if there are constants $\rho$ and $\sigma$ such that the Ricci (1,1)-tensor $S$ of $M$ satisfies $$S X = \rho X + \sigma \langle X,W\rangle W$$ for all tangent vectors $X$. \subsection{Summary of results} We summarize results of Perez and collaborators on hypersurfaces satisfying conditions involving the structure Jacobi operator: \begin{theorem}[Ortega-Perez-Santos \cite{Ortega2006}] \label{OPStheorem} Let $M^{2n-1}$, where $n \ge 3$, be a real hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$. Then the structure Jacobi operator $R_W$ cannot be parallel. \end{theorem} \begin{theorem}[Perez-Santos \cite{Perez2005a}]\label{LieParalleltheorem} Let $M^{2n-1}$, where $n \ge 3$, be a real hypersurface in $\C\PP^n$. Then the Lie derivative ${\mathcal L}_V R_W$ of the structure Jacobi operator cannot vanish for all tangent vectors $V$. \end{theorem} Weakening the hypothesis of Theorem \ref{LieParalleltheorem}, Perez et al. \cite{Perez2005b} were able to prove the following. \begin{theorem}[Perez-Santos-Suh]\label{PSSthm} Let $M^{2n-1}$, where $n \ge 3$, be a real hypersurface in $\C\PP^n$. If ${\mathcal L}_W R_W = 0$, then $M$ is a Hopf hypersurface. If the Hopf principal curvature $\alpha$ is nonzero, then $M$ is locally congruent to a geodesic sphere or a tube over a totally geodesic $\C\PP^k$, where $ 0 < k < n-1.$ \end{theorem} In \S\ref{first} we extend Theorem \ref{OPStheorem} to the case $n=2$, while at the end of \S\ref{LWRW} we extend Theorem \ref{LieParalleltheorem} to the case $n=2$ for both $\C\PP^2$ and $\C{\mathrm H}^2$. We find that the analogue of Theorem \ref{PSSthm} for $n=2$ is essentially the same, and is valid for $\C{\mathrm H}^2$ as well as $\C\PP^2$.
Specifically, in \S \ref{LWRW}, we prove \begin{theorem} \label{LWRWtheorem} Let $M^3$ be a real hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$. Then the identity ${\mathcal L}_W R_W =0 $ is satisfied if and only if $M$ is a pseudo-Einstein hypersurface. \end{theorem} It is not immediately obvious that Theorem \ref{LWRWtheorem} is, in fact, the extension of Theorem \ref{PSSthm} to $\C\PP^2$ and to $\C{\mathrm H}^2$. The analogue of Theorem \ref{PSSthm} for $n=2$ would say that a hypersurface $M^3$ in $\C\PP^2$ with ${\mathcal L}_W R_W =0 $ must be an open subset of a geodesic sphere or a Hopf hypersurface with $\alpha = 0$. However, the classification of pseudo-Einstein hypersurfaces in $\C\PP^2$ by Kim and Ryan \cite{kimryan2} yields exactly the same list of hypersurfaces. The classification of pseudo-Einstein hypersurfaces in $\C{\mathrm H}^2$ by Ivey and Ryan \cite{IveyRyan} yields an analogous list -- open subsets of horospheres, geodesic spheres, tubes over $\C{\mathrm H}^1$, and Hopf hypersurfaces with $\alpha = 0$. It is not hard to check that every Hopf hypersurface with $\alpha = 0$ (in $\C{\mathrm H}^n$ as well as in $\C\PP^n$) for $n \ge 2$ satisfies ${\mathcal L}_W R_W = 0$. The structure theory for Hopf hypersurfaces with $\alpha = 0$ is described in \cite{cecilryan, IveyRyan, kimryan2, martins}. Note that such hypersurfaces need not be pseudo-Einstein when $n \ge 3$. On the other hand, there are some pseudo-Einstein hypersurfaces in $\C\PP^n$, where $n \ge 3$, that do not satisfy ${\mathcal L}_W R_W =0 $. Thus one cannot restate Theorem \ref{PSSthm} in terms of the pseudo-Einstein condition. Finally, we observe that the condition considered in Theorem \ref{LieParalleltheorem} is actually quite strong. In \S \ref{LieParallelSection} we provide a new proof of this theorem that is also valid for $\C{\mathrm H}^n$. \section{Basic Equations} \label{basic} In this and subsequent sections, we follow the notation and terminology of \cite{nrsurvey}: $M^{2n-1}$ will be a hypersurface in a complex space form $\widetilde M$ (either $\C\PP^n$ or $\C{\mathrm H}^n$) having constant holomorphic sectional curvature $4c\ne 0$. The structures $\xi$, $W$, and $\varphi$ are as defined in the Introduction. The $(2n-2)$-dimensional distribution $W^\perp$ is called the {\it holomorphic distribution}. The operator $\varphi$ annihilates $W$ and acts as a complex structure on $W^\perp$. The shape operator $A$ is defined by $$A X = -\widetilde\nabla_X \xi$$ where $\widetilde\nabla$ is the Levi-Civita connection of the ambient space. The Gauss equation expresses the curvature operator of $M$ in terms of $A$ and $\varphi$, as follows: \begin{equation}\label{gausseq} \RR(X,Y)=AX\& AY+c\(X\& Y+\varphi X\& \varphi Y +2 \<X,\varphi Y\> \varphi\). \end{equation} \noindent In addition, it is easy to show (see \cite{nrsurvey}, p. 239) that \begin{equation} \label{nablaW} \nabla_X W=\varphi A X, \end{equation} where $\nabla$ is the Levi-Civita connection of the hypersurface $M$. Consider now the case $n=2$, so that $M^3$ is a hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$. Suppose that there is a point $p$ (and hence an open neighborhood of $p$) where $AW \ne \alpha W$. Then there is a positive function $\beta$ and a unit vector field $X \in W^\perp$ such that $$AW = \alpha W + \beta X.$$ Let $Y = \varphi X$.
Then there are smooth functions $\lambda$, $\mu$, and $\nu$ defined near $p$ such that with respect to the orthonormal frame $(W,X,Y)$, \begin{equation} A = \begin {pmatrix} \alpha&\beta& 0 \\ \beta &\lambda & \mu \\ 0&\mu&\nu \\ \end {pmatrix}\label{shapematrix}. \end{equation} A routine computation, using the Gauss equation \eqref{gausseq}, yields \begin{equation} R_W = \begin {pmatrix}\label{jacobimatrix} 0&0& 0 \\ 0 &\alpha\lambda+c-\beta^2 & \alpha\mu \\ 0&\alpha\mu&\alpha\nu+c \\ \end {pmatrix}. \end{equation} Consider now a point where $AW = \alpha W$. Let $X$ be a unit principal vector in $W^\perp$ and let $Y = \varphi X$. Then there are numbers $\alpha$, $\lambda$ and $\nu$ such that equations \eqref{shapematrix} and \eqref{jacobimatrix} still hold at this point, but with $\beta = \mu =0$. In this connection, we recall the following useful fact (\cite{nrsurvey}, p. 246). \begin{prop}\label{hopfpc} Let $M^{2n-1}$, where $n \ge 2$, be a Hopf hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$ with Hopf principal curvature $\alpha$. If $X$ is a unit vector in $W^\perp$ such that $AX = \lambda X$ and $A \varphi X = \nu \varphi X$, then \begin{equation} \lambda \nu =\frac{\lambda + \nu}{2}\ \alpha +c. \end{equation} \end{prop} \section{Parallelism of $R_W$}\label{first} \subsection{The condition $\nabla R_W = 0$.}\label{parallel} We first show that this condition implies $R_W = 0$. \begin{prop}\label{parallelprop} Let $M^{2n-1}$ be a hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$, where $n \ge 2$. If $\nabla R_W = 0$ on $M$, then $R_W = 0$. \end{prop} \begin{proof} Since $R_W$ is parallel, every curvature operator commutes with $R_W$. Then for any tangent vector $V$, $$ 0 = \RR(V, W) R_W W = R_W \RR(V, W) W = R_W^2 V,$$ and thus $R_W^2=0$. So $R_W$, being self-adjoint, must also vanish. \end{proof} \subsection{The condition $R_W = 0$} \label{RWzero} \begin{prop} \label{nonvanish} There are no hypersurfaces in $\C\PP^2$ or $\C{\mathrm H}^2$ such that the structure Jacobi operator $R_W$ vanishes identically. \end{prop} \begin{proof} We use the setup from \S \ref{basic} with $n=2$. We first examine the possibility of a Hopf hypersurface with $R_W = 0$. We see from \eqref{jacobimatrix} with $\beta=0$ that $\alpha \lambda +c =\alpha \nu + c = 0$, so that $\alpha \ne 0$ and $\nu = \lambda \ne 0$. However, in view of Proposition \ref{hopfpc}, we have $ 0 = \alpha \lambda + c = \lambda^2$, which is a contradiction. The non-Hopf case is handled by the following proposition, which follows directly from \eqref{shapematrix} and \eqref{jacobimatrix}. Then Lemma \ref{parallellemma} completes our proof. \end{proof} \begin{prop} \label{parallelconditions} Suppose that $M^3$ is a non-Hopf hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$ satisfying $R_W=0.$ Then, in a neighborhood of some point $p$, we have (using the basic setup of \S \ref{basic}) \begin{itemize} \item $\beta$ and $\alpha$ are nonzero; \item $\mu = 0$; \item $\beta^2 = \alpha\lambda+c$; \item $\alpha\nu + c =0.$ \end{itemize} Conversely, every hypersurface satisfying these conditions will have $R_W = 0.$ \end{prop} \begin{lemma}\label{parallellemma} There does not exist a hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$ satisfying the conditions of Proposition \ref{parallelconditions}. \end{lemma} We prove this lemma in \S \ref{movingframes} using exterior differential systems. \section{Lie Parallelism of $R_W$}\label{LWRW} We begin by deriving a necessary condition for a hypersurface to satisfy ${\mathcal L}_W R_W = 0$.
\begin{prop}\label{necessary} For any hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$, where $n \ge 2$, satisfying ${\mathcal L}_W R_W = 0$, we must have \begin{equation} [R_W, [\varphi, A]] = 0. \end{equation} \end{prop} \begin{proof} \begin{align*}\label{uglyv} ({\mathcal L}_W R_W)V &= {\mathcal L}_W (R_W V) - R_W({\mathcal L}_W V)\\ &= \nabla_W(R_W V) - \nabla_{R_W V} W -R_W (\nabla_W V) + R_W (\nabla_V W)\\ &= (\nabla_W R_W) V - \varphi A (R_W V) + R_W (\varphi A V) \end{align*} for all tangent vectors $V$. (Here we have used (\ref{nablaW})). Thus ${\mathcal L}_W R_W = 0$ if and only if $\nabla_W R_W = -[R_W, \varphi A]$. Using the fact that $\nabla_W R_W$ is self-adjoint, we see that ${\mathcal L}_W R_W = 0$ implies that $$(R_W \varphi A - \varphi A R_W)^t = (R_W \varphi A - \varphi A R_W),$$ which, once we use the fact that $A, R_W$ are self-adjoint while $\varphi$ is skew-adjoint, reduces to the desired identity. \end{proof} \subsection{The non-Hopf case}\label{LWRWnonHopf} \begin{prop} \label{LWRWprop} Suppose that $M^3$ is a non-Hopf hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$ satisfying ${\mathcal L}_WR_W=0.$ Then, in a neighborhood of some point $p$, we have (using the basic setup of \S \ref{basic}) \begin{itemize} \item $\beta$ and $\alpha$ are nonzero; \item $\mu = 0$; \item $\lambda = \nu$; \item $\alpha\nu + c =0$ \end{itemize} \end{prop} \begin{proof} By Proposition \ref{necessary}, we get $R_W(\varphi A - A \varphi)W = 0$, which implies that $R_W \varphi A W = 0$. In the setup of \S \ref{basic} with $\beta > 0$, this gives $R_W Y = 0.$ From equation \eqref{jacobimatrix}, we get $\alpha \mu=0$ and $\alpha \nu + c = 0$; the latter guarantees that $\alpha \ne 0$, and hence $\mu = 0$. Following the same procedure with $X$, we get $R_W (\varphi A - A \varphi)X = R_W (\lambda - \nu)Y = 0$. Therefore, $(\varphi A - A \varphi)R_W X = 0$, which reduces to $(\alpha \lambda + c - \beta^2)(\lambda - \nu)= 0$. If $\lambda \ne \nu$ at some point, then $\alpha \lambda + c - \beta^2$ vanishes in a neighborhood of this point and $R_W = 0$ there. This contradicts Proposition \ref{nonvanish}, so we must conclude that $\lambda = \nu$ and that in a neighborhood of $p$, we have \begin{equation} A = \begin {pmatrix} \alpha&\beta& 0 \\ \beta &-\frac{c}{\alpha} & 0 \\ 0&0&-\frac{c}{\alpha} \\ \end {pmatrix}\label{LWRWshapematrix} \end{equation} and \begin{equation} R_W = \begin {pmatrix}\label{LWRWjacobimatrix} 0&0& 0 \\ 0 &-\beta^2 & 0 \\ 0&0&0 \\ \end {pmatrix}. \end{equation} \end{proof} However, the situation described in Proposition \ref{LWRWprop} cannot, in fact, occur. \begin{lemma}\label{LWRWlemma} There does not exist a hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$ satisfying the conditions listed in Proposition \ref{LWRWprop}. \end{lemma} We prove this in \S \ref{movingframes} using exterior differential systems. Thus, we have \begin{prop} \label{LWRWmustbeHopf} Let $M^3$ be a real hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$ such that ${\mathcal L}_W R_W = 0.$ Then $M$ must be a Hopf hypersurface. \end{prop} We classify such hypersurfaces in the next section. \subsection{The Hopf case}\label{LWRWHopf} Now consider a Hopf hypersurface $M^3$ in $\C\PP^2$ or $\C{\mathrm H}^2$. At any point of $M$, let $X$ be a unit principal vector in $W^\perp$ and let $Y = \varphi X$.
Then, with respect to the frame $(W, X, Y)$, we have \begin{equation} A = \begin {pmatrix} \alpha&0& 0 \\ 0&\lambda & 0 \\ 0&0&\nu \\ \end {pmatrix}\label{Hopfshapematrix} \end{equation} and \begin{equation} R_W = \begin {pmatrix}\label{Hopfjacobimatrix} 0&0& 0 \\ 0 &\alpha\lambda+c & 0 \\ 0&0&\alpha\nu+c \\ \end {pmatrix}. \end{equation} \noindent By a straightforward calculation, we obtain \begin{equation} \begin{aligned}\label{LWRWHopfcondition} [R_W, [\varphi, A]] W &= 0,\\ [R_W, [\varphi, A]] X &= -\alpha(\lambda - \nu)^2 Y,\\ [R_W, [\varphi, A]] Y &= \alpha(\lambda - \nu)^2 X. \end{aligned} \end{equation} We are now ready to prove the following proposition. \begin{prop} \label{LWRWHopfprop} Let $M^3$ be a Hopf hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$. Then the identity ${\mathcal L}_W R_W =0 $ is satisfied if and only if at each point of $M$, one of the following holds: \begin{itemize}\label{pseudoE} \item $\alpha = 0$, $\lambda \ne \nu$ and $\lambda \nu = c$; \item $\alpha = 0$, $\lambda = \nu$, and $\lambda^2 = c$; \item $\alpha^2+4c = 0$ and $\lambda=\nu = \frac{\alpha}{2}$; or \item $\alpha\ne 0$, $\alpha^2 +4c >0$, $\lambda = \nu$, and $\lambda^2 = \alpha\lambda + c$. \end{itemize} \end{prop} \begin{proof} The necessity of these conditions follows immediately from (\ref{LWRWHopfcondition}), Proposition \ref{necessary} and Proposition \ref{hopfpc}. Now suppose that these conditions are satisfied. If $\alpha = 0$, we see that $R_W V = c V$ for all $V \in W^\perp$. If $\alpha^2 + 4c = 0$, then $\lambda=\nu=\alpha/2$ and $\alpha\lambda+c = -c$, so that $R_W V = -c V$ for all $V \in W^\perp$. In the remaining case, $R_W V = \lambda^2 V$ for all $V \in W^\perp$. In each case, there is a nonzero constant $k$ such that the identity $R_W V = k V$ holds globally for all $V \in W^\perp$. Then for any vector field $V \in W^\perp$, we have, using (\ref{nablaW}), \begin{equation} \begin{aligned} ({\mathcal L}_W R_W)V &= {\mathcal L}_W (k V) - k \left({\mathcal L}_W V - \<{\mathcal L}_W V, W\> W\right) \\ &= k( \<\nabla_W V, W \> -\<\nabla_V W, W\>) W \\ &= -k \< V, \nabla_W W \> W = -k \<V, \varphi A W\> W = 0. \end{aligned} \end{equation} Since $({\mathcal L}_W R_W)W=0$ automatically, we have ${\mathcal L}_W R_W = 0$ as required. \end{proof} Note that according to Propositions 2.13 and 2.21 of \cite{kimryan2}, the conditions in Proposition \ref{LWRWHopfprop} are precisely the conditions for $M$ to be a pseudo-Einstein hypersurface. Thus we have completed the proof of Theorem \ref{LWRWtheorem}. With a little additional work, we can now prove the analogue of Theorem \ref{LieParalleltheorem} for $n=2$. \begin{theorem} \label{LieParallel2} Let $M^3$ be a real hypersurface in $\C\PP^2$ or $\C{\mathrm H}^2$. Then the Lie derivative ${\mathcal L}_V R_W$ of the structure Jacobi operator cannot vanish for all tangent vectors $V$. \end{theorem} \begin{proof} We suppose that ${\mathcal L}_V R_W$ vanishes for all $V$ and derive a contradiction. First note that we must have ${\mathcal L}_W R_W = 0$. By Proposition \ref{LWRWmustbeHopf}, $M$ must be Hopf. Thus, the classification of Proposition \ref{LWRWHopfprop} can be applied; in particular, as noted in the proof of that proposition, there is a nonzero constant $k$ such that $R_W V = kV$ for all $V \in W^\perp$.
For any unit vector field $V \in W^{\perp}$, and $U = \varphi V$, consider \begin{equation} \begin{aligned} \<({\mathcal L}_V R_W)U, W\> &= \<({\mathcal L}_V (R_W U) - R_W({\mathcal L}_V U)), W\>\\ &= \<(k {\mathcal L}_V U - k ({\mathcal L}_V U - \<{\mathcal L}_V U, W\> W)), W\>\\ &= k \< {\mathcal L}_V U, W \>\\ &= k \<\nabla_V U, W \> - k \<\nabla_U V, W\> \\ &= -k \< U, \varphi A V\> +k \<V, \varphi A U\>. \end{aligned} \end{equation} Now fix a particular point $p$, and suppose that $V$ is principal at $p$, with $AV = \lambda V$. Then $U$ must also be principal at $p$. Writing $AU = \nu U$, we get $$ \<({\mathcal L}_V R_W)U, W\> = -k (\lambda + \nu) $$ at $p$. The right side of this equation is nonzero unless $\lambda = - \nu$. Except possibly for the first case ($\alpha=0$, $\lambda \ne \nu$, $\lambda\nu = c$) in Proposition \ref{LWRWHopfprop}, we have an immediate contradiction. In the remaining case, our argument shows that the principal curvatures sum to zero everywhere (since $p$ was arbitrary). However, $\lambda = -\nu$ locally would give $\lambda^2 = -c$ and force $\lambda$ and $\nu$ to be locally constant. Since the well-known list of Hopf hypersurfaces with constant principal curvatures does not admit this possibility (see Theorem 4.13 of \cite{nrsurvey}), our proof is complete. \end{proof} \section{Lie parallelism for $n \ge 3$}\label{LieParallelSection} The condition of ``Lie parallelism'' (see \cite{Perez2005a}, p. 270) is very strong. In fact, a tensor field of type $(1,1)$ will be Lie parallel if and only if it is a constant multiple of the identity. \begin{lemma}\label{LieParallellemma} Let $T$ be a tensor field of type $(1,1)$ on a manifold $M^n$, where $n \ge 2$. Then the Lie derivative ${\mathcal L}_X T$ vanishes for all vector fields $X$ if and only if $T$ is a constant multiple of the identity. \end{lemma} \begin{proof} Let $X$ and $Y$ be vector fields and $f$ a real-valued function defined on an open set $U \subset M$. Then, it is easy to check that the identity \begin{equation} ({\mathcal L}_{f X} T) Y = f ({\mathcal L}_X T) Y - df(TY) X + df(Y) T X \end{equation} holds. Suppose now that ${\mathcal L}_V T = 0$ for all vector fields $V$. Then \begin{equation} df(T Y) X = df(Y) T X \end{equation} for all $X$, $Y$, $f$. For a suitable choice of $Y$ and $f$, we can assume that $df(Y)$ is nonvanishing on $U$, so that we can write $TX = \tau X$ for the function $\tau= df(TY)/df(Y)$. On one hand, this formula shows that $\tau$ does not depend on $X$; on the other hand, the relation $TX = \tau X$, valid for every $X$, shows that $\tau$ does not depend on the choice of $Y$ and $f$. Therefore, there is a real-valued function $\tau$ such that $T = \tau I$. Finally, for any vector field $V$, we have $$0 = ({\mathcal L}_V T)Y = {\mathcal L}_V (\tau Y) - \tau {\mathcal L}_V Y = d\tau(V) Y $$ so that $\tau$ must be locally constant. Conversely, the same equation shows that if $T$ is a constant multiple of the identity, then ${\mathcal L}_V T = 0$. \end{proof} We are now in a position to prove our theorem. \begin{theorem}\label{NewLieParalleltheorem} Let $M^{2n-1}$, where $n \ge 3$, be a real hypersurface in $\C\PP^n$ or $\C{\mathrm H}^n$. Then the Lie derivative ${\mathcal L}_V R_W$ of the structure Jacobi operator cannot vanish for all tangent vectors $V$. \end{theorem} \begin{proof} Suppose that ${\mathcal L}_V R_W = 0$ for all $V$. Applying the preceding lemma to the $(1,1)$-tensor field $R_W$, we get that $R_W$ is a constant multiple of the identity. Since $R_W W = 0$, we have, in fact, that $R_W = 0$. Our result is now immediate from Theorem \ref{OPStheorem}.
\end{proof} We could proceed similarly in the $n=2$ case, invoking Proposition \ref{parallelprop}. This would provide an alternative proof of Theorem \ref{LieParallel2}. \section{Differential Forms Calculations}\label{movingframes} In this section, we prove Lemmas \ref{parallellemma} and \ref{LWRWlemma} by analyzing the conditions that a moving frame along the hypersurface would have to satisfy, as a section of the orthonormal frame bundle of the relevant complex space form $\widetilde M = \C\PP^2$ or $\C{\mathrm H}^2$. The conditions proposed in the lemmas will imply that the sections are integral submanifolds of certain exterior differential systems on the frame bundle. The generators of these systems are defined in terms of the natural coframing on the frame bundle, which we will briefly review. On the orthonormal frame bundle ${F}_o$ of an $n$-dimensional Riemannian manifold $\widetilde M$, we define the {\em canonical 1-forms} $\omega^i$ and the {\em connection 1-forms} $\omega^i_j$ (where $1\le i,j,k \le n$) by the following properties: if $(e_1,\ldots,e_n)$ is any orthonormal frame defined on an open set $U\subset \widetilde M$, and $f:U \to {F}_o$ is the corresponding local section, then \begin{align} \mathbf v &= (\mathbf v \intprod f^* \omega^k) e_k, \label{omegacanon}\\ \widetilde\nabla_\mathbf v e_j &= (\mathbf v \intprod f^* \omega^k_j ) e_k, \label{omegaconn} \end{align} for any tangent vector $\mathbf v$ at a point in $U$, where $\widetilde\nabla$ denotes the Levi-Civita connection on $\widetilde M$ and we use the summation convention. The connection forms satisfy $\omega^j_i=-\omega^i_j$. The forms $\omega^i$ and $\omega^i_j$ (for $i>j$) together form a basis for the cotangent space of ${F}_o$ at each point. They satisfy the structure equations \begin{align}d\omega^i &= -\omega^i_j \& \omega^j,\\ d\omega^i_j &= -\omega^i_k \& \omega^k_j + \Phi^i_j, \end{align} where the 2-forms $\Phi^i_j$ pull back along any section to give the components of the curvature tensor with respect to the corresponding frame, i.e., $f^* \Phi^i_j(e_k,e_\ell) = \langle e_i, \widetilde \RR(e_k, e_\ell) e_j\rangle$. In our case, $n=4$ and $\widetilde M$ is a complex space form. We will use moving frames that are adapted to the complex structure on $\widetilde M$ in the following way: $$e_4 = \JJ e_1, \qquad e_3 = \JJ e_2.$$ We will refer to these as {\em unitary frames}, and let ${F}_u \subset {F}_o$ be the sub-bundle of such frames. We restrict the canonical and connection forms to ${F}_u$ without change of notation. The structure group of this sub-bundle is the 4-dimensional group $U(2)\subset SO(4)$. Because $\JJ$ is parallel, only the connection forms $\omega^3_2$, $\omega^4_1$, $\omega^4_2$, $\omega^4_3$ are linearly independent, with the remaining forms satisfying the relations $$\omega^2_1 = -\omega^4_3, \qquad \omega^3_1 = \omega^4_2.$$ Using \eqref{ambientcurvature} and the structure equations, we find that the curvature forms on ${F}_u$ satisfy \begin{align*} \Phi^3_2 &= c (4 \omega^3 \& \omega^2 + 2 \omega^4 \& \omega^1), \\ \Phi^4_1 &= c( 4 \omega^4 \& \omega^1 +2\omega^3 \& \omega^2),\\ \Phi^4_2 &= \Phi^3_1 = c(\omega^3 \& \omega^1 + \omega^4 \& \omega^2), \\ \Phi^4_3 &= \Phi^1_2=c (\omega^1 \& \omega^2 + \omega^4 \& \omega^3). \\ \end{align*} Along a real hypersurface $M \subset \widetilde M$, we will use an {\em adapted} moving frame, meaning a unitary frame such that $e_4$ is normal to the hypersurface (and thus $e_1$ is the structure vector).
It follows from \eqref{omegacanon} that $f^*\omega^4 = 0$ and $f^*(\omega^1 \& \omega^2 \& \omega^3)$ is a nonzero 3-form at each point. It also follows from \eqref{omegaconn} that $$f^*\omega^4_i = h_{ij} f^* \omega^j, \qquad 1 \le i,j \le 3,$$ where $h_{ij}$ are functions that give the components of the shape operator of $M$. In particular, working in a neighborhood of a point where $AW \ne \alpha W$, let $W,X,Y$ be the unit vector fields defined in \S\ref{basic}. Then $e_1=W, e_2=X, e_3=Y$ and $e_4=\xi$ define an adapted framing, and the $h_{ij}$ are the entries of the matrix given by \eqref{shapematrix}. We now have all the tools necessary to prove the two lemmas. \begin{proof}[Proof of Lemma \ref{parallellemma}] Again, let $W,X,Y$ be unit vector fields on an open set $U\subset M$, as in \S\ref{basic}, and let $f$ be the adapted moving frame such that $e_1=W, e_2=X, e_3=Y$. Then $f$ immerses $U$ as a three-dimensional submanifold of ${F}_u$ on which $\omega^4=0$ and the $\omega^4_i$ satisfy \begin{equation}\label{greeksetup} \begin{aligned} \omega^4_1 &= \alpha \omega^1 + \beta \omega^2,\\ \omega^4_2 &= \beta \omega^1 + \lambda \omega^2 + \mu \omega^3,\\ \omega^4_3 &= \mu \omega^2 + \nu \omega^3, \end{aligned} \end{equation} for some functions $\alpha,\beta,\lambda,\mu,\nu$ satisfying the conditions in the lemma. Because we assume that $\alpha$ is nowhere vanishing, these conditions can be expressed as $$\alpha, \beta \ne 0, \qquad \lambda = \dfrac{\beta^2-c}{\alpha}, \qquad \mu = 0, \qquad \nu=-\dfrac{c}{\alpha}.$$ Under these conditions, the functions $\alpha$ and $\beta$ completely determine the second fundamental form (and hence, determine the hypersurface up to rigid motion). The proof will proceed by deriving an overdetermined system of differential equations that these functions must satisfy, and showing that no solutions exist satisfying the nonvanishing conditions. Take $(\alpha, \beta)$ as coordinates on $\mathbb R^2$, and let $\Sigma \subset \mathbb R^2$ be the subset where $\alpha \ne 0$ and $\beta\ne 0$. On ${F}_u \times \Sigma$ define the 1-forms $$ \begin{aligned} \theta_0 &= \omega^4, \\ \theta_1 &= \omega^4_1 - \alpha \omega^1 - \beta \omega^2,\\ \theta_2 &=\omega^4_2 - \beta \omega^1 - \dfrac{(\beta^2-c)}{\alpha}\ \omega^2, \\ \theta_3 &=\omega^4_3 + \dfrac{c}{\alpha}\ \omega^3. \end{aligned} $$ Then for any adapted frame $f$ along $M$, the image of the map $p \mapsto (f(p),\alpha(p),\beta(p))$ is a 3-dimensional submanifold in ${F}_u\times \Sigma$ which is an {\em integral} of the Pfaffian exterior differential system generated by $\theta_0, \theta_1, \theta_2, \theta_3$. In other words, all 1-forms in this span pull back to be zero on this submanifold. We will now investigate the set of such submanifolds, satisfying the independence condition $\omega^1 \& \omega^2 \& \omega^3 \ne 0$, which is implied by (\ref{omegacanon}). Along any such submanifold, the exterior derivatives of the $\theta_i$ must also vanish (i.e., they pull back to the submanifold to be zero). Therefore, we will obtain additional differential forms that must vanish along integral manifolds if we compute the derivatives of the 1-form generators modulo the algebraic ideal (under wedge product) generated by those 1-forms.
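To illustrate the procedure on the simplest generator (this is the computation behind the assertion $d\theta_0 \equiv 0$ below): since $\omega^4_4 = 0$, the structure equations give $$d\theta_0 = d\omega^4 = -\omega^4_1 \& \omega^1 - \omega^4_2 \& \omega^2 - \omega^4_3 \& \omega^3,$$ and modulo $\theta_1, \theta_2, \theta_3$ each $\omega^4_i$ may be replaced by the right-hand side of \eqref{greeksetup} (with the values of $\lambda, \mu, \nu$ displayed above). The diagonal terms such as $\alpha\, \omega^1 \& \omega^1$ vanish outright, and the off-diagonal terms cancel in pairs because the coefficient matrix is symmetric while the wedge product of 1-forms is skew; hence $d\theta_0 \equiv 0$ modulo $\theta_0, \theta_1, \theta_2, \theta_3$. The derivatives of the remaining generators are computed in the same way, but leave the nontrivial conditions recorded next.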
In this case, we compute $d\theta_0 \equiv 0$ and \begin{equation}\label{I1struct} \begin{aligned} -d\theta_1 &\equiv \pi_1 \& \omega^1 + \pi_2 \& \omega^2 + \pi_3 \& \omega^3,\\ -d\theta_2 &\equiv \pi_2 \& \omega^1 + \left(\dfrac{2 \beta}{\alpha}\ \pi_2 - \dfrac{(\beta^2-c)}{\alpha^2}\ \pi_1\right) \& \omega^2 +\dfrac{\beta}{\alpha}\ \pi_3 \& \omega^3, \\ -d\theta_3 &\equiv \pi_3 \& \left(\omega^1 +\dfrac{\beta}{\alpha}\omega^2\right) + \dfrac{c}{\alpha^2}\ \pi_1 \& \omega^3 \end{aligned} \mod \theta_0,\theta_1,\theta_2,\theta_3, \end{equation} where \begin{align*} \pi_1 &:= d\alpha + 3\dfrac{\beta(\alpha^2-c)}{\alpha}\ \omega^3,\\ \pi_2 &:= d\beta + \dfrac{(3\alpha^2\beta^2 + c^2 -c\beta^2)}{\alpha^2}\ \omega^3,\\ \pi_3 &:=\beta \omega^3_2 +4\alpha\beta \omega^1 + \dfrac{(4\alpha^2\beta^2-c^2+c\beta^2)}{\alpha^2}\ \omega^2. \end{align*} On any integral submanifold satisfying the independence condition, $\pi_1, \pi_2, \pi_3$ must restrict to be linear combinations of $\omega^1, \omega^2, \omega^3$ at each point. The possibilities for these linear combinations are determined by the requirement that the right-hand sides in \eqref{I1struct} must be zero. In fact, there is only one parameter's worth of possible values for the $\pi$'s, given by \begin{equation}\label{I1pival} \begin{aligned} \pi_1 &= \rho(\alpha \omega^1 + \beta \omega^2),\\ \pi_2 &= \rho\left(\beta \omega^1 + \dfrac{\beta^2+c}{\alpha}\omega^2\right),\\ \pi_3 &= \dfrac{\rho c}{\alpha}\ \omega^3 \end{aligned} \end{equation} in terms of the single parameter $\rho$. In other words, along each submanifold there will be a function $\rho$ such that the above equations hold. (To see why, note that the vanishing of the third line of \eqref{I1struct} implies that $\pi_1,\pi_3$ must be linear combinations of $\omega^3$ and $\alpha \omega^1 + \beta\omega^2$. On the other hand, linearly combining the first two lines to eliminate the $\pi_3 \& \omega^3$ term reveals that $\pi_1,\pi_2$ must be linear combinations of $\omega^1, \omega^2$. Thus, $\pi_1$ must be a multiple of $\alpha \omega^1 + \beta\omega^2$. By substituting this into the right-hand sides of \eqref{I1struct}, we see that this multiple determines the values of $\pi_2$ and $\pi_3$ at any point.) Just as we did with $\alpha$ and $\beta$, we introduce $\rho$ as a new coordinate, and define the following 1-forms on ${F}_u \times \Sigma \times \mathbb R$: \begin{align*} \theta_4 &= \pi_1 - \rho(\alpha \omega^1 + \beta \omega^2),\\ \theta_5 &= \pi_2 -\rho\left(\beta \omega^1 + \dfrac{\beta^2+c}{\alpha}\omega^2\right),\\ \theta_6 &= \pi_3 - \dfrac{\rho c}{\alpha}\ \omega^3. \end{align*} Then for any adapted framing $f$ along $M$ satisfying our assumptions, the image of the map $p \mapsto (f(p),\alpha(p),\beta(p),\rho(p))$ is an integral submanifold of the Pfaffian system defined by the 1-forms $\theta_0, \ldots, \theta_6$. (In technical terms, this system is the {\em prolongation} of the previous one.) As before, we compute the exterior derivatives of these 1-forms modulo themselves. We find that $$d\theta_4 \& \left(\alpha \omega^1 + \beta \omega^2\right)\equiv \dfrac{8 c (\alpha^2-c) \rho}{\alpha}\ \omega^1 \& \omega^2 \& \omega^3$$ modulo $\theta_0, \ldots, \theta_6$, indicating that any integral submanifold satisfying the independence condition must have $\rho(\alpha^2 - c)=0$ at each point. (Recall that the ambient curvature $c$ is nonzero.) If $\rho \ne 0$ at a point on the submanifold, then $\alpha^2 =c$ on an open set about that point. 
However, we compute $$d\left(\alpha \theta_5 - \beta \theta_4 \right) \& \omega^2 \equiv \dfrac{2 c(\beta^2 - 2(\alpha^2-c))\rho}{\alpha}\ \omega^1 \& \omega^2 \& \omega^3,$$ which shows that $\beta$ must vanish on that open set, a contradiction. Therefore, we conclude that $\rho$ must be identically zero on any integral satisfying the independence condition. We restrict the system to the submanifold where $\rho=0$. Then we compute $$ \begin{aligned} d\left(\alpha \theta_5 - \beta \theta_4\right) &\equiv \dfrac{c(2\beta^2+c)(4\alpha^2+\beta^2-c)}{\alpha^2}\ \omega^1 \& \omega^2,\\ d\theta_6 \& \omega^2 &\equiv\dfrac{c(10\alpha^2 \beta^2 - c(4\alpha^2+\beta^2-c))}{\alpha^3} \ \omega^1 \& \omega^2 \& \omega^3. \end{aligned} $$ The first line can vanish only if $2\beta^2 + c = 0$ or $4\alpha^2+\beta^2 = c$. The former is possible only when $c<0$, and substituting $\beta^2 = -c/2$ into the second line gives $$d\theta_6 \& \omega^2 \equiv -\dfrac{3c^2(3\alpha^2 - c/2)}{\alpha^3}\ \omega^1 \& \omega^2 \& \omega^3,$$ whose coefficient is nonzero since $3\alpha^2 - c/2 > 0$ when $c<0$. In the latter case, the vanishing of the second line implies that one of $\alpha$ or $\beta$ must be zero. Either way we have a contradiction, so no hypersurfaces exist satisfying the hypotheses of the lemma. \end{proof} \begin{proof}[Proof of Lemma \ref{LWRWlemma}] Again, let $W,X,Y$ be unit vector fields on an open set $U\subset M$, satisfying the conditions given in \S\ref{basic}, and let $f$ be the adapted moving frame such that $e_1=W, e_2=X, e_3=Y$. Then $f$ immerses $U$ as a 3-dimensional submanifold of $F_u$. We note that $f^* \omega^4 = 0$ and \eqref{greeksetup} hold for functions $\alpha, \beta,\lambda,\mu,\nu$ satisfying the conditions in the lemma, which can be expressed as $$\beta,\lambda \ne 0, \qquad \alpha = -c/\lambda, \qquad \nu=\lambda, \qquad \mu=0.$$ Thus, we set up an exterior differential system ${\mathcal I}$ on ${F}_u \times \Sigma$ (where now $\beta,\lambda$ are the nonzero coordinates on the second factor) generated by 1-forms $$ \begin{aligned} \theta_0 &= \omega^4, \\ \theta_1 &= \omega^4_1 + (c/\lambda) \omega^1 - \beta \omega^2,\\ \theta_2 &=\omega^4_2 - \beta \omega^1 - \lambda \omega^2, \\ \theta_3 &=\omega^4_3 -\lambda \omega^3. \end{aligned} $$ Then for any adapted frame $f$ along $M$, the image of the map $p \mapsto (f(p),\beta(p),\lambda(p))$ will be a 3-dimensional integral submanifold of ${\mathcal I}$ satisfying the usual independence condition. We compute $d\theta_0 \equiv 0$ and \begin{equation}\label{I2struct} \begin{aligned} -d\theta_1 &\equiv \dfrac{c}{\lambda^2}\ \pi_1 \& \omega^1 + \pi_2 \& \omega^2 + \pi_3 \& \omega^3,\\ -d\theta_2 &\equiv \pi_2 \& \omega^1 + \pi_1 \& \omega^2,\\ -d\theta_3 &\equiv \pi_3 \& \omega^1 + \pi_1 \& \omega^3 \end{aligned} \mod \theta_0,\theta_1,\theta_2,\theta_3, \end{equation} where \begin{align*} \pi_1 &:= d\lambda -3\beta\lambda\omega^3,\\ \pi_2 &:= d\beta + (\lambda^2-\beta^2)\ \omega^3,\\ \pi_3 &:=\beta \omega^3_2 -\dfrac{\beta(3\lambda^2+4c)}{\lambda}\ \omega^1 - (\beta^2+\lambda^2)\omega^2. \end{align*} In order for the pullbacks of the right-hand sides in \eqref{I2struct} to vanish, $\pi_1,\pi_2$ and $\pi_3$ must be multiples of $\omega^1, \omega^2, \omega^3$ respectively---and moreover the multiples must all be the same at each point. In other words, there must be a single function $\rho$ such that $\pi_i =\rho \,\omega^i$, $i=1\ldots 3$, at each point. Therefore, we define the prolongation of ${\mathcal I}$ on $F_u \times \Sigma \times \mathbb R$, with $\rho$ as a new coordinate on the last factor, as the Pfaffian system generated by $\theta_0,\ldots, \theta_3$ and the new 1-forms $$ \theta_4 = \pi_1 - \rho\, \omega^1, \quad \theta_5 = \pi_2 -\rho\, \omega^2, \quad \theta_6 = \pi_3 -\rho\, \omega^3.
$$ Now we compute $$d\theta_5 \& \omega^3 + d\theta_6 \& \omega^2 \equiv 24\ \dfrac{c\beta^2}{\lambda}\ \omega^1 \& \omega^2 \& \omega^3$$ modulo $\theta_0,\ldots, \theta_6$. Since $\beta \ne 0$, this shows that no integral submanifold of the prolongation can satisfy the independence condition. Hence no hypersurfaces exist satisfying the hypotheses of the lemma. \end{proof}
\section{Introduction} If theoretical cosmologists are the flyboys of astrophysics, they were flying on fumes in the 1990s. Since the early 1980s inflation and cold dark matter (CDM) have been the dominant theoretical ideas in cosmology. However, a key prediction of inflation, a flat Universe (i.e., $\Omega_0 \equiv \rho_{\rm total}/\rho_{\rm crit} = 1$), was beginning to look untenable. By the late 1990s it was becoming increasingly clear that matter only accounted for 30\% to 40\% of the critical density (see e.g., Turner, 1999). Further, the $\Omega_M =1$, COBE-normalized CDM model was not a very good fit to the data without some embellishment (15\% or so of the dark matter in neutrinos, significant deviation from scale invariance -- called tilt -- or a very low value for the Hubble constant; see e.g., Dodelson \mbox{\it et al.}, 1996). Because of this and their strong belief in inflation, a number of inflationists (see e.g., Turner, Steigman \& Krauss, 1984 and Peebles, 1984) were led to consider seriously the possibility that the missing 60\% or so of the critical density exists in the form of vacuum energy (cosmological constant) or something even more interesting with similar properties (see Sec. 3 below). Since determinations of the matter density take advantage of its enhanced gravity when it clumps (in galaxies, clusters or superclusters), vacuum energy, which is by definition spatially smooth, would not have shown up in the matter inventory. Not only did a cosmological constant solve the ``$\Omega$ problem,'' but $\Lambda$CDM, the flat CDM model with $\Omega_M\sim 0.4$ and $\Omega_\Lambda\sim 0.6$, became the best-fit universe model (Turner, 1991 and 1997b; Krauss \& Turner, 1995; Ostriker \& Steinhardt, 1995; Liddle \mbox{\it et al.}, 1996). In June 1996, at the Critical Dialogues in Cosmology Meeting at Princeton University, the only strike recorded against $\Lambda$CDM came from the early SN Ia results of Perlmutter's group (Perlmutter \mbox{\it et al.}, 1997), which excluded $\Omega_\Lambda > 0.5$ with 95\% confidence. The first indirect experimental hint for something like a cosmological constant came in 1997. Measurements of the anisotropy of the cosmic background radiation (CBR) began to show evidence for the signature of a flat Universe, a peak in the multipole power spectrum at $l=200$. Unless the estimates of the matter density were wildly wrong, this was evidence for a smooth, dark energy component. A universe with $\Omega_\Lambda \sim 0.6$ has a smoking-gun signature: it is speeding up rather than slowing down. In 1998 came the SN Ia evidence that our Universe is speeding up; for some cosmologists this was a great surprise. For many theoretical cosmologists this was the missing piece of the grand puzzle and the confirmation of a prediction. \section{The theoretical case for accelerated expansion} The case for accelerated expansion that existed in January 1998 had three legs: growing evidence that $\Omega_M \sim 0.4$ and not 1; the inflationary prediction of a flat Universe and hints from CBR anisotropy that this was indeed true; and the failure of the simple $\Omega_M =1$ CDM model and the success of $\Lambda$CDM. The tension between measurements of the Hubble constant and age determinations for the oldest stars was also suggestive, though because of the uncertainties, not as compelling. Taken together, they foreshadowed the presence of a cosmological constant (or something similar) and the discovery of accelerated expansion.
To be more precise, Sandage's deceleration parameter is given by \begin{equation} q_0 \equiv -{(\ddot R /R)_0 \over H_0^2} = {1 \over 2}\Omega_0 + {3\over 2} \sum_i \Omega_i w_i \,, \end{equation} where the pressure of component $i$, $p_i \equiv w_i \rho_i$; e.g., for baryons $w_i = 0$, for radiation $w_i = 1/3$, and for vacuum energy $w_X = -1$. For $\Omega_0 = 1$, $\Omega_M =0.4$ and $w_X < -{5\over 9}$, the deceleration parameter is negative. The kind of dark component needed to pull cosmology together implies accelerated expansion. \subsection{Matter/energy inventory: $\Omega_0 =1\pm 0.2$, $\Omega_M=0.4\pm 0.1$} There is a growing consensus that the anisotropy of the CBR offers the best means of determining the curvature of the Universe and thereby $\Omega_0$. This is because the method is intrinsically geometric -- a standard ruler on the last-scattering surface -- and involves straightforward physics at a simpler time (see e.g., Kamionkowski \mbox{\it et al.}, 1994). It works like this. At last scattering baryonic matter (ions and electrons) was still tightly coupled to photons; as the baryons fell into the dark-matter potential wells the pressure of photons acted as a restoring force, and gravity-driven acoustic oscillations resulted. These oscillations can be decomposed into their Fourier modes; Fourier modes with $k\sim l H_0/2$ determine the multipole amplitudes $a_{lm}$ of CBR anisotropy. Last scattering occurs over a short time, making the CBR a snapshot of the Universe at $t_{\rm ls} \sim 300,000\,$yrs. Each mode is ``seen'' in a well-defined phase of its oscillation. (For the density perturbations predicted by inflation, all modes have the same initial phase because all are growing-mode perturbations.) Modes caught at maximum compression or rarefaction lead to the largest temperature anisotropy; this results in a series of acoustic peaks beginning at $l\sim 200$ (see Fig.~\ref{fig:cbr_knox}). The wavelength of the lowest frequency acoustic mode that has reached maximum compression, $\lambda_{\rm max} \sim v_s t_{\rm ls}$, is the standard ruler on the last-scattering surface. Both $\lambda_{\rm max}$ and the distance to the last-scattering surface depend upon $\Omega_0$, and the position of the first peak is $l\simeq 200/\sqrt{\Omega_0}$. This relationship is insensitive to the composition of matter and energy in the Universe. CBR anisotropy measurements, shown in Fig.~\ref{fig:cbr_knox}, now cover three orders of magnitude in multipole and are from more than twenty experiments. COBE is the most precise and covers multipoles $l=2-20$; the other measurements come from balloon-borne, Antarctica-based and ground-based experiments using both low-frequency ($f<100\,$GHz) HEMT receivers and high-frequency ($f>100\,$GHz) bolometers. Taken together, all the measurements are beginning to define the position of the first acoustic peak, at a value that is consistent with a flat Universe. Various analyses of the extant data have been carried out, indicating $\Omega_0 \sim 1\pm 0.2$ (see e.g., Lineweaver, 1998). It is certainly too early to draw definite conclusions or put too much weight in the error estimate. However, a strong case is developing for a flat Universe and more data is on the way (Python V, Viper, MAT, Maxima, Boomerang, CBI, DASI, and others). Ultimately, the issue will be settled by NASA's MAP (launch late 2000) and ESA's Planck (launch 2007) satellites, which will map the entire CBR sky with 30 times the resolution of COBE (around $0.1^\circ$) (see Page and Wilkinson, 1999).
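As a rough numerical illustration of the two relations above (using only the fiducial values already quoted): a flat model with $\Omega_M = 0.4$, $\Omega_X = 0.6$ and $w_X = -1$ has $$q_0 = {1\over 2} + {3\over 2}\,(0.6)(-1) = -0.4,$$ i.e., accelerated expansion, while an open, matter-only model with $\Omega_0 = 0.3$ would place the first acoustic peak at $l \simeq 200/\sqrt{0.3} \approx 365$, far from the observed position near $l\sim 200$.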
\begin{figure} \centerline{\psfig{figure=knox_better.eps,width=3.5in}} \caption{Current CBR anisotropy data, averaged and binned to reduce error bars and visual confusion. The theoretical curve is for the $\Lambda$CDM model with $H_0=65\,{\rm km\, s^{-1}\,Mpc^{-1}}$ and $\Omega_M =0.4$; note the goodness of fit (Figure courtesy of L. Knox). } \label{fig:cbr_knox} \end{figure} Since the pioneering work of Fritz Zwicky and Vera Rubin, it has been known that there is far too little material in the form of stars (and related material) to hold galaxies and clusters together, and thus, that most of the matter in the Universe is dark (see e.g. Trimble, 1987). Weighing the dark matter has been the challenge. At present, I believe that clusters provide the most reliable means of estimating the total matter density. Rich clusters are relatively rare objects -- only about 1 in 10 galaxies is found in a rich cluster -- which formed from density perturbations of (comoving) size around 10\,Mpc. However, because they gather together material from such a large region of space, they can provide a ``fair sample'' of matter in the Universe. Using clusters as such, the precise BBN baryon density can be used to infer the total matter density (White \mbox{\it et al.}, 1993). (Baryons and dark matter need not be well mixed for this method to work provided that the baryonic and total mass are determined over a large enough portion of the cluster.) Most of the baryons in clusters reside in the hot, x-ray emitting intracluster gas and not in the galaxies themselves, and so the problem essentially reduces to determining the gas-to-total mass ratio. The gas mass can be determined by two methods: 1) measuring the x-ray flux from the intracluster gas and 2) mapping the Sunyaev-Zel'dovich CBR distortion caused by CBR photons scattering off hot electrons in the intracluster gas. The total cluster mass can be determined three independent ways: 1) using the motions of cluster galaxies and the virial theorem; 2) assuming that the gas is in hydrostatic equilibrium and using it to infer the underlying mass distribution; and 3) mapping the cluster mass directly by gravitational lensing (Tyson, 1999). Within their uncertainties, and where comparisons can be made, the three methods for determining the total mass agree (see e.g., Tyson, 1999); likewise, the two methods for determining the gas mass are consistent. Mohr \mbox{\it et al.}\ (1998) have compiled the gas-to-total mass ratios determined from x-ray measurements for a sample of 45 clusters; they find $f_{\rm gas} = (0.075\pm 0.002)h^{-3/2}$ (see Fig.~\ref{fig:gas}). Carlstrom (1999), using his S-Z gas measurements and x-ray measurements for the total mass for 27 clusters, finds $f_{\rm gas} =(0.06\pm 0.006)h^{-1}$. (The agreement of these two numbers means that clumping of the gas, which could lead to an overestimate of the gas fraction based upon the x-ray flux, is not a problem.) Invoking the ``fair-sample assumption,'' the mean matter density in the Universe can be inferred: \begin{eqnarray} \Omega_M = \Omega_B/f_{\rm gas} & = & (0.3\pm 0.05)h^{-1/2}\ ({\rm X-ray})\nonumber\\ & = & (0.25\pm 0.04)h^{-1}\ ({\rm S-Z}) \nonumber \\ & = & 0.4\pm 0.1\ ({\rm my\ summary})\,. \end{eqnarray} I believe this to be the most reliable and precise determination of the matter density. It involves few assumptions, most of which have now been tested.
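The arithmetic behind the x-ray entry is worth making explicit. A minimal check, using the BBN baryon density $\Omega_Bh^2 \simeq 0.02$ quoted below, gives $$\Omega_M = {\Omega_B \over f_{\rm gas}} \simeq {0.02\,h^{-2} \over 0.075\,h^{-3/2}} \simeq 0.27\,h^{-1/2} \simeq 0.33 \qquad (h = 0.65),$$ consistent, within the stated uncertainties, with the values quoted above.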
Among the assumptions that have now been tested: the agreement of S-Z and x-ray gas masses implies that gas clumping is not significant; the agreement of x-ray and lensing estimates for the total mass implies that hydrostatic equilibrium is a good assumption; and the gas fraction does not vary significantly with cluster mass. \begin{figure} \centerline{\psfig{figure=cbf2.eps,width=3.5in}} \caption{Cluster gas fraction as a function of cluster gas temperature for a sample of 45 galaxy clusters (Mohr \mbox{\it et al.}, 1998). While there is some indication that the gas fraction decreases with temperature for $T< 5\,$keV, perhaps because these lower-mass clusters lose some of their hot gas, the data indicate that the gas fraction reaches a plateau at high temperatures, $f_{\rm gas} =0.212 \pm 0.006$ for $h=0.5$ (Figure courtesy of Joe Mohr). } \label{fig:gas} \end{figure} \subsection{Dark energy} The apparently contradictory results, $\Omega_0 = 1\pm 0.2$ and $\Omega_M = 0.4\pm 0.1$, can be reconciled by the presence of a dark-energy component that is nearly smoothly distributed. The cosmological constant is the simplest possibility and it has $p_X=-\rho_X$. There are other possibilities for the smooth, dark energy. As I now discuss, other constraints imply that such a component must have very negative pressure ($w_X \mathrel{\mathpalette\fun <} -{1\over 2}$) leading to the prediction of accelerated expansion. To begin, parameterize the bulk equation of state of this unknown component: $w \equiv p_X/\rho_X$ (Turner \& White, 1997). This implies that its energy density evolves as $\rho_X \propto R^{-n}$ where $n=3(1+w)$. The development of the structure observed today from density perturbations of the size inferred from measurements of the anisotropy of the CBR requires that the Universe be matter dominated from the epoch of matter--radiation equality until very recently. Thus, to avoid interfering with structure formation, the dark-energy component must be less important in the past than it is today. This implies that $n$ must be less than $3$ or $w< 0$; the more negative $w$ is, the faster this component gets out of the way (see Fig.~\ref{fig:xmatter}). More careful consideration of the growth of structure implies that $w$ must be less than about $-{1\over 3}$ (Turner \& White, 1997). Next, consider the constraint provided by the age of the Universe and the Hubble constant. Their product, $H_0t_0$, depends upon the equation of state of the Universe; in particular, $H_0t_0$ increases with decreasing $w$ (see Fig.~\ref{fig:wage}). To be definite, I will take $t_0 =14\pm 1.5\,$Gyr and $H_0=65\pm 5\,{\rm km\,s^{-1}\,Mpc^{-1}}$ (see e.g., Chaboyer \mbox{\it et al.}, 1998 and Freedman, 1999); this implies that $H_0t_0 = 0.93 \pm 0.13$. Fig.~\ref{fig:wage} shows that $w<-{1\over 2}$ is preferred by age/Hubble constant considerations. \begin{figure} \centerline{\psfig{figure=xmatter.eps,width=3.5in}} \caption{Evolution of the energy density in matter, radiation (heavy lines), and different possibilities for the dark-energy component ($w=-1,-{1\over 3},{1\over 3}$) vs. scale factor. The matter-dominated era begins when the scale factor was $\sim 10^{-4}$ of its present size (off the figure) and ends when the dark-energy component begins to dominate, which depends upon the value of $w$: the more negative $w$ is, the longer the matter-dominated era in which density perturbations can grow into the large-scale structure seen today. These considerations require $w<-{1\over 3}$ (Turner \& White, 1997).
} \label{fig:xmatter} \end{figure} \begin{figure} \centerline{\psfig{figure=wage.eps,width=3.5in}} \caption{$H_0t_0$ vs. the equation of state for the dark-energy component. As can be seen, an added benefit of a component with negative pressure is an older Universe for a given Hubble constant. The broken horizontal lines denote the $1\sigma$ range for $H_0=65\pm 5\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and $t_0=14\pm 1.5\,$Gyr, and indicate that $w<-{1\over 2}$ is preferred. } \label{fig:wage} \end{figure} To summarize, consistency between $\Omega_M \sim 0.4$ and $\Omega_0 \sim 1$ along with other cosmological considerations implies the existence of a dark-energy component with bulk pressure more negative than about $-\rho_X /2$. The simplest example of such is vacuum energy (Einstein's cosmological constant), for which $w=-1$. The smoking-gun signature of a smooth, dark-energy component is accelerated expansion since $q_0 = 0.5 + 1.5w_X\Omega_X \simeq 0.5 + 0.9w < 0$ for $w<-{5\over 9}$. \subsection{$\Lambda$CDM} The cold dark matter scenario for structure formation is the most quantitative and most successful model ever proposed. Two of its key features are inspired by inflation: almost scale-invariant, adiabatic density perturbations with Gaussian statistical properties and a critical density Universe. The third, nonbaryonic dark matter, is a logical consequence of the inflationary prediction of a flat universe and the BBN determination of the baryon density at 5\% of the critical density. There is a very large body of data that is consistent with it: the formation epoch of galaxies and distribution of galaxy masses, galaxy correlation function and its evolution, abundance of clusters and its evolution, large-scale structure, and on and on. In the early 1980s attention was focused on a ``standard CDM model'': $\Omega_0=\Omega_M =1$, $\Omega_B = 0.05$, $h=0.50$, and exactly scale-invariant density perturbations (the cosmological equivalent of DOS 1.0). The detection of CBR anisotropy by COBE DMR in 1992 changed everything. First and most importantly, the COBE DMR detection validated the gravitational instability picture for the growth of large-scale structure: The level of matter inhomogeneity implied at last scattering, after 14 billion years of gravitational amplification, was consistent with the structure seen in the Universe today. Second, the anisotropy, which was detected on the $10^\circ$ angular scale, permitted an accurate normalization of the CDM power spectrum. For ``standard cold dark matter'', this meant that the level of inhomogeneity on all scales could be accurately predicted. It turned out to be about a factor of two too large on galactic scales. Not bad for an ab initio theory. With the COBE detection came the realization that the quantity and quality of data that bear on CDM was increasing and that the theoretical predictions would have to match their precision. Almost overnight, CDM became a ten (or so) parameter theory. For astrophysicists, and especially cosmologists, this is daunting, as it may seem that a ten-parameter theory can be made to fit any set of observations. This is not the case when one has the quality and quantity of data that will soon be available. In fact, the ten parameters of CDM + Inflation are an opportunity rather than a curse: Because the parameters depend upon the underlying inflationary model and fundamental aspects of the Universe, we have the very real possibility of learning much about the Universe and inflation.
The ten parameters can be organized into two groups: cosmological and dark-matter (Dodelson \mbox{\it et al.}, 1996). \smallskip \centerline{\it Cosmological Parameters} \vspace{3pt} \begin{enumerate} \item $h$, the Hubble constant in units of $100\,{\rm km\,s^{-1}}\,{\rm Mpc}^{-1}$. \item $\Omega_Bh^2$, the baryon density. Primeval deuterium measurements, together with the theory of BBN, imply: $\Omega_Bh^2 = 0.02 \pm 0.002$. \item $n$, the power-law index of the scalar density perturbations. CBR measurements indicate $n=1.1\pm 0.2$; $n=1$ corresponds to scale-invariant density perturbations. Many inflationary models predict $n\simeq 0.95$; the range of predictions runs from $0.7$ to $1.2$. \item $dn/d\ln k$, ``running'' of the scalar index with comoving scale ($k=$ wavenumber). Inflationary models predict a value of ${\cal O}(\pm 10^{-3})$ or smaller. \item $S$, the overall amplitude squared of density perturbations, quantified by their contribution to the variance of the CBR quadrupole anisotropy. \item $T$, the overall amplitude squared of gravity waves, quantified by their contribution to the variance of the CBR quadrupole anisotropy. Note, the COBE normalization determines $T+S$ (see below). \item $n_T$, the power-law index of the gravity wave spectrum. Scale-invariance corresponds to $n_T=0$; for inflation, $n_T$ is given by $-{1\over 7}{T\over S}$. \end{enumerate} \smallskip \centerline{\it Dark-matter Parameters} \vspace{3pt} \begin{enumerate} \item $\Omega_\nu$, the fraction of critical density in neutrinos ($=\sum_i m_{\nu_i}/90h^2$). While the hot dark matter theory of structure formation is not viable, we now know that neutrinos contribute at least 0.3\% of the critical density (Fukuda \mbox{\it et al.}, 1998). \item $\Omega_X$ and $w_X$, the fraction of critical density in a smooth dark-energy component and its equation of state. The simplest example is a cosmological constant ($w_X = -1$). \item $g_*$, the quantity that counts the number of ultra-relativistic degrees of freedom. The standard cosmology/standard model of particle physics predicts $g_* = 3.3626$. The amount of radiation controls when the Universe became matter dominated and thus affects the present spectrum of density inhomogeneity. \end{enumerate} A useful way to organize the different CDM models is by their dark-matter content; within each CDM family, the cosmological parameters vary. One list of models is: \begin{enumerate} \item sCDM (for simple): Only CDM and baryons; no additional radiation ($g_*=3.36$). The original standard CDM is a member of this family ($h=0.50$, $n=1.00$, $\Omega_B=0.05$), but is now ruled out (see Fig.~\ref{fig:cdm_sum}). \item $\tau$CDM: This model has extra radiation, e.g., produced by the decay of an unstable massive tau neutrino (hence the name); here we take $g_* = 7.45$. \item $\nu$CDM (for neutrinos): This model has a dash of hot dark matter; here we take $\Omega_\nu = 0.2$ (about 5\,eV worth of neutrinos). \item $\Lambda$CDM (for cosmological constant) or more generally xCDM: This model has a smooth dark-energy component; here, we take $\Omega_X = \Omega_\Lambda = 0.6$.
\end{enumerate} \begin{figure} \centerline{\psfig{figure=cdm_sum.eps,width=3in}} \caption{Summary of viable CDM models, based upon CBR anisotropy and determinations of the present power spectrum of inhomogeneity (Dodelson \mbox{\it et al.}, 1996).} \label{fig:cdm_sum} \end{figure} Figure \ref{fig:cdm_sum} summarizes the viability of these different CDM models, based upon CBR measurements and current determinations of the present power spectrum of inhomogeneity (derived from redshift surveys). sCDM is only viable for low values of the Hubble constant (less than $55\,{\rm km\,s^{-1}}\,{\rm Mpc}^{-1}$) and/or significant tilt (deviation from scale invariance); the region of viability for $\tau$CDM is similar to sCDM, but shifted to larger values of the Hubble constant (as large as $65\,{\rm km\,s^{-1}}\,{\rm Mpc}^{-1}$). $\nu$CDM has an island of viability around $H_0\sim 60\,{\rm km\,s^{-1}}\,{\rm Mpc}^{-1}$ and $n\sim 0.95$. $\Lambda$CDM can tolerate the largest values of the Hubble constant. While the COBE DMR detection ruled out ``standard CDM,'' a host of attractive variants were still viable. However, when other very relevant data are considered too -- e.g., age of the Universe, determinations of the cluster baryon fraction, measurements of the Hubble constant, and limits to $\Omega_\Lambda$ -- $\Lambda$CDM emerges as the hands-down winner of ``best-fit CDM model'' (Krauss \& Turner, 1995; Ostriker \& Steinhardt, 1995; Liddle \mbox{\it et al.}, 1996; Turner, 1997b). At the time of the Critical Dialogues in Cosmology meeting in 1996, the only strike against $\Lambda$CDM was the absence of evidence for its smoking-gun signature, accelerated expansion. \begin{figure} \centerline{\psfig{figure=kraus.eps,width=3.5in}} \caption{Constraints used to determine the best-fit CDM model: PS = large-scale structure + CBR anisotropy; AGE = age of the Universe; CBF = cluster-baryon fraction; and $H_0$= Hubble constant measurements. The best-fit model, indicated by the darkest region, has $H_0\simeq 60-65\,{\rm km\,s^{-1} \,Mpc^{-1}}$ and $\Omega_\Lambda \simeq 0.55 - 0.65$. Evidence for its smoking-gun signature -- accelerated expansion -- was presented in 1998 (adapted from Krauss \& Turner, 1995 and Turner, 1997).} \label{fig:best_fit} \end{figure} \subsection{Missing energy found!} In 1998 evidence for the accelerated expansion anticipated by theorists was presented in the form of the magnitude--redshift (Hubble) diagram for fifty-some type Ia supernovae (SNe Ia) out to redshifts of nearly 1. Two groups, the Supernova Cosmology Project (Perlmutter \mbox{\it et al.}, 1998) and the High-z Supernova Search Team (Riess \mbox{\it et al.}, 1998), working independently and using different methods of analysis, each found evidence for accelerated expansion. Perlmutter \mbox{\it et al.}\ (1998) summarize their results as a constraint to a cosmological constant (see Fig.~\ref{fig:omegalambda}), \begin{equation} \Omega_\Lambda = {4\over 3}\Omega_M +{1\over 3} \pm {1\over 6}\,. \end{equation} For $\Omega_M\sim 0.4 \pm 0.1$, this implies $\Omega_\Lambda = 0.85 \pm 0.2$, or just what is needed to account for the missing energy! As I have tried to explain, cosmologists were quicker than most to believe, as accelerated expansion was the missing piece of the puzzle.
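For orientation, the central value just quoted follows from simple arithmetic: $$\Omega_\Lambda = {4\over 3}\,(0.4) + {1\over 3} \simeq 0.87,$$ in agreement (to rounding) with the value quoted above, and the $\pm 0.2$ is what one gets by combining, in rough quadrature, the $\pm{1\over 6}$ of the SN Ia constraint with the $\pm 0.1$ uncertainty in $\Omega_M$, which enters multiplied by ${4\over 3}$.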
Recently, two other studies, one based upon the x-ray properties of rich clusters of galaxies (Mohr \mbox{\it et al.}, 1999) and the other based upon the properties of double-lobe radio galaxies (Guerra \mbox{\it et al.}, 1998), have reported evidence for a cosmological constant (or similar dark-energy component) that is consistent with the SN Ia results (i.e., $\Omega_\Lambda \sim 0.7$). There is another test of an accelerating Universe whose results are more ambiguous. It is based upon the fact that the frequency of multiply lensed QSOs is expected to be significantly higher in an accelerating universe (Turner, 1990). Kochanek (1996) has used gravitational lensing of QSOs to place a 95\% cl upper limit, $\Omega_\Lambda < 0.66$; and Waga and Miceli (1998) have generalized it to a dark-energy component with negative pressure: $\Omega_X < 1.3 + 0.55w$ (95\% cl), both results for a flat Universe. On the other hand, Chiba and Yoshii (1998) claim evidence for a cosmological constant, $\Omega_\Lambda = 0.7^{+0.1}_{-0.2}$, based upon the same data. From this I conclude: 1) Lensing excludes $\Omega_\Lambda$ larger than 0.8; 2) Because of the modeling uncertainties and lack of sensitivity for $\Omega_\Lambda <0.55$, lensing has little power in strictly constraining $\Lambda$ or a dark component; and 3) When larger objective surveys of gravitationally lensed QSOs are carried out (e.g., the Sloan Digital Sky Survey), there is the possibility of uncovering another smoking gun for accelerated expansion. \subsection{Cosmic concordance} With the SN Ia results we have for the first time a complete and self-consistent accounting of mass and energy in the Universe. The consistency of the matter/energy accounting is illustrated in Fig.~\ref{fig:omegalambda}. Let me explain this exciting figure. The SN Ia results are sensitive to the acceleration (or deceleration) of the expansion and constrain the combination ${4\over 3}\Omega_M -\Omega_\Lambda$. (Note, $q_0 = {1\over 2}\Omega_M - \Omega_\Lambda$; ${4\over 3}\Omega_M - \Omega_\Lambda$ corresponds to the deceleration parameter at redshift $z\sim 0.4$, the median redshift of these samples). The (approximately) orthogonal combination, $\Omega_0 = \Omega_M + \Omega_\Lambda$, is constrained by CBR anisotropy. Together, they define a concordance region around $\Omega_0\sim 1$, $\Omega_M \sim 1/3$, and $\Omega_\Lambda \sim 2/3$. The constraint to the matter density alone, $\Omega_M = 0.4\pm 0.1$, provides a cross check, and it is consistent with these numbers. Further, these numbers point to $\Lambda$CDM (or something similar) as the cold dark matter model. Another body of observations already supports this as the best-fit model. Cosmic concordance indeed! \begin{figure} \centerline{\psfig{figure=omegalambda.eps,width=3.5in}} \caption{Two-$\sigma$ constraints to $\Omega_M$ and $\Omega_\Lambda$ from CBR anisotropy, SNe Ia, and measurements of clustered matter. Lines of constant $\Omega_0$ are diagonal, with a flat Universe shown by the broken line. The concordance region is shown in bold: $\Omega_M\sim 1/3$, $\Omega_\Lambda \sim 2/3$, and $\Omega_0 \sim 1$. (Particle physicists who rotate the figure by $90^\circ$ will recognize the similarity to the convergence of the gauge coupling constants.) } \label{fig:omegalambda} \end{figure} \section{What is the dark energy?} I have often used the term exotic to refer to particle dark matter.
That term will now have to be reserved for the dark energy that is causing the accelerated expansion of the Universe -- by any standard, it is more exotic and more poorly understood. Here is what we do know: it contributes about 60\% of the critical density; it has pressure more negative than about $-\rho /2$; and it does not clump (otherwise it would have contributed to estimates of the mass density). The simplest possibility is the energy associated with the virtual particles that populate the quantum vacuum; in this case $p=-\rho$ and the dark energy is absolutely spatially and temporally uniform. This ``simple'' interpretation has its difficulties. Einstein ``invented'' the cosmological constant to make a static model of the Universe and then he discarded it; we now know that the concept is not optional. The cosmological constant corresponds to the energy associated with the vacuum. However, there is no sensible calculation of that energy (see e.g., Zel'dovich, 1967; Bludman and Ruderman, 1977; and Weinberg, 1989), with estimates ranging from $10^{122}$ down to $10^{55}$ times the critical density. Some particle physicists believe that when the problem is understood, the answer will be zero. Spurred in part by the possibility that cosmologists may have actually weighed the vacuum (!), particle theorists are taking a fresh look at the problem (see e.g., Harvey, 1998; Sundrum, 1997). Sundrum's proposal, that the gravitational energy of the vacuum is close to the present critical density because the graviton is a composite particle with size of order 1\,cm, is indicative of the profound consequences that a cosmological constant has for fundamental physics. Because of the theoretical problems mentioned above, as well as the checkered history of the cosmological constant, theorists have explored other possibilities for a smooth component to the dark energy (see e.g., Turner \& White, 1997). Wilczek and I pointed out that even if the energy of the true vacuum is zero, as the Universe cooled and went through a series of phase transitions, it could have become hung up in a metastable vacuum with nonzero vacuum energy (Turner \& Wilczek, 1982). In the context of string theory, where there are a very large number of energy-equivalent vacua, this becomes a more interesting possibility: perhaps the degeneracy of vacuum states is broken by very small effects, so small that we were not steered into the lowest energy vacuum during the earliest moments. Vilenkin (1984) has suggested a tangled network of very light cosmic strings (see also Spergel \& Pen, 1997) produced at the electroweak phase transition; networks of other frustrated defects (e.g., walls) are also possible. In general, the bulk equation-of-state of frustrated defects is characterized by $w=-N/3$ where $N$ is the dimension of the defect ($N=1$ for strings, $=2$ for walls, etc.). The SN Ia data almost exclude strings, but still allow walls. An alternative that has received a lot of attention is the idea of a ``decaying cosmological constant'', a term coined by the Soviet cosmologist Matvei Petrovich Bronstein in 1933 (Bronstein, 1933). (Bronstein was executed on Stalin's orders in 1938, presumably for reasons not directly related to the cosmological constant; see Kragh, 1996.) The term is, of course, an oxymoron; what people have in mind is making vacuum energy dynamical. The simplest realization is a dynamical, evolving scalar field.
If it is spatially homogeneous, then its energy density and pressure are given by \begin{eqnarray} \rho & = & {1\over 2}{\dot\phi}^2 + V(\phi ) \nonumber \\ p & = & {1\over 2}{\dot\phi}^2 - V(\phi ) \end{eqnarray} and its equation of motion by (see e.g., Turner, 1983) \begin{equation} \ddot \phi + 3H\dot\phi + V^\prime (\phi ) = 0 \end{equation} The basic idea is that the energy of the true vacuum is zero, but not all fields have evolved to their state of minimum energy. This is qualitatively different from the case of a metastable vacuum, which is a local minimum of the potential and is classically stable. Here, the field is classically unstable and is rolling toward its lowest energy state. Two features of the ``rolling-scalar-field scenario'' are worth noting. First, the effective equation of state, $w=({1\over 2}\dot\phi^2 - V)/({1\over 2}\dot\phi^2 +V)$, can take on any value from $-1$ to $1$. Second, $w$ can vary with time. These are key features that may allow it to be distinguished from the other possibilities. The combination of SN Ia, CBR and large-scale structure data is already beginning to significantly constrain models (Perlmutter, Turner \& White, 1999), and interestingly enough, the cosmological constant is still the best fit (see Fig.~\ref{fig:composite}). The rolling scalar field scenario (aka mini-inflation or quintessence) has received a lot of attention over the past decade (Freese \mbox{\it et al.}, 1987; Ozer \& Taha, 1987; Ratra \& Peebles, 1988; Frieman \mbox{\it et al.}, 1995; Coble \mbox{\it et al.}, 1996; Turner \& White, 1997; Caldwell \mbox{\it et al.}, 1998; Steinhardt, 1999). It is an interesting idea, but not without its own difficulties. First, one must {\em assume} that the energy of the true vacuum state ($\phi$ at the minimum of its potential) is zero; i.e., it does not address the cosmological constant problem. Second, as Carroll (1998) has emphasized, the scalar field is very light and can mediate long-range forces. This places severe constraints on it. Finally, with the possible exception of one model (Frieman \mbox{\it et al.}, 1995), none of the scalar-field models address how $\phi$ fits into the grander scheme of things and why it is so light ($m\sim 10^{-33}\,$eV). \begin{figure} \centerline{\epsfxsize=12cm \epsfbox{fig1.ps}} \caption{Contours of likelihood, from $0.5\sigma$ to $2\sigma$, in the $\Omega_M$--$w_{\rm eff}$ plane. Left: The thin solid lines are the constraints from LSS and the CMB. The heavy lines are the SN Ia constraints for constant $w$ models (solid curves) and for a scalar-field model with an exponential potential (broken curves). Right: The likelihood contours from all of our cosmological constraints for constant $w$ models (solid) and dynamical scalar-field models (broken). Note: at 95\% cl $w_{\rm eff}$ must be less than $-0.6$, and the cosmological constant is the most likely solution (from Perlmutter, Turner \& White, 1999).} \label{fig:composite} \end{figure} \section{Looking ahead} Theorists often require new results to pass Eddington's test: No experimental result should be believed until confirmed by theory. While provocative (as Eddington had apparently intended it to be), it embodies the wisdom of mature science. Results that bring down the entire conceptual framework are very rare indeed. \begin{figure} \centerline{\psfig{figure=cos.pol4.eps,width=3.5in}} \caption{The 95\% confidence interval for the reconstructed potential assuming luminosity distance errors of 5\% and 2\% (shaded areas) and the original potential (heavy line). 
For this reconstruction, $\Omega_M = 0.3$ and $V(\phi ) = V_0[1+\cos (\phi /f) ]$ (from Huterer \& Turner, 1998). } \label{fig:ht_recon} \end{figure} Both cosmologists and supernova theorists seem to use Eddington's test to some degree. It seems to me that the summary of the SN Ia part of the meeting goes like this: We don't know what SN Ia are; we don't know how they work; but we believe SN Ia are very good standardizable candles. I think what they mean is that they have a general framework for understanding an SN Ia, the thermonuclear detonation of a Chandrasekhar mass white dwarf, and have failed in their models to find a second (significant) parameter that is consistent with the data at hand. Cosmologists are persuaded that the Universe is accelerating both because of the SN Ia results and because this was the missing piece to a grander puzzle. Not only have SN Ia led us to the acceleration of the Universe, but I also believe they will play a major role in unraveling the mystery of the dark energy. The reason is simple: we can be confident that the dark energy was an insignificant component in the past; it has just recently become important. While the anisotropy of the CBR is indeed a cosmic Rosetta Stone, it is most sensitive to physics around the time of decoupling. (To be very specific, the CBR power spectrum is almost identical for all flat cosmological models with the same conformal age today.) SNe Ia probe the Universe just around the time dark energy was becoming dominant (redshifts of a few). My student Dragan Huterer and I (Huterer \& Turner, 1998) have been so bold as to suggest that with 500 or so SN Ia with redshifts between 0 and 1, one might be able to discriminate between the different possibilities and even reconstruct the scalar potential for the quintessence field (see Fig.~\ref{fig:ht_recon}). \begin{acknowledgments} My work is supported by the US Department of Energy and NASA through grants at Chicago and Fermilab. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{intro2.png} \end{center} \caption{Unsupervised spectral image land cover segmentation.} \label{fig:intro} \end{figure} Spectral remote sensing systems acquire information about the Earth's surface by sensing a large amount of spatial data at different electromagnetic radiation frequencies. Spectral images (SI) are commonly regarded as three-dimensional datasets or data cubes with two dimensions in the spatial domain $(x,y)$ and one in the spectral domain $(\lambda)$ \cite{shaw2003spectral}. Depending on the acquired spectral/spatial resolution, spectral imaging sensors can be categorized into hyperspectral (HS) and multispectral (MS). Typically, HS devices capture hundreds of spectral bands of the scene; however, their spatial resolution is often lower than that obtained with an MS sensor, which in turn has a low spectral resolution \cite{yokoya2017hyperspectral}. As shown in Fig. \ref{fig:intro}, every spatial location in a spectral image is represented by a vector whose values correspond to the intensity at different spectral bands. These vectors are also known as the spectral signatures of the pixels, or spectral pixels. Since different materials usually reflect electromagnetic energy differently at specific wavelengths \cite{shaw2003spectral}, the information provided by the spectral signatures allows distinguishing different physical materials and objects within an image. In remote sensing, the classification of spectral images is also referred to as land cover segmentation or mapping, and it is an important computer vision task for many practical applications, such as precision agriculture \cite{lanthier2008hyperspectral}, vegetation classification \cite{thenkabail2016hyperspectral}, monitoring and management of the environment \cite{gessesse2015model,volpi2015semantic}, as well as security and defense issues \cite{briottet2006military}. Accurate land cover segmentation is challenging due to the high-dimensional feature space, and it has drawn widespread attention in remote sensing \cite{ghamisi2017advances,li2019deep}. In the past decade, significant efforts have been made in the development of numerous SI classification methods; however, most of them rely on supervised approaches \cite{sanchez2019supervised,hinojosa2019spectral}. More recently, with the blooming of deep learning techniques for big data analysis, several deep neural networks have been developed to extract high-level features of SIs, achieving state-of-the-art supervised classification performance \cite{paoletti2019deep}. However, the success of such deep learning approaches hinges on a large amount of labeled data, which is not always available and often prohibitively expensive to acquire. As a result, the computer vision community is currently focused on developing unsupervised methods that can adapt to new conditions without requiring a massive amount of data \cite{Kolesnikov_2019_CVPR}. Most successful unsupervised learning methods exploit the fact that high-dimensional datasets can be well approximated by a union of low-dimensional subspaces. Under this assumption, the sparse subspace clustering (SSC) algorithm captures the relationship among all data points by exploiting the \textit{self-expressiveness} property \cite{elhamifar2013sparse}. This property states that each data point in a union of subspaces can be written as a linear combination of other points from its own subspace. 
Then, the set of solutions is restricted to be sparse by minimizing the $\ell_1$ norm. Finally, an affinity matrix is built using the obtained sparse coefficients, and the normalized spectral clustering algorithm \cite{von2007tutorial} is applied to achieve the final segmentation. Assuming that spectral pixels with a similar spectrum approximately belong to the same low-dimensional structure, the SSC algorithm can be successfully applied for land cover segmentation \cite{hinojosa2018coded,hinojosa2018spectral,zhang2016spectral,zhai2016new,huang2019semisupervised}. Despite the great success of SSC in land cover segmentation, two main problems have been identified: (1) The overall computational complexity of SSC prohibits its usage on large spectral remote sensing datasets. For instance, given a SI with $N_r$ rows, $N_c$ columns, and $L$ spectral bands, SSC needs to compute the $N\times N$ sparse coefficient matrix corresponding to $N=N_rN_c$ spectral pixels, whose computational complexity is $O(LN^3)$. Moreover, after building the affinity matrix, spectral clustering performs an eigenvalue decomposition over the $N\times N$ graph Laplacian matrix, which also has cubic time complexity, or quadratic using approximation algorithms \cite{chen2018spectral} (see Fig. \ref{fig:acc_time_comparison}, right). (2) In the context of SIs, the SSC model only captures the relationship of pixels by analyzing the spectral features, without considering the spatial information. Indeed, the sparse coefficient matrix should be piecewise smooth since spectral pixels belonging to the same land cover material are arranged in a common region; hence there is a spatial relationship between the representation coefficient vector of one pixel and its neighbors. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{fig_intro.pdf} \end{center} \caption{Clustering accuracy (left) and running time (right) of the SSC algorithm compared with the proposed method for land cover segmentation. In this example, we performed the two subspace clustering algorithms on the full image and two regions of interest (ROIs) of the Indian Pines dataset (see Section \ref{sec:experiments}). The first ROI has $N=4900$ pixels and $k=4$ classes; the second has $N=10000$ pixels and $k=12$ classes; the whole Indian Pines image has $N=21025$ pixels and $k=17$ classes.} \label{fig:acc_time_comparison} \end{figure} \textbf{Paper contribution.} This paper proposes a fast and accurate similarity-constrained subspace clustering algorithm to enhance both the clustering accuracy and the execution time when performing land cover segmentation. Specifically, our main contributions are as follows. \begin{enumerate} \item We propose to first group similar spatial neighboring pixels in subsets using a ``superpixels'' technique \cite{li2015superpixel,achanta2010slic}. Then, instead of expressing each pixel as a linear combination of all pixels in the dataset, we constrain each pixel to be solely represented as a linear combination of other pixels in the same subset. Therefore, the obtained sparse coefficient matrix encodes information about similarities between the most representative pixels of each subset and the whole dataset. In this paper, we present an efficient algorithm for selecting the most representative pixels of each subset by minimizing the maximum representation cost of the data. 
\item Our second contribution is the enhancement of the obtained sparse coefficient matrix via a 2D smoothing convolution before applying a fast spectral clustering algorithm that significantly reduces the computational cost. Specifically, the proposed method enforces the connectivity in the affinity matrix and then efficiently obtains the spectral embedding without the need to compute the eigenvalue decomposition, which has a computational complexity of $O(N^3)$ in general. \end{enumerate} Increasing the number of data points and classes enlarges the computation time and makes clustering more challenging. The proposed method, shown with the blue line in Fig. \ref{fig:acc_time_comparison}, can be up to three orders of magnitude faster than SSC and outperforms it in terms of accuracy when clustering more than $2 \times 10^4$ spectral pixels. This paper evaluates and compares our approach on three real remote sensing spectral images with different imaging environments and spectral-spatial resolution. \section{Related Works} \label{sec:related_works} In the literature, the scalability issue of SSC and its ability to perform land cover segmentation on spectral images have been studied separately. In this section, we review some related works from these two points of view. Considering a given collection of $N$ data points $\mathbf{X}=\left\{\mathbf{x}_1,\cdots,\mathbf{x}_{N} \right\}$ that lie in the union of $k$ linear subspaces of $\mathbb{R}^D$, SSC expresses each data point $\mathbf{x}_j$ as a linear combination of all other points in $\mathbf{X}$, i.e., $\mathbf{x}_j = \sum_{i \ne j} c_{ij}\mathbf{x}_i$, where $c_{ij}$ is nonzero only if $\mathbf{x}_i$ and $\mathbf{x}_j$ are from the same subspace, for $(i,j) \in \left\{ 1,\cdots, N \right\}$. Such representations $\left\{c_{ij}\right\}$ are called \textit{subspace-preserving}. In general, assuming that $\mathbf{c}_j$ is sparse, SSC solves the following optimization problem \begin{equation} \min_{\mathbf{c}_j \in \mathbb{R}^N} \|\mathbf{c}_j\|_1 + \frac{\tau}{2}\|\mathbf{x}_j - \sum_{i \neq j} c_{ij}\mathbf{x}_i \|_2^2, \label{eq:ssc} \end{equation} where $\tau >0$ and $\mathbf{c}_j = \left[ c_{1j},\cdots,c_{Nj} \right]^T$ encodes information about the membership of $\mathbf{x}_j$ to the subspaces. Subsequently, an affinity matrix between any pair of points $\mathbf{x}_i$ and $\mathbf{x}_j$ is defined as $A_{ij}=|c_{ij}| + |c_{ji}|$, and it is used in a spectral clustering framework to infer the clustering of the data \cite{von2007tutorial,elhamifar2013sparse}. Although the representation produced by SSC is guaranteed to be subspace-preserving, the affinity matrix may lack \textit{connectedness} \cite{nasihatkon2011graph}, i.e., the data points from the same subspace may not form a connected component of the affinity graph due to the sparseness of the connections, which may cause over-segmentation. \subsection{Fast and Scalable Subspace Clustering Methods} \label{sec:related_works_sub2} Taking into account the self-expressiveness property, an early approach to address the SSC scalability issue assumes that a small number of data points can represent the whole dataset without loss of information. Based on this idea, the authors of \cite{peng2013scalable} proposed the \textit{Scalable Sparse Subspace Clustering} (SSSC) algorithm, which clusters a small subset of the original data and then classifies the rest of the data based on the learned groups. However, this strategy is suboptimal since it sacrifices clustering accuracy for computational efficiency. 
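Before reviewing further scalable variants, it is useful to make the baseline of Eq. (\ref{eq:ssc}) concrete. The snippet below is an illustrative sketch only: it assumes scikit-learn is available, its \texttt{alpha} is an approximate reparametrization of $\tau$ (not part of the original formulation), and it exhibits exactly the scaling that the methods reviewed in this section try to avoid.
\begin{verbatim}
# Illustrative sketch of plain SSC (Eq. (1)): one LASSO per pixel,
# symmetric affinity, then normalized spectral clustering.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, k, tau=20.0):
    """X: D x N matrix of spectral pixels (columns); k: #clusters."""
    D, N = X.shape
    C = np.zeros((N, N))
    # sklearn's Lasso minimizes (1/(2*D))||x - B c||_2^2 + alpha*||c||_1,
    # so alpha ~ 1/(tau*D) plays the role of the l1 weight in Eq. (1).
    lasso = Lasso(alpha=1.0 / (tau * D), fit_intercept=False,
                  max_iter=2000)
    for j in range(N):
        idx = np.delete(np.arange(N), j)   # enforce c_jj = 0
        lasso.fit(X[:, idx], X[:, j])
        C[idx, j] = lasso.coef_
    A = np.abs(C) + np.abs(C).T            # A_ij = |c_ij| + |c_ji|
    return SpectralClustering(n_clusters=k, affinity='precomputed',
                              assign_labels='kmeans').fit_predict(A)
\end{verbatim}
Even with an efficient LASSO solver, the $N$ regressions over an $N$-atom dictionary and the $N\times N$ spectral clustering step make this baseline impractical for full spectral images, which motivates the variants below.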
In \cite{you2016scalable}, the authors replace the $\ell_1$ optimization in the original SSC algorithm \cite{elhamifar2013sparse} with greedy pursuit, e.g., orthogonal matching pursuit (OMP) \cite{tropp2007signal}, for sparse self-representation \cite{dyer2013greedy}. While SSC-OMP improves the time efficiency of SSC by several orders of magnitude, it significantly loses clustering accuracy \cite{chen2017active}. Besides, SSC-OMP also suffers from the connectivity issue present in the original SSC algorithm. To solve this issue, the authors of \cite{you2016oracle} proposed to mix the $\ell_1$ and $\ell_2$ norms to take advantage of the subspace-preserving property of the $\ell_1$ norm and the dense connectivity of the $\ell_2$ norm. Specifically, this algorithm, named ORacle Guided Elastic Net solver (ORGEN), identifies a support set for each sample. However, in this approach, a convex optimization problem is solved several times for each sample, which limits the scalability of the algorithm. More recent works \cite{aldroubi2018similarity,aldroubi2017cur,abdolali2019scalable} use a different subset selection method for subspace clustering. In particular, the method named Scalable and Robust SSC (SR-SSC) \cite{abdolali2019scalable} selects a few sets of anchor points using a randomized hierarchical clustering method. Then, within each set of anchor points, it solves the LASSO \cite{tibshirani1996regression} problem for each data point, allowing only anchor points to have non-zero weights. However, this method does not demonstrate that its selected points are representative of the subspaces. In a similar spirit to SSC-OMP, the authors of \cite{you2018scalable} proposed an approximation algorithm~\cite{williamson2011design} to solve the optimization problem in Eq. (\ref{eq:ssc}). Specifically, instead of using the whole dataset $\mathbf{X}$, the Exemplar-based Subspace Clustering (ESC) algorithm in \cite{you2018scalable} selects a small subset $\mathbf{\hat{X}} \subseteq \mathbf{X}$ that represents all data points, and then each point is expressed as a linear combination of points in $\mathbf{\hat{X}} \in \mathbb{R}^{D\times M}$, where $M<N$. In particular, the selection of $\mathbf{\hat{X}}$ is obtained by using the farthest first search (FFS) algorithm, which is a modified version of the Farthest-First Traversal (FarFT) algorithm~\cite{williamson2011design}. Indeed, the main difference between FarFT and FFS is the distance metric used. Explicitly, FarFT uses the Euclidean distance, while FFS uses a custom metric, derived from Eq. (\ref{eq:ssc}), that geometrically measures how well a data point $\mathbf{x}_j \in \mathbf{X}$ is covered by a subset $\mathbf{\hat{X}}$. The authors propose to construct $\mathbf{\hat{X}}$ by first performing random sampling to select a base point and then progressively adding new representative data points using the defined metric. However, a careful selection of the search space and of the first selected data point could speed up the unsupervised learning process. The complete algorithm proposed in \cite{you2018scalable} is known as ESC-FFS, and we compare it against our proposed method in Section \ref{sec:experiments}. In general, the previously described algorithms provide an acceptable subspace clustering performance on large-scale datasets. However, these general-purpose methods do not fully exploit the complex structure of remotely sensed spectral images, ignoring their rich spatial information, which could boost the accuracy of these algorithms. 
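To make the exemplar-selection idea concrete before introducing our similarity-constrained variant, consider the following sketch, which greedily grows an exemplar set by repeatedly adding the point that is currently worst represented, in the spirit of ESC-FFS. It is illustrative only: the helper names are ours, scikit-learn is assumed, and the cost is the LASSO objective later formalized in Eq. (\ref{eq:ftau}); unlike FFS, it naively re-evaluates every cost at each iteration, an expense that Algorithm \ref{alg:selection} in Section \ref{sec:prop_method} avoids through a restricted search space and a cost-ordering early exit.
\begin{verbatim}
# Illustrative farthest-first exemplar selection under a LASSO-based
# self-representation cost (a sketch, not the reference algorithm).
import numpy as np
from sklearn.linear_model import Lasso

def f_tau(x, X_sub, tau=20.0):
    """Self-representation cost of x w.r.t. the columns of X_sub."""
    lasso = Lasso(alpha=1.0 / (tau * x.size), fit_intercept=False)
    lasso.fit(X_sub, x)
    c = lasso.coef_
    return np.abs(c).sum() + 0.5 * tau * np.sum((x - X_sub @ c) ** 2)

def select_exemplars(X, M, tau=20.0):
    """Greedily pick M columns of X that minimize the maximum cost."""
    N = X.shape[1]
    chosen = [np.random.randint(N)]      # ESC-FFS starts at random
    for _ in range(M - 1):
        costs = [f_tau(X[:, j], X[:, chosen], tau) for j in range(N)]
        chosen.append(int(np.argmax(costs)))
    return X[:, chosen]
\end{verbatim}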
\subsection{SSC-based Methods for Land Cover Segmentation} \label{sec:related_works_sub1} Some SSC-based methods have been proposed for land cover segmentation, which take advantage of the neighboring spatial information but still present the scalability issue of SSC. In the context of SIs, the $N_r\times N_c \times L$ 3D image data cube can be rearranged into a 2D matrix $\mathbf{X}\in \mathbb{R}^{D\times N}$ to apply the SSC algorithm, where $N=N_rN_c$ and $D<L$ is the number of features extracted from the spectral signatures after applying principal component analysis (PCA) \cite{sanchez2019supervised}. Taking into account that the spectral pixels belonging to the same land cover material are arranged in common regions, different works \cite{zhang2016spectral,zhai2016new,zhai2017kernel,bacca2017kernel,hinojosa2018spectral,hinojosa2018coded, hinojosa2021hyperspectral} aim at obtaining a piecewise-smooth sparse coefficient matrix to incorporate such contextual dependence. In particular, S-SSC \cite{zhang2016spectral} helps to guarantee spatial smoothness and reduce the representation bias by adding a regularization term to the SSC optimization problem, which enforces a local averaging constraint on the sparse coefficient matrix. More recently, the authors of \cite{hinojosa2021hyperspectral} proposed the 3DS-SSC algorithm, which incorporates a 3D Gaussian filter in the optimization problem to perform a 3D convolution on the sparse coefficients, obtaining a piecewise-smooth representation matrix. \section{Fast and Accurate Similarity-constrained Subspace Clustering (SC-SSC)} \label{sec:prop_method} \begin{figure*}[h] \begin{center} \includegraphics[width=\linewidth]{prop_method_new.pdf} \end{center} \caption{Workflow of the proposed fast and accurate similarity-constrained subspace clustering algorithm (SC-SSC) for land cover segmentation. The overall algorithm is composed of four stages. In the first stage, we apply PCA to obtain the three principal components of the spectral image. Then, we segment the image in different subsets using a superpixel algorithm. Note that we only use PCA to extract spatial similarities; we perform the subsequent procedures on the spectral pixels, as depicted with the circular flow symbols. In the second stage, the most representative spectral pixels from each subset are obtained by solving Eq. (\ref{eq:findExmp_opt}) via Algorithm \ref{alg:selection}, and then all the representative spectral pixels from each subset are stacked as columns in the matrix $\mathbf{\hat{X}}$. In the third stage, the $\left\{ \mathbf{c}_j \right\}$ vectors are obtained by solving Eq. (\ref{eq:newSSC}). Finally, we reshape each row of the matrix $\mathbf{C}=\left[ \mathbf{c}_1,\cdots,\mathbf{c}_N \right]$, perform a 2D convolution with a $ K_s\times K_s $ kernel, and reshape back the result to obtain a piecewise-smooth coefficient matrix. We obtain the final data segmentation via fast spectral clustering, as described in Section \ref{subsec:enhanceSC}. The computational complexity of the overall algorithm is $O(\rho^2N^3)$, as analyzed in Section \ref{sec:comp_complexity}.} \label{fig:prop_method} \end{figure*} This section presents a subspace clustering algorithm for land cover segmentation that incorporates both properties: it can better handle large-scale datasets, and it takes advantage of the neighboring spatial information of SIs to boost the clustering accuracy. The complete workflow of the proposed method is shown in Fig. \ref{fig:prop_method}. 
In general, we exploit the self-representation property within subsets of neighboring similar pixels to select the most representative data points of the whole spectral image. Then, we enhance the sparse representation and perform fast spectral clustering to obtain the segmentation result. \subsection{Similarity-constrained Most Representative Spectral Pixels Selection} As neighboring spatial pixels commonly belong to the same land cover material, the proposed method aims to select a small subset of pixels that best represent their neighborhood. In this regard, we start by obtaining a segmentation map of the overall SI using a superpixels algorithm, which commonly expects a three-band image as input. Therefore, we first perform PCA to retrieve the three principal components of $\mathbf{X}$ and form the matrix $\mathbf{X}_{PCA}\in \mathbb{R}^{3\times N}$. Then, we use the SLIC algorithm \cite{achanta2012slic} to obtain a segmentation map $\mathbf{\tilde{m}} \in \mathbb{R}^{N}$ from $\mathbf{X}_{PCA}$, such that $\tilde{m}_j \in \left\{ 1,\cdots,E \right\}$, where $E$ is the number of segments. For instance, $\tilde{m}_j=e$ means that the pixel $\mathbf{x}_j$ belongs to the segment $e$. Note that PCA is only performed to obtain $\mathbf{\tilde{m}}$ from $\mathbf{X}_{PCA}$ via SLIC; then, we use $\mathbf{\tilde{m}}$ to select the most representative spectral pixels $\mathbf{x}_j$ from $\mathbf{X}$ within each segment $e$. Let $\mathbf{p}_e \in \mathbb{R}^{N_e}$ be the vector containing the indices of the $N_e$ most similar spectral pixels belonging to the subset $e$. We are interested in selecting the $M_e = \lfloor \rho N_e \rfloor$ most representative pixels from each subset, where $\rho \in (0,1)$. Taking advantage of the self-expressiveness property, the selection of the pixels within each neighborhood $e$ is obtained by searching for a subset $ \mathbf{X}_{e}^{*} \subseteq \mathbf{X}$ that minimizes \begin{equation} \mathbf{X}_{e}^{*} = \argmin_{ \mathbf{X}_e \in \mathbb{R}^{D\times M_e}} F_\tau (\mathbf{X}_e), \label{eq:findExmp_opt} \end{equation} where $F_\tau$ is the \textit{self-representation} cost function defined as \begin{equation} F_{\tau}(\mathbf{X}_e) \coloneqq \sup_{\mathbf{x}_j \in \mathbf{X} \ :\ j \in \mathbf{p}_e} f_{\tau}(\mathbf{x}_j,\mathbf{X}_e). \label{eq:self_rep_cost_f_subset} \end{equation} The metric function $f_{\tau}(\mathbf{x}_j,\mathbf{X}_e)$ geometrically measures how well a data point $\mathbf{x}_j \in \mathbf{X} : j \in \mathbf{p}_e$ can be represented by the subset $\mathbf{X}_e$, and we define it as \begin{equation} f_{\tau}(\mathbf{x}_j,\mathbf{X}_e) \coloneqq \min_{\mathbf{c}_j \in \mathbb{R}^{N}} \|\mathbf{c}_j\|_1 + \frac{\tau}{2} \|\mathbf{x}_j - \sum_{i:\mathbf{x}_i \in \mathbf{X}_e} c_{ij}\mathbf{x}_i \|_2^2, \label{eq:ftau} \end{equation} where $\tau \in (1,\infty)$ is a parameter. Note that, with Eq. (\ref{eq:self_rep_cost_f_subset}), we constrain Eq. (\ref{eq:findExmp_opt}) to search only for pixels $\mathbf{x}_j$ within the subset $e$, using the vector $\mathbf{p}_e$. To efficiently solve Eq. (\ref{eq:findExmp_opt}) for each subset $e$, we use the approximation algorithm described in Algorithm \ref{alg:selection}. Note that, instead of using a random initialization, we select the centroid spectral pixel $\mathbf{\bar{x}}_e$ as the initialization data point since it is the most similar point, in Euclidean distance, to all other data points in $e$. 
The search space constraint, given by dividing the SI into subsets, in conjunction with selecting the centroid spectral pixel speeds up the acquisition of the most representative spectral pixels. \begin{algorithm}[t] \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \KwIn{Data $\mathbf{X} \in \mathbb{R}^{D\times N}$, Indices vector $\mathbf{p}_e \in \mathbb{R}^{N_e}$, Parameters $0<\rho <1$, and $\tau >1$.} \KwOut{$\mathbf{X}_e \in \mathbb{R}^{D \times \lfloor \rho N_e \rfloor}$.} \SetAlgoLined \SetKwProg{Fn}{Function}{}{end} \nonl \Fn{Data\_Selection($\mathbf{X},\mathbf{p}_e, \rho, \tau$)}{ $\mathbf{\bar{x}}_e \leftarrow \text{centroid}(\{\mathbf{x}_j \in \mathbf{X}:j \in \mathbf{p}_e\})$ $\mathbf{X}_e^{(1)} \leftarrow \left\{ \mathbf{\bar{x}}_e \right\}$ \mycommfont{$\triangleright$ $(\mathbf{p}_e)_{k}$ gets the $k$-th element of the vector $\mathbf{p}_e$.} Compute $b_k = f_\tau (\mathbf{x}_j,\mathbf{X}_{e}^{(1)})$ for $k=1, \cdots,N_e$, and $j=(\mathbf{p}_e)_{k}$. $M_e \leftarrow \lfloor \rho N_e \rfloor$ \For{$i=1,\cdots,M_e-1$}{ Let $o_1,\cdots,o_{N_e}$ be an ordering of $1,\cdots,N_e$ such that $b_{o_p} \ge b_{o_q}$ when $p<q$. Initialize $\textit{max\_cost}=0$. \For{$k=1,\cdots,N_e$}{ Set $b_{o_k}=f_\tau(\mathbf{x}_{o_k},\mathbf{X}_{e}^{(i)}).$ \If{$b_{o_k} > \textit{max\_cost}$}{ Set $\textit{max\_cost}=b_{o_k}$, and $\textit{new\_index}=o_k$. } \If{$k=N_e$ or $\textit{max\_cost} \ge b_{o_{k+1}}$}{ \textbf{break} } } $\mathbf{X}_{e}^{(i+1)} = \mathbf{X}_{e}^{(i)} \cup \left\{ \mathbf{x}_{\textit{new\_index}} \right\}$ } \KwRet{$\mathbf{X}_{e}$} } \caption{Similarity-constrained spectral pixels selection} \label{alg:selection} \end{algorithm} \subsection{Enhancing the sparse representation coefficients for fast spectral clustering} \label{subsec:enhanceSC} Once the most representative spectral pixels from each subset are obtained, we build the matrix $\mathbf{\hat{X}}$ by stacking the results as columns, i.e., $\mathbf{\hat{X}}=\left[ \mathbf{X}_1,\cdots,\mathbf{X}_E \right]$. Then, the sparse coefficient matrix $\mathbf{C}$ of size $M \times N$, with $M = \lfloor \rho N \rfloor$, can be obtained by solving the following optimization problem, similar to Eq. (\ref{eq:ftau}), \begin{equation} \min_{\mathbf{c}_j \in \mathbb{R}^M} \|\mathbf{c}_j\|_1 + \frac{\tau}{2}\|\mathbf{x}_j - \sum_{i:\mathbf{x}_i \in \mathbf{\hat{X}}} c_{ij}\mathbf{x}_i \|_2^2, \quad \forall \ \mathbf{x}_j \in \mathbf{X}. \label{eq:newSSC} \end{equation} Note that $\mathbf{C}$ encodes information about the similarities between $\mathbf{\hat{X}}$ and $\mathbf{X}$. Besides, each row of $\mathbf{C}$ contains the distribution of representation coefficients of the whole image with respect to a single representative pixel. Spectral pixels belonging to the same land cover material are regionally distributed in the image, i.e., two spatially neighboring pixels in an SI usually have a high probability of belonging to the same class. Then, according to the self-expressiveness property, their representation coefficients with respect to the same sparse basis should also be very close; hence, each row of $\mathbf{C}$ should be piecewise-smooth. Therefore, an intuitive approach to include the spatial information and boost the clustering performance is to apply a 2D smoothing convolution on the sparse coefficients $\mathbf{C}$. 
Given a blur kernel matrix $\mathbf{I}_{K_s}$ of size $K_s \times K_s$, we will denote the 2D convolution process as $\mathbf{\hat{C}} = \mathcal{G}(\mathbf{C},\mathbf{I}_{K_s})$. Specifically, as depicted in Fig. \ref{fig:prop_method} within the dashed blue line, we propose to perform $\mathcal{G}$ by first reshaping each row of $\mathbf{C}$ to a window of size $N_r\times N_c$, which corresponds to the spatial dimensions of the SI, and then conducting the convolution with $\mathbf{I}_{K_s}$. Finally, the convolution result is rearranged back as a row vector of the piecewise-smooth coefficient matrix $\mathbf{\hat{C}}=\left[ \mathbf{\hat{c}}_1,\cdots,\mathbf{\hat{c}}_N \right] \in \mathbb{R}^{M\times N}$. \begin{algorithm}[t] \SetKwComment{tcp}{$\triangleright$ }{}% \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \KwIn{The spectral image in matrix form $\mathbf{X} \in \mathbb{R}^{D\times N}$, parameters $\tau >1$, $0<\rho <1$, $E > 1$, $K_s>1$.} \KwOut{The segmentation of $\mathbf{X}$.} $\mathbf{X}_{PCA}\in \mathbb{R}^{3\times N}\leftarrow PCA(\mathbf{X})$ $\mathbf{\tilde{M}} \leftarrow Superpixels(\mathbf{X}_{PCA})$ $\mathbf{\tilde{m}} \leftarrow \text{vec}(\mathbf{\tilde{M}})$ $\mathbf{\hat{X}}^{(1)} \leftarrow \emptyset$ \For{$e \leftarrow 1$ \KwTo $E$}{ $\mathbf{p}_e = \left\{ j : \mathbf{\tilde{m}}_j = e, \forall j\in \{1,\cdots,N\} \right\} $ $\mathbf{X}_{e} \leftarrow Data\_Selection(\mathbf{X},\mathbf{p}_e, \rho, \tau)$ $\mathbf{\hat{X}}^{(e+1)} = \mathbf{\hat{X}}^{(e)} \cup \mathbf{X}_e$ } Compute $\mathbf{C} = \left[ \mathbf{c}_1,\cdots,\mathbf{c}_N \right]$ by solving Eq. (\ref{eq:newSSC}). $\mathbf{I}_{K_s} = (1/K_s^2)\cdot\mathbf{1} \quad$ \tcp{where $\mathbf{1}$ is an all-ones matrix of size $K_s$} $\mathbf{\hat{C}} = \mathcal{G}(\mathbf{C},\mathbf{I}_{K_s})$ $\mathbf{\tilde{C}}=\left[ |\mathbf{\hat{c}}_1| / \|\mathbf{\hat{c}}_1\|_2, \cdots, |\mathbf{\hat{c}}_N| / \|\mathbf{\hat{c}}_N\|_2 \right]$ $\mathbf{\alpha} = \sum_{j=1}^{N} \mathbf{\tilde{c}}_j$ $\mathbf{D} = \text{diag}(\mathbf{\tilde{C}}^T \mathbf{\alpha})$ Run the $k$-means clustering algorithm on the top $k$ right singular vectors of $\mathbf{\tilde{C}D}^{-1/2}$ to obtain the segmentation of $\mathbf{X}$. \KwRet{The cluster assignments of $\mathbf{X}$} \caption{SC-SSC for land cover segmentation} \label{alg:SC-SSC_all} \end{algorithm} Since $\mathbf{\hat{C}}$ is not square, it is not feasible to directly build the affinity matrix $\mathbf{A}$ to be used with spectral clustering as in SSC \cite{von2007tutorial,elhamifar2013sparse}. To resolve this issue, we use a fast spectral clustering approach to efficiently obtain the spectral embedding of the input data. Specifically, let us consider the columns of $\mathbf{\tilde{C}}=\left[ \mathbf{\tilde{c}}_1,\cdots,\mathbf{\tilde{c}}_N \right] \in \mathbb{R}^{M \times N}$, where $\mathbf{\tilde{c}}_j=|\mathbf{\hat{c}}_j|/\|\mathbf{\hat{c}}_j\|_2$, and compute the $i$-th element of the degree matrix $\mathbf{D}$ as follows \begin{equation} (\mathbf{D})_i = \sum_{j=1}^{N} A_{ij} = \sum_{j=1}^{N} \mathbf{\tilde{c}}_i^T \mathbf{\tilde{c}}_j = \mathbf{\tilde{c}}_i^T \sum_{j=1}^{N} \mathbf{\tilde{c}}_j = \text{diag}(\mathbf{\tilde{C}}^T\boldsymbol{\alpha})_{i}, \end{equation} where $\boldsymbol{\alpha} = \sum_{j=1}^{N} \mathbf{\tilde{c}}_j \in \mathbb{R}^{M}$. 
Next, we can find the eigenvalue decomposition of $\mathbf{D}^{-1/2}\mathbf{AD}^{-1/2}$ by computing the singular value decomposition \cite{golub1971singular} of $\mathbf{\tilde{C}D}^{-1/2} \in \mathbb{R}^{M \times N}$. Finally, the segmentation of the data can be obtained by running the $k$-means algorithm on the top $k$ right singular vectors of $\mathbf{\tilde{C}D}^{-1/2}=\mathbf{U \Sigma P}^T$. As a result, the computational complexity of spectral clustering in our framework is linear with respect to the size of the data $N$, which makes it suitable for large-scale datasets. The proposed SC-SSC method is summarized in Algorithm \ref{alg:SC-SSC_all}, and its computational complexity is analyzed in Section \ref{sec:comp_complexity}. \subsection{Analysis of the Proposed Method} We now analyze how the proposed method promotes sparsity (the subspace-preserving property) and connectivity in the representation coefficient matrix. Furthermore, we analyze the computational complexity of Algorithm \ref{alg:SC-SSC_all}. \subsubsection{Subspace-preserving Property and Connectivity} As mentioned in Section \ref{sec:related_works}, one of the main requirements for the success of subspace clustering methods is that the optimization process recovers a subspace-preserving solution. Specifically, the non-zero entries of the sparse representation vector $\mathbf{c}_j$ should be related only to the intra-subspace samples of $\mathbf{x}_j$. Indeed, as the following definition states, the representation coefficients among intra-subspace data points are always larger than those among inter-subspace points. \theoremstyle{definition} \begin{definition}[\small \textbf{Intra-subspace projection dominance, IPD \cite{peng2016constructing}}] The IPD property of a coefficient matrix $\mathbf{C}$ indicates that for all $\mathbf{x}_u,\mathbf{x}_v \in \mathcal{S}$ and $\mathbf{x}_q \notin \mathcal{S}$, where $u,v,q \in \left\{1,\cdots,N\right\}$, and $\mathcal{S}$ is a subspace of $\mathbf{X}$, we have $C_{uv} \ge C_{uq}$. \label{def:IPD} \end{definition} Since the proposed method selects the most representative spectral pixels for each subset $e$ based on the self-representation property, it is expected that each subset is subspace-preserving, i.e., $c_{ij}$ is nonzero only if $\mathbf{x}_i$ and $\mathbf{x}_j$, for $i,j \in \mathbf{p}_e$, belong to the same subspace $\mathcal{S}$. Furthermore, a subset $e$ is very likely to contain mostly spectral pixels from the same class due to the spatial dependence in SIs; the resulting coefficient vectors will then have large values for those spectral pixels within $e$. Therefore, the strategy adopted in the proposed method will improve the structure of the vectors $\mathbf{c}_j$ obtained by Eq. (\ref{eq:newSSC}) and will increase the probability that $\mathbf{c}_j$ satisfies the IPD. Besides, using the 2D smoothing convolution procedure $\mathcal{G}(\mathbf{C},\mathbf{I}_{K_s})$, the proposed method improves the connectivity of the data points by preserving the most significant values in the coefficient matrix $\mathbf{C}$ and reducing the small or noisy isolated values, based on the IPD property \cite{peng2016constructing}. Then, the resulting matrix $\mathbf{\hat{C}}$ will have localized neighborhoods in the sparse codes, making the representation coefficients of spatially neighboring pixels very close as well, following our main assumption in Section \ref{subsec:enhanceSC}. 
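To complement the derivation above, the final stage of Algorithm \ref{alg:SC-SSC_all} (2D smoothing, column normalization, degree computation, and truncated-SVD embedding) can be sketched as follows. This is an illustrative sketch assuming SciPy and scikit-learn are available, not the released implementation; the box filter realizes $\mathcal{G}(\mathbf{C},\mathbf{I}_{K_s})$ up to boundary handling.
\begin{verbatim}
# Illustrative sketch of the smoothing + fast spectral clustering
# stage; the N x N affinity A = C~^T C~ is never formed explicitly.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def smooth_and_cluster(C, Nr, Nc, k, Ks=8):
    """C: M x N coefficients; (Nr, Nc): spatial size, N = Nr*Nc."""
    # G(C, I_Ks): reshape each row to Nr x Nc and average it with a
    # Ks x Ks box kernel (the all-ones kernel scaled by 1/Ks^2).
    C_hat = np.stack([uniform_filter(row.reshape(Nr, Nc), size=Ks).ravel()
                      for row in C])
    # Column normalization: c~_j = |c^_j| / ||c^_j||_2.
    C_tilde = np.abs(C_hat) / (np.linalg.norm(C_hat, axis=0) + 1e-12)
    # Degrees without forming A: D_i = c~_i^T (sum_j c~_j).
    d = C_tilde.T @ C_tilde.sum(axis=1)
    # Top-k right singular vectors of C~ D^{-1/2} give the embedding.
    _, _, Vt = svds(C_tilde / np.sqrt(d)[None, :], k=k)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Vt.T)
\end{verbatim}
The key design point is that the embedding is extracted from an $M\times N$ matrix rather than an $N\times N$ graph Laplacian, which is what keeps this stage linear in $N$.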
\subsubsection{Computational Complexity Analysis} \label{sec:comp_complexity} As shown in Fig. \ref{fig:prop_method}, the proposed method mainly involves four stages: the extraction of spatial similarities, the selection of similarity-constrained representative spectral pixels, the sparse coefficient matrix estimation by solving Eq. (\ref{eq:newSSC}), and the enhancement of the representation coefficients for fast spectral clustering. Given a spectral image in matrix form $\mathbf{X}\in \mathbb{R}^{D\times N}$ and $E$ subsets $\mathbf{X}_e \subseteq \mathbf{X}$ of dimensions $D \times M_e$, with $M_e = \rho N_e$, we will show the complexity of each stage before establishing the total complexity of Algorithm \ref{alg:SC-SSC_all}. Specifically, in the first stage, we acquire the segmentation map $\mathbf{\tilde{m}}$ of the SI. This procedure involves computing PCA over $\mathbf{X}$ to retrieve only the three principal components, which takes $O(N)$, and performing SLIC superpixels \cite{achanta2012slic}, which also has linear time complexity $O(N)$. The second stage requires executing Algorithm \ref{alg:selection}, which has $O(\rho N_e^2)$ time complexity for subset $e$; over the $E$ subsets, the overall complexity of this stage is $O(\rho \max(N_1^2,\cdots,N_E^2))$. The third stage entails solving Eq. (\ref{eq:newSSC}), which is a LASSO problem that can be efficiently computed in $O(M^2N)$ using the LARS algorithm \cite{efron2004least}. Finally, in the last stage, the 2D convolution takes $O(N)$ since $K_s \ll N$, and, since for the spectral clustering we only need the $k$ largest singular values, we can use the truncated singular value decomposition (SVD), which takes $O(k^2N)$. Thus, the overall complexity of this stage is $O(k^2N)$. Therefore, the complexity of Algorithm \ref{alg:SC-SSC_all} is dominated by the complexity of the third stage; hence, it will run in $O(M^2N)=O(\rho^2N^3)$, where $\rho \in (0,1)$. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{false_color.pdf} \caption{False-color images and regions of interest (ROI) for the three real remote sensing images used in the experiments.} \label{fig:datasets} \end{figure} \section{Experimental Evaluation} \label{sec:experiments} In this section, we show the performance of SC-SSC\footnote{A MatLab implementation of Algorithm \ref{alg:SC-SSC_all} can be found at \url{https://xxx.xxxxx.xxx/xxxx}.} for land cover segmentation. The sparse optimization problems in Eqs. (\ref{eq:ftau}) and (\ref{eq:newSSC}) are solved by the LASSO version of the LARS algorithm \cite{efron2004least} implemented in the SPAMS package \cite{mairal2010online}. All the experiments were run on an Intel Core i7 9750H CPU (2.60GHz, 6 cores) with $32$ GB of RAM. \subsection{Setup} \textbf{Databases}. The proposed subspace clustering approach (SC-SSC) was tested on three well-known hyperspectral images\footnote{\url{http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes}.} with different imaging environments; see Fig. \ref{fig:datasets}. The \textbf{Indian Pines} hyperspectral data set has $145\times 145$ pixels and $200$ spectral bands in the range of $0.4$--$2.5\,\mu$m. The second scene, \textbf{Salinas}, has $512 \times 217$ pixels and $204$ spectral bands in the range of $0.24$--$2.40\,\mu$m. The third scene, \textbf{University of Pavia}, comprises $610 \times 340$ pixels and has $103$ spectral bands with spectral coverage ranging from $0.43$ to $0.84\,\mu$m. 
In order to make a fair comparison with non-scalable methods, we select, for each image, the most frequently used region of interest (ROI) in spectral image clustering, as shown in Fig. \ref{fig:datasets}. The Indian Pines ROI has a size of $70\times 70$ pixels and includes four main land-cover classes: corn-no-till, grass, soybeans-no-till, and soybeans-min-till. The Salinas ROI comprises $83 \times 83$ pixels and includes six classes: brocoli-1, corn-senesced, lettuce-4wk, lettuce-5wk, lettuce-6wk, and lettuce-7wk. Finally, the University of Pavia ROI is composed of $200\times 200$ pixels and includes all nine classes of the full image: asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows. For all experiments (including the baseline methods), we reduce the spectral dimension of each image using PCA to $D=0.25L$, where $L$ is the number of spectral bands. Then, we rearrange the data cube to form a matrix $\mathbf{X} \in \mathbb{R}^{D\times N}$ and normalize the columns (spectral pixels) to have unit $\ell_2$ norm. \textbf{Baselines and Evaluation Metrics.} We compare our approach with the SSC-based methods highlighted in Section \ref{sec:related_works}: SSC \cite{elhamifar2013sparse}, SSSC \cite{peng2013scalable}, SSC-OMP \cite{you2016scalable}, ORGEN \cite{you2016oracle}, SR-SSC \cite{abdolali2019scalable}, ESC-FFS \cite{you2018scalable}, S-SSC \cite{zhang2016spectral}, and 3DS-SSC \cite{hinojosa2021hyperspectral}. To make a fair comparison, we compare the performance of the proposed SC-SSC algorithm with the non-scalable methods (SSC, S-SSC, 3DS-SSC, and ORGEN) on the ROIs of the remote sensing images shown in Fig. \ref{fig:datasets}. Then, we compare the performance of SC-SSC with the scalable methods (SSC-OMP, ESC-FFS, SR-SSC, and SSSC) on the full hyperspectral images. For the sake of completeness, we also compare our approach with non-SSC-based methods that use fast spectral clustering \cite{wei2019fast,wang2017fast,wang2019scalable}. Specifically, we gather the clustering results on the Indian Pines image reported in those works and compare them against the proposed method. To assess the clustering performance of our model, we rely on five standard metrics: user's accuracy (UA), average accuracy (AA), overall accuracy (OA), Kappa coefficient, and normalized mutual information (NMI) \cite{lillesand2015remote,strehl2002cluster}. In particular, UA, AA, OA, and the Kappa coefficient can be obtained by means of an error matrix (a.k.a. confusion matrix) \cite{lillesand2015remote}. UA represents the clustering accuracy of each class, AA is the mean of the UA values, and OA is computed by dividing the total number of correctly classified pixels by the total number of reference pixels. UA, AA, and OA values are presented as percentages, while Kappa coefficients and NMI values range from 0 (poor clustering) to 1 (perfect clustering). We also compare the methods in terms of clustering time, and the results are presented in seconds. \subsection{Parameter analysis and tuning} The parameters $\rho, E$, and $K_s$ of Algorithm \ref{alg:SC-SSC_all} were manually adjusted for each dataset. We conduct different experiments varying each parameter, with the others fixed, to obtain the best overall accuracy on each spectral image. During simulations, we observe that the parameter $\rho$ has a direct impact on the execution time of the proposed method. 
Figure \ref{fig:rho_plot} presents the running time of SC-SSC as a function of $\rho$ for all the databases. As shown, increasing $\rho$ directly increases the running time; however, the largest increase in time is driven by the number of spectral pixels $N$, as observed from the differences in time between the curves. As we analyze in Section \ref{sec:comp_complexity}, this behavior is expected since the computational complexity of the algorithm is $O(\rho^2N^3)$. \begin{figure}[t] \centering \hspace{-0.2in}\includegraphics[width=0.75\columnwidth]{rho_plot.pdf} \caption{Running time (seconds) as a function of $\rho$.} \label{fig:rho_plot} \end{figure} \begin{table}[t] \centering {\footnotesize \caption{Selected parameters in Algorithm \ref{alg:SC-SSC_all} for each testing spectral image.} \label{tab:parameters}} \resizebox{0.82\columnwidth}{!}{% \begin{tabular}{cccc} \hline \textbf{Parameter} & \textbf{Indian Pines} & \textbf{Salinas} & \textbf{University of Pavia} \\ \hline $\rho$ & 0.35 & 0.2 & 0.3 \\ $E$ & 1700 & 700 & 1900 \\ $K_s$ & 8 & 3 & 8 \\ \hline \end{tabular} } \end{table} \begin{figure*} \centering \includegraphics[width=\linewidth]{All_parameters_propM.pdf} \caption{Experimental results of different combinations of the parameters $\rho,E$, and $K_s$ for a fixed $\tau$. Each row presents the 3D bar plot of $\rho$ vs. $E$, $K_s$ vs. $E$, and $\rho$ vs. $K_s$ for each database, and the evaluation is given by the overall accuracy with values between 0 and 1. The plot $\rho$ vs. $E$ shows how the OA changes when the number of selected representative data points varies with respect to the number of superpixels $E$. $K_s$ vs. $E$ depicts how the OA is affected by the number of superpixels and the kernel size used in the 2D convolution to enhance the sparse coefficient matrix. Finally, $\rho$ vs. $K_s$ shows the change in OA when the number of selected representative data points varies and a specific kernel size is used in the 2D convolution. The three plots show the importance of an adequate balance between the selection of the number of representative pixels and the inclusion of spatial information in the spectral clustering algorithm.} \label{fig:all_param_propM} \end{figure*} To find the best configuration, the parameters were varied over the following values: $\tau \in \left\{5,10,15,20\right\}$, $\rho \in \left\{ 0.2,0.25,0.3,0.35 \right\}$, $E \in \{ 100,300,500,700,900,$ $1100,1300,1500,1700,1900 \}$, $K_s \in \left\{ 3,5,8,16 \right\}$. Figure \ref{fig:all_param_propM} shows the performance of the proposed method with different combinations of the parameters for all the databases, where the overall accuracy is shown between 0 and 1 and the parameter $\tau$ was fixed. Given the results of the different combinations of the parameters, we selected the best ones and summarized them in Table \ref{tab:parameters}. By analyzing Figure \ref{fig:all_param_propM}, we observe that the accuracy changes with different values of $\rho$ and $K_s$. The parameter $\rho$ determines the number of selected most-representative data points within each of the $E$ segments, and $K_s$ is the kernel size used in the 2D convolution. We can thus conclude that an adequate balance between the selection of the number of representative pixels and the inclusion of spatial information in the spectral clustering algorithm is crucial to obtain the best performance. 
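The parameter study just described amounts to an exhaustive grid search over the values listed above. A minimal, self-contained sketch is given below; \texttt{score} stands for any (hypothetical) callable that runs Algorithm \ref{alg:SC-SSC_all} with a given configuration and returns the resulting overall accuracy.
\begin{verbatim}
# Illustrative exhaustive grid search over (tau, rho, E, Ks);
# `score(cfg)` is a hypothetical wrapper around Algorithm 2 + OA.
from itertools import product

def grid_search(score):
    grid = {'tau': [5, 10, 15, 20],
            'rho': [0.2, 0.25, 0.3, 0.35],
            'E':   [100, 300, 500, 700, 900,
                    1100, 1300, 1500, 1700, 1900],
            'Ks':  [3, 5, 8, 16]}
    keys = list(grid)
    configs = (dict(zip(keys, vals)) for vals in product(*grid.values()))
    return max(configs, key=score)   # best configuration by OA
\end{verbatim}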
In order to make a fair comparison, we also performed several simulations with the baseline methods to manually select the best parameters in their configurations. Table \ref{table:selectParamsOM} presents the selected parameters for each method after running the experiments. We use the same parameter symbols as in the original works. Note that, for the SSC, S-SSC, and 3DS-SSC algorithms, we select the same parameters reported in \cite{hinojosa2018coded} and \cite{hinojosa2021hyperspectral}, since they were already tuned for spectral image clustering. \begin{table}[t] \centering {\footnotesize \caption{Selected parameters for the baseline methods.} \label{table:selectParamsOM}} \resizebox{0.92\columnwidth}{!}{% \begin{tabular}{@{}lccc@{}} \toprule \textbf{} & \textbf{Indian Pines} & \textbf{Salinas} & \textbf{University of Pavia} \\ \midrule \textbf{ESC-FFS \cite{you2018scalable}} & $\lambda$=10, k=700, t=10. & $\lambda$=20, k=700, t=10. & $\lambda$=15, k=700, t=20. \\ \midrule \textbf{SR-SSC \cite{abdolali2019scalable}} & \begin{tabular}[c]{@{}c@{}}$\alpha$=200, nGraph=5,\\ Nsample=10.\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\alpha$=300, nGraph=20,\\ Nsample=20.\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\alpha$=140, nGraph=15,\\ Nsample=10.\end{tabular} \\ \midrule \textbf{ORGEN \cite{you2016oracle}} & \begin{tabular}[c]{@{}c@{}}$\lambda$=0.8, nu=10,\\ Nsample=200.\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\lambda$=0.95, nu=50,\\ Nsample=400.\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\lambda$=0.8, nu=100,\\ Nsample=400.\end{tabular} \\ \midrule \textbf{SSSC \cite{peng2013scalable}} & $\lambda$=0.01, tol=0.0001. & $\lambda$=0.001, tol=0.001. & $\lambda$=1e-06, tol=0.01. \\ \midrule \textbf{SSC-OMP \cite{you2016scalable}} & K=40, thr=1e-07 & K=30, thr=0.001 & K=10, thr=1e-06 \\ \bottomrule \end{tabular} } \end{table} \begin{table}[t] {\footnotesize \caption{Ablation study of our method. 
The configuration shown in bold (Experiment VI) corresponds to our proposed approach, which leads to the best results in terms of OA and NMI.} \label{table:ablations}} \resizebox{\columnwidth}{!}{% \setlength{\tabcolsep}{3pt} \begin{tabular}{cccccccccc} & & & & \multicolumn{2}{c}{Indian Pines} & \multicolumn{2}{c}{Pavia} & \multicolumn{2}{c}{Salinas} \\ \hline \multicolumn{1}{c|}{Experiment} & PCA & Superpixels & \multicolumn{1}{c|}{2D Conv} & OA & \multicolumn{1}{c|}{NMI} & OA & \multicolumn{1}{c|}{NMI} & OA & NMI \\ \hline \multicolumn{1}{c|}{I} & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \multicolumn{1}{c|}{} & 64.36 & \multicolumn{1}{c|}{0.36} & 37.78 & \multicolumn{1}{c|}{0.49} & 74.26 & 0.65 \\ \multicolumn{1}{c|}{II} & & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \multicolumn{1}{c|}{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;} & 80.53 & \multicolumn{1}{c|}{0.63} & 67.04 & \multicolumn{1}{c|}{0.81} & 84.63 & 0.86 \\ \multicolumn{1}{c|}{III} & & & \multicolumn{1}{c|}{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;} & 41.72 & \multicolumn{1}{c|}{0.29} & 38.11 & \multicolumn{1}{c|}{0.46} & 43.33 & 0.42 \\ \multicolumn{1}{c|}{IV} & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & & \multicolumn{1}{c|}{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;} & 41.81 & \multicolumn{1}{c|}{0.3} & 38.21 & \multicolumn{1}{c|}{0.47} & 49.53 & 0.41 \\ \multicolumn{1}{c|}{V} & & \tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \multicolumn{1}{c|}{} & 65.84 & \multicolumn{1}{c|}{0.37} & 47.60 & \multicolumn{1}{c|}{0.49} & 73.29 & 0.77 \\ \multicolumn{1}{c|}{\textbf{VI}} & \textbf{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;} & \textbf{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;} & \multicolumn{1}{c|}{\textbf{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;}} & \textbf{93.14} & \multicolumn{1}{c|}{\textbf{0.8}} & \textbf{77.57} & \multicolumn{1}{c|}{\textbf{0.82}} & \textbf{99.42} & \textbf{0.98} \\ \hline \end{tabular}} \end{table} \subsection{Ablation Studies} We conduct six ablation experiments to investigate different configurations for the proposed subspace clustering approach. Specifically, we compare the performance of the proposed workflow in Fig. \ref{fig:prop_method} when incorporating/excluding PCA, superpixels, and the 2D convolution. Table \ref{table:ablations} presents the results obtained from the different combinations in terms of OA and NMI for the three tested images. We observe that using superpixels to extract spatial similarities improves the clustering performance for the three tested images in all the cases, which evidences the importance of the neighboring spatial information in our workflow. Also, using superpixels and the 2D convolution (Experiment II) leads to the second-best result, while only using the 2D convolution (Experiment III) does not lead to a significant clustering improvement. Finally, Experiment VI corresponds to our proposed approach, where we show that we achieve the best results in terms of OA and NMI when using the three operations as described in the workflow in Fig. \ref{fig:prop_method}. 
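Since all the comparisons that follow rely on the metrics described in the setup, we note that they can be computed directly from a confusion matrix once the arbitrary cluster labels have been aligned to the ground truth. The sketch below is illustrative only: it assumes SciPy and scikit-learn, class labels encoded as $0,\dots,k-1$, and computes the per-class accuracies row-wise (other UA conventions exist).
\begin{verbatim}
# Illustrative evaluation: Hungarian alignment of cluster labels,
# then OA/AA/Kappa from the confusion matrix; NMI needs no alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (confusion_matrix, cohen_kappa_score,
                             normalized_mutual_info_score)

def evaluate(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    rows, cols = linear_sum_assignment(-cm)   # best cluster->class map
    remap = dict(zip(cols, rows))
    y_al = np.array([remap.get(p, p) for p in y_pred])
    cm = confusion_matrix(y_true, y_al)
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    return {'OA': np.trace(cm) / cm.sum(),
            'AA': per_class.mean(),
            'Kappa': cohen_kappa_score(y_true, y_al),
            'NMI': normalized_mutual_info_score(y_true, y_pred)}
\end{verbatim}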
\subsection{Visual and Quantitative Results} \label{sec:experiments_VQ} \subsubsection{Comparison with non-scalable methods} Figure \ref{fig:visual_results} presents the obtained land cover maps on the Indian Pines, Salinas, and University of Pavia ROIs, where we compare the performance of our SC-SSC method with the non-scalable methods: SSC, S-SSC, ORGEN, and 3DS-SSC. The quantitative evaluations corresponding to UA, AA, OA, Kappa, NMI, and time with the non-scalable clustering methods are reported in Table \ref{table:QuantRes}, in which the best results are shown in bold and the second-best are underlined. From Table \ref{table:QuantRes}, it can be clearly observed that, in general, the proposed SC-SSC method performs better than the others. Specifically, SC-SSC achieves an OA of $93.14\%$ and $99.42\%$, in only $1.63$ and $2.06$ seconds, for the Indian Pines and Salinas datasets, respectively, which are remarkable results for unsupervised learning settings. Similarly, for the University of Pavia ROI, it is observed from Table \ref{table:QuantRes} that the proposed SC-SSC achieves the best clustering performance among all the algorithms in every accuracy evaluation metric. \subsubsection{Comparison with scalable methods} We now compare the performance of SC-SSC with the scalable approaches: SSC-OMP, SSSC, ESC-FFS, and SR-SSC. Figure \ref{fig:visual_results_full} and Table \ref{table:QuantRes_full} present the visual and quantitative results, respectively, on the full spectral images. From both the qualitative and quantitative results, we observe that the proposed SC-SSC method outperforms the other approaches in terms of OA, Kappa, and NMI score. Note that, although the proposed method is not always the fastest one, it provides much higher clustering performance at a comparable running time. \begin{figure*} \centering \includegraphics[width=\linewidth]{All_visual_joint.pdf} \caption{Land cover maps of (first row) Indian Pines ROI, (second row) Salinas ROI, and (last row) University of Pavia ROI. 
The proposed method is compared with the methods that perform best on these spectral images.} \label{fig:visual_results} \end{figure*} \begin{table}[t] \centering {\footnotesize \caption{Quantitative results of Indian Pines, Salinas, and University of Pavia ROIs.} \label{table:QuantRes}} \resizebox{0.8\columnwidth}{!}{% \setlength{\tabcolsep}{3pt} \def\arraystretch{0.95}% \begin{tabular}{@{}cccccc@{}} \toprule \textbf{Class} & \textbf{SSC} & \textbf{S-SSC} & \textbf{ORGEN} & \textbf{3DS-SSC} & \textbf{SC-SSC} \\ \midrule \multicolumn{6}{c}{Indian Pines ROI} \\ \midrule Corn-no-till & \textbf{97.43} & 83.98 & 93.19 & 73.24 & {\ul 95.53} \\ Grass & 89.14 & {\ul 89.50} & \textbf{92.47} & 83.64 & 76.50 \\ Soybean-no-till & 41.36 & 62.38 & 54.33 & {\ul 73.73} & \textbf{98.10} \\ Soybeans-min-till & 61.57 & 75.93 & 64.34 & {\ul 86.62} & \textbf{94.47} \\ \midrule AA & 69.35 & 79.09 & 71.84 & {\ul 82.07} & \textbf{94.69} \\ OA & 62.62 & 76.16 & 69.91 & {\ul 80.41} & \textbf{93.14} \\ Kappa & 0.48 & 0.66 & 0.56 & {\ul 0.72} & \textbf{0.90} \\ NMI & 0.39 & 0.47 & 0.42 & {\ul 0.57} & \textbf{0.79} \\ \midrule Time {[}s{]} & {\ul 285.57} & 301.41 & 668.36 & 341.21 & \textbf{1.63} \\ \midrule \multicolumn{6}{c}{Salinas ROI} \\ \midrule Brocoli-1 & \textbf{100.00} & \textbf{100.00} & \textbf{100.00} & {\ul 87.28} & \textbf{100.00} \\ Corn-senesced & \textbf{100.00} & 99.45 & {\ul 99.63} & 99.46 & \textbf{100.00} \\ Lettuce-4wk & 0.00 & 44.79 & 45.86 & {\ul 53.24} & \textbf{97.88} \\ Lettuce-5wk & 70.19 & 97.72 & 98.74 & \textbf{100.00} & {\ul 98.87} \\ Lettuce-6wk & {\ul 98.48} & 84.86 & 96.58 & \textbf{100.00} & \textbf{100.00} \\ Lettuce-7wk & 98.97 & {\ul 99.48} & 99.07 & 99.16 & \textbf{100.00} \\ \midrule AA & 75.75 & 88.13 & 89.51 & {\ul 91.42} & \textbf{99.37} \\ OA & 77.16 & 83.09 & 85.27 & {\ul 88.26} & \textbf{99.42} \\ Kappa & 0.71 & 0.79 & 0.82 & {\ul 0.86} & \textbf{0.99} \\ NMI & 0.85 & 0.84 & {\ul 0.87} & {\ul 0.87} & \textbf{0.98} \\ \midrule Time {[}s{]} & {\ul 319.42} & 327.66 & 1355.79 & 377.11 & \textbf{2.06} \\ \midrule \multicolumn{6}{c}{University of Pavia ROI} \\ \midrule Asphalt & 1.47 & 53.90 & 33.38 & \textbf{93.56} & {\ul 92.17} \\ Meadows & 30.02 & 37.34 & {\ul 49.64} & 30.87 & \textbf{93.87} \\ Gravel & \textbf{6.48} & {\ul 0.61} & 0.00 & 0.00 & 0.00 \\ Trees & 90.37 & 0.00 & 97.08 & \textbf{100.00} & {\ul 97.92} \\ Metal Sheets & 18.48 & 79.03 & \textbf{100.00} & 86.21 & {\ul 98.74} \\ Bare Soil & 95.53 & 89.28 & \textbf{99.71} & 93.94 & {\ul 98.75} \\ Bitumen & 47.15 & {\ul 72.13} & 64.55 & 50.42 & \textbf{83.60} \\ Bricks & 0.00 & {\ul 51.69} & 0.06 & 0.00 & \textbf{72.36} \\ Shadows & 22.77 & 0.00 & \textbf{100.00} & {\ul 53.27} & 28.57 \\ \midrule AA & 42.71 & 43.26 & {\ul 63.17} & 62.01 & \textbf{72.98} \\ OA & 32.68 & 46.59 & 58.49 & {\ul 65.07} & \textbf{77.57} \\ Kappa & 0.23 & 0.38 & 0.50 & {\ul 0.56} & \textbf{0.72} \\ NMI & 0.41 & 0.49 & 0.59 & {\ul 0.65} & \textbf{0.82} \\ \midrule Time {[}s{]} & 17821.25 & 10195.89 & {\ul 551.02} & 10501.29 & \textbf{68.72} \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \centering {\footnotesize \caption{Quantitative comparison with unsupervised deep learning-based methods in terms of NMI score.} \label{tab:deep-cmp}} \resizebox{\columnwidth}{!}{% \setlength{\tabcolsep}{3.5pt} \begin{tabular}{c|ccccc} \hline \backslashbox{Datasets}{Methods} & VAE \cite{tulczyjew2020unsupervised} & 3D-CAE \cite{nalepa2020unsupervised} & AE-GRU \cite{tulczyjew2020unsupervised} & AE-LSTM \cite{tulczyjew2020unsupervised} & SC-SSC \\ 
\hline Indian Pines & 0.429 & 0.504 & {\ul 0.515} & 0.478 & \textbf{0.601} \\ \hline Salinas & 0.722 & {\ul 0.839} & 0.825 & 0.830 & \textbf{0.892} \\ \hline University of Pavia & 0.505 & {\ul 0.639} & 0.524 & 0.569 & \textbf{0.643} \\ \hline \end{tabular}} \end{table} \begin{table}[t] \centering {\footnotesize \caption{Quantitative results of Indian Pines, Salinas, and University of Pavia Full Images.} \label{table:QuantRes_full}} \resizebox{0.9\columnwidth}{!}{% \setlength{\tabcolsep}{3pt} \def\arraystretch{0.95}% \begin{tabular}{@{}cccccc@{}} \toprule \textbf{Class} & \textbf{SSC-OMP} & \textbf{SSSC} & \textbf{ESC-FFS} & \textbf{SR-SSC} & \textbf{SC-SSC} \\ \midrule \multicolumn{6}{c}{Indian Pines} \\ \midrule Alfalfa & \textbf{7.69} & {\ul 6.90} & 0.60 & 0.00 & 0.00 \\ Corn-no-till & 37.04 & 34.33 & {\ul 57.79} & 47.95 & \textbf{66.96} \\ Corn-min-till & {\ul 20.83} & 17.46 & 17.45 & 17.84 & \textbf{55.25} \\ Corn & {\ul 22.73} & 15.56 & 11.34 & 8.85 & \textbf{27.25} \\ Grass-pasture & 21.05 & {\ul 35.53} & 34.81 & 32.39 & \textbf{90.52} \\ Grass-trees & 57.14 & \textbf{84.91} & 77.94 & 70.11 & {\ul 77.05} \\ Grass-pasture-mowed & \textbf{0.45} & 0.00 & {\ul 0.29} & 0.00 & 0.00 \\ Hay-windrowed & 25.81 & {\ul 88.29} & 85.91 & 76.91 & \textbf{90.53} \\ Oats & 0.00 & {\ul 3.36} & 0.02 & \textbf{4.36} & 0.00 \\ Soybean-no-till & 18.18 & 30.89 & {\ul 38.54} & 36.11 & \textbf{64.15} \\ Soybean-min-till & 28.67 & 52.25 & {\ul 55.12} & 48.52 & \textbf{62.23} \\ Soybean-clean & 6.32 & {\ul 22.17} & 17.44 & 15.76 & \textbf{32.10} \\ Wheat & 8.24 & 37.60 & 37.53 & {\ul 56.35} & \textbf{66.13} \\ Woods & 11.54 & 88.22 & 81.03 & {\ul 89.42} & \textbf{91.49} \\ Building-grass-trees-drives & 5.56 & 17.33 & {\ul 25.51} & 22.15 & \textbf{67.34} \\ Stone-steel-towers & 0.00 & {\ul 45.20} & 18.53 & 38.14 & \textbf{49.73} \\ \midrule AA & 9.64 & {\ul 40.55} & 33.94 & 40.26 & \textbf{56.37} \\ OA & 12.84 & 35.23 & 36.01 & {\ul 39.99} & \textbf{59.76} \\ Kappa & 0.03 & 0.29 & 0.30 & {\ul 0.33} & \textbf{0.55} \\ NMI & 0.03 & 0.42 & 0.43 & {\ul 0.44} & \textbf{0.60} \\ \midrule Time {[}s{]} & 37.93 & 32.35 & 47.15 & \textbf{16.39} & {\ul 19.33} \\ \midrule \multicolumn{6}{c}{Salinas} \\ \midrule Brocoli 1 & 11.34 & 0.00 & \textbf{100.00} & 0.00 & {\ul 98.38} \\ Brocoli 2 & 28.88 & 57.41 & \textbf{99.68} & {\ul 62.98} & \textbf{99.68} \\ Fallow & 9.70 & {\ul 78.98} & 72.16 & 0.00 & \textbf{99.88} \\ Fallow Plow & 7.43 & {\ul 91.56} & 89.52 & \textbf{94.35} & 47.71 \\ Fallow Smooth & 22.58 & 58.11 & \textbf{71.98} & {\ul 71.82} & 64.94 \\ Stubble & 21.54 & 99.05 & {\ul 99.70} & \textbf{99.91} & 95.67 \\ Celery & 22.08 & 97.12 & {\ul 97.78} & 85.62 & \textbf{99.94} \\ Grapes & 3.47 & {\ul 70.19} & 58.44 & \textbf{70.68} & 60.92 \\ Soil & 24.68 & 91.18 & {\ul 95.91} & 85.58 & \textbf{96.88} \\ Corn & 3.91 & {\ul 61.92} & 26.70 & 58.04 & \textbf{99.76} \\ Lettuce 4 & 3.97 & \textbf{79.10} & {\ul 24.01} & 0.00 & 0.00 \\ Lettuce 5 & 4.42 & \textbf{81.09} & {\ul 65.14} & 51.99 & 62.81 \\ Lettuce 6 & {\ul 1.90} & \textbf{41.17} & - & 0.00 & 0.00 \\ Lettuce 7 & 0.00 & \textbf{58.92} & {\ul 55.09} & 51.18 & 43.70 \\ Vineyard & 11.57 & \textbf{56.92} & 25.00 & {\ul 49.22} & 0.00 \\ Vineyard trellis & 3.05 & 0.00 & 98.53 & {\ul 98.89} & \textbf{100.00} \\ \midrule AA & 10.95 & 64.76 & \textbf{73.15} & 61.57 & {\ul 73.03} \\ OA & 10.74 & 71.44 & {\ul 73.47} & 70.23 & \textbf{76.17} \\ Kappa & 0.05 & 0.68 & {\ul 0.70} & 0.67 & \textbf{0.73} \\ NMI & 0.16 & 0.75 & {\ul 0.83} & 0.78 & \textbf{0.87} \\ \midrule Time {[}s{]}
& \textbf{78.18} & {\ul 58.29} & 615.40 & 168.18 & 906.71 \\ \midrule \multicolumn{6}{c}{University of Pavia} \\ \midrule Asphalt & 20.67 & 60.00 & \textbf{95.60} & 68.64 & {\ul 73.52} \\ Meadows & 48.11 & {\ul 85.58} & 71.91 & 77.98 & \textbf{96.81} \\ Gravel & \textbf{13.73} & 0.15 & 0.06 & 1.25 & {\ul 11.73} \\ Trees & 9.54 & 31.28 & 18.03 & \textbf{72.63} & {\ul 32.48} \\ Metal sheets & 81.71 & \textbf{97.01} & 36.74 & 60.15 & {\ul 86.22} \\ Bare soil & 0.00 & 5.75 & {\ul 24.15} & 15.04 & \textbf{97.53} \\ Bitumen & 0.00 & {\ul 4.26} & \textbf{27.80} & 0.00 & 0.00 \\ Bricks & 0.00 & {\ul 55.27} & \textbf{60.70} & 39.11 & 54.54 \\ Shadows & 0.00 & 43.32 & {\ul 62.99} & \textbf{96.45} & 0.00 \\ \midrule AA & 16.48 & 39.76 & \textbf{56.87} & {\ul 53.50} & 52.23 \\ OA & 35.34 & 51.00 & 42.61 & {\ul 53.15} & \textbf{69.79} \\ Kappa & 0.07 & 0.39 & 0.33 & {\ul 0.42} & \textbf{0.61} \\ NMI & 0.07 & 0.39 & 0.49 & {\ul 0.52} & \textbf{0.64} \\ \midrule Time {[}s{]} & 201.52 & \textbf{73.45} & 1821.58 & {\ul 148.55} & 913.67 \\ \bottomrule \end{tabular} } \vspace{-0.1in} \end{table} \begin{figure*} \centering \includegraphics[width=\linewidth]{full_results.pdf} \caption{Land cover maps on the Indian Pines (IP), Salinas Valley (SA), and University of Pavia (PU) Full images. The proposed method is compared only with the scalable SSC-based methods we found in the literature.} \label{fig:visual_results_full} \end{figure*} \subsubsection{Comparison with unsupervised deep-learning-based methods} For the sake of completeness, we compare the proposed SC-SSC method with unsupervised deep-learning-based methods based on autoencoders (AE) for spectral image clustering. Three of them were proposed in \cite{tulczyjew2020unsupervised} (VAE, AE-GRU, and AE-LSTM), and the 3D-CAE method was proposed in \cite{nalepa2020unsupervised}, which is based on a 3D convolutional AE. Note that we only compare our method with fully unsupervised deep learning approaches to make a fair comparison. Table \ref{tab:deep-cmp} shows the quantitative results in terms of the NMI score. In the table, the best result is shown in bold font, and the second-best is underlined. As observed, our method obtains an NMI score of $0.601$, $0.892$, and $0.643$ on the Indian Pines, Salinas, and University of Pavia full spectral images, respectively, corresponding to the highest clustering scores. \section{Conclusion} \label{sec:conclusions} In this work, we presented a new subspace clustering algorithm for land cover segmentation which can handle large-scale datasets and take advantage of spectral images' neighboring spatial information to boost the clustering accuracy. Our method considers the spatial similarity among spectral pixels to select the most representative ones, such that all other neighboring points can be well represented by those representative pixels in terms of a sparse representation cost. Then, the obtained sparse coefficient matrix is enhanced by filtering the coefficients, and a fast spectral clustering algorithm gives the segmentation. Through simulations using traditional test spectral images, we demonstrated the effectiveness of our method for fast land cover segmentation, obtaining remarkably high clustering performance when compared with state-of-the-art SSC algorithms and even novel unsupervised-deep-learning-based methods. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
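On the computation of the reported metrics: since cluster labels are arbitrary, OA, AA, and Kappa require first mapping each predicted cluster to a ground-truth class. The sketch below is a hedged illustration of the common practice (it assumes Python with SciPy and scikit-learn and integer labels starting at zero; it is not the evaluation code used in this work); NMI is permutation-invariant and needs no mapping.

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import cohen_kappa_score, normalized_mutual_info_score

def best_map(y_true, y_pred):
    # relabel clusters to best match the ground truth (Hungarian matching)
    d = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((d, d), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)      # maximize agreement
    remap = dict(zip(cols, rows))
    return np.array([remap[p] for p in y_pred])

def clustering_scores(y_true, y_pred):
    y_map = best_map(y_true, y_pred)
    oa = (y_map == y_true).mean()                  # overall accuracy
    aa = np.mean([(y_map[y_true == c] == c).mean()
                  for c in np.unique(y_true)])     # average accuracy
    kappa = cohen_kappa_score(y_true, y_map)
    nmi = normalized_mutual_info_score(y_true, y_pred)
    return oa, aa, kappa, nmi
\end{verbatim}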
\section{Introduction} The universal relation between the statistics of quantum spectra and classical mechanics is a fundamental cornerstone of quantum chaos: For systems with regular dynamics it was conjectured that spectral statistics show Poissonian behavior \cite{BeTa1977}. In contrast, systems with chaotic dynamics should be described by random matrix theory \cite{BoGiSc1984, CaVaGu1980}, which can be explained in terms of periodic orbits \cite{Be1985, SiRi2001, HeMuAlBrHa2007}. For generic Hamiltonian systems with a mixed phase space, in which disjoint regions of regular and chaotic motion coexist, universal spacing statistics were obtained by Berry and Robnik \cite{BeRo1984}. Their derivation is based on the semiclassical eigenfunction hypothesis \cite{Pe1973, Be1977, Vo1979}, which states that eigenfunctions of a quantum system semiclassically localize on those regions in phase space that a typical orbit explores in the long-time limit. For regular states in one-dimensional systems this corresponds to the WKB quantization condition \cite{LaLi1991, BeBaTaVo1979} \begin{align} \label{eq:bohr-sommerfeld} \oint_{\mathcal{C}_{m}} p \, \mathrm{d}{}q = h_{\mathrm{eff}} \left( m + \frac{1}{2} \right). \end{align} It shows that the regular state, labeled by the quantum number $m$, localizes on the quantizing torus $\mathcal{C}_{m}$ which encloses the area $h_{\mathrm{eff}} \left( m + \frac{1}{2} \right)$ in phase space. On the other hand, the semiclassical eigenfunction hypothesis implies that chaotic states uniformly extend over the chaotic region of phase space. Assuming that the disjoint regular and chaotic regions give rise to statistically uncorrelated level sequences, one obtains the Berry-Robnik level-spacing distribution \cite{BeRo1984}; see Fig.~\ref{fig:introPS} (dash-dotted lines). \begin{figure}[!b] \begin{center} \includegraphics[width=\linewidth]{fig1.eps} \caption{(Color online) Level-spacing distribution $P(s)$ of the model system (see Sec.~\ref{sec:modelSystem}) for $h_{\mathrm{eff}}\approx1/13$. The numerical data (black histogram) are compared to the flooding-improved Berry-Robnik distribution (red dashed lines), Eq.~(\ref{eq:effBerryRobnik}), as well as to the flooding- and tunneling-improved Berry-Robnik distribution (green solid lines), Eq.~(\ref{eq:ftibr_dist}), for system sizes (a) $M=1$ (weak flooding) and (b) $M=6765$ (strong flooding); $M$ is introduced in Sec.~\ref{sec:kickedSystems}. For comparison the Wigner distribution (dotted lines) and the Berry-Robnik distribution (dash-dotted lines) are shown. The insets show averaged Husimi functions of chaotic eigenstates.} \label{fig:introPS} \end{center} \end{figure} The assumption of uncorrelated regular and chaotic level sequences does not hold in the presence of dynamical tunneling \cite{DaHe1981, HaOtAn1984, BoToUl1993, ToUl1994, ShIk1995, BrScUl2001, BrScUl2002, ShFiGuRe2006, BaeKetLoeSch2008, LoBaKeSc2010, BaKeLo2010, DeGrHeHoReRi2000, StOsWiRa2001, He2001}, which quantum mechanically couples regular and chaotic states. If such tunneling couplings are small, regular eigenstates will typically have tiny chaotic admixtures and vice versa. The influence of such weak couplings on spacing statistics can be described perturbatively \cite{Le1993, PoNa2007, ViStRoKuHoGr2007, BaRo2010, BaKeLoMe2011}. Based on this description a tunneling-improved Berry-Robnik distribution was derived recently, which explains the power-law distribution of small spacings in mixed systems \cite{BaKeLoMe2011}.
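The uncorrelated-superposition picture can be made concrete numerically. The following Python sketch is our own illustration (not part of the original derivation); it models the regular subspectrum as a Poissonian sequence of relative density $A_{\mathrm{r}}$ and the chaotic subspectrum as a renewal sequence with Wigner-distributed spacings, and compares the spacings of the superposition with the Berry-Robnik formula, Eq.~\eqref{eq:berryRobnik} below, written out as an explicit second derivative.

\begin{verbatim}
import numpy as np
from scipy.special import erfc

rho = 0.32                        # relative density of regular levels
rng = np.random.default_rng(0)
n = 200000                        # total number of levels

# independent regular (Poisson) and chaotic (Wigner/Rayleigh) sequences
reg = np.cumsum(rng.exponential(1.0/rho, int(n*rho)))
cha = np.cumsum(rng.rayleigh(np.sqrt(2/np.pi)/(1-rho), n - int(n*rho)))
s = np.diff(np.sort(np.concatenate([reg, cha])))
s /= s.mean()                     # unfold to unit mean spacing

def p_br(s, rho):
    # d^2/ds^2 [exp(-rho s) erfc(c s)] with c = sqrt(pi)(1-rho)/2
    c = 0.5*np.sqrt(np.pi)*(1 - rho)
    return np.exp(-rho*s)*(rho**2*erfc(c*s)
        + (1 - rho)*(2*rho + 0.5*np.pi*(1 - rho)**2*s)*np.exp(-(c*s)**2))

hist, edges = np.histogram(s, bins=60, range=(0, 4), density=True)
mid = 0.5*(edges[1:] + edges[:-1])
print(np.abs(hist - p_br(mid, rho)).max())   # deviation shrinks with n
\end{verbatim}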
For systems with a large density of states, it is observed \cite{BoToUl1990,BoToUl1993,BaKeMo2005, BaKeMo2007, FeBaKeRoHuBu2006, IsTaSh2010} that a regular WKB state strongly couples to many chaotic states. As a consequence, the corresponding regular eigenstate disappears and chaotic eigenstates penetrate into the regular island, ignoring the semiclassical eigenfunction hypothesis. This effect is called flooding \cite{BaKeMo2005, BaKeMo2007}. It causes the number $N^{\mathrm{f}}_{\mathrm{r}}$ of regular eigenstates that actually exist in the regular island to be smaller than the number $N_{\mathrm{r}}^{\mathrm{sc}}$ expected from the semiclassical eigenfunction hypothesis. In Refs.~\cite{BaKeMo2005, BaKeMo2007} it was found that in addition to the WKB quantization condition \eqref{eq:bohr-sommerfeld} the regular state on the $m$th quantizing torus exists only if \begin{align} \label{eq:exCrit2} \gamma_{m} < \frac{1}{\tau_{\mathrm{H,c}}}. \end{align} Here, $\gamma_{m}$ is the tunneling rate, which describes the initial exponential decay of the $m$th WKB state to the chaotic region. The Heisenberg time $\tau_{\mathrm{H,c}}=h_{\mathrm{eff}}/\Delta_{\mathrm{c}}$ is the ratio of the effective Planck constant $h_{\mathrm{eff}}$ and the mean level spacing of the chaotic spectrum $\Delta_{\mathrm{c}}$. In this paper we study the consequences of flooding on spectral statistics in systems with a mixed phase space. With increasing density of states we observe a transition of the level-spacing distribution from Berry-Robnik [see Fig.~\ref{fig:introPS}(a)] to Wigner statistics [see Fig.~\ref{fig:introPS}(b)], although the underlying classical phase-space structure and $h_{\mathrm{eff}}$ remain unchanged. This transition is demonstrated quantitatively for model systems with a simple phase-space structure, but it is expected to hold for generic systems with a mixed phase space. In order to explain the transition, we introduce a flooding-improved Berry-Robnik distribution which takes into account that only $N^{\mathrm{f}}_{\mathrm{r}} \le N_{\mathrm{r}}^{\mathrm{sc}}$ regular states survive in the regular region. We find good agreement with numerical data; see Fig.~\ref{fig:introPS} (red dashed lines). We unify this intuitive prediction with the tunneling-improved Berry-Robnik distribution \cite{BaKeLoMe2011}, which explicitly considers the tunneling couplings between regular and chaotic states. This results in a flooding- and tunneling-improved Berry-Robnik distribution, which excellently reproduces the observed transition from Berry-Robnik to Wigner statistics as well as the power-law level repulsion at small spacings; see Fig.~\ref{fig:introPS} (green solid lines). This paper is organized as follows: In Sec.~\ref{sec:modelSystem} we introduce a family of model systems. Their level-spacing distribution is studied in Sec.~\ref{sec:specStatWithFlooding}, where we demonstrate the transition from Berry-Robnik to Wigner statistics numerically and explain it by the flooding of regular states. We conclude with a summary in Sec.~\ref{sec:outlook}. \section{Model System} \label{sec:modelSystem} In this section we introduce a family of model systems for which the consequences of flooding can be studied in detail. \subsection{Classical dynamics} \label{sec:kickedSystems} \begin{figure}[b] \begin{center} \includegraphics[width=\linewidth]{fig2.eps} \caption{(Color online) Phase-space portrait of the model system, Eq.~(\ref{eq:strobMap}).
For one unit cell $M=1$ (a) the regular island (red lines) is embedded in the chaotic sea (blue dots). For systems with $M>1$ (b) the phase space consists of $M$ such unit cells side by side. The arrows indicate the transport in the regular islands and in the chaotic sea.} \label{fig:phasespace} \end{center} \end{figure} We consider systems with a mixed phase space where classically disjoint regions of regular and chaotic motion coexist. As examples we choose one-dimensional kicked systems, described by the classical Hamilton function \begin{align} \label{eq:kickedHamiltonian} H(q,p,t) = T(p) + V(q) \sum_{n \in \mathbb{Z}} \delta(t-n), \end{align} where $T(p)$ is the kinetic energy and the potential $V(q)$ is applied once per kicking period. The dynamics of such systems is determined by the stroboscopic mapping $\mathcal{M}$ of the positions and the momenta $(q_{n}, p_{n})$ at times $t = n$ just after each kick \cite{Ha2001}, \begin{align} \label{eq:strobMap} \mathcal{M}: (q_{n+1}, p_{n+1}) = (q_{n} + T'(p_{n}), p_{n} - V'(q_{n+1})). \end{align} We design the example systems similarly to Refs.~\cite{HuKeOtSch2002,BaKeMo2005,BaeKetLoeSch2008,BaKeLo2010} by the piecewise linear functions \begin{align} \label{eq:piecelinfunc_t} t'(p) &= \begin{cases} -1 + s_{1} (p + 1/4) & \mathrm{for }\ \ p \in \ ]-1/2, 0[,\\ +1 - s_{2} (p - 1/4) & \mathrm{for }\ \ p \in \ ]0, 1/2[, \end{cases}\\ \label{eq:piecelinfunc_v} v'(q) &= -rq -(1-r) \lfloor q + 1/2 \rfloor, \end{align} where $\lfloor x \rfloor$ is the floor function and $t'(p)$ is periodically extended. Smoothing the functions $t'(p)$ and $v'(q)$ with a Gaussian $G_{\epsilon}(z) = \exp(-z^{2}/2\epsilon^{2})/\sqrt{2\pi\epsilon^{2}}$, one obtains analytic functions \begin{align} \label{eq:anaFunc_T} T'(p) &= \int_{-\infty}^{\infty} \mathrm{d}{}z\, t'(z) G_{\epsilon}(p-z),\\ \label{eq:anaFunc_V} V'(q) &= \int_{-\infty}^{\infty} \mathrm{d}{}z\, v'(z) G_{\epsilon}(q-z). \end{align} By construction these functions have the periodicity properties \begin{align} \label{eq:periodPropT} T'(p+k) &= T'(p),\\ \label{eq:periodPropV} V'(q+k) &= V'(q) - k, \end{align} for $k \in \mathbb{Z}$. This allows us to consider the map $\mathcal{M}$ on a torus, i.e., $(q, p) \in [-M/2, M/2[ \times [-1/2, 1/2[$ with periodic boundary conditions and $M \in \mathbb{N}$. Due to the choice of $T'(p)$ and $V'(q)$, the dynamics is equivalent in each unit cell of phase space with $q \in [k-1/2, k+1/2[$ and $k \in \mathbb{Z}$; see Fig.~\ref{fig:phasespace}. In the following we choose the parameters $s_{1}\in[5,20]$, $s_{2}=2$, $r=0.46$, and $\epsilon=0.005$ such that each unit cell has a regular island centered at $(\bar{q}_{k}, \bar{p}) = (k, 1/4)$. The area of one such island is $A_{\mathrm{r}} \approx 0.32$, which equals the relative size of the regular region in phase space. Since the islands are transporting to the next unit cell in the positive $q$ direction, i.e., $\mathcal{M}(\bar{q}_{k}, \bar{p}) = (\bar{q}_{k+1}, \bar{p})$, the center of each island is a fixed point of the $M$th iterate of the map, $\mathcal{M}^{M}(\bar{q}_{k}, \bar{p}) = (\bar{q}_{k}, \bar{p})$. The surrounding chaotic sea has an average drift in the negative $q$ direction as the overall transport of the system is zero \cite{DiKeOtSch2001, DiKeSch2005}; see Fig.~\ref{fig:phasespace}. Quantum mechanically this transport suppresses the localization of chaotic eigenstates.
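For orientation, the map \eqref{eq:strobMap} with the smoothed functions \eqref{eq:anaFunc_T} and \eqref{eq:anaFunc_V} can be iterated directly to reproduce portraits like Fig.~\ref{fig:phasespace}. The following Python sketch is our own illustration (the smoothing integrals are truncated at $\pm 8\epsilon$, and $s_{1}=10$ is an example value from the interval used above).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

s1, s2, r, eps = 10.0, 2.0, 0.46, 0.005   # parameters as in the text

def t_prime(p):
    # piecewise linear t'(p) on ]-1/2,1/2[, extended periodically
    p = (p + 0.5) % 1.0 - 0.5
    return -1 + s1*(p + 0.25) if p < 0 else 1 - s2*(p - 0.25)

def v_prime(q):
    return -r*q - (1 - r)*np.floor(q + 0.5)

def smooth(f, x):
    # convolution with the Gaussian G_eps, truncated at +-8 eps
    g = lambda z: f(z)*np.exp(-(x-z)**2/(2*eps**2))/np.sqrt(2*np.pi*eps**2)
    return quad(g, x - 8*eps, x + 8*eps, limit=200)[0]

def step(q, p, M=1):
    # one kick of the map on the torus [-M/2, M/2[ x [-1/2, 1/2[
    q1 = q + smooth(t_prime, p)
    p1 = p - smooth(v_prime, q1)
    return (q1 + M/2) % M - M/2, (p1 + 0.5) % 1.0 - 0.5
\end{verbatim}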
In our model systems the hierarchical regions around the regular islands are sufficiently small, and also the effects of partial transport barriers and nonlinear resonance chains are irrelevant to the numerical studies. \subsection{Quantization} The quantum system is given by the time-evolution operator over one period of the driving, \begin{align} \label{eq:timeOp} \op{U} = \exp\left(-\frac{\operatorname{i}}{\hbar_{\mathrm{eff}}}V(\op{q})\right) \exp\left(-\frac{\operatorname{i}}{\hbar_{\mathrm{eff}}}T(\op{p})\right); \end{align} see, e.g., Refs.~\cite{BeBaTaVo1979, BeHa1980, ChSh1986}. Quantizing the map $\mathcal{M}$ on a two-torus induces the Bloch phases $\theta_{q}$ and $\theta_{p}$ \cite{KeMeRo1999, ChSh1986} which characterize the quasi-periodicity conditions on the torus. The Bloch phase $\theta_{p}$ is restricted by $M\left( \theta_{p} + N/2 \right) \in \mathbb{Z}$ because of the periodic boundary conditions, whereas $\theta_{q} \in [0, 1[$ can be chosen arbitrarily \cite{KeMeRo1999, BaKeMo2007}. Due to the quantization on a compact torus the effective Planck constant $h_{\mathrm{eff}} = 2 \pi \hbar_{\mathrm{eff}}$ is determined by the number of unit cells $M$ and the dimension of the Hilbert space $N$, \begin{align} \label{eq:heff} h_{\mathrm{eff}} = \frac{M}{N}. \end{align} Here $N \in \mathbb{N}$ is a free parameter of the quantization and the semiclassical limit is reached for $h_{\mathrm{eff}} \to 0$. Note that $M$ and $N$ are chosen from the continued-fraction expansion of $h_{\mathrm{eff}} = 1/(d + \sigma)$ with $\sigma = (\sqrt{5}-1)/2$ being the golden mean and $d \in \mathbb{N}$. This ensures that $h_{\mathrm{eff}} = M/N$ is as irrational as possible \cite{BaKeMo2005}. If $M$ and $N$ were commensurate the quantum system would effectively be reduced to less than $M$ cells. In the following we choose $d=12$, leading to $(M, N) = (1, 13), (21, 265), (610, 7697), (6765, 85361)$, such that the effective Planck constant is approximately fixed at $h_{\mathrm{eff}} \approx 1/13$. The eigenvalue equation \begin{align} \label{eq:eigvalProb} \op{U} \ket{\phi_{n}} = \operatorname{e}^{\operatorname{i} \phi_{n}} \ket{\phi_{n}} \end{align} gives $N$ eigenphases $\phi_{n} \in [0, 2\pi[$ with corresponding eigenvectors $\ket{\phi_{n}}$. For fixed $h_{\mathrm{eff}}$ it is possible to tune the density of states by varying $M$ and $N$, i.e., for increasing $M, N$ with approximately constant $h_{\mathrm{eff}}=M/N$ the density of states rises and flooding becomes more and more prominent, as will be discussed in Sec.~\ref{sec:flooding}. In order to numerically solve the eigenvalue equation \eqref{eq:eigvalProb} for $N > 10^{4}$ we use a band-matrix algorithm; see the Appendix. \section{Spectral Statistics And Flooding} \label{sec:specStatWithFlooding} In this section we study the consequences of flooding on spectral statistics. In Sec.~\ref{sec:br} we consider the model systems introduced in Sec.~\ref{sec:modelSystem}. Increasing their density of states ($M\to\infty$) at fixed $h_{\mathrm{eff}}$ gives the flooding limit for which we obtain a transition of the level-spacing distribution $P(s)$ from Berry-Robnik to Wigner statistics. In Sec.~\ref{sec:flooding} we discuss flooding of regular states. Based on this discussion, we introduce the flooding-improved Berry-Robnik distribution $P_{\mathrm{fi}}(s)$ in Sec.~\ref{sec:effBeRoStat}, which intuitively explains how the flooding of regular states causes the transition from Berry-Robnik to Wigner statistics.
In Sec.~\ref{sec:complTheory} we unify this prediction with the results of Ref.~\cite{BaKeLoMe2011}, leading to the more sophisticated flooding- and tunneling-improved Berry-Robnik distribution $P_{\mathrm{fti}}(s)$. This distribution additionally accounts for the effects of level repulsion between regular and chaotic states. In Sec.~\ref{sec:compare} we consider three limiting cases in which level repulsion vanishes. In particular we discuss that the semiclassical limit, $h_{\mathrm{eff}}\to 0$ with fixed $M$, leads to the standard Berry-Robnik statistics, while Wigner statistics are obtained in the flooding limit considered in this paper. \subsection{Spacing statistics of the model system} \label{sec:br} We investigate the spectral statistics of the model systems introduced in Sec.~\ref{sec:modelSystem} numerically. In order to increase the statistical significance of the spectral data, we perform ensemble averages by varying the parameter $s_{1}$ of the map; see Eq.~\eqref{eq:piecelinfunc_t}. This modifies the chaotic dynamics but leaves the dynamics of the regular region unchanged. Also the Bloch phase $\theta_{q}$ is used for ensemble averaging. For the parameters $(M, N) = (1, 13), (21, 265), (610, 7697), (6765, 85361)$ we choose $50, 50, 10, 4$ equidistant values of $s_{1}$ in $[5, 20]$ and $400, 19, 10, 1$ equidistant values of $\theta_{q}$ in $[0,1[$, respectively. For each choice the ordered eigenphases $\phi_{n}$ give the unfolded level spacings \begin{align} \label{eq:spacings} s_{n} := \frac{N}{2\pi} (\phi_{n+1} - \phi_{n}). \end{align} Under the assumption that the regular and chaotic subspectra, corresponding to disjoint regular and chaotic regions in phase space, superpose without correlations, these spacings are expected to follow the Berry-Robnik distribution \cite{BeRo1984}. The only relevant parameter of this distribution is the density of regular states which semiclassically equals the relative size of the regular region in phase space, $A_{\mathrm{r}}$; see Eq.~\eqref{eq:dors}. This gives the standard Berry-Robnik distribution \begin{align} \label{eq:berryRobnik} P_{\mathrm{BR}}(s) = \frac{\mathrm{d}{}^{2}}{\mathrm{d}{}s^{2}} \left \{ \exp(-A_{\mathrm{r}} s) \ \mathrm{erfc}\left( \frac{\sqrt{\pi}}{2} (1-A_{\mathrm{r}}) s \right ) \right \}. \end{align} For purely chaotic systems one has $A_{\mathrm{r}}=0$ such that the Wigner distribution $\Ps{c}(s) = (\pi s / 2) e^{-\pi s^{2} / 4}$ is recovered. For purely regular systems one has $A_{\mathrm{r}}=1$, giving the Poisson distribution $\Ps{r}(s) = \operatorname{e}^{- s}$. For the model systems introduced in Sec.~\ref{sec:modelSystem} one has $A_{\mathrm{r}}\approx 0.32$, such that Eq.~\eqref{eq:berryRobnik} predicts the same level-spacing distribution for all system sizes $M$. This is in contrast to our numerical findings, which show a transition of the level-spacing distribution from Berry-Robnik to Wigner statistics with increasing system size $M$ and fixed $h_{\mathrm{eff}}$. In Fig.~\ref{fig:PS} numerical results for the level-spacing distribution of the model systems are shown as black histograms. For the case of only one unit cell [Fig.~\ref{fig:PS}(a)] the level-spacing distribution roughly follows the Berry-Robnik distribution (dash-dotted line). Upon increasing the system size to $M=21$ unit cells [Fig.~\ref{fig:PS}(b)] the level-spacing distribution shows global deviations from the Berry-Robnik distribution.
For even larger system sizes [Figs.~\ref{fig:PS}(c) and \ref{fig:PS}(d)] we observe a transition to the Wigner distribution (dotted line). This transition is caused by flooding of regular states, which we discuss in the following section. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig3.eps} \caption{(Color online) Level-spacing distribution $P(s)$ of the model system for $h_{\mathrm{eff}} \approx 1/13$. The numerical data (black histograms) show a transition from the Berry-Robnik distribution (dash-dotted lines) to the Wigner distribution (dotted lines) with increasing system size $M=1$, $21$, $610$, and $6765$ (a)-(d). These data are compared to the flooding-improved Berry-Robnik distribution (red dashed lines), Eq.~\eqref{eq:effBerryRobnik}, as well as to the flooding- and tunneling-improved Berry-Robnik distribution (green solid lines), Eq.~\eqref{eq:ftibr_dist}. The insets show the same distributions on a double logarithmic scale. } \label{fig:PS} \end{center} \end{figure} \subsection{Flooding of regular states} \label{sec:flooding} We now show how the number of regular and chaotic states is modified in the presence of flooding. According to Eq.~\eqref{eq:bohr-sommerfeld} each regular state occupies an area $h_{\mathrm{eff}}$ in phase space. Hence, the maximal number of quantizing tori $m_{\mathrm{max}}$ per island is given by \begin{align} \label{eq:mmax} m_{\mathrm{max}} = \left\lfloor \frac{A_{\mathrm{r}}}{h_{\mathrm{eff}}} + \frac{1}{2} \right\rfloor \approx \frac{A_{\mathrm{r}}}{h_{\mathrm{eff}}} \end{align} and the quantum number $m$ runs from $0$ to $m_{\mathrm{max}}-1$. Since we consider a chain of $M$ islands there are $M$ regular levels supported by the $m$th quantizing tori of the $M$ islands. Hence, we semiclassically expect \begin{align} \label{eq:Nreg} N_{\mathrm{r}}^{\mathrm{sc}} = m_{\mathrm{max}} M \approx A_{\mathrm{r}} M / h_{\mathrm{eff}} \end{align} regular states supported by the $M$ regular islands of size $A_{\mathrm{r}}$. The semiclassically expected density of regular states $\rho_{\mathrm{r}}^{\mathrm{sc}}$ is therefore given by the relative size of the regular region, \begin{align} \label{eq:dors} \rho_{\mathrm{r}}^{\mathrm{sc}} := \frac{N_{\mathrm{r}}^{\mathrm{sc}}}{N} \approx A_{\mathrm{r}}. \end{align} Similarly we expect $N_{\mathrm{c}}^{\mathrm{sc}} = N -N_{\mathrm{r}}^{\mathrm{sc}}$ chaotic states and $\rho_{\mathrm{c}}^{\mathrm{sc}} = 1 - \rho_{\mathrm{r}}^{\mathrm{sc}}$. Due to dynamical tunneling, regular and chaotic states are coupled. The average coupling of the regular states localizing on the $m$th quantizing tori to the chaotic states is given by the typical coupling $v_{m}$ \cite{BaKeLoMe2011}. It is determined by the tunneling rate $\gamma_{m}$ which describes the initial exponential decay of the $m$th regular WKB state to the chaotic sea, \begin{align} \label{eq:effCoupling} v_{m} = \frac{N}{2\pi}\sqrt{\frac{\gamma_{m}}{N_{\mathrm{c}}^{\mathrm{sc}}}} = \frac{1}{2\pi}\sqrt{\frac{\gamma_{m}}{h_{\mathrm{eff}} \rho_{\mathrm{c}}^{\mathrm{sc}}}}\sqrt{M}. \end{align} We compute the system specific tunneling rates $\gamma_{m}$ numerically \cite{BaKeLo2010}. They depend only on Planck's constant $h_{\mathrm{eff}}$ and the classical phase-space structure of one regular island, which are fixed in our investigations. 
Hence, the factor $\sqrt{\gamma_{m}/(h_{\mathrm{eff}} \rho_{\mathrm{c}}^{\mathrm{sc}})}$ in Eq.~\eqref{eq:effCoupling} is constant for our model systems and the typical coupling $v_{m}$ is tunable by the system size $M$. Note that the couplings $v$ used in Refs.~\cite{BaKeMo2005, BaKeMo2007} differ by the factor $\rho_{\mathrm{c}}^{\mathrm{sc}}$ from our definition, Eq.~\eqref{eq:effCoupling}, due to a different choice of dimensionless units. In Ref.~\cite{BaKeMo2005} it was shown that in addition to the WKB quantization condition \eqref{eq:bohr-sommerfeld} regular states exist on the $m$th quantizing tori only if the tunneling rate $\gamma_{m}$ is smaller than the inverse Heisenberg time of the chaotic subsystem, $\gamma_{m} < 1/\tau_{\mathrm{H,c}}$, Eq.~\eqref{eq:exCrit2}. Using Eq.~\eqref{eq:effCoupling} and $\tau_{\mathrm{H,c}}=h_{\mathrm{eff}}/\Delta_{\mathrm{c}}=N_{\mathrm{c}}^{\mathrm{sc}}$ we rewrite this existence criterion in terms of the typical coupling, \begin{align} \label{eq:exCrit3} v_{m} < \frac{1}{2\pi\rho_{\mathrm{c}}^{\mathrm{sc}}}. \end{align} If the existence criterion \eqref{eq:exCrit3} is fulfilled, the typical coupling of the WKB states on the $m$th quantizing tori is smaller than the chaotic mean level spacing and the corresponding regular eigenstates exist. If $v_{m}$ increases beyond this threshold, the regular states on the $m$th quantizing tori effectively couple to an increasing number of spectrally close chaotic states. Consequently the corresponding regular eigenstates disappear. This process is called flooding of regular states \cite{BaKeMo2005, BaKeMo2007, LBphd, BaBiKe2012}. Thus for large typical couplings $v_{m}$ the number $N^{\mathrm{f}}_{\mathrm{r}}$ of regular states which actually exist in the regular islands is smaller than the semiclassically expected number $N_{\mathrm{r}}^{\mathrm{sc}}$ of regular states. The quantizing tori of the $N_{\mathrm{r}}^{\mathrm{sc}} - N^{\mathrm{f}}_{\mathrm{r}}$ regular states which violate Eq.~\eqref{eq:exCrit3} are flooded by chaotic states in phase space. Note that for our model systems the relation $v_{0} < v_{1} < v_{2} < \hdots$ holds, such that the quantizing tori are flooded in the order of decreasing quantum number $m$ from the border to the center of the regular islands. \subsection{Flooding-improved Berry-Robnik distribution} \label{sec:effBeRoStat} We now introduce a flooding-improved Berry-Robnik distribution which takes the flooding of regular states into account. For that purpose we compute the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ in the presence of flooding. Starting from the semiclassically expected density of regular states, Eq.~\eqref{eq:dors}, and using Eqs.~\eqref{eq:heff} and \eqref{eq:Nreg} we obtain \begin{align} \label{eq:rhorsc2} \rho_{\mathrm{r}}^{\mathrm{sc}} \approx \sum_{m=0}^{m_{\mathrm{max}}-1} h_{\mathrm{eff}}. \end{align} This expression shows that each quantizing torus semiclassically contributes one Planck cell to the density of states. To compute the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ in the presence of flooding we include only those quantizing tori in the sum in Eq.~\eqref{eq:rhorsc2} for which the existence criterion \eqref{eq:exCrit3} holds, and obtain \cite{BaKeMo2005, BaKeMo2007} \begin{align} \label{eq:rhoApp} \rho^{\mathrm{f}}_{\mathrm{r}} := \sum_{m=0}^{m_{\mathrm{max}}-1} h_{\mathrm{eff}} \left[1 - \Theta\left(v_{m} - \frac{1}{2\pi\rho_{\mathrm{c}}^{\mathrm{sc}}}\right)\right]. 
\end{align} \begin{figure}[b] \begin{center} \includegraphics[width=\linewidth]{fig4.eps} \caption{(Color online) Density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ in the presence of flooding, Eq.~\eqref{eq:rhoApp}, vs.\ the system size $M$ for $h_{\mathrm{eff}}\approx 1/13$ on a logarithmic abscissa. For comparison, the semiclassically expected density of regular states $\rho_{\mathrm{r}}^{\mathrm{sc}}$ is shown (green dotted line). The insets illustrate the classical phase space where the gray tori enclose the area $\rho^{\mathrm{f}}_{\mathrm{r}}$ for $M=1$, $21$, $610$, and $6765$. In addition, the averaged Husimi function of all chaotic eigenstates folded into the first unit cell is shown.} \label{fig:AEff} \end{center} \end{figure} In Fig.~\ref{fig:AEff} the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ is shown for the example system at $h_{\mathrm{eff}}\approx 1/13$ versus the system size $M$ on a logarithmic abscissa. It decreases with increasing system size $M$ and has a step whenever a typical coupling $v_m$ equals $1/(2\pi\rho_{\mathrm{c}}^{\mathrm{sc}})$. The semiclassical density of regular states $\rho_{\mathrm{r}}^{\mathrm{sc}}$ is an upper bound. In the spirit of Eq.~\eqref{eq:dors}, $\rho_{\mathrm{r}}^{\mathrm{sc}} \approx A_{\mathrm{r}}$, we interpret the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ in the presence of flooding as an area in phase space. For $M=1$, $21$, $610$, and $6765$ the insets of Fig.~\ref{fig:AEff} show this area enclosed by a gray torus. In addition the averaged Husimi functions (bright/yellow to darker/red color scale) illustrate that the surviving regular states are localized in this area $\rho^{\mathrm{f}}_{\mathrm{r}}$, which decreases in the flooding limit $M\to\infty$. Already for the system with one unit cell, $M=1$, we find that $\rho^{\mathrm{f}}_{\mathrm{r}}$ is smaller than its semiclassical expectation $\rho_{\mathrm{r}}^{\mathrm{sc}}$ because the outermost regular state of quantum number $m=3$ violates the existence criterion \eqref{eq:exCrit3}. Note that the amount by which a regular state is flooded can also be described by smooth functions, e.g., by the fraction of regular states \cite{BaKeMo2007} or the asymptotic flooding weight \cite{BaBiKe2012}. However, they do not provide a significant advantage for our investigations. To obtain a description of the level-spacing distribution which includes flooding, we now use the prediction \eqref{eq:rhoApp} for the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ instead of the relative size of the regular region $A_{\mathrm{r}}$ as the relevant parameter in Eq.~\eqref{eq:berryRobnik}. With $\rho^{\mathrm{f}}_{\mathrm{c}} := 1 - \rho^{\mathrm{f}}_{\mathrm{r}}$ this leads to the flooding-improved Berry-Robnik distribution \begin{align} \label{eq:effBerryRobnik} P_{\mathrm{fi}}(s) = \frac{\mathrm{d}{}^{2}}{\mathrm{d}{}s^{2}} \left \{ \exp(-\rho^{\mathrm{f}}_{\mathrm{r}} s) \ \mathrm{erfc}\left( \frac{\sqrt{\pi}}{2} \rho^{\mathrm{f}}_{\mathrm{c}} s \right) \right \}, \end{align} which is our first main result. In Fig.~\ref{fig:PS} we compare the numerically determined nearest-neighbor level-spacing distribution to the analytical prediction (\ref{eq:effBerryRobnik}) for our model system. With increasing system size $M$ and fixed $h_{\mathrm{eff}}$ we find a global transition of the level-spacing distribution from Berry-Robnik to Wigner statistics.
This global transition is well described by the flooding-improved Berry-Robnik distribution, Eq.~(\ref{eq:effBerryRobnik}). It is a consequence of flooding, which reduces the density of regular states below its semiclassical expectation, $\rho^{\mathrm{f}}_{\mathrm{r}} \le \rho_{\mathrm{r}}^{\mathrm{sc}}$. According to Eq.~\eqref{eq:dors} this can be interpreted as a flooding-induced decrease of the regular region in phase space. In the limit $M \to \infty$ the regular islands are completely flooded and no regular state exists. Hence, the Wigner distribution is obtained. Note that even for the case of only one unit cell [see Fig.~\ref{fig:PS}(a)] non-zero couplings $v_{m}$ exist such that the numerical data are better described by the flooding-improved Berry-Robnik distribution, Eq.~\eqref{eq:effBerryRobnik}, than by Eq.~\eqref{eq:berryRobnik}. At small spacings deviations between numerical data and the flooding-improved Berry-Robnik distribution are visible. They occur due to level repulsion between the surviving regular and the chaotic levels, which is not considered within this approach and will be incorporated in the following section. \subsection{Flooding- and tunneling-improved Berry-Robnik distribution} \label{sec:complTheory} We now unify the flooding-improved Berry-Robnik distribution, Eq.~\eqref{eq:effBerryRobnik}, with the tunneling-improved Berry-Robnik distribution \cite{BaKeLoMe2011}. The resulting flooding- and tunneling-improved Berry-Robnik distribution allows us to describe both the effect of flooding and the power-law level repulsion at small spacings. The derivation is done along the lines of Ref.~\cite{BaKeLoMe2011}. We incorporate the effects of flooding into this theory by replacing the number of regular states $N_{\mathrm{r}}^{\mathrm{sc}}$ with the number of surviving regular states $N^{\mathrm{f}}_{\mathrm{r}}$ which fulfill the existence criterion \eqref{eq:exCrit3}. The other regular states, which fulfill the WKB quantization condition \eqref{eq:bohr-sommerfeld} but have strong couplings to the chaotic sea, $v_{m} > 1/(2\pi\rho_{\mathrm{c}}^{\mathrm{sc}})$, are assigned to the chaotic subspectrum. Level repulsion is then modeled by perturbatively accounting for the small tunneling couplings $v_m$ between the $N^{\mathrm{f}}_{\mathrm{r}}$ surviving regular states and the chaotic states. Following Refs.~\cite{BeRo1984, PoNa2007, BaRo2010, BaKeLoMe2011} the flooding- and tunneling-improved Berry-Robnik distribution $P_{\mathrm{fti}}(s)$ consists of three distinct parts: \begin{align} \label{eq:partdist} P_{\mathrm{fti}}(s) = \ps{s}{r-r} + \ps{s}{c-c} + \ps{s}{r-c}. \end{align} Here $\ps{s}{r-r}$ describes the contribution of level spacings between two regular levels, $\ps{s}{c-c}$ the contribution of level spacings between two chaotic levels, and $\ps{s}{r-c}$ the contribution of level spacings formed by one regular and one chaotic level in the superposed spectrum. In our model systems the number of quantizing tori $m_{\mathrm{max}}$ is small, e.g., $m_{\mathrm{max}} \approx 4$, and the $M$ regular levels with the same quantum number $m$ are equispaced with distance $N/M$ in the unfolded spectrum \cite{BrHo1991}. Hence, the regular levels do not follow the generic Poissonian behavior occurring for large $m_{\mathrm{max}}$, but are well separated, \begin{align} \label{eq:regSpace} \ps{s}{r-r}\approx0.
\end{align} Furthermore, \begin{align} \label{eq:chSpace} \ps{s}{c-c} = \Ps{c}(s) [1-\rho_{\mathrm{r}}^{\mathrm{sc}} s], \end{align} where $\Ps{c}(s)$ is the Wigner distribution, which describes the probability of finding a spacing $s$ in the chaotic subspectrum. The second factor $[1-\rho_{\mathrm{r}}^{\mathrm{sc}} s]$ describes the probability of finding a gap in the regular subspectrum. For the third term in Eq.~\eqref{eq:partdist} one finds \cite{BaKeLoMe2011} \begin{align} \label{eq:rcdist} \ps{s}{r-c} = p^{(0)}_{\mathrm{r-c}}(s) \frac{1}{N^{\mathrm{f}}_{\mathrm{r}}} \sum_{m=0}^{N^{\mathrm{f}}_{\mathrm{r}}-1} \frac{\tilde{v}_{m}}{v_{m}} X\left( \frac{s}{2v_{m}} \right), \end{align} with $X(x):=\sqrt{\pi/2}\, x\exp(-x^{2}/4)I_{0}(x^{2}/4)$, where $I_{0}$ is the zeroth-order modified Bessel function and $\tilde{v}_{m}=v_{m}/\sqrt{1-2\pi(\rho_{\mathrm{c}}^{\mathrm{sc}} v_{m})^2}$. The contribution of the zeroth-order regular-chaotic spacings, $p^{(0)}_{\mathrm{r-c}}(s)$, is given by \begin{align} \label{eq:zeroth-r-c} p^{(0)}_{\mathrm{r-c}}(s) = 2 \rho_{\mathrm{c}}^{\mathrm{sc}} \rho_{\mathrm{r}}^{\mathrm{sc}} \exp\left( -\frac{\pi (\rho_{\mathrm{c}}^{\mathrm{sc}} s)^{2}}{4}\right). \end{align} Using Eqs.~\eqref{eq:regSpace}, \eqref{eq:chSpace}, and \eqref{eq:rcdist} in Eq.~\eqref{eq:partdist}, we obtain the flooding- and tunneling-improved Berry-Robnik distribution \begin{align} \label{eq:ftibr_dist} P_{\mathrm{fti}}(s) = \Ps{c}(s) [1-\rho_{\mathrm{r}}^{\mathrm{sc}} s] + p^{(0)}_{\mathrm{r-c}}(s) \frac{1}{N^{\mathrm{f}}_{\mathrm{r}}} \sum_{m=0}^{N^{\mathrm{f}}_{\mathrm{r}}-1} \frac{\tilde{v}_{m}}{v_{m}} X\left( \frac{s}{2v_{m}} \right), \end{align} which is our final result. In Eq.~\eqref{eq:ftibr_dist} one has to sum over the $N^{\mathrm{f}}_{\mathrm{r}} \le N_{\mathrm{r}}^{\mathrm{sc}}$ regular states, which fulfill the existence criterion \eqref{eq:exCrit3}. This selection of the regular states takes flooding into account. In addition power-law level repulsion at small spacings is described by the last term of Eq.~\eqref{eq:ftibr_dist}. In Fig.~\ref{fig:PS} we compare the numerical results for the level-spacing distribution to the flooding- and tunneling-improved Berry-Robnik distribution, Eq.~\eqref{eq:ftibr_dist} (green solid lines). We find excellent agreement. The global transition of the level-spacing distribution from the Berry-Robnik distribution in Fig.~\ref{fig:PS}(a) for a system with one unit cell to the Wigner distribution in Fig.~\ref{fig:PS}(d) for a system with $M=6765$ is well described. This transition is caused by the disappearance of regular states due to flooding. Furthermore, the flooding- and tunneling-improved Berry-Robnik distribution, Eq.~\eqref{eq:ftibr_dist}, reproduces the power-law level repulsion of $P(s)$ at small spacings, which is caused by small tunneling splittings between the surviving regular and chaotic states. This can be seen particularly well in the double logarithmic insets of Fig.~\ref{fig:PS}. \subsection{Limiting cases} \label{sec:compare} Depending on the interplay between the flooding limit $M\to\infty$ and the semiclassical limit $h_{\mathrm{eff}}\to 0$, we identify three cases in which the tunneling corrections of Sec.~\ref{sec:complTheory} are insignificant. Case (i) is the flooding limit with fixed $h_{\mathrm{eff}}$ and $M\to \infty$ in which all regular states are flooded. Asymptotically one obtains the Wigner distribution; see Fig.~\ref{fig:PS}(d).
Note that a further increase of the system size after all regular states have been flooded completely may lead to the localization of chaotic eigenstates which affects spectral statistics \cite{Iz1990, ChSh1995}. Case (ii) considers the semiclassical limit $h_{\mathrm{eff}} \to 0$ for fixed system sizes $M$. In this case both flooding and tunneling corrections vanish due to exponentially decreasing tunneling couplings. Hence, the spacing statistics tend toward the standard Berry-Robnik distribution, Eq.~\eqref{eq:berryRobnik} \cite{BaKeLoMe2011, PrRo1994}. In case (iii) the semiclassical limit $h_{\mathrm{eff}} \to 0$ and the flooding limit $M \to \infty$ are coupled such that $\rho^{\mathrm{f}}_{\mathrm{r}}$ is constant and smaller than $\rho_{\mathrm{r}}^{\mathrm{sc}}$. In this limit flooding is present, yet the tunneling corrections at small spacings vanish. In this case the spacing statistics are given by the flooding-improved Berry-Robnik distribution, Eq.~\eqref{eq:effBerryRobnik}. \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{fig5.eps} \caption{(Color online) Level-spacing distribution $P(s)$ of the model system at fixed density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}} \approx 0.58 \rho_{\mathrm{r}}^{\mathrm{sc}}$. The numerical data (black histograms) are compared to the flooding-improved Berry-Robnik distribution (red dashed lines), Eq.~\eqref{eq:effBerryRobnik}, as well as to the flooding- and tunneling-improved Berry-Robnik distribution (green solid lines), Eq.~\eqref{eq:ftibr_dist}, for $(M, N) = (5, 63)$, $(34, 735)$, $(89, 2458)$, and $(1597, 77643)$ (a)-(d), corresponding to $h_{\mathrm{eff}}\approx1/13$, $1/22$, $1/28$, and $1/49$, respectively. For comparison, the Wigner distribution (dotted lines) and the Berry-Robnik distribution (dash-dotted lines) are shown. The insets show the same distributions on a double logarithmic scale. } \label{fig:ps_fconst} \end{center} \end{figure} In Fig.~\ref{fig:ps_fconst} we illustrate spectral statistics in the limit of case (iii). We consider the model systems for $(M, N) = (5, 63)$, $(34, 735)$, $(89, 2458)$, $(1597, 77643)$ such that the density of regular states is fixed, $\rho^{\mathrm{f}}_{\mathrm{r}}\approx 0.58 \rho_{\mathrm{r}}^{\mathrm{sc}}$, and $h_{\mathrm{eff}} \approx 1/13, 1/22, 1/28, 1/49$ decreases. Both the numerical data and the flooding- and tunneling-improved Berry-Robnik distribution, Eq.~\eqref{eq:ftibr_dist}, tend toward the flooding-improved Berry-Robnik distribution, Eq.~\eqref{eq:effBerryRobnik}. The vanishing influence of the tunneling corrections at small spacings is particularly visible in the insets, which show the spacing distributions on a double logarithmic scale. \section{Summary} \label{sec:outlook} In this paper we study the impact of flooding on the level-spacing distribution $P(s)$ for systems with a mixed phase space. Numerically we find a transition from Berry-Robnik to Wigner statistics with increasing density of states and fixed $h_{\mathrm{eff}}$. We explain this transition by the flooding of regular islands. It reduces the density of regular states $\rho^{\mathrm{f}}_{\mathrm{r}}$ below its semiclassical expectation $\rho_{\mathrm{r}}^{\mathrm{sc}}$, which can be interpreted as a flooding-induced decrease of the regular region in phase space. Taking this into account we derive a flooding-improved Berry-Robnik distribution, which reproduces the observed transition of the level-spacing statistics.
We unify this prediction with the tunneling-improved Berry-Robnik distribution \cite{BaKeLoMe2011} which includes power-law level repulsion. This gives the flooding- and tunneling-improved Berry-Robnik distribution, which shows excellent agreement with numerical data. In order to demonstrate the effect of flooding on spectral statistics, we investigated model systems with a simple phase-space structure. However, we expect that flooding has similar consequences for systems with a generic phase space, which may contain several regular and chaotic regions as well as nonlinear resonances, larger hierarchical regions, and partial transport barriers. This expectation is based on the fact that even for generic systems tunneling couplings between classically disjoint regions exist. Hence, increasing the density of states for fixed $h_{\mathrm{eff}}$ should still lead to a transition of the level-spacing distribution from Berry-Robnik to Wigner statistics. \begin{acknowledgments} We thank Roland Ketzmerick and Lars Bittrich for stimulating discussions. Furthermore, we acknowledge support by the Deutsche Forschungsgemeinschaft within the Forschergruppe 760 ``Scattering Systems with Complex Dynamics.'' \end{acknowledgments} \begin{appendix} \section*{Appendix: Periodic band matrix} \label{sec:perBandMat} Our aim is to calculate the eigenphases $\phi_{n}$ from Eq.~\eqref{eq:eigvalProb} numerically up to Hilbert-space dimension $N\approx10^{5}$. This is possible due to a band-matrix algorithm \cite{BaKe2000} which was used in Refs.~\cite{BaKeMo2005, BaKeMo2007} and is presented in the following. We start with the matrix representation of the symmetrized time-evolution operator, \begin{align} \op{U}^{{\mathrm{sym}}} = \operatorname{e}^{-\frac{\operatorname{i}}{2\hbar_{\mathrm{eff}}}T(\hat{p})}\, \operatorname{e}^{-\frac{\operatorname{i}}{\hbar_{\mathrm{eff}}}V(\hat{q})}\,\operatorname{e}^{-\frac{\operatorname{i}}{2\hbar_{\mathrm{eff}}}T(\hat{p})}, \end{align} in the basis of the discretized position states $\ket{q_{k}}$, \begin{align} \label{eq:uhfPos} U^{{\mathrm{sym}}}_{k,l} := \bra{q_{k}}\op{U}^{{\mathrm{sym}}}\ket{q_{l}}, \end{align} with $q_{k} = h_{\mathrm{eff}} (k+\theta_{p}-\tfrac{1}{2})$ and $k,l=0, 1, \dots, N-1$. This matrix has dominant contributions around the diagonal and in the upper right and lower left corners, i.e., $\op{U}^{{\mathrm{sym}}}$ can be approximated by a periodic band matrix. For our model systems the width of the band depends on the extrema of $V'(q)$. The essential step for computing the eigenvalues of $U^{{\mathrm{sym}}}$ is to find a similarity transformation from this periodic band matrix to a band matrix. For the Hermitian case a similar idea was used in Ref.~\cite{Kra1996}. Since the kicking potential is symmetric about $q=0$ for $M(\theta_p+N/2)\in\mathbb{Z}$, $V(q_{l}) = V(q_{N-l})$, we find \begin{align} \label{eq:uhfSym} U^{{\mathrm{sym}}}_{k,l} = U^{{\mathrm{sym}}}_{N-l, N-k}. \end{align} Hence, the set of eigenvectors $\ket{\phi_{n}}$, for which we choose the phase such that $\scal{q_{0}}{\phi_{n}} = \scal{q_{0}}{\phi_{n}}^{*}$, satisfies \begin{align} \label{eq:phiSym} \scal{q_{l}}{\phi_{n}} &= \scal{q_{N-l}}{\phi_{n}}^{*}, \end{align} where the star denotes the complex conjugation and $l$ runs from $1$ to $N-1$. 
Based on these relations it is possible to find a unitary transformation $\op{A}$ to a set of purely real vectors $\ket{\psi_{n}} := \op{A} \ket{\phi_{n}}$, given by \begin{align} \label{eq:unitTrafo} \scal{q_{0}}{\psi_{n}} &= \scal{q_{0}}{\phi_{n}},\\ \scal{q_{2k-1}}{\psi_{n}} &= \frac{1}{\sqrt{2}}(\scal{q_{k}}{\phi_{n}} + \scal{q_{N-k}}{\phi_{n}}),\\ \scal{q_{2k}}{\psi_{n}} &= \frac{1}{\operatorname{i}\sqrt{2}}(\scal{q_{k}}{\phi_{n}} - \scal{q_{N-k}}{\phi_{n}}),\\ \scal{q_{N-1}}{\psi_{n}} &= \scal{q_{N/2}}{\phi_{n}}. \end{align} Here, $k$ runs from $1$ to $(N-1)/2$ for odd $N$ or from $1$ to $(N-2)/2$ for even $N$ and the last row has to be considered only for even $N$. We now define a new operator $\op{W}$, given by the unitary transformation of $\op{U}^{{\mathrm{sym}}}$ with $\op{A}$, \begin{align} \label{eq:wDef} \op{W} := \op{A}\ \op{U}^{{\mathrm{sym}}} \op{A}^{-1}. \end{align} Its matrix representation $\mat{W}$ in the basis of position states $\ket{q_{k}}$ has a banded structure with twice the bandwidth of the matrix $\mat{U}^{{\mathrm{sym}}}$ but without components in the upper right and lower left corners. Furthermore it is symmetric, \begin{align} \label{eq:wSym} W_{k,l} = W_{l,k}, \end{align} with complex matrix elements $W_{k,l}$. The unitary transformation \eqref{eq:wDef} leads to a new eigenvalue problem with the same eigenphases as in Eq.~\eqref{eq:eigvalProb}, \begin{align} \label{eq:newEigvalProb} \op{W} \ket{\psi_{n}} = \operatorname{e}^{\operatorname{i} \phi_{n}} \ket{\psi_{n}}. \end{align} Standard numerical libraries provide eigenvalue routines only for real symmetric or complex Hermitian band matrices, but not for unitary band matrices such as $\mat{W}$. Hence, we first calculate the real part of the eigenvalues, following from $\Re\{\mat{W}\} \ket{\psi_{n}} = \cos\phi_{n}\ket{\psi_{n}}$, and afterwards the imaginary part from $\Im\{\mat{W}\} \ket{\psi_{m}} = \sin\phi_{m}\ket{\psi_{m}}$. This is possible since the eigenvectors $\ket{\psi_{n}}$ are purely real. From these results one can recover the eigenphases $\phi_n$ by pairing the two sets of solutions via the requirement $\cos^2\phi_n + \sin^2\phi_m = 1$. The corresponding eigenfunctions $\ket{\psi_n}$ can be obtained from Eq.~\eqref{eq:newEigvalProb} by the method of inverse iteration using an LU decomposition. By mapping the original eigenvalue problem Eq.~\eqref{eq:eigvalProb} to the band-matrix form Eq.~\eqref{eq:newEigvalProb}, it is possible to compute both eigenvalues and eigenfunctions with a numerical effort scaling as $N^2$, in contrast to standard diagonalization procedures, which scale as $N^3$. \end{appendix}
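The eigenphase recovery described above can be summarized in a few lines of Python; the sketch below is our own illustration (it assumes a non-degenerate cosine spectrum and, for brevity, does not exploit the band structure). Since $\Re\{\mat{W}\}$ and $\Im\{\mat{W}\}$ are real symmetric matrices sharing the real eigenvectors $\ket{\psi_{n}}$, two real symmetric eigenproblems suffice.

\begin{verbatim}
import numpy as np

def eigenphases(W):
    # W: complex symmetric unitary matrix with purely real eigenvectors
    c, Vc = np.linalg.eigh(W.real)     # cos(phi_n) with eigenvectors
    s, Vs = np.linalg.eigh(W.imag)     # sin(phi_m) with eigenvectors
    # pair the two solution sets through their common eigenvectors,
    # enforcing cos^2(phi_n) + sin^2(phi_m) = 1 for matched (n, m)
    match = np.argmax(np.abs(Vc.T @ Vs), axis=1)
    return np.angle(c + 1j*s[match]) % (2*np.pi)
\end{verbatim}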
\section{Introduction} This work was originally written in the late 1980s. It was then circulated as a preprint among a few colleagues (see e.g. MathSciNet review: 986700) but for some reason was never published. In a recent conversation Hillel Furstenberg asked whether there exists a subalgebra of $\ell^\infty(\mathbb{Z})$ which is contained in the subalgebra of distal functions, consists of strictly ergodic functions, and is ``universal'' in some sense. In a way Section \ref{sec-se} of this work gives a negative answer to Hillel's question. So I decided to resurrect this old work. The new version differs from the original one mostly in the addition of a proof of Theorem \ref{ab}, and the new Section \ref{sec-new}. Also some misprints are corrected and a few references are added; in particular a reference to Salehi's work \cite{S}. \vspace{3 mm} A {\em flow} in this work is a pair $(X,T)$ consisting of a compact space $X$ and a self homeomorphism $T : X \to X$. A flow $(X,T)$ is {\em minimal} if the {\em orbit} $\{T^n x : n \in \mathbb{Z}\}$ of every point $x \in X$ is dense in $X$. If $(X,T)$ and $(Y,T)$ are two flows then a {\em homomorphism} $\pi : (X,T) \to (Y,T)$ is a continuous, surjective map $\pi : X \to Y$ such that $\pi(Tx) = T\pi(x)$ for every $x \in X$. (We often use the same letter $T$ to denote the acting homeomorphism on various flows; the idea is that these are all $\mathbb{Z}$-actions where $T$ represents the generator $1$ of $\mathbb{Z}$.) The {\em enveloping semigroup} $E(X,T)$ of a flow $(X,T)$ is the compact sub-semigroup of the {\em right topological semigroup} $X^X$, formed as the closure of the image of $\mathbb{Z}$ in $X^X$ under the map $n \mapsto T^n$. $E(X,T)$ is both a compact right topological semigroup and a flow, where $T$ acts by multiplication on the left. For more details see e.g. \cite{Gl-07}. A metric flow $(X,T)$ is called {\em distal} if $x, y \in X, \ x \neq y$, implies $\inf_{n \in \mathbb{Z}} d(T^n x, T^n y) >0$. (When $X$ is nonmetrizable the requirement is: $\forall \ x, y \in X, \ x \neq y$, there is a neighborhood $V$ of the diagonal $\Delta \subset X \times X$ such that $(T^nx, T^ny) \not \in V, \ \forall n \in \mathbb{Z}$.) By basic theorems of Robert Ellis \cite{Ellis-58}: (i) every distal flow $(X,T)$ is {\em semisimple} (i.e. it is the union of its minimal subsets), and (ii) a flow $(X,T)$ is distal iff its enveloping semigroup $E(X,T)$ is a group. A theorem of Krylov and Bogolubov asserts that every $\mathbb{Z}$-flow admits an invariant, Borel, probability measure (see e.g. \cite[Theorem 4.1]{Gl2}). When this measure is unique the flow is called {\em uniquely ergodic}. Suppose that $(X,T)$ is a uniquely ergodic flow and that the unique invariant measure, say $\mu$, has full support. It then follows that $(X,T)$ is minimal, as otherwise there would be a nonempty, closed and invariant subset $Y \subsetneq X$ which by the Krylov-Bogolubov theorem would support an invariant probability measure $\nu$ which is distinct from $\mu$ (having a smaller support). A flow is {\em strictly ergodic} when it is uniquely ergodic and minimal. \vspace{3 mm} A classical theorem of Harald Bohr asserts that every {\em Bohr almost periodic function} on $\mathbb{Z}$ (i.e. a function in $\ell^{\infty}(\mathbb{Z})$ whose orbit under translations is norm pre-compact) can be uniformly approximated by linear combinations of functions of the form $$ n \mapsto e^{2\pi i n \alpha} \quad (n \in \mathbb{Z}, \alpha \in \mathbb{R}).
$$ Another characterization of almost periodic functions is the following one. The function $f \in \ell^\infty(\mathbb{Z})$ is almost periodic iff there exists a minimal equicontinuous flow $(X,T)$, a continuous function $F \in C(X)$ and a point $x_0 \in X$ with $$ f(n) = F(T^n x_0) \quad (n \in \mathbb{Z}). $$ Let $\mathcal{D}$ be the family of functions $f \in \ell^\infty(\mathbb{Z})$ such that: there exists a minimal distal flow $(X,T)$, a continuous function $F \in C(X)$ and a point $x_0 \in X$ with $$ f(n) = F(T^n x_0) \quad (n \in \mathbb{Z}). $$ (We say that $f$ is {\em coming} from the flow $(X,T)$.) Then $\mathcal{D}$ is a uniformly closed, translation invariant subalgebra of $\ell^\infty(\mathbb{Z})$, called {\em the algebra of distal functions}. For a fixed irrational $\alpha \in \mathbb{R}$ the function $$ n \mapsto e^{2\pi i n^2 \alpha} $$ is distal. To see this define the flow $(X,T)$ where $X = \mathbb{T}^2 = (\mathbb{R}/\mathbb{Z})^2$ (the two torus) and $T : X \to X$ is given by $$ T(x,y) = (x + \alpha, y +2x + \alpha), \quad \pmod 1. $$ Let $F(x,y)=e^{2\pi i y}$ and observe that $T^n(0,0)= (n\alpha, n^2 \alpha)$. Thus $$ f(n) = F(T^n(0,0)) = e^{2\pi i n^2 \alpha}. $$ It is easy to see that $(X,T)$ is a distal flow and it follows that $f$ is a distal function. It is shown in \cite{Fur-81} that for every $\alpha \in \mathbb{R}$ and $k \in \mathbb{N}$ one can build a distal flow $(X,T)$ on some torus $X$ so that, with the right choice of $F \in C(X)$ and $x_0 \in X$, the function $$ f_{\alpha,k} : n \mapsto e^{2\pi i n^k \alpha} \quad (n \in \mathbb{Z}) $$ has the form $f_{\alpha,k}(n) = F(T^n x_0)$. Thus the family $\{f_{\alpha,k} : \alpha \in \mathbb{R}, k \in \mathbb{N}\}$ consists of distal functions. Let us call the smallest closed, translation invariant subalgebra of $\ell^\infty(\mathbb{Z})$ containing $\{f_{\alpha,k} : \alpha \in \mathbb{R}, k \in \mathbb{N}\}$, the {\em Weyl algebra} and denote it by $\mathcal{W}$. We have $\mathcal{W} \subset \mathcal{D}$. Recalling Bohr's theorem it is natural to ask whether $\mathcal{W} = \mathcal{D}$? (To the best of my knowledge this was first asked by P. Milnes.) The answer to this question is negative and we will see this in two different approaches. In section \ref{sec-Wse} we will show that every function of $\mathcal{W}$ is {\em strictly ergodic} (i.e. it comes from a strictly ergodic flow). Since in \cite{Fur-61} Furstenberg produces a minimal flow which is not strictly ergodic, this implies that $\mathcal{W} \not= \mathcal{D}$. Moreover, we will show that the family $\mathcal{S}^d$ of strictly ergodic functions in $\mathcal{D}$ does not form an algebra and hence in particular does not coincide with $\mathcal{W}$. In section \ref{sec-se} we shall see that a {\em multiplier for strict ergodicity}, either within $\mathcal{D}$ or in general (see definition below), is necessarily a constant. In Section \ref{sec-new} we give an example of a strictly ergodic, metric, distal flow which admits a non-strictly ergodic $2$-fold minimal self-joining. It then follows that the enveloping group of this flow is not strictly ergodic (as a $T$-flow). In section \ref{sec-nil} we show that the functions coming from a certain translation on a 3-dimensional nil manifold are not in $\mathcal{W}$, demonstrating again the inequality of $\mathcal{W}$ with $\mathcal{D}$. In particular a classical $\Theta$-function defined on $\mathbb{R}^3$ yields such a concrete function.
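The skew-product example above is easily checked numerically. The following Python sketch is only an illustration (with the arbitrary irrational choice $\alpha = \sqrt{2}-1$); it verifies that $T^n(0,0) = (n\alpha, n^2\alpha) \pmod 1$, so that $F(T^n(0,0)) = e^{2\pi i n^2 \alpha}$.

\begin{verbatim}
import numpy as np

alpha = np.sqrt(2) - 1            # an example irrational number

def T(x, y):
    # skew product on the 2-torus: (x, y) -> (x + alpha, y + 2x + alpha)
    return (x + alpha) % 1.0, (y + 2*x + alpha) % 1.0

x, y = 0.0, 0.0
for n in range(1, 6):
    x, y = T(x, y)
    assert np.isclose(y, (n*n*alpha) % 1.0)   # T^n(0,0) = (n a, n^2 a)
    print(n, np.exp(2j*np.pi*y))              # f(n) = e^{2 pi i n^2 alpha}
\end{verbatim}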
\begin{rmk} \begin{enumerate} \item Independently of us, and at about the same time, E. Salehi has also shown that $\mathcal{W} \subsetneq \mathcal{D}$, \cite{S}. See also \cite{Kn-67} and \cite{JN-10}. \item In his 2016 paper \cite{R} Juho Rautio presents an in-depth study of the flow $|\mathcal{W}|$. The main results of this definitive paper are as follows. \begin{enumerate} \item[(i)] The flow $|\mathcal{W}|$ has quasi-discrete spectrum (see \cite{H-P}), and is in fact the universal quasi-discrete spectrum flow; it admits every minimal quasi-discrete spectrum flow as a factor. Thus the Weyl algebra $\mathcal{W}$ coincides with the algebra generated by the minimal systems having quasi-discrete spectrum. \item[(ii)] An example of a factor of the flow $|\mathcal{W}|$ which does not have quasi-discrete spectrum is constructed. \item[(iii)] An explicit description of the topological and algebraic structures of the right topological group $|\mathcal{W}|$ is given. \end{enumerate} \end{enumerate} \end{rmk} We conclude this introduction with a brief description of the interplay between functions in $\ell^\infty(\mathbb{Z})$, $T$-subalgebras of $\ell^\infty(\mathbb{Z})$ and pointed flows. (A {\em pointed flow} $(X, x_0, T)$ is a flow with a distinguished point $x_0 \in X$ whose orbit is dense, i.e. $\overline{\{T^n x_0 : n \in \mathbb{Z}\}} = X$.) Starting with a function $f \in \ell^\infty(\mathbb{Z})$ one can form the flow $X_f$, the closure of the orbit of $f$ under translation in the {\em Bebutov flow} $(\mathbb{R}^\mathbb{Z},T)$ (with product topology). We usually consider this as a pointed flow $(X_f, f, T)$. Given a pointed flow $(X, x_0, T)$ we let $$ al(X,x_0) = \{ f \in \ell^\infty(\mathbb{Z}) : \exists \ F \in C(X), \ f(n) = F(T^n x_0)\ (n \in \mathbb{Z})\}. $$ This is a {\em $T$-subalgebra} of $\ell^\infty(\mathbb{Z})$, i.e. a translation invariant, uniformly closed subalgebra, closed under conjugation, which is isometrically isomorphic to $C(X)$. The Gelfand space $|\mathcal{A}|$ of a $T$-subalgebra $\mathcal{A}$ of $\ell^\infty(\mathbb{Z})$ is a flow under $T$, the homeomorphism induced on $|\mathcal{A}|$ by translation on $\mathcal{A}$. Again we consider $|\mathcal{A}|$ as a pointed flow where the multiplicative functional $f \mapsto f(0); \ \mathcal{A} \to \mathbb{C}$, is the distinguished point. We then have $|al(X_f)| \cong X_f, \ al(|\mathcal{A}|) = \mathcal{A}$ and $al(X_f) = al(f)$, where the latter is the smallest $T$-subalgebra containing $f$. If $\{(X_i, x_i, T)\}_{i \in I}$ is a family of pointed flows then their sup: $\bigvee_{i \in I}(X_i, x_i)$ is the subflow of the product flow $\prod_{i \in I} X_i$ which is the orbit closure of the point $x \in \prod_{i \in I} X_i$ defined by $x(i) = x_i$. We consider this flow as a pointed flow with distinguished point $x$. The smallest $T$-subalgebra containing $al(X_i, x_i)$, $i \in I$, is just $al(\bigvee_{i \in I}(X_i, x_i), x)$. A function $f \in \ell^\infty(\mathbb{Z})$ is called a {\em multiplier for strict ergodicity} (within $\mathcal{D}$) if for every strictly ergodic pointed (distal) flow $(X, x_0, T)$, the flow $(X_f, f) \vee (X, x_0)$ is strictly ergodic (and distal). In Section \ref{sec-nil} we assume some familiarity with R. Ellis' algebraic theory of topological dynamics (see \cite{Ellis} or \cite{Gl1}). I thank Professor Benjamin Weiss for reading parts of the paper and correcting several mistakes.
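To fix ideas, here is perhaps the simplest instance of the correspondence $f \mapsto (X_f, f)$, a standard example recorded only for orientation. Take $f(n) = e^{2\pi i n \alpha}$ with $\alpha$ irrational. Translation by $k$ sends $f$ to $e^{2\pi i k \alpha} f$, so the orbit closure of $f$ in the Bebutov flow is $X_f = \{c f : |c| = 1\} \cong \mathbb{T}$, on which translation acts as the rotation $R_\alpha$; thus
$$ (X_f, f, T) \cong (\mathbb{T}, 0, R_\alpha), \qquad al(f) = \overline{\textup{span}}\,\{ n \mapsto e^{2\pi i n l \alpha} : l \in \mathbb{Z}\}, $$
the algebra of almost periodic functions with frequencies in $\mathbb{Z}\alpha$.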
\vspace{3 mm} \section{$|\mathcal{W}|$ is strictly ergodic}\label{sec-Wse} Let $(X,T) \overset{\pi}{\to} (Y,T)$ be a homomorphism of minimal flows. $\pi$ is an {\em almost periodic extension} iff there exists a minimal flow $(\tilde{X}, T)$ and homomorphisms $\sigma$ and $\rho$ such that the diagram \begin{equation*} \xymatrix { \tilde{X} \ar[dr]_{\rho} \ar[r]^{\sigma} & X \ar[d]^{\pi} \\ & Y } \end{equation*} is commutative and $\rho$ is a group extension (i.e., there exists a compact group $K$ of automorphisms of $(\tilde{X},T)$ such that $(\tilde{X}/K, T) \cong (Y,T)$). If $(X,T) \overset{\pi}{\to} (Y,T)$ is an almost periodic extension and $\nu$ is a $T$-invariant probability measure on $Y$, there exists a canonically defined $T$-invariant measure $\mu$ on $X$ as follows. Let $\lambda$ be the Haar measure on $K$. Define $\tilde{\mu}$ on $\tilde{X}$ by $$ \int f(x)\, d\tilde{\mu}(x) = \int \int f(xk) \, d\lambda(k) \, d\nu(\rho(x)) $$ and put $\mu = \tilde{\mu} \circ \sigma^{-1}$. (We say that $\mu$ is the {\em Haar lift} of $\nu$.) By the distal structure theorem of Furstenberg \cite{Fur-63}, (which asserts that every minimal distal flow can be represented as a tower of almost periodic extensions) we see, using the procedure described above, that every minimal distal flow carries a canonically defined invariant measure. We recall the following theorem of Furstenberg \cite{Fur-61} (see also \cite[Theorem 3.30]{Gl2}): \begin{thm}\label{unique} Let $(X,T) \overset{\pi}{\to} (Y,T)$ be an almost periodic extension. Let $\nu$ be a $T$-invariant probability measure on $Y$ and let $\mu$ be its Haar lift on $X$. Suppose $\mu$ is ergodic; then for every invariant probability measure $\mu_1$ on $X$ with $\mu_1 \circ \pi^{-1} = \nu$, we have $\mu_1 = \mu$. In particular, if $(Y,T)$ is strictly ergodic and $\mu$ is ergodic, then $(X,T)$ is strictly ergodic. \end{thm} \begin{cor}\label{cor-fur} Let $(X,T)$ be a minimal distal flow, $\mu$ its canonically defined measure. If $\mu$ is ergodic then $(X,T)$ is strictly ergodic. \end{cor} \begin{proof} Transfinite induction on the Furstenberg tower of $(X,T)$ (see \cite{Fur-63}). \end{proof} \begin{lem}[Berg]\label{lem-berg} Let $(X,T,\mu)$ and $(Y,T,\nu)$ be measure preserving ergodic processes. Suppose $(\tilde{X}, T, \tilde{\mu})$ and $(\tilde{Y}, T, \tilde{\nu})$ are disjoint, where the latter are the largest Kronecker factors of $(X,T,\mu)$ and $(Y,T,\nu)$ respectively. Then $(X \times Y, T \times T, \mu \times \nu)$ is ergodic. \end{lem} \begin{cor}\label{cor-berg} Let $(X,T,\mu)$ and $(Y,T,\nu)$ be strictly ergodic distal flows. Suppose that as measure preserving processes the corresponding largest Kronecker factors $(\tilde{X}, T, \tilde{\mu})$ and $(\tilde{Y}, T, \tilde{\nu})$ are disjoint, then $(X \times Y, T \times T, \mu \times \nu)$ is strictly ergodic. \end{cor} \begin{proof} The disjointness of the measure preserving processes $(\tilde{X}, T, \tilde{\mu})$ and $(\tilde{Y}, T, \tilde{\nu})$ implies that the largest equicontinuous factors of $(X,T)$ and $(Y,T)$ are disjoint. This implies that $(X,T)$ and $(Y,T)$ are topologically disjoint and it follows that $(X \times Y, T \times T, \mu \times \nu)$ is a minimal distal flow. Obviously $\mu \times \nu$ is the canonical measure on this distal flow and the corollary follows now from Lemma \ref{lem-berg} and Corollary \ref{cor-fur}.
\end{proof} \begin{lem}\label{lem-se} For a fixed $\beta \in \mathbb{R}$ and $m \in \mathbb{N}$ consider the function $$ f(n) = e^{2\pi i \beta (n + n^2 + \cdots + n^m)} $$ and let $X_f$ be the associated flow. Then \begin{enumerate} \item[(i)] $X_f$ is strictly ergodic. \item[(ii)] The functions $e^{2\pi i \beta n^j} \ (j=1,2,\dots,m)$ are all in $al(X_f)$. \item[(iii)] If $\beta$ is rational $X_f$ is finite hence Kronecker. Otherwise $(\mathbb{T}, R_\beta)$ is the largest Kronecker factor of $(X_f, T, \mu)$, where $\mu$ is the unique $T$-invariant measure on $X_f$. \end{enumerate} \end{lem} \begin{proof} We prove the case $m = 3$. (This will indicate the proof in the general case which is cumbersome to write.) In this case \begin{gather*} f(n) = \exp(2\pi i \beta (n + n^2 + n^3)) \\ f(n+k) = \exp(2\pi i \beta ((n +k) + (n +k)^2 + (n +k)^3)). \end{gather*} Let $k_j \beta \to x, \ k_j^2 \beta \to y, \ k_j^3\beta \to z$ for some sequence $k_j \to \infty$. Then \begin{align*} \lim_{j \to \infty} f(n + k_j) & = g_{x, y, z}(n) \\ & = \exp(2\pi i (n(3y + 2x + \beta) + n^2(3x + \beta) + n^3 \beta + x + y +z)), \end{align*} $g_{x, y,z} \in X_f$ and \begin{align*} T g_{x,y,z}(n) & = g_{x,y,z}(n+1)\\ & = \exp\{2\pi i (n[3(y + 2x + \beta) + 2(x + \beta) + \beta] + n^2[3(x + \beta) + \beta] + n^3 \beta \\ & \ \ \ + (x + \beta) + ( y + 2x + \beta) + (z + 3x + 3y + \beta))\}\\ & = g_{x+\beta, y + 2x + \beta, z + 3x + 3y + \beta}(n). \end{align*} Thus $(X_f, T)$ can be identified with the orbit closure of $(0, 0, 0) \in \mathbb{T}^3$ under the transformation $$ T(x,y,z) = (x +\beta, y +2x + \beta, z + 3x +3y + \beta). $$ When $\beta$ is irrational $X_f = \mathbb{T}^3$ and $(X_f,T)$ is strictly ergodic by \cite{Fur-61}. Now $T^n(0,0,0)= (n \beta, n^2 \beta, n^3 \beta)$ and if we let $\pi_j : \mathbb{T}^3 \to \mathbb{T}$ be the projection on the $j$th coordinate $(j = 1,2,3)$, we have $$ \exp(2 \pi i(\pi_j \circ T^n(0,0,0))) = \exp(2\pi in^j \beta) \in al(X_f). $$ By comparing Fourier expansions one can show that $f(T(x,y,z)) = \lambda f(x,y,z)$ with $|\lambda| =1, \ f \in C(X_f)$ implies $f(x,y,z) = f(x)$ and $\lambda = e^{2 \pi i \theta}$, where $\theta \in \mathbb{Z} \beta$. This proves (iii). \end{proof} \begin{thm}\label{Wse} The flow $(|\mathcal{W}|, T)$ is strictly ergodic. Hence every function in $\mathcal{W}$ is strictly ergodic; i.e., $f \in \mathcal{W}$ implies $(X_f,T)$ is strictly ergodic. \end{thm} \begin{proof} Let $\mathcal{W}_0$ be the algebra generated by the functions $$ \{\exp(2 \pi i n^k \alpha) : \alpha \in \mathbb{R}, \ k \in \mathbb{N}\}. $$ An element $f \in \mathcal{W}_0$ has the form $$ f(n) = \sum_{j=1}^m a_j e^{2\pi i p_j(n)}, $$ where $p_j(n) = \sum_{l =0}^{v_j} \alpha_{j, l} n^l$. We construct a strictly ergodic flow $(X,T)$ such that for some $x_0 \in X$ and for each $j$ and $l$, $\exp(2\pi i \alpha_{j,l}n^l)$ is in $al(X,x_0)$. Choose a finite $\mathbb{Q}$-independent set $\{\beta_1, \beta_2, \dots, \beta_u\} \subset [0,1]$ such that $\{\alpha_{j,l}\}_{j,l}^{m,v_j}$ is contained in $\oplus_{s =1}^u \mathbb{Z} \beta_s$. Let $$ f_s(n) = \exp(2\pi i \beta_s(n + n^2 + \cdots + n^{L_s})) \quad (s =1,2,\ldots, u). $$ By Lemma \ref{lem-se} the functions $\exp(2\pi i \beta_s n^l) \ (1 \leq l \leq L_s)$ are in $al(X_{f_s})$, $(X_{f_s},T)$ is strictly ergodic and the $\beta_s$ rotation is its largest Kronecker factor. By Corollary \ref{cor-berg} $X = \prod_{s =1}^u X_{f_s}$ is also strictly ergodic and $f \in al(X)$. Thus $f \in \mathcal{W}_0$ implies $f$ is strictly ergodic.
Since $$ |\mathcal{W}| = \lim_{\leftarrow} \{ X_f : f \in \mathcal{W}_0\}, $$ it follows that $|\mathcal{W}|$ is strictly ergodic and our theorem follows. \end{proof} Now, in view of Theorem \ref{Wse}, and the example of Furstenberg in \cite{Fur-61} of a non-uniquely ergodic minimal distal flow on $\mathbb{T}^2$, we obtain the following corollary. \begin{cor} The inclusion $\mathcal{W} \subset \mathcal{D}$ is proper. More specifically, there exists a metric minimal distal flow $(X,T)$ which is not strictly ergodic and hence is not in $\mathcal{W}$. \end{cor} {\bf The following example shows that the family $\mathcal{S}^d$ of distal, strictly ergodic functions does not form an algebra (or even a vector space). In particular $\mathcal{S}^d$ contains $\mathcal{W}$ but does not coincide with it.} As in \cite{Fur-61} let $v_1 =1, \ v_{k+1} = 2^{v_k} + v_k + 1, \ n_k = 2^{v_k}, \ n_{-k} =- n_k $ and $ \alpha = \sum_{k=1}^\infty n_k^{-1}$. It was shown in \cite{C-G} that for some $t$, $0 \leq t \leq 1$, the function $$ h(x) = t\sum_{k \not= 0}(e^{2\pi i n_k \alpha} -1) e^{2\pi i n_k x} $$ is distal and strictly ergodic and by the same argument so is the function $$ g(x) = t\sum_{k \not= 0}(e^{2\pi i n_k \alpha} -1)(1 + \frac{1}{|k|})e^{2\pi i n_k x}. $$ However the function $$ f(x) = g(x) - h(x) = t\sum_{k \not= 0} \frac{1}{|k|}(e^{2\pi i n_k \alpha} - 1)e^{2\pi i n_k x} $$ is not strictly ergodic by \cite{Fur-61}. \section{Multipliers for strict ergodicity are constants}\label{sec-se} In order to motivate our next problem we quickly survey the following well known results from topological dynamics. Let us call a function $f \in \ell^\infty(\mathbb{Z})$ {\em minimal} if there is a minimal flow $(X,T)$, a point $x_0 \in X$ and a function $F \in C(X)$ with $$ f(n) = F(T^n x_0). $$ Let $\mathcal{M}$ be the family of all minimal functions in $\ell^\infty(\mathbb{Z})$. It is well known that $\mathcal{M}$ is not preserved under sums. Using Zorn's lemma we can however form the family $\{\mathcal{M}_\alpha\}$ of maximal $T$-subalgebras of $\mathcal{M}$. These are pairwise isometric (as Banach algebras) and in fact there exists a universal minimal flow $(M,T)$ such that each $\mathcal{M}_\alpha$ is of the form $al(M,m)$ for some $m \in M$. The object we are interested in now is the $T$-algebra $\mathcal{L} = \cap \mathcal{M}_\alpha$. One can easily see that $f \in \mathcal{L}$ iff the corresponding minimal pointed flow $(X_f,f)$ has the following property: For every minimal pointed flow $(Y,y_0)$ the flow $(X_f \vee Y, (f,y_0))$ is minimal. We say that $f$ is a {\em multiplier for minimality}. One more definition. A minimal flow $(X,T)$ is called {\em point distal} if there exists a point $x_0 \in X$ such that no point of $X$ other than $x_0$ is proximal to $x_0$. We call $f \in \ell^\infty(\mathbb{Z})$ {\em point distal} if it is coming from a point distal flow; i.e. if there is a point distal flow $(X,T)$ and a point $x_0 \in X$ as above with $$ f(n) = F(T^n x_0) $$ for some $F \in C(X)$. We now have the following surprising characterization of multipliers for minimality, \cite[Theorem III.8]{AH}. \begin{thm}\label{thm-pointdistal} $\mathcal{L} = \cap \mathcal{M}_\alpha$ coincides with the algebra of point distal functions. \end{thm} For later use we also need to mention the following. \begin{thm}\label{disj} Every minimal weakly mixing flow is disjoint from every point distal flow.
\end{thm} (A flow $(X,T)$ is {\em weakly mixing} if $(X \times X, T \times T)$ is topologically ergodic, i.e., an open invariant non-empty subset of $X \times X$ is necessarily dense. It is {\em strongly mixing} if for every non-empty open sets $A$ and $B$ the set $N(A,B) = \{n \in \mathbb{Z} : T^n A \cap B \not= \emptyset\}$ has a finite complement.) Now back to strictly ergodic functions. We have seen that the set $\mathcal{S}^d$ of strictly ergodic distal functions does not form an algebra. The same example shows that neither is the family $\mathcal{S}$ of strictly ergodic functions an algebra. Using Zorn's lemma again we can form the two families $\{\mathcal{S}_\beta\}$ and $\{\mathcal{S}_\gamma^d\}$ of maximal $T$-subalgebras in $\mathcal{S}$ and $\mathcal{S}^d$, respectively. Motivated by the example of point distal functions we pose the problem of identifying $\cap \mathcal{S}_\beta = \mathcal{S}_0$ and $\cap \mathcal{S}_\gamma^d = \mathcal{S}_0^d$. As in the case of $\cap \mathcal{M}_\alpha$ it is easy to see that being an element of $\cap \mathcal{S}_\beta$ is the same as being a multiplier for strict ergodicity. I.e. $f \in \cap \mathcal{S}_\beta$ iff for every strictly ergodic flow $(Y,T)$ and every point $y \in Y$ the flow $(X_f \vee Y, (f,y))$ is strictly ergodic. A similar statement holds for $\cap \mathcal{S}_\gamma^d$. The answer to our question is disappointing. \begin{thm}\label{trivial} The only multipliers for strict ergodicity, as well as for strict ergodicity within $\mathcal{D}$, are the constant functions; i.e. $\mathcal{S}_0 = \mathcal{S}_0^d = \mathbb{C}$. \end{thm} Notice that $\mathcal{S}_0 = \cap \mathcal{S}_\beta = \mathbb{C}$ does not automatically imply $\mathcal{S}^d_0 = \cap \mathcal{S}_\gamma^d = \mathbb{C}$; it is not at all clear that for a given $\beta$ the algebra $\mathcal{S}_\beta \cap \mathcal{D}$ is not a proper subalgebra of one of the maximal algebras $\mathcal{S}_\gamma^d$. For the proof we need the following strengthening of the Jewett-Krieger theorem due to Lehrer \cite{Leh}. \begin{thm}\label{lehr} If $(\Omega, \mathcal{F},\mu,T)$ is a properly ergodic process then there exists a strictly ergodic flow $(Y,T)$ with unique invariant measure $\nu$ such that $(Y,T)$ is topologically strongly mixing and $(Y, \mathcal{B}, \nu, T)$ ($\mathcal{B} =$ Borel field of $Y$) is measurably isomorphic to $(\Omega, \mathcal{F}, \mu, T)$. \end{thm} For the distal case we need the following theorem which can be deduced from \cite{Fur-61}. For $\beta \in \mathbb{R}$ we let $(X_\beta, R_\beta)$ be the rotation by $\beta$ of $\mathbb{T} = X_\beta$ when $\beta$ is irrational, and $\{1,2,\ldots,n\} = X_\beta$ when $\beta$ is a rational number of order $n$ (considered as an element of $\mathbb{T}$). \begin{thm}\label{ab} Let $\beta \in \mathbb{R}$ be given, then there exists an irrational $\alpha \in \mathbb{R}$ such that $\alpha$ and $\beta$ are rationally independent, and a strictly ergodic distal flow $(X,T)$ with invariant measure $\mu$ such that \begin{enumerate} \item[(i)] $(X, \mu,T)$ is measure theoretically isomorphic with $(X_\alpha \times X_\beta, R_\alpha \times R_\beta)$. \item[(ii)] There exists a continuous homomorphism $(X,T) \to (X_\alpha,R_\alpha)$. \item[(iii)] The flows $(X,T)$ and $(\mathbb{T}, R_\beta)$ are topologically disjoint. \end{enumerate} \end{thm} \begin{proof} We go back to the construction described by Furstenberg \cite{Fur-61} on page 385.
As in \cite{Fur-61} let $v_1 =1, \ v_{k+1} = 2^{v_k} + v_k + 1, \ n_k = 2^{v_k}, \ n_{-k} =- n_k $ and $ \alpha = \sum_{k=1}^\infty n_k^{-1}$. We have \begin{equation}\label{ineq} |n_k \alpha - [n_k \alpha]| < \frac{2 \cdot 2^{v_k}}{2^{v_{k+1}}} = 2^{-n_k}. \end{equation} Now set $$ h(x) = \sum_{k \not= 0}\frac{1}{|k|}(e^{2\pi i n_k \alpha} -1) e^{2\pi i n_k x} $$ and let $$ g(e^{2\pi i x}) = e^{2\pi i t h(x)}, $$ where $t$ is yet undetermined. By (\ref{ineq}) $h(x)$ and therefore $g(\zeta)$ are $C^{\infty}$ functions. Now we have $h(x) = H(x+\alpha) - H(x)$ where $$ H(x) = \sum_{k \not= 0}\frac{1}{|k|}e^{2\pi i n_kx}. $$ As in \cite{Fur-61} we observe that $H(x)$ is in $L^2(0,1)$, hence defines a measurable function, which however cannot correspond to a continuous function, its Fourier series being non-summable at $\theta =0$ (see e.g. \cite[Theorem 3.4, page 89]{Z}). We conclude that there is some $t$ such that, {\bf for all $k \in \mathbb{N}$}, the function $e^{2 \pi i k t H(x)}$ will not be a continuous function (see \cite[Proposition A1, page 83]{EH}). Taking $R(e^{2 \pi i x}) = e^{2 \pi i t H(x)}$ we have \begin{equation*}\label{chom} R(e^{2 \pi i \alpha} \zeta) / R(\zeta) = g(\zeta) \end{equation*} with $R(\zeta)$ measurable but not continuous. Now consider the transformation on $X = \mathbb{T}^2$ $$ T(x, y) = T_\phi(x,y) = (x + \alpha, y +t h(x) + \beta), $$ defined by means of the cocycle $\phi(x) : = t h(x) + \beta$ \footnote{If the $\alpha$ we just constructed and the given $\beta$ happen to be rationally dependent, we can slightly modify the construction of $\alpha$ so that it becomes independent of $\beta$.}. The claim (i) of the theorem follows because the function $\phi(x) = th(x) + \beta = tH(x +\alpha) - tH(x) + \beta$ is co-homologous to $\beta$. Claim (ii) is clear. Thus the measure theoretical system $(X, \mu, T)$ is ergodic and by a theorem of Furstenberg (see e.g. \cite[Theorem 4.3]{Gl2}) the flow $(X,T)$, which is a $\mathbb{T}$-extension of the strictly ergodic flow $(\mathbb{T},R_\alpha)$, is also strictly ergodic. Next observe that there is no nontrivial intermediate extension $$ (X,T) \to (Y,T) \overset{\rho}{\to} (\mathbb{T}, R_\alpha), $$ with $\rho$ finite to one. In fact, any intermediate flow $(Y,T)$ is obtained as a quotient $Y \cong X / K$, where $K$ is a closed subgroup of the group $\mathfrak{A}: = \{A_d : d \in \mathbb{T}\}$ of automorphisms of the flow $(X,T)$ of the form $A_d(x,y) = (x, y+d)$. But any proper closed subgroup $K$ of $\mathfrak{A}$ is finite, so that $Y \cong X/K \overset{\rho}{\to} (\mathbb{T}, R_\alpha)$ cannot be finite to one. Now in this situation one can check that the rotation $(\mathbb{T}, R_\alpha)$ is the maximal equicontinuous factor of $(X,T)$ iff \begin{quote} For every $\lambda \in \mathbb{C}$, for every $0 \neq k \in \mathbb{Z}$, the functional equation $$ f(x + \alpha) e^{2\pi i [k(t h(x) + \beta)]}= \lambda f(x) $$ has no non-zero continuous solution. \end{quote} Suppose $f$ is such a non-zero solution. Then $$ \frac{f(x + \alpha)}{f(x)} \cdot \frac{e^{2\pi i k t H(x + \alpha)}}{e^{2\pi i kt H(x)}} = \lambda e^{- 2\pi i k \beta}. $$ Writing $F(x) = e^{2\pi i kt H(x)}$ and $b =e^{2\pi i k \beta}$, we have $$ \frac{f(x + \alpha)}{f(x)} \cdot \frac{F(x + \alpha)}{F(x)} = \lambda b^{-1}. $$ Thus the function $f\cdot F$ is an eigenfunction of the flow $(\mathbb{T}, R_\alpha)$, hence, for some $l \in \mathbb{Z}$, $\lambda b^{-1} = e^{2\pi i l \alpha}$, and $f \cdot F$ has the form $e^{2\pi i l x}$. This contradicts the fact that $F$ is not equal a.e.
to a continuous function. Therefore $(\mathbb{T}, R_\alpha)$ is the maximal equicontinuous factor of $(X,T)$ and it follows that indeed $(X,T)$ and $(\mathbb{T}, R_\beta)$ are topologically disjoint. This completes the proof of part (iii). \end{proof} We are now ready for the proof of Theorem \ref{trivial}. \begin{proof}[Proof of Theorem \ref{trivial}] (i) $\mathcal{S}_0 = \mathbb{C}$. Since every strictly ergodic flow is minimal we have for each $\beta$ an $\alpha$ such that $\mathcal{S}_\beta \subset \mathcal{M}_\alpha$. On the other hand, given $\mathcal{M}_\alpha$ and a strictly ergodic flow $(X,T)$, there is a point $x \in X$ such that $al(X,x) \subset \mathcal{M}_\alpha$, so that each $\mathcal{M}_\alpha$ contains exactly one $\mathcal{S}_\beta$. It follows that $$ \mathcal{S}_0 = \cap \mathcal{S}_\beta \subset \cap \mathcal{M}_\alpha = \mathcal{L}. $$ Now fix an element $ f \in \mathcal{S}_0$, and let $(X,x_0) = (X_f, f)$. Since $f \in \mathcal{L}$ it follows that $(X, x_0)$ is point distal. Since $f \in \mathcal{S}_0$ the flow $(X,T)$ is strictly ergodic with a unique invariant measure $\mu$. Use Lehrer's theorem, Theorem \ref{lehr}, to produce a strictly ergodic, topologically strongly mixing flow $(Y,T)$, with invariant measure $\nu$, so that $(Y, \nu, T)$ is measure theoretically isomorphic to $(X, \mu, T)$; say $X \overset{\phi}{\to} Y$. By Theorem \ref{disj} the flow $(X \times Y, T \times T)$ is minimal hence $(X,x_0)\vee (Y, y_0) = X \times Y$ for every choice of $y_0 \in Y$. Now on $X \times Y$ we have the following $T \times T$-invariant measures $$ \mu \times \nu = \int (\delta_x \times \nu)\, d\mu(x) $$ and $$ \int (\delta_x \times \delta_{\phi(x)})\, d\mu(x). $$ By the uniqueness of the disintegration of measures these two invariant measures coincide iff $\delta_{\phi(x)} = \nu$ a.e., iff $\nu$ is a point mass, iff $(X,T)$ is a trivial one point flow, iff $f$ is a constant. But by assumption $f$ is a multiplier for strict ergodicity so that $(X,x_0,T)\vee (Y, y_0,T) = (X \times Y, T \times T)$ is strictly ergodic and we conclude that $f$ is a constant. (ii) $\mathcal{S}_0^d = \mathbb{C}$. Let $\beta \in \mathbb{R}$ be given and let $\alpha$ and $(X,T)$ be as in Theorem \ref{ab}. Since $(X_\alpha, R_\alpha)$ and $(X_\beta, R_\beta)$ are topologically disjoint so are $(X,T)$ and $(X_\beta,R_\beta)$. This follows since by Theorem \ref{ab} $(X_\alpha, R_\alpha)$ is the maximal Kronecker factor of $(X,T)$, and e.g., by Theorem 4.2 in \cite{EGS}. Thus $(X \times X_\beta, T \times R_\beta) = (X,x_0,T)\vee (X_\beta, x_\beta, R_\beta)$ is minimal. Now for $f(n) = e^{2\pi i n \beta}$ we have $|al(f)| = (X_\beta, R_\beta)$ and if $f \in \mathcal{S}_0^d$ then $f$ is a multiplier for strict ergodicity within $\mathcal{D}$ and $(X \times X_\beta, T \times R_\beta)$ is strictly ergodic. However, if $\mu$ and $\nu$ are the invariant measures on $X$ and $X_\beta$ respectively and $X \overset{\phi}{\to} X_\beta$ is the measure theoretical factor map, we can disintegrate $\mu$ over $\nu$ $$ \mu = \int_{X_\beta} \mu_{z} \, d\nu(z) $$ ($\mu_z$ a probability measure on $\phi^{-1}(z)$ for a.e. $z \in X_\beta$) and then $\mu \times \nu$ and $\int (\mu_z \times \delta_z) \, d\nu(z)$ are two different invariant measures on $(X \times X_\beta, T \times R_\beta)$, a contradiction.
Thus for every $\beta \in \mathbb{R}$, $f_\beta(n) = e^{2\pi i n \beta}$ is not in $\mathcal{S}_0^d$ and since $\mathcal{S}_0^d$ is a distal algebra it follows from the structure theorem for minimal distal flows \cite{Fur-63} that $\mathcal{S}_0^d = \mathbb{C}$. \end{proof} \begin{rmk} It is interesting to compare the discussion above, about multipliers for strict ergodicity, with the study of multipliers for minimal weakly mixing flows in \cite{EG}. There it was shown that the collection of minimal functions which are multipliers for weak mixing, which by definition is the intersection of the family $\{\mathcal{A}_\delta\}$ of maximal weakly mixing subalgebras of $\ell^{\infty}(\mathbb{Z})$ (sitting in the algebra which corresponds to the algebra $C(M,u)$, where $M$ is the universal minimal flow), coincides with the algebra of {\em purely weakly mixing functions} hence, in particular, is nontrivial. \end{rmk} \section{A strictly ergodic distal flow with a non strictly ergodic self-joining}\label{sec-new} It is not hard to see that the family of Weyl minimal flows is closed under self joinings. As a consequence the enveloping group of a Weyl flow is again a Weyl flow, hence strictly ergodic. The same assertions hold for the family of pro-nil flows (i.e. those flows that correspond to subalgebras of the $T$-algebra generated by all the nil-functions; see Section \ref{sec-nil} below). Is it true that the enveloping group of every strictly ergodic distal flow is strictly ergodic? Our goal in this section is to construct a strictly ergodic distal flow with a non strictly ergodic self-joining. The enveloping group of such a flow is clearly not strictly ergodic. We start with the Anzai-Furstenberg example, the simplest example of a minimal distal but not equicontinuous flow. On the two torus $\mathbb{T}^2$ we let $$ A(x,z) = (x + \alpha, z +x) $$ where $\alpha$ is irrational. We observe that for any $\beta \in \mathbb{R}$ the self joining $(\mathbb{T}^2,(0,0),A) \vee (\mathbb{T}^2,(\beta,0),A)$ admits the flow $(\mathbb{T}, R_\beta)$ as a factor, with factor map $$ ((x,z),(x',z')) \mapsto z' -z. $$ Next consider the strictly ergodic distal flow $(X,T)= (\mathbb{T}^2, T_\phi)$ constructed in Section \ref{sec-se} with an irrational $\beta$ independent of $\alpha$, and define $S : \mathbb{T}^3 \to \mathbb{T}^3$ by $$ S (x,y,z) = (x +\alpha, y + \phi(x), z + x). $$ \begin{claim} The flow $$ (W,S) := (\mathbb{T}^3,S) \cong (X, (0,0), T) \vee (\mathbb{T}^2,(0,0), A) = (X \underset{(\mathbb{T}, R_\alpha)}{\times}\mathbb{T}^2, T \times A) $$ is strictly ergodic. \end{claim} \begin{proof} Let $\lambda$ be Lebesgue measure on $\mathbb{T}^3$. If $f(x,y,z)$ is an $S$-invariant function in $L^2(\lambda)$ then using the Fourier expansion $$ f(x,y,z) = \sum_{k \in \mathbb{Z}} a_k(x,y) e^{2\pi i kz}, $$ we can check that $f$ is a constant, as follows. Since $f$ is invariant we have, for a.e. $(x, y, z) \in \mathbb{T}^3$, $$ f(x + \alpha,y + \phi(x),z + x) = \sum_{k \in \mathbb{Z}} a_k(x + \alpha ,y + \phi(x))e^{2\pi i kx} e^{2\pi i kz}. $$ Thus for $k \neq 0$, a.e. $$ a_k(x,y) = a_k(x + \alpha ,y + \phi(x))e^{2\pi i kx}. $$ By ergodicity of $(X,T_\phi)$ either $a_k = 0$ a.e. or it vanishes on a set of measure zero. Therefore we get an a.e. equality $$ \frac{a_k(T(x,y))}{a_k(x,y)} = e^{-2\pi i kx}.
$$ However, via the map $J : \mathbb{T}^2 \to \mathbb{T}^2, \ J(x,y) = (x, y + tH(x))$ the flow $(X,T_\phi)$ is measure theoretically isomorphic to the flow $$ T_{\alpha, \beta} : \mathbb{T}^2 \to \mathbb{T}^2, \quad T_{\alpha,\beta}(x,y) = (x + \alpha, y +\beta) $$ and thus, with $A_k(x,y) : = a_k \circ J (x,y)$, we get \begin{align*} A_k(x,y) & = (a_k \circ J)(x,y) = a_k(x, y +tH(x))\\ & = a_k(x +\alpha, y + tH(x) + th(x) + \beta) e^{2\pi i kx}\\ &= a_k(x +\alpha, y +t H(x+\alpha) + \beta) e^{2\pi i kx}\\ &= (a_k \circ J)(x +\alpha, y +\beta)e^{2\pi i kx} = A_k(x +\alpha, y + \beta)e^{2\pi i kx} \end{align*} hence $$ \frac{A_k(x +\alpha,y + \beta)}{A_k(x,y)} = e^{-2\pi i kx}. $$ Using again the Fourier decomposition of $A_k$ we conclude that for $k \neq 0$ we have $A_k(x,y) =0$, hence also $a_k =0$. Indeed, with $$ A_k(x,y) = \sum_{l \in \mathbb{Z}} b_l(x) e^{2\pi i ly}, $$ we get $$ A_k(x,y) = \sum_{l \in \mathbb{Z}} b_l(x) e^{2\pi i ly} = \sum_{l \in \mathbb{Z}} b_l(x +\alpha) e^{2\pi i ly} e^{2\pi i l \beta} e^{2\pi i kx}, $$ hence, for $l \neq 0$, $$ b_l(x) =b_l(x +\alpha) e^{2\pi i l \beta} e^{2\pi i kx}. $$ Next let $b_l(x) = \sum_{m \in \mathbb{Z}} d_m e^{2\pi i m x}$. Then, comparing coefficients we get $d_m = d_{m - k}e^{2 \pi i (l\beta+ (m -k)\alpha)}$, hence (the $d_m$ being square summable and $k \neq 0$) $d_m = 0$, hence $b_l(x) =0$. Thus $A_k(x,y) = b_0(x)$, hence $b_0(x) = b_0(x +\alpha) e^{2 \pi i kx}$ and again Fourier expansion shows that $b_0 =0$, and consequently $A_k =0$. Finally for $k =0$ the equation we get is $$ \frac{A_0(x +\alpha,y + \beta)}{A_0(x,y)} = 1, $$ hence, by ergodicity of the rotation $(x,y) \mapsto (x+\alpha, y+\beta)$, $A_0(x,y)$ is a constant and so is $f$. Thus the measure theoretical system $(W, \lambda, S)$ is ergodic and by a theorem of Furstenberg (see Theorem \ref{unique}) the flow $(W,S)$, which is a $\mathbb{T}$-extension of the strictly ergodic flow $(X,T)$, is also strictly ergodic. \end{proof} We will now show that the self-joining $$ M : = (W,(0,0,0),S) \vee (W, (\beta, 0,0),S) $$ is not strictly ergodic. In fact, since both $(X,T)$ and $(\mathbb{T}, R_\beta)$ are factors of $M$, it follows that also the flow $(X,x_0,T)\vee (\mathbb{T}, 0, R_\beta)$ (with $x_0 = (0,0)$) is a factor of $M$. However, as we have seen in the proof of Theorem \ref{trivial}, this flow $ (X,x_0,T)\vee (\mathbb{T}, 0, R_\beta) = (X \times \mathbb{T}, T \times R_\beta)$ is minimal but not strictly ergodic. Thus we conclude that also our minimal flow $M$ is not strictly ergodic. \begin{cor} There exists a metric, strictly ergodic, distal flow $(W,T)$ whose enveloping group $E(W,T)$ is not strictly ergodic. \end{cor} \begin{proof} Recall that, as flows, $$ E(W,T) \cong \bigvee_{w \in W} (W,w,T). $$ It follows that in the example above, $M$ is a factor of $E(W,T)$. \end{proof} \section{A nil-flow relatively disjoint from $\mathcal{W}$}\label{sec-nil} Our goal in this section is to show that the 3-dimensional nil-flow (by this we mean the time one, discrete flow in a one parameter transformation group), which is a strictly ergodic and distal flow, is relatively disjoint over its equicontinuous factor from $|\mathcal{W}|$. Thus a function $f \in \ell^\infty(\mathbb{Z})$ coming from this nil-flow - $f$ not almost periodic - will be an element of $\mathcal{S}^d$ but not an element of $\mathcal{W}$.
One such example is the $\Theta$-function: $$ f(n) = F(n\alpha, n\beta, \frac{n^2\alpha\beta}{2} + n \gamma), $$ where $$ F(x,y,z) = e^{2\pi i z} \sum_{m \in \mathbb{Z}} e^{2 \pi i m x} e^{-\pi(m+y)^2} $$ and $\alpha, \beta, \gamma \in \mathbb{R}$, $\alpha, \beta$ irrationals, rationally independent. Let $$ N=\biggl\lbrace\begin{pmatrix} 1&x&z\\ 0&1&y\\ 0&0&1\end{pmatrix} :x,y,z\in\mathbb {R}\bigg\}\ , $$ and $\Gamma \subset N$ the discrete co-compact subgroup with integer entries. Put $Z = N/\Gamma$ and let $$ T=\begin{pmatrix} 1&\alpha&\gamma + \frac12 (\alpha \beta) \\ 0&1&\beta\\ 0&0&1\end{pmatrix}. $$ It was shown in \cite{AGH} that the flow $(Z,T)$ (where $T(g\Gamma) = (Tg) \Gamma \ (g \in N)$) is strictly ergodic, distal and not equicontinuous; in fact the largest equicontinuous factor of $(Z,T)$ is the flow $(Y,T) = (\mathbb{T}^2, R_\alpha \times R_\beta) \cong (N/B\Gamma, T)$ where $$ B=\biggl\lbrace\begin{pmatrix} 1& 0 &z \\ 0&1&0\\ 0&0&1\end{pmatrix} : z \in \mathbb{R}\bigg\}. $$ We let $(Z,T) \overset{\pi}{\to} (Y,T)$ be the canonical homomorphism. In the next lemma we use the notations introduced in the proof of Theorem \ref{Wse}. Thus we let $$ X = X(\beta_1, \dots, \beta_u; L_1, \ldots, L_u) = \prod_{s =1}^u X_{f_s}, $$ where $$ f_s(n) = \exp(2\pi i \beta_s(n + n^2 + \cdots + n^{L_s})), $$ $(s =1,2,\ldots, u,\ n \in \mathbb{Z} )$, $\{\beta_1, \ldots, \beta_u\}$ is a $\mathbb{Q}$-independent subset of $\mathbb{R}$ and $L_1, \ldots, L_u$ are arbitrary positive integers. We have seen in the proof of Theorem \ref{Wse} that $|\mathcal{W}|$ is the inverse limit of the flows $X = X(\beta_1, \dots, \beta_u; L_1, \ldots, L_u)$. We observe that topologically each such $X$ has the form $\mathbb{T}^\nu \times K$ where $\nu$ is a positive integer and $K$ is a finite set. Moreover, if $\nu >1$ then $X$ is a $\mathbb{T}$-extension of a flow $X'$ on $\mathbb{T}^{\nu-1} \times K$ of the same form. Also note that $\pi_1(X) = \mathbb{Z}^\nu$, where $\pi_1$ denotes the fundamental group. \begin{lem}\label{lem-disj} Let $X = X(\beta_1, \dots, \beta_u; L_1, \ldots, L_u)$, where $\beta_1 = \alpha, \ \beta_2 = \beta$. Then $(Y,T)$ is also a factor of $(X,T)$ and $(X,T)$ is relatively disjoint from $(Z,T)$ (the nil-flow with parameters $\alpha, \beta, \gamma$) over their common equicontinuous factor $(Y,T)$; i.e. the flow $(X \underset{Y}{\times} Z, T \times T)$ is minimal, where $$ X \underset{Y}{\times} Z = \{(x, z) : \phi(x) = \pi(z)\} $$ and $\phi$ and $\pi$ are the homomorphisms of $X$ and $Z$, respectively, onto $(Y,T) \cong (\mathbb{T}^2, R_\alpha \times R_\beta)$. \end{lem} \begin{proof} Choose $x_0 \in X, \ z_0 \in Z$, with $\phi(x_0) = \pi(z_0) = y_0$ and let $$ (W, w_0, T) = (X, x_0, T) \vee (Z, z_0, T) \subset X \underset{Y}{\times} Z. $$ $(W,T)$ is a distal minimal flow. We let $W \overset{\psi}{\to} Z$ and $W \overset{\eta}{\to} X$ be the projections. If $(x, z), (x, z') \in \eta^{-1}(x)$ then by the commutativity of the diagram \begin{equation*} \xymatrix { & (W,T) \ar[dl]_\eta \ar[dr]^\psi & \\ (X,T) \ar[dr]_\phi & & (Z,T) \ar[dl]^\pi\\ & (Y,T) & } \end{equation*} $\pi(z) = \pi(z')$. By considering the relative regionally proximal relation $Q_\eta$ it is now clear that $\eta$ is an almost periodic extension. We let $F = \mathfrak{G}(Y,y_0), \ A = \mathfrak{G}(X,x_0), \ B = \mathfrak{G}(Z, z_0)$ be the corresponding Ellis groups. We have $B \vartriangleleft F$ with $F/B \cong \mathbb{T}$ and $\mathfrak{G}(W,w_0) = A \cap B : = D$.
The fact that $\eta$ is an almost periodic extension is equivalent to $A/D$ being a compact Hausdorff space with respect to its $\tau$-topology (see e.g. \cite{Gl1}). Put $D_0 = \cap_{\alpha \in A} \alpha D \alpha^{-1}$, then $D_0 \vartriangleleft A$ and $K = A/D_0$ is a compact Hausdorff topological group. Now our assertion that $X \underset{Y}{\times}Z$ is minimal is equivalent to $AB =F$. If the latter equality does not hold then the group $AB/B \subset F/B \cong \mathbb{T}$, is a proper closed subgroup of $\mathbb{T}$ hence finite. Consider the map $$ aD_0 \mapsto aB : A/D_0 \to F/B. $$ Since $D_0 \subset B$ this is an isomorphism of the group $A/D_0$ onto the finite group $AB/B$. Thus $A/D_0$ and a fortiori $A/D$ are finite sets. Since $(W,T)$ is distal this means that $(W,T) \overset{\eta}{\to} (X,T)$ is finite to one. Hence $\eta$ is a covering map and $\eta_* : \pi_1(W, w_0) \to \pi_1(X,x_0)$ is a monomorphism. We conclude that $\pi_1(W,w_0)$ is a subgroup of $\mathbb{Z}^\nu$ and in particular abelian. Let $(X,T) \overset{\theta}{\to}(X',T)$ be the $\mathbb{T}$-extension discussed above (just before Lemma \ref{lem-disj}). This fits into the following commutative diagram \begin{equation*} \xymatrix { & (W,T)\ar[dl]_{\eta} \ar[d]^{\iota} \ar[ddr]^{\psi}& \\ (X,T) \ar[d]_\theta & (W',T) \ar[dl]_{\eta'} \ar[dr]^{\psi'} & \\ (X',T) \ar[dr]_{\phi'} & & (Z,T) \ar[dl]^\pi\\ & (Y,T) & } \end{equation*} where by induction hypothesis (on $\nu$) the flow $$ (W',T) = X' \underset{Y}{\times}Z $$ is minimal. Now $W \overset{\iota}{\to} W'$ is a group extension. If the corresponding fiber group is $\mathbb{T}$, it follows that $(X \underset{Y}{\times}Z,T)$ is minimal and we are done. Otherwise $\iota$ is a finite to one extension and again we have that $\iota_* : \pi_1(W, w_0) \to \pi_1(W',w'_0)$ is an isomorphism. The flow $W' = X' \underset{Y}{\times} Z$ has a fiber bundle structure with $Z$ as base and $\mathbb{T}^{\nu-3} \times K$ as fiber. Thus with $M = {\psi'}^{-1}(z_0)$ we get from the sequence of inclusion maps $$ (M, w'_0) \overset{i}{\to} (W', w'_0) \overset{j}{\to} (W', M, w'_0), $$ the exact sequence $$ \mathbb{Z}^{\nu - 3}= \pi_1(M, w'_0) \overset{i_*}{\to} \pi_1(W', w'_0)\overset{j_*}{\to} \pi_1(W', M, w'_0) \cong \pi_1(Z, z_0). $$ It follows that $\pi_1(W', w'_0)$, which is abelian, is also a $\mathbb{Z}^{\nu- 3}$-extension of the non-abelian group $\pi_1(Z, z_0) = \Gamma$. This contradiction completes the proof. \end{proof} \begin{thm} The minimal flow $(Z,T)$ is relatively disjoint from $(|\mathcal{W}|, T)$ over its largest equicontinuous factor $(Y,T)$. \end{thm} \begin{proof} We have seen that the directed collection of flows $\{X(\beta_1, \dots, \beta_u; L_1, \ldots, L_u) \}$ generates $|\mathcal{W}|$ as its inverse limit. (The assumption $\beta_1 = \alpha, \ \beta_2 = \beta$ causes no loss in generality.) The theorem now follows from Lemma \ref{lem-disj} and the fact that relative disjointness over a fixed factor is preserved under inverse limits. \end{proof}
\section{Introduction} The relation between the topological entropy of the geodesic flow of a Riemannian manifold $(M,g)$ with non-positive sectional curvature and the critical exponent of the discrete group of isometries $\Gamma \cong \pi_1(M)$ acting freely on the universal covering $\tilde{M}$ of $M$ has been intensively studied over the years. A. Manning proved that in the case of compact manifolds with non-positive sectional curvature the topological entropy of the geodesic flow equals the volume entropy of the universal covering (cp. \cite{Man79}), and hence the critical exponent of the group $\Gamma$, whose definition will be recalled later. This result has been generalized to compact quotients of geodesically complete, CAT$(0)$-spaces by R. Ricks \cite{Ric19}. Its proof works in the case of Busemann convex metric spaces, but it is actually enough to consider metric spaces satisfying a weaker notion of non-positive curvature: that is spaces supporting a convex geodesic bicombing that is geodesically complete. We recall that a geodesic bicombing is a map $\sigma\colon X\times X \times [0,1] \to X$ such that for all $x,y\in X$ the map $\sigma_{xy} := \sigma(x,y,\cdot)$ is a geodesic between $x$ and $y$ parametrized proportionally to arc-length. Every geodesic $\sigma_{xy}$ is called a $\sigma$-geodesic. The bicombing $\sigma$ is convex if the map $t\mapsto d(\sigma_{xy}(t),\sigma_{x'y'}(t))$ is convex on $[0,1]$ for every $x,y,x',y'\in X$, while it is geodesically complete if every $\sigma$-geodesic can be extended to a longer $\sigma$-geodesic. Examples of these spaces are geodesically complete CAT$(0)$ and Busemann convex metric spaces, but there are also examples that are not uniquely geodesic, like all Banach spaces. \vspace{2mm} \noindent However we are more interested in the much more complicated case of non-cocompact actions. For Riemannian manifolds the following is true: \begin{theorem-no-number}[Otal-Peigné, \cite{OP04}] \label{intro-thm-OP} Let $M = \Gamma \backslash\tilde{M}$ be a Riemannian manifold with pinched, negative sectional curvature, i.e. $-b^2 \leq \textup{Sec}_g \leq -a^2 <0$. Then the topological entropy of the geodesic flow on $M$ equals the critical exponent of $\Gamma$. \end{theorem-no-number} The price to pay in order to consider any possible group acting discretely and freely on $\tilde{M}$ is the condition on the sectional curvature. While the lower bound is quite natural, since every compact manifold has such a bound, the negative upper bound marks a difference between the cocompact case and the general one. The proof of Otal-Peigné's Theorem uses local estimates that are true only in the strictly negative curvature setting. \vspace{2mm} \noindent The purpose of this paper is to extend Otal-Peigné's Theorem to a wider class of metric spaces. As in \cite{Cav21} we will consider Gromov-hyperbolic, packed, \textup{GCB}-spaces $(X,\sigma)$. The packing condition, that is a uniform upper bound on the cardinality of $2r_0$-nets inside any ball of radius $3r_0$ for some $r_0>0$, can be considered as a weak lower bound on the curvature (cp. \cite{CavS20}, \cite{CavS20bis}, \cite{Cav21}), while Gromov-hyperbolicity is a condition of negative curvature at large scales. Finally the GCB-condition, that is the existence of a geodesically complete, convex geodesic bicombing, gives some control on the local geometry, but it is only a very weak notion of non-positive curvature.
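As a minimal concrete instance of these notions (a standard observation, recorded here only for orientation): on any normed vector space the linear bicombing
$$ \sigma_{xy}(t) = (1-t)x + ty, \qquad t \in [0,1], $$
is convex, since $t \mapsto d(\sigma_{xy}(t),\sigma_{x'y'}(t)) = \Vert (1-t)(x-x') + t(y-y')\Vert$ is a convex function of $t$, and it is geodesically complete, every $\sigma$-geodesic extending to a full affine line. This is the sense in which all Banach spaces are \textup{GCB}-spaces, although in general they are neither uniquely geodesic nor CAT$(0)$.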
Before stating our generalization of Otal-Peigné's Theorem we need to introduce some terminology, due to the differences between the manifold case and ours.\\ If a group $\Gamma$ acts discretely and freely by $\sigma$-isometries on $X$ (this means that $g\sigma_{xy} = \sigma_{g(x)g(y)}$ for every $g\in \Gamma$ and every $x,y\in X$) then it acts properly discontinuously by homeomorphisms on the space of $\sigma$-geodesic lines Geod$_\sigma(X)$, that is the space of geodesic lines whose finite subsegments are $\sigma$-geodesics. The quotient by this action is the space Loc-Geod$_\sigma(\Gamma \backslash X)$, called the space of $\sigma$-local geodesic lines of $\Gamma \backslash X$. The space Loc-Geod$_\sigma(\Gamma \backslash X)$ has a natural reparametrization flow $\lbrace\Phi_t\rbrace_{t\in \mathbb{R}}$ defined by $\Phi_t \gamma = \gamma(\cdot + t)$. We will be interested in the dynamical system $(\textup{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$. We remark that in the case of geodesically complete Busemann convex or CAT$(0)$-spaces every $\sigma$-geodesic line is a geodesic line and $\textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ is exactly the space of all local geodesics of the quotient space $\Gamma \backslash X$, as in the case of manifolds. \vspace{2mm} \noindent The topological entropy of the dynamical system $(\textup{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$ is by definition: $$h_\text{top} = \sup_{\mu \in \mathcal{E}_1}h_\mu,$$ where $\mathcal{E}_1$ is the set of ergodic invariant measures of the dynamical system and $h_\mu$ is the Kolmogorov-Sinai entropy of $\mu$. Other versions of entropy, inspired by the one introduced by Brin-Katok (\cite{BK83}), are the lower and upper local entropies, namely: $$\underline{h}^\text{loc} = \sup_{\mu \in \mathcal{E}_1}\inf_\text{\texthtd} \underline{h}^\text{loc}_{\mu,\text{\texthtd}}, \qquad \overline{h}^\text{loc} = \sup_{\mu \in \mathcal{E}_1}\inf_\text{\texthtd} \overline{h}^\text{loc}_{\mu,\text{\texthtd}},$$ where the infimum is among all distances on $\textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ inducing its topology, $$\underline{h}^\text{loc}_{\mu,\text{\texthtd}} = \underset{\mu}{\text{ess}\inf}\lim_{r\to 0}\liminf_{n \to +\infty} -\frac{1}{n}\log\mu(B_{\text{\texthtd}^n}(\gamma,r))$$ and $$\overline{h}^\text{loc}_{\mu,\text{\texthtd}} = \underset{\mu}{\text{ess}\sup}\lim_{r\to 0}\limsup_{n \to +\infty} -\frac{1}{n}\log\mu(B_{\text{\texthtd}^n}(\gamma,r)).$$ Here $B_{\text{\texthtd}^n}(\gamma,r)$ is the classical $n$-dynamical ball of center $\gamma$ and radius $r$.\\ The topological entropy $h_\text{top}$ has an interpretation à la Bowen as a consequence of the variational principle (see \cite{HK95}): \begin{equation} \label{intro-eq-top} h_\text{top}= \inf_{\textup{\texthtd}}\sup_{K}\lim_{r\to 0}\limsup_{n \to +\infty}\frac{1}{n}\log\text{Cov}_{\text{\texthtd}^n}(K,r), \end{equation} where the infimum is again among all distances inducing the topology of $\textup{Loc-Geod}_\sigma(\Gamma \backslash X)$, the supremum is among all compact subsets of $\textup{Loc-Geod}_\sigma(\Gamma \backslash X)$, and $\text{Cov}_{\text{\texthtd}^n}(K,r)$ is the covering number at scale $r$ of the set $K$ with respect to the dynamical distance $\text{\texthtd}^n$.\\ If in \eqref{intro-eq-top} we consider the supremum only among compact $\Phi_1$-invariant subsets we obtain another quantity, which we call the invariant-topological entropy and denote by $h_{\text{inv-top}}$.
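For the reader's convenience we recall the conventions behind these formulas; this is the standard Bowen construction, nothing here being specific to our setting. Given a distance $\text{\texthtd}$ on $\textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ and $n \in \mathbb{N}$ one sets
$$\text{\texthtd}^n(\gamma,\gamma') = \max_{0 \leq k \leq n-1}\text{\texthtd}(\Phi_k\gamma,\Phi_k\gamma'), \qquad B_{\text{\texthtd}^n}(\gamma,r) = \lbrace \gamma' \text{ s.t. } \text{\texthtd}^n(\gamma,\gamma') < r\rbrace,$$
and $\text{Cov}_{\text{\texthtd}^n}(K,r)$ is the minimal number of such balls needed to cover $K$.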
For a generic dynamical system it holds: $$h_\text{inv-top} \leq h_\text{top} \leq \underline{h}^\text{loc} \leq \overline{h}^\text{loc}.$$ The main result of the paper is that in our case these inequalities are actually equalities and that the common value equals the critical exponent of $\Gamma$. \begin{theorem} \label{intro-thm-var-principle} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then \begin{equation*} h_{\textup{inv-top}} = h_\textup{top} = \underline{h}^\textup{loc} = \overline{h}^\textup{loc} = h_\Gamma, \end{equation*} where $h_\Gamma$ is the critical exponent of $\Gamma$. \end{theorem} This result clearly generalizes Otal-Peigné's Theorem. The critical exponent of $\Gamma$ is defined as $$h_\Gamma = \limsup_{T \to +\infty} \frac{1}{T} \log \#\Gamma x \cap B(x,T).$$ Using Bishop-Jones' Theorem we will conclude that the limit superior in the definition of $h_\Gamma$ is a true limit, generalizing Roblin's Theorem \cite{Rob02}, which holds for CAT$(-1)$-spaces, to Gromov-hyperbolic metric spaces. Indeed we have: \begin{theorem} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $\Gamma$ be a discrete group of isometries of $X$. Then $$\limsup_{T \to +\infty} \frac{1}{T} \log \#\Gamma x \cap B(x,T) = \liminf_{T \to +\infty} \frac{1}{T} \log \#\Gamma x \cap B(x,T) = h_\Gamma.$$ \end{theorem} \vspace{2mm} \noindent The proof of Theorem \ref{intro-thm-var-principle} is divided into two parts: $h_\text{inv-top} \geq h_\Gamma$ and $\overline{h}^\text{loc} \leq h_\Gamma$.\\ The first inequality is based on the estimate of the topological entropy of the compact $\Phi_1$-invariant subsets consisting of local geodesics never escaping a fixed compact set of $\Gamma \backslash X$. Namely we will fix a basepoint $x_0 \in \Gamma \backslash X$ and for every $\tau \geq 0$ we will consider the set $K_\tau$ of $\sigma$-local geodesic lines completely contained in the compact ball of radius $\tau$ around $x_0$. Since the sets $K_\tau$ are compact and $\Phi_1$-invariant we will use them to estimate from below the invariant-topological entropy. Using special distances on $K_\tau$ we will be able to prove that the topological entropy of the system $(K_\tau, \Phi_1)$ is at least the lower Lipschitz-topological entropy (see \cite{Cav21} for more properties of this invariant) of the set Geod$_\sigma(\Lambda_\tau)$ of $\sigma$-geodesic lines of the universal cover $X$ with both endpoints in the $\tau$-uniform radial set $\Lambda_\tau$. The $\tau$-uniform radial set is the subset of the limit set of $\Gamma$ consisting of the points $z$ whose geodesic rays $[\tilde{x}_0, z]$ stay at distance at most $\tau$ from the orbit $\Gamma \tilde{x}_0$, where $\tilde{x}_0$ is a covering point of $x_0$. Observe that any such geodesic ray defines a local geodesic in the quotient that is entirely contained in the compact ball of radius $\tau$ around $x_0$, showing why the topological entropy of $K_\tau$ is related to dynamical properties of the set $\Lambda_\tau$. The lower Lipschitz-topological entropy of the set Geod$_\sigma(\Lambda_\tau)$ (whose definition will be recalled in Section \ref{sec-lowerbound}) equals the lower Minkowski dimension of $\Lambda_\tau$, $\underline{\text{MD}}(\Lambda_\tau)$, by a result of \cite{Cav21}.
Finally by Bishop-Jones' Theorem we will conclude $$h_\text{inv-top}\geq \sup_{\tau \geq 0} h_\text{top}(K_\tau,\Phi_1) \geq \sup_{\tau \geq 0} \underline{\text{MD}}(\Lambda_\tau) \geq h_\Gamma.$$ \noindent The proof of the inequality $\overline{h}^\text{loc} \leq h_\Gamma$ is inspired by Ledrappier's work \cite{Led13}. We sketch here the main ideas: to every ergodic measure $\mu\in \mathcal{E}_1$ we associate a (non-canonical) measure $\nu$ on $\partial X$ with the following properties: \begin{itemize} \item[-] for every distance \texthtd\, it holds \begin{equation} \label{intro-ineq} \overline{h}^\text{loc}_{\mu,\text{\texthtd}} \leq \underset{\nu}{\textup{ess}\sup}\limsup_{\rho \to 0}\frac{\log\nu(B(z,\rho))}{\log \rho}; \end{equation} \item[-] $\nu$ gives full measure to a special subset of the limit set of $\Gamma$, which we will call the ergodic limit set and denote by $\Lambda_\text{erg}(\Gamma)$. \end{itemize} The intuition behind the definition of the ergodic limit set is the following. For every geodesic $\gamma \in \text{Loc-Geod}_\sigma(\Gamma \backslash X)$ and for every compact $K \subseteq \Gamma \backslash X$ of positive $\mu$-measure we can consider the sequence of returning integer times of $\gamma$ in $K$. By Birkhoff's Ergodic Theorem we deduce that this sequence has a very nice controlled behaviour for $\mu$-a.e. $\gamma$, that is if $\vartheta_i(\gamma)$ is the $i$-th integer time such that $\gamma(\vartheta_i(\gamma)) \in K$ then for $\mu$-a.e. $\gamma$ there exists $\lim_{i \to +\infty}\frac{\vartheta_i(\gamma)}{i} = \frac{1}{\mu(K)} < +\infty$. Correspondingly the set of ergodic limit points is defined in a similar way: we say that a point $z\in \partial X$ belongs to $\Lambda_\text{erg}(\Gamma)$ if one (hence every) geodesic ray $[x,z]$, where $x$ is a fixed basepoint of $X$, satisfies this condition: there are points $y_i \in [x,z]$, $i\in \mathbb{N}$, and a constant $\tau \geq 0$, such that \begin{itemize} \item[(a)] $d(y_i, \Gamma x) \leq \tau$ for every $i$, \item[(b)] $d(x,y_i) \to +\infty$, \item[(c)] $\exists \lim \frac{d(x,y_i)}{i} < +\infty$. \end{itemize} Observe that conditions (a) and (b) alone define the well-known radial limit points, while (c) captures the idea of well-behaved returning times.\\ Now, the right hand side of \eqref{intro-ineq} is the classical definition of the upper packing dimension of the measure $\nu$ (see Section \ref{sec-HP-dimensions}) and it is always bounded from above by the packing dimension of any set of full measure. Therefore, indicating by PD$(\Lambda_\text{erg})$ the packing dimension of the ergodic limit set, we conclude: $$\overline{h}^\text{loc} \leq \text{PD}(\Lambda_\text{erg}).$$ Finally the proof of this inequality rests on the following refinement of the easy part of Bishop-Jones' Theorem: \begin{theorem} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $\Gamma$ be a discrete group of isometries of $X$. Then: \begin{equation*} \textup{PD}(\Lambda_{\textup{erg}}) = h_\Gamma. \end{equation*} \end{theorem} The packing dimension was introduced by Tricot (\cite{Tri82}) with a dual construction with respect to the classical Hausdorff dimension (denoted HD).
For a subset $B$ of a general metric space $Z$ it holds $$\text{HD}(B) \leq \text{PD}(B) \leq \underline{\text{MD}}(B) \leq \overline{\text{MD}}(B),$$ where the last two quantities are respectively the lower and upper Minkowski dimension of $B$, and the inequalities can well be strict.\\ An interesting question is the following: is there an example of a proper $\delta$-hyperbolic space with a discrete group of isometries $\Gamma$ such that PD$(\Lambda_\text{rad}(\Gamma)) > h_\Gamma$? Here $\Lambda_\text{rad}(\Gamma)$ denotes the radial limit set of $\Gamma$. The author thinks the answer should be affirmative. \section{Gromov-hyperbolic spaces} Let $(X,d)$ be a metric space. The open (resp. closed) ball of radius $r$ and center $x \in X$ is denoted by $B(x,r)$ (resp. $\overline{B}(x,r)$). If we need to specify the metric we will write $B_d(x,r)$ (resp. $\overline{B}_d(x,r)$). A geodesic segment is an isometry $\gamma\colon I \to X$ where $I=[a,b]$ is a bounded interval of $\mathbb{R}$. The points $\gamma(a), \gamma(b)$ are called the endpoints of $\gamma$. A metric space $X$ is called geodesic if for every couple of points $x,y\in X$ there exists a geodesic segment whose endpoints are $x$ and $y$. When we will not need to consider a specific geodesic between $x$ and $y$ we will denote any geodesic segment between them, with an abuse of notation, by $[x,y]$. A geodesic ray is an isometry $\xi\colon[0,+\infty)\to X$ while a geodesic line is an isometry $\gamma\colon \mathbb{R}\to X$. A map $\gamma \colon I \to X$, where $I$ is an interval of $\mathbb{R}$, is a local geodesic if for every $t\in I$ there exists $\varepsilon > 0$ such that $\gamma\vert_{[t-\varepsilon, t+\varepsilon]}$ is a geodesic segment. \vspace{2mm} \noindent Let $X$ be a geodesic metric space. Given three points $x,y,z \in X$, the {\em Gromov product} of $y$ and $z$ with respect to $x$ is defined as \vspace{-3mm} $$(y,z)_x = \frac{1}{2}\big( d(x,y) + d(x,z) - d(y,z) \big).$$ \noindent The space $X$ is said to be {\em $\delta$-hyperbolic} if for every four points $x,y,z,w \in X$ the following {\em 4-points condition} holds: \begin{equation}\label{hyperbolicity} (x,z)_w \geq \min\lbrace (x,y)_w, (y,z)_w \rbrace - \delta \end{equation} \vspace{-2mm} \noindent or, equivalently, \vspace{-5mm} \begin{equation} \label{four-points-condition} d(x,y) + d(z,w) \leq \max \lbrace d(x,z) + d(y,w), d(x,w) + d(y,z) \rbrace + 2\delta. \end{equation} \noindent The space $X$ is {\em Gromov hyperbolic} if it is $\delta$-hyperbolic for some $\delta \geq 0$. \\ We recall that Gromov-hyperbolicity should be considered as a negative-curvature condition at large scale: for instance any CAT$(\kappa)$ metric space, with $\kappa <0$, is $\delta$-hyperbolic for a constant $\delta$ depending only on $\kappa$. The converse is false, essentially because the CAT$(\kappa)$ condition controls the local geometry much better than Gromov-hyperbolicity, due to the convexity of the distance functions in such spaces (see for instance \cite{LN19}, \cite{CavS20} and \cite{CavS20bis}). \subsection{Gromov boundary} \label{subsubsec-boundary} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $x$ be a point of $X$.
\\ The {\em Gromov boundary} of $X$ is defined as the quotient $$\partial X = \lbrace (z_n)_{n \in \mathbb{N}} \subseteq X \hspace{1mm} | \hspace{1mm} \lim_{n,m \to +\infty} (z_n,z_m)_{x} = + \infty \rbrace \hspace{1mm} /_\sim,$$ where $(z_n)_{n \in \mathbb{N}}$ is a sequence of points in $X$ and $\sim$ is the equivalence relation defined by $(z_n)_{n \in \mathbb{N}} \sim (z_n')_{n \in \mathbb{N}}$ if and only if $\lim_{n,m \to +\infty} (z_n,z_m')_{x} = + \infty$. \linebreak We will write $ z = [(z_n)] \in \partial X$ for short, and we say that $(z_n)$ {\em converges} to $z$. This definition does not depend on the basepoint $x$. There is a natural topology on $X\cup \partial X$ that extends the metric topology of $X$. \noindent Every geodesic ray $\xi$ defines a point $\xi^+=[(\xi(n))_{n \in \mathbb{N}}]$ of the Gromov boundary $ \partial X$: we say that $\xi$ {\em joins} $\xi(0) = y$ {\em to} $\xi^+ = z$. Moreover for every $z\in \partial X$ and every $x\in X$ it is possible to find a geodesic ray $\xi$ such that $\xi(0)=x$ and $\xi^+ = z$. Indeed if $(z_n)$ is a sequence of points converging to $z$ then, by properness of $X$, the sequence of geodesics $[x,z_n]$ converges to a geodesic ray $\xi$ which has the properties above (cp. Lemma III.3.13 of \cite{BH09}). A geodesic ray joining $x$ to $z\in \partial X$ will be denoted by $\xi_{xz}$ or simply $[x,z]$. The relation between the Gromov product and geodesic rays is highlighted in the following lemma. \begin{lemma}[\cite{Cav21}, Lemma 5.4] \label{product-rays} Let $X$ be a proper, $\delta$-hyperbolic metric space, $z,z'\in \partial X$ and $x\in X$. Then \begin{itemize} \item[(i)] if $(z,z')_{x} \geq T$ then $d(\xi_{xz}(T - \delta),\xi_{xz'}(T - \delta)) \leq 4\delta$; \item[(ii)] for all $b> 0$, if $d(\xi_{xz}(T),\xi_{xz'}(T)) < 2b$ then $(z,z')_{x} > T - b$. \end{itemize} \end{lemma} \vspace{2mm} \noindent The {\em quasiconvex hull} of a subset $C$ of $\partial X$ is the union of all the geodesic lines joining two points of $C$ and it is denoted by QC-Hull$(C)$. The following is a standard computation, see \cite{BCGS} for instance. \begin{lemma} \label{parallel-geodesics} Let $X$ be a proper, $\delta$-hyperbolic metric space. Then every two geodesic rays $\xi, \xi'$ with the same endpoints at infinity are at distance at most $8\delta$, i.e. there exist $t_1,t_2\geq 0$ such that $t_1+t_2=d(\xi(0),\xi'(0))$ and $d(\xi(t + t_1),\xi'(t+t_2)) \leq 8\delta$ for all $t\geq 0$. \end{lemma} \subsection{Visual metrics} \label{subsubsec-visual-metrics} When $X$ is a proper, $\delta$-hyperbolic metric space it is known that the boundary $\partial X$ is metrizable. A metric $D_{x,a}$ on $\partial X$ is called a {\em visual metric} of parameter $a\in\left(0,\frac{1}{2\delta\cdot\log_2e}\right)$ and center $x \in X$ if there exists $V> 0$ such that for all $z,z' \in \partial X$ it holds \begin{equation} \label{visual-metric} \frac{1}{V}e^{-a(z,z')_{x}}\leq D_{x,a}(z,z')\leq V e^{-a(z,z')_{x}}. \end{equation} For all $a$ as before and $x\in X$ there always exists a visual metric of parameter $a$ and center $x$, see \cite{Pau96}. As in \cite{Pau96} and \cite{Cav21} we define the {\em generalized visual ball} of center $z \in \partial X$ and radius $\rho \geq 0$ as $$B(z,\rho) = \bigg\lbrace z' \in \partial X \text{ s.t. } (z,z')_{x} > \log \frac{1}{\rho} \bigg\rbrace.$$ It is comparable to the metric balls of the visual metrics on $\partial X$.
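To fix ideas (this is just Lemma \ref{product-rays} read in terms of generalized visual balls): for $\rho = e^{-T}$ we have
$$B(z,e^{-T}) = \lbrace z' \in \partial X \text{ s.t. } (z,z')_{x} > T \rbrace,$$
so by Lemma \ref{product-rays}.(i) every $z' \in B(z,e^{-T})$ satisfies $d(\xi_{xz}(T-\delta),\xi_{xz'}(T-\delta)) \leq 4\delta$, while part (ii) says that closeness of the rays at time $T$ forces $z'$ to lie in a comparable generalized visual ball. Thus, up to universal constants, $B(z,e^{-T})$ consists of the points at infinity whose rays issuing from $x$ remain uniformly close to $\xi_{xz}$ up to time roughly $T$.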
As in \cite{Pau96} and \cite{Cav21} we define the {\em generalized visual ball} of center $z \in \partial X$ and radius $\rho \geq 0$ as $$B(z,\rho) = \bigg\lbrace z' \in \partial X \text{ s.t. } (z,z')_{x} > \log \frac{1}{\rho} \bigg\rbrace.$$ It is comparable to the metric balls of the visual metrics on $\partial X$. \begin{lemma} \label{comparison-balls} Let $D_{x,a}$ be a visual distance of center $x$ and parameter $a$ on $\partial X$. Then for all $z\in \partial X$ and for all $\rho>0$ it holds $$B_{D_{x,a}}\left(z, \frac{1}{V}\rho^a\right) \subseteq B(z,\rho)\subseteq B_{D_{x,a}}(z, V\rho^a ).$$ \end{lemma} It is classical that generalized visual balls are related to shadows, whose definition is the following. Let $x\in X$ be a basepoint. The shadow of radius $r>0$ cast by a point $y\in X$ with center $x$ is the set: $$\text{Shad}_x(y,r) = \lbrace z\in \partial X \text{ s.t. } [x,z]\cap B(y,r) \neq \emptyset \text{ for all rays } [x,z]\rbrace.$$ For our purposes we just need: \begin{lemma} \label{shadow-ball} Let $X$ be a proper, $\delta$-hyperbolic metric space. Let $z\in \partial X$, $x\in X$ and $T\geq 0$. Then for all $r>0$ it holds $$\textup{Shad}_{x}\left(\xi_{xz}\left(T\right), r\right) \subseteq B(z, e^{-T + r}).$$ \end{lemma} \section{Hausdorff and Packing dimensions} \label{sec-HP-dimensions} In the first part we recall briefly the definitions of Hausdorff and packing dimensions of a subset of a metric space. In the second part we will recall four definitions of local dimensions of measures and we will relate them to Hausdorff and packing dimensions of suitable subsets. Finally we will adapt these constructions and results to the case of the boundary at infinity of a $\delta$-hyperbolic metric space. The facts presented here are classical and can be found easily in the literature. \subsection{Definitions of Hausdorff and Packing dimensions} Let $(X,d)$ be a metric space. For $\alpha \geq 0$ the $\alpha$-Hausdorff measure of a Borelian subset $B\subseteq X$ is classically defined as $$\mathcal{H}^\alpha_d(B) = \lim_{\eta \to 0}\inf \left\lbrace \sum_{i\in \mathbb{N}} r_i^\alpha \text{ s.t. } B\subseteq \bigcup_{i\in \mathbb{N}}B(x_i,r_i) \text{ and } r_i\leq \eta\right\rbrace.$$ The argument of the limit is increasing when $\eta$ tends to $0$, so the limit exists. This formula actually defines a measure on $X$. The Hausdorff dimension of a Borelian subset $B$ of $X$, denoted HD$_d(B)$, is the unique real number $\alpha \geq 0$ such that $\mathcal{H}^{\alpha'}_d(B) = 0$ for all $\alpha' > \alpha$ and $\mathcal{H}^{\alpha'}_d(B) = +\infty$ for all $\alpha' < \alpha$. \vspace{1mm} \noindent The packing dimension is defined in a similar way, but using disjoint balls centered in $B$ instead of coverings. To be precise we define, for all $\alpha \geq 0$ and for all Borelian subsets $B$ of $X$, $$\mathcal{P}^\alpha_d(B) = \lim_{\eta \to 0} \sup\left\lbrace \sum_{i\in \mathbb{N}} r_i^\alpha \text{ s.t. } B(x_i,r_i) \text{ are disjoint, }x_i\in B \text{ and } r_i\leq \eta\right\rbrace.$$ This is not a measure on $X$ but only a pre-measure. By a standard procedure one can define the $\alpha$-Packing measure as $$\hat{\mathcal{P}}_d^\alpha(B) = \inf\left\lbrace \sum_{k=1}^\infty \mathcal{P}_d^\alpha(B_k) \text{ s.t. } B\subseteq \bigcup_{k=1}^\infty B_k\right\rbrace.$$ The packing dimension of a Borelian subset $B\subseteq X$, denoted PD$_d(B)$, is the unique real number $\alpha \geq 0$ such that $\hat{\mathcal{P}}^{\alpha'}_d(B) = 0$ for all $\alpha' > \alpha$ and $\hat{\mathcal{P}}^{\alpha'}_d(B) = +\infty$ for all $\alpha' < \alpha$. \\ The packing dimension has another useful interpretation (see \cite{Fal04}, Proposition 3.8): indeed for all Borelian subsets $B\subseteq X$ we have \begin{equation} \label{eq-pack-MD} \text{PD}_d(B) = \inf\left\lbrace \sup_k \overline{\text{MD}}_d(B_k) \text{ s.t. } B\subseteq \bigcup_{k=1}^\infty B_k\right\rbrace. \end{equation} The quantity $\overline{\text{MD}}_d$ denotes the upper Minkowski dimension, namely: \begin{equation} \label{MD-definition} \overline{\text{MD}}_d(B) = \limsup_{r \to 0}\frac{\log\text{Cov}_d(B,r)}{\log\frac{1}{r}}, \end{equation} where $B$ is any subset of $X$ and $\text{Cov}_d(B,r)$ denotes the minimal number of $d$-balls of radius $r$ needed to cover $B$. Taking the limit inferior in place of the limit superior in \eqref{MD-definition} one defines the lower Minkowski dimension of $B$, denoted $\underline{\text{MD}}_d(B)$.
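\vspace{2mm} \noindent A classical example, recorded only for illustration, shows how far apart these dimensions can be: for the compact set $E = \lbrace 0 \rbrace \cup \lbrace \frac{1}{n} \text{ s.t. } n\in \mathbb{N}\rbrace \subseteq \mathbb{R}$ one has $$\text{HD}_d(E) = \text{PD}_d(E) = 0, \qquad \underline{\text{MD}}_d(E) = \overline{\text{MD}}_d(E) = \frac{1}{2}.$$ Indeed $E$ is countable and both the Hausdorff dimension and, by \eqref{eq-pack-MD} applied to a covering by singletons, the packing dimension vanish on countable sets, while a direct computation (see for instance \cite{Fal04}) gives $\text{Cov}_d(E,r) \asymp r^{-1/2}$, since the gaps $\frac{1}{n} - \frac{1}{n+1}$ are of order $n^{-2}$.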
In other words for all $r\leq r_\varepsilon$ and for all $x\in A_\varepsilon'$ it holds $\nu(B(x,r))\leq r^{\alpha - 2\varepsilon}$.\\ Let now $B$ be a Borelian subset of $X$ with $\nu(B)=1$. By \eqref{eq-pack-MD} we know that $$\text{PD}_d(B) = \inf\left\lbrace\sup\overline{\text{MD}}_d(B_k)\text{ s.t. } \bigcup_k B_k \supseteq B\right\rbrace.$$ Let $B_k$ be subsets of $X$ such that $\bigcup_kB_k\supseteq B$. Since $\nu(B)=1$ there must be some index $k$ such that $\nu(B_k\cap A_\varepsilon') > 0$. Observe that from the estimates we proved before we get $$\text{Cov}_d(B_k\cap A_\varepsilon', r) \geq \nu(B_k\cap A_\varepsilon')\cdot\left(\frac{1}{r}\right)^{\alpha - 2\varepsilon}$$ for every $r<r_\varepsilon$. This directly implies $$\overline{\text{MD}}_d(B_k)\geq \overline{\text{MD}}_d(B_k\cap A_\varepsilon') \geq \alpha - 2\varepsilon,$$ and so $\text{PD}_d(B)\geq \alpha - 2\varepsilon$. By the arbitrariness of $\varepsilon$ we conclude that $$\overline{\textup{PD}}_d(\nu)\leq\inf\lbrace \textup{PD}_d(B) \textup{ s.t. } \nu(B)=1\rbrace.$$ For the other inequality, for every $\varepsilon > 0$, we take the set $$B_\varepsilon = \left\lbrace x\in X \text{ s.t. } \limsup_{r \to 0}\frac{\log \nu(B(x,r))}{\log r} \leq \alpha + \varepsilon \right\rbrace.$$ Clearly it satisfies $\nu(B_\varepsilon) = 1$. Moreover it can be covered by the union of the sets $$B_{\varepsilon, k} = \left\lbrace x\in X \text{ s.t. } \frac{\log \nu(B(x,r))}{\log r} \leq \alpha + 2\varepsilon \text{ for all } r\leq \frac{1}{k} \right\rbrace,$$ where $k\in \mathbb{N}$. By similar estimates as before we can conclude that $\overline{\text{MD}}_d(B_{\varepsilon, k}) \leq \alpha + 2\varepsilon$ for every $k$, and so PD$_d(B_\varepsilon) \leq \alpha + 2\varepsilon$. By the arbitrariness of $\varepsilon$ we get the desired equality. \end{proof} \subsection{Visual dimensions} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $x\in X$. We know that $\partial X$ supports several visual metrics $D_{x,a}$, so the Hausdorff dimension, the packing dimension and the Minkowski dimension of subsets of $\partial X$ are well defined with respect to $D_{x,a}$. There is a way to define universal versions of these quantities that depend neither on $x$ nor on $a$. For a Borelian subset $B$ of $\partial X$ and for all $\alpha \geq 0$ we set, following \cite{Pau96}, $$\mathcal{H}^\alpha(B) = \lim_{\eta \to 0}\inf \left\lbrace \sum_{i\in \mathbb{N}} \rho_i^\alpha \text{ s.t. } B\subseteq \bigcup_{i\in \mathbb{N}}B(z_i,\rho_i) \text{ and } \rho_i\leq \eta\right\rbrace,$$ where $B(z_i,\rho_i)$ are generalized visual balls. As in the classical case the {\em visual Hausdorff dimension} of $B$ is defined as the unique $\alpha \geq 0$ such that $\mathcal{H}^{\alpha'}(B) = 0$ for all $\alpha' > \alpha$ and $\mathcal{H}^{\alpha'}(B) = +\infty$ for all $\alpha'<\alpha$. The visual Hausdorff dimension of the Borelian subset $B$ is denoted by HD$(B)$. By Lemma \ref{comparison-balls}, see also \cite{Pau96}, we have HD$(B) = a\cdot\text{HD}_{D_{x,a}}(B)$ for all visual metrics $D_{x,a}$ of center $x$ and parameter $a$. \vspace{2mm} \noindent In the same way we can define the visual $\alpha$-packing pre-measure of a Borelian subset $B$ of $\partial X$ by $$\mathcal{P}^\alpha(B) = \lim_{\eta \to 0} \sup\left\lbrace \sum_{i\in \mathbb{N}} \rho_i^\alpha \text{ s.t. } B(z_i,\rho_i) \text{ are disjoint, }z_i\in B \text{ and } \rho_i\leq \eta\right\rbrace,$$ and also in this case $B(z_i,\rho_i)$ are generalized visual balls.
As usual we can define the visual $\alpha$-packing measure by $$\hat{\mathcal{P}}^\alpha(B) = \inf\left\lbrace \sum_{k=1}^\infty \mathcal{P}^\alpha(B_k) \text{ s.t. } B\subseteq \bigcup_{k=1}^\infty B_k\right\rbrace.$$ Consequently the visual packing dimension of a Borelian set $B$ is defined, and it is denoted by PD$(B)$. Using Lemma \ref{comparison-balls} as in the case of the Hausdorff measure (see \cite{Pau96}) it is easy to check that for every visual metric $D_{x,a}$ of center $x$ and parameter $a$ it holds: $$\frac{1}{V^a} \hat{\mathcal{P}}_{D_{x,a}}^{\frac{\alpha}{a}}(B) \leq \hat{\mathcal{P}}^\alpha(B) \leq V^a \hat{\mathcal{P}}_{D_{x,a}}^{\frac{\alpha}{a}}(B)$$ for all $\alpha \geq 0$ and all Borelian sets $B\subseteq \partial X$. Therefore for every Borelian set $B$ it holds PD$(B)=a\cdot \text{PD}_{D_{x,a}}(B)$. \vspace{2mm} \noindent Using generalized visual balls instead of metric balls with respect to a visual metric one can define also the visual upper and lower Minkowski dimension of a subset $B\subseteq \partial X$, respectively: $$\overline{\text{MD}}(B) = \limsup_{\rho \to 0}\frac{\log \text{Cov}(B,\rho)}{\log \frac{1}{\rho}}, \qquad \underline{\text{MD}}(B) = \liminf_{\rho \to 0}\frac{\log \text{Cov}(B,\rho)}{\log \frac{1}{\rho}},$$ where $\text{Cov}(B,\rho)$ denotes the minimal number of generalized visual balls of radius $\rho$ needed to cover $B$. Using again Lemma \ref{comparison-balls} (see also \cite{Cav21}) one has $\overline{\text{MD}}(B) = a\cdot \overline{\text{MD}}_{D_{x,a}}(B)$ for every Borelian set $B$ and every visual metric of center $x$ and parameter $a$. The same of course holds for the lower Minkowski dimension. \vspace{2mm} \noindent Moreover it is easy to check that for every Borelian set $B$ of $\partial X$ the numbers HD$(B)$, PD$(B)$, $\underline{\text{MD}}(B)$, $\overline{\text{MD}}(B)$ do not depend on $x$ either, see Proposition 6.4 of \cite{Pau96}. Clearly, using their comparisons with the classical dimensions defined by a visual metric, we get \begin{equation} \text{HD}(B) \leq \text{PD}(B) \leq \underline{\text{MD}}(B) \leq \overline{\text{MD}}(B) \end{equation} and \begin{equation} \label{PD-supMD} \text{PD}(B) = \inf\left\lbrace \sup_k \overline{\text{MD}}(B_k) \text{ s.t. } B\subseteq \bigcup_{k=1}^\infty B_k\right\rbrace \end{equation} for all Borelian subsets $B$ of $\partial X$. \vspace{2mm} \noindent Let $\mathscr{B}$ be the $\sigma$-algebra generated by the Borelian subsets of $\partial X$ and let $\nu$ be a probability measure on the measure space $(\partial X,\mathscr{B})$. We define the \emph{visual lower and upper Hausdorff dimensions} of $\nu$ as: $$\underline{\text{HD}}(\nu)=\underset{\nu}{\textup{ess}\inf}\liminf_{\rho \to 0}\frac{\log\nu(B(z,\rho))}{\log \rho}$$ $$\overline{\text{HD}}(\nu)=\underset{\nu}{\textup{ess}\sup}\liminf_{\rho \to 0}\frac{\log\nu(B(z,\rho))}{\log \rho}$$ while the \emph{visual lower and upper packing dimension} of $\nu$ are respectively: $$\underline{\text{PD}}(\nu)=\underset{\nu}{\textup{ess}\inf}\limsup_{\rho \to 0}\frac{\log\nu(B(z,\rho))}{\log \rho},$$ $$\overline{\text{PD}}(\nu)=\underset{\nu}{\textup{ess}\sup}\limsup_{\rho \to 0}\frac{\log\nu(B(z,\rho))}{\log \rho}.$$ Here the difference is that $B(z,\rho)$ denotes the generalized visual ball instead of a metric ball with respect to a visual distance $D_{x,a}$.
Using again Lemma \ref{comparison-balls} it is straightforward to prove that $$\underline{\text{HD}}(\nu) = a\cdot \underline{\text{HD}}_{D_{x,a}}(\nu)$$ for every visual metric $D_{x,a}$, and similarly for the other dimensions. Therefore \begin{equation} \label{eq-packing-dimensions} \begin{aligned} \overline{\text{PD}}(\nu)&=\inf\lbrace \text{PD}(B) \text{ s.t. } \nu(B)=1\rbrace \\ \underline{\text{PD}}(\nu)&=\inf\lbrace \text{PD}(B) \text{ s.t. } \nu(B)>0\rbrace \\ \overline{\text{HD}}(\nu)&=\inf\lbrace \text{HD}(B) \text{ s.t. } \nu(B)=1\rbrace \\ \underline{\text{HD}}(\nu)&=\inf\lbrace \text{HD}(B) \text{ s.t. } \nu(B)>0\rbrace. \\ \end{aligned} \end{equation} \section{Bishop-Jones Theorem} If $X$ is a proper metric space we denote by Isom$(X)$ its group of isometries, endowed with the topology of uniform convergence on compact subsets of $X$. A subgroup $\Gamma$ of Isom$(X)$ is called {\em discrete} if the following equivalent conditions (see \cite{BCGS}) hold: \begin{itemize} \item[(a)] $\Gamma$ is discrete as a subspace of Isom$(X)$; \vspace{-2mm} \item[(b)] $\forall x\in X$ and $R\geq 0$ the set $\Sigma_R(x) = \lbrace g \in \Gamma \text{ s.t. } g x\in \overline{B}(x,R)\rbrace$ is finite. \end{itemize} \noindent The critical exponent of a discrete group of isometries $\Gamma$ acting on a proper metric space $X$ can be defined using the Poincaré series, or alternatively (\cite{Cav21}, \cite{Coo93}), as $$\overline{h_\Gamma}(X) = \limsup_{T \to +\infty}\frac{1}{T}\log \# \Gamma x \cap B(x,T),$$ where $x$ is a fixed point of $X$. Clearly this quantity does not depend on the choice of $x$. In the following we will often write $\overline{h_\Gamma}(X)=:h_\Gamma$. Taking the limit inferior instead of the limit superior we define the lower critical exponent, denoted by $\underline{h_\Gamma}(X)$. In \cite{Rob02} it is proved that if $\Gamma$ is a discrete group of isometries of a CAT$(-1)$ space then $\overline{h_\Gamma}(X) = \underline{h_\Gamma}(X)$. We will generalize this result to proper, $\delta$-hyperbolic spaces (see Proposition \ref{roblin}).
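\vspace{2mm} \noindent Two classical examples, recorded only for illustration: if $\Gamma$ is a cocompact lattice of the real hyperbolic space $\mathbb{H}^n$ then a standard comparison between the orbital counting function $\#\Gamma x \cap B(x,T)$ and the volume of balls, which grows like $e^{(n-1)T}$, gives $h_\Gamma = n-1$; if $\Gamma$ is a free group of rank $k\geq 2$ acting on its Cayley graph, a regular tree of valency $2k$ with edges of length $1$, then $\#\Gamma x \cap B(x,T) \asymp (2k-1)^T$ and therefore $h_\Gamma = \log(2k-1)$.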
\subsection{Limit sets} We specialize the general situation above to the case of a proper, $\delta$-hyperbolic metric space $X$. Every isometry of $X$ acts naturally on $\partial X$ and the resulting map on $X\cup \partial X$ is a homeomorphism. The {\em limit set} $\Lambda(\Gamma)$ of a discrete group of isometries $\Gamma$ is the set of accumulation points of the orbit $\Gamma x$ on $\partial X$, where $x$ is any point of $X$; it is the smallest $\Gamma$-invariant closed set of the Gromov boundary (cp. \cite{Coo93}, Theorem 5.1) and it does not depend on $x$.\\ There are several interesting subsets of the limit set: the radial limit set, the uniformly radial limit set, etc. They are related to important sets of the geodesic flow on the quotient space $\Gamma \backslash X$, as we will see in the second part of the paper. In order to recall their definition we need to introduce a class of subsets of $\partial X$.\\ We fix a basepoint $x\in X$. Let $\tau$ and $\Theta = \lbrace \vartheta_i \rbrace_{i\in \mathbb{N}}$ be, respectively, a positive real number and an increasing sequence of real numbers with $\lim_{i \to +\infty}\vartheta_i = +\infty$. We define $\Lambda_{\tau, \Theta}(\Gamma)$ as the set of points $z\in \partial X$ such that there exists a geodesic ray $[x,z]$ satisfying the following: for every $i\in \mathbb{N}$ there exists a point $y_i \in [x,z]$ with $d(x,y_i) \in [\vartheta_i, \vartheta_{i+1}]$ such that $d(y_i,\Gamma x) \leq \tau$. We observe that, up to replacing $\tau$ with $\tau + 8\delta$, the definition above does not depend on the choice of the geodesic ray $[x,z]$. \begin{lemma} In the situation above it holds: \begin{itemize} \item[(i)] $\Lambda_{\tau, \Theta}(\Gamma) \subseteq \Lambda(\Gamma)$; \item[(ii)] the set $\Lambda_{\tau, \Theta}(\Gamma)$ is closed. \end{itemize} \end{lemma} \begin{proof} The first statement is obvious, so we focus on (ii). Let $z^k \in \Lambda_{\tau, \Theta}(\Gamma)$ be a sequence converging to $z^\infty$. For every $k$ let $\xi^k = [x,z^k]$ be a geodesic ray as in the definition of $\Lambda_{\tau, \Theta}(\Gamma)$. We know that, up to a subsequence, the sequence $\xi^k$ converges uniformly on compact sets of $[0,+\infty)$ to a geodesic ray $\xi^\infty = [x,z^\infty]$. We fix $i\in \mathbb{N}$ and we take points $y_i^k \in \xi^k$ with $d(x,y_i^k)\in [\vartheta_{i}, \vartheta_{i+1}]$ and $d(y_i^k,\Gamma x)\leq \tau$. Up to a further subsequence, the sequence $y_i^k$ converges to a point $y_i^\infty \in \xi^\infty$ with $d(x,y_i^\infty) \in [\vartheta_{i}, \vartheta_{i+1}]$. Moreover clearly $d(y_i^\infty, \Gamma x) \leq \tau$. Since this is true for every $i\in \mathbb{N}$ we conclude that $z^\infty \in \Lambda_{\tau, \Theta}(\Gamma)$. \end{proof} \noindent We can now introduce some interesting subsets of the limit set of $\Gamma$. Let $\Theta_\text{rad}$ be the set of increasing, unbounded sequences of real numbers. The \emph{radial limit set} is classically defined as $$\Lambda_\text{rad}(\Gamma) = \bigcup_{\tau \geq 0}\bigcup_{\Theta \in \Theta_\text{rad}}\Lambda_{\tau, \Theta}(\Gamma).$$ The \emph{uniform radial limit set} is defined (see \cite{DSU17}) as $$\Lambda_\text{u-rad}(\Gamma) = \bigcup_{\tau \geq 0}\Lambda_{\tau}(\Gamma),$$ where $\Lambda_\tau(\Gamma)=\Lambda_{\tau, \lbrace i\tau\rbrace}(\Gamma)$.\\ Another interesting set that will play an important role in the paper is the \emph{ergodic limit set}, defined as: $$\Lambda_\text{erg}(\Gamma) = \bigcup_{\tau \geq 0}\bigcup_{\Theta \in \Theta_\text{erg}}\Lambda_{\tau, \Theta}(\Gamma),$$ where a sequence $\Theta = \lbrace\vartheta_i\rbrace$ belongs to $\Theta_\text{erg}$ if the limit $\lim_{i\to +\infty} \frac{\vartheta_{i}}{i}$ exists and is finite. The name of this set will be justified in the second part of the paper: for the moment we just say it is related to properties of ergodic measures of the geodesic flow in the quotient space.\\ When $\Gamma$ is clear in the context we will simply write $\Lambda_{\tau, \Theta}, \Lambda_{\text{rad}}, \Lambda_{\text{u-rad}}, \Lambda_{\textup{erg}}, \Lambda$, omitting $\Gamma$. \begin{lemma} In the situation above the sets $\Lambda_{\textup{rad}}, \Lambda_{\textup{u-rad}}$ and $\Lambda_{\textup{erg}}$ are $\Gamma$-invariant and do not depend on $x$. \end{lemma} \begin{proof} Let $y$ be another point of $X$ and let $z\in \partial X$. By Lemma \ref{parallel-geodesics} for all geodesic rays $\xi = [y,z]$, $\xi' = [x,z]$ there are $t_1,t_2\geq 0$ such that $t_1+t_2\leq d(x,y)$ and $d(\xi(t+t_1), \xi'(t+t_2))\leq 8\delta$. This means that $d(\xi(t), \xi'(t)) \leq d(x,y) + 8\delta$ for every $t\geq 0$. It is then straightforward to see that if $z\in \Lambda_{\tau, \Theta}$ (as defined with respect to $x$) then it belongs to $\Lambda_{\tau + d(x,y) + 8\delta, \Theta}$ as defined with respect to $y$. Moreover, applying $g\in \Gamma$ to a geodesic ray as in the definition, we see that $g$ maps $\Lambda_{\tau,\Theta}$ defined with respect to $x$ onto $\Lambda_{\tau,\Theta}$ defined with respect to $gx$, so the $\Gamma$-invariance follows from the independence of the basepoint. \end{proof}
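\vspace{2mm} \noindent Directly from the definitions one has the chain of inclusions $$\Lambda_{\textup{u-rad}} \subseteq \Lambda_{\textup{erg}} \subseteq \Lambda_{\textup{rad}} \subseteq \Lambda:$$ indeed the sequence $\lbrace i\tau\rbrace$ satisfies $\lim_{i\to +\infty}\frac{i\tau}{i} = \tau < +\infty$, so it belongs to $\Theta_\textup{erg}$, and clearly $\Theta_\textup{erg} \subseteq \Theta_\textup{rad}$. As a classical illustration, not needed in the sequel: if $\Gamma$ acts cocompactly on $X$ then every point of $\partial X$ is uniformly radial, with $\tau$ equal to the codiameter of the action, so all these sets coincide with $\partial X$; on the other hand, for the fundamental group of a finite-volume, non-compact hyperbolic manifold the parabolic fixed points are limit points that are not radial.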
\subsection{Bishop-Jones' Theorem} The celebrated Bishop-Jones' Theorem, in the general version of \cite{DSU17}, states the following: \begin{theo}[\cite{BJ97}, \cite{DSU17}] \label{Bishop-Jones} Let $\Gamma$ be a discrete group of isometries of a proper, $\delta$-hyperbolic metric space $X$. Then: $$h_\Gamma = \textup{HD}(\Lambda_{\textup{rad}}) = \textup{HD}(\Lambda_{\textup{u-rad}}) = \sup_{\tau \geq 0} \textup{HD}(\Lambda_{\tau}).$$ \end{theo} Our contribution to Bishop-Jones' Theorem is the following: \begin{theo} \label{BJ-PD} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $\Gamma$ be a discrete group of isometries of $X$. Then: \begin{equation*} \textup{PD}(\Lambda_{\textup{erg}}) = h_\Gamma. \end{equation*} \end{theo} In order to introduce the techniques we will use in the proof of this theorem we start with another, easier result that generalizes Roblin's Theorem, \cite{Rob02}. We remark that the proof is completely different. \begin{prop} \label{roblin} Let $X$ be a proper, $\delta$-hyperbolic metric space and let $\Gamma$ be a discrete group of isometries. Then the critical exponent is a true limit, i.e. $$h_\Gamma = \lim_{T\to +\infty}\frac{1}{T}\log\#\Gamma x \cap B(x,T).$$ \end{prop} \begin{proof} By Bishop-Jones' Theorem we have $$h_\Gamma = \sup_{\tau \geq 0} \text{HD}(\Lambda_\tau) \leq \sup_{\tau \geq 0} \underline{\text{MD}}(\Lambda_\tau).$$ So it is enough to show that $$\sup_{\tau \geq 0} \underline{\text{MD}}(\Lambda_\tau) \leq \underline{h_\Gamma}(X),$$ since together these inequalities give $\overline{h_\Gamma}(X) \leq \underline{h_\Gamma}(X)$, i.e. the limit exists. We fix $\tau \geq 0$. For every $\varepsilon > 0$ we take a subsequence $T_j \to +\infty$ such that for all $j$ it holds $$\frac{1}{T_j}\log \#\Gamma x \cap \overline{B}(x,T_j) \leq \underline{h_\Gamma}(X) + \varepsilon.$$ We define $\rho_j = e^{-T_j}$: notice that $\rho_j \to 0$. Let $k_j\in \mathbb{N}$ such that $(k_j-1)\tau < T_j \leq k_j\tau$. For each $g\in \Gamma$ such that $(k_j-1)\tau \leq d(x,gx)\leq (k_j+1)\tau$ we consider the shadow Shad$_x(gx,2\tau)$. First of all if $z\in \Lambda_\tau$ then we know there exists $g\in \Gamma$ as before such that $d([x,z], gx) \leq \tau$, so $z\in \text{Shad}_x(gx,2\tau)$. In other words these shadows cover $\Lambda_\tau$, and their cardinality is at most $e^{(\underline{h_\Gamma}(X) + \varepsilon)(k_j+1)\tau}$. So, by Lemma \ref{shadow-ball}, $\Lambda_\tau$ is covered by at most $e^{(\underline{h_\Gamma}(X) + \varepsilon)(k_j+1)\tau}$ generalized visual balls of radius $e^{-T_j + 3\tau} = e^{3\tau}\rho_j$. Therefore \begin{equation*} \begin{aligned} \underline{\text{MD}}(\Lambda_{\tau})&\leq \liminf_{j \to +\infty}\frac{\log\text{Cov}(\Lambda_{\tau}, e^{3\tau}\rho_j)}{\log \frac{1}{e^{3\tau}\rho_j}} \\ &\leq \liminf_{j \to +\infty}\frac{(\underline{h_\Gamma}(X) + \varepsilon)(k_j+1)\tau}{-3\tau + (k_j-1)\tau} = \underline{h_\Gamma}(X)+\varepsilon. \end{aligned} \end{equation*} By the arbitrariness of $\varepsilon$ we can conclude the proof. \end{proof} \noindent There are several remarks we can make about this proof: \begin{itemize} \item[(a)] The proof is still valid for every sequence $T_j \to + \infty$, so it implies also that $\sup_{\tau \geq 0} \overline{\textup{MD}}(\Lambda_\tau) \leq h_\Gamma$. Therefore we have another improvement of Bishop-Jones Theorem, namely: \begin{equation} \label{eq-MD-critical} \sup_{\tau \geq 0} {\textup{HD}}(\Lambda_\tau) = \sup_{\tau \geq 0} \underline{\textup{MD}}(\Lambda_\tau) = \sup_{\tau \geq 0} \overline{\textup{MD}}(\Lambda_\tau)=h_\Gamma.
\end{equation} \item[(b)] $\Lambda_{\textup{u-rad}} = \bigcup_{\tau \in \mathbb{N}}\Lambda_\tau$, so by (a) and \eqref{PD-supMD} we deduce that $\textup{PD}(\Lambda_{\textup{u-rad}})=h_\Gamma$. \item[(c)] We can get the same upper estimate of the Minkowski dimensions by weakening the assumptions on the sets $\Lambda_\tau$. Indeed take a set $\Lambda_{\tau, \Theta}$ such that $\limsup_{i\to +\infty} \frac{\vartheta_{i+1}}{\vartheta_i} = 1$. Then we can cover this set by the $2\tau$-shadows cast by points of the orbit $\Gamma x$ whose distance from $x$ is between $\vartheta_{i_j}$ and $\vartheta_{i_j + 1}$, with $i_j \to +\infty$ when $j \to + \infty$. Therefore arguing as before we obtain $$\underline{\textup{MD}}(\Lambda_{\tau, \Theta}) \leq \liminf_{j \to +\infty}\frac{(\underline{h_\Gamma}(X) + \varepsilon)\vartheta_{i_j + 1}}{\vartheta_{i_j - 1}} \leq \underline{h_\Gamma}(X)+\varepsilon,$$ where the last step follows by the asymptotic behaviour of the sequence $\Theta$. A similar estimate holds for the upper Minkowski dimension. \item[(d)] One could be tempted to conclude that the packing dimension of the set $\bigcup_{\tau \geq 0}\bigcup_{\Theta} \Lambda_{\tau, \Theta}$, where $\Theta$ is a sequence such that $\limsup_{i\to +\infty} \frac{\vartheta_{i+1}}{\vartheta_i} = 1$, is $\leq h_\Gamma$. But this is not necessarily true since \eqref{PD-supMD} requires a countable covering and not an arbitrary one. That is why the estimate of the packing dimension of the ergodic limit set $\Lambda_\textup{erg}$ in Theorem \ref{BJ-PD} is not so easy. However, as we will see in a moment, the ideas behind the proof are similar to the ones used in the proposition above. \end{itemize} \begin{proof}[Proof of Theorem \ref{BJ-PD}] We notice it is enough to prove that PD$(\Lambda_\text{erg}) \leq h_\Gamma$: indeed $\Lambda_{\textup{u-rad}} \subseteq \Lambda_{\textup{erg}}$, so the other inequality follows from remark (b) above. The strategy is the following: for every $\varepsilon > 0$ we want to find a countable family of sets $\lbrace B_k\rbrace_{k\in\mathbb{N}}$ of $\partial X$ such that $\Lambda_{\text{erg}} \subseteq \bigcup_{k=1}^\infty B_k$ and $\sup_{k\in \mathbb{N}}\overline{\text{MD}}(B_k)\leq (h_\Gamma + \varepsilon)(1+\varepsilon)$. Indeed if this is true then by \eqref{PD-supMD}: $$\text{PD}(\Lambda_\text{erg}) \leq \sup_{k\in \mathbb{N}}\overline{\text{MD}}(B_k)\leq (h_\Gamma + \varepsilon)(1+\varepsilon),$$ and by the arbitrariness of $\varepsilon$ the thesis follows.\\ So we fix $\varepsilon > 0$ and we proceed to define the countable family. For $m,n\in \mathbb{N}$ and $l\in \mathbb{Q}_{> 0}$ we define $$B_{m,l,n} = \bigcup_{\Theta} \Lambda_{m,\Theta},$$ where $\Theta$ is taken among all sequences such that for all $i\geq n$ it holds $$l-\eta_l \leq \frac{\vartheta_{i}}{i} \leq l + \eta_l,$$ where $\eta_l = \frac{\varepsilon}{2+\varepsilon}\cdot l$.\\ First of all if $z\in \Lambda_\text{erg}$ we know that $z\in \Lambda_{m,\Theta}$ for some $m \in \mathbb{N}$ and $\Theta$ satisfying $\lim_{i\to +\infty}\frac{\vartheta_{i}}{i} = L <+\infty$; in particular there exists $n\in \mathbb{N}$ such that for all $i\geq n$ it holds $L - \beta \leq \frac{\vartheta_{i}}{i} \leq L + \beta$, where $\beta = \frac{2+\varepsilon}{4+3\varepsilon}\cdot\eta_L$. Now we take $l \in \mathbb{Q}_{>0}$ such that $\vert L - l \vert < \beta$. Then it is easy to see that $[L-\beta,L + \beta]\subseteq [l-2\beta, l+ 2\beta]$ and $\eta_l \geq \eta_L - \frac{\varepsilon}{2+\varepsilon}\beta \geq 2\beta$.
So by definition $z\in B_{m,l,n}$, therefore $\Lambda_\text{erg}\subseteq \bigcup_{m,l,n} B_{m,l,n}.$ \\ Now we need to estimate the upper Minkowski dimension of each set $B_{m,l,n}$. We take $T_0$ big enough such that for every $T\geq T_0$ it holds $$\frac{1}{T}\log \#\Gamma x \cap \overline{B}(x,T) \leq h_\Gamma + \varepsilon.$$ Let us fix any $\rho \leq e^{-\max\lbrace T_0, n(l-\eta_l) \rbrace}$. We consider $j\in \mathbb{N}$ with the following property: $(j-1)(l-\eta_l) < \log \frac{1}{\rho}\leq j(l-\eta_l)$. We observe that the condition on $\rho$ gives $\log \frac{1}{\rho} \geq n(l-\eta_l)$, implying $j\geq n$.\\ We consider the set of elements $g\in \Gamma$ such that \begin{equation} \label{shadow} j(l-\eta_l) - m \leq d(x,gx) \leq (j+1)(l+\eta_l) + m. \end{equation} For any such $g$ we consider the shadow $\text{Shad}_{x}(gx,2m)$. We claim that this set of shadows covers $B_{m,l,n}$. Indeed every point $z$ of $B_{m,l,n}$ belongs to some $\Lambda_{m,\Theta}$ with $l-\eta_l \leq \frac{\vartheta_{i}}{i} \leq l + \eta_l$ for all $i\geq n$. In particular this holds for $i=j$, and so $j(l-\eta_l) \leq \vartheta_j \leq j(l+\eta_l)$, so there exists a point $y$ along a geodesic ray $[x,z]$ satisfying: $$j(l-\eta_l)\leq \vartheta_j \leq d(x,y)\leq \vartheta_{j+1}\leq (j+1)(l+\eta_l), \qquad d(y,\Gamma x) \leq m.$$ So there is $g\in \Gamma$ satisfying \eqref{shadow} such that $z\in \text{Shad}_{x}(gx,2m)$. Moreover these shadows are cast by points at distance at least $j(l-\eta_l) - m$ from $x$, so at distance at least $\log\frac{1}{e^m\rho}$ from $x$. We need to estimate the number of such $g$'s. By the assumption on $\rho$ we get that this number is less than or equal to $e^{(h_\Gamma+\varepsilon)[(j+1)(l+\eta_l) + m]}.$ Hence, using again Lemma \ref{shadow-ball}, we conclude that $B_{m,l,n}$ is covered by at most $e^{(h_\Gamma+\varepsilon)[(j+1)(l+\eta_l) + m]}$ generalized visual balls of radius $e^{3m}\rho$. Thus \begin{equation*} \begin{aligned} \overline{\text{MD}}(B_{m,l,n})&=\limsup_{\rho \to 0}\frac{\log\text{Cov}(B_{m,l,n}, e^{3m}\rho)}{\log \frac{1}{e^{3m}\rho}}\\ &\leq \limsup_{j\to +\infty}\frac{(h_\Gamma+\varepsilon)[(j+1)(l+\eta_l) + m]}{-3m + (j-1)(l-\eta_l)} \\ &\leq (h_\Gamma+\varepsilon)(1+\varepsilon), \end{aligned} \end{equation*} where the last inequality follows from the choice of $\eta_l$. \end{proof} \section{Dynamics on packed GCB spaces} \label{sec-dynamics} In the first part of this section we study quotients of proper metric spaces in general; in the second part we will apply these results to the special case of packed GCB-spaces. \vspace{2mm} \noindent Let $X$ be a proper metric space and $\Gamma$ be a discrete group of isometries of $X$. We consider the quotient space $\Gamma \backslash X$ and the standard projection $\pi\colon X \to \Gamma \backslash X$. On the quotient a standard pseudometric is defined by $d(\pi x, \pi y) = \inf_{g\in \Gamma}d(x, gy)$. Since the action is discrete, this pseudometric is actually a metric. Indeed if $d(\pi x, \pi y)= 0$ then for every $n > 0$ there exists $g_n\in \Gamma$ such that $d(x,g_ny)\leq \frac{1}{n}$. In particular $d(x,g_nx)\leq d(x,g_ny) + d(g_ny,g_nx) \leq d(x,y) + 1$ for every $n$. So the set of these $g_n$'s is finite, hence some $g\in\Gamma$ satisfies $d(x,gy)\leq \frac{1}{n}$ for infinitely many $n$, i.e. $x= gy$, and so $\pi x = \pi y$. The map $\pi$ is $1$-Lipschitz and moreover if $\Gamma$ acts freely (i.e. $gx = x$ for some $x\in X$, $g\in \Gamma$ implies $g=\text{id}$) then it is a local isometry.
In this case for every $x\in X$ the injectivity radius at $x$ is defined by $$\iota(x) = \sup\lbrace r > 0 \text{ s.t. } \pi_{\vert_{ B(x,r)}}\colon B(x,r) \to \pi(B(x,r)) \text{ is an isometry}\rbrace.$$ This defines a map $\iota \colon X \to (0,+\infty]$ with the following properties: \begin{itemize} \item[-] $\iota(x) > 0$ for every $x\in X$; \item[-] $\iota$ is lower-semicontinuous; \item[-] $\iota$ is $\Gamma$-equivariant. \end{itemize} As a consequence it induces a function $\iota \colon \Gamma\backslash X \to (0,+\infty]$ and for every compact subset $K \subseteq \Gamma \backslash X$ the minimum of $\iota$ on $K$ exists and is strictly positive; it is called the injectivity radius of $K$. The following lemma is standard. \begin{lemma} \label{lemma-inj-radius} Let $X$ be a proper metric space and let $\Gamma$ be a group of isometries of $X$ acting discretely and freely. Then \begin{itemize} \item[(i)] for all $x,y\in X$ such that $d(\pi x, \pi y) < \iota(x)$ there exists a unique $g\in \Gamma$ such that $d(x,gy)<\iota(x)$. In particular $d(x,gy)=d(\pi x, \pi y)$; \item[(ii)] if $X$ is a length space then $\pi$ is a locally isometric covering. \end{itemize} \end{lemma}
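\vspace{2mm} \noindent A basic example, included only as a sanity check of the definitions: let $\Gamma = \mathbb{Z}$ act on $X = \mathbb{R}$ by integer translations. The action is discrete and free, the quotient $\Gamma\backslash \mathbb{R}$ is a circle of length $1$ with the metric $d(\pi x, \pi y) = \min_{k\in \mathbb{Z}}\vert x - y + k \vert$ and $$\iota(x) = \frac{1}{4} \quad \text{for every } x\in \mathbb{R}.$$ Indeed if $r\leq \frac{1}{4}$ then any two points of $B(x,r)$ are at distance at most $\frac{1}{2}$, so $\pi$ restricted to $B(x,r)$ is an isometry onto its image, while for any $r > \frac{1}{4}$ there are points $x \pm r'$ of $B(x,r)$, with $\frac{1}{4} < r' < r$, whose distance $2r'$ is strictly bigger than their quotient distance $1 - 2r'$.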
\subsection{The space of local geodesics} Let $X$ be a proper metric space and let Loc-Geod$(X)$ be its space of local geodesic lines, thus a subset of the set of maps $\gamma\colon \mathbb{R} \to X$, endowed with the topology of uniform convergence on compact subsets of $\mathbb{R}$. It is metrizable: indeed we consider, as in \cite{Cav21}, the class $\mathcal{F}$ of continuous functions $f\colon \mathbb{R} \to \mathbb{R}$ satisfying \begin{itemize} \item[(a)] $f(s) > 0$ for all $s \in \mathbb{R}$; \item[(b)] $f(s) = f(-s)$ for all $s \in \mathbb{R}$; \item[(c)] $\int_{-\infty}^{+\infty} f(s)ds = 1$; \item[(d)] $\int_{-\infty}^{+\infty} 2\vert s \vert f(s) ds = C(f) < + \infty$ \end{itemize} and we define the quantity \begin{equation*} \label{f-distance} f(\gamma, \gamma') := \int_{-\infty}^{+\infty}d(\gamma(s),\gamma'(s))f(s)ds. \end{equation*} \begin{lemma} \label{lemma-metric-locgeod} Let $X$ be a proper, length metric space and let \textup{Loc-Geod}$(X)$ be its space of local geodesics. Then \begin{itemize} \item[(i)] for all $\gamma \in \textup{Loc-Geod}(X)$ and for all $t<s$ it holds $d(\gamma(t),\gamma(s))\leq \ell(\gamma\vert_{[t,s]})= \vert s -t \vert$, where $\ell(\cdot)$ denotes the length of a curve; \item[(ii)] for all $\gamma, \gamma' \in \textup{Loc-Geod}(X)$ and for all $f\in \mathcal{F}$ it holds $$f(\gamma, \gamma') \leq d(\gamma(0),\gamma'(0)) + C(f),$$ and this expression defines a distance on $\textup{Loc-Geod}(X)$ which induces its topology. Moreover $$d(\gamma(0),\gamma'(0)) \leq f(\gamma, \gamma') + C(f).$$ \end{itemize} \end{lemma} \begin{proof} The proof of (i) is easy. The first inequality in (ii) can be proved exactly as in Lemma 2.2 of \cite{Cav21}, using (i) to show that the integral is finite. The second inequality in (ii) is easy: for every $s\in \mathbb{R}$ it holds $d(\gamma(s),\gamma'(s)) \geq d(\gamma(0),\gamma'(0)) - 2\vert s \vert$ by (i) and this implies the estimate. The fact that the distance $f$ induces the topology of Loc-Geod$(X)$ can be proved in the same way as Lemma 2.2 of \cite{Cav21}. \end{proof} Every isometry $g$ of $X$ acts on Loc-Geod$(X)$ by $(g\gamma)(\cdot) = g\gamma(\cdot)$ for all $\gamma\in\text{Loc-Geod}(X)$. The action is clearly continuous, so $g$ defines a homeomorphism of Loc-Geod$(X)$. Thus if $\Gamma$ is a group of isometries of $X$ then $\Gamma$ acts by homeomorphisms on Loc-Geod$(X)$. \begin{lemma} Let $\Gamma$ be a group of isometries of a proper metric space $X$ acting discretely and freely. Then the action of $\Gamma$ on \textup{Loc-Geod}$(X)$ is properly discontinuous. \end{lemma} \begin{proof} We fix $\gamma \in \text{Loc-Geod}(X)$, we choose $\varepsilon > 0$ such that if $d(g\gamma(0),\gamma(0))< \varepsilon$ then $g=\text{id}$ and we consider the set $$U = \left\lbrace \gamma'\in \text{Loc-Geod}(X) \text{ s.t. } \gamma'(0)\in B\left(\gamma(0),\frac{\varepsilon}{2}\right)\right\rbrace.$$ The set $U$ is clearly an open neighbourhood of $\gamma$. Now if $g\in \Gamma$ is such that $g\gamma'\in U$ for some $\gamma'\in U$ then both $d(\gamma'(0),\gamma(0))$ and $d(g\gamma'(0), \gamma(0))$ are $<\frac{\varepsilon}{2}$, so $d(\gamma(0),g\gamma(0))<\varepsilon$ and $g=\text{id}$. \end{proof} We fix a group of isometries $\Gamma$ of a proper metric space $X$ acting discretely and freely. There are two natural objects we can consider: the quotient space $\Gamma \backslash \text{Loc-Geod}(X)$ endowed with the quotient topology and the space of parametrized local geodesic lines of the metric space $\Gamma \backslash X$, namely Loc-Geod$(\Gamma \backslash X)$, endowed as usual with the topology of uniform convergence on compact subsets of $\mathbb{R}$. \begin{lemma} \label{lemma-metrizability} Let $X$ be a proper, length metric space and let \textup{Loc-Geod}$(X)$ be its space of local geodesics. Let $\Gamma$ be a group of isometries of $X$ acting discretely and freely and let $f\in \mathcal{F}$. Then \begin{itemize} \item[(i)] the natural action of $\Gamma$ on $(\textup{Loc-Geod}(X),f)$ is by isometries and discrete; \item[(ii)] the space $\Gamma \backslash \textup{Loc-Geod}(X)$ is metrizable. \end{itemize} \end{lemma} \begin{proof} The action of $\Gamma$ on $(\textup{Loc-Geod}(X),f)$ is clearly by isometries. Moreover for every $\gamma \in \textup{Loc-Geod}(X)$ and every $R\geq 0$ we have, by Lemma \ref{lemma-metric-locgeod}, $$\#\lbrace g \in \Gamma \text{ s.t. } f(g\gamma, \gamma) \leq R\rbrace \leq \#\lbrace g \in \Gamma \text{ s.t. } d(g\gamma(0), \gamma(0)) \leq R + C(f)\rbrace < +\infty.$$ By the same argument as at the beginning of Section \ref{sec-dynamics} we conclude that the quotient pseudometric induced by $f$ is actually a metric, which implies (ii) since the quotient metric induces the quotient topology. \end{proof} \begin{prop} \label{prop-quotient-geodesics} Let $\Gamma$ be a group of isometries of a proper, length metric space $X$ acting discretely and freely. Then the map $$\Pi\colon \textup{Loc-Geod}(X)\to \textup{Loc-Geod}(\Gamma \backslash X), \quad \Pi(\gamma)(\cdot) = \pi\gamma(\cdot)$$ is continuous, surjective and $\Gamma$-equivariant. So it induces a map $$\bar\Pi\colon \Gamma\backslash\textup{Loc-Geod}(X)\to \textup{Loc-Geod}(\Gamma \backslash X)$$ which is a homeomorphism. \end{prop} \begin{proof} First of all if $\gamma = g \eta$ for some $g\in \Gamma$ and $\gamma,\eta \in \text{Loc-Geod}(X)$ then clearly $\pi\gamma(\cdot) = \pi\eta(\cdot)$, so $\Pi$ is $\Gamma$-equivariant. Moreover every continuous map $\gamma\colon \mathbb{R} \to \Gamma \backslash X$ can be lifted to a map $\tilde{\gamma}\colon \mathbb{R}\to X$ since $\pi\colon X\to \Gamma\backslash X$ is a covering map by Lemma \ref{lemma-inj-radius}. Clearly if $\gamma$ is a local geodesic then $\tilde{\gamma}$ is a local geodesic of $X$, again by Lemma \ref{lemma-inj-radius}.
This shows that $\Pi$ is surjective. Furthermore since $\pi$ is $1$-Lipschitz then $\Pi$ is continuous.\\ It remains to show that the map $\bar{\Pi}$ is a homeomorphism. Let us show it is injective: suppose $\Pi(\gamma) = \Pi(\eta)$, i.e. $\pi\gamma(t)=\pi\eta(t)$ for every $t\in \mathbb{R}$. Therefore for every $t\in \mathbb{R}$ there is a unique $g_t\in \Gamma$ such that $g_t\eta(t) = \gamma(t)$, since the action is free. We consider the set $A=\lbrace t \in \mathbb{R} \text{ s.t. } g_t = g_0\rbrace$. It is closed: indeed if $t_k\to t_\infty$ with $t_k\in A$ then we have \begin{equation*} \begin{aligned} d(g_{t_\infty}\eta(t_\infty), g_0\eta(t_\infty)) &\leq d(g_{t_\infty}\eta(t_\infty), g_{t_k}\eta(t_k)) + d(g_{t_k}\eta(t_k), g_0\eta(t_\infty))\\ &\leq d(\gamma(t_\infty),\gamma(t_k)) + d(\eta(t_k),\eta(t_\infty)) \end{aligned} \end{equation*} and the last quantity is as small as we want. So $g_{t_\infty} = g_0$, i.e. $t_\infty \in A$. Moreover $A$ is open: let $t\in A$ and suppose there is a sequence $t_k \to t$ such that $g_{t_k}\neq g_0$. Then $$d(\gamma(t), g_{t_k}\eta(t)) \leq d(\gamma(t), \gamma(t_k)) + d(g_{t_k}\eta(t_k), g_{t_k}\eta(t)) \leq 2\vert t - t_k \vert.$$ When $2\vert t - t_k \vert < \iota(\eta(t))$ we get $d(g_0^{-1}g_{t_k}\eta(t), \eta(t)) < \iota(\eta(t))$ and by Lemma \ref{lemma-inj-radius} this implies $g_0^{-1}g_{t_k} = \text{id}$, i.e. $g_{t_k} = g_0$ for every $t_k$ sufficiently close to $t$. Since $A$ is non-empty we conclude that $A=\mathbb{R}$, i.e. $\gamma(t) = g_0 \eta(t)$ for every $t\in \mathbb{R}$. So $\gamma = g_0 \eta$, that is $\bar{\Pi}$ is injective.\\ The continuity of $\bar{\Pi}^{-1}$ can be checked on sequences since both spaces are metrizable by Lemmas \ref{lemma-metric-locgeod} and \ref{lemma-metrizability}. Let $p\colon \text{Loc-Geod}(X) \to \Gamma\backslash\text{Loc-Geod}(X)$ be the standard projection map. We take $\gamma_k, \gamma_\infty \in \text{Loc-Geod}(\Gamma\backslash X)$ such that $\gamma_k \to \gamma_\infty$ uniformly on compact subsets of $\mathbb{R}$. Let $T\geq 0$, let $\varepsilon > 0$ be any real number which is less than the injectivity radius of the compact set $\gamma_\infty([-T,T])$ and let $\tilde{\gamma}_k, \tilde{\gamma}_\infty$ be any covering local geodesics of $\gamma_k, \gamma_\infty$ respectively. We know that for $k$ big enough it holds $d(\gamma_k(t),\gamma_\infty(t)) < \varepsilon$ for every $t\in [-T,T]$; then there is a unique $g_k(t)\in \Gamma$ such that $d(g_k(t)\tilde\gamma_k(t),\tilde\gamma_\infty(t)) < \varepsilon$ by Lemma \ref{lemma-inj-radius}. Arguing as before we conclude that for every such $k$ it holds $g_k(t) = g_k(0) =:g_k$ for every $t\in [-T,T]$. This implies that $g_k\tilde{\gamma}_k$ converges to $\tilde{\gamma}_\infty$ uniformly on $[-T,T]$, so $p(\tilde{\gamma}_k)$ converges to $p(\tilde{\gamma}_\infty)$. Since this is true for every $T\geq 0$ we get the continuity of $\bar\Pi^{-1}$. \end{proof} \noindent There is a natural action of $\mathbb{R}$ on $\text{Loc-Geod}(X)$ defined by reparametrization: $$\Phi_t\gamma (\cdot) = \gamma(\cdot + t)$$ for every $t\in \mathbb{R}$. It is easy to see that this is a continuous action: $\Phi_t \circ \Phi_s = \Phi_{t+s}$ for all $t,s\in \mathbb{R}$ and for every $t\in \mathbb{R}$ the map $\Phi_t$ is a homeomorphism of $\text{Loc-Geod}(X)$. This action is called the \emph{local geodesic flow} on $X$. \subsection{Packed GCB-spaces} From now on we will add some other assumptions on our metric space $X$, in terms of weak upper and lower bounds on the curvature.
As a lower bound we take a bounded packing condition at a fixed scale. \\ Let $Y$ be any subset of a metric space $X$:\\ -- a subset $S$ of $Y$ is called {\em $r$-dense} if $\forall y \in Y$ $\exists z\in S$ such that $d(y,z)\leq r$; \\ -- a subset $S$ of $Y$ is called {\em $r$-separated} if $\forall y,z \in S$ it holds $d(y,z)> r$.\\ The packing number of $Y$ at scale $r$ is the maximal cardinality of a $2r$-separated subset of $Y$ and it is denoted by $\text{Pack}(Y,r)$. The covering number of $Y$ at scale $r$ is the minimal cardinality of an $r$-dense subset of $Y$ and it is denoted by $\text{Cov}(Y,r)$. The packing and the covering functions of $X$ are respectively $$\text{Pack}(R,r)=\sup_{x\in X}\text{Pack}(\overline{B}(x,R),r), \qquad \text{Cov}(R,r)=\sup_{x\in X}\text{Cov}(\overline{B}(x,R),r).$$ We say that a metric space $X$ is {\em $P_0$-packed at scale $r_0$} if Pack$(3r_0,r_0)\leq P_0$, that is every ball of radius $3r_0$ contains no more than $P_0$ points that are $2r_0$-separated. \vspace{2mm} \noindent As upper curvature bound we take a very weak notion of non-positive curvature. Let $X$ be a metric space. A geodesic bicombing is a map $\sigma \colon X\times X\times [0,1] \to X$ with the property that for all $(x,y) \in X\times X$ the map $\sigma_{xy}\colon t\mapsto \sigma(x,y,t)$ is a geodesic from $x$ to $y$ parametrized proportionally to arc-length, i.e. $d(\sigma_{xy}(t), \sigma_{xy}(t')) = \vert t - t' \vert d(x,y)$ for all $t,t'\in [0,1]$ and $\sigma_{xy}(0)=x, \sigma_{xy}(1)=y$.\\ \emph{When $X$ is equipped with a geodesic bicombing, for all $x,y\in X$ we will denote by $[x,y]$ the geodesic $\sigma_{xy}$ parametrized by arc-length.}\\ A geodesic bicombing is: \begin{itemize} \item \emph{convex} if the map $t\mapsto d(\sigma_{xy}(t), \sigma_{x'y'}(t))$ is convex on $[0,1]$ for all $x,y,x',y' \in X$; \item \emph{consistent} if for all $x,y \in X$, for all $0\leq s\leq t \leq 1$ and for all $\lambda\in [0,1]$ it holds $\sigma_{pq}(\lambda) = \sigma_{xy}((1-\lambda)s + \lambda t)$, where $p:= \sigma_{xy}(s)$ and $q:=\sigma_{xy}(t)$; \item \emph{reversible} if $\sigma_{xy}(t) = \sigma_{yx}(1-t)$ for all $t\in [0,1]$. \end{itemize} For instance every convex metric space in the sense of Busemann (so also every CAT$(0)$ metric space) admits a unique convex, consistent, reversible geodesic bicombing.\\ Given a geodesic bicombing $\sigma$ we say that a geodesic (segment, ray, line) $\gamma$ is a $\sigma$-geodesic (segment, ray, line) if for all $x,y\in \gamma$ we have that $[x,y]$ coincides with the subsegment of $\gamma$ between $x$ and $y$.\\ A geodesic bicombing is \emph{geodesically complete} if every $\sigma$-geodesic segment is contained in a $\sigma$-geodesic line. A pair $(X,\sigma)$ is called a GCB-space if $\sigma$ is a convex, consistent, reversible, geodesically complete geodesic bicombing on the complete metric space $X$.
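\vspace{2mm} \noindent A standard example, whose verification is elementary and which we record only for illustration: $\mathbb{R}^n$ with the Euclidean distance and the linear bicombing $$\sigma(x,y,t) = (1-t)x + ty$$ is a GCB-space, since $t\mapsto \Vert (1-t)(x-x') + t(y-y')\Vert$ is convex, consistency and reversibility are immediate and every affine segment is contained in an affine line. Moreover $\mathbb{R}^n$ is $P_0$-packed at every scale $r_0$ with $P_0 = 4^n$: the balls of radius $r_0$ centered at the points of a $2r_0$-separated subset of $\overline{B}(x,3r_0)$ are pairwise disjoint and contained in $\overline{B}(x,4r_0)$, so by comparing volumes there are at most $4^n$ of them. Of course $\mathbb{R}^n$ with $n\geq 2$ is not Gromov hyperbolic, so this example is relevant only for the notions of the present subsection.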
The packing condition has a controlled behaviour in GCB-spaces. \begin{prop}[Proposition 3.2 of \cite{CavS20bis}] \label{packingsmallscales} Let $(X,\sigma)$ be a \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Then: \begin{itemize} \item[(i)] for all $r\leq r_0$, the space $X$ is $P_0$-packed at scale $r$ and it is proper; \item[(ii)] for every $0<r\leq R$ and every $x\in X$ it holds: $$\textup{Pack}(R,r)\leq P_0(1+P_0)^{\frac{R}{r} - 1} \text{, if } r\leq r_0;$$ $$\textup{Pack}(R,r)\leq P_0(1+P_0)^{\frac{R}{r_0} - 1} \text{, if } r > r_0.$$ \end{itemize} \end{prop} \noindent Basic examples of GCB-spaces that are $P_0$-packed at scale $r_0$ are: \begin{itemize} \item[i)] complete and simply connected Riemannian manifolds with sectional curvature pinched between two negative constants $\kappa' \leq \kappa < 0$; \item[ii)] simply connected $M^\kappa$-complexes, with $\kappa \leq 0$, without free faces and {\em bounded geometry} (i.e., with valency at most $V_0$, size at most $S_0$ and positive injectivity radius); \item[iii)] complete, geodesically complete, CAT$(0)$ metric spaces $X$ with dimension at most $n$ and volume of balls of radius $R_0$ bounded above by $V$. \end{itemize} \noindent For further details on the second and the third class of examples we refer to \cite{CavS20}. \noindent Let $(X,\sigma)$ be a proper GCB-space. The {\em space of parametrized $\sigma$-geodesic lines} of $X$ is $$\text{Geod}_\sigma(X) = \lbrace \gamma\colon \mathbb{R} \to X \text{ isometry s.t. } \gamma(\mathbb{R}) \text{ is a }\sigma\text{-geodesic line}\rbrace,$$ considered as a subset of Loc-Geod$(X)$. By the continuity of $\sigma$ (see \cite{Des15}, \cite{Cav21}) we have that $\text{Geod}_\sigma(X)$ is closed in $\text{Loc-Geod}(X)$. Moreover the local geodesic flow on Loc-Geod$(X)$ restricts as an action on $\text{Geod}_\sigma(X)$, called the {\em $\sigma$-geodesic flow} on $X$. The {\em evaluation map} $E\colon \text{Geod}_\sigma(X) \to X$, which is defined as $E(\gamma)=\gamma(0)$, is continuous and proper (\cite{BL12}, Lemma 1.10 and \cite{Cav21}); it is also surjective since $\sigma$ is assumed geodesically complete. The topology on $\text{Geod}_\sigma(X)$ is metrizable by Lemma \ref{lemma-metric-locgeod}. \vspace{2mm} \noindent Let $(X,\sigma)$ be a GCB-space. An isometry $g$ of $X$ is a $\sigma$-isometry if for all $x,y\in X$ it holds $\sigma_{g(x)g(y)} = g(\sigma_{xy})$. We say that a group of isometries of $X$ is a group of $\sigma$-isometries if every element of the group is a $\sigma$-isometry. If $\Gamma$ is a group of $\sigma$-isometries of $X$ acting discretely and freely then Geod$_\sigma(X)$ is $\Gamma$-invariant and we set Loc-Geod$_\sigma(\Gamma\backslash X):=\Pi(\text{Geod}_\sigma(X))$, which is homeomorphic to $\Gamma\backslash\text{Geod}_\sigma(X)$ by Proposition \ref{prop-quotient-geodesics}.\\ Since in Busemann convex (so also CAT$(0)$) metric spaces $X$ every local geodesic is a geodesic, it holds Loc-Geod$(X) = \text{Geod}_\sigma(X)$, where $\sigma$ is the unique bicombing, so Loc-Geod$_\sigma(\Gamma\backslash X) = $ Loc-Geod$(\Gamma\backslash X)$.\\ We end this section with two easy results and a remark. \begin{lemma} \label{cor-lipschitz} Let $(X,\sigma)$ be a proper \textup{GCB}-space and let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then the natural map $\Pi\colon(\textup{Geod}_\sigma(X),f) \to (\textup{Loc-Geod}_\sigma(\Gamma\backslash X),f)$ is $1$-Lipschitz for all $f\in \mathcal{F}$. \end{lemma} \begin{proof} For all $\gamma, \gamma'\in \text{Geod}_\sigma(X)$ we have \begin{equation*} \begin{aligned} f(\gamma, \gamma') &= \int_{-\infty}^{+\infty}d(\gamma(s),\gamma'(s))f(s)ds\\ &\geq \int_{-\infty}^{+\infty}d(\pi\gamma(s),\pi\gamma'(s))f(s)ds = f(\Pi(\gamma), \Pi(\gamma')).
\end{aligned} \end{equation*} \end{proof} \begin{lemma} \label{lemma-completeness} Let $(X,\sigma)$ be a \textup{GCB}-space and let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then the metric on $\textup{Loc-Geod}_\sigma(\Gamma\backslash X)$ induced by every $f\in \mathcal{F}$ is complete. \end{lemma} \begin{proof} We observe the result is not completely trivial because the metric $f$ on $\textup{Loc-Geod}_\sigma(\Gamma\backslash X)$ does not coincide with the quotient metric. However given a Cauchy sequence $\lbrace\gamma_k\rbrace$ we know that for every $t\in\mathbb{R}$ the sequence $\lbrace \gamma_k(t)\rbrace$ is bounded, so we can apply the Arzel\`a-Ascoli Theorem to find a limit map $\gamma_\infty \colon \mathbb{R} \to \Gamma \backslash X$. We need to show $\gamma_\infty$ is a local geodesic. Let $t\in \mathbb{R}$ and let $2\iota>0$ be smaller than the injectivity radius at $\gamma_\infty(t)$. Then by Lemma \ref{lemma-inj-radius} for every $k$ big enough there is a subsegment of length $2\iota$ of $\gamma_k$ which converges to $\gamma_\infty([t-\iota,t+\iota])$ and which is a true geodesic. Therefore $\gamma_\infty([t-\iota,t+\iota])$ is a true geodesic, that is $\gamma_\infty$ is a local geodesic.\\ Let $\tilde{\gamma}_k, \tilde\gamma_\infty$ be any covering local geodesics of $\gamma_k, \gamma_\infty$ respectively. Arguing as in the proof of Proposition \ref{prop-quotient-geodesics} we know that for every $T\in\mathbb{R}$ there exists $g_T\in \Gamma$ such that $g_T\tilde\gamma_k$ converges uniformly to $\tilde\gamma_\infty$ on $[-T,T]$. It is then clear by Lemma \ref{lemma-inj-radius} that if $T'\geq T$ then $g_{T'} = g_{T}$. In particular we can find $g\in \Gamma$ such that $g\tilde\gamma_k$ converges uniformly on every compact subset of $\mathbb{R}$ to $\tilde\gamma_\infty$. Observe that each $g\tilde\gamma_k$ is a $\sigma$-geodesic line, so is $\tilde\gamma_\infty$, i.e. $\gamma_\infty \in \textup{Loc-Geod}_\sigma(\Gamma\backslash X)$. \end{proof} \begin{obs} \label{rmk-closure} We observe that the proof shows also that \textup{Loc-Geod}$_\sigma(\Gamma\backslash X)$ is closed in \textup{Loc-Geod}$(\Gamma\backslash X)$. \end{obs} In our situation the space $\Gamma \backslash X$ is naturally a locally GCB-space with local convex bicombing $\sigma$ as defined in \cite{Mie15}. A $\sigma$-local geodesic can be defined as a local geodesic $\gamma$ such that for all close enough $t<t'$ it holds $\gamma\vert_{[t,t']} = \sigma_{\gamma(t),\gamma(t')}$. Of course this equality has a meaning only locally, where $\sigma$-geodesics are defined. Clearly every $\sigma$-local geodesic of $\Gamma\backslash X$ can be lifted to a $\sigma$-local geodesic of $X$ and by convexity it is easy to check that every $\sigma$-local geodesic of $X$ is actually a $\sigma$-geodesic. So $\textup{Loc-Geod}_\sigma(\Gamma\backslash X)$ can be seen exactly as the space of $\sigma$-local geodesics of $\Gamma \backslash X$.\\ This picture can be generalized: if $X$ is a complete, connected, locally GCB-space then by Theorem 1.1 of \cite{Mie15} it admits a universal cover $\tilde X$ that can be endowed with a unique structure of GCB-space $(\tilde X, \sigma)$ and $X=\Gamma\backslash \tilde{X}$, where $\Gamma$ is a group of $\sigma$-isometries of $\tilde{X}$ acting discretely and freely. In this setting the natural space of $\sigma$-local geodesics of $X$ coincides with the space $\textup{Loc-Geod}_\sigma(\Gamma\backslash \tilde X)$ defined above.
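\vspace{2mm} \noindent A guiding example for the next section, stated only for orientation: if $X = \mathbb{H}^2$, with its unique geodesic bicombing $\sigma$, and $\Gamma$ is the fundamental group of a closed hyperbolic surface $S = \Gamma\backslash\mathbb{H}^2$, then every local geodesic of $S$ is a complete geodesic determined by its initial unit tangent vector; therefore $\textup{Loc-Geod}_\sigma(\Gamma\backslash X) = \textup{Loc-Geod}(S)$ can be identified with the unit tangent bundle $T^1S$, and under this identification the reparametrization flow $\Phi_t$ corresponds to the classical geodesic flow.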
\section{Upper bound of the entropy} We consider the dynamical system $(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$, where $(X,\sigma)$ is a $\delta$-hyperbolic GCB-space that is $P_0$-packed at scale $r_0$, $\Gamma$ is a group of $\sigma$-isometries of $X$ acting discretely and freely and $\Phi_1$ is the classical time one reparametrization flow, i.e. $\Phi_1(\gamma)(\cdot) = \gamma(\cdot + 1)$ for all $\gamma \in \text{Loc-Geod}_\sigma(\Gamma \backslash X)$.\\ In this section we prove that the topological entropy of the dynamical system above is less than or equal to the critical exponent of $\Gamma$.\\ We first recall the definition of the topological entropy and of the local entropies of a dynamical system. For us a dynamical system is a pair $(Y,T)$ where $Y$ is a topological space and $T\colon Y \to Y$ is a continuous map. We will always denote by $\mathscr{B}$ the $\sigma$-algebra of Borelian subsets of $Y$. We denote by $\mathcal{M}_1(Y,T)$ the set of $T$-invariant probability measures on $(Y,\mathscr{B})$. We recall that a measure $\mu\in \mathcal{M}_1(Y,T)$ is ergodic if every $A\in \mathscr{B}$ such that $T^{-1}A \subseteq A$ has $\mu$-measure $0$ or $1$. We denote the set of ergodic measures by $\mathcal{E}_1(Y,T)$.\\ \emph{For the rest of the section we will always assume that $Y$ is a locally compact, metrizable, topological space.} \noindent The topological entropy of the dynamical system $(Y,T)$ is by definition: \begin{equation} \label{def-htop} h_\text{top}(Y,T) := \sup_{\mu\in \mathcal{M}_1(Y,T)}h_\mu(Y,T) = \sup_{\mu\in \mathcal{E}_1(Y,T)}h_\mu(Y,T), \end{equation} where $h_\mu(Y,T)$ denotes the Kolmogorov-Sinai entropy of the measure $\mu$. For our purposes we will not need to recall its definition. The second equality is classical and follows from the convexity of the entropy functional.\\ We introduce two other possible definitions of entropy of a measure $\mu\in \mathcal{E}_1(Y,T)$. We restrict to ergodic measures as we have seen that it is enough to consider them in order to compute the topological entropy of $(Y,T)$. Let $\mu \in \mathcal{E}_1(Y,T)$ and let \texthtd\, be a metric on $Y$ inducing its topology. The upper local entropy of $\mu$ with respect to \texthtd\, is defined as $$\overline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T) = \underset{\mu}{\text{ess}\sup}\lim_{r\to 0}\limsup_{n\to +\infty} -\frac{1}{n}\log \mu(B_{\text{\texthtd}^n}(y,r)),$$ while the lower local entropy is $$\underline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T) = \underset{\mu}{\text{ess}\inf}\lim_{r\to 0}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{\text{\texthtd}^n}(y,r)).$$ Here $\text{\texthtd}^n$ is the dynamical distance defined as usual as: $$\text{\texthtd}^n(y,y') = \max_{i=0,\ldots,n-1}\text{\texthtd}(T^iy,T^iy'), \qquad \forall y,y'\in Y.$$
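\vspace{2mm} \noindent A classical example, included only to illustrate these definitions and independent of the rest of the paper: take $Y = \mathbb{R}/\mathbb{Z}$ with the standard metric \texthtd, the doubling map $Ty = 2y \ (\textup{mod }1)$ and $\mu$ the Lebesgue measure, which is $T$-invariant and ergodic. For small $r$ the dynamical ball $B_{\text{\texthtd}^n}(y,r)$ is an interval of length comparable to $r\cdot 2^{-n}$, hence $$\overline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T) = \underline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T) = \log 2,$$ which coincides with the topological entropy of the doubling map.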
Observe that this definition of the lower local entropy agrees with the one given in \cite{Led13} while it is a bit different from the one of \cite{Riq16}. However it is certainly greater than or equal to the latter. So we conclude: \begin{lemma}[\cite{Riq16}, Theorem 1.28] \label{Brin-Katok-Riquelme} Let $\mu \in \mathcal{E}_1(Y,T)$ and let \textup{\texthtd\,} be a complete metric on $Y$ inducing its topology. Then $h_\mu(Y,T) \leq \underline{h}_{\mu,\textup{\texthtd}}^\textup{loc}(Y,T).$ \end{lemma} It is natural to define the \emph{upper and lower local entropy} of the dynamical system $(Y,T)$, respectively, as $$\overline{h}^\text{loc}(Y,T) = \sup_{\mu \in \mathcal{E}_1(Y,T)}\inf_{\text{\texthtd}}\overline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T), \qquad \underline{h}^\text{loc}(Y,T) = \sup_{\mu \in \mathcal{E}_1(Y,T)}\inf_{\text{\texthtd}}\underline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T),$$ where the infima are taken among all complete metrics on $Y$ inducing its topology. Clearly by Lemma \ref{Brin-Katok-Riquelme} we have \begin{equation} \label{eq-comparison-entropies} h_\text{top}(Y,T) \leq \underline{h}^\text{loc}(Y,T) \leq \overline{h}^\text{loc}(Y,T). \end{equation} \noindent As said at the beginning of the section we consider the dynamical system $(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$. To simplify the notations we denote by $\mathcal{M}_1$, $\mathcal{E}_1$ the sets $\mathcal{M}_1(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$ and $\mathcal{E}_1(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$. In the same way the topological entropy, the lower and the upper local entropy of $(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$ will be simply denoted by, respectively, $h_\text{top}$, $\underline{h}^\text{loc}$ and $\overline{h}^\text{loc}$. In order to prove $h_\text{top}\leq h_\Gamma$ the strategy is to show $\overline{h}^\textup{loc} \leq h_\Gamma$. This would be enough by \eqref{eq-comparison-entropies}. Before proceeding in this direction we need to simplify the expression of the upper local entropy. \vspace{2mm} \noindent By Lemma \ref{lemma-completeness} every $f\in \mathcal{F}$ defines a complete metric on Loc-Geod$_\sigma(\Gamma\backslash X)$ which induces its topology, so by definition it is clear that $$\overline{h}^\text{loc} \leq \sup_{\mu \in \mathcal{E}_1} \overline{h}^\text{loc}_{\mu,f}, \qquad \underline{h}^\text{loc} \leq \sup_{\mu \in \mathcal{E}_1} \underline{h}^\text{loc}_{\mu,f}.$$ Our aim is to simplify the computation of $\overline{h}^\text{loc}_{\mu,f}$ and $\underline{h}^\text{loc}_{\mu,f}$. In order to do that we need the key lemma of \cite{Cav21}. \begin{lemma}[Key lemma, \cite{Cav21}] \label{key-lemma} Let $(X,\sigma)$ be a \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $f\in\mathcal{F}$, $\gamma \in \textup{Geod}_\sigma(X)$ and $0<r\leq R$. Then $$\lim_{T\to +\infty}\frac{1}{T}\log\textup{Cov}_{f^T}(B_{f^T}(\gamma, R), r) = 0.$$ \end{lemma} We are now ready to prove: \begin{lemma} \label{lemma-loc-entropy} Let $\mu \in \mathcal{E}_1$ and $f\in \mathcal{F}$. Then for every $R>0$ and for $\mu$-a.e. $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ it holds: $$\underline{h}_{\mu,f}^\textup{loc} = \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)).$$ The corresponding statement holds for the upper local entropy. \end{lemma} \begin{proof} We fix $R>0$ and we take $0<r\leq R$. Clearly $B_{f^n}(\gamma, R) \supseteq B_{f^n}(\gamma, r)$ for all $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma \backslash X)$, so $$\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) \leq \lim_{r\to 0}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,r))$$ for all $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma \backslash X)$.
In the other direction we notice that for all $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ it holds $$\mu(B_{f^n}(\gamma, R)) \leq \text{Cov}_{f^n}(B_{f^n}(\gamma, R), r)\cdot \underset{\mu}{\textup{ess}\sup} \mu(B_{f^n}(\gamma', r)).$$ So we deduce \begin{equation*} \begin{aligned} \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) &\geq \liminf_{n\to +\infty} -\frac{1}{n}\log \left[ \text{Cov}_{f^n}(B_{f^n}(\gamma, R), r)\cdot \underset{\mu}{\textup{ess}\sup} \mu(B_{f^n}(\gamma', r)) \right] \\ &= \liminf_{n\to +\infty} -\frac{1}{n}\log \text{Cov}_{f^n}(B_{f^n}(\gamma, R), r) \\ &\quad + \liminf_{n\to +\infty} -\frac{1}{n}\log \underset{\mu}{\textup{ess}\sup} \mu(B_{f^n}(\gamma', r)) \\ &= \liminf_{n\to +\infty} -\frac{1}{n}\log \underset{\mu}{\textup{ess}\sup} \mu(B_{f^n}(\gamma', r)) \\ &\geq \underset{\mu}{\textup{ess}\sup} \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma', r)), \end{aligned} \end{equation*} where the second equality follows directly from Lemma \ref{key-lemma}. By the arbitrariness of $r$ we conclude: \begin{equation*} \begin{aligned} \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) &\geq \lim_{r\to 0} \underset{\mu}{\textup{ess}\sup} \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma', r)) \\ &\geq \underset{\mu}{\textup{ess}\sup}\lim_{r\to 0}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma', r)). \end{aligned} \end{equation*} Since this is true for all $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma \backslash X)$ we obtain: \begin{equation*} \begin{aligned} \underset{\mu}{\textup{ess}\inf}\lim_{r\to 0}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,r)) &\geq \underset{\mu}{\textup{ess}\inf}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) \\ &\geq \underset{\mu}{\textup{ess}\sup}\lim_{r\to 0}\liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma', r)). \end{aligned} \end{equation*} This implies that for $\mu$-a.e. $\gamma$ the function $$\gamma \mapsto \lim_{r\to 0}\liminf_{n\to +\infty}-\frac{1}{n}\log \mu(B_{f^n}(\gamma, r))$$ is constantly equal to $\underline{h}_{\mu,f}^\text{loc}$, by definition of lower local entropy. So for $\mu$-a.e. $\gamma$ we conclude $$\underline{h}_{\mu,f}^\text{loc} = \liminf_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)).$$ The proof for the upper entropy is the same. \end{proof} We are ready to prove the first part of Theorem \ref{intro-thm-var-principle}, namely \begin{theo} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then $$\overline{h}^\textup{loc} \leq h_\Gamma.$$ \end{theo} The proof is inspired by \cite{Led13}. Essentially to every measure $\mu \in \mathcal{E}_1$ we associate a measure $\nu$ on $\partial X$ with the following properties: \begin{itemize} \item[(i)] $\overline{h}^\textup{loc}_{\mu,f} \leq \overline{\text{PD}}(\nu)$, where $f\in \mathcal{F}$ is arbitrary; \item[(ii)] $\nu(\Lambda_{\text{erg}}) = 1$. \end{itemize} Then we will use \eqref{eq-packing-dimensions} and Theorem \ref{BJ-PD} to conclude the estimate. \begin{proof} It is enough to show that for all $\mu \in \mathcal{E}_1$ it holds $\overline{h}^\textup{loc}_{\mu,f} \leq h_\Gamma$ for some $f\in \mathcal{F}$. So we fix $f\in\mathcal{F}$ and $\mu \in \mathcal{E}_1$. \\ For every $x\in X$ we consider the ball $B(x,\iota(x))$, where $\iota(x)$ is the injectivity radius at $x$.
Since $X$ is proper, hence separable, we can find a countable set $\lbrace x_i \rbrace_{i\in \mathbb{N}} \subseteq X$ such that $X=\bigcup_{i\in \mathbb{N}} B(x_i,\iota(x_i))$. We consider the sets $$U_i = \lbrace \gamma \in \text{Loc-Geod}_\sigma(\Gamma\backslash X) \text{ s.t. } \gamma(0) \in B(\pi x_i, \iota(x_i))\rbrace.$$ Clearly $\bigcup_{i\in \mathbb{N}}U_i = \text{Loc-Geod}_\sigma(\Gamma \backslash X)$, so there must be some index $i_0$ such that $\mu(U_{i_0}) = c > 0$. By Lemma \ref{lemma-inj-radius} for all $\gamma \in U_{i_0}$ there exists a unique $\tilde{\gamma}\in \text{Geod}_\sigma(X)$ such that $\Pi \tilde{\gamma} = \gamma$ and $d(\tilde\gamma(0), x_{i_0}) < \iota(x_{i_0})$.\\ Therefore we can define the map $+ \colon U_{i_0} \to \partial X$ by $+\gamma = \tilde{\gamma}^+$, where $\tilde{\gamma}$ is the unique covering geodesic of $\gamma$ with $d(\tilde{\gamma}(0), x_{i_0}) < \iota(x_{i_0})$. Let us show it is continuous: let $\gamma_k \in U_{i_0}$ be a sequence converging uniformly on compact subsets to $\gamma_\infty \in U_{i_0}$. We showed in Lemma \ref{lemma-completeness} that there exist covering $\sigma$-geodesics of $\gamma_k$ that converge uniformly on compact subsets to $\tilde{\gamma}_\infty$. It is then clear that these covering $\sigma$-geodesics must be the $\sigma$-geodesics $\tilde{\gamma}_k$. So $\tilde{\gamma}_k^+$ converges to $\tilde{\gamma}_\infty^+$. Hence we can define the measure $\nu$ on $(\partial X, \mathscr{B})$ by $\nu = +_\ast \mu$, i.e. $\nu(A) = \mu(+^{-1}A)$ for every Borel subset $A$ of $\partial X$. \vspace{1mm} \noindent\textbf{Step 1:} $\overline{h}^\textup{loc}_{\mu,f} \leq \overline{\text{PD}}(\nu)$. \vspace{1mm}\\ We fix $R= 28\delta + 3\iota(x_{i_0}) + 2C(f)$. By Lemma \ref{lemma-loc-entropy} we know that \begin{equation} \label{eq-loc-entropy} \overline{h}_{\mu,f}^\textup{loc} = \limsup_{n\to +\infty} -\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) \end{equation} for $\mu$-a.e. $\gamma \in U_{i_0}$. Clearly the measure $\nu$ is concentrated on the image of the map $+$, so in order to compute its upper packing dimension we can just consider the points of this set. So for all $\gamma \in U_{i_0}$ we consider the point $z = \tilde \gamma^+ \in \partial X$ and the generalized visual ball $B(z,e^{-n})$, where we choose $x_{i_0}$ as basepoint of $X$. We want to estimate $\nu(B(z,e^{-n})) = \mu(+^{-1} B(z,e^{-n}))$. We observe that every $\eta \in +^{-1} B(z,e^{-n})$ satisfies: $$d(\tilde{\eta}(0), \tilde \gamma (0)) \leq 2\iota(x_{i_0}), \qquad d(\tilde \eta (n), \tilde \gamma (n)) \leq 14\delta + \iota(x_{i_0}).$$ Indeed the first inequality is by definition of $\tilde{\gamma}$ and $\tilde{\eta}$, while the second one follows from Lemma \ref{product-rays} and Lemma \ref{parallel-geodesics}. By standard arguments, see \cite{Cav21}, this implies $$f^n(\tilde \eta, \tilde \gamma)\leq 28\delta + 3\iota(x_{i_0}) + 2C(f) = R$$ and so, by Lemma \ref{cor-lipschitz}, $f^n(\eta,\gamma)\leq R$. In other words $$+^{-1} B(+\gamma,e^{-n}) \subseteq B_{f^n}(\gamma, R)$$ for all $\gamma \in U_{i_0}$. We conclude that \begin{equation*} \begin{aligned} \overline{\text{PD}}(\nu) &= \underset{\nu}{\text{ess}\sup}\limsup_{\rho\to 0}\frac{\log \nu(B(z,\rho))}{\log \rho} \\ &\geq \underset{\nu}{\text{ess}\sup}\limsup_{n\to +\infty}-\frac{1}{n}\log \nu(B(z,e^{-n})) \\ &\geq \underset{\mu}{\text{ess}\sup}\limsup_{n\to +\infty}-\frac{1}{n}\log \mu(B_{f^n}(\gamma,R)) = \overline{h}_{\mu,f}^\textup{loc}.
\end{aligned} \end{equation*} The first inequality is clear by choosing $\rho=e^{-n}$ and using the fact that the limit superior along a subsequence is smaller than or equal to the limit superior of the whole family. The second follows from what we said before: indeed $\nu$ gives full measure to the set of points $z \in \partial X$ that are in the image of the map $+$. This, together with the fact that $\nu(B(+\gamma,e^{-n})) = \mu(+^{-1} B(+\gamma,e^{-n})) \leq \mu(B_{f^n}(\gamma, R))$, implies the second inequality. The last equality is a direct consequence of \eqref{eq-loc-entropy} and the fact that $U_{i_0}$ has positive $\mu$-measure. \vspace{1mm} \noindent\textbf{Step 2:} $\nu$ gives full measure to the set $\Lambda_{\text{erg}}$. \vspace{1mm}\\ In this step we will use for the first (and last) time the ergodicity of the measure $\mu$. We associate to $\gamma \in U_{i_0}$ the set of integers $\Theta(\gamma) = \lbrace \vartheta_i(\gamma)\rbrace$ defined recursively by $$\vartheta_0(\gamma)=0, \qquad \vartheta_{i+1}(\gamma) = \inf \lbrace n > \vartheta_i(\gamma) \text{ s.t. } \Phi_n(\gamma) \in U_{i_0}\rbrace.$$ We notice that an integer $n$ satisfies $$d(\Phi_{n}\gamma(0), \pi x_{i_0}) = d(\gamma(n), \pi x_{i_0}) < \iota({x_{i_0}})$$ if and only if $n \in \Theta(\gamma)$, i.e. $\Theta(\gamma)$ is exactly the set of (integer) returning times of $\gamma$ to the ball $B(\pi x_{i_0}, \iota(x_{i_0}))$.\\ We apply Birkhoff's Ergodic Theorem to the indicator function of the set $U_{i_0}$, namely $\chi_{U_{i_0}}$, obtaining that for $\mu$-a.e. $\gamma \in \text{Loc-Geod}_\sigma(\Gamma\backslash X)$ it holds $$\exists \lim_{N\to +\infty}\frac{1}{N}\sum_{j=0}^{N-1} \chi_{U_{i_0}} \circ \Phi_j(\gamma) = \mu(U_{i_0}) = c > 0.$$ We remark that $\chi_{U_{i_0}} \circ \Phi_j(\gamma) = 1$ if and only if $j\in \Theta(\gamma)$ and it is $0$ otherwise. So $$\lim_{N\to +\infty}\frac{1}{N}\sum_{j=0}^{N-1} \chi_{U_{i_0}} \circ \Phi_j(\gamma) = \lim_{N\to +\infty}\frac{\#\Theta(\gamma) \cap [0,N-1]}{N},$$ and the right hand side is by definition the density of the set $\Theta(\gamma)$. It is classical that, given the standard increasing enumeration $\lbrace \vartheta_0(\gamma), \vartheta_1(\gamma),\ldots \rbrace$ of $\Theta(\gamma)$, it holds $$\lim_{N\to +\infty}\frac{\#\Theta(\gamma) \cap [0,N-1]}{N} = \lim_{N\to +\infty}\frac{N}{\vartheta_N(\gamma)}.$$ Putting everything together we conclude that for $\mu$-a.e. $\gamma \in \text{Loc-Geod}_\sigma(\Gamma \backslash X)$ the following is true \begin{equation} \label{eq-rec-times} \exists \lim_{N\to +\infty}\frac{\vartheta_N(\gamma)}{N} = \frac{1}{c} < +\infty. \end{equation} Now we claim that for every $\gamma \in U_{i_0}$ satisfying \eqref{eq-rec-times} it holds $+\gamma \in \Lambda_\text{erg}$. For every such $\gamma$ we take its covering geodesic $\tilde{\gamma}$ whose limit point $\tilde{\gamma}^+$ defines $+\gamma = z$. We recall that our basepoint of $X$ is $x_{i_0}$. We fix any geodesic ray $\xi = [x_{i_0}, z]$. First of all we know that for all $t\geq 0$ it holds $d(\xi(t), \tilde\gamma(t)) \leq 8\delta + \iota(x_{i_0})$ by Lemma \ref{parallel-geodesics}. Moreover we know that for every $N\in \mathbb{N}$ the geodesic $\gamma$ returns to $B(\pi x_{i_0}, \iota(x_{i_0}))$ at time $\vartheta_N(\gamma)$. This implies that $d(\tilde{\gamma}(\vartheta_N(\gamma)), \Gamma x_{i_0}) < \iota({x_{i_0}})$, and so $d(\xi(\vartheta_N(\gamma)), \Gamma x_{i_0}) < 8\delta + 2\iota(x_{i_0})$.
By definition this means that $z\in \Lambda_{\tau, \Theta(\gamma)}$, where $\tau = 8\delta + 2\iota(x_{i_0})$. Finally we observe that the sequence $\Theta(\gamma)=\lbrace\vartheta_N(\gamma)\rbrace$ satisfies \eqref{eq-rec-times}, which is exactly the condition that defines a sequence involved in the definition of $\Lambda_\text{erg}$. This proves the claim, and hence that $\nu(\Lambda_\text{erg}) = 1$. \vspace{1mm} \noindent\textbf{Step 3.} By Step 1, Step 2, \eqref{eq-packing-dimensions} and Theorem \ref{BJ-PD} we conclude that $$h_\Gamma = \text{PD}(\Lambda_\text{erg}) \geq \overline{\text{PD}}(\nu) \geq \overline{h}_{\mu,f}^\textup{loc} \geq \overline{h}_{\mu}^\textup{loc}.$$ Finally, from the arbitrariness of $\mu$ we obtain the conclusion. \end{proof} \section{Lower bound of the entropy} \label{sec-lowerbound} We consider again the dynamical system $(\text{Loc-Geod}_\sigma(\Gamma \backslash X), \Phi_1)$, where $(X,\sigma)$ is a $\delta$-hyperbolic GCB-space that is $P_0$-packed at scale $r_0$ and $\Gamma$ is a group of $\sigma$-isometries of $X$ acting discretely and freely. \noindent By Remark \ref{rmk-closure} and Lemma \ref{lemma-metrizability} the space $\text{Loc-Geod}_\sigma(\Gamma \backslash X)$ is locally compact and metrizable, so the topological entropy $h_\text{top}$ can be computed via the variational principle, see \cite{HK95}, as \begin{equation} \label{top-metric} h_\textup{top}=\inf_{\text{\texthtd}}\sup_{K}\lim_{r\to 0}\limsup_{n \to +\infty}\frac{1}{n} \log \textup{Cov}_{\text{\texthtd}^n}(K,r), \end{equation} where the infimum is among all metrics \texthtd\, inducing the natural topology of $\text{Loc-Geod}_\sigma(\Gamma \backslash X)$ and the supremum is among all compact subsets of $\text{Loc-Geod}_\sigma(\Gamma \backslash X)$. Observe that in the right hand side of \eqref{top-metric} the compact subsets $K$ are not required to be $T$-invariant, where by $T$-invariant we mean $T(K) = K$. For a general dynamical system the restriction to $T$-invariant compact subsets may give a strictly smaller quantity, see the remark below Lemma 1.6 of \cite{HK95}. We remark that when $K$ is a compact $T$-invariant subset then the dynamical system $(K,T)$ is supported on a compact metrizable space, so the quantity $$\lim_{r\to 0}\limsup_{n \to +\infty}\frac{1}{n} \log \textup{Cov}_{\text{\texthtd}^n}(K,r)$$ does not depend on the choice of the metric \texthtd\, and it coincides with the topological entropy of the dynamical system $(K,T)$ by the variational principle for compact dynamical systems. We define \begin{equation} h_\textup{inv-top}=\sup_{K\,\,\, T\text{-inv.}}\lim_{r\to 0}\limsup_{n \to +\infty}\frac{1}{n} \log \textup{Cov}_{\text{\texthtd}^n}(K,r) = \sup_{K\,\,\, T\text{-inv.}} h_\text{top}(K,T), \end{equation} called the \emph{invariant topological entropy} of $(\text{Loc-Geod}_\sigma(\Gamma \backslash X),\Phi_1)$. Here the supremum is among all compact $T$-invariant subsets $K$ and \texthtd\, is any metric on the compact subset $K$ inducing its topology. Clearly $h_\textup{inv-top} \leq h_\textup{top}$. The strategy to conclude the proof of Theorem \ref{intro-thm-var-principle} is to show $h_\textup{inv-top} \geq h_\Gamma$. First of all we need to identify the invariant compact subsets of $\text{Loc-Geod}_\sigma(\Gamma \backslash X)$. We fix a basepoint $x\in X$ and we call $\pi x$ the projection of $x$ in $\Gamma \backslash X$, as usual.
Then the following are the prototypes of compact $\Phi_1$-invariant subsets: $$K_\tau = \lbrace \gamma \in \text{Loc-Geod}_\sigma(\Gamma\backslash X) \text{ s.t. } \gamma(n)\in \overline{B}(\pi x, \tau) \text{ for all } n\in \mathbb{Z} \rbrace,$$ where $\tau \geq 0$ is a real number. The set $K_\tau$ is essentially the set of local geodesic lines whose image is completely contained in a fixed compact set of the quotient $\Gamma \backslash X$. \begin{lemma} $K_\tau$ is a compact $\Phi_1$-invariant subset for all $\tau \geq 0$. Moreover any compact $\Phi_1$-invariant subset is contained in $K_\tau$ for some $\tau \geq 0$. \end{lemma} \begin{proof} For all $\tau \geq 0$ the set $K_\tau$ is clearly $\Phi_1$-invariant and closed by Remark \ref{rmk-closure}. Let now $\gamma_k\colon \mathbb{R} \to \overline{B}(\pi x, \tau)$ be a sequence of elements of $K_\tau$. By Ascoli-Arzelà's Theorem there exists a map $\gamma_\infty \colon \mathbb{R} \to \overline{B}(\pi x, \tau)$ such that $\gamma_k \to \gamma_\infty$ uniformly on compact subsets of $\mathbb{R}$, and since $K_\tau$ is closed, clearly $\gamma_\infty \in K_\tau$. This completes the proof of the first statement.\\ Now let $K$ be a compact $\Phi_1$-invariant subset of $\text{Loc-Geod}_\sigma(\Gamma\backslash X)$. We need to show that there exists $\tau \geq 0$ such that $K$ is contained in $K_\tau$. Suppose this is not the case. Then for every $k \geq 0$ there exists $\gamma_k \in K$ such that $\gamma_k(n_k) \notin \overline{B}(\pi x,k)$ for some $n_k\in \mathbb{Z}$. For every $k$ we reparametrize $\gamma_k$ so that $\gamma_k(0) \notin \overline{B}(\pi x,k)$. Since $K$ is $\Phi_1$-invariant, the reparametrized geodesics belong again to $K$. Moreover, since $K$ is compact, there exists a subsequence, denoted again by $\gamma_k$, that converges to some $\gamma_\infty \in K$. In particular the sequence $\gamma_k(0)$ converges to $\gamma_\infty(0)$, but $d(\gamma_k(0), \pi x) > k$ for every $k$, which is a contradiction. \end{proof} As a corollary we directly have: $h_\text{inv-top} = \sup_{\tau \geq 0} h_{\text{top}}(K_\tau, \Phi_1).$ Now the idea is to relate the topological entropy of the dynamical system $(K_\tau, \Phi_1)$ to the Minkowski dimension of the set $\Lambda_\tau$, computed with respect to the basepoint $x$. The tool we need is the Lipschitz-topological entropy of a closed subset of the boundary at infinity, studied in \cite{Cav21}. We briefly recall its definition. Let $(X,\sigma)$ be a $\delta$-hyperbolic, \textup{GCB}-space that is $P_0$-packed at scale $r_0$. For a subset $C$ of $\partial X$ and $Y\subseteq X$ we set $$\text{Geod}_\sigma(Y,C)= \lbrace \gamma \in \text{Geod}_\sigma(X) \text{ s.t. } \gamma^{\pm} \in C \text{ and } \gamma(0)\in Y\rbrace.$$ If $Y = X$ we simply write $\text{Geod}_\sigma(C)$. Clearly $\text{Geod}_\sigma(C)$ is a $\Phi$-invariant subset of $\text{Geod}_\sigma(X)$, so the geodesic flow is well defined on it.
The {\em lower Lipschitz-topological entropy} of $\text{Geod}_\sigma(C)$ is defined as $$\underline{h_{\text{Lip-top}}}(\text{Geod}_\sigma(C)) = \inf_{\text{\texthtd}}\sup_K \lim_{r\to 0} \liminf_{T\to + \infty}\frac{1}{T}\log\text{Cov}_{\text{\texthtd}^T}(K,r),$$ where the infimum is taken among all geometric metrics on $\text{Geod}_\sigma(C)$, the supremum is among all compact subsets of $\text{Geod}_\sigma(C)$, $\text{\texthtd}^T$ denotes the metric $\text{\texthtd}^T(\gamma,\gamma')=\sup_{t\in [0,T]}\text{\texthtd}(\Phi_t \gamma, \Phi_t \gamma')$ and $\text{Cov}_{\text{\texthtd}^T}(K,r)$ denotes the covering number of the set $K$ at scale $r$ with respect to the metric $\text{\texthtd}^T$. Taking the limit superior instead of the limit inferior we define the so-called upper Lipschitz-topological entropy of Geod$_\sigma(C)$, denoted $\overline{h_{\text{Lip-top}}}(\text{Geod}_\sigma(C))$. The computation of these invariants can be considerably simplified. \begin{prop}[\cite{Cav21}] \label{thm-h-liptop} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $C$ be a subset of $\partial X$ and $x\in X$. Then there exists a constant $L$ depending only on $x$ such that $$\underline{h_\textup{Lip-top}}(\textup{Geod}_\sigma(C)) = \sup_{C'\subseteq C}\liminf_{T \to +\infty}\frac{1}{T}\log \textup{Cov}_{f^T}(\textup{Geod}_\sigma(\overline{B}(x,L), C'), r)$$ for every $f\in \mathcal{F}$ and every $r>0$, where the supremum is among all closed subsets $C'$ of $C$. Moreover if $C$ is closed then $\underline{h_\textup{Lip-top}}(\textup{Geod}_\sigma(C)) = \underline{\textup{MD}}(C).$\\ In particular for every subset $C$ of $\partial X$ it holds $$\underline{h_\textup{Lip-top}}(\textup{Geod}_\sigma(C)) = \sup_{C'\subseteq C} \underline{\textup{MD}}(C'),$$ where the supremum is among all closed subsets of $C$. The corresponding results are true for the upper Lipschitz-topological entropy. \end{prop} We are now ready to show: \begin{theo} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then $$h_{\textup{inv-top}} \geq h_\Gamma.$$ \end{theo} \begin{proof} We already know that $h_\text{inv-top} = \sup_{\tau \geq 0} h_{\text{top}}(K_\tau, \Phi_1).$ Moreover $K_\tau$ is compact, so we can use any metric on $K_\tau$ inducing its topology in order to compute its topological entropy. We are going to select a specific metric in the following way: first of all we call $\iota_\tau$ the positive injectivity radius of the compact set $\overline{B}(x,\tau)$. Secondly we consider $$f_\tau(s) = \frac{1}{2}\frac{4}{\iota_\tau}e^{-\frac{4}{\iota_\tau}\vert s \vert}.$$ It is easy to check that $f_\tau \in \mathcal{F}$ and $$C(f_\tau)=\int_{-\infty}^{+\infty}2\vert s \vert f_\tau(s)ds = \frac{\iota_\tau}{2}.$$ Moreover we have: \begin{equation*} \begin{aligned} h_\text{top}(K_\tau,\Phi_1) &= \lim_{r\to 0}\limsup_{n \to +\infty}\frac{1}{n}\log\text{Cov}_{f^n_\tau}(K_\tau, r) \geq \lim_{r\to 0}\liminf_{n \to +\infty}\frac{1}{n}\log\text{Cov}_{f^n_\tau}(K_\tau, r) \\ &\geq \liminf_{n \to +\infty}\frac{1}{n}\log\text{Cov}_{f^n_\tau}\left(K_\tau, \frac{\iota_\tau}{2}\right) \geq \liminf_{T \to +\infty}\frac{1}{T}\log\text{Cov}_{f^T_\tau}\left(K_\tau, \frac{\iota_\tau}{2}\right).
\end{aligned} \end{equation*} Our aim is to show \begin{equation} \label{eq-second-inequality} \small \liminf_{T \to +\infty}\frac{1}{T}\log\text{Cov}_{f^T_\tau}\left(K_\tau, \frac{\iota_\tau}{2}\right) \geq \liminf_{T\to +\infty}\frac{1}{T}\log \text{Cov}_{f_\tau^T}\left(\text{Geod}_\sigma(\overline{B}(x,L), \Lambda_{\tau - 8\delta - L}), \frac{3}{2}\iota_\tau\right), \end{equation} \normalsize where $L$ is the constant of Proposition \ref{thm-h-liptop} that depends only on $x$ and not on $\tau$. Indeed the right hand side equals $\underline{h_\text{Lip-top}}(\text{Geod}_\sigma(\Lambda_{\tau - 8\delta - L}))$ by Proposition \ref{thm-h-liptop}, so we would have $$h_\text{inv-top} = \sup_{\tau \geq 0} h_{\text{top}}(K_\tau, \Phi_1) \geq \sup_{\tau \geq 0} \underline{h_{\text{Lip-top}}}(\Lambda_{\tau - 8\delta - L}) = \sup_{\tau \geq 0} \underline{\text{MD}}(\Lambda_{\tau}) = h_\Gamma$$ again by Proposition \ref{thm-h-liptop} and \eqref{eq-MD-critical}, concluding the proof.\\ It remains to show \eqref{eq-second-inequality}. For every $T \geq 0$ let $\gamma_1,\ldots,\gamma_N$ be a set of $\sigma$-local geodesics realizing $\text{Cov}_{f_\tau^T}(K_\tau,\frac{\iota_\tau}{2}).$ For every $i=1,\ldots,N$ we consider the set $$A_i = \lbrace \tilde\gamma \in \text{Geod}_\sigma(X) \text{ s.t. } \Pi \tilde\gamma = \gamma_i \text{ and } \tilde\gamma(0)\in \overline{B}(x,L + \iota_\tau)\rbrace.$$ Observe that for every two elements $\tilde\gamma,\tilde\gamma'\in A_i$ we have $\Pi\tilde\gamma(0) = \Pi\tilde\gamma'(0)$, so by Lemma \ref{lemma-inj-radius} either $\tilde\gamma=\tilde\gamma'$ or $d(\tilde\gamma(0),\tilde\gamma'(0)) \geq \iota_\tau$ since $\iota_\tau$ is smaller than or equal to the injectivity radius at $\gamma_i(0)$. We conclude that the set $\lbrace\tilde\gamma(0)\rbrace_{\tilde\gamma\in A_i}$ is a $\frac{\iota_\tau}{2}$-separated subset of $\overline{B}(x,L+\iota_\tau)$, implying $\#A_i \leq \text{Pack}(L + \iota_\tau,\frac{\iota_\tau}{2})$.\\ We consider the set $A_T = \bigcup_{i=1}^N A_i$ whose cardinality satisfies $$\#A_T \leq \text{Cov}_{f_\tau^T}\left(K_\tau,\frac{\iota_\tau}{2}\right)\cdot\text{Pack}\left(L + \iota_\tau,\frac{\iota_\tau}{2}\right).$$ We claim that $A_T$ covers the set $\text{Geod}_\sigma(\overline{B}(x,L), \Lambda_{\tau - 8\delta - L})$ at scale $\frac{3}{2}\iota_\tau$ with respect to the metric $f_\tau^T$. Indeed let $\tilde\gamma \in \text{Geod}_\sigma(\overline{B}(x,L), \Lambda_{\tau - 8\delta - L})$. This means that a geodesic ray $\xi = [x,\tilde\gamma^+]$ is contained in the $(\tau - 8\delta - L)$-neighbourhood of the orbit $\Gamma x$. Moreover by Lemma \ref{parallel-geodesics} we know that $d(\tilde{\gamma}(t), \xi(t)) \leq 8\delta + L$ for every $t\geq 0$. The same consideration holds for the negative ray $\tilde{\gamma}\vert_{(-\infty,0]}$, so $d(\tilde{\gamma}(t), \Gamma x) \leq \tau$ for all $t\in \mathbb{R}$: this implies that $\Pi\tilde{\gamma}$ belongs to $K_\tau$.
By definition there exists $i \in \lbrace 1,\ldots, N\rbrace$ such that $f_\tau^T(\gamma_i, \Pi\tilde\gamma)\leq \frac{\iota_\tau}{2}.$ Therefore for every $t\in[0,T]$ we know that $$\int_{-\infty}^{+\infty} d(\gamma_i(t+s), \Pi \tilde\gamma(t+s))f_\tau(s)ds \leq \frac{\iota_\tau}{2}.$$ Moreover from the choice of $f_\tau$ we have \begin{equation*} \begin{aligned} \frac{\iota_\tau}{2}&\geq\int_{-\infty}^{+\infty} d(\gamma_i(t+s), \Pi \tilde\gamma(t+s))f_\tau(s)ds \\ &\geq \int_{-\infty}^{+\infty} (d(\gamma_i(t), \Pi \tilde\gamma(t)) - 2\vert s \vert)f_\tau(s)ds \geq d(\gamma_i(t), \Pi \tilde\gamma(t)) - \frac{\iota_\tau}{2}. \end{aligned} \end{equation*} In conclusion $d(\gamma_i(t), \Pi \tilde\gamma(t)) \leq \iota_\tau$ for every $t\in [0,T]$. By definition of injectivity radius, since $\Pi\tilde\gamma(t)\in \overline{B}(\pi x,\tau)$, we conclude that for every $t\in [0,T]$ there exists a covering geodesic $\tilde{\gamma}_t$ of $\gamma_i$ such that $$d(\tilde{\gamma}_t(t), \tilde\gamma(t)) = d(\gamma_i(t), \Pi \tilde\gamma(t)) \leq \iota_\tau.$$ For $t=0$ we have the covering geodesic $\tilde\gamma_0$ of $\gamma_i$ that satisfies $$d(\tilde{\gamma}_0(0), x)\leq d(\tilde{\gamma}_0(0), \tilde\gamma(0)) + d(\tilde\gamma(0), x)\leq L + \iota_\tau,$$ so $\tilde{\gamma}_0 \in A_i$. Moreover for every $t\in [0,T]$ there exists an element $g_t \in \Gamma$ such that $\tilde{\gamma}_t = g_t \tilde{\gamma}_0$. But arguing as in the proof of Proposition \ref{prop-quotient-geodesics} we conclude that $g_t = \text{id}$ for every $t\in [0,T]$ and so $d(\tilde{\gamma}_0(t), \tilde\gamma(t)) \leq \iota_\tau$ for every $t \in [0,T]$.\\ We can now estimate the distance between $\tilde{\gamma}_0$ and $\tilde\gamma$ with respect to $f_\tau^T$. For every $t\in [0,T]$ we have \begin{equation*} \begin{aligned} f_\tau(\Phi_t\tilde{\gamma}_0, \Phi_t\tilde\gamma) &= \int_{-\infty}^{+\infty} d(\tilde{\gamma}_0(t + s), \tilde\gamma(t + s)) f_\tau(s)ds \\ &\leq \int_{-\infty}^{+\infty} (d(\tilde{\gamma}_0(t), \tilde\gamma(t)) + 2\vert s\vert)f_\tau(s)ds \leq \iota_\tau + \frac{\iota_\tau}{2} = \frac{3}{2}\iota_\tau. \end{aligned} \end{equation*} Taking the supremum over $t\in [0,T]$ we obtain $f_\tau^T(\tilde{\gamma}_0, \tilde\gamma) \leq \frac{3}{2}\iota_\tau$. Since $\tilde{\gamma}_0 \in A_T$ we conclude that the set $A_T$ covers Geod$_\sigma(\overline{B}(x,L),\Lambda_{\tau - 8\delta - L})$ at scale $\frac{3}{2}\iota_\tau$ with respect to the metric $f_\tau^T$. We can finish the proof of the theorem since $$\liminf_{T\to +\infty}\frac{1}{T}\log \text{Cov}_{f_\tau^T}\left(\text{Geod}_\sigma(\overline{B}(x,L), \Lambda_{\tau - 8\delta -L}), \frac{3}{2}\iota_\tau\right)$$ is less than or equal to \begin{equation*} \begin{aligned} \liminf_{T\to +\infty}\frac{1}{T}\log \# A_T &\leq \liminf_{T\to +\infty}\frac{1}{T}\log \text{Cov}_{f_\tau^T}\left(K_\tau,\frac{\iota_\tau}{2}\right)\cdot\text{Pack}\left(L + \iota_\tau,\frac{\iota_\tau}{2}\right) \\ &= \liminf_{T\to +\infty}\frac{1}{T}\log \text{Cov}_{f_\tau^T}\left(K_\tau,\frac{\iota_\tau}{2}\right), \end{aligned} \end{equation*} where the last equality follows from Proposition \ref{packingsmallscales}.
\end{proof} \section{Additional remarks} \label{subsec-additional} The definition of upper and lower local entropy of a dynamical system $(Y,T)$ as $$\overline{h}^\text{loc}(Y,T) = \sup_{\mu \in \mathcal{E}_1(Y,T)}\inf_{\text{\texthtd}}\overline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T), \quad \underline{h}^\text{loc}(Y,T) = \sup_{\mu \in \mathcal{E}_1(Y,T)}\inf_{\text{\texthtd}}\underline{h}_{\mu,\text{\texthtd}}^\text{loc}(Y,T)$$ could seem arbitrary, in the sense that the order of the supremum and the infimum could be switched. In our case there is no difference. \begin{cor} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. Then $$\inf_{\textup{\texthtd}}\sup_{\mu \in \mathcal{E}_1}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc} = \inf_{\textup{\texthtd}}\sup_{\mu \in \mathcal{E}_1}\underline{h}_{\mu,\textup{\texthtd}}^\textup{loc} = h_\Gamma.$$ \end{cor} \begin{proof} Clearly it always holds $$\inf_{\textup{\texthtd}}\sup_{\mu \in \mathcal{E}_1}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc} \geq \sup_{\mu \in \mathcal{E}_1}\inf_{\textup{\texthtd}}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc}.$$ But, for $f\in \mathcal{F}$, we have also $$h_\Gamma = \sup_{\mu \in \mathcal{E}_1}h_\mu \leq \sup_{\mu \in \mathcal{E}_1}\inf_{\textup{\texthtd}}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc} \leq \sup_{\mu \in \mathcal{E}_1}\overline{h}_{\mu,f}^\textup{loc} \leq h_\Gamma.$$ So $$h_\Gamma \leq \sup_{\mu \in \mathcal{E}_1}\inf_{\textup{\texthtd}}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc} \leq \inf_{\textup{\texthtd}}\sup_{\mu \in \mathcal{E}_1}\overline{h}_{\mu,\textup{\texthtd}}^\textup{loc} \leq \sup_{\mu \in \mathcal{E}_1}\overline{h}_{\mu,f}^\textup{loc} \leq h_\Gamma,$$ implying the equalities. The same proof holds for the lower entropies. \end{proof} The second observation concerns measures of maximal entropy. \begin{cor} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$. Let $\Gamma$ be a group of $\sigma$-isometries of $X$ acting discretely and freely. If there exists a measure $\mu \in \mathcal{E}_1$ of maximal entropy, that is $h_\mu = h_\textup{top}$, then for all $f\in \mathcal{F}$ it holds: $$h_\Gamma = h_\mu = \underline{h}_\mu^\textup{loc} = \underline{h}_{\mu,f}^\textup{loc} = \overline{h}_\mu^\textup{loc} = \overline{h}_{\mu,f}^\textup{loc}.$$ Moreover for $\mu$-a.e. $\gamma \in \textup{Loc-Geod}_\sigma(\Gamma\backslash X)$ it holds $$h_\Gamma = \lim_{n \to +\infty}-\frac{1}{n}\log\mu(B_{f^n}(\gamma,R))$$ for every $R>0$. \end{cor} We recall that it can happen that a measure of maximal entropy does not exist. For instance if $M$ is a Riemannian manifold with pinched negative curvature it is known by Otal-Peigné's Theorem (cp. \cite{OP04}) that such a measure exists if and only if the Bowen-Margulis measure is finite, and in that case this measure, once normalized, is the unique measure of maximal entropy. Probably the same is true in the setting of Gromov-hyperbolic, packed, GCB-spaces but the construction of the Bowen-Margulis measure is less direct. However, using the techniques of \cite{Ric14} and \cite{Lin19} it is plausible that Otal-Peigné's result can be generalized to our setting. \vspace{2mm} We conclude with a corollary and an example.
\begin{cor} \label{cor-h-liptop} Let $(X,\sigma)$ be a $\delta$-hyperbolic \textup{GCB}-space that is $P_0$-packed at scale $r_0$, $\Gamma$ be a discrete group of $\sigma$-isometries of $X$ and $x$ be a fixed basepoint of $X$. Then $$h_\Gamma = \overline{h_{\textup{Lip-top}}}(\textup{Geod}_\sigma(\Lambda_\textup{u-rad})) = \underline{h_{\textup{Lip-top}}}(\textup{Geod}_\sigma(\Lambda_\textup{u-rad})).$$ \end{cor} \begin{proof} The set $\Lambda_\text{u-rad}$ is the increasing union of the closed subsets $\Lambda_\tau$, $\tau \geq 0$. Let $C'$ be a closed subset of $\Lambda_{\text{u-rad}}$. There are two possibilities: either $C' = \Lambda_{\text{u-rad}}$ or it is contained in $\Lambda_\tau$ for some $\tau \geq 0$. In the first case by Theorem 12.2.7 of \cite{DSU17} we get $\Lambda_{\text{u-rad}} = \Lambda_\tau$ for some $\tau \geq 0$. Therefore in both cases $\overline{\text{MD}}(C')\leq \overline{\text{MD}}(\Lambda_{\tau})$ for some $\tau \geq 0$, and the same holds for the lower Minkowski dimensions. This implies the thesis by Proposition \ref{thm-h-liptop}. \end{proof} Now we show an example of a subset $C$ of $\partial X$ such that \begin{equation} \label{eq-end} \overline{h_{\textup{Lip-top}}}(\textup{Geod}_\sigma(C))<\overline{h_{\textup{Lip-top}}}(\textup{Geod}_\sigma(\overline{C})), \end{equation} where $\overline{C}$ is the closure of $C$. In \cite{DPPS09} there is an example of a pinched negatively curved Riemannian manifold $(M,g)$ admitting a non-uniform lattice $\Gamma$ (i.e. the volume of $\Gamma\backslash M$ is finite) such that $h_\Gamma < h_\text{vol}(M)$, where $h_\text{vol}(M)$ is the volume entropy of $M$. By \cite{Cav21} we know that $$h_\text{vol}(M) = \overline{h_{\textup{Lip-top}}}(\textup{Geod}(\partial M)) = \overline{h_{\textup{Lip-top}}}(\textup{Geod}(\Lambda(\Gamma))),$$ while $$h_\Gamma = \overline{h_{\textup{Lip-top}}}(\textup{Geod}(\Lambda_\textup{u-rad}(\Gamma)))$$ by the corollary above. Since the closure of $\Lambda_{\text{u-rad}}(\Gamma)$ is $\Lambda(\Gamma)$ then \eqref{eq-end} holds. \bibliographystyle{alpha}
\section{Introduction} Functions are to mathematics as sentences are to linguistics, constituting basic resources for developing more complete mathematical systems and models. The importance of functions is reflected in their widespread applications not only to the physical sciences, but to virtually every scientific field. Traditionally, the mathematical study of functions and their properties has been approached in continuous vector spaces, involving infinitely many instances of a given type of function. While this constitutes an effective and important approach, most of the signals in practical applications have a discrete nature, being represented as discrete signals or vectors. This follows as a consequence of the sampling of physical signals by acquisition systems, which inherently implies that the signals are quantized along their domain and magnitude. Though discrete functions are systematically studied in areas such as digital signal processing (e.g.~\cite{oppenheim:2009,Parr:2013}), emphasis is often placed on aspects of quantization errors and representations in the frequency domain, employing the Fourier series or transform (e.g.~\cite{brigham:1988}). However, relatively less attention is typically focused on the relationship between the discrete signals, or on how they can be approximated by specific functions. Though the latter subject constitutes one of the main motivations of the areas of numerical methods (e.g.~\cite{recipes}) and numerical analysis (e.g.~\cite{Burden:2015}), this subject is typically approached from the perspective of function approximation, not often addressing the interrelationship between functions. The present work develops an approach aimed at characterizing not only which discrete signals in a discrete region $\Omega \subset \Re^2$ can be adjusted by a given set of reference functions $g_i(x)$, $i = 1, 2, \ldots, N$, but also how such adjustable discrete signals relate to one another in the sense of being similar, or adjacent. The consideration of a pre-specified set of function types happens frequently in science, especially when fitting data or studying dynamic systems. In particular, the solution of linear systems of differential equations is often approached in terms of linear combinations of a set of eigenfunctions (e.g.~\cite{NagleSaff}), which could also be taken as the reference functions considered in this work. The concepts and methods developed in the present article are interesting not only theoretically, for studying how distinct types of functions are related, but also from several application perspectives, such as characterizing specific discrete spaces, discrete signal approximation, morphing of functions (i.e.~transforming a function into another through incremental changes), controlling systems underlain by specific types of function, among many other possibilities. In a sense, functions can be approached as a way to constrain, in specific respective manners, the adjacency between continuous signals in a given region or space. For instance, the sine function singles out, among all possible continuous signals in a given space, only those having its specific shape. In addition to their relevance to the aforementioned mathematical aspects, the described developments also provide several contributions to the area of network science (e.g.~\cite{netwsci}).
Indeed, as will be seen along this work, the transition networks derived from discrete signal spaces with respect to sets of reference functions are characterized by a noticeably rich topological structure that can involve modularity, hubs, symmetries, handles and tails~\cite{tails_handles}, as well as coexistence of regular and modular subgraphs. As such, these networks provide valuable resources not only regarding the characterization of complex networks, but also constitute a model or benchmark that can be used as reference in studies aimed at investigating the classification and robustness of networks, as well as investigations addressing the particularly challenging relationship between network topology and possible implemented dynamics. In order to obtain the means for quantifying how discrete signals in a region can be approximated by reference functions, and how these signals relate to one another, we develop several respective concepts and methods. More specifically, after defining the problem in a more formal manner, we proceed by suggesting how to define a system of adjacencies between continuous functions, in terms of the identification of respective transition points. These concepts are then transferred to discrete signals, allowing the proposal of indices for quantifying the coverage of the discrete signals by adopted reference functions. Subsequently, we adapt the concept of adjacency between functions to discrete signals, allowing the derivation of a methodology for obtaining transition networks expressing how the discrete signals in a region can be transformed into one another while being approximated as instances of the reference functions. Several studies involving the obtained transition networks are then described, including the identification of shortest paths between two adjustable discrete signals, random walks related to the unfolding of dynamics on the network, as well as the possibility of identifying discrete signals that are more central regarding the interrelationships represented in the transition networks. The developed concepts and methods are then illustrated with respect to three main case examples involving (i) four power functions; (ii) a single complete polynomial of fourth order; and (iii) two sets of hybrid reference functions involving combinations of power functions and sinusoidals. Several remarkable results are identified and discussed. \section{Defining the Problem} Consider the region $\Omega \subset \Re^2$ in Figure~\ref{fig:cont}, which corresponds to the Cartesian product of the intervals $x_{min} \leq x \leq x_{max}$ and $y_{min} \leq y \leq y_{max}$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\linewidth]{continuous.png} \\ \vspace{0.1cm} \caption{A region $\Omega \subset \Re^2$ delimited as $x_{min} \leq x \leq x_{max}$ and $y_{min} \leq y \leq y_{max}$, and an example of a function $y = f(x)$ completely comprised in this region.} \label{fig:cont} \end{center} \end{figure} Let $y = f(x)$ correspond to a generic \emph{signal}, which can be associated to a function, completely bound in $\Omega$, in the sense of having all its points comprised within $\Omega$. No requirements, such as continuity or smoothness, are imposed on these functions.
In addition, consider the \emph{difference} between two generic functions $y = f(x)$ and $y = h(x)$, both comprised in $\Omega$, as corresponding to the following root mean square distance (or error): \begin{equation} \delta(f,h) = \sqrt{ \frac{1}{x_{max}-x_{min}} \int_{x_{min}}^{x_{max}} [f(x) - h(x) ]^2 dx } \end{equation} A possible manner to quantify the \emph{similarity} between $f(x)$ and $h(x)$ is as \begin{equation} \label{eq:simil} \sigma(f,h) = e^{-\alpha \, \delta(f,h)} \end{equation} for some chosen value of $\alpha$. Let $y = g_i(x)$, $i = 1, 2, \ldots, N$ be a finite set of specific function \emph{types} taken as a reference for our analysis. For instance, we could have $g_1(x) = a_1 x + a_0$, $g_2(x) = a_1 x^2 + a_0$, $g_3(x) = a_1 x^3 + a_0$, and $g_4(x) = a_1 x^4 + a_0$, with $a_0, a_1 \in \Re$. An interesting question regards the identification, among all the possible signals $y = \tilde{f}(x)$ in $\Omega$, of which of these signals can be expressed as $g_i(x)$ for some $i = 1, 2, \ldots, N$, by yielding zero difference or unit similarity between $\tilde{f}(x)$ and $g_i(x)$. For each of the reference functions $g_i(x)$, we obtain a respective set $S_i$ containing all functions $\tilde{f}(x)$ that can be exactly expressed in terms of $g_i(x)$. It is also interesting to allow for some tolerance by taking these two functions to be related provided: \begin{equation} \delta(\tilde{f},g_i) \leq \tau_d \end{equation} or, considering their similarity, as: \begin{equation} \sigma(\tilde{f},g_i) \geq \tau_s \end{equation} with: \begin{equation} \tau_s = e^{-\alpha \, \tau_d} \end{equation} The identification of the sets $S_i$ can provide interesting insights regarding the relative density of each type of the reference functions in the specified region $\Omega$, paving the way to the identification of reference functions with more general fitting capability as well as the interrelationship between these functions, in the sense of their proximity. It should be observed that the obtained $S_i$ will also depend on the specific size (or even shape) of $\Omega$, as a consequence of the requirement of all functions to be completely bound in that region. The alternative approach of allowing the clipping of functions can also be considered, but this is not developed in the present work. In this work, we focus on \emph{discrete signals}, which are typically handled in scientific applications and technology. These signals are sampled along their domain and quantized in their magnitude (see Section~\ref{sec:discrete}). In order to identify the adjacency between discrete signals, given an $\Omega$ and a set of reference functions $g_i(x)$, first we identify (by using linear least squares) the discrete functions that can be approximated, within a tolerance, by the reference functions, therefore defining the sets $S_i$, and then link these functions by considering their pairwise Euclidean distance. The thus obtained graph, or network, $\Gamma$ can be verified to be undirected and to contain a total number of nodes equal to the sum of the cardinalities of the obtained sets $S_i$, $i = 1, 2, \ldots, N$. In addition, each of the nodes becomes intrinsically associated to the respective reference functions that were found to provide a good approximation. In case a discrete function $\vec{f}$ is found to be adjusted by two or more of the reference functions, only that corresponding to the best fitting may be associated to $\vec{f}$, therefore avoiding replicated labeling.
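For illustration purposes, the difference and similarity measurements defined above can be readily estimated numerically, as in the following minimal Python sketch (all function names are hypothetical and merely illustrative), which approximates the integral defining $\delta$ by the trapezoidal rule over sampled abscissae and then applies Equation~\ref{eq:simil} with $\alpha = 10$, the value adopted later in this work:

\begin{verbatim}
import numpy as np

def delta(f, h, x):
    # Difference delta(f, h): the integral is approximated by the
    # trapezoidal rule over the sampled abscissae in x.
    return np.sqrt(np.trapz((f(x) - h(x))**2, x) / (x[-1] - x[0]))

def sigma(f, h, x, alpha=10.0):
    # Similarity sigma(f, h) = exp(-alpha * delta(f, h)).
    return np.exp(-alpha * delta(f, h, x))

# Example: similarity between g(x) = x and g(x) = x**2 on [-1, 1].
x = np.linspace(-1.0, 1.0, 201)
s = sigma(lambda t: t, lambda t: t**2, x)
\end{verbatim}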
The reference function thus associated to each node of $\Gamma$ is henceforth called the node \emph{type}. The transition network $\Gamma$ provides a systematic representation of the relationships between the discrete functions in $\Omega$ that can be reasonably approximated by the reference functions. Several concepts and methods from the area of network science (e.g.~\cite{netwsci,surv_meas}) can then be applied in order to characterize the topological properties of the obtained network. For instance, the degree of a node can provide an interesting indication about how that function can be transformed (or `morphed'), by a minimal perturbation, into other functions in $\Omega$. The definition of a system of adjacencies between the functions of $\Omega$ as proposed above also paves the way for performing respective random walks (e.g.~\cite{Justin}). Starting at a given node, adjacent nodes are subsequently visited according to a given criterion (e.g.~uniform probability), therefore defining sequences of incremental transformations of the original function. These trajectories of functions can provide insights about how a function can be progressively transformed into another (morphing), can be used to define minimal distances between any of the adjustable functions in $\Omega$ or, when associated to energy landscapes, to investigate the properties of respectively associated dynamical systems (e.g.~\cite{Ogata}), including possible oscillations (cycles) and chaotic behavior. \section{Continuous Function Adjacency} \label{sec:adjacency} A mathematical function often involves parameters, corresponding to values determining its respective instantiation. For instance, the function: \begin{equation} g(x) = a_1 x + a_0 \end{equation} corresponds to a straight line function whose inclination and vertical translation are specified by the parameters $a_1$ and $a_0$, respectively. Given two generic functions in the region $\Omega$, a particularly interesting question is whether one of them can be made identical to the other, which will henceforth be expressed as these functions being mutually \emph{adjacent}, in the sense of providing an interface through which one function can be transitioned into the other. More specifically, consider the two following functions $g_i(x)$ and $g_j(x)$, with respective parameters $a^i_0, a^i_1, \ldots, a^i_{N_i}$ and $a^j_0, a^j_1, \ldots, a^j_{N_j}$: \begin{eqnarray} g_i(x; a^i_0, a^i_1, \ldots, a^i_{N_i}) \nonumber \\ g_j(x; a^j_0, a^j_1, \ldots, a^j_{N_j}) \nonumber \end{eqnarray} It should be kept in mind that, throughout this work, the superscript value $j$ in the terms $a^j_0$ corresponds to an index associated to the respective reference function, not to the $j$-th power of $a$. The functions $g_i()$ and $g_j()$ can be said to be \emph{adjacent} provided it is possible to find respective configurations of parameters $ \tilde{a}^i_0, \tilde{a}^i_1, \ldots, \tilde{a}^i_{N_i}$ and $\tilde{a}^j_0, \tilde{a}^j_1, \ldots, \tilde{a}^j_{N_j}$ so that: \begin{equation} g_i(x; \tilde{a}^i_0, \tilde{a}^i_1, \ldots, \tilde{a}^i_{N_i}) = g_j(x; \tilde{a}^j_0, \tilde{a}^j_1, \ldots, \tilde{a}^j_{N_j}) \end{equation} for every value of $x$ in $\Omega$.
The set of parameters $\tilde{a}^i_0, \tilde{a}^i_1, \ldots, \tilde{a}^i_{N_i}$ and $\tilde{a}^j_0, \tilde{a}^j_1, \ldots, \tilde{a}^j_{N_j}$ is henceforth understood to represent a \emph{transition point} in the parameter space $[\tilde{a}^i_0, \tilde{a}^i_1, \ldots, \tilde{a}^i_{N_i}, \tilde{a}^j_0, \tilde{a}^j_1, \ldots, \tilde{a}^j_{N_j}]$, namely: \begin{equation} P_{g_i \leftrightarrow g_j}: [\tilde{a}^i_0, \tilde{a}^i_1, \ldots, \tilde{a}^i_{N_i}, \tilde{a}^j_0, \tilde{a}^j_1, \ldots, \tilde{a}^j_{N_j}] \end{equation} Observe that each transition point defines a respective instantiation of both involved functions, therefore also corresponding to a specific instantiated function in $\Omega$. As an example, let us consider the following four parametric power functions: \begin{eqnarray} \label{eq:four_ref} g_1(x) = a^1_1 x + a^1_0 \nonumber \\ g_2(x) = a^2_1 x^2 + a^2_0 \nonumber \\ g_3(x) = a^3_1 x^3 + a^3_0 \nonumber \\ g_4(x) = a^4_1 x^4 + a^4_0 \end{eqnarray} with $a^1_0, a^2_0, a^3_0, a^4_0, a^1_1, a^2_1, a^3_1, a^4_1 \in \Re$. All pairwise combinations of these functions $g_i()$ and $g_j()$ have respective transition points corresponding to $\tilde{a}^i_1 = \tilde{a}^j_1 = 0$ for any values of $a^i_0$ and $a^j_0$ for which the functions remain completely comprised within $\Omega$. Though the four reference functions above have an infinite number of pairwise transition points, together they define the transition diagram presented in Figure~\ref{fig:trans_net}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.45\linewidth]{trans_continuous.png} \\ \vspace{0.1cm} \caption{The four reference functions in Eq.~\ref{eq:four_ref} share the transition point $P$ given as $a_0 \in [y_{min},y_{max}]$ with $\tilde{a}^1_1 = \tilde{a}^2_1 = \tilde{a}^3_1 = \tilde{a}^4_1 = 0$. For one of the reference functions $g_i$ to transition to another function $g_j$, it is necessary that $g_i$ be instantiated to the function corresponding to $P$ through a respective parameter configuration, from which it can then follow to $g_j$. Observe that, though this diagram involves only five basic nodes (functions), there is actually an infinite number of respectively defined situations in $\Omega$ as a consequence of its continuous nature.} \label{fig:trans_net} \end{center} \end{figure} It is interesting to observe that, in this particular example, each of the transition points corresponds to the constant functions $g(x) = a_0 = a^1_0 = a^2_0 = a^3_0 = a^4_0$, which therefore acts as a quadruple transition point for each $a_0 \in [y_{min},y_{max}]$ with $\tilde{a}^1_1 = \tilde{a}^2_1 = \tilde{a}^3_1 = \tilde{a}^4_1 = 0$: \begin{equation} P: [\tilde{a}^1_0 = \tilde{a}^2_0= \tilde{a}^3_0 = \tilde{a}^4_0 = a_0, \tilde{a}^1_1 = \tilde{a}^2_1 = \tilde{a}^3_1 = \tilde{a}^4_1 = 0] \end{equation} Observe that other sets of reference functions can present many other types of transition points, which can be of types other than constant functions. Actually, any shared term between two parametric functions potentially corresponds to a transition point. In addition to transitions between types of reference functions as developed above, it is also possible to have transitions between incrementally different instances of a same type of function.
This can be achieved by adopting a tolerance $\tau$ regarding the similarity of two instances of the same type of function $g(x)$, i.e.: \begin{eqnarray} \int_{x_{min}}^{x_{max}} [ g(x; a_0, a_1, \ldots, a_{N - 1}) - \nonumber \\ - g(x; a_0 + \delta_0, a_1+ \delta_1, \ldots, a_{N-1}+ \delta_{N-1}) ]^2 dx \leq \tau \nonumber \end{eqnarray} In this manner, it is possible to obtain long sequences of transitions between instances of a same function as the respective parameters are incrementally varied ($\vec{\delta}$), typically giving rise to handles and tails~\cite{tails_handles} in respectively obtained network representations. Given that the approach reported in this work concerns discrete signals and functions under a given tolerance $\tau$, both types of function transitions identified in this section are expected to be taken into account and incorporated into the respectively derived transition networks. \section{The Discrete Case} \label{sec:discrete} Though interesting in itself, the above described problem involves infinite, uncountable sets $S_i$. Though this could be approached by using specific mathematical resources, in the present work we focus on regions $\Omega$ that are discrete in both $x$ and $y$, taken with respective resolutions: \begin{eqnarray} \Delta x = \frac{x_{max} - x_{min}}{N_x - 1} \nonumber \\ \Delta y = \frac{y_{max} - y_{min}}{N_y - 1} \end{eqnarray} where $N_x$ and $N_y$ correspond to the number of discrete values taken for representing $x$ and $y$, respectively. The so-obtained discretized region $\Omega$ is depicted in Figure~\ref{fig:discr}. \begin{figure}[h] \begin{center} \includegraphics[width=0.7\linewidth]{discrete.png} \\ \vspace{0.1cm} \caption{A discretized region $\Omega \subset \Re^2$, with $N_x$ values along the $x$-axis and $N_y$ values along the $y$-axis. } \label{fig:discr} \end{center} \end{figure} More specifically, we now have that: \begin{eqnarray} X_j = (j-1) \; \Delta x + x_{min} \nonumber \\ Y_k = (k-1) \; \Delta y + y_{min} \end{eqnarray} for $j = 1, 2, \ldots, N_x$ and $k = 1, 2, \ldots, N_y$. Now, the possible functions in $\Omega$ can be expressed as the finite set of vectors or discrete signals: \begin{equation} \vec{f} = \left[ f_1 \; f_2 \; \ldots \; f_{N_x} \right]^T \end{equation} with $f_j$ taking values in the set $\left\{Y_k \right\}$ at the respective abscissae $X_j$. It is assumed henceforth, typically with little loss of generality, that $x_{min}=-1$, $x_{max}=1$, $y_{min}=-1$, $y_{max}=1$. The total number of possible vectors in the discretized region $\Omega$ can now be calculated as the number of possible combinations: \begin{equation} N_T = N_y^{N_x} \end{equation} Henceforth, we identify each of the $N_T$ possible discrete signals (or functions) in $\Omega$ in terms of a respective label $n = 0, 1, \ldots, N_T-1$. In case $N_y$ is relatively small, it is possible to implement this association by deriving the discrete signal from its respective label $n$ by first representing this value in radix $N_y$, yielding the number $[p_{N_x-1} \; \ldots \; p_{1} \; p_0]_{N_y}$, and then making: \begin{equation} \label{eq:assign} Y_{i+1} = p_{i} \, \Delta y + y_{min} \end{equation} for $i = 0, 1, \ldots, N_x-1$.
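The assignment above can be illustrated by the following Python sketch (the function name is hypothetical), which recovers the discrete signal associated to a given label $n$ by extracting the digits of $n$ in radix $N_y$ and applying Equation~\ref{eq:assign}:

\begin{verbatim}
import numpy as np

def label_to_signal(n, Nx, Ny, y_min=-1.0, y_max=1.0):
    # Maps a label n in {0, 1, ..., Ny**Nx - 1} into its respective
    # discrete signal: the digits of n in radix Ny give the ordinate
    # indices p_i, which are then converted into the ordinates Y.
    dy = (y_max - y_min) / (Ny - 1)
    p = []
    for _ in range(Nx):        # least significant digit first
        p.append(n % Ny)
        n //= Ny
    return np.array([pi * dy + y_min for pi in p])

# Example: with Nx = 5 and Ny = 5 there are 5**5 = 3125 possible signals.
f = label_to_signal(137, Nx=5, Ny=5)
\end{verbatim}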
The difference between two discrete functions $\vec{f}$ and $\vec{h}$ can now be expressed in terms of the following root mean square error: \begin{equation} \label{eq:tau_d} \delta(\vec{f},\vec{h}) = \sqrt{ \frac{1}{x_{max}-x_{min}} \sum_{j = 1}^{N_x} [f[X_j] - h[X_j] ]^2 } \end{equation} while the similarity between those functions can still be gauged by using Equation~\ref{eq:simil}. In order to verify if a given function $\vec{f}$ can be approximated by a reference function $g_i(x)$, we apply the linear least squares methodology (e.g.~\cite{CostaLeast}). This approach provides the set of fit parameters (e.g.~the coefficients of a polynomial) so as to minimize the error of the fitting as expressed by the sum of the squares of the differences between $\vec{f}$ and $g_i(x)$ (taken at the abscissae $X_j$). For instance, if $g_i(x)$ is a third degree polynomial and $N_x=5$, we first obtain the matrix: \begin{equation} A = \left[ \begin{array}{c c c c} 1 & X_1 & X_1^2 & X_1^3 \\ 1 & X_2 & X_2^2 & X_2^3 \\ 1 & X_3 & X_3^2 & X_3^3 \\ 1 & X_4 & X_4^2 & X_4^3 \\ 1 & X_5 & X_5^2 & X_5^3 \\ \end{array} \right] \nonumber \end{equation} and then express the respective coefficients in terms of the vector: \begin{equation} \vec{p} = \left[ \begin{array}{c} a_0 \; a_1 \; a_2 \; a_3 \end{array} \right]^T \nonumber \end{equation} so that the fitting can be represented in terms of the following overdetermined system: \begin{equation} \vec{f} = A \; \vec{p} \end{equation} The respective solution can be obtained in terms of the \emph{pseudo-inverse} of $A$ as: \begin{equation} \vec{p} = (A^T A)^{-1} A^T \vec{f} \end{equation} \begin{figure}[h!] \begin{center} \includegraphics[width=0.8\linewidth]{fitting.png} \\ \vspace{0.1cm} \caption{Example of the linear least squares methodology for fitting a discrete signal $Y_i = f(X_i)$, with $N_x=7$ and $N_y = 5$, by a reference function of the type $a_1 x^4 + a_0$. The respectively obtained root mean square error was $\tau_d = 0.136$, implying a similarity of $\tau_s = e^{-10 \, \tau_d} = 0.256$ (for $\alpha = 10$).} \label{fig:fitting} \end{center} \end{figure} \section{Discrete Signals Coverage} \label{sec:coverage} The discretization of $\Omega$ implies that not all discrete signals $\vec{f}$ can be expressed with full accuracy in terms of reference functions $g_i(x)$, so that it becomes important to adopt some difference tolerance $\tau_d$, or respectively associated similarity tolerance $\tau_s$. Henceforth, every discrete signal $\vec{f}$ that can be approximated by a reference function $g_i$ within a given tolerance $\tau$ will be said to be \emph{adjustable} by that reference function. It is important to keep in mind that, when a tolerance is allowed, more than one of the reference functions can be verified to provide a good enough (i.e.~with error smaller than the specified tolerance) approximation, in which case a same function $\vec{f}$ will be identified as being adjustable by more than one reference function, which is reasonable given that this actually happens in discrete domains. In case the mapping is required to be unique, it is possible to keep only one of the fittings for each possible $\vec{f}$, such as that corresponding to the smallest approximation error. In this work, however, multiple adjustments will be considered. The sets $S_i(\tau_d)$, which are defined by $\tau_d$, will now contain a \emph{finite} number of discrete functions.
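The fitting step described above can be sketched in Python as follows (hypothetical names; \texttt{np.linalg.lstsq} solves the same least squares problem expressed by the pseudo-inverse), here respectively to reference functions of the type $g(x) = a_1 x^k + a_0$ and adopting the normalization of the discrete difference $\delta$ given above:

\begin{verbatim}
import numpy as np

def fit_power(f, X, k, alpha=10.0):
    # Least squares fit of the discrete signal f by g(x) = a1*x**k + a0.
    A = np.column_stack([np.ones_like(X), X**k])   # design matrix
    p, *_ = np.linalg.lstsq(A, f, rcond=None)      # p = [a0, a1]
    residual = f - A @ p
    delta = np.sqrt(np.sum(residual**2) / (X.max() - X.min()))
    sigma = np.exp(-alpha * delta)                 # similarity
    return p, delta, sigma
\end{verbatim}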
Thus, given the discrete signals in $\Omega$ and a set of reference functions $g_i$, $i = 1, 2, \ldots, N$, the total number of adjustable signals $N_a$ can be expressed as: \begin{equation} N_a = \sum_{k=1}^{N} \# \left\{ S_k(\tau_d) \right\} \end{equation} We can now take the relative frequency of each reference function $g_i$ with respect to the whole set of adjustable functions as: \begin{equation} r(g_i, \tau_d) = \frac{\# \left\{ S_i(\tau_d) \right\}}{N_a} \end{equation} where $\# \left\{ S_i(\tau_d) \right\}$ corresponds to the \emph{cardinality} of the set $S_i(\tau_d)$. This measurement, which is henceforth referred to as \emph{relative coverage}, can be used to compare the fitting potential of each of the considered reference functions. It is also possible to consider the following densities relative to the total number of functions in $\Omega$: \begin{equation} \label{eq:q} q(g_i, \tau_d) = \frac{\# \left\{ S_i(\tau_d) \right\}}{N_T} \end{equation} In case only one fitting is associated to each possible discrete signal $\vec{f}$ in $\Omega$, we will have that $0 \leq q(g_i, \tau_d) \leq 1$ and that $\sum_{k=1}^N q(g_k, \tau_d) \leq 1$. This can be achieved by considering the sets $\tilde{S}_i = S_i - \bigcup_{k \neq i} S_{k}$ instead of $S_i$ in Equation~\ref{eq:q}. Otherwise, the sum $\sum_{k=1}^N q(g_k, \tau_d)$ may become larger than the fraction of adjustable signals, indicating that some of the possible discrete functions in $\Omega$ are being covered in excess (i.e.~by more than one reference function). The relative density $q(g_i, \tau_d)$, henceforth called the \emph{coverage index} of $g_i$, provides a means to quantify how well the reference function $g_i$ \emph{covers} the discrete signals in the given region $\Omega$ at tolerance $\tau_d$. Larger values of $q(g_i, \tau_d)$ will typically be observed when $\tau_d$ is increased (or $\tau_s$ is decreased). Also, observe that the above relative densities also depend on the choice of the discretization resolutions $\Delta x$ and $\Delta y$, with $\# \left\{ S_i(\tau_d) \right\}$ increasing substantially with $N_x$ and $N_y$. \section{Discrete Functions Adjacency} While the relative densities $r(g_i,\tau_d)$ can provide interesting insights about the generality of each considered reference function $g_i$, these measurements can provide no information about the proximity or interrelationship between the discrete functions $\vec{f}$ as fitted by a set of reference functions $g_i$, $i = 1, 2, \ldots, N$. However, it is possible to quantify the proximity between all the possible discrete functions in $\Omega$ in terms of some distance between the respective vectors and then define links between the pairs of functions that have respective distances smaller than a given threshold $L$. Consider the Euclidean distance between two discretized functions $\vec{f^{[i]}}$ and $\vec{f^{[j]}}$ in the region $\Omega$ as: \begin{equation} \omega \left( \vec{f^{[i]}},\vec{f^{[j]}} \right) = \sqrt{\sum_{k=1}^{N_x} \left[ f^{[i]}_k - f^{[j]}_k \right]^2 } \end{equation} The whole set of Euclidean distances between every possible pair of functions in a given $\Omega$ can then be represented in terms of the following distance matrix: \begin{equation} W_{i,j} = \omega \left( \vec{f^{[i]}},\vec{f^{[j]}} \right) \end{equation} The symmetric matrix $W$ can be immediately understood as providing the strength of the links between the nodes of a graph, each of these nodes being associated to one of the possible $N_T$ discrete functions in a given $\Omega$.
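A minimal sketch of the derivation of the distance matrix $W$, together with the respectively thresholded adjacency adopted in the remainder of this section, is presented below (hypothetical names; each row of \texttt{signals} is assumed to correspond to one discrete signal):

\begin{verbatim}
import numpy as np

def adjacency_from_signals(signals, L):
    # Pairwise Euclidean distance matrix W between the discrete
    # signals (one per row), thresholded at L so as to yield the
    # binary adjacency matrix A; self-connections are excluded.
    diff = signals[:, None, :] - signals[None, :, :]
    W = np.sqrt((diff**2).sum(axis=-1))
    A = (W <= L) & ~np.eye(len(signals), dtype=bool)
    return W, A
\end{verbatim}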
However, a graph derived directly from the distance matrix $W$ would express the \emph{distances} between functions, not their \emph{proximity}. Though these distances could be transformed into similarity measurements by adopting an expression analogous to Equation~\ref{eq:simil}, therefore yielding a respective weighted graph, in this work we adopt the alternative approach of understanding two discrete functions as being \emph{adjacent} provided the respective Euclidean distance as defined above is smaller than or equal to a given threshold $L$. Overall, obtaining the transition network for a set of reference functions $g_i(x)$ and a respective discrete region $\Omega$, with $N_x$ and $N_y$, involves the following three main processing stages: \begin{itemize} \item Assign a label $n$ to each of the $N_T = N_y^{N_x}$ possible discrete signals in $\Omega$; \item For each value $n = 1, 2, \ldots, N_T$, obtain the respective function $\vec{f}_n = [Y_{N_x}, \ldots, Y_2, Y_1]$ by using Equation~\ref{eq:assign} and apply the least squares approximation respectively to each of the reference functions $g_i(x)$, $i=1, 2, \ldots,N$. In case the similarity between $\vec{f}_n$ and $g_i$, as obtained by applying Equation~\ref{eq:tau_d} and then Equation~\ref{eq:simil}, is larger than or equal to $\tau_s$, assign a respective node with label $n$, also incorporating the type $i$ of the respective approximating function $g_i(x)$; \item Interconnect all pairs of nodes obtained in the previous step which have Euclidean distance smaller than or equal to $L$, therefore yielding the transition network $\Gamma$, as illustrated in the code sketch presented below. \end{itemize} It is also important to keep in mind that a transition network obtained in this manner can be understood as constraining the overall adjacency network between all possible functions $\vec{f}$ in $\Omega$, so that only the nodes associated to cases that can be adjusted with good accuracy by a respective reference function $g_i$ are maintained. In brief, the transition network therefore provides a representation of the adjacency between the possible discrete functions that can be adjusted by the reference functions. \section{Optimized Transitions and Random Walks} The derivation of the transition network $\Gamma$ respective to a set of reference functions and a discrete region $\Omega$ paves the way to several interesting analyses and simulations, some of which are discussed in this section. One first interesting possibility is, given two functions $\vec{f}_i$ and $\vec{f}_j$ in $\Gamma$, to identify the \emph{shortest paths} between the respective nodes. We mean paths in the plural because more than one shortest path may exist between any two nodes of a network. Each of these obtained shortest paths indicates the smallest number of successive transitions from $\vec{f}_i$ to $\vec{f}_j$ that are necessary to take one of those functions into the other (or vice-versa) while using only instances of the considered reference functions. This result is potentially interesting for several applications, including implementing optimal control dynamics underlain by the reference functions, or optimal morphing between two or more signals underlain by the respectively considered reference functions.
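The three stages can be prototyped directly on top of the previous sketch, whose definitions of X, Y, bases, tau_s and similarity are reused here. The fragment below is an illustrative simplification: it creates a single node per adjustable signal (storing all adjusting types), links nodes by the Euclidean criterion with an assumed threshold $L$, and then extracts one shortest transition path with the networkx library.
\begin{verbatim}
import networkx as nx
from itertools import combinations

L = 0.6                                     # adjacency threshold (assumed)

# Stages 1 and 2: label all signals and keep the adjustable ones.
nodes = []
for n, ys in enumerate(product(range(N_y), repeat=N_x)):
    f = Y[list(ys)]
    types = [i for i, b in bases.items() if similarity(f, b) >= tau_s]
    if types:
        nodes.append((n, f, types))

# Stage 3: connect pairs with Euclidean distance at most L.
G = nx.Graph()
for n, f, types in nodes:
    G.add_node(n, signal=f, ftypes=types)
for (n1, f1, _), (n2, f2, _) in combinations(nodes, 2):
    if np.linalg.norm(f1 - f2) <= L:
        G.add_edge(n1, n2)

# A shortest sequence of transitions between two adjustable signals.
first, last = nodes[0][0], nodes[-1][0]
if nx.has_path(G, first, last):
    path = nx.shortest_path(G, first, last)
\end{verbatim}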
Given a transition network $\Gamma$ and all the shortest paths between its pairs of nodes, it also becomes interesting to consider statistics of the length of those paths, such as their average and standard deviation, which can provide interesting information about the overall potential of the reference functions for implementing optimal transitions and morphings as mentioned above. Another interesting approach considering a transition network consists in performing \emph{random walks} (e.g.~\cite{Justin}) along its nodes. Several types of random walks can be adopted, including uniformly random and preferential choices of nodes according to several local topological properties of the network nodes, such as degree and clustering coefficient. These random walks can be understood as implementing respective types of dynamics in the network. For instance, a random walk with uniform transition probabilities is intrinsically associated to diffusion in the network. In this manner, random walks on transition networks provide means for simulating and characterizing properties related to dynamics involving transitions between the discrete signals in $\Gamma$. Yet another interesting perspective allowed by the derivation of the transition networks $\Gamma$ concerns studies involving betweenness centrality (e.g.~\cite{surv_meas}) or accessibility (e.g.~\cite{accessibility}) of edges and nodes in $\Gamma$, which can complement the two aforementioned analyses. For instance, it could be interesting to use the accessibility to identify the discrete signals in $\Omega$, as underlain by the reference functions $g_i(x)$, reaching the largest and smallest numbers of other nodes, therefore providing information about the role of those nodes regarding influencing or being influenced by other nodes. The accessibility measurement can also be applied in order to identify the center and periphery of the obtained transition networks~\cite{borders_access}. \section{Case Example 1: Power Functions} This section presents a case example of the proposed methodology assuming the four power functions in Equation~\ref{eq:four_ref}. First, we consider the region $\Omega$ as being sampled by $N_x = 5$ abscissae values and $N_y = 7$ coordinate samples, assuming $\tau_s = 0.2$, $\alpha = 10$, and $L=0.6$. The resulting transition network is depicted in Figure~\ref{fig:transition_5}, as visualized by the Fruchterman-Reingold methodology~\cite{Fruchterman}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{power_graph_5.png} \\ \vspace{0.1cm} \caption{Visualization, by using the Fruchterman-Reingold method, of the transition network obtained for the reference power functions in Eq.~\ref{eq:four_ref}, $N_x=5$ and $N_y=5$, assuming $\tau_s = 0.2$ and $L = 0.6$. The colors indicate, according to the legend, the respective type of power function approximating the discrete signals. The two semiplanes of the bilateral symmetry correspond to the sign of the coefficients $a^i_1$. The five main clusters of nodes correspond to the constant functions $a^i_0 = -1, -0.5, 0, 0.5, 1$. Observe the hubs at the center of each of the 5 clusters of nodes. See text for more information. } \label{fig:transition_5} \end{center} \end{figure} Several remarkable features can be identified in the obtained transition network. First, we find the nodes organized according to a well-defined bilateral symmetry, which can be verified to correspond to the sign of the coefficients $a^i_1$, $i=1, 2, 3, 4$.
In addition, the nodes corresponding to approximations by the power functions $g_1$ and $g_3$, both of which presenting odd parity, tend to be adjacent to one another, with a similar tendency being observed for the nodes respective to the even-parity power functions $g_2$ and $g_4$. Five main clusters of nodes can also be identified along the diagonal of the figure running from bottom-left to top-right, each with a respective central hub. These hubs correspond to the constant functions $a^i_0 = -1, -0.5, 0, 0.5, 1$ which, as discussed in Section~\ref{sec:adjacency}, represent transition points of the adopted set of reference functions and $\Omega$. As could be expected, these hubs and surrounding clusters of nodes are characterized by the presence of all the four types of considered power functions. The other, smaller, clusters of nodes are associated to transition points allowed by the adoption of a non-null tolerance, and possibly reflect the intrinsic structure of the discrete space $\Omega$. It is also possible to derive a \emph{reduced} version of the above transition network. Basically, all nodes associated to each of the 4 categories of nodes (i.e.~the adopted 4 reference functions) are subsumed by a respective node, while the interconnections between all the original nodes are also collected into the links between the agglomerated nodes. Figure~\ref{fig:reduced} illustrates the reduced version of the transition network in Figure~\ref{fig:transition_5}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{reduced.png} \\ \vspace{0.1cm} \caption{The reduced version of the transition network in Fig.~\ref{fig:transition_5}. Each of the 4 nodes corresponds to a respectively indicated type of power function, while the links between each pair of nodes accumulate all connections between the original subsumed nodes.} \label{fig:reduced} \end{center} \end{figure} The obtained reduced graph corroborates the predominance of transitions among the odd ($g_1$ and $g_3$) and among the even ($g_2$ and $g_4$) power functions. In addition, we also have that the largest number of transitions is observed between instances of $g_1$, and that the smallest number of transitions takes place between instances of $g_3$ and $g_4$. The largest number of transitions between odd and even functions takes place between $g_1$ and $g_2$. One interesting question concerns to what extent the transition networks may vary with respect to distinct values of the tolerance $\tau_d$ or $\tau_s$. Figure~\ref{fig:several} depicts 9 additional transition networks obtained for the same configuration adopted in the previous example, with respect to several different values of $\tau_s$ respectively indicated above each network. \begin{figure*}[h!] \begin{center} \includegraphics[width=0.3\linewidth]{g_001.png} \includegraphics[width=0.3\linewidth]{g_003.png} \includegraphics[width=0.3\linewidth]{g_009.png} \\ \vspace{0.7cm} \includegraphics[width=0.3\linewidth]{g_011.png} \includegraphics[width=0.3\linewidth]{g_021.png} \includegraphics[width=0.3\linewidth]{g_025.png} \\ \vspace{0.7cm} \includegraphics[width=0.3\linewidth]{g_027.png} \includegraphics[width=0.3\linewidth]{g_029.png} \includegraphics[width=0.3\linewidth]{g_035.png} \caption{Additional examples of transition networks obtained for the same configuration used in the previous example, but with respect to several other values of $\tau_s$, as respectively indicated in each network.
The colors follow the same convention as in Fig.~\ref{fig:transition_5}. Network visualizations obtained by using the Fruchterman-Reingold method.} \label{fig:several} \end{center} \end{figure*} As illustrated in Figure~\ref{fig:several}, the size and connectivity of the transition network decrease steadily with $\tau_s$, and several markedly distinct types of networks, most of which presenting bilateral symmetry, are respectively observed. Given the more general ability of the power functions to adjust the discrete signals when larger tolerance values are allowed (i.e.~small values of $\tau_s$), the initial networks tend to present a more widespread and uniform interconnectivity. Observe also that the networks split into two or more connected components for values of $\tau_s$ larger than approximately $0.2$. The relative coverage and coverage index (see Section~\ref{sec:coverage}) of the four considered power functions for $\tau_s = 0.01, 0.02, \ldots, 0.5$ are shown in Figures~\ref{fig:coverage_5}(a) and (b), respectively. \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{coverage_5.png} \\ \vspace{0.1cm} \caption{The relative coverage (a) and coverage index (b) for the case example 1 with $N_x=5$ and considering the reference functions in Equation~\ref{eq:four_ref}, as obtained for similarity tolerance values $\tau_s = 0.01, 0.02, \ldots, 0.5$.} \label{fig:coverage_5} \end{center} \end{figure} Similar values of relative coverage can be observed for the four power functions, with oscillations along $\tau_s$ that tend to increase from left to right in Figure~\ref{fig:coverage_5}(a), up to a point, near $\tau_s =0.35$, where the relative coverages become nearly constant and markedly distinct between the 4 considered types of reference functions. As expected, the coverage index decreased steadily with $\tau_s$ for all the four considered reference functions, also presenting similar values. Observe that only a small percentage of the possible discrete signals are adjustable at $\tau_s = 0.2$. Let us now consider the shortest path between two functions in the above transition network. Figure~\ref{fig:shortest_5} illustrates the shortest sequence of transitions between the functions respectively identified by the numbers $53$ and $105$. \begin{figure}[h!] \begin{center} \includegraphics[width=1\linewidth]{shortest_path_5.png} \\ \vspace{0.1cm} \caption{The minimal sequence of transitions in the transition network in Figure~\ref{fig:transition_5}, assuming $N_x=5$ and $N_y = 5$, leading from signal $n=53$ to signal $n=105$. The numbers within parenthesis indicate the type of respectively fitted power function ($1 = g_1$, $2 = g_2$, $3 = g_3$, and $4 = g_4$).} \label{fig:shortest_5} \end{center} \end{figure} Figure~\ref{fig:rand_5} shows the first 23 steps of a possible self-avoiding random walk along the above transition network, starting at signal $n = 53$. \begin{figure*}[h] \begin{center} \includegraphics[width=1\linewidth]{rand_walk_5.png} \\ \vspace{0.1cm} \caption{One of the many possible random walks with 23 steps in the transition network shown in Figure~\ref{fig:transition_5}, considering self-avoiding uniform transition probabilities. Observe the incremental change implemented in the involved discrete signals at each successive step.
The numbers within parenthesis indicate the type of respectively fitted power function ($1 = g_1$, $2 = g_2$, $3 = g_3$, and $4 = g_4$).} \label{fig:rand_5} \end{center} \end{figure*} Self-avoiding operation was adopted so as not to repeat nodes. Observe the relatively smooth transition, involving minimal modifications of the discrete signals, along each of the implemented transitions. Figure~\ref{fig:transition_7} depicts the transition network obtained for the same situation above, but now with $N_x=7$ instead of $N_x=5$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{power_graph_7.png} \\ \vspace{0.1cm} \caption{Visualization, by using the Fruchterman-Reingold method, of the transition network obtained for the reference power functions in Eq.~\ref{eq:four_ref}, $N_x=7$ and $N_y=5$, assuming $\tau_s = 0.2$ and $L = 0.6$. The colors indicate, according to the legend, the respective type of power function approximating the discrete signals. See text for more information.} \label{fig:transition_7} \end{center} \end{figure} The resulting transition network again presents several interesting features. As before, we have the bilateral symmetry corresponding to the sign of the coefficients associated to the $x$-term. In addition, clusters and respective central hubs have again been obtained, corresponding to the constant (null) transition functions as observed before. However, unlike the network obtained for $N_x=5$, most of the nodes are now also separated along the up-down orientation, corresponding to interactions between blue-yellow (up) and red-green (down). These two portions of the transition network can therefore be understood as being directly associated to the odd/even parity of the involved reference functions. Of particular interest is the fact that the discrete signals associated to the blue nodes, corresponding to the reference function $g_1(x) = a^1_1 x + a^1_0$, define a relatively regular pattern of interconnection that is markedly distinct from the more sequential pattern of interconnections observed for the 3 other reference functions. Observe that this transition network also incorporates several \emph{handles}, corresponding to relatively long sequences of links~\cite{tails_handles}. Such sequences are associated to incrementally distinct instances of the same type of reference function, as discussed in Section~\ref{sec:adjacency}. \section{Case Example 2: Polynomials} While the previous case example assumed power functions containing only two terms, we now address the more general situation where only one complete polynomial of order $P$ is adopted as reference function, i.e.: \begin{equation} g_1(x) = a_P x^P + \ldots + a_2 x^2 + a_1 x + a_0 \end{equation} Figure~\ref{fig:poly} illustrates the transition network obtained for the above polynomial reference function assuming $P = 4$, $N_x = 7$, $N_y = 5$, $\tau_s = 0.4$, $\alpha = 10$, and $L = 0.6$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.9\linewidth]{poly_7.png} \\ \vspace{0.1cm} \caption{Visualization, by using the Fruchterman-Reingold method, of the transition network obtained for a complete polynomial of order $P=4$ as single reference function, and $N_x = 7$, $N_y = 5$, $\tau_s = 0.4$, $\alpha = 10$, and $L = 0.6$.
The colors are assigned so that increasing values are represented from cyan to magenta color tones.} \label{fig:poly} \end{center} \end{figure} Interestingly, a completely different topology is now observed for the polynomial transition network as compared to the previous networks respective to power functions. There are two main distinguishing features: (i) a much larger number of nodes is now observed; and (ii) their interconnectivity is much more uniform, without the presence of well-defined heterogeneities such as clusters, hubs, tails or handles. All these properties can be understood as being a consequence of the substantially higher flexibility that a complete polynomial function has for adjusting signals as compared to those of the more specific power functions considered previously in this work. As a consequence of this enhanced adjusting property, many more discrete signals could be fitted with reasonable accuracy, hence the larger network size obtained. The observed uniformity of connections also follows from the flexibility of complete polynomials, as they cater for many more transition points corresponding to the larger number $P$ of involved terms and parameters. \section{Case Example 3: Hybrid Functions} As with power functions and polynomials, sinusoidal functions are also extensively applied in mathematics, physics, and science in general, constituting the basic components of the flexible Fourier series. The third case example considered in the present work adopts a set of reference functions containing two power functions and two sinusoidals, more specifically: \begin{eqnarray} g_1(x) = a^1_1 x + a^1_0 \nonumber \\ g_2(x) = a^2_1 x^2 + a^2_0 \nonumber \\ g_3(x) = a^3_1 \, sin(3 x) + a^3_0 \nonumber \\ g_4(x) = a^4_1 \, sin(5 x) + a^4_0 \label{eq:hybrid} \end{eqnarray} Figure~\ref{fig:hybrid} illustrates the transition network obtained for the above hybrid reference functions assuming $N_x = 5$, $N_y = 7$, $\tau_s = 0.2$, $\alpha = 10$, and $L = 0.6$. \begin{figure}[h!] \begin{center} \includegraphics[width=1\linewidth]{hybrid_7.png} \\ \vspace{0.1cm} \caption{Visualization, by using the Fruchterman-Reingold method, of the transition network obtained for the reference hybrid (two power and two sinusoidal) functions in Eq.~\ref{eq:hybrid}, for $N_x=7$ and $N_y=5$, $\tau_s = 0.2$ and $L = 0.6$. The colors indicate, according to the legend, the respective type of power function approximating the adjustable discrete signals. In addition to the 5 clusters observed in the previous examples involving power functions, now also tails and handles are obtained. See text for additional discussion.} \label{fig:hybrid} \end{center} \end{figure} A particularly interesting structure is observed for this example. First, the five main clusters, corresponding to respective constant transition points, are again observed in a manner analogous to the other examples involving power functions. Bilateral symmetry is again observed, being related to the sign of the coefficients $a^i_1$, $i = 1, 2, 3, 4$. Now, surrounding those clusters of nodes, which incorporate all four types of reference functions, we also discern a relatively regular subnetwork involving the first order power $g_1$ (blue) and the lower frequency sinusoidal $g_3$ (yellow), both of which have odd parity. These nodes also tend to form handles at the border of the obtained transition network.
The nodes associated to the second order power function $g_2$ (red) result mostly distributed along the six projecting tails at the periphery of the network, which correspond to incremental instantiations of the same type of function. Contrariwise, the nodes corresponding to discrete signals adjustable by the high frequency sinusoidal $g_4$ (green) are found concentrated in the three most central clusters of nodes of the network, despite the fact that both $g_2$ and $g_4$ share even parity. In order to study the effect of extending a set of reference functions on the topology of the respectively defined transition network, we incorporate two additional power functions, respective to the third and fourth orders, into the set of reference functions adopted in the previous example (Eq.~\ref{eq:hybrid}), yielding the following extended set of reference functions: \begin{eqnarray} g_1(x) = a^1_1 x + a^1_0 \nonumber \\ g_2(x) = a^2_1 x^2 + a^2_0 \nonumber \\ g_3(x) = a^3_1 x^3 + a^3_0 \nonumber \\ g_4(x) = a^4_1 x^4 + a^4_0 \nonumber \\ g_5(x) = a^5_1 \, sin(3 x) + a^5_0 \nonumber \\ g_6(x) = a^6_1 \, sin(5 x) + a^6_0 \label{eq:hybrid_b} \end{eqnarray} Figure~\ref{fig:hybrid_b} illustrates the transition network obtained for the above hybrid reference functions assuming $N_x = 5$, $N_y = 7$, $\tau_s = 0.2$, $\alpha = 10$, and $L = 0.6$. \begin{figure}[h!] \begin{center} \includegraphics[width=1\linewidth]{hybrid_b_7.png} \\ \vspace{0.1cm} \caption{Visualization, by using the Fruchterman-Reingold method, of the transition network obtained for the second case of reference hybrid (four power and two sinusoidal) functions as in Eq.~\ref{eq:hybrid_b}, for $N_x=7$ and $N_y=5$, $\tau_s = 0.2$ and $L = 0.6$. The colors indicate, according to the legend, the respective type of power function approximating the adjustable discrete signals. In addition to the 5 clusters observed in the previous examples involving power functions, now also tails and handles are obtained. See text for additional information.} \label{fig:hybrid_b} \end{center} \end{figure} It is particularly interesting to contrast the obtained transition network in Figure~\ref{fig:hybrid_b} with the networks in Figure~\ref{fig:transition_7}, obtained for four power functions, and that in Figure~\ref{fig:hybrid}, which considers two power functions and two sinusoidal functions. Therefore, it could be expected that the network in Figure~\ref{fig:hybrid_b}, respective to the union of the reference functions in the two aforementioned sets, inherits some of their respective topological features. Indeed, the network in Figure~\ref{fig:hybrid_b} incorporates some features from both the related structures. First, we again observe the bilateral symmetry also common to those previous networks. In addition, the obtained network can be understood, to a good extent, as corresponding to the structure in Figure~\ref{fig:transition_7} to which peripheral subnetworks corresponding to the two sinusoidals ($g_5$ in pink and $g_6$ in cyan) have been incorporated, being characterized by several respective handles. Also of particular interest is the fact that the tails in Figure~\ref{fig:hybrid_b} are assimilated into the inner structure of the network. \section{Concluding Remarks} Functions can be understood as essential mathematical concepts, being widely used both from the theoretical and applied points of view in science and technology.
As a consequence of their great importance, whole areas of mathematics and other major areas have been dedicated to their study and applications, including calculus, mathematical physics, linear algebra, functional analysis, numerical methods, numerical analysis, dynamic systems, and signal processing, to name but a few examples. The present work is situated at the interface between several of these areas, also encompassing other areas, including network science, computer graphics, and shape analysis. More specifically, we aimed at addressing the issue of how well all possible signals in a given region $\Omega$ correspond to instances of a given set of reference functions. Given that infinite sets of adjustable functions would be obtained when working with continuous functions, we focused instead on addressing the aforementioned problem in discrete regions, leading to finite sets of adjustable functions to be obtained. In particular, if the region $\Omega$ is sampled by $N_x \times N_y$ values, the total number of possible discrete signals in that region is necessarily equal to $N_y^{N_x}$. The adoption of discrete signals also paves the way to verifying if each of them can be adjusted, given a pre-specified tolerance, as instances of the reference parametric functions by using the linear least squares methodology. Having identified the sets of adjustable discrete signals respective to each of the adopted reference functions, it becomes possible not only to study their relative density, but to approach the particularly interesting issue of transitions between adjacent functions, yielding respective transition networks. The adjacency between two functions, as understood in this work, was first characterized with respect to continuous parametric functions as corresponding to respective instances leading to the identity between the two functions, being subsequently adapted to discrete signals and functions by taking into account the Euclidean distances smaller than a specified threshold $L$. A number of interesting possible investigations can then be performed with basis on these obtained networks, including studies of optimal sequences of transitions, random walks potentially associated to dynamical systems, as well as the identification of particularly central signals in terms of betweenness centrality and accessibility. The potential of the reported concepts and methods was then illustrated with respect to three case examples respective to: (i) four power functions; (ii) a single complete polynomial of fourth order; and (iii) two sets of hybrid reference functions involving combinations of power functions and sinusoidals. As expected, the coverage index decreased steadily as $\tau_s$ increased, while the four power functions presented similar potential for adjusting the discrete functions in the assumed region $\Omega$. In addition, the obtained transition networks gave rise to a surprising diversity of topologies, including combinations of modularity and regularity, as well as hubs, handles and tails. Several of the networks were also characterized by symmetries which have been found to be related to the sign of the reference function coefficients, as well as their parity. The power functions and sinusoidals were found to lead to quite distinct patterns of interconnectivity in the resulting transition networks, with the latter leading to peripheral handles.
The intricate and diverse patterns of topological structure obtained for the transition networks are also influenced by the discrete aspects of the lattice underlying $\Omega$. For instance, most of the case examples involving power and sinusoidals for $N_x=5$ were found to incorporate five clusters of nodes associated to the null discrete transition. Other topological heterogeneities of the obtained networks are also related to specific anisotropies of the lattice, as well as to the nature of the respective reference functions. One particularly distinguishing aspect of the proposed approaches concerns the complete, exhaustive representation of every possible discrete signal in the region $\Omega$. As such, these approaches provide the basis for systematic studies in virtually every theoretical or applied area involving discrete signals or functions. In particular, it would be interesting to revisit dynamic systems from the perspective of the described concepts and methods, associating each admissible signal to a respective node in the transition networks, and studying or modeling specific dynamics by considering these networks. The generality of the concepts and methods developed along this work paves the way to many related further developments. For instance, it would be of interest to extend the approach from 1D signals to higher dimensional scalar and vector fields, as well as to other types of regions possibly including non regular borders or even disconnected parts. It would also be interesting to study other types of functions such as exponential, logarithm, Fourier series, as well as several types of statistical distributions. In addition, the several types of obtainable transition networks can be applied as benchmarks in approaches aimed at characterizing and classifying complex networks, as well as for studies aimed at investigating the robustness of networks to attacks, and also from the particularly important perspective of relating topology and dynamics in network science. Another interesting possibility consists in applying the developed methodologies to the analysis of real data, such as time series, shapes and images. Though the present work focused on undirected networks, it is possible to adapt the proposed concepts and methods for handling directed transition networks, therefore extending even further the possibly modeled patterns and dynamics. This can be done, for instance, by defining the concept of adjacency in an asymmetric manner, such as when one of the reference functions, through incremental parameter variations, approaches another parameterless reference function, in which case the direction would extend from the former to the latter respectively associated nodes. Another possibility would be to establish the directions in terms of an external field, which could be possibly associated to a dynamical system. Last but not least, the networks generated by the proposed methodology yield remarkable patterns when visualized in a geometric space, presenting shapes with diverse types of coexisting regularity, heterogeneity and symmetries. It has been verified that an even wider and richer repertoire of shapes can be obtained by the suggested method by varying the involved parameters. For instance, symmetries of types other than bilateral can be obtained by using reference functions containing 3 or more terms instead of the 2 terms as adopted in most of the examples in this work.
One particularly interesting aspect of generating shapes in the described manner is that very few parameters are involved while determining structures with high levels of spatial and morphologic diversity and complexity. Actually, the only involved parameters specifying each of the possibly obtained shapes are $N_x$, $N_y$, the reference functions, $\alpha$ and $L$. This potential for producing such flexible shapes paves the way to several studies not only in shape and pattern generation and recognition, but also in developmental biology, in the sense that the obtained structures could represent a model of morphogenesis through gene expression control by the reference functions, while the spatial organization of the cells would be defined in a manner similar to the Fruchterman-Reingold method, i.e.~nodes that are connected attract one another, while disconnected nodes tend to repel one another. These interactions could be associated to morphic fields (e.g.~biochemical concentrations, electric fields, etc.) taking place during development. \vspace{0.7cm} \textbf{Acknowledgments.} Luciano da F. Costa thanks CNPq (grant no.~307085/2018-0) for sponsorship. This work has benefited from FAPESP grant 15/22308-2. \vspace{1cm}
\section{Introduction} \label{sec1} Quantum state tomography (QST) determines an unknown state by making measurements on identical copies, with highly important applications in various areas of quantum technologies \cite{d2003quantum, cramer2010efficient, christandl2012reliable}. QST and process estimation tasks for characterization and reconstruction purposes generally require an exponential number of measurements, i.e., \textit{$\mathcal{O}(2^{2n})$} in the number of qubits $n$, without any knowledge about the state. Low-rank density matrices with rank $r \ll 2^n$ approximating pure states are reconstructed with \textit{$\mathcal{O}(r \, 2^n \, n^c)$} Pauli measurements for a constant $c \in [2, 6]$ by exploiting compressive sensing (CS) with convex or non-convex programming approaches \cite{gross2010quantum, kyrillidis2018provable, flammia2012quantum} and experimental studies \cite{steffens2017experimentally}. Linear optical methods are already utilized, such as in \cite{banchi2018multiphoton} for multi-mode multi-photon states, where a finite number of linear optical interferometer configurations is used. The reconstruction method requires $\sharp$P-hard matrix permanent calculations with \textit{$\mathcal{O}\big(poly(D_{n,M},\, 2^n) \big)$} complexity where $n$ is the number of photons, $M$ is the number of modes and $D_{n,M} = {n \choose n-M}$. In this article, QST of any pure state of $n$ qubits composed of the superposition of $K$ different computational basis states in a specific measurement set-up, i.e., denoted as a \textit{$K$-sparse} pure state, is achieved in quantum polynomial-time without any knowledge about the state, including the value of $K$, based on assumptions about the implementation of the black-boxes of the specially designed unitary operator $U_{\vec{\Phi}}$. It is assumed that exponentially large powers of $U_{\vec{\Phi}}$, i.e., controlled-$U_{\vec{\Phi}}^{2^j}$ operations for finite $j$, can be implemented with polynomial size quantum circuits. Then, the main QST problem for $n$ qubits is basically converted to the problem of quantum CS based QST of $\log(K)$ qubits, creating independence from the number of qubits $n$ for finite $K \ll 2^n$. Firstly, an ancillary single qubit initialized to $(\ket{0} \, + \, \ket{1}) \, / \, \sqrt{2}$ is included, increasing the number of qubits to $n+1$. After applying the n-qubit Hadamard transform to the pure state, the QST problem for the resulting $n\,+\,1$ qubits is shown to be equivalent to the conventional phase estimation problem for estimating eigenvalues and projecting onto eigenvectors of $U_{\vec{\Phi}}$. Here, we mainly exploit the favorable structure of the eigenvectors of $U_{\vec{\Phi}}$ to represent any input pure state as a superposition of eigenvectors of $U_{\vec{\Phi}}$ with the help of the ancillary qubit. Then, the application of the conventional phase estimation algorithm determines the unknown computational basis states composing the pure state. After learning the basis locations of sparsity, i.e., the unknown $K$ different computational basis states, the conventional quantum CS methods are applied to estimate the $K$ different complex superposition coefficients. We present a linear optical set-up to realize such a unitary operator. It is motivated by the target of tracking the evolution of single photons through consecutive beam splitters (BSs) and phase shifters by using consecutive which-path-detectors (WPDs) after each BS. We exploit the surprising eigenstructure of the designed set-up.
WPDs allow tracking the evolution of photons in various settings \cite{englert1996fringe}. In addition, a quantum circuit implementation of $U_{\vec{\Phi}}$ is presented for realization on universal quantum computers or noisy intermediate-scale quantum (NISQ) devices. More specifically, consider the following state in the problem definition, composed of the superposition of $K$ different computational basis states of $n$ qubits, i.e., denoted as a \textit{$K$-sparse} pure state, for estimation with QST. The problem is defined as follows. \begin{customdef}{1} \textbf{$K$-sparse pure state reconstruction problem}: \textit{Estimate the unknown values and reconstruct any pure state being in a superposition of $K$ different computational basis states of $n$ qubits defined as follows: \begin{eqnarray} \label{purestate} \ket{\Psi} = \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \end{eqnarray} where $c_k > 0$ and $c_k \, e^{\imath \, \vartheta_k}$ is the complex superposition coefficient of the unknown computational basis state $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ with unknown values of $s_{k,j} = 0$ or $1$ for $j \in [1, n]$ and $K \ll 2^n$. The unknown values to be estimated are $c_k$ and $ \vartheta_k$ for $k \in [1, K]$, the number of superposition components $K$ and $s_{k,j}$ for $j \in [1,n]$, i.e., the locations of sparsity, without any initial knowledge about the state. The structure of the computational basis states for each qubit, i.e., $\ket{0}$ and $\ket{1}$, is initially defined depending on the measurement set-up. } \end{customdef} Observe that the sparsity is defined with respect to the chosen computational basis states of each qubit, i.e., $\ket{0}$ and $\ket{1}$, in the measurement set-up. The computational basis state of each qubit depends on the given measurement set-up, and the pure state to be estimated is assumed to have a finite number of superposition components without knowing the exact value of $K$. In classical CS, the problem is defined in \cite{baraniuk2007compressive} as finding the minimum $\mathscr{L}_1$-norm with $\widehat{\mathbf{x}} = \mbox{argmin} \, \Vert \mathbf{x'} \Vert_1$ such that $\mathbf{y} = \Theta \, \mathbf{x'}$, where $\mathbf{y}$ is the measurement result, $\mathbf{x}$ is the unknown vector and $\Theta $ is a Gaussian matrix. It is observed that if $\mathbf{x}$ is $K$-sparse, then $\Theta $ with the dimension $M \times N$ is enough, where $M \, = \, c\, K \, log(N \, / \, K)$ is the number of measurements with constant $c$ and $N$ is the dimension of the signal $\mathbf{x}$. The computational complexity of the solution with basis pursuit is $\mathcal{O}(N^3)$. In quantum CS, e.g., with the convex solution of \cite{gross2010quantum}, randomly chosen Pauli expectations $tr(\mathbf{P}^k \,\rho)$ are utilized to minimize $\Vert \sigma \Vert_{tr}$ subject to $ tr(\mathbf{P}^k\, \sigma ) = tr(\mathbf{P}^k\, \rho )$, where $\Vert \sigma \Vert_{tr}$ denotes the sum of the singular values of $\sigma $, the unknown density matrix is $\rho$, $M$ is the number of measurements with $k \in [1, M]$ and $\mathbf{P}^k \, \equiv \, \bigotimes_{j=1}^{n} P_j^k$ is a random Pauli measurement with $P_j^k \in \lbrace \mathtt{1}, \sigma^x, \,\sigma^y, \, \sigma^z \rbrace$. $M$ is mainly reduced from the requirement of roughly $2^{2n}$ measurements to $2^n$ measurements by using CS, based on the assumption of the purity of the sampled state.
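For reference, the classical $\mathscr{L}_1$ recovery summarized above can be prototyped directly as a linear program. The following Python sketch is an illustrative toy instance only (the dimensions, the constant $c = 2$ and the use of scipy's linprog solver are assumptions, not part of the cited works), solving basis pursuit by splitting $\mathbf{x'}$ into nonnegative parts.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Toy instance (N, K and c = 2 are illustrative assumptions).
rng = np.random.default_rng(0)
N, K = 64, 3
M = int(np.ceil(2 * K * np.log(N / K)))       # M = c K log(N/K) measurements
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Theta = rng.normal(size=(M, N)) / np.sqrt(M)  # Gaussian measurement matrix
y = Theta @ x

# Basis pursuit: min ||x'||_1 s.t. Theta x' = y, written as an LP
# with x' = u - v and u, v >= 0.
c_lp = np.ones(2 * N)
A_eq = np.hstack([Theta, -Theta])
res = linprog(c_lp, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]                 # recovered K-sparse vector
\end{verbatim}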
There is a diverse set of studies utilizing quantum CS or alternative approaches in more practical manners for further reducing the number of measurements and the required resources, such as online learning and shadow tomography \cite{aaronson2019online, aaronson2019shadow}, self-calibrating quantum state tomography by relaxing the blind tomography problem to sparse de-mixing \cite{roth2020semi} and hierarchical compressed sensing \cite{eisert2021hierarchical}, adaptive compressive tomography without a-priori information \cite{ahn2019adaptive}, reduced density matrices \cite{cotler2020quantum, xin2017quantum}, matrix product state tomography \cite{lanyon2017efficient, cramer2010efficient}, neural network \cite{torlai2018neural} and machine learning \cite{lohani2020machine} based approaches, or different methods including \cite{pereira2021scalable}. However, there is no algorithm or solution method, either quantum or classical, available with polynomial-time resource complexity for exactly reconstructing $K$-sparse pure states in (\ref{purestate}), i.e., $K$-sparse vectors in dimension $2^n$. It is an open issue to achieve $\mathscr{L}_1$-norm minimization based QST of $K$-sparse pure states with quantum algorithms of polynomial-time complexity. In this article, we provide an alternative quantum polynomial-time solution with $\mathcal{O}(d \, K \,(log K)^c)$ measurement settings for $K$-sparse pure states, in parallel with the $\mathcal{O}\big(c\, K \, log(2^n \, / \, K) \big)$ measurements in classical CS, without any complexity term growing exponentially with the number of qubits. On the other hand, the computational complexity is also maintained as quantum polynomial-time, providing complete practicality in QST tasks in analogy with the practical utilization of CS in the classical world, where dimensions are exponentially smaller compared with the Hilbert space size of quantum states. We provide the locations of the $K$-sparse points without solving any $\mathscr{L}_1$-norm minimization problem, with a different perspective compared with conventional CS. Phase estimation presents a new approach for QST of pure states while encouraging the design of new unitary operators $U_{\vec{\Phi}}$ with the proposed eigenstructure and practical implementation capability. It is an open issue to realize polynomial size quantum circuit implementations for exponentially large powers of the presented specific design based on linear optics and WPDs. Furthermore, analyzing the effects of noise in the input state and estimation errors is important for practical considerations \cite{riofrio2017experimental, steffens2017experimentally}. Extension to mixed state inputs is also an open issue. The proposed architecture is promising for utilization in all state estimation and process modeling tasks \cite{d2003quantum, cramer2010efficient, christandl2012reliable, aaronson2019online}. Quantifying the amount of entanglement existing in a quantum state is another potential application \cite{schneeloch2019quantifying}. Quantification generally requires full state tomography and complex calculations for entanglement monotones \cite{di2013embedding}.
Furthermore, the proposed method can be utilized to map classical states into eigenvectors of $U_{\vec{\Phi}}$ for various machine learning procedures, similar to embedding into the amplitude, basis or the dynamics of quantum systems through Hamiltonian embedding \cite{schuld2018supervised}, or using various quantum feature map operators \cite{havlivcek2019supervised, schuld2019quantum, lloyd2020quantum}. The embedded classical data is extracted reliably in quantum polynomial-time by using the proposed QST architecture. One fundamental theorem and a supporting conjecture are formulated in this article. Theorem-1 reduces the exponential amount of resources and measurements necessary in conventional QST and quantum CS algorithms to quantum polynomial-time resources as follows: \begin{theorem} There exists a unitary operator $U_{\vec{\Phi}}$ with a specially defined eigenstructure to be utilized in the quantum phase estimation algorithm so that any $K$-sparse $n$-qubit pure state can be reconstructed after $\mathcal{O}(1 \, / \, m)$ repetitions of the conventional $t$-bit quantum phase estimation algorithm with $\widetilde{t} \, \approx t\,+\,log \big(2 \, + \, 1 \, / \, \,(2 \, \epsilon) \big)$ ancillary qubits having success probability of at least $(1 \, - \, \epsilon)$, and consecutive $\mathcal{O}(d \, K \,(log K)^c)$ measurement settings based on existing quantum CS methods for $c \in [2, 6]$ and some constant $d$ reducing the error exponentially. $m$ is such that the number of repetitions is $\mathcal{O}(2 \, / \, \min_{k \in [1, K]} \lbrace \vert c_k \vert^2 \rbrace)$, independent of the number of qubits while depending on the probability $\vert c_k \vert^2$ of the least probable basis state in the superposition for $k \in [1, K]$. It is assumed that black-boxes of $U_{\vec{\Phi}}^{2^j}$ are available for $j \in [0, \widetilde{t}-1]$. \end{theorem} \begin{proof} The proof is provided in Section \ref{sec2}. \end{proof} The supporting conjecture proposes that unitary operators $U_{\vec{\Phi}}$ with the special eigenstructure exploited in Theorem-1 can be realized by using polynomial size quantum circuits and linear optics. \begin{customthm}{1} \textit{There exist a unitary operator $U_{\vec{\Phi}}$ and its polynomial size quantum circuit implementation composed of CNOT gates, $R_{X}(\frac{-\pi}{2}) \equiv \frac{1}{\sqrt{2}}\begin{bmatrix}1 & \imath\\ \imath & 1\end{bmatrix} $ gates and phase shifters $\Phi_{j} \, \equiv \, \begin{bmatrix} 1 & 0 \\ 0 & e^{\imath \, \phi_{j}}\\ \end{bmatrix}$ for $j \in [1, n+1]$ such that it has distinct eigenvalues and unique pairs of eigenvectors with the form specified as quantum states $ H^{\otimes n} \,\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big)$ or $ H^{\otimes n} \,\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big)$ corresponding to each $\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}}$ for $k \in [1, 2^n]$. The parameters of a pair of eigenvectors satisfy the relation such that the parameters of the first eigenvector are $\alpha_{k,0} = a_0 \, + \, \imath \, b_0$ and $\alpha_{k,1} = a_1$ with $a_0$, $b_0$ and $a_1 \, \in \mathbb{R}$ and $a_0^2 \, + \, b_0^2 \, + \, a_1^2 = 1$, and the parameters of the other eigenvector are $\beta_{k,0} = a_1$ and $ \beta_{k,1} = - a_0 \, + \, \imath \, b_0$ (or their multiplication with an arbitrary phase).
The parameters $\alpha_{k,0}$, $\alpha_{k,1}$, $\beta_{k,0}$ and $\beta_{k,1}$ depend on $\phi_{j}$ for $j \in [1, n+1]$, $H^{\otimes n}$ is the n-qubit Hadamard transform and $\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}}$ denotes a computational basis state with $s_{k,j} = 0$ or $1$ for $j \in [1, n]$. The existence of the operator satisfying the specific form of eigenstructure depends on the chosen set $\phi_j$ for $j \in [1, n+1]$.} \end{customthm} Theoretical analysis and supporting evidence are provided in Section \ref{sec5a}. The linear optical set-up realizing such unitary operators is provided in Section \ref{sec3}, motivated by consecutive WPDs tracking the evolution of a single photon in a linear optical set-up. In Section \ref{sec4}, the quantum circuit implementation is provided for the set-up. Numerical analysis shows that if uniformly distributed phase shifts are utilized in the linear optical design or the proposed quantum circuit implementation, then $U_{\vec{\Phi}}$ with the specific form of eigenstructure is obtained. However, it is an open issue to determine the conditions on the phase shift values under which the proposed eigenstructure is satisfied. This is the reason that we provide the result as a conjecture. The remainder of the paper is organized as follows. In Section \ref{sec2}, the QST algorithm is presented. In Section \ref{sec3}, the linear optical set-up providing a design of the targeted $U_{\vec{\Phi}}$ is presented. Then, in Section \ref{sec4}, the quantum circuit implementation for the designed operator is provided. The eigenstructure of $U_{\vec{\Phi}}$ is analyzed in Section \ref{sec5}. Open problems are presented in Section \ref{sec6}. \section{$K$-sparse Pure State Tomography Algorithm} \label{sec2} The QST problem is converted to an eigenvector estimation problem by using the eigenvalues estimated with the phase estimation algorithm \cite{kitaev1995quantum, nielsen2010quantum}. The algorithm is summarized in Algorithm-\ref{alg:QST} with two important phases. The proof of Theorem-1 and the description of the algorithm are provided next. \begin{algorithm} \label{alg:QST} \caption{$K$-sparse pure state tomography with phase estimation} \textbf{$1^{st}$ PHASE:} \begin{enumerate} \item Initial pure state: $\ket{\Psi} = \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ with unknown $K$, $c_k$, $\vartheta_k$ and $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ for $k \in [1, K]$\\ \item Add an ancillary state $(\ket{0} \, + \, \ket{1}) \, / \, \sqrt{2}$ and apply the n-qubit Hadamard transformation to $\ket{\Psi}$, transforming it into a superposition of eigenvectors of the unitary operator $U_{\vec{\Phi}}$: $$ \ket{\Psi_e} = \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, H^{\otimes n} \,\ket{s_{k,n} \, \, s_{k, n-1} \, \hdots s_{k,1}} \bigg(\frac{\ket{0} \, + \, \ket{1}}{\sqrt{2}} \bigg) \, = \, \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \bigg( \sum_{l = 1}^2 d_{k,l} \, \ket{E_{k,l}} \bigg) $$\\ \item Apply phase estimation and collapse the state to $\ket{\widetilde{\lambda_{k,l}}} \, \ket{E_{k,l}}$ for $l \in [1,2]$ with probabilities $\vert c_{k}\vert^2 \, \vert d_{k,1}\vert^2 = \vert c_{k}\vert^2 \, (1/2 \, + \,a_{k,0} \, a_{k,1} )$ and $\vert c_{k}\vert^2 \, \vert d_{k,2}\vert^2 = \vert c_{k}\vert^2 \, (1/2 \, - \,a_{k,0} \, a_{k,1} )$.
\\ \item Apply the n-qubit Hadamard transformation to the second register except the ancillary qubit, converting the measured state to either $ \ket{\widetilde{\lambda_{k,1}}} \, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big) $ or $ \ket{\widetilde{\lambda_{k,2}}} \, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big)$.\\ \item Measure the second register except the ancillary qubit to obtain $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$.\\ \item Perform the first five steps $\mathcal{O}(2 \, / \, \min_{k \in [1, K]} \lbrace \vert c_k \vert^2 \rbrace)$ times, completing the estimation of $K$ and $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ for $k \in [1, K]$.\\ \end{enumerate} \textbf{$2^{nd}$ PHASE:} \begin{enumerate} \item $\ket{\Psi}$ is estimated by converting the problem to the QST problem of $log(K)$-qubit pure states by using the projection operators $P_{k} \equiv \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \bra{s_{k,1} \, \hdots \, s_{k, n-1} s_{k,n}} $ and $I - P_k$, since the states $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ for $k \in [1, K]$ are obtained in the first phase of the algorithm. Existing quantum CS methods are utilized in this step, which require $\mathcal{O}(d\, K \,(log K)^c)$ measurement settings for $c \in [2, 6]$ and constant $d$, while completing the calculation of $c_k \, e^{\imath \, \vartheta_k}$ for $k \in [1, K]$. \end{enumerate} \end{algorithm} \begin{proof}[\textbf{Proof of Theorem-1:}] Assume that there exists a special unitary transform $U_{\vec{\Phi}}$ with a favorable eigenstructure as described next. In the first phase, the unknown state $\ket{\Psi}$ is firstly converted to the following by using the n-qubit Hadamard transformation and an ancillary qubit to encode it as the superposition of the eigenvectors of $U_{\vec{\Phi}}$: \begin{eqnarray} \ket{\Psi_e} \, & = & \, \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, H^{\otimes n} \,\ket{s_{k,n} \, \hdots s_{k,1}} \, \bigg(\frac{\ket{0} \, + \, \ket{1}}{\sqrt{2}} \bigg) \\ & = & \, \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, H^{\otimes n} \,\ket{s_{k,n} \, \hdots s_{k,1}} \, \bigg( d_{k,1} \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big) + d_{k,2} \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big) \bigg) \hspace{0.2in} \\ & = & \, \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \big( d_{k,1} \ket{E_{k,1}}\, + \, d_{k,2} \, \ket{E_{k,2}}\big) \end{eqnarray} where $\ket{E_{k,1}}$ and $\ket{E_{k,2}}$ are eigenvectors of $U_{\vec{\Phi}}$, $ \lbrace d_{k, 1}, \,d_{k, 2} \rbrace \in \mathbb{C}$, $\alpha_{k,0} \, \equiv \, a_{k, 0} \, + \, \imath \, b_{k, 0}$, $\alpha_{k,1} = a_{k, 1}$ for the parameters $ \lbrace a_{k,0}, \, a_{k,1}, \, b_{k,0} \rbrace \in \mathbb{R}$, and it is assumed that there is a special relation between the values of $\alpha_{k,j}$ and $\beta_{k,j}$ for $j \in [0, \,1]$, i.e., denoted as the duality relation, defined as $\beta_{k,0} = \, a_{k, 1}$ and $\beta_{k,1} = \, - \, a_{k, 0} \, + \, \imath \, b_{k, 0}$ (or multiplication of $\beta_{k,0}$ and $\beta_{k,1}$ with an arbitrary phase, e.g., $-1$).
All the parameters depend on the phase shift values in the design of $U_{\vec{\Phi}}$, where the eigenvectors $\ket{E_{k,1}}$ and $\ket{E_{k,2}}$ are defined as follows: \begin{eqnarray} \ket{E_{k,1}} \, & \equiv & \, H^{\otimes n} \,\ket{s_{k,n} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big) \\ \ket{E_{k,2}} \, & \equiv & \, H^{\otimes n} \,\ket{s_{k,n} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big) \end{eqnarray} We assumed that $U_{\vec{\Phi}}$ has a special eigenstructure with the pair of eigenvectors $\ket{E_{k,1}}$ and $\ket{E_{k,2}}$ corresponding to each unique $\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}}$ such that these eigenvectors are separable pure states and they differ only in terms of the state of the first qubit, i.e., as $\big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big)$ or $\big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big)$. Furthermore, our assumption includes the special duality relation between the values of $\alpha_{k,j}$ and $\beta_{k,j}$ for $j \in [0, \,1]$ as described. In addition, it is assumed that all the eigenvalues of $U_{\vec{\Phi}}$ are different, which will be exploited during phase estimation as described next. \begin{figure}[ht!] \includegraphics[width=4.5in]{fig5.eps} \caption{Estimation of a single computational basis state $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} $ by using the phase estimation algorithm. The ancillary qubit state $(\ket{0} \, + \, \ket{1}) \, / \, \sqrt{2} $ encodes $ H^{\otimes n} \,\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}} \, \big( (\ket{0} \, + \, \ket{1}) \, / \, \sqrt{2} \big) $ as the superposition of two eigenvectors of $U_{\vec{\Phi}}$, i.e., $\ket{E_{k,1}}$ and $\ket{E_{k,2}}$, with $ H^{\otimes n} \,\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}} \, \big( (\ket{0} \, + \, \ket{1}) \, / \, \sqrt{2} \big)$ being equal to $\sum_{l =1}^2 d_{k,l} \ket{E_{k,l}}$ where $\ket{E_{k,1}} \, \equiv \, H^{\otimes n} \,\ket{s_{k,n} \, s_{k,n-1} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big)$ and $ \ket{E_{k,2}} \, \equiv \, H^{\otimes n} \,\ket{s_{k,n} \,s_{k,n-1} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big)$. In the proposed QST algorithm, the state $\ket{\Psi} = \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ is the input as the superposition of the computational basis states while the ancillary qubit encodes each computational basis state $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} $ as superpositions of different pairs of eigenvectors. After the phase estimation, the first register with $\widetilde{t}$ qubits is measured. Then, the n-qubit Hadamard transformation is applied to the second register except the ancillary qubit. The state collapses to either $ \ket{\widetilde{\lambda_{k,1}}} \, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big) $ or $ \ket{\widetilde{\lambda_{k,2}}} \, \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big) $. } \label{fig5} \end{figure} The parameters $ d_{k,1}$ and $d_{k,2}$ with $\vert d_{k,1} \vert^2 \, + \, \vert d_{k,2} \vert^2 = 1$ are calculated easily if $ \lbrace a_{k,0}, \, a_{k,1}, \, b_{k,0} \rbrace$ are known.
They satisfy the following for $\beta_{k,0} \, = \, a_{k, 1} \, \mbox{and} \, \beta_{k,1} = - a_{k, 0} \, + \, \imath \, b_{k, 0}$: \begin{equation} d_{k,1} \, = \, \frac{\beta_{k,0}-\beta_{k,1}}{\sqrt{2}}; \,\,\,\, d_{k,2} \, = \, \frac{\beta_{k,0}+\beta_{k,1}^* }{\sqrt{2}} \end{equation} We observe that $\vert d_{k,1} \vert^2 = (1/2 \,+ \,a_{k,0} \, a_{k,1} )$ and $\vert d_{k,2} \vert^2 = (1/2 \,- \,a_{k,0} \, a_{k,1} )$. Then, the phase estimation algorithm with input $\ket{\Psi_e}$ is exploited to estimate the eigenvalues $e^{\imath \, \lambda_{k,1}}$ and $e^{\imath \, \lambda_{k,2}}$ corresponding to each pair of $\ket{E_{k,1}}$ and $\ket{E_{k,2}}$, resulting in the following state: \begin{eqnarray} \ket{\Psi_o} \, \equiv \, \sum_{k = 1}^{K} c_k \, e^{\imath \, \vartheta_k}\, \big( d_{k,1} \ket{\widetilde{\lambda_{k,1}}} \, \ket{E_{k,1}}\, + \, d_{k,2} \, \ket{\widetilde{\lambda_{k,2}}} \, \ket{E_{k,2}}\big) \end{eqnarray} where $\ket{\widetilde{\lambda_{k,1}}} $ and $\ket{\widetilde{\lambda_{k,2}}} $ are $t$-bit approximations to the exact values of the phases of the eigenvalues, which can be realized with a quantum circuit including $ \widetilde{t} \approx t\,+\,log\big(2 \, + \, \, 1 \, / \, \,(2 \, \epsilon) \big)$ ancillary qubits in the first register as shown in Fig. \ref{fig5} with success probability of at least $(1 \, - \, \epsilon)$ \cite{nielsen2010quantum}. On the other hand, measurement of $\ket{\Psi_o}$ collapses the state to $\ket{\widetilde{\lambda_{k,l}}} \, \ket{E_{k,l}}$ for $l \in [1,2]$ with probabilities $\vert c_{k}\vert^2 \, \vert d_{k,1}\vert^2 = \vert c_{k}\vert^2 \, (1/2 \, + \,a_{k,0} \, a_{k,1} )$ and $\vert c_{k}\vert^2 \, \vert d_{k,2}\vert^2 = \vert c_{k}\vert^2 \, (1/2 \, - \,a_{k,0} \, a_{k,1} )$. Applying the n-qubit Hadamard transform to the second register of the resulting state, i.e., $\ket{E_{k,l}}$ for $l \in [1,2]$, except the ancillary qubit results in either $ \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \alpha_{k,0} \, \ket{0} \, + \, \alpha_{k,1} \, \ket{1} \big) $ or $ \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \, \big( \beta_{k,0} \, \ket{0} \, + \, \beta_{k,1} \, \ket{1} \big)$ in the second register. Then, measuring the unknown qubits gives the values of $s_{k,j}$ for $j \in [1, n]$. Another important assumption is that all the eigenvalues of $U_{\vec{\Phi}}$ are different so that there will be a one-to-one mapping between each $\ket{s_{k,n} \, \, s_{k, n-1} \, \hdots s_{k,1}}$ and the specific eigenvalues $\ket{\widetilde{\lambda_{k,1}}} $ and $\ket{\widetilde{\lambda_{k,2}}}$, so that the measurement restores the corresponding $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$. In order to estimate all values for $k \in [1,K]$, we apply the standard phase estimation algorithm shown in Fig. \ref{fig5} as many as $\mathcal{O}(1 \, / \, m)$ times with the designed operator $U_{\vec{\Phi}}$, where $m$ is chosen as follows: $$ m = \min_{k} \big \lbrace \max \lbrace \vert c_{k} \, \vert^2 \, \vert d_{k,1}\vert^2, \, \vert c_{k}\vert^2 \, \vert d_{k,2}\vert^2 \rbrace \big \rbrace = \min_{k} \big \lbrace \vert c_{k}\vert^2 \, \max \lbrace \big(\frac{1}{2} \, + \,a_{k,0} \, a_{k,1} \big), \, \big(\frac{1}{2} \, - \,a_{k,0} \, a_{k,1} \big) \rbrace \big \rbrace $$ for detecting one of the pairs of eigenvalues for each $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$, where the one with higher $\vert d_{k,l}\vert^2$ has higher probability for $l \in [1, \,2]$.
Since $ \big(1 \, / \, 2 \, + \,a_{k,0} \, a_{k,1} \big) $ and $ \big(1 \, / \, 2 \, - \,a_{k,0} \, a_{k,1} \big)$ vary in opposite directions as the product $a_{k,0} \, a_{k,1}$ varies, a balanced effect is created on the number of measurements, practically independent of the number of qubits. In other words, $\max \lbrace \big(1 \, / \, 2 \, + \,a_{k,0} \, a_{k,1} \big), \, \big(1 \, / \, 2 \, - \,a_{k,0} \, a_{k,1} \big) \rbrace $ is larger than $1 \, / \, 2$. Therefore, the number of phase estimation steps is $\mathcal{O}(2 \, / \, \min_{k \in [1, K]} \lbrace \vert c_k \vert^2 \rbrace)$, independent of the number of data qubits while depending on the probability $\vert c_k \vert^2$ of the least probable state in the superposition for $k \in [1, K]$. Then, the detected unknown basis states and the phases of the eigenvalues are paired as the list $\lbrace \widetilde{\lambda_{k,l}}, \, \ket{s_{k,n} \, \, s_{k, n-1} \, \hdots s_{k,1}}\rbrace$ by choosing the first detected $l \in [1,2]$ for $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$. The first phase of the algorithm is thus completed with the estimation of $K$ and $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ for $k \in [1, K]$. Hence, by combining the unique eigenvector structure of $U_{\vec{\Phi}}$ with the power of phase estimation, we can easily estimate the number of superposition components and the unknown basis states $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ with quantum polynomial-time resources. On the other hand, observe that the phase estimation algorithm uses black-boxes, i.e., controlled-$U_{\vec{\Phi}}^{2^j}$ for $j \in [0, \, \widetilde{t} \, - \,1]$, so we assume that there is a polynomial size quantum circuit implementing the exponentially large powers of controlled-$U_{\vec{\Phi}}$ operations. This is an important open issue for practical implementation and for clarifying theoretical bounds of the complexity for reconstructing $K$-sparse pure states by exploiting quantum computation, i.e., phase estimation. In the second phase of the algorithm, $c_k \, e^{\imath \, \varphi_k}$ is estimated for $k \in [1, K]$. Since each state $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ is known, projection of $\ket{\Psi} $ onto $P_{k}\,\equiv \, \ket{0_k}\bra{0_k}$ and $P_{k}^{\perp} \, \equiv \, I \, - \, P_{k} \equiv \ket{1_k}\bra{1_k}$ converts the problem to QST for pure states of $\log(K)$ qubits where $P_{k}$ is defined as follows: \begin{equation} P_{k} \equiv \ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}} \bra{s_{k,1} \, \hdots \, s_{k, n-1} \, s_{k,n}} \end{equation} The existing quantum CS methods in \cite{gross2010quantum, kyrillidis2018provable} can be utilized with $\mathcal{O}(K \,(\log K)^c)$ measurement settings for $c \in [2, 6]$ by estimating $c_k \, e^{\imath \, \varphi_k}$ for $k \in [1, K]$ with high accuracy and low error probability. In fact, as shown in \cite{gross2010quantum}, $\mathcal{O}(d\, K \,(\log K)^c)$ measurement settings reduce the error exponentially with some parameter $d$ by using certified tomography for pure states. As a result, the state is fully reconstructed after $\mathcal{O}(2 \, / \, \min_{k \in [1, K]} \lbrace \vert c_k \vert^2 \rbrace)$ repetitions of the conventional $t$-bit quantum phase estimation algorithm to estimate $K$ and $\ket{s_{k,n} \, s_{k, n-1} \, \hdots s_{k,1}}$ for $k \in [1,\, K]$, and then $\mathcal{O}(d \, K \,(\log K)^c)$ measurement settings with constant $c$ and $d$ for quantum CS to estimate $c_k \, e^{\imath \, \varphi_k}$ for $k \in [1,K]$.
This completes the proof. \end{proof} Next, we present the design of the optical set-up composed of linear optical elements and WPDs, together with its quantum circuit implementation, realizing the unique and favorable eigenstructure. \section{Design of Linear Optical Set-up Realizing the Unitary Operator $U_{\vec{\Phi}}$} \label{sec3} A novel linear optical set-up, motivated by tracking the path of a single photon through consecutive BSs and phase shifters by using consecutive WPDs, surprisingly provides an interesting relation among the eigenvectors. WPDs are utilized to track the history of a single photon state in various optical architectures for exploring quantum mechanical fundamentals \cite{englert1996fringe}. The multi-WPD implementation motivating the set-up designed in this article is shown in Fig. \ref{fig1}. Single photon diffraction paths are tracked with the help of WPDs. The analogous linear optical circuit and its combination with WPDs are realized by replacing planes with BSs, path length differences with phase shifters and the effects of WPDs with CNOT gates as shown in Fig. \ref{fig2}(a). Quantum circuit modeling for symmetric BSs is shown in Fig. \ref{fig2}(b) as described in the next section. \begin{figure*}[t!] \includegraphics[width=4in]{fig1.eps} \caption{Multi-plane evolution of a diffracting single photon tracked with WPDs motivating the designed linear optical set-up.} \label{fig1} \end{figure*} \begin{figure*}[t!] \includegraphics[width=6in]{fig2.eps} \caption{(a) Linear optical set-up and WPDs for realizing the targeted unitary operator $U_{\vec{\Phi}}$ where the set-up includes consecutive $n+1$ BSs, phase shifters and WPDs tracking the paths of the single photon input. $\ket{D_k}$ for $k \in [1,n]$ shows the initial states of WPDs while unitary gates $U_{k,0}$ or $U_{k,1}$ are applied on $\ket{D_k}$ depending on the path selected by the photon. (b) Quantum circuit implementation of the set-up for symmetric BSs. $\ket{Q}$ shows the initial state of the single photon. The proposed quantum circuit realizes a unitary operator denoted with $U_{\vec{\Phi}}$.} \label{fig2} \end{figure*} The paths obtained with consecutive symmetric BSs and corresponding phase shifters result in a linear optical set-up where the single photon is tracked by using consecutive WPDs, with the Hilbert space dimension of the circuit being $2^{n+1}$. It is shown in Appendix \ref{appA} that the final states of the WPDs and the single photon for any initial basis state of the WPDs are classically simulable. The final states are also derived in Appendix \ref{appA}. Defining the final state of the $k$th WPD as $ \ket{d_j}_{k} \equiv U_{k, j} \ket{D_k}$ for $j \in [0, 1]$, the final state of the photon as $\ket{j_{n+1}}_{n+1}$, the combined state of the photon and WPDs as $\ket{j} \, \equiv \, \, \ket{j_{n+1}}_{n+1}\, \ket{d_{j_n}}_n \, \hdots \, \ket{d_{j_1}}_1 $ and $A(j) \equiv K_{j_1} \prod_{k=1}^{n} \chi_{k+1, j_{k}, j_{k+1}}$ with the decimal value of $ j \equiv j_{n+1}\, 2^n \, + \, \sum_{k=1}^n d_{j_k} \, 2^{k-1}$ allows us to simplify $\ket{\Psi_{n+1}}$ as \begin{equation} \ket{\Psi_{n+1}} = \sum_{j = 0}^{2^{n+1}-1} A(j) \, \ket{j} = \sum_{j = 0}^{2^{n+1}-1} \bigg( K_{j_1} \prod_{k=1}^{n} \chi_{k+1, j_{k}, j_{k+1}} \bigg) \ket{j} \end{equation} where the parameters $K_{j_1}$ and $\chi_{k+1, j_{k}, j_{k+1}}$ composing $A(j)$ are defined in Appendix \ref{appA}. It is observed that the final state is an entangled state due to the non-separable structure of $A(j)$.
The factor $A(j)$ chains neighbour WPD states $\ket{d_{j_k}}_k$ and $\ket{d_{j_{k+1}}}_{k+1}$ with the factor $\chi_{k+1, j_{k}, j_{k+1}}$ depending on both paths experienced by the photon while propagating through each WPD. On the other hand, each $A(j)$ is classically calculated with $\mathcal{O}(n)$ complex calculations for any $\ket{j}$. Next, the quantum circuit implementation of the proposed optical design is provided for simulation on recent quantum computers or NISQ devices. The quantum circuit implementation provides an easier analytical framework to explore the proposed algorithms. \section{Quantum Circuit Implementation of the Unitary Operator $U_{\vec{\Phi}}$ and Complex Hadamard Matrix Representation} \label{sec4} The optical set-up composed of multi-WPDs can be formulated and simulated with quantum circuits by modeling BSs with rotation gates \cite{amico2020simulation}. First, the BS operator is simplified to create equal propagation probability through the paths with $\theta_k = \pi \, / \, 4$ for $k \in [1, n+1]$, resulting in the path splitting operation of $R_{X}(-\pi/2) \equiv \dfrac{1}{\sqrt{2}}\begin{bmatrix}1 & \imath\\ \imath & 1\end{bmatrix}$. It is equal to a rotation by $-\pi \, / \, 2$ around the X-axis of the Bloch sphere. It resembles the BS operation more closely than a Hadamard gate does. Secondly, the phase shift in the upper path of each BS is set to zero with $\phi_{k,0} = 0$ for $k \in [1, n+1]$ since the difference between the upper- and lower-path phase shifts can be reflected as a global phase on the final quantum state. It results in the phase shift operators defined as $\Phi_{k} \, \equiv \, \begin{bmatrix} 1 & 0 \\ 0 & e^{\imath \, \phi_{k}}\\ \end{bmatrix}$ for $k \in [1, n +1]$. Thirdly, the unitary operator on the upper path is chosen as the Identity operator $I \equiv \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$ while the lower path is set to the Pauli-X operator $X \equiv \begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}$. In other words, as the photon passes through the WPD, it applies a CNOT gate to the current state of the WPD. For example, if the initial state of the $k$th WPD is denoted as $\ket{D_k} = \ket{0}_k$ and the photon state after the $k$th BS is equal to $\alpha \, \ket{0} + \, \beta \, \ket{1}$, then the entangled state of WPD and photon becomes $\alpha \, \ket{0} \ket{0}_k + \, \beta \, \ket{1} \ket{1}_k$. If the initial state of the $k$th WPD is $\ket{1}_k$, then it produces $\alpha \, \ket{0} \ket{1}_k + \, \beta \, \ket{1} \ket{0}_k$. Finally, we do not constrain the initial state of the photon to $\ket{0}$; a complex superposition of $\ket{0}$ and $\ket{1}$, i.e., $\alpha_0 \, \ket{0} \, + \, \alpha_{1} \, \ket{1}$, is allowed. The quantum circuit implementation of the linear optical set-up is shown in Fig. \ref{fig2}(b). It results in the unitary operator denoted with $U_{\vec{\Phi}}$ in the family of complex Hadamard matrices which are composed of unit-magnitude entries with arbitrary phases \cite{tadej2006concise}. These matrices are important for quantum information theory and computing, a famous example being the quantum Fourier transform (QFT). In this article, a novel method is proposed to realize a specific form of complex Hadamard matrices. The formation process is analyzed by calculating the elements of $U_{\vec{\Phi}}$ for the circuit composed of phase shift gates of $\vec{\Phi} \equiv [\phi_1 \, \phi_2 \hdots \phi_{n+1} ]$.
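Before analyzing the elements, it is illustrative to assemble $U_{\vec{\Phi}}$ numerically for a small number of qubits. The following minimal numpy sketch follows the gate sequence of Fig. \ref{fig2}(b) as we read it (BS, then phase shifter on the lower path, then CNOT onto the WPD qubit; helper names are our own), with the qubit ordering matching $l = Q\, 2^n + \sum_j D_j \, 2^{j-1}$: \begin{verbatim}
import numpy as np
from functools import reduce

def embed(gate, pos, nq):
    # single-qubit gate at position pos (0 = most significant = photon)
    ops = [np.eye(2)] * nq
    ops[pos] = gate
    return reduce(np.kron, ops)

def cnot(ctrl, targ, nq):
    P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
    X = np.array([[0., 1.], [1., 0.]])
    a = [np.eye(2)] * nq; a[ctrl] = P0
    b = [np.eye(2)] * nq; b[ctrl] = P1; b[targ] = X
    return reduce(np.kron, a) + reduce(np.kron, b)

def U_Phi(phi):
    # phi = [phi_1, ..., phi_{n+1}]; qubit order (Q, D_n, ..., D_1)
    n = len(phi) - 1
    nq = n + 1
    Rx = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # symmetric BS
    U = np.eye(2**nq, dtype=complex)
    for k in range(1, n + 2):
        step = embed(np.diag([1, np.exp(1j * phi[k - 1])]), 0, nq) \
               @ embed(Rx, 0, nq)
        if k <= n:                      # WPD_k records the photon's path
            step = cnot(0, n + 1 - k, nq) @ step
        U = step @ U
    return U

U = U_Phi(np.random.uniform(0, 2 * np.pi, size=4))   # n = 3 WPDs
assert np.allclose(U.conj().T @ U, np.eye(16))       # unitarity
\end{verbatim} The unitarity check passes for arbitrary phase shift values, consistent with the construction being a product of unitary gates.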
Assume that the computational basis state for the measured photon state is denoted with $\ket{e_{n+1}}$, where $e_{n+1}$ is $0$ or $1$, and similarly the $k$th detector output with $\ket{d_{k}}$ (removing the subscript $k$ under the ket for simplicity). Then, the elements of $U_{\vec{\Phi}}$ denoted with $U_{\vec{\Phi}}(k,l)$ are calculated as follows: \begin{eqnarray} U_{\vec{\Phi}}(k,l) & \equiv & \bra{e_{1}} \bra{e_{2}} \hdots \bra{e_{n+1}} \, U_{\vec{\Phi}} \, \ket{Q} \,\ket{D_n} \hdots \ket{D_{1}} \\ & \equiv & \bra{e_{1}} \bra{e_{2}} \hdots \braket{e_{n+1} \vert \Psi_{n+1}} \end{eqnarray} where $ \ket{Q} \, \ket{D_n} \hdots \ket{D_{1}} $ denotes the initial state for the photon with $\ket{Q}$ and the $k$th WPD with $\ket{D_k} $ in the computational basis states while $k = \sum_{j=1}^{n+1} e_{j} \, 2^{j-1}$ and $l = Q\, 2^n \, + \, \sum_{j=1}^{n} D_{j} \, 2^{j-1}$. \begin{figure*}[t!] \includegraphics[width=6.5in]{fig3.eps} \caption{Iterative approach to calculate $U_{\vec{\Phi}}(k,l) \equiv \bra{e_{1}} \bra{e_{2}} \hdots \bra{e_{n+1}} \, U_{\vec{\Phi}} \, \ket{Q} \,\ket{D_n} \hdots \ket{D_{1}}$ resulting in a special form of complex Hadamard matrix representation.} \label{fig3} \end{figure*} Tracing the quantum state, $U_{\vec{\Phi}}(k,l)$ is calculated in classical polynomial time in an iterative manner as described next and as shown in Fig. \ref{fig3}. At each time step, the normalized amplitude of the photon basis state $\ket{0}$ entangled with the WPD states is denoted as $a_0 \, + \, \imath \, b_0 $ while the one for $\ket{1}$ is denoted with $a_1 \, + \, \imath \, b_1 $. Before passing to the next step, the information about the measured state $\ket{e_j}$ allows us to choose either $\ket{D_j}$ or $\ket{\overline{D_j}}$ for $j \in [1, n]$ to continue with in the calculation of $U_{\vec{\Phi}}(k,l)$. This operation at the same time measures the photon state to either $\ket{0}$ or $\ket{1}$ due to entanglement with the $j$th WPD state. Therefore, at each time step, we are left with the photon state being either $\ket{0}$ or $\ket{1}$ with the amplitudes $(a_0 \, + \, \imath \, b_0 )$ and $(a_1 \, + \, \imath \, b_1 )$. We need to track only these amplitudes depending on the pairs $D_j$ and $e_j$ for the $j$th WPD state. The iterative approach allows us to obtain $U_{\vec{\Phi}}(k,l)$ with the algorithmic evaluation framework shown in Fig. \ref{fig3} where $R(\alpha) \equiv \begin{bmatrix} \cos(\alpha) & -\sin(\alpha)\\ \sin(\alpha) & \cos(\alpha)\end{bmatrix}$ denotes a counter-clockwise rotation of the two-dimensional vector composed of the real and imaginary components of a complex number $z$, i.e., $z = a_0 \, + \, \imath \, b_0 $ is rotated as $R(\alpha) \, \begin{bmatrix} a_0 \\ b_0 \end{bmatrix} $. After the final $(n+1)^{th}$ step, there are two possible amplitudes for the photon state, one of which is chosen depending on $\ket{e_{n+1}}$.
For example, at time $t_1$, $\ket{\Psi_1}$ is as follows where we reorder the position of $\ket{Q}$ to emphasize the entanglement with the first WPD state depending on the path: \begin{eqnarray} \ket{\Psi_1} = \frac{1}{2^{1/2}} \, && \, \ket{D_n \, \hdots \, D_3 \,D_2} \, \otimes \, \bigg( (a_0 \, + \, \imath \, b_0 )\, \ket{0} \ket{D_1} \, + \, (a_1 \, + \, \imath \, b_1 )\, \ket{1} \ket{\overline{D_1}} \bigg) \end{eqnarray} and the following is obtained based on the first pair of $e_1$ and $D_1$ values: \begin{eqnarray} \braket{e_1 \vert \Psi_1} = \frac{1}{2^{1/2}} \, \, \ket{D_n \, \hdots \, D_3 \,D_2} \, \otimes \, \bigg( (a_0 \, + \, \imath \, b_0 )\, \ket{0} \braket{e_1 \vert D_1} \, + \, (a_1 \, + \, \imath \, b_1 )\, \ket{1} \braket{e_1 \vert \overline{D_1}} \bigg) \end{eqnarray} where $\overline{D_1}$ denotes the NOT of $D_1$ while $(a_0 \, + \, \imath \, b_0 )$ and $(a_1 \, + \, \imath \, b_1 )$ are calculated by $R_X(-\pi/2)\, \ket{Q}$ and the consecutive phase shift $\phi_1$ depending on the values of $Q$ and $e_1$. Therefore, if $e_1 = D_1$, the amplitude passed on to the next step is $(a_0 \, + \, \imath \, b_0 )$ with $\ket{0}$ while if $e_1 = \overline{D_1}$, the amplitude passed on to the next step is $(a_1 \, + \, \imath \, b_1 )$ with $\ket{1}$. In fact, $R_{X}(-\pi \,/ \, 2) $ transforms $\ket{0}$ into $(\ket{0} \, + \, e^{\imath \, \pi \, / \, 2}\, \ket{1}) \,/ \, \sqrt{2}$ and $\ket{1}$ into $(e^{\imath \, \pi \, / \, 2} \, \ket{0} \, + \, \ket{1}) \,/ \, \sqrt{2}$ where the effect of complex $\imath$ is modeled as a $\pi \, / \, 2$ rotation in the counter-clockwise direction on the complex amplitudes of $\ket{0}$ or $\ket{1}$, i.e., $R(\pi \, / \, 2)$. Therefore, if initially $Q \, = \, 0$, i.e., $a_0 \, = \, 1$ and $b_0 \, = \, 0$, and $e_1 \, = \, D_1$, then $R(0)$, i.e., no change, is applied on the state, resulting in $a_0 \, = \,1$, $b_0 \, = \, 0$, $a_1 \, = \, 0$, $b_1 \, = \, 0$. If initially $Q \, = \, 0$ and $e_1 \, = \, \overline{D_1}$, then both the phase factors $e^{\imath \, \pi \, / \, 2}$ and $e^{\imath \, \phi_1}$ multiply $\ket{1}$, and the result becomes $a_1 \, +\, \imath \, b_1 = \, R( \phi_1 \, + \, \pi \, / \, 2)\,(a_{0} \, + \, \imath \, b_0)$ where $a_0 \, = \, 1$ and $b_0 \, = \, 0$. Similarly, the iterative approach allows us to track the amplitudes of $\ket{0}$ and $\ket{1}$ up to the $(n+1)^{\mbox{th}}$ step. In this step, $R(0)$ and $R(\phi_{n+1} \, + \, \pi / \, 2)$ are applied on $(a_0 \, + \, \imath \, b_0)$ coming from the previous step in order to calculate the amplitudes of $\ket{0}$ and $\ket{1}$ for the final step, i.e., $(a_{0} \, + \, \imath \, b_0)$ and $ (a_1 \, + \, \imath \, b_1 )$, respectively. Similarly, $R(\pi \, / \, 2)$ and $R(\phi_{n+1})$ are applied on $(a_1 \, + \, \imath \, b_1)$ coming from the previous step to calculate the amplitudes of $\ket{0}$ and $\ket{1}$.
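The bookkeeping just described is compact enough to implement directly. The following minimal sketch (function and variable names are our own) tracks the surviving amplitude through the $n$ conditioning steps and the final BS, and checks that the assembled matrix is unitary; the closed-form matrix-product expression formalizing this iteration is given next: \begin{verbatim}
import numpy as np
from itertools import product

def element(phi, Q, D, e, e_out):
    # <e_out, e_n ... e_1 | U_Phi | Q, D_n ... D_1> by amplitude tracking;
    # phi = [phi_1, ..., phi_{n+1}], D and e are length-n bit lists
    n = len(D)
    z, photon = 1.0 + 0.0j, Q
    for j in range(n):
        if e[j] == D[j]:     # keep the |0>-path branch: R(0) or R(pi/2)
            z *= 1.0 if photon == 0 else 1j
            photon = 0
        else:                # keep the |1>-path branch
            z *= np.exp(1j * (phi[j] + np.pi / 2)) if photon == 0 \
                 else np.exp(1j * phi[j])
            photon = 1
    if e_out == 0:           # final BS and phase shifter phi_{n+1}
        z *= 1.0 if photon == 0 else 1j
    else:
        z *= np.exp(1j * (phi[n] + np.pi / 2)) if photon == 0 \
             else np.exp(1j * phi[n])
    return z / 2 ** ((n + 1) / 2)

# sanity check: the assembled matrix for n = 2 is unitary
n = 2
phi = np.random.uniform(0, 2 * np.pi, n + 1)
U = np.array([[element(phi, Q, list(Db), list(eb), eo)
               for Q in (0, 1) for Db in product((0, 1), repeat=n)]
              for eo in (0, 1) for eb in product((0, 1), repeat=n)])
assert np.allclose(U.conj().T @ U, np.eye(2 ** (n + 1)))
\end{verbatim}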
The iterative approach provides a classical polynomial-time complexity formulation for the calculation of $U_{\vec{\Phi}}(k,l)$ as follows: \begin{equation} U_{\vec{\Phi}}(k,l) = \frac{1}{2^{(n+1)/2}} \, \vec{u}_{e_{n+1}}^T \, \bigg( \prod_{j = 1}^{n}\mathbf{M}_{j, D_j, e_{j}} \bigg) \, \vec{v}_{Q} \end{equation} where $\prod_{j = 1}^{n} \mathbf{A}_{j}$ denotes the matrix product $\mathbf{A}_{n} \, \mathbf{A}_{n-1}\, \hdots \, \mathbf{A}_{1}$ from the right to the left (with the same notation in the following discussions), $k = \sum_{j=1}^{n+1} e_{j} \, 2^{j-1}$ and $l = Q\, 2^n \, + \, \sum_{j=1}^{n} D_{j} \, 2^{j-1}$, and the following are defined: \begin{equation} \label{eq6and7} \vec{v}_{l}= \begin{cases} \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T, & \text{if } l \, = \, 0 \\ \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}^T, & \text{if } l \, = \, 1 \\ \end{cases}; \hspace{1in} \vec{u}_{l}= \begin{cases} \begin{bmatrix} 1 & \imath & 0 & 0 \end{bmatrix}^T, & \text{if } l \, = \, 0 \\ \begin{bmatrix} 0 & 0 & 1 & \imath \end{bmatrix}^T, & \text{if } l \, = \, 1 \\ \end{cases} \end{equation} \begin{equation} \label{eq5} \mathbf{M}_{j, D_j, e_{j}}= \begin{cases} \begin{bmatrix} \mathbf{I}_2 & R(\frac{\pi}{2}) \\ \mathbf{0}_2 & \mathbf{0}_2 \end{bmatrix} , & \text{if } e_{j} \, = \, D_j \\ \\ \begin{bmatrix} \mathbf{0}_2 & \mathbf{0}_2 \\ R(\phi_j \, + \, \frac{\pi}{2} ) & R(\phi_j) \end{bmatrix} , & \text{if } e_{j} \, = \, \overline{D}_j \\ \end{cases} \end{equation} where $\mathbf{M}_{j, D_j, e_{j}}$ for $j \in [1,n-1]$ is a $4 \times 4$ block matrix composed of the rotation matrix $R(\alpha) \equiv \begin{bmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha)\\ \end{bmatrix} $, $\mathbf{I}_2 \equiv \begin{bmatrix} 1 & 0 \\ 0 & 1\\ \end{bmatrix} $ and $\mathbf{0}_2 \equiv \begin{bmatrix} 0 & 0 \\ 0 & 0\\ \end{bmatrix} $ and $\mathbf{M}_{n, D_n, e_{n}}$ is defined as follows: \begin{equation} \mathbf{M}_{n, D_n, e_{n}} = \begin{cases} \begin{bmatrix} \mathbf{I}_2 & \mathbf{0}_2 \\ R(\phi_{n+1} \, + \, \frac{\pi}{2} ) & \mathbf{0}_2 \end{bmatrix} \, \begin{bmatrix} \mathbf{I}_2 & R(\frac{\pi}{2}) \\ \mathbf{0}_2 & \mathbf{0}_2 \end{bmatrix} , & \text{if } e_{n} \, = \, D_n \\ \\ \begin{bmatrix} \mathbf{0}_2 & R(\frac{\pi}{2}) \\ \mathbf{0}_2 & R(\phi_{n+1} ) \end{bmatrix} \, \begin{bmatrix} \mathbf{0}_2 & \mathbf{0}_2 \\ R(\phi_n \, + \, \frac{\pi}{2} ) & R(\phi_n) \end{bmatrix} , & \text{if } e_{n} \, = \, \overline{D}_n \\ \end{cases} \end{equation} As a result, the unit-magnitude output amplitude for any input and output basis states is easily calculated with $\mathcal{O}(n)$ multiplications of complex $4 \times 4$ matrices. \section{Eigenstructure of the Designed Unitary Operator $U_{\vec{\Phi}}$} \label{sec5} We order the input qubits in the reverse manner with respect to the ordering of WPDs, i.e., the bottom qubit, which is the photon input state to the BS, is denoted with $\ket{j_0} \equiv \ket{Q}$, the initial state of the $n$th WPD as $\ket{j_1} \equiv \ket{D_n}$, and similarly $\ket{j_{n+1-l}} \equiv \ket{D_l}$ for $l \in [1, n]$. The proposed re-ordering of the qubits is shown in Fig. \ref{fig4}. The same ordering of qubits should be kept in the output with output qubit numbers starting from $\ket{k_{0}} \equiv \ket{e_{n+1}}$ for the photon output state and $\ket{k_{n+1-l}} \equiv \ket{e_l}$ for $l \in [1, n]$.
Therefore, the elements of the unitary operator in this new re-ordered qubit basis are calculated as follows: \begin{eqnarray} \label{eq1and2} \braket{k \,\vert \,U_{\vec{\Phi}} \, \vert \, j} \, &\equiv &\, \bra{k_{0} \, \hdots k_{n}} \, U_{\vec{\Phi}} \, \ket{j_n \hdots j_0} = \, \frac{\vec{u}_{k_0}^T}{2^{(n+1)/2}} \, \bigg( \prod_{l = 1}^{n}\mathbf{M}_{l, j_{n+1-l}, k_{n+1-l}} \bigg) \, \vec{v}_{j_0}\hspace{0.2in} \end{eqnarray} where $k = \sum_{l=0}^{n} k_l \, 2^{l}$ and $j = \sum_{l=0}^{n} j_l \, 2^{l}$. Next, the analysis and supporting evidence of Conjecture-1 are presented by showing the favorable form of the eigenvectors. \begin{figure}[t!] \includegraphics[width=2.5in]{fig4} \caption{Indexing of inputs and outputs for the quantum circuit implementation of $U_{\vec{\Phi}}$.} \label{fig4} \end{figure} \subsection{Analysis and Supporting Evidence of Conjecture-1} \label{sec5a} Assume that the input to $U_{\vec{\Phi}}$ is a state $\ket{E}_{m,\vec{s}, \vec{\alpha}}$ defined as follows: \begin{eqnarray} \label{eq8and9} \ket{E}_{m,\vec{s}, \vec{\alpha}} & \, \equiv \, & \sum_{b=0}^{T} \, \sum_{j=0}^{M} B_{j} \ket{j+ b\, 2^m} \\ \label{eq10} & \, \equiv \, & H^{\otimes n} \ket{0 \, \hdots \, 0 \,s_{m-1} \,s_{m-2} \hdots \,s_1} (\alpha_{0} \, \ket{0} \, + \, \alpha_{1} \, \ket{1}) \\ \label{eq11} & \, = \, & \frac{1}{(\sqrt{2})^n} \, \sum_{b_{n-m}, \, \hdots, \, b_0 } \, \ket{b_{n-m} \hdots b_0} \, \sum_{j_{m-1}, \, \hdots, \, j_1} e^{\imath \, \pi \, \sum_{l=1}^{m-1} j_l \, s_l } \ket{j_{m-1} \hdots j_1} \sum_{j_{0}=0}^{1} \alpha_{j_0} \, \ket{j_0} \hspace{0.4in} \end{eqnarray} where the state is a separable (not entangled) state with some periodic amplitude $B_j$ of period $2^m$, $\vec{\alpha} \equiv [ \alpha_0 \,\, \alpha_1]^T$, $s_{l}$ being either $0$ or $1$ for $l \in [1, m \,- \,2]$, $s_{m-1} \,= \,1$, $M \equiv 2^{m}-1$, $T \equiv 2^{n \, - \, m \, + \, 1}-1$, $b \,\equiv \, \sum_{l=0}^{n-m} b_l \, 2^{l}$, $j \,\equiv \, \sum_{l=0}^{m-1} j_l \, 2^{l}$ and $(\alpha_0 \, \ket{0} + \alpha_1 \ket{1} )$ is the initial state of the photon. We denote $\vec{s} \equiv \left[s_{1} \, \, \hdots \, \, s_{m-1}\right]$ as the vector of the input state. The value $s_{m-1} = 1$ is chosen in order to realize periodicity of the amplitudes $B_j$ of the basis states with $2^m$ since the value $s_{m-1} = 0$ would increase the number of bits in $\ket{b_{n-m} \hdots b_0}$ by one while reducing the periodicity to $2^{m-1}$. Therefore, there are $2^{m-2}$ possible $\vec{s}$ vectors with varying binary values of $s_{l}$ for $l \, \in [1, m-2]$. In fact, the eigenvector amplitudes are discrete periodic vectors having periodicity $2^m$ for $m \in [1, n+1]$, also including the periodicity of $2$ for the all-zero input vector. We aim to calculate $ \braket{ k + t \, 2^m \vert U_{\vec{\Phi}} \, \vert E_{m,\vec{s}, \vec{\alpha}} }$ for $k \in [0, M]$ and $t \in [0, T]$ by using (\ref{eq1and2}). Firstly, if $\ket{\overline{j}} \equiv \ket{j+ b\, 2^m}$ is defined, then it is easy to observe that $\overline{j}_l = j_l$ for $l \in [0, m-1]$ and $\overline{j}_l = b_{l-m}$ for $l \in [m, n]$ as shown in (\ref{eq8and9}-\ref{eq11}). Similarly, if $\ket{\overline{k}} \equiv \ket{k+ t\, 2^m}$ is defined, then $\overline{k}_l = k_l$ for $l \in [0, m-1]$ and $\overline{k}_l = t_{l-m}$ for $l \in [m, n]$.
Then, the following is obtained by using (\ref{eq1and2}-\ref{eq11}): \begin{eqnarray} \braket{ k + t \, 2^m \, \vert \, U_{\vec{\Phi}} \, \vert E_{m,\vec{s}, \vec{\alpha}} } \equiv \frac{ \vec{u}_{k_0}^T}{2^{n} \, \sqrt{2}} \, \bigg(\prod_{l=1}^{m-1} \mathbf{\widetilde{M}}_{m-l, s_{m-l}, k_{m-l}} \bigg) \mathbf{K}_{n-m+1} \, \big(\alpha_0 \, \vec{v}_{0} \, + \, \alpha_1 \, \vec{v}_{1} \big) && \end{eqnarray} where the product matrix $\mathbf{K}_{a}$, which becomes independent of $t$, is defined as follows: \begin{eqnarray} \mathbf{K}_{a} & \equiv & \sum_{b_0=0}^{1} \hdots \sum_{b_{a-1}=0}^{1} \mathbf{M}_{a, b_{0}, t_{0}} \hdots \, \mathbf{M}_{2, b_{a-2}, t_{a-2}} \, \mathbf{M}_{1, b_{a-1}, t_{a-1}} \\ & = & \prod_{l=1}^{a} \bigg( \begin{bmatrix} \mathbf{I}_2 & R(\frac{\pi}{2}) \\ \mathbf{0}_2 & \mathbf{0}_2 \end{bmatrix} + \begin{bmatrix} \mathbf{0}_2 & \mathbf{0}_2 \\ R(\phi_l \, + \, \frac{\pi}{2} ) & R(\phi_l) \end{bmatrix} \bigg) \end{eqnarray} and $ \mathbf{\widetilde{M}}_{a, s_a, k_a} \equiv e^{\imath \, \pi \, k_{a}\, s_{a}} \, \mathbf{\widehat{M}}_{{a}, s_{a}}$ and $\mathbf{\widehat{M}}_{a, s_{a}} \equiv \mathbf{M}_{n-a+1, 0, 0} \,+ \, e^{\imath \, \pi \, s_{a}} \, \mathbf{M}_{n-a+1, 1, 0}$. Then, the resulting simplified expression is obtained: \begin{eqnarray} \label{eq13} \braket{ k + t \, 2^m \, \vert \, U_{\vec{\Phi}} \, \vert E_{m,\vec{s}, \vec{\alpha}} } \, = \, e^{\imath \, \pi \, \sum_{l=1}^{m-1} k_l \, s_l } \sum_{j_0=0}^1 \big( \varrho_{j_0, k_0} \, \alpha_{j_0} \big) \hspace{0.2in} \end{eqnarray} where $\varrho_{j_0, k_0} \equiv \vec{u}_{k_0}^T \,\mathbf{V}_{\vec{s}} \, \vec{v}_{j_0} \, / \, (2^{n} \, \sqrt{2})$ for $j_0 =0$ or $1$ and $\mathbf{V}_{\vec{s}} \equiv \big(\prod_{l=1}^{m-1} \mathbf{\widehat{M}}_{m-l, s_{m-l}} \big) \, \mathbf{K}_{n-m+1} $. Observe that $\mathbf{V}_{\vec{s}}$ is easily calculated with $\mathcal{O}(n)$ multiplications of complex $4 \times 4 $ matrices where the matrices depend on $\vec{\Phi}$, i.e., the phase shift values in each step. If there exists a solution (depending on the provided values $\phi_k$ for $k \in [1, n+1]$) for $\alpha_0$, $\alpha_1$ and the eigenvalue $e^{\imath \, \lambda_{m, \vec{s}, \vec{\alpha}}}$ by solving the equation $ \sum_{j_0=0}^1 \big( \varrho_{j_0, k_0} \, \alpha_{j_0} \big) = e^{\imath \, \lambda_{m, \vec{s}, \vec{\alpha}}} \, \alpha_{k_0} \, / \, (\sqrt{2})^n$ for $k_0 = 0$ and $1$ simultaneously, then by using (\ref{eq11}) and (\ref{eq13}) it is observed that $\ket{E}_{m,\vec{s}, \vec{\alpha}}$ becomes an eigenvector of $U_{\vec{\Phi}}$ as follows: \begin{eqnarray} U_{\vec{\Phi}} \, \ket{E}_{m,\vec{s}, \vec{\alpha}} & = & e^{\imath \, \lambda_{m, \vec{s}, \vec{\alpha}}} \, \ket{E}_{m,\vec{s}, \vec{\alpha}} \end{eqnarray} where the eigenvalue is equal to the following: \begin{equation} \label{eq14} e^{\imath \, \lambda_{m, \vec{s}, \vec{\alpha}}} \, = \, \frac{\varrho_{0, 0} \, \alpha_0 + \varrho_{1, 0} \, \alpha_1}{\alpha_0 \, / \, (\sqrt{2})^n} =\frac{\varrho_{0, 1} \, \alpha_0 + \varrho_{1, 1} \, \alpha_1}{\alpha_1 \, / \, (\sqrt{2})^n} \end{equation} We can constrain $\alpha_0$ and $\alpha_1$ further by setting one of them as real, e.g., similar to the Bloch sphere representation omitting a global phase, i.e., $\alpha_0 \ket{0} \, + \, \alpha_1 \, \ket{1} \equiv \cos(\varphi \, / \, 2) \, \ket{0} \, + \, e^{\imath \, \gamma} \sin(\varphi \, / \, 2) \, \ket{1}$. Assume that $\alpha_0 = a_0 \, + \, \imath \, b_0 $ and $\alpha_1 = a_1 \, + \, \imath \,b_{1}$ where either $b_0$ or $b_1$ is equal to zero.
Another constraint is that the eigenvector in (\ref{eq11}) should be normalized so that $\vert \alpha_0 \vert^2 \, + \,\vert \alpha_1 \vert^2 = 1$. Then, the four unknowns, i.e., $a_0$, $b_0$, $a_1$ and $b_1$, can be solved from the four polynomial equalities, one of which has complex coefficients. The first equality is due to $\vert \sum_{j_0=0}^1 \big( \varrho_{j_0, k_0} \, \alpha_{j_0} \big) \vert^2$ being equal to $\vert \alpha_{k_0} \vert ^2 \, / \, 2^n$ for $k_0 \, = \, 0$ or $1$: \begin{eqnarray} \label{eq15} \vert \varrho_{0, k_0} \, (a_0 + \, \imath \, b_0) + \varrho_{1, k_0} \, \big(a_1 \, + \,\imath \, b_1\big) \vert^2 & = & (a_{k_0}^2 \, + \, b_{k_0}^2) \, / \, 2^n \hspace{0.4in} \end{eqnarray} The second equality is due to the expanded form of (\ref{eq14}) by excluding the eigenvalue as follows: \begin{eqnarray} \label{eq16} (a_1 + \, \imath \, b_1) \big( (a_0 + \imath \, b_0) \varrho_{0, 0} + (a_1 + \imath \, b_1) \varrho_{1, 0} \big) -(a_0 + \imath \, b_0) \big( (a_0 + \, \imath \, b_0) \varrho_{0, 1} + (a_1 + \imath \, b_1) \varrho_{1, 1} \big) = 0 \hspace{0.4in} \end{eqnarray} Finally, the third and fourth equalities are as follows: \begin{eqnarray} \label{eq17} a_0^2 \, + \, b_0^2 + \, a_1^2 \, + \, b_1^2 & \, = \, & 1 \\ \label{eq18} b_0 \, b_1 &\, = \, & 0 \end{eqnarray} It can be observed that if $b_1 = 0$, and $(\alpha_0 = a_0 \, + \, \imath \, b_0, \alpha_1 = a_1)$ is a solution to the equations, then $(\alpha_0 = a_1, \alpha_1 = - a_0 \, + \, \imath \, b_0)$ (or its multiple by some phase factor) is also a solution, creating a duality for the single photon state for the same $\vec{s}$. The trigonometric proof is provided in Appendix \ref{appB} with some minor open issues. Observe that $\mathbf{V}_{\vec{s}}$ includes multiplications of $4 \times 4$ matrices composed of $2 \times 2$ block matrices applying rotations depending on $\phi_k$ for $k \in [1, n+1]$. It is an open issue to determine the conditions on $\phi_k$ for $k \in [1, n+1]$ under which there exist solutions satisfying the four equalities (\ref{eq15}-\ref{eq18}) and the duality condition with all distinct eigenvalues so that the conditions and assumptions in Theorem-1, and consequently Conjecture-1, are satisfied. In fact, based on extensive numerical analysis, it can be hypothesized that uniformly distributed phase shift values result in distinct eigenvalues satisfying the proposed assumptions. \section{Open Problems} \label{sec6} \begin{itemize} \item The existence of the specific eigenstructure form of $U_{\vec{\Phi}}$ definitely depends on the chosen phase shift values $\phi_k$ for $k \in [1, n+1]$. It is an open issue under which conditions it provides such a favorable eigenstructure to be utilized in Theorem-1. In numerical analysis, it is observed that uniformly distributed random values of $\phi_k$ for $k \in [1, n+1]$ produce operators $U_{\vec{\Phi}}$ with the desired structure, providing numerical evidence. \item The quantum polynomial-time solution in Theorem-1 depends on the existence of black-boxes or oracles, i.e., controlled-$U_{\vec{\Phi}}^{2^j}$ operations for $j \in [0, \widetilde{t} \, - \,1]$. We have provided a design of $U_{\vec{\Phi}}$ based on linear optics and WPDs while also providing a polynomial size quantum circuit implementation. It is an open issue how to implement exponentially large powers of the designed $U_{\vec{\Phi}}$ with polynomial size quantum circuits.
Is there any specific set of phase shift values $\phi_k$ for $k \in [1, n+1]$ so that Conjecture-1 is satisfied and, at the same time, polynomial size quantum circuits exist for implementing $U_{\vec{\Phi}}^{2^j}$ based on the proposed linear optical design? \item Is there any other practical design of $U_{\vec{\Phi}}$ having the desired eigenstructure so that its exponentially large powers can be implemented with polynomial size quantum circuits? \item What are the effects of noise in the input state and how are the estimation errors modeled? \item How can the proposed unitary operator design be exploited in QST of mixed state inputs? \item The designed operator $U_{\vec{\Phi}}$ has eigenvalues and eigenvectors that are easily calculated by using multiplications of matrix factorizations, as shown in Section \ref{sec5}. It is conjectured to act as a one-way function, promising for use in various cryptographic algorithms; exploring this is left as an open issue. \end{itemize} \begin{acknowledgments} This work was supported by TUBITAK (The Scientific and Technical Research Council of Turkey) under Grant $\#$119E584. \end{acknowledgments}
\section{Introduction} In this paper, we present a deterministic algorithm for unsupervised generative modeling on strings using tensor networks. The algorithm runs in a fixed number of steps, and the resulting model has a perfect sampling algorithm that allows efficient sampling from marginal distributions, or sampling conditioned on a substring. The algorithm is inspired by the density matrix renormalization group (DMRG) procedure \cite{schollwock,white,stoud_schwab}. This approach, at its heart, involves only simple linear algebra, which allows us to give a detailed ``under the hood'' look at the algorithm in action. Our analysis illustrates how to interpret the trained model and how to go beyond worst-case bounds on generalization errors. We work through the algorithm with an exemplar dataset to produce a prediction for the generalization error as a function of the fraction of data used in training, which closely approximates the generalization error observed in experiments. The machine learning problem of interest is to learn a probability distribution on a set of sequences from a finite training set of samples. For us, an important technical and conceptual first step is to pass from \emph{Finite Sets} to \emph{Functions on Finite Sets}. Functions on sets have more structure than sets themselves and we find that the extra structure is meaningful. Furthermore, well-understood concepts and techniques in quantum physics give us powerful tools to exploit this extra structure without incurring significant algorithmic costs \cite{mps}. We emphasize that it is not necessary that the datasets being modeled have any inherently quantum properties or interpretation. The inductive bias of the model can be understood as a kind of low-rank factorization hypothesis---a point we expand upon in this paper. Reduced density operators play a central role in our model. In a happy coincidence, they play the central role in both the model's theoretical inspiration and the training algorithm. There is structure in reduced densities that inspires us to model classical probability distributions using a quantum model. The training algorithm amounts to successively matching reduced densities, a process which leads inevitably to a tensor network model, which may be thought of as a sequence of compatible autoencoders. We refer readers unfamiliar with tensor diagram notation to references such as \cite{tensornetwork, tensors,orus}. This paper also builds on investigations of tensor networks as models for machine learning tasks. Tensor networks have been demonstrated to give good results for supervised learning and regression tasks \cite{Novikov:2016,stoud_schwab,Stoudenmire:2018L,Glasser:2018,Guo:2018,Evenbly:2019,Liu:2019,Leichenauer:2019}. They have also been applied successfully to unsupervised, generative modeling \cite{Han:2018,Li:2018,Stokes:2019,Cheng:2019} including a study based on the parity dataset we use here \cite{Stokes:2019}. This work focuses on the latter task, proposing and studying an alternative algorithm for optimizing MPS for generative modeling. The expressivity of models like the one considered in this paper has been studied \cite{Glasser:2019}. In this paper, we focus on understanding how our training algorithm learns to generalize.
\subsection*{Acknowledgments} The authors thank Gabriel Drummond-Cole, Glen Evenbly, James Stokes, and Yiannis Vlassopoulos for helpful discussions, and are happy to acknowledge KITP Santa Barbara, the Flatiron Institute, and Tunnel for support and excellent working conditions. \section{Densities and reduced densities} For our purposes, the passage from classical to quantum can be thought of as the passage from \emph{Finite Sets} to \emph{Functions on Finite Sets}, which have a natural Hilbert space structure. We are interested in probability distributions on finite sets. The quantum version of a probability distribution is a density operator on a Hilbert space. The quantum version of a marginal probability distribution is a reduced density operator. The operation that plays the role of marginalization is the partial trace. In our setup, the reduced densities contain more information than the marginal distributions associated to them and much of our work concerns this extra information. Given a finite set $S$, one has the free vector space $V=\mathbb{C}^S$ consisting of complex-valued functions on $S$, which is a Hilbert space with inner product \begin{equation*} \<f|g\> = \sum_{s \in S}\overline{f(s)} g(s). \end{equation*} The free vector space comes with a natural map $S\to \mathbb{C}^S$, which we recall in a moment. To avoid confusion, it is helpful to use notation to distinguish between an element $s\in S$ and its image in $\mathbb{C}^S$, which is a vector. Commonly, the vector image of $s$ is denoted with a boldface font or an overset arrow. We like the bra and ket notation, which is better when inner products are involved. For any $s\in S$, let $|s\>$ denote the function $S \to \mathbb{C}$ that sends $s\mapsto 1$ and $s'\mapsto 0$ for $s' \neq s$. The set $\{|s\>\}$ is an independent, orthonormal spanning set for $V$. If one chooses an ordering on the set $S$, say $S=\{s_1, \ldots, s_d\}$, then $|s_j\>$ is identified with the $j$-th standard basis vector in $\mathbb{C}^d$, thus defining an isometric isomorphism $V\xrightarrow{\sim}\mathbb{C}^d$ and a ``one-hot'' encoding $S \hookrightarrow \mathbb{C}^d$. More generally, we denote elements in $V$ by ket notation $|\psi\> \in V$. For any $|\psi\> \in V$, there is a linear functional in $V^*$ whose value on $|\phi\>\in V$ is the inner product $\<\psi|\phi\>$. We denote this linear functional by the succinct bra notation $\<\psi| \in V^*$. Every linear functional in $V^*$ is of the form $\<\psi|$ for some $|\psi\>\in V$. We have vectors $|\psi\> \in V$ and covectors $\<\psi| \in V^*$ and the map \[ |\psi\> \longleftrightarrow \<\psi| \] defines a natural isomorphism between $V$ and $V^*$. We have chosen to distinguish between vectors and covectors with bra and ket notation; we will not imbue upper and lower indices with any special meaning. When several spaces $V, W, \ldots$ are in play, some tensor product symbols are suppressed. So, for instance, if $|\psi\> \in V$ and $|\phi\> \in W$, we will write $|\psi\> |\phi\>$, or even $|\psi \phi\>$, instead of $|\psi\> \otimes |\phi\> \in V\otimes W$. An expression like $|\phi\>\<\psi|$ is an element of $W\otimes V^*$, naturally identified with an operator $V \to W$. The expression $|\psi\>\<\psi|$ is an element in $\End(V)$. Here, $\End(V)$ denotes the space of all linear operators on $V$ and in the presence of a basis is identified with $\dim(V)\times \dim(V)$ matrices.
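These conventions translate directly into elementary linear algebra; a small illustration (the toy set and its ordering are our own choices): \begin{verbatim}
import numpy as np

# one-hot encoding of S = {s1, s2, s3} into C^3: s |-> |s>
ket = {s: np.eye(3)[i] for i, s in enumerate(['s1', 's2', 's3'])}

psi = (ket['s1'] + ket['s2']) / np.sqrt(2)   # a vector |psi> in C^S
bra_psi = psi.conj()                         # the covector <psi|

print(bra_psi @ ket['s1'])            # inner product <psi|s1> = 1/sqrt(2)
P = np.outer(psi, psi.conj())         # |psi><psi|, a 3 x 3 matrix in End(V)
assert np.allclose(P @ psi, psi)      # P maps |psi> to itself
\end{verbatim}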
If $|\psi\>$ is a unit vector, then the operator $|\psi\>\<\psi|$ is orthogonal projection onto $|\psi\>$: it maps $|\psi\> \mapsto |\psi\>$ and maps every vector perpendicular to $|\psi\>$ to zero. A \emph{density operator}, or just \emph{density} for short, is a unit-trace, positive semi-definite linear operator on a Hilbert space. Sometimes a density is called a quantum \emph{state}. If $S$ is a finite set and $V=\mathbb{C}^S$, then a density $\rho:V \to V$ defines a probability distribution on $S$ by defining the probability $\pi_\rho:S \to \mathbb{R}$ by the \emph{Born rule} \begin{equation}\label{BornRule} \pi_\rho(s)= \<s |\rho| s\>. \end{equation} Going the other way, there are multiple ways to define a density $\rho:V \to V$ from a classical probability distribution $\pi$ on $S$ so that $\pi_\rho=\pi$. One way is as a diagonal operator: $\rho_{diag}:=\sum_{s \in S}\pi(s)|s\>\<s|$. Another way is to define \begin{equation}\label{rho_pi} \rho_{\pi} = |\psi\>\<\psi| \text{ where } |\psi\>:=\sum_{s\in S} \sqrt{\pi(s)}|s\>. \end{equation} There exist other densities that realize $\pi$ via the Born rule, but think of the diagonal density and projection onto $|\psi\>$ as two extremes. The density $\rho_{\pi}$ has minimal rank and $\rho_{diag}$ has maximal rank. In the language of quantum mechanics, a state is \emph{pure} if it has rank one and is \emph{mixed} otherwise. The degree to which a state is mixed is measured by its von Neumann entropy, $-\tr(\rho \ln(\rho))$, which ranges from zero in the case of $\rho_{\pi}$ up to the Shannon entropy of the classical distribution $\pi$ in the case of $\rho_{diag}$. In this paper, we always use the pure state $\rho:=\rho_{\pi}$. To summarize, we associate to any probability distribution $\pi:S \to \mathbb{R}$ the density $\rho_{\pi}:V \to V$ defined by Equation \eqref{rho_pi}, which has the property that $\pi_{\rho_\pi} = \pi.$ If a set $S$ is a Cartesian product $S= A \times B$ then the Hilbert space $\mathbb{C}^S$ decomposes as a tensor product $\mathbb{C}^S \cong \mathbb{C}^A\otimes \mathbb{C}^B$. In this case, a density $\rho:\mathbb{C}^A \otimes \mathbb{C}^B \to \mathbb{C}^A \otimes \mathbb{C}^B$ is the quantum version of a joint probability distribution $\pi:A\times B \to \mathbb{R}$. By an operation that is analogous to marginalization, $\rho$ gives rise to two densities $\rho_A:\mathbb{C}^A \to \mathbb{C}^A$ and $\rho_B:\mathbb{C}^B \to \mathbb{C}^B$ which we refer to as \emph{reduced densities}. We now describe this operation, which is called \emph{partial trace}. If $X$ and $Y$ are finite dimensional vector spaces, then $\End(X\otimes Y)$ is isomorphic to $\End(X) \otimes \End(Y)$. Using this isomorphism, there are maps \[ \begin{tikzcd} & \End(X\otimes Y) \arrow[ld, "\tr_Y"'] \arrow[rd, "\tr_X"] & \\ \End(X) & & \End(Y) \end{tikzcd} \] defined by \[ \tr_Y(f\otimes g):= f\;\tr(g) \text{ and } \tr_X(f\otimes g):=g\;\tr(f) \] for $f\in \End(X)$ and $g\in\End(Y)$. The maps $\tr_Y$ and $\tr_X$ are called \emph{partial traces}. The partial trace preserves both trace and positive semi-definiteness and so the image of any density $\rho\in\End(X\otimes Y)$ under partial trace defines \emph{reduced densities} $\tr_Y\rho\in \End(X)$ and $\tr_X\rho \in \End(Y)$. 
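Computationally, the partial trace is a one-line contraction once the density is stored with one index per tensor factor. A minimal sketch with a toy distribution of our own (the fact that the diagonals of the reduced densities recover the classical marginals is derived in the next section): \begin{verbatim}
import numpy as np

# toy joint distribution pi on A x B with |A| = 2, |B| = 3
pi = np.array([[0.1, 0.2, 0.0],
               [0.3, 0.0, 0.4]])

psi = np.sqrt(pi)                          # |psi> = sum sqrt(pi(a,b)) |a>|b>
rho = np.einsum('ab,cd->abcd', psi, psi)   # rho_pi = |psi><psi|
rho_A = np.einsum('abcb->ac', rho)         # tr_Y: trace out the B factor
rho_B = np.einsum('abad->bd', rho)         # tr_X: trace out the A factor

assert np.isclose(np.trace(rho_A), 1.0)               # still a density
assert np.allclose(np.diag(rho_A), pi.sum(axis=1))    # marginal on A
assert np.allclose(np.diag(rho_B), pi.sum(axis=0))    # marginal on B
\end{verbatim}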
It is worth noting that while we have maps $\End(X) \otimes \End(Y) \to \End(X)$ and $\End(X) \otimes \End(Y) \to \End(Y)$, there do not exist natural maps $V\otimes W \to V$ or $V\otimes W \to W$ for arbitrary vector spaces $V$ and $W$; partial trace is special, it is defined in the case that $V$ and $W$ are endomorphism spaces. \subsection{Reconstructing a pure state from its reduced densities}\label{sec:SVD} We now discuss the problem of reconstructing a pure quantum state $\rho$ on a product $X\otimes Y$ from its reduced densities $\rho_X$ and $\rho_Y$. Using the isomorphism $X \cong X^*$ that is available in any finite dimensional Hilbert space, one can view any vector $\ket{\psi}$ in a product of Hilbert spaces $X\otimes Y$ as an element of $X^*\otimes Y$, hence as a linear map $M\colon X\to Y$. Computationally, if $\ket \psi$ is expressed using bases $\{|a\>\}$ of $X$ and $\{|b\>\}$ of $Y$ as \[ \ket \psi = \sum_{a,b} m_{ab} \,|a\>\otimes |b\> \] then the coefficients $\{m_{ab}\}$ of that sum can be reshaped into a $\dim(Y)\times \dim(X)$ matrix $M$. A singular value decomposition (SVD) of $M$ gives a factorization $M=VDU^*$ with $V$ and $U$ unitary and $D$ diagonal as in Figure \ref{SVD}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[y=1.3cm,baseline={(current bounding box.center)}] \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,1) {}; \node[] (L) at (-.3,0) {}; \node[] (R) at (.3,0) {}; \node[] (topL) at (-.3,1) {}; \node[] (topR) at (.3,1) {}; \draw[shortout] (topL) -- (L); \draw[shortout] (topR) -- (R); \node[] at (1.25,1) {\huge$\rightsquigarrow$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (2.5,1) {}; \node[] (L) at (2.2,0) {}; \node[] (R) at (2.8,0) {}; \node[] (topL) at (2.2,1) {}; \node[] (topR) at (2.8,1) {}; \draw[shortin] (L) -- (topL); \draw[shortout] (topR) -- (R); \node[] at (3.75,1) {\huge$\rightsquigarrow$}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t) at (4.75,1) {}; \node[] (s1) at (4.75,2) {}; \node[] (s2) at (4.75,0) {}; \draw[witharrow] (s1) -- (t); \draw[witharrow] (t) -- (s2); \node[] at (5.5,1) {=}; \node[downtriangle,minimum height = 3mm,fill=blue!30] (u) at (6.25,1.7) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (d) at (6.25,1) {}; \node[uptriangle,fill=green!30,minimum height = 4mm] (v) at (6.25,.3) {}; \node[] (u1) at (6.25,2.5) {}; \node[] (v1) at (6.25,-.5){}; \draw[longout] (u) -- (d) ; \draw[longin] (d) -- (v); \draw[witharrow] (u1) -- (u); \draw[witharrow] (v) -- (v1); \node[] at (7.25,1) {with}; \node[uptriangle,minimum height = 3mm,fill=blue!30] (u) at (8.5,1.25) {}; \node[downtriangle,minimum height = 3mm,fill=blue!30] (ud) at (8.5,.75) {}; \node[] (u1) at (8.5,2.25) {}; \node[] (u2) at (8.5,-.25){}; \draw[thick] (u) -- (ud); \draw[witharrow, longin] (u1) -- (u); \draw[witharrow,longout] (ud) -- (u2); \node[] at (9.25,1) {=}; \draw[thick] (9.75,1.5) -- (9.75,.5) {}; \node[] at (10.5,1) {and}; \node[uptriangle,fill=green!30,minimum height = 3mm] (v) at (11.75,1.25) {}; \node[downtriangle,fill=green!30,minimum height = 3mm] (vd) at (11.75,.75) {}; \node[] (v1) at (11.75,2.25) {}; \node[] (v2) at (11.75,-.25){}; \draw[thick] (v) -- (vd); \draw[witharrow,longin] (v1) -- (v); \draw[witharrow,longout] (vd) -- (v2); \node[] at (12.5,1) {=}; \draw[thick] (13.25,1.5) -- (13.25,.5) {}; \end{tikzpicture} \end{center} \caption{A tensor network diagram following $|\psi\>\in X \otimes Y$ through the isomorphisms $X \otimes Y \cong X^* 
\otimes Y \cong \hom(X,Y)$, leading to the singular value decomposition of $M=VDU^*$ with the unitarity of $V$ and $U$. }\label{SVD} \end{figure} The columns $\{\ket{f_i}\}$ of the matrix $V$ are the left singular vectors of $M$. They are the eigenvectors of $MM^*$ and comprise an orthonormal basis for the image of $M$. The columns $\{\ket{e_i}\}$ of the matrix of $U$ are the right singular vectors of $M$. They are the eigenvectors of $M^*M$, an orthonormal set of vectors spanning a subspace of $X$ isomorphic to the image of $M$. The nonnegative real numbers $\{\sigma_i\}$ on the diagonal of $D$ are the singular values of the matrix $M$. The matrices $M^*M$ and $MM^*$ have the same eigenvalues $\{\lambda_i\}$ which are the squares of the singular values $\lambda_i:=|\sigma_i|^2$. The map $M$ defines a bijection between the $\{\ket{e_i}\}$ and $\{\ket{f_i}\}$. Specifically, $M$ acts as \begin{equation}\label{Mbijection} \ket{e_i} \mapsto \sigma_i \ket{f_i} \end{equation} and maps the perpendicular complement of the span of the $\{\ket{e_i}\}$ to zero. Now, given a unit vector $|\psi\> \in X\otimes Y$, we have the density $\rho = |\psi\>\<\psi| \in X \otimes Y \otimes Y^* \otimes X^*$ and the reduced densities $\rho_X: X\to X$ and $\rho_Y :Y \to Y$. The reduced densities of $\rho$ are related to the operator $M:X \to Y$ fashioned from $|\psi\>$ as follows \begin{equation} \rho_X = M^*M \text{ and }\rho_Y = MM^* \end{equation} as illustrated in Figure \ref{MstarMisrhoX}. \begin{figure} \begin{center} \begin{tikzpicture}[y=1.3cm,baseline={(current bounding box.center)}] \node[] at (-1,0) {$\rho\:=$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,-0.25) {}; \node[] (L) at (-.3,-1.25) {}; \node[] (R) at (.3,-1.25) {}; \node[] (topL) at (-.3,-.25) {}; \node[] (topR) at (.3,-.25) {}; \draw[shortout] (topL) -- (L); \draw[shortout] (topR) -- (R); \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,0.25) {}; \node[] (L) at (-.3,.25) {}; \node[] (R) at (.3,.25) {}; \node[] (topL) at (-.3,1.25) {}; \node[] (topR) at (.3,1.25) {}; \draw[shortout] (L) -- (topL); \draw[shortout] (R) -- (topR); \node[] at (.75,0) {$,$}; \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[y=1cm,baseline={(current bounding box.center)}] \node[] at (-1.1,1) {$\rho_X\:=$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,2) {}; \node[] (L) at (-.3,1) {}; \node[] (R) at (.3,0) {}; \node[] (topL) at (-.3,2) {}; \node[] (topR) at (.3,2) {}; \draw[shortin] (L) -- (topL); \draw[shortin,shorten <=1.5mm,] (topR) -- (R); \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,0) {}; \node[] (L) at (-.3,0) {}; \node[] (topL) at (-.3,1) {}; \draw[shortout] (L) -- (topL); \node[] at (.9,1) {=}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t1) at (1.5,1.5) {}; \node[] (s1) at (1.5,2.5) {}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t2) at (1.5,.5) {}; \node[] (s2) at (1.5,-.5) {}; \draw[witharrow] (s1) -- (t1); \draw[thick] (t1) -- (t2); \draw[witharrow] (t2) -- (s2); \node[] at (1.9,1) {$,$}; \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[y=1cm,baseline={(current bounding box.center)}] \node[] at (-1.1,1) {$\rho_Y\:=$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,2) {}; \node[] (L) at (-.3,0) {}; \node[] (R) at (.3,0) {}; \node[] (topL) at (-.3,2) {}; \node[] (topR) at (.3,1) {}; \draw[shortin] (L) -- (topL); 
\draw[shortout] (topR) -- (R); \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (0,0) {}; \node[] (R) at (.3,1) {}; \node[] (topR) at (.3,2) {}; \draw[shortout] (topR) -- (R); \node[] at (.9,1) {=}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t1) at (1.5,1.5) {}; \node[] (s1) at (1.5,2.5) {}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t2) at (1.5,.5) {}; \node[] (s2) at (1.5,-.5) {}; \draw[witharrow] (t1) -- (s1); \draw[thick] (t1) -- (t2); \draw[witharrow] (s2) -- (t2); \node[] at (1.9,1) {$.$}; \end{tikzpicture} \end{center} \caption{A tensor network diagram showing that \mbox{$\rho_X = M^*M$} and \mbox{$\rho_Y = MM^*$}. }\label{MstarMisrhoX} \end{figure} The singular vectors $\{\ket{e_i}\}$ and $\{\ket{f_i}\}$ of $M$ are precisely the eigenvectors of the reduced densities. Therefore, the density $\rho$ can be completely reconstructed from its reduced densities $\rho_X$ and $\rho_Y$. One obtains $|\psi\>$ by gluing the eigenvectors of the reduced densities along their shared eigenvalues (Figure \ref{reconstructpsi}). In the nondegenerate case that the eigenvalues are distinct, then there is a unique way to glue the $\{|e_i\>\}$ and the $\{|f_i\>\}$ and $|\psi\>$ is recovered perfectly. \begin{figure}[h] \begin{center} \begin{tikzpicture}[y=1.3cm,baseline={(current bounding box.center)}] \node[] at (-3.1,0) {$\rho_X\:=$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (-1.7,1) {}; \node[] (L) at (-2,0) {}; \node[] (R) at (-1.4,-1) {}; \node[] (topL) at (-2,1) {}; \node[] (topR) at (-1.4,1) {}; \draw[shortin] (L) -- (topL); \draw[shortin,shorten <=1.5mm,] (topR) -- (R); \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (-1.7,-1) {}; \node[] (L) at (-2,-1) {}; \node[] (topL) at (-2,0) {}; \draw[shortout] (L) -- (topL); \node[] at (-0.8,0) {=}; \node[downtriangle,minimum height = 3mm,fill=blue!30] (u) at (0,1) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (mu) at (0,.25) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (md) at (0,-.25) {}; \node[uptriangle,fill=blue!30,minimum height = 3mm] (d) at (0,-1) {}; \node[] (uu) at (0,2) {}; \node[] (dd) at (0,-2){}; \draw[witharrow] (uu) -- (u); \draw[longout] (u) -- (mu) ; \draw[thick] (mu) -- (md); \draw[thick,shorten >= -.5mm] (md) -- (d); \draw[witharrow,shortout,shorten <=-.5mm] (d) -- (dd); \draw[rounded corners,thick2, dashed, red] (-.75, 0.5) rectangle (0.75, 2) {}; \end{tikzpicture} , \begin{tikzpicture}[y=1.3cm,baseline={(current bounding box.center)}] \node[] at (-3.1,0) {$\rho_Y\:=$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (-1.7,1) {}; \node[] (L) at (-2,-1) {}; \node[] (R) at (-1.4,-1) {}; \node[] (topL) at (-2,1) {}; \node[] (topR) at (-1.4,0) {}; \draw[shortin] (L) -- (topL); \draw[shortout] (topR) -- (R); \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (-1.7,-1) {}; \node[] (R) at (-1.4,0) {}; \node[] (topR) at (-1.4,1) {}; \draw[shortout] (topR) -- (R); \node[] at (-0.8,0) {=}; \node[downtriangle,minimum height = 3mm,fill=green!30] (u) at (0,1) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (mu) at (0,.25) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (md) at (0,-.25) {}; \node[uptriangle,fill=green!30,minimum height = 3mm] (d) at (0,-1) {}; \node[] (uu) at (0,2) {}; \node[] (dd) at (0,-2){}; \draw[witharrow] (u) -- (uu); \draw[longin] (mu) -- (u) ; 
\draw[thick] (md) -- (mu); \draw[thick,shorten <=-.5mm] (d) -- (md); \draw[witharrow,shortout,shorten <=-.5mm] (dd) -- (d); \draw[rounded corners,thick2, dashed, red] (-.75, 0.5) rectangle (0.75, 2) {}; \draw[rounded corners,thick2, dashed, red] (-.35, 0) rectangle (0.35, -.5) {}; \end{tikzpicture} , \begin{tikzpicture}[y=1.3cm,baseline={(current bounding box.center)}] \node[downtriangle,minimum height = 3mm,fill=blue!30] (u) at (0.25,.7) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (d) at (0.25,0) {}; \node[uptriangle,fill=green!30,minimum height = 4mm] (v) at (.25,-.7) {}; \node[] (u1) at (0.25,1.5) {}; \node[] (v1) at (0.25,-1.5){}; \draw[longout] (u) -- (d) ; \draw[longin] (d) -- (v); \draw[witharrow] (u1) -- (u); \draw[witharrow] (v) -- (v1); \draw[rounded corners,thick2, dashed, red] (-.5, .35) rectangle (1, 1.75) {}; \draw[rounded corners,thick2, dashed, red] (-.5, -.35) rectangle (1, -1.75) {}; \draw[rounded corners,thick2, dashed, red] (-.1, 0.25) rectangle (.6, -.25) {}; \node[] at (1,0) {=}; \node[smalltensor,fill=yellow!10, minimum height = 5mm] (t) at (1.75,0) {}; \node[] (s1) at (1.75,1) {}; \node[] (s2) at (1.75,-1) {}; \draw[witharrow] (s1) -- (t); \draw[witharrow] (t) -- (s2); \node[] at (2.75,0) {\huge$\rightsquigarrow$}; \node[smalltensor, fill=yellow!10, minimum width = 14mm, minimum height = 5mm] (a) at (4,0) {}; \node[] (L) at (3.7,-1) {}; \node[] (R) at (4.3,-1) {}; \node[] (topL) at (3.7,0) {}; \node[] (topR) at (4.3,0) {}; \draw[shortout] (topL) -- (L); \draw[shortout] (topR) -- (R); \node[] at (5.25,0) {$=|\psi\>$}; \end{tikzpicture} \end{center} \caption{Reconstructing $|\psi\>$ from the eigenvectors of $\rho_X$ and $\rho_Y$ and their shared eigenvalues.} \label{reconstructpsi} \end{figure} \section{Reduced densities of classical probability distributions} Let $\pi:S\to \mathbb{R}$ be a probability distribution and consider the density $\rho_{\pi}$ as in Equation \eqref{rho_pi}. Suppose $S\subset A\times B$ and let $\rho_A = \tr_Y \rho$ and $\rho_B = \tr_X \rho$ denote the reduced densities where, as above, $X=\mathbb{C}^A$, $Y=\mathbb{C}^B$, and $V=X\otimes Y$. Let us now interpret the matrix representation of these reduced densities. We compute: \begin{align*} \rho &=|\psi\>\<\psi| \\ &=\left(\sum_{(a,b)\in S}\sqrt{\pi(a,b)} |a\>\otimes |b\> \right) \otimes \left(\sum_{(a',b')\in S}\sqrt{\pi(a',b')} \<a'|\otimes \<b'| \right)\\ &=\sum_{\substack{(a,b)\in S\\(a',b')\in S}}\sqrt{\pi(a,b)}\sqrt{\pi(a',b')} \; |a\>\<a'|\otimes |b\>\<b'| \end{align*} We compute the partial trace $\tr_Y(|a\>\<a'|\otimes|b\>\<b'|)=\<b|b'\> \;|a\>\<a'|$. Since $\<b|b'\> = 1$ if $b=b'$ and zero otherwise, we can understand the $(a,a')$ entry of the reduced density $\rho_A$ as \begin{equation}\label{rhox} \left(\rho_A\right)_a^{a'} = \sum_{b\in B} \sqrt{\pi(a,b)\pi(a',b)}. \end{equation} In particular, the diagonal entry $\left(\rho_A\right)_a^{a}$ is $\sum_{b \in B} \pi(a,b)$ and we see the marginal distribution $\pi_A:A \to \mathbb{R}$ along the diagonal of the reduced density $\rho_A$. We make the consistent observation that $\rho_A$ has unit trace. The off-diagonal entries of $\rho_A$ are determined by the extent to which $a,a'\in A$ have the same continuations in $B$. Note that $\rho_A$ is symmetric. The reduced density on $B$ is similarly given: \begin{equation}\label{rhoy} \left(\rho_B\right)_b^{b'} = \sum_{a\in A} \sqrt{\pi(a,b)\pi(a,b')}. 
\end{equation} So, the reduced densities of $\rho$ contain all the information of the marginal distributions $\pi_A$ and $\pi_B$ and more. Now, let's take a look at the extra information carried by the reduced densities, which is entirely contained in the off-diagonal entries. Since the entire state, and therefore $\pi$ itself, can be reconstructed from the eigenvectors and eigenvalues of $\rho_A$ and $\rho_B$, we know that from a high level this spectral information encodes the conditional probabilities that are lost by the classical process of marginalization. En route to decoding this spectral information, let us describe how an arbitrary density $\tau$ is a classical mixture model of pure quantum states. If $\ket{e_1}, \ldots, \ket{e_k}$ is a basis for the image of a density $\tau$ consisting of orthonormal eigenvectors, then the corresponding eigenvalues $\lambda_1, \ldots, \lambda_k$ are nonnegative real numbers whose sum is one. One has \[ \tau = \sum_{i=1}^k \lambda_i |e_i\>\<e_i|. \] The density $\tau$ defines a probability distribution on pure states: the probability of the pure state $|e_i\>\<e_i|$ being $\lambda_i$. Then, $|e_i\>\<e_i|$ defines a probability distribution on the computational basis $\{s\}$ via the Born Rule: the probability of $s$ is $\<s|e_i\>\<e_i|s\> = |\<e_i|s\>|^2$. We're interested in the reduced densities of $\rho=|\psi\>\<\psi|$ and in this case there exists a one-to-one correspondence $\ket{e_i} \leftrightarrow \ket{f_i}$ between eigenvectors of the reduced densities $\rho_A:=\tr_Y(\rho)$ and $\rho_B:=\tr_X(\rho)$ spanning their respective images, \[ \rho_A = \sum_{i=1}^k \lambda_i |e_i\>\<e_i| \text{ and } \rho_B = \sum_{i=1}^k \lambda_i |f_i\>\<f_i|, \] as outlined in Section \ref{sec:SVD}. Putting together the general picture of a density as a mixture of pure states with the reduced densities of a pure state leads one to the following paradigm. With probability $\lambda_i$ the prefix subsystem will be in a state determined by the corresponding eigenvector $\ket{e_i}$ of $\rho_A$, and the corresponding suffix subsystem will be in a state determined by the eigenvector $\ket{f_i}$. The vector $\ket{e_i} = \sum_a \gamma_i^a |a\>$ determines a probability distribution on the set of prefixes $A$: the probability of the prefix $a$ is $|\gamma_i^a|^2$. The vector $\ket{f_i} = \sum_b \beta_i^b |b\>$ determines a probability distribution on the set of suffixes $B$: the probability of $b$ is $|\beta_i^b|^2$. As a final remark, if we had begun with the diagonal density \[\rho_{diag}=\sum_{(a,b)\in A\times B} \pi(a,b) \left(|a\>\otimes |b\> \right) \otimes \left(\<b|\otimes \<a| \right) \] whose Born distribution is also $\pi$, then the matrices representing $\rho_A$ and $\rho_B$ would be diagonal matrices with the marginal distributions on $A$ and $B$ along the diagonals and all off-diagonal elements equal to zero. The eigenvectors of $\rho_A$ and $\rho_B$ are simply the prefixes $\ket a$ and suffixes $\ket b$ and carry no further information. The process of computing reduced densities of $\rho_{diag}$ is nothing more than the process of marginalization. We always use the pure state $\rho=|\psi\>\<\psi|$, ensuring that the reduced densities carry information about subsystem interactions. The eigenvectors of the reduced densities, which are linear combinations of prefixes and linear combinations of suffixes, interact through their eigenvalues and capture rich information about the prefix-suffix system. Let us summarize.
Begin with a classical probability distribution $\pi$ on a product set $S=A\times B$. Form a density $\rho_\pi$ on $\mathbb{C}^{A \times B}$ by the formula in Equation \eqref{rho_pi}. The reduced densities $\rho_A$ and $\rho_B$ on $\mathbb{C}^A$ and $\mathbb{C}^B$ contain marginal distributions $\pi_A$ and $\pi_B$ on their diagonals, but they are not diagonal operators. The eigenvectors of these reduced densities encode information about prefix-suffix interactions. The prefix-suffix interactions are tantamount to conditional probabilities and carry sufficient information to reconstruct the density $\rho$. \subsection{Learning from samples}\label{sec:learn} In the machine learning applications to come, the goal is to learn $\rho_\pi$ defined in Equation \eqref{rho_pi} from a set $\{s_1, \ldots, s_{N_T}\}$ of samples drawn from a probability distribution $\pi$. Each sample $s_i$ will be a sequence $(x_1, \ldots, x_N)$ of a fixed length $N$. The algorithm to learn the density $\rho_\pi$ on the full set of sequences $S$ is an inductive procedure. One only works with a density $\rho$ defined using the sample set since the density $\rho_\pi$ for the entire distribution $\pi$ is unavailable. The procedure begins by computing the reduced density $\rho_A$ and its eigenvectors for a subsystem $A$ consisting of short prefixes. Step by step, the size of the subsystem $A$ is increased until one reaches a point where the suffix subsystem $B$ is small. In a final step, $\rho$ is recombined from the collected eigenvectors of $\rho_A$ for all the prefix systems $A$ and the eigenvectors and eigenvalues of $\rho_B$. This procedure leads naturally to a tensor network approximation for $\rho$. An important point is that the reduced density $\rho_A$ operates in a space whose dimension grows exponentially with the length of the prefix system $A$. So, instead of computing $\rho_A$ exactly, it is computed by a sequence of approximations that keep its rank small. The modeling hypothesis is that $\pi$ is a distribution whose corresponding quantum state $\rho_\pi$ has low rank in the sense that the reduced densities $\rho_A$ and $\rho_B$ are low rank operators for all prefix-suffix subsystems $A$ and $B$. The large rank of the density $\rho$ witnessed from the empirical distribution drawn from $\pi$ is regarded as sampling error. Therefore, under the modeling hypothesis, the process of replacing the empirically computed reduced densities with low rank approximations should be thought of as repairing a state damaged by sampling errors. The low rank modeling hypothesis can lead to excellent generalization properties for the model. Let us continue our analysis of the reduced densities as in the previous sections using notation appropriate for the machine learning algorithm. Let $T$ be a training set of labeled samples $T=\{s_1, \ldots, s_{N_T}\}$. We use $N_T$ for the number of training examples. Each sample $s_i$ will be a sequence of symbols from a fixed alphabet $\Sigma$ of a fixed length $N$. We will designate a cut to obtain a prefix $a_i$ and suffix $b_i$ whose concatenation is the sample $s_i=(a_i,b_i) \in \Sigma^N$. This provides a decomposition of $T$ as $T\subset A\times B$ where $A=\{a_1, a_2, \ldots, a_{N_T}\}$ and $B=\{b_1, b_2, \ldots, b_{N_T}\}$ are the sampled prefixes and suffixes. For the applications we have in mind, samples in $T$ will be distinct. That is $(a_i,b_i) \neq (a_j,b_j)$ if $i\neq j$, though crucially it may happen that $a_i = a_j$ or $b_i=b_j$ for $i\neq j$. 
Let $\widehat{\pi}$ be the resulting empirical distribution on $T$ so that \begin{equation} \widehat{\pi}(a,b)=\begin{cases} 1/N_T& \text{ if }(a,b)\in T,\\ 0 &\text{otherwise.} \end{cases} \end{equation} Let us look at the empirical state \begin{equation}\label{emp_psi} |\psi\> = \frac{1}{\sqrt{N_T}} \sum_{i=1}^{N_T} |s_i\>, \end{equation} the empirical density $\rho=|\psi\>\<\psi|$, and its partial trace \begin{equation}\label{mainequation} \rho_A = \frac{1}{N_T}\sum_{i,j=1}^{N_T} s(a_i,a_j) |a_i\>\<a_j|. \end{equation} Here the sum is expressed in terms of the indices $i,j$, which range over the number of samples. The coefficient $s(a_i,a_j)$ of $|a_i\>\<a_j|$ is a nonnegative integer, namely the number of times that $a_i$ and $a_j$ have the same continuation $b_i=b_j$. It may be convenient to have some notation for shared continuations. For any pair $a,a'$ of elements of $A$, let $T_{a,a'}$ be the subset of $B$ consisting of shared continuations of $a$ and $a'$: \begin{equation}\label{rhoaaprime} T_{a,a'}=\{b\in B: (a,b) \in T \text{ and }(a',b)\in T\}. \end{equation} So, the $(a,a')$ entry of the matrix representing $\rho_A$ is the cardinality of the set $T_{a,a'}$ multiplied by an overall factor of $1/N_T$. A similar combinatorial description holds for the reduced density on $B$, \[\rho_B=\frac{1}{N_T}\sum_{i,j}s(b_i,b_j)|b_i\>\<b_j|\] where $s(b_i,b_j)$ is the number of common prefixes that $b_i$ and $b_j$ share. The counting involved can be visualized with graphs. Every probability distribution $\widehat{\pi}$ on a Cartesian product $A\times B$ uniquely defines a weighted bipartite graph: the two vertex sets are $A$ and $B$ and the edge joining $a$ and $b$ is labeled by $\widehat{\pi}(a,b).$ Here, because we assume the samples in $T$ are distinct, the graph can be simplified since $\widehat{\pi}(a,b)$ is either $0$ or $1/N_T$. We draw an edge from $a$ to $b$ if $(a,b)\in T$ and we omit the edge if $(a,b)\notin T$ and understand the probabilities to be obtained by dividing by $N_T$, which is the total number of edges in the graph. \begin{center} \begin{tikzpicture} \node[vertex, label = left : $a_1$] (a1) at (0,.25) {}; \node[vertex, label = left : $a_2$] (a2) at (0,-.25) {}; \node[vertex, label = right : $b_1$] (b1) at (1,.75) {}; \node[vertex, label = right : $b_2$] (b2) at (1,.25) {}; \node[vertex, label = right : $b_3$] (b3) at (1,-.25) {}; \node[vertex, label = right : $b_4$] (b4) at (1,-.75) {}; \draw (a1) -- (b1); \draw (a1) -- (b4); \draw(a2) -- (b1); \draw(a2) -- (b2); \draw(a2) -- (b3); \draw(a2) -- (b4); \end{tikzpicture} \end{center} In the example above, the total number of edges is the sample size $N_T=6$. The probability of $(a_1,b_1)$ is $1/6$ and the probability of $(a_1,b_2)$ is $0$. Now we illustrate how to read off the entries of the reduced density $\rho_A$ from the graph. There will be an overall factor of $1/N_T$ multiplied by a matrix of nonnegative integers. The diagonal entries are $d(a)$, the degree of vertex $a$. The $(a,a')$ entry is the number of shared suffixes, which equals the number of paths of length 2 between $a$ and $a'$. Given any graph with $|A|=2$, such as the one above, the reduced density on the prefix subsystem is equal to \begin{equation}\label{eq:2by2} \rho_A =\frac{1}{N_T}\begin{bmatrix} d_1 & s\\[5pt] s & d_2 \end{bmatrix} \end{equation} where the diagonal entries are the degrees of the vertices and $s$ is the number of paths of length two, which equals the number of degree two vertices of $B$.
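As a quick sanity check of this combinatorial description, here is a minimal NumPy sketch (ours, not part of the authors' codebase; the variable names are illustrative) that builds $\rho_A$ for the toy graph above directly from its edge list.
\begin{verbatim}
import numpy as np

# The toy sample set drawn as a bipartite graph above:
# two prefixes a1, a2 and four suffixes b1..b4, one edge per sample.
T = [("a1", "b1"), ("a1", "b4"),
     ("a2", "b1"), ("a2", "b2"), ("a2", "b3"), ("a2", "b4")]

prefixes = sorted({a for a, _ in T})
continuations = {a: {b for a2, b in T if a2 == a} for a in prefixes}
N_T = len(T)  # total number of edges

# (a, a') entry: |T_{a,a'}|, the number of shared continuations,
# i.e. paths of length two; overall factor 1/N_T.
rho_A = np.array([[len(continuations[a] & continuations[a2])
                   for a2 in prefixes] for a in prefixes]) / N_T

print(rho_A)            # [[2/6, 2/6], [2/6, 4/6]]
print(np.trace(rho_A))  # 1.0: the marginal sits on the diagonal
\end{verbatim}
The diagonal recovers the degrees $d(a_1)=2$ and $d(a_2)=4$, and the off-diagonal entry counts the two shared suffixes $b_1$ and $b_4$.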
The denominator of the coefficient $N_T=d_1+d_2$ is the total number of edges in the graph. The eigenvalues $\lambda_{+}$ and $\lambda_{-}$ and (unnormalized) eigenvectors $e_{+}$ and $e_{-}$ of this matrix have simple, explicit expressions in terms of the gap $G=d_1-d_2$ in the diagonal entries and the off-diagonal entry $s$. Namely, \begin{equation}\label{evalues} \lambda_{+}=\frac{N_T+\sqrt{G^2+4s^2}}{2N_T} \text{ and } \lambda_{-}=\frac{N_T-\sqrt{G^2+4s^2}}{2N_T} \end{equation} and \begin{equation}\label{evects} \ket{e_{+}}= \begin{bmatrix} \sqrt{G^2+4s^2}+ G\\ +2s \end{bmatrix} \text{ and } \ket{e_{-}}= \begin{bmatrix} \sqrt{G^2+4s^2}- G\\ -2s \end{bmatrix}. \end{equation} \section{The Training Algorithm} Suppose that $|\psi\> \in V_1 \otimes \cdots \otimes V_N$. We depict $|\psi\>$ as \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=50mm] (v15) at (9,2) {}; \node[] (l0) at (7,2) {}; \node[] (l1) at (8,2) {}; \node[] (l2) at (9,2) {}; \node[] (l3) at (10,2) {}; \node[] (l4) at (11,2) {}; \node[] (n0) at (7,1) {$V_1$}; \node[] (n1) at (8,1) {$V_{2}$}; \node[] (n2) at (9,1) {$\cdots$}; \node[] (n3) at (10,1) {$V_{N-1}$}; \node[] (n4) at (11,1) {$V_{N}$}; \draw [shortout] (l0) -- (n0); \draw [shortout] (l1) -- (n1); \draw [shortout] (l3) -- (n3); \draw [shortout] (l4) -- (n4); \end{tikzpicture} \end{center} There are various sorts of decompositions of such a tensor that are akin to an iterated SVD. We will describe one decomposition that results in a factorization of $|\psi\>$ into what is called a matrix product state (MPS) or synonymously, a tensor train decomposition. The process defines a sequence of ``bond'' spaces $\{B_k\}$ and operators $\left\{ U_k:B_{k}\otimes V_{k} \to B_{k-1}\right\}$ which can be composed $U_1 U_2 \cdots U_{N-1}U_N$ as pictured: \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[bigtensor] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node[bigtensor] (j4) at (4,2) {}; \draw[thick] (j0) -- (j1) -- (j2) -- (j3) -- (j4); \node[] (k0) at (0,1) {$V_1$}; \node[] (k1) at (1,1) {$V_2$}; \node[] (k3) at (3,1) {$V_{N-1}$}; \node[] (k4) at (4,1) {$V_{N}$}; \draw[witharrow] (j0) -- (k0); \draw[witharrow] (j1) -- (k1); \draw[witharrow] (j3) -- (k3); \draw[witharrow] (j4) -- (k4); \node[] () at (5.5,1.5) {$=$}; \node[hugetensor,fill=red!10, minimum width=50mm] (v15) at (9,2) {}; \node[] (l0) at (7,2) {}; \node[] (l1) at (8,2) {}; \node[] (l2) at (9,2) {}; \node[] (l3) at (10,2) {}; \node[] (l4) at (11,2) {}; \node[] (n0) at (7,1) {$V_1$}; \node[] (n1) at (8,1) {$V_{2}$}; \node[] (n2) at (9,1) {$\cdots$}; \node[] (n3) at (10,1) {$V_{N-1}$}; \node[] (n4) at (11,1) {$V_{N}$}; \draw [shortout] (l0) -- (n0); \draw [shortout] (l1) -- (n1); \draw [shortout] (l3) -- (n3); \draw [shortout] (l4) -- (n4); \end{tikzpicture} \end{center} The initial operator has form $U_1:B_1 \to V_1$ and the final tensor has the form $U_N\in B_{N-1} \otimes V_N$. We begin with $B_1 = V_1$ and set $U_1:B_1 \to V_1$ to be the identity. For $k=2, \ldots, N-1$ we will define $U_k$ inductively. To describe the inductive process, first notice that for any $k=1, \ldots, N-1$, one has the tensor factorization \[V_1 \otimes \cdots \otimes V_N \cong \left(V_1\otimes\cdots\otimes V_k\right) \bigotimes \left(V_{k+1}\otimes\cdots \otimes V_N\right). 
\] The operator $\alpha_k:V_1\otimes\cdots\otimes V_k \to V_{k+1}\otimes\cdots \otimes V_N$ fashioned from $|\psi\>$ may be pictured as follows: % \begin{equation}\label{alphakpicture} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[] (alphak) at (-0.5,0) {$\alpha_k=$}; \node[hugetensor,fill=red!10, minimum width=75mm] (v15) at (4,0) {}; \node[] (j1) at (1,0) {}; \node[] (j2) at (2,0) {}; \node[] (j3) at (3,0) {}; \node[] (j4) at (4,0) {}; \node[] (j5) at (5,0) {}; \node[] (j6) at (6,0) {}; \node[] (j7) at (7,0) {}; \node[] (n1) at (1,1) {$V_1$}; \node[] (n2) at (2,1) {$V_{2}$}; \node[] (n3) at (3,1) {$\cdots$}; \node[] (n4) at (4,1) {$V_{k}$}; \node[] (n5) at (5,-1) {$V_{k+1}$}; \node[] (n6) at (6,-1) {$\cdots$}; \node[] (n7) at (7,-1) {$V_N$}; \draw [shortin] (n1) -- (j1); \draw [shortin] (n2) -- (j2); \draw [shortin] (n4) -- (j4); \draw [shortout] (j5) -- (n5); \draw [shortout] (j7) -- (n7); \end{tikzpicture} \end{equation} The operators $U_k$ when composed $U_1 U_2 \cdots U_k$ as below \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[bigtensor] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \draw[thick] (j0) -- (j1) -- (j2) -- (j3); \node[] (j4) at (4.75,2) {$B_{k}$}; \node[] (k0) at (0,1) {$V_1$}; \node[] (k1) at (1,1) {$V_2$}; \node[] (k3) at (3,1) {$V_{k}$}; \draw[witharrow] (j0) -- (k0); \draw[witharrow] (j1) -- (k1); \draw[witharrow] (j3) -- (k3); \draw[witharrow] (j4) -- (j3); \end{tikzpicture} \end{center} define an operator $B_{k} \to V_1 \otimes \cdots \otimes V_k$. One then has the composition $\beta_k:=\alpha_k U_1 U_2 \cdots U_k:B_{k} \to V_{k+1}\otimes \cdots \otimes V_N$: \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=80mm] (v15) at (3.5,1) {}; \node[bigtensor, label = below left : {$V_1$}] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node (j4) at (4.5,2) {}; \node[] (j5) at (5,1) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (j5) at (5,1) {}; \node[] (n5) at (5,0) {$V_{k+1}$}; \node[] (n6) at (6,0) {$\cdots$}; \node[] (n7) at (7,0) {$V_N$}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortout] (j5) -- (n5); \draw [shortout] (j7) -- (n7); \draw[thick] (j0) -- (j1) -- (j2) -- (j3); \node[] (k5) at (4.75,2) {$B_{k}$}; \draw[witharrow] (j4) -- (j3); \end{tikzpicture} \end{center} The inductive hypothesis is that $\alpha_k U_1 U_2 \cdots U_k U_k^* \cdots U_2^* U_1^* = \alpha_k$. 
Pictorially, \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=80mm] (v15) at (3.5,1) {}; \node[bigtensor] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node[bigtensor,fill=blue!20] (k4) at (4,2) {}; \node[bigtensor,fill=blue!20] (k6) at (6,2) {}; \node[] (k5) at (5,2) {$\cdots$}; \node[bigtensor,fill=blue!20] (k7) at (7,2) {}; \node[] (j5) at (5,1) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (j5) at (5,1) {}; \node[] (n5) at (5,0) {$V_{k+1}$}; \node[] (n6) at (6,0) {$\cdots$}; \node[] (n7) at (7,0) {$V_N$}; \node[] (m4) at (4,3) {$V_{k}$}; \node[] (m5) at (5,3) {$\cdots$}; \node[] (m6) at (6,3) {$V_2$}; \node[] (m7) at (7,3) {$V_1$}; \draw[witharrow] (m4) -- (k4); \draw[witharrow] (m6) -- (k6); \draw[witharrow] (m7) -- (k7); \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortout] (j5) -- (n5); \draw [shortout] (j7) -- (n7); \draw[thick] (j0) -- (j1) -- (j2) -- (j3)-- (k4); \draw[thick] (k4) -- (k5) -- (k6) --(k7) ; \node[] () at (8,1.5) {$=\alpha_k$}; \end{tikzpicture} \end{center} In the penultimate step, one has the operator $\alpha_{N-1}U_1U_2\cdots U_{N-1}:B_{N-1} \to V_N$. The final step is to define $U_N$ as the adjoint of this operator: $U_N = \left(\alpha_{N-1}U_1U_2\cdots U_{N-1}\right)^*$. \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=80mm] (v15) at (3.5,1) {}; \node[bigtensor] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j3) at (3,2) {$\cdots$}; \node[bigtensor] (j5) at (5,2) {}; \node[bigtensor] (j6) at (6,2) {}; \node[] (j7) at (7,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (n5) at (5,1) {}; \node[] (n6) at (6,1) {}; \node[] (n7) at (7,0) {$V_N$}; \node[] (k7) at (7.75,2) {$B_{N-1}$}; \draw[witharrow] (k7) -- (j6); \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j5) -- (n5); \draw [shortin] (j6) -- (n6); \draw [shortout] (j7) -- (n7); \draw[thick] (j0) -- (j1) -- (j3) -- (j5)-- (j6); \node[] () at (8,1.5) {$=$}; \node[bigtensor] (un) at (9,1.5) {}; \node[] (ub) at (9,.5) {$V_N$}; \node[] (ur) at (10.75,1.5) {$B_{N-1}$}; \draw[witharrow] (ur) --(un); \draw[witharrow] (un) --(ub); \end{tikzpicture} \end{center} Therefore, the entire composition reduces nicely: \begin{align*} U_1 U_2 \cdots U_{N-1} U_N &= U_1 U_2 \cdots U_{N-1} U_{N-1}^* \cdots U_2^* U_1^* \alpha_{N-1}^* \\ &= \alpha_{N-1}^* \end{align*} The final equality follows from the adjoint of the inductive hypothesis. The outcome $\alpha_{N-1}^*: V_N \to V_1 \otimes \cdots \otimes V_{N-1}$, after a minor reshaping, is the same as $|\psi\>$. To define the inductive step, assume the spaces $B_1, \ldots, B_{k-1}$ and operators $U_1, \ldots, U_{k-1}$ have been defined and satisfy the inductive hypothesis. Reshape the operator $B_{k-1} \to V_k \otimes V_{k+1}\otimes \cdots \otimes V_N$ as a map \[B_{k-1} \otimes V_k \to V_{k+1}\otimes \cdots \otimes V_N\]% An SVD of this map yields $\alpha_{k-1} U_1 \cdots U_{k-1} = W_k D_k U_k^*$.
\begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=90mm] (v15) at (4,1) {}; \node[bigtensor, label = below left : {$V_1$}] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node (j4) at (4.5,2) {}; \node[] (j5) at (5.5,2) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (j8) at (8,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (m5) at (5,2) {$V_{k}$}; \node[] (n5) at (5,1) {}; \node[] (n6) at (6,0) {$V_{k+1}$}; \node[] (n7) at (7,0) {$\cdots$}; \node[] (n8) at (8,0) {$V_N$}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortin] (m5) -- (n5); \draw [shortout] (j6) -- (n6); \draw [shortout] (j8) -- (n8); \draw[thick] (j0) -- (j1) -- (j2) -- (j3); \node[label = above left : {$B_{k-1}$}] (k5) at (4.5,2) {}; \draw[witharrow] (j4) -- (j3); \node at (9.5,1) {$=$}; \node[medtriangle,minimum height = 8mm] (u) at (11,2) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (d) at (11,1.25) {}; \node[bigtriangle] (v) at (11,.5) {}; \node (a1) at (10.75,2) {}; \node (a2) at (11.25,2) {}; \node (a3) at (10,.4) {}; \node (a4) at (12,.4) {}; \node[label = left : $B_{k-1}$] (b1) at (10.75,3) {}; \node[label = right : $V_k$] (b2) at (11.25,3) {}; \node (b3) at (10,-.75) {$V_{k+1}$}; \node (b4) at (12,-.75) {$V_N$}; \node at (11,-.25) {$\cdots$}; \draw[shortin] (b1) -- (a1); \draw[shortin] (b2) -- (a2); \draw[shortout] (a3) -- (b3); \draw[shortout] (a4) -- (b4); \draw[thick,shorten <=-1mm] (u) -- (d) ; \draw[thick,shorten >=-1mm] (d) -- (v); \end{tikzpicture} \end{center} The adjoint of the map $U_k^* : B_{k-1} \otimes V_k \to B_k$, pictured as the blue triangle on the right hand side, is then defined to be $U_k:B_k \to B_{k-1}\otimes V_k$ and becomes the next tensor in the MPS decomposition. To check that the inductive hypothesis is satisfied, note that $\alpha_{k} U_1 \cdots U_{k-1} U_k U_k^* = \alpha_{k-1} U_1 \cdots U_{k-1}$ since $\alpha_{k-1} U_1 \cdots U_{k-1} = W_k D_k U_k^*$ and $U_k^* U_k =1$. 
Here is the picture proof: \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=90mm] (v15) at (4,1) {}; \node[bigtensor, label = below left : {$V_1$}] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node[bigtensor,fill=blue!20, label = above right : $B_k$] (j4) at (4,2) {}; \node[bigtensor,fill=blue!20,label = below right : $V_k$] (j5) at (5,2) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (j8) at (8,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (n5) at (5,1) {}; \node[] (n6) at (6,0) {$V_{k+1}$}; \node[] (n7) at (7,0) {$\cdots$}; \node[] (n8) at (8,0) {$V_N$}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortin] (j4) -- (n4); \draw [shortout,shorten <=4mm] (n5) -- (j5); \draw [shortout] (j6) -- (n6); \draw [shortout] (j8) -- (n8); \draw[thick] (j0) -- (j1) -- (j2) -- (j3) -- (j4) -- (j5); \node[label = above left : {$B_{k-1}$}] (k6) at (6.5,2) {}; \draw[witharrow] (j5) -- (k6); \node at (3.5,2.7) {$B_{k-1}$}; \draw[->,thick2,opacity = .5] (3.5,2.5) -- (3.5,2.2) {}; \end{tikzpicture} \end{center} is equal to this \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[medtriangle,minimum height = 8mm,fill = blue!20] (u3) at (0,4) {}; \node[medtriangle,minimum height = 8mm,fill = blue!20,shape border rotate = 90] (u2) at (0,3) {}; \node[medtriangle,minimum height = 8mm] (u) at (0,2) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (d) at (0,1.25) {}; \node[bigtriangle] (v) at (0,.5) {}; \node (a1) at (-.25,2) {}; \node (a2) at (.25,2) {}; \node (a3) at (-1,.4) {}; \node (a4) at (1,.4) {}; \node[label = below left : $B_{k-1}$] (a5) at (-.25,5) {}; \node[label = below right : $V_k$] (a6) at (.25,5) {}; \node (b1) at (-.25,2.9) {}; \node (b2) at (.25,2.9) {}; \node (b3) at (-1,-.75) {$V_{k+1}$}; \node (b4) at (1,-.75) {$V_N$}; \node (b5) at (-.25,4) {}; \node (b6) at (.25,4) {}; \node at (0,-.25) {$\cdots$}; \draw[shortin] (b1) -- (a1); \draw[shortin] (b2) -- (a2); \draw[shortout] (a3) -- (b3); \draw[shortout] (a4) -- (b4); \draw[shortin] (a5) -- (b5); \draw[shortin] (a6) -- (b6); \draw[thick,shorten <=-1mm,shorten >=-1mm] (u3) -- (u2); \draw[thick,shorten <=-1mm] (u) -- (d); \draw[thick,shorten >= -1mm] (d) -- (v); \node at (3,2) {$=$}; \node[medtriangle,minimum height = 8mm,fill = blue!20] (u) at (6,2) {}; \node[smalltensor, minimum height = 2mm, fill = black!10] (d) at (6,1.25) {}; \node[bigtriangle] (v) at (6,.5) {}; \node (a1) at (5.75,2) {}; \node (a2) at (6.25,2) {}; \node (a3) at (5,.4) {}; \node (a4) at (7,.4) {}; \node (a5) at (6,5) {}; \node[label = below left : $B_{k-1}$] (b1) at (5.75,2.9) {}; \node[label = below right : $V_k$] (b2) at (6.25,2.9) {}; \node (b3) at (5,-.75) {$V_{k+1}$}; \node (b4) at (7,-.75) {$V_N$}; \node (b5) at (6,4) {}; \node at (6,-.25) {$\cdots$}; \draw[shortin] (b1) -- (a1); \draw[shortin] (b2) -- (a2); \draw[shortout] (a3) -- (b3); \draw[shortout] (a4) -- (b4); \draw[thick,shorten <= -1mm] (u) -- (d); \draw[thick,shorten >= -1mm] (d) -- (v); \draw[rounded corners,thick2, dashed, gray] (-1, 1.5) rectangle (1, 3.5) {}; \end{tikzpicture} \end{center} which is the first picture: \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] 
\node[hugetensor,fill=red!10, minimum width=90mm] (v15) at (4,1) {}; \node[bigtensor, label = below left : {$V_1$}] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node (j4) at (4.5,2) {}; \node[] (j5) at (5.5,2) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (j8) at (8,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (m5) at (5,2) {$V_{k}$}; \node[] (n5) at (5,1) {}; \node[] (n6) at (6,0) {$V_{k+1}$}; \node[] (n7) at (7,0) {$\cdots$}; \node[] (n8) at (8,0) {$V_N$}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortin] (m5) -- (n5); \draw [shortout] (j6) -- (n6); \draw [shortout] (j8) -- (n8); \draw[thick] (j0) -- (j1) -- (j2) -- (j3); \node[label = above left : {$B_{k-1}$}] (k5) at (4.5,2) {}; \draw[witharrow] (j4) -- (j3); \end{tikzpicture} \end{center} In our application, the vector $|\psi\>$ and the operators $\beta_{k-1}: B_{k-1} \otimes V_k \to V_{k+1}\otimes \cdots \otimes V_N$ live in spaces of such high dimension that neither storing them nor computing their SVD directly is feasible. Nonetheless, the $U_k$ operators can be obtained from an SVD of a reduced density operating in the effective space $B_{k-1}\otimes V_k$, \[\beta_{k-1}^* \beta_{k-1} : B_{k-1}\otimes V_k \to B_{k-1}\otimes V_k.\] In practice, this effective reduced density can be computed as a double sum over the training examples, so we can efficiently compute the tensors required for the inductive steps. Then in the final step, the complementary space is small, so the final map $U_N D_N:B_{N-1} \to V_N$ completes the reconstruction.
More specifically, to define the $U_k$, we only need an eigenvector decomposition of $\beta_{k-1}^* \beta_{k-1}$, which looks like \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[hugetensor,fill=red!10, minimum width=90mm] (v15) at (4,1) {}; \node[hugetensor,fill=red!10, minimum width=90mm] () at (4,-0.5) {}; \node[bigtensor, label = below left : {$V_1$}] (j0) at (0,2) {}; \node[bigtensor] (j1) at (1,2) {}; \node[] (j2) at (2,2) {$\cdots$}; \node[bigtensor] (j3) at (3,2) {}; \node (j4) at (4.5,2) {}; \node[] (j5) at (5.5,2) {}; \node[] (j6) at (6,1) {}; \node[] (j7) at (7,1) {}; \node[] (j8) at (8,1) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (m5) at (5,2) {$V_{k}$}; \node[] (n5) at (5,1) {}; \node[] (n6) at (6,0) {}; \node[] (n7) at (7,0) {}; \node[] (n8) at (8,0) {}; \node[] (p5) at (5,-0.5) {}; \node[] (p6) at (6,-0.5) {}; \node[] (p7) at (7,-0.5) {}; \node[] (p8) at (8,-0.5) {}; \node[] (p0) at (0,-0.5) {}; \node[] (p1) at (1,-0.5) {}; \node[] (p2) at (2,-0.5) {}; \node[] (p3) at (3,-0.5) {}; \node[] (p4) at (4,-0.5) {}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j3) -- (n3); \draw [shortin] (m5) -- (n5); \draw [thick,shorten <= 1.5mm,shorten >= 1.5mm] (p6) -- (j6); \draw [thick,shorten <= 1.5mm,shorten >= 1.5mm] (p8) -- (j8); \node[] (q5) at (5,-1.5) {$V_{k}$}; \draw [shortout] (p5) -- (q5); \node[bigtensor, label = above left : {$V_1$}] (q0) at (0,-1.5) {}; \node[bigtensor] (q1) at (1,-1.5) {}; \node[] (q2) at (2,-1.5) {$\cdots$}; \node[bigtensor] (q3) at (3,-1.5) {}; \node (q4) at (4.5,-1.5) {}; \node[label = below left : {$B_{k-1}$}] (q5) at (4.5,-1.5) {}; \draw[witharrow] (q4) -- (q3); \draw[thick] (q0) -- (q1) -- (q2) -- (q3); \draw[thick] (j0) -- (j1) -- (j2) -- (j3); \node[label = above left : {$B_{k-1}$}] (k5) at (4.5,2) {}; \draw[witharrow] (j4) -- (j3); \draw [shortin] (q0) -- (p0); \draw [shortin] (q1) -- (p1); \draw [shortin] (q3) -- (p3); \node[] at (7,.25) {$\cdots$}; \end{tikzpicture} \end{center} and is given by a formula like the one in Equation \eqref{mainequation}. In general, when factoring an arbitrary vector as an MPS, the bond spaces $B_{k}$ grow exponentially fast. Therefore, we may characterize data sets for which an MPS is a good model by saying that $|\psi\>$ as defined in Equation \eqref{rho_pi} has an MPS model whose bond spaces $B_{k}$ remain small. Alternatively, one can truncate or restrict the dimensions of the spaces $B_k$, resulting in a low rank MPS approximation of $|\psi\>$. As a criterion for this truncation, one can inspect the singular values at each inductive step and discard those which are small according to a pre-determined cutoff, together with the corresponding columns of $U$ and $W$. In the even-parity dataset that we investigate as an example, we truncate $B_k$ to two dimensions throughout. To understand whether this kind of low-rank approximation is useful, recall that the eigenvectors and eigenvalues of the reduced densities carry the essential prefix-suffix interactions. Because the training algorithm emphasizes these eigenvalues and eigenvectors as the most important features of the data throughout training, the resulting model should be interpreted as capturing the most important prefix-suffix interactions. We view these prefix-suffix interactions as a proxy for the meaning of substrings within a language of larger strings.
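To make the inductive step concrete, here is a minimal NumPy sketch of it (our illustrative code, not the ITensor implementation used for the experiments; the function name \texttt{step\_k} and the dictionary bookkeeping are ours). It forms the effective reduced density on $B_{k-1}\otimes V_k$ as a double sum over samples, grouped by suffix so the cost is linear in the number of samples, and keeps the two leading eigenvectors as in the even-parity example.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def step_k(samples, k, summaries):
    """One inductive step over bitstring samples.

    samples   : list of equal-length bitstrings such as "0110"
    k         : cut position; prefix = s[:k], suffix = s[k:]
    summaries : dict sending each (k-1)-prefix to its 2-dim
                summary vector in B_{k-1}
    """
    N_T = len(samples)
    # Compressed prefix vectors v in B_{k-1} (x) V_k.
    v = {s: np.kron(summaries[s[:k-1]], np.eye(2)[int(s[k-1])])
         for s in samples}
    # rho = (1/N_T) sum over pairs with equal suffixes; grouping by
    # suffix turns the double sum into a sum of outer products.
    by_suffix = defaultdict(lambda: np.zeros(4))
    for s in samples:
        by_suffix[s[k:]] += v[s]
    rho = sum(np.outer(w, w) for w in by_suffix.values()) / N_T
    # Keep the eigenvectors of the two largest eigenvalues: rows of U_k.
    vals, vecs = np.linalg.eigh(rho)
    U_k = vecs[:, np.argsort(vals)[-2:]].T
    # Project each compressed prefix into the new bond space B_k.
    new_summaries = {s[:k]: U_k @ v[s] for s in samples}
    return U_k, new_summaries

# Usage on a toy even-parity sample set; U_1 is the identity.
samples = ["0000", "0011", "0101", "0110", "1001", "1111"]
summaries = {s[:1]: np.eye(2)[int(s[0])] for s in samples}
for k in range(2, 4):
    U_k, summaries = step_k(samples, k, summaries)
\end{verbatim}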
\section{Under the hood} With an in-depth understanding of the training algorithm, we aim to predict experimental results, simply given the fraction $0<f\leq 1$ of training samples used. Such an under-the-hood analysis shows that each tensor within the MPS is composed of eigenvectors of a reduced density operator. The eigenvectors can be understood in terms of the reduced density matrix representation, which contains information from errors accrued in the algorithm's prior steps, along with combinatorial information from the current step. We now describe these ideas in careful detail. As an example, we perform an analysis of how well the algorithm learns on the even-parity dataset. Let $\Sigma=\{0,1\}$ and consider the set $\Sigma^N$ of bitstrings of a fixed length $N$. Define the \emph{parity} of a bitstring $(b_1, \ldots, b_N)$ to be \begin{equation} \parity(b_1, \ldots, b_N) := \sum_{i=1}^N b_i \mod 2. \end{equation} The set $\Sigma^N$ is partitioned into even and odd bitstrings: \[E^N=\{s \in \Sigma^N:\parity(s)=0\} \text{ and }O^N=\{s\in \Sigma^N: \parity(s)=1\}\] Consider the probability distribution $\pi:\Sigma^N\to \mathbb{R}$ uniformly concentrated on $E^N$: \[ \pi(x) = \begin{cases} \frac{1}{2^{N-1}} & \text{ if $x\in E^N$} \\ 0 & \text{ if $x\in O^N$.} \end{cases} \] This distribution defines a density $\rho_{\pi}=|E_N\>\<E_N|$, where \begin{equation} \ket{E_N}=\frac{1}{\sqrt{2^{N-1}}}\sum_{s\in E^N}\ket{s}\in V_1\otimes V_2\otimes \cdots \otimes V_N \end{equation} and $V_j\cong \mathbb{C}^2$ is the site space spanned by the bits in the $j$-th position. Choose a subset $T=\{s_1,\ldots, s_{N_T}\}\subset E^N$ of even parity bitstrings and let $f=N_T/2^{N-1}$ be the fraction selected. The empirical distribution on this set defines the vector $|\psi\>= \frac{1}{\sqrt{N_T}} \sum_{i=1}^{N_T} |s_i\>$ as in Equation \eqref{emp_psi}. To begin our analysis on $|\psi\>$, let us closely inspect the algorithm's second step. The ideas therein will generalize to subsequent steps. In step 2, we view each sample $s$ as a prefix-suffix pair $(a,b)$ where $a\in\Sigma^2$ and $b\in \Sigma^{N-2}$. We visualize the training set $T$ as a bipartite graph. Vertices represent prefixes $a$ and suffixes $b$ and there is an edge joining $a$ and $b$ if and only if $(a,b)\in T$.
\begin{minipage}{.5\textwidth} \begin{center} \begin{tikzpicture} \node[vertex, label = left : {$00$}] (x1) at (0,.25) {}; \node[vertex, label = left : {$11$}] (x2) at (0,-.25) {}; \node[vertex, label = right : {$0000$}] (y1) at (1,1.5) {}; \node[vertex, label = right : {$1100$}] (y2) at (1,1) {}; \node[vertex, label = right : {$0110$}] (y3) at (1,.5) {}; \node[vertex, label = right : {$0011$}] (y4) at (1,0) {}; \node[vertex, label = right : {$1010$}] (y5) at (1,-.5) {}; \node[vertex, label = right : {$0101$}] (y6) at (1,-1) {}; \node[vertex, label = right : {$1001$}] (y7) at (1,-1.5) {}; \node[vertex, label = right : {$1111$}] (y8) at (1,-2) {}; \draw (x1) -- (y1) {}; \draw (x1) -- (y2) {}; \draw (x1) -- (y5) {}; \draw (x1) -- (y6) {}; \draw (x1) -- (y8) {}; \draw (x2) -- (y1) {}; \draw (x2) -- (y4) {}; \draw (x2) -- (y6) {}; \draw (x2) -- (y8) {}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{.5\textwidth} \begin{center} \begin{tikzpicture} \node[vertex, label = left : {$01$}] (x1) at (0,.25) {}; \node[vertex, label = left : {$10$}] (x2) at (0,-.25) {}; \node[vertex, label = right : {$1000$}] (y1) at (1,1.5) {}; \node[vertex, label = right : {$0100$}] (y2) at (1,1) {}; \node[vertex, label = right : {$0010$}] (y3) at (1,.5) {}; \node[vertex, label = right : {$0001$}] (y4) at (1,0) {}; \node[vertex, label = right : {$0111$}] (y5) at (1,-.5) {}; \node[vertex, label = right : {$1011$}] (y6) at (1,-1) {}; \node[vertex, label = right : {$1101$}] (y7) at (1,-1.5) {}; \node[vertex, label = right : {$1110$}] (y8) at (1,-2) {}; \draw (x1) -- (y1); \draw (x1) -- (y3); \draw (x1) -- (y5); \draw (x2) -- (y3); \draw (x2) -- (y5); \draw (x2) -- (y6); \draw (x2) -- (y8); \end{tikzpicture} \end{center} \end{minipage} Notice that samples in the left graph are concatenations of even parity bitstrings; samples in the right graph are concatenations of odd parity bitstrings. Let $\ket{\psi_2}\in \mathbb{C}^{\Sigma^2}\otimes \mathbb{C}^{\Sigma^{N-2}}$ denote the sum of the samples after having completed step 1, \begin{equation} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[] at (-2,1) {$\ket{\psi_2}=$}; \node[hugetensor,fill=red!10, minimum width=60mm,minimum height = 6mm] (v15) at (2.5,1) {}; \node[bigtensor] (j0) at (0,1.75) {}; \node[] (j1) at (1,2.5) {}; \node[] (j2) at (2,2.5) {}; \node[] (j3) at (3,2.5) {}; \node[] (j4) at (4,2.5) {}; \node[] (j5) at (5,2.5) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (n5) at (5,1) {}; \node[] (m0) at (0,2.5) {}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j2) -- (n2); \draw [shortin] (j3) -- (n3); \draw [shortin] (j4) -- (n4); \draw [shortin] (j5) -- (n5); \draw [witharrow] (m0) -- (j0); \end{tikzpicture} \end{equation} and consider the reduced density $\rho_2=tr_{\Sigma^{N-2}}|\psi_2\>\<\psi_2|$. The entries of its matrix representation are understood from the data in the graph. Choosing an ordering on the set $\Sigma^2$, we write $\rho_2$ as \begin{equation}\label{eq:rho2} \rho_2\;=\; \frac{1}{N_T} \begin{bmatrix} d_1 & s_e & 0 & 0 \\ s_e & d_2 & 0 & 0 \\ 0 & 0 & d_3 & s_o \\ 0 & 0 & s_o & d_4 \end{bmatrix} \end{equation} The number of training samples $N_T$ is the total number of edges in the graph. The diagonal entries are the degrees of vertices associated to prefixes: $d_1$ is the degree of 00, $d_2$ is the degree of 11, $d_3$ is the degree of 01, $d_4$ is the degree of 10. 
The off-diagonal entries are the number of paths of length 2 in each component of the graph. That is, $s_e$ is the number of suffixes that $00$ and $11$ have in common; $s_o$ is the number of suffixes that $01$ and $10$ have in common. If $T$ contains all samples then both graphs are complete bipartite and the nonzero entries of $\rho_2$ are all equal (to $2^{N-3}$, before the overall factor of $1/N_T$). In this case, $\rho_2$ is a rank 2 operator. It has two eigenvectors with nonzero eigenvalue---one from each block. This is the idealized scenario: every sequence is present in the training set, and the eigendecomposition $\rho_2=U_2D_2U_2^*$ yields \[\rho_2=\frac{1}{2}(|E_2\>\<E_2| \oplus |O_2\>\<O_2|)\] where $\ket{E_{2}}=\frac{1}{\sqrt{2}}(|00\> + |11\>)$ denotes the normalized sum of even prefixes of length $2$, and $\ket{O_{2}}=\frac{1}{\sqrt{2}}(|01\> + |10\>)$ denotes the normalized sum of odd prefixes of length $2$. As a matrix, $U_2$ has $\ket{E_2}$ and $\ket{O_2}$ along its rows. We think of it as a ``summarizer'': it projects a prefix onto an axis that can be identified with either $|E_2\>$ or $|O_2\>$ according to its parity, perfectly summarizing the information of that prefix required to understand which suffixes it is paired with. More generally, however, if $T \neq E^N$ then the reduced density $\rho_2$ may be full rank. In this case we choose the eigenvectors $\ket{E'_2}, \ket{O'_2}$ that correspond to the two largest eigenvalues of $\rho_2$. We assume these eigenvectors come from distinct blocks. This defines the tensor $U_2$, which as a matrix has $|E'_2\>$ and $|O'_2\>$ along its rows, where \begin{align*} \ket{E'_2}&=\cos\theta_2\ket{00} + \sin\theta_2\ket{11} \\ \ket{O'_2}&=\cos\phi_2\ket{01} + \sin\phi_2\ket{10} \end{align*} for some angles $\theta_2$ and $\phi_2$. These angles can be computed following the expression in \eqref{evects} for the eigenvectors: \[ \theta_2 = \arctan\left(\frac{2s_e}{\sqrt{G_e^2 + 4s_e^2} + G_e}\right) \quad\text{and}\quad \phi_2= \arctan\left(\frac{2s_o}{\sqrt{G_o^2 + 4s_o^2} + G_o}\right) \] Here, $G_e=d_1-d_2$ and $G_o=d_3-d_4$ denote the gaps between the diagonal entries in each block. The angles should be thought of as measuring the deviation from perfect learning in step 2: if $f=1$ then $G_e,G_o=0$ and so $\theta_2=\phi_2=\pi/4$, which implies $\ket{E'_2}=\ket{E_2}$ and $\ket{O'_2}=\ket{O_2}$. In this case, step 2 has worked perfectly. Note that this is not an if-and-only-if scenario. Even if $f<1$, the reduced density may \emph{still} have $\ket{E_2}$ and $\ket{O_2}$ as its eigenvectors. Indeed, this occurs whenever $G_e=G_o=0$ and $s_e,s_o\neq 0$. In that case, the eigenvectors of $\rho_2$ are the desired parity vectors $\ket{E_2},\ket{O_2}$, and the summarizer $U_2$ obtained is a true summarization tensor. But if either $G_e$ or $G_o$ is nonzero, then step 2 induces a summarization error, which we measure as the deviation of $\theta_2$ and $\phi_2$ from the desired $\pi/4$.
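As an illustration, the following NumPy sketch (ours; the name \texttt{block\_angle} and all variable names are illustrative) samples a fraction $f$ of the even-parity strings and computes the step-2 angles from the graph data, using the $\arctan$ expressions above.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, f = 10, 0.3

# All even-parity bitstrings of length N, then a random fraction f.
E = [s for s in product((0, 1), repeat=N) if sum(s) % 2 == 0]
T = [E[i] for i in rng.choice(len(E), int(f * len(E)), replace=False)]

def block_angle(p, q):
    """Angle of the top eigenvector of the 2x2 block for prefixes p, q."""
    suffixes = lambda a: {s[2:] for s in T if s[:2] == a}
    d1, d2 = len(suffixes(p)), len(suffixes(q))
    s = len(suffixes(p) & suffixes(q))   # paths of length 2
    G = d1 - d2
    return np.arctan2(2 * s, np.sqrt(G**2 + 4 * s**2) + G)

theta2 = block_angle((0, 0), (1, 1))   # even block
phi2 = block_angle((0, 1), (1, 0))     # odd block
print(theta2, phi2)  # both close to pi/4 for moderate f
\end{verbatim}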
The analysis described here is repeated at each subsequent step $k=3,\ldots,N$, with minor adjustments to the combinatorics. So let us now describe the general schema. In the $k$th step of the training algorithm, each sample is cut after the $k$-th bit and viewed as a prefix-suffix pair $s=(a,b)$ where $a\in \Sigma^k$ and $b\in \Sigma^{N-k}$. Let $|\psi_k\>\in \mathbb{C}^{\Sigma^k}\otimes \mathbb{C}^{\Sigma^{N-k}}$ denote the sum of the samples after having completed step $k-1$. \begin{center} \begin{tikzpicture}[x=1cm,y=1.5cm,baseline={(current bounding box.center)}] \node[] at (-2,1.75) {$\ket{\psi_3}=$}; \node[hugetensor,fill=red!10, minimum width=60mm,minimum height = 6mm] (v15) at (2.5,1) {}; \node[bigtensor] (j0) at (0,1.75) {}; \node[bigtensor] (j1) at (1,1.75) {}; \node[] (j2) at (2,2.5) {}; \node[] (j3) at (3,2.5) {}; \node[] (j4) at (4,2.5) {}; \node[] (j5) at (5,2.5) {}; \node[] (n0) at (0,1) {}; \node[] (n1) at (1,1) {}; \node[] (n2) at (2,1) {}; \node[] (n3) at (3,1) {}; \node[] (n4) at (4,1) {}; \node[] (n5) at (5,1) {}; \node[] (m1) at (1,2.5) {}; \draw [shortin] (j0) -- (n0); \draw [shortin] (j1) -- (n1); \draw [shortin] (j2) -- (n2); \draw [shortin] (j3) -- (n3); \draw [shortin] (j4) -- (n4); \draw [shortin] (j5) -- (n5); \draw[witharrow] (m1) -- (j1); \draw[thick] (j0) -- (j1); \end{tikzpicture} \end{center} and let $\rho_k:=\tr_{\Sigma^{N-k}}|\psi_k\>\<\psi_k|$ denote the reduced density on the prefix subsystem at step $k$. It is an operator on $B_{k-1}\otimes V_k$, where $B_{k-1}$ is a 2-dimensional space which may be identified with the span of the eigenvectors associated to the two largest eigenvalues of $\rho_{k-1}.$ As a matrix, $\rho_k$ is a direct sum of $2\times 2$ matrices, \begin{equation}\label{eq:rhok} \rho_k\;=\; \frac{1}{\tr(\rho_k)} \begin{bmatrix} e0 & s_e & 0 & 0\\ s_e & o1 & 0 & 0\\ 0 & 0 & e1 & s_o\\ 0 & 0 & s_o & o0 \end{bmatrix} \end{equation} We postpone a description of the entries until Section \ref{sec:comb}. But know that, as in the case when $k=2$, the upper and lower blocks contain combinatorial information about prefixes of even and odd parity, respectively. As before, we are interested in the eigenvectors $\ket{E_k'},\ket{O_k'}$ of largest eigenvalue contributed by each block. They define the tensor $U_k$, which as a matrix has $|E'_k\>$ and $|O'_k\>$ along its rows, and can be understood inductively. The eigenvectors contain combinatorial information from step $k$ along with data from step $k-1$. Let $\ket{E'_1}:=\ket 0$ and $\ket{O'_1} := \ket 1$. Then for $k\geq 2$ \begin{align*} \ket{E'_k}&=\cos\theta_k\ket{E'_{k-1}}\otimes \ket{0} + \sin\theta_k\ket{O'_{k-1}}\otimes \ket 1\\ \ket{O'_k}&=\cos\phi_k\ket{E'_{k-1}}\otimes \ket{1} + \sin\phi_k\ket{O'_{k-1}}\otimes \ket 0 \end{align*} where \begin{equation}\label{eq:angles} \theta_k = \arctan\left(\frac{2s_e}{\sqrt{G_e^2 + 4s_e^2}+G_e}\right) \quad \phi_k = \arctan\left(\frac{2s_o}{\sqrt{G_o^2 + 4s_o^2}+G_o}\right) \end{equation} Again, the angles are a measurement of the error accrued in step $k$. Significantly, no error is accrued when the gaps $G_e:=e0-o1$ and $G_o:=e1-o0$ are zero and the off-diagonals $s_e,s_o$ are non-zero, for then $\theta_k=\phi_k=\pi/4$. This outcome, or one close to it, is statistically favored for a wide range of training fractions. As a matrix, \[ U_k =\begin{bmatrix} \cos\theta_k & \sin\theta_k & 0 & 0\\ 0 & 0 & \cos\phi_k & \sin\phi_k \end{bmatrix} \] and so $U_k$ is akin to a map $B_{k-1}\otimes V_k\to B_k$ that combines previously summarized information from $B_{k-1}$ with new information from $V_k$. It then summarizes the resulting data by projecting onto one of two orthogonal vectors, which may be identified with $\ket{E'_k}$ or $\ket{O'_k}$, in the new bond space $B_k$.
\begin{center} \begin{tikzpicture} \node[bigtensor, label = above : {$U_3$}] (u) at (4.5,4) {}; \node[] (u0) at (3,4) {$|01\>$}; \node[] (u2) at (6,4) {$|E'_3\>$}; \node[] (n1) at (4.5,2.5) {$|1\>$}; \draw[witharrow] (n1)--(u); \draw[witharrow] (u)--(u2); \draw[witharrow] (u0) -- (u); \end{tikzpicture} \end{center} The true orientation of the arrows on $U_k$ is down-left, rather than up-right. But the vector spaces in question are finite-dimensional, and our standard bases provide an isomorphism between a space and its dual. That is, no information is lost by momentarily adjusting the arrows for the purposes of sharing intuition. In summary, this template provides a concrete handle on the tensors $U_k$ that comprise the MPS factorization of $|\psi\>$. \subsection{High-level summary} We close by summarizing the high-level ideas present in this under-the-hood analysis. At the $k$th step of the training algorithm one obtains a $4\times 4$ block diagonal reduced density matrix $\rho_k$. It is given in Equation \eqref{eq:rho2} in the case when $k=2$ and as in Equation \eqref{eq:rhok} when $k>2$. These matrices are obtained by tracing out the suffix subsystem from the projection $|\psi_k\>\<\psi_k|$, where $|\psi_k\>$ is the sum of the samples in the training set after having completed step $k-1$. Since $|\psi_k\>$ depends on the error obtained in step $k-1$, so does $\rho_k$. This error is defined by the angles $\theta_{k-1}$ and $\phi_{k-1}$. As shown in Equation \eqref{eq:angles}, these angles---and hence the error---are functions of the entries of the matrix representing $\rho_{k-1}$. So, the $k$th level density takes into account the errors accrued at each preceding step as well as combinatorial information in the present step. A partial trace computation thus directly leads to the matrix representation for $\rho_k$ given in Equation \eqref{eq:rhok}. Explicitly, the non-zero entries of the matrix are computed by Equations \eqref{eq:e0} and \eqref{eq:se}. With this, one has full knowledge of the matrix $\rho_k$ and therefore of its eigenvectors $|E_k'\>,|O_k'\>$. Written in the computational basis, they are of the form shown in Equation \eqref{evects}. These two eigenvectors then assemble to form the rows of the tensor $U_k$, when viewed as a $2\times 4$ matrix. This analysis gives a thorough understanding of the error propagated at each step of the algorithm, as well as of the final MPS $|\psi_{\text{MPS}}\>$. To measure the algorithm's performance, we begin by evaluating the inner product of this vector with an MPS decomposition of the target vector $|E_N\>$.
\begin{center} \begin{tikzpicture} \node[] at (-2,.5) {$\<E_N|\psi_{\text{MPS}}\> = $}; \node[bigtensor] (p0) at (0,1.5) {}; \node[bigtensor] (p1) at (1,1.5) {}; \node[bigtensor] (p2) at (2,1.5) {}; \node[bigtensor] (p3) at (3,1.5) {}; \node[bigtensor] (p4) at (4,1.5) {}; \node[bigtensor] (p5) at (5,1.5) {}; \node[bigtensor,fill = green!20] (e0) at (0,0) {}; \node[bigtensor,fill = green!20] (e1) at (1,0) {}; \node[bigtensor,fill = green!20] (e2) at (2,0) {}; \node[bigtensor,fill = green!20] (e3) at (3,0) {}; \node[bigtensor,fill = green!20] (e4) at (4,0) {}; \node[bigtensor,fill = green!20] (e5) at (5,0) {}; \draw[thick] (p0) -- (p1) -- (p2) -- (p3) -- (p4) -- (p5); \draw[thick] (e0) -- (e1) -- (e2) -- (e3) -- (e4) -- (e5); \draw[witharrow] (p0) -- (e0); \draw[witharrow] (p1) -- (e1); \draw[witharrow] (p2) -- (e2); \draw[witharrow] (p3) -- (e3); \draw[witharrow] (p4) -- (e4); \draw[witharrow] (p5) -- (e5); \end{tikzpicture} \end{center} The $k$th tensor comprising the decomposition of $|E_N\>$ is equal to $U_k$ when $\theta_k$ and $\phi_k$ are evaluated at $\pi/4.$ The contraction thus results in a sum of products of $\cos\theta_k,\sin\theta_k,\cos\phi_k,\sin\phi_k$ for $k=2,\ldots,N$. More concretely, for each even bitstring $s\in E^N$ the inner product $\<s|\psi_{\text{MPS}}\>$ is the square root of the probability of $s$. For now, we'll refer to it as the \emph{weight} $w(s):=\<s|\psi_{\text{MPS}}\>$ associated to the sample $s$. For each $s$, its weight $w(s)$ is a product of various $\cos\theta_k,\sin\theta_k,\cos\phi_k,\sin\phi_k$, the details of which are given in Section \ref{sec:comb}. The final overlap is then the sum \begin{equation}\label{eq:estimate} \<E_N|\psi_{\text{MPS}}\> = \frac{1}{\sqrt{2^{N-1}}}\sum_{s\in E^N}w(s) \end{equation} Now, suppose the training set consists of a fraction $f$ of the entire population. The entries of the reduced densities in \eqref{eq:rhok} are described combinatorially, as detailed in the next section. This makes it possible to make statistical estimates for the gaps $G_e$ and $G_o$ and the off-diagonal entries $s_o$ and $s_e$ in \eqref{eq:angles}. Therefore, we can make statistical predictions for the angles $\theta_k$ and $\phi_k$ and hence for the tensors $U_k$ comprising the trained MPS and the resulting generalization error. The results are plotted in Figure \ref{fig:experiments}, where we use the weighted Bhattacharya distance \begin{equation}\label{eq:bhatt} -\frac{1}{\sqrt{2^{N-1}}}\ln\left(\sum_{s\in E^N}w(s)\right) \end{equation} between the true population distribution and the distribution defined by either an experimentally trained MPS or our theoretical model, as a proxy for generalization error. The theoretical curve could, in principle, be improved by making more accurate statistical estimates for the combinatorics involved.
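The contraction pictured above is straightforward to carry out numerically. Below is a small self-contained sketch (ours, with the illustrative names \texttt{mps\_overlap} and \texttt{parity\_mps}) that contracts two MPSs site by site and, as a check, builds the exact rank-2 MPS for $|E_N\>$, whose bond index tracks the parity of the bits seen so far.
\begin{verbatim}
import numpy as np

def mps_overlap(A, B):
    """<A|B> for MPSs given as lists of (left, physical, right)
    tensors with bond dimension 1 at both ends."""
    E = np.ones((1, 1))  # contracted left environment
    for Ak, Bk in zip(A, B):
        E = np.einsum('ab,apc,bpd->cd', E, Ak.conj(), Bk)
    return E[0, 0]

def parity_mps(N):
    """Exact MPS for |E_N>: the bond index is the running parity."""
    mid = np.zeros((2, 2, 2))
    for l in range(2):
        for x in range(2):
            mid[l, x, l ^ x] = 1 / np.sqrt(2)
    first = mid[:1]                    # start with even running parity
    last = np.sqrt(2) * mid[:, :, :1]  # end even; fixes the norm
    return [first] + [mid] * (N - 2) + [last]

E16 = parity_mps(16)
print(mps_overlap(E16, E16))  # 1.0
\end{verbatim}
Replacing the $\pi/4$ entries in \texttt{parity\_mps} with learned angles gives site tensors against which $\<E_N|\psi_{\text{MPS}}\>$ can be evaluated in the same way.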
\begin{center} \begin{figure}[h] \begin{subfigure}[t]{0.7\textwidth} \includegraphics[scale=0.6]{bhatt1_trunc.png} \caption{The experimental average (orange) and theoretical prediction (blue).} \end{subfigure} % \begin{subfigure}[t]{0.7\textwidth} \includegraphics[scale=0.6]{bhatt2_trunc.png} \caption{A closer look for $0.15\leq f \leq 0.2.$} \end{subfigure} \caption{The experimental average (orange) and theoretical prediction (blue) of the weighted Bhattacharya distance between the experimentally learned distribution and the true population distribution, for bit strings of length $N=16$ and training set fractions $0<f\leq 0.2$.}\label{fig:experiments} \end{figure} \end{center} \subsection{Combinatorics of reduced densities}\label{sec:comb} We now describe the entries of the $k$th level reduced density in Equation \eqref{eq:rhok}. They depend on certain combinatorics in step $k$ as well as the error accumulated in the previous step. The latter has an inductive description. To start, observe that the parity of a prefix $a\in\Sigma^k$ is determined by its last bit, together with the parity of its first $k-1$ bits. The set $\Sigma^k$ thus partitions into four sets: \begin{align*} E0 &= \{a\in\Sigma^k:a=(e_{k-1},0) \text{ where $e_{k-1}\in E^{k-1}$}\}\\ O1 &= \{a\in\Sigma^k:a=(o_{k-1},1) \text{ where $o_{k-1}\in O^{k-1}$}\}\\ E1 &= \{a\in\Sigma^k:a=(e_{k-1},1) \text{ where $e_{k-1}\in E^{k-1}$}\}\\ O0 &= \{a\in\Sigma^k:a=(o_{k-1},0) \text{ where $o_{k-1}\in O^{k-1}$}\} \end{align*} By viewing the training set as a bipartite graph, one has a visual understanding of these sets: $E0$ contains all prefixes of even parity whose last bit is 0; $O1$ contains all prefixes of even parity whose last bit is $1$, and so on. In the example below with $k=3$, we use color to distinguish each set.
\begin{minipage}{.5\textwidth} \begin{center} \begin{tikzpicture} \node[vertex, label = left : {$00\;0$}] (x1) at (0,1.5) {}; \node[vertex, label = left : {$11\;0$}] (x2) at (0,1){}; \node[vertex, label = left : {$01\;1$}] (x3) at (0,.5){}; \node[vertex, label = left : {$10\;1$}] (x4) at (0,0){}; \node[vertex, label = right : {$000$}] (y1) at (1,1.5){}; \node[vertex, label = right : {$110$}] (y2) at (1,1){}; \node[vertex, label = right : {$101$}] (y3) at (1,.5){}; \node[vertex, label = right : {$011$}] (y4) at (1,0){}; \draw[OrangeRed,thick2] (x1) -- (y1) {}; \draw[OrangeRed,thick2] (x1) -- (y3) {}; \draw[OrangeRed,thick2] (x2) -- (y1) {}; \draw[OrangeRed,thick2] (x2) -- (y3) {}; \draw[OrangeRed,thick2] (x2) -- (y4) {}; \draw[ProcessBlue,thick2] (x3) -- (y1) {}; \draw[ProcessBlue,thick2] (x4) -- (y2) {}; \draw[ProcessBlue,thick2] (x4) -- (y4) {}; \node at (.5,1.75) {{\color{OrangeRed}$E0$}}; \node at (.5,-.25) {{\color{ProcessBlue}$O1$}}; \node at (-1.5,1.5) {${\color{gray}{\cos\theta_2}}$}; \node at (-1.5,1) {${\color{gray}{\sin\theta_2}}$}; \node at (-1.5,.5) {${\color{gray}{\cos\phi_2}}$}; \node at (-1.5,0) {${\color{gray}{\sin\phi_2}}$}; \end{tikzpicture} \end{center} \end{minipage} \begin{minipage}{.5\textwidth} \begin{center} \begin{tikzpicture} \node[vertex, label = left : {$00\;1$}] (x1) at (0,1.5) {}; \node[vertex, label = left : {$11\;1$}] (x2) at (0,1){}; \node[vertex, label = left : {$01\;0$}] (x3) at (0,.5){}; \node[vertex, label = left : {$10\;0$}] (x4) at (0,0){}; \node[vertex, label = right : {$100$}] (y1) at (1,1.5){}; \node[vertex, label = right : {$010$}] (y2) at (1,1){}; \node[vertex, label = right : {$001$}] (y3) at (1,.5){}; \node[vertex, label = right : {$111$}] (y4) at (1,0){}; \draw[ForestGreen,thick2] (x1) -- (y1); \draw[ForestGreen,thick2] (x1) -- (y2); \draw[ForestGreen,thick2] (x1) -- (y4); \draw[ForestGreen,thick2] (x2) -- (y4); \draw[YellowOrange,thick2] (x3) -- (y2); \draw[YellowOrange,thick2] (x3) -- (y4); \draw[YellowOrange,thick2] (x4) -- (y2); \draw[YellowOrange,thick2] (x4) -- (y4); \node at (.5,1.75) {{\color{ForestGreen}$E1$}}; \node at (.5,-.25) {{\color{YellowOrange}$O0$}}; \node at (-1.5,1.5) {${\color{gray}{\cos\theta_2}}$}; \node at (-1.5,1) {${\color{gray}{\sin\theta_2}}$}; \node at (-1.5,.5) {${\color{gray}{\cos\phi_2}}$}; \node at (-1.5,0) {${\color{gray}{\sin\phi_2}}$}; \end{tikzpicture} \end{center} \end{minipage} As shown, each prefix also has a weight that records its contribution to the error accumulated in previous steps. Concretely, we assign to each prefix $a\in \Sigma^k$ a weight $w(a)$, which is a product of $k-2$ terms. For $2\leq i \leq k-1,$ the $i$th factor of $w(a)$ is defined to be \begin{itemize} \item $\cos\theta_i$ if the parity of the first $i-1$ bits is even and the $i$th bit is 0 \item $\sin\theta_i$ if the parity of the first $i-1$ bits is odd and the $i$th bit is 1 \item $\cos\phi_i$ if the parity of the first $i-1$ bits is even and the $i$th bit is 1 \item $\sin\phi_i$ if the parity of the first $i-1$ bits is odd and the $i$th bit is 0 \end{itemize} For example, if $k=3$ then $w(011)=\cos\phi_2$. If $k=5$ then $w(01101)=\cos\theta_4\sin\theta_3\cos\phi_2$. These weights are naturally associated to each tensor. 
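The itemized rule above translates directly into code. Here is a short sketch (ours; the name \texttt{weight} is illustrative) that computes $w(a)$ for a prefix given dictionaries of step angles, reproducing the two examples just given.
\begin{verbatim}
import numpy as np

def weight(a, theta, phi):
    """w(a) for a prefix bitstring a, per the itemized rule; theta[i]
    and phi[i] are the step-i angles, used for i = 2, ..., len(a)-1."""
    w, parity = 1.0, int(a[0])      # parity of the first i-1 bits
    for i in range(2, len(a)):      # the i-th bit is a[i-1]
        bit = int(a[i - 1])
        if parity == 0 and bit == 0:
            w *= np.cos(theta[i])
        elif parity == 1 and bit == 1:
            w *= np.sin(theta[i])
        elif parity == 0 and bit == 1:
            w *= np.cos(phi[i])
        else:                        # parity 1, bit 0
            w *= np.sin(phi[i])
        parity ^= bit
    return w

theta = {i: np.pi / 4 for i in range(2, 5)}
phi = dict(theta)
print(weight("011", theta, phi))    # cos(phi_2)             = 0.7071...
print(weight("01101", theta, phi))  # cos sin cos at pi/4    = 0.3535...
\end{verbatim}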
For instance, recalling that each tensor $U_k$ is akin to a summarizer, one sees $w(01101)$ in the following way: \begin{center} \begin{tikzpicture} \node[bigtensor, label = above : {$U_2$}] (a) at (0,1) {}; \node[] (aL) at (-1.5,1) {$\ket{0}$}; \node[] (aR) at (2,1) {$\cos\phi_2\ket{01}$}; \node[] (aB) at (0,-.5) {$\ket{1}$}; \draw[witharrow] (aB)--(a); \draw[witharrow] (a)--(aR); \draw[witharrow] (aL) -- (a); \node[bigtensor, label = above : {$U_3$}] (b) at (4,1) {}; \node[] (bL) at (2.7,1) {}; \node[] (bR) at (6.5,1) {$\sin\theta_3\cos\phi_2\ket{011}$}; \node[] (bB) at (4,-.5) {$\ket{1}$}; \draw[witharrow] (bB)--(b); \draw[witharrow] (b)--(bR); \draw[witharrow] (bL) -- (b); \node[bigtensor, label = above : {$U_4$}] (c) at (9,1) {}; \node[] (cL) at (7.7,1) {}; \node[] (cR) at (12.2,1) {$\cos\theta_4\sin\theta_3\cos\phi_2\ket{0110}$}; \node[] (cB) at (9,-.5) {$\ket{0}$}; \draw[witharrow] (cB)--(c); \draw[witharrow] (c)--(cR); \draw[witharrow] (cL) -- (c); \end{tikzpicture} \end{center} We can now describe the entries of the reduced density defined in Equation \eqref{eq:rhok}. The first diagonal entry is \begin{equation}\label{eq:e0} e0 = \sum_{\text{suffixes }b}\left(\sum_{\substack{a\in E0 \\ (a,b)\in T}} w(a)\right)^2 \end{equation} and the other diagonals are defined similarly. If perfect learning occurs then $e0$ is, up to a normalizing constant, the sum of the squares of the degrees of each suffix, with respect to $E0$. For example, in the graph below $e0$ is proportional to $ 2^2 + 2^2 + 1^2=9$. \begin{center} \begin{minipage}{.5\textwidth} \begin{tikzpicture} \node[vertex, label = left : {$00\;0$}] (x1) at (0,1.5) {}; \node[vertex, label = left : {$11\;0$}] (x2) at (0,1){}; \node[vertex, label = left : {$01\;1$}] (x3) at (0,.5){}; \node[vertex, label = left : $10\;1$] (x4) at (0,0){}; \node[vertex, label = right : {$000$}] (y1) at (1,1.5){}; \node[vertex, label = right : {$110$}] (y2) at (1,1){}; \node[vertex, label = right : {$101$}] (y3) at (1,.5){}; \node[vertex, label = right : {$011$}] (y4) at (1,0){}; \draw[OrangeRed,thick2] (x1) -- (y1) {}; \draw[OrangeRed,thick2] (x1) -- (y3) {}; \draw[OrangeRed,thick2] (x2) -- (y1) {}; \draw[OrangeRed,thick2] (x2) -- (y3) {}; \draw[OrangeRed,thick2] (x2) -- (y4) {}; \draw[ProcessBlue,opacity=.25] (x3) -- (y1) {}; \draw[ProcessBlue,opacity=.25] (x4) -- (y2) {}; \draw[ProcessBlue,opacity=.25] (x4) -- (y4) {}; \node at (.5,1.75) {{\color{OrangeRed}$E0$}}; \node at (-1.5,1.5) {${\color{gray}{\cos\theta_2}}$}; \node at (-1.5,1) {${\color{gray}{\sin\theta_2}}$}; \node at (-1.5,.5) {${\color{gray}{\cos\phi_2}}$}; \node at (-1.5,0) {${\color{gray}{\sin\phi_2}}$}; \end{tikzpicture} \end{minipage} \end{center} In general, though, the summands will not be integers but rather products of weights. The off-diagonal entry in the even block of the reduced density is \begin{equation}\label{eq:se} s_e=\sum_{\text{suffixes }b}\left(\sum_{\substack{a\in E0, \;a'\in O1 \\ (a,b),(a',b)\in T}} w(a)\cdot w(a')\right) \end{equation} When perfect learning occurs, $s_e$ counts the number of paths of length 2, where now a path is comprised of one edge from $E0$ and one edge from $O1$. 
For example, in the graph below $s_e=3.$ \begin{center} \begin{tikzpicture} \node[vertex] (x1) at (0,1.5) {}; \node[vertex] (x2) at (0,1){}; \node[vertex] (x3) at (0,.5){}; \node[vertex] (x4) at (0,0){}; \node[vertex] (y1) at (1,1.5){}; \node[vertex] (y2) at (1,1){}; \node[vertex] (y3) at (1,.5){}; \node[vertex] (y4) at (1,0){}; \draw[OrangeRed,thick2] (x1) -- (y1); \draw[OrangeRed,thick2] (x2) -- (y1); \draw[OrangeRed,thick2] (x2) -- (y4); \draw[OrangeRed,opacity=.25] (x1) -- (y3); \draw[OrangeRed,opacity=.25] (x2) -- (y3); \draw[ProcessBlue,thick2] (x3) -- (y1); \draw[ProcessBlue,thick2] (x4) -- (y4); \draw[ProcessBlue,opacity=.25] (x4) -- (y2); \node at (1.5,.75) {$=$}; \node[vertex] (a1) at (2,1.5) {}; \node[vertex] (a2) at (2,1){}; \node[vertex] (a3) at (2,.5){}; \node[vertex] (a4) at (2,0){}; \node[vertex] (b1) at (3,1.5){}; \node[vertex] (b2) at (3,1){}; \node[vertex] (b3) at (3,.5){}; \node[vertex] (b4) at (3,0){}; \draw[OrangeRed,opacity=.25] (a1) -- (b1); \draw[OrangeRed,opacity=.25] (a1) -- (b3); \draw[OrangeRed,opacity=.25] (a2) -- (b1); \draw[OrangeRed,opacity=.25] (a2) -- (b3); \draw[OrangeRed,thick2] (a2) -- (b4); \draw[ProcessBlue,opacity=.25] (a3) -- (b1); \draw[ProcessBlue,opacity=.25] (a4) -- (b2); \draw[ProcessBlue,thick2] (a4) -- (b4); \node at (3.5,.75) {$+$}; \node[vertex] (c1) at (4,1.5) {}; \node[vertex] (c2) at (4,1){}; \node[vertex] (c3) at (4,.5){}; \node[vertex] (c4) at (4,0){}; \node[vertex] (d1) at (5,1.5){}; \node[vertex] (d2) at (5,1){}; \node[vertex] (d3) at (5,.5){}; \node[vertex] (d4) at (5,0){}; \draw[OrangeRed,thick2] (c1) -- (d1); \draw[OrangeRed,opacity=.25] (c2) -- (d1); \draw[OrangeRed,opacity=.25] (c2) -- (d3); \draw[OrangeRed,opacity=.25] (c2) -- (d4); \draw[ProcessBlue,thick2] (c3) -- (d1); \draw[ProcessBlue,opacity=.25] (c4) -- (d2); \draw[ProcessBlue,opacity=.25] (c4) -- (d4); \node at (5.5,.75) {$+$}; \node[vertex] (u1) at (6,1.5) {}; \node[vertex] (u2) at (6,1){}; \node[vertex] (u3) at (6,.5){}; \node[vertex] (u4) at (6,0){}; \node[vertex] (v1) at (7,1.5){}; \node[vertex] (v2) at (7,1){}; \node[vertex] (v3) at (7,.5){}; \node[vertex] (v4) at (7,0){}; \draw[OrangeRed,opacity=.25] (u1) -- (v1); \draw[OrangeRed,thick2] (u2) -- (v1); \draw[OrangeRed,opacity=.25] (u2) -- (v3); \draw[OrangeRed,opacity=.25] (u2) -- (v4); \draw[ProcessBlue,thick2] (u3) -- (v1); \draw[ProcessBlue,opacity=.25] (u4) -- (v2); \draw[ProcessBlue,opacity=.25] (u4) -- (v4); \node at (.5,1.75) {{\color{OrangeRed}$E0$}}; \node at (.5,-.25) {{\color{ProcessBlue}$O1$}}; \end{tikzpicture} \end{center} In general, however, $s_e$ will be a sum of products of weights. The expression for the off-diagonal $s_o$ in the odd block is similar to that in Equation \eqref{eq:se}. In summary, the theory behind the reduced densities and their eigenvectors gives us an exact understanding of the error propagated through each step of the training algorithm. We may then predict the Bhattacharya distance in \eqref{eq:bhatt} using statistical estimates of the expected combinatorics. This provides an accurate prediction based solely on the fraction $f$ of training samples used and the length $N$ of the sequences. \section{Experiments} The training algorithm was written in the ITensor library \cite{iTensor}; the code is available on Github. For a fixed fraction $0<f\leq0.2$ we run the algorithm on ten different datasets, each containing $N_T=f2^{N-1}$ bitstrings of length $N=16$. We then compare the average Bhattacharya distance in Equation \eqref{eq:bhatt} to the theoretical prediction. 
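For readers who want to reproduce the flavor of this experiment without ITensor, here is a compact dense-algebra sketch (ours; it uses a plain truncated-SVD sweep in place of the sample-based effective densities, and the standard form $-\ln\sum_s\sqrt{p(s)q(s)}$ of the Bhattacharya distance, so the numbers are illustrative rather than a reproduction of Figure \ref{fig:experiments}). We use $N=8$ so that the full $2^N$-dimensional state fits comfortably in memory.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 8
strings = list(product((0, 1), repeat=N))
even_idx = [i for i, s in enumerate(strings) if sum(s) % 2 == 0]
even = [strings[i] for i in even_idx]

def trained_distance(f):
    # Empirical state from a random fraction f of the even strings.
    psi = np.zeros(2 ** N)
    for i in rng.choice(len(even), int(f * len(even)), replace=False):
        psi[int("".join(map(str, even[i])), 2)] = 1.0
    psi /= np.linalg.norm(psi)
    # Sweep of truncated SVDs with bond dimension 2 gives the MPS.
    tensors, M, Dl = [], psi.reshape(2, -1), 1
    for _ in range(N - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, min(2, int((S > 1e-12).sum())))
        tensors.append(U[:, :r].reshape(Dl, 2, r))
        M, Dl = (np.diag(S[:r]) @ Vt[:r]).reshape(r * 2, -1), r
    tensors.append(M.reshape(Dl, 2, 1))
    # Renormalized Born amplitudes of the trained MPS.
    def amp(s):
        v = np.ones(1)
        for x, A in zip(s, tensors):
            v = v @ A[:, x, :]
        return v[0]
    amps = np.array([amp(s) for s in strings])
    amps /= np.linalg.norm(amps)
    # Bhattacharya distance to the uniform distribution on even strings.
    return -np.log(np.abs(amps[even_idx]).sum() / np.sqrt(len(even)))

for f in (0.05, 0.1, 0.2):
    print(f, np.mean([trained_distance(f) for _ in range(10)]))
\end{verbatim}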
To handle the angles $\theta_k$ and $\phi_k$ in the theoretical model, we make a few simplifying assumptions about the expected behavior of the combinatorics. First we assume $\theta_k = \phi_k$ for all $k$ since the combinatorics of both blocks of the reduced densities $\rho_k$ in (\ref{eq:rhok}) have similar behavior. We further assume the average angle $\theta$ is a function of the average off-diagonal $s_e$ and the average diagonal gap $G_e$ at the $k$th step, that is $\mathbb{E}[\theta_k(s_e,G_e)] = \theta_k(\mathbb{E}[s_e],\mathbb{E}[G_e])$ for all $k$. The expectation for $s_e$ is experimentally determined to be independent of $k$, and dependent on the fraction $f$ and bitstring length $N$ alone: $\mathbb{E}[s_e] = f\cdot N_T/4$ for all $k$. We approximate the expected gap $G_e$ at the $k$th step to be an experimentally determined function of $f$ and the expected gap $G_2=|d_1-d_2|$ of the diagonal entries of the reduced density defined at step 2 of the algorithm. Understanding the expected behavior of $G_2$ is similar to understanding the statistics of a coin toss. On average, one expects to flip the same number of heads and tails, and yet the expectation for their difference is non-zero. The distribution for $G_2$ is similar, but a little different: \begin{align*} \mathbb{E}[G_2]=\sum_{d_1}|2d_1-r|\frac{\binom{n}{d_1}\binom{n}{r-d_1}}{\binom{2n}{r}} \end{align*} where $r=d_1+d_2=N_T/2$ and $n=2^{N-3}$ is the number of even parity bitstrings of length $N-2$. The plots in Figure \ref{fig:experiments} compare the theoretical estimate against the experimental average. \section{Conclusion} Models based on tensor networks open interesting directions for machine learning research. Tensor networks can be viewed as a sequence of related linear maps, which, by acting together on a very high-dimensional space, allow the model to be arbitrarily expressive. The underlying linearity and powerful techniques from linear algebra allow us to pursue a training algorithm where we can look ``under the hood'' to understand each step and its consequences for the ability of our model to reconstruct a particular data set, the even-parity data set. Our work also highlights the advantages of working in a probability formalism based on the 2-norm. This is the same formalism used to interpret the wavefunction in quantum mechanics; here we use it as a framework to treat classical data. Density matrices naturally arise as the 2-norm analogue of marginal probability distributions familiar from conventional 1-norm probability. Marginals still appear as the diagonal of the density matrix. Unlike marginals, the density matrices we use hold sufficient information to reconstruct the entire joint distribution. Our training algorithm can be summarized as estimating the density matrix from the training data, then reconstructing the joint distribution step-by-step from these density matrix estimates. The theoretical predictions we obtained for the generative performance of the model agree well with the experimental results. Note that care is needed to compare these results, since the theoretical approach involves averaging over all possible training sets to produce a single typical weight MPS, whereas the experiments produce a different weight MPS for each training-set sample. In the near future, we look forward to extending our approach to other measures of model performance and behavior, and certainly other data sets as well.
More ambitiously, we hope this work points the way to theoretically sound and robust predictions of machine learning model performance based on empirical summaries of real-world data. If such predictions can be obtained for training algorithms that also produce state-of-the-art results, as tensor networks are starting to do, we anticipate this will continue to be an exciting program of research. \bibliographystyle{unsrt}
\section{Introduction} The literature contains considerable discussion of the likelihood of nearby supernova events: their frequency has been estimated (Shklovsky \cite{shkl}), and their possible impacts on the biosphere have been considered (Ruderman \cite{rude}; Ellis \& Schramm \cite{es}; Ellis, Fields, \& Schramm \cite{efs}). Within the considerable uncertainties, it is conceivable that there may have been one or more nearby supernova events during the Phanerozoic era. This has prompted discussion of their isotopic signatures and kill radii, and speculation on their possible role in triggering biological mass extinctions. Supernova events that were sufficiently close to have left some terrestrial isotope signature, but far enough not to have triggered a mass extinction, are expected to have been more frequent. In this connection, various authors have noted the enhancement of \be10 in ice cores and marine sediments $\sim 35 \,{\rm kyr} \,{\sc bp}$ ($\,{\sc bp} =$ before the present). In particular, Ellis, Fields, \& Schramm \pcite{efs} discussed the possibility that this might have arisen from the supernova event that gave birth to the Geminga pulsar, and proposed looking at deep-ocean sediments, suggesting that the long-lived isotopes \i129, \sm146, and \iso{Pu}{244}, as well as the shorter-lived \be10, \iso{Al}{26}, \iso{Cl}{36}, \mn53, \fe60, and \iso{Ni}{59}, might provide geological evidence of a nearby supernova event at any time during the past $10^8$~yr or more. In the light of this proposal, the recent deep-ocean ferromanganese crust measurements (Knie et al.\ \cite{fe60}) of \fe60 ($t_{1/2} = 1.5 \,{\rm Myr}$) and \mn53 ($t_{1/2} = 3.7 \,{\rm Myr}$) are very exciting. Whilst \mn53 is known to have a significant natural background from interplanetary dust accretion, the expected background for \fe60 is significantly lower than the measured levels. Knie et al.\ discuss possible alternative origins for the apparent excess of \fe60, and argue that their results can only be understood in terms of a supernova origin for the \fe60. As we discuss in more detail later, their data suggest a nearby supernova event within $\sim 30 \,{\rm pc}$ during the past few $\,{\rm Myr}$, a significantly longer timescale than had been discussed in connection with the \be10 signal. The data are very new and the statistics are limited, with only 63 \fe60 and \mn53 nuclei detected in total. However, if these data are confirmed, together with their extraterrestrial interpretation, the implications are profound. They would constitute the first direct evidence that a supernova event occurred near earth within relatively recent geological time, with detectable effects on our planet. The purpose of this paper is to discuss the implications of the data in more detail. We first re-estimate the cosmogenic backgrounds to both the \fe60 and \mn53 signals observed by Knie et al.\ \pcite{fe60}. We confirm their estimates that the \fe60 signal is higher than plausible cosmogenic sources, whereas the \mn53 signal is compatible with such backgrounds. We emphasize the desirability of understanding better the sedimentation history of the last $20$~Myr, and seeking confirmation in other ferromanganese crusts and elsewhere that the apparent \fe60 excess is global. We stress in particular the desirability of finding an earlier layer in which the apparent \fe60 excess is absent. We then use the available data to derive constraints on the possible supernova event, including its time and distance.
We also discuss the possible implications for supernova nucleosynthesis and review other isotope signals - such as \be10, \i129 and \sm146 - that could be used to confirm the supernova diagnosis and deepen its interpretation. Finally, we address the possible implications of this apparent supernova event for the terrestrial biosphere. We revisit the effects of the expected cosmic-ray flux on the earth's ozone layer, and the resulting enhanced penetration of the atmosphere by solar ultraviolet radiation. We speculate whether its effects could be related to either of the mini-extinctions that apparently occurred during the Middle Miocene and Pliocene epochs. We also raise the possibility of other supernova-induced effects on the biosphere, in particular the possibility that the enhanced cosmic-ray flux might seed extra cloud cover, potentially triggering global cooling: a ``cosmic-ray winter''. \section{Data} Knie et al.\ \pcite{fe60} studied the isotopic composition of a deep ocean sample of hydrogenic ferromanganese crust. This is material which has slowly precipitated from seawater onto one of several specific kinds of substrate. Using accelerator mass spectrometry, Knie et al.\ detected both live \fe60 and live \mn53 in three layers at different depths, spanning 0 to 20 mm. They estimate that these depths correspond to times spanning 0 to 13.4 Myr $\,{\sc bp}$. Their data are summarized below. From the outset of our analysis, we emphasize caution, since the data may yet turn out to be a false alarm. We re-analyze the consistency of the data with conventional cosmogenic backgrounds before exploring the consequences if their interpretation in terms of a nearby supernova explosion is confirmed. We emphasize the necessity for follow-up data to confirm or reject this tantalizing scenario. \begin{table}[htb] \caption{ \protect{Ferromanganese crust data from Knie et al.\ \pcite{fe60}}.} \begin{tabular}{|cc|ccc|ccc|} \hline\hline depth & age $\Delta t_\ell$ & \mn53 & $\phi_{53}(\Delta t_\ell)$ & $N_{53}(\Delta t_\ell)$ & \fe60 & $\phi_{60}(\Delta t_\ell)$ & $N_{60}(\Delta t_\ell)$ \\ mm & \,{\rm Myr} \,{\sc bp}& events & ${\rm cm}^{-2} \,{\rm Myr}^{-1}$ & ${\rm cm}^{-2}$ & events & ${\rm cm}^{-2} \,{\rm Myr}^{-1}$ & ${\rm cm}^{-2}$ \\ \hline\hline $0-3$ & $0-2.8$ & 26 & $2.6 \times 10^{8}$ & $5.7 \times 10^{8}$ & 14 & $1 \times 10^{6}$ & $1.6 \times 10^{6}$ \\ $5-10$ & $3.7-5.9$ & 6 & $6.4 \times 10^{8}$ & $5.8 \times 10^{8}$ & 7 & $7 \times 10^{6}$ & $1.8 \times 10^{6}$ \\ $10-20$ & $5.9-13.4$ & 7 & $4.4 \times 10^{8}$ & $6.0 \times 10^{8}$ & 2 & $9 \times 10^{6}$ & $1.3 \times 10^{6}$ \\ \hline\hline \end{tabular} \label{tab:data} \end{table} Knie et al.\ quantify their detection in terms of the flux $\phi_i(\Delta t_\ell)$ deposited in layer $\ell$ of the crust, which they infer as follows. Using an independently determined crustal growth rate (which varies from $1-2\,{\rm mm}\,{\rm Myr}^{-1}$), Knie et al.\ translate the depth of each layer into time intervals $\Delta t_\ell = (t_{\ell,{\rm i}},t_{\ell,{\rm f}})$ before the present. The fluxes are calculated given the number of detected events in each layer, corrected for radioactive decay, and assuming a constant deposition during each time interval. The reported fluxes appear in Table \ref{tab:data}. They found that the inferred fluxes of \fe60 increase as one goes back in time, whereas the inferred fluxes of \mn53 are less variable.
Since both radioisotopes are found in all three layers, whereas a nearby supernova would naively be expected to have contaminated at most one layer, one might conclude that the background must be significant for both isotopes. However, as we discuss below, astrophysical effects might have spread out the deposition of the signal, and one should not ignore the possibility of some terrestrial mixing effect such as bioturbation. Knie et al.\ estimate that the \mn53 background is large and do not use this as the primary indicator of a supernova signal. In \S \ref{sec:bgnd}, we explicitly calculate the expected background levels. In our analysis below, we find it useful to express the observations in terms of the (present-day, uncorrected) surface density $N_i^{\rm obs}(\Delta t_\ell)$ detected in crust layer $\ell$. To extract this from the reported fluxes one simply inverts the procedure used to derive the fluxes \begin{eqnarray} N_i^{\rm obs}(\Delta t_\ell) & = & \int_{t_{\ell,{\rm i}}}^{t_{\ell,{\rm f}}} \ dt \ \phi_i(t) \ e^{-t/\tau_i} \\ & = & \left[ \exp\left({-\frac{t_{\ell,{\rm i}}}{\tau_i}}\right) - \exp\left({-\frac{t_{\ell,{\rm f}}}{\tau_i}}\right) \right] \ \tau_i \ \phi_i(\Delta t_\ell) \end{eqnarray} The inferred surface densities also appear in Table \ref{tab:data}. Note that, although the inferred flux is strongly varying with time, the present surface density is quite constant. We examine later the reasons why the \fe60 signal is seen in multiple crust layers, in apparent conflict with the simplest picture of punctuated deposition of material after a nearby supernova explosion. As noted by Knie et al., the observed flux of material into the ferromanganese crust is not necessarily the same as the mean flux of material averaged over the globe. The largest difference between the two arises from the reduced uptake of both Mn and Fe into the crust, i.e., the efficiency for each of these elements to be deposited onto the crust is not perfect, and thus the flux onto the crust is lower than the flux deposited over the earth's surface. Consequently, the surface densities in the crust and globally are related by \begin{equation} \label{eq:reduced} N_i^{\rm obs} = f_i \ N_i^{\oplus} \end{equation} where $N_i^{\oplus}$ is the global surface density deposited, and $f_i < 1$ accounts for the reduced uptake. Thus one must reduce the theoretically expected surface density, $N_i^{\oplus}$, by the uptake factor $f_i$ in order to compare with the observations. In our calculations, we use the fiducial values suggested by Knie et al., who recommend approximate values of $f_{53} \sim 1/20$, and $f_{60} \sim f_{53}/5 \sim 1/100$, which are based in part on estimates of the \mn53 background. We find that these estimates have significant uncertainties, which translate directly into corresponding uncertainties in $f_{53}$, $f_{60}$, and finally $N_{60}^{\oplus}$. \section{Interpreting \fe60 in the Nearby Supernova Hypothesis} Ellis, Fields \& Schramm \pcite{efs} estimated the terrestrial deposition of observable long-lived $\beta$-unstable nuclei. Since the deposition rate is a few $\,{\rm mm}\,{\rm Myr}^{-1}$, any radioisotopic signature should be temporally isolated, unless there is some astrophysical effect or disturbance of the deposited layers that could smear out the time resolution. However, even if that is the case, one can still constrain the deposited fluence $F_i$ of the radioisotopes and the time since deposition.
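As a numerical check on Table \ref{tab:data}, the conversion above from reported fluxes to present-day surface densities is easily reproduced. A minimal Python sketch for the \fe60 column (using the layer ages and fluxes of Knie et al.\ and $\tau_{60}=t_{1/2}/\ln 2$) follows; the resulting densities match the tabulated $N_{60}$ values to within the rounding of the quoted fluxes. \begin{verbatim}
import math

TAU_60 = 1.5 / math.log(2)  # Fe-60 mean life in Myr (t_half = 1.5 Myr)

def surface_density(phi, t_i, t_f, tau):
    """Present-day surface density N^obs (atoms/cm^2) from a constant
    flux phi (atoms/cm^2/Myr) deposited between t_i and t_f Myr BP."""
    return (math.exp(-t_i / tau) - math.exp(-t_f / tau)) * tau * phi

# (t_i, t_f, phi_60) for the three crust layers of Table 1
layers = [(0.0, 2.8, 1e6), (3.7, 5.9, 7e6), (5.9, 13.4, 9e6)]
for t_i, t_f, phi in layers:
    N = surface_density(phi, t_i, t_f, TAU_60)
    print(f"{t_i:4.1f}-{t_f:4.1f} Myr BP: N_60 = {N:.2e} atoms/cm^2")
\end{verbatim}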
If one measures a surface density $N_i$ of atoms per unit area, deposited at a time $t$ before the present, then \begin{equation} \label{eq:pres_dens} N_i (t) = \frac{1}{4} \ F_i \ e^{-t/\tau_i} \end{equation} where we assume that the fall-out on the earth's surface is isotropic, the factor of $1/4$ is the ratio of the area of the earth's shadow ($\pi R_\oplus^2$) to the surface area of the earth ($4 \pi R_\oplus^2$), and $\tau_i$ is the mean life of $i$. If one assumes an isotropic ejection of supernova debris, the deposited fluence is in turn directly related to the supernova distance $D$ and the yield: \begin{equation} \label{eq:dep_flue} F_i = \frac{M_i/A_i m_p}{4 \pi D^2} \end{equation} where the mass ejected in species $i$ is $M_i$. Note that we have assumed for simplicity that essentially no decays occur as the ejecta is transported to the earth, i.e., that the transit time $\delta t \ll \tau_i$. As we expect $\delta t \mathrel{\mathpalette\fun <} {\rm a~few} \times 10^5 \,{\rm yr}$ for a supernova blast, this is an excellent approximation within the accuracy of our calculation. Combining \pref{eq:pres_dens} and \pref{eq:dep_flue}, we have \begin{eqnarray} \label{eq:master} N_i (t) & = & \frac{M_i/A_i m_p}{16 \pi D^2} \ e^{-t/\tau_i} \\ \nonumber & = & 4.6 \times 10^{8} \ e^{-t/\tau_i} \ {\rm cm}^{-2} \ \left( \frac{A_i}{60} \right)^{-1} \; \left( \frac{M_i}{10^{-5} \hbox{$M_{\odot}$}} \right) \; \left( \frac{D}{30 \,{\rm pc}} \right)^{-2} \end{eqnarray} We re-emphasize that the surface density calculated here assumes (1) an isotropic explosion, (2) isotropic fall-out on the earth's surface, and (3) that the incorporation of the debris into the crust or sediment that is sampled is faithful (i.e., no chemical fractionation and 100\% uptake). The real situation is likely to violate all of these assumptions at some level. In particular, Knie et al.\ note that fractionation and reduced uptake effects have already been observed and are large. Such effects must be taken into account before one can use \pref{eq:master} to compare theory and observation. If one can indeed deduce the supernova contribution to the abundance of {\em one} radioisotope in a sediment or crust, then one can determine the supernova distance with the addition of two other inputs. That is, with the observations $N_i^{\rm obs}$ in hand, one can solve \pref{eq:master} for $D$, given estimates of the yield $M_i$ and the deposition epoch $t_{\rm SN}$, as determined by the depth of the supernova layer in the crust: \begin{equation} \label{eq:D_SN} D = e^{-t_{\rm SN}/2 \tau_i} \ \sqrt{ \frac{f_i M_i}{16 \pi A_i m_p N_i^{\rm obs}} } \end{equation} where we have allowed for the reduced uptake as in \pref{eq:reduced}. If one can deduce the supernova depositions of {\em two} radioisotopes $i$ and $j$ (where $\tau_i < \tau_j$), then one can use \pref{eq:master} with the combination of the yields and the observed $N_i$ and $N_j$ to get not only $D$ but also an independent estimate of $t$, as follows: \begin{equation} \label{eq:t_SN} t_{\rm SN} = \frac{1}{ \tau_i^{-1} - \tau_j^{-1} } \ \ln \left( \frac{A_j}{A_i} \, \frac{f_i}{f_j} \, \frac{N_j^{\rm obs}}{N_i^{\rm obs}} \, \frac{M_i}{M_j} \right) \end{equation} This value is derived independently of the value of $t$ inferred from the depth of the supernova-enhanced layer in the crust. The two values can thus be compared as a consistency check.
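To make Equations \pref{eq:D_SN} and \pref{eq:t_SN} concrete, the short sketch below evaluates them using the summed surface densities of Table \ref{tab:data} together with the fiducial inputs adopted elsewhere in this paper ($f_{60}=1/100$, $f_{53}=1/20$, $M_{60}=10^{-5}\hbox{$M_{\odot}$}$, and a \mn53/\fe60 yield ratio of 20); it is meant only to illustrate the arithmetic behind the numbers quoted in the following sections. \begin{verbatim}
import math

PC_CM = 3.086e18            # parsec in cm
MSUN_G = 1.989e33           # solar mass in g
M_P = 1.6726e-24            # proton mass in g
TAU60 = 1.5 / math.log(2)   # Fe-60 mean life, Myr
TAU53 = 3.7 / math.log(2)   # Mn-53 mean life, Myr

# Summed (uncorrected) crust surface densities from Table 1, atoms/cm^2
N60 = 1.6e6 + 1.8e6 + 1.3e6
N53 = 5.7e8 + 5.8e8 + 6.0e8

f60, f53 = 1 / 100, 1 / 20  # fiducial uptake factors
M60 = 1e-5 * MSUN_G         # fiducial ejected Fe-60 mass, g
M53 = 20 * M60              # fiducial Mn53/Fe60 yield ratio of 20

# Eq. (D_SN) at t_SN = 0: the maximum supernova distance
D = math.sqrt(f60 * M60 / (16 * math.pi * 60 * M_P * N60)) / PC_CM
print(f"D_max ~ {D:.0f} pc")

# Eq. (t_SN) with i = Fe-60, j = Mn-53, both taken as signal
arg = (53 / 60) * (f60 / f53) * (N53 / N60) * (M60 / M53)
t_sn = math.log(arg) / (1 / TAU60 - 1 / TAU53)
print(f"t_SN ~ {t_sn:.1f} Myr BP")
\end{verbatim} Running the sketch gives $D_{\rm max}\sim 30$ pc and $t_{\rm SN}\sim 4.3$ Myr $\,{\sc bp}$, matching the values derived below.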
For the estimates below, we adopt the supernova yields from Woosley \& Weaver \pcite{ww}, one of the most extensive studies to date of supernova nucleosynthesis. Woosley \& Weaver tabulate isotopic yields for nuclei with $A<66$ over a range in progenitor masses and progenitor metallicity. Since the putative supernova would have occurred close to the Sun, we adopt the yields for solar metallicity. We note that, whilst the \mn53 yield is expected to grow as a fairly smooth function of the progenitor mass, the \fe60 yields are not. Consequently, the \mn53/\fe60 ratio varies widely and non-monotonically. The maximum is $\mn53/\fe60 \simeq 20$ at $20 \hbox{$M_{\odot}$}$, and the minimum is $\mn53/\fe60 \simeq 0.6$ at $13 \hbox{$M_{\odot}$}$. This introduces an additional uncertainty, since the mass of the nearby supernova is unknown. The initial mass function favors stars in the $11-20 \hbox{$M_{\odot}$}$ range, whose $\mn53/\fe60$ ratios span this range. We adopt the fiducial yield $M_{60} = 10^{-5} \hbox{$M_{\odot}$}$, corresponding to an intermediate value, recognizing that this is clearly somewhat uncertain. \subsection{Backgrounds} \label{sec:bgnd} There are several potential contributions to the background. (1) Cosmogenic production, namely the spallative production of radioisotopes due to the steady cosmic-ray flux into the earth's atmosphere, was discussed in detail in Ellis, Fields \& Schramm. We have repeated that analysis using the cross section for \fe60 production via spallation in $p+\kr84$, as reported in Knie et al.\ \pcite{fe60}. We agree with these authors that the \fe60 from this source is several orders of magnitude below the observations. (2) A related source is {\it in situ} production, via the penetrating muon and neutron flux. We have estimated this using the calculations by Lal \& Peters \pcite{lp}, and find that this source is negligibly small. (3) In the absence of a strong cosmogenic component, the dominant contribution to the background of both elements is the influx of extraterrestrial material, e.g., dust and meteorites, onto the earth. This material has been exposed to cosmic rays en route, and thus also contains the resulting spallogenic species at some level. To compute the radioisotope contribution of infalling material, one must first determine the present rate $J$ of meteoric mass accretion onto the earth. This quantity is difficult to measure, and past estimates have spanned a full six orders of magnitude, as seen in the tabulation of Peucker-Ehrenbrink (\cite{p-e}). However, recent measurements using different techniques have shown a convergence. Love \& Brownlee \pcite{lb} have inferred the micrometeor ($\mathrel{\mathpalette\fun <} 10^{-4} \,{\rm g}$) mass spectrum and flux from the cratering patterns on the exposed surfaces of the Long Duration Exposure Facility satellite, and have reported a total accretion rate of $J = (4 \pm 2) \times 10^{10} \,{\rm g} \,{\rm yr}^{-1}$ due to micrometeor infall. Love \& Brownlee report that these objects have a mass spectrum which peaks around $1.5 \times 10^{-5} \,{\rm g}$, corresponding to a diameter $\sim 200 \, \mu {\rm m}$, and have an infall rate which probably dominates the mass accretion on short timescales, with large (multi-ton) impacts possibly contributing a similar net influx on longer timescales. An independent measure of extraterrestrial dust accretion comes from the analysis of osmium concentrations and isotopic ratios in oceanic sediments.
Recent inferred accretion rates of $(3.7\pm 1.3) \times 10^{10} \,{\rm g} \,{\rm yr}^{-1}$ from Peucker-Ehrenbrink (\cite{p-e}), and of $(4.9-5.6)\times 10^{10} \,{\rm g} \,{\rm yr}^{-1}$ from Esser \& Turekian \pcite{et} agree with each other and with the Love \& Brownlee result. We adopt a fiducial rate of $J = 4 \times 10^{10} \,{\rm g} \,{\rm yr}^{-1}$, which we expect to be accurate to within 50\%. With the meteor accretion rate in hand, we now estimate the cosmogenic background by adopting the following simplified picture. In general, some fraction of the infalling material does not impact the earth directly as a meteorite, but is mixed into the atmosphere. This ``well-mixed'' fraction includes all of the smallest bodies (micrometeorites), and the portion of the larger bodies which is vaporized during the descent\footnote {The vaporized material comprises the outermost layers of the falling material. For the larger objects these are also the regions with the highest concentration of spallogenic material.}. Since the accretion rate includes micrometeorites and dust, we assume that all such material is indeed well-mixed. We further assume that it is deposited isotropically on the surface of the earth. The total isotropic mass flux of incoming material is thus \begin{equation} \dot{\Sigma} = \frac{J}{4 \pi R_\oplus^2} \end{equation} If the average mass fraction of radioisotope $i$ in the well-mixed infalling material is $X_i$, then the isotropic mass flux of $i$ is just $\dot{\Sigma}_i = X_i \dot{\Sigma}$, and the number flux of $i$ is \begin{eqnarray} \nonumber \Phi_i & = & \frac{X_i \dot{\Sigma}}{A_i m_p} \\ \label{eq:meteor} & = & 4.7 \times 10^{21} \ \frac{X_i}{A_i} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \ \left( \frac{J}{4 \times 10^{10} \ \,{\rm g} \, \,{\rm yr}^{-1}} \right) \end{eqnarray} The problem now reduces to finding $X_i$. For \mn53, we use the results of Michel et al.\ \pcite{mich}, who calculate the production of cosmogenic nuclides in meteoroids by cosmic-ray protons. These calculations are tabulated in Michel et al.\ \pcite{mich} for different kinds of meteoroids as functions of depth and size. We use the zero-depth values, since these correspond to the smallest and most common objects. Michel et al.\ express their results in terms of the specific activity $\Gamma_i$, i.e., the decay rate per unit mass of iron. Finally, we take the infalling material to have the iron mass fraction $X_{\rm Fe} = 0.19$, as found in C1 carbonaceous chondrites (Anders \& Grevesse \cite{ag}). Then, in a meteorite with such a mass fraction of iron, we have $X_i = m_i \tau_i \Gamma_i X_{\rm Fe}$, and so \begin{equation} X_{53} = 1.9 \times 10^{-11} \ \left( \frac{X_{\rm Fe}}{0.19} \right) \ \left( \frac{\Gamma_{53}}{400 \, {\rm dpm} \, {\rm kg \, Fe}^{-1}} \right) \end{equation} Using this, we estimate a background flux of \mn53 of \begin{equation} \Phi_{53} = 1.7 \times 10^{9} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \end{equation} For comparison, Bibron et al.\ \pcite{bib} report a measurement of \mn53 in antarctic snow which implies \begin{equation} \Phi_{53}^{\rm obs} = (6.1 \pm 1.4) \times 10^{9} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \end{equation} whereas Imamura et al.\ \pcite{imam} measured \mn53 in ocean sediments and found \begin{equation} \Phi_{53}^{\rm obs} = (2.0 \pm 0.9) \times 10^{9} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \end{equation} The differences in these results highlight the large systematic errors in these estimates.
However, within these uncertainties, we find our estimate to be in good agreement with the data. The \fe60 background is lower because \fe60 lies above the iron peak, thus reducing the abundance of the possible target nuclei. Because of this and the shorter lifetime of \fe60, the data on \fe60 in meteors are much sparser, and we know of just two relevant results in the literature. Early work by Goel \& Honda \pcite{gh} detected \fe60 in the Odessa meteorite with a specific activity of $\Gamma_{60} = 0.9 \pm 0.2 \, {\rm dpm} \, {\rm kg}^{-1}$, for a sample with 7\% Ni and 91\% Fe, or $\Gamma_{60}^{\rm Ni} = 13 \pm 3 \, {\rm dpm} \, {\rm kg \, Ni}^{-1}$. This report can be used to make a background estimate, as we did for \mn53. Using the specific activity of Goel \& Honda and a Ni mass fraction $X_{\rm Ni} = 0.011$, we would find an \fe60 flux of \begin{equation} \Phi_{60} = 1.3 \times 10^{6} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \end{equation} This can be compared with the value we obtain from the \fe60/\mn53 ratio reported by Knie et al. for the Dermbach meteorite, which implies that $\Gamma_{60}^{\rm Ni} = 1.2 \times 10^{-2} \Gamma_{53}^{\rm Fe}$. For our adopted $\Gamma_{53}^{\rm Fe}$, this gives $\Gamma_{60}^{\rm Ni} = 5 \, {\rm dpm} \, {\rm kg \, Ni}^{-1}$, and \begin{equation} \Phi_{60} = 0.5 \times 10^{6} \ {\rm atoms} \ {\rm cm}^{-2} \ \,{\rm Myr}^{-1} \end{equation} In order to compare with the levels measured by Knie et al.\ in the ferromanganese crust, one must reduce the total \fe60 flux by the uptake factor $f_{60} \sim 1/100$ discussed earlier (\ref{eq:reduced}). Thus the expected background \fe60 is at least two orders of magnitude smaller than the observed level in the crust. Knie et al.\ argue, however, that the \mn53 and \fe60 interstellar cosmogenic production mechanisms differ significantly. Namely, they argue that while \mn53 comes primarily from spallation of Fe nuclei by high-energy Galactic cosmic rays, \fe60 derives mostly from reactions on Ni induced by secondary neutrons. These secondary neutrons would be more abundant in the interiors of large meteors than in micrometeors or interstellar dust. In the absence of secondary neutrons, the \fe60 production by protons alone is much smaller, leading to a lower specific activity $\Gamma_{60}$ and finally a lower flux on earth. This scenario can be tested experimentally, by measuring the \fe60 depth profile in meteorites. An effect omitted from this simple calculation of the meteoritic background is that, if a nearby supernova occurs, it leads to an enhanced cosmic-ray flux not only on the earth, but also in the entire solar system, including, e.g., the material which falls as meteorites. Thus, one expects the meteoritic ``background'' to in fact include also some supernova ``signal,'' and thus to undergo an increase which lasts for a timescale of order the species' lifetime. For the present data, this is not a serious issue. The enhanced cosmic-ray flux lasts for at most a few kyr, whereas the meteoritic sources average over several Myr, so the increase in the measured fluence for any given layer is only perceptible if the cosmic-ray flux enhancement is very large: the needed increase is a factor $\sim 10^3$, which is about an order of magnitude more than expected. However, if the signal could be measured with a much finer time resolution of order $10-100$ kyr, then this effect could be significant.
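Both meteoritic background estimates can be reproduced from Equation \pref{eq:meteor} in a few lines; the sketch below uses the fiducial quantities quoted above (accretion rate $J$, specific activities $\Gamma_i$, and target mass fractions) and recovers $\Phi_{53}\sim 1.7\times 10^9$ and $\Phi_{60}\sim 1.3\times 10^6$ atoms cm$^{-2}$ Myr$^{-1}$. \begin{verbatim}
import math

M_P = 1.6726e-24       # proton mass, g
R_EARTH = 6.371e8      # earth radius, cm
J = 4e10               # meteoric mass accretion rate, g/yr
DPM_KG = 5.26e5 / 1e3  # 1 dpm/kg in decays per year per gram

def background_flux(A, tau_myr, gamma_dpm_kg, X_target):
    """Radioisotope flux (atoms/cm^2/Myr) from well-mixed infall,
    Eq. (meteor) with X_i = m_i * tau_i * Gamma_i * X_target."""
    gamma = gamma_dpm_kg * DPM_KG                       # decays/yr/g of target
    X_i = A * M_P * (tau_myr * 1e6) * gamma * X_target  # isotope mass fraction
    sigma_dot = J * 1e6 / (4 * math.pi * R_EARTH**2)    # g/cm^2/Myr
    return X_i * sigma_dot / (A * M_P)

phi53 = background_flux(53, 3.7 / math.log(2), 400.0, 0.19)  # Mn-53 (Fe target)
phi60 = background_flux(60, 1.5 / math.log(2), 13.0, 0.011)  # Fe-60 (Ni target)
print(f"Phi_53 ~ {phi53:.1e}, Phi_60 ~ {phi60:.1e} atoms/cm^2/Myr")
\end{verbatim}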
At any rate, this discussion points up the advantage of measuring \mn53 content in layers {\em prior} to the alleged supernova event, which should not show any supernova-related enhancement. \subsection{Distance and Epoch of the Putative Supernova} A crucial problem for any attempt to estimate these parameters is that the observed signals are present in all layers for both isotopes. We assume that the \fe60 signal is real and not due to background, but not necessarily the \mn53 signal. Even so, any deposition mechanism should be able to accommodate the continuous, rather than punctuated and isolated, nature of the signal. We thus consider two possible scenarios. \begin{enumerate} \item As suggested by Knie et al., a continuous \fe60 signal could arise from residual contamination due to the nearby supernova. Knie et al.\ note that the supernova could contaminate the local interstellar medium, in particular dust which could enter the solar cavity and fall onto the earth. Also, the cosmic-ray flux would irradiate meteoric and cometary material in the solar cavity, leading to enhanced \fe60 and \mn53 production. Both processes lead to a \fe60 flux which has an abrupt onset but remains continuous until the \fe60 is extinct. During this period, the present-day \fe60 levels would be constant, assuming a constant dust accretion rate, since all of the \fe60 was created at the same time, by the supernova event. This scenario is compatible with the observations. \item Alternatively, it is possible that the \fe60 signal is punctuated but mixed in the sample itself, e.g., by bioturbation. In this case, we should sum all of the signal, which we assume to have originated at the earliest time. \end{enumerate} In fact, as we now see, these scenarios lead to similar predictions for $D$ and $t_{\rm SN}$, as they share key aspects. In both, the \fe60 was all produced by the putative supernova at the explosion epoch. Thus, the signal has decayed by the same factor $e^{-t_{\rm SN}/\tau}$, regardless of when the signal arrived on earth and was deposited on the crust. Thus, the signal in different crust layers should not be given a different correction for decay. Instead, the signals should all be added. This is what we do for both scenarios, for the following reasons. In scenario 1 with contamination of the local interstellar medium, the signal should have two components, an impulsive one received by the passage of the supernova blast through the solar system, and a continuous component derived from the solar system's accretion of interstellar material enriched by its passage. Both components are signal, and may not be well-resolved, depending on the relative strength, amplitude, and timescales of each. Thus, we sum the signals in all layers and take this to be a rough estimate of the total material deposited on the earth. In scenario 2 we posit that the signal is mixed across layers. Thus, even if we assume that the deposition of material on the earth is only impulsive, the signal is smeared. To recover the original signal, we must again sum over the layers. Thus, in both cases we take the signal to be \begin{equation} \label{eq:sum_signal} N_{60}^{\rm tot} = \sum_\ell N_{60}^{\oplus}(\Delta t_\ell) \sim 5 \times 10^{6} \ {\rm atoms} \ {\rm cm}^{-2} \end{equation} Since only the \fe60 is taken as signal, we need to know one of $D$ or $t_{\rm SN}$ to get the other.
An accurate estimate of either is impossible with the available data, but different assumptions enable us to place non-trivial limits and estimates, as seen in Fig.~1 and described in the following paragraphs. The data of Knie et al.\ in each time period are shown in Fig.~1 together with their statistical error bars. As we have argued above, the sum $N_i^{\rm tot}$ is the appropriate measure of the total signal, and this is plotted as a vertical error bar for \fe60 and \mn53 on the right-hand sides of the two panels. The dashed lines also indicate the ranges favored for the possible signals. In terms of our fiducial values, \pref{eq:D_SN} now reads \begin{equation} D = 29 \ e^{-t_{\rm SN}/2 \tau_{60}} \ \,{\rm pc} \end{equation} which we may use to derive constraints on the supernova distance and epoch. The maximum possible distance comes if the supernova happened ``yesterday'', which is possible only in the mixing scenario. In this case, $t_{\rm SN}/\tau_{60} \sim 0$, and we have a maximum distance of \begin{equation} D \mathrel{\mathpalette\fun <} D_{\rm max} = 30 \ \,{\rm pc} \end{equation} Interestingly, this distance happens to lie just within the Ellis, Fields, \& Schramm \pcite{efs} estimate of the maximum distance at which a supernova might deposit its ejecta. Whilst the precision of the numerical agreement is accidental, and subject to the numerous uncertainties we have described, it is both amusing and intriguing that the two are so close. The upper (lower) solid curve in the top panel of Fig.~1 illustrates the signal expected from a supernova exploding at a distance of 10~(30)~pc as a function of the time at which it exploded. As described in \S2, the astrophysical predictions have been corrected for the reduced uptake via (\ref{eq:reduced}) with $f_{60} = 0.01$. We note that this distance constraint comes about by combining disparate information about supernova ejecta and observed surface densities. We find it both remarkable and encouraging that these numbers combine to give a distance limit that is not only of the right order of magnitude, but indeed jibes neatly with the upper limit suggested by Ellis, Fields, \& Schramm \pcite{efs}. A different consideration leads to another constraint which limits the epoch of the blast in both scenarios. Since the putative supernova apparently did not cause a catastrophic mass extinction, we require the distance to be larger than the Ellis \& Schramm \pcite{es} maximum killer radius: $D > 10 \ \,{\rm pc}$, which gives \begin{equation} \label{eq:nokill} t_{\rm SN} \le 4.6 \ \,{\rm Myr} \ \,{\sc bp} \end{equation} as seen in Fig.~1. Thus, even without identifying a crust layer with the epoch of the \fe60 deposition, we can already limit the distance to $10 \ \,{\rm pc} \mathrel{\mathpalette\fun <} D \mathrel{\mathpalette\fun <} 30 \ \,{\rm pc}$ and the epoch to $t_{\rm SN} \mathrel{\mathpalette\fun <} 5 \ \,{\rm Myr} \ \,{\sc bp}$. The time constraints are not surprising, given that the very existence of the \fe60 essentially demands that one place the putative supernova event within the past few \fe60 lifetimes. The result \pref{eq:nokill} raises an issue of self-consistency, since $t_{\rm SN}$ is so small as to be inconsistent with the age of the lowest (i.e., oldest) crust layer, $5.9-13.4 \,{\rm Myr} \ \,{\sc bp}$. This poses a problem in scenario 1, which predicts that no signal should appear before the explosion. One can resolve this issue in two ways.
On the one hand, we can weaken the $t_{\rm SN}$ limits by taking a more conservative $2-\sigma$ lower limit on the summed signal. In this case, the age constraint rises to 5.9 Myr $\,{\sc bp}$, just at the limit of the lowest layer's age. On the other hand, we note that only two events were detected in the lowest layer, with a possible instrumental background of one event, though Knie et al.\ argue that both events might be real and thus have not made any subtraction. If one regards the events in the oldest layer as background, one should remove this layer's contribution to the sum \pref{eq:sum_signal}. In this case, the time constraint now gives a consistent limit of $t_{\rm SN} \le 5.4$ Myr $\,{\sc bp}$. We have so far used only the \fe60 data, assuming that the observed \mn53 is background, but this assumption is subject to challenge. Whilst the detected flux is roughly consistent with the expected background, our estimate is sufficiently uncertain that it is worth examining the alternative. Indeed, self-consistency demands that we estimate the expected \mn53 signal, which may be obtained directly from the \mn53/\fe60 ratio. The vertical error bar and dashed lines in the lower panel of Fig.~1 represent the sum of the \mn53 data. The curves in this panel represent the astrophysical predictions for supernovae at distances of 10 or 30 pc as before, assuming $f_{53} = 0.05$ and \mn53/\fe60 = 20. As noted above, this ratio is unfortunately uncertain, but we expect $\mn53/\fe60 \mathrel{\mathpalette\fun <} 20$. If the true value lies at the high end of this range, i.e., if $m_{\rm SN} \simeq 20 \hbox{$M_{\odot}$}$, then the observed \mn53 might also be signal, as seen in the lower panel of Fig.~1. In this case, the \mn53 signal can no longer be used to estimate the reduced Mn uptake, so we lose our ability to estimate $f_{53}$ and $f_{60}$. On the other hand, if we interpret both radioisotopes as signal, we can derive the supernova epoch. Taking the net \mn53 surface density as pure signal and using \pref{eq:t_SN}, we have \begin{equation} \label{eq:two_signals} t_{\rm SN} = 4.3 \,{\rm Myr} \ \,{\sc bp} \end{equation} as also seen in Fig.~1, and in good agreement with the range estimated above. In terms of the (now unknown) reduced uptake for \mn53, this would imply a supernova distance of $D = 48 f_{53}^{1/2} \,{\rm pc}$, or $D \mathrel{\mathpalette\fun <} 50 \,{\rm pc}$ for any $f_{53} < 1$. It is already a new statement about supernova nucleosynthesis if the \fe60 in the ferromanganese crust data indeed has a supernova origin. This isotope has long been predicted to come from supernovae. In recent calculations, Woosley \& Weaver \pcite{ww} note that \fe60 is made in the presupernova star via $s$-process He burning, and explosively at the base of the O and Si burning shells. Observationally, there is meteoric evidence that live \fe60 was present in the protosolar nebula, perhaps due to a supernova explosion soon before the formation of the solar system (Shukolyukov \& Lugmair \cite{sl92,sl93}). Furthermore, \fe60 has received particular attention because its decay through ${}^{60}$Co to ${}^{60}$Ni is accompanied by the emission of 1.17 and 1.33 MeV $\gamma$ rays, making it a target for search by $\gamma$-ray telescopes. Timmes et al.\ \pcite{timmes} noted that the expected $\gamma$-ray signal should spatially trace that of ${}^{26}$Al. They calculated the flux levels, and found them to be just below the sensitivity of the Compton Gamma-Ray Observatory.
However, \fe60 should be visible to the upcoming INTEGRAL $\gamma$-ray satellite. Until seen in $\gamma$ rays, the data discussed here could be the strongest available indication of a supernova origin for \fe60. The observation of additional radioisotopes (Ellis, Fields, \& Schramm \cite{efs}) would not only help constrain the distance, but would also allow one to use the various radioisotope measurements as a telescope which provides information about the supernova nucleosynthesis processes. Therefore, we urge more searches, both of ferromanganese crusts like the one reported (for repeatability and confirmation that the effect is global), as well as of other materials and particularly other radioisotopes which would help confirm the extraterrestrial origin and add to the constraints on the timing, distance, and mass of the putative explosion. Candidate species are those which are expected to be made copiously in supernovae and have lifetimes comparable to the $\sim 10 \,{\rm Myr}$ timescale considered. Promising candidates include \be10, \i129, and \sm146. Note that \be10 has a half-life $t_{1/2} = 1.51 \,{\rm Myr}$ which is equal to that of \fe60, within errors. Thus, the \be10/\fe60 ratio due to a nearby supernova should remain constant over time, providing a consistency check. Also, \be10 will have contributions from enhanced cosmogenic production as well as any possible supernova origin. The other isotopes of note, \i129 ($t_{1/2} = 15.7 \,{\rm Myr}$) and \sm146 ($t_{1/2} = 10.3 \,{\rm Myr}$), have longer lifetimes, and thus can probe earlier epochs. This could again allow for a cross-check: if \i129 and \sm146 are enhanced along with \fe60, they should drop off in material which dates prior to the explosion event. To get a feel for the likelihood of the putative nearby supernova, we estimate the expected rate for an explosion occurring within a given distance. Following Shklovsky \pcite{shkl}, the average rate $\lambda$ of supernovae within a distance $D$ is just the total Galactic rate ${\cal R}_{\rm SN}$, times the volume fraction: $\lambda = (4 D^3/3 R^2 h) \, {\cal R}_{\rm SN}$, where $R$ is the disk radius and $h$ the scale height. Using $R = 20$ kpc, $h$ = 100 pc, and the possibly optimistic estimate ${\cal R}_{\rm SN} = 3 \times 10^{-2} \,{\rm yr}^{-1}$, we get $\lambda \sim 1 \,{\rm Gyr}^{-1} \, (D/10 \,{\rm pc})^{3}$. Thus an explosion at a distance of 30 pc should have a mean recurrence time of 100 Myr. This is at least an order of magnitude larger than the timescale suggested by the \fe60 data, which suggests that either the supernova was unusually recent, or that the rate has been grossly underestimated. In this connection, we note that a recent and nearby supernova remnant has been identified by Aschenbach \pcite{asch}. Its estimated distance of about 200 pc is too large for it to have had any effect of the type we discuss, but its age of about 700 yr (Iyudin et al.\ \cite{ti44}) does suggest that the rate of nearby supernovae could be higher than the above estimate. Moreover, we note that the expected supernova rate is enhanced during the passage of the earth through spiral arms (Shapley \cite{shap}; Hoyle \& Lyttleton \cite{hl}; Clark, McCrea, \& Stephenson \cite{cms}). This occurs every $10^8$~yr or so, and it would be interesting to seek evidence for any possible correlation with past extinctions. It seems that we are now approaching or just entering the Orion arm, so that an elevated supernova rate is possible.
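The recurrence estimate above is a one-line volume-fraction calculation; the sketch below implements it with the same fiducial Galactic parameters, the steep $D^3$ scaling being the main point. \begin{verbatim}
def nearby_sn_rate(D_pc, R_kpc=20.0, h_pc=100.0, gal_rate=3e-2):
    """Mean rate (per yr) of supernovae within D_pc of the sun:
    the Galactic rate times the volume fraction 4 D^3 / (3 R^2 h)."""
    R_pc = R_kpc * 1e3
    return gal_rate * 4.0 * D_pc**3 / (3.0 * R_pc**2 * h_pc)

for D in (10.0, 30.0):
    lam = nearby_sn_rate(D)
    print(f"D = {D:2.0f} pc: lambda ~ {lam * 1e9:.0f} per Gyr")
\end{verbatim}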
\section{Impact on the Biosphere} Potential implications of a nearby supernova explosion for earth's biosphere have been considered by a number of authors (Ruderman \cite{rude}; Ellis \& Schramm \cite{es}; Ellis, Fields, \& Schramm \cite{efs}), and recent work has suggested that the most important effects might be induced by cosmic rays. In particular, their possible role in destroying the earth's ozone layer and opening the biosphere up to irradiation by solar ultraviolet radiation has been emphasized (Ellis \& Schramm \cite{es}; Ellis, Fields, \& Schramm \cite{efs}). The energetic radiation from supernovae contains two components: a neutral one due to $\gamma$ rays, which has been estimated to have a fluence \begin{equation} \phi_{\gamma} \sim 6.6 \times 10^5 \left ({ 10 \over D } \right )^2 {\rm erg \, cm}^{-2} \label{eq:neutralcr} \end{equation} for about a year, where here and subsequently $D$ is understood to be in units of pc, and a charged component estimated to have a fluence \begin{equation} \phi_c \sim 7.4 \times 10^6 \left ({10 \over D} \right ) {\rm erg \ cm}^{-2} \label{eq:chargedcr} \end{equation} for about $3 \, D^2$~yr, to be compared with the ambient flux of $9 \times 10^4 \, {\rm erg \, cm}^{-2} {\rm yr}^{-1}$. We see that the ambient flux would be doubled if $D \sim 30$~pc, with considerable uncertainties, and could be considerably greater if $D \sim 20$~pc, which cannot be excluded. These enhanced fluxes are not thought likely to be directly dangerous to life. However, it has been argued (Ruderman \cite{rude}; Ellis \& Schramm \cite{es}; Ellis, Fields, \& Schramm \cite{efs}) that this ionizing radiation should produce NO in the stratosphere, making a contribution \begin{equation} y_{cr} \sim 88 \left ( {10 \over D } \right)^2 \label{crNO} \end{equation} to the NO abundance in parts per $10^9$. This is in turn estimated to deplete the ozone abundance by a factor \begin{equation} F_O = { \sqrt{16 + 9 X^2} - 3 X \over 2 } \label{lessO3} \end{equation} where $X = (3 + y_{cr})/3$ is the factor of enhancement in the abundance of NO. In the case of a supernova at $30$~pc, we would estimate $F_O \sim 0.33$. The factor by which the penetrating flux of solar ultraviolet radiation at the earth's surface is increased by this ozone depletion is approximately $f^{F_O - 1}$, where $f$ is the fraction of the solar ultraviolet flux that normally reaches the surface. In the case of radiation with a wavelength of $2500$ \AA, which is effective for killing {\it Escherichia coli} bacteria and producing erythema (sunburn), $f \sim 10^{-40}$ normally. We estimate that a supernova at $30$~pc might increase this by some $27$ orders of magnitude, for a period measured in thousands of years. Clearly these estimates are very unreliable, but they serve as a warning that the effects of such a supernova may not be negligible. All the biosphere is dependent on photosynthesizing organisms at the bottom of both the terrestrial and marine food chains. Terrestrial photosynthesis is most effective for red light with a wavelength $\sim 570$~nm, and we do not know of any detailed studies of how it might be impacted by enhanced solar ultraviolet radiation. On the other hand, carotenoids in phytoplankton shift their most sensitive wavelength towards the blue. This might provide a mechanism for an amplification of the possible effect on marine ecology relative to terrestrial ecology.
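The chain of estimates in \pref{crNO} and \pref{lessO3} is simple to evaluate numerically; the sketch below does so for $D=30$~pc and recovers, to within the rounding of the numbers quoted above, the ozone depletion factor and the $\sim 27$ orders of magnitude enhancement of the penetrating ultraviolet flux. \begin{verbatim}
import math

def ozone_fraction(D_pc):
    """Remaining ozone fraction F_O after cosmic-ray NO production."""
    y_cr = 88.0 * (10.0 / D_pc)**2       # NO contribution, Eq. (crNO), ppb
    X = (3.0 + y_cr) / 3.0               # NO enhancement factor
    return (math.sqrt(16.0 + 9.0 * X**2) - 3.0 * X) / 2.0  # Eq. (lessO3)

D = 30.0
F_O = ozone_fraction(D)
f_uv = 1e-40   # normal transmitted fraction at 2500 Angstrom
orders = (F_O - 1.0) * math.log10(f_uv)  # log10 of enhancement f^(F_O-1)
print(f"F_O ~ {F_O:.2f}; UV flux enhanced by ~{orders:.0f} orders of magnitude")
\end{verbatim}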
The effect of enhanced solar ultraviolet radiation on marine photosynthesis by phytoplankton has in fact been studied in connection with the ozone hole in the Antarctic, and a decline in the rate of photosynthesis by phytoplankton exposed in plastic bags has been demonstrated (Smith et al.\ \cite{s92}), although this needs to be understood in the context of other effects such as vertical mixing and cloudiness (Neale, Davis, \& Cullen \cite{n98}). It is natural to ask at this point whether any significant extinction events are known to have occurred within the past $10$~Myr or so during which the apparent excess of \fe60 may have been deposited. Indeed, there is evidence for a couple of minor extinctions: one during the middle Miocene, about $13$~Myr ago, and one of lesser significance during the Pliocene, about $3$~Myr ago (Sepkoski \cite{s86}). Impacts on marine animal families near the bottom of the food chain have been noted, including zooplankton such as tropical foraminifers (which eat phytoplankton), bivalves, gastropods and echinoids (whose diets include plankton and debris). This is exactly the pattern that might be expected from a major insult to marine photosynthesis. It is interesting to note that the stability of phytoplankton community structure over 200~kyr has been demonstrated using deep-ocean sediments (Schubert et al.\ \cite{s98}), and it would be valuable to extend such studies to longer periods. It would be fascinating to devise a single experiment that could correlate directly possible isotope and phytoplankton signatures of a supernova event~\footnote{We observe in passing that a weak correlation has been observed between magnetic field reversals and mass extinctions (Raup \cite{raup}). We note that an enhanced cosmic-ray flux is one consequence of such a reversal.}. Many other possible causes of such an insult should be considered, including volcanism and meteor impact(s), and we would like to mention another possibility that could also be linked to a nearby supernova explosion. A strong correlation has been observed (Friis-Christensen \& Lassen \cite{fcl91}; Svensmark \& Friis-Christensen \cite{sfc97}; Svensmark \cite{s97}) between solar activity (particularly the solar sunspot cycle) and the earth's cloud cover. It is thought that cosmic rays may help seed cloud formation, and it has been suggested that the correlation with the sunspot cycle might be due to the known modulation of the cosmic-ray flux during the solar cycle (Ney \cite{n59}), which is due to variations in the solar wind. Increased cloud cover is expected to reduce the earth's surface temperature (Hartmann \cite{h93}), and it has been conjectured that the lower global temperatures three centuries ago might be related to the different level of sunspot activity at that time~\footnote{Interestingly, William Herschel noted two centuries ago that wheat prices were anticorrelated with sunspot numbers.}. We remark that the large increase in the cosmic ray flux that we estimate from a nearby supernova explosion might seed a large increase in the cloud cover, possibly triggering a ``cosmic-ray winter'' lasting for thousands of years~\footnote{Increasing the cloud cover might also provide a mechanism for reversals of the terrestrial magnetic field to trigger global cooling, since field reduction during a reversal could enable a higher cosmic-ray flux to reach the earth's upper atmosphere.}. 
Indications from recent solar cycles are that variations in the cosmic-ray flux by about 20\% might be correlated with fractional changes of the cloud cover by about 3\%, corresponding in turn to variations in the mean earth temperature by about $0.4$~K (Kirkby \cite{k98}). This is a very speculative possibility, since there are considerable variations in the flux of cosmic rays at different energies and latitudes, and their efficiency for seeding clouds is only guessed from a statistical analysis. Also, the ensuing impact on the environment would be very complex, though no obvious mechanism that would enhance the effect on marine life comes immediately to mind. However, we do at least note that accelerator experiments to probe the possible seeding of clouds by cosmic rays are now being considered (Kirkby \cite{k98}). \section{Conclusions} We have discussed in this paper the implications of the possible anomalous \fe60 signature of a nearby supernova explosion reported recently by Knie et al.\ \pcite{fe60}. We re-emphasize that the interpretation of this effect requires confirmation. This could be addressed by searching for anomalies in other radioisotopes as suggested here and in Ellis, Fields \& Schramm \pcite{efs}, by checking that the \fe60 background is low as argued here and in Knie et al. \pcite{fe60}, by verifying that the \fe60 enhancement is global, and by checking that the \fe60 signal is absent in earlier ferromanganese layers. Nevertheless, if the signal is real, it is the first direct evidence for the supernova production of \fe60, and may be used to constrain the possible distance and epoch of the putative supernova. We find that a distance of about 30 pc is consistent with the magnitude of the \fe60 signal, and that it should have occurred about 4 Myr ago. If the supernova origin of the observed \fe60 is confirmed, this opens up a whole new era of supernova studies using deep-ocean sediments as telescopes. We draw particular attention to the interest of searching for \be10, \i129, and \sm146 as well as \mn53 and \fe60. Finally, we have been encouraged by the report of Knie et al. to review the possible impact of a nearby supernova explosion on the biosphere. In this connection, we recall that a couple of mini-extinctions have been reported within the past 10 Myr or so, during the Middle Miocene and Pliocene. It would be interesting to investigate whether either of these may be correlated with a supernova event. We have noted in passing that an enhancement of the cosmic-ray flux, such as that accompanying a nearby supernova explosion, might increase the global cloud cover. It remains to be seen whether this might induce significant climate change such as, in an extreme case, a ``cosmic-ray winter''. If the \fe60 signal reported by Knie et al. is confirmed, such speculation would become more compelling. \newpage \acknowledgments We acknowledge and remember with gratitude our late friend and collaborator David Schramm: his enthusiasm and insight made work on this subject particularly enjoyable.\\ We thank warmly G\"{u}nther Korshinek for sharing unpublished results with us, and for many informative discussions. One of us (J.E.) also thanks Jasper Kirkby for interesting conversations.
\section{Introduction} Biology presents fascinating examples of nonequilibrium systems at high densities, exhibiting some remarkable similarities to the dynamics of equilibrium glasses \cite{angelini2011,garcia2015,zhou2009,sadati2014}. Such examples give strong motivation to study non-equilibrium (active) systems at high densities, where our current understanding is far less complete than for systems in equilibrium \cite{sriramreview,sriramrmp,jarzynski2015,jacques2015}. Active systems, consisting of self-propelled particles (SPP) \cite{sriramreview,sriramrmp}, appear in a wide range of settings, both living as well as synthetically designed, for example, motile cells in tissues \cite{angelini2011,wyart2012,garcia2015}, catalytic Janus particles \cite{howse2007,palacci2010}, light activated swimmers \cite{jiang2010,palacci2013}, vertically vibrated granular systems \cite{dauchot2005,nitin2014}, as well as many other biological contexts \cite{baylis2015}. In many biological systems the densities are high, and a number of recent experimental and numerical studies have shown a remarkable similarity, albeit with important differences, to the dynamics of equilibrium glasses \cite{angelini2011,garcia2015,zhou2009,sadati2014}. Motivated by these studies on biological systems, as well as by the basic challenge in non-equilibrium statistical physics, a large number of recent works have been devoted to extending the equilibrium glassy phenomenology to active systems \cite{kranz2010,zippelius2011,berthier2013,ni2013,berthier2014,szamel2015,bi2015,szamel2016,bi2016,flenner2016,mandal2016,feng2017}. Mode-coupling theory (MCT) has been immensely successful in describing the dynamics of a passive glass within a range of validity \cite{das2004,giulioreview,goetzebook}; it therefore becomes imperative to extend MCT to active systems. Equilibrium MCT adequately describes the dynamics through the equation of motion of a two-point density correlation function \cite{das2004,goetzebook,hansenmcdonald}. An active system, however, is inherently out of equilibrium and one must write down the theory for both correlation and response functions, as these quantities are not related in a simple manner as in thermal equilibrium. MCT has been recently extended for active systems \cite{kranz2010,zippelius2011,szamel2016,feng2017}. However, these approaches have treated only the correlation function. An MCT via the integration through transients (ITT) approach has recently been proposed for active Brownian particles in \cite{liluashvili2017}. A nonequilibrium mode-coupling theory through the correlation and response functions for an active spin glass model was presented in \cite{berthier2013}, but such a theory for active systems of structural glasses is lacking. We present in this paper an extension of MCT to systems of SPP, where the activity is characterized by a self-propulsion force vector of magnitude $f_0$, and a directional persistence time $\tau_p$. The theory we present gives the following main results: (1) We obtain a nonequilibrium MCT for the steady-state of an active system and show that the properties of the steady-state are characterized by an evolving, time-dependent effective temperature $T_{eff}(\t)$. However, the effect of activity on the dynamics can be understood through the long-time limit $T_{eff}(\t\to\infty)$.
(2) Considering two different noise statistics, widely used in the literature, we show that the effect of activity strongly depends on the microscopic details of how activity is implemented. Within both models, $f_0$ inhibits glassiness whereas $\t_p$ may either inhibit or promote \footnote{In the sense that larger $\t_p$ drives the system closer to the glassy regime.} glassiness depending on the particular noise statistics. (3) We provide a scaling analysis that predicts that close to the MCT transition of the passive system, the $\alpha$-relaxation time, $\t_\alpha$, varies as $f_0^{-2\gamma}$ when $\t_p$ is fixed. As a function of $\tau_p$ the relaxation time $\tau_{\alpha}$ may increase or decrease (for constant $f_0$), depending on the active noise statistics, but the behavior is always governed by the same exponent $\gamma=1.74$. \section{Mode-coupling theory for active steady-state} We start with the hydrodynamic equations of motion for an active system. The continuity equations for density, $\rho(\mathbf{r},t)$ and momentum density, $\rho(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)$, where $\mathbf{v}(\mathbf{r},t)$ is the velocity field, at position $\mathbf{r}$ and time $t$ are \begin{align} &\frac{\p \rho(\mathbf{r},t)}{\p t}=-\nabla\cdot[\rho(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)] \label{cont_rho}\\ &\frac{\p(\rho\mathbf{v})}{\p t}+\nabla\cdot(\rho\mathbf{v} \mathbf{v})=\eta\nabla^2\mathbf{v}+(\zeta+\eta/3)\nabla\nabla\cdot\mathbf{v} \nonumber\\ &\hspace{3cm}-\rho\nabla\frac{\delta \mathcal{F}}{\delta \rho}+\mathbf{f}_T+\mathbf{f}_A\label{cont_momentum} \end{align} where $\zeta$ and $\eta$ are bulk and shear viscosities, and $\mathbf{f}_T$ and $\mathbf{f}_A$ are thermal and active noises respectively. The thermal noise has zero mean with the statistics \begin{equation} \langle \mathbf{f}_T({\bf 0},0)\mathbf{f}_T({\bf r},t)\rangle=-2k_BT[\eta {\bf I}\nabla^2+(\zeta+\frac{\eta}{3})\nabla\nabla]\delta({\bf r})\delta(t), \end{equation} where ${\bf I}$ is the unit tensor and $k_BT$ is the Boltzmann constant times temperature, $T$. The active noise also has zero mean and the following statistics: \begin{equation}\label{activenoise} \langle \mathbf{f}_A({\bf 0},0)\mathbf{f}_A({\bf r},t)\rangle=2\Delta({\bf r},t), \end{equation} where the detailed form of $\Delta({\bf r},t)$ depends on microscopic details of how activity is realized. We use two different models of active noise statistics, as discussed below. Note that in simulations \cite{mandal2016} it is important to include friction between the active particles and an external substrate, to keep the system in steady-state. In Eq. (\ref{cont_momentum}) we do not have such a term, as it is not needed here, since the active noise has zero mean and does not set up large-scale flows. $\mathcal{F}$ in Eq. (\ref{cont_momentum}) is a free-energy functional that we choose to be the Ramakrishnan-Yussouff functional \cite{ramakrishnan1979}: \begin{align} \beta \mathcal{F}[\rho]&=\int_\mathbf{r} \rho(\mathbf{r},t)\left[\ln\frac{\rho(\mathbf{r},t)}{\rho_0}-1\right]\nonumber\\ &-\frac{1}{2}\int_{\mathbf{r},\mathbf{r}'}\delta\rho(\mathbf{r},t)c(\mathbf{r}-\mathbf{r}')\delta\rho(\mathbf{r}',t) \end{align} where $\beta=1/k_BT$, $\rho_0$ is the average density, $\delta\rho(\mathbf{r},t)=\rho(\mathbf{r},t)-\rho_0$ is the fluctuation about it, and $\int_\mathbf{r} \equiv \int \d\mathbf{r}$. $c(\mathbf{r}-\mathbf{r}')$ is the direct correlation function that encodes the information of the interaction potential among the particles.
We assume $\delta\rho({\bf r},t)$ is small, while $\mathbf{v}(\mathbf{r},t)$ in the glassy regime is also small. We linearize Eqs. (\ref{cont_rho}) and (\ref{cont_momentum}) by neglecting $\delta\rho(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)$, taking the divergence of (\ref{cont_momentum}) and replacing $\nabla\cdot \mathbf{v}$ in this equation using the linearized form of Eq. (\ref{cont_rho}) (see Sec. SI in the Supplementary Material (SM) \cite{supmat} for details). Taking a Fourier transform, we obtain the equation for the density fluctuation in Fourier space, $\delta\rho_k(t)$, as \begin{align}\label{kdepeq} D_Lk^2&\frac{\p\delta\rho_\mathbf{k}(t)}{\p t}+\frac{k^2k_BT}{S_k}\delta\rho_\mathbf{k}(t)= ik \hat{f}_T^L(t)+ik \hat{f}_A^L(t)\nonumber\\ &+\frac{k_BT}{2}\int_\mathbf{q} \mathcal{V}_{k,q} \delta\rho_\mathbf{q}(t) \delta\rho_{\mathbf{k}-\mathbf{q}}(t) \end{align} where $\mathcal{V}_{k,q}=\mathbf{k}\cdot[\mathbf{q} c_q+(\mathbf{k}-\mathbf{q})c_{k-q}]$, $\hat{f}_T^L$ and $\hat{f}_A^L$ are the longitudinal parts of the Fourier transforms of $\mathbf{f}_T$ and $\mathbf{f}_A$, and $D_L=(\zeta+4\eta/3)/\rho_0$. $S_k=1/(1-\rho_0c_k)$ is the static structure factor. We have neglected the acceleration term in Eq. (\ref{kdepeq}). Through a field-theoretical method \cite{reichman2005,castellani2005,saroj2012,saroj2016,supmat}, we obtain the equations for the correlation, $C_k(t,t')=\langle\delta\rho_k(t)\delta\rho_{-k}(t')\rangle$, and the response, $R_k(t,t')=\langle \p\delta\rho_k(t)/\p \hat{f}_T^L(t')\rangle$, functions as \begin{align}\label{kdepeq1} \frac{\p C_k(t,t')}{\p t} &=-\mu_k(t)C_k(t,t')+\int_0^{t'}\d s\mathcal{D}_k(t,s)R_k(t',s) \nonumber\\ +\int_0^t\d s &\S_k(t,s)C_k(s,t')+2TR_k(t',t)\\ \frac{\p R_k(t,t')}{\p t} &=-\mu_k(t)R_k(t,t') \nonumber\\ &+\int_{t'}^t \d s\S_k(t,s)R_k(s,t')+\delta(t-t') \\ \mu_k(t) = T&R_k(0)+\int_0^t \d s[\mathcal{D}_k(t,s)R_k(t,s)+\S_k(t,s)C_k(t,s)] \nonumber \end{align} \begin{align} \text{with} \,\,\S_k(t,s)&= \kappa_1^2 \int_{\bf q} \mathcal{V}_{k,q}^2 C_{k-q}(t,s)R_q(t,s), \label{kdepeq5} \\ \mathcal{D}_k(t,s)&=\frac{\kappa_1^2}{2} \int_{\bf q} \mathcal{V}_{k,q}^2 C_q(t,s)C_{k-q}(t,s)+\kappa_2^2\Delta_k(t-s), \nonumber \end{align} where $\kappa_1=k_BT/D_Lk^2$, $\kappa_2=1/D_L$, and $\Delta_k$ is the Fourier transform of the active noise correlation (see SM \cite{supmat} for details). Eqs. (\ref{kdepeq1}-\ref{kdepeq5}) are the nonequilibrium non-stationary MCT for an active system. Since the numerical solution of these equations is not possible with the currently available numerical methods, we take a schematic approximation, keeping only one wave vector (see SM \cite{supmat}). For the numerical solution it is advantageous to write the equations in terms of the integrated response function, $F(\t)=-\int_0^\t R(s)\d s$, instead of $R(\t)$.
Then we obtain the mode-coupling theory for the active steady-state as (see SM): \begin{align} \label{activemct1} \frac{\p C(\t)}{\p \t}&=\Pi(\t)-(T-p)C(\t)-\int_0^\t m(\t-s)\frac{\p C(s)}{\p s}\d s \\ \frac{\p F(\t)}{\p \t}&=-1-(T-p)F(\t)-\int_0^\t m(\t-s)\frac{\p F(s)}{\p s}\d s \label{activemct2} \end{align} \begin{align} \text{where, } \hspace{1cm} m(\t-s)=2\l\frac{C^2(\t-s)}{T_{eff}(\t-s)}; \label{modelmu} \end{align} \begin{align} p=\int_0^\infty \Delta(s)\frac{\p F(s)}{\p s}\d s \label{modelp} \end{align} \begin{align} \text{and }\,\,\, \Pi(\t)=-\int_\t^\infty \Delta(s)\frac{\p F(s-\t)}{\p s}\d s \label{modelPi}, \end{align} where $\l$ is defined through the equation $\mathcal{D}_{k_{max}}(t,s)\equiv2\l C(t,s)^2$ and the effective temperature, $T_{eff}(\tau)$, is defined via a generalized fluctuation-dissipation relation (FDR) for nonequilibrium systems \cite{cugliandolo1997,shen2004,lu2006,wang2011,wang2013,szamel2014} as \begin{equation}\label{Teff} \frac{\p C(\tau)}{\p \tau}=T_{eff}(\tau)\frac{\p F(\tau)}{\p \tau}. \end{equation} We show below that the glassy dynamics of an active SPP system can be understood through the long-time limit of $T_{eff}(\t)$, similar to what was shown in Refs. \cite{shen2004,wang2011,wang2013} for active network materials. Note that there is a problem of interpretation of the theory deep in the glassy regime \cite{supmat,bouchaud1996,barrat1996}; however, we are interested here only in the liquid state, and Eqs. (\ref{activemct1}-\ref{modelPi}) describe the MCT for the steady-state of a dense active system. \begin{figure} \includegraphics[width=8.6cm]{Teff_twomodels.eps} \caption{Behavior of $T_{eff}(\t)$ as a function of $\log\t$, as calculated by solving numerically the MCT, Eqs. (\ref{activemct1}-\ref{modelPi}). At very short times, $T_{eff}(\t) = T$; it then evolves to a larger value set by the parameters of activity, with a crossover time $\sim\mathcal{O}(\t_p)$. The parameters used for these plots are noted in the figure. } \label{Teff_twomodel} \end{figure} \section{Two models for active noise statistics} To complete the description we must provide the noise statistics $\Delta(\t)$ (Eq. \ref{activenoise}) that enters the extended mode-coupling theory through $p$ and $\Pi(\t)$, Eqs. (\ref{modelp}) and (\ref{modelPi}) respectively. Mainly two types of active noise have been considered in the literature: (1) The first realization is an active noise with zero mean and a shot-noise temporal correlation (SNTC) \cite{benisaac2015,mandal2016} \begin{equation} \text{SNTC: }\,\,\, \Delta(\tau)=\Delta_0\exp[-\tau/\tau_p]. \label{SNTPmodel}\\ \end{equation} This noise statistics naturally applies to biological systems, such as the cytoplasm of a cell, where activity arises from many identical molecular motors, each of which can apply a fixed force for a certain amount of time in a particular direction \cite{benisaac2011,benisaac2015,fodor2015,fodor2016}. (2) The second is based on a constant single-particle effective temperature $T_{eff}^{sp}$ \cite{flenner2016,berthier2014}, and the active noise evolves as an Ornstein-Uhlenbeck process (OUP) \cite{uhlenbeck1930} with \begin{align} \text{OUP: }\,\,\, \Delta(\tau)=(T_{eff}^{sp}/\tau_p)\exp[-\tau/\tau_p] \label{OUPmodel}. \end{align} Note that both $\Delta_0$ and $T_{eff}^{sp}$ in Eqs. (\ref{SNTPmodel}) and (\ref{OUPmodel}) are proportional to $f_0^2$. The temporal correlations decay exponentially for both noise statistics and, therefore, we do not expect any fundamentally different behavior for a fixed value of $\t_p$.
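As a minimal numerical comparison of the two correlators (our illustration; parameter values are arbitrary), note that the time integral of $\Delta(\t)$, a rough proxy for the strength of the activity entering $p$ in Eq. (\ref{modelp}), grows with $\t_p$ for the SNTC statistics but saturates to a $\t_p$-independent value for the OUP statistics:
\begin{verbatim}
import numpy as np

# Sketch: the two active-noise correlators of the SNTC and OUP models.
def delta_sntc(tau, delta0, tau_p):
    # shot-noise temporal correlation: Delta_0 * exp(-tau/tau_p)
    return delta0 * np.exp(-tau / tau_p)

def delta_oup(tau, T_sp, tau_p):
    # Ornstein-Uhlenbeck process: (T_sp/tau_p) * exp(-tau/tau_p)
    return (T_sp / tau_p) * np.exp(-tau / tau_p)

tau = np.linspace(0.0, 50.0, 5001)
for tp in (0.5, 1.0, 2.0, 4.0):
    # integral of Delta(tau): ~ Delta_0*tau_p for SNTC, ~ T_sp for OUP
    print(f"tau_p={tp:3.1f}  "
          f"SNTC: {np.trapz(delta_sntc(tau, 1.0, tp), tau):.3f}  "
          f"OUP: {np.trapz(delta_oup(tau, 1.0, tp), tau):.3f}")
\end{verbatim}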
Their effects on the glassy dynamics are, however, markedly different as a function of $\t_p$, as the sketch above already hints. \begin{figure} \includegraphics[width=8.6cm]{schematic_cage.eps} \caption{Schematic illustration of the timescale for the validity of the scenario of effective temperature, $T_{eff}$. (a) and (b) Decay of the correlation function $C(\tau)$ and the mean-square displacement (MSD) as a function of $\log\tau$. We divide the entire timescale into three different regimes. (c) The typical environment of a single particle in the three timescales. At very short times (blue), the particle does not see the other particles and performs a ballistic motion. Then (green) the particle sees the cage formed by other particles, and $C(\t)$ and the MSD show a plateau in this timescale. The particle eventually breaks this cage and $C(\t)$ relaxes to zero. (d) The single particle trapped within a confining harmonic potential created by the other particles. This scenario is valid around the plateau and $\alpha$-relaxation regime of MCT.} \label{schematic_cage} \end{figure} \section{Evolving effective temperature and its long-time limit} We numerically solve the MCT equations (\ref{activemct1}-\ref{modelPi}) and show the behavior of $T_{eff}(\t)$ in Fig. \ref{Teff_twomodel} for a particular set of parameters, $T=1.0$, $\l=2.0$, $\Delta_0=T_{eff}^{sp}=0.2$ and $\t_p=0.1$, where the passive system is at the MCT critical point. $T_{eff}(\t)$ is equal to $T$ when $\t \ll\t_p$ and evolves to a larger value at $\t\gg\t_p$. Note the qualitatively similar behavior of $T_{eff}$ for both models. The crossover from $T$ to the larger value occurs at $\t\sim\mathcal{O}(\t_p)$. This characteristic of $T_{eff}(\t)$ is similar to that of other driven systems in the glassy regime \cite{ono2002,haxton2007,berthier2000,berthier2002}, as well as of the spin-glass model of active systems \cite{berthier2013} and of the fluctuations of active membranes \cite{benisaac2011}. The evolving nature of $T_{eff}(\t)$ shows that the glassy properties of the steady-state of the active system are quite different from those of equilibrium glasses, and that an MCT description in terms of $C(\t)$ alone is incomplete. Since we are interested in the long-time dynamics in a glassy system, at a timescale $\t\gg\t_p$, we define $T_{eff}(\t\to\infty)\equiv T_{eff}$ and show below that the effects of activity on the glassy dynamics can be understood in terms of $T_{eff}$. The MCT equations (Eqs. \ref{activemct1}-\ref{modelPi}) can be solved numerically to give $T_{eff}$. In order to obtain an analytical expression, which potentially provides deeper insight, we propose to utilize a mapping of the motion in the dense active fluid to the motion of a single, trapped active particle (STAP), which is analytically tractable \cite{benisaac2015}. A similar scenario of mapping an interacting system of particles onto a single particle in an effective potential created by the surrounding particles has recently been proposed in Ref. \cite{manoj2017} for a passive system, showing the detailed correspondence between such a mapping and the MCT phenomenology. We start with the mode-coupling phenomenology, where the plateau in the decay of $C(\t)$ appears due to caging and the $\alpha$-relaxation time is governed by the cage-breaking dynamics. We schematically illustrate the timescale over which this picture is valid in Fig. \ref{schematic_cage}. We show the decay of the correlation function, $C(\tau)$, and the mean-square displacement (MSD) in Fig. \ref{schematic_cage}(a) and (b), respectively.
We divide the entire duration of the dynamics into three parts and schematically illustrate the environment for one of the particles in the system in Fig. \ref{schematic_cage}(c). When $\tau$ is very small, the test particle (red) does not yet see the other particles and performs a ballistic motion. This $\beta$-relaxation time scale is shaded blue in Figs. \ref{schematic_cage}(a), (b). The particle then sees the cage formed by the other particles, in the timescale shaded green, and both $C(\tau)$ and the MSD show a plateau region. Of course, the cage is not static and the particles forming the cage are themselves dynamic. The test particle eventually breaks the cage at a longer timescale (shaded gray), known as the $\alpha$-relaxation time, $\tau_\alpha$, within the MCT framework. We next consider a single active particle, trapped by the effective potential of the surrounding particles (Fig. \ref{schematic_cage}d). For simplicity, we assume this confining potential to be harmonic. This effective potential should capture the behavior of the real fluid particles during the timescales shaded blue and green. Therefore, the maximal spatial extent of the single trapped particle's motion within the effective potential well corresponds to the point where the real fluid particle breaks from the cage. By this analogy, we expect the energy scale that describes the long-time motion of the active fluid particles to correspond to the potential energy of the STAP model, $T_{eff}\propto k\langle x(t)^2\rangle$, and we obtain for the two active noise statistics: \begin{numcases} {T_{eff}=} T+\frac{H\Delta_0\t_p}{1+G\t_p}, &\text{for SNTC statistics} \label{SNTC}\\ T+\frac{HT_{eff}^{sp}}{1+G\t_p}, & \text{for OUP statistics}\label{OUP}, \end{numcases} where $H=1/(2\Gamma)$ and $G=k/\Gamma$. Note that we do not know how to relate the values of the effective confining-potential stiffness, $k$, and the friction coefficient, $\Gamma$, to the microscopic parameters of the active fluid, although this has recently been done for a passive system \cite{manoj2017}. We nevertheless assume that the effective parameters $k,\Gamma$ are largely independent of the activity parameters $f_0,\tau_p$. Similar expressions were also obtained in \cite{szamel2017,benisaac2015,mandal2016,samanta2016} in different contexts. We show below that these expressions agree surprisingly well with the numerical solution of the MCT equations. \begin{figure} \includegraphics[width=8.6cm]{model1_plots_C_Teff.eps} \caption{Behavior of the MCT (Eqs. \ref{activemct1}-\ref{modelPi}) using the SNTC statistics, Eq. (\ref{SNTPmodel}), with $T=1.0$ and $\l=2.1$ kept fixed: (a) Decay of the two-point correlation function, $C(\t)$, for different values of $\Delta_0$. $C(\t)$ decays faster as $\Delta_0$ increases. We have used $\t_p=1.2$. (b) MCT calculation of $T_{eff}$ as a function of $\Delta_0$ with $\t_p=1.2$. The line is a fit with $T_{eff}= T+a_{M1}\Delta_0$ (Eq. \ref{SNTC}) with $a_{M1}=0.07$. (c) Decay of $C(\t)$ for $\Delta_0=0.6$ for different values of $\t_p$, as shown in the figure. We again see that $C(\t)$ decays faster with increasing $\t_p$. (d) MCT calculation of $T_{eff}$ as a function of $\t_p$ with $\Delta_0=0.6$. The line is a fit with $T_{eff}=T+b_{M1}\t_p/(1+c_{M1}\t_p)$ (Eq.
\ref{SNTC}) with $b_{M1}=0.31$ and $c_{M1}= 1.10$.} \label{model1_CTeff} \end{figure} \begin{figure} \includegraphics[width=8.6cm]{model1_scalingplots.eps} \caption{Approaching the MCT transition of the passive system, our scaling analysis predicts $\t_\alpha \sim \Delta_0^{-\gamma}$ at constant $\t_p$ and $\t_\alpha\sim[\t_p/(1+G\t_p)]^{-\gamma}$ at constant $\Delta_0$ (with $\gamma=1.74$) for the SNTC statistics (Eq. \ref{scaling_M1}). Within MCT we define $\t_\alpha$ as the time when $C(\t)$ becomes $0.4$; these values are plotted as symbols. The numerical solution of the theory agrees quite well with the scaling analysis. We have used $T=1.0$, $\l=2.0$, $\Delta_0=0.1$ for the data as a function of $\t_p$ (stars) and $\t_p=0.1$ for the data as a function of $\Delta_0$ (circles).} \label{model1_scaling} \end{figure} \section{Results} We first look at the detailed results for the SNTC statistics. We fix the temperature $T=1$; the passive system then shows the MCT transition at $\l=2.0$ \cite{supmat}. We use $\l=2.1$, where the passive system is in the glassy regime but close to the transition point, and look at the dynamics as a function of activity alone. In Fig. \ref{model1_CTeff}(a) we show the MCT-calculated decay of the correlation function $C(\t)$ (using Eqs. \ref{activemct1}-\ref{modelPi}) for different values of $\Delta_0$, where we have kept $\tau_p=1.2$ fixed. $C(\t)$ first rapidly decays to a plateau and then has a much slower decay from the plateau to zero. As we are interested in the long-time dynamics, we define an $\alpha$-relaxation time, $\t_\alpha$, as the time where $C(\t)$ becomes $0.4$. As we increase $\Delta_0$, $\t_\alpha$ decreases; thus, $\Delta_0$ fluidizes the system, consistent with simulations \cite{mandal2016} and experiments \cite{zhou2009,sadati2014,parry2014}. We can understand this behavior by looking at $T_{eff}$, as plotted in Fig. \ref{model1_CTeff}(b), which increases linearly with $\Delta_0$. The behavior of $C(\t)$ for different $\t_p$ is shown in Fig. \ref{model1_CTeff}(c), where the system fluidizes with increasing $\t_p$. This behavior can also be understood in terms of $T_{eff}$, as shown in Fig. \ref{model1_CTeff}(d), where $T_{eff}$ increases with $\t_p$. Thus, $T_{eff}$ seems to play a role similar to $T$ for the dynamics: $C(\t)$ decays faster and $\t_\alpha$ decreases at larger $T_{eff}$. Within this noise statistics, activity always fluidizes the system \cite{sarojPNAS,mandal2016}. In Figs. \ref{model1_CTeff}b,d we see the excellent agreement between the MCT calculation and the functional form we obtained from the STAP model (Eq. \ref{SNTC}). From the analytic expression (Eq. \ref{SNTC}) we gain an understanding of the roles played by both $f_0$ and $\tau_p$: a larger self-propulsion force allows the trapped particle to reside further from the potential minimum, thereby aiding the escape from the cage and leading to a shorter $\tau_{\alpha}$ and higher fluidity. A larger $\tau_p$ means that the trapped active particle resides for longer times away from the potential minimum, thereby having the same qualitative effect as increasing $f_0$. Next, we provide a scaling analysis for the behavior of $\t_\alpha$ as a function of activity. MCT predicts a power-law divergence of $\tau_\alpha$: $\tau_\alpha\sim (\sigma-\sigma_c)^{-\gamma}$, where $\sigma$ is the control parameter ($T$, density, etc.) and $\sigma_c$ is its critical value for the MCT transition \cite{gotze1989}. We show in the SM (Sec. SIV) \cite{supmat} that $\gamma=1.74$ within the schematic MCT for the passive system.
Then, using the STAP model (Eq. \ref{SNTC}) and setting $T=T_c$, we obtain \begin{equation}\label{scaling_M1} \tau_\alpha\sim(T_{eff}-T_c)^{-\gamma}\sim\left(T+\frac{H\Delta_0\t_p}{1+G\t_p}-T_c\right)^{-\gamma} \sim \left[\frac{H\Delta_0\tau_p}{1+G\t_p}\right]^{-\gamma}. \end{equation} Thus, at constant $\t_p$ we expect $\t_\alpha\sim \Delta_0^{-\gamma}$, while at constant $\Delta_0$ we obtain $\t_\alpha\sim [\t_p/(1+G\t_p)]^{-\gamma}$. In Fig. \ref{model1_scaling} we show that the numerical solution of the MCT, Eqs. (\ref{activemct1}-\ref{modelPi}), agrees very well with this scaling analysis. We expect a deviation from this scaling when $\Delta_0$ is large. When $\Delta_0$ is small, however, a very large $\t_p$ makes the effective temperature saturate, and we expect the scaling behavior to apply for all values of $\t_p$. \begin{figure} \includegraphics[width=8.6cm]{model2_CTeff.eps} \caption{Effect of activity on the glassy behavior for the OUP statistics, Eq. (\ref{OUPmodel}). $T=1.0$ and $\l=2.1$ for this figure. (a) $C(\t)$, at $\t_p=0.1$, decays faster with increasing $T_{eff}^{sp}$. (b) $C(\t)$, at $T_{eff}^{sp}=0.12$, decays slower with increasing $\t_p$, implying that $\t_p$ drives the system closer to the glassy regime. (c) Symbols: MCT data at $\t_p=0.1$; line: fit with $T_{eff}=1+a_{M2} T_{eff}^{sp}$ (Eq. \ref{OUP}) with $a_{M2}=0.72$. (d) Symbols: MCT data at $T_{eff}^{sp}=0.6$; line: fit with $T_{eff}=1+b_{M2}/(1+c_{M2}\t_p)$ (Eq. \ref{OUP}) with $b_{M2}=0.59$ and $c_{M2}=3.24$. {\bf Inset:} Same as in the main figure with semi-log axes.} \label{model2_CTeff} \end{figure} We now look at the behavior of the OUP statistics. A larger $T_{eff}^{sp}$ drives the system away from the glassy regime, as shown in Fig. \ref{model2_CTeff}(a), where $C(\t)$ decays faster for larger $T_{eff}^{sp}$, similar to $\Delta_0$ in the SNTC statistics (Fig. \ref{model1_CTeff}a). However, the behavior with respect to $\t_p$ is opposite to that of the SNTC statistics. We show the decay of $C(\t)$ as a function of $\log \t$ in Fig. \ref{model2_CTeff}(b), where $C(\t)$ decays slower with increasing $\t_p$, driving the system towards the glassy regime, consistent with simulations \cite{flenner2016}. The behavior of this noise statistics can also be understood from Eq. (\ref{OUP}) \cite{sarojPNAS,benisaac2015,szamel2017}, as $T_{eff}$ increases linearly with $T_{eff}^{sp}$ and decreases monotonically with increasing $\t_p$, approaching $T$ when $\t_p\to\infty$. In Figs. \ref{model2_CTeff}(c) and (d) we show the excellent agreement between the $T_{eff}$ obtained from the numerical solution of the MCT and the STAP model, Eq. (\ref{OUP}). We emphasize here that activity {\em never} promotes glassiness as compared to the passive system, and the introduction of any amount of activity {\em always} fluidizes the system, for both noise statistics that we have considered. Fig. \ref{model2_CTeff}(d) shows that $T_{eff}$ decreases with increasing $\t_p$, but it never becomes less than $T$ (Eq. \ref{OUP}). For any non-zero activity, we get $T_{eff}\geq T$. From the analytic expression (Eq. \ref{OUP}) we understand the roles played by both $T_{eff}^{sp}$ and $\tau_p$: the self-propulsion force is now not fixed in amplitude, but increases for shorter $\tau_p$ (Eq. \ref{OUPmodel}). A larger $T_{eff}^{sp}$ acts as $f_0^2$ does in the SNTC statistics. However, a larger $\tau_p$ means that the amplitude of the active force decreases, thereby leading to smaller excursions of the particle away from the potential minimum, and a smaller $T_{eff}$.
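These opposite trends follow directly from the STAP expressions. The sketch below (ours; the effective parameters $k$ and $\Gamma$, and hence $H$ and $G$, are illustrative placeholders, since they are not determined by the microscopics) evaluates Eqs. (\ref{SNTC}) and (\ref{OUP}) together with the MCT power law $\t_\alpha\sim(T_{eff}-T_c)^{-\gamma}$:
\begin{verbatim}
import numpy as np

# Sketch of the STAP expressions combined with
# tau_alpha ~ (T_eff - T_c)^(-gamma). k, Gamma and the activity values
# are illustrative; the prefactor of tau_alpha is set to 1.
T, T_c, k, Gamma, gamma = 1.0, 1.0, 1.0, 1.0, 1.74
H, G = 1.0 / (2.0 * Gamma), k / Gamma

def teff_sntc(delta0, tau_p):
    return T + H * delta0 * tau_p / (1.0 + G * tau_p)

def teff_oup(T_sp, tau_p):
    return T + H * T_sp / (1.0 + G * tau_p)

def tau_alpha(teff):
    return (teff - T_c) ** (-gamma)

for tp in (0.1, 1.0, 10.0):
    print(f"tau_p={tp:5.1f}  "
          f"SNTC tau_alpha={tau_alpha(teff_sntc(0.2, tp)):10.1f}  "
          f"OUP tau_alpha={tau_alpha(teff_oup(0.2, tp)):10.1f}")
# SNTC: tau_alpha falls with tau_p (fluidization);
# OUP : tau_alpha grows with tau_p (closer to the glassy regime).
\end{verbatim}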
\begin{figure} \includegraphics[width=8.6cm]{model2_scaling_diff_del0.eps} \caption{Test of the scaling predictions within the OUP statistics. Symbols are obtained through the numerical solution of the MCT, Eqs. (\ref{activemct1}-\ref{modelPi}), where $\t_\alpha$ is obtained as the time when $C(\t)=0.4$. Lines are fits with the scaling prediction (Eq. \ref{scaling_M2}): $\t_\alpha\sim (1+G\t_p)^\gamma$ with $\gamma=1.74$. {\bf Inset:} Comparison with molecular dynamics simulations. Data obtained from \cite{szamel2015}. The line is a fit of Eq. (\ref{scaling_M2}): $\t_\alpha=a(1+G\t_p)^\gamma$ with $a=0.2$ and $G=36.59$.} \label{model2_scaling} \end{figure} Through an argument similar to the one we used for the scaling analysis of $\t_\alpha$ within the SNTC statistics, using the STAP model (Eq. \ref{OUP}) we obtain for the OUP statistics \begin{equation}\label{scaling_M2} \t_\alpha\sim \left[\frac{HT_{eff}^{sp}}{1+G\t_p}\right]^{-\gamma}. \end{equation} Therefore, we see $\t_\alpha \sim {T_{eff}^{sp}}^{-\gamma}$ at constant $\t_p$ and $\t_\alpha\sim (1+G\t_p)^{\gamma}$ at constant $T_{eff}^{sp}$. We obtain $\t_\alpha$ from the numerical solution of the MCT, Eqs. (\ref{activemct1}-\ref{modelPi}), with the OUP noise statistics and show the behavior of $\t_\alpha$ as a function of $\t_p$ in Fig. \ref{model2_scaling} for three values of $T_{eff}^{sp}$. We see that our scaling analysis agrees very well with the numerical solution of the MCT. We have also obtained $\t_\alpha$ for different $\t_p$ from the particle-based simulations of an active dense fluid of Ref. \cite{szamel2015}, and find good agreement with our scaling analysis, as shown in the inset of Fig. \ref{model2_scaling}. Note that both the scaling relations in Eqs. (\ref{scaling_M1}) and (\ref{scaling_M2}) work only if the passive system is close to the MCT transition regime, since the starting relations of our scaling analysis are valid only in this regime.
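For concreteness, the kind of fit shown in the inset of Fig. \ref{model2_scaling} can be set up as in the following sketch (ours; the data arrays are synthetic placeholders standing in for $(\t_p,\t_\alpha)$ pairs obtained from the MCT solution or from simulations):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit tau_alpha = a*(1 + G*tau_p)**GAMMA with GAMMA = 1.74 fixed,
# as in the inset of the OUP scaling figure. Synthetic data only.
GAMMA = 1.74

def scaling_oup(tau_p, a, G):
    return a * (1.0 + G * tau_p) ** GAMMA

tau_p = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0])
rng = np.random.default_rng(1)
tau_a = scaling_oup(tau_p, 0.2, 36.59) * (1 + 0.05 * rng.standard_normal(6))

(a_fit, G_fit), _ = curve_fit(scaling_oup, tau_p, tau_a, p0=(1.0, 10.0))
print(f"a = {a_fit:.2f}, G = {G_fit:.2f}")   # recovers ~0.2 and ~36.6
\end{verbatim}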
\section{Discussion and conclusion} We have provided a nonequilibrium mode-coupling theory for active systems of self-propelled particles in the regime of a dense fluid, where activity is included through a colored noise. Considering two different models for the active noise statistics, we have provided a scaling analysis for the dynamics of the system when the passive system is close to the MCT transition point: $\t_\alpha\sim \mathcal{A}^{-\gamma}$, where $\mathcal{A}$ is either $\Delta_0$ or $T_{eff}^{sp}$ when $\t_p$ remains constant. Within the SNTC statistics we find $\t_\alpha\sim [\t_p/(1+G\t_p)]^{-\gamma}$, while for the OUP statistics $\t_\alpha\sim (1+G\t_p)^\gamma$, where $G$ is a system-dependent constant and $\gamma=1.74$, as for the passive MCT. We have not been able to solve the full wave-vector-dependent theory numerically using the standard algorithms, due to the excessive computation time required. MCT predicts similar dynamics for all wave vectors, and we expect our qualitative results to remain unchanged. The exponent $\gamma$ may vary for different systems from the schematic MCT value, as is well known for passive systems \cite{das2004}. However, irrespective of the particular value of $\gamma$, what is interesting is that the effect of activity on $\t_\alpha$ in the active system is governed by the same exponent as that of the passive system. Comparison with the numerical solution of the nonequilibrium MCT, as well as with published molecular-dynamics simulation data \cite{szamel2015}, shows excellent agreement with the scaling analysis. The nonequilibrium nature of the system is manifested through a time-dependent effective temperature, $T_{eff}(\t)$, derived from a generalized FDR. This shows that a description of such systems within a mode-coupling theoretical framework in terms of the correlation function alone \cite{szamel2016,feng2017,kranz2010} is incomplete. $T_{eff}(\t)$ has two distinct regimes: at very short times ($\t\ll\t_p$) we have $T_{eff}(\t)=T$, and it dynamically evolves to a higher value, determined by the parameters of activity, at long times ($\t\gg\t_p$).\footnote{The ITT approach explicitly separates out these two regimes and our theory should be viewed as complementary to that of \cite{liluashvili2017}.} This implies that a characterization of the system in terms of a unique effective temperature is not possible. However, we find that the dynamics can be understood in terms of the long-time value of the effective temperature: $T_{eff}(\t\to\infty)\equiv T_{eff}$. Through the well-known caging scenario of MCT for passive systems, where $\alpha$-relaxation is the cage-breaking dynamics \cite{giulioreview}, we can associate $T_{eff}$ with the potential energy of a simplified model for the dynamics of a single trapped active particle (STAP) within an effective harmonic potential \cite{benisaac2015}. Such a mapping of an interacting system onto a single particle in an external field created by all the other particles has been proposed recently \cite{manoj2017} for a passive system. We find excellent agreement between the activity dependence of the potential energy obtained from this simplified model and the long-time limit $T_{eff}(\t\to\infty)$ obtained from the numerical solution of the active MCT. This phenomenological mapping to the simplified model provides us with an analytic expression for $T_{eff}$, giving deeper insight into the effects of activity on the motion within the dense active fluid. Mode-coupling theory for passive systems is valid within a window of temperature and/or density and fails beyond certain values of these parameters. We expect the non-equilibrium MCT to have a similar regime of validity. Random first order transition (RFOT) theory works beyond this regime, where MCT fails. We have recently extended RFOT to active systems \cite{sarojPNAS}, and the qualitative nature of the effect of activity within MCT, found in this work, is consistent with the extended RFOT theory. However, the regimes of validity of these theories are different, as are their quantitative predictions; MCT predicts power-law scaling for the relaxation time whereas RFOT predicts activated scaling. A sharp distinction between the MCT regime and the activated regime of RFOT is not possible due to fluctuations in finite-dimensional systems \cite{giuliobook,rfimMCT}. We find that activity fluidizes the system within MCT: a non-ergodic glassy regime of passive MCT becomes a regular fluid in the presence of activity, close to the glass transition point. Specifically, while the glass phase of MCT is known to describe the passive system incorrectly, we predict that in the presence of activity MCT correctly describes the behavior of the active fluid. It remains to be tested whether these predictions agree with future detailed simulations. \section*{Acknowledgements} We are grateful to Madan Rao and Chandan Dasgupta for useful discussions, comments and a critical reading of the manuscript. SKN would like to thank G. Szamel, J. Kurchan, S. Ramaswamy, T. Voigtmann and J. Prost for many important discussions, L.
Berthier for comments, and the Koshland Foundation for funding through a fellowship. NSG is the incumbent of the Lee and William Abramowitz Professorial Chair of Biophysics, and this research was made possible in part by the generosity of the Harold Perlman family. \input{activeMCT.bbl} \onecolumngrid \section*{Supplementary Material: Nonequilibrium mode-coupling theory for the steady state of dense active systems of self-propelled particles} In this Supplementary Material, we provide some discussion of the active system, details of the schematic MCT calculation, a brief description of the numerical method, and some results of the equilibrium mode-coupling theory that are relevant for our discussion. \\ \twocolumngrid \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thesubsection}{S\Alph{subsection}} \setcounter{equation}{0} \setcounter{figure}{0} \subsection{Description of active systems of self-propelled particles} \begin{figure} \includegraphics[width=8.6cm]{activeparticle_persistence.eps} \caption{Schematic picture of an active system consisting of self-propelled particles with a self-propulsion force $f_0$ and a persistence time $\t_p$ for their motion in a certain direction. One possible effect of persistence is schematically illustrated on the right for a collision event of two particles (see text for details).} \label{ppschematic} \end{figure} We consider an active system of self-propelled particles in the dense regime. Each particle has a self-propulsion force $f_0$ and a persistence time $\t_p$ for its motion in a certain direction, which is marked by an arrow on the particles in Fig. \ref{ppschematic} for clarity. The fact that the dynamics of such a system differ from those of a passive system can be understood from the following simple consideration. Consider one collision event between two particles, as shown on the right of Fig. \ref{ppschematic}. As the particles approach each other, imagine that their directors point towards each other, as shown in the figure. As they collide, they move away from each other due to the short-range repulsive potential; but, because their self-propulsion remains along their directors, they again move towards each other and collide. This continues over a timescale of the order of $\t_p$, until the directors lose their correlation. Thus, even if the particles are purely repulsive, self-propulsion that is persistent over a timescale $\t_p$ creates an effective attractive force among the particles. In our description, we implement activity through the active noise statistics. Usually two such statistics have been considered in the literature \cite{smflenner2016,smmandal2016}, as we discuss in the main text.
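This effective attraction can be made concrete with a toy simulation (ours, purely illustrative; the force law, flip rate, and all parameter values are ad hoc): two overdamped self-propelled particles in one dimension collide head-on, and the duration of their first encounter grows with the persistence time.
\begin{verbatim}
import numpy as np

# Toy sketch: two overdamped SPPs in 1D with unit mobility. Directors
# flip with rate 1/tau_p; a harmonic repulsion acts when they overlap.
def first_encounter(tau_p, f0=1.0, k=50.0, sigma=1.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([-2.0, 2.0])       # positions
    n = np.array([+1.0, -1.0])      # directors point toward each other
    t, t_start, started = 0.0, 0.0, False
    while t < 50.0:
        gap = x[1] - x[0]
        if gap < sigma and not started:
            started, t_start = True, t
        if started and gap > 2.0 * sigma:
            return t - t_start       # pair has broken up
        F = k * (sigma - gap) if gap < sigma else 0.0
        x += dt * (f0 * n + np.array([-F, F]))   # overdamped dynamics
        n *= np.where(rng.random(2) < dt / tau_p, -1.0, 1.0)  # flips
        t += dt
    return t - t_start if started else 0.0

for tp in (0.1, 1.0, 10.0):
    mean_t = np.mean([first_encounter(tp, seed=s) for s in range(20)])
    print(f"tau_p={tp:5.1f}: mean first-encounter duration = {mean_t:.2f}")
\end{verbatim}
Longer $\t_p$ keeps the pair together for longer, mimicking an attraction even though the pair potential is purely repulsive.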
\subsection{Details of the mode-coupling theory calculation} We have the full wavevector-dependent equations of motion for the correlation, $C_k(t,t')=\langle\delta\rho_k(t)\delta\rho_{-k}(t')\rangle$, and the response, $R_k(t,t')=\langle \p\delta\rho_k(t)/\p \hat{f}_T^L(t')\rangle$, functions (Eqs. (7-9) in the main text): \begin{align}\label{smkdepeq1} \frac{\p C_k(t,t')}{\p t} &=-\mu_k(t)C_k(t,t')+\int_0^{t'}\d s\mathcal{D}_k(t,s)R_k(t',s) \nonumber\\ +\int_0^t\d s &\S_k(t,s)C_k(s,t')+2TR_k(t',t)\\ \frac{\p R_k(t,t')}{\p t} &=-\mu_k(t)R_k(t,t') \nonumber\\ &+\int_{t'}^t \d s\S_k(t,s)R_k(s,t')+\delta(t-t') \\ \mu_k(t) = T&R_k(0)+\int_0^t \d s[\mathcal{D}_k(t,s)R_k(t,s)+\S_k(t,s)C_k(t,s)] \end{align} with \begin{align} \mathcal{D}_k(t,s)&=\frac{\kappa_1^2}{2} \int_{\bf q} \mathcal{V}_{k,q}^2 C_q(t,s)C_{k-q}(t,s)+\kappa_2^2\Delta_k(t-s),\\ \S_k(t,s)&= \kappa_1^2 \int_{\bf q} \mathcal{V}_{k,q}^2 C_{k-q}(t,s)R_q(t,s), \label{smkdepeq5} \end{align} where $\kappa_1=k_BT/D_Lk^2$ and $\kappa_2=1/D_L$. The details of this field-theoretical method can be found in a number of places, including \cite{smreichman2005,smcastellani2005,smsaroj2016}. Eqs. (\ref{smkdepeq1}-\ref{smkdepeq5}) form the mode-coupling theory for a generic nonequilibrium non-stationary state of an active system. However, the numerical solution of these equations is not possible due to the excessive time requirement with the currently available algorithms, even in the steady-state limit, where we need to solve the equations iteratively (see Sec. SC). We therefore take a schematic approximation, writing the theory at a particular wave vector $k_{max}$, which corresponds to the first maximum of the static structure factor; this leads to simplified equations that are manageable for numerical solution. We then obtain the equations for $C(t,t')\equiv C_{k=k_{max}}(t,t')$ and $R(t,t') \equiv R_{k=k_{max}}(t,t')$ as \begin{align} \frac{\p C(t,t')}{\p t} &=-\mu(t)C(t,t')+\int_0^{t'}\d s\mathcal{D}(t,s)R(t',s) \nonumber\\ +\int_0^t\d s &\S(t,s)C(s,t')+2TR(t',t)\label{correq1} \\ \frac{\p R(t,t')}{\p t} &=-\mu(t)R(t,t')+\int_{t'}^t \d s\S(t,s)R(s,t')+\delta(t-t') \label{responseeq1}\\ \mu(t) = &T+\int_0^t \d s[\mathcal{D}(t,s)R(t,s)+\S(t,s)C(t,s)] \label{def_mu} \end{align} with $\mathcal{D}(t,s)=2\l C^2(t,s)+\Delta(t-s)$ and $\S(t,s)=4\l C(t,s)R(t,s)$. Note that $\l$ contains the information of the interaction through the direct correlation function. It is well known that the schematic form of MCT and the equations for the $p$-spin glass model are analogous \cite{smkirkpatrick1987}, and similar equations, as in Eqs. (\ref{correq1}-\ref{def_mu}), were also obtained in \cite{smberthier2013} for the $p$-spin spherical active spin-glass model. Now we define the integrated response function $F(t,t')$ as \begin{equation} F(t,t')=-\int_{t'}^tR(t,s)\d s, \end{equation} as this is more advantageous for the numerical integration, since $R$ fluctuates more than $F$. To write the equations in terms of $F(t,t')$, we integrate Eq. (\ref{responseeq1}) over $t'$ and obtain \begin{equation}\label{varchange} \frac{\p F(t,t'')}{\p t}=-\mu(t)F(t,t'')-1-\int_{t''}^t\int_{t'}^t\d s\S(t,s)R(s,t')\d t'. \end{equation} We show the region of integration for the last term above in Fig. \ref{intregion}, where we need to integrate over $s$ first and then over $t'$. However, to write the equation in terms of $F(t,t')$, we need to carry out the integration over $t'$ first (i.e., along the dotted lines), in which case the integration limits run from $t''$ to $s$.
Then we obtain the equation for $F(t,t')$ from Eq. (\ref{responseeq1}) as \begin{align}\label{varchange2} \frac{\p F(t,t')}{\p t}=-1 -\mu(t)F(t,t') +\int_{t'}^t \d s\S(t,s)F(s,t'). \end{align} \begin{figure} \includegraphics[height=5cm]{region_int_response.eps} \caption{Region of integration for the last term in Eq. (\ref{varchange}). We need to change the order of integration of the variables $t'$ and $s$ to obtain Eq. (\ref{varchange2}) [see text].} \label{intregion} \end{figure} Using the definitions of $\mathcal{D}(t,t')$ and $\S(t,t')$ in Eqs. (\ref{correq1}-\ref{def_mu}), we obtain the equations of motion for the correlation, $C(t,t')$, and integrated response, $F(t,t')$, functions as \begin{subequations} \label{correq2} \begin{align} \frac{\p C(t,t')}{\p t} &=-\mu(t)C(t,t')+2\l\int_0^{t'}\d sC^2(t,s)\frac{\p F(t',s)}{\p s} \nonumber\\ &+4\l\int_0^t\d s C(t,s)\frac{\p F(t,s)}{\p s}C(s,t') \nonumber\\ &+\int_0^{t'}\Delta(t-s)\frac{\p F(t',s)}{\p s}\d s \\ \frac{\p F(t,t')}{\p t} &= -1 -\mu(t)F(t,t') \nonumber\\ &+4\l\int_{t'}^t \d sC(t,s)\frac{\p F(t,s)}{\p s}F(s,t')\\ \mu(t) =& T+\int_0^t \d s\bigg[\bigg\{2\l C^2(t,s)+\Delta(t-s)\bigg\} \frac{\p F(t,s)}{\p s} \nonumber\\ &+4\l C(t,s)\frac{\p F(t,s)}{\p s}C(t,s)\bigg] \end{align} \end{subequations} These equations are valid in general for a non-equilibrium system, even in the aging regime. We assume that the system goes to a steady state at long times, where $C(t,t')$ and $F(t,t')$ become functions of the time difference $(t-t')$ alone. It can be shown through the numerical solution of Eqs. (\ref{correq2}) that, if the final parameter values are such that the system is in the liquid state, the system dynamically evolves to this steady state. To obtain the equations for this steady state, we take the limits of $t$ and $t'$ to $\infty$ such that $(t-t')=\tau$ remains finite. Then we obtain \begin{align} \label{gss1} \frac{\p C(\t)}{\p \t} &= \Pi(\t)-\mu(\infty)C(\t) -\epsilon(\t) \nonumber \\ &+4\l\int_0^\t \d s C(\t-s)\frac{\p F(\t-s)}{\p s}C(s)\\ \frac{\p F(\t)}{\p \t} &= -1 -\mu(\infty)F(\t)-4\l\int_0^\t \d s C(s)\frac{\p F(s)}{\p s}F(\t-s)\nonumber \end{align} where the different parameters are defined as \begin{subequations}\label{gss2} \begin{align} \Pi(\t) &= -\int_\t^\infty \Delta(s)\frac{\p F(s-\t)}{\p s} \d s\\ \epsilon(\t) &= 2\l\int_\t^\infty\d sC^2(s)\frac{\p F(s-\t)}{\p s} \nonumber\\ &+4\l\int_\t^\infty\d s C(s)\frac{\p F(s)}{\p s}C(s-\t)\\ \mu(\infty)&= T-6\l\int_0^\infty \d sC^2(s)\frac{\p F(s)}{\p s}-\int_0^\infty \Delta(s)\frac{\p F(s)}{\p s}\d s. \end{align} \end{subequations} \begin{figure} \includegraphics[height=8.6cm,angle=-90]{equilibrium_corfn.eps} \caption{Decay of the correlation function $C(\t)$ for different values of $\l$ within equilibrium MCT, obtained from solving Eq. (\ref{eqMCT_standard}). $C(\t)$ does not decay to zero for $\l\geq 2.0$; this is the MCT transition, where the system goes to a non-ergodic state. We note that the non-ergodic state is not found in simulations or experiments, where some other mechanisms, absent within MCT, take over and the theory fails to describe the system beyond this point.} \label{equilibriumplot} \end{figure} In equilibrium, using the fluctuation-dissipation relation (FDR), $\p C/\p \t=T \p F/\p\t$, we obtain the equation for the correlation function from Eq. (\ref{gss1}) as \begin{align} \label{eqMCT} \frac{\p C(\t)}{\p \t}+TC(\t)+\frac{2\l}{T}C^3(\infty)[1-C(\t)] \nonumber\\ +\frac{2\l}{T}\int_0^\t C^2(\t-s)\frac{\p C(s)}{\p s}\d s=0.
\end{align} This equation becomes the standard MCT equation for the ergodic state when $C(\infty)=0$, in which case the third term in the above equation vanishes. In the nonergodic state, however, $C(\infty)$ is non-zero and Eq. (\ref{eqMCT}) differs from the standard MCT equation. A resolution of this paradox has been offered in \cite{smbarrat1996}, where it has been shown that, to obtain the MCT from the field-theoretic treatment in the non-ergodic state, one must start from a different initial condition that is commensurate with this state; one then obtains the standard MCT equation. We concentrate on the ergodic state in this work, where $C(\infty)=0$, and obtain the equilibrium MCT equation as \begin{equation}\label{eqMCT_standard} \frac{\p C(\t)}{\p \t}+TC(\t) +\frac{2\l}{T}\int_0^\t C^2(\t-s)\frac{\p C(s)}{\p s}\d s=0. \end{equation} The solution of Eq. (\ref{eqMCT_standard}) is well known \cite{smgoetzebook,smdas2004}. We set $T$ to unity and show the decay of $C(\t)$ as a function of $\log\t$ for different values of $\l$ in Fig. \ref{equilibriumplot}. As $\l$ increases, the decay of $C(\t)$ becomes slower, and at $\l=2.0$, $C(\t)$ does not decay to zero any more; this is the MCT transition point, where the system goes to a non-ergodic state. Such a transition, however, is not found in simulations or experiments on structural glasses, and the theory fails to describe the system beyond this point. Within this description, $\l$ is inversely proportional to $T$; therefore, in terms of $T$, results for larger $\l$ can be seen as results for smaller $T$. Eqs. (\ref{gss1}), along with the definitions in (\ref{gss2}), give the mode-coupling theory for an active system of self-propelled particles in the dense or low-temperature regime. A closer look at Eqs. (\ref{gss2}) shows that the evaluation of the variables $\Pi(\t)$, $\epsilon(\t)$ and $\mu(\infty)$ requires the values of $C(\t)$ and $F(\t)$ for all values of $\t$, from $0$ to $\infty$. Therefore, we must solve the equations through an iterative method, and the algorithm must be extremely accurate. We have modified the algorithm that was used to investigate the aging behavior in \cite{smsaroj2012} for a steady state. However, this algorithm is not sufficiently accurate close to the transition: a small error gets amplified in later iterations and the solution blows up. To give an example, when $T=1.0$ and $\l=1.99$, we could not iterate the solution more than three times. Therefore, we write the equations slightly differently, using a generalized FDR through the definition of a time-dependent effective temperature $T_{eff}(\t)$: \begin{equation}\label{teff_def} \frac{\p C(\t)}{\p \t}=T_{eff}(\t)\frac{\p F(\t)}{\p \t}. \end{equation} We have seen that $T_{eff}(\t)$, obtained through Eq. (\ref{teff_def}) from the numerical solution of Eqs. (\ref{gss1}), varies slowly and has two distinct regimes, as discussed in the main text. At small $\t$, $T_{eff}(\t)=T$; at large $\t$ it goes to a different value, larger than $T$, and the crossover from $T$ to the larger value occurs at a timescale $\t\sim \mathcal{O}(\t_p)$.
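In practice, $T_{eff}(\t)$ is extracted from the discretized $C(\t)$ and $F(\t)$ through finite differences of Eq. (\ref{teff_def}). A minimal sketch (ours; the inputs below are synthetic toys that merely mimic the two-regime structure, not actual MCT solutions):
\begin{verbatim}
import numpy as np

# Sketch: extract T_eff(tau) = (dC/dtau)/(dF/dtau) by finite differences.
# C and F are synthetic, built to satisfy the relation with a prescribed
# two-regime T_eff; real inputs come from the MCT solver.
tau = np.linspace(0.0, 10.0, 2001)
T, T_long, tau_p = 1.0, 1.3, 0.5
Teff_true = T + (T_long - T) * (1.0 - np.exp(-tau / tau_p))

C = np.exp(-tau)                                   # toy correlator
dtau = tau[1] - tau[0]
F = np.cumsum(np.gradient(C, tau) / Teff_true) * dtau

Teff = np.gradient(C, tau) / np.gradient(F, tau)   # generalized FDR
print(Teff[10], Teff[1000], Teff[-1])  # ~T at short tau, ~T_long at long tau
\end{verbatim}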
Since $T_{eff}(\t)$ varies slowly, we are justified in writing the MCT equations for the active steady state as \begin{align}\label{activemcteq1} \frac{\p C(\t)}{\p \t}&=\Pi(\t)-(T-p)C(\t)-\int_0^\t m(\t-s)\frac{\p C(s)}{\p s}\d s \\ \frac{\p F(\t)}{\p \t}&=-1-(T-p)F(\t)-\int_0^\t m(\t-s)\frac{\p F(s)}{\p s}\d s \label{activemcteq2} \end{align} where we have \begin{subequations} \label{def_activemct} \begin{align} & m(\t-s)=2\l\frac{C^2(\t-s)}{T_{eff}(\t-s)} \\ & p=\int_0^\infty \Delta(s)\frac{\p F(s)}{\p s}\d s \\ & \Pi(\t)=-\int_\t^\infty \Delta(s)\frac{\p F(s-\t)}{\p s}\d s . \end{align} \end{subequations} Note that the definition of $T_{eff}(\t)$ through Eq. (\ref{teff_def}) does not imply any loss of generality, as we evaluate $T_{eff}(\t)$ at each time step. The advantage of the above form is that the standard algorithm for equilibrium MCT, which can be used with high accuracy, is easily extended and used through an iteration method, as discussed in Sec. SC; we therefore choose to present the theory in the form of Eqs. (\ref{activemcteq1}-\ref{def_activemct}). We have checked that, in the regime of parameter space where the earlier numerical method works, the solutions of Eqs. (\ref{gss1}-\ref{gss2}) and Eqs. (\ref{activemcteq1}-\ref{def_activemct}) are the same. The initial conditions for the correlation and response functions are $C(0)=1.0$ and $F(0)=0.0$. \begin{figure} \includegraphics[width=8.6cm]{activemct_numsol.eps} \caption{Illustration of the iterative procedure for the solution of the mode-coupling theory for the steady-state of an active system.} \label{activemct_numsol} \end{figure} \subsection{Numerical Solution} \label{numsolsec} \begin{figure} \includegraphics[width=8.6cm]{passive_relaxationtime.eps} \caption{Symbols show the equilibrium MCT data for $\t_\alpha$, defined as the time when the correlation function $C(\t)$ becomes $0.4$ (see Fig. \ref{equilibriumplot}). The dashed line is a fit to the equation $\log \tau=a-\gamma \log(\l-\l_c)$ with $a=0.69$ and $\gamma=1.74$.} \label{passive_reltime} \end{figure} The numerical solution of Eqs. (\ref{activemcteq1}-\ref{activemcteq2}), along with the definitions in Eqs. (\ref{def_activemct}), can be obtained through a generalization of the standard algorithm used to solve the MCT equations in equilibrium \cite{smfuchs1991,smmiyazaki2004,smflenner2005}. The advantage of this algorithm is that it can be used with any desired accuracy, simply by reducing the initial step size and increasing the number of steps after which the time step is doubled \cite{smmiyazaki2004}. We start with the passive system at a certain $T$ and $\l$ with $\Delta(\t)=0$ and obtain $C(\t)$ and $F(\t)$. $T_{eff}(\t)$ for $\Delta(\t)=0$ is equal to $T$. We then use these values of $F(\t)$ to obtain $p$ and $\Pi(\t)$ using the relations in Eqs. (\ref{def_activemct}). We again evaluate $C(\t)$ and $F(\t)$ with these new values of the parameters, and obtain the parameters again with the new values of $F(\t)$. We continue this process until the old and new values of $p$ and $\Pi(\t)$ are the same. We illustrate this through a flowchart in Fig. \ref{activemct_numsol}, and in the code sketch below. When activity is not very large (for example $\Delta_0=0.1$ and $\t_p=0.1$), it takes around 30 iterations to achieve the desired accuracy; for larger activity parameters, it takes of the order of 100 iterations for the solution to converge.
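The loop structure of this procedure can be summarized in the following skeleton (ours; \texttt{mct\_solve} stands in for the standard decimation solver of Eqs. (\ref{activemcteq1}-\ref{activemcteq2}) and is not spelled out here):
\begin{verbatim}
import numpy as np

# Skeleton of the iterative scheme shown in the flowchart: start from
# the passive solution, update p and Pi(tau) from F(tau), and repeat.
# mct_solve(...) is a placeholder for the decimation algorithm.
def iterate_active_mct(T, lam, delta, tau, tol=1e-8, max_iter=200):
    p, Pi = 0.0, np.zeros_like(tau)        # passive start: Delta = 0
    for it in range(max_iter):
        C, F = mct_solve(T, lam, p, Pi)    # solve the schematic MCT
        dF = np.gradient(F, tau)
        m = len(tau)
        p_new = np.trapz(delta(tau) * dF, tau)
        Pi_new = np.array([-np.trapz(delta(tau[i:]) * dF[:m - i], tau[i:])
                           for i in range(m)])
        if abs(p_new - p) < tol and np.max(np.abs(Pi_new - Pi)) < tol:
            return C, F, p_new, Pi_new     # converged
        p, Pi = p_new, Pi_new
    raise RuntimeError("iteration did not converge")
\end{verbatim}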
\subsection{Exponent for the power-law divergence of $\alpha$-relaxation time within schematic MCT} In a glassy system we are interested in the long-time dynamics, and we therefore look at the $\alpha$-relaxation time $\t_\alpha$, defined as the time when $C(\t)$ becomes $0.4$. We obtain the mode-coupling exponent $\gamma$ for the $\alpha$-relaxation time through $\tau_\alpha\sim (\sigma-\sigma_c)^{-\gamma}$, where $\sigma$ is any control parameter ($T$ or density) and $\sigma_c$ is its critical value, at which we obtain the MCT transition for the passive system. We extract $\t_\alpha$ from the numerical solution of Eq. (\ref{eqMCT_standard}) and fit the data with the form $\log \tau=a-\gamma \log(\l-\l_c)$, obtaining $a=0.69$ and $\gamma=1.74$. In simulations or experiments this value of $\gamma$ may vary slightly, as is well known for equilibrium MCT. What is important, however, is that the same exponent as for the passive system governs the effect of activity on the dynamics of the active system when the parameters are such that the passive system is close to the MCT transition point. \input{SupMat.bbl} \end{document}
\section{Introduction} The radial velocity (RV) method is the principal technique for constraining the masses of exoplanets \citep{FirstDetection_1995}. It provides complementary information to the transit method, e.g., as used by the \emph{Kepler} and TESS spacecraft \citep{Borucki977, 1538-3873-126-938-398, TESS2014} and ground-based transit surveys. The Keplerian reflex motion induced in a Sun-like star by an Earth-mass planet in the habitable zone is of order 10 cm~s$^{-1}$ \citep{Fischer2016}, the target sensitivity of next-generation spectrographs \citep{Pepe_et_al_2010}. However, contributions to observed stellar RVs from photospheric stellar activity often exceed 1 m~s$^{-1}$ even in the quietest Sun-like stars, posing a significant barrier to the detection of exoplanets by the RV method (e.g. \citealt{Saar_et_al_1997, schrijver_zwaan_2000, Isaacson_Fischer_2010, Motalebi_et_al_2015}). Several recent works describe a variety of models to mitigate the effects of magnetic activity on stellar RVs. One approach has been to study the Sun as a star, extracting solar activity estimates from images of the solar surface \citep{m2010, haywood, tim} and comparing to simultaneous disk-integrated spectral measurements. In order to remove unwanted stellar signals from exoplanet searches, however, methods for extracting stellar activity directly from spectra, and not from ancillary datasets, must be developed. For Sun-like stars with low activity, suppression of convective blueshift due to photospheric plage (hereafter $\rm RV_{\rm conv}$) dominates over the wavelength-independent photometric effects due to spots and over RV shifts induced by Earth-like exoplanets \citep{Meunier_et_al_2010_MDI, Dumusque_et_al_2014, haywood} (these wavelength-independent contributions are hereafter denoted $\rm RV_{\rm sppl}$). \cite{m2017Model} (hereafter M17) have developed one model to isolate $\rm RV_{\rm conv}$ contributions, based on the observed non-linear relationship between the relative depths and absolute RV blueshifts of spectral lines of a given species (here neutral iron), driven by plasma flow in granules, as described in \cite{gray2009, reiners, m2017Other, GrayOostra2018}. The exact physical origin of this observed correlation is non-trivial: a correct description of spectral line formation necessitates the summation of many different line profiles, each formed at a different depth in the photosphere, and requires a full three-dimensional treatment (e.g., see \citealt{Nordlund2009, Stein2012, Cegla_2013, Bergemann_et_al_2019} and references therein). An intuitive (though inexact) understanding of this relationship may be gained by considering a simplified 1D picture: in this model, rising plasma low in the photosphere exhibits a strong RV blueshift, while plasma closer to the surface has most of its motion directed tangentially as it merges into intergranular lanes, and thus exhibits less RV blueshift \citep{Dravins_1981}. While many factors such as temperature, electron pressure, and atomic constants affect spectral line relative depth \citep{gray2005book}, for spectral lines of a given atomic species, line depth shows a strong anti-correlation with height of formation in the stellar photosphere. Therefore, the absolute radial velocity blueshift shows a strong, non-linear relationship with line depth, commonly referred to as the third signature of stellar granulation \citep{gray2009}.
M17 leverage the dominance of $\rm RV_{\rm conv}$ to write the RV time series derived from a set of lines $s_0$ as \begin{equation} \rm RV_0 = \rm RV_{\rm sppl} + \rm RV_{\rm conv} \end{equation} where $\rm RV_0$ is the radial velocity measured with this specific line list, $\rm RV_{\rm conv}$ are line-list-dependent contributions due to the suppression of convective blueshift, and $\rm RV_{\rm sppl}$ are photometric variations (e.g. spots and plage), planetary signals, or other RV sources that are the same for all spectral lines. M17 make use of the non-linear relationship between line depth and convective shift by noting that an RV time series derived from a sublist $s_1$ of $s_0$ with a restricted depth range can be written \begin{equation} \rm RV_1 = \rm RV_{\rm sppl} + \alpha \rm RV_{\rm conv} \end{equation} where $\alpha$ is the ratio of the weighted mean shift in radial velocity by suppression of convective blueshift from line list $s_1$ compared to line list $s_0$. Based on the third signature of granulation \citep{gray2009}, we would expect $\alpha < 1$ for a sublist $s_1$ comprising strong lines formed close to the top of the photosphere, and $\alpha > 1$ for a sublist of weak lines, formed deep in the photosphere. If a precise value for $\alpha$ is known or can be inferred, we can invert the observed $\rm RV_0$ and $\rm RV_1$ time series to extract the time series of interest, $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$. Using time series $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$ extracted from solar photospheric images, M17 construct synthetic time series $\rm RV_0$ and $\rm RV_1$ using a value for $\alpha$ fitted from a solar atlas \citep{kurucz84, kurucz05} and added white noise. The authors then test several methods for estimating $\alpha$ on these synthetic time series, finding good convergence for the value of $\alpha$ across the methods (within 5\% of the true value under low-noise conditions; M17). Using this calculated value of $\alpha$, they then recover and validate the original $\rm RV_{\rm sppl}$ time series. Ideally, this technique could be used to correct RV time series for $\rm RV_{\rm conv}$ contributions and thereby lower the RV activity threshold. On real data, determining an absolute scale for radial velocities is challenging, making it difficult to determine $\alpha$ precisely. M17 apply these methods to HARPS exposures of HD207129 but find no agreement among the values of $\alpha$ derived by the different estimation methods, which they attribute to infrequent observations and low SNR. Using the solar telescope \citep{phillips16} operating with the HARPS-N spectrograph at the Telescopio Nazionale Galileo (TNG, \citealt{HARPSN_2012}), we extract high-resolution disk-integrated solar spectra \citep{dumusque}. We now have more than 50,000 high-SNR solar exposures spanning over 4 years of observing \citep{ACC}. In this work, we apply the methods of M17 to the first 2.5 years of the solar dataset (from Summer 2015 to November 2017) to attempt a recovery of a precise value of $\alpha$ for use in reconstructing $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$. In Section 2, we discuss our method for extracting line-by-line RVs from the HARPS-N solar spectra. In Section 3, we discuss various techniques for determining $\alpha$ from the resulting RV time series, and attempt to compute consistent $\alpha$ values.
We conclude in Section 4 with a discussion of the resulting values, and possible explanations for why the model does not reduce the RV RMS on our dataset. \section{Methods} \subsection{Extracting RVs from HARPS-N spectra} Third-signature plots in the literature are often based on neutral iron lines to demonstrate the relationship between relative depth and absolute convective blueshift \citep{gray2009, reiners, GrayOostra2018}. In order to compare lines known to exhibit the third-signature effect, we consider a line list from the NIST database for Fe I lines\footnote{\url{https://physics.nist.gov/PhysRefData/ASD/lines_form.html}}. We extract disk-integrated HARPS-N solar spectra over a 2.5 year span, with an average of 51 exposures per day. We cut data taken in overcast weather, as identified using the HARPS-N exposure meter, and reject data for any day with five or fewer exposures. Each spectrum is shifted to a heliocentric reference frame using relative velocities from the JPL Horizons ephemeris \citep{Horizons_1996}. We normalize the spectrum continuum by dividing by the corresponding blaze measurement; we propagate the photon shot-noise error from each into the fits of each spectral line, as described below. \begin{table} \centering \caption{\label{table:linelist} First ten iron lines used in this analysis. An extended line list containing all 765 spectral lines used in our analysis is available online at DOI:\dataset[10.5281/zenodo.3541149]{https://doi.org/10.5281/zenodo.3541149}.} \begin{tabular}{l} \hline \hline Wavelength (\AA) \\ \hline 3922.91\\ 3946.99\\ 3948.10\\ 3975.21\\ 3995.98\\ 4000.25\\ 4000.46\\ 4001.66\\ 4022.74\\ 4047.30\\ \vdots \\\hline \end{tabular} \end{table} For each individual spectral line, we stack the Doppler-shifted measurements from a given day to produce a composite line profile for each day. This approach, stacking data from different exposures instead of averaging over multiple exposures, avoids interpolating data onto a common wavelength grid. We fit the line core (0.2~\AA\ total) with Gaussian profiles to extract the relative depth, and the line center (0.1~\AA\ total) with second-degree polynomials to measure the RV.\footnote{Full lists of each Gaussian fit parameter as a function of time for each spectral line are available online at DOI:\dataset[10.5281/zenodo.3541149]{https://doi.org/10.5281/zenodo.3541149}.} We adopted polynomial fits to best mirror the methods of M17. We then convert these line-center positions from wavelength to RV. This process is illustrated in Figure \ref{fig: fig1.png}. Removing poorly fit and blended lines results in a final list of 765 spectral lines, given in Table \ref{table:linelist}. The derived relative velocities reproduce the shape of the third-signature curve from \cite{reiners} up to an overall offset, as shown in Figure \ref{fig: thirdSig}. This offset may result from differences between the line lists or instruments used. \begin{figure*} \includegraphics[scale=.65]{fig_1.pdf} \caption{Left: Illustration of the boundaries of observed points (black) included in the Gaussian (gray curve) and polynomial (dark blue curve) fits for a representative line (at 6173 \AA). Right, top: demonstration of the zeroing procedure for the same 6173 \AA\,line: the average of the middle two quartiles of RV values per line is subtracted off. Right, bottom: zeroed RV time series for the 6173 \AA\,line (gray), compared to the average time series for all lines, $\rm RV_0$ (red).
\label{fig: fig1.png}} \end{figure*} \begin{figure} \centering \includegraphics[scale=0.45]{fig_2.pdf} \caption{Third signature of stellar granulation demonstrated in the Fe I list extracted from the NIST database. Lines are binned in 0.1 relative depth bins: black dots show the average value per bin, and errorbars show the standard deviation per bin. The red curve shows the polynomial of best fit from \cite{reiners}.} \label{fig: thirdSig} \end{figure} \subsection{Zeroing RV time series} It is challenging to extract absolute RVs from spectral data. In accounting for the blueshifts of individual spectral lines, we must identify the hypothetical RV value achieved in the absence of stellar activity, which will vary from line to line, in a manner that is robust against outlier points or noise. We zero the radial velocity time series per spectral line to account for this absolute blueshift. We sort the time series observations by RV value per line, and subtract the average of the middle two quartiles, as shown in Figure \ref{fig: fig1.png}. In selecting this range, we assume that low-activity days will fall close to the median value; by subtracting the average value for the low-activity days, we aim to identify the hypothetical no-activity point for each line while avoiding bias in our zero point due to outliers. \subsection{Finding RVs from sublists} Following the procedure of M17, we identify line sublists by relative depth. We take variance-weighted means of the entire line list ($s_0$), lines with relative depth .5-.95 ($s_1$), and lines with relative depth .05-.5 ($s_2$), to extract $\rm RV_0$, $\rm RV_1$, and $\rm RV_2$ respectively. The RV errors are computed from the fit errorbars on the line-center parameter, which incorporate propagated shot noise from the raw spectra. Features of these time series are listed in Table \ref{table:1}, while the time series themselves are given in Table \ref{table:RVlist}. \begin{deluxetable}{lrrr} \tabletypesize{\scriptsize} \tablecaption{Features of the RV time series extracted from HARPS-N/solar telescope daily binned spectra.\label{table:1}} \tablewidth{0pt} \tablehead{ \colhead{} & \colhead{$\rm RV_0$} & \colhead{$\rm RV_1$} & \colhead{$\rm RV_2$} } \startdata Relative Depth & .05-.95 & .5-.95 & .05-.5 \\ Number of Lines & 765 & 386 & 379\\ Standard Deviation (m~s$^{-1}$) & 1.50 & 1.64 & 1.74\\ Mean (m~s$^{-1}$) & .47 & .37 & .59\\ \enddata \end{deluxetable} \begin{table*} \centering \caption{\label{table:RVlist} The extracted time series $\rm RV_0$, $\rm RV_1$, and $\rm RV_2$ used in this analysis. The RVs derived from the HARPS-N DRS ($\rm RV_{DRS}$) are also provided as a point of comparison. The first ten RV values are given here; an extended list containing all values is available at DOI:\dataset[10.5281/zenodo.3541149]{https://doi.org/10.5281/zenodo.3541149}.} \begin{tabular}{l l l l l} \hline \hline JD - 2450000.5 & $\rm RV_0$ (m~s$^{-1}$) & $\rm RV_1$ (m~s$^{-1}$) & $\rm RV_2$ (m~s$^{-1}$) & $\rm RV_{DRS}$ (m~s$^{-1}$)\\ \hline 7232.51&1.34&1.26&1.46&5.66 \\ 7233.54&2.00&2.15&1.81&6.71 \\ 7234.51&0.96&-0.16&2.38&6.24 \\ 7235.49&0.99&0.06&2.15&6.89 \\ 7236.51&1.13&-0.34&2.99&7.48 \\ 7237.49&1.79&0.68&3.19&7.12 \\ 7238.56&0.59&-0.32&1.75&5.25 \\ 7239.43&-0.70&-1.46&0.25&5.38 \\ 7241.55&1.00&0.07&2.18&5.34 \\ 7244.47&-1.08&-2.58&0.82&2.31 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \hline \end{tabular} \end{table*}
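The zeroing and sublist-averaging steps just described can be summarized in a short sketch (ours; the arrays \texttt{rv}, \texttt{rv\_err}, and \texttt{depth} are placeholders for the measured per-line quantities):
\begin{verbatim}
import numpy as np

# Sketch of the per-line zeroing and the variance-weighted sublist
# averages. rv has shape (n_lines, n_days); rv_err holds the fit
# uncertainties; depth holds the relative depths of the lines.
def zero_per_line(rv):
    """Subtract, per line, the mean of the middle two quartiles of RVs."""
    q1, q3 = np.nanpercentile(rv, [25, 75], axis=1, keepdims=True)
    mid = np.where((rv >= q1) & (rv <= q3), rv, np.nan)
    return rv - np.nanmean(mid, axis=1, keepdims=True)

def sublist_rv(rv, rv_err, depth, dmin, dmax):
    """Variance-weighted mean RV series for lines with dmin <= depth < dmax."""
    sel = (depth >= dmin) & (depth < dmax)
    w = 1.0 / rv_err[sel] ** 2
    return np.nansum(w * rv[sel], axis=0) / np.nansum(w, axis=0)

# e.g. RV0 = sublist_rv(zero_per_line(rv), rv_err, depth, 0.05, 0.95)
\end{verbatim}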
To validate our extracted time series, we compare to the HARPS-N Data Reduction System (DRS) RVs \citep{Baranne_et_al_1996, Sosnowska_2012}. Figure \ref{fig: drsComp} shows a Lomb-Scargle periodogram comparison of the two time series \citep{GLS, VanderPlas2012, VanderPlas_2015}. Many of the periods with the highest concentration of signal power align between the two series, suggesting that they capture the same solar physics. We note that, when normalized to the false alarm probability (FAP), the power concentrated at the rotation and half-rotation periods in $\rm RV_1$ exceeds that of $\rm RV_2$. While the RMS scatter of $\rm RV_2$ is greater than that of $\rm RV_1$, we find that over 1.5 m~s$^{-1}$ of white noise would need to be added to $\rm RV_2$ compared to $\rm RV_1$ to account for this difference in peak heights. The model from M17 would predict, however, that since $\rm RV_2$ is calculated from lines presumably formed lower in the stellar photosphere, it should be more dominated by the suppression of convective blueshift, which would imply that this trend should be reversed. This observation presents the first suggestion that one of the assumptions that underlies the model is not realized on this dataset. \begin{figure} \includegraphics[scale=.4]{fig_3.pdf} \caption{Lomb-Scargle periodograms of the DRS-reduced RVs, as well as the $RV_0$, $RV_1$, and $RV_2$ time series generated from our line lists. The 10\%, 1\%, and 0.1\% False Alarm Probabilities (red dotted lines) are shown for each periodogram. The greatest power is concentrated in the solar synodic rotation period and its first harmonic (dotted gray lines), with no corresponding peaks in the window function (bottom panel).} \label{fig: drsComp} \end{figure} \section{Analysis} \subsection{Solving for $\alpha$} If $\alpha$ is known, we can use experimentally determined values for $\rm RV_0$ and $\rm RV_1$ to extract the theoretical time series of interest, $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$. This process should isolate contributions from the suppression of convective blueshift, and leave behind common-mode planetary and photometric contributions in the corrected RV time series. In practice, however, we must estimate the parameter $\alpha$ by imposing assumptions on the reconstructed time series. We adopt methods to solve for the parameter $\alpha$ based on assumptions made for the reconstructed time series from M17. These methods rely predominantly on the assumption that $\rm RV_{\rm conv}$ dominates the RV time series. We applied the five methods detailed in M17 to solve numerically for the value of $\alpha$ that: 1) minimizes the mean absolute value of $\rm RV_{\rm sppl}$; 2) minimizes the correlation between $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$; 3) is the slope of $\rm RV_1$ vs $\rm RV_0$; 4) maximizes the ratio of the variance in $\rm RV_{\rm conv}$ vs that in $\rm RV_{\rm sppl}$; 5) maximizes that ratio when $\rm RV_{\rm sppl}$ is smoothed over 30 days, to average over rotationally modulated activity-induced variations. Additionally, 6) we calculate a best estimate for $\alpha$ as the ratio of the mean values of the absolute RV time series, $\langle\rm RV_{\rm 1}\rangle/\langle\rm RV_{\rm 0}\rangle$ or $\langle\rm RV_{\rm 2}\rangle/\langle\rm RV_{\rm 0}\rangle$.
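A schematic implementation of three of these estimators is sketched below, assuming the two-component decomposition of M17 in the linear form $\rm RV_0 = RV_{\rm sppl} + RV_{\rm conv}$, $\rm RV_1 = RV_{\rm sppl} + \alpha\, RV_{\rm conv}$ (our reading of the model; M17 give the exact definitions). Function names and search intervals are placeholders:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def reconstruct(rv0, rv1, alpha):
    # invert RV0 = sppl + conv, RV1 = sppl + alpha*conv (assumed form)
    conv = (rv1 - rv0) / (alpha - 1.0)
    return rv0 - conv, conv  # (sppl, conv)

def alpha_method1(rv0, rv1, lo=0.1, hi=0.99):
    # method 1: minimize the mean absolute value of RV_sppl
    # (for RV2 the search interval sits above unity instead)
    cost = lambda a: np.abs(reconstruct(rv0, rv1, a)[0]).mean()
    return minimize_scalar(cost, bounds=(lo, hi), method='bounded').x

def alpha_method3(rv0, rv1):
    # method 3: least-squares slope of RV1 vs RV0
    return np.polyfit(rv0, rv1, 1)[0]

def alpha_method6(rv0, rv1):
    # method 6: ratio of the mean absolute values of the two series
    return np.abs(rv1).mean() / np.abs(rv0).mean()
\end{verbatim}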
\begin{deluxetable*}{lrrrr} \tabletypesize{\scriptsize} \tablecaption{Estimates for $\alpha$, and RMS variation for the extracted $\rm RV_{\rm sppl}$ time series (m~s$^{-1}$) from different methods. Method 2 fails to converge, likely due to correlated noise, as discussed in M17. \label{table: 2}} \tablehead{ \colhead{Method} & \colhead{$\rm RV_1, \alpha$} & \colhead{$\rm RV_1, std(\rm RV_{\rm sppl})$} & \colhead{$\rm RV_2, \alpha$} & \colhead{$\rm RV_2, std(\rm RV_{\rm sppl})$} } \startdata 1) $\langle\rm RV_{\rm sppl}\rangle = 0$ & .79 & 3.54 & 1.26 & 3.60 \\ 2) Minimize correlation between $\rm RV_{\rm sppl}, \rm RV_{\rm conv}$ & N/A & N/A & N/A & N/A\\ 3) Slope of $\rm RV_i$ vs $\rm RV_0$ & .97 & 22.63 & 1.03 & 28.54\\ 4) Maximize $\rm std(\rm RV_{\rm conv})/\rm std(\rm RV_{\rm sppl})$ & .99 & 67.79 & 1.01 & 85.56\\ 5) Maximize $\rm std(\rm RV_{\rm conv})/std(\rm RV_{\rm sppl}\rm \; smoothed \ 30 \ days )$ & .73 & 2.91 & 1.34 & 2.91 \\ 6) $\langle\rm RV_{\rm i, abs}\rangle/\langle\rm RV_{\rm 0, abs}\rangle$& .74 & 2.99 & 1.32 & 3.05\\ \enddata \end{deluxetable*} \begin{figure} \centering \includegraphics[scale=.45]{fig_4.pdf} \caption{Histogram of the time series points of $\rm RV_0$ and of the treated time series $\rm RV_{\rm sppl}$, reconstructed using $\rm RV_1$ with $\alpha = .73$; this reconstruction had the lowest RV RMS for $\rm RV_{\rm sppl}$. As shown, this reconstruction fails to reduce the RMS from the initial time series $\rm RV_0$. } \label{fig:treated} \end{figure} These values are shown in Table \ref{table: 2}. Despite the relatively consistent observational sampling and high SNR of over 300 per exposure, with an average of over 50 exposures per day, the values for $\alpha$ differ depending on the choice of assumption. Crucially, no physically motivated choice of $\alpha$ reduces the variability of $\rm RV_{\rm sppl}$, the corrected RV time series, compared to $\rm RV_0$, the untreated time series, as illustrated in Figure \ref{fig:treated}. \section{Discussion} Despite the higher SNR and better observational sampling for solar spectra, we are unable to extract physically significant reconstructed time series $\rm RV_{\rm conv}$ and $\rm RV_{\rm sppl}$ using the model and methods described in M17, suggesting that one of the assumptions of the model is not satisfied on this dataset. Crucially, the methods to estimate $\alpha$ assume that $\rm RV_{\rm conv}$ strongly dominates the RV time series. Potentially important is that our data set spans the activity minimum of the solar cycle, unlike the synthetic dataset from M17, which included a full solar activity cycle: the assumption that $\rm RV_{\rm conv}$ strongly dominates the RV time series might not hold for these restricted observations. Since the process of inverting the linear equations that describe $\left(\rm RV_0, \rm RV_1\right)$ to extract $\left(\rm RV_{\rm sppl}, \rm RV_{\rm conv}\right)$ amplifies all non-$\rm RV_{\rm conv}$ contributions (made explicit below), weakened $\rm RV_{\rm conv}$ at solar minimum could explain the inability to extract physical values of $\alpha$ on this data set. Our inability to reduce RV variability by applying the methods of M17 implies that sources of RV variability other than $\rm RV_{\rm conv}$ must be taken into account. Additional results in the literature have shown that other processes besides $\rm RV_{\rm conv}$ may indeed play a dominant role near the solar minimum.
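To make this amplification explicit, consider the decomposition in the generic linear form suggested by the estimators of Section 3.1 (a schematic reconstruction on our part; M17 give the exact definitions):
\begin{equation*}
{\rm RV_0} = {\rm RV_{sppl}} + {\rm RV_{conv}}, \qquad {\rm RV_1} = {\rm RV_{sppl}} + \alpha\, {\rm RV_{conv}} \quad \Longrightarrow \quad {\rm RV_{sppl}} = \frac{\alpha\, {\rm RV_0} - {\rm RV_1}}{\alpha - 1}.
\end{equation*}
Common-mode signals pass through this inversion unchanged, but any contribution that differs between the two measured series is scaled by factors of order $1/|\alpha-1|$, a severalfold magnification for the values $\alpha\approx 0.7$--$1.3$ of Table \ref{table: 2}; this is the sensitivity quantified by the factor-of-two noise amplification of M17 cited below.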
For example, recent techniques demonstrate the ability to remove most power at the rotation period, but leave 1 m~s$^{-1}$ RV variability in corrected time series: \cite{Dumusque_2018} and \cite{Cretignier_2019} identify spectral lines insensitive to the suppression of convective blueshift; by computing RVs using these specially-selected line lists, the authors are able to reduce the RV RMS by a factor of 2.2, down to 0.9 m~s$^{-1}$. Independently, \cite{tim} use solar images from HMI/SDO to reproduce the activity-driven RVs. This analysis successfully removes the activity-driven signal at the rotation period, but still leaves an RMS amplitude of 1.2 m~s$^{-1}$. Using independent analysis frameworks, both techniques successfully remove the rotationally-modulated activity signal, but are still limited by some other processes. Other work is ongoing to characterize the contributions from granulation and supergranulation, which can contribute as much as 1 m~s$^{-1}$ to RV RMS \citep{Dumusque2011, Meunier2015, Meunier2019, Cegla2019}. The fact that our RV time series contain power concentrated at the rotation period and its harmonics is consistent with some significant $\rm RV_{\rm conv}$ contribution.\footnote{We note, however, that high-activity regions separated by 180 degrees of longitude on the Sun, containing not only plage but also long-lived sunspots that contribute to $\rm RV_{\rm sppl}$ through the photometric effect, can also supply power at the rotation period \citep{schroeter, Shelke}; similar structure exists on other Sun-like stars \citep{Berdyugina2005}.} The inability to significantly reduce RV RMS using the methods of M17 makes sense in the context of \cite{tim}, \cite{Dumusque_2018}, and the literature on granulation and supergranulation, which demonstrate that well over 1 m~s$^{-1}$ of RV variation remains after accounting for $\rm RV_{\rm conv}$. When the linear equations defining $\rm RV_0$ and $\rm RV_1$ in terms of $\rm RV_{\rm sppl}$ and $\rm RV_{\rm conv}$ are inverted in the presence of noise introduced by this external variability, the RMS of noise in $\rm RV_{\rm sppl}$ is magnified to twice as large as the original RMS of noise in $\rm RV_0$ (M17). Potential contributions from instrumental systematics due to wavelength calibration \citep{HARPS-N_2014, Dumusque_2018, Cersullo_2019, Coffinet_2019}, or daily calibration sequences \citep{ACC}, may also contribute significantly to this non-$\rm RV_{\rm conv}$ RV variability. This sensitivity to RV contributions other than $\rm RV_{\rm conv}$ motivates future consideration of different solar activity processes, especially those operating on different timescales such as magnetoconvection \citep{Palle_et_al_1995, DelMoro_2004, Meunier_et_al_2018}. Furthermore, these additional processes, and even the suppression of convective blueshift itself, may contain subtle line list dependency, based on proxies for line responsiveness to magnetic activity such as the Land\'e $g$-factor (e.g., \citealt{Norton2006}). All of these contributions must be accounted for in order to reach the 10 cm~s$^{-1}$ detection limit of an Earth-like planet orbiting a Sun-like star. Future work is needed to identify correlates in spectra, solar images, or some other ancillary dataset that could be used to model these phenomena. \acknowledgments This work was supported in part by NASA award number NNX16AD42G and the Smithsonian Institution.
Based on observations made with the Italian {\it Telescopio Nazionale Galileo} (TNG) operated by the {\it Fundaci\'on Galileo Galilei} (FGG) of the {\it Istituto Nazionale di Astrofisica} (INAF) at the {\it Observatorio del Roque de los Muchachos} (La Palma, Canary Islands, Spain). The solar telescope used in these observations was built and maintained with support from the Smithsonian Astrophysical Observatory, the Harvard Origins of Life Initiative, and the TNG. This work was performed in part under contract with the California Institute of Technology (Caltech)/Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute (R.D.H.). The authors thank A. Ravi for his assistance in preparing and submitting this manuscript. A.C.C. acknowledges support from STFC consolidated grant number ST/M001296/1. D.W.L. acknowledges partial support from the \emph{Kepler} mission under NASA Cooperative Agreement NNX13AB58A with the Smithsonian Astrophysical Observatory. S.S. acknowledges support by NASA Heliophysics LWS grant NNX16AB79G. H.M.C. acknowledges the financial support of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). X.D. is grateful to the Branco-Weiss fellowship--Society in Science for continuous support. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. This material is based upon work supported by the National Aeronautics and Space Administration under grants No. NNX15AC90G and NNX17AB59G issued through the Exoplanets Research Program. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant Agreement No. 313014 (ETAEARTH). This work was supported in part by the NSF-REU solar physics program at SAO, grant number AGS-1560313. The HARPS-N project has been funded by the Prodex Program of the Swiss Space Office (SSO), the Harvard University Origins of Life Initiative (HUOLI), the Scottish Universities Physics Alliance (SUPA), the University of Geneva, the Smithsonian Astrophysical Observatory (SAO), the Italian National Astrophysical Institute (INAF), the University of St Andrews, Queen's University Belfast, and the University of Edinburgh. This research has made use of NASA's Astrophysics Data System. We thank the entire TNG staff for their continued support of the solar telescope project at HARPS-N. \facility{TNG:HARPS-N} \bibliographystyle{yahapj}
\section{Introduction} The dynamics of the long term evolution of small bodies revolving beyond Neptune's orbit and forming the Edgeworth-Kuiper belt is a hot research topic in planetary science. Since 1992, when the first small body was discovered (Jewitt and Luu, 1993), hundreds of such small bodies, called trans-Neptunian objects (TNOs), have been detected and are included in the associated list of the Minor Planet Center (http://cfa-www.harvard.edu/iau/mpc.html, Jewitt, 1999). The study of their dynamical evolution provides valuable information towards the understanding of the origin, the formation and the structure of our Solar system (Morbidelli et al., 2003). The first theoretical and numerical studies (e.g. Torbett and Smoluchovski, 1990; Kne\v zevi\'c et al., 1991; Duncan et al., 1995; Gallardo and Ferraz-Mello, 1998; see also the review paper by Morbidelli, 1999, and references therein) showed the important role of resonances for the capture and the long term stability of TNOs. The simplest yet efficient model for understanding the underlying dynamics of the resonant orbits in the Kuiper belt is the restricted three-body problem (RTBP), where the primary bodies are the Sun and Neptune and the small body (of negligible mass) represents a potential TNO. The dynamics can be understood by studying the phase space structure of this simple model, where chaos and order coexist. The simplest model, namely the planar circular RTBP, is a conservative system of two degrees of freedom and for each energy level the particular qualitative characteristics of the dynamics are revealed clearly by calculating the corresponding Poincar\'{e} surface of section. Based on this tool, Malhotra (1996) studied some main mean motion resonances with Neptune and showed that resonances are associated with stable motion which is a libration with respect to the corresponding resonant angle $\sigma=p \lambda ' -q \lambda-(p-q)\varpi$, where $p/q$ denotes the external resonance ($p<q$). This property had also been indicated, using a different approach to the problem, by Morbidelli et al. (1995). The center of libration is a fixed point on the Poincar\'{e} section that corresponds to a resonant stable periodic orbit for the particular energy level. It is well known that the periodic orbits and their stability are essentially associated with the structure of the phase space (Berry, 1978; Hadjidemetriou, 1993, 1998). Stable periodic orbits are surrounded by invariant tori and the motion is regular, while unstable periodic orbits are associated with a hyperbolic structure. Consequently, computing families of periodic orbits allows one to understand the dynamics over an open interval of energy levels without excessive computational cost. In the planar elliptic problem, which is of two degrees of freedom but non-autonomous, the Poincar\'{e} sections are four dimensional and, consequently, their usefulness is reduced significantly compared with that of the autonomous case. But the knowledge of periodic orbits provides essential information for the dynamics as in the autonomous case. It is clarified in Section 2 that such periodic orbits are isolated in a hyperplane of constant eccentricity of Neptune and, thus, information is provided only for a small subset of the phase space. Nevertheless, the localization of a stable periodic orbit in this case is associated with the existence of regular orbits in the neighborhood of such a periodic orbit.
Additionally, the numerical simulations show that the eccentricity of Neptune introduces small perturbations to the circular problem that do not affect significantly the long term stability of the resonant librating motion near the stable periodic orbits of the circular problem (Malhotra, 1996; Kotoulas and Voyatzis, 2004a). The periodic orbits of the planar circular RTBP have been extensively studied since the 1960s. A detailed review, which includes the associated definitions, theoretical aspects, references and examples, is given by H\'{e}non (1997). The basic theory and the numerical methods for the computation of periodic orbits in the planar elliptic problem have been described in detail by Broucke (1969a,b), and their application to our Solar system and to asteroid motion has been demonstrated (e.g. Hadjidemetriou, 1988, 1992). Applications can also be found for the dynamics of extra-solar systems (Haghighipour et al., 2003). As far as the Kuiper belt dynamics is concerned, the families of symmetric periodic orbits in the 1/2, 2/3 and 3/4 resonances have been studied (Kotoulas and Hadjidemetriou, 2002) and, as well, the 1/2, 1/3 and 1/4 resonant symmetric and asymmetric families in the planar circular problem (Voyatzis et al., 2004). Also, the 2/3 resonant periodic orbits were studied by Varadi (1999), for various mass ratio and eccentricity values of the primaries. The aim of this paper is to study the resonant motion through the computation of the periodic orbits. A considerable number of resonances that are present in the Kuiper belt dynamics and located between 30 and 48 a.u. is included in our study. Namely, we examine the first order resonances $p/q=m/(m+1)$, where $m=1,2,...,7$, the second order resonances 3/5, 5/7 and 7/9 and the third order ones 4/7, 5/8 and 7/10. At this stage we restrict our study to the planar circular and planar elliptic case of the RTBP. In Section 2 we present the formulation of the RTBP model and the periodicity conditions used in the present work. In Section 3 we present the families of periodic orbits computed for the planar circular model. In Section 4 we discuss the continuation of periodic orbits in the elliptic case and present the results of our computations. Finally, we summarize our results and discuss the resonant structures obtained and their consequences for the long term stability of the small bodies. \section{System configuration and periodicity conditions} We consider the planar restricted three body problem with primaries the Sun and Neptune of mass $1-\mu$ and $\mu$ respectively. The gravitational constant is set equal to unity. In the inertial orthogonal frame $OXY$, the motion of Neptune is either circular or elliptic around the center $O$ with eccentricity $e'$, semimajor axis $a'=1$ and period $T'=2\pi$. In the rotating orthogonal frame of reference $Oxy$, where the Sun and Neptune define the $Ox$-axis, the motion of the small body is described by the Lagrangian (Roy, 1982) \begin{equation} L=\frac{1}{2}(\dot{x}^2+\dot{y}^2+(x^2+y^2)\dot{\theta}^2+2(x\dot{y}-\dot{x}y)\dot{\theta}) +\frac{1-\mu}{r_1}+\frac{\mu}{r_2}, \label{Lagrangian} \end{equation} where $r_1^2=(x+\mu r)^2+y^2$, $r_2^2=(x-1+\mu r)^2+y^2$, $r$ is the distance between the primaries and $\theta$ the angle between the axes $Ox$ and $OX$. For the circular problem it is $r=1$, $\dot{\theta}=1$ and there exists the {\it Jacobi} constant $h$ written in the form \begin{equation} h=\frac{1}{2}(\dot{x}^2+\dot{y}^2-(x^2+y^2))-\frac{1-\mu}{r_1}-\frac{\mu}{r_2}.
\label{Jacobi} \end{equation} For the mass of Neptune we use the value $\mu=5.178 \times 10^{-5}$. In the circular problem a symmetric periodic orbit can be defined by the initial conditions $x(0)=x_0$, $y(0)=0$, $\dot{x}(0)=0$, and $\dot{y}(0)=\dot{y}_0$ and the periodicity conditions are \begin{equation} y(0)=y(T/2)=0, \:\: \dot{x}(0)=\dot{x}(T/2)=0, \label{PCOND1} \end{equation} where $T$ is the period of the orbit, which is unknown. We can represent such a periodic orbit as a point in the plane $x_0-h$, where $h$ is the corresponding {\it Jacobi} constant. By varying the value of $x_0$ (or $h$) we get a monoparametric family of periodic orbits with parameter $x_0$ (or $h$). Generally, the period $T$ changes along the family and the eccentricity $e_0$ that corresponds to the initial conditions of the periodic orbit is either almost constant and approximately equal to zero or it changes significantly along the family (see Section 3; H\'{e}non, 1997). The multiplicity of a periodic orbit is defined as the number of crosses of the orbit with the axis $y=0$ in the same direction ($\dot{y}>0$ or $\dot{y}<0$) in a period. In the elliptic case, the periodic orbits are isolated points in the plane $x_0-\dot{y}_0$ for a particular value of $e'$ and their period is $T=2 k \pi,\:k=1,2,\ldots$. Therefore the same periodicity conditions (\ref{PCOND1}) hold but the value of $T$ is known a priori. A monoparametric family of symmetric periodic orbits is formed by analytic continuation varying $e'$ (the parameter of the family). Such a family can be represented as a curve in the 3D space $x_0-\dot{y}_0-e'$. In our computations the periodicity conditions are solved by a Newton-Raphson shooting algorithm (Press et al. 1992) with accuracy $10^{-13}$ or $10^{-11}$ for the circular and the elliptic case respectively. Only for some difficult cases (e.g. near collisions) we were forced to decrease the accuracy by one decimal order. The numerical integrations were performed using a step-controlled Bulirsch-Stoer method. The starting points for the computation of the families of periodic orbits in the circular case are obtained from Poincar\'{e} sections at a particular level of the {\it Jacobi} constant. For the elliptic problem we started from the known bifurcation points derived from the families of the circular problem. Our algorithm controls automatically the possible change of the multiplicity of the orbits and the step of the parameter of the family. Since a symmetric periodic orbit crosses the $y=0$ axis at least twice, the algorithm may change the starting point of the periodic orbits when the convergence to the prescribed accuracy fails. The linear stability of the periodic orbits is determined from the corresponding stability indices. For the planar circular case the stability index $k=a_{11}+a_{22}$ is computed, where $a_{ij}$ are the elements of the monodromy matrix (H\'{e}non, 1997). For the planar elliptic case we calculate the indices $k_1$ and $k_2$ introduced by Broucke (1969a,b). For both cases the computation is based on the numerical solution of the variational equations.
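For illustration, the shooting step for the circular problem can be sketched in a few lines of Python. The equations of motion follow the Lagrangian \eqref{Lagrangian} with $r=1$, $\dot\theta=1$; the fixed-tolerance Runge-Kutta integrator and the bisection root-finder below are simplifications of the step-controlled Bulirsch-Stoer integration and Newton-Raphson iteration actually used, and the numerical values are placeholders:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

MU = 5.178e-5  # normalized mass of Neptune

def rtbp(t, s):
    # planar circular RTBP in the rotating frame; following Eq. (1),
    # the Sun sits at (-MU, 0) and Neptune at (1 - MU, 0)
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y)
    r2 = np.hypot(x - 1 + MU, y)
    ax = 2*vy + x - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3
    ay = -2*vx + y - (1 - MU)*y/r1**3 - MU*y/r2**3
    return [vx, vy, ax, ay]

def xdot_at_half_period(ydot0, x0, n_cross=1):
    # start perpendicular to the x-axis and return xdot at the n_cross-th
    # return to y = 0; conditions (3) require this to vanish at T/2
    cross = lambda t, s: s[1]
    sol = solve_ivp(rtbp, (0.0, 200.0), [x0, 0.0, 0.0, ydot0],
                    events=cross, rtol=1e-12, atol=1e-12, dense_output=True)
    return sol.sol(sol.t_events[0][n_cross - 1])[2]

# one-parameter shooting: fix x0 and solve xdot(T/2) = 0 for ydot0,
# e.g. ydot0 = brentq(xdot_at_half_period, 0.4, 0.6, args=(1.4,))
\end{verbatim}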
\begin{figure}[ht] \centering \includegraphics[width=10cm]{figure01.eps} \caption{Characteristic curves showing the continuation of families of elliptic periodic orbits of first order from the circular ones ($\mu\neq 0$). The almost straight line segments, indicated by the circles, constitute the family of circular orbits (first kind).} \label{FF1} \end{figure} \section{The planar circular problem} We consider the planar circular case of the RTBP. The families of symmetric periodic orbits are classified in two different kinds (Poincar\'{e}, 1892; H\'{e}non, 1997):\\ {\em Families of circular orbits (or first kind)} : The periodic orbits correspond to nearly circular orbits for the small body. The period $T$, and subsequently the resonance $n/n'=T/T'$, varies along the family.\\ {\em Resonant families (or second kind)} : The periodic orbits correspond to almost elliptic orbits for the small body. The eccentricity $e_0$ increases along the family but the ratio $n/n'$ is almost constant and rational, $n/n'\approx p/q,\: p,q\in Z$. \begin{figure}[htb] \centering \includegraphics[width=14cm]{figure02.eps} \caption{Families of resonant periodic orbits for the cases a) 4/5 and b) 5/6. The symbols {\bf x} denote close encounters with Neptune while the encircled ones denote close encounters with the Sun. The segments $I_{4/5}^1$ and $I_{5/6}^1$ are the only ones consisting of unstable periodic orbits and are plotted as thin curves.} \label{FF2} \end{figure} In the exterior resonances $n/n'=p/q$ studied in this paper we have $q>p$, and the difference $\Delta=q-p$ defines the order of the resonance. The circular periodic orbits of the unperturbed problem ($\mu=0$) are continued for $\mu>0$ and a family $C$ of first kind exists at each particular value of $\mu$. The families of resonant periodic orbits originate from the $p/q$-resonant orbits of the family $C$. In each resonance, independently of its order, there exist two different families, $I$ and $II$, of second kind, which differ in phase. On family $I$ the small body is initially at perihelion and on family $II$ it is at aphelion. The families may consist of many segments separated by gaps which correspond to close encounter orbits where computations fail. We use the notation $I_{p/q}^n$ (or $II_{p/q}^n$), where $p/q$ denotes the corresponding resonance and $n$ numbers the possible different segments in the family. In the following discussion and plots we select the variable $x_0$ as the parameter for the characteristic curves that present the resonant families. \begin{figure}[ht] \centering \includegraphics[width=14cm]{figure03.eps} \caption{The variation of the resonance (left) and of the initial eccentricity (right) of the periodic orbits along the families (the parameter of the family is taken to be the variable $x$).} \label{FF3} \end{figure} \subsection{First order resonances} We consider the first order resonances with $p/q=m/(m+1)$, $m=2,3,\ldots,7$. In this case the circular family of the unperturbed problem breaks close to the resonances when $\mu>0$ and the persisting segments are separated by gaps. These segments, consisting of almost circular orbits, are continued smoothly giving rise to the resonant families $I$ and $II$. The six characteristic curves computed are presented in Fig. 1. We obtain that a family $I_{m/(m+1)}$ joins smoothly (through a segment of almost circular orbits) with the family $II_{(m+1)/(m+2)}$. By decreasing the parameter $x_0$, the families $I$ approach a collision orbit with Neptune for $h\approx -1.5$. The first order resonances are considered exceptional because they follow the above scenario instead of a regular bifurcation (Guillaume, 1974; Hadjidemetriou, 1993).
In our case a rather different structure is observed for the families $I_{6/7}$ and $II_{7/8}$ because the resonant family $I_{6/7}$ avoids the collision with Neptune. A new resonant family of unstable periodic orbits, denoted by $II_{7/8}^u$, is found and the three families ($I_{6/7}$, $II_{7/8}$ and $II_{7/8}^u$) form a closed characteristic curve. In Fig. 2a and 2b we present the variation of the resonance and eccentricity, respectively, along the first order resonant families shown in Fig. 1. Actually we present the mean values, computed over one period. We observe that the ratio $n/n'$ varies along the part of the family where the orbits are almost circular. The length of these parts, which corresponds to the plateau $e\approx 0$ of the curves in Fig. 2b, decreases rapidly as $m$ increases, and disappears for the closed characteristic curve mentioned above. In Fig. 1 only the first segment of the families $I_{m/(m+1)}^1$ and $II_{m/(m+1)}^1$ is shown in order to emphasize their origin. In Fig. 3 the complete resonant families 4/5 and 5/6 are presented. For all the resonances with $1\leq m\leq 6$ the family segment $I_{m/(m+1)}^1$ consists of unstable periodic orbits of multiplicity one and terminates at a collision orbit with Neptune. The family continues with new segments separated by collisions with Neptune, and extends up to a collision with the Sun. For $m<4$ (i.e. in the resonances 2/3 and 3/4) only one collision orbit is obtained while for $m\geq 4$ two collision orbits exist. In all cases, the segments after the first collision consist of stable periodic orbits. In the families $II$, the segments $II_{m/(m+1)}^1$ ($1\leq m\leq 7$) reach a collision orbit too, as $h$ increases. Beyond this point the family continues with the segment $II_{m/(m+1)}^2$ which either extends up to a collision orbit with the Sun (cases for $1\leq m\leq 4$) or is interrupted by a second collision orbit with Neptune (cases 5/6 and 6/7). All the families $II_{m/(m+1)}$ consist of stable orbits. Only the periodic orbits which are close encounter orbits prove to be unstable, but this result may be an artifact of the limited accuracy of the computations. \begin{figure}[htb] \centering \includegraphics[width=14cm]{figure04.eps} \caption{Families of periodic orbits for the second order resonances. Bold or thin curves denote stable or unstable orbits respectively. The family $C$ of circular orbits is also indicated. The symbols {\bf x} denote close encounters with Neptune while the encircled ones denote the termination of the family at a collision orbit with the Sun.} \label{FF4} \end{figure} \subsection{Second order resonances} We consider the second order mean motion resonances $p/q$=3/5, 5/7 and 7/9. The families of resonant periodic orbits bifurcate from the circular family $C$ by analytical continuation for $\mu\neq 0$. The bifurcation points are those on family $C$ where $n/n'=p/q$. For each resonance two families bifurcate, called family $I$ and family $II$. The periodic orbits included in family $I$ cross the axis $Ox$ vertically (i.e. $\dot{x}_0=0$) only for $x>0$, while in family $II$ such a vertical cross occurs only for $x<0$. Thus, the two families are presented in separate plots $x_0-h$ shown in Fig. 4. \begin{figure}[htb] \centering \includegraphics[width=14cm]{figure05.eps} \caption{Periodic orbits of the family $I_{3/5}$ in the $x-y$ rotating frame.
a) $x_0=1.5$, b) $x_0=1.77$ (before the collision with Neptune), c) $x_0=2.0$ (after the collision) and d) $x_0=2.7$ (close to a collision with the Sun). The position of the Sun ($x=\mu$) and Neptune ($x=1-\mu$) is indicated.} \label{FF5} \end{figure} The resonant family $I_{p/q}$ starts from the bifurcation point with unstable periodic orbits and is interrupted by a close encounter with Neptune for $h\approx -1.5$. After a close encounter the families continue with stable orbits. In the 3/5 resonance, this continuation extends smoothly up to high eccentricity values (i.e. up to a collision orbit with the Sun). For the 5/7 resonant case, the family reaches a second close encounter with Neptune and then terminates at a collision orbit with the Sun. In the 7/9 resonant case three collision orbits occur along the family. The multiplicity of the orbits is affected when a close encounter takes place. An example of some typical 3/5 resonant orbits of the family $I$ is shown in Fig. 5. Excluding close encounter orbits, family $II_{p/q}$ consists of stable periodic orbits. Along the families we obtain one, two and three close encounters with Neptune for the 3/5, 5/7 and 7/9 resonances respectively. All families terminate at a collision orbit with the Sun (Fig. 4b). \begin{figure}[htb] \centering \includegraphics[width=14cm]{figure06.eps} \caption{a) Families of third order resonances 4/7, 5/8 and 7/10. The symbol {\bf x} and the encircled {\bf x} indicate collisions with Neptune and the Sun respectively. The solid circles indicate almost double collisions. b) A stable periodic orbit of the family $II_{7/10}$ close to a double collision ($h\approx-0.75$) in the rotating $x-y$ frame. c) The same orbit as in (b) presented in the inertial frame X-Y. The circular orbit $N$ of Neptune is also plotted. } \label{FF6} \end{figure} \subsection{Third order resonances} Our study includes the resonant cases $p/q$=4/7, 5/8 and 7/10. The dynamics of third order resonances shows similar qualitative characteristics to those of the second order. The families $I$ and $II$ of elliptic resonant periodic orbits bifurcate from the circular family and they are presented in Fig. 6a. Families $I$ start with unstable orbits and extend up to a close encounter with Neptune at $h\approx-1.50$. The families continue after this point with stable orbits and extend up to a second close encounter. At this point the family $I_{4/7}$ seems to terminate or become strongly chaotic and is difficult to localize numerically. This is also the case for the family $I_{7/10}$ after the third close encounter, while the continuation of the family $I_{5/8}$ is possible up to a collision orbit with the Sun. The periodic orbits of families $II_{p/q}$ are all stable (except in the neighborhood of collisions). Along the families $II_{5/8}$ and $II_{7/10}$ two close encounters with Neptune occur but only one for the family $II_{4/7}$. The family $II_{5/8}$ is continued up to very high eccentricity values and terminates at a collision with the Sun. The families $II_{4/7}$ and $II_{7/10}$ extend up to high eccentricity values and their computation terminates when close encounters with both Neptune and the Sun take place. A linearly stable periodic orbit of the family $II_{7/10}$ that approaches the above double collision is shown in Fig. 6b and 6c in the rotating and inertial frame respectively. The intersection of the orbits, which is shown in Fig. 6c, is a usual feature of resonant motion.
However, because of resonance, there is phase protection and collisions are avoided (Hadjidemetriou 1988, Morbidelli, 1999). \section{The elliptic restricted three-body problem} The resonant periodic orbits of the circular problem are bifurcation points (BPs) or, in different terminology, generating orbits for the elliptic problem ($e'\neq 0$) when their period is a multiple of $T'=2\pi$. Namely, starting from a generating orbit at $e'=0$, a family of periodic orbits of constant period is formed by varying $e'$ (the parameter of the family). In Table 1 we present the values of eccentricity and {\it Jacobi} constant that correspond to the bifurcation points found in the studied resonances (see also Kotoulas and Voyatzis, 2004b). From each BP$l$ two families bifurcate, denoted as $E_{lp}^{p/q}$ or $E_{la}^{p/q}$, where $p/q$ is the resonance, $l$ is the bifurcation point (according to the numbering in the first row of Table 1, which is based on the ascending order of the corresponding eccentricity values) and the second subscript $p$ or $a$ denotes that Neptune is initially at perihelion or aphelion, respectively. The case $l=0$ is discussed in Section 4.2. In Table 1, the symbol ``S'' or ``U'' denotes the stability of the bifurcating periodic orbits at $e'\approx 0$ (stable or unstable respectively). The first symbol refers to family $E_{lp}^{p/q}$ and the second one to family $E_{la}^{p/q}$. In all cases, the indicated stability is preserved along the family up to $e'=0.01$, which is the actual eccentricity of Neptune. But we calculate the families and their stability up to high values of $e'$ in order to provide some general results on the resonant structures in the elliptic RTBP. In this case Neptune refers to a fictitious planet. \begin{table}[htb] \begin{center} \caption{The bifurcation points (BP$l$) where the families $E_{lp}^{p/q}$ and $E_{la}^{p/q}$ of periodic orbits originate. We present the corresponding initial eccentricity value $e_0$ and the Jacobi constant $h$ in brackets.
The symbol ``S'' or ``U'' denotes the stability character of the periodic orbits close to the bifurcation point.} \begin{tabular}{cccccc} \hline $n/n'$ & ~~BP0 & ~~BP1 & ~~BP2 & ~~BP3 & ~~BP4 \\ \hline 2/3 & - & ~~0.469 SU & - & - & -\\ & - & ~~(-1.393) & - & - & -\\ 3/4 & - & ~~0.329 SU & - & - & -\\ & - & ~~(-1.452) & - & - & -\\ 4/5 & - & ~~0.253 SU & ~~0.871 SU & - & -\\ & - & ~~(-1.473) & ~~(-0.960) & - & -\\ 5/6 & - & ~~0.205 SU & ~~0.749 SU & - & -\\ & - & ~~(-1.483) & ~~(-1.146) & - & -\\ 6/7 & - & ~~0.172 UU & ~~0.649 SU & ~~0.960 SU & -\\ & - & ~~(-1.488) & ~~(-1.253) & ~~(-0.743) & -\\ 3/5 & ~~0.0 UU & ~~0.427 US & ~~0.800 SU & - & -\\ & ~~(-1.541) & ~~(-1.428) & ~~(-1.065) & - & -\\ 5/7 & ~~0.0 UU & ~~0.278 SU & ~~0.562 SU & ~~0.778 US & ~~0.936 SU \\ & ~~(-1.518) & ~~(-1.474) & ~~(-1.325) & ~~(-1.102) & ~~(-0.792) \\ 7/9 & ~~0.0 UU & ~~0.203 SU & ~~0.427 SU & ~~0.606 US & ~~0.766 SU \\ & ~~(-1.510) & ~~(-1.487) & ~~(-1.406) & ~~(-1.287) & ~~(-1.122) \\ 4/7 & ~~0.0 UU & ~~0.027 UU & ~~0.400 UU & ~~0.900 SU & -\\ & ~~(-1.549) & ~~(-1.549) & ~~(-1.408) & ~~(-0.864) & -\\ 5/8 & ~~0.0 UU & ~~0.029 SU & ~~0.335 US & ~~0.800 SU & -\\ & ~~(-1.535) & ~~(-1.535) & ~~(-1.468) & ~~(-1.067) & -\\ 7/10 & ~~0.0 UU & ~~0.025 SU & ~~0.249 US & ~~0.905 SU & -\\ & ~~(-1.520) & ~~(-1.520) & ~~(-1.485) & ~~(-0.872) & -\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[ht] \centering \includegraphics[width=14cm]{figure07.eps} \caption{a) The resonant families, with $e'$ as a parameter, of the elliptic problem at the resonance 4/5. The families bifurcate from the family $II_{4/5}$ located at $e'=0$ and presented by the thin curve. b) The stable periodic orbit of the family $E_{1p}^{4/5}$ for $e'=0.4$ in the rotating frame O$xy$. c) The same as in (b) for the family $E_{1a}^{4/5}$. This orbit is unstable.} \label{FF7} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=11cm]{figure08.eps} \caption{The presentation of the families $E_{lp}^{p/q}$ (solid curves) and $E_{la}^{p/q}$ (dashed curves) of first order resonances in the plane $e'-e_0$, where $e_0=e(0)$. Bold curves indicate the family segments of stable periodic orbits and thin curves indicate the segments of unstable ones.} \label{FF8} \end{figure} \subsection{First order resonances} The computations reveal that there exist bifurcation points (BPs) of periodic orbits in the elliptic problem only in the families $II_{p/q}$ ($p/q=m/(m+1)$, $m=1,\ldots,6$). Each one of the 2/3 and 3/4 resonances has one BP and two families originate from it. These families were studied in Kotoulas and Hadjidemetriou (2002). In the 4/5 resonance there exist two bifurcation points. The families $E^{4/5}_{1p}$ and $E^{4/5}_{1a}$ start from BP1 and the $E^{4/5}_{2p}$ and $E^{4/5}_{2a}$ start from BP2. The families $E^{4/5}_{1p}$ and $E^{4/5}_{2p}$ include stable orbits, while the orbits of the families $E^{4/5}_{1a}$ and $E^{4/5}_{2a}$ are unstable. These families are presented in the $x_0-\dot{y}_0-e'$ space in Fig. 7a. In the same figure the corresponding family of the circular problem is also indicated in the plane $e'=0$. Two samples of periodic orbits of the families $E^{4/5}_{1p}$ and $E^{4/5}_{1a}$ for $e'=0.4$ are shown in Fig. 7b,c. Without loss of important information, it is more convenient to present the families in the projection space $e'-e_0$, where $e_0=e(0)$ is the eccentricity value of the orbit that corresponds to the initial conditions of the periodic orbit.
Certainly, a point in the plane $e'-e_0$ does not represent a unique orbit. The variation of the eccentricity $e=e(t)$ along a periodic orbit is rather insignificant and evidently periodic. Therefore, we consider the value $e_0$ as the eccentricity of the whole periodic orbit. By using the above mentioned plane, the computed families are shown in Fig. 8. Solid and dashed curves refer to the families $E_{lp}^{p/q}$ and $E_{la}^{p/q}$ respectively. Also the type of stability is indicated: bold lines denote stable orbits and thin ones denote unstable orbits. In Fig. 8 we observe that, in all resonances, along the families $E_{la}^{p/q},\: l=1,2,\ldots$, the initial eccentricity ($e_0$) increases monotonically as $e'$ increases. The families seem to continue up to the rectilinear case ($e'=1$). But in most cases and for $e'>0.9$ our computations fail to localize the periodic orbits for various reasons (e.g. the $\dot{y}$ or its derivatives with respect to $y$ or $x$ become large). Exceptional cases are the families $E^{4/5}_{2a}$, $E^{5/6}_{2a}$ and $E^{6/7}_{3a}$, which start from the BP with the largest eccentricity value. These families tend to terminate at a collision orbit with the Sun as $e'$ increases. For the families $E_{lp}^{p/q},\: l=1,2,\ldots$ we observe that they start with a decreasing $e_0$ as $e'$ increases and the orbits are almost elliptic with longitude of pericenter $\tilde{\omega}=180^o$. If the orbits attain the value $e_0=0$ at some particular value of $e'$, the family continues with orbits of $\tilde{\omega}=0^o$ and the initial eccentricity increases along the rest of the family. In all families, the stability type is preserved except in families $E^{5/6}_{1p}$ and $E^{6/7}_{1p}$. In particular, in family $E^{5/6}_{1p}$ we observe a segment of unstable orbits in the interval $0.17<e'<0.55$. The end points of such an interval are candidate bifurcation points for families of asymmetric orbits (Beaug\'{e} et al., 2003; Voyatzis et al., 2004). \begin{figure}[ht] \centering \includegraphics[width=14cm]{figure09.eps} \caption{Families of periodic orbits of the 3/5 and 5/7 resonances. The presentation is the same as that in Fig. 8.} \label{FF9} \end{figure} \subsection{Second order resonances} In the second order resonances ($p/q$, $q=p+2$) both families $I$ and $II$ of the circular problem have bifurcation points, which are starting points for families of the elliptic problem. Additionally, bifurcation points (denoted as BP0 in Table 1) exist in the circular family $C$ where the period is $T=q \pi$. Such circular orbits are continued for $e'\neq 0$, providing periodic orbits of period $T=2q\pi$ (Hadjidemetriou 1992). There are two different initial configurations giving rise to these families, denoted as $E_{01}^{p/q}$ and $E_{02}^{p/q}$:\\ Family $E_{01}^{p/q}$ : SUN-$N_{per}$-$B_{per} \:\: (t=0) \:\:\rightarrow$ SUN-$N_{ap}$-$B_{ap} \:\: (t=T/2)$ \\ Family $E_{02}^{p/q}$ : $B_{ap}$-SUN-$N_{per} \:\: (t=0)\:\:\rightarrow$ $B_{per}$-SUN-$N_{ap} \:\: (t=T/2)$ \\ where $B$ and $N$ denote the small body and Neptune respectively and the subscripts $ap$ and $per$ denote the position of the bodies (at apocenter or pericenter respectively). In the case of the 3/5 mean motion resonance there exist three bifurcation points (see Table 1). From BP0, which belongs to the family $C$, the families $E_{01}^{3/5}$ and $E_{02}^{3/5}$ bifurcate with unstable orbits. The BP1 belongs to the family $II_{3/5}$ of the circular problem and BP2 belongs to a stable segment of the $I_{3/5}$ one.
In Fig. 9a the corresponding families are presented in the $e'-e_0$ plane and their stability is indicated. As in the case of first order resonances, along the families $E_{la}^{p/q}$ or $E_{lp}^{p/q}$ that bifurcate from the family $II$ the eccentricity $e_0$ increases or decreases, respectively, as $e'$ increases. The opposite situation holds for the families that bifurcate from the $I$ family of the circular problem. In comparison to first order resonances, a different structure is observed for the families $E_{02}^{3/5}$ and $E_{1p}^{3/5}$, which join smoothly at $e'\approx 0.85$ and can be considered as one family starting from BP0 and ending at BP1. In the case of the 5/7 resonance, there exist five bifurcation points. The bifurcation point BP0 belongs to an unstable segment of the family $C$ of the circular problem. The points BP1 and BP3 belong to the family $II_{5/7}$ and the BP2 and BP4 belong to the $I_{5/7}$ one. The corresponding families and their stability are shown in Fig. 9b. As in the case of the 3/5 resonance, the families $E_{02}$ and $E_{1p}$ join smoothly. The type of stability of orbits changes twice along the family $E^{5/7}_{2p}$ and a segment of unstable orbits exists. The edges of this segment may be bifurcation points for families of asymmetric orbits, as mentioned above for the $E_{1p}^{5/6}$ family. The 7/9 resonance shows five bifurcation points distributed as in the 5/7 resonant case. Also, the generated families have the same qualitative features as the ones presented for the 5/7 resonance. \begin{figure}[ht] \centering \includegraphics[width=14cm]{figure10.eps} \caption{a) Families of periodic orbits of the 4/7 resonance in the planar elliptic RTBP. b) The evolution of the eccentricity along some orbits which start near the periodic orbits $Tr_1$ (unstable), $Tr_2$ (stable) and $Tr_3$ (doubly unstable) indicated in (a). Note the different scales on the vertical axes.} \label{FF10} \end{figure} \subsection{Third order resonances} The families of the circular problem at the resonances $p/q=$4/7, 5/8 and 7/10 exhibit four points from which families of periodic orbits of the elliptic problem bifurcate (see Table 1). In all cases, the bifurcation point BP0 belongs to the family $C$ and corresponds to a periodic orbit of period $2q\pi/3$. This orbit is continued to the elliptic problem if we assume that it is described three times, providing periodic orbits of period $T=2q\pi$. The remaining bifurcation points (BP1, BP2 and BP3) belong to the family $II$. The bifurcating families for the 4/7 resonance and their stability type are shown in Fig. 10a. The families $E^{4/7}_{1a}$ and $E^{4/7}_{02}$ join smoothly at a relatively low value of the primary's eccentricity ($e'\approx 0.13$). Such a property is also obtained for the families $E^{4/7}_{1p}$ and $E^{4/7}_{2p}$, which join at $e'\approx 0.8$. However, the family $E^{4/7}_{2a}$ does not show a typical continuation. It starts out from BP2 with unstable orbits and continues by increasing $e'$ up to the value 0.767. At this point, $e'$ decreases along the family and the periodic orbits are stable. The stable orbits occupy a short segment ($0.745<e'<0.767$) in the family, which is followed by a segment with complex unstable orbits ($0.740<e'<0.745$). For $e'<0.74$ the orbits become doubly unstable and the family terminates at $e'\approx 0.296$. This termination point is neither a collision orbit nor a strongly chaotic one, and we are not able to interpret it. Also, there is no obvious reason for the failure of the calculations.
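For reference, the stability labels used here (stable, simply or doubly unstable, complex unstable) derive from the indices described in Section 2. A schematic computation of the circular-problem index $k$ is sketched below, reusing the \texttt{rtbp} right-hand side of the earlier sketch; a finite-difference monodromy matrix replaces the variational equations for brevity, and in the elliptic case Broucke's indices $k_1$, $k_2$ are obtained analogously from the monodromy matrix of the period-$T$ map:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def flow(s0, period):
    # rtbp(t, s) is the rotating-frame right-hand side defined earlier
    sol = solve_ivp(rtbp, (0.0, period), s0, rtol=1e-12, atol=1e-12)
    return sol.y[:, -1]

def monodromy(s0, period, eps=1e-7):
    # central-difference approximation of the 4x4 monodromy matrix;
    # the paper integrates the variational equations instead
    M = np.empty((4, 4))
    for j in range(4):
        dp, dm = np.array(s0, float), np.array(s0, float)
        dp[j] += eps; dm[j] -= eps
        M[:, j] = (flow(dp, period) - flow(dm, period)) / (2 * eps)
    return M

def stability_index(s0, period):
    # the eigenvalues of M are (1, 1, lam, 1/lam), so the index
    # k = lam + 1/lam equals trace(M) - 2; |k| <= 2 <=> linear stability
    return np.trace(monodromy(s0, period)) - 2.0
\end{verbatim}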
As with the 4/7 resonance, the resonances 5/8 and 7/10 have four bifurcation points too. However, the stability type of the bifurcating families differs and is indicated in Table 1. The continuation of the families shows the same qualitative characteristics as those described above. Closing our study on the elliptic problem, we should state that in all cases of unstable periodic orbits the stability indices have values slightly larger than 2.0. This fact, besides the numerical integrations, suggests that chaos is not present in a sense of practical importance. Nevertheless, the presence of stable and unstable asymptotic manifolds changes the phase space topology. The orbits that start in the neighborhood of unstable periodic orbits show a large variation in their orbital elements in comparison with that shown by the orbits which start near a stable periodic orbit. A typical example of the evolution of the eccentricity along some orbits of the same family $E^{4/7}_{2a}$ but of different stability type is shown in Fig. 10b. The presented trajectories correspond to the initial conditions $x_0, y_0=0$ and $\dot{y}_0$ which coincide with those of the periodic orbits $Tr_1$, $Tr_2$ and $Tr_3$ indicated in Fig. 10a, while we have set $\dot{x}_0=10^{-3}$. The time scale is based on Neptune's period, namely $2\pi$t.u.=165 years. For the trajectory starting near $Tr_2$, which is stable, the evolution of the eccentricity $e=e(t)$ shows small amplitude oscillations around the starting value. In contrast, the evolution of $e(t)$ along the orbits that start near the simply unstable periodic orbit $Tr_1$ and the doubly unstable $Tr_3$ shows remarkable deviation from the initial value. However, for both cases of instability, the evolution seems regular. It is worth mentioning that the orbits along the families $E^{p/q}_{lp}$ intersect the orbit of Neptune in general (as in the case of Fig. 6c) but the collision of the bodies is avoided because of phase protection. For the families $E^{p/q}_{la}$ the intersection of orbits occurs only for orbits belonging to some particular short segments of the families. We did not find any collision orbits along the families studied in the elliptic RTBP.
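The comparison of Fig. 10b can be reproduced schematically by integrating slightly perturbed initial conditions and converting the rotating-frame state to osculating heliocentric elements. The sketch below again reuses the \texttt{rtbp} right-hand side of the earlier sketch; the element conversion is the standard two-body one, the $O(\mu)$ offset of the Sun from the origin is neglected, and the sampling parameters are arbitrary:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def eccentricity_series(s0, t_max, n_out=2000, mu_sun=1.0 - 5.178e-5):
    # integrate a trajectory and return the osculating heliocentric
    # eccentricity e(t)
    t = np.linspace(0.0, t_max, n_out)
    sol = solve_ivp(rtbp, (0.0, t_max), s0, t_eval=t, rtol=1e-11, atol=1e-11)
    x, y, vx, vy = sol.y
    vX, vY = vx - y, vy + x        # rotating -> inertial velocity
    r = np.hypot(x, y)
    E = 0.5 * (vX**2 + vY**2) - mu_sun / r   # two-body energy
    h = x * vY - y * vX                       # angular momentum
    e2 = 1.0 + 2.0 * E * h**2 / mu_sun**2
    return sol.t, np.sqrt(np.maximum(e2, 0.0))

# a trajectory started 1e-3 off a periodic orbit in xdot, as in Fig. 10b:
# t, ecc = eccentricity_series([x0, 0.0, 1e-3, ydot0], t_max=2e4 * np.pi)
\end{verbatim}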
\section{Conclusions} In this paper we presented families of resonant symmetric periodic orbits obtained for the planar RTBP, considering $\mu=5.178 \times 10^{-5}$ (the normalized mass of Neptune). Thus, our study of the circular RTBP is associated with the Kuiper belt dynamics and its most important resonances between 30 and 48 a.u. In particular, we systematically studied the first order resonances 2/3, 3/4, 4/5, 5/6 and 6/7, the second order ones 3/5, 5/7 and 7/9 and the third order ones 4/7, 5/8 and 7/10 for the circular and the elliptic RTBP. Based on well established methods and previous results of H\'{e}non, Broucke and Hadjidemetriou, we determined all the families of the above mentioned resonances and presented their main characteristics, such as bifurcation points, stability, continuation and collisions. For the planar circular problem we found that all studied resonances have two families of periodic orbits: the family $I$ and the family $II$. Family $II$ always consists of stable periodic orbits, while unstable periodic orbits are obtained only in a short segment of family $I$ located between the origin of the family and the first collision point which is met. Also very short segments of unstable orbits are obtained close to collisions, but in these cases the computations are ambiguous. The stable orbits constitute centers for resonant librating motion where trans-Neptunian objects can be captured. The families, though they are interrupted by collisions, extend up to large values of the Jacobi constant preserving their stability, i.e. regular highly eccentric motion can be localized for each resonance, at least at the level of the RTBP model. For all resonances we located the bifurcation points at $e'=0$ which are starting points for families of periodic orbits of the elliptic problem. At each one of them a pair of families ($E_a$ and $E_p$) originates. Thus for $e'\approx 0.01$, which is the eccentricity of Neptune's orbit, we obtain isolated periodic orbits, one in each family. The families of the elliptic problem start with either stable or unstable orbits. The stability type of the bifurcating families is preserved up to the value $e'$ of Neptune's orbit, but in many cases it changes for larger values of $e'$. The corresponding eigenvalues of the unstable orbits are always very close to the value +1 (i.e. at least one stability index is slightly greater than 2). This situation has also been observed for internal resonances (Hadjidemetriou, 1993) and indicates the existence of weak instability at the particular regions in phase space. Indeed, our numerical simulations show that in the neighborhood of unstable periodic orbits of the elliptic problem the motion seems regular for long term evolution but the orbital elements show slow and large variation. Generally, collisions are responsible for the generation of strong chaos, but in the elliptic problem all the families found avoid collisions with Neptune. Only the families $E_a$ which bifurcate from the most eccentric bifurcation point seem to terminate at a collision with the Sun as $e'\rightarrow 1$. As far as the planar RTBP is concerned, our systematic study on resonant periodic orbits verifies the dynamical structures obtained in the past for other resonant cases. Additionally, we should remark the following points: a) The scenario of bifurcation and continuation of the families of first order resonances (Guillaume, 1974; Hadjidemetriou, 1993) shows some differentiation when the resonant families avoid collisions with the planet. In our study, this is obtained for the $I_{6/7}$ and $II_{7/8}$ families, which join together forming a closed characteristic curve. b) The families of the elliptic problem seem to extend either up to the rectilinear case ($e'=1$) or join smoothly with another family, forming characteristic curves that start from one bifurcation point and terminate at another one. c) It has been suggested (Voyatzis et al., 2004) that the existence of asymmetric periodic orbits is associated with the existence of continuous segments of unstable orbits along a family of symmetric orbits. This condition is true for the resonances of the form $1/q,\:q=1,2,\ldots$. In our study of the circular problem no families with such a property have been found. In the elliptic problem the families $E_{1p}^{5/6}$ and $E_{2p}^{5/7}$ include a segment of unstable periodic orbits and, consequently, we may conjecture the existence of asymmetric periodic orbits in these cases. An interesting extension of the present work is to determine the families of periodic orbits in the three dimensional (3D) RTBP. The bifurcation points of families of 3D periodic orbits are given in Kotoulas and Voyatzis (2004b).
The 3D resonant structure should provide useful information about the capture of trans-Neptunian objects in highly inclined orbits. Interesting structures of resonant motion can also be provided by the study of asymmetric resonances located beyond 48 a.u., which are not included in the present study. \vspace{0.5cm} \textbf{Acknowledgements} The authors would like to thank Prof. Hadjidemetriou for fruitful discussions and suggestions. This work has been supported by the research program ``EPEAEK II, PYTHAGORAS, No.21878'' of the Greek Ministry of Education and the European Union.
\section{Introduction} In this paper we are concerned with stochastic (inviscid) dyadic models written in vector form \begin{equation}\label{stoch-dyadic-model} \d X= B(X)\,\d t + (\circ\,\d W(t)) X, \end{equation} where $X=(X_n)_n \in \R^\infty$ is an infinite dimensional column vector, $B(X)= B(X,X)$ and $B:\R^\infty\times \R^\infty \to \R^\infty$ is a bilinear mapping defined as $$B(X,Y)_n = \lambda^{n-1} X_{n-1} Y_{n-1} - \lambda^n X_n Y_{n+1}, \quad n\geq 1,$$ with the convention $X_0 = 0$, and $\lambda>1$ is a fixed parameter. The noise $\{W(t) \}_{t\geq 0}$ is white in time, taking values in the space of infinite dimensional skew-symmetric matrices, and $\circ\,\d$ means that the stochastic differential is understood in the Stratonovich sense. For $X(0) \in \ell^2 $, the subspace of $\R^\infty$ consisting of square summable sequences, one easily deduces from the skew-symmetry of $W(t)$ and the structure of $B(X)$ that the $\ell^2$-norm is formally preserved by the dynamics of \eqref{stoch-dyadic-model} (a one-line verification is given below). In fact, we shall prove that the model \eqref{stoch-dyadic-model} admits weak solutions with bounded $\ell^2$-norm, which are unique in law. Motivated by recent works \cite{Galeati20, FGL21, FL21, FGL21b} on scaling limits of SPDEs with transport noise, we intend to show in this work that, under a suitable scaling limit of the noises $\{W(t) \}_{t\geq 0}$, the stochastic model \eqref{stoch-dyadic-model} converges weakly to the deterministic viscous dyadic system $$\dot X = B(X) + \nu SX,$$ where $\nu>0$ comes from the intensity of the noise and $S= -\mbox{diag}(\lambda^2, \lambda^4, \ldots)$ is a diagonal matrix. We shall provide explicit convergence rates in terms of the parameters of the noises, and also prove a central limit theorem underlying such scaling limit results; in the case of viscous stochastic dyadic models, we shall show the phenomenon of dissipation enhancement. Before stating the precise choice of noises in \eqref{stoch-dyadic-model}, let us briefly recall the literature concerning the dyadic model. The deterministic dyadic model is a special version of the shell-type models \cite{LPPPV98, Bif03}, which describe in a simplified way the energy cascade in turbulent fluids; nevertheless, they capture some essential features of the Euler and Navier-Stokes equations. The dyadic model considered in this paper was introduced by Katz and Pavlovi\'c \cite{KP05}, see also \cite[Section 2]{Ches08} for a short derivation. The viscous dyadic model, which reads in component form as \begin{equation}\label{determ-dyadic-model} \dot X_n= B(X)_n - \kappa\, \lambda^{2\alpha n} X_n, \quad n\geq 1, \end{equation} was studied in detail by Cheskidov \cite{Ches08}; here, $\kappa \geq 0$ is the viscosity and $\alpha>0$ is the dissipation degree. In particular, the existence of Leray-Hopf solutions was proved for any $\alpha>0$ and global regularity for $\alpha\geq 1/2$; moreover, a finite time blow-up result in the case $\alpha<1/3$ was given in Section 5 therein. Based on some previous studies on dyadic models, Tao constructed in the influential work \cite{Tao15} an averaged version of the deterministic 3D Navier-Stokes equations, exhibiting the behavior of finite time blow-up. We refer to \cite[Introduction]{Ches08} and \cite[Chapter 3]{Flandoli10} for more information on the dyadic models, see also Section 3 of the recent survey paper \cite{BF20}, where one can find a more general tree model.
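For completeness, the formal conservation mentioned above follows from a telescoping sum: for $X$ decaying fast enough,
\begin{equation*}
\langle B(X), X\rangle_{\ell^2} = \sum_{n\geq 1} \big(\lambda^{n-1} X_{n-1}^2 X_n - \lambda^{n} X_n^2 X_{n+1}\big) = \sum_{m\geq 1} \lambda^{m} X_{m}^2 X_{m+1} - \sum_{n\geq 1} \lambda^{n} X_n^2 X_{n+1} = 0,
\end{equation*}
using the convention $X_0=0$ and the index shift $m=n-1$ in the first sum; combined with $\langle (\circ\,\d W)X, X\rangle = 0$, which holds by the skew-symmetry of $W$ and the Stratonovich chain rule, this yields $\d\, \|X\|_{\ell^2}^2 = 0$ formally.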
An inviscid tree model was studied by Bianchi and Morandin \cite{BM17}, showing that the exponent of the structure function is strictly increasing and concave. Though the energy is formally preserved by the inviscid dyadic model, i.e. $\kappa=0$ in \eqref{determ-dyadic-model}, it was shown in \cite{BFM11} that the energy actually dissipates for positive solutions; see \cite{BBFM13} for related results on a tree model and \cite{BFM11b} for the case of a stochastic dyadic model perturbed by an energy-preserving noise as in \eqref{stoch-dyadic-model-1} below. Moreover, the inviscid model enjoys uniqueness of solutions if we restrict to nonnegative solutions (cf. \cite{BFM10}), a property which does not hold for solutions with varying signs. The uniqueness of solutions can be restored by suitable noise. More precisely, the following stochastic dyadic model was studied in \cite{BFM10b}: \begin{equation}\label{stoch-dyadic-model-1} \d X_n= B(X)_n \,\d t + \lambda^{n-1} X_{n-1} \circ \d W_{n-1} - \lambda^n X_{n+1} \circ \d W_n , \quad n\geq 1, \end{equation} where $\{W_n \}_{n}$ is a family of independent standard Brownian motions. Rewriting the equations in It\^o form and using the Girsanov transform, the authors in \cite{BFM10b} obtained a system of stochastic linear equations for which they were able to prove existence and uniqueness of solutions; these results are then transferred to uniqueness in law for the nonlinear system \eqref{stoch-dyadic-model-1}. See \cite{Bian13} for a related uniqueness result for the stochastic tree model. Romito \cite{Rom14} studied a viscous dyadic model with additive noise, namely, there is an independent Brownian noise $W_n$ in each equation of the system \eqref{determ-dyadic-model}; he presented a detailed analysis of the uniqueness and blow-up of solutions, depending on the relative strengths of the linear dissipation term and the nonlinear term. Before moving forward, we note that the stochastic dyadic model \eqref{stoch-dyadic-model-1} can be written in the form of \eqref{stoch-dyadic-model}. To this end, we first rewrite \eqref{stoch-dyadic-model-1} in vector form $$\d X= B(X)\,\d t + \lambda \begin{pmatrix} - X_2 \\ X_1 \\ 0 \\ 0 \\ \vdots \end{pmatrix} \circ \d W_1 + \lambda^2 \begin{pmatrix} 0\\ - X_3 \\ X_2 \\ 0 \\ \vdots \end{pmatrix} \circ \d W_2 + \cdots. $$ For $1\leq i<j$, we introduce the infinite dimensional skew-symmetric matrix $A_{i,j}$ whose entries are all zero except that the $(i,j)$ entry is $-1$ and the $(j,i)$ entry is $1$. Then we can rewrite \eqref{stoch-dyadic-model-1} as \begin{equation}\label{stoch-dyadic-1} \d X= B(X)\,\d t + \sum_i \lambda^i A_{i,i+1} X \circ \d W_i. \end{equation} Therefore, the matrix-valued noise in this case is $W(t)= \sum_i \lambda^i A_{i,i+1} W_i(t),\, t\geq 0$. In It\^o form, the above equation becomes \begin{equation}\label{stoch-dyadic-Ito} \d X= B(X)\,\d t + \sum_i \lambda^i A_{i,i+1} X \, \d W_i + \frac12 \bigg(\sum_i \lambda^{2i} A_{i,i+1}^2 \bigg) X \, \d t. \end{equation} Note that $A_{i,i+1}^2$ is a diagonal matrix whose $(i,i)$ and $(i+1, i+1)$ entries are $-1$ while all the other entries are zero; thus, \begin{equation}\label{Stra-Ito-corrector} \sum_i \lambda^{2i} A_{i,i+1}^2 = - \mbox{diag}\big(\lambda^2, \lambda^2 +\lambda^4, \lambda^4 +\lambda^6, \ldots\big).
\end{equation} In several recent works, Flandoli, Galeati and the first named author of the current paper have shown that many PDEs perturbed by multiplicative transport noise converge weakly, under a suitable scaling of the noise to high Fourier modes, to the corresponding deterministic equations with an additional viscous term; see for instance \cite{Galeati20, FGL21, Luo21}; see also \cite{FL20} where the limit equation is driven by additive noise. Regarding such high mode transport noise as small components of turbulent fluids, one may heuristically interpret the above results as the emergence of eddy viscosity. We have made use of the enhanced dissipation to show suppression of blow-up by transport noise, cf. \cite{FL21, FGL21b, Luo22}. Furthermore, the weak convergence results were improved in \cite{FGL21c} by providing quantitative convergence rates, and the large deviation principle and central limit theorems underlying such limit results have been established in the recent paper \cite{GL22}. There are also studies on stochastic heat equations with transport noise in a bounded domain \cite{FGL21d} and in an infinite channel \cite{FlaLuongo22}. Our purpose in this work is to extend some of these results to the dyadic model, but first let us point out that a simple way of rescaling the noise in \eqref{stoch-dyadic-1} does not work, as discussed in the next remark. \begin{remark}\label{rem-intro-1} Following the ideas in \cite{Galeati20, FGL21}, it seems natural to take a sequence of coefficients $\theta^N= (\theta^N_i)_i \in \ell^2$ such that (for instance $\theta^N_i= N^{-1/2} {\bf 1}_{\{1\leq i\leq N\}}$) \begin{equation}\label{rem-intro-1.1} \| \theta^N \|_{\ell^2} =1 \ (\forall\, N\geq 1), \quad \| \theta^N \|_{\ell^\infty} \to 0 \mbox{ as } N\to \infty, \end{equation} and to consider equations $$\d X^N= B(X^N)\,\d t + \sum_i \theta^N_i \lambda^i A_{i,i+1} X^N \circ \d W_i. $$ The associated It\^o equations are $$\d X^N= B(X^N)\,\d t + \sum_i \theta^N_i \lambda^i A_{i,i+1} X^N \, \d W_i + \frac12 \bigg(\sum_i (\theta^N_i )^2 \lambda^{2i} A_{i,i+1}^2 \bigg) X^N \, \d t. $$ From the expression \eqref{Stra-Ito-corrector}, it is easy to see that the matrix $$\sum_i (\theta^N_i )^2 \lambda^{2i} A_{i,i+1}^2 = - {\rm diag}\Big((\theta^N_1 )^2\lambda^2, (\theta^N_1 )^2\lambda^2 +(\theta^N_2 )^2\lambda^4, (\theta^N_2 )^2\lambda^4 +(\theta^N_3 )^2\lambda^6, \ldots\Big) $$ vanishes as $N\to \infty$. The reason is that each diagonal entry involves at most two components of $\theta^N$, which tend to $0$ as $N\to \infty$. As a result, we cannot get an extra viscous term in the limit. \end{remark} In view of the above remark, given $\theta\in \ell^2$ with $\|\theta \|_{\ell^2}=1$, we consider the following stochastic dyadic model: \begin{equation}\label{stoch-dyadic-2} \d X= B(X)\,\d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty \theta_j A_{i,i+j} X \circ \d W_{i,j}, \end{equation} where $\nu>0$ represents the intensity of noise and $\{W_{i,j} \}_{i,j \geq 1}$ are independent real Brownian motions. In other words, the matrix-valued noise in \eqref{stoch-dyadic-model} takes the form $$W(t)= \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta_j A_{i,i+j} W_{i,j}(t). $$ \begin{remark} Heuristically, we require the noise to transfer energy among components far from each other, not only between neighbouring ones.
We admit that such a noise is not consistent with the definition of the dyadic model; from the viewpoint of turbulence theory, however, the noise seems reasonable since there do exist long-range interactions among multiple fluid scales and energy is transferred between them. \end{remark} We now introduce some notation frequently used below. For $s\in \R$, let $H^s$ be the subspace of $\R^\infty$ consisting of those $x=(x_n)_n$ such that $$\|x\|_{H^s} = \bigg(\sum_n \lambda^{2ns} x_n^2 \bigg)^{1/2} <\infty. $$ We shall write $H=H^0$ which coincides with $\ell^2$, and we often use $\|\cdot\|_{\ell^2}$ for the norm in $H$; the notation $\<\cdot, \cdot\>$ stands for the scalar product in $H$ or the duality between $H^s$ and $H^{-s}$, $s\in \R$. Given $T\ge 0$, we denote by $C_t H^s$ (also written $C_T H^s$) the space $C([0,T], H^s)$ endowed with the norm $$\|X \|_{C_t H^s}:= \sup_{t\in[0,T]} \|X(t)\|_{H^s}. $$ The notation $a\lesssim b$ means $a\leq Cb$ for some unimportant constant $C$, and $\lesssim_\lambda$ means that $C$ depends on $\lambda$. Sometimes we write $\Z_+^2$ for the set of integer indices $k=(i,j)$ with $i,j\ge 1$. \subsection{Well posedness of stochastic dyadic model} \label{subsec-well-posedness} We first rewrite \eqref{stoch-dyadic-2} in It\^o form: \begin{equation}\label{stoch-dyadic-2-Ito} \d X= B(X)\,\d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty \theta_j A_{i,i+j} X \, \d W_{i,j} + \nu \sum_i \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 A_{i,i+j}^2 X \, \d t. \end{equation} The Stratonovich-It\^o corrector looks quite complicated, but the matrix term is in fact diagonal. In the sequel we write $$ S_\theta = \sum_i \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 A_{i,i+j}^2; $$ then it is not difficult to show that $$S_\theta = - {\rm diag}\bigg(\lambda^2, \lambda^4 + \theta_1^2 \lambda^2, \lambda^6 + \theta_2^2\lambda^2 + \theta_1^2\lambda^4, \cdots, \lambda^{2i} + \sum_{j=1}^{i-1} \theta_j^2 \lambda^{2(i-j)}, \ldots \bigg). $$ Indeed, since $A_{i,i+j}^2$ is a diagonal matrix whose entries are zero except that the ones at $(i,i)$ and $(i+j, i+j)$ are $-1$, one has $$\sum_{j=1}^\infty \theta_j^2 A_{i,i+j}^2 = {\rm diag}\big(\underbrace{0,\ldots, 0}_{i-1}, -1, -\theta_1^2, -\theta_2^2, \ldots \big), $$ where we have used $\|\theta \|_{\ell^2}=1$. This leads to the above identity by elementary computations. Using the notation $S_\theta$, \eqref{stoch-dyadic-2-Ito} reduces to \begin{equation}\label{stoch-dya-model-N} \d X= B(X)\,\d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty \theta_j A_{i,i+j} X \, \d W_{i,j} + \nu S_\theta X \, \d t.
\end{equation} The component form of equation \eqref{stoch-dya-model-N} is \begin{equation}\label{com-stoch-dya-model-N} \begin{aligned} \d X_n &= \big(\lambda^{n-1}X_{n-1}^2-\lambda^n X_n X_{n+1} \big)\,\d t - \nu \bigg(\lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n \, \d t\\ &\quad+ \sqrt{2\nu} \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j \,\d W_{j,n-j} -\sqrt{2\nu} \lambda^n \sum\limits_{j=1}^{\infty} \theta_j X_{n+j} \, \d W_{n,j}. \end{aligned} \end{equation} The following definition of weak solutions to \eqref{stoch-dya-model-N} is the same as \cite[Definition 3.2]{Flandoli10}. \begin{definition}\label{defn-martingale-solution} Given $x=\{x_n\}_{n\ge 1} \in \ell^2$, we say that \eqref{stoch-dya-model-N} has a weak solution in $\ell^2$ if there exist a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, \P)$, a sequence of independent Brownian motions $\{W_{k}\}_{k\in \mathbb{Z}_+^2}$ on $(\Omega, \mathcal{F}, \mathcal{F}_t, \P)$ and an $\ell^2$-valued stochastic process $(X_n)_{n\ge 1}$ on $(\Omega, \mathcal{F}, \mathcal{F}_t, \P)$ with continuous adapted components $X_n$, such that \begin{equation*} \begin{aligned} X_n(t)&= x_n +\! \int_0^t\! \big(\lambda^{n-1}X_{n-1}^2 -\lambda^n X_n X_{n+1} \big)(s)\,\d s - \nu\! \int_0^t\! \bigg(\lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n(s) \,\d s\\ &\quad + \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j X_j(s) \theta_{n-j} \,\d W_{j,n-j}(s) -\sqrt{2\nu} \lambda^n\! \int_0^t \sum\limits_{j=1}^{\infty} \theta_j X_{n+j}(s) \,\d W_{n,j}(s) \end{aligned} \end{equation*} for each $n\ge 1$, with $X_0(\cdot)\equiv 0$. We denote this solution simply by $X$. By $L^\infty$-weak solution we mean that there exists a constant $C>0$ such that $\| X(t)\|_{\ell^2} \le C$ for a.e. $(\omega, t)\in \Omega \times [0,T]$. \end{definition} Here is the main result of this part. \begin{theorem}\label{thm-existence} Assume that the first component of $\theta$ is nonzero, i.e. $\theta_1\neq 0$. Given $x=\{x_n\}_{n\ge 1} \in \ell^2$, there exists an $L^\infty$-weak solution $X=(X(t))_{t\in [0,T]}$ to \eqref{stoch-dya-model-N} such that $\|X(t) \|_{\ell^2} \le \|x \|_{\ell^2}$; moreover, such solutions are unique in law. \end{theorem} Following the arguments of \cite[Section 3.4]{Flandoli10}, we shall prove in Section \ref{section-existence-Scaling-limit} the above theorem by applying the Girsanov transform: the nonlinear model \eqref{stoch-dya-model-N} is transformed into a stochastic linear system, for which one can prove existence and uniqueness of strong solutions; in particular, we borrow the idea of \cite{Bian13} to deal with the uniqueness part. These results are then transferred back to \eqref{stoch-dya-model-N}, yielding existence and uniqueness in law of weak solutions. 
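Before proceeding, we remark that the diagonal form of the corrector $S_\theta$ computed in this subsection is easy to check numerically. The following Python sketch (purely illustrative and not part of any proof; the values of $\lambda$, the choice of $\theta$ and the truncation level are arbitrary) assembles $\sum_i \lambda^{2i} \sum_j \theta_j^2 A_{i,i+j}^2$ from the matrices $A_{i,j}$ and compares its first diagonal entries with the closed formula:
\begin{verbatim}
import numpy as np

lam, N = 2.0, 8                        # illustrative lambda and truncation
theta = np.ones(N) / np.sqrt(N)        # any theta with unit ell^2-norm works

def A(i, j, size):
    # Skew-symmetric matrix with (i,j) entry -1 and (j,i) entry 1 (1-based).
    M = np.zeros((size, size))
    M[i - 1, j - 1], M[j - 1, i - 1] = -1.0, 1.0
    return M

size = 2 * N                           # room for A_{i,i+j} with i, j <= N
S = sum(lam ** (2 * i) * theta[j - 1] ** 2
        * A(i, i + j, size) @ A(i, i + j, size)
        for i in range(1, N + 1) for j in range(1, N + 1))

# Closed form: -(lam^{2n} + sum_{j<n} theta_j^2 lam^{2(n-j)}) on the diagonal.
closed = [-(lam ** (2 * n)
            + sum(theta[j - 1] ** 2 * lam ** (2 * (n - j))
                  for j in range(1, n)))
          for n in range(1, N + 1)]
print(np.allclose(np.diag(S)[:N], closed))   # prints: True
\end{verbatim}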
\subsection{Convergence rates to deterministic viscous dyadic model}\label{subsec-converg-rate} From the expression of $S_\theta$, we easily deduce that if $\|\theta \|_{\ell^\infty}\to 0$ while keeping $\|\theta \|_{\ell^2}=1$ (as in \eqref{rem-intro-1.1}, but we do not want to use $\theta^N$ for ease of notation), then $$S_\theta \to -{\rm diag}(\lambda^2, \lambda^4, \ldots ) =:S .$$ Therefore, if we can show that the martingale part of \eqref{stoch-dya-model-N} vanishes in the limit, then the solution $X$ will be close to that of the deterministic viscous dyadic model: \begin{equation}\label{thm-scaling-limit.1} \frac{\d \tilde X}{\d t}= B(\tilde X) + \nu S \tilde X, \quad \tilde X(0)=x, \end{equation} which reads in component form as $$\frac{\d \tilde X_n}{\d t}= \lambda^{n-1} \tilde X_{n-1}^2 - \lambda^n \tilde X_n \tilde X_{n+1} - \nu\lambda^{2n} \tilde X_n. $$ Given $x\in \ell^2$, a function $\tilde X = \{\tilde X_n \}_{n\geq 1} \in L^\infty(0,T; \ell^2)$ is called a weak solution to \eqref{thm-scaling-limit.1} if for every $n\ge 1$, $\tilde X_n \in C^1([0,T])$ and the component equation holds pointwise for all $t\in [0,T]$. It is easy to show that \eqref{thm-scaling-limit.1} admits a unique solution $\tilde X$ satisfying $\|\tilde X(t)\|_{\ell^2}\le \|x\|_{\ell^2}$ for all $t\in [0,T]$. The proof of the existence part is standard, cf. \cite[Theorem 4.1]{Ches08} which also shows $L^2(0,T; H^1)$-regularity of solutions; we shall provide in Section \ref{section-Uniqueness} a simple proof of uniqueness in the space $L^\infty(0,T; \ell^2)$. Our purpose is to prove a quantitative estimate on the distance between $X$ and $\tilde X$, in a suitable topology. For simplicity, we assume that \eqref{stoch-dya-model-N} and \eqref{thm-scaling-limit.1} have the same initial data. \begin{theorem}\label{thm-quantitative-convergence-rate} Given initial data $x=\{x_n\}_{n\ge 1} \in \ell^2$, let $X$ be the $L^\infty$-weak solution to \eqref{stoch-dya-model-N} and $\tilde X$ be any weak solution to \eqref{thm-scaling-limit.1}. Then for any $ \delta \in (\frac12 , 1),\ \alpha \in (2-2\delta, 1 )$, one has $$\begin{aligned} \E\big[\|X - \tilde X \|_{C_T H^{-\alpha}}^2 \big] \lesssim \nu^{\frac{\alpha}{2}} \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2}^2 \bigg\{ \alpha^{-1} \big[C(T) C_{1-\frac{\alpha}{2}} \big]^2 \frac{\lambda^{-\alpha}}{1-\lambda^{-\alpha}} + C(T,\delta) \nu^{2-2\delta-\frac{\alpha}{2}} \|\theta\|_{\ell^{\infty}}^2 \bigg\}. \end{aligned}$$ \end{theorem} The proof will be given in Section \ref{section-quantitative-convergence}, using the mild formulations of both the stochastic dyadic model \eqref{stoch-dya-model-N} and the deterministic model \eqref{thm-scaling-limit.1}. The two key ingredients in the proof are the estimates on the nonlinear term (see Lemma \ref{lem-nonlinearity-2}) and on a stochastic convolution; to prove the latter, we shall borrow some ideas from \cite[Lemma 2.5]{FGL21c}. We also point out that the uniqueness of the weak solutions to \eqref{thm-scaling-limit.1} is an easy consequence of Theorem \ref{thm-quantitative-convergence-rate}. Indeed, let $\tilde X^1,\ \tilde X^2\in L^\infty(0,T; \ell^2)$ be two weak solutions to \eqref{thm-scaling-limit.1} and let $X$ be an $L^\infty$-weak solution to \eqref{stoch-dya-model-N}; then we have $$ \|\tilde X^1- \tilde X^2\|_{C_T H^{-\alpha}}^2 \lesssim \E\big[\|X - \tilde X^1 \|_{C_T H^{-\alpha}}^2 \big]+ \E\big[\|X - \tilde X^2 \|_{C_T H^{-\alpha}}^2\big] \lesssim \|\theta\|_{\ell^{\infty}}^2.
$$ Then, since the left hand side does not rely on $\theta$, we can take $\|\theta\|_{\ell^{\infty}} \to 0$ and get the uniqueness. \subsection{CLT underlying the scaling limit}\label{subsec-CLT} The result proved in Theorem \ref{thm-quantitative-convergence-rate} involves the convergence of stochastic processes $X$ to a deterministic one $\tilde X$, and thus it could be interpreted as a law of large numbers. We are interested in studying the Gaussian type fluctuations underlying such limit results. Motivated by \cite{GL22}, we shall take a special sequence of coefficients as below: $$\theta_j^{N}=\sqrt{\varepsilon_N}\frac{1}{j^{\alpha_1}},\quad 1\le j\le N,\ 0< \alpha_1 < \frac12, $$ where $\varepsilon_N=\big(\sum_{j=1}^N \frac{1}{j^{2\alpha_1}} \big)^{-1} \to 0$ and $\|\theta^N\|_{\ell^2}=1$. Now \eqref{stoch-dya-model-N} becomes \begin{equation}\label{stoch-dya-model-N-new} \d X^N= B(X^N)\,\d t + \sqrt{2\nu \varepsilon_N} \sum_i \lambda^i\sum_{j=1}^N j^{-\alpha_1} A_{i,i+j} X^N \, \d W_{i,j} + \nu S_{\theta^N} X^N \, \d t. \end{equation} By Theorem \ref{thm-quantitative-convergence-rate}, we know that $X^N$ converge weakly to $\tilde X$ as $N\to \infty$. Set $$\xi^N:=\frac{X^N-\tilde X}{\sqrt{\varepsilon_N}}, $$ then $\xi^N(0)=0$, and it satisfies \begin{equation}\label{central-limit-model-N} \begin{aligned} \d \xi^N &= \big[B(\xi^N,X^N)+ B(\tilde X, \xi^N)\big]\, \d t+ \frac{\nu}{\sqrt{\varepsilon_N}}(S_{\theta^N}-S)X^N\, \d t + \nu S \xi^N\, \d t\\ &\quad + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^N j^{-\alpha_1} A_{i,i+j} X^N \, \d W_{i,j}. \end{aligned} \end{equation} When $N\to \infty$, it is expected that the limit of $\xi^N$ solves the following equation \begin{equation}\label{central-limit-model-N-limit} \left\{ \begin{aligned} \d \xi &= \big[B(\xi,\tilde X)+ B(\tilde X, \xi)\big]\, \d t+ \nu S \xi\, \d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty j^{-\alpha_1} A_{i,i+j} \tilde X \, \d W_{i,j},\\ \xi(0) &=0. \end{aligned} \right. \end{equation} Our purpose is to rigorously establish this convergence. Before stating the main result of this part, we need to clarify a technical issue. The original stochastic dyadic model \eqref{stoch-dya-model-N-new} admits only weak solutions, and thus the probability space $(\Omega, \F,\P)$ on which the solutions $X^N$ and the Brownian motions $\{W_{i,j}\}_{i,j\geq 1}$ are defined is not prescribed in advance. The fluctuations $\xi^N$ live also on the same probability space $(\Omega, \F,\P)$. Fortunately, for these Brownian motions $\{W_{i,j}\}_{i,j\geq 1}$ on $(\Omega, \F,\P)$, one can show that the limit equation \eqref{central-limit-model-N-limit} admits a unique strong solution $\xi$, see Corollary \ref{cor-central-limit-wellposedness} below. As $\xi^N$ and $\xi$ are defined on the same probability space, we can estimate the moments of their distances in a suitable topology. \begin{theorem}\label{thm-CLT} Let $X^N$ be a weak solution to \eqref{stoch-dya-model-N-new} and $\tilde X$ the unique solution to \eqref{thm-scaling-limit.1}; define $\xi^N $ as above. Let $\xi$ be the unique solution to \eqref{central-limit-model-N-limit}. For any $\beta\in (0, 1 )$ and $T>0$, we have $$ \lim_{N\to \infty} \sup\limits_{t\in [0,T]} \E \|\xi^N(t)-\xi(t)\|_{H^{-\beta}}^2= 0. $$ \end{theorem} This result will be proved in Section \ref{section-CLT}. Unfortunately, we are unable to find an explicit convergence rate in the above limit. 
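For concreteness, note that $\varepsilon_N=\big(\sum_{j=1}^N j^{-2\alpha_1} \big)^{-1}$ behaves like $(1-2\alpha_1) N^{2\alpha_1-1}$ when $0<\alpha_1<\frac12$, so that $\|\theta^N\|_{\ell^2}=1$ while $\|\theta^N\|_{\ell^\infty}=\sqrt{\varepsilon_N}\to 0$. A minimal Python sketch checking these properties (the values of $\alpha_1$ and $N$ are arbitrary illustrative choices) reads:
\begin{verbatim}
import numpy as np

alpha1 = 0.25                               # any exponent in (0, 1/2)
for N in [10, 100, 1000, 10000]:
    j = np.arange(1, N + 1)
    eps = 1.0 / np.sum(j ** (-2 * alpha1))  # epsilon_N
    theta = np.sqrt(eps) * j ** (-alpha1)   # theta^N_j, 1 <= j <= N
    print(N,
          np.linalg.norm(theta),            # ell^2-norm: always 1
          theta.max(),                      # ell^infty-norm = sqrt(eps) -> 0
          eps * N ** (1 - 2 * alpha1))      # -> 1 - 2*alpha1 = 0.5
\end{verbatim}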
\subsection{Dissipation enhancement} Now we consider the stochastic viscous dyadic model with the same noise term as above: \begin{equation}\label{stoch-viscous-dyadic-model} \d X = B(X)\,\d t + \kappa S X \,\d t+ \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta_j A_{i,i+j} X \circ \d W_{i,j}, \end{equation} where $\kappa>0$ is the viscosity. For this equation, it is not difficult to show the existence and pathwise uniqueness of Leray-Hopf type solutions $\{X(t)\}_{t\geq 0}$, which are strong in the probabilistic sense and satisfy the energy equality: $\P$-a.s., for all $0\leq s<t$, \begin{equation}\label{viscous-dyadic-energy-balance} \|X(t)\|_{\ell^2}^2 + 2 \kappa \int_s^t \|X(r)\|_{H^1}^2 \,\d r= \|X(s)\|_{\ell^2}^2. \end{equation} We omit the proofs since they are quite standard; see Proposition \ref{prop-uniqueness} below for sketched arguments in the deterministic setting; we only mention that, in the stochastic case, one should use \cite[Theorem 2.13]{RZ18} instead of Theorem 2.12 therein. Equality \eqref{viscous-dyadic-energy-balance} implies that $t \mapsto \|X(t)\|_{\ell^2}$ is almost surely decreasing; moreover, using the simple inequality $\|X(r)\|_{H^1} \ge \lambda \|X(r)\|_{\ell^2}$ one deduces the decay property $$\|X(t)\|_{\ell^2} \le e^{-\kappa\lambda^2 t}\|X(0)\|_{\ell^2},$$ where the rate of decay is independent of the noise. The next result implies that the dissipation of \eqref{stoch-viscous-dyadic-model} is greatly enhanced for suitably chosen intensity $\nu$ and coefficients $\theta$. \begin{theorem}\label{thm-enhance-dissipation} For any $p\ge 1,\, \chi >0$ and $L>0$, there exists a pair $(\nu, \theta)$ satisfying the following property: for any $ X(0)\in \ell^2$ with $\|X(0) \|_{\ell^2} \leq L$, there exists a random constant $C>0$ with finite $p$-th moment, such that for the solution $X(t)$ of \eqref{stoch-viscous-dyadic-model} with initial condition $X(0)$, we have $\P$-a.s. for any $t\ge 0$, $$ \|X(t)\|_{\ell^2} \le C e^{-\chi t}\|X(0)\|_{\ell^2}. $$ \end{theorem} Contrary to usual results on dissipation enhancement for linear equations, here we have to confine the initial data $X(0)$ to a bounded ball in $\ell^2$; see \cite[Theorem 2.3]{Luo21b} for a similar result. The reason is the estimate of the quadratic part $B(X)$, which produces a fourth order term; see the treatment of $I_2$ in the proof of Lemma \ref{lem-energy-decreasing}. We finish the introduction by describing the structure of this paper. We prove in Section \ref{section-Preliminaries} some basic estimates on the nonlinearity and on the semigroup $\{e^{tS} \}_{t\ge 0}$, as well as the uniqueness of solutions in $L^\infty(0,T; \ell^2)$ to the viscous dyadic model \eqref{thm-scaling-limit.1}. Section \ref{section-existence-Scaling-limit} is devoted to the well posedness and weak limit result of the stochastic dyadic model \eqref{stoch-dya-model-N}. Theorem \ref{thm-quantitative-convergence-rate} will be proved in Section \ref{section-quantitative-convergence}, giving a quantitative estimate on the distance between the solutions of the stochastic model \eqref{stoch-dya-model-N} and the deterministic model \eqref{thm-scaling-limit.1}. Section \ref{section-CLT} is dedicated to the proof of the central limit theorem. Finally, we prove Theorem \ref{thm-enhance-dissipation} in the last Section \ref{section-dissipation-enhancement}, showing the phenomenon of dissipation enhancement for dyadic models by our choice of noise.
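Before proceeding, we illustrate the baseline decay rate $e^{-\kappa\lambda^2 t}$ mentioned above on the deterministic counterpart of \eqref{stoch-viscous-dyadic-model} (noise switched off), integrated by an exponential-Euler scheme which treats the stiff linear part exactly. The following Python sketch is purely illustrative and all parameter values are arbitrary choices; one can observe that the computed norm decays at least as fast as the a priori bound:
\begin{verbatim}
import numpy as np

lam, kappa = 2.0, 0.1
N, dt, steps = 15, 1e-4, 10_000        # truncation level, step size, T = 1

n = np.arange(1, N + 1)
factor = np.exp(-kappa * lam ** (2 * n) * dt)   # exact flow of linear part

def B(x):
    # Nonlinearity B(x)_n = lam^{n-1} x_{n-1}^2 - lam^n x_n x_{n+1}.
    xm = np.concatenate(([0.0], x[:-1]))
    xp = np.concatenate((x[1:], [0.0]))
    return lam ** (n - 1) * xm ** 2 - lam ** n * x * xp

x = np.zeros(N); x[0] = 1.0            # all initial energy in the first shell
for _ in range(steps):
    x = factor * (x + dt * B(x))       # exponential Euler step

T = steps * dt
print(np.linalg.norm(x), np.exp(-kappa * lam ** 2 * T))  # observed vs. bound
\end{verbatim}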
\section{Preliminaries}\label{section-Preliminaries} This section contains two parts: in Section \ref{subs-basic-estimates} we collect some basic estimates on the nonlinearity $B(X)$ and on the semigroup $\{e^{tS} \}_{t\geq 0}$; then we provide in Section \ref{section-Uniqueness}, for the reader's convenience, a proof of uniqueness of bounded solutions for the deterministic viscous dyadic model \eqref{thm-scaling-limit.1}, as well as some results on the Leray-Hopf solutions. \subsection{Some basic estimates}\label{subs-basic-estimates} We first prove an estimate on the nonlinear term $B(x,y)$. \begin{lemma}\label{lem-nonlinearity-2} Let $x\in H^{a},\ y\in H^{b}$ with $a,b\in\R$; then $$ \| B(x,y) \|_{H^{-1+a+b}} \lesssim_{\lambda} \|x\|_{H^{a}} \|y\|_{H^{b}}. $$ \end{lemma} \begin{proof} By direct calculation, $$\begin{aligned} \| B(x,y) \|_{H^{-1+a+b}}^2 &= \sum\limits_{n=1}^{\infty} \lambda^{2(-1+a+b)n}(\lambda^{n-1}x_{n-1}y_{n-1}-\lambda^n x_n y_{n+1})^2\\ &\lesssim \sum\limits_{n=1}^{\infty} \lambda^{2(-1+a+b)n}\lambda^{2(n-1)}x_{n-1}^2y_{n-1}^2 + \sum\limits_{n=1}^{\infty} \lambda^{2(-1+a+b)n}\lambda^{2n} x_n^2 y_{n+1}^2 \\ &=: J_1 +J_2. \end{aligned}$$ For $J_1$, we have $$\begin{aligned} J_1 &= \lambda^{2(-1+a+b)} \sum_{n=1}^{\infty} \lambda^{2a(n-1)} x_{n-1}^2 \lambda^{2b(n-1)} y_{n-1}^2\\ &\le \lambda^{2(-1+a+b)} \bigg(\sup_{n\ge 1} \lambda^{2a(n-1)} x_{n-1}^2 \bigg) \sum_{n=1}^{\infty} \lambda^{2b(n-1)} y_{n-1}^2 \\ &\le \lambda^{2(-1+a+b)} \|x\|_{H^{a}}^2 \|y\|_{H^{b}}^2. \end{aligned}$$ Similarly, we can obtain $$ J_2 \le \lambda^{-2b} \|x\|_{H^{a}}^2 \|y\|_{H^{b}}^2. $$ Combining the above two estimates, we complete the proof. \end{proof} \begin{corollary}\label{subs-2-cor} If $ a+b+c\geq 1$, then the trilinear functional $$H^a\times H^b\times H^c\ni (x,y,z)\mapsto \<B(x,y),z\>$$ is continuous in each argument; as a consequence, if $a+2b\ge 1$, then $\<B(x,y),y\>=0$. \end{corollary} \begin{proof} We have $$|\<B(x,y),z\>| \le \|B(x,y) \|_{H^{-c}} \|z \|_{H^{c}} \le \|B(x,y) \|_{H^{-1+a+b}} \|z \|_{H^{c}}, $$ where the last step is due to $-c\le -1+a+b$. Then by Lemma \ref{lem-nonlinearity-2}, we have $|\<B(x,y),z\>| \lesssim \|x \|_{H^{a}} \|y \|_{H^{b}} \|z \|_{H^{c}}$ which implies the first claim. To prove the second one, we approximate $x\in H^a, \, y\in H^b $ by vectors $x^N,\, y^N$ with only finitely many nonzero components; using the identity $\big\<B(x^N, y^N), y^N \big\>=0$, we immediately get the result thanks to the continuity of the trilinear functional. \end{proof} The next two lemmas concern properties of $\{e^{tS} \}_{t\geq 0}$ which are similar to those of the heat semigroup. \begin{lemma}\label{similar-heat-semigroup-property} Let $x\in H^a$, $a \in \mathbb{R}$. Then: \begin{itemize} \item[\rm (i)] for any $\rho \ge 0$, it holds that $\|e^{tS}x\|_{H^{a+\rho}} \le C_{\rho} t^{-\frac{\rho}{2}}\|x\|_{H^{a}}$, where $C_{\rho}$ is a decreasing function of $\rho$ when $\rho \le 1$ and an increasing function of $\rho$ when $\rho \ge 1$; \item[\rm (ii)] for any $\rho\in[0,2]$, it holds that $\|(I-e^{tS})x\|_{H^{a-\rho}} \lesssim t^{\frac{\rho}{2}}\|x\|_{H^{a}} $. \end{itemize} \end{lemma} \begin{proof} First we prove (i). Notice that for any $ \eta >0, y>0$, we have $y^{\eta}e^{-y} \le \eta^{\eta}e^{-\eta}=C_{\eta}$, where $C_{\eta}$ is a decreasing function of $\eta$ when $\eta \le 1$ and an increasing function of $\eta$ when $\eta \ge 1$.
Therefore, setting $\eta =\rho,\ y=t\lambda^{2n}$, we can easily get that $$ \|e^{tS}x\|_{H^{a+\rho}}=\bigg[ \sum\limits_{n=1}^{\infty}\lambda^{2n(a+\rho)} \big( e^{-t\lambda^{2n}}x_n \big)^2 \bigg]^{\frac12} \le C_{\rho} t^{-\frac{\rho}{2}}\|x\|_{H^{a}}. $$ Next we prove (ii). Note that for any $ \eta\in[0,1],\ y>0$, we have $1-e^{-y}\lesssim y^{\eta}$. Hence, setting $\eta=\frac{\rho}{2},\ y=t\lambda^{2n}$, we can easily obtain that $$ \|(I-e^{tS})x\|_{H^{a-\rho}}=\bigg[ \sum\limits_{n=1}^{\infty} \lambda^{2n(a-\rho)} \big( \big(1-e^{-t\lambda^{2n}}\big) x_n \big)^2 \bigg]^{\frac12} \lesssim t^{\frac{\rho}{2}}\|x\|_{H^{a}}. $$ This completes the proof. \end{proof} \begin{lemma}\label{lem-heat-integral-property} For any $a \in \mathbb{R}$ and $ f\in L_t^2 H^{a}$, it holds that for $ t\in [0,T]$, $$ \bigg\|\int_0^t e^{\nu (t-r)S}f_r \,\d r \bigg\|_{H^{a+1}}^2 \lesssim \frac{1}{\nu}\int_0^t \|f_r\|_{H^{a}}^2 \,\d r. $$ Similarly, for $s< \tau$ we have $$ \int_s^{\tau} \bigg\|\int_s^t e^{\nu (t-r)S}f_r \,\d r \bigg\|_{H^{a+2}}^2 \,\d t \lesssim \frac{1}{\nu^2}\int_s^{\tau} \|f_r\|_{H^{a}}^2 \,\d r. $$ \end{lemma} \begin{proof} By Cauchy's inequality, $$\begin{aligned} \bigg\|\int_0^t e^{\nu (t-r)S}f_r \,\d r \bigg\|_{H^{a+1}}^2 &= \sum\limits_{n=1}^{\infty} \lambda^{2(a+1)n}\bigg(\int_0^t e^{-\nu(t-r)\lambda^{2n}}f_n(r) \,\d r \bigg)^2\\ &\le \sum\limits_{n=1}^{\infty} \lambda^{2(a+1)n} \int_0^t e^{-2\nu(t-r)\lambda^{2n}} \d r \int_0^t f_n^2(r) \,\d r\\ &\lesssim \frac{1}{\nu} \int_0^t \sum\limits_{n=1}^{\infty} \lambda^{2a n} f_n^2(r) \,\d r = \frac{1}{\nu} \int_0^t \| f(r)\|_{H^{a}}^2 \,\d r. \end{aligned}$$ For the second inequality, we apply Cauchy's inequality as follows: $$\aligned \bigg(\int_s^t e^{-\nu(t-r)\lambda^{2n}}f_n(r) \,\d r \bigg)^2 &\leq \int_s^t e^{-\nu(t-r)\lambda^{2n}} \,\d r \int_s^t e^{-\nu(t-r)\lambda^{2n}} f_n^2(r) \,\d r \\ &\leq \frac1{\nu \lambda^{2n}} \int_s^t e^{-\nu(t-r)\lambda^{2n}} f_n^2(r) \,\d r . \endaligned $$ Integrating both sides and using Fubini's Theorem, we can easily get the second estimate by a similar calculation. \end{proof} \subsection{Some uniqueness results for viscous dyadic models}\label{section-Uniqueness} In this part, we consider the viscous dyadic model in \cite{Ches08}, namely, we fix some $\lambda>1$, $\alpha>0$ and $\nu>0$, and consider \begin{equation}\label{dm-1} \frac{\d X_n}{\d t}= \lambda^{n-1} X_{n-1}^2 - \lambda^n X_n X_{n+1} - \nu \lambda^{2\alpha n} X_n, \quad n\geq 1, \end{equation} with initial condition $X(0)=x\in H= \ell^2$. Note that if $\alpha=1$, then this system coincides with the component form of \eqref{thm-scaling-limit.1}. We define the diagonal matrix $$S^\alpha= -{\rm diag}(\lambda^{2\alpha}, \lambda^{4\alpha},\ldots) $$ and simply write $S^1$ as $S$; then the system of equations \eqref{dm-1} can be written as \begin{equation}\label{dm-2} \frac{\d X}{\d t}= B(X)+ \nu S^\alpha X, \quad X(0) =x. \end{equation} We first prove a uniqueness result in the case $\alpha=1$. \begin{proposition}\label{prop-uniqueness} If $\alpha=1$, then for any $x\in \ell^2$, the viscous dyadic model \eqref{dm-2} has at most one solution in $L^\infty(0,T; \ell^2)$. \end{proposition} \begin{proof} Let $X^1, X^2\in L^\infty(0,T; \ell^2)$ be two solutions to \eqref{dm-2}; we write the equations in mild form $$ X^i(t) = e^{\nu tS}x + \int_0^t e^{\nu (t-r)S} B( X^i(r) )\,\d r, \quad t\in [0,T],\, i=1,2. $$ As a result, letting $f(t)= X^1(t) - X^2(t)$, we have $$f(t) = \int_0^t e^{\nu (t-r)S}\big[ B(f(r), X^1(r)) + B(X^2(r), f(r))\big]\,\d r .
$$ Then by Lemmas \ref{lem-heat-integral-property} and \ref{lem-nonlinearity-2}, $$\aligned \|f(t) \|_{\ell^2}^2 &\lesssim \frac1\nu \int_0^t \big\| B(f(r), X^1(r)) + B(X^2(r), f(r))\big\|_{H^{-1}}^2\,\d r \\ &\lesssim \frac1\nu \int_0^t \|f(r)\|_{\ell^2}^2 \big[\| X^1(r) \|_{\ell^2}^2 + \| X^2(r) \|_{\ell^2}^2 \big]\,\d r. \endaligned $$ Since $X^1,\, X^2\in L^\infty(0,T; \ell^2)$, the Gronwall inequality immediately gives us that $\|f(t) \|_{\ell^2} =0$ for all $t\in [0,T]$. \end{proof} We also have the following results. \begin{proposition}\label{prop-Leray-Hopf} Let $\alpha>0$. For any $x\in \ell^2$, there exists a Leray-Hopf solution $X(t)$ of \eqref{dm-1}. In particular, the energy inequality $$\|X(t) \|_{\ell^2}^2 + 2\nu\int_{t_0}^t \|X(s) \|_{H^\alpha}^2\,\d s \leq \|X(t_0) \|_{\ell^2}^2 $$ holds for a.e. $t_0 \in [0,\infty)$ and all $t\ge t_0$. Moreover, if $\alpha\geq 1/2$, then the energy equality holds for all $0\le t_0 \le t$ and Leray-Hopf solutions are unique. \end{proposition} \begin{proof} The first two assertions are proved in \cite[Theorem 4.1]{Ches08}; in particular, we see from the proof that $t_0$ can take the value $0$. Here we only prove the remaining part. To show the energy equality on any time interval $[0,T]$, we shall apply \cite[Theorem 2.12]{RZ18}. For this purpose, we take the triple $H^{1/2} \subset \ell^2 \subset H^{-1/2}$; using the terminology of \cite{RZ18}, $\ell^2$ equipped with the pair $H^{1/2},\, H^{-1/2}$ is called a rigged Hilbert space. It is not difficult to see that the conditions in \cite[Theorem 2.12]{RZ18} can be verified; thus, up to changing values on a subset of $[0,T]$ with zero Lebesgue measure, the solution $X$ satisfies the following identity: for all $t\in [0,T]$, $$\aligned \|X(t) \|_{\ell^2}^2 &= \|X(0) \|_{\ell^2}^2 + 2\int_0^t \big\<X(s), B(X(s))+\nu S^\alpha X(s) \big\>\,\d s \\ &= \|X(0) \|_{\ell^2}^2 - 2\nu \int_0^t \|X(s) \|_{H^\alpha}^2 \,\d s, \endaligned $$ where in the second step we have used Corollary \ref{subs-2-cor}. It is clear that we can replace the starting time $0$ by any $t_0\in [0,T]$ with $t_0\le t$. It remains to show the uniqueness of Leray-Hopf solutions. Let $X$ and $\tilde X$ be two Leray-Hopf solutions to \eqref{dm-2}; then they belong to $L^\infty(0,T; \ell^2) \cap L^2(0,T; H^\alpha)$. We have $$\aligned \frac{\d}{\d t} (X-\tilde X) &=B(X-\tilde X, X) + B(\tilde X, X-\tilde X)+ \nu S^\alpha (X-\tilde X) . \endaligned $$ Taking the scalar product with $X-\tilde X$ in $\ell^2$ gives us $$\frac12 \frac{\d}{\d t} \|X-\tilde X \|_{\ell^2}^2 = -\nu \|X-\tilde X \|_{H^\alpha}^2 + \big\<B(X-\tilde X, X), X-\tilde X \big\>, $$ where we have used the orthogonality property $\<B(x,y),y\>=0$. By Lemma \ref{lem-nonlinearity-2} and Cauchy's inequality, we arrive at $$\aligned \frac12 \frac{\d}{\d t} \|X-\tilde X \|_{\ell^2}^2 &\leq -\nu \|X-\tilde X \|_{H^\alpha}^2 + \|B(X-\tilde X, X) \|_{H^{-\alpha}} \|X-\tilde X \|_{H^\alpha} \\ &\leq -\nu \|X-\tilde X \|_{H^\alpha}^2 + C_{\lambda,\alpha} \|X-\tilde X \|_{\ell^2}\, \|X\|_{H^{1-\alpha}} \|X-\tilde X \|_{H^\alpha} \\ &\leq \frac{ C_{\lambda,\alpha}^2}{4\nu} \|X-\tilde X \|_{\ell^2}^2 \|X\|_{H^{1-\alpha}}^2. \endaligned $$ Since $\alpha\geq 1/2$, we have $\|X\|_{H^{1-\alpha}} \leq \|X\|_{H^\alpha}$, thus $$ \frac{\d}{\d t} \|X-\tilde X \|_{\ell^2}^2 \leq \frac{ C_{\lambda,\alpha}^2}{2\nu} \|X-\tilde X \|_{\ell^2}^2 \|X\|_{H^\alpha}^2.
$$ As $X\in L^2(0,T; H^\alpha)$, the function $t\mapsto \|X(t) \|_{H^\alpha}^2$ is integrable, and the Gronwall inequality implies uniqueness of solutions. \end{proof} \section{Well posedness of \eqref{stoch-dya-model-N} and scaling limit}\label{section-existence-Scaling-limit} This section consists of two parts: Section \ref{subsec-weak-existence} is devoted to the well posedness of the stochastic dyadic model \eqref{stoch-dya-model-N}, by using the method of the Girsanov transform; we give in Section \ref{subsec-weak-convergence} a heuristic proof of the weak convergence of \eqref{stoch-dya-model-N} to the deterministic dyadic model \eqref{thm-scaling-limit.1}. \subsection{Well posedness of \eqref{stoch-dya-model-N}} \label{subsec-weak-existence} In this section, we apply the Girsanov transform, as in \cite[Section 3.4]{Flandoli10}, to prove the well posedness for equation \eqref{stoch-dya-model-N} with initial condition $X(0)=x \in \ell^2$. We start by explaining the idea of the Girsanov transform. Assume that $(X_n)_{n\ge1}$ is an $L^{\infty}$-weak solution and the first component of $\theta$ is nonzero, i.e. $\theta_1 \ne 0$. We rewrite the component form \eqref{com-stoch-dya-model-N} as \begin{equation}\label{subsec-weak-existence.1} \begin{aligned} \d X_n &= \sqrt{2\nu} \theta_1 \lambda^{n-1}X_{n-1}\Big(\frac{ X_{n-1}}{\sqrt{2\nu} \theta_1} \,\d t + \,\d W_{n-1,1}\Big)-\sqrt{2\nu} \theta_1 \lambda^n X_{n+1}\Big(\frac{ X_n}{\sqrt{2\nu} \theta_1} \,\d t + \,\d W_{n,1}\Big)\\ &\quad + \sqrt{2\nu} \sum\limits_{j=1}^{n-2} \lambda^j \theta_{n-j} X_j \,\d W_{j,n-j} -\sqrt{2\nu} \lambda^n \sum\limits_{j=2}^{\infty} \theta_j X_{n+j} \,\d W_{n,j}\\ &\quad - \nu \Big(\lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)}\Big) X_n \,\d t,\quad n \in \mathbb{Z}_+. \end{aligned} \end{equation} To transform these nonlinear equations into linear ones, as in \cite{Flandoli10}, we observe that we need \begin{equation}\label{new-BM} \widehat W_{i,j}(t) =\begin{cases} \frac{1}{\sqrt{2\nu} \theta_1} \int_0^t X_i(s) \,\d s + W_{i,1}(t),\ & j=1\\ W_{i,j}(t),\ & j\ne 1 \end{cases} \end{equation} to be a family of independent Brownian motions under some new probability measure. In fact, since $(X_n)_{n\ge1}$ is an $L^{\infty}$-weak solution, we know that $$\int_0^T \sum_{i=1}^{\infty} X_i^2 (s)\, \d s \leq CT < \infty,$$ and therefore the process $$L_t := - \frac{1}{\sqrt{2\nu} \theta_1} \sum_{i=1}^{\infty} \int_0^t X_i(s) \,\d W_{i,1}(s)$$ is well defined and is a martingale with quadratic variation $$[L,L]_t =\frac{1}{2\nu \theta_1^2 } \int_0^t \sum_{i=1}^{\infty} X_i^2 (s)\, \d s.$$ Again because $(X_n)_{n\ge1}$ is an $L^{\infty}$-weak solution, we have $$\E\, e^{\frac12 [L,L]_T}= \E \exp\bigg(\frac{1}{4\nu \theta_1^2 } \int_0^T \sum_{i=1}^{\infty} X_i^2 (s)\, \d s \bigg) \le \exp\bigg(\frac{CT}{4\nu \theta_1^2 } \bigg) < \infty, $$ which implies, by the Novikov criterion, that $$t\mapsto e^{L_t - \frac{1}{2}[L,L]_t}$$ is a strictly positive martingale. Hence for each $0\le T < \infty$, we can define a probability measure $Q$ on $\mathcal{F}_T$ by \begin{equation}\label{new-probability-measure} \frac{\d Q}{\d \P}=e^{L_T - \frac{1}{2}[L,L]_T}.
\end{equation} Then, by the strict positivity of $\frac{\d Q}{\d \P}$, $Q$ and $\P$ are equivalent on $\mathcal{F}_T$ and \begin{equation}\label{original-probability-measure} \frac{\d \P}{\d Q}=e^{R_T - \frac{1}{2}[R,R]_T}, \end{equation} where \begin{equation}\label{martingale-R} R_t =\frac{1}{\sqrt{2\nu} \theta_1} \sum_{i=1}^{\infty} \int_0^t X_i(s) \,\d \widehat W_{i,1}(s),\quad [R,R]_t =\frac{1}{2\nu \theta_1^2 } \int_0^t \sum_{i=1}^{\infty} X_i^2 (s)\, \d s. \end{equation} Under the new probability measure $Q$, the processes $\widehat W_{i,j}$ defined in \eqref{new-BM} form a family of independent Brownian motions, and \eqref{subsec-weak-existence.1} can be rewritten as \begin{equation}\label{auxiliary-linear-model} \begin{aligned} \d X_n &= \sqrt{2\nu} \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j \,\d \widehat W_{j,n-j} -\sqrt{2\nu} \lambda^n \sum\limits_{j=1}^{\infty} \theta_j X_{n+j} \,\d \widehat W_{n,j}\\ &\quad - \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n \,\d t,\quad n \in \mathbb{Z}_+. \end{aligned} \end{equation} In Stratonovich form, the system reads as $$ \d X_n= \sqrt{2\nu} \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j \circ \d \widehat W_{j,n-j} -\sqrt{2\nu} \lambda^n \sum\limits_{j=1}^{\infty} \theta_j X_{n+j} \circ \d \widehat W_{n,j} ,\quad n \in \mathbb{Z}_+, $$ which is the component form of the following linear equation \begin{equation}\label{auxiliary-linear-model-stratonovich} \d X= \sqrt{2\nu} \sum_{i=1}^{\infty} \lambda^i \sum\limits_{j=1}^{\infty} \theta_{j} A_{i,i+j}X \circ \d \widehat W_{i,j}. \end{equation} Hence we first prove the well posedness of \eqref{auxiliary-linear-model}. In the rest of this section, we denote the expectation with respect to $\P$ and $Q$ by $\E^{\P}$ and $\E^Q$, respectively. Let us give the definition of strong solutions to \eqref{auxiliary-linear-model}, which is the same as \cite[Definition 3.3]{Flandoli10}. \begin{definition}\label{definition-strong-solution} Let $(\Omega, \mathcal{F}_t, Q)$ be a filtered probability space and let $\big( \widehat W_k \big)_{k\in \mathbb{Z}_+^2}$ be a sequence of independent Brownian motions on $(\Omega, \mathcal{F}_t, Q)$. Given initial data $x\in \ell^2$, a strong solution of \eqref{auxiliary-linear-model} on $[0,T]$ in $\ell^2$ is an $\ell^2$-valued stochastic process $(X(t))_{t\in[0,T]}$, with continuous adapted components $X_n$, such that $Q$-a.s. $$\begin{aligned} X_n (t)&=x_n + \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j \,\d \widehat W_{j,n-j} -\sqrt{2\nu} \int_0^t \lambda^n \sum\limits_{j=1}^{\infty} \theta_j X_{n+j} \,\d \widehat W_{n,j}\\ &\quad - \nu \int_0^t \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n \,\d s \end{aligned}$$ for each $n\ge 1$ and $t\in [0,T]$, with the convention $X_0 \equiv 0$. We say that a solution is of class $L^{\infty}$ if there exists a constant $C>0$ such that $\|X(t)\|_{\ell^2} \le C$ for a.e. $(\omega,t)\in \Omega \times [0,T]$. \end{definition} We first establish strong uniqueness (in the probabilistic sense) for \eqref{auxiliary-linear-model}. Due to the complexity of our noise, we cannot use the trick in \cite[Theorem 3.6]{Flandoli10} to prove the uniqueness.
But we observe that we have the formal conservation of energy for \eqref{auxiliary-linear-model-stratonovich} since the matrices $\{A_{i,j} \}_{i<j}$ are skew-symmetric; based on this property, we apply the Laplace transform to prove the uniqueness of solutions to \eqref{auxiliary-linear-model}, following the method in \cite{Bian13} for the stochastic dyadic model on a tree. \begin{proposition}\label{thm-uniqueness-of-linear-model} Given initial data $x\in \ell^2$, strong uniqueness holds in the class of $L^{\infty}$-solutions to \eqref{auxiliary-linear-model} on $[0,T]$. \end{proposition} \begin{proof} By the linearity of \eqref{auxiliary-linear-model}, it is sufficient to prove that any $L^{\infty}$-solution with zero initial data $x=0$ is the zero solution $X=0$. We split the proof into two steps. \textbf{Step 1: Equation for $\E^Q [X_n^2(t)]$.} By It\^o's formula, we have \begin{equation}\label{linear-energy-for-component} \begin{aligned} \frac{1}{2}\d X_n^2 &= \d G_n - \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n^2 \,\d t + \nu \sum_{j=1}^{n-1}\lambda^{2j}\theta_{n-j}^2 X_j^2 \,\d t + \nu \lambda^{2n} \sum_{j=1}^{\infty} \theta_j^2 X_{n+j}^2 \,\d t, \end{aligned} \end{equation} where $$ G_n (t)= \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j X_n \,\d \widehat W_{j,n-j} -\sqrt{2\nu} \lambda^n \int_0^t \sum\limits_{j=1}^{\infty} \theta_j X_{n+j}X_n \,\d \widehat W_{n,j}. $$ Since $X$ is an $L^{\infty}$-solution, say $\|X(t)\|_{\ell^2} \le C$ for a.e. $(\omega,t)$, we can deduce from $\|\theta\|_{\ell^2}=1$ that $$\begin{aligned} \E^Q \int_0^T \sum_{j=1}^{n-1}\lambda^{2j}\theta_{n-j}^2 X_j^2 X_n^2 \,\d t \le T\lambda^{2n} C^4,\quad \E^Q \int_0^T \lambda^{2n} \sum_{j=1}^{\infty} \theta_j^2 X_{n+j}^2 X_n^2 \,\d t \le T\lambda^{2n} C^4. \end{aligned}$$ Therefore, $G_n$ is a martingale for each $n\ge1$ and $\E^Q [G_n(t)]=0$. Moreover, again by the fact that $X$ is an $L^{\infty}$-solution, we have \begin{equation}\label{expectation-energy-linear} \E^Q \big[\|X(t)\|_{\ell^2}^2 \big] \le C_1 \end{equation} for some constant $C_1 >0$; in particular, $\E^Q [X_n^2(t)] \le C_1$ for each $n\ge1$. Writing \eqref{linear-energy-for-component} in integral form and taking expectation, we arrive at $$ \begin{aligned} \E^Q [X_n^2(t)] &= - 2 \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg)\int_0^t \E^Q [X_n^2(s)] \,\d s\\ &\quad + 2\nu \sum_{j=1}^{n-1}\lambda^{2j}\theta_{n-j}^2 \int_0^t \E^Q[ X_j^2(s)] \,\d s + 2\nu \lambda^{2n} \sum_{j=1}^{\infty} \theta_j^2 \int_0^t \E^Q [X_{n+j}^2(s)] \,\d s, \end{aligned} $$ which implies that $\E^Q [X_n^2(t)]$ is continuously differentiable in $t$. Thus we can obtain a system of equations: for every $n\geq 1$, \begin{equation}\label{linear-energy-integral-component} \begin{aligned} \frac{\d}{\d t} \E^Q [X_n^2(t)] &= - 2 \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) \E^Q [X_n^2(t)] \\ &\quad + 2\nu \sum_{j=1}^{n-1}\lambda^{2j}\theta_{n-j}^2\, \E^Q [ X_j^2(t)] + 2\nu \lambda^{2n} \sum_{j=1}^{\infty} \theta_j^2\, \E^Q [X_{n+j}^2(t)]. \end{aligned} \end{equation} \textbf{Step 2: Uniqueness by Laplace transform.} Let $Y_n(t):=\E^Q [X_n^2(t)] \ge 0,\ n\in \Z_+$; then by \eqref{expectation-energy-linear}, \begin{equation}\label{Y-t} \|Y(t) \|_{\ell^1} = \sum_{n=1}^\infty Y_n(t) \le C_1,\quad t\ge 0, \end{equation} and for all $n\ge 1$, $Y_n\in C^1([0,T])$.
From \eqref{linear-energy-integral-component}, one can obtain an equation in matrix form ($Y(t)$ is a row vector) $$ \frac{\d}{\d t} Y(t)= Y(t) M, $$ where $M=(m_{n,j})$ is an infinite dimensional matrix whose entries are defined as $$ m_{n,n}= - 2 \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg),\quad m_{n,j}= \begin{cases} 2\nu \lambda^{2j}\theta_{n-j}^2 ,\ & n>j;\\ 2\nu \lambda^{2n}\theta_{j-n}^2 ,\ & n<j. \end{cases} $$ Note that this matrix $M$ is symmetric, with finite diagonal entries and $0\le m_{n,j}< +\infty$ when $ n\ne j$. Moreover, as $\| \theta \|_{\ell^2}=1$, we have \begin{equation}\label{conservative-matrix} -m_{n,n}=\sum_{j\ne n} m_{n,j} < +\infty \quad \mbox{for every } n\geq 1. \end{equation} Now we can rewrite \eqref{linear-energy-integral-component} as \begin{equation}\label{Y-equation} Y^{\prime}_n(t):=\frac{\d}{\d t} Y_n(t)=\sum_{j=1}^{\infty} Y_j(t)m_{j,n}, \quad n\ge 1. \end{equation} Denote by $\hat Y_n= \int_0^\infty e^{-t} Y_n(t) \,\d t$ the Laplace transform of $Y_n$ evaluated at $1$; then by \eqref{Y-t}, we have $\sum_n \hat Y_n \le C_1$. Thus we can find $k\in \Z_+$ such that $\hat Y_k \ge \hat Y_n$ for all $n\in \Z_+$ (such a $k$ exists since $\hat Y_n \to 0$ as $n\to \infty$). From \eqref{Y-equation} and \eqref{Y-t}, we obtain that $$ Y^{\prime}_k(t) \le Y_k(t)|m_{k,k}|+ \sum_{j\ne k} Y_j(t)m_{j,k} \le C_1|m_{k,k}|+ C_1|m_{k,k}| < +\infty, $$ where in the second inequality we use \eqref{conservative-matrix} and the fact that the matrix $M$ is symmetric, and the last inequality is due to $-m_{k,k} < +\infty$. Now, recalling that $Y_k(0)=0$, we can apply integration by parts to get that $$ \hat Y_k= \int_0^\infty e^{-t} Y^{\prime}_k(t) \,\d t= \int_0^\infty e^{-t} \bigg[\sum_{j=1}^{\infty} Y_j(t)m_{j,k}\bigg] \,\d t = \sum_{j=1}^{\infty} \hat Y_j m_{j,k}. $$ Recall that $\hat Y_k \ge \hat Y_n$ for all $n\in \Z_+$; then by \eqref{conservative-matrix} again, we obtain $$ \hat Y_k= \hat Y_k m_{k,k}+ \sum_{j\ne k} \hat Y_j m_{j,k} \le \hat Y_k \bigg(m_{k,k}+ \sum_{j\ne k} m_{j,k}\bigg)=0. $$ Therefore, we have $\hat Y_k =0$ and so $\hat Y_n=0$ for all $n\in \Z_+$. Hence $Y_n(t)=\E^Q [X_n^2(t)]=0$ for any $n\in \Z_+$ and $t \ge 0$, which implies that $Q$-a.s., $X=0$. \end{proof} We now present an existence result. \begin{proposition}\label{thm-existence-of-linear-model} Given initial data $x=(x_n)_{n\ge 1} \in \ell^2$, there exists a strong solution to \eqref{auxiliary-linear-model} in $L^{\infty}(\Omega\times [0,T]; \ell^2)$. \end{proposition} \begin{proof} \textbf{Step 1: Galerkin approximations.} For any $ N \in \mathbb{Z}_+$, let $X^{(N)}= \big(X^{(N)}_1, X^{(N)}_2,\cdots, X^{(N)}_{N},0,0,\cdots \big)^\ast$ be a column vector; we consider the following finite dimensional stochastic system \begin{equation}\label{finite-approximate} \begin{aligned} &\d X^{(N)}= \sqrt{2\nu} \sum\limits_{i=1}^{N-1} \lambda^i \sum\limits_{j=1}^{N-i} \theta_j A_{i,i+j}X^{(N)} \circ \,\d \widehat W_{i,j}, \\ &X_n^{(N)}(0)=x_n,\quad 1\leq n \leq N, \end{aligned} \end{equation} where $X^{(N)}_0\equiv 0$. Then, since $A_{i,j}$ is skew-symmetric, we get that \begin{equation*} \begin{aligned} \d \|X^{(N)}\|_{\ell^2}^2 &= 2\sqrt{2\nu} \sum_{i=1}^{N -1} \lambda^i \sum_{j=1}^{N-i} \theta_j \big\langle X^{(N)}, A_{i,i+j}X^{(N)} \big\rangle_{\ell^2} \circ \,\d \widehat W_{i,j} =0. \end{aligned} \end{equation*} Thus we have for any $ N \in \mathbb{Z}_+$, $Q$-a.s., \begin{equation}\label{finite-energy-dissipation} \|X^{(N)}(t)\|_{\ell^2} = \|X^{(N)}(0)\|_{\ell^2} \le \|x\|_{\ell^2} \quad \mbox{for all } t\ge 0.
\end{equation} By the classical theory of finite dimensional SDEs, for any fixed $N$, there exists a unique continuous adapted solution $ X^{(N)}\in C([0,T],\ell^2)$ of \eqref{finite-approximate}. The bound \eqref{finite-energy-dissipation} implies that $X^{(N)}=(X_n^{(N)})_{n\ge 1}$ is bounded in $L^p (\Omega \times [0,T];\ell^2)$ for any $p>1$, uniformly with respect to $N$. Hence there exists a subsequence $N_k \to \infty$ such that $$\begin{aligned} &(X_n^{(N_k)})_{n\ge 1} \to (X_n)_{n\ge 1} \quad \mbox{weakly in}\quad L^p (\Omega \times [0,T];\ell^2)\quad \mbox{for each}\quad p>1,\\ &(X_n^{(N_k)})_{n\ge 1} \to (X_n)_{n\ge 1} \quad \mbox{weakly-$\ast$ in}\quad L^{\infty} (\Omega \times [0,T];\ell^2). \end{aligned}$$ Therefore, $X:=(X_n)_{n\ge 1} \in L^{\infty} (\Omega \times [0,T];\ell^2)$, and moreover, $Q$-a.s., \begin{equation}\label{solu-bound} \|X(t) \|_{\ell^2} \leq \|x \|_{\ell^2} \quad \mbox{for a.e. } t\in [0,T]. \end{equation} By the same standard arguments as in \cite[proof of Theorem 3.7]{Flandoli10} (see also \cite{Pardoux75, Rozovskii90}), the subspace of $L^p (\Omega \times [0,T];\ell^2)$ consisting of progressively measurable processes is strongly closed, hence also weakly closed. Therefore $(X_n)_{n\ge 1}$ is progressively measurable. \textbf{Step 2: Strong solution.} We now show that $(X_n)_{n\ge 1}$ is a strong solution to \eqref{auxiliary-linear-model} in $L^{\infty}(\Omega\times [0,T]; \ell^2)$. Rewriting \eqref{finite-approximate} in It\^o form, integrating from $0$ to $t$ and replacing $N$ by $N_k$, we obtain \begin{equation}\label{finite-approximation-integral} \begin{aligned} X_n^{(N_k)}(t) & = x_n + \nu \int_0^t \big( S^{(N_k)}_{\theta} X^{(N_k)} \big)_n \,\d s \\ &\quad + \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j\theta_{n-j} X_j^{(N_k)} \,\d \widehat W_{j,n-j}- \sqrt{2\nu} \lambda^n \int_0^t \sum\limits_{j=1}^{N_k-n} \theta_j X_{n+j}^{(N_k)} \,\d \widehat W_{n,j} \\ &=: J_1+ J_2 +J_3, \end{aligned} \end{equation} where we have omitted the time variable $s$ in the integrals to save space and \begin{equation}\label{S-N-theta} S^{(N_k)}_{\theta} = - {\rm diag}\bigg(\lambda^2 \sum_{j=1}^{N_k-1} \theta_j^2, \ldots, \sum_{j=1}^{i-1} \lambda^{2j} \theta_{i-j}^2 + \lambda^{2i} \sum_{j=1}^{N_k-i} \theta_{j}^2, \ldots, \sum_{j=1}^{N_k-1} \lambda^{2j} \theta_{N_k-j}^2, 0, \ldots \bigg). \end{equation} We prove that for each $n$, the integrals $J_1,\, J_2$ and $J_3$ converge as $k \to \infty$ to the corresponding limits in $L^2 (\Omega )$. For $J_1$, we split it into two parts $$\begin{aligned} J_1 &= -\nu \bigg(\sum_{j=1}^{n-1} \lambda^{2j} \theta_{n-j}^2 + \lambda^{2n} \sum_{j=1}^{N_k-n} \theta_{j}^2 \bigg) \int_0^t X_n^{(N_k)}\, \d s \\ &= \nu \int_0^t \big( S_{\theta} X^{(N_k)} \big)_n \, \d s - \nu \lambda^{2n} \bigg( \sum_{j=1}^{N_k-n} \theta_{j}^2 -1 \bigg) \int_0^t X_n^{(N_k)}\, \d s \\ &=: J_{1,1}+J_{1,2}. \end{aligned}$$ First, we have $$\begin{aligned} \E \Bigg[\bigg(\sum_{j=1}^{n-1} \lambda^{2j} \theta_{n-j}^2 + \lambda^{2n} \bigg) \int_0^t X_n^{(N_k)}\, \d s \Bigg]^2 &\le \bigg(\sum_{j=1}^{n-1} \lambda^{2j} \theta_{n-j}^2 + \lambda^{2n} \bigg)^2 T\, \E \int_0^T (X_n^{(N_k)})^2 \, \d s\\ & \lesssim_{\lambda,n,T} \E \int_0^T \|X^{(N_k)}(s)\|_{\ell^2}^2 \,\d s, \end{aligned}$$ which implies that $J_{1,1}$ is a continuous linear operator from the subspace of $L^2 (\Omega \times [0,T];\ell^2)$ of progressively measurable processes to $L^2 (\Omega)$. Therefore, $J_{1,1}$ is weakly continuous.
Recalling that in Step 1 we proved $\big(X_n^{(N_k)}\big)_{n\ge 1} \to (X_n)_{n\ge 1}$ weakly in $L^2 (\Omega \times [0,T];\ell^2)$, we obtain that $$ \nu \int_0^t \big( S_{\theta} X^{(N_k)} \big)_n \,\d s \to \nu \int_0^t \big( S_{\theta} X \big)_n \,\d s $$ weakly in $L^2(\Omega)$. For $J_{1,2}$, by the uniform bound \eqref{finite-energy-dissipation} we have $$\begin{aligned} \E \Bigg[\lambda^{2n} \bigg(\sum_{j=1}^{N_k-n} \theta_{j}^2 -1 \bigg) \int_0^t X_n^{(N_k)}\, \d s\Bigg]^2 & \le \lambda^{4n}\bigg(\sum_{j=1}^{N_k-n} \theta_{j}^2 -1 \bigg)^2 T\, \E \int_0^t \big(X_n^{(N_k)}\big)^2 \, \d s \\ & \lesssim_{\lambda,n,T,\|x\|_{\ell^2}} \bigg(\sum_{j=1}^{N_k-n} \theta_{j}^2 -1 \bigg)^2 , \end{aligned}$$ which converges to zero as $k\to \infty$ since $\|\theta\|_{\ell^2}^2=1$. This implies that $J_{1,2} \to 0$ strongly in $L^2(\Omega)$, and thus also weakly in $L^2(\Omega)$. For the stochastic integral $J_2$, we have $$ \E \Bigg( \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j\theta_{n-j} X_j^{(N_k)} \,\d \widehat W_{j,n-j} \Bigg)^2 = \E \int_0^t \sum\limits_{j=1}^{n-1} \lambda^{2j}\theta_{n-j}^2 \big(X_j^{(N_k)}\big)^2 \,\d s \lesssim_{\lambda,n} \E \int_0^T \|X^{(N_k)}\|_{\ell^2}^2 \,\d s. $$ By the same arguments as for $J_{1,1}$ above, we derive that, as $k\to \infty$, $$ \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j\theta_{n-j} X_j^{(N_k)} \,\d \widehat W_{j,n-j} \to \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j\theta_{n-j} X_j \,\d \widehat W_{j,n-j} $$ weakly in $L^2(\Omega)$. Similarly, for $J_3$ we get $$ \E \Bigg( \lambda^n \int_0^t \sum\limits_{j=1}^{N_k-n} \theta_j X_{n+j}^{(N_k)} \,\d \widehat W_{n,j} \Bigg)^2 =\E \Bigg( \lambda^n \int_0^t \sum\limits_{j=1}^{\infty} \theta_j X_{n+j}^{(N_k)} \,\d \widehat W_{n,j} \Bigg)^2 \lesssim_{\lambda,n} \E \int_0^T \|X^{(N_k)}\|_{\ell^2}^2 \,\d s. $$ Therefore, as $k\to \infty$, we also have $$ \sqrt{2\nu}\lambda^n \int_0^t \sum\limits_{j=1}^{N_k-n} \theta_j X_{n+j}^{(N_k)} \,\d \widehat W_{n,j} \to \sqrt{2\nu}\lambda^n \int_0^t \sum\limits_{j=1}^{\infty} \theta_j X_{n+j} \,\d \widehat W_{n,j} $$ weakly in $L^2(\Omega)$. Combining the above convergence results, we finally get that for each $n$, $Q$-a.s., $$\begin{aligned} X_n(t)&= x_n +\nu \int_0^t (S_{\theta} X)_n \,\d s + \sqrt{2\nu} \int_0^t \sum\limits_{j=1}^{n-1} \lambda^j\theta_{n-j} X_j \,\d \widehat W_{j,n-j}\\ &\quad - \sqrt{2\nu} \lambda^n \int_0^t \sum\limits_{j=1}^{\infty} \theta_j X_{n+ j} \,\d \widehat W_{n,j}. \end{aligned}$$ These integral equations imply that there is a modification of $X$ all of whose components are continuous. This completes the proof of existence. \end{proof} We are now ready to follow the arguments of \cite[Section 3.4.5]{Flandoli10} to prove the well posedness for \eqref{stoch-dya-model-N}, by using the Girsanov transform. \begin{proposition}\label{thm-weak-uniqueness} Given initial data $x\in \ell^2$, in the class of $L^{\infty}$-weak solutions on $[0,T]$, weak uniqueness holds for \eqref{stoch-dya-model-N}. \end{proposition} \begin{proof} Assume that $\big( \Omega^{(k)},\mathcal{F}_t^{(k)},\P^{(k)},W^{(k)},X^{(k)} \big), k=1,2$ are two $L^{\infty}$-weak solutions to \eqref{stoch-dya-model-N} with the same initial data $x\in \ell^2$.
Then we can obtain \begin{equation}\label{two-model-prove-uniqueness} \begin{aligned} \d X_n^{(k)} &= \sqrt{2\nu} \sum\limits_{j=1}^{n-1} \lambda^j \theta_{n-j} X_j^{(k)} \,\d \widehat W_{j,n-j}^{(k)} -\sqrt{2\nu} \lambda^n \sum\limits_{j=1}^{\infty} \theta_j X_{n+j}^{(k)} \,\d \widehat W_{n,j}^{(k)}\\ &\quad - \nu \bigg( \lambda^{2n} + \sum_{j=1}^{n-1} \theta_j^2 \lambda^{2(n-j)} \bigg) X_n^{(k)} \,\d t, \end{aligned} \end{equation} where for each $k=1,2$, $$ \widehat W_{i,j}^{(k)}(t) =\begin{cases} \frac{1}{\sqrt{2\nu} \theta_1} \int_0^t X_i^{(k)}(s) \,\d s + W_{i,1}^{(k)}(t),\ & j=1\\ W_{i,j}^{(k)}(t),\ & j\ne 1 \end{cases} $$ is a sequence of independent Brownian motions on $\big( \Omega^{(k)},\mathcal{F}_t^{(k)},Q^{(k)} \big)$ and $Q^{(k)}$ is defined by \eqref{new-probability-measure} with respect to $\big( \P^{(k)},W^{(k)},X^{(k)} \big)$. Since $\P^{(k)}$ and $Q^{(k)}$ are equivalent on $\mathcal{F}^{(k)}_T$, we can easily get that $X^{(k)}, k=1,2$ are two $L^{\infty}$-solutions to \eqref{auxiliary-linear-model} under $Q^{(k)}$. We have proved in Proposition \ref{thm-uniqueness-of-linear-model} that \eqref{auxiliary-linear-model} has strong uniqueness in the class of $L^{\infty}$-solutions on $[0,T]$. Therefore it has uniqueness in law on $C([0,T];\R)^{\N}$ by the Yamada-Watanabe theorem, i.e., the laws under $Q^{(k)}$ are the same. Here we use the Yamada-Watanabe theorem in the infinite dimensional context; it can be proved by following step by step the finite dimensional case (cf. \cite[Chap. 9, Lemma 1.6 and Theorem 1.7]{RevuzYor94}), and we omit the proof here. Given $n\in \N, t_1,\dots,t_n \in [0,T]$ and a bounded measurable function $f:(\ell^2)^n \to \R$, by \eqref{original-probability-measure}, we get $$\begin{aligned} \E^{\P^{(k)}} \big[ f\big(X^{(k)}(t_1),\dots,X^{(k)}(t_n) \big) \big] = \E^{Q^{(k)}} \Big[e^{R^{(k)}_T - \frac{1}{2}[R^{(k)},R^{(k)}]_T} f\big( X^{(k)}(t_1),\dots,X^{(k)}(t_n) \big) \Big], \end{aligned}$$ where $$ R^{(k)}_t =\frac{1}{\sqrt{2\nu} \theta_1} \sum_{i=1}^{\infty} \int_0^t X^{(k)}_i(s) \,\d \widehat W^{(k)}_{i,1}(s). $$ Then consider the enlarged system made of the stochastic equations \eqref{two-model-prove-uniqueness} and the equation $$ \d R^{(k)} =\frac{1}{\sqrt{2\nu} \theta_1} \sum_{i=1}^{\infty} X^{(k)}_i \,\d \widehat W^{(k)}_{i,1}. $$ This enlarged system obviously has strong uniqueness and thus weak uniqueness by the Yamada-Watanabe theorem again. Therefore, we have that under $Q^{(k)}$, the law of $(R^{(k)},X^{(k)})$ on $C([0,T];\R)\times C([0,T];\R)^{\N}$ is independent of $k=1,2$. Thus $$ \E^{\P^{(1)}} \big[ f\big( X^{(1)}(t_1),\dots,X^{(1)}(t_n) \big) \big]= \E^{\P^{(2)}} \big[ f\big( X^{(2)}(t_1),\dots,X^{(2)}(t_n) \big) \big], $$ which implies the uniqueness of the laws of $X^{(k)}$ on $C([0,T];\R)^{\N}$. This completes the proof of weak uniqueness. \end{proof} \begin{proposition}\label{thm-weak-existence} Given initial data $x\in \ell^2$, there exists an $L^{\infty}$-weak solution $X$ on $[0,T]$ of \eqref{stoch-dya-model-N} such that almost surely, $\|X(t) \|_{\ell^2} \le \|x \|_{\ell^2}$ for a.e. $t\in [0,T]$. \end{proposition} \begin{proof} Let $\big( \Omega,\mathcal{F}_t,Q,\widehat W,X \big)$ be a solution in $L^{\infty}(\Omega\times [0,T];\ell^2)$ of the linear equation \eqref{auxiliary-linear-model}, provided by Proposition \ref{thm-existence-of-linear-model}; in particular, $X=\{X(t) \}_{t\in [0,T]}$ satisfies the uniform bound \eqref{solu-bound}.
We introduce a new probability measure $$ \frac{\d \P}{\d Q}=e^{R_T - \frac{1}{2}[R,R]_T}, $$ where $R_t$ is defined in \eqref{martingale-R}. Then under $\P$, the processes $$ W_{i,j}(t) :=\begin{cases} \frac{1}{\sqrt{2\nu} \theta_1} \int_0^t X_i \,\d s + \widehat W_{i,1}(t),\ & j=1\\ \widehat W_{i,j}(t),\ & j\ne 1 \end{cases} $$ form a family of Brownian motions. Hence we obtain that $\big( \Omega,\mathcal{F}_t,\P, W,X \big)$ is an $L^{\infty}$-weak solution to \eqref{stoch-dya-model-N}. Here the $L^{\infty}$-property \eqref{solu-bound} is preserved since $\P$ and $Q$ are equivalent. This completes the proof of weak existence. \end{proof} Combining Propositions \ref{thm-weak-uniqueness} and \ref{thm-weak-existence}, we finish the proof of Theorem \ref{thm-existence}. \subsection{Weak convergence to deterministic dyadic model} \label{subsec-weak-convergence} In this section we take a sequence of coefficients $\{\theta^N \}_{N\geq 1}$ satisfying \begin{equation}\label{theta-N} \| \theta^N \|_{\ell^2} =1 \ (\forall\, N\geq 1), \quad \| \theta^N \|_{\ell^\infty} \to 0 \mbox{ as } N\to \infty, \end{equation} and consider the stochastic dyadic models \begin{equation}\label{stoch-dyadic-N} \d X^N= B(X^N)\,\d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty \theta^N_j A_{i,i+j} X^N \circ \d W_{i,j}, \quad X^N(0) = x\in \ell^2. \end{equation} The It\^o form of \eqref{stoch-dyadic-N} is $$\d X^N= B(X^N)\,\d t + \sqrt{2\nu} \sum_i \lambda^i \sum_{j=1}^\infty \theta^N_j A_{i,i+j} X^N \, \d W_{i,j} + \nu S_{\theta^N} X^N \,\d t, $$ where $S_{\theta^N}$ is the diagonal matrix defined in Section \ref{subsec-well-posedness}. By Theorem \ref{thm-existence}, for each $N\geq 1$, there exists an $L^\infty$-weak solution $X^N= (X^N_n)_{n\geq 1}$ fulfilling $\P$-a.s. $\|X^N(t) \|_{\ell^2} \leq \|x \|_{\ell^2}$ for all $t\in [0,T]$. Our purpose is to prove \begin{theorem}\label{thm-scaling-limit} Let $\{X^N\}_N$ be $L^\infty$-weak solutions to \eqref{stoch-dyadic-N}, where $\{\theta^N \}_N \subset \ell^2$ satisfies \eqref{theta-N}. Then the solutions converge weakly to the solution of the deterministic viscous equation \begin{equation*} \frac{\d X}{\d t}= B(X) + \nu S X, \quad X(0)=x, \end{equation*} where $S= -{\rm diag}(\lambda^2, \lambda^4, \cdots)$ is a diagonal matrix. \end{theorem} Due to the uniform $\ell^2$-bound of the solutions $\{X^N\}_{N\geq 1}$, one can prove Theorem \ref{thm-scaling-limit} by using Simon's compactness theorems \cite{Simon}, the Prohorov theorem and the Skorokhod representation theorem, as in \cite[Section 4]{FGL21}. Here we do not provide the details, since we shall give a quantitative convergence rate in the next section; instead, we only show why the martingale part vanishes as $N\to \infty$, in the weak sense. To this end, we denote by $\ell_c^2$ the subset of $\ell^2$ consisting of vectors with only finitely many nonzero components. \begin{lemma}\label{lem-martingale} For any $y\in \ell_c^2$ and $t\geq 0$, one has $$\lim_{N\to \infty}\E \big\<M^N(t), y \big\>^2 = 0, $$ where $\<\cdot,\cdot\>$ is the inner product in $\ell^2$.
\end{lemma} \begin{proof} We have $$\big\<M^N(t), y \big\>= \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta^N_j \int_0^t \big\< A_{i,i+j} X^N(s), y \big\> \, \d W_{i,j}(s), $$ thus by the It\^o isometry, $$\aligned \E \big\<M^N(t), y \big\>^2 &= 2\nu \sum_{i,j} \lambda^{2i} (\theta^N_j)^2\, \E \int_0^t \big\< A_{i,i+j} X^N(s), y \big\>^2 \, \d s \\ &\leq 2\nu \|\theta^N \|_{\ell^\infty}^2 \sum_{i,j} \lambda^{2i} \, \E \int_0^t \big[ A_{i,i+j}: (X^N(s)\otimes y) \big]^2 \, \d s , \endaligned $$ where $:$ denotes the inner product of (infinite dimensional) matrices: $A: B= {\rm Tr}(AB)$, and $X^N(s)\otimes y$ is the matrix obtained from the tensor product of $X^N(s)$ and $y$. Since $y\in \ell_c^2$, it is clear that $A_{i,i+j}: (X^N\otimes y)$ vanishes for large $i$, and thus $\lambda^i$ involved in the sum is bounded: $\lambda^{2i}\leq C(\lambda, y)$. Moreover, we know that the matrices $\{A_{i,i+j} \}_{i,j\geq 1}$ are orthogonal with respect to the inner product $:$, and they all have norm 2. Therefore, by Bessel's inequality, $$\aligned \E \big\<M^N(t), y \big\>^2 &\leq 2\nu \|\theta^N \|_{\ell^\infty}^2 C(\lambda, y)\, \E \int_0^t \|X^N(s) \otimes y \|^2 \,\d s \\ &\leq 2\nu \|\theta^N \|_{\ell^\infty}^2 C(\lambda, y)\, \E \int_0^t \|X^N(s) \|_{\ell^2}^2 \|y \|_{\ell^2}^2\,\d s \\ &\leq 2\nu \|\theta^N \|_{\ell^\infty}^2 C(\lambda, y)\, t\, \|x \|_{\ell^2}^2 \|y \|_{\ell^2}^2, \endaligned $$ where in the last step we have used the fact that $\P$-a.s., $\| X^N (t) \|_{\ell^2} \le \|x\|_{\ell^2}$. This immediately gives us the desired result. \end{proof} \section{Quantitative convergence rate}\label{section-quantitative-convergence} The purpose of this section is to prove Theorem \ref{thm-quantitative-convergence-rate}, namely, we intend to prove a quantitative estimate on a certain distance between the unique solution of \eqref{thm-scaling-limit.1} and the weak solution of \eqref{stoch-dya-model-N}, which can be rewritten as $$\d X= B(X)\,\d t + \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta_j A_{i,i+j} X \, \d W_{i,j} + \nu( S_{\theta}-S) X \, \d t + \nu SX\,\d t. $$ We regard $S$ as a Laplace-type operator and rewrite the above equation in mild form as $$X(t) = e^{\nu t S} x+ \int_0^t e^{\nu (t-r) S} B(X(r))\,\d r + Z_t + \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r , $$ where the stochastic convolution is given by \begin{equation}\label{stoch-convol} Z_t= \sqrt{2\nu}\sum_i \lambda^i\sum_{j=1}^\infty \theta_j \int_0^t e^{\nu (t-r) S} (A_{i,i+j} X(r)) \, \d W_{i,j}(r). \end{equation} We denote the solution to \eqref{thm-scaling-limit.1} by $\tilde X$ and write it also in mild form: $$\tilde X(t)= e^{\nu t S} x+ \int_0^t e^{\nu (t-r) S} B(\tilde X(r))\,\d r. $$ Here we assume for simplicity that the solutions have the same initial condition $x$. Then we have $$X(t)- \tilde X(t)= \int_0^t e^{\nu (t-r) S} \big(B(X(r))- B(\tilde X(r)) \big)\,\d r + Z_t + \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r. $$ For $\alpha\in (0,1)$, we have \begin{equation}\label{estimate-1} \aligned \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 &\lesssim \bigg\| \int_0^t e^{\nu (t-r) S} \big(B(X(r))- B(\tilde X(r)) \big)\,\d r \bigg\|_{H^{-\alpha}}^2 + \|Z_t \|_{H^{-\alpha}}^2 \\ &\quad + \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 .
\endaligned \end{equation} The first term on the right-hand side can be estimated as in the proof of Proposition \ref{prop-uniqueness}: using Lemma \ref{lem-nonlinearity-2} and Lemma \ref{lem-heat-integral-property} we have $$\aligned & \bigg\| \int_0^t e^{\nu (t-r) S} \big(B(X(r))- B(\tilde X(r)) \big)\,\d r \bigg\|_{H^{-\alpha}}^2 \\ &\lesssim \frac1\nu \int_0^t \big\| B(X(r))- B(\tilde X(r)) \big\|_{H^{-1-\alpha}}^2 \,\d r \\ &\lesssim \frac1\nu \int_0^t \Big(\big\| B(X(r)-\tilde X(r), X(r)) \big\|_{H^{-1-\alpha}}^2 + \big\| B(\tilde X(r), X(r)-\tilde X(r)) \big\|_{H^{-1-\alpha}}^2 \Big) \,\d r \\ &\lesssim \frac1\nu \int_0^t \big( \|X(r) \|_{\ell^2}^2 + \|\tilde X(r) \|_{\ell^2}^2 \big) \|X(r) - \tilde X(r)\|_{H^{-\alpha}}^2 \,\d r. \endaligned $$ Substituting this estimate into \eqref{estimate-1} yields $$\aligned \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 &\lesssim \frac1\nu \int_0^t \big(\|X(r) \|_{\ell^2}^2 + \|\tilde X(r) \|_{\ell^2}^2 \big) \|X(r) - \tilde X(r)\|_{H^{-\alpha}}^2 \,\d r + \|Z_t \|_{H^{-\alpha}}^2 \\ &\quad + \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 . \endaligned $$ Gronwall's inequality implies $$\aligned \sup_{t\leq T} \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 &\lesssim \bigg\{\sup_{t\leq T} \|Z_t \|_{H^{-\alpha}}^2 + \sup_{t\leq T} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 \bigg\} \\ &\quad \times \exp\bigg(\frac1\nu \int_0^T \big(\|X(r) \|_{\ell^2}^2 + \|\tilde X(r) \|_{\ell^2}^2 \big)\,\d r \bigg). \endaligned $$ Note that $\|X(r) \|_{\ell^2} \vee \|\tilde X(r) \|_{\ell^2} \leq \|x \|_{\ell^2}$ for all $r\in [0,T]$, hence $$\sup_{t\leq T} \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 \lesssim e^{2T\|x \|_{\ell^2}^2/\nu} \bigg\{\sup_{t\leq T} \|Z_t \|_{H^{-\alpha}}^2 + \sup_{t\leq T} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 \bigg\}. $$ As a result, \begin{equation}\label{quantitative-rate-1} \aligned \E\bigg[\sup_{t\leq T} \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 \bigg] & \lesssim e^{2T\|x \|_{\ell^2}^2/\nu} \bigg\{ \E \bigg[\sup_{t\leq T} \|Z_t \|_{H^{-\alpha}}^2 \bigg] \\ &\qquad+ \E \bigg[\sup_{t\leq T} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 \bigg] \bigg\} . \endaligned \end{equation} It remains to estimate the two expectations on the right-hand side. For the first expectation, we have the following result. \begin{lemma}\label{lem-estimate-expectation-1} For any $\beta \in \left(0, 1 \right]$ and any $p\in \left[1,\infty \right) $, it holds \begin{equation}\label{estimate-Z-1} \begin{aligned} &\bigg[ \E \sup\limits_{t\in[0,T]}\|Z_t \|_{H^{-\beta}}^{p} \bigg]^{\frac{1}{p}} \le C(p,T)C_{1-\frac{\beta}{2}} \sqrt{\nu^{\frac{\beta}{2}}\beta^{-1}}\, \|\theta\|_{\ell^{\infty}} \|x \|_{\ell^2} \bigg(\frac{\lambda^{-\beta}}{1-\lambda^{-\beta}} \bigg)^{\frac12}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Recall the definition of $Z_t$ in \eqref{stoch-convol}.
By the Burkholder-Davis-Gundy inequality and Lemma \ref{similar-heat-semigroup-property} (i), when $\beta \le \frac12$, we can get $$\begin{aligned} & \Big[ \E \Big( \|Z_t \|_{H^{-2\beta}}^{2p}\Big) \Big]^{\frac{1}{2p}}\\ &\le C(p) \sqrt{2\nu} \bigg[\E \bigg( \sum_{i=1}^{\infty} \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 \int_0^t \big\|e^{\nu (t-r) S} (A_{i,i+j} X(r))\big\|_{H^{-2\beta}}^2 \, \d r \bigg)^{p} \bigg]^{\frac{1}{2p}}\\ & \le C(p) \sqrt{2\nu} C_{1-\beta} \bigg[\E \bigg( \int_0^t \frac{1}{[\nu(t-r)]^{1-\beta}} \sum_{i=1}^{\infty} \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 \|A_{i,i+j} X(r)\|_{H^{-1-\beta}}^2 \, \d r \bigg)^{p} \bigg]^{\frac{1}{2p}}, \end{aligned}$$ where $$\begin{aligned} & \sum_{i=1}^{\infty} \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 \|A_{i,i+j} X(r)\|_{H^{-1-\beta}}^2 \\ & =\sum_{i=1}^{\infty} \lambda^{2i} \sum_{j=1}^\infty \theta_j^2 \big( \lambda^{-2(1+\beta)i} X_{i+j}^2(r) + \lambda^{-2(1+\beta)(i+j)}X_i^2(r) \big) \\ & \le \|\theta\|_{\ell^{\infty}}^2 \sum_{i=1}^{\infty} \lambda^{-2\beta i} \sum_{j=1}^\infty X_{i+j}^2(r) + \|\theta\|_{\ell^{\infty}}^2 \sum_{i=1}^\infty \lambda^{-2\beta i} X_{i}^2(r) \sum_{j=1}^\infty \lambda^{-2(1+\beta)j}\\ &\leq \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2}^2 \frac{2 \lambda^{-2\beta}}{1-\lambda^{-2\beta}} . \end{aligned}$$ Therefore, we have $$\begin{aligned} \Big[ \E \left( \|Z_t \|_{H^{-2\beta}}^{2p}\right) \Big]^{\frac{1}{2p}} & \le C(p) \sqrt{2\nu} C_{1-\beta} \|\theta\|_{\ell^{\infty}} \|x \|_{\ell^2} \bigg(\frac{\lambda^{-2\beta}}{1-\lambda^{-2\beta}} \bigg)^{\frac12} \bigg(\int_0^t \frac{1}{[\nu(t-r)]^{1-\beta}} \, \d r \bigg)^{\frac{1}{2}}\\ & \le C(p)C_{1-\beta} \sqrt{\nu^{\beta}\beta^{-1}} \|\theta\|_{\ell^{\infty}} \|x \|_{\ell^2}\, t^{\frac{\beta}{2}} \bigg(\frac{\lambda^{-2\beta}}{1-\lambda^{-2\beta}} \bigg)^{\frac12}. \end{aligned}$$ Similarly, we can obtain that $$\begin{aligned} & \bigg[ \E \bigg( \bigg\|\sqrt{2\nu}\sum_{i=1}^{\infty} \lambda^i\sum_{j=1}^\infty \theta_j \int_s^t e^{\nu (t-r) S} (A_{i,i+j} X(r)) \, \d W_{i,j}(r) \bigg\|_{H^{-2\beta}}^{2p} \bigg) \bigg]^{\frac{1}{2p}} \\ & \le C(p)C_{1-\beta} \sqrt{\nu^{\beta}\beta^{-1}} \|\theta\|_{\ell^{\infty}} \|x \|_{\ell^2}\, |t-s|^{\frac{\beta}{2}} \bigg(\frac{\lambda^{-2\beta}}{1-\lambda^{-2\beta}} \bigg)^{\frac12}. \end{aligned}$$ By construction $Z$ satisfies the relation $$ Z_t =e^{\nu (t-s) S}Z_s + \sqrt{2\nu}\sum_{i=1}^{\infty} \lambda^i\sum_{j=1}^\infty \theta_j \int_s^t e^{\nu (t-r) S} (A_{i,i+j} X(r)) \, \d W_{i,j}(r), $$ and by Lemma \ref{similar-heat-semigroup-property} (ii), we have $$\begin{aligned} \|Z_t-Z_s\|_{H^{-4\beta}} & \le \|(I-e^{\nu (t-s)S})Z_s\|_{H^{-4\beta}} \\ &\quad + \bigg\|\sqrt{2\nu}\sum_{i=1}^{\infty} \lambda^i\sum_{j=1}^\infty \theta_j \int_s^t e^{\nu (t-r) S} (A_{i,i+j} X(r)) \, \d W_{i,j}(r)\bigg\|_{H^{-4\beta}}\\ & \le \nu^{\beta}|t-s|^{\beta} \|Z_s\|_{H^{-2\beta}} \\ &\quad+ \bigg\|\sqrt{2\nu}\sum_{i=1}^{\infty} \lambda^i\sum_{j=1}^\infty \theta_j \int_s^t e^{\nu (t-r) S} (A_{i,i+j} X(r)) \, \d W_{i,j}(r)\bigg\|_{H^{-2\beta}}. \end{aligned}$$ Then applying the previous estimates, when $\beta \le \frac14$, we get that $$ \Big(\E \|Z_t-Z_s\|_{H^{-4\beta}}^{2p} \Big)^{\frac{1}{2p}} \le C(p,\lambda, \beta,T) \nu^{\beta} \sqrt{\beta^{-1}} \|\theta\|_{\ell^{\infty}}\|x \|_{\ell^2}\, |t-s|^{\beta}. $$ Replacing $\beta$ by $\beta/2$ (so that the above requirement $\beta \le \frac14$ becomes $\beta \le \frac12$), we can rewrite this as $$ \E \|Z_t-Z_s\|_{H^{-2\beta}}^{2p} \le \big( C(p,\lambda,\beta,T) \sqrt{\nu^{ \beta} \beta^{-1}} \|\theta\|_{\ell^{\infty}}\|x \|_{\ell^2} \big)^{2p} |t-s|^{p \beta}.
$$ Then for $\beta \in\left(0, 1 \right]$, choosing $p > \frac{1}{\beta}$ (otherwise we can use the $L^{\tilde p}$-norm with $\tilde p > \frac{1}{\beta}$ to control the $L^p$-norm) and applying Kolmogorov's continuity criterion, we can obtain that $$ \bigg[ \E \sup\limits_{t\in[0,T]}\|Z_t \|_{H^{-2\beta}}^{2p} \bigg]^{\frac{1}{2p}} \le C(p,T)C_{1-\beta} \sqrt{\nu^{\beta}\beta^{-1}} \|\theta\|_{\ell^{\infty}} \|x \|_{\ell^2} \bigg(\frac{\lambda^{-2\beta}}{1-\lambda^{-2\beta}} \bigg)^{\frac12}. $$ Renaming $2\beta$ as $\beta$ and $2p$ as $p$ gives us \eqref{estimate-Z-1}. \end{proof} For the other expectation on the right-hand side of \eqref{quantitative-rate-1}, we have the following estimate. \begin{lemma}\label{lem-estimate-expectation-2} For any $\delta \in \left[0,1 \right)$, $\beta \ge 2-2\delta $, we have \begin{equation}\label{estimate-S} \E \bigg[\sup_{t\leq T} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\beta}}^2 \bigg] \le C(T,\delta,\lambda) \nu^{2-2\delta} \|\theta\|_{\ell^{\infty}}^4 \|x \|_{\ell^2}^2. \end{equation} \end{lemma} \begin{proof} By Lemma \ref{similar-heat-semigroup-property}, for $\delta \in \left[0,1 \right)$, we have $$\begin{aligned} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\beta}} & \le \nu \int_0^t \| e^{\nu (t-r) S} (S_\theta -S) X(r) \|_{H^{-\beta}} \,\d r \\ & \le \nu C_{\delta} \int_0^t \frac{1}{(\nu (t-r))^{\delta}} \| (S_\theta -S) X(r) \|_{H^{-\beta-2\delta}} \,\d r, \end{aligned}$$ where, by the definition of $S_\theta$, it holds $$\begin{aligned} \| (S_\theta -S) X(r) \|_{H^{-\beta-2\delta}} &= \bigg[ \sum\limits_{i=1}^{\infty} \frac{1}{\lambda^{2(\beta+2\delta)i}}\bigg(\sum\limits_{j=1}^{i-1}\theta_j^2 \lambda^{2(i-j)} \bigg)^2 X_i^2(r) \bigg]^{\frac12}\\ & \le \|\theta\|_{\ell^{\infty}}^2 \bigg[ \sum_{i=1}^{\infty} X_i^2(r)\bigg( \sum\limits_{j=1}^{i-1} \frac{\lambda^{2(i-j)}}{\lambda^{(\beta+2\delta)i}} \bigg)^2 \bigg]^{\frac12} . \end{aligned}$$ Since $\beta \ge 2-2\delta$, the last quantity is dominated by $\|\theta\|_{\ell^{\infty}}^2 \|X(r) \|_{\ell^2} \lesssim_\lambda \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2}$, thus $$ \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\beta}} \lesssim_\lambda \nu^{1-\delta} C(T, \delta) \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2} . $$ The proof is completed. \end{proof} Combining Lemmas \ref{lem-estimate-expectation-1} and \ref{lem-estimate-expectation-2}, for $\delta \in (\frac12 , 1)$ and $ \alpha \in (2-2\delta, 1 )$, we can now derive that $$\begin{aligned} & \E\bigg[\sup_{t\leq T} \|X(t)- \tilde X(t) \|_{H^{-\alpha}}^2 \bigg] \\ &\lesssim e^{2T\|x \|_{\ell^2}^2/\nu} \bigg\{ \E \bigg[\sup_{t\leq T} \|Z_t \|_{H^{-\alpha}}^2 \bigg] + \E \bigg[\sup_{t\leq T} \bigg\| \nu \int_0^t e^{\nu (t-r) S} (S_\theta -S) X(r)\,\d r \bigg\|_{H^{-\alpha}}^2 \bigg] \bigg\} \\ & \lesssim e^{2T\|x \|_{\ell^2}^2/\nu} \bigg\{ \nu^{\frac{\alpha}{2}}\alpha^{-1} \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2}^2 \big[ C(T)C_{1-\frac{\alpha}{2}} \big]^2 \frac{\lambda^{-\alpha}}{1-\lambda^{-\alpha}} + C(T,\delta) \nu^{2-2\delta} \|\theta\|_{\ell^{\infty}}^4 \|x \|_{\ell^2}^2 \bigg\}\\ & \lesssim e^{2T\|x \|_{\ell^2}^2/\nu} \nu^{\frac{\alpha}{2}} \|\theta\|_{\ell^{\infty}}^2 \|x \|_{\ell^2}^2 \bigg\{ \alpha^{-1} \big[C(T) C_{1-\frac{\alpha}{2}} \big]^2 \frac{\lambda^{-\alpha}}{1-\lambda^{-\alpha}} + C(T,\delta) \nu^{2-2\delta-\frac{\alpha}{2}} \|\theta\|_{\ell^{\infty}}^2 \bigg\} .
\end{aligned}$$ Hence, we arrive at the estimate in Theorem \ref{thm-quantitative-convergence-rate}. \section{Central limit theorem}\label{section-CLT} The purpose of this section is to prove the central limit theorem, i.e. Theorem \ref{thm-CLT}. Recall the setting in Section \ref{subsec-CLT}; in particular, $X^N$ is the weak solution to the stochastic dyadic model \eqref{stoch-dya-model-N-new}, that is, \eqref{stoch-dya-model-N} with $\theta=\theta^N$ defined as $$\theta^N_j= \sqrt{\eps_N}\, j^{-\alpha_1} {\bf 1}_{\{j\leq N\}}, \quad j\in \Z_+, $$ where $\alpha_1\in (0,1/2)$ is fixed and $\eps_N= \big(\sum_{j=1}^N j^{-2\alpha_1} \big)^{-1} \to 0$ as $N\to \infty$; $\tilde X$ is the unique solution to the deterministic equation \eqref{thm-scaling-limit.1}. We want to prove that the fluctuation term $$\xi^N= (X^N-\tilde X)/\sqrt{\eps_N}$$ converges as $N\to \infty$ to $\xi$, the solution to \eqref{central-limit-model-N-limit}. For later use, we recall the fact that $\|X^N_r \|_{\ell^2} \vee \|\tilde X_r \|_{\ell^2} \leq \|x \|_{\ell^2}$ for all $r\in [0,T]$ and $N\geq 1$. In this part we shall often write the time variables as subscripts to save space. First of all, we prove the well posedness of the limit equation \eqref{central-limit-model-N-limit}. Rewrite \eqref{central-limit-model-N-limit} in its mild form: \begin{equation}\label{central-limit-model-N-limit-mild} \xi_t=\int_0^t e^{\nu (t-r) S} \big[B(\xi_r,\tilde X_r)+ B(\tilde X_r, \xi_r)\big]\,\d r + M_t, \end{equation} where the stochastic convolution is given by $$ M_t = \sqrt{2\nu}\int_0^t \sum_i \lambda^i\sum_{j=1}^\infty j^{-\alpha_1} e^{\nu (t-r) S} A_{i,i+j} \tilde X_r \, \d W_{i,j}(r). $$ Since $\tilde X$ is a deterministic function, the process $M=(M_t)$ is Gaussian. Define $\hat\theta\in \ell^\infty$ as $\hat \theta_j=j^{-\alpha_1},\, j\in \Z_+$. Replacing $X$ by $\tilde X$ and $\theta$ by $\hat \theta $ in Lemma \ref{lem-estimate-expectation-1} (note that its proof does not require $\|\theta \|_{\ell^2} =1$), by similar calculations, we can derive the following regularity result of the stochastic convolution $M$. \begin{lemma}\label{lem-stochastic-convolution-regularity} For any $ \beta \in \left(0, 1 \right]$ and any $p\in \left[1,\infty \right) $, it holds \begin{equation}\label{estimate-stochastic-convolution} \begin{aligned} &\bigg[ \E \sup\limits_{t\in[0,T]}\|M_t \|_{H^{-\beta}}^{p} \bigg]^{\frac{1}{p}} \le C(p,T)C_{1-\frac{\beta}{2}} \sqrt{\nu^{\frac{\beta}{2}}\beta^{-1}} \|\hat \theta\|_{\ell^{\infty}} \|x \|_{\ell^2} \bigg(\frac{\lambda^{-\beta}}{1-\lambda^{-\beta}} \bigg)^{\frac12} < \infty . \end{aligned} \end{equation} \end{lemma} Based on this lemma, we can now treat $M_t$ as a given element in $C_t H^{-\beta}$ for some $\beta \in \left(0, 1 \right]$ and turn to the following deterministic equation \begin{equation}\label{central-limit-model-N-limit-new} \phi_t= e^{\nu t S}\phi_0 +\int_0^t e^{\nu (t-r) S} \big[ B(\phi_r,\tilde X_r)+ B(\tilde X_r, \phi_r) \big]\,\d r + M_t, \end{equation} where $\phi$ is the unknown; note that, unlike in \eqref{central-limit-model-N-limit-mild}, we consider general initial data $\phi_0\in H^{-\beta}$. \begin{proposition}\label{prop-central-limit-wellposedness} Let $\tilde X$ be the unique solution to \eqref{thm-scaling-limit.1}, $\beta\in \left(0, 1 \right]$. Then for any given $M\in C_t H^{-\beta},\ \phi_0 \in H^{-\beta}$, there exists a unique solution $\phi \in C_t H^{-\beta}$ to \eqref{central-limit-model-N-limit-new}.
Moreover, when $\phi_0=0$, the solution map $M \mapsto \phi =: \mathcal{T}M$ is a bounded linear operator and satisfies $$\|\mathcal{T}M\|_{C_t H^{-\beta}} \lesssim e^{2\|x \|_{\ell^2}\sqrt{T/\nu}} \|M\|_{C_t H^{-\beta}}. $$ \end{proposition} \begin{proof} We first define a map $\Gamma$ on $C_t H^{-\beta}$ by $$(\Gamma \phi)_t := e^{\nu t S}\phi_0+ \int_0^t e^{\nu (t-r) S} \big[B(\phi_r,\tilde X_r)+ B(\tilde X_r, \phi_r) \big]\,\d r + M_t. $$ We want to apply the contraction principle to prove the well posedness. Since $M= (M_t)_{t\in [0,T]} \in C_t H^{-\beta}$ and $\tilde X\in L^\infty(0,T; \ell^2)$, by Lemma \ref{lem-nonlinearity-2}, we can easily prove that $\Gamma: C_t H^{-\beta} \to C_t H^{-\beta}$. We now show that $\Gamma$ is a contraction map. For any $ \phi^1, \phi^2 \in C_t H^{-\beta}$, set $f=\phi^1-\phi^2$; then we obtain $$\begin{aligned} \|(\Gamma \phi^1)_t - (\Gamma \phi^2)_t\|_{H^{-\beta}} &= \bigg\|\! \int_0^t\! e^{\nu (t-r) S} \big[B(f_r,\tilde X_r)+ B(\tilde X_r, f_r) \big]\,\d r \bigg\|_{H^{-\beta}}\\ &\le \int_0^t \big\|e^{\nu (t-r) S} \big[B(f_r,\tilde X_r)+ B(\tilde X_r, f_r) \big] \big\|_{H^{-\beta}}\,\d r \\ &\le C \int_0^t \frac{1}{[\nu(t-r)]^{1/2}} \big\| B(f_r,\tilde X_r)+ B(\tilde X_r, f_r) \big\|_{H^{-\beta-1}}\,\d r \\ &\le C \lambda^{\beta} \|x \|_{\ell^2}\, \|f\|_{C_t H^{-\beta} } \sqrt{t/\nu }, \end{aligned}$$ where in the third step we use Lemma \ref{similar-heat-semigroup-property} (i) and in the last step we use Lemma \ref{lem-nonlinearity-2}. Taking the supremum over $t\in[0,T]$, we can get $$\|(\Gamma \phi^1) - (\Gamma \phi^2)\|_{C_T H^{-\beta}} \le C \lambda^{\beta} \|x \|_{\ell^2} \sqrt{T/\nu }\, \|\phi^1 -\phi^2\|_{C_T H^{-\beta} }. $$ Thus we can choose $T$ sufficiently small such that $\Gamma$ is contractive on $C_T H^{-\beta}$. Then we derive the local existence and uniqueness of solutions to \eqref{central-limit-model-N-limit-new}. Since the local existence time $T$ is independent of the initial data $\phi_0$, we can uniquely extend the local solution to obtain the unique global solution. When $\phi_0=0$, it is easy to verify that the map $M \mapsto \phi =: \mathcal{T}M$ is linear and it remains to prove that $\mathcal{T}$ is bounded. Similar to the estimate above, by Lemma \ref{similar-heat-semigroup-property} (i) and Lemma \ref{lem-nonlinearity-2}, we can get $$\begin{aligned} \|\phi_t\|_{H^{-\beta}} & \le \int_0^t \big\|e^{\nu (t-r)S} \big[B(\phi_r, \tilde X_r)+B(\tilde X_r, \phi_r)\big] \big\|_{H^{-\beta}} \,\d r +\|M_t\|_{H^{-\beta}}\\ & \lesssim \int_0^t \frac{1}{[\nu (t-r)]^{1/2}}\|x \|_{\ell^2}\, \|\phi_r\|_{H^{-\beta}} \,\d r + \|M_t\|_{H^{-\beta}}. \end{aligned}$$ Then by the generalized Gronwall inequality, we can derive $$ \|\phi\|_{C_t H^{-\beta}} \lesssim e^{\|x \|_{\ell^2}\frac{2}{\sqrt{\nu}}\sqrt{T}} \|M\|_{C_t H^{-\beta}}. $$ The proof is completed. \end{proof} From Proposition \ref{prop-central-limit-wellposedness}, we can immediately get that there exists a unique solution $\xi:=\mathcal{T}M$ to \eqref{central-limit-model-N-limit-mild}. Since $M$ is a Gaussian process, the linearity of the map $\mathcal{T}$ implies that $\xi:=\mathcal{T}M$ is also Gaussian. Therefore, we arrive at the following corollary. \begin{corollary}\label{cor-central-limit-wellposedness} Let $\tilde X$ be the unique solution to \eqref{thm-scaling-limit.1}, $\beta\in (0, 1]$. Then there exists a unique solution $\xi:= \mathcal{T}M \in C_t H^{-\beta}$ to \eqref{central-limit-model-N-limit-mild}.
In particular, $\xi$ is a Gaussian process and satisfies $ \E \big[\|\xi\|_{C_t H^{-\beta}}^p \big] < \infty$ for any $ p\in [1,\infty )$. \end{corollary} We now show that $\xi^N \to \xi$ as $N\to \infty$. \begin{proof}[Proof of Theorem \ref{thm-CLT}.] The proof is slightly long and is divided into three steps. \textbf{Step 1: preliminary computations.} Rewrite \eqref{central-limit-model-N} in its mild form: \begin{equation}\label{central-limit-model-N-mild} \begin{aligned} \xi_t^N &=\int_0^t e^{\nu(t-r)S} \big[B(\xi_r^N,X_r^N)+ B(\tilde X_r, \xi_r^N)\big] \,\d r+ \frac{\nu}{\sqrt{\varepsilon_N}}\int_0^t e^{\nu(t-r)S} (S_{\theta^N}-S)X_r^N \,\d r \\ &\quad+ \sqrt{2\nu}\int_0^t \sum_i \lambda^i\sum_{j=1}^N j^{-\alpha_1}\, e^{\nu(t-r)S} A_{i,i+j} X_r^N \, \d W_{i,j}(r). \end{aligned} \end{equation} Then by \eqref{central-limit-model-N-mild} and \eqref{central-limit-model-N-limit-mild}, we can derive that $$\begin{aligned} & \quad\ \E \|\xi_t^N-\xi_t\|_{H^{-\beta}}^2 \\ & \lesssim \E \bigg\|\! \int_0^t\! e^{\nu (t-r)S} B(\xi_r^N-\xi_r, X_r^N) \,\d r \bigg\|_{H^{-\beta}}^2 + \E \bigg\|\! \int_0^t\! e^{\nu (t-r)S} B(\xi_r, X_r^N-\tilde X_r) \,\d r \bigg\|_{H^{-\beta}}^2\\ &\quad + \E \bigg\|\! \int_0^t\! e^{\nu (t-r)S} B(\tilde X_r, \xi_r^N-\xi_r ) \,\d r \bigg\|_{H^{-\beta}}^2 + \E \bigg\|\frac{\nu}{\sqrt{\varepsilon_N}} \int_0^t\! e^{\nu (t-r)S} (S_{\theta^N}-S)X^N_r \,\d r \bigg\|_{H^{-\beta}}^2 +\hat M_t \\ &=: \sum\limits_{i=1}^4 I_i + \hat M_t, \end{aligned}$$ where $$\begin{aligned} \hat M_t &= 2\nu\, \E \bigg\|\int_0^t \sum_i \lambda^i\sum_{j=1}^N j^{-\alpha_1} e^{\nu(t-r)S} A_{i,i+j} X_r^N \, \d W_{i,j}(r)\\ & \hskip40pt -\int_0^t \sum_i \lambda^i\sum_{j=1}^\infty j^{-\alpha_1} e^{\nu(t-r)S} A_{i,i+j} \tilde X_r \, \d W_{i,j}(r)\bigg\|_{H^{-\beta}}^2. \end{aligned}$$ For $I_1$, by the first estimate of Lemma \ref{lem-heat-integral-property} and Lemma \ref{lem-nonlinearity-2}, one can get $$\begin{aligned} I_1 & \lesssim \frac{1}{\nu}\, \E \int_0^t \big\|B(\xi_r^N-\xi_r, X_r^N)\big\|_{H^{-\beta-1}}^2 \,\d r \le C(\lambda) \frac{\|x \|_{\ell^2}^2}{\nu}\, \E \int_0^t \|\xi_r^N-\xi_r\|_{H^{-\beta}}^2 \,\d r. \end{aligned}$$ Similarly, we can obtain $$\begin{aligned} I_3 \lesssim \frac{1}{\nu}\, \E \int_0^t \big\|B(\tilde X_r, \xi_r^N-\xi_r ) \big\|_{H^{-\beta-1}}^2 \,\d r \le C(\lambda) \frac{\|x \|_{\ell^2}^2}{\nu}\, \E \int_0^t \|\xi_r^N-\xi_r\|_{H^{-\beta}}^2 \,\d r. \end{aligned}$$ Then combining the above estimates, we conclude from Gronwall's inequality that \begin{equation}\label{gronwall-inequality} \sup\limits_{t\in [0,T]} \E \|\xi_t^N-\xi_t\|_{H^{-\beta}}^2 \lesssim e^{C(\lambda) \|x \|_{\ell^2}^2 T/\nu} \sup_{t\in [0,T]} (I_2 +I_4 + \hat M_t) . \end{equation} It remains to estimate the terms $I_2, I_4$ and $\hat M_t$. \textbf{Step 2: estimates of $I_2$ and $I_4$.} For any fixed small $\alpha \in (0,1)$, by Lemma \ref{similar-heat-semigroup-property} (i), one obtains that $$\begin{aligned} I_2 & \le \E \bigg( \int_0^t \big\| e^{\nu (t-r)S}B(\xi_r, X_r^N-\tilde X_r) \big\|_{H^{-\beta}} \,\d r \bigg)^2\\ & \le C_{1+\alpha}^2\, \E \bigg( \int_0^t \frac{1}{[\nu(t-r)]^{\frac{1+\alpha}{2}}} \big\| B(\xi_r, X_r^N-\tilde X_r) \big\|_{H^{-1-\beta-\alpha}} \,\d r \bigg)^2.
\end{aligned}$$ Then by Lemma \ref{lem-nonlinearity-2}, we can get $$\begin{aligned} I_2 &\le C_{1+\alpha}^2 \lambda^{2\alpha}\, \E \bigg( \int_0^t \frac{1}{[\nu(t-r)]^{\frac{1+\alpha}{2}}} \| \xi_r\|_{H^{-\beta}} \|X_r^N-\tilde X_r\|_{H^{-\alpha}} \,\d r \bigg)^2 \\ &\le C_{1+\alpha}^2 \frac{\lambda^{2\alpha} }{\nu^{1+\alpha}} \big(\E \|\xi\|_{C_t^0H^{-\beta}}^4 \big)^{\frac12} \big(\E \|X^N-\tilde X\|_{C_t^0H^{-\alpha}}^4 \big)^{\frac12} \bigg(\int_0^t \frac{\d r}{(t-r)^{\frac{1+\alpha}{2}}} \bigg)^2. \end{aligned}$$ The first expectation can be estimated by using Corollary \ref{cor-central-limit-wellposedness}, while the second one is treated as below: $$\big(\E \|X^N-\tilde X\|_{C_t^0H^{-\alpha}}^4 \big)^{\frac12} \le \|x \|_{\ell^2} \big(\E \|X^N-\tilde X\|_{C_t^0H^{-\alpha}}^2 \big)^{\frac12} \lesssim \|x \|_{\ell^2}^2 \|\theta^N\|_{\ell^{\infty}} = \|x \|_{\ell^2}^2 \sqrt{\varepsilon_N}, $$ where the second step follows from Theorem \ref{thm-quantitative-convergence-rate}. Therefore, $$I_2 \le C(\alpha, \lambda,\nu, T, \delta, \beta, \|x \|_{\ell^2})\, \varepsilon_N. $$ Now we turn to estimate $I_4$. For any fixed $\delta_1 \in (1-\beta/2, 1)$, by Lemma \ref{similar-heat-semigroup-property} (i), we have $$\begin{aligned} I_4 &\le \frac{\nu^2}{\varepsilon_N}\, \E \bigg( \int_0^t \big\|e^{\nu (t-r)S} (S_{\theta^N}-S)X^N_r \big\|_{H^{-\beta}} \,\d r \bigg)^2\\ &\le \frac{\nu^2}{\varepsilon_N}\, \E \bigg( \int_0^t \frac{1}{[\nu (t-r)]^{\delta_1}} \big\| (S_{\theta^N}-S) X^N_r \big\|_{H^{-\beta-2\delta_1}} \,\d r \bigg)^2.\\ \end{aligned}$$ Then, as $\beta> 2-2\delta_1$, we can get $$\begin{aligned} I_4 &\le \frac{\nu^2}{\varepsilon_N}\, \E \Bigg\{ \int_0^t \frac{1}{[\nu (t-r)]^{\delta_1}} \bigg[ \sum_{i=1}^{\infty}\frac{1}{\lambda^{2(\beta+2\delta_1)i}} \bigg(\sum_{j=1}^{i-1}(\theta^N_j)^2\lambda^{2(i-j)}\bigg)^2 (X^N_i(r))^2\bigg]^{\frac12} \,\d r \Bigg\}^2 \\ &\le \frac{\nu^2}{\varepsilon_N} \|\theta^N\|_{\ell^{\infty}}^4\, \E \Bigg\{ \int_0^t \frac{1}{[\nu (t-r)]^{\delta_1}} \bigg[ \sum_{i=1}^{\infty} (X^N_i(r))^2\bigg]^{\frac12} \,\d r \Bigg\}^2 \\ &\le C(T,\delta_1,\lambda)\frac{\nu^{2-2\delta_1}}{\varepsilon_N} \|\theta^N\|_{\ell^{\infty}}^4 \|x \|_{\ell^2}^2 \\ &= C(T,\delta_1,\lambda,\nu,\|x \|_{\ell^2})\, \varepsilon_N. \end{aligned}$$ \textbf{Step 3: estimate of $\hat M_t$.} The last term $\hat M_t$ is the most difficult one to deal with. We split it into the following two parts: \begin{equation}\label{hat-M-t} \begin{aligned} \hat M_t &\lesssim \nu\,\E \bigg\|\int_0^t \sum_i \lambda^i\sum_{j=1}^N j^{-\alpha_1} e^{\nu(t-r)S} A_{i,i+j} (X_r^N -\tilde X_r)\, \d W_{i,j}(r) \bigg\|_{H^{-\beta}}^2 \\ &\quad + \nu\,\E \bigg\| \int_0^t \sum_i \lambda^i\sum_{j=N+1}^\infty j^{-\alpha_1} e^{\nu(t-r)S} A_{i,i+j} \tilde X_r \, \d W_{i,j}(r)\bigg\|_{H^{-\beta}}^2\\ &= J_1+ J_2. \end{aligned} \end{equation} First, let us denote $$g(r):=X_r^N -\tilde X_r, \quad r\in [0,T]; $$ then by It\^o's isometry, we can get $$\begin{aligned} J_1 &= \nu\, \E \int_0^t \sum_i \lambda^{2i} \sum_{j=1}^N j^{-2\alpha_1} \big\| e^{\nu(t-r)S} A_{i,i+j}\, g(r) \big\|_{H^{-\beta}}^2 \,\d r \\ &= \nu\, \E \int_0^t \sum_i \lambda^{2i} \sum_{j=1}^N j^{-2\alpha_1} \big( \lambda^{-2\beta i}e^{-2\nu (t-r)\lambda^{2i}} g_{i+j}^2(r) + \lambda^{-2\beta(i+j)}e^{-2\nu (t-r)\lambda^{2(i+j)}} g_i^2(r) \big) \,\d r \\ &= J_{11}+J_{12}. 
\end{aligned}$$ For $J_{11}$, we have $$ J_{11}\le \nu\, \E \int_0^t \sum_i \lambda^{2(1-\beta)i} e^{-2\nu (t-r)\lambda^{2i}} \sum_{j=1}^{\infty} j^{-2\alpha_1} g_{i+j}^2(r) \,\d r, $$ where, for some fixed $n_0\in \mathbb{Z}_{+}$, $$\begin{aligned} \sum_{j=1}^{\infty} j^{-2\alpha_1} g_{i+j}^2(r) &\le \sum_{j=1}^{n_0} j^{-2\alpha_1} g_{i+j}^2(r)+ \sum_{j=n_0+1}^{\infty} n_0^{-2\alpha_1} g_{i+j}^2(r) \\ &\le \lambda^{2\alpha (i+n_0)} \sum_{j=1}^{n_0} \lambda^{-2\alpha (i+j)} g_{i+j}^2(r) + n_0^{-2\alpha_1} \|g(r) \|_{\ell^2}^2\\ &\le \lambda^{2\alpha (i+n_0)} \|g(r)\|_{H^{-\alpha}}^2 + n_0^{-2\alpha_1} \|g(r) \|_{\ell^2}^2. \end{aligned}$$ Hence we obtain that $$\begin{aligned} J_{11}&\le \nu\, \E \int_0^t \sum_i \lambda^{2(1-\beta)i} e^{-2\nu (t-r)\lambda^{2i}} \big( \lambda^{2\alpha (i+n_0)} \|g(r)\|_{H^{-\alpha}}^2 + n_0^{-2\alpha_1} \|g(r)\|_{\ell^2}^2 \big) \,\d r\\ &\le \nu \lambda^{2\alpha n_0} \E\big(\|g\|_{C_t^0H^{-\alpha}}^2 \big) \sum_i \lambda^{2(1+\alpha-\beta)i} \int_0^t e^{-2\nu (t-r)\lambda^{2i}} \,\d r \\ &\quad + \nu n_0^{-2\alpha_1}\cdot 4 \|x \|_{\ell^2}^2 \sum_i \lambda^{2(1-\beta)i} \int_0^t e^{-2\nu (t-r)\lambda^{2i}} \,\d r\\ &\le \lambda^{2\alpha n_0} \E\big(\|g\|_{C_t^0H^{-\alpha}}^2 \big) \sum_i \lambda^{2(\alpha-\beta) i} +2 n_0^{-2\alpha_1} \|x \|_{\ell^2}^2 \sum_i \lambda^{-2\beta i}. \end{aligned}$$ Next, for $J_{12}$, take $\rho\in (0,1)$ such that $\beta +\rho > 1+\alpha$; by the same idea as in the proof of Lemma \ref{similar-heat-semigroup-property} (i), we have $$\begin{aligned} J_{12}& = \nu\, \E \int_0^t \sum_i \lambda^{2i} \sum_{j=1}^N j^{-2\alpha_1} \lambda^{-2\beta(i+j)}e^{-2\nu (t-r)\lambda^{2(i+j)}} g_i^2(r) \,\d r\\ &\le \nu C_{\rho}\, \E \int_0^t \sum_i \lambda^{2i} \sum_{j=1}^N j^{-2\alpha_1} \lambda^{-2\beta(i+j)} \frac{1}{[2\nu (t-r)]^{\rho}} \lambda^{-2(i+j)\rho} g_i^2(r) \,\d r\\ &\le \nu C_{\rho}\, \E \int_0^t \sum_i \lambda^{-2\alpha i}g_i^2(r) \lambda^{2(1+\alpha-\beta-\rho)i} \sum_{j=1}^N \lambda^{-2(\beta+\rho)j} \frac{1}{[\nu (t-r)]^{\rho}} \,\d r\\ &\le \nu^{1-\rho} C_{\rho} C(T,\beta,\rho,\lambda)\, \E \big(\|g\|_{C_t^0H^{-\alpha}}^2 \big), \end{aligned}$$ where in the last inequality, we have used $\beta +\rho > 1+\alpha$. Combining the above estimates on $J_{11}$ and $J_{12}$, by Theorem \ref{thm-quantitative-convergence-rate}, we can derive that $$ J_1 \le \lambda^{2\alpha n_0} C(T,\alpha, \delta, \nu ,\|x \|_{\ell^2})\, \varepsilon_N + n_0^{-2\alpha_1} \|x \|_{\ell^2}^2 C(\lambda, \beta)+ \nu^{1-\rho} C_{\rho} C(T,\beta,\rho,\lambda,\alpha, \delta, \nu ,\|x \|_{\ell^2})\, \varepsilon_N. 
$$ Next, for $J_2$ defined in \eqref{hat-M-t}, we fix $\eta \in (1-\beta,1)$; again by It\^o's isometry and Lemma \ref{similar-heat-semigroup-property} (i), we have $$\begin{aligned} J_2 &= \nu\, \E \bigg\| \int_0^t \sum_i \lambda^i\sum_{j=N+1}^\infty j^{-\alpha_1} e^{\nu(t-r)S} A_{i,i+j} \tilde X_r \, \d W_{i,j}(r)\bigg\|_{H^{-\beta}}^2\\ &= \nu\, \E \int_0^t \sum_i \lambda^{2i} \sum_{j=N+1}^\infty j^{-2\alpha_1} \big\|e^{\nu(t-r)S} A_{i,i+j} \tilde X_r \big\|_{H^{-\beta}}^2 \,\d r\\ &\le \nu C_{\eta}\, \E \int_0^t \frac{1}{[\nu(t-r)]^{\eta}} \sum_i \lambda^{2i} \sum_{j=N+1}^\infty j^{-2\alpha_1} \| A_{i,i+j} \tilde X_r \|_{H^{-\beta-\eta}}^2 \,\d r , \end{aligned}$$ where $$\begin{aligned} &\quad \sum_i \lambda^{2i} \sum_{j=N+1}^\infty j^{-2\alpha_1} \| A_{i,i+j} \tilde X_r \|_{H^{-\beta-\eta}}^2 \\ &=\sum_i \lambda^{2i} \sum_{j=N+1}^\infty j^{-2\alpha_1} \big(\lambda^{-2(\beta+\eta)i}\tilde X_{i+j}^2 (r) + \lambda^{-2(\beta+\eta)(i+j)}\tilde X_i^2 (r) \big)\\ & =\widehat J_1 + \widehat J_2 . \end{aligned}$$ Since $1-\beta-\eta<0$, one obtains that $$\begin{aligned} \widehat J_1 &= \sum_i \lambda^{2i} \sum_{j=N+1}^\infty j^{-2\alpha_1} \lambda^{-2(\beta+\eta)i}\tilde X_{i+j}^2 (r)\\ &\le (N+1)^{-2\alpha_1} \sum_{i} \lambda^{2(1-\beta-\eta)i} \sum_{j=N+1}^\infty \tilde X_{i+j}^2 (r)\\ &\le (N+1)^{-2\alpha_1} \|x \|_{\ell^2}^2 \frac{\lambda^{2(1-\beta-\eta)}}{1-\lambda^{2(1-\beta-\eta)}}. \end{aligned}$$ Similarly, for $\widehat J_2$, we have $$\begin{aligned} \widehat J_2 &\le (N+1)^{-2\alpha_1} \sum_i \lambda^{2(1-\beta-\eta)i} \tilde X_i^2 (r) \sum_{j=N+1}^\infty \lambda^{-2(\beta+\eta)j}\\ &\le (N+1)^{-2\alpha_1} \|x \|_{\ell^2}^2 \frac{\lambda^{-2(\beta+\eta)(N+1)}}{1-\lambda^{-2(\beta+\eta)}}. \end{aligned}$$ Hence, we can easily derive that $$ J_2 \le 2\nu^{1-\eta} C_{\eta} C(T,\eta,\beta,\lambda) (N+1)^{-2\alpha_1} \|x \|_{\ell^2}^2. $$ Substituting the estimates on $J_1$ and $J_2$ in \eqref{hat-M-t}, we obtain that $$\begin{aligned} \hat M_t \lesssim \lambda^{2\alpha n_0}\tilde C_1\, \varepsilon_N + n_0^{-2\alpha_1} \|x \|_{\ell^2}^2 C(\lambda, \beta)+ \nu^{1-\rho} C_{\rho} \tilde C_2\, \varepsilon_N + \nu^{1-\eta} C_{\eta} \tilde C_3 (N+1)^{-2\alpha_1} \|x \|_{\ell^2}^2, \end{aligned}$$ where $\tilde C_1 = C(T,\alpha, \delta, \nu ,\|x \|_{\ell^2})$, $\tilde C_2=C(T,\beta,\rho,\lambda,\alpha, \delta, \nu ,\|x \|_{\ell^2})$ and $\tilde C_3 = C(T,\eta,\beta,\lambda)$. Finally, combining the above estimates for $I_2,I_4, \hat M_t$ and \eqref{gronwall-inequality}, we can arrive at $$ \sup\limits_{t\in [0,T]} \E \|\xi_t^N-\xi_t\|_{H^{-\beta}}^2 \lesssim C_1\, \varepsilon_N + C_2\, n_0^{-2\alpha_1} + C_3 (N+1)^{-2\alpha_1}, $$ where $C_1 = C(\alpha, \lambda,\nu, T, \delta, \beta, \|x \|_{\ell^2}, \delta_1, \rho)$, $C_2 = C(\lambda, \beta, \|x \|_{\ell^2})$ and $C_3= C(T,\eta,\beta,\lambda, \nu, \|x \|_{\ell^2})$. Therefore, first letting $N \to \infty$ and then taking $n_0 \to \infty$, we get that $$ \lim_{N\to \infty} \sup_{t\in [0,T]} \E \|\xi_t^N-\xi_t\|_{H^{-\beta}}^2= 0 $$ and the proof is complete.
\end{proof} \section{Dissipation enhancement}\label{section-dissipation-enhancement} The stochastic viscous dyadic model \eqref{stoch-viscous-dyadic-model} can be written in It\^o form as \begin{equation}\label{stochastic-viscous-dyadic} \begin{aligned} \d X &= B(X)\,\d t + \kappa S X \,\d t+ \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta_j A_{i,i+j} X \, \d W_{i,j} + \nu S_{\theta} X \, \d t \\ & = B(X)\,\d t + (\kappa+ \nu) S X \,\d t+ \sqrt{2\nu} \sum_i \lambda^i\sum_{j=1}^\infty \theta_j A_{i,i+j} X \, \d W_{i,j} + \nu (S_{\theta}-S) X \, \d t. \end{aligned} \end{equation} We shall denote $\mu = \kappa +\nu$. For any $ 0\le s\le t$, we have the mild formulation \begin{equation}\label{stochastic-viscous-dyadic-mild} X(t)= e^{\mu(t-s)S}X(s) + \int_s^t e^{\mu(t-r)S} B(X(r))\,\d r + \widehat Z_{t,s}+ \nu \int_s^t e^{\mu(t-r)S} (S_{\theta}-S) X(r) \, \d r, \end{equation} where $$ \widehat Z_{t,s}=\sqrt{2\nu} \int_s^t \sum_i \lambda^i\sum_{j=1}^\infty \theta_j\, e^{\mu(t-r)S} A_{i,i+j} X(r) \, \d W_{i,j}(r). $$ We recall the energy balance \eqref{viscous-dyadic-energy-balance} for the reader's convenience: $\P$-a.s. for all $0\leq s<t$, \begin{equation}\label{viscous-energy-balance} \|X(t)\|_{\ell^2}^2 + 2 \kappa \int_s^t \|X(r)\|_{H^1}^2 \,\d r = \|X(s)\|_{\ell^2}^2. \end{equation} Before proving the dissipation enhancement, we need the following result. \begin{lemma}\label{lem-energy-decreasing} There exists $\delta > 0$ such that, for any $ n \ge 0$, $$ \E \|X(n+1)\|_{\ell^2}^2 \le \delta\, \E \|X(n)\|_{\ell^2}^2, $$ where $$ \delta \lesssim \bigg( \frac{1}{\mu \lambda^2}+ \frac{\lambda^2}{\mu^2} \|X(0)\|_{\ell^2}^2 + \frac{\nu^2}{\mu^2} C(\lambda) \|\theta\|_{\ell^{\infty}}^4 + \|\theta\|_{\ell^{\infty}}^2 \frac{\nu C_{\rho}^2 C(\lambda)}{\kappa \mu^{\rho} (1-\rho)} \bigg). $$ In particular, by first taking $\nu$ large and then choosing $\theta \in \ell^2$ with $\|\theta\|_{\ell^{\infty}}$ small enough, $\delta$ can be made arbitrarily small. \end{lemma} \begin{proof} Since $t \mapsto \|X(t)\|_{\ell^2}$ is almost surely decreasing, we have $\|X(n+1)\|_{\ell^2}^2 \le \int_n^{n+1} \|X(t)\|_{\ell^2}^2 \,\d t$. Then we can get from the mild formulation \eqref{stochastic-viscous-dyadic-mild} that $$\begin{aligned} \|X(n+1)\|_{\ell^2}^2 & \lesssim \int_n^{n+1} \big\|e^{\mu(t-n)S}X(n)\big\|_{\ell^2}^2 \,\d t + \int_n^{n+1} \bigg\|\int_n^t e^{\mu(t-r)S} B(X(r))\,\d r \bigg\|_{\ell^2}^2 \,\d t \\ &\quad + \int_n^{n+1} \|\widehat Z_{t,n}\|_{\ell^2}^2 \,\d t + \int_n^{n+1} \bigg\|\nu \int_n^t e^{\mu(t-r)S} (S_{\theta}-S) X(r) \, \d r \bigg\|_{\ell^2}^2 \,\d t\\ &=: I_1+ I_2+ I_3+ I_4. \end{aligned}$$ For $I_1$, one has that $$ I_1 \le \int_n^{n+1} e^{-\mu(t-n)\lambda^2}\|X(n)\|_{\ell^2}^2 \,\d t \le \frac{1}{\mu \lambda^2} \|X(n)\|_{\ell^2}^2. $$ By the second estimate in Lemma \ref{lem-heat-integral-property}, we obtain that $$\begin{aligned} I_2 &\lesssim \frac{1}{\mu^2} \int_n^{n+1} \|B(X(r))\|_{H^{-2}}^2 \,\d r\\ &\lesssim \frac{\lambda^2}{\mu^2} \int_n^{n+1} \|X(r)\|_{\ell^2}^4 \,\d r \lesssim \frac{\lambda^2}{\mu^2} \|X(0)\|_{\ell^2}^2 \|X(n)\|_{\ell^2}^2, \end{aligned}$$ where in the second step we have used Lemma \ref{lem-nonlinearity-2} with $a=0,b=-1$ and the fact that $\|X(r)\|_{H^{-1}} \le \|X(r)\|_{\ell^2}$, while the last step is due to the energy equality \eqref{viscous-energy-balance}.
Again by the second estimate in Lemma \ref{lem-heat-integral-property}, we can derive $$\begin{aligned} I_4= \int_n^{n+1} \bigg\|\nu \int_n^t e^{\mu(t-r)S} (S_{\theta}-S) X(r) \, \d r \bigg\|_{\ell^2}^2 \,\d t \lesssim \frac{1}{\mu^2} \int_n^{n+1} \nu^2 \| (S_{\theta}-S) X(r) \|_{H^{-2}}^2 \,\d r, \end{aligned}$$ where $$\begin{aligned} \| (S_{\theta}-S) X(r) \|_{H^{-2}}^2 &= \sum\limits_{i=1}^{\infty} \frac{1}{\lambda^{4i}} \bigg(\sum\limits_{j=1}^{i-1}\theta_j^2 \lambda^{2(i-j)} \bigg)^2 X_i^2(r)\\ & \le \|\theta\|_{\ell^{\infty}}^4 \sum\limits_{i=1}^{\infty} X_i^2(r) \bigg(\sum\limits_{j=1}^{i-1} \lambda^{-2j} \bigg)^2\\ & \le \frac{\lambda^{-4}}{(1-\lambda^{-2})^2} \|\theta\|_{\ell^{\infty}}^4 \|X(r)\|_{\ell^2}^2. \end{aligned}$$ By equality \eqref{viscous-energy-balance} we have $\|X(r)\|_{\ell^2} \le \|X(n)\|_{\ell^2} $. Thus $$ I_4 \lesssim \frac{\nu^2}{\mu^2} \frac{\lambda^{-4}}{(1-\lambda^{-2})^2} \|\theta\|_{\ell^{\infty}}^4 \|X(n)\|_{\ell^2}^2. $$ Finally, for $I_3$, by It\^o isometry and Lemma \ref{similar-heat-semigroup-property} (i) with $\rho \in (0,1)$, we can obtain $$\begin{aligned} \E I_3 &= \int_n^{n+1} \E \bigg\|\sqrt{2\nu} \int_n^t \sum_i \lambda^i\sum_{j=1}^\infty \theta_j e^{\mu(t-r)S} A_{i,i+j} X(r) \, \d W_{i,j}(r)\bigg\|_{\ell^2}^2 \,\d t\\ &= 2\nu \int_n^{n+1} \E\int_n^t \sum_i \lambda^{2i}\sum_{j=1}^\infty \theta_j^2 \big\|e^{\mu(t-r)S} A_{i,i+j} X(r) \big\|_{\ell^2}^2 \,\d r \,\d t\\ & \le 2\nu \int_n^{n+1} \E\int_n^t C_{\rho}^2 [\mu(t-r)]^{-\rho} \sum_i \lambda^{2i}\sum_{j=1}^\infty \theta_j^2 \left\| A_{i,i+j} X(r) \right\|_{H^{-\rho}}^2 \,\d r \,\d t, \end{aligned}$$ where $$\begin{aligned} \sum_i \lambda^{2i}\sum_{j=1}^\infty \theta_j^2 \left\| A_{i,i+j} X(r) \right\|_{H^{-\rho}}^2 &= \sum_i \lambda^{2i}\sum_{j=1}^\infty \theta_j^2 \big(\lambda^{-2\rho i}X_{i+j}^2(r) + \lambda^{-2\rho (i+j)}X_i^2(r) \big)\\ &\le \bigg(\frac{\lambda^{-2-2\rho}}{1-\lambda^{-2\rho}}+ \frac{\lambda^{-2\rho}}{1-\lambda^{-2\rho}} \bigg) \|\theta\|_{\ell^{\infty}}^2 \|X(r)\|_{H^1}^2 \\ &\le \frac{2\lambda^{-2\rho}}{1-\lambda^{-2\rho}} \|\theta\|_{\ell^{\infty}}^2 \|X(r)\|_{H^1}^2. \end{aligned}$$ Therefore, we have $$\begin{aligned} \E I_3 &\le 4\nu C_{\rho}^2 \frac{\lambda^{-2\rho}}{1-\lambda^{-2\rho}} \|\theta\|_{\ell^{\infty}}^2 \mu^{-\rho} \int_n^{n+1} \E\|X(r)\|_{H^1}^2 \int_r^{n+1} (t-r)^{-\rho} \,\d t \,\d r \\ & \le 2\nu C_{\rho}^2 \frac{\lambda^{-2\rho}}{1-\lambda^{-2\rho}} \|\theta\|_{\ell^{\infty}}^2 \frac{1}{\kappa \mu^{\rho}} \frac{1}{1-\rho} \E\|X(n)\|_{\ell^2}^2. \end{aligned}$$ Combining the above estimates, we can derive that $$\begin{aligned} \E \|X(n+1)\|_{\ell^2}^2 &\lesssim \bigg( \frac{1}{\mu \lambda^2}+ \frac{\lambda^2}{ \mu^2} \|X(0)\|_{\ell^2}^2 + \frac{\nu^2}{ \mu^2} C(\lambda) \|\theta\|_{\ell^{\infty}}^4 + \|\theta\|_{\ell^{\infty}}^2 \frac{\nu C_{\rho}^2 C(\lambda) }{\kappa \mu^{\rho} (1-\rho)} \bigg)\E \|X(n)\|_{\ell^2}^2 \end{aligned}$$ and the proof is complete. \end{proof} Now we are ready to provide \begin{proof}[Proof of Theorem \ref{thm-enhance-dissipation}] We follow the ideas in the proof of \cite[Theorem 1.9]{FGL21c}. By Lemma \ref{lem-energy-decreasing}, there exists $ \delta\in(0,1)$ such that for any $ n \ge 1$, $$ \E \|X(n)\|_{\ell^2}^2 \le \delta\, \E \|X(n-1)\|_{\ell^2}^2 \le \cdots \le \delta^n \|X(0)\|_{\ell^2}^2. $$ Since $t \mapsto \|X(t)\|_{\ell^2}^2$ is $\P$-a.s. 
decreasing, we have $$ \E \bigg(\sup_{t\in[n,n+1]}\|X(t)\|_{\ell^2}^2 \bigg)=\E \|X(n)\|_{\ell^2}^2 \le \delta^n \|X(0)\|_{\ell^2}^2 = e^{-2\chi^{\prime}n} \|X(0)\|_{\ell^2}^2, $$ where $\chi^{\prime}= -\frac12 \log\delta >0$. Lemma \ref{lem-energy-decreasing} implies that for any $ p\ge 1, \chi >0$, we can choose a suitable pair $(\nu, \theta)$ such that $\chi^{\prime} > \chi(1+\frac{p}{2})$. Define the event $$ \hat G_n := \bigg\{ \omega \in \Omega: \sup\limits_{t\in[n,n+1]}\|X(t,\omega)\|_{\ell^2} > e^{-\chi n} \|X(0)\|_{\ell^2} \bigg\}. $$ Then by Chebyshev's inequality, $$ \sum\limits_{n} \P(\hat G_n) \le \sum_n \|X(0)\|_{\ell^2}^{-2} e^{2 \chi n}\, \E \bigg(\sup_{t\in[n,n+1]}\|X(t,\omega)\|_{\ell^2}^2 \bigg) \le \sum_n e^{2 (\chi-\chi^{\prime}) n} < \infty. $$ Hence by the Borel-Cantelli lemma, for $\P$-a.s. $\omega \in \Omega$, there exists a large $N(\omega) \ge 1$ such that for any $n> N(\omega)$, $$ \sup\limits_{t\in[n,n+1]}\|X(t,\omega)\|_{\ell^2} \le e^{-\chi n} \|X(0)\|_{\ell^2}. $$ For the case $0\le n \le N(\omega)$, by \eqref{viscous-energy-balance}, we have $$ \sup\limits_{t\in[n,n+1]}\|X(t,\omega)\|_{\ell^2} \le \|X(n,\omega)\|_{\ell^2} = e^{\chi n} e^{-\chi n} \|X(n,\omega)\|_{\ell^2} \le e^{\chi N(\omega)} e^{-\chi n} \|X(0)\|_{\ell^2}. $$ Then, setting $C(\omega)=e^{\chi(1+N(\omega))}$, we can easily get that, $\P$-a.s. for any $ t\ge0$, $\|X(t,\omega)\|_{\ell^2}\le C(\omega) e^{-\chi t} \|X(0)\|_{\ell^2}$. We now prove that $C(\omega)$ has finite $p$-th moment. Since we can also define $N(\omega)$ as $$ N(\omega)=\sup\bigg\{ n\in \mathbb{Z}_{+}: \sup\limits_{t\in[n,n+1]}\|X(t,\omega)\|_{\ell^2} > e^{-\chi n} \|X(0)\|_{\ell^2} \bigg\}, $$ we have that $$ \{ \omega \in \Omega: N(\omega)\ge k \}=\bigcup_{n=k}^{\infty} \hat G_n. $$ Therefore we obtain $$ \P(\{ N(\omega)\ge k \}) \le \sum\limits_{n=k}^{\infty} \P(\hat G_n) \le \sum\limits_{n=k}^{\infty} e^{2 (\chi-\chi^{\prime}) n} = \frac{e^{2 (\chi-\chi^{\prime}) k}}{1-e^{2 (\chi-\chi^{\prime})} }. $$ Then we can derive that $$ \E e^{\chi p N(\omega)} = \sum\limits_{k=0}^{\infty} e^{\chi p k} \P(\{ N(\omega)= k \}) \le \frac{1}{1-e^{2 (\chi-\chi^{\prime})} } \sum\limits_{k=0}^{\infty} e^{\chi pk}e^{2 (\chi-\chi^{\prime}) k} < \infty, $$ where the last inequality is due to $\chi^{\prime} > \chi(1+\frac{p}{2})$. Hence $C(\omega)$ has finite $p$-th moment. \end{proof} \bigskip \noindent\textbf{Acknowledgement.} The first named author gratefully acknowledges the financial support of the National Key R\&D Program of China (No. 2020YFA0712700), the National Natural Science Foundation of China (Nos. 11931004, 12090014), and the Youth Innovation Promotion Association, CAS (Y2021002).
\section{\label{intro}Introduction} In the standard model the fundamental fermions come in families. In writing down the theory, one may start by introducing just one family and then repeat the same procedure to introduce copies of the first family. Why do quarks and leptons come in repetitive structures--families? How many families are there? How can we understand the interrelation and mass hierarchy between the families? In addition, the standard model cannot explain the tiny masses and mixing profile of neutrinos, nor the closeness to unity of the quark mixing matrix \cite{pdg}. These have been the central puzzles, known as the flavor question, in particle physics beyond the standard model. The current neutrino experimental data are consistent with the tribimaximal form proposed by Harrison-Perkins-Scott (HPS), which, apart from phase redefinitions, is given by \cite{hps} \begin{eqnarray} U_{\mathrm{HPS}}=\left( \begin{array}{ccc} \frac{2}{\sqrt{6}} &\frac{1}{\sqrt{3}} &0\\ -\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{3}} &\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{3}} &-\frac{1}{\sqrt{2}} \end{array}\right),\label{eq:1} \end{eqnarray} where the large mixing angles are completely different from the quark mixing ones defined by the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It is an interesting challenge to formulate dynamical principles that can lead to these flavor mixing patterns for quarks and leptons in a completely natural way as first approximations. A fascinating way seems to be the use of some discrete non-Abelian groups \cite{kj} as family symmetries added to the standard model gauge group. There is a series of models based on the group $A_4$ \cite{A4,dlsh}, $T'$ \cite{T'}, and more recently $S_4$ \cite{old S4, new S4}---the group of permutations of four objects, which is also the symmetry group of the cube. We would like to extend the above application to the $\mathrm{SU}(3)_C\otimes \mathrm{SU}(3)_L \otimes \mathrm{U}(1)_X$ (3-3-1) gauge model \cite{331m,331r,ecn331} for the following reasons. The $[\mathrm{SU}(3)_L]^3$ anomaly cancelation in the model requires the number of $\mathrm{SU}(3)_L$ fermion triplets to equal that of antitriplets. Taking into account an unrestricted number of standard model families with corresponding extensions of the lepton and quark representations, the number of families turns out to be a multiple of 3. Furthermore, the QCD asymptotic freedom condition constrains the number of quark families to be less than or equal to 5. The family number is thus exactly 3. The model therefore provides a partial explanation of the family number, as also required by flavor symmetries such as $S_4$ with their 3-dimensional representations. In addition, due to the anomaly cancelation, one family of quarks has to transform under $\mathrm{SU}(3)_L$ differently from the other two. We should thus look for a family symmetry group with 2- and 3-dimensional irreducible representations acting respectively on the 2- and 3-family indices, the simplest of which is just $S_4$. Note that $S_4$ has not been considered before in this kind of 3-3-1 model. For similar works on $A_4$, we refer the reader to Refs. \cite{dlsh}. There are two typical variants of the 3-3-1 model as far as the lepton sectors are concerned. In the minimal version, the three $\mathrm{SU}(3)_L$ lepton triplets are of the form $(\nu_L,l_L,l^c_R)$, where $l_{R}$ are ordinary right-handed charged leptons \cite{331m}.
In the second version, the third components of the lepton triplets include right-handed neutrinos, $(\nu_L,l_L,\nu^c_R)$ \cite{331r}. In trying to recover the tribimaximal form in the present work, our analysis shows that a possibility close to the typical versions is to replace the right-handed neutrinos by new standard model fermion singlets ($N_R$) with vanishing lepton number \cite{matd}. The resulting model is close to that of our previous work \cite{dlsh}. The neutrinos thus gain masses only from contributions of SU(3)$_L$ scalar antisextets. The antisextets contain tiny vacuum expectation values (VEVs) in the first components, similar to the case of the standard model with scalar triplets. To avoid the decay of $Z$ into the Majorons associated with these components, the lepton-number violating potential should be turned on. The lepton charge is therefore no longer an exact symmetry; thereby the Majorons can acquire masses large enough to escape the $Z$ decay \cite{matd}. Assuming the antisextets to be very heavy, the potential minimization can provide a natural explanation of the expected vacuum alignments as well as of the smallness of the seesaw contributions responsible for the neutrino masses. The rest of this article is organized as follows. In Sec. \ref{model}, we propose the model with $S_4$; the masses and mixing matrices of leptons and quarks are then obtained. In Sec. \ref{vev} we consider the Higgs potential and minimization conditions. We summarize our results and make conclusions in Sec.~\ref{conclus}. Appendix \ref{apa} is devoted to the $S_4$ group with its Clebsch-Gordan coefficients. Appendix \ref{apt} presents the lepton numbers and lepton parities of the model particles. \section{\label{model}The model} The fermions in this model transform under the $[\mathrm{SU}(3)_L, \mathrm{U}(1)_X, \mathrm{U}(1)_\mathcal{L},\underline{S}_4]$ symmetries, respectively, as \begin{eqnarray} \psi_{L} &\equiv& \psi_{1,2,3L}=\left( \begin{array}{c} \nu_{1,2,3L} \\ l_{1,2,3L} \\ N^c_{1,2,3R} \\ \end{array} \right)\sim [3,-1/3,2/3,\underline{3}],\\ l_{1R}&\sim&[1,-1,1,\underline{1}],\hspace*{0.5cm} l_R\equiv l_{2,3R}\sim[1,-1,1,\underline{2}],\\ Q_{3L}&=& \left( \begin{array}{c} u_{3L} \\ d_{3L} \\ U_{L} \\ \end{array} \right)\sim[3,1/3,-1/3,\underline{1}],\hspace*{0.5cm} Q_{L}\equiv Q_{1,2L}= \left( \begin{array}{c} d_{1,2L} \\ -u_{1,2L} \\ D_{1,2L} \\ \end{array} \right)\sim[3^*,0,1/3,\underline{2}], \\ u_{R}&\equiv&u_{1,2,3R}\sim[1,2/3,0,\underline{3}],\hspace*{0.5cm} d_{R}\equiv d_{1,2,3R}\sim[1,-1/3,0,\underline{3}],\\ U_R&\sim&[1,2/3,-1,\underline{1}],\hspace*{0.5cm} D_R\equiv D_{1,2R}\sim[1,-1/3,1,\underline{2}],\end{eqnarray} where the numbered subscripts on the fields indicate the respective families, which in order also define the components of their $S_4$ multiplet representations. The reader is referred to Appendix \ref{apa} for more details of the $S_4$ group representations. As usual, the $X$ charge is related to the electric charge operator as $Q=T_3-\frac{1}{\sqrt{3}}T_8+X$, where $T_a$ $(a=1,2,...,8)$ are the $\mathrm{SU}(3)_L$ charges, satisfying $\mathrm{Tr}[T_aT_b]=\frac 1 2 \delta_{ab}$. The $N_R$, as mentioned above, are exotic neutral fermions with lepton number $L(N_R)=0$ \cite{matd,dlsh}. Hence the lepton number $L$ in this model does not commute with the gauge symmetry. We can therefore search for a new conserved charge $\mathcal{L}$, as given in the square brackets above, which is defined in terms of the ordinary lepton number by $L=\frac{2}{\sqrt{3}}T_8+\mathcal{L}$ \cite{clong,dlsh}.
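As a quick illustration of this definition (using the standard normalization $T_8=\frac{1}{2\sqrt{3}}\,\mathrm{diag}(1,1,-2)$ on a triplet, which follows from $\mathrm{Tr}[T_aT_b]=\frac 1 2 \delta_{ab}$), acting on a lepton triplet $\psi_L$ with $\mathcal{L}(\psi_L)=2/3$ one finds \begin{eqnarray*} L=\frac{2}{\sqrt{3}}T_8+\mathcal{L}=\mathrm{diag}\left(\frac 1 3,\frac 1 3,-\frac 2 3\right)+\frac 2 3 =\mathrm{diag}(1,1,0), \end{eqnarray*} i.e., $L(\nu_L)=L(l_L)=1$ and $L(N^c_R)=0$, in accordance with the assignment $L(N_R)=0$ above.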
This definition is only a convenient one for accounting for the global lepton numbers of the model particles, because $T_8$ is a gauged charge, and thus $L$ would consequently be gauged. The gauging of the $L$ charge deserves further study; in the present work we will take it as a global charge. This is possible since $T_8$ can be considered as the charge of a replication of the $\mathrm{SU}(3)_L$ group but taken globally; thus $L$ is not gauged. Finally, the lepton charge arranged in this way serves to suppress unwanted interactions (due to the $\mathrm{U}(1)_{\mathcal{L}}$ symmetry) so as to yield the tribimaximal form, as shown below. The $U$ and $D_{1,2}$, as introduced above, are exotic quarks carrying lepton numbers $L(U)=-1$ and $L(D_{1,2})=1$, known as leptoquarks. The lepton parity is introduced as $P_l=(-)^L$, which is a residual symmetry of $L$. The particles with $L=0,\pm 2$, such as $N_R$, ordinary quarks and bileptons, have $P_l=1$; the particles with $L=\pm 1$, such as ordinary leptons and exotic quarks, have $P_l=-1$. Any non-zero VEV with odd parity, $P_l=-1$, will break this symmetry spontaneously. For convenience in reading, the numbers $L$ and $P_l$ of the component particles are given in Appendix~\ref{apt}. In the following, we consider the possibilities of generating the masses for the fermions. The scalar multiplets needed for this purpose are introduced accordingly. \section{Fermion mass} \subsection{Lepton mass} To generate masses for the charged leptons, we need two scalar multiplets: \begin{eqnarray} \phi = \left(% \begin{array}{c} \phi^+_1 \\ \phi^0_2 \\ \phi^+_3 \\ \end{array}% \right)\sim [3,2/3,-1/3, \underline{3}],\hspace*{0.5cm} \phi' = \left(% \begin{array}{c} \phi'^+_1 \\ \phi'^0_2 \\ \phi'^+_3 \\ \end{array}% \right)\sim [3,2/3,-1/3, \underline{3}'], \end{eqnarray} with the VEVs $\langle \phi \rangle = (v,v,v)$ and $\langle \phi' \rangle = (v',v',v')$ written as those of the $S_4$ components, respectively (these will be derived from the potential minimization conditions). Hereafter, the numbered subscripts on the component scalar fields are $\mathrm{SU}(3)_L$ indices; the $S_4$ indices are suppressed and should be understood. The Yukawa interactions are \begin{eqnarray} -\mathcal{L}_{l}=h_1 (\bar{\psi}_L \phi)_{\underline{1}} l_{1R}+h_2 (\bar{\psi}_L \phi)_{\underline{2}} l_{R}+h_3 (\bar{\psi}_L \phi')_{\underline{2}} l_{R}+h.c.\end{eqnarray} The mass Lagrangian of the charged leptons reads $-\mathcal{L}^{\mathrm{mass}}_l=(\bar{l}_{1L},\bar{l}_{2L},\bar{l}_{3L}) M_l (l_{1R},l_{2R},l_{3R})^T+h.c.$, where \begin{eqnarray} M_l= \left(% \begin{array}{ccc} h_1v & h_2v-h_3v' & h_2 v+h_3v' \\ h_1v & (h_2v-h_3v')\omega & (h_2 v+h_3v')\omega^2 \\ h_1v & (h_2v-h_3v')\omega^2 & (h_2 v+h_3v')\omega \\ \end{array}% \right),\end{eqnarray} with $\omega=e^{2\pi i/3}$ being the cube root of unity. The mass matrix is then diagonalized as \begin{eqnarray} U^\dagger_L M_lU_R=\left(% \begin{array}{ccc} \sqrt{3}h_1 v & 0 & 0 \\ 0 & \sqrt{3}(h_2 v - h_3v') & 0 \\ 0 & 0 & \sqrt{3}(h_2 v+h_3v') \\ \end{array}% \right)=\left(% \begin{array}{ccc} m_e & 0 & 0 \\ 0 & m_\mu & 0 \\ 0 & 0 & m_\tau \\ \end{array}% \right),\end{eqnarray} where \begin{eqnarray} U_L=\frac{1}{\sqrt{3}}\left(% \begin{array}{ccc} 1 & 1 & 1 \\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \\ \end{array}% \right),\hspace*{0.5cm} U_R=1.\label{lep}\end{eqnarray} We see that the masses of the muon and tauon are separated by the $\phi'$ triplet; this is the reason why we introduce $\phi'$ in addition to $\phi$.
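The diagonalization above is straightforward to check numerically. The following Python sketch is not part of the model construction itself; the values of $h_{1,2,3}$, $v$ and $v'$ are arbitrary illustrative inputs, chosen only to make the three masses distinct.
\begin{verbatim}
import numpy as np

# Numerical cross-check that U_L^dagger M_l U_R is diagonal (U_R = 1).
# All couplings and VEVs below are hypothetical illustrative numbers.
w = np.exp(2j * np.pi / 3)                    # cube root of unity, omega
h1, h2, h3, v, vp = 0.3, 0.7, 0.2, 1.0, 0.5   # v' is written as vp

a, b, c = h1 * v, h2 * v - h3 * vp, h2 * v + h3 * vp
Ml = np.array([[a, b,        c       ],
               [a, b * w,    c * w**2],
               [a, b * w**2, c * w   ]])

UL = np.array([[1, 1,    1   ],
               [1, w,    w**2],
               [1, w**2, w   ]]) / np.sqrt(3)

print(np.round(UL.conj().T @ Ml, 12))
# -> diag(sqrt(3)h1 v, sqrt(3)(h2 v - h3 v'), sqrt(3)(h2 v + h3 v'))
#  = diag(m_e, m_mu, m_tau); off-diagonal residues are float noise.
\end{verbatim}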
Notice that the couplings $\bar{\psi}^c_L \psi_L\phi$ and $\bar{\psi}^c_L \psi_L\phi'$ are suppressed because they would violate the $\mathcal{L}$ symmetry. Therefore $\bar{\psi}^c_L\psi_L$ can instead couple to SU(3)$_L$ antisextets to generate masses for the neutrinos. The antisextets in this model transform as \begin{eqnarray} \sigma= \left(% \begin{array}{ccc} \sigma^0_{11} & \sigma^+_{12} & \sigma^0_{13} \\ \sigma^+_{12} & \sigma^{++}_{22} & \sigma^+_{23} \\ \sigma^0_{13} & \sigma^+_{23} & \sigma^0_{33} \\ \end{array}% \right)\sim [6^*,2/3,-4/3,\underline{1}], \end{eqnarray} \begin{eqnarray} s= \left(% \begin{array}{ccc} s^0_{11} & s^+_{12} & s^0_{13} \\ s^+_{12} & s^{++}_{22} & s^+_{23} \\ s^0_{13} & s^+_{23} & s^0_{33} \\ \end{array}% \right)\sim [6^*,2/3,-4/3,\underline{3}]. \end{eqnarray} The Yukawa interactions are \begin{eqnarray} -\mathcal{L}_\nu&=&\frac 1 2 x (\bar{\psi}^c_L \psi_L)_{\underline{1}}\sigma+\frac 1 2 y (\bar{\psi}^c_L \psi_L)_{\underline{3}}s+h.c.\nonumber \\ &=& \frac 1 2 x(\bar{\psi}^c_{1L}\psi_{1L}+\bar{\psi}^c_{2L}\psi_{2L}+\bar{\psi}^c_{3L}\psi_{3L})\sigma\nonumber \\ && +y(\bar{\psi}^c_{2L}\psi_{3L}s_1+\bar{\psi}^c_{3L}\psi_{1L}s_2+\bar{\psi}^c_{1L}\psi_{2L}s_3)\nonumber \\ &&+h.c.\label{yn}\end{eqnarray} The VEV of $s$ is set as $(\langle s_1\rangle,0,0)$ under $S_4$ (which is also a natural minimization condition for the scalar potential), where \begin{eqnarray} \langle s_1\rangle=\left(% \begin{array}{ccc} \lambda_{s} & 0 & v_{s} \\ 0 & 0 & 0 \\ v_{s} & 0 & \Lambda_{s} \\ \end{array}% \right).\label{s1}\end{eqnarray} The VEV of $\sigma$ is \begin{eqnarray} \langle \sigma \rangle=\left(% \begin{array}{ccc} \lambda_\sigma & 0 & v_\sigma \\ 0 & 0 & 0 \\ v_\sigma & 0 & \Lambda_\sigma \\ \end{array}% \right).\label{sim}\end{eqnarray} The mass Lagrangian for the neutrinos is defined by \begin{eqnarray} -\mathcal{L}^{\mathrm{mass}}_\nu=\frac 1 2 \bar{\chi}^c_L M_\nu \chi_L+ h.c.,\hspace*{0.5cm} \chi_L\equiv \left(% \begin{array}{c} \nu_L \\ N^c_R \\ \end{array}% \right),\hspace*{0.5cm} M_\nu\equiv\left(% \begin{array}{cc} M_L & M^T_D \\ M_D & M_R \\ \end{array}% \right),\label{nm}\end{eqnarray} where $\nu=(\nu_{1},\nu_{2},\nu_{3})^T$ and $N=(N_1,N_2,N_3)^T$. The mass matrices are then obtained by \begin{eqnarray} M_{L,R,D}=\left(% \begin{array}{ccc} a_{L,R,D} & 0 & 0 \\ 0 & a_{L,R,D} & b_{L,R,D} \\ 0 & b_{L,R,D} & a_{L,R,D} \\ \end{array}% \right),\end{eqnarray} with \begin{eqnarray} a_L=x \lambda_\sigma,\ a_D=x v_\sigma,\ a_R=x \Lambda_\sigma,\ b_L=y \lambda_{s},\ b_D= y v_s,\ b_R=y \Lambda_s.\end{eqnarray} The VEVs $\Lambda_{\sigma,s}$ break the 3-3-1 gauge symmetry down to that of the standard model, and provide the masses for the neutral fermions $N_R$ and the new gauge bosons: the neutral $Z'$ and the charged $Y^{\pm}$ and $X^{0,0*}$. The $\lambda_{\sigma,s}$ and $v_{\sigma,s}$ belong to the second stage of the symmetry breaking, from the standard model down to the $\mathrm{SU}(3)_C \otimes \mathrm{U}(1)_Q$ symmetry, and contribute to the neutrino masses. Hence, to keep consistency, we assume that $\Lambda_{\sigma,s}\gg v_{\sigma,s},\lambda_{\sigma,s}$. The natural smallness of the lepton number violating VEVs $\lambda_{\sigma,s}$ and $v_{\sigma,s}$ will be explained in Section \ref{vev}.
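Before giving the closed form of the light-neutrino mass matrix (worked out in the next paragraph), one can sanity-check the block structure of \eqref{nm} numerically via the standard seesaw reduction $M_L-M_D^TM_R^{-1}M_D$. In the Python sketch below the couplings $x,y$ and the VEVs are hypothetical numbers, chosen only to respect the hierarchy $\Lambda_{\sigma,s}\gg v_{\sigma,s},\lambda_{\sigma,s}$; the matrix $U_\nu$ anticipates the diagonalization given below.
\begin{verbatim}
import numpy as np

# Seesaw reduction of (nm): M_eff = M_L - M_D^T M_R^{-1} M_D.
# Couplings and VEVs are hypothetical, respecting Lambda >> v, lambda.
def texture(a, b):
    return np.array([[a, 0, 0], [0, a, b], [0, b, a]], dtype=complex)

x, y = 0.6, 0.4
lam_s, v_s, Lam_s = 1e-10, 1e-1, 1e3   # lambda_s, v_s, Lambda_s
lam_o, v_o, Lam_o = 2e-10, 2e-1, 2e3   # lambda_sigma, v_sigma, Lambda_sigma

ML = texture(x * lam_o, y * lam_s)
MD = texture(x * v_o,   y * v_s)
MR = texture(x * Lam_o, y * Lam_s)

Meff = ML - MD.T @ np.linalg.inv(MR) @ MD
print(np.round(Meff, 12))        # keeps the same (a', a, b) texture

P = np.array([[0, 1, 0],
              [1/np.sqrt(2), 0, -1/np.sqrt(2)],
              [1/np.sqrt(2), 0,  1/np.sqrt(2)]])
Unu = P @ np.diag([1, 1, -1j])
print(np.round(Unu.T @ Meff @ Unu, 12))   # diag(a+b, a', b-a)
\end{verbatim}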
The three active neutrinos ($\sim\nu_L$) therefore gain masses via a combination of type I and type II seesaw mechanisms derived from (\ref{nm}): \begin{eqnarray} M_{\mathrm{eff}}=M_L-M_D^TM_R^{-1}M_D= \left(% \begin{array}{ccc} a' & 0 & 0 \\ 0 & a & b \\ 0 & b & a \\ \end{array}% \right),\label{neu}\end{eqnarray} where \begin{eqnarray} a'&=&a_L-\frac{a^2_D}{a_R},\nonumber \\ a&=&a_L+2a_Db_D\frac{b_R}{a_R^2-b^2_R}-(a^2_D+b^2_D)\frac{a_R}{a_R^2-b^2_R},\nonumber \\ b&=&b_L-2a_Db_D\frac{a_R}{a_R^2-b^2_R}+(a^2_D+b^2_D)\frac{b_R}{a_R^2-b^2_R}.\label{neu3}\end{eqnarray} We can diagonalize the mass matrix (\ref{neu}) as follows: \begin{eqnarray} U^T_\nu M_{\mathrm{eff}} U_\nu=\left(% \begin{array}{ccc} a+b & 0 & 0 \\ 0 & a' & 0 \\ 0 & 0 & b-a \\ \end{array}% \right)=\left(% \begin{array}{ccc} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \\ \end{array}% \right),\label{neu2} \end{eqnarray} where \begin{eqnarray} U_\nu=\left(% \begin{array}{ccc} 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ \end{array}% \right)\left(% \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -i \\ \end{array}% \right).\label{neu1}\end{eqnarray} Here the nontrivial $2\times 2$ block of (\ref{neu}) has eigenvalues $a\pm b$ with eigenvectors $(0,1,\pm 1)^T/\sqrt{2}$, while the phase factor $-i$ in (\ref{neu1}) flips the sign of the eigenvalue $a-b$, so that $m_3=b-a$. Combined with (\ref{lep}), the lepton mixing matrix takes the tribimaximal form: \begin{eqnarray} U^\dagger_L U_\nu=\left(% \begin{array}{ccc} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\ \end{array}% \right)=U_{\mathrm{HPS}},\end{eqnarray} which is a main result of the paper. If the lepton parity is exact and spontaneously unbroken, then $a_D$ and $b_D$ vanish. The neutrinos then gain masses only from the type II seesaw, due to the VEVs of the first components of $\sigma$ and $s$, as can be seen from (\ref{neu3}) with $a_D=b_D=0$. If this parity is broken, there is no reason to prevent the 13 and 31 components of $\sigma$ and $s$ from acquiring the nonzero VEVs entering $a_D$ and $b_D$; the neutrino masses then receive additional contributions from the type I seesaw as well. Deviations from the tribimaximal form, if required, can be explained by $S_4$-breaking soft terms; alternatively, if $\mathcal{L}$ were slightly violated, the terms breaking this charge, as mentioned above, would also contribute. \subsection{Quark mass} To generate masses for the quarks, we additionally introduce the following scalar multiplets: \begin{eqnarray} \chi&=& \left(% \begin{array}{c} \chi^0_1 \\ \chi^-_2 \\ \chi^0_3 \\ \end{array}% \right)\sim[3,-1/3,2/3,\underline{1}],\\ \eta&=& \left(% \begin{array}{c} \eta^0_1 \\ \eta^-_2 \\ \eta^0_3 \\ \end{array}% \right)\sim[3,-1/3,-1/3,\underline{3}],\hspace*{0.5cm} \eta'= \left(% \begin{array}{c} \eta'^0_1 \\ \eta'^-_2 \\ \eta'^0_3 \\ \end{array}% \right)\sim[3,-1/3,-1/3,\underline{3}'].\end{eqnarray} The Yukawa interactions are \begin{eqnarray} -\mathcal{L}_q &=& f_3 \bar{Q}_{3L}\chi U_R + f \bar{Q}_{L}\chi^* D_{R}\nonumber \\ &&+h^d_3 \bar{Q}_{3L}(\phi d_R)_{\underline{1}} + h^u_3 \bar{Q}_{3L}(\eta u_R)_{\underline{1}}\nonumber \\ && + h^u \bar{Q}_{L}(\phi^* u_R)_{\underline{2}}+ h^d \bar{Q}_{L}(\eta^* d_R)_{\underline{2}}\nonumber \\ &&+h'^u \bar{Q}_{L}(\phi'^* u_R)_{\underline{2}}+h'^d \bar{Q}_{L}(\eta'^* d_R)_{\underline{2}}\nonumber \\ &&+h.c.\label{yv}\end{eqnarray} Suppose that the VEVs of $\eta$, $\eta'$ and $\chi$ are $(u,u,u)$, $(u',u',u')$ and $w$, where $u=\langle \eta^0_1\rangle$, $u'=\langle \eta'^0_1\rangle$, $w=\langle \chi^0_3\rangle$.
The other VEVs $\langle \eta^0_3\rangle$, $\langle \eta'^0_3\rangle$, $\langle\chi^0_1\rangle$ vanish if the lepton parity is conserved; otherwise they can develop nonzero values. In addition, the VEV $w$ also breaks the 3-3-1 gauge symmetry down to that of the standard model, and provides the masses for the exotic quarks $U$ and $D$ as well as the new gauge bosons. The $u,u'$ as well as $v,v'$ break the standard model symmetry, and give masses to the ordinary quarks, charged leptons and gauge bosons. For consistency with the effective theory, we assume that $w$ is much larger than the VEVs of $\phi$ and $\eta$. In the following we consider the first case, with unbroken parity. The exotic quarks get masses $m_U=f_3 w$ and $m_{D_{1,2}}=f w$, where $U$ and $D_{1,2}$ are already the mass eigenstates. The mass matrices for the ordinary up-quarks and down-quarks are, respectively, obtained as follows: \begin{eqnarray} M_u &=& \left(% \begin{array}{ccc} -(h^u v+h'^u v') & -(h^u v+h'^u v') \omega^2 & -(h^u v+h'^u v') \omega \\ -(h^u v-h'^u v') &-(h^u v-h'^u v') \omega & -(h^u v-h'^u v') \omega^2 \\ h^u_3 u & h^u_3 u & h^u_3 u \\ \end{array}% \right),\\ M_d &=& \left(% \begin{array}{ccc} h^d u+h'^d u' & (h^d u+h'^d u') \omega^2 & (h^d u+h'^d u') \omega \\ h^d u-h'^d u' &(h^d u-h'^d u') \omega & (h^d u-h'^d u') \omega^2 \\ h^d_3 v & h^d_3 v & h^d_3 v \\ \end{array}% \right).\end{eqnarray} Let us define \begin{eqnarray} A= \frac{1}{\sqrt{3}}\left(% \begin{array}{ccc} 1 & 1 & 1 \\ \omega & \omega^2 & 1 \\ \omega^2 & \omega & 1 \\ \end{array}% \right).\end{eqnarray} We then have \begin{eqnarray} M_u A&=& \left(% \begin{array}{ccc} -\sqrt{3}(h^u v+h'^u v') & 0 & 0 \\ 0 & -\sqrt{3}(h^u v-h'^u v') & 0 \\ 0 & 0 & \sqrt{3}h^u_3 u \\ \end{array}% \right)=\left(% \begin{array}{ccc} m_u & 0 & 0 \\ 0 & m_c & 0 \\ 0 & 0 & m_t \\ \end{array}% \right), \nonumber \\ M_d A&=& \left(% \begin{array}{ccc} \sqrt{3}(h^d u+h'^d u') & 0 & 0 \\ 0 & \sqrt{3}(h^d u-h'^d u') & 0 \\ 0 & 0 & \sqrt{3}h^d_3 v \\ \end{array}% \right) =\left(% \begin{array}{ccc} m_d & 0 & 0 \\ 0 & m_s & 0 \\ 0 & 0 & m_b \\ \end{array}% \right).\end{eqnarray} These results again follow from $1+\omega+\omega^2=0$; for instance, the first row of $M_u$ contracted with the first column of $A$ gives $-(h^u v+h'^u v')(1+\omega^3+\omega^3)/\sqrt{3}=-\sqrt{3}(h^u v+h'^u v')$, while its contractions with the other two columns vanish. In analogy with the charged leptons, the $u$ and $c$ quark masses are split by the $\phi'$ scalar. We also see that the introduction of $\eta'$ is necessary to provide different masses for the $d$ and $s$ quarks. The unitary matrices, which couple the left-handed up- and down-quarks to those in the mass bases, are $U^u_L=1$ and $U^d_L=1$, respectively. Therefore we get the quark mixing matrix \begin{eqnarray} U_\mathrm{CKM}=U^{d\dagger}_L U^u_L=1.\label{a41}\end{eqnarray} This is another important result of our paper, since the experimentally measured quark mixing matrix is close to the unit matrix. In this case, flavor changing neutral currents (FCNCs) can arise from one-loop processes with the exchange of heavy exotic quarks: see, for example, a contribution to the $K^0-\bar{K}^0$ mixing due to the box diagram in Fig. \ref{h1}.
\begin{figure}[h] \begin{center} \begin{picture}(200,100)(0,0) \ArrowLine(50,25)(100,25)\ArrowLine(100,25)(100,75)\ArrowLine(100,75)(50,75) \DashLine(100,25)(150,25){2}\DashLine(100,75)(150,75){2} \ArrowLine(150,25)(200,25)\ArrowLine(150,75)(150,25)\ArrowLine(200,75)(150,75) \Text(110,50)[]{$U_L$}\Text(160,50)[]{$U_L$} \Text(60,30)[]{$s_R$} \Text(60,80)[]{$d_R$}\Text(190,80)[]{$s_R$} \Text(190,30)[]{$d_R$} \Text(125,82)[]{$S$} \Text(125,32)[]{$S'$} \end{picture} \caption{\label{h1} A contribution to $K^0-\bar{K}^0$ mixing, where $S,S'$ are some combinations of the singly-charged scalars, with vertices proportional to $h^d$ (or $h'^d$) and the appropriate mixing matrix elements.} \end{center} \end{figure} The amplitude obtained after integrating out the heavy particles is proportional to $[(h^d)^4/(16\pi^2 m^2_U)](\bar{d}_R \gamma^\mu s_R)(\bar{d}_R \gamma_\mu s_R)$, which is strongly suppressed by the loop factor and the exotic quark mass; as an illustration, taking $h^d\sim 10^{-2}$ and $m_U\sim 1\ \mathrm{TeV}$ gives a coefficient of order $10^{-10}\ \mathrm{TeV}^{-2}$, safely below typical $\Delta S=2$ bounds. A deviation of the CKM matrix from the identity can be induced by FCNC effects involving the left-handed quarks, but such deviations are also highly suppressed by the mass of the exotic quarks. If the lepton parity is spontaneously broken, i.e. $\langle \eta^0_3\rangle$, $\langle \eta'^0_3\rangle$, $\langle\chi^0_1\rangle$ $\neq 0$, then the following effects exist: (i) mixings between ordinary quarks and exotic quarks (namely, $u_{1,2,3}$ mix with $U$ and $d_{1,2,3}$ with $D_{1,2}$), which can lead to tree-level FCNC processes; (ii) the result (\ref{a41}) no longer holds, and the CKM matrix is not unitary. A small mixing among the ordinary quarks may exist due to this violation. Let us recall that in the ordinary 3-3-1 model without $S_4$, the Yukawa interactions like (\ref{yv}) might additionally contain terms explicitly violating $\mathcal{L}$ \cite{clong}, which can also be a source of effects similar to (i) and (ii). Such mixings in the 3-3-1 model have been studied in a number of papers \cite{qmixng}, so we will not discuss them further. We remark that the mixings will be very small, since the parity-breaking VEVs are strongly suppressed by the violating potentials, for the same reason as $v_{\sigma,s}$ in (\ref{sI}). In any case, the solution with the residual symmetry $P_l$ preserved, as in the first case, should be the more natural one. \section{\label{vev}Vacuum Alignment} We can separate the general scalar potential into \begin{eqnarray} V_{\mathrm{total}}=V_{\mathrm{tri}}+V_{\mathrm{sext}}+V_{\mathrm{tri-sext}}+\overline{V},\end{eqnarray} where $V_{\mathrm{tri}}$ and $V_{\mathrm{sext}}$ respectively consist of the $\mathrm{SU}(3)_L$ scalar triplets and sextets, whereas $V_{\mathrm{tri-sext}}$ contains the terms connecting the two sectors. Moreover, $V_{\mathrm{tri,sext,tri-sext}}$ conserve the $\mathcal{L}$ charge and the $S_4$ symmetry, while $\overline{V}$ includes possible soft terms explicitly violating these symmetries. Here, by soft terms we mean the trilinear as well as the quartic ones. The reason for imposing $\bar{V}$ will be shown below. The details of the potentials are given as follows. We first denote $V(\textit{X}\rightarrow\textit{X}_1,\textit{Y}\rightarrow\textit{Y}_1,\cdots) \equiv V(X,Y,\cdots)\!\!\!\mid_{X=X_1,Y=Y_1,\cdots}$; for instance, $V(\phi\rightarrow \phi')$ below denotes the expression $V(\phi)$ with every occurrence of $\phi$ replaced by $\phi'$. Notice also that $(\mathrm{Tr}\textit{A})(\mathrm{Tr}\textit{B})=\mathrm{Tr}(\textit{A}\mathrm{Tr}\textit{B})$.
$V_{\mathrm{tri}}$ is a sum of \begin{eqnarray} V(\chi)&=&\mu_{\chi}^2\chi^\+\chi +\lambda^{\chi}({\chi}^\+\chi)^2,\\ V(\phi)&=&\mu_\phi^2(\phi^\+\phi)_{\underline{1}} +\lambda_1^\phi(\phi^\+\phi)_{\underline{1}}(\phi^\+\phi)_{\underline{1}} +\lambda_2^\phi(\phi^\+\phi)_{\underline{2}}(\phi^\+\phi)_{\underline{2}}\nonumber \\ &&+\lambda_3^\phi(\phi^\+\phi)_{\underline{3}}(\phi^\+\phi)_{\underline{3}} +\lambda_4^\phi(\phi^\+\phi)_{\underline{3}'}(\phi^\+\phi)_{\underline{3}'},\\ V(\phi')&=&V(\phi\rightarrow \phi'),\hspace*{0.5cm} V(\eta)=V(\phi\rightarrow \eta),\hspace*{0.5cm} V(\eta')=V(\phi\rightarrow \eta'),\\ V(\phi,\chi)&=&\lambda_1^{\phi\chi}({\phi}^\+\phi)_{\underline{1}}({\chi}^\+\chi) +\lambda_2^{\phi\chi}({\phi}^\+\chi)(\chi^\+\phi),\\ V(\phi',\chi)&=&V(\phi\rightarrow \phi',\chi),\hspace*{0.5cm} V(\eta,\chi)=V(\phi\rightarrow \eta,\chi),\hspace*{0.5cm} V(\eta',\chi)=V(\phi\rightarrow \eta',\chi), \\ V(\phi,\eta)&=&{\lambda_1^{\phi\eta}}({\phi}^\+\phi)_{\underline{1}}({\eta}^\+\eta)_{\underline{1}} +{\lambda_2^{\phi\eta}}({\phi}^\+\phi)_{\underline{2}}({\eta}^\+\eta)_{\underline{2}} +{\lambda_3^{\phi\eta}}({\phi}^\+\phi)_{\underline{3}}({\eta}^\+\eta)_{\underline{3}}\nonumber \\ &&+{\lambda_4^{\phi\eta}}({\phi}^\+\phi)_{\underline{3}'}({\eta}^\+\eta)_{\underline{3}'} +{\lambda_5^{\phi\eta}}({\eta}^\+\phi)_{\underline{1}}({\phi}^\+\eta)_{\underline{1}} +{\lambda_6^{\phi\eta}}({\eta}^\+\phi)_{\underline{2}}({\phi}^\+\eta)_{\underline{2}}\nonumber \\ &&+{\lambda_7^{\phi\eta}}({\eta}^\+\phi)_{\underline{3}}({\phi}^\+\eta)_{\underline{3}} +{\lambda_8^{\phi\eta}}({\eta}^\+\phi)_{\underline{3}'}({\phi}^\+\eta)_{\underline{3}'},\\ V(\phi',\eta')&=&V(\phi\rightarrow\phi',\eta\rightarrow\eta'),\\ V(\phi,\eta')&=&{\lambda_1^{\phi\eta'}}({\phi}^\+\phi)_{\underline{1}}({\eta'}^\+\eta')_{\underline{1}} +{\lambda_2^{\phi\eta'}}({\phi}^\+\phi)_{\underline{2}}({\eta'}^\+\eta')_{\underline{2}} +{\lambda_3^{\phi\eta'}}({\phi}^\+\phi)_{\underline{3}}({\eta'}^\+\eta')_{\underline{3}}\nonumber \\ &&+{\lambda_4^{\phi\eta'}}({\phi}^\+\phi)_{\underline{3}'}({\eta'}^\+\eta')_{\underline{3}'} +{\lambda_5^{\phi\eta'}}({\eta'}^\+\phi)_{\underline{1}'}({\phi}^\+\eta')_{\underline{1}'} +{\lambda_6^{\phi\eta'}}({\eta'}^\+\phi)_{\underline{2}}({\phi}^\+\eta')_{\underline{2}}\nonumber \\ &&+{\lambda_7^{\phi\eta'}}({\eta'}^\+\phi)_{\underline{3}}({\phi}^\+\eta')_{\underline{3}} +{\lambda_8^{\phi\eta'}}({\eta'}^\+\phi)_{\underline{3}'}({\phi}^\+\eta')_{\underline{3}'},\\ V(\phi',\eta)&=&V(\phi\rightarrow\phi',\eta'\rightarrow\eta),\\ V(\phi,\phi')&=&V(\phi\rightarrow \phi,\eta'\rightarrow \phi')+\left[{\lambda_9^{\phi\phi'}}({\phi}^\+\phi)_{\underline{2}}({\phi}^\+\phi')_{\underline{2}} +{\lambda_{10}^{\phi\phi'}}({\phi}^\+\phi)_{\underline{3}}({\phi}^\+\phi')_{\underline{3}} +{\lambda_{11}^{\phi\phi'}}({\phi}^\+\phi)_{\underline{3}'}({\phi}^\+\phi')_{\underline{3}'}\right. \nonumber \\ &&+\left. 
{\lambda_{12}^{\phi\phi'}}({\phi'}^\+\phi')_{\underline{2}}({\phi'}^\+\phi)_{\underline{2}} +{\lambda_{13}^{\phi\phi'}}({\phi'}^\+\phi')_{\underline{3}}({\phi'}^\+\phi)_{\underline{3}} +{\lambda_{14}^{\phi\phi'}}({\phi'}^\+\phi')_{\underline{3}'}({\phi'}^\+\phi)_{\underline{3}'}+h.c.\right],\nonumber \\ V(\eta,\eta')&=&V(\phi\rightarrow\eta,\phi'\rightarrow\eta'),\\ V_{\chi\phi\phi'\eta\eta'}&=&{\mu_1}\chi\phi\eta+{\mu'_1}\chi\phi'\eta' \nonumber \\ &&+{\lambda^1_1}(\phi^+\phi')_{\underline{1}'}(\eta^+\eta')_{\underline{1}'} +{\lambda^1_2}(\phi^\+\phi')_{\underline{2}}(\eta^\+\eta')_{\underline{2}} +{\lambda^1_3}(\phi^\+\phi')_{\underline{3}}(\eta^\+\eta')_{\underline{3}} +{\lambda^1_{4}}(\phi^\+\phi')_{\underline{3}'}(\eta^\+\eta')_{\underline{3}'} \nonumber \\ && +{\lambda}^2_1(\phi^\+\eta')_{\underline{1}'}(\eta^\+\phi')_{\underline{1}'} +{\lambda}^2_2(\phi^\+\eta')_{\underline{2}}(\eta^\+\phi')_{\underline{2}}+ {\lambda}^2_3(\phi^\+\eta')_{\underline{3}}(\eta^\+\phi')_{\underline{3}} +{\lambda}^2_4(\phi^\+\eta')_{\underline{3}'}(\eta^\+\phi')_{\underline{3}'} \nonumber \\ && +{\lambda^3_1}(\phi^+\phi')_{\underline{1}'}(\eta'^+\eta)_{\underline{1}'} +{\lambda^3_2}(\phi^\+\phi')_{\underline{2}}(\eta'^\+\eta)_{\underline{2}} +{\lambda^3_3}(\phi^\+\phi')_{\underline{3}}(\eta'^\+\eta)_{\underline{3}} +{\lambda^3_{4}}(\phi^\+\phi')_{\underline{3}'}(\eta'^\+\eta)_{\underline{3}'} \nonumber \\ && +{\lambda}^4_1(\phi^\+\eta)_{\underline{1}}(\eta'^\+\phi')_{\underline{1}} +{\lambda}^4_2(\phi^\+\eta)_{\underline{2}}(\eta'^\+\phi')_{\underline{2}}+ {\lambda}^4_3(\phi^\+\eta)_{\underline{3}}(\eta'^\+\phi')_{\underline{3}} +{\lambda}^4_4(\phi^\+\eta)_{\underline{3}'}(\eta'^\+\phi')_{\underline{3}'} \nonumber \\ && +{\lambda^5_1}(\phi^+\eta)_{\underline{1}}(\phi'^+\eta')_{\underline{1}} +{\lambda^5_2}(\phi^+\eta)_{\underline{2}}(\phi'^+\eta')_{\underline{2}} +{\lambda^5_3}(\phi^+\eta)_{\underline{3}}(\phi'^+\eta')_{\underline{3}} +{\lambda^5_{4}}(\phi^+\eta)_{\underline{3}'}(\phi'^+\eta')_{\underline{3}'} \nonumber \\ && +{\lambda}^6_1(\phi^\+\eta')_{\underline{1}'}(\phi'^\+\eta)_{\underline{1}'} +{\lambda}^6_2(\phi^\+\eta')_{\underline{2}}(\phi'^\+\eta)_{\underline{2}}+ {\lambda}^6_3(\phi^\+\eta')_{\underline{3}}(\phi'^\+\eta)_{\underline{3}} +{\lambda}^6_4(\phi^\+\eta')_{\underline{3}'}(\phi'^\+\eta)_{\underline{3}'}\nonumber \\ &&+h.c. 
\end{eqnarray} $V_{\mathrm{sext}}$ is a sum of \begin{eqnarray} V(\sigma)&=&\mathrm{Tr}[V(\chi\rightarrow\sigma)+\lambda'^{\sigma}(\sigma^\+ \sigma)\mathrm{Tr}({\sigma}^\+\sigma)],\\ V(s)&=&\mathrm{Tr}\{V(\phi\rightarrow{s})+{\lambda'}_1^{s}(s^\+s)_{\underline{1}}\mathrm{Tr}(s^\+s)_{\underline{1}} +{\lambda'}_2^{s}(s^\+s)_{\underline{2}}\mathrm{Tr}(s^\+s)_{\underline{2}}\nonumber \\ &&+{\lambda'}_3^{s}(s^\+s)_{\underline{3}}\mathrm{Tr}(s^\+s)_{\underline{3}} +{\lambda'}_4^{s}(s^\+s)_{\underline{3}'}\mathrm{Tr}(s^\+s)_{\underline{3}'}\},\\ V(s,\sigma)&=&\mathrm{Tr}\{V(\phi\rightarrow{s},\chi\rightarrow\sigma) +{\lambda_1'}^{s\sigma}(s^\+s)_{\underline{1}}\mathrm{Tr}(\sigma^\+\sigma) +{\lambda_2'}^{s\sigma}(s^\+\sigma)\mathrm{Tr}({\sigma^\+}s)\nonumber \\ && +[{\lambda_3'}^{s\sigma}(s^\+\sigma)\mathrm{Tr}(s^\+\sigma) +{\lambda_4'}^{s\sigma}({\sigma^\+}s)\mathrm{Tr}({s^\+}s)_{\underline{3}} +h.c.]\}\end{eqnarray} $V_{\mathrm{tri-sext}}$ is a sum of \begin{eqnarray} V(\sigma,\chi)&=&\lambda_1^{\sigma\chi}({\chi}^\+\chi)\mathrm{Tr}({\sigma}^\+\sigma) +\lambda_2^{\sigma\chi}{({\chi}^\+\sigma^\dagger)({\sigma}\chi)} +(\mu_2\chi^T\sigma\chi+h.c.),\\ V(s,\chi)&=&\mathrm{Tr}[V(\phi\rightarrow{s}^\+,\chi\rightarrow\chi)],\hspace*{0.5cm} V(\phi,\sigma)=\mathrm{Tr}[V(\phi\rightarrow\phi,\chi\rightarrow\sigma^\+)],\\ V(\phi,s)&=&\mathrm{Tr}[V(\phi\rightarrow\phi,\eta\rightarrow{s}^\+)],\hspace*{0.5cm} V(\phi',\sigma)=\mathrm{Tr}[V(\phi'\rightarrow\phi',\chi\rightarrow\sigma^\+)],\\ V(\phi',s)&=&\mathrm{Tr}[V(\phi'\rightarrow\phi',\eta\rightarrow{s}^\dagger)],\hspace*{0.5cm} V(\eta,\sigma)=\mathrm{Tr}[V(\eta\rightarrow\eta,\chi\rightarrow\sigma^\dagger)],\\ V(\eta,s)&=&\mathrm{Tr}[V(\phi\rightarrow\eta,\eta\rightarrow{s}^\+)],\hspace*{0.5cm} V(\eta',\sigma)=\mathrm{Tr}[V(\eta'\rightarrow\eta',\chi\rightarrow\sigma^\+)],\\ V(\eta',s)&=&\mathrm{Tr}[V(\phi'\rightarrow\eta',\eta\rightarrow s^\+)],\\ V_{s\sigma\chi \phi\phi' \eta \eta'}&=&\chi^\+ \sigma^\+ (\lambda_1 \phi\eta+ \lambda_2 \phi'\eta')_{\underline{1}}+\chi^\+ s^\+ (\lambda_3 \phi\eta + \lambda_4 \phi' \eta' +\lambda_5 \phi \eta' +\lambda_6 \phi'\eta)_{\underline{3}}\nonumber \\ &&+\mathrm{Tr}(s^\+s)_{\underline{2}}(\lambda_7 \phi^\+\phi'+\lambda_8 \eta^\+\eta')_{\underline{2}} +\mathrm{Tr}(s^\+s)_{\underline{3}}(\lambda_9 \phi^\+\phi'+\lambda_{10} \eta^\+\eta')_{\underline{3}}\nonumber \\ &&+\mathrm{Tr}(s^\+s)_{\underline{3}'}(\lambda_{11} \phi^\+\phi'+\lambda_{12} \eta^\+\eta')_{\underline{3}'} +\mathrm{Tr}(\sigma^\+ s)(\lambda_{13}\phi^\+\phi +\lambda_{14}\phi'^\+\phi'\nonumber \\ && + \lambda_{15}\phi^\+\phi' + \lambda_{16}\phi'^\+\phi +\lambda_{17}\eta^\+\eta+ \lambda_{18}\eta'^\+\eta'+\lambda_{19}\eta^\+\eta' + \lambda_{20}\eta'^\+\eta)_{\underline{3}} \nonumber \\ &&+\phi^\+\sigma^\+s (\lambda_{21} \phi+\lambda_{22}\phi')+\phi'^\+\sigma^\+s (\lambda_{23} \phi+\lambda_{24}\phi')+\eta^\+\sigma^\+s (\lambda_{25} \eta+\lambda_{26}\eta')\nonumber \\ && +\eta'^\+\sigma^\+s (\lambda_{27} \eta+\lambda_{28}\eta')+\lambda_{29}(\phi^\+ s^\+)_{\underline{2}} (s \phi')_{\underline{2}}+\lambda_{30}(\phi^\+ s^\+)_{\underline{3}} (s \phi')_{\underline{3}}\nonumber \\ &&+\lambda_{31}(\phi^\+ s^\+)_{\underline{3}'} (s \phi')_{\underline{3}'} +\lambda_{32}(\eta^\+ s^\+)_{\underline{2}} (s \eta')_{\underline{2}}+\lambda_{33}(\eta^\+ s^\+)_{\underline{3}} (s \eta')_{\underline{3}}\nonumber \\ &&+\lambda_{34}(\eta^\+ s^\+)_{\underline{3}'} (s \eta')_{\underline{3}'}+h.c.\end{eqnarray} And, the $\overline{V}$ up to quartic interactions is given by 
\begin{eqnarray} \bar{V}&=& (\bar{\mu}_1\eta \eta+\bar{\mu}'_1\eta' \eta')_{\underline{1}}\sigma+ (\bar{\mu}_2\eta \eta+\bar{\mu}'_2\eta'\eta'+\bar{\mu}''_2\eta\eta'+\bar{\mu}_3\chi\eta)_{\underline{3}}s\nonumber \\ && + \eta^\+\sigma^\+(\bar{\lambda}_1\phi\chi + \bar{\lambda}_2\phi\eta+\bar{\lambda}_3 \phi'\eta'+\bar{\lambda}_4\phi'\eta +\bar{\lambda}_5\phi \eta')_{\underline{3}}+ \eta'^\+\sigma^\+(\bar{\lambda}_6\phi'\chi + \bar{\lambda}_7\phi\eta+\bar{\lambda}_8 \phi'\eta'\nonumber \\ &&+\bar{\lambda}_9\phi'\eta +\bar{\lambda}_{10}\phi \eta')_{\underline{3}'}+\bar{\lambda}_{11}\phi^\+\sigma^\+(\phi\phi')_{\underline{3}} +\bar{\lambda}_{12}\phi'^\+\sigma^\+(\phi\phi')_{\underline{3}'}+\bar{\lambda}_{13}\chi^\+s^\+\phi\chi \nonumber \\ && +(\eta^\+s^\+)_{\underline{1}}(\bar{\lambda}_{14}\phi\eta+\bar{\lambda}_{15}\phi'\eta')_{\underline{1}} +(\eta^\+s^\+)_{\underline{2}}(\bar{\lambda}_{16}\phi\eta+\bar{\lambda}_{17}\phi'\eta' +\bar{\lambda}_{18}\phi'\eta+\bar{\lambda}_{19}\phi\eta')_{\underline{2}} \nonumber \\ &&+(\eta^\+s^\+)_{\underline{3}}(\bar{\lambda}_{20}\phi\eta +\bar{\lambda}_{21}\phi'\eta'+\bar{\lambda}_{22}\phi'\eta+\bar{\lambda}_{23}\phi\eta')_{\underline{3}} +(\eta^\+s^\+)_{\underline{3}'}(\bar{\lambda}_{24}\phi\eta +\bar{\lambda}_{25}\phi'\eta'\nonumber \\ && +\bar{\lambda}_{26}\phi'\eta+\bar{\lambda}_{27}\phi\eta')_{\underline{3}'} +(\eta'^\+s^\+)_{\underline{1}'}(\bar{\lambda}_{28}\phi'\eta +\bar{\lambda}_{29}\phi\eta')_{\underline{1}'}+(\eta'^\+s^\+)_{\underline{2}}(\bar{\lambda}_{30}\phi\eta +\bar{\lambda}_{31}\phi'\eta'\nonumber \\ &&+\bar{\lambda}_{32}\phi'\eta +\bar{\lambda}_{33}\phi\eta')_{\underline{2}} +(\eta'^\+s^\+)_{\underline{3}}(\bar{\lambda}_{34}\phi\eta +\bar{\lambda}_{35}\phi'\eta'+\bar{\lambda}_{36}\phi'\eta+\bar{\lambda}_{37}\phi\eta')_{\underline{3}} \nonumber \\ && +(\eta'^\+s^\+)_{\underline{3}'}(\bar{\lambda}_{38}\phi\eta +\bar{\lambda}_{39}\phi'\eta' +\bar{\lambda}_{40}\phi'\eta+\bar{\lambda}_{41}\phi\eta')_{\underline{3}'}+\bar{\lambda}_{42}(\phi'^\+s^\+)_{\underline{1}'} (\phi\phi')_{\underline{1}'}\nonumber \\ &&+\bar{\lambda}_{43}(\phi'^\+s^\+)_{\underline{2}} (\phi\phi')_{\underline{2}}+\bar{\lambda}_{44}(\phi'^\+s^\+)_{\underline{3}} (\phi\phi')_{\underline{3}}+\bar{\lambda}_{45}(\phi'^\+s^\+)_{\underline{3}'} (\phi\phi')_{\underline{3}'}+\bar{\lambda}_{46}(\phi^\+s^\+)_{\underline{2}} (\phi\phi')_{\underline{2}}\nonumber \\ &&+\bar{\lambda}_{47}(\phi^\+s^\+)_{\underline{3}} (\phi\phi')_{\underline{3}}+\bar{\lambda}_{48}(\phi^\+s^\+)_{\underline{3}'} (\phi\phi')_{\underline{3}'}+[\bar{\lambda}_{49}\mathrm{Tr}(s^\+s)+\bar{\lambda}_{50}\mathrm{Tr}(s^\+\sigma)+\bar{\lambda}_{51}\mathrm{Tr}(\sigma^\+s) \nonumber \\ && +\bar{\lambda}_{52}\eta^\+\chi +\bar{\lambda}_{53}\eta^\+\eta+\bar{\lambda}_{54}\eta'^\+\eta'+\bar{\lambda}_{55}\eta^\+\eta'+\bar{\lambda}_{56}\eta'^\+\eta +\bar{\lambda}_{57}\phi^\+\phi+\bar{\lambda}_{58}\phi'^\+\phi' +\bar{\lambda}_{59}\phi^\+\phi'\nonumber \\ &&+\bar{\lambda}_{60}\phi'^\+\phi]_{\underline{3}}\eta^\+\chi +[\bar{\lambda}_{61}\mathrm{Tr}(s^\+s)+\bar{\lambda}_{62}\eta'^\+\chi +\bar{\lambda}_{63}\eta^\+\eta+\bar{\lambda}_{64}\eta'^\+\eta'+\bar{\lambda}_{65}\eta^\+\eta'+\bar{\lambda}_{66}\eta'^\+\eta \nonumber \\ && +\bar{\lambda}_{67}\phi^\+\phi+\bar{\lambda}_{68}\phi'^\+\phi'+\bar{\lambda}_{69}\phi^\+\phi' +\bar{\lambda}_{70}\phi'^\+\phi]_{\underline{3}'}\eta'^\+\chi +\bar{\lambda}_{71}(\eta^\+\phi)_{\underline{3}}(\phi^\+\chi)+\bar{\lambda}_{72}(\eta^\+\phi')_{\underline{3}'} (\phi'^\+\chi)\nonumber \\ && 
+\bar{\lambda}_{73}(\eta^\+\phi)_{\underline{3}'}(\phi'^\+\chi)+\bar{\lambda}_{74}(\eta^\+\phi')_{\underline{3}}(\phi^\+\chi)+ \bar{\lambda}_{75}(\eta'^\+\phi)_{\underline{3}}(\phi^\+\chi)+\bar{\lambda}_{76}(\eta'^\+\phi')_{\underline{3}'} (\phi'^\+\chi) \nonumber \\ && +\bar{\lambda}_{77}(\eta'^\+\phi)_{\underline{3}'}(\phi'^\+\chi)+\bar{\lambda}_{78}(\eta'^\+\phi')_{\underline{3}}(\phi^\+\chi) +\bar{\lambda}_{79}(\eta^\+s^\+)_{\underline{3}}s\chi +\bar{\lambda}_{80}(\eta'^\+s^\+)_{\underline{3}}s\chi+\bar{\lambda}_{81}\eta^\+s^\+\sigma\chi\nonumber \\ &&+\bar{\lambda}_{82}\eta^\+\sigma^\+s\chi+h.c.,\label{vi}\end{eqnarray} where all the terms in this potential violate the $\mathcal{L}$ charge while conserving $S_4$. Although we have not written them out, there must additionally exist terms in $\bar{V}$ explicitly violating only the $S_4$ symmetry, or both $S_4$ and the $\mathcal{L}$ charge. In the following most of them will be omitted; only the terms of interest are provided. There are several scalar sectors corresponding to the expected VEV directions: $(1,0,0)$ for $s$ and $(1,1,1)$ for $\phi$, $\phi'$, $\eta$, $\eta'$, as written out before. However, if these sectors are strongly coupled through a potential $V_{\mathrm{tri-sext}}\neq 0$, such a vacuum misalignment cannot be obtained from the potential minimization. To overcome this difficulty, as in the literature, we might invoke extra dimensions or supersymmetry, or use additional discrete symmetries. However, in this paper we provide an alternative explanation, following the works of Ma and/or collaborators in 2001, 2004, and 2010 in Refs.~\cite{A4}. We thus suppose that $\sigma$ and $s$ are both very heavy (see also \cite{matd}), with masses $\mu_\sigma$ and $\mu_s$ respectively, so that they are integrated away together with the couplings in $V_{\mathrm{tri-sext}}$. The antisextets then interact only among themselves, as given in $V_{\mathrm{sext}}$, and do not appear as physical particles at or below the TeV scale. Their only imprint at low energy is a resulting effective potential, which consists solely of the fields $\phi$, $\phi'$, $\eta$, $\eta'$ and $\chi$ and, up to fourth order, has the same form as $V_{\mathrm{tri}}$. Consider the potential $V_{\mathrm{tri}}$. The flavons $\phi$, $\phi'$, $\eta$, $\eta'$ with their VEVs aligned in the common direction $(1,1,1)$ are an automatic solution of the minimization conditions of $V_{\mathrm{tri}}$.
To see this explicitly, let us put $v_1=v_2=v_3=v$, $v'_1=v'_2=v'_3=v'$, $u_1=u_2=u_3=u$, and $u'_1=u'_2=u'_3=u'$ in the system of minimization equations, which then reduces to \begin{eqnarray}(\mu_\phi^2+\lambda_1^{\phi\chi}v_{\chi}^2)v+(3\lambda_1^{\phi\eta} +4\lambda_3^{\phi\eta})u^2v+(3\lambda_1^{\phi\eta'}+4\lambda_3^{\phi\eta'})u'^2v +(6\lambda_1^{\phi}+8\lambda_3^{\phi})v^3\nonumber \\ +(3\lambda_1^{\phi\phi'}+4\lambda_3^{\phi\phi'}+3{\lambda}_5^{\phi\phi'} +4{\lambda}_8^{\phi\phi'})vv'^2+(3\lambda_1^1+4\lambda_4^1+3{\lambda}_1^3+4{\lambda}_4^3)u u'v'=0,\\ (\mu_{\phi'}^2+\lambda_1^{\phi'\chi}v_{\chi}^2)v'+(3\lambda_1^{\phi'\eta} +4\lambda_3^{\phi'\eta})u^2v'+(3{\lambda}_1^{\phi'\eta'}+4{\lambda}_3^{\phi'\eta'})u'^2v' +(6\lambda_1^{\phi'}+8\lambda_3^{\phi'})v'^3\nonumber \\+(3{\lambda}_1^{\phi\phi'} +4{\lambda}_3^{\phi\phi'}+3{\lambda}_5^{\phi\phi'}+4{\lambda}_8^{\phi\phi'})v^2v' +(3\lambda_1^1+4\lambda_4^1+3\lambda_1^3+4\lambda_4^3)uu'v=0,\\ (\mu_\eta^2+\lambda_1^{\chi\eta}v_{\chi}^2)u+(3\lambda_1^{\phi\eta} +4\lambda_3^{\phi\eta})v^2u+(3\lambda_1^{\phi'\eta}+4\lambda_3^{\phi'\eta})v'^2u +(6\lambda_1^{\eta}+8\lambda_3^{\eta})u^3\nonumber \\ +(3\lambda_1^{\eta\eta'}+4\lambda_3^{\eta\eta'}+3{\lambda}_5^{\eta\eta'}+4{\lambda}_8^{\eta\eta'})u'^2u +(3\lambda_1^1+4\lambda_4^1)u'v'v=0,\\ (\mu_{\eta'}^2+\lambda_1^{\eta'\chi}v_{\chi}^2)u'+(3\lambda_1^{\phi\eta'} +4\lambda_3^{\phi\eta'})u'v^2+(3\lambda_1^{\phi'\eta'}+4\lambda_3^{\phi'\eta'})u'v'^2 +(6\lambda_1^{\eta'}+8\lambda_3^{\eta'})u'^3\nonumber \\+(3\lambda_1^{\eta\eta'} +4\lambda_3^{\eta\eta'}+3{\lambda}_5^{\eta\eta'}+4{\lambda}_8^{\eta\eta'})u^2u' +(3\lambda_1^3+4\lambda_4^3)uvv'=0.\end{eqnarray} This system always gives a solution $(u,v,u',v')$ as expected, even though it is complicated. Note also that the alignment $(1,1,1)$ as given is only one solution; other directions such as $(1,0,0)$ also minimize the potential. We have thus imposed the first alignment in order to obtain the desired results. Now we consider the potential $V^{s\sigma}$ concerning the antisextets. To obtain the desirable solution $\langle\sigma\rangle \neq 0$, $\langle s_1\rangle \neq 0$, and $\langle s_2\rangle = \langle s_3\rangle =0$, the $\mathcal{L}$ charge as well as the $S_4$ symmetry must be broken, as discussed around (\ref{vi}). Assume that the following choice of soft trilinear and quartic scalar terms, taken from the general potential $\bar{V}$, is at work in $V^{s\sigma}$: \begin{eqnarray} V^{s\sigma}&=&V_{\mathrm{sext}}+ [\bar{\mu}_1(\eta \eta)_{\underline{1}}\sigma+ \bar{\mu}_2(\eta \eta)_{\underline{1}}s_1 + \bar{\lambda}_1 \eta^\+\sigma^\+(\phi\eta)_{\underline{3}}+ \bar{\lambda}_{2}\eta^\+s_1^\+(\phi\eta)_{\underline{3}} +h.c.].\end{eqnarray} To understand this, note first that in order for $\sigma$ or $s_{1,2,3}$ to have a VEV, $\mathcal{L}$ must be broken, and this can only be achieved through the terms of $\bar{V}$. However, as in one of the works of Ma cited above, we can introduce a protecting $Z_2$ symmetry so that $s_2$ and $s_3$ enter only those terms in the potentials or Yukawa couplings that preserve the symmetry $\psi_{2,3}\rightarrow -\psi_{2,3}$, where $\psi$ is any $S_4$ triplet appearing in the text, such as $s$, $\phi$, $\psi_L$ and so on. Hence they always appear together and protect each other from acquiring a VEV if neither has one to begin with.
From $V^{s\sigma}$, the unique solution to the minimization conditions is $\langle s_2\rangle =\langle s_3\rangle =0$, together with nonzero but very small values of $\lambda_{\sigma,s}$ and $v_{\sigma,s}$ in $\langle s_1\rangle$ and $\langle \sigma\rangle$ of Eqs. (\ref{s1}), (\ref{sim}), obtained as the roots of $\partial V^{s\sigma}_{\mathrm{min}}/\partial \langle s_1\rangle ^*=0$ and $\partial V^{s\sigma}_{\mathrm{min}}/\partial \langle \sigma\rangle ^*=0$ (with $V^{s\sigma}_{\mathrm{min}}$ the minimum of $V^{s\sigma}$). First, the equations $\partial V^{s\sigma}_{\mathrm{min}}/\partial \Lambda_\sigma^*=0$ and $\partial V^{s\sigma}_{\mathrm{min}}/\partial \Lambda_s ^*=0$ imply that $\Lambda_\sigma$ and $\Lambda_s$ are of the scale of the antisextet masses $\mu_\sigma$ and $\mu_s$ \cite{dlsh}. Let us denote a characteristic scale $M$ so that $\Lambda_\sigma,\Lambda_s,\mu_\sigma,\mu_s\sim M$. The remaining equations $\partial V^{s\sigma}_{\mathrm{min}}/\partial \lambda_{\sigma,s}^*=0$ and $\partial V^{s\sigma}_{\mathrm{min}}/\partial v_{\sigma,s}^*=0$ provide the small VEVs induced by the standard model electroweak scale $u\sim v$:\begin{eqnarray} \lambda_\sigma &\sim& \bar{\mu}_1\frac{v^2}{M^2},\hspace*{0.5cm} \lambda_s \sim \bar{\mu}_2\frac{v^2}{M^2},\label{sII}\\ v_\sigma &\sim& \bar{\lambda}_1v \frac{v^2}{M^2},\hspace*{0.5cm} v_s \sim \bar{\lambda}_2v\frac{v^2}{M^2}.\label{sI}\end{eqnarray} The parameters $\bar{\mu}_{1,2}$ and $\bar{\lambda}_{1,2}v$ (which have the dimension of mass) may be naturally small in comparison with $v$, because their absence enhances the symmetry of $V^{\sigma s}$. We remark that the type II seesaw VEVs $\lambda_\sigma$, $\lambda_s$ work because, as seen from (\ref{sII}), the spontaneous breaking of the electroweak symmetry is already accomplished by $v$; the $\lambda_\sigma$, $\lambda_s$ may thus be small as long as $M$ is large. On the other hand, $v_\sigma$ and $v_s$ are the VEVs of the type I seesaw mechanism, which are also small for the same reason; therefore, in this model the seesaw scale $M$ may be much lower than that of the usual type I seesaw. These are also important results of our paper. In this model, as mentioned, the new particles are: $N_R$, with masses at the $\Lambda_{\sigma,s}$ scale; $U$ and $D$, with masses proportional to $w$; and $Z'$, $X$, $Y$, with masses given by combinations of $w$ and $\Lambda_{\sigma,s}$, where $w$ and $\Lambda_{\sigma,s}$ are the scales of the 3-3-1 gauge symmetry breaking down to the standard model \cite{331m,331r}. If the antisextets $\sigma$, $s$ are the heaviest, i.e. $\Lambda^2_{\sigma,s}\gg w^2$, the new gauge bosons and $N_R$ will have correspondingly large masses in this scale, whereas $U$ and $D$ can gain much smaller masses (for example, of some hundreds of GeV). In the case $w\sim \Lambda_{\sigma,s}$, the masses of $U$ and $D$ will be of the same order as those of the new gauge bosons and $N_R$. Incidentally, the $\chi$ scalar may also be integrated out like the antisextets. This will explain why the parity-breaking parameters $\langle \eta^0_3\rangle$, $\langle \eta'^0_3\rangle$, $\langle\chi^0_1\rangle$ are small, in analogy with $v_{\sigma,s}$. The mixings between the ordinary and exotic quarks and the tree-level FCNCs mentioned above are thereby suppressed.
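To get a feeling for the scales, consider a rough numerical illustration (with representative values of our own choosing, not fixed by the model): taking $\bar{\mu}_1\sim 1\ \mathrm{GeV}$, $v\sim 10^2\ \mathrm{GeV}$ and $M\sim 3\times 10^{6}\ \mathrm{GeV}$ in (\ref{sII}) gives
\begin{eqnarray} \lambda_\sigma \sim \bar{\mu}_1\frac{v^2}{M^2}\sim 1\ \mathrm{GeV}\times \frac{(10^2\ \mathrm{GeV})^2}{(3\times 10^{6}\ \mathrm{GeV})^2}\sim 10^{-9}\ \mathrm{GeV}\sim 1\ \mathrm{eV},\end{eqnarray}
so eV-scale neutrino masses are already achieved for $M\sim 10^{6}$--$10^{7}$ GeV, far below the $\sim 10^{14}$ GeV required by the usual type I seesaw with order-one Dirac Yukawa couplings.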
There are many $\mathrm{SU}(2)_L$ scalar doublets and triplets in the model, which can lead to modifications of the precision electroweak data (see \cite{longinami} for a detailed analysis of this problem). The most serious one comes from tree-level corrections to the $\rho$ parameter. In the effective theory limit, the $W$ boson mass and $\rho$ are evaluated as \begin{equation} m^2_W=\frac{g^2}{2}v^2_{\mathrm{w}},\hspace*{0.5cm} \rho=\frac{m^2_W}{c^2_W m^2_Z}=1-\frac{2(\lambda^2_{\sigma}+\lambda^2_s)}{v^2_{\mathrm{w}}},\end{equation} where $v^2_{\mathrm{w}}\simeq 3(v^2+v'^2+u^2+u'^2)=(174\ \mathrm{GeV})^2$ is a natural approximation due to $v^2_{\sigma},v^2_{s}, \langle\chi^0_1\rangle^2\ll v^2,v'^2,u^2,u'^2$, as given above. Because the $\lambda_{\sigma,s}$ responsible for the neutrino masses are of eV order, the correction is utterly negligible: for $\lambda_{\sigma,s}\sim 1\ \mathrm{eV}$ one finds $2(\lambda^2_{\sigma}+\lambda^2_s)/v^2_{\mathrm{w}}\sim 10^{-22}$, so the $\rho$ parameter is extremely close to $1$, in good agreement with the data \cite{pdg}. \section{\label{conclus}Conclusions} As a result of anomaly cancellation, the 3-3-1 model requires one family of quarks to transform differently from the other two. We have therefore searched for a flavor symmetry group acting on both the two-family and the three-family indices, the simplest of which is $S_4$, the symmetry group of a cube. Corresponding to the lepton number, a new lepton charge $\mathcal{L}$ and its residual symmetry, the lepton parity $P_l$, have been introduced into the model. If $P_l$ is conserved, the neutrino masses come from the small VEVs of the first components of the scalar antisextets, i.e. type II seesaw contributions. If $P_l$ is broken, there are additional type I seesaw contributions, suppressed by the 3-3-1 symmetry breaking VEVs of the very same antisextets. The tribimaximal mixing arises as a result of the $S_4$ and $\mathcal{L}$ symmetries. A deviation from this mixing can result from small $\mathcal{L}$-violating terms or $S_4$-breaking soft terms. By imposing an appropriate $\mathcal{L}$- and $S_4$-violating potential, the VEV alignments have been obtained, and the smallness of the seesaw contributions has been explained. The quark mixing matrix is the identity at tree level only if $P_l$ is exact, i.e. not spontaneously broken. A breaking of this charge will lead to mixings between the exotic and ordinary quarks, and can also induce mixings among the ordinary quarks; in this case the CKM matrix is not unitary, and there are tree-level contributions to flavor changing neutral currents. The model can provide interesting dark matter candidates without supersymmetry, stored in the antisextet flavons as well as in the $\chi$ triplet, if the lepton parity is conserved (see also the notes sketched in \cite{takahashi}), and the model's phenomenology is very rich. These issues are worthy of further study. \section*{Acknowledgments} We would like to thank Ryo Takahashi for his comments and for pointing out possible dark matter candidates existing in our model. We are grateful to Yin Lin for communications and for directing us to some papers in \cite{A4}. This work was supported in part by the National Foundation for Science and Technology Development (NAFOSTED) of Vietnam under Grant No. 103.01.15.09.
\section{Introduction} Shape is arguably the most important property of objects, providing cues for affordance, function, category, and interaction. This paper examines the problem of predicting the 3D object shape from a single image (Fig.~\ref{fig:splash}). The availability of large 3D object model datasets~\cite{chang2015shapenet} and flexible deep network learning methods has made this an increasingly active area of research. Recent methods predict complete 3D shape using voxel~\cite{kar2015category,choy20163d} or octree~\cite{tatarchenko2017octree} volumetric representations, multiple depth map surfaces~\cite{tatarchenko2016multi}, point cloud~\cite{fan2017point}, or a set of cuboid part primitives~\cite{zou2017_iccv}. However, there is not yet a systematic evaluation of important design choices such as the choice of shape representation and coordinate frame. \begin{figure}[t] \begin{center} \includegraphics[scale=0.15]{figs/rgb-to-mesh2.png} \end{center} \vspace{-0.1in} \caption{ We investigate the problem of predicting the 3D shape of an object from a single depth or RGB image (illustrated above). In particular, we examine the impacts of coordinate frames (viewer-centered vs. object-centered), shape representation (volumetric vs. multi-surface), and familiarity (known instance, novel instance, novel category). } \vspace{-0.1in} \label{fig:splash} \end{figure} \begin{figure*} \begin{center} \adjustbox{trim={.00\width} {.00\height} {0.00\width} {.00\height},clip}{ \includegraphics[scale=0.1721]{figs/fig1.png} } \end{center} \vspace{-0.1in} \caption{We compare view-centered vs. object-centered and volumetric vs. multi-surface formulations of shape prediction. In \textbf{view-centered} prediction, the shape is predicted relative to the viewpoint of the input image, which requires encoding both shape and pose. In \textbf{object-centered} prediction, the shape is predicted in a canonical pose, which is standardized across training and prediction evaluation. For \textbf{volumetric} prediction, the shape is modeled as a set of filled 3D voxels. For \textbf{multi-surface} prediction, the shape is modeled as depth maps from multiple different viewpoints which tile the viewing sphere.} \label{fig:terms} \vspace{-0.1in} \end{figure*} In this paper, we investigate two key issues, illustrated in Fig.~\ref{fig:terms}. First, is it better to represent shape {\em volumetrically} or as {\em multiple 2.5D surfaces} observed from varying viewpoints? The earliest (albeit still recent) pattern recognition approaches to shape prediction use volumetric representations (e.g.~\cite{rock2015completing,kar2015category}), but more recent works have proposed surface-based representations~\cite{tatarchenko2016multi}. Qi \etal.~\cite{qi2016volumetric} finds an advantage for surface-based representations for 3D object classification, since surfaces can encode high resolution shapes with fewer parameters. Rendered surfaces have fewer pixels than there are voxels in a high resolution mesh. However, generating a complete shape from 2.5D surfaces creates an additional challenge, since the surfaces need to be aligned and fused into a single 3D object surface. Second, what is the impact of object-centered vs. view-centered coordinate frames for shape prediction? Nearly all recent 3D shape generation methods use {\em object-centered coordinates}, where the object's shape is represented in a canonical view. 
For example, shown either a front view or side view of a car, the goal is to generate the same front-facing 3D model of the car. Object-centered coordinates simplify the prediction problem, but suffer from several practical drawbacks: the viewer-relative pose is not recovered; 3D models used for training must be aligned to a canonical pose; and prediction on novel object categories is difficult due to the lack of a predefined canonical pose. In {\em viewer-centered coordinates}, the shape is represented in a coordinate system aligned to the viewing perspective of the input image, so a front view of a car should yield a front-facing 3D model, while a side view of a car should generate a side-facing 3D model. This increases the variation of predicted models, but it also does not require aligned training models and generalizes naturally to novel categories. We study these issues using a single encoder-decoder network architecture, swapping the decoder to study volume vs. surface representations and swapping the coordinate frame of predictions to study viewer-centered vs. object-centered. We examine effects of familiarity by measuring accuracy for novel views of known objects, novel instances of known categories, and objects from novel categories. We also evaluate prediction from both depth and RGB images. Our experiments indicate a clear advantage for surface-based representations in novel object categories, which likely benefit from the more compact output representations relative to voxels. Our experiments also show that prediction in viewer-centered coordinates generalizes better to novel objects, while object-centered performs better for novel views of familiar instances. Further, models that learn to predict in object-centered coordinates seem to learn and rely on object categorization to a greater degree than models trained to predict in viewer-centered coordinates. In summary, our main contributions include: \begin{itemize} \item We introduce a new method for surface-based prediction of object shape in a viewer-centered coordinate frame. Our network learns to predict a set of silhouette and depth maps at several viewpoints relative to the input image, which are then locally registered and merged into a point cloud from which a surface can be computed. \item We compare the efficacy of volumetric and surface-based representations for predicting 3D shape, showing an advantage for surface-based representations on unfamiliar object categories regardless of whether the final evaluation is volumetric or surface-based. \item We examine the impact of prediction in viewer-centered and object-centered coordinates, and show that networks generalize better to novel shapes if they learn to predict in viewer-centered coordinates (which is not currently common practice) and that the coordinate choice significantly changes the embedding learned by the network encoder. \end{itemize} \section{Related work} Our approach relates closely to recent efforts to generate novel views of an object, or its shape. We also touch briefly on related studies in human vision. \vspace{-3.5mm} \paragraph{Volumetric shape representations:} Several recent studies offer methods to generate volumetric object shapes from one or a few images~\cite{wu20153d, kar2015category, rock2015completing, choy20163d, yan2016prespective, tatarchenko2017octree}. Wu \etal~\cite{wu20153d} proposes a convolutional deep belief network for learning 3D representations using volumetric supervision and evaluates applications to various recognition tasks.
Other studies quantitatively evaluate 3D reconstruction results, with metrics including voxel intersection-over-union~\cite{rock2015completing, choy20163d, yan2016prespective}, mesh distance~\cite{kar2015category, rock2015completing}, and depth map error~\cite{kar2015category}. Some follow template deformation approaches using surface rigidity~\cite{kar2015category, rock2015completing} and symmetry priors~\cite{rock2015completing}, while others~\cite{wu20153d, choy20163d, yan2016prespective} approach the problem as deep representation learning using encoder-decoder networks. Fan \etal~\cite{fan2017point} proposes a point cloud generation network that efficiently predicts coarse volumetric object shapes by encoding only the coordinates of points on the surface. Our voxel and multi-surface prediction networks use an encoder-decoder network. For multi-surface prediction, the decoder generates multiple segmented depth images, pools depth values into a 3D point cloud, and fits a 3D surface to obtain the complete 3D shape. \vspace{-3.5mm} \paragraph{Multi-surface representations:} Multi-surface representations of 3D shapes are popular for categorization tasks. The seminal work by Chen \etal~\cite{chen2003visual} proposes a 3D shape descriptor based on the silhouettes rendered from the 20 vertices of a dodecahedron surrounding the object. More recently, Su \etal~\cite{su2015multi} and Qi \etal~\cite{qi2016volumetric} train CNNs on 2D renderings of 3D mesh models for classification. Qi \etal~\cite{qi2016volumetric} compares CNNs trained on volumetric representations to those trained on multiview representations. Although both representations encode similar amounts of information, they showed that multiview representations significantly outperform volumetric representations for 3D object classification. Unlike our approach, these approaches use multiple projections as input rather than output. To synthesize multi-surface output representations, we train multiple decoders. Dosovitskiy \etal~\cite{dosovitskiy2015learning} show that CNNs can be used to generate images from high-level descriptions such as object instance, viewpoint, and transformation parameters. Their network jointly predicts an RGB image and its segmentation mask using two up-convolutional output branches sharing a high-dimensional hidden representation. The decoder in our network learns the segmentation for each output view in a similar manner. Our work is related to recent studies~\cite{tatarchenko2016multi, kulkarni15deep, yang2015weakly, zhu2014multi, yan2016prespective, soltani2017synthesizing, lun20173d} that generate multiview projections of 3D objects. The multiview perceptron by Zhu \etal~\cite{zhu2014multi} generates one random view at a time, given an RGB image and a random vector as input. Inspired by the mental rotation ability in humans, Yang \etal~\cite{yang2015weakly} proposed a recurrent encoder-decoder network that outputs RGB images rotated by a fixed angle in each time step along a path of rotation, given an image at the beginning of the rotation sequence as input. They disentangle object identity and pose by sharing the identity unit weights across all time steps. Their experiments do not include 3D reconstruction or geometric analysis. Our proposed method predicts 2.5D surfaces (depth image and object silhouette) of the object from a set of fixed viewpoints evenly spaced over the viewing sphere. 
In some experiments (Table~\ref{fig:recon_shrec12}), we use 20 views, as in~\cite{chen2003visual}, but we found that 6 views provide similar results while speeding up training and evaluation, so 6 views are used for the remainder. Most existing approaches~\cite{tatarchenko2016multi, kulkarni15deep, yan2016prespective} parameterize the output image as $(x, \theta)$ where $x$ is the input image and $\theta$ is the desired viewpoint relative to a canonical object-centered coordinate system. Yan \etal~\cite{yan2016prespective} introduce a formulation that indirectly learns to generate voxels through silhouettes using multi-surface projective constraints, but interestingly they report that voxel IoU performance is better when the network is trained to minimize the projection loss alone than when it is jointly trained with a volumetric loss. Our approach, in contrast, uses multiview reconstruction techniques (3D surface from point cloud) as a post-process to obtain the complete 3D mesh, treating any inconsistencies in the output images as if they were observational noise. Our formulation also differs in that we learn a view-specific representation, and the complete object shape is produced by simultaneously predicting multiple views of depth maps and silhouettes. In this multi-surface prediction, our approach is similar to Soltani \etal's~\cite{soltani2017synthesizing}, but our system does not use class labels during training. When predicting shape in object-centered coordinates, the predicted views are at fixed orientations relative to the canonical view. When predicting shape in viewer-centered coordinates, the predicted views are at fixed orientations relative to the input view. \vspace{-4mm} \paragraph{Human vision:} In experiments on 2D symbols, Tarr and Pinker~\cite{tarr1990does} found that human perception is largely tied to viewer-centered coordinates; this was confirmed by McMullen and Farah~\cite{mcmullen1991viewercentered} for line drawings, who also found that object-centered coordinates seem to play more of a role for familiar exemplars. Note that in the human vision literature, ``viewer-centered'' usually means that the object shape is represented as a set of images in the viewer's coordinate frame, and ``object-centered'' usually means a volumetric shape is represented in the object's coordinate frame. In our work, we consider both the shape representation (volumetric or surface) and coordinate frame (viewer or object) as separate design choices. We do not claim our computational approach has any similarity to human visual processing, but it is interesting to see that in our experiments with 3D objects, we also find a preference for object-centered coordinates for familiar exemplars (i.e., novel view of known object) and for viewer-centered coordinates in other cases. \begin{figure*} \begin{center} \adjustbox{trim={.022\width} {.00\height} {0.022\width} {.00\height},clip}{ \includegraphics[scale=0.37]{figs/fig2.jpg} } \end{center} \vspace{-0.1in} \caption{Network architecture: Encoders $E_d$, $E_s$, $E_h$ learn view-specific shape features $h$ extracted from the input depth and silhouette. $h$ is used by the 10 output decoder branches $V^{(k)}$, $k=1..10$, each of which synthesizes one silhouette and two depth images (front and back). The branches have independently parameterized fully connected layers, but the up-convolutional decoders $G_d$, $G_s$ share parameters across all output branches.
} \label{fig:network} \vspace{-0.1in} \end{figure*} \section{Viewer-centered 3D shape completion} Given a single depth or RGB image as input, we want to predict the complete 3D shape of the object being viewed. In the commonly used object-centered setting, the shape is predicted in canonical model coordinates specified by the training data. For example, in the ShapeNetCore dataset, the x-axis or ($\phi_{\text{az}}=\ang{0}, \; \theta_\text{el}= \ang{0}$) direction corresponds to the commonly agreed upon front of the object, and the relative transformation parameters from the input view to this coordinate system are unknown. In our viewer-centered approach, we supervise the network to predict a pre-aligned 3D shape in the input image's reference frame --- e.g. so that $(\phi_{\text{az}}=\ang{0}, \; \theta_\text{el}= \ang{0})$ in the output coordinate system always corresponds to the input viewpoint. Our motivation for exploring these two representations is the hypothesis that networks trained on viewer-centered and object-centered representations learn very different information. A practical advantage of the viewer-centered approach is that the network can be trained in an unsupervised manner across multiple categories without requiring humans to specify intra-category alignment. However, viewer-centered training requires synthesizing separate target outputs for each input viewpoint, which increases the training data storage cost. In all experiments, we supervise the networks using only geometric (or photometric) data, without providing any side information about the object category label or input viewpoint. The only assumption is that the gravity direction is known (fixed as down in the input view). This allows us to focus on whether the predicted shapes can be completed/interpolated solely based on the 2.5D geometric or RGB input stimuli in a setting where contextual cues are not available. In the case of 2.5D input, we normalize the input depth image so that the bounding box of the silhouette fits inside an orthographic viewing frustum ranging from $\langle {\text{-}1},{\text{-}1} \rangle$ to $\langle 1,1 \rangle$ with the origin placed at the centroid. \section{Network architectures for shape prediction} Our multi-surface shape prediction system uses an encoder-decoder network to predict a set of silhouettes and depth maps. Figure~\ref{fig:network} provides an overview of the network architecture, which takes as input a depth map and a silhouette. We also perform experiments on a variant that takes an RGB image as input. To directly evaluate the relative merits of the surface-based and voxel-based representations, we compare this with a volumetric prediction network by replacing the decoder with a voxel generator. Both network architectures can be trained to produce either viewer-centered or object-centered predictions. \subsection{Generating multi-surface depth and silhouettes} We observe that, for the purpose of 3D reconstruction, it is important to be able to see the object from certain viewpoints -- e.g. classes such as cup and bathtub need at least one view from the top to cover the concavity. Our proposed method therefore predicts 3D object shapes at evenly spaced views around the object. We place the cameras at the 20 vertices $\{\mathbf{v}_0, .., \mathbf{v}_{19}\}$ of a dodecahedron centered at the origin. A similar setup was used in the Light Field Descriptor \cite{chen2003visual} and a recent study by Soltani \etal \cite{soltani2017synthesizing}.
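A minimal sketch of this camera placement is given below (our own illustration, not the paper's code; the vertex coordinates are the standard ones for a regular dodecahedron, the function names are ours, and the alignment step anticipates the rotation described in the next paragraph):
\begin{verbatim}
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def dodecahedron_vertices():
    """The 20 vertices of a regular dodecahedron centered at the origin."""
    a, b = 1.0 / PHI, PHI
    cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    rect = ([(0, y, z) for y in (-a, a) for z in (-b, b)]
            + [(y, z, 0) for y in (-a, a) for z in (-b, b)]
            + [(z, 0, y) for y in (-a, a) for z in (-b, b)])
    return np.array(cube + rect, dtype=np.float64)

def rotation_aligning(src, dst):
    """Rotation matrix taking unit vector src to dst (Rodrigues' formula).

    Assumes src and dst are not antiparallel (true for the case below).
    """
    src = src / np.linalg.norm(src)
    dst = dst / np.linalg.norm(dst)
    v = np.cross(src, dst)            # rotation axis (unnormalized)
    c = float(src @ dst)              # cosine of the rotation angle
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

# Rotate all vertices so v0 = <1,1,1> coincides with the input view
# direction (here the camera z-axis; an assumed, illustrative convention).
view_dir = np.array([0.0, 0.0, 1.0])
R = rotation_aligning(np.array([1.0, 1.0, 1.0]), view_dir)
cameras = dodecahedron_vertices() @ R.T  # 20 camera positions on the sphere
\end{verbatim}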
In order to determine the camera parameters, we rotate the vertices so that vertex $\mathbf{v}_0=\langle 1,1,1 \rangle$ aligns with the input viewpoint in the object's model coordinates. The up-vectors point towards the z-axis and are rotated accordingly. Note that the input viewpoint $\mathbf{v}_0$ is not known in our setting, but the relative transformations from $\mathbf{v}_0$ to all of the output viewpoints are known and fixed. As illustrated in Figure~\ref{fig:network}, our network takes the depth image and the silhouette in separate input branches. The encoder units ($E_d$, $E_s$, $E_h$) consist of bottleneck residual layers. $E_d$ and $E_s$ take in the depth image and the silhouette, respectively. Their outputs are concatenated along the channel dimension at resolution 16, and the following residual layers $E_h$ output the latent vector $h$ from which all output images are derived simultaneously. An alternate approach is to take in a two-channel image with a single encoder; we experimented with both architectures and found the two-branch network to perform better. We use two generic decoders (Fig.~\ref{fig:network}) to generate the views, one for all depths and another for all silhouettes. Each view in our setting has a corresponding segmented silhouette and another view on the opposite side; thus only 10 out of the 20 silhouettes need to be predicted due to symmetry (or 3 out of 6 if predicting six views). The network therefore outputs a silhouette and corresponding front and back depth images $\{s^{(i)}, d_f^{(i)}, d_b^{(i)}\}$ in the $i$-th output branch. Similarly to Dosovitskiy \etal~\cite{dosovitskiy2015learning}, we minimize the objective function \vspace{-3mm} $$\mathbf{L}_{\text{proj}} = k \mathbf{L}_s + (1-k) \mathbf{L}_d $$ where $\mathbf{L}_s$ is the mean logistic loss over the silhouettes and $\mathbf{L}_d$ is the mean squared error over the depth map pixels whose silhouette label is 1. We use $k=0.2$ in our experiments. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth]{figs/fig3.jpg} \end{center} \vspace{-0.1in} \caption{Illustration of ground truth generation. Top: ground truth mesh voxelized in camera coordinates, as used in the viewer-centered voxel baseline experiment. The numeric labels around the model are the indices of the viewpoints from which the multi-surface depth images (shown above) were rendered, left to right. Viewpoint-0, whose camera is located at the origin, \textsl{always} corresponds to the input view, and the relative transformation from Viewpoint-0 to all the other viewpoints is constant throughout all our experiments. Bottom: Multi-surface projections are offset relative to the input view. This viewer-centered approach does not require alignment between the 3D models and allows the network to be trained in an unsupervised manner on synthetic data. } \vspace{-0.1in} \label{fig:datasetPrep} \label{fig:onecol} \end{figure} \subsection{Reconstructing multi-surface representations} In our study, we use the term ``reconstruction'' to refer to surface mesh reconstruction in the final post-processing step. We convert the predicted multiview depth images to a single triangulated mesh using Floating Scale Surface Reconstruction (FSSR) \cite{fuhrmann2014floating}, which we found to produce better results than Poisson Reconstruction \cite{kazhdan2013screened} in our experiments. FSSR is widely used for surface reconstruction from oriented 3D points derived from multiview stereo or depth sensors.
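Before FSSR can be applied, the predicted depth maps must be fused into a single point cloud in a common frame. A minimal sketch of this back-projection and merging step (our own illustration, assuming the orthographic $[-1,1]^2$ window described earlier; names and conventions are ours, not the actual pipeline):
\begin{verbatim}
import numpy as np

def backproject_orthographic(depth, silhouette, cam_to_world):
    """Lift one predicted depth map into 3D points in the shared frame.

    depth, silhouette: (H, W) arrays; the orthographic window spans
    [-1, 1]^2 (an assumed convention), depth measured along camera +z.
    cam_to_world: (4, 4) pose of this output view in the common frame.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    u = 2.0 * xs / (w - 1) - 1.0   # pixel column -> [-1, 1]
    v = 1.0 - 2.0 * ys / (h - 1)   # pixel row -> [1, -1] (y up)
    mask = silhouette > 0.5        # keep only pixels predicted as object
    pts_cam = np.stack([u[mask], v[mask], depth[mask]], axis=1)
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]

def merge_views(depths, silhouettes, poses):
    """Union of back-projected points from all predicted views."""
    return np.concatenate([
        backproject_orthographic(d, s, T)
        for d, s, T in zip(depths, silhouettes, poses)
    ], axis=0)
\end{verbatim}
The merged cloud (optionally with per-point normals estimated from the depth gradients) is then handed to FSSR, which treats the remaining inter-view inconsistencies as observational noise.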
Our experiments are unique in that surface reconstruction methods are used to resolve noise in predictions generated by neural networks rather than sensor observations. We have found that 3D surface reconstruction reduces noise and error in surface distance measurements. \subsection{Generating voxel representations} We compare our multi-surface shape prediction method with a baseline that directly predicts a 3D voxel grid. Given a single-view depth image of an object, the ``Voxels'' network generates a grid of 3D occupancy mappings in the camera coordinates of viewpoint $\mathbf{v}_0$. The cubic window of length 2 centered at $\langle 0,0,1 \rangle$ is voxelized after camera transformation. The encoded features $h$ feed into 3D up-convolutional layers, outputting a final $48 \times 48 \times 48$ volumetric grid. The network is trained from scratch to minimize the logistic loss $\mathbf{L}_v$ over the binary voxel occupancy labels. \section{Experiments} We first describe the datasets (Sec.~\ref{sec:exp_dataset}) and evaluation metrics (Sec.~\ref{sec:exp_metrics}), then discuss results in Section~\ref{sec:discussion}. In all experiments, we train the networks on synthetically generated images. A single training example for the multi-surface network is the input-output pair $(x_d, x_s) \rightarrow \{(s^{(k)}, d_f^{(k)}, d_b^{(k)}) \}_{k= 0..9}$ where $(x_d, x_s)$ is the input depth image and segmentation, and the orthographic depth images $(s^{(k)}, d_f^{(k)}, d_b^{(k)})$ serve as the output ground truth. The $k$-th ground truth silhouette has associated front and back depth images. Each image is uniformly scaled to fit within 128x128 pixels. Training examples for the voxel prediction network consist of input-output pairs $(x_d, x_s) \rightarrow V$, where $V$ is a grid of ground truth voxels (size 48x48x48 for the input depth experiments, and 32x32x32 for the input RGB experiments). \subsection{Datasets} \label{sec:exp_dataset} \begin{figure*} \begin{center} \adjustbox{trim={.00\width} {.003\height} {0.00\width} {.01\height},clip}{ \includegraphics[scale=0.31]{figs/voxel-mv.png} } \end{center} \vspace{-0.1in} \caption{Multi-surfaces vs. Voxels. ``Novel View'' means other views of that shape instance were seen during training. ``Novel Model'' means other instances from the same class were seen in training. ``Novel Class'' means that no instances from that category were seen during training. Under multi-surface, a subset of the predicted depth maps are shown, as well as the complete reconstructed shape. The multi-surface approach tends to generalize better for novel classes.} \label{fig:qual_depth} \end{figure*} \begin{table*} \begin{center} \normalsize \begin{tabular}{ | l || c|c|c || c|c|c |} \hline {{Mean }} & \multicolumn{3}{ c ||}{Surface Distance} & \multicolumn{3}{ c |}{Voxel IoU} \\ \cline{2-7} {} & NovelClass &NovelModel &NovelView & NovelClass &NovelModel &NovelView \\ \hline Voxels & 0.0950 & 0.0619 & 0.0512 & 0.4569 & 0.5176 & \bfseries 0.6969 \\ Multi-surfaces & \bfseries 0.0759 & 0.0622 & \bfseries 0.0494 & 0.4914 & 0.5244 & 0.6501 \\ Rock \etal \cite{rock2015completing} & 0.0827 & \bfseries 0.0604 & 0.0639 & \bfseries 0.5320 & \bfseries 0.5888 & 0.6374 \\ \hline \end{tabular} \end{center} \caption{ 3D shape prediction from a single depth image on the SHREC'12 dataset used by~\cite{rock2015completing}, comparing results for voxel and multi-surface decoders trained to produce models in a viewer-centered coordinate frame. 
Rock \etal~\cite{rock2015completing} also predict in viewer-centered coordinates. } \label{fig:recon_shrec12} \end{table*} \begin{table}[] \small \begin{center} \begin{tabular}{|l|c|c|c|} \hline & \small{NovelView} & \small{NovelModel} & \small{NovelClass}\\ \hline View-centered & 0.714 & \bf{0.570} & \bf{0.517} \\ Obj-centered & \bf{0.902} & 0.474 & 0.309 \\ \hline \end{tabular} \end{center} \caption{ Voxel IoU of predicted and ground truth values (mean, higher is better), using the voxel network. Trained for 45 epochs with batch size 150 and learning rate 0.0001. } \label{table:shrec12_objview_iou} \end{table} \begin{table}[] \small \begin{center} \begin{tabular}{|l|c|c|c|} \hline & \small{NovelView} & \small{NovelModel} & \small{NovelClass}\\ \hline View-centered & 0.807 & \bf{0.706} & \bf{0.670} \\ Obj-centered & \bf{0.921} & 0.586 & 0.416 \\ \hline \end{tabular} \end{center} \caption{ Silhouette IoU, using the 6-view multi-surface network (mean, higher is better). } \label{table:shrec12_objview_silh} \end{table} \begin{table}[] \small \begin{center} \begin{tabular}{|l|c|c|c|} \hline & \small{NovelView} & \small{NovelModel} & \small{NovelClass}\\ \hline View-centered & 0.011 & \bf{0.016} & \bf{0.0207} \\ Obj-centered & \bf{0.004} & 0.035 & 0.0503 \\ \hline \end{tabular} \end{center} \caption{ Depth error, using the 6-view multi-surface network (mean, lower is better). } \label{table:shrec12_objview_depth} \vspace{-2.4mm} \end{table} \boldhead{3D shape from single depth} We use the SHREC'12 dataset for comparison with the exemplar retrieval approach by Rock \etal~\cite{rock2015completing} on predicting novel views, instances, and classes. Novel views require the least generalization (the same shape is seen in training), and novel classes require the most (no instances from the same category are seen during training). The dataset consists of 22,500 training and 6,000 validation examples, with 600 examples in each of the three test evaluation sets, using the standard splits~\cite{rock2015completing}. The 3D models in the dataset are aligned to each other, so that they can be used for both viewer-centered and object-centered prediction. Results are shown in Fig.~\ref{fig:qual_depth} and Tables~\ref{fig:recon_shrec12}, \ref{table:shrec12_objview_iou}, \ref{table:shrec12_objview_silh}, and \ref{table:shrec12_objview_depth}. \begin{figure*} \begin{center} \adjustbox{trim={.00\width} {.00\height} {0.00\width} {.00\height},clip}{ \includegraphics[scale=0.26]{figs/fig5-2.png} } \end{center} \vspace{-0.1in} \caption{\textbf{RGB-based shape prediction examples: } On the left is the input image. We show predicted depth maps and silhouettes from three views and a merged point cloud from all views, produced by the networks trained with object-centered coordinates and with viewer-centered coordinates. Viewer-centered tends to generalize better, while object-centered sometimes produces a model that looks good but is from entirely the wrong category. In viewer-centered, the encoder learns to map inputs together if they correspond to similar shapes in similar poses, learning a viewpoint-sensitive representation. In object-centered, the encoder learns to map different views of the same object together, learning a viewpoint-invariant representation.
} \vspace{-0.1in} \label{fig:qual_rgb} \end{figure*} \begin{table}[] \footnotesize \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline Category & \cite{choy20163d} (OC) & \cite{choy20163d} (VC) & Ours (OC) & Ours (VC) \\ \hline aero & 0.359 & 0.201 & \textbf{0.362} & 0.289 \\ bike & \textbf{0.535} & 0.106 & 0.362 & 0.272 \\ boat & \textbf{0.366} & 0.236 & 0.331 & 0.259 \\ bottle & 0.617 & 0.454 & \textbf{0.643} & 0.576 \\ bus & 0.387 & 0.273 & 0.497 & \textbf{0.556} \\ car & 0.462 & 0.400 & 0.566 & \textbf{0.582} \\ chair & 0.325 & 0.221 & \textbf{0.362} & 0.332 \\ d.table & 0.081 & 0.023 & \textbf{0.122} & 0.118 \\ mbike & 0.474 & 0.167 & \textbf{0.487} & 0.366 \\ sofa & \textbf{0.602} & 0.447 & 0.555 & 0.538 \\ train & \textbf{0.340} & 0.192 & 0.301 & 0.257 \\ tv & 0.376 & 0.164 & 0.383 & \textbf{0.397} \\ \hline mean & 0.410 & 0.240 & \textbf{0.414} & 0.379 \\ \hline \end{tabular} \end{center} \caption{ Per-category voxel IoU on PASCAL 3D+ using our multi-surface network and the voxel-based 3D-R2N2 network~\cite{choy20163d}. Although the network trained to produce object-centered (OC) models performs slightly better quantitatively (for multi-surface), the viewer-centered (VC) model tends to produce better qualitative results (see supplemental material for more), sometimes with misaligned pose. } \label{table:pascal3d} \vspace{-3mm} \end{table} \vspace{0.3mm} \boldhead{3D shape from real-world RGB images} We also perform novel model experiments on RGB images. We use RenderForCNN's \cite{su2015render} rendering pipeline and generate 2.4M synthetic training examples using the ShapeNetCore dataset, along with target depth and voxel representations. This dataset contains 34,000 3D CAD models from 12 object categories. We perform quantitative evaluation of the resulting models on real-world RGB images using the PASCAL 3D+ dataset~\cite{xiang_wacv14_pascal3d}. We train 3D-R2N2's network~\cite{choy20163d} from scratch using the same dataset and compare evaluation results. The results we report here differ from those in the original paper due to differences in the training and evaluation sets. Specifically, the results reported in~\cite{choy20163d} are obtained after fine-tuning on the PASCAL 3D+ dataset, which is explicitly discouraged in~\cite{xiang_wacv14_pascal3d} because the same 3D model exemplars are used for train and test examples. Thus, we train on renderings and test on real RGB images of objects that may be partly occluded and have background clutter -- a challenging task. Results are shown in Table~\ref{table:pascal3d} and Figure~\ref{fig:qual_rgb}. \subsection{Evaluation metrics and processes} \label{sec:exp_metrics} \boldhead{Voxel intersection-over-union} Given a mesh reconstructed from the multi-surface prediction (which may not be watertight), we obtain a solid representation by voxelizing the mesh surface into a hollow volume and then filling in the holes using ray tracing. All voxels not visible from the outside are filled. Visibility is determined as follows: from the center of each voxel, we scatter 1000 rays and the voxel is considered visible if any of them can reach the edge of the voxel grid. We compute intersection-over-union (IoU) with the corresponding ground truth voxels, defined as the number of voxels filled in both representations divided by the number of voxels filled in at least one.
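A minimal sketch of this post-process is given below; it is a brute-force illustration of the visibility test just described, with hypothetical helper names, rather than our optimized implementation.
\begin{verbatim}
import numpy as np

def voxel_iou(pred, gt):
    """Intersection-over-union of two boolean occupancy grids."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (pred & gt).sum() / float((pred | gt).sum())

def fill_interior(hollow, n_rays=1000, seed=0):
    """Fill a hollow voxelized surface: a voxel becomes solid when none of
    `n_rays` random rays from its center escapes the grid unblocked."""
    hollow = hollow.astype(bool)
    rng = np.random.default_rng(seed)
    dims = np.array(hollow.shape)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    filled = hollow.copy()
    for idx in np.argwhere(~hollow):
        visible = False
        for d in dirs:
            p = idx + 0.5                      # start at the voxel center
            while np.all(p >= 0) and np.all(p < dims):
                if hollow[tuple(p.astype(int))]:
                    break                      # ray blocked by the surface
                p = p + 0.5 * d                # step half a voxel along the ray
            else:
                visible = True                 # ray left the grid unblocked
                break
        if not visible:
            filled[tuple(idx)] = True          # interior voxel: fill it
    return filled
\end{verbatim}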
\boldhead{Surface distance} We also evaluate with a surface distance metric similar to \cite{rock2015completing}, which tends to correspond better to qualitative judgments of accuracy when there are thin structures. The distance between surfaces is approximated as the mean of \textsl{point-to-triangle} distances from i.i.d. sampled points on the ground truth mesh to the closest points on the surface of the reconstructed mesh, and vice versa. We utilize a KD-tree to find the closest point on the mesh. To ensure scale invariance of this measure across datasets, we divide the resulting value by the mean distance between points sampled on the GT surface. The points are sampled at a density of 300 points per unit area. To evaluate surface distance for voxel-prediction models, we use Marching Cubes to obtain the mesh from the prediction. \boldhead{Image-based measures} For multi-surface experiments, in addition to voxel IoU and surface distance, we also evaluate using silhouette intersection-over-union and depth error averaged over the predicted views. Sometimes, even when the predictions for individual views are quite accurate, slight inconsistencies or oversmoothing by the final surface estimation can reduce the accuracy of the 3D model. \section{Discussion} \label{sec:discussion} \paragraph{Multi-surface vs. voxel shape representations:} Table~\ref{fig:recon_shrec12} compares the performance of multi-surface and voxel-based representations for shape prediction. Quantitatively, multi-surface outperforms for novel classes and performs similarly for novel views and novel instances. We also find that the 3D shapes produced by the multi-surface model look better qualitatively, as they can encode higher resolution. We observe that it is generally difficult to learn and reconstruct thin structures such as the legs of chairs and tables. In part this is a learning problem, as discussed by Choy \etal~\cite{choy20163d}. Our qualitative results suggest that silhouettes are generally better for learning and predicting thin object parts than voxels, but the information is often lost during surface reconstruction due to the sparsity of available data points. We expect that improved depth fusion and mesh reconstruction would likely yield even better results. As shown in Fig. 6, the multi-surface representation can more directly be output as a point cloud by skipping the reconstruction step. This avoids errors that can occur during the surface reconstruction but is more difficult to quantitatively evaluate. \vspace{0.4mm} \boldhead{Viewer-centered vs. object-centered coordinates} When comparing the performance of predicting in viewer-centered coordinates vs. object-centered coordinates, it is important to remember that only viewer-centered prediction encodes pose and is thus more difficult. Sometimes, the 3D shape produced by viewer-centered prediction is very good, but the pose is misaligned, resulting in poor quantitative results for that example. Even so, in Tables~\ref{table:shrec12_objview_iou}, \ref{table:shrec12_objview_silh}, and \ref{table:shrec12_objview_depth}, we observe a clear advantage for viewer-centered prediction for novel models and novel classes, while object-centered outperforms for novel views of object instances seen during training. For object-centered prediction, two views of the same object should produce the same 3D shape, which encourages memorizing the observed meshes.
Under viewer-centered prediction, the predicted mesh must be oriented according to the input viewpoint, so multiple views of the same object should produce different 3D shapes (which are related by a 3D rotation). This requirement seems to improve the generalization capability of viewer-centered prediction to shapes not seen during training. In Table~\ref{table:pascal3d}, we see that object-centered prediction quantitatively slightly outperforms for RGB images. In this case, training is performed on rendered meshes, while testing is performed on real images of novel object instances from familiar categories. Qualitatively, we find that viewer-centered prediction tends to produce much more accurate shapes, but that the pose is sometimes wrong by 15--20 degrees, perhaps as a result of dataset transfer. Qualitative results support our initial hypothesis that object-centered models tend to correspond more directly to category recognition. We see in Figure~\ref{fig:qual_rgb} that the object-centered model often predicts a shape that looks good but is an entirely different object category than the input image. The viewer-centered model tends not to make these kinds of mistakes; instead, its errors tend to be overly simplified shapes or slightly incorrect poses. \vspace{0.4mm} \boldhead{Implications for object recognition} While not a focus of our study, we also trained an object classifier using the 4096-dimensional encoding layer of the viewer-centric model as input features for a single hidden-layer classifier. The resulting classifier outperformed, by $1\%$, a ResNet classifier that was trained end-to-end on the same data. This indicates that models trained to predict shape and pose contain discriminative information that is highly useful for predicting object categories and may, in some ways, generalize better than models learned to predict categories directly. More study is needed in this direction. \section{Conclusion} Recent methods to produce 3D shape from a single image have used a variety of representations for shape (voxels, octrees, multiple depth maps). By utilizing the same encoder architecture for volumetric and surface-based representations, we are able to more directly compare their efficacy. Our experiments show an advantage for surface-based representations in predicting novel object shapes, likely because they can encode shape details with fewer parameters. Nearly all existing methods predict object shape in object-centered coordinates, but our experiments show that learning to predict shape in viewer-centered coordinates leads to better generalization for novel objects. Further improvements in surface-based prediction could be obtained through better alignment and fusion of the produced depth maps. More research is also needed to verify whether recently proposed octree-based representations~\cite{tatarchenko2017octree} close the gap with surface-based representations. In addition, the relationship between object categorization and shape/pose prediction requires further exploration. Novel view prediction and shape completion could provide a basis for unsupervised learning of features that are effective for object category and attribute recognition. \vspace{0.74mm} \boldhead{Acknowledgements} This project is supported by NSF Awards IIS-1618806, IIS-1421521, Office of Naval Research grant ONR MURI N00014-16-1-2007 and a hardware donation from NVIDIA. {\small \bibliographystyle{ieee}
\section{Introduction} The modelling of a two-sex population with heritable traits has often proved to be a formidable problem due to the interplay of many seemingly crucial evolutionary mechanisms. Our study is motivated by a specific example of this problem: the evolution of longevity in ancestral populations, where sex-specific life histories introduce biases in the mating sex ratio and each sex has a fitness tradeoff associated with longevity. The tradeoffs and life histories vary with longevity, which is expected from Charnov's model of life history variation in female mammals \cite{129}, and, when models include both males and females, result in a \emph{sexual conflict}, wherein each sex is optimised at different longevities, but both optima cannot be obtained simultaneously \cite{173,174}. This leads to the selected longevity of the population being a compromise between the two optima. Although an observation shared by previous studies is that the sexual conflict plays a significant role in the evolutionary dynamics, it remains a major obstacle to gaining a deeper understanding of the problem, as it has been difficult to quantify explicitly \cite{114,115,116,117,170}. This further adds to the confusion about exactly which components of the problem generate the sexual conflict and how strongly such components contribute to it. \\ \\ Instead of reasoning about this problem in a discrete sense, for example with an agent-based approach, we show that it is possible, and more convenient, to consider continuous distributions along trait-space. This approach allows for a simplified computation of the evolutionary dynamics and the explicit calculation of the fitness landscape of the population. Moreover, the male and female fitnesses across trait-space can be calculated separately, which provides a way to visualise the sexual conflict. Since the calculation of the fitness landscape is explicit and inexpensive, comparisons of fitness landscapes generated from different sex-specific life-history strategies and tradeoffs can be made. In Section \ref{ntrait}, we generalise this result for an arbitrary number of heritable traits that affect the sex-specific tradeoffs and life-history parameters. A common approach to such problems is discrete formulations aiming to generalise Price's equation (for examples see Batty et al. \cite{179} and Grafen \cite{180}), which is a mathematical statement of how trait frequencies in a population change over time, given the initial population frequencies over trait space and the fitnesses of the traits (see Frank \cite{176} for a detailed discussion and also Price's original papers \cite{177,178}). We take an alternative dynamical-systems approach; thus, our result can be thought of as a dynamical-systems generalisation of Price's equation to include two sexes, age structure and multiple traits. \\ \\ In Section \ref{GMsect}, we apply the result to determine whether grandmothering, whereby post-fertile females increase the fertility rate of their daughters by provisioning their grandchildren, can drive the evolution of increased longevity by allowing for a transition between equilibria in the trait-space. This question stems from the Grandmother Hypothesis, which proposes that increased longevity was selected for when ancestral populations began relying on foods weaned juveniles could not acquire effectively for themselves.
Under such circumstances, mothers would have to feed their offspring longer, but with subsidies from grandmothers, they could have their next offspring sooner; and longer-lived grandmothers, without infants themselves, could support more grandchildren. \\ \\ Whether grandmothering alone can propel a great ape-like population to higher longevities has been the subject of several modelling studies recently, many of which involve computationally expensive simulations \cite{115,116,117,118}. Similar to Kim et al. \cite{116}, Kim et al. \cite{117} and Chan et al. \cite{170}, who considered the problem with comparable assumptions, we find here that grandmothering alone can drive the evolution of increased longevity. The key difference from previous models is that our approach produces an explicit fitness landscape which is inexpensive to compute. This allows for a straightforward comparison between the fitness landscapes of both sexes and the population compromise that evolves, with and without grandmothering. \section{Model} \subsection{Problem setup} \label{setup} We consider a two-sex population in which each individual possesses a heritable longevity trait value $x$. Females are fertile from $\tau_f(x)$ to $\hat{\tau}_f(x)$, while males are fertile from $\tau_m(x)$ to $\hat{\tau}_m(x)$, where $\hat{\tau}_f(x)$ and $\hat{\tau}_m(x)$ denote the end of fertility for females and males respectively for a particular longevity trait value $x$. We make no assumptions on the form of $\tau_f(x)$, $\tau_m(x)$, $\hat{\tau}_f(x)$ or $\hat{\tau}_m(x)$, only that they are piecewise continuous functions. Both males and females have equal age-specific mortality rates $\mu(a,x)$, leading to equivalent age-profiles for both sexes. \\ \\ In each time interval, fertile males compete over every fertile female for a chance at paternity. The competitive success rate for a male with a specific trait value $x$ is governed by the male longevity-fertility tradeoff function $\phi(x)$, representing the relative probability that a male will out-compete others for a chance at paternity. Likewise, the female birth rate at each time $t$ is governed by the female longevity-fertility tradeoff function $b(x)$, which accounts for varying interbirth intervals as longevity increases. Offspring inherit the mean value of their parents' longevity trait values, with mutations distributed normally with mean 0 and variance $\varepsilon^2$. We assume that there is equal probability for an offspring to be male or female, leading to an equal sex ratio. Finally, to ensure that the population converges to a finite equilibrium, we assume that the number of offspring with a specific trait value $x$ entering the population is regulated by a competition factor equal to the total number of births at time $t$. \subsection{Model formulation} \noindent We let $u(a,x,t)$ denote the density of individuals with age $a$ and longevity $x$ at time $t$, and model the age and mortality dynamics via the McKendrick--von Foerster equation \begin{equation} \label{PDE} \displaystyle{\frac{\partial u}{\partial t} + \frac{\partial u}{\partial a} = -\mu(a,x) u}. \end{equation} \\ The mating and mutation dynamics are addressed in the boundary condition, given by \begin{equation} \label{BC} u(0,x,t) = \frac{1}{\xoverline{S}_f(t) \xoverline{S}_m(t)} \int_{-\infty}^\infty S_f(y,t) \int_{-\infty}^\infty S_m(\xoverline{z},t) N \left(\frac{\xoverline{z}+y}{2},x,\varepsilon^2 \right) \, d\xoverline{z} \, dy.
\end{equation} \\ The boundary condition gives the total density of offspring with longevity $x$ entering the system at any time $t$; we explain its form below. The functions $S_f(x,t)$ and $S_m(x,t)$ respectively denote the density of births by fertile females with longevity $x$ and paternities by males with longevity $x$, that is, \begin{equation} \begin{aligned} S_f (x,t) &= \int_{\tau_f}^{\hat{\tau}_f} b(a,x) u(a,x,t) \, da,\\ S_m (x,t) &= \int_{\tau_m}^{\hat{\tau}_m} \phi(a,x) u(a,x,t) \, da. \end{aligned} \end{equation} \\ The functions $\xoverline{S}_f(t):= \int_{-\infty}^{\infty} S_f \, dx$ and $\xoverline{S}_m(t):= \int_{-\infty}^{\infty} S_m \, dx$ represent the total density of births and a measure of total male competitiveness at time $t$ respectively. The last factor in the integrand, $N(x,\mu,\varepsilon^2)$, denotes the normal density function with mean $\mu$ and variance $\varepsilon^2$; this factor governs the mating and mutation dynamics, where the parameter $\varepsilon$ in Eq.~(\ref{BC}) denotes the mutation rate, which we assume to be small. The expression ${S_m (\xoverline{z},t)}/{\xoverline{S}_m (t)}$ represents the probability of a female mating with a male with longevity $\xoverline{z}$, or equivalently, the probability of a male with longevity $\xoverline{z}$ securing a mate. The total density of paternities by males with longevity $\xoverline{z}$ is then given by $S_m(\xoverline{z},t) S_f(y,t)/ \xoverline{S}_m(t)$. Including the mutation factor $N((\xoverline{z}+y)/2,x,\varepsilon^2)$ and integrating over $y$ gives the total density of offspring with longevity $x$ entering the system at any time. Finally, we divide by $\xoverline{S}_f(t)$ to regulate the number of offspring entering the system, which serves to keep the total population density finite as described in Section \ref{setup}. This process results in the boundary condition Eq.~(\ref{BC}). \\ \\ To simplify Eq.~(\ref{BC}), we substitute $\xoverline{z} = 2z-y$ to obtain \begin{equation} \label{BC2} \begin{aligned} u(0,x) &= 2 \frac{1}{\xoverline{S}_f \xoverline{S}_m} \int_{-\infty}^\infty S_f(y) \int_{-\infty}^\infty S_m(2z-y) N \left(z,x,\varepsilon^2 \right) \, dz \, dy,\\ & = \frac{2}{\xoverline{S}_f \xoverline{S}_m} S_f(x) \ast S_m(x) [2x] \ast N(x,0,\varepsilon^2)[x], \end{aligned} \end{equation} \\ where $\ast$ denotes the convolution operator, i.e. $f(x) \ast g(x) [x] = \int_{-\infty}^\infty f(y)g(x-y) \, dy$. \\ \\ Since the mortality and ageing dynamics in Eq.~(\ref{PDE}) and the evolutionary dynamics in Eq.~(\ref{BC2}) are on different timescales, we simplify the system by assuming that the population is always at a stable age distribution. Thus, the density of births and paternities for specific longevity values, given above by $S_f$ and $S_m$, can be simplified to \begin{equation} \begin{aligned} S_f (x) &= u(0,x)\int_{\tau_f}^{\hat{\tau}_f} b(a,x) \exp \left( -\int_0^a \mu(s,x) \, ds \right) \, da,\\ S_m (x) &= u(0,x)\int_{\tau_m}^{\hat{\tau}_m} \phi(a,x) \exp \left( -\int_0^a \mu(s,x) \, ds \right) \, da. \end{aligned} \end{equation} \\ For convenience, we define \begin{equation} \label{imp} \begin{aligned} F(x) = \int_{\tau_f}^{\hat{\tau}_f} b(a,x) \exp \left( -\int_0^a \mu(s,x) \, ds \right) \, da,\\ M(x) = \int_{\tau_m}^{\hat{\tau}_m} \phi(a,x) \exp \left( -\int_0^a \mu(s,x) \, ds \right) \, da.
\end{aligned} \end{equation} \\ \\ The evolutionary dynamics of the system, given in Eq.~(\ref{BC2}), is then simplified to the following integrodifference equation \begin{equation} \label{convint} \begin{aligned} u_{n+1}(x) = 2 \left( \frac{F(x)u_n(x)}{\langle F(x),u_n(x) \rangle} \right) \ast \left( \frac{M(x)u_n(x)}{\langle M(x),u_n(x) \rangle} \right) [2x] \ast N(x,0,\varepsilon^2) [x], \end{aligned} \end{equation} \\ where the angled brackets denote the $L^2$ inner product $\langle f(x),g(x) \rangle := \int_{-\infty}^\infty f(x)g(x) \, dx$. Since $\frac{F(x)u_n(x)}{\langle F(x),u_n(x) \rangle}$ and $\frac{M(x)u_n(x)}{\langle M(x),u_n(x) \rangle}$ both integrate to 1, it is instructive to view them as probability density functions. The term $2 \left( \frac{F(x)u_n(x)}{\langle F(x),u_n(x) \rangle} \right) \ast \left( \frac{M(x)u_n(x)}{\langle M(x),u_n(x) \rangle} \right) [2x]$ can then be interpreted as another probability density function which lies ``in-between'' $\frac{F(x)u_n(x)}{\langle F(x),u_n(x) \rangle}$ and $\frac{M(x)u_n(x)}{\langle M(x),u_n(x) \rangle}$, assuming $u_n(x)$ is bell-shaped; see Figure \ref{fig1}. Convolving with $N(x,0,\varepsilon^2) [x]$, the term responsible for mutations, ``smooths'' this function. The convolution of probability density functions results in another probability density function, thus $u_{n+1}(x)$ integrates to 1 and can be viewed as a probability density function for all $n$. We provide an illustration of one hypothetical iteration of Eq.~(\ref{convint}) in Figures \ref{fig1}-\ref{fig2}, where one iteration encapsulates the mating, mutation and birth dynamics for one generation. The process illustrated in Figures \ref{fig1}-\ref{fig2} is a continuous perspective on the discrete problem described in Section \ref{setup}. Its main advantage is that the computation of convolutions of functions is much faster and more elegant than the agent-based approach of simulating discrete individuals. We note that this approach does take into account the biases in the mating sex ratio, even though there is no age-structure in Eq.~(\ref{convint}). \\ \\ The function $F(x)$ gives the expected number of births by a female with longevity $x$ in her lifetime, while $M(x)$ gives a value proportional to the fraction of available mating opportunities a male with longevity $x$ will have in his lifetime. We let $F(x)$ and $M(x)$ be measures of fitness for females and males respectively, although we note that $M(x)$ is technically not a measure of reproductive success; rather, it is a measure of the expected proportion of matings a male with longevity $x$ will have in his lifetime. We note that a special case of $F(x)$ and $M(x)$ is when the tradeoff functions are independent of age $a$, in which case $F(x) = b(x) k_1(x)$ and $M(x) = \phi(x) k_2(x)$, where $k_1(x)$ and $k_2(x)$ are the expected number of years survived in the fertile ages for females and males respectively with longevity trait value $x$. \\ \\ A natural question is: to which longevity value $x$ does the population converge for given parameter values and tradeoff functions of the model? Through the stability analysis in Section \ref{stability}, we show that the answer is simply the longevity value $x$ that maximises the product $F(x)M(x)$. In fact, $F(x)M(x)$ represents the fitness landscape of the scenario outlined in Section \ref{setup}, and so one can examine the shape of $F(x)M(x)$ and infer the optimal longevity value $x$ of the population without performing a simulation of the PDE.
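As an aside, one iteration of Eq.~(\ref{convint}) is inexpensive to evaluate numerically. A minimal sketch is given below; the grid half-width, resolution and mutation variance are illustrative choices of ours, not values used elsewhere in this paper.
\begin{verbatim}
import numpy as np

Lmax, n, eps2 = 60.0, 4001, 0.025     # grid half-width, grid points, mutation variance
x = np.linspace(-Lmax, Lmax, n)
h = x[1] - x[0]

def iterate(u, F, M):
    f = F * u / np.trapz(F * u, x)    # F(x)u_n(x)/<F,u_n>: births by trait
    m = M * u / np.trapz(M * u, x)    # M(x)u_n(x)/<M,u_n>: paternities by trait
    c = np.convolve(f, m) * h         # (f*m)(z) on z in [-2 Lmax, 2 Lmax]
    z = np.linspace(-2 * Lmax, 2 * Lmax, c.size)
    mid = 2.0 * np.interp(2.0 * x, z, c)          # evaluate at 2x: midparent trait
    g = np.exp(-x**2 / (2 * eps2)) / np.sqrt(2 * np.pi * eps2)
    u_next = np.convolve(mid, g, mode='same') * h # mutation smoothing
    return u_next / np.trapz(u_next, x)           # guard against discretization error
\end{verbatim}
The two convolutions implement the mating and mutation steps respectively, and the final renormalisation only corrects for discretisation error.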
The magnitude of the sexual conflict in the system can be inferred from a comparison of $F(x)$, $M(x)$ and $F(x)M(x)$, that is, a comparison of the female, male and combined fitness landscapes. Furthermore, we show that the speed of convergence to the optimum of the fitness landscape is proportional to the slope of the fitness landscape $F(x)M(x)$. We generalise these results for an arbitrary number of traits in Section \ref{ntrait}. \begin{figure}[H] \hspace{5mm} \centerline{ \subfloat[]{\label{fig1}\includegraphics[width=0.55\textwidth]{fig1.pdf}} \hspace{-0mm} \subfloat[]{\label{fig2}\includegraphics[width=0.55\textwidth]{fig2.pdf}}} \caption{An illustration of the process described in Eq.~(\ref{convint}) for hypothetical functions $F(x)$ and $M(x)$. We note that the height of the individual curves is irrelevant; the important aspect is that they integrate to 1, since they describe distributions across trait-space. The black curve, representing $u_{n+1}(x)$, is obtained by convolving the function given in red with a normal density function with zero mean and finite variance.} \label{fig3} \end{figure} \section{Including grandmothering} \label{GMsect} In this section, we consider the same problem as in Section \ref{setup}, except that we allow for grandmothering, i.e. provisioning from post-fertile females. We let $L$, an individual's life expectancy, be the only heritable trait in the population. Similar to Chan et al. \cite{170}, we assume that post-fertile females take care of all their matrilineal grandchildren and that grandmothers boost their daughters' birth rates. The birth rate $b(a,L)$ in Eq.~(\ref{imp}) is then of the form $B(G(L)) \bar{b}(a,L)$, where $\bar{b}(a,L)$ is the base birth rate without the help of grandmothers, $G(L)$ is the proportion of fertile females covered by a post-fertile mother for a fixed longevity $L$, assuming a stable age distribution, and $B(g) = mg + (1-g)$ represents the boost in fertility. Following Chan et al. \cite{170}, we let the base birth rate $\bar{b}(L)$ be independent of age and require it to be equal to 0.3/year at $L=14$ and 0.11/year at $L=34$. Also guided by Chan et al. \cite{170}, we set $m=3$, so that grandmothers boost their daughters' birth rates by a factor of 3, thus increasing the birth rate to 0.33/year for an individual with a life expectancy of 34 years who is also supported by a post-fertile mother. \\ \\ We use the parameter settings of Kim et al. \cite{117} for $\tau_f(L)$, $\tau_m$, $\hat{\tau}_f$, $\hat{\tau}_m(L)$ and $\mu(L)$. These parameters are listed in Table \ref{lifehistoryparatable}. We include the assumption from Kim et al. \cite{117} that individuals are frail, and thus exit the population, at the age $\tau_T = \min(2L,75)$, to prevent individuals living to unrealistic ages. Moreover, males are fertile from sexual maturity $\tau_m$ until frailty $\tau_T$. We note that the monotonically increasing nature of the age of female sexual maturity $\tau_f(L)$ is based on Charnov's model of mammalian life history, which predicts an increase in maturation age with greater longevities \cite{129}. The male age of sexual maturity is kept constant for simplicity. \\ \\ Since we could not find a straightforward analytic way to calculate $G(L)$ given arbitrary parameter values, we approximate $G(L)$ by using an agent-based model (ABM). The ABM runs as follows: First, a longevity value $L$ is chosen and is fixed for the entire simulation.
All individuals have the same longevity $L$ and there is zero probability for mutations. Females are fertile from $\tau_f(L)$ to $\hat{\tau}_f$. Due to longevity being fixed and the assumption in Section \ref{setup} that every female is guaranteed to mate in every time step, there is no need for the ABM to track males; tracking males is only essential if $L$ is allowed to evolve. At each time step, every fertile female gives birth to one offspring with probability $1-\exp(-\bar{b}(L))$ if she does not have a surviving post-fertile mother, or with probability $1-\exp(-3\bar{b}(L))$ if she does. If there are over 200 newborns, then newborns are randomly removed from the population until only 200 remain. Individuals die with probability $1-\exp(-\mu(L))$ and any individuals past the age of $\min(2L,75)$ are removed from the population. Figure \ref{GMprop} shows 100 samples of $G(L)$ for each integer $L$ past 22, since there are no grandmothers unless $L \geq 23$, due to the effective end of female fertility $\min(\hat{\tau}_f, \tau_T) = \min(2L,45)$. We fit the function \begin{equation} G(L) = \begin{cases} \begin{aligned} \frac{(L-23)^a}{b+(L-23)^c} \quad\quad &\text{if } L \geq 23,\\ 0 \quad\quad &\text{if } L<23, \end{aligned} \end{cases} \end{equation} \\ through the means of each data set collected in Figure \ref{GMprop} and use the method of least squares to obtain $a = 1.164$, $b=43.83$ and $c=1.36$. \\ \\ Following Kim et al. \cite{116}, we let the male fertility-longevity tradeoff $\phi(L)$, the relative probability for a male with longevity $L$ to secure a mate, decrease exponentially with longevity by requiring it to satisfy $\phi'(L) = \delta(L)\phi(L)$. The function $\phi(L)$ is then uniquely defined, up to a scaling factor, by \begin{equation} \label{maletrade} \phi(L) = \exp \left( \int_{L_0}^L \delta (s) \, ds \right), \end{equation} \\ where we choose $L_0=20$ and $\delta(L) = -0.4\exp(-0.087 L)$. \begin{table}[H] \centering \caption{Parameter values} \begin{tabular}{lll} \toprule Symbol & Definition & Value\\ \midrule $m$ & Benefit received by fertile female from grandmothering & $3$ \\ $\tau_T(L)$ & Age of frailty & $\min(2L,75)$ \\ $\tau_f(L)$ & Age of female sexual maturity & $L/2.5 + 2$ \\ $\tau_m$ & Age of male sexual maturity & 15 \\ $\hat{\tau}_f$ & Age female fertility ends & 45 \\ $\hat{\tau}_m(L)$ & Age male fertility ends & $\min(2L,75)$ (equal to $\tau_T$) \\ $\phi(L)$ & Male fertility-longevity tradeoff & Function of $L$, see Eq.~(\ref{maletrade}) \\ $\mu(L)$ & Mortality rate & $1/L$ \\ $\bar{b}(L)$ & Base birth rate & $4.522/L - 0.023$ \\ \bottomrule \end{tabular} \label{lifehistoryparatable} \\[5pt] \end{table} \noindent To examine the life expectancy that is selected in the population with and without grandmothering, we compute $F(L)M(L)$. As shown in Figure \ref{GMfitness}, the fitness landscape is the same below a life expectancy of $L=23$, since $L=23$ is the earliest life expectancy at which the population has any post-fertile females. However, for life expectancies greater than 23, grandmothering alters the fitness landscape and enables a transition to a unique optimum corresponding to a higher longevity. \\ \\ We confirm that the population converges to the maxima of the fitness landscape in Figure \ref{GMfitness2}, where we show the expected trait value of the population over time, calculated via Eq.~(\ref{convint}).
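For reference, the fitness landscape requires only elementary quadratures. The following minimal sketch (ours; it uses the closed forms obtained by integrating Eq.~(\ref{imp}) with the age-independent rates of Table \ref{lifehistoryparatable}, and the trait grid is an arbitrary choice) locates the selected life expectancy with and without grandmothering.
\begin{verbatim}
import numpy as np

def G(L):
    # fitted proportion of fertile females with a surviving post-fertile mother
    s = np.clip(L - 23.0, 0.0, None)
    return s**1.164 / (43.83 + s**1.36)

def F(L, grandmothering=True):
    # expected lifetime births, Eq. (imp), with mu = 1/L and age-independent b
    b = (4.522 / L - 0.023) * ((1 + 2 * G(L)) if grandmothering else 1.0)
    t0, t1 = L / 2.5 + 2.0, np.minimum(2 * L, 45.0)
    return b * L * (np.exp(-t0 / L) - np.exp(-t1 / L))

def M(L):
    # male fitness, Eq. (imp), with phi(L) from Eq. (maletrade)
    phi = np.exp((0.4 / 0.087) * (np.exp(-0.087 * L) - np.exp(-0.087 * 20)))
    t0, t1 = 15.0, np.minimum(2 * L, 75.0)
    return phi * L * (np.exp(-t0 / L) - np.exp(-t1 / L))

L = np.linspace(15, 60, 901)
for gm in (False, True):
    print(gm, L[np.argmax(F(L, gm) * M(L))])   # selected life expectancy
\end{verbatim}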
Moreover, to elucidate the role of the sexual conflict in the system, Figure \ref{GMfitness3} provides a way to visualise the magnitude of the sexual conflict by comparing the female fitness function $F(L)$, the male fitness function $M(L)$ and the two-sex fitness landscape $F(L)M(L)$. From Figure \ref{GMfitness3}, male fitness increases with longevity $L$; however, this is counterbalanced by a decrease in female fitness, which creates the unique maximum point in the combined two-sex fitness landscape. One can observe the optima preferred by the males and females through their respective fitness landscapes, and also the compromise adopted by the population overall that lies between these two optima, as shown by the two-sex fitness landscape. We note that the male fitness landscape is the same regardless of whether there is grandmothering in the population, since we have made the assumption that the only function of grandmothers in the population is to boost their daughters' birth rates. \begin{figure}[H] \centerline{ \subfloat[]{\label{GMprop}\includegraphics[width=0.55\textwidth]{GMprop.pdf}} \hspace{2mm} \subfloat[]{\label{GMfitness}\includegraphics[width=0.55\textwidth]{GMfitness.pdf}}} \caption{Figure (a) shows box plots of the proportion of fertile females with a surviving post-fertile mother when the population has reached a stable age distribution in the ABM. The function $G(L) = \frac{(L-23)^a}{b+(L-23)^c}$ is used as a model for $G(L)$, where $a = 1.164$, $b=43.83$ and $c=1.36$. Figure (b) shows fitness landscapes of the population with and without grandmothering. The dashed lines correspond to the maxima of each curve. When there is no grandmothering, the population converges to a life expectancy of approximately 24, whereas the population transitions to a life expectancy of approximately 38 with grandmothering.} \label{fig5} \end{figure} \begin{figure}[H] \centerline{ \subfloat[]{\label{GMfitness2}\includegraphics[width=0.55\textwidth]{GMevolutionfigure.pdf}} \hspace{2mm} \subfloat[]{\label{GMfitness3}\includegraphics[width=0.505\textwidth]{femalevsGMFL.pdf}}} \caption{Figure (a) shows plots of the first moment of $u_n(L)$, as defined in Eq.~(\ref{convint}), with and without grandmothering, with $\varepsilon^2=0.025$. When the population without grandmothering (blue curve) reached equilibrium, grandmothering was allowed for (orange curve). The equilibria match the maxima of the fitness landscapes shown in Figure \ref{GMfitness}, as expected from the proof in Section \ref{stability}. Figure (b) shows plots of the female fitness landscape $F(L)$, the male fitness landscape $M(L)$ and the two-sex fitness landscape $F(L)M(L)$, with grandmothering (solid) and without grandmothering (dashed). Each curve is scaled such that its maximum value is 1. This provides a visualisation of the magnitude of the sexual conflict in the system.} \label{fig4} \end{figure} \section{Discussion} We have presented a computationally inexpensive method to compute the fitness landscape of the problem presented in Section \ref{setup} and have applied it to determine whether grandmothering in a population can enable an evolutionary trajectory to higher longevities. Previous studies of this problem have included both short- and long-time-scale components, which results in computationally expensive simulations and hinders inference from the model.
Our method essentially compacts the short-time-scale components of the problem, such as mortality and ageing, by assuming a stable age distribution, while focusing on the evolution of traits in the population, which we assume to occur on a much longer time scale. The resulting equation describing the evolutionary dynamics, given by Eq.~(\ref{convint}), can be viewed as a dynamical-systems generalisation of Price's equation that includes two sexes and multiple traits. \\ \\ A difficulty encountered by previous studies in the literature is assessing the role of the sexual conflict in the problem. This is because the sexual conflict is a manifestation of many different components; specifically, the mating sex ratio (which is itself determined by the male and female fertile ages, and the mortality rate) and the male and female tradeoffs with longevity. Although it is tempting to analyse the sexual conflict through the mating sex ratio, this is not the correct approach; we stress that since a given mating sex ratio value can be achieved through a variety of different fertile ages and mortality rates, there is not a unique correspondence between mating sex ratio values and fitness landscapes. This is evident from the form of the female fitness landscape $F(L)$ and male fitness landscape $M(L)$ (see Eq.~(\ref{imp})). More specifically, the function $k(L)$, denoting the mating sex ratio at longevity $L$ and given by \begin{equation} k(L) = \left(\int_{\tau_m}^{\hat{\tau}_m} \exp \left( -\int_0^a \mu(s,L) \, ds \right) \, da \right) \Bigg/ \left( \int_{\tau_f}^{\hat{\tau}_f} \exp \left( -\int_0^a \mu(s,L) \, ds \right) \, da \right), \end{equation} \\ is not guaranteed to be invertible. This implies that the same mating sex ratio value may be obtainable at different longevity values $L$. Consequently, although the sexual conflict is clearly present in the system, it is not an aspect that can be accurately represented through a single indicator, and this hinders its analysis. In this study, we provide the next best approach: a method to compute the female and male fitness landscapes separately, and also the combined fitness landscape. Through this, one can observe how each sex contributes to the resulting evolutionary dynamics of the overall problem and how the male and female fitness landscapes vary with the mating sex ratio. In particular, when the female and male tradeoffs are quantitatively known, our method allows one to assess how much a change in the mating sex ratio contributes to the fitness landscape of a population. \\ \\ A limiting assumption of our model is that fertile males compete over every fertile female for a chance at paternity regardless of the mating sex ratio. However, a shifting mating sex ratio will influence the availability of partners due to the asymmetry in the male and female fertile ages. Hence, mating strategies are likely to change along with the mating sex ratio (see Schacht \& Bell \cite{172}, Coxworth et al. \cite{169} and Loo et al. \cite{181} for examples). Thus, an avenue for future work is to include such changes in mating strategies, in response to a shifting mating sex ratio, in models of the evolution of longevity in a population. This could in turn have important consequences for the evolutionary trajectory the population follows towards higher longevities. \begin{acknowledgements} MHC and PSK were supported by the Australian Research Council, Discovery Project (DP160101597). \end{acknowledgements} \bibliographystyle{apalike}
\section*{Abstract} {\bf We study $N$ spinless fermions in their ground state confined by an external potential in one dimension with long range interactions of the general Calogero-Sutherland type. For some choices of the potential this system maps to standard random matrix ensembles for general values of the Dyson index $\beta$. In the fermion model $\beta$ controls the strength of the interaction, $\beta=2$ corresponding to the noninteracting case. We study the quantum fluctuations of the number of fermions ${\cal N}_{\cal D}$ in a domain $\cal{D}$ of macroscopic size in the bulk of the Fermi gas. We predict that for general $\beta$ the variance of ${\cal N}_{\cal D}$ grows as $A_{\beta} \log N + B_{\beta}$ for $N \gg 1$ and we obtain a formula for $A_\beta$ and $B_\beta$. This is based on an explicit calculation for $\beta\in\left\{ 1,2,4\right\} $ and on a conjecture that we formulate for general $\beta$. This conjecture further allows us to obtain a universal formula for the higher cumulants of ${\cal N}_{\cal D}$. Our results for the variance in the microscopic regime are found to be consistent with the predictions of the Luttinger liquid theory with parameter $K = 2/\beta$, and allow us to go beyond. In addition we present families of interacting fermion models in one dimension which, in their ground states, can be mapped onto random matrix models. We obtain the mean fermion density for these models for general interaction parameter $\beta$. In some cases the fermion density exhibits interesting transitions, for example we obtain a noninteracting fermion formulation of the Gross-Witten-Wadia model. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \subsection{Overview} The full counting statistics (FCS), which measures the fluctuations of the number of particles ${\cal N}_{\cal D}$ inside a domain ${\cal D}$, has been studied extensively in the context of shot noise \cite{Levitov}, quantum transport \cite{Lev96}, quantum dots \cite{Been06,Gus06}, non-equilibrium Luttinger liquids \cite{PGM}, as well as in quantum spin chains and fermionic chains \cite{EiserRacz2013,AbanovIvanovQian2011,IvanovAbanov2013,Caux2019,FCSEsslerTransverseIsing,StephanHaldaneChain2017}. The FCS is particularly important for noninteracting fermions because of its connection to the entanglement entropy \cite{Kli06,KL09,CalabreseMinchev2,Hur11,LMG19}. For free fermions, in the absence of an external potential and at zero temperature, it is well known that both the variance of ${\cal N}_{{\cal D}}$ and the entropy grow as $\sim R^{d-1} \log R$ with the typical size $R$ of the domain ${\cal D}$ in space dimension $d$ \cite{Klitch,CalabreseMinchev1,Torquato,Widom1,Widom2,Widom3}. Recently these results have been extended to noninteracting fermions in the presence of a confining potential. This is important for applications, e.g. to cold atom experiments \cite{Fermicro1,Fermicro2,Fermicro3,Pauli} where the fermions are in traps of tunable shapes \cite{BDZ08,flattrap}. In a confining potential, the Fermi gas is supported over a finite domain. Its mean density is inhomogeneous and can be calculated using the well known local density approximation (LDA) \cite{BR1997,Castin}. To compute quantum correlations, in particular at the edge of the Fermi gas, where the density vanishes and the LDA method fails, more elaborate methods have been developed \cite{koh98,Eisler1,DeanPLDReview}.
In $d=1$, one can exploit the fact that for specific potentials, the problem at zero temperature can be mapped to standard random matrix ensembles, for which powerful mathematical tools are available. For instance, for $N$ noninteracting spinless fermions in a harmonic well, described by the single particle Hamiltonian $H = \frac{p^2}{2} + V(x)$ with $V(x)=\frac{1}{2} x^2$, the quantum joint probability distribution function (PDF) for the positions $\vec x = \{ x_i \}_{i=1,\dots,N}$ of the fermions takes the form \begin{equation} \label{harm} |\Psi_0(\vec x)|^2 \propto \prod_{i <j} |x_i-x_j|^{\beta} e^{- \sum_{i=1}^N x_i^2} \;, \end{equation} with $\beta=2$, where $\Psi_0(\vec x)$ denotes the ground state many-body wave function. Remarkably, Eq. \eqref{harm}, specialised to $\beta = 2$, coincides with the joint PDF of the eigenvalues $\lambda_i$ of a random matrix belonging to the Gaussian unitary ensemble (GUE). The two problems thus map into each other with the identification $\lambda_i=x_i$. As a result, for large $N$, the mean fermion density, i.e., the quantum average $\rho(x) = \langle \sum_i \delta(x-x_i) \rangle$, takes the Wigner semi-circle form $\rho(x) \simeq \rho^{\rm bulk}(x)=\frac{1}{\pi} \sqrt{2 N- x^2}$ in the bulk, i.e., for $x \in [x_e^-,x_e^+]$, and vanishes beyond the edges $x_e^\pm \simeq \pm \sqrt{2N}$. To discuss the FCS within an interval $[a,b]$ at large $N$, i.e., the fluctuations of ${\cal N}_{[a,b]}$, one needs to distinguish two natural length scales: the microscopic scale given by the interparticle distance $\sim 1/\sqrt{N}$, and the macroscopic scale of order $x_e^+-x_e^- \sim \sqrt{N}$. It is well known since Dyson and Mehta \cite{Dyson,DysonMehta} that for an interval of microscopic size the variance for the GUE is given by \cite{MehtaBook,CL95,FS95,AbanovIvanovQian2011,DIK2009,MMSV14,MMSV16,CLM15,CharlierSine2019} \begin{equation} \label{DM} {\rm Var}\,{\cal N}_{\left[a,b\right]}\simeq\frac{1}{\pi^{2}}\left[\log\left(\sqrt{2N-a^2} \, |b-a|\right)+c_{2}\right] \;, \end{equation} for $\sqrt{N}|b-a| = O(1) \gg 1$, with $c_2=\gamma_E +1+\log 2$, where $\gamma_E$ is Euler's constant. This result is obtained from the celebrated sine kernel which describes the eigenvalue correlations in the GUE at microscopic scales. The formula \eqref{DM} thus carries over to the fermions in the harmonic potential. The FCS for an interval of macroscopic size has been studied more recently. For the harmonic well, some results for the variance in that regime were obtained using the connection to the GUE, both in mathematics \cite{Borodin1,BaiWangZhou,Charlier_hankel,JohanssonLS}, and in physics using a Coulomb gas method \cite{MMSV14,MMSV16}. The mapping to random matrix theory (RMT) holds, however, only for a few specific potentials. For instance the so-called Wishart-Laguerre ensemble is related to the potential $V(x)=\frac{x^2}{2} + \frac{\gamma^2 - \frac{1}{4}}{2x^2}$ for $x>0$. For an arbitrary smooth $V(x)$, not necessarily related to RMT, we have recently obtained a general formula \cite{SDMS} for the variance ${\rm Var}\,{\cal N}_{\left[a,b\right]}$ for a macroscopic interval $[a,b]$ in the bulk. The method used combines determinantal point processes with semi-classical (WKB) approaches. In the special cases related to RMT, the formula recovers the available exact results \cite{CharlierJacobi}.
It is also in general agreement with recent approaches relying on inhomogeneous bosonization \cite{BrunDubail2018,RuggieroBrunDubail2019,DubailStephanVitiCalabrese2017}, although our method allows for more precise and controlled results. One can also ask about higher cumulants of ${\cal N}_{\left[a,b\right]}$, i.e., beyond the variance given in \eqref{DM}, both for microscopic and macroscopic intervals $[a,b]$. In the absence of a potential, i.e., for free fermions, there exist results for the higher cumulants which are obtained using the sine kernel \cite{DIK2009,AbanovIvanovQian2011,CharlierSine2019}. A natural conjecture, which we put forward in \cite{SDMS}, is that these higher cumulants are determined solely by fluctuations on microscopic scales. Consequently (i) they are independent of the size of the interval (within the bulk) and (ii) they are universal, i.e., independent of the precise shape of the potential (assumed to be smooth). This conjecture was used to obtain a prediction for the entanglement entropy for noninteracting fermions in a potential in \cite{SDMS}. An outstanding question is the study of the FCS for interacting particles \cite{FlindtFCS2011}. It is of current interest for cold atom experiments, which have recently measured particle number fluctuations in the $1d$ Bose gas \cite{ExperimentsFCSEsteve,ExperimentsFCSBouchoule}. On the theory side, however, there are only a few results, even for integrable models. For instance, for the delta Bose gas (Lieb-Liniger) model, an exact formula was derived using the Bethe ansatz for the FCS in the limit of a very small interval \cite{Calabrese_etal,BastianelloPiroliFCS}. Results for larger intervals were also obtained, but are only valid in the weak interaction/high temperature regime \cite{Gangardt2019}. For interacting fermions, numerical results were obtained for the Hubbard model \cite{FCSHumeniuk}. In the context of spin chains, several exact formulae for the FCS were obtained, e.g. for the XXZ spin chain \cite{CalabreseFCSXXZ2020,FCSEsslerHeisenbergXXZ}, for the transverse field Ising model \cite{FCSEsslerTransverseIsing} and for the Haldane-Shastry chain \cite{StephanHaldaneChain2017}. In view of our previous works on the FCS of noninteracting trapped fermions it is thus natural to look for extensions which include interactions. A promising direction is to explore further the connection between random matrix theory for a general value of the Dyson index $\beta$, and trapped fermions in $1d$ in the presence of two-body interactions of the Calogero-Sutherland type \cite{Calogero69,Sutherland1971a,Sutherland1971c,SutherlandBook,Forrester}. The simplest example corresponds to the Gaussian $\beta$ ensemble (G$\beta$E) which contains the GUE for $\beta=2$, as well as the other standard ensembles, the GOE for $\beta=1$ and the GSE for $\beta=4$ \cite{MehtaBook,Forrester}. For general $\beta$ they can be constructed from random tridiagonal matrices \cite{Dumitriu2002}. The joint PDF of their eigenvalues $\lambda_i$ is given by \eqref{harm} with the substitution $x_i \equiv \sqrt{\frac{\beta}{2}} \lambda_i$. The important observation is that Eq. \eqref{harm} is also the quantum joint PDF, $|\Psi_0(\vec x)|^2$, of the positions $x_i$ of $N$ fermions in the ground state of the following $N$-body Hamiltonian \begin{equation} \label{H0} {\cal H}_N=\sum_{i=1}^{N} \left( \frac{p_{i}^{2}}{2}+\frac{x_i^2}{2} \right) + \sum_{1 \leq i<j \leq N} \frac{\beta (\beta-2)}{4 (x_{i}-x_{j})^{2}} \;.
\end{equation} It thus describes fermions which interact through either repulsive ($\beta>2$) or attractive ($ 1\le \beta<2$) long range $1/x^2$ interaction. As we discuss below there are several other examples of two-body interactions and trapping potentials for which a similar connection exists. Note that there are also lattice versions of these models, e.g. Haldane-Shastry spin chains corresponding to a discretized version of the circular $\beta$ ensemble (C$\beta$E) with $\beta=4$, for which FCS results exist \cite{StephanHaldaneChain2017}. In this paper we will use the relations between interacting fermions and RMT for general $\beta$, to obtain precise predictions for the FCS for various examples of trapped fermions in the presence of interactions. In the rest of this section we first present the main models that we will study, we explain the main idea of the method and we present the main results. \subsection{Models and mappings} Let us now describe the class of models which we study in this paper. In this section we focus on models related to RMT, while further extensions will be discussed below. Here we consider $N$ spinless fermions trapped in an external potential $V(x)$ and with two-body interactions parameterized by a symmetric function $W(x,y)=W(y,x)$. The Hamiltonian is of the general form (we use units such that $m=\hbar=1$) \begin{equation} \label{H} {\cal H}_N=\sum_{i=1}^{N}\left[\frac{p_{i}^{2}}{2}+V\left(x_{i}\right)\right]+\sum_{i<j}W\left(x_{i},x_{j}\right). \end{equation} In this paper we study specific choices for $V(x)$ and $W(x,y)$ such that the joint PDF of the positions of the fermions in the ground state can be written in the form \begin{equation} \label{P0} |\Psi_0(\vec x)|^2 = e^{ - U(\vec x)} ~,~ U(\vec x)= \sum_i v(x_i) + \sum_{i<j} w(x_i,x_j) \end{equation} with $w$ a symmetric function $w(x,y)=w(y,x)$. One example of such models corresponds to Eqs. \eqref{H0}, with $V(x)= \frac{x^2}{2}$ and $W(x,y)=\frac{\beta (\beta-2)}{4 (x-y)^{2}}$, and \eqref{harm}, with $v(x)=x^2$ and $w(x,y)=- \beta \log|x-y|$. This is one instance of a more general class of fermion models that can be mapped onto a random matrix ensemble (in that case G$\beta$E). \begin{table*} \bgroup\small \renewcommand{\arraystretch}{2.2} \begin{tabular}{ | p{1.3cm} | p{2.4cm} | p{3.7cm} | p{1.4cm} | p{2.0cm} | l |} \hline Fermions' domain & Fermion potential $V(x)$ & Fermion interaction $W(x,y)$ & RMT ensemble & Matrix potential $V_0(\lambda)$ & Map $\lambda(x)$ \\ \hline $x\in\mathbb{R}$ & $x^2/2$ & {\scriptsize $\frac{\beta\left(\beta-2\right)}{4\left(x-y\right)^{2}}$} & G$\beta$E & $\beta \lambda^2/2$ & $\lambda=\sqrt{\frac{2}{\beta}} x$ \\ \hline $x \! \in \! [0,L]$ & $0$ & {\scriptsize $\left(\frac{2\pi}{L}\right)^{2}\frac{\beta\left(\beta-2\right)}{16\sin^{2}\frac{\pi\left(x-y\right)}{L}}$} & C$\beta$E & $0$ & $\lambda=e^{ix\frac{2\pi}{L}}$ \\ \hline $x\in\mathbb{R}^+$ & {\scriptsize $\frac{x^{2}}{2}+\frac{\gamma^{2}-\frac{1}{4}}{2x^{2}}$} & {\scriptsize $\frac{\beta(\beta-2)}{4}\left[\frac{1}{\left(x-y\right)^{2}}+\frac{1}{\left(x+y\right)^{2}}\right]$} & WL$\beta$E & $\frac{\beta}{2} \lambda - \gamma \log \lambda$ & $\lambda=\frac{2}{\beta} x^{2}$ \\ \hline $x\! \in\! [0,\pi]$ & {\scriptsize $\frac{1}{8} \! \left(\frac{\gamma_1^{2}-\frac{1}{4}}{\sin^{2}\frac{x}{2}} \! + \! \frac{\gamma_2^{2}-\frac{1}{4}}{\cos^{2}\frac{x}{2}}\right)$} & {\scriptsize $\frac{\beta(\beta-2)}{16} \! \left(\frac{1}{\sin^{2}\frac{x-y}{2}} \! + \! 
\frac{1}{\sin^{2}\frac{x+y}{2}}\right)$} & J$\beta$E & $\log$ {\scriptsize $\!\! \frac{1}{\lambda^{\gamma_1}\left(1-\lambda\right)^{\gamma_2}}$} & $\lambda=$ {\scriptsize $\frac{1-\cos x}{2}$} \\ \hline \hline \end{tabular} \egroup \caption{The mappings between (i) models of interacting trapped fermions studied here and (ii) the standard random matrix ensembles. The variable $x$ denotes the positions of the fermions, $\lambda$ the eigenvalues of the RMT ensemble, and the mapping $\lambda(x)$ is displayed in the last column. The first three columns denote respectively the domain for the fermions, the external potential $V(x)$, and their interaction $W(x,y)$, defined in \eqref{H}. Note that in the second line periodic boundary conditions are to be understood for the fermionic system. The next two columns indicate the RMT ensemble and the matrix potential $V_0(\lambda)$, see Eq. \eqref{F}. Here $\beta$ is the Dyson index which varies continuously and corresponds to noninteracting fermions for $\beta=2$.} \label{table:mappings} \end{table*} Let us briefly review the ensembles of interest. We denote by $\lambda_i$, $i=1,\dots,N$, the eigenvalues of a random matrix in an ensemble such that the joint PDF can be written as \begin{equation} \label{F} P(\vec \lambda) = \frac{e^{ - F(\vec \lambda) }}{Z_N} ~,~ F(\vec \lambda) = \sum_{i=1}^{N} V_0(\lambda_i) - \beta \sum_{i < j}\log\left|\lambda_{i}-\lambda_{j}\right| \end{equation} where $\vec \lambda=\{ \lambda_i \}_{i=1,\dots,N}$ and $Z_N$ is a normalisation constant. Here $\beta$ is the Dyson index and $V_0$ the matrix potential (not to be confused with the fermion potential $V$). For instance the Gaussian-beta ensemble (G$\beta$E) corresponds to the case where the eigenvalues are on the real axis, $\lambda_i \in \mathbb{R}$, with $V_0(\lambda)= \frac{\beta}{2} \lambda^2$. It contains the Gaussian unitary, orthogonal and symplectic ensembles for $\beta=2,1,4$ respectively. The circular-beta ensemble (C$\beta$E) corresponds to the case where the eigenvalues are on the unit circle in the complex plane, with $V_0(\lambda)=0$, and includes the circular unitary ensemble (CUE) for $\beta=2$. The Wishart-Laguerre-beta ensemble (WL$\beta$E) corresponds to $\lambda_i \in \mathbb{R}^+$ and $V_0(\lambda)= \frac{\beta}{2} \lambda - \gamma \log \lambda$. The Jacobi-beta ensemble (J$\beta$E) corresponds to $\lambda_i \in [0,1]$ and $V_0(\lambda)=- \gamma_1 \log \lambda - \gamma_2 \log(1-\lambda)$. In all these cases and for any $\beta$, $Z_N$ has an explicit expression as a Selberg integral \cite{Forrester}, and the ensembles can be mapped onto certain tridiagonal matrices \cite{Dumitriu2002}. The general idea behind the mapping between fermions and RMT is that, upon some map which we denote $\lambda(x)$, i.e., $\lambda_i = \lambda(x_i)$, one can identify the joint PDF \eqref{F} with the quantum joint PDF \eqref{P0} corresponding to the many-body ground state of the fermion system with Hamiltonian \eqref{H}. Taking into account the Jacobian of the map, the correspondence reads \begin{eqnarray} \label{jac} && v(x) = V_0(\lambda(x)) - \log| \lambda'(x)| \\ && w(x,x')= - \beta \log|\lambda(x)-\lambda(x')| \;. \label{w} \end{eqnarray} The simplest case is the mapping $\lambda(x)=\sqrt{\frac{2}{\beta}}\,x$ from the G$\beta$E to the fermions on the real axis described by the model \eqref{H0}.
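As a quick consistency check of these formulae (a sketch, with additive constants left implicit), inserting $\lambda(x)=\sqrt{2/\beta}\,x$ and $V_0(\lambda)=\frac{\beta}{2}\lambda^2$ into Eqs. \eqref{jac} and \eqref{w} gives
\begin{equation*}
v(x) = \frac{\beta}{2} \cdot \frac{2}{\beta}\, x^2 - \log \sqrt{\tfrac{2}{\beta}} = x^2 + {\rm const} \;, \qquad w(x,x') = -\beta \log|x-x'| + {\rm const} \;,
\end{equation*}
so that, once the constants are absorbed into the normalisation, $|\Psi_0(\vec x)|^2$ in \eqref{P0} indeed reduces to \eqref{harm}. One can also check directly that $\Psi_0(\vec x) \propto \prod_{i<j} |x_i-x_j|^{\beta/2}\, e^{-\frac{1}{2}\sum_i x_i^2}$ satisfies ${\cal H}_N \Psi_0 = E_0 \Psi_0$ for the Hamiltonian \eqref{H0}: the three-body terms generated by the kinetic operator cancel upon symmetrisation, and one finds $E_0 = \frac{N}{2} + \frac{\beta}{4} N(N-1)$.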
For the C$\beta$E ensemble the map is $\lambda(x) = e^{i x \frac{2 \pi}{L}}$, where the fermions live on the periodic ring, $x_j \in [0,L]$. In that case it maps onto the Sutherland model, without any external potential, $V(x)=0$, see the second line of the Table \ref{table:mappings}. For the WL$\beta$E ensemble the map is {$\lambda(x)=\frac{2}{\beta} x^2$} and the fermions live on $\mathbb{R}^+$, $x_i >0$, with a potential given by the sum of a harmonic term and a $1/x^2$ wall, and with $1/x^2$-type interactions, as given in the third line of the Table \ref{table:mappings}. Finally for the J$\beta$E ensemble the map is $\lambda(x)=\frac{1}{2} (1-\cos x)$ and the fermions live in a box $x_i \in [0,\pi]$ with potential and interactions given in the fourth line of the Table \ref{table:mappings}. For $\gamma_1=\gamma_2=1/2$ the potential is a hard box with Dirichlet boundary conditions. For $\beta=2$ the fermions are noninteracting in all four cases, and their positions $x_i$ in the ground state form a determinantal point process. Note that the above mappings are valid for any~$N$. To summarize, our main strategy behind the mapping between the ground state of trapped fermions with two-body interactions $W$ and the joint distribution of the eigenvalues of a matrix model consists of the following three steps. \begin{itemize} \item We consider the Hamiltonian ${\cal H}_N$ in \eqref{H} which has only one- and two-body potentials, $V$ and $W$ respectively. We then write the many-body ground state wave function in any given ordered sector, e.g. $x_1<\dots<x_N$, as $\Psi_0(\vec x) \sim e^{- U(\vec x)/2}$, where $U(\vec x)$ is of the form \eqref{P0} consisting only of one-body and two-body terms. \item We next substitute this wave function $\Psi_0(\vec x) \sim e^{- U(\vec x)/2}$ in the Schr\"odinger equation ${\cal H}_N \Psi_0= E_0 \Psi_0$ (in an ordered sector). The main condition is that this equation is satisfied for some value of the ground state energy $E_0$, i.e., that no three-body interaction is generated upon applying the kinetic operator. This condition selects some special families of potentials $V$ and interactions $W$. In the absence of a potential this approach dates back to Sutherland and Calogero \cite{SutherlandBook,Calogero1975}. This is a standard although tedious calculation, recalled in Appendix \ref{appendix:mappings}, which also allows one to determine $E_0$ for each model. The fact that $\Psi_0$ is indeed the ground state is ensured by the additional condition that $\Psi_0(\vec x)$ vanishes only at $x_i=x_j$ for $i\ne j$, but not elsewhere \cite{Sutherland1971a}, see also \cite{Calogero69,footnoteCalogero}. \item Finally, we identify the quantum probability, given by $|\Psi_0(\vec x)|^2$, as the joint PDF of the eigenvalues $\lambda_1,\dots,\lambda_N$ of a random matrix, under a map $\lambda_i=\lambda(x_i)$, and we show how to construct this map explicitly for several examples. This last step allows us to identify new connections between interacting (and noninteracting) fermions and random matrix models, see e.g. Section \ref{sec:generalmodels}. \end{itemize} {\bf Mean density}. The simplest observable to compute is the average density $\rho(x)$ of the fermions \begin{equation} \rho(x)=\left\langle \sum_{i=1}^{N}\delta\left(x-x_{i}\right)\right\rangle \;, \end{equation} where $\langle \dots \rangle$ denotes expectation values with respect to the ground state. In particular one can ask how the interactions modify this density as compared to the noninteracting case $W=0$.
In the large $N$ limit and in the absence of interactions, the density in the bulk reads $\rho(x) \simeq \frac{1}{\pi} \sqrt{2(\mu- V(x))_+}$ as given by LDA or semi-classical methods (we denote everywhere $(x)_+=\max(x,0)$). Note that the LDA works only for noninteracting fermions and in the bulk \cite{DeanPLDReview}. Here $\mu$ denotes the Fermi energy which is determined by the normalization condition $\int dx \rho(x)=N$ (e.g., $\mu \simeq N$ for the harmonic oscillator (HO) considered above). {Note that for some integrable systems, the LDA may be improved as in Ref. \cite{StephanInteractions} to include interactions. We will not explore this route here.} For the models in Table \ref{table:mappings}, the noninteracting case corresponds to $\beta=2$. To obtain the density for arbitrary $\beta$ one can interpret the PDF \eqref{P0}, or equivalently \eqref{F}, as the Boltzmann distribution for a gas of classical particles at unit temperature, with energy $U(\vec{x})$, or equivalently $F(\vec{\lambda})$. Using \eqref{jac} and \eqref{w}, in both cases, the interaction between these particles is logarithmic, which corresponds to the $2d$ Coulomb interaction. In the large $N$ limit and in the presence of a confining potential, the equilibrium density is obtained by minimizing the corresponding energy. This Coulomb gas (CG) method has been widely used in the context of RMT. Rewriting \eqref{F} as $F(\vec{\lambda})=\frac{\beta}{2}\left[\sum_{i=1}^{N}\frac{2V_{0}(\lambda_{i})}{\beta}-2\sum_{i<j}\log\left|\lambda_{i}-\lambda_{j}\right|\right]$, one immediately sees that the Coulomb gas result for a general $\beta$ coincides with that of a gas with $\beta=2$ and a matrix potential $2V_{0}\left(\lambda\right)/\beta$. Using the known results for the average eigenvalue density, defined as $\sigma(\lambda)=\frac{1}{N} \sum_i \langle \delta(\lambda-\lambda_i) \rangle$, we can write, respectively for the G$\beta$E and WL$\beta$E \begin{eqnarray} \label{sigma_density} && \sigma(\lambda)={\!\frac{1}{\sqrt{N}}}\sigma_{{\rm W}}\left({\frac{\lambda}{\sqrt{N}}}\right),\quad \sigma_{{\rm W}}(z)=\frac{\sqrt{\left(2-z^{2}\right)_{+}}}{\pi} \\ && \sigma(\lambda)=\frac{1}{N}\sigma_{{\rm MP}}\left(\frac{\lambda}{N}\right),\quad \sigma_{{\rm MP}}(z)=\frac{1}{2\pi}\sqrt{\left(\frac{4-z}{z}\right)_{+}}\;. \end{eqnarray} The subscripts `W' and `MP' stand for Wigner (semi-circle) and Marchenko-Pastur, respectively. Using the mapping to the fermions with \begin{equation} \label{rho_sigma} \rho(x) = N \lambda'(x) \sigma(\lambda(x)) \end{equation} and $\lambda(x)= \sqrt{\frac{2}{\beta}} x$ and $\lambda(x)=\frac{2}{\beta} x^2$ for the G$\beta$E and WL$\beta$E respectively, we obtain the fermion density for the models in the first and third line of the Table \ref{table:mappings} as \begin{eqnarray} \label{eq:densityGBetaE} && \rho(x) \simeq \frac{2}{\pi \beta} \sqrt{(N \beta - x^2)_+} \quad, \quad\, V(x) = \frac{x^2}{2} \\ && \rho(x) \simeq \frac{2\, \theta(x)}{\pi \beta} \sqrt{(2 N \beta - x^2)_+} \quad, \quad V(x) = \frac{x^2}{2} + \frac{\gamma^2-\frac{1}{4}}{2x^2} \;, \nonumber \end{eqnarray} {where $\theta(x)$ is the Heaviside function.} The positions of the two edges are thus $x=x_e^\pm \simeq \pm \sqrt{\beta N}$ in the first case, while $x_e^- \simeq 0$ and $x_e^+ \simeq \sqrt{2 \beta N}$ in the second one. For $\beta=2$ it agrees with the LDA result, and it shows that the Fermi gas expands for $\beta>2$ and shrinks for $\beta<2$, as compared to the noninteracting case $\beta=2$, while retaining a semi-circular shape.
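These predictions are straightforward to test numerically. As an illustration, the following minimal sketch (not the code used for the figures below; it assumes standard \texttt{numpy} and \texttt{scipy}) samples the fermion positions for the harmonic model via the tridiagonal representation of the G$\beta$E \cite{Dumitriu2002} and compares the empirical density with the first line of \eqref{eq:densityGBetaE}:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

def gbe_fermion_positions(N, beta, rng):
    # Tridiagonal beta-Hermite matrix [Dumitriu-Edelman 2002]: its eigenvalue
    # pdf is prop. to prod |l_i - l_j|^beta exp(-sum l_i^2 / 2); the fermion
    # positions of the harmonic model are then x_i = l_i / sqrt(2).
    diag = rng.standard_normal(N)
    off = np.sqrt(rng.chisquare(beta * np.arange(N - 1, 0, -1))) / np.sqrt(2.0)
    lam = eigh_tridiagonal(diag, off, eigvals_only=True)
    return lam / np.sqrt(2.0)

N, beta, samples = 200, 4.0, 200
rng = np.random.default_rng(0)
xs = np.concatenate([gbe_fermion_positions(N, beta, rng) for _ in range(samples)])

hist, edges = np.histogram(xs, bins=60, density=True)  # pdf of one position
centers = 0.5 * (edges[1:] + edges[:-1])
rho = (2.0 / (np.pi * beta)) * np.sqrt(np.clip(N * beta - centers**2, 0.0, None))
print(np.max(np.abs(hist - rho / N)))  # rho(x)/N is the single-position pdf
\end{verbatim}
The deviation decreases with $N$ and with the number of samples, apart from the expected discrepancy in the edge regions where the bulk formula \eqref{eq:densityGBetaE} does not apply.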
In the case of the box, corresponding to the J$\beta$E, the density is uniform in the large $N$ limit. Note that in the large $N$ limit, with $\gamma=O(1)$, the $1/x^2$ parts of the potential in the third and fourth models in the Table \ref{table:mappings} do not affect the bulk density (for a different scaling see below). They become important only in the region close to the wall (see below). \subsection{Outline and main results} In this paper we study the statistics of ${\cal N}^{(\beta)}_{\cal I}$, i.e., the number of fermions in an interval ${\cal I}$, for the models in the Table \ref{table:mappings} for any $\beta \geq 1$, and, in a second stage, for a larger class of models. In parallel to the applications to fermions we also obtain new results in the corresponding random matrix ensembles, with a slightly larger domain of validity, i.e., for any $\beta>0$. In Section \ref{sec:variance} we study the variance of ${\cal N}^{(\beta)}_{[a,b]}$ for an interval ${\cal I}=[a,b]$ of macroscopic size in the bulk for $\beta=1,2,4$ in the large $N$ limit, and then propose an extension to any $\beta$. In all these cases the variance grows logarithmically with $N$ for large $N$, and we obtain the amplitude of the logarithm together with the $O(1)$ correction term which has a non-trivial dependence on the two edges $a,b$ on macroscopic scales. In the noninteracting case $\beta=2$ there exists a formula, recalled here in Eq. \eqref{generalab}, for the variance ${\rm Var} {\cal N}^{(\beta=2)}_{[a,b]}$ for a general potential $V(x)$. For the harmonic potential $V(x)=\frac{x^2}{2}$, which corresponds to the G$\beta$E, we extend this formula to $\beta=1,2,4$, and it reads \begin{eqnarray} \label{eq:NumberVarianceGBetaEab} \frac{\beta\pi^{2}}{2}{\rm Var}{\cal N}_{[a,b]}&=&\log N+\frac{3}{4}\log\left[\left(1-\tilde{a}^{2}\right)\left(1-\tilde{b}^{2}\right)\right]\nonumber\\ &+& \log\left|\frac{4|\tilde{a}-\tilde{b}|}{1-\tilde{a}\tilde{b}+\sqrt{(1-\tilde{a}^{2})(1-\tilde{b}^{2})}}\right|+c_{\beta} + o(1) \end{eqnarray} where $\tilde{a}=a/\sqrt{\beta N}$ and $\tilde{b}=b/\sqrt{\beta N}$, $\left|\tilde{a}\right|,\left|\tilde{b}\right|<1$, where $\pm \sqrt{\beta N}$ are the positions of the two edges, as can be seen in Eq. (\ref{eq:densityGBetaE}). For $\beta=1,2,4$ the constant $c_\beta$ takes the values \begin{eqnarray} \label{eq_c2} && \!\!\!\! c_1 = \log2+\gamma_E+1-\frac{\pi^{2}}{8} \; , \quad c_2 = \log2+\gamma_{E}+1 \; , \\ && \label{eq_c4} \!\!\!\! c_{4}=2\log2+\gamma_E+1+\frac{\pi^{2}}{8} \, . \end{eqnarray} Here we argue that formula \eqref{eq:NumberVarianceGBetaEab} extends to the model \eqref{H0} of interacting fermions with general $\beta$ in the harmonic potential. Using related works \cite{ForresterFrankel2004,FyodorovLeDoussal2020} (see discussion below) we propose the following expression as a series representation for $c_\beta$ \begin{equation} \label{eq:cbetaConjecture0} c_{\beta}=\gamma_{E}+\log\beta+\sum_{q=1}^{\infty}\left[\frac{2}{\beta}\psi^{(1)}\left(\frac{2q}{\beta}\right)-\frac{1}{q}\right] \;, \end{equation} where here and below $\psi^{(k)}(z) = \frac{d^{k+1}}{dz^{k+1}} \log \Gamma(z)$ is the polygamma function. We have checked numerically the predictions \eqref{eq:NumberVarianceGBetaEab} and \eqref{eq:cbetaConjecture0} for the variance (see Fig.~\ref{FigHalfSpaceBeta}).
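The series \eqref{eq:cbetaConjecture0} is also straightforward to evaluate numerically, which gives a direct check against the exact values \eqref{eq_c2}-\eqref{eq_c4}. A minimal sketch (assuming \texttt{scipy}; the truncation at $Q$ terms leaves a tail of order $\beta/(4Q)$, since the summand behaves as $\beta/(4q^2)$ at large $q$):
\begin{verbatim}
import numpy as np
from scipy.special import polygamma

def c_beta(beta, Q=200_000):
    # Truncated series of Eq. (eq:cbetaConjecture0); neglected tail ~ beta/(4Q).
    q = np.arange(1, Q + 1)
    s = np.sum((2.0 / beta) * polygamma(1, 2.0 * q / beta) - 1.0 / q)
    return np.euler_gamma + np.log(beta) + s

gE = np.euler_gamma
print(c_beta(1.0), np.log(2) + gE + 1 - np.pi**2 / 8)      # c_1
print(c_beta(2.0), np.log(2) + gE + 1)                     # c_2
print(c_beta(4.0), 2*np.log(2) + gE + 1 + np.pi**2 / 8)    # c_4
\end{verbatim}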
In fact, going beyond the harmonic potential, our more general prediction for the models in Table \ref{table:mappings} reads at large $N$ and in the bulk \cite{footnote:gamma} \begin{equation} \label{prediction} \frac{\beta\pi^{2}}{2}{\rm Var}{\cal N}_{[a,b]}^{(\beta)}-c_{\beta}=\pi^{2}{\rm Var}{\cal N}_{[a',b']}^{(\beta=2)}-c_{2}+o(1) \end{equation} where $a'=a \sqrt{2/\beta}$ and $b'=b \sqrt{2/\beta}$ for the models in line 1 and 3 in the Table \ref{table:mappings} and $a'=a$ and $b'=b$ for the other two models (on a circle and in a box). On the right hand side of Eq. (\ref{prediction}), ${\rm Var} {\cal N}^{(\beta=2)}_{[a,b]}$ is the variance for noninteracting fermions (i.e., for $\beta=2$) in the presence of a potential $V(x)$ indicated in the Table \ref{table:mappings} and given by the general formula \eqref{generalab} (which takes simpler forms for the models in the Table \ref{table:mappings}). The constant $c_\beta$ is independent of the model, and \eqref{prediction} also holds on microscopic scales in the limit of large interval (as in Eq. \eqref{DM}). Finally, in Section \ref{sec:variance} we also analyze the case of a ``semi-infinite'' interval, i.e., $[a,+\infty[$ for the G$\beta$E, $[0,b]$ for WL$\beta$E and J$\beta$E, for which we have a similar prediction. One consequence of our prediction \eqref{prediction} is that in the microscopic limit ($|a-b|$ small compared to the size of the Fermi gas) one has \cite{footnote_bourgade} \begin{equation} \label{prediction2} {\rm Var}\,{\cal N}_{\left[a,b\right]}\simeq\frac{2}{\beta \pi^{2}}\left[\log\left(k_F(a) \, |b-a|\right)+c_{\beta}\right] \end{equation} for $k_F(a) |b-a| = O(1) \gg 1$. In Section \ref{sec:cum} we study the higher cumulants of ${\cal N}_{\left[a,b\right]}$. We present the following conjecture for interacting fermions. Consider an interval $[a,b]$ inside the bulk. For the models displayed in Table \ref{table:mappings} and for any $\beta$, the cumulants of ${\cal N}^{(\beta)}_{[a,b]}$ of order $3$ and higher are determined solely from the microscopic scales. This implies that these higher cumulants are identical to those of the C$\beta$E. The cumulants for the C$\beta$E have been given in \cite{FyodorovLeDoussal2020}, using yet another conjecture about extended Fisher-Hartwig asymptotics for C$\beta$E, formulated in \cite{ForresterFrankel2004}. We will thus use these formulae and obtain here the full counting statistics for a larger class of interacting fermion models. These cumulants for general $\beta$ admit the following series representations \begin{eqnarray} \label{highercum} && \left\langle \left({\cal N}_{[a,b]}^{(\beta)}\right)^{2p}\right\rangle^{c}=\frac{2}{\left(2\beta\pi^{2}\right)^{p}}\tilde{C}_{2p}^{(\beta)} \\ && \tilde C^{(\beta)}_{2p} = (-2)^{p+1} \frac{1}{\beta^p} \sum_{q=1}^{\infty} \psi^{(2 p-1)}\left(\frac{2 q}{\beta}\right) \;, \end{eqnarray} for arbitrary integer $p>1$, while the odd cumulants vanish. For $\beta\in\left\{ 1,2,4\right\}$ the explicit evaluation for the fourth cumulant gives \begin{equation} \tilde C^{(\beta=2)}_4 = -12 \zeta (3) \;, \quad \tilde C^{(\beta=1)}_4 = \frac{\pi ^4}{4}-24 \zeta (3)\;, \quad \tilde C^{(\beta=4)}_4 = -24 \zeta (3)-\frac{\pi ^4}{4} \;, \label{Ctilde4} \end{equation} where $\zeta(z)$ is the Riemann-zeta function. The conjecture extends naturally to the case of an interval with only one point in the bulk (i.e., for a ``semi-infinite'' interval). It is a natural extension of the conjecture previously formulated in Ref. 
\cite{SDMS} for noninteracting fermions ($\beta=2$) and recalled in the introduction. One can check that formula \eqref{highercum} reduces to Eq. (23) in \cite{SDMS} in the case $\beta=2$. This conjecture can be checked in a few cases, with impressive agreement. For instance in Section \ref{sec:edge} we study the limit from the bulk to the edge for any $\beta$. In the case $\beta\in\left\{ 1,2,4\right\} $ we can compare with the results of Bothner and Buckingham \cite{BK18} obtained by Riemann-Hilbert methods. We find that they agree in a quite non-trivial way. In Section \ref{sec:boso} we discuss the approaches to interacting fermions using bosonisation in terms of the Luttinger liquid. As we explain, the Luttinger parameter is given here by $K=2/\beta$ for the models in Table \ref{table:mappings}. Finally in Section \ref{sec:generalmodels} we present a more general class of interacting fermion models which have a ground state wave function of the one- and two-body form \eqref{P0}. Some of these models still map onto random matrices, with however a more general matrix potential $V_0$. We study in detail the example of fermions on the circle in the presence of an external periodic potential which, in the noninteracting case $\beta=2$, turns out to be related to the so-called Gross-Witten-Wadia model in high energy physics \cite{GW1980,Wadia}. The density in this model exhibits an interesting transition, and we show that the LDA formula in that case reproduces the well-known results obtained in Refs. \cite{GW1980,Wadia} from the Coulomb gas method. As discussed there we expect that our results for the counting statistics extend to this more general class of models. \section{Number variance} \label{sec:variance} \subsection{Previous results for noninteracting fermions ($\beta=2$) in an external potential} \label{subsec:previous} In a recent work \cite{SDMS} we have calculated the variance of the number of fermions in {a domain ${\cal D}$} in $d=1$, for noninteracting fermions in their ground state in a general potential $V(x)$. In this case the positions of the fermions form a determinantal point process. This means that the $n$-point correlation function can be written as an $n \times n$ determinant built from the so-called kernel $K_\mu(x,y)$. As a result the variance can be computed from the following formula \cite{MehtaBook, Forrester} \begin{equation} \label{eq:VarianceUsingK} \text{Var}\mathcal{N}_{\mathcal{D}}=\int_{x\in\mathcal{D}}\int_{y\in\bar{\mathcal{D}}} dx dy K_{\mu}\left(x,y\right)^2 \;, \end{equation} in terms of the kernel. By plugging the large-$N$ asymptotic form of the kernel (given by the WKB expansion {of the eigenstates of the single-particle Hamiltonian}) into \eqref{eq:VarianceUsingK} we obtained the leading- and subleading-order terms in the number variance, which are generically of order $O(\log N)$ and $O(1)$ respectively. Consider a confining potential, such that the bulk density $\rho(x)=k_F(x)/\pi$, where $k_F(x) \! = \! \sqrt{2 (\mu-V(x))}$ is the local Fermi wave vector, has a single support $[x_e^-,x_e^+]$. For an interval $[a,b]$ in the bulk with $|a-b| \! \gg \! 1/k_F(a)$, we obtained that for $N \! \gg \! 1$ (i.e., $\mu \! \gg \!
1$) the variance is given by \cite{SDMS} \begin{eqnarray} \label{generalab} && (2 \pi^2) {\rm Var} {\cal N}_{[a,b]} = 2 \log \left( 2 k_F(a) k_F(b) \int_{x^-}^{x^+} \frac{dz}{\pi k_F(z)} \right) \nonumber \\ && \qquad\qquad\qquad + \log \left( \frac{\sin^2\frac{\theta_a-\theta_b}{2}}{\sin^2\frac{\theta_a+\theta_b}{2}}|\sin \theta_a \sin \theta_b| \right) + 2 c_2 + o(1) \\ && \text{where} ~~~~~\label{thetax0} \theta_x = \pi \frac{\int_{x^-}^{x} dz/k_F(z) }{\int_{x^-}^{x^+} dz/k_F(z) } \quad , \quad \begin{cases} \theta_{x^-}=0 \\ \theta_{x^+}=\pi \end{cases} \;, \end{eqnarray} and $c_2$ is given in (\ref{eq_c2}). For a semi-infinite interval, and for any $a$ in the bulk, the variance reads \cite{SDMS} \begin{equation} \label{Haa} {\rm Var}{\cal N}_{[a,+\infty[}\simeq\frac{1}{2\pi^{2}}\left(\log\frac{2k_{F}(a)^{2}\sin\theta_{a}}{d\mu/dN}+c_{2}\right). \end{equation} For the harmonic potential $V(x)=\frac{1}{2} x^2$ this gives the explicit expression for the variance for the semi-infinite interval \cite{SDMS} \begin{equation} {\rm Var} {\cal N}_{[a,+\infty[} ={\rm Var} {\cal N}_{]-\infty,a]} = \frac{1}{2 \pi^2} \left[\log \mu + \frac{3}{2} \log (1 - \tilde a^2) + c_2 + 2 \log 2 + o(1) \right] \;, \end{equation} where $\tilde a = \frac{a}{\sqrt{2 \mu}}$. For an interval in the bulk, Eqs. (\ref{generalab}) and (\ref{thetax0}) lead to the formula given in \eqref{eq:NumberVarianceGBetaEab} with $\beta=2$. Note that one can eliminate the Fermi energy $\mu$ in all the above formulae and express all quantities as a function of the number of fermions $N$ using the relation \begin{equation} \label{relation_Nmu} N = \int dx \rho(x) \simeq \frac{1}{\pi} \int dx \sqrt{2 (\mu - V(x))_+} \;, \end{equation} valid at large $N$. Since the Fermi energy does not have a direct meaning for interacting fermions, it is indeed more natural to use $N$, in order to study the dependence on $\beta$, {\it at fixed $N$}, as we do below. Note that in the interacting case, the zero-temperature chemical potential can be obtained in the large $N$ limit as $\mu = \partial_N E_0$ where $E_0=E_0(N,\beta)$ is the ground state energy (which, in the models studied here, can be calculated exactly, see Appendix \ref{appendix:mappings}). For $\beta=2$ this definition of $\mu$ coincides with the Fermi energy. We will now compute the number variance for interacting fermions, $\beta \neq 2$. For this purpose we recall the definition of a more general observable, the covariance function. \subsection{Two-point covariance function} It is useful to define the two-point covariance function $C\left(x,y\right)$ which gives the covariance between the numbers of particles in infinitesimal intervals around two distinct points $x$ and $y$: \begin{equation} \text{Cov}\left(\mathcal{N}_{\left[x,x+dx\right]},\mathcal{N}_{\left[y,y+dy\right]}\right)=C\left(x,y\right)dxdy. \end{equation} Using the linearity of the covariance, one immediately obtains, for any two nonintersecting domains $\mathcal{D}_1$ and $\mathcal{D}_2$ \begin{equation} \label{eq:CovGeneral} \text{Cov}\left(\mathcal{N}_{\mathcal{D}_{1}},\mathcal{N}_{\mathcal{D}_{2}}\right)=\int_{x\in\mathcal{D}_{1}}\int_{y\in\mathcal{D}_{2}}C\left(x,y\right)dxdy.
\end{equation} Using that $N=\mathcal{N}_{\mathcal{D}}+\mathcal{N}_{\mathcal{\bar{D}}}$, where $N$ is the total number of fermions, and where $\bar{\mathcal{D}}$ is the complement of $\mathcal{D}$, we obtain a convenient expression for the number variance in a domain: \begin{equation} \label{eq:VarianceUsingCxy} \text{Var}\mathcal{N}_{\mathcal{D}}=-\int_{x\in\mathcal{D}}\int_{y\in\bar{\mathcal{D}}}C\left(x,y\right) dx dy. \end{equation} For noninteracting fermions, the determinantal structure can be used in order to express the covariance function in terms of the kernel, $C\left(x,y\right)=-K_{\mu}\left(x,y\right)^{2}$ and one recovers the formula in \eqref{eq:VarianceUsingK}. \subsection{Number variance for interacting fermions in a harmonic trap} Let us now discuss the case of interacting fermions described by the model \eqref{H0}, which corresponds to random matrices in the G$\beta$E. For interacting fermions, the positions of the fermions do not form a determinantal point process, however for $\beta=1,4$ they exhibit a Pfaffian structure which allows for (complicated) exact expressions for $C(x,y)$ for any finite $N$ (see e.g. \cite{MehtaBook,Forrester,AFNV2000,Gronqvist2004}). Here, to calculate the variance for $\beta=1,2,4$, we will only need their large $N$ asymptotics. We will first recall their expressions separately for the case of microscopic scales $|x-y| = O(1/k_F(x))$ and macroscopic scales $|x-y| \gg 1/k_F(x)$. Indeed, when computing the integral in \eqref{eq:VarianceUsingCxy}, as we do below, there are contributions from both regimes of scales. (i) {\it Microscopic scales}. For $x$ and $y$ in the bulk with $x-y$ microscopic, $C(x,y)$ can be obtained from the Eqs. (18)-(20) in \cite{Pandey1979} using the mapping from the Gaussian ensembles to the fermions in the harmonic potential (the same formula also holds, see Eqs. (18) and (19) in \cite{Pandey1979}, for the circular ensembles, i.e., for fermions on the circle) \begin{eqnarray} \label{eq:Cxy_for_GbetaE_micro} C\left(x,y\right)&\simeq& {-} \left[\rho(x) \right]^{2} Y_{2\beta}\left(\rho(x) \left|x-y\right|\right),\\ \label{eq:Y21def} Y_{21}\left(r\right)&=&\left(s\left(r\right)\right)^{2}-\text{Js}\left(r\right)\text{Ds}\left(r\right),\\ Y_{22}\left(r\right)&=&\left(s\left(r\right)\right)^{2},\\ \label{eq:Y24def} Y_{24}\left(r\right)&=&\left(s\left(2r\right)\right)^{2}-\text{Is}\left(2r\right)\text{Ds}\left(2r\right) \end{eqnarray} where $\rho(x)$ is the fermion density given in \eqref{eq:densityGBetaE}, and \begin{eqnarray} &&s\left(r\right)=\frac{\sin\left(\pi r\right)}{\pi r}, \quad \text{Ds}\left(r\right)=\frac{ds}{dr}=\frac{\pi r\cos\left(\pi r\right)-\sin\left(\pi r\right)}{\pi r^{2}},\\ &&\text{Is}\left(r\right)=\int_{0}^{r}s\left(r'\right)dr'=\frac{\text{Si}\left(\pi r\right)}{\pi},\quad \text{Js}\left(r\right)=\text{Is}\left(r\right)-\frac{\text{sgn}\left(r\right)}{2} \;, \end{eqnarray} where $\text{sgn}\left(r\right)$ is the sign function and $\text{Si}\left(z\right)=\int_{0}^{z}\frac{\sin t}{t}dt$ is the sine integral. Note that for $\beta=2$ the positions of the fermions form a determinantal process, with an associated kernel given in the bulk by the function $s(r)$ which is the well-known sine-kernel. (ii) {\it Macroscopic scales.} For $x$ and $y$ well separated in the bulk, and for the G$\beta$E for arbitrary $\beta$ the covariance function $C(x,y)$ is also known in the large $N$ limit \cite{Pandey1981,BZ1993,Beenakker1993,Eyn2017}. 
Under the RMT-to-fermion mapping it leads to \begin{equation} \label{eq:Cxy_for_GbetaE} C\left(x,y\right)\simeq-\frac{1-\frac{xy}{\beta N}}{\beta\pi^{2}\left(x-y\right)^{2}\left(1-\frac{x^{2}}{\beta N}\right)^{1/2}\left(1-\frac{y^{2}}{\beta N}\right)^{1/2}}\,, \end{equation} up to rapidly oscillating terms that average out to zero when integrating over macroscopic domains. For noninteracting fermions ($\beta=2$), Eq.~\eqref{eq:Cxy_for_GbetaE} was also derived directly from the fermion model, see the Supp. Mat. of \cite{SDMS} (for a numerical check of this formula see \cite{Sargeant2020}). Using a Coulomb gas method Eq.~\eqref{eq:Cxy_for_GbetaE} was extended to arbitrary \emph{matrix} potentials and $\beta$ in \cite{Beenakker1993}. One can check that the above formulae match between microscopic and macroscopic scales, i.e., that the limit $r \gg 1$ in \eqref{eq:Cxy_for_GbetaE_micro} agrees with the limit $|x-y|\ll \sqrt{\beta N}$ in \eqref{eq:Cxy_for_GbetaE}. The double integral \eqref{eq:VarianceUsingCxy} can then be calculated in the large-$N$ limit. The calculation is performed in the Appendix \ref{appendix:NumberVarianceGBetaE}. First one can approximate $C(x,y)\simeq0$ if either $x$ or $y$ is not in the bulk. Plugging in $C(x,y)$ from \eqref{eq:Cxy_for_GbetaE_micro} for $x$ near $y$, and \eqref{eq:Cxy_for_GbetaE} for $x$ far from $y$ (this procedure works because there is a joint regime where both of the approximate expressions for $C(x,y)$ are valid) one finds the result for the variance given in \eqref{eq:NumberVarianceGBetaEab} for $\beta \in \left\{ 1,2,4\right\} $. For instance, for a finite interval $[-a,a]$ centered around the origin and contained in the bulk, $\tilde{a} = a/\sqrt{\beta\,N}<1$, Eq.~\eqref{eq:NumberVarianceGBetaEab} simplifies into \begin{equation} \label{eq:NumberVarianceGOEminusatoa} \frac{\beta\pi^{2}}{2}\text{Var}\left(\mathcal{N}_{\left[-a,a\right]}\right)\simeq\log\left[4N\tilde{a}\left(1-\tilde{a}^{2}\right)^{3/2}\right]+c_{\beta} \;. \end{equation} The leading term agrees with a Coulomb gas calculation for general $\beta$ \cite{MMSV14,MMSV16}. In the limit $|\tilde{b}-\tilde{a}| \ll 1$, Eq.~\eqref{eq:NumberVarianceGBetaEab} matches with the microscopic result \eqref{DM}. We also obtain the result for a semi-infinite interval whose edge is in the bulk, $\left|\tilde{a}\right|<1$, as \begin{equation} \label{eq:NumberVarianceGOEatoinfinity} \beta\pi^{2}\text{Var}\left(\mathcal{N}_{\left[a,\infty\right[}\right)\simeq\log N+\frac{3}{2}\log\left(1-\tilde{a}^{2}\right)+2\log2+c_{\beta} \, . \end{equation} In both formulae (\ref{eq:NumberVarianceGOEminusatoa}) and (\ref{eq:NumberVarianceGOEatoinfinity}) the constants $c_{\beta}$ for $\beta=1, 2, 4$ are related to the so-called Dyson-Mehta constants and given in \eqref{eq_c2}. For $\beta=1$ and $\beta=4$, we have checked numerically the predictions given in \eqref{eq:NumberVarianceGOEminusatoa} and \eqref{eq:NumberVarianceGOEatoinfinity} for the fermion model using the correspondence in the first line of the Table~\ref{table:mappings}. We have performed exact diagonalizations of the G$\beta$E with $\beta=1$ and $\beta=4$; the results are shown in Figs.~\ref{Fig_GOE} and \ref{Fig_GSE} respectively (the case $\beta=2$ has been tested numerically in \cite{SDMS}). The convergence at large $N$ appears to be slower as $\beta$ is increased, which is known to occur quite generally, see e.g. Fig. 3.2 in \cite{VivoBook}.
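Such a check is easy to reproduce. A minimal sketch (assuming \texttt{numpy}/\texttt{scipy}, reusing the tridiagonal sampler shown above, and using ${\rm Var}\,{\cal N}_{]-\infty,a]}={\rm Var}\,{\cal N}_{[a,\infty[}$), here for $\beta=1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

def count_left(N, beta, a, rng):
    # Number of fermions in ]-inf,a] for one G-beta-E sample (x = lambda/sqrt(2)).
    diag = rng.standard_normal(N)
    off = np.sqrt(rng.chisquare(beta * np.arange(N - 1, 0, -1))) / np.sqrt(2.0)
    x = eigh_tridiagonal(diag, off, eigvals_only=True) / np.sqrt(2.0)
    return np.sum(x <= a)

N, beta, samples, a_tilde = 100, 1.0, 20_000, 0.3
a = a_tilde * np.sqrt(beta * N)
rng = np.random.default_rng(1)
counts = np.array([count_left(N, beta, a, rng) for _ in range(samples)])

c1 = np.log(2) + np.euler_gamma + 1 - np.pi**2 / 8        # Eq. (eq_c2)
pred = (np.log(N) + 1.5*np.log(1 - a_tilde**2) + 2*np.log(2) + c1) / (beta*np.pi**2)
print(counts.var(), pred)   # compare with Eq. (eq:NumberVarianceGOEatoinfinity)
\end{verbatim}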
\begin{figure*}[ht] \centering \includegraphics[angle=0,width=0.48\linewidth]{NumberVarianceGOE_N100_symmetric_interval.pdf} \includegraphics[angle=0,width=0.48\linewidth]{NumberVarianceGOE_N100.pdf} \caption{Variance of the number of fermions for finite intervals centered around the origin (a) and semi-infinite intervals (b) for the model in the first line of Table \ref{table:mappings} (with quadratic potential) associated to the GOE ($\beta=1$). The blue markers are the empirical variance computed over $5\times{10}^4$ simulated GOE matrices with $N=100$, and the red lines are our predictions (with $\beta=1$) \eqref{eq:NumberVarianceGOEminusatoa} in (a), and \eqref{eq:NumberVarianceGOEatoinfinity} in (b).} \label{Fig_GOE} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[angle=0,width=0.9\linewidth]{NumberVarianceGSE_N1000.pdf} \caption{Variance of the number of fermions in a semi-infinite interval for the model in the first line of Table \ref{table:mappings} associated to the GSE ($\beta=4$). The blue line is the empirical variance computed over $2\times{10}^5$ simulated GSE matrices with $N=1000$, and the red line is our prediction (with $\beta=4$) \eqref{eq:NumberVarianceGOEatoinfinity}.} \label{Fig_GSE} \end{figure*} \subsection{Conjecture for general $\beta$ and results for other potentials}\label{sec:conjecture} In the previous section, we have calculated the number variance for interacting fermions in the harmonic potential for $\beta=1,2,4$. We now conjecture that the formulae \eqref{eq:NumberVarianceGBetaEab}, \eqref{eq:NumberVarianceGOEminusatoa} and \eqref{eq:NumberVarianceGOEatoinfinity} hold for general values of $\beta$, with an a priori unknown $\beta$-dependent constant $c_\beta$. The rationale behind this conjecture is that (i) the expression \eqref{eq:Cxy_for_GbetaE} for $C(x,y)$ on macroscopic scales is valid for arbitrary $\beta$, a result which comes naturally from the Coulomb gas calculations \cite{Beenakker1993,MMSV14,MMSV16}, and (ii) the above calculations for $\beta=1,2,4$ show that the $\beta$-dependence of the constant part, $c_\beta$, is determined from microscopic scales only. Hence we expect that it is independent of the RMT ensemble in the Table \ref{table:mappings}. As we argue below, this constant is given for general $\beta$ by formula \eqref{eq:cbetaConjecture0}, which we will justify in Section \ref{sec:cum} based on previous works on the C$\beta$E. The conjecture can be expressed as follows. For an arbitrary interval $[a,b]$ in the bulk and for the fermion models listed in Table \ref{table:mappings}, there is a relation between the variance for an arbitrary $\beta \geq 1$ and the variance for $\beta=2$ (up to a possible rescaling of lengths). This relation is given in \eqref{prediction} above. Let us now present a few results which follow from this conjecture. For fermions on the circle with $L=2 \pi$ this leads to \begin{equation} \label{varcb} \beta\pi^{2}{\rm Var}{\cal N}_{[a,b]}=2\log N+\log\left(\sin^{2}\frac{b-a}{2}\right)+2c_{\beta} \; . \end{equation} In the microscopic limit, this formula agrees with \eqref{prediction2} with $k_F=\pi \rho= N/2$. Consider now interacting fermions related to the WL$\beta$E in the potential (line 3 in Table \ref{table:mappings}) \begin{equation} \label{VV} V(x) = \frac{\gamma^2-\frac{1}{4}}{2 x^2} + \frac{1}{2} x^2 \;.
\end{equation} In the introduction, we have discussed this model when the parameter $\gamma=O(1)$, in which case the $1/x^2$ hard wall potential does not affect the bulk properties for $x>0$, such as the density given in Eq. \eqref{eq:densityGBetaE}. Another interesting limit amounts to scaling the parameter $\gamma \sim \mu \sim N$, in which case the effect of the $1/x^2$ potential is to open a gap in the density of fermions near the origin. Let us recall that the associated Wishart-Laguerre (WL) matrix potential is $V_0(\lambda) = \frac{\beta}{2} \lambda - \gamma \log \lambda$. By a similar argument to the one in Eq. \eqref{sigma_density} and below, one can absorb the $\beta$ dependence in the combination $\frac{2}{\beta} V_0(\lambda)$ by rescaling $\gamma$. As a result the eigenvalue density $\sigma(\lambda;\gamma)$ for the WL$\beta$E takes the scaling form at large $N$ (as obtained e.g. from the Coulomb gas method) \begin{equation} \sigma(\lambda;\gamma) \simeq \frac{1}{N} \sigma_{WL}\left(\frac{\lambda}{N} ; \frac{2 \gamma}{\beta N} \right) \end{equation} where \begin{equation} \sigma_{WL}(z;c) = \frac{\sqrt{(z-\zeta_-)(\zeta_+-z)}}{2 \pi z} ~,~ \zeta_\pm=(1 \pm \sqrt{1+c})^2 \;. \end{equation} The normalization condition reads $\int_{\zeta_-}^{\zeta_+} dz \sigma_{WL}(z;c)=1$, where the two scaled edges of the support $\zeta_\pm$ depend on the parameter $c$. Using the mapping to the fermions, with $\lambda=\frac{2}{\beta} x^2$, we obtain the fermion density [see \eqref{sigma_density}] as \begin{equation} \label{rho2} \rho(x) \simeq \frac{4 x}{\beta} \sigma_{WL}\left(\frac{2 x^2}{\beta N} ; \frac{2 \gamma}{\beta N} \right) \;. \end{equation} For $\beta=2$, one can check that this result coincides with the prediction from the LDA in the bulk as expected, i.e., \begin{equation} \rho(x) \simeq \frac{1}{\pi} \sqrt{2 (\mu - V(x))} \;, \end{equation} together with the relation between $\mu$ and $N$, which reads $\mu = 2 N +\gamma + \frac{1}{2} \simeq 2 N +\gamma$ in the large $N$ limit, with $\gamma = O(N)$ considered here. The prediction \eqref{rho2} allows one to obtain the density for interacting fermions for general $\beta$ in the potential \eqref{VV}. It is interesting to note in the above result that the gap in the fermion density near the origin remains non-zero for any value of the interaction parameter $\beta=O(1)$. In the limit $\gamma/N \to 0$ one recovers the result in \eqref{eq:densityGBetaE}. \begin{figure*}[t] \centering \includegraphics[angle=0,width=0.48\linewidth]{NumberVarianceWishartBeta1N100.pdf} \includegraphics[angle=0,width=0.48\linewidth]{NumberVarianceWishartBeta2N100.pdf} \caption{Variance of the number of particles in the interval $[0,a]$ (or equivalently, $[a,\infty)$) for the model in the third line of Table \ref{table:mappings} associated to the WL$\beta$E with $\gamma = 2$, and $\beta=1$ (a) and $\beta=2$ (b).
The blue markers are the empirical variance computed over $5 \times {10}^4$ simulated WL$\beta$E matrices with $N=100$, and the red lines are our theoretical prediction \eqref{eq:WLBetaEVariance}.} \label{Fig_Wishart} \end{figure*} One can now use our main conjecture \eqref{prediction} and its analog \begin{eqnarray} \label{conjecture_semi} \beta\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta)}-c_{\beta}=2\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta=2)}-c_{2}+o(1) \end{eqnarray} for semi-infinite intervals, and the result that we obtained in \cite{SDMS} for the case $\beta=2$, to predict the variance of the number of fermions in an interval for general $\beta$ for the potential \eqref{VV}. For the interval $[0,a]$ with $a$ in the bulk, this leads to \begin{equation} \label{eq:WLBetaEVariance} \beta\pi^{2}{\rm Var}{\cal N}_{[0,a]} \simeq \log\left(8 N \right) +c_{\beta} +\log\left( \tilde a \sqrt{1+ \frac{\tilde \gamma}{2}} \frac{\left(1-\frac{\tilde{a}^{2}}{ 1+ \frac{\tilde \gamma}{2}}-\frac{\tilde \gamma^{2}}{8\tilde{a}^{2} (2+ \tilde \gamma)}\right)^{3/2}}{\left(1- \frac{\tilde \gamma^{2}}{(2 + \tilde \gamma)^2} \right)^{1/2}}\right) \;, \end{equation} where $\tilde \gamma=2 \gamma/(N \beta)$ and $\tilde a= \frac{a}{\sqrt{2 \beta N}}$ (where $\sqrt{2 \beta N}$ is the position of the edge at $\tilde \gamma=0$). The theoretical prediction \eqref{eq:WLBetaEVariance} is compared with numerical simulations of Wishart matrices in Fig.~\ref{Fig_Wishart} with $\gamma=2$ and $\beta\in\left\{ 1,2\right\}$, with excellent agreement. A formula analogous to \eqref{eq:WLBetaEVariance} for a general interval $[a,b]$ with $a,b$ in the bulk is given in Appendix \ref{appendix:WLBetaEandJBetaE}, where we also give some formulae for the Jacobi box potential (line 4 in Table \ref{table:mappings}) as well as the details of the derivation of \eqref{eq:WLBetaEVariance}. \section{Higher cumulants} \label{sec:cum} We now study the higher cumulants (of order larger than $2$) of the number of fermions in an interval for the interacting fermion models displayed in Table \ref{table:mappings}. In our previous work for noninteracting fermions in \cite{SDMS} we had conjectured, and checked with available rigorous results for several potentials, that the higher cumulants are determined solely from microscopic scales. Hence they are independent of the potential in the large $N$ limit. Here we will go one step further and conjecture that this remains true in the interacting case for general~$\beta$. Although the numerical values of these cumulants depend non-trivially on $\beta$, i.e., on the interaction strength, they are insensitive to the details of an external smooth potential. Indeed these cumulants are determined at microscopic scales where the $1/x^2$ interactions dominate over the local variations of the potential. Consequently we can conjecture that the higher cumulants are the same as for fermions on a circle without a potential, i.e., for the C$\beta$E. It turns out that the cumulants for the C$\beta$E were recently predicted in Ref. \cite{FyodorovLeDoussal2020} in a different context. Consider the periodic model in the second line of Table \ref{table:mappings} where $x$ is the coordinate along a circle of perimeter $L=2 \pi$.
The result of \cite{FyodorovLeDoussal2020} gives the FCS generating function for ${\cal N}_{[a,b]}$, i.e., the number of fermions with positions $x_i \in [a,b]$, as \cite{footnote10} \begin{equation} \label{FCS1} \log\left\langle e^{2\pi\sqrt{\frac{\beta}{2}}t ({\cal N}_{[a,b]} - \langle {\cal N}_{[a,b]} \rangle) }\right\rangle = 2t^{2}\log N+t^{2}\log\left(4\sin^{2}\frac{|b-a|}{2}\right)+2\log|A_{\beta}(t)|^{2} \end{equation} up to terms that vanish in the large $N$ limit. Here $t$ is a parameter \cite{foot_t}, and for $\beta = 2 s/r$, with $s,r$ mutually prime integers, \begin{equation} \label{Abeta} A_{\beta}(t)=r^{-t^{2}/2}\prod_{\nu=0}^{r-1}\prod_{p=0}^{s-1}\frac{G\left(1-\frac{p}{s}+\frac{\nu+it\sqrt{\frac{2}{\beta}}}{r}\right)}{G\left(1-\frac{p}{s}+\frac{\nu}{r}\right)} \;, \end{equation} where $G(z)$ is the Barnes function \cite{BarnesFunctionWikipedia}. This formula is based on yet another conjecture made in \cite{ForresterFrankel2004} (see formula (3.22)-(3.23) there and \cite{footnote11}). Expanding both sides of \eqref{FCS1} in powers of $t$, one finds that the cumulants $\langle {\cal N}_{[a,b]}^k \rangle^c$ of order $k>2$ take the form \begin{equation} \label{cumulants_vs_ck} \left\langle {\cal N}_{[a,b]}^{k}\right\rangle ^{c} = \frac{2}{\left(\pi\sqrt{2\beta}\right)^{k}}\tilde{C}_{k}^{(\beta)} + o(1) \;, \end{equation} where the coefficients $\tilde C^{(\beta)}_k$ are defined for $k \geq 2$ as \begin{equation} \label{Cbeta2} \tilde{C}_{k}^{(\beta)}=\left.\frac{d^{k}}{dt^{k}}\right|_{t=0}\log\left(A_{\beta}(t)A_{\beta}(-t)\right) \;. \end{equation} It is obvious from this formula that $\tilde C^{(\beta)}_{2 p+1}=0$, hence all the odd cumulants vanish, i.e., $\langle {\cal N}_{[a,b]}^{2 p+1} \rangle^c =0$. We thus focus now on the even cumulants. Although the coefficients $\tilde C^{(\beta)}_k$ are defined here for rational values of $\beta$, it is possible to obtain expressions for these coefficients for any real $\beta$, as an explicitly continuous function of $\beta$. This is achieved using the fact that any real $\beta$ can be reached by a sequence $\beta=2 s_n/r_n$ of arbitrarily large $s_n,r_n$ and performing an asymptotic analysis (see details in \cite{FyodorovLeDoussal2020}). The result for $k=2 p$ with $p \geq 2$ can be written in terms of the following double series \begin{equation} \label{s2} \tilde{C}_{2p}^{(\beta)}=\left(-1\right)^{p+1}2\left(2p-1\right)!\sum_{\nu=0}^{\infty}\sum_{q=1}^{\infty}\frac{1}{\left(\nu\sqrt{\frac{\beta}{2}}+q\sqrt{\frac{2}{\beta}}\right)^{2p}} \; . \end{equation} In addition, one of the sums (either over $\nu$ or over $q$) can be carried out, leading to two equivalent ``dual'' expressions \begin{eqnarray} \label{s3} \tilde C^{(\beta)}_{2p} &=&(-2)^{1-p}\beta^{p}\sum_{\nu=0}^{\infty}\psi^{(2p-1)}\left(1+\frac{\beta\nu}{2}\right)\nonumber\\ &=& (-2)^{p+1}\frac{1}{\beta^{p}}\sum_{q=1}^{\infty}\psi^{(2p-1)}\left(\frac{2q}{\beta}\right) \end{eqnarray} where we recall that $\psi^{(q)}(x) = \frac{d^{q+1}}{dx^{q+1}} \log \Gamma(x)$ is the polygamma function. The above series are convergent for $p \geq 2$, since at large $z$ one has $\psi^{(2 p-1)}(z) \simeq \frac{(2p-2)!}{z^{2p-1}}$. The asymptotics for small and large $\beta$ can be obtained from either of the dual series in \eqref{s3} and can be found in \cite{FyodorovLeDoussal2020}. For the classical values $\beta\in\left\{ 1,2,4\right\} $ these series can be evaluated explicitly, e.g. see formula \eqref{Ctilde4} for the fourth cumulant.
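These coefficients are also easy to evaluate numerically from the dual series \eqref{s3}, which provides an elementary check of the closed forms \eqref{Ctilde4}. A minimal sketch (assuming \texttt{scipy}):
\begin{verbatim}
import numpy as np
from scipy.special import polygamma, zeta

def C_tilde_2p(beta, p, Q=100_000):
    # Second line of Eq. (s3), truncated at Q terms (tail ~ beta/Q^2 for p=2).
    q = np.arange(1, Q + 1)
    return (-2.0)**(p + 1) / beta**p * np.sum(polygamma(2*p - 1, 2.0*q/beta))

z3 = zeta(3)
print(C_tilde_2p(2.0, 2), -12*z3)                 # beta = 2
print(C_tilde_2p(1.0, 2), np.pi**4/4 - 24*z3)     # beta = 1
print(C_tilde_2p(4.0, 2), -24*z3 - np.pi**4/4)    # beta = 4
\end{verbatim}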
Note that the formula \eqref{s2} transforms simply under the ``duality'' $\beta \to 4/\beta$. This duality was studied in \cite{For20}. \\ From our conjecture, the formula \eqref{cumulants_vs_ck} for the cumulants of the fermion model on the circle (i.e., the C$\beta$E) is thus predicted to hold for all the fermion models in Table \ref{table:mappings}, with no modification. Indeed the rescaling of lengths is unimportant here since the values of these cumulants are independent of the size of the intervals (assumed here to be macroscopic in the bulk). In the case of a semi-infinite interval, e.g. $[a,+\infty[$ for the quadratic potential, the result is divided by a factor of~$2$. \\ We can now return to the question of the $O(1)$ term in the second cumulant (the variance) as discussed in the previous section. In particular the above predictions based on the C$\beta$E allow one to obtain explicitly the universal constant $c_\beta$ which enters in all the formulae for the variance of the fermion models considered here. Comparing the formula \eqref{varcb} with the $O(t^2)$ term in \eqref{FCS1}, one finds that the relation \eqref{prediction} holds, together with $c_{\beta}=\log2+\frac{1}{2}\tilde{C}_{2}^{(\beta)}$, where $\tilde{C}_{2}^{(\beta)}$ is given in \eqref{Cbeta2}. Using the analysis of $\tilde{C}_{2}^{(\beta)}$ in \cite{FyodorovLeDoussal2020}, the constant $c_\beta$ can be written in several alternative forms, either as a convergent double series \begin{equation} c_{\beta}=\log2+\gamma_{E}+\sum_{\nu=0}^{+\infty}\left[\sum_{q=1}^{+\infty}\frac{\beta/2}{\left(\nu\frac{\beta}{2}+q\right)^{2}}-\frac{1}{1+\nu}\right] \;, \label{cbeta1} \end{equation} or, performing one of the sums, as a convergent simple series as given in the Introduction, see \eqref{eq:cbetaConjecture0}, or as the dual series \begin{equation} c_{\beta}=\log2+\gamma_{E}+\sum_{\nu=0}^{+\infty}\left[\frac{\beta}{2}\psi^{(1)}\left(1+\frac{\beta\nu}{2}\right)-\frac{1}{1+\nu}\right] \;. \end{equation} We have tested the prediction \eqref{eq:cbetaConjecture0} for $c_\beta$ numerically, together with the prediction for the variance \eqref{eq:NumberVarianceGOEatoinfinity}, see Fig.~\ref{FigHalfSpaceBeta}. Using the correspondence in Table \ref{table:mappings} we have diagonalized G$\beta$E matrices generated using the Dumitriu-Edelman tridiagonal representation \cite{Dumitriu2002} for various sizes $N$. A rather large even-odd finite $N$ effect is observed; however, the average value over consecutive $N$'s is very close to the predictions. Note that in Fig.~\ref{FigHalfSpaceBeta} we tested these predictions also in the range $0 < \beta < 1$, which is only relevant for RMT (or log-gases) but not for the fermion models studied in the rest of this paper (since in the latter models $\beta \ge 1$). {It would be interesting to test our predictions for the higher cumulants too. This is more computationally demanding, but nevertheless the fourth cumulant was tested numerically in \cite{V12} for the GUE.} \begin{figure*}[h!] \centering \includegraphics[angle=0,width=0.49\linewidth]{NumberVarianceGBetaEHalfSpaceN2000.pdf} \includegraphics[angle=0,width=0.49\linewidth]{cBetaVsEmpiricalN2000andN100.pdf} \includegraphics[angle=0,width=0.47\linewidth]{NumberVarianceGBetaEHalfSpaceN2000BetaLessThanOne.pdf} \includegraphics[angle=0,width=0.50\linewidth]{cBetaVsEmpiricalN2000BetaLessThanOne.pdf} \caption{(a) Number variance for a semi-infinite interval $[0,\infty[$ for the harmonic oscillator \eqref{H0} as a function of $\beta \ge 1$.
The squares and circles correspond to numerical simulations of G$\beta$E with $N=2001$ and $N=2000$ respectively (with ${10}^5$ simulations for each plot marker), and their average is indicated by the diamonds. The red line is the conjecture given by \eqref{eq:cbetaConjecture0} and \eqref{eq:NumberVarianceGOEatoinfinity} (with $\tilde{a}=0$). The reason for averaging over two consecutive values of $N$ is that there is a parity effect, which comes from subleading corrections which we do not calculate here (for $\beta=2$ these corrections are known and indeed depend on the parity of $N$ \cite{WF2012}). (b) Markers: Numerically computed $\beta\pi^{2}\text{Var}\left(\mathcal{N}_{\left[a,\infty\right[}\right)-\log\left(4N\right)$. According to our conjecture \eqref{eq:NumberVarianceGOEatoinfinity}, this should converge to the constant $c_\beta$ in the large-$N$ limit. The squares, circles, and diamonds are based on the same data as in (a). The triangles and upside-down triangles correspond to ${10}^5$ simulations with $N=100$ and $N=101$, respectively. Red line: our conjecture \eqref{eq:cbetaConjecture0}. One observes a (rather) slow convergence of the numerical results to our conjecture as $N$ is increased. In (c) and (d), analogous results are plotted for $0 < \beta < 1$ (and $N=2000,2001$), again with good agreement between the numerical results and our conjecture. These results are only relevant for RMT (or log-gases) but not for the fermion models studied in the rest of this paper (since in the latter models $\beta \ge 1$). } \label{FigHalfSpaceBeta} \end{figure*} \section{FCS near the edge and matching with the bulk}\label{sec:edge} Until now we have discussed the counting statistics in the large $N$ limit for an interval which has at least one point inside the bulk. Let us consider a general smooth potential $V(x)$ such that the Fermi gas has two edges $x \simeq x_e^\pm$, where the LDA density vanishes. There is a region near these edges, of width denoted $w_N$, where it is known that the quantum fluctuations are enhanced, and that the counting statistics are different from the bulk. While this region has been studied in the noninteracting case ($\beta = 2$), there are only a few recent results for the counting statistics in the edge region for the interacting case. They were obtained in the context of RMT, specifically for the G$\beta$E for $\beta=1,4$. This corresponds to interacting fermions in a quadratic potential. In Ref. \cite{BK18} the FCS (i.e., all the cumulants) has been obtained in the outer edge region, i.e., for an interval $[a,+\infty)$ where $\frac{x^+-a}{w_N}$ is $O(1)$ in $N$ but numerically large (i.e., in the crossover region from the edge to the bulk). Inside the edge region, there are recent results about the second cumulant for general linear statistics for $\beta=1,4$ \cite{MC2020}. In this section we discuss how these results compare with our conjecture for the cumulants for general $\beta$. We start by recalling the noninteracting case and the matching between the bulk and the edge regions. \subsection{Noninteracting case $\beta=2$} In the case of noninteracting fermions the positions $x_i$ form a determinantal point process based on the kernel $K_\mu(x,y)$ discussed in Section \ref{sec:variance}.
For any smooth confining potential $V(x)$, as discussed in \cite{DeanPLDReview}, for $x,y$ near the right edge $x^+$ (and similarly for $x^-$) the kernel takes the universal scaling form $K_\mu\left(x,y\right)\simeq\frac{1}{w_{N}}K_{\text{Ai}}\left(\frac{x-x^+}{w_{N}},\frac{y-x^+}{w_{N}}\right)$ where $w_N$ is the width of the edge region $w_{N}=\left[2V'\left(x^+\right)\right]^{-1/3}$ for noninteracting fermions. Here $K_{\rm Ai}$ is the Airy kernel given by \begin{eqnarray} \label{K_Airy} K_{\rm Ai}(x,y)=\frac{{\rm Ai}(x) {\rm Ai}'(y)- {\rm Ai}'(x) {\rm Ai}(y)}{x-y} \;. \end{eqnarray} Using this scaling form one obtains the number variance for any interval in the edge region in terms of the Airy kernel as was done in \cite{MMSV14, MMSV16} for the case of the harmonic oscillator/GUE, \begin{eqnarray} \label{edgeV2} &&\! \! \! \! \! \text{Var} {\cal N} _{\left[a,+\infty\right)} = \int_{a}^{+\infty}dx\int_{-\infty}^{a}dy\,K_{\mu}^{2}\left(x,y\right) \simeq \frac{1}{2} {\cal V}_{2} \left(\hat a \right) \\ && \! \! \!\!\! {\cal V}_2(\hat a) := 2 \int_{\hat a}^{+\infty}du\int_{-\infty}^{\hat a}dv K_{\text{Ai}}^{2}\left(u,v\right) ~,~ \hat a = \frac{a-x^+}{w_{N}} \;, \nonumber \end{eqnarray} where the scaling function ${\cal V}_2(\hat a)$, defined in \cite{MMSV14, MMSV16}, is universal, i.e., independent of the potential $V(x)$, in terms of the scaling variable $\hat a$. One interesting question is the matching of the number variance as $a$ in \eqref{edgeV2} moves from the bulk to the edge. In \cite{SDMS} it was shown that the asymptotic behavior for $a \to x^+$ coming from the bulk reads \begin{equation} \label{asympt2} \text{Var}{\cal N}_{\left[a,+\infty \right)} = \frac{1}{2\pi^{2}}\left[\frac{3}{2}\log(-\hat{a})+c_{2}+2\log2\right] + o(1) \;, \end{equation} in terms of the edge scaling variable $\hat a$ defined in (\ref{edgeV2}). The result \eqref{asympt2} matches exactly with the formula \eqref{edgeV2} in the limit $\hat a \to - \infty$, which corresponds to the crossover from the edge to the bulk. The comparison between the two results is performed in Appendix \ref{appendix:varianceEdge}. This crossover was also obtained in the case of the harmonic potential (i.e., for the GUE) in \cite{BK18} (see also \cite{Gus2005}). In fact in Ref. \cite{BK18} the FCS, i.e., the higher cumulants were also given for $\beta=2$ in this crossover regime. As we discuss below and in Appendix \ref{app:checks}, this result matches perfectly our predictions for the higher cumulants in the bulk given in \cite{SDMS}. \subsection{Interacting case} Let us start with the case of the harmonic potential $V(x)=\frac{1}{2} x^2$. From the RMT-fermion correspondence in the first line of Table \ref{table:mappings}, i.e., $\lambda(x) = \sqrt{2/\beta} \, x$, one finds that for general $\beta$ the right edge is at position $x^{+}=\sqrt{\beta N}$ and the width of the edge region is \begin{equation} w_{N}=\frac{\sqrt{\beta}}{2}N^{-1/6} \;. \end{equation} It is known in RMT that the edge properties of the G$\beta$E are described by the Airy$_\beta$ point process denoted $a_i^\beta$, which implies that the fermion positions in the edge region and for $N \to +\infty$ can be written as \begin{equation} x_i = \sqrt{\frac{\beta}{2}} \lambda_i \quad , \quad \lambda_i \simeq \sqrt{2 N} + \frac{a_i^\beta}{\sqrt{2} N^{1/6}} \;. \end{equation} The statistics of the $a_i^\beta$ is described by the so-called stochastic Airy operator \cite{EdelmanSutton2007,RamirezRiderVirag2011,Virag2018}. 
It is known that the largest eigenvalue (i.e., the rightmost fermion) is described by the $\beta$-Tracy-Widom distribution ${\rm Prob}(\max_i a_i^\beta < s) = F_\beta(s)$ which depends continuously on $\beta$. Explicit expressions in terms of Fredholm determinants or solutions to Painlev\'e equations are known for $\beta\in\left\{ 1,2,4\right\} $ \cite{TW1994,TW1996} (see also \cite{TW2002,MS2014,BK18}). From known results for the mean density $\sigma(\lambda)$ of eigenvalues of G$\beta$E at the edge for $\beta\in\left\{ 1,2,4\right\} $ \cite{FFG2006,PS2015} we obtain that the mean density for the fermion model takes the scaling form in the large $N$ limit \begin{equation} \rho(x)=N\sqrt{\frac{2}{\beta}}\sigma\left(\sqrt{\frac{2}{\beta}}x\right)\simeq\frac{1}{w_{N}}\sigma_{\beta}^{e}\left(\frac{x-\sqrt{\beta N}}{w_{N}}\right) \end{equation} where the scaling functions are \begin{equation} \sigma_{\beta}^{e}(\xi)=\begin{cases} \sigma_{2}^{e}(\xi)+\frac{1}{2}{\rm Ai}(\xi)\left[1-\int_{\xi}^{\infty}{\rm Ai}(t)\,dt\right]\;, & \beta=1\\[0.1cm] \sigma_{2}^{e}(\xi)={\rm Ai}'(\xi)^{2}-\xi{\rm Ai}(\xi)^{2}\;, & \beta=2\\[0.1cm] \frac{1}{\sqrt{\kappa}}\left[\sigma_{2}^{e}(\kappa\xi)-\frac{1}{2}{\rm Ai}(\kappa\xi)\int_{\kappa\xi}^{\infty}{\rm Ai}(t)\,dt\right]\,, & \beta=4 \end{cases} \end{equation} with $\kappa = 2^{2/3}$. The (smooth) linear statistics for general $\beta$ was studied in \cite{krajenbrink4ways} in the limit towards the bulk. For the FCS for general $\beta$ however explicit formulae are still lacking in the edge region. Concerning the variance of the particle number, the scaling form \eqref{edgeV2} can be extended to any $\beta$, see \cite{MMSV14, MMSV16} \begin{equation} \label{eq:VBetaScalingDef} \text{Var}{\cal N}_{\left[a,\infty\right[}\simeq \frac{1}{2} {\cal V}_{\beta}\left(\frac{a-x^{+}}{w_{N}}\right) \; . \end{equation} However the explicit form for general $\beta$ is unknown at present. The asymptotic behavior in the limit towards the bulk, $\hat a \to - \infty$, can also be extracted from the results in \cite{BK18}, see Appendix \ref{appendix:varianceEdge} for details. This leads to the asymptotic behaviors for $\beta\in\left\{ 1,2,4\right\} $ \begin{equation} \label{eq:VBetaAsymptotic} \mathcal{V}_{\beta}\left(\hat{a}\right)\simeq 2 \frac{\frac{3}{2}\log\left(-\hat{a}\right)+c_{\beta}+2\log2}{\beta\pi^{2}}\;,\qquad-\hat{a}\gg1 \;, \end{equation} which matches with the bulk result \eqref{eq:NumberVarianceGOEatoinfinity} that we obtained above. The leading order (logarithmic) term in \eqref{eq:VBetaAsymptotic} was conjectured in \cite{MMSV14} for any $\beta$ based on the expected matching with the bulk. From our conjecture in Section \ref{sec:conjecture} we can now predict that \eqref{eq:VBetaAsymptotic} holds for general $\beta$, with the constant $c_\beta$ given in \eqref{eq:cbetaConjecture0}. Concerning the cumulants of the particle number of order three and higher, these have been obtained in the limit towards the bulk ($\hat a \to - \infty$) for $\beta\in\left\{ 1,2,4\right\} $ in Ref.~\cite{BK18}. In Appendix \ref{app:checks} we have verified that our prediction for the higher cumulants \eqref{highercum} for general $\beta$ matches perfectly with the results from Ref.~\cite{BK18} for $\beta\in\left\{ 1,2,4\right\} $. This provides a non-trivial check of our conjecture, and involves some lesser-known identities among Barnes functions. The above results were obtained for fermions in the harmonic potential, associated to the G$\beta$E.
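As an aside, the edge scaling functions above are elementary to evaluate; for instance, a short check (assuming \texttt{scipy}) that $\sigma_2^e(\xi)$ matches the bulk density $\sqrt{-\xi}/\pi$ deep inside the bulk, $\xi\to-\infty$:
\begin{verbatim}
import numpy as np
from scipy.special import airy

def sigma2_edge(xi):
    # GUE edge density scaling function: Ai'(xi)^2 - xi * Ai(xi)^2.
    ai, aip, _, _ = airy(xi)
    return aip**2 - xi * ai**2

for xi in [-2.0, -5.0, -10.0]:
    print(xi, sigma2_edge(xi), np.sqrt(-xi) / np.pi)
\end{verbatim}
The agreement improves as $-\xi$ grows, up to the expected oscillatory corrections.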
In RMT it is known that there is a universality at the soft edge, hence we expect the same behavior to hold for the fermion model associated to the WL$\beta$E (line 3 in Table~\ref{table:mappings}) at its soft edge (i.e., near $x_e^+= \sqrt{2 \beta N}$). On the other hand, for that model near $x_e^-=0$ (and for the Jacobi box) the universality class is that of the hard edge, described by the Bessel stochastic operator \cite{RamirezRider2009}. As in the case $\beta=2$ it is reasonable to conjecture that for general $\beta$ the above results extend to any smooth confining potential, e.g. $V(x) \sim |x|^p$ with $p>0$, near the edge, the functions ${\cal V}_\beta$ being thus universal. The simple picture is that $V(x)$ near the edge can be approximated by a linear potential in all cases. One can also expect that the higher cumulants will also take a universal scaling form, being a non-trivial function of $\hat a$. Note that the effect of short-range interactions at the edge was discussed in Ref. \cite{StephanInteractions} and found to be subdominant within the model studied there. It was also noticed there that the case of $1/x^2$ interactions leads to a new universality class for general $\beta$. \section{Bosonisation and Luttinger liquid}\label{sec:boso} It is well known that interacting fermions in one dimension can be described by the effective theory of the Luttinger liquid (LL) based on the bosonisation method, for a review see e.g. \cite{BookGiam}. It provides a hydrodynamic description which in its simplest form is valid in the absence of an external potential. At equilibrium, and for spinless fermions, it depends only on two parameters, the mean density $\rho_0$ and the dimensionless Luttinger parameter $K$, with $K=1$ for noninteracting fermions, $K<1$ for repulsive interactions and $K>1$ for attractive interactions. The dynamics also depends on the ``sound velocity'' $v_F$ (Fermi velocity for free fermions). This is based on the description in terms of the phase field $\varphi(x)$ defined such that $\rho(x) = - \frac{1}{\pi} \partial_x \varphi(x)$, which at large scales is described by a Gaussian theory. This theory can be extended in the presence of a potential which varies very slowly on scales of the order of the inter-particle distance $1/\rho(x)$. Many recent studies have addressed the case of inhomogeneous bosonisation where $v_F$, $K$ and $\rho_0$ may become slowly varying functions of the position $x$ \cite{BrunDubail2018,RuggieroBrunDubail2019,DubailStephanVitiCalabrese2017} (see also \cite{LLNonlocalconductivity}). Let us recall that for a LL the density correlations are given by \begin{equation} \label{eq:densitycorrelations} \left\langle \rho(x)\rho(0)\right\rangle \simeq\rho_{0}^{2}\left[1-\frac{2K}{(2\pi\rho_{0} x)^{2}}+\sum_{m=1}^{+\infty}A_{m}(\rho_{0}|x|)^{-2Km^{2}}\cos(2\pi m\rho_{0}x)\right]\;, \end{equation} while the correlation function of the fermionic field (which is the analogue of the kernel in the case of noninteracting fermions) reads \begin{equation} \label{corrPsi} \left\langle \Psi^{\dagger}(x)\Psi(0)\right\rangle \simeq \rho_0 \sum_{m=0}^{+\infty} C_m (\rho_0 |x|)^{- \frac{1}{2 K} - 2 K (m+ \frac{1}{2})^2} \sin \left( 2 \pi \left(m+ \frac{1}{2}\right) \rho_0 x\right) \;. \end{equation} These formulae \eqref{eq:densitycorrelations} and (\ref{corrPsi}) are valid for $\rho_0 x \gtrsim 1$.
Here $A_1$ in \eqref{eq:densitycorrelations} and $C_0$ in (\ref{corrPsi}) represent the leading behaviors at large $\rho_0 x$, while the terms $A_m$, $m \geq 2$ and $C_m$, $m \geq 1$ represent the contributions of higher harmonics (often neglected in LL studies). For noninteracting (free) fermions, $K=1$, $C_0=\frac{1}{\pi}$ and all $C_{m}=0$ for $m \geq 1$, and the expression in (\ref{corrPsi}) becomes exact. In this case it is precisely the sine kernel $\sin(\pi \rho_0 x)/(\pi x)$. In the presence of interactions we see that the correlation function of the fermionic field in (\ref{corrPsi}) in the ground state now decays at large $x$ as \begin{equation} \langle \Psi^\dagger(x) \Psi(0) \rangle_0 \sim \frac{\sin(\pi \rho_0 x)}{|x|^\eta} \quad , \quad \eta = \frac{1}{2}\left(K+K^{-1}\right) \;, \end{equation} with a non-universal prefactor. Consider now the model of interacting fermions on the circle (second line in Table \ref{table:mappings}). One can predict that it corresponds to a Luttinger liquid with parameter $K=2/\beta$. Indeed the density correlations were calculated for the C$\beta$E in Ref. \cite{ForresterBosonisation} and one can check that Eq.~(4.11) there agrees with the prediction of the LL theory \eqref{eq:densitycorrelations}, with the choice $K=2/\beta$, up to subleading terms (which for each harmonic decay faster by a factor $1/|x|$). As mentioned in Proposition 13.2.4, p.~604 of \cite{Forrester} (see also \cite{For1995,For1993}) these asymptotics were established for $\beta$ an even integer. However the value of $K$ can be inferred already from the second term in \eqref{eq:densitycorrelations}, i.e., from the coefficient of the long range decay $\sim 1/x^2$, which in fact can be obtained by electrostatic arguments and linear response theory from the Coulomb gas representation \cite{JancoviciCoulomb1995,Beenakker1993}. Note that the identification $K=2/\beta$ was also made in \cite{StephanInteractions} (see the Appendix there) and in Ref. \cite{ZollerCalogero}, where an approximate formula for more general power-law interactions was also obtained. We thus expect that all the universal properties of the Luttinger liquid with this value of the parameter will hold for the model in the second line of Table \ref{table:mappings} for fermions on the circle. For instance the variance of the number of fermions, ${\cal N}_{[a,b]}= \frac{1}{\pi} (\varphi(a)-\varphi(b))$, can be computed from the correlator of the phase field given in \cite{BookGiam} (see also \cite{Laflorencie}) as \begin{equation} \label{VarLL} {\rm Var}{\cal N}_{[a,b]}\simeq\frac{2}{\pi^{2}}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\int_{-k_{F}}^{k_{F}}\frac{dq}{2\pi}\frac{\pi K\left[1-\cos q(b-a)\right]}{\frac{\omega^{2}}{v_{F}}+v_{F}q^{2}}\simeq\frac{K}{\pi^{2}}\left[\log\left(k_{F}|a-b|\right)+\gamma_{E}\right] \end{equation} for $|a-b| \gg 1/k_F$, where in the general interacting case $k_F$ is defined as $\pi \rho_0$ and is unrelated to $v_F$ (while $\hbar k_F= m v_F$ in the noninteracting case). Substituting $K=2/\beta$ in (\ref{VarLL}), this agrees with the leading logarithmic term in our prediction \eqref{prediction2}, although it is not accurate enough to predict the $O(1)$ term.
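The leading logarithmic growth in \eqref{VarLL} is easy to illustrate numerically. Performing the $\omega$-integral in \eqref{VarLL} exactly, using $\int_{-\infty}^{\infty}d\omega\,\big(\frac{\omega^{2}}{v_{F}}+v_{F}q^{2}\big)^{-1}=\pi/|q|$, the variance reduces to $\frac{K}{\pi^{2}}\int_{0}^{k_{F}}dq\,(1-\cos qr)/q=\frac{K}{\pi^{2}}\,{\rm Cin}(k_{F}r)$ with $r=|a-b|$, where ${\rm Cin}(x)=\gamma_{E}+\log x-{\rm Ci}(x)$. The short Python sketch below (ours, with illustrative parameter values) compares this expression with the asymptote in \eqref{VarLL}.

\begin{verbatim}
import numpy as np
from scipy.special import sici

# Sketch (ours): Var N_[a,b] = (K/pi^2) * Cin(kF*r), with
# Cin(x) = gamma_E + log(x) - Ci(x), versus the log-asymptote
# (K/pi^2) * (log(kF*r) + gamma_E) valid for kF*r >> 1. Here K = 2/beta.
beta = 3.0                 # illustrative interaction parameter
K, kF = 2.0 / beta, 1.0    # Luttinger parameter and Fermi momentum
for r in [5.0, 50.0, 500.0]:
    _, ci = sici(kF * r)
    exact = (K / np.pi**2) * (np.euler_gamma + np.log(kF * r) - ci)
    asympt = (K / np.pi**2) * (np.log(kF * r) + np.euler_gamma)
    print(f"kF*r = {kF*r:6.1f}: Var = {exact:.6f}, asymptote = {asympt:.6f}")
\end{verbatim}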
In the presence of both interactions and an external potential, an inhomogeneous bosonization approach was recently developed which aims to calculate correlations in the bulk using conformal field theory methods \cite{BrunDubail2018,RuggieroBrunDubail2019,DubailStephanVitiCalabrese2017}. The potential induces a spatial dependence of the LL parameters, $K(x)$, $v_F(x)$ and $\rho_0(x)$. In the case of free fermions this approach can be compared with the exact results of Ref. \cite{SDMS} that we presented in Section \ref{subsec:previous}. The correspondence appears to be, from Eq.~(20) of \cite{DubailStephanVitiCalabrese2017}, that $z(x,0) \sim \theta(x) \sim \int^x \frac{dx'}{v_F(x')}$, where $z(x,y)$ is defined there and $\theta(x)$ was defined in \eqref{thetax0}. It would be interesting to extend these results to the present models with $K=2/\beta$. Since it is natural to assume, because of the long range nature of the interactions, that $K=2/\beta$ is independent of $x$, a similar description will hold for more general potentials than the ones considered here. \section{More general models} \label{sec:generalmodels} \begin{table*}[t] \bgroup\small \renewcommand{\arraystretch}{2.2} \begin{tabular}{ | p{1.4cm} | p{1.4cm} | p{4.3cm} | p{2.1cm} | p{1.6cm} | l |} \hline Fermions' domain & Fermion potential $V(x)$ & Fermion interaction $W(x,y)$ & RMT ensemble & Matrix potential $V_0(\lambda)$ & Map $\lambda(x)$ \\ \hline $x \! \in \! [0, \! 2\pi]$ & Eq.~\eqref{GWdef} & {\scriptsize $\frac{1}{16} \frac{\beta(\beta-2)}{\sin^2 \frac{x-y}{2}}$} & GWW & {\scriptsize $-\, \frac{g\left(\lambda+\lambda^{*}\right)}{2}$} & $\lambda=e^{ix}$ \\ \hline $x\in\mathbb{R}^+$ & Eq.~\eqref{x6n} & {\scriptsize $\frac{\beta(\beta-2)}{4}\left[\frac{1}{\left(x-y\right)^{2}}+\frac{1}{\left(x+y\right)^{2}}\right]$} & Half line & Eq.~\eqref{eq:V0HalfLineGeneral} & $\lambda=\frac{2}{\beta} x^{2}$ \\ \hline $x\! \in \! [0,\pi]$ & Eq.~\eqref{period22} & {\scriptsize $\frac{\beta(\beta-2)}{16}\left(\frac{1}{\sin^{2}\frac{x-y}{2}}+\frac{1}{\sin^{2}\frac{x+y}{2}}\right)$} & Box & Eq.~\eqref{eq:V0BoxGeneral} & $\lambda=$ {\scriptsize $\frac{1-\cos x}{2}$} \\ \hline $x\in\mathbb{R}$ & Eq.~\eqref{hypermorseMainText} & {\scriptsize $\frac{\beta(\beta-2) }{16 \sinh^2 \frac{x-y}{2} } $} & ``Hyperbolic'' & Eq.~\eqref{eq:V0MorseGeneral} & $\lambda=e^x$ \\ \hline $x\in\mathbb{R}^+$ & Eq.~\eqref{hypermorse2} & Eq.~\eqref{hypermorse2W} & ``Hyperbolic'' on half line & Eq.~\eqref{hypermorseV0} & $\lambda \! =\! \cosh \! \left(px\right)$ \\ \hline $x\in\mathbb{R}$ & $\frac{1}{2}a^{2}x^{2}$ & {\scriptsize $\frac{\beta(\beta-2)}{16\sinh^{2}\frac{x-y}{2}}-\frac{a\beta\left(x-y\right)}{4}\coth\frac{x-y}{2}$} & SW$\beta$E & $a\log^{2}\lambda$ & Eq.~\eqref{eq:maplambdaxSWBetaE} \\ \hline $x \in \mathbb{R} $ & $\dfrac{-1}{\cosh^2(x)}$ & {\scriptsize $\frac{\beta(\beta-2)}{16} \left( \frac{1}{\sinh^2 \frac{x-y}{2} } - \frac{1}{\cosh^2 \frac{x+y}{2}} \right)$} & Cauchy & Eq.~\eqref{V0Cauchy} & $\lambda \! = \! \sinh \! \left(px\right)$ \\ \hline $x\in\mathbb{R}$ & Eq.~\eqref{eq:quarticV} & Eq.~\eqref{eq:quarticW} & Quartic matrix potential & $c_{2}\lambda^{2} \! +c_{4}\lambda^{4}$ & $\lambda = x$ \\ \hline \hline \end{tabular} \egroup \caption{The mappings between (i) models of interacting trapped fermions studied in section \ref{sec:generalmodels} (with details in Appendix \ref{appendix:mappings}) and (ii) random matrix ensembles. The table is in the same format as Table \ref{table:mappings}. In the first line, GWW refers to the Gross-Witten-Wadia model, which was studied for $\beta=2$ in \cite{GW1980,Wadia}. The acronym SW$\beta$E refers to the Stieltjes-Wigert ensemble \cite{Forrester}.
} \label{table:mappingsMore} \end{table*} So far we have focused on the models in Table \ref{table:mappings}, whose ground state wave functions are of the form \eqref{P0}, involving one- and two-body factors, and are related to random matrix models of the form \eqref{F}. There is in fact a larger class of models for which the ground state wave functions are still of the form \eqref{P0}. The study of such models was initiated by Sutherland and Calogero \cite{Sutherland1971c,Sutherland1975,Calogero1975} and extended in Refs. \cite{IM1984,Forrester1994,Wagner2000}. We recall how models with this property are constructed using a slightly more general approach, and give a list of the corresponding Hamiltonians in Appendix \ref{appendix:mappings}. In addition we show that some of these models are related to other interesting random matrix models, some of which were studied in the RMT literature. The models presented in this section are summarized in Table \ref{table:mappingsMore}. {\it Fermions on the circle}. The first interesting extension corresponds to fermions on the circle, i.e., $x\in[0,2\pi]$ with periodic boundary conditions, in the presence of an external periodic potential. It generalizes the models of the second line of Table \ref{table:mappings}, which are related to the C$\beta$E (in the absence of a potential). It corresponds to the Hamiltonian \eqref{H} with the external potential $V(x)$ and two-body interaction $W(x,y)$ given by \begin{eqnarray} \label{GWdef} && V(x)= \frac{g}{4} \left(1 + \frac{N-1}{2} \beta\right) \cos x - \frac{g^2}{8} \cos^2 x \;, \\ && W(x,y)=\frac{1}{16} \frac{\beta(\beta-2)}{\sin^2 \frac{x-y}{2}} \;. \end{eqnarray} The ground state wave function is of the form \eqref{P0} for any $N$, with $v(x)=g \cos x$ and $w(x,y)=- \beta \log|\sin \frac{x-y}{2}|$, and the ground state energy $E_0$ is given in \eqref{E0GW}. The quantum probability is (up to a normalization) \begin{equation} \label{GWProba} \left|\Psi_{0}(\vec{x})\right|^{2}\propto\prod_{i<j}\left|\sin\frac{x_{i}-x_{j}}{2}\right|^{\beta}e^{g\sum_{i}\cos x_{i}} \;. \end{equation} Interestingly, for $\beta=2$ this probability is identical to the one studied at large $N$ by Gross and Witten, and independently by Wadia, in the context of lattice gauge theories \cite{GW1980,Wadia}, and later on in combinatorics \cite{Joh1998}. The weight \eqref{GWProba} corresponds to the joint PDF of the eigenvalues $\lambda_j = e^{ i x_j}$ of a matrix model with a probability measure on the $N \times N$ unitary matrices $\propto e^{\frac{g}{2} {\rm Tr} (U + U^\dagger) } dU $. Remarkably, in the present context this corresponds to noninteracting fermions. The mapping is summarized in the first line of Table \ref{table:mappingsMore}. \begin{figure*}[t] \centering \includegraphics[angle=0,width=0.49\linewidth]{VofxGrossWittenWeak.pdf} \includegraphics[angle=0,width=0.49\linewidth]{VofxGrossWittenStrong.pdf} \includegraphics[angle=0,width=0.49\linewidth]{VofuGrossWittenWeak.pdf} \includegraphics[angle=0,width=0.49\linewidth]{VofuGrossWittenStrong.pdf} \caption{The potential \eqref{VxGW} for $\tilde{g} =1/3$ [(a) and (c)] and $\tilde{g} =3$ [(b) and (d)]. $V$ is plotted as a function of $x$ in (a) and (b), and as a function of $u = \cos x$ in (c) and (d). As is seen in (a) and (b), for weak coupling $\tilde{g} <1$, $V(x)$ has a single maximum, but for strong coupling $\tilde{g} >1$ there are two degenerate maxima. The dotted lines correspond to the Fermi energy~$\mu$.
} \label{VofxGrossWitten} \end{figure*} As we now show, this mapping allows us to shortcut the Coulomb gas method used in Refs. \cite{GW1980,Wadia} and obtain the eigenvalue density in a simpler way. For $\beta=2$ the external potential for the fermions reads \begin{equation} \label{VxGW} V(x) = \frac{N^2 \tilde g^2}{8} \left( \frac{2}{\tilde g} u - u^2 \right) \quad , \quad u=\cos x \;, \end{equation} where we have defined $\tilde g=g/N$, the relevant parameter, which is kept $O(1)$ in the large $N$ limit. In Fig.~\ref{VofxGrossWitten} we show a plot of $V(x)$ for various values of $\tilde g$. Since there is no interaction for $\beta=2$ ($W=0$), in the large $N$ limit the fermion density can be obtained from the LDA formula \begin{equation} \label{LDAGW} \tilde \rho(x) = \frac{\sqrt{2}}{N \pi} \sqrt{ (\mu - V(x))_+} \;, \end{equation} where $\mu$ is the Fermi energy, determined by the normalization condition $\int_0^{2 \pi} dx \tilde \rho(x)=1$. From \eqref{VxGW} and Fig.~\ref{VofxGrossWitten} we see that there are two cases depending on $\tilde g$. For $\tilde g<1$ there is a single maximum $V_{\rm max}$ of the potential $V(x)$, attained for $u=\cos x=1$, i.e., $x=0$, with $V_{\rm max} =\frac{N^2}{8} (2 \tilde g - \tilde g^2)$. For $\tilde g>1$ there are two degenerate maxima of the potential $V(x)$, attained for $u=\cos x = 1/\tilde g$, of value $V_{\rm max} =\frac{N^2}{8}$. As we now discuss, this change of behavior of $V(x)$ results in two distinct phases: for $\tilde g<1$ (weak coupling) the Fermi energy is above the maximum of the potential and the density is everywhere positive. For $\tilde g>1$ (strong coupling) the Fermi energy is below the maximum and the density has a restricted support. Since the Fermi energy $\mu$ is itself determined by the normalization condition, this is a nontrivial transition. In the weak coupling phase one finds, as we show below, that \begin{equation} \label{mu1} \mu = \frac{N^2}{8} > V_{\rm max} =\frac{N^2}{8} (2 \tilde g - \tilde g^2) \quad , \quad \tilde g<1 \;. \end{equation} For that particular value, one sees that the expression inside the square root in \eqref{LDAGW} becomes a perfect square, leading to \begin{equation} \label{densGW1} \tilde \rho(x) = \frac{1}{2 \pi} (1 - \tilde g \cos x) \quad , \quad \tilde g <1 \;, \end{equation} which is automatically normalized to unity, with a support $x \in [0, 2 \pi]$. Hence \eqref{mu1} is the correct value for $\mu$. Since the density must be positive, this solution is acceptable only for $\tilde g<1$. For $\tilde g>1$ we note that the potential $V(x)$ has a local minimum at $x=0$ of value $V_{\min}=\frac{N^2}{8} (2 \tilde g - \tilde g^2)$. In this strong coupling phase, one finds \begin{equation} \label{mu2} \mu = \frac{1}{8} \left(2 g N - g^2\right) = V_{\min} \quad , \quad \tilde g>1 \;. \end{equation} As can be seen in Fig.~\ref{VofxGrossWitten}, this value of $\mu$ is such that the support of the density remains a single interval. For this value of $\mu$ we obtain from \eqref{LDAGW} \begin{equation} \label{densGW2} \tilde{\rho}(x)=\frac{\tilde{g}}{\pi}\left|\sin\left(\frac{x}{2}\right)\right|\sqrt{\left(\frac{1}{\tilde{g}}-\cos^{2}\left(\frac{x}{2}\right)\right)_{+}}\;,\quad\tilde{g}>1\;, \end{equation} which now has a restricted support $[x_-,x_+]$, with edges $x_-=2 {\rm arccos}\sqrt{\frac{1}{\tilde g}}$ and $x_+=2 \pi - x_-$.
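Both phases are easy to explore numerically. The following minimal Python sketch (ours, not part of the original analysis) evaluates the LDA densities \eqref{densGW1} and \eqref{densGW2}, checks their normalization across the transition, and prints the support edges in the strong coupling phase; the normalization is verified analytically just below.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch (ours): normalization of the LDA densities (densGW1)/(densGW2)
# of the Gross-Witten-Wadia fermion model, in both phases.
def rho(x, g):
    if g < 1:                        # weak coupling, Eq. (densGW1)
        return (1 - g * np.cos(x)) / (2 * np.pi)
    arg = 1 / g - np.cos(x / 2)**2   # strong coupling, Eq. (densGW2)
    return (g / np.pi) * abs(np.sin(x / 2)) * np.sqrt(max(arg, 0.0))

for g in [0.5, 0.999, 1.001, 3.0]:
    norm, _ = quad(rho, 0, 2 * np.pi, args=(g,), limit=200)
    print(f"gtilde = {g:6.3f}: normalization = {norm:.6f}")   # -> 1

g = 3.0                              # support edges for gtilde > 1
xm = 2 * np.arccos(np.sqrt(1 / g))
print(f"x_- = {xm:.4f}, x_+ = {2*np.pi - xm:.4f}")
\end{verbatim}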
One can check the normalization \begin{equation} \int_0^{2 \pi} dx \tilde \rho(x) = \frac{2 \tilde g}{\pi} \int_{- \sqrt{\frac{1}{\tilde g}} }^{\sqrt{\frac{1}{\tilde g}} } du \sqrt{\frac{1}{\tilde g}-u^2} = 1 \;, \end{equation} which shows that \eqref{mu2} is the correct value of $\mu$. For $\tilde g \to 1^+$ one has $x_-\simeq 2 \sqrt{\tilde g-1}$, and for $\tilde g \to +\infty$ one has $x_\pm \to \pi$ (all the fermions concentrate around $x=\pi$). For $\tilde g=1$ the formulae \eqref{densGW1} and \eqref{densGW2} become identical. The phase transition in the density obtained above recovers the results of \cite{GW1980,Wadia}, obtained by a different method, upon the identification $\tilde g=2/\lambda$ (and $x \to x+\pi$) in the notations of \cite{GW1980}. In these papers the partition function (i.e., the normalization constant of the probability measure in \eqref{GWProba}) was computed and shown to exhibit a third order phase transition at $\tilde g=1$ (for a recent review, see \cite{MS2014}). In the fermion system this transition at $\tilde g=1$ can be seen as a freezing transition for the Fermi energy as a function of the coupling strength $\tilde g$ (see Figs.~\ref{VofxGrossWitten} and \ref{fig:muvsgtilde}) \begin{equation} \label{eq:muvsgtilde} \mu=\begin{cases} \frac{N^{2}}{8}\;, & \tilde{g}<1\quad\text{(weak coupling)}\\ \frac{N^{2}}{8}\left(2\tilde{g}-\tilde{g}^{2}\right)\;, & \tilde{g}>1\quad\text{(strong coupling)} \end{cases} \end{equation} This transition coincides with the opening of a gap in the bulk of the fermion density. A similar transition has been recently studied by us for fermions in an inverted parabolic potential~\cite{Smith2020}, and the correlation kernel at the transition was explicitly computed. However, in the present situation the critical behavior is expected to be different. Indeed, near criticality the shape of the potential around $x=0$ is here $V(x)-\mu \sim - x^4$, while it is $V(x) \sim -x^2$ in \cite{Smith2020}. \begin{figure}[t] \centering \includegraphics[angle=0,width=0.49\linewidth]{muVsgtilde.pdf} \caption{The Fermi energy $\mu$ as a function of $\tilde{g} = g/N$, see Eq.~\eqref{eq:muvsgtilde}. The black dot corresponds to the point $\tilde{g} = 1$ where the freezing transition occurs.} \label{fig:muvsgtilde} \end{figure} Finally note that the model for $\beta \neq 2$, i.e., for the interacting fermions in \eqref{GWdef}, is also of interest. Using CG arguments, its density $\tilde \rho_{\beta,\tilde g}(x)$ is obtained from the density for $\beta=2$ by a simple rescaling, i.e., $\tilde \rho_{\beta,\tilde g}(x) = \tilde \rho_{2, \frac{2}{\beta} \tilde g}(x)$ found above. The transition then occurs for $\tilde g = \beta/2$. The correlation functions are expected, however, to depend on $\beta$ and remain to be explored. {\it Fermions on the half-line}. The second interesting extension generalizes the models on the third line of Table \ref{table:mappings}, which are related to the WL$\beta$E. The interaction potential $W$ is the same as in Table \ref{table:mappings}, but the potential $V$ is more general \begin{equation} \label{x6n} V(x) = 2 c_1^2 x^6+2 c_0 c_1 x^4 +\frac{c_2 \left(c_2+2\right)}{8 x^2} + \left(\frac{c_0^2}{2}+c_1 \left(c_2-3\right) - \beta (N-1) 2 c_1 \right) x^2 \;. \end{equation} The ground state wave function is of the form \eqref{P0} with $v(x)= c_0 x^2 + c_1 x^4 + c_2 \log x$.
It corresponds to a matrix model of the form \eqref{jac} upon the map $\lambda(x)=\frac{2}{\beta} x^2$ with a matrix potential \begin{equation} \label{eq:V0HalfLineGeneral} V_{0}(\lambda)=c_{0}\frac{\beta}{2}\lambda+c_{1}\frac{\beta^{2}}{4}\lambda^{2}+\frac{c_{2}+1}{2}\log\lambda\,. \end{equation} The mapping is summarized in the second line of Table~\ref{table:mappingsMore}. {\it Fermions in a box}. The next extension generalizes the models on the fourth line of Table \ref{table:mappings}, which are related to the J$\beta$E. The interaction potential $W(x,y)$ is the same as in Table \ref{table:mappings}, but the potential $V(x)$ contains additional $\cos x$ and $\cos 2 x$ terms, see formula \eqref{period22}. The ground state wave function has the form \eqref{P0} with $v(x)=c_1 \log \sin \frac{x}{2} + c_2 \log \cos \frac{x}{2} + c_3 \cos x$. It corresponds to a matrix model of the form \eqref{jac} upon the map $\lambda(x)=\frac{1}{2}(1- \cos x)$ and matrix potential \begin{equation} \label{eq:V0BoxGeneral} V_{0}(\lambda)= \frac{c_{1}+1}{2}\log\lambda+\frac{c_{2}+1}{2}\log\left(1-\lambda\right)-2c_{3}\lambda \; . \end{equation} The mapping for this model is summarized in the third line of Table \ref{table:mappingsMore}. For $c_3=0$ it recovers the J$\beta$E. For $c_1 = -1$ and $c_3 \neq 0$, this matrix model was studied in \cite{VivoPhD,VMB2008,VMB2010} and its density was calculated using the Coulomb gas method. We show in Appendix \ref{appendix:LDAandCG} that this result agrees with the LDA. Finally, there are some models not related to those in Table \ref{table:mappings}. \noindent {\it Hyperbolic models}. The simplest example is fermions on the real line with the two-body interaction potential \begin{equation} \label{Whyper} W(x,y) = \frac{\beta(\beta-2)}{16 \sinh^2 \frac{x-y}{2} } \;. \end{equation} Its ground state wave function is of the form \eqref{P0} with a two-body term $w(x,y)= - \beta \log|\sinh \frac{1}{2} (x-y)|$. For normalizability of \eqref{P0} one needs a confining potential. The most general family consistent with \eqref{Whyper} is \begin{equation} \label{hypermorseMainText} V(x) = \frac{1}{8} c_1^2 e^{2 x}+\frac{1}{8} c_2^2 e^{-2 x} - \frac{\beta (N-1)}{8} (c_1 e^{x} + c_2 e^{-x}) + \frac{1}{4} c_1 \left(c_0-1\right) e^{x}-\frac{1}{4} c_2 \left(c_0+1\right) e^{-x} \, , \end{equation} a potential of the (generalized) Morse type. The one-body term in the ground state wave function is then $v(x)=c_0 x + c_1 e^{ x} + c_2 e^{- x}$. This model corresponds to a matrix model under the map $\lambda(x)=e^x$ with matrix potential \begin{equation} \label{eq:V0MorseGeneral} V_{0}(\lambda)=c_{1}\lambda+c_{2}\lambda^{-1}+\left[1+\frac{\beta}{2}(N-1)+c_{0}\right]\log\lambda\,. \end{equation} The mapping for this model is summarized in the fourth line of Table \ref{table:mappingsMore}. In the case $c_2=0$ of the Morse potential this relation to the Wishart model was also obtained in \cite{GBD2021}. For $c_2 \neq 0$ the calculation of the mean density $\sigma(\lambda)$ was performed using Coulomb gas methods for some values of the parameters in~\cite{TM2013}. We show that this result agrees with the LDA in Appendix \ref{appendix:LDAandCG}. Note that this matrix model was also studied in various contexts in Refs. \cite{GT2016,LGC2019}. Another interesting example in this class of hyperbolic models corresponds to a ground state wave function of the form \eqref{P0} with $v(x) = a x^2$ and $w(x,y)= - \beta \log|\sinh \frac{1}{2} (x-y)|$.
In that case there is an additional repulsive two-body interaction $\delta W$ on top of the interaction \eqref{Whyper}, of the form $\delta W(x,y)\propto-a\beta\left(x-y\right)\coth\frac{1}{2}\left(x-y\right)$, which never vanishes for any value of $\beta$ (the model is always interacting). This model is related to the Stieltjes-Wigert $\beta$-ensemble (SW$\beta$E) \cite{Forrester1994,ForresterSW2020,DT2007,TK2014,Marino2005}, which was studied in the context of Chern-Simons theory in high energy physics \cite{Marino2006} and of non-intersecting Brownian bridges \cite{ForresterSW2020,TK2012,GMS2021}. The correspondence is through the map (see the discussion below Eq.~(\ref{hypergen}) in Appendix \ref{appendix:mappings}) \begin{equation} \label{eq:maplambdaxSWBetaE} \lambda(x)=e^{x+\frac{1}{2a}\left(1+\frac{\beta}{2}(N-1)\right)} \end{equation} and the matrix potential reads \begin{equation} V_0(\lambda) = a \log^2 \lambda = \frac{\beta}{2} \tilde a \, \log^2 \lambda \;, \end{equation} where $\tilde{a}=2a/\beta$. The mapping for this model is summarized in the sixth line of Table \ref{table:mappingsMore}. For this matrix model the joint PDF of the eigenvalues is determinantal for $\beta=2$, since the model becomes bi-orthogonal \cite{Boro_det,Muttalib1995,Borodin1998}. In the limit of large $N$, scaling $a = O(N)$, the eigenvalue density $\sigma(\lambda)$ is known \cite{Marino2006,ForresterSW2020,GMS2021} \begin{equation} \sigma(\lambda) = \frac{1}{\pi u \lambda} {\rm arctan} \frac{\sqrt{ 4 e^u \lambda - (1+ \lambda)^2} }{1 + \lambda} \;, \end{equation} where $u=N/(2 \tilde a)= O(1)$. From the CG arguments this density is in fact independent of $\beta$. Its support is $\lambda \in [z_-,z_+]$ where $z_{\pm} = - z \pm \sqrt{z^2-1}$ and $z= 1- 2 e^{u}$. Hence we obtain the fermion density for the associated quantum model for any $\beta$ as $\rho(x) = N e^{u/2} e^x \sigma(e^{u/2} e^{x})$. Finally, there are two more hyperbolic models for fermions: one on the positive half-line, which corresponds to the fifth line of Table \ref{table:mappingsMore}, and a second one, which maps to the Cauchy random matrix ensemble and corresponds to the seventh line of Table \ref{table:mappingsMore}. These models are described in Appendix \ref{appendix:mappings}. Let us close this section by indicating yet another family of quantum models where the interaction $W(x,y)$ is a sum of a harmonic attraction $\propto (x-y)^2$ and of the inverse square interaction $\frac{\beta(\beta-2)}{4 (x-y)^2}$. The first case is in an external potential $V(x) \sim x^2$. The second is related to a quartic matrix model $V_0(\lambda)= c_2 \lambda^2 + c_4 \lambda^4$ and corresponds to a fermion model with a polynomial potential with terms $x^2, x^4, x^6$. These models are described in Appendix~\ref{appendix:mappings}. To relate to the main focus of the paper, i.e., the counting statistics, let us point out that many models presented in this section are noninteracting for $\beta=2$. In that case, the methods of \cite{SDMS} summarized in Section \ref{subsec:previous} can be applied to obtain the variance of the number of fermions in an interval. Upon properly scaling the parameters of the model with $\beta$, one can relate the variance of the interacting model to that for $\beta=2$ by relations similar to \eqref{prediction}, with the same constants \eqref{eq:cbetaConjecture0}. Our conjecture for the higher cumulants should also apply.
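As a simple consistency check of the Stieltjes-Wigert density quoted above, one can verify numerically that $\sigma(\lambda)$ integrates to unity over its support $[z_-,z_+]$. A minimal Python sketch (ours, with an illustrative value of $u$):

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Sketch (ours): normalization of the Stieltjes-Wigert eigenvalue density
# sigma(lam) = arctan( sqrt(4 e^u lam - (1+lam)^2) / (1+lam) ) / (pi u lam)
# on its support [z_-, z_+], with z = 1 - 2 e^u and z_pm = -z +- sqrt(z^2-1).
u = 0.7                       # illustrative value, u = N/(2 atilde) = O(1)
z = 1 - 2 * np.exp(u)
zm, zp = -z - np.sqrt(z**2 - 1), -z + np.sqrt(z**2 - 1)

def sigma(lam):
    disc = 4 * np.exp(u) * lam - (1 + lam)**2
    return np.arctan(np.sqrt(max(disc, 0.0)) / (1 + lam)) / (np.pi * u * lam)

norm, _ = quad(sigma, zm, zp, limit=200)
print(f"support = [{zm:.4f}, {zp:.4f}], normalization = {norm:.6f}")  # -> 1
\end{verbatim}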
\section{Discussion and conclusion} In summary, we calculated the counting statistics for several models of $N \gg 1$ interacting spinless fermions in their ground state in one dimension, confined by an external potential, see Tables \ref{table:mappings} and \ref{table:mappingsMore}. The interactions are of the general Calogero-Sutherland type, and depend on the parameter $\beta$, where $\beta=2$ corresponds to the noninteracting case. We have emphasized the connections to random matrix ensembles, where $\beta$ is the Dyson index. We found that the variance of the number of fermions in a macroscopic interval $[a,b]$ in the bulk of the Fermi gas grows with $N$ as $A_{\beta} \log N + B_{\beta} + o(1)$. We obtained explicit formulae for $A_\beta$ and $B_\beta$, which depend on $a,b$, on the type of interaction and on the shape of the confining potential. These results were obtained by explicit calculations for $\beta\in\left\{ 1,2,4\right\} $ and from a conjecture that we formulated for general $\beta$. This conjecture extends to the higher-order cumulants of the distribution of $\mathcal{N}_{[a,b]}$. They are $O(1)$ for $N \gg 1$ and are predicted here to be given by \eqref{cumulants_vs_ck}. Remarkably, this result is universal: it does not depend on the confining potential. This is because the conjecture states that the short scales determine the $O(1)$ part of the fluctuations of the particle number. We have obtained a few ``smoking gun'' tests for this conjecture. First, we have shown that it matches, in a highly nontrivial way, recent results from the mathematics literature near the edge of the Fermi gas \cite{BK18}. Second, we have shown that our analytical predictions are in very good agreement with our numerical simulations. In addition we have shown that the leading term $A_\beta$ is in agreement with the predictions of the Luttinger liquid theory with parameter $K = 2/\beta$. Finally, we formulated a general approach for obtaining mappings between interacting fermion models in one dimension in their ground state and random matrix models (or, more generally, models of classical interacting particles confined by an external potential in thermal equilibrium). We applied this approach and found several such mappings. In particular we found a surprising mapping of the famous RMT Gross-Witten-Wadia model from high energy physics onto noninteracting fermions in an external potential on a circle. A simple application of the LDA allows one to obtain the mean fermion density in that case, recovering results for this model previously obtained by more involved Coulomb gas methods. In turn, we have shown that these Coulomb gas methods can be used to study interacting fermions in a trapping potential. We exploited these mappings to obtain the mean fermion density for these models for general interaction parameter $\beta$ by relating them to the noninteracting case $\beta=2$. Similarly, we argued that the counting statistics in these models can be calculated by relating them to the noninteracting case. Our results hold also for Dyson indices $0 < \beta < 1$ which, although meaningless for the fermion systems on which we focused here (since for fermions $\beta \ge 1$), are meaningful for the RMT ensembles. The scaling limit $\beta \sim 1/N$ has generated much interest recently, and it would be interesting to study the counting statistics in this limit \cite{Allez12,Allez13,Trinh19,Trinh20, Forrester21}.
Among the connections unveiled in this paper, e.g., with the models in Table \ref{table:mappingsMore}, many interesting questions remain to be explored. In particular one may wonder whether the universality of the higher cumulants of the fermion number can be extended to more general interacting models, and whether one can derive formulae for the variance in more general potentials. It would also be interesting to test this universality when perturbing the interaction term away from the Calogero-Sutherland type studied here. For noninteracting fermions, the counting statistics is connected to the bipartite entanglement entropy (EE) of the subsystem ${\cal D}$ with its complement $\overline{\cal D}$ \cite{Kli06,KL09,CalabreseMinchev2,Hur11}. Given the results of the present work, it would be interesting to search for similar (perhaps approximate) connections for interacting fermions in order to calculate the EE. Finally, it would be interesting to extend our approach to higher dimensions. In particular, there is a known mapping between noninteracting fermions in a 2d rotating harmonic trap and random matrices of the complex Ginibre ensemble \cite{LMG19,KMS21}. It remains a challenge to extend this mapping to more general cases. \section*{Acknowledgements} PLD thanks Y. V. Fyodorov for an earlier collaboration on related topics. We thank A.~Borodin and P.~J. Forrester for useful correspondence. We thank D. S. Dean and C. Salomon for interesting discussions. We thank T. Bothner for useful comments on the manuscript. We thank M. Beau for pointing out the recent references \cite{delCampo20, Beau21} about ground states in Calogero-type models and their extensions in higher dimensions. NRS acknowledges support from the Yad Hanadiv fund (Rothschild fellowship). This research was supported by ANR grant ANR-17-CE30-0027-01 RaMaTraF. \begin{appendix} \section{Interacting fermion models with ground state of the form \eqref{P0} and mappings to RMT}\label{appendix:mappings} In this Appendix we recall the construction of quantum Hamiltonians with two-body interactions in one dimension~(\ref{H}), whose ground state wave function has itself a two-body form as in Eq.~\eqref{P0}. This question was pioneered by Calogero \cite{Calogero1975} (following Sutherland \cite{Sutherland1971c,Sutherland1975}) and extended in Refs. \cite{IM1984,Forrester1994,Wagner2000, delCampo20, Beau21}. In some cases these models are also fully integrable (i.e., their full eigenspectrum is known), see e.g. \cite{SutherlandBook,Forrester,IM1985,LangmannElliptic}. Here we also discuss the construction of the ground state in the light of the connections to random matrix ensembles. In particular we perform a search for models using the map $\lambda(x)$ which relates RMT to fermions. \subsection{Schr\"odinger equation and general conditions for two-body-only interaction} \label{sec:gen} Consider the following unnormalized wave function, $\Psi_0(\vec x)= e^{- U(\vec x)/2}$ (defined up to a sign in an ordered sector), where $U(\vec x)=\sum_i v(x_i) + \sum_{i<j} w(x_i,x_j)$ has the two-body form \eqref{P0}. A necessary condition for it to be the ground state of the two-body Hamiltonian ${\cal H}_N$ in \eqref{H} with energy $E_0$ is that ${\cal H}_N \Psi_0(\vec x) = E_0 \Psi_0(\vec x)$.
Substituting and multiplying by $e^{U(\vec x)/2}$ on both sides one gets \begin{eqnarray} && \hspace*{-1.5cm} \sum_i V\left(x_{i}\right)+\sum_{i<j}W\left(x_{i},x_{j}\right) - E_0 = e^{U(\vec x)/2} \frac{1}{2} \sum_i \partial_{x_i}^2 e^{- U(\vec x)/2} = - \frac{1}{4} \sum_i U''_{ii} + \frac{1}{8} \sum_i (U'_i)^2 \label{A1} \\ && \qquad =-\frac{1}{4}\left[\sum_{i}v''(x_{i})+ \sum_{i\neq j} w_{20}(x_{i},x_{j})\right]+\frac{1}{8}\sum_{i}\left[v'(x_{i})+\sum_{j\neq i}w_{10}(x_{i},x_{j})\right]^{2} \nonumber\\ && \qquad =T_{1}+T_{2}+T_{2}'+T_{3} \;, \end{eqnarray} where $T_n$ denotes a term which is naively $n$-body. Here, and in the following, we use the notation $U'_i = \partial_{x_i} U(\vec{x})$ and similarly $U''_{ii} = \partial^2_{x_i} U(\vec{x})$. We recall that $w(x,y)$ is a symmetric function and denote by subscripts the order of its partial derivatives. These terms are \begin{eqnarray} && T_1= \sum_i V^{(1)}(x_i) \quad , \quad V^{(1)}(x)= \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \\ && T_2 = \sum_{i<j} W^{(1)}(x_i,x_j) \quad , \quad W^{(1)}(x,y) = W^{(1,1)}(x,y) + W^{(1,2)}(x,y) \\ && W^{(1,1)}(x,y)=-\frac{1}{4}\left[w_{20}(x,y)+w_{02}(x,y)\right]\;,\\ && W^{(1,2)}(x,y)=\frac{1}{8}\left[w_{10}(x,y)^{2}+w_{10}(y,x)^{2}\right] \\ && T_2'= \sum_{i < j} W^{(2)}(x_i,x_j) \quad , \quad W^{(2)}(x,y)=\frac{1}{4} ( v'(x) w_{10}(x,y) + v'(y) w_{10}(y,x) ) \label{Tprime2} \\ && T_3 = \frac{1}{8} \sum_{j\neq i, k \neq i, j \neq k} w_{10}(x_i,x_j) w_{10}(x_i,x_k) \nonumber\\ &&\quad= \frac{1}{8} \sum_{i<j<k} \sum_{\tau\in S_{3}} w_{10}(x_{\tau\left(i\right)},x_{\tau\left(j\right)}) w_{10}(x_{\tau\left(i\right)},x_{\tau\left(k\right)}) \;, \label{T3} \end{eqnarray} where we have split the term $\frac{1}{8} \sum_{i} \sum_{j\neq i}w_{10}(x_{i},x_{j}) \sum_{k\neq i}w_{10}(x_{i},x_{k})$ into the terms $j=k$ (in $T_2$) and $j \neq k$ (in $T_3$). In the cases that we will study, these terms will drastically simplify and turn out to be constants (and sometimes zero). As a result, the ground state energy $E_0$ will be determined. To obtain a two-body Hamiltonian we must thus impose the condition that the three-body interactions are absent. This amounts to a condition on $w(x,y)$ so that $T_3$ can be written as a two-body term, a one-body term, or a constant. We will search for solutions to this condition in two possible forms, $w(x,y)=w(x-y)$ and $w(x,y)= -\beta \log|\lambda(x)-\lambda(y)|$. Asking that $T_3$ is a constant, or one-body, then allows for a systematic search. This leads to a set of quantum models with two-body interactions $W(x,y)=(W^{(1)} + W^{(2)})|_{\rm 2 body}$, with a specific family of interactions $T_2|_{\rm 2 body}$ which vanishes for $\beta=2$, while $T_2'|_{\rm 2 body}$ depends on $v(x)$. For some specific choices of $v(x)$, which we identify, one has $T_2'|_{\rm 2 body}=0$, leading to simpler quantum models in an external potential which become noninteracting for $\beta=2$. In the next subsection we list the models obtained by this method, and in the following one we explain how one searches for these models. Remark: The term $T_1$ has the form of potentials from supersymmetric quantum mechanics. More generally the above equation \eqref{A1} is equivalent to \begin{equation} H-E_{0}=\frac{1}{2}\sum_{i}\left(-\partial_{x_{i}}+\frac{U'_{i}}{2}\right)\left(\partial_{x_{i}}+\frac{U'_{i}}{2}\right) \;. \end{equation} For repulsive interactions $\beta > 2$, the mappings described here are expected to hold for bosons too.
For bosons, it is the repulsive interaction that causes the many-body wave function $\Psi_0$ to vanish at $x_i = x_j$ for $i \ne j$. \subsection{Families of models} We consider here different kinds of models. Some are defined on the real line (or the half-line) and require a confining potential $v(x)$ in order for $\Psi_0$ to be normalizable. The others are called ``periodic'' models, and are defined either on the circle or on an interval, in which case $v(x)$ may be chosen to be zero. We recall that when there is a mapping $x \mapsto \lambda(x)$ between the fermion models with potential $v(x)$ and a matrix model (\ref{F}) with a matrix potential $V_0(\lambda)$, the relation between the two potentials reads \begin{eqnarray} \label{rel_v_V0} v(x)=V_0(\lambda(x))- \log|\lambda'(x)| \;. \end{eqnarray} Note that $w$ and $v$ in \eqref{P0} are defined up to an irrelevant additive constant which can be absorbed into the normalisation of $\Psi_0$. Depending on $v(x)$, one may also extract a one-body part from $W(x,y)$ and add it to $V(x)$, and extract constant parts from $W, V$ and add them to $-E_0$, where $E_0$ below denotes the ground state energy. \\ \noindent{\bf Logarithmic models}. In this class, the first set of models is, for $x,y$ on the real axis \begin{eqnarray} \label{loggen} \!\!\!\!\!\!\!\! w(x,y)&=& - \beta \log|x-y| \quad , \quad \lambda(x)=x \quad , \quad T_3=0 \\ \!\!\!\!\!\!\!\! W(x,y) &=& \frac{\beta(\beta-2)}{4 (x-y)^2} - \frac{\beta}{4} \frac{v'(x)-v'(y)}{x-y} \; , \quad V(x)= \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \; , \quad E_0= 0 \, . \end{eqnarray} In this set of models the only normalizable choice of $v(x)$ which does not contribute to the two-body interaction $W$ is $v(x)= a x^2$. It corresponds to the quantum model \cite{footnote:E0} \begin{equation} \label{logG} V(x)=\frac{a^2}{2} x^2 \quad,\quad W(x,y) = \frac{\beta(\beta-2)}{4 (x-y)^2} \quad,\quad E_{0}=\frac{\beta a}{4}N(N-1)+\frac{N a}{2} \, . \end{equation} This corresponds to the {\bf G$\beta$E}, for which the canonical choice, given in the text, is $a=1$, $\lambda(x)=\sqrt{\frac{2}{\beta}}\,x$ and $V_0(\lambda)=\beta\lambda^{2}/2$. For the noninteracting case $\beta=2$ (setting $a=1$) one recovers that $E_0$ is the sum of the energies of the single-particle states up to the Fermi energy, $E_0= \sum_{n=0}^{N-1} (n + \frac{1}{2})$. \\ The second set of models is, for $x,y$ on the positive real axis \cite{footnoteHalf} \begin{eqnarray} \label{logimagegen} && w(x,y)= - \beta \log|x^2-y^2| \quad , \quad \lambda(x)=x^2 \quad , \quad T_3=0 \\ && W(x,y) = \frac{\beta(\beta-2)}{4} \left( \frac{1}{(x-y)^2} + \frac{1}{(x+y)^2} \right) - \frac{\beta}{2} \frac{ x v'(x)- y v'(y)}{x^2-y^2} \; ,\nonumber\\ && \quad V(x)= \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \; , \quad E_0= 0 \, . \nonumber \end{eqnarray} In this set of models the only normalizable choice of $v(x)$ which does not contribute to the two-body interaction is $v(x)= c_0 x^2 + c_1 x^4 + c_2 \log x$, which corresponds to the quantum model \cite{footnote:E0} \begin{eqnarray} \label{x6} \hspace*{-1.cm}V(x) &=& 2 c_1^2 x^6+2 c_0 c_1 x^4+\left(\frac{c_0^2}{2}+c_1 \left(c_2-3\right) - \beta (N-1) 2 c_1 \right) x^2+\frac{c_2 \left(c_2+2\right)}{8 x^2} \\ \label{x6W} \hspace*{-1cm} W(x,y) &=& \frac{\beta(\beta-2)}{4} \left( \frac{1}{(x-y)^2} + \frac{1}{(x+y)^2} \right) \; , \\ \label{x6E0} \hspace*{-1cm}E_{0} &=& \beta c_0 \frac{N(N-1)}{2} + \frac{1}{2} c_0 (1-c_2) N \, .
\end{eqnarray} This contains the case of the {\bf WL$\beta$E} with the canonical choice, given in the text \begin{equation} c_0=1 \quad , \quad c_1=0 \quad , \quad c_2=- (1 + 2 \gamma) \quad , \quad \lambda(x)=\frac{2}{\beta} x^2 \quad , \quad V_0(\lambda)= \frac{\beta}{2} \lambda - \gamma \log \lambda \end{equation} which leads to \begin{eqnarray} &&V(x)=\frac{x^{2}}{2}+\frac{\gamma^{2}-\frac{1}{4}}{2x^{2}} \;,\quad W(x,y) = \frac{\beta(\beta-2)}{4} \left( \frac{1}{(x-y)^2} + \frac{1}{(x+y)^2} \right) \;, \\ && E_{0}=\frac{\beta}{2}N(N-1)+(\gamma+1)N \, . \end{eqnarray} For $\beta=2$ one recovers the energy $E_0= \sum_{n=0}^{N-1} (2 n + 1 + \gamma)$. However there is a larger class of potentials which correspond to matrix models with matrix potentials $V_0(\lambda) = c_0 \frac{\beta}{2} \lambda + c_1 \frac{\beta^2}{4} \lambda^2 + \frac{c_2+1}{2} \log \lambda$. \\ \noindent{\bf Periodic models}. In this class, the first set of models is defined on the circle with $p x \in [0,2 \pi[$ \begin{eqnarray} \label{periodicgen} && w(x,y)= - \beta \log|\sin \frac{p}{2} (x-y)| \quad , \quad T_3= - \frac{1}{8} \frac{N(N-1)(N-2)}{3} \beta^2 \frac{p^2}{4} \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16 \sin^2 \frac{p}{2} (x-y)} - \frac{\beta p}{8} (v'(x)-v'(y)) \cot \frac{p}{2} (x-y) \; , \\ && V(x)= \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \\ && E_0= \frac{\beta^2 p^2}{32} ( N(N-1) + \frac{N(N-1)(N-2)}{3} ) = \frac{\beta^2 p^2}{32} \frac{N(N-1)(N+1)}{3} \;. \end{eqnarray} It contains the {\bf C$\beta$E} which is obtained for $v(x)=0$. The canonical choice given in the text is $p=1$. One can check that for $\beta=2$ the ground state energy is exactly equal to the sum of the energies of the single-particle states, e.g. for $N$ odd one has $E_0=\sum_{k=-\frac{N-1}{2}}^{\frac{N-1}{2}} \frac{k^2}{2} = \frac{N(N-1)(N+1)}{24}$. \\ In this set the only choice of $v(x)$ which does not generate a two-body interaction is $v(x)=b \cos(p x)$ (up to translations on the circle), which leads to the quantum model on the circle \begin{eqnarray} \label{periodicGW} V(x) &=& b\frac{p^{2}}{4}\left(1+\frac{N-1}{2}\beta\right)\cos(px) - \frac{1}{8} {b^2} p^2 \cos^2(p x) \; , \\ W(x,y) &=& \frac{p^2}{16} \frac{\beta(\beta-2)}{\sin^2 \frac{p}{2} (x-y)} \; , \\ E_0 &=& \frac{\beta^2 p^2}{16} \frac{N(N-1)}{2} + \frac{1}{8} \frac{N(N-1)(N-2)}{3} \beta^2 \frac{p^2}{4} - N \frac{b^2 p^2}{8} \;. \label{E0GW} \end{eqnarray} For $\beta=2$ this is the Gross-Witten-Wadia model discussed in the text. \\ The second set of models is defined for $p x \in [0,\pi]$ and corresponds to the choice $w(x,y)=- \beta \log\left|\cos p x - \cos p y\right|$, which is equivalent to the choice \begin{eqnarray} \label{periodicJac} && w(x,y)=-\beta\log\left|\sin\frac{p}{2}(x-y)\right|\left|\sin\frac{p}{2}(x+y)\right|~,~~\lambda(x)=\frac{1}{2}\left(1-\cos(px)\right)=\sin^{2}\frac{px}{2}\;,\nonumber\\ && T_{3}=-\frac{1}{8}\frac{N(N-1)(N-2)}{3}\beta^{2}p^{2} \; , \nonumber \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sin^2 \frac{p (x-y)}{2} } + \frac{1}{\sin^2 \frac{p (x+y)}{2}} \right) +\frac{\beta p \left(\sin (p x) v'(x)-\sin (p y) v'(y)\right)}{4 (\cos (p x)-\cos (p y))} \nonumber\\\\ && E_0= \frac{\beta^2 p^2}{8} \frac{N(N-1)}{2} + \frac{1}{8} \frac{N(N-1)(N-2)}{3} \beta^2 p^2 \;.
\end{eqnarray} In this set the only choice of $v(x)$ which does not contribute to the two-body interaction is $v(x)=c_1 \log \sin \frac{p x}{2} + c_2 \log \cos \frac{p x}{2} + c_3 \cos p x$, which leads to the quantum model on the interval \cite{footnote:E0} \begin{eqnarray} \label{period22} && V(x)= \frac{c_3 p^2}{8} (2-c_1-c_2+ 2 \beta (N-1)) \cos(p x) - \frac{c_3^2 p^2}{16} \cos(2 p x) \nonumber\\ &&\qquad + \frac{p^2 c_1 (2+ c_1)}{32 \sin^2\frac{p x}{2}} + \frac{p^2 c_2 (2+ c_2)}{32 \cos^2\frac{p x}{2}} \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sin^2 \frac{p (x-y)}{2} } + \frac{1}{\sin^2 \frac{p (x+y)}{2}} \right) \\ \label{period22E0} && E_{0}=\frac{\beta^{2}p^{2}}{8}\frac{N(N-1)}{2} (1-c_1-c_2) +\frac{1}{8}\frac{N(N-1)(N-2)}{3}\beta^{2}p^{2} \nonumber\\ &&\qquad +\frac{p^{2}N}{32}\left[\left(c_{1}+c_{2}\right)^{2}+2c_{3}\left(2c_{1}-2c_{2}-c_{3}\right)\right] \; . \end{eqnarray} In this set of models, choosing $c_3=0$, $c_1=-(2 \gamma_1+1)$, $c_2=-(2 \gamma_2+1)$ we obtain the Jacobi box potential which corresponds to the {\bf J$\beta$E}. Let us set $p=1$, i.e., $L=\pi$ for the box, and define the map $\lambda(x)=\frac{1}{2} (1- \cos x)= \sin^2 \frac{x}{2}$ and $1- \lambda(x)=\frac{1}{2} (1+ \cos x) = \cos^2 \frac{x}{2}$. The matrix potential becomes $V_{0}(\lambda)=-\gamma_{1}\log\lambda-\gamma_{2}\log(1-\lambda)$ hence (for $0<x<\pi$) \begin{equation} v(x)=V_{0}(\lambda(x))-\log\left|\lambda'(x)\right|=-\left(\gamma_{1}+\frac{1}{2}\right)\log\sin^{2}\frac{x}{2}-\left(\gamma_{2}+\frac{1}{2}\right)\log\cos^{2}\frac{x}{2} \;. \end{equation} In summary we have \begin{eqnarray} && \!\!\!\!\!\!\!\! V(x)=\frac{1}{8}\left(\frac{\gamma_{1}^{2}-\frac{1}{4}}{\sin^{2}\frac{x}{2}}+\frac{\gamma_{2}^{2}-\frac{1}{4}}{\cos^{2}\frac{x}{2}}\right)\;,\quad W(x,y)=\frac{\beta(\beta-2)}{16}\left(\frac{1}{\sin^{2}\frac{x-y}{2}}+\frac{1}{\sin^{2}\frac{x+y}{2}}\right) \\ && \!\!\!\!\!\!\!\! E_{0}=\frac{\left(\gamma_{1}+\gamma_{2}+1\right)^{2}N}{8}+\frac{\beta^{2}N(N-1)}{16}+\frac{\beta N(N-1)\left(\gamma_{1}+\gamma_{2}+1\right)}{8} \nonumber\\ && \!\!\!\!\!\!\!\!+\frac{\beta^{2}N(N-1)(N-2)}{24} \; . \label{E0123} \end{eqnarray} For $\beta=2$, using the single-particle energy levels $\epsilon_{n}=\frac{1}{2}\left(n+\frac{\gamma_{1}+\gamma_{2}+1}{2}\right)^{2}$, one finds that $E_0 = \sum_{n=0}^{N-1}\epsilon_{n} = \frac{N\left[6N\left(\gamma_{1}+\gamma_{2}\right)+3\left(\gamma_{1}+\gamma_{2}\right)^{2}+4N^{2}-1\right]}{24}$ which coincides with the formula (\ref{E0123}) specialised to $\beta = 2$. \vspace*{0.5cm} \noindent{\bf Hyperbolic models}. In this class, the first set of models is defined on the real axis \begin{eqnarray} \label{hypergen} && \!\!\!\!\!\!\!\! w(x,y) = -\beta\log\left|\sinh\frac{p}{2}(x-y)\right|\quad,\quad T_{3}=\frac{1}{8}\frac{N(N-1)(N-2)}{3}\beta^{2}\frac{p^{2}}{4} \\ && \!\!\!\!\!\!\!\! W(x,y) = \frac{\beta(\beta-2) p^2}{16 \sinh^2 \frac{p}{2} (x-y)} - \frac{\beta p}{8} (v'(x)-v'(y)) \coth \frac{p}{2} (x-y) \; , \nonumber\\ && \!\!\!\!\!\!\!\! V(x) = \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \\ && \!\!\!\!\!\!\!\! E_0 = -\frac{\beta^{2}p^{2}}{32}\left[N(N-1)+\frac{N(N-1)(N-2)}{3}\right]=-\frac{\beta^{2}p^{2}}{32}\frac{N(N-1)(N+1)}{3} \; . \end{eqnarray} Note that this model $(w,v)$ is equivalent to the model $(\tilde w, \tilde v)$ where $\tilde w(x,y)= - \beta \log|\lambda(x)-\lambda(y)|$ with $\lambda(x)=e^{p x}$ and $\tilde v(x)=v(x) + \frac{\beta p}{2} (N-1) x$.
It is thus equivalent to a matrix model with the matrix potential $V_0(\lambda)$ such that $\tilde v(x) = V_0(e^{p x}) - p x$. For the choice $v(x) = a x^2$ this model is related to the Stieltjes-Wigert $\beta$ ensemble \cite{Forrester1994,ForresterSW2020} as discussed in the text, where we have also used the parametrization $(\tilde w, \tilde v)$ to obtain Eq. \eqref{eq:maplambdaxSWBetaE}. In this set the only choice of $v(x)$ which does not contribute to the two-body interaction is $v(x)=c_0 x + c_1 e^{p x} + c_2 e^{-p x}$. This leads to the quantum model \cite{footnote:E0} \begin{eqnarray} \label{hypermorse} && V(x) = \frac{1}{8} c_1^2 p^2 e^{2 p x} +\frac{1}{8} c_2^2 p^2 e^{-2 p x}+\frac{1}{4} c_1 p \left(c_0-p\right) e^{p x}-\frac{1}{4} c_2 p \left(c_0+p\right) e^{-p x} \\ && \qquad - \frac{\beta p^2}{8} (N-1) (c_1 e^{p x} + c_2 e^{-p x}) \nonumber \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16 \sinh^2 \frac{p}{2} (x-y)} \\ \label{hypermorseE0} && E_0 = \frac{N}{8} \left(2 c_1 c_2 p^2-c_0^2\right) - \frac{\beta^2 p^2}{32} \frac{N(N-1)(N+1)}{3} \;, \end{eqnarray} which has a potential of the (generalized) Morse type. Note that the ground state energy behaves as $\propto - N^3$, which is surprising for a repulsive interaction (say $\beta >2$). This is because the confining potential (necessary for normalization) has a deep minimum at energy $\propto - N^2$. Note that it corresponds to a matrix model with matrix potential \begin{equation} V_0(\lambda) = c_1 \lambda + c_2 \lambda^{-1} + \left(1 + \frac{\beta}{2} (N-1) + \frac{c_0}{p}\right) \log \lambda \; \quad , \quad \lambda \geq 0 \;. \end{equation} \\ The second set of models is defined on the positive half line and corresponds to the choice $w(x,y)=- \beta \log|\cosh p x - \cosh p y|$, which is equivalent to the choice \begin{eqnarray} w(x,y) &=& -\beta\log\left|\sinh\frac{p}{2}(x-y)\right|\left|\sinh\frac{p}{2}(x+y)\right|~~,~~\lambda(x)=\cosh(px)~~, \\ T_{3}&=&\frac{1}{8}\frac{N(N-1)(N-2)}{3}\beta^{2}p^{2} \\ \label{hypersecondset} W(x,y) &=& \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sinh^2 \frac{p (x-y)}{2} } + \frac{1}{\sinh^2 \frac{p (x+y)}{2}} \right) \nonumber\\ &-& \frac{\beta p \left(\sinh (p x) v'(x)-\sinh (p y) v'(y)\right)}{4 (\cosh (p x)-\cosh (p y))} \\ E_0 &=& - \frac{\beta^2 p^2}{8} \frac{N(N-1)}{2} - \frac{1}{8} \frac{N(N-1)(N-2)}{3} \beta^2 p^2 \;. \end{eqnarray} In this set the only choice of $v(x)$ which does not contribute to a two-body interaction is $v(x)=c_1 \log \sinh \frac{p x}{2} + c_2 \log \cosh \frac{p x}{2} + c_3 \cosh p x$, which leads to the quantum model \cite{footnote:E0} \begin{eqnarray} \label{hypermorse2} && V(x)= \frac{c_{3}p^{2}}{8}\left[-2+c_{1}+c_{2}-2\beta(N-1)\right]\cosh(px)+\frac{c_{3}^{2}p^{2}}{16}\cosh(2px) \nonumber\\ && \qquad +\frac{p^{2}c_{1}(2+c_{1})}{32\sinh^{2}\frac{px}{2}}-\frac{p^{2}c_{2}(2+c_{2})}{32\cosh^{2}\frac{px}{2}} \\ \label{hypermorse2W} && W(x,y) = \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sinh^2 \frac{p (x-y)}{2} } + \frac{1}{\sinh^2 \frac{p (x+y)}{2}} \right) \\ \label{hypermorse2E0} && E_{0}=-\frac{\beta^{2}p^{2}N(N-1)}{16}-\frac{N(N-1)(N-2)}{24}\beta^{2}p^{2}+\frac{\beta p^{2}}{8}\frac{N(N-1)}{2}(c_{1}+c_{2}) \nonumber\\ &&\quad\; -\frac{p^{2}N}{32}\left[\left(c_{1}+c_{2}\right)^{2}+2c_{3}\left(2c_{1}-2c_{2}-c_{3}\right)\right] \; . 
\end{eqnarray} It corresponds to a matrix model with matrix potential \begin{equation} \label{hypermorseV0} V_{0}(\lambda)=\frac{c_{1}+1}{2} \log(\lambda-1) +\frac{c_{2}+1}{2}\log(\lambda+1)+c_{3}\lambda \quad , \quad \lambda \in [1,+\infty[ \;. \end{equation} The mapping for this model is summarized in the fifth line of Table \ref{table:mappingsMore}. Finally, there is a third set of models, defined on the line or on the positive half-line, which corresponds to the choice $w(x,y)=- \beta \log|\sinh p x - \sinh p y|$, which is equivalent to the choice \begin{eqnarray} \label{hypernewset} && w(x,y)=-\beta\log\left|\sinh\frac{p}{2}(x-y)\right|\left|\cosh\frac{p}{2}(x+y)\right|~~,~~\lambda(x)=\sinh(px)~~, \nonumber\\ &&T_{3}=\frac{1}{8}\frac{N(N-1)(N-2)}{3}\beta^{2}p^{2} \nonumber \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sinh^2 \frac{p (x-y)}{2} } - \frac{1}{\cosh^2 \frac{p (x+y)}{2}} \right) \nonumber\\ &&\qquad +\frac{\beta p \left(\cosh (p y) v'(y)-\cosh (p x) v'(x)\right)}{4 (\sinh (p x)-\sinh (p y))} \\ && E_0= - \frac{\beta^2 p^2}{8} \frac{N(N-1)}{2} - \frac{1}{8} \frac{N(N-1)(N-2)}{3} \beta^2 p^2 \;. \end{eqnarray} In this set the only choice of $v(x)$ which does not contribute to the two-body interaction is $v(x)=c_1 \arctan \sinh p x + c_2 \log \cosh p x + c_3 \sinh p x$, which leads to the quantum model~\cite{footnote:E0} \begin{eqnarray} \label{hypermorse3} && V(x)= \left(c_1^2-c_2 \left(c_2+2\right)\right) \frac{p^2}{8 \cosh^2(p x)} + \frac{p^2}{8} c_3^2 \cosh ^2(p x)+\frac{1}{4} c_1 \left(c_2+1\right) p^2 \frac{\sinh p x}{\cosh^2 p x} \nonumber \\ && + \frac{p^2}{4} c_3 (c_2-1- \beta (N-1)) \sinh(p x) \\ && W(x,y) = \frac{\beta(\beta-2) p^2}{16} \left( \frac{1}{\sinh^2 \frac{p (x-y)}{2} } - \frac{1}{\cosh^2 \frac{p (x+y)}{2}} \right) \\ \label{hypermorse3E0} && E_0= -\frac{\beta^{2}p^{2}}{8}\frac{N(N-1)}{2}-\frac{N(N-1)(N-2)}{24}\beta^{2}p^{2} \nonumber\\ &&\quad +\beta p^{2}\frac{N(N-1)}{8}c_{2}-\frac{N}{8}\left(c_{2}^{2}+2c_{1}c_{3}\right)p^{2} \;. \end{eqnarray} Note however that for normalizability on the whole axis one needs $c_3=0$. For $c_3=0$ the potential $V(x)$ is known as the hyperbolic Scarf potential \cite{scarf}. It corresponds to a matrix model \begin{equation} \label{V0Cauchy} V_0(\lambda) = c_3 \lambda + c_1 \arctan(\lambda) + \frac{c_2+1}{2} \log(1+ \lambda^2) \; . \end{equation} The mapping for this model is summarized in the seventh line of Table \ref{table:mappingsMore}. In the case $c_3=0$ this model is the generalized Cauchy beta ensemble (Ca$\beta$E) studied, e.g., in \cite{BO2001,MMS2014}. The joint PDF of eigenvalues has the form \begin{equation} P( \vec \lambda) \propto \prod_j \frac{1}{(1+ i \lambda_j)^{a + i b}} \frac{1}{(1- i \lambda_j)^{a - i b} } \times \prod_{i<j} |\lambda_i - \lambda_j|^\beta \;, \end{equation} with $a=\frac{1+c_2}{2}$ and $b=-c_1/2$. In the case $c_1=0$ (i.e., $b=0$) and $a = \frac{\beta}{2}(N-1)+1$, the Ca$\beta$E is related to the circular ensemble C$\beta$E via the stereographic projection $e^{i \theta}=\frac{1+ i \lambda}{1- i \lambda}$. As a result the exact density $\sigma_N(\lambda)$ is known for any finite $N$ from the fact that it is uniform on the circle \cite{Forrester,MMS2014} and it is given by \begin{eqnarray}\label{densityCa} \sigma_{N}(\lambda) = \frac{1}{\pi} \frac{1}{1+\lambda^2} \;, \end{eqnarray} independently of $N$ and $\beta$. This mapping does not, however, relate the Schr\"odinger operators on the circle and on the line, so it does not extend to the quantum model.
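The density \eqref{densityCa} follows from a one-line Jacobian computation, which can also be checked symbolically. A minimal sympy sketch (ours):

\begin{verbatim}
import sympy as sp

# Sketch (ours): the stereographic projection e^{i theta} = (1+i lam)/(1-i lam)
# gives theta = 2 arctan(lam). Pushing forward the uniform CbetaE measure
# d theta/(2 pi) yields sigma_N(lam) = |d theta/d lam| / (2 pi).
lam = sp.symbols('lam', real=True)
theta = 2 * sp.atan(lam)
sigma = sp.simplify(sp.diff(theta, lam) / (2 * sp.pi))
print(sigma)   # 1/(pi*(lam**2 + 1)), i.e. Eq. (densityCa)
\end{verbatim}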
In fact the potential in the fermion model associated to the case $c_1=c_3=0$ is the P\"oschl-Teller potential, as can be seen from \eqref{hypermorse3} (setting $p=1$) \begin{equation} V(x)= - \frac{c_2(c_2+2)}{8 \cosh^2 x} \;. \end{equation} For $\beta=2$ (noninteracting fermions) the LDA approximation at large $N$ for the fermion density is $\rho(x)=\frac{1}{\pi}\sqrt{2\mu+\frac{c_{2}(c_{2}+2)}{4\cosh^{2}x}}$. It is easy to check that this formula is compatible with the exact result \eqref{densityCa}, which holds for $c_2 \simeq 2N$ at large $N$. This determines the value of the Fermi energy as $\frac{\mu}{N^2} \simeq 0$, which means that in the ground state the potential well, which has only a finite number of energy levels, is almost full. \\ {\bf Elliptic models}. In all the above models $T_3$ was a constant. There is a more general family of models for which $T_3$ is a sum of one-body terms. They are such that $w(x,y)=-\beta\log\left|\lambda(x)-\lambda(y)\right|$ with $\lambda(x)$ a solution of \begin{equation} \label{We} \lambda''(x)=B+A\lambda(x)+\frac{C}{2}\lambda(x)^{2}. \end{equation} In that case $T_3=\frac{1}{4} \sum_{i<j<k} (u(x_i)+u(x_j)+u(x_k))$ with $u(x)=\frac{\beta^2}{3} \frac{\lambda'''(x)}{\lambda'(x)}$, which leads to a potential $V(x)=\frac{1}{8} (N-1)(N-2) u(x)$. For $C=0$ one recovers the models discussed above (and $u(x)$ is then a constant). For the general case $C \ne 0$ the solutions of \eqref{We} are of the form $\lambda(x)= a + b \, {\cal P}(x;{g_2,g_3})$ where $a=-A/C$, $b=12/C$ and $g_2=- \frac{C}{6} (B- \frac{A^2}{2 C})$ and $g_3$ is an arbitrary constant. Here ${\cal P}$ is the Weierstrass function \cite{weier} (see also the next subsection). This leads to $V(x)=\frac{\beta^2}{2} (N-1)(N-2) {\cal P}(x;{g_2,g_3})$. The two-body term $T_2$ then leads to the interaction $W^{(1)}(x,y)=\frac{\beta}{8}\left[(\beta-2)\frac{\lambda'(x)^{2}+\lambda'(y)^{2}}{(\lambda(x)-\lambda(y))^{2}}+2\frac{\lambda''(x)-\lambda''(y)}{\lambda(x)-\lambda(y)}\right]$ which appears to be quite complicated. We will not study these solutions further here. \\ {\bf Models with quadratic interactions}. A last example is the following fermion model defined on the real axis \begin{eqnarray} \label{quad} && w(x,y)=a (x-y)^2 - \beta \log |x-y| \\ && W(x,y)=\left[a^{2}+\frac{a^{2}}{2}(N-2)\right](x-y)^{2}+\frac{\beta(\beta-2)}{4(x-y)^{2}} \nonumber\\ && \qquad\quad +\left[v'(x)-v'(y)\right]\left[\frac{a}{2}(x-y)-\frac{\beta}{4(x-y)}\right] \\ && V(x)= \frac{1}{8} v'(x)^2 - \frac{1}{4} v''(x) \\ && E_0= a (1+ \beta) \frac{N(N-1)}{2} + \frac{1}{4} a \beta N(N-1)(N-2) \; . \end{eqnarray} For general $a$, $T_3$ is the sum of two-body terms. There are two interesting special cases. The first one is the case $v(x)=c_2 x^2$, which leads to \cite{footnote:E0} \begin{eqnarray} \label{eq:T3twobody1} W(x,y)&=&\left[a^{2}+\frac{a^{2}}{2}(N-2)+ac_{2}\right](x-y)^{2}+\frac{\beta(\beta-2)}{4(x-y)^{2}}\;,\quad V(x)=\frac{1}{2}c_{2}^{2}x^{2} \\ E_{0} &=& \frac{Nc_{2}}{2}+\left[a(1+\beta)+\frac{1}{2}\beta c_{2}\right]\frac{N(N-1)}{2}+\frac{1}{4}a\beta N(N-1)(N-2) \; . \end{eqnarray} Another special case is $a=0$, for which $T_3$ is actually a constant (this model also belongs to the first class in Eq. \eqref{loggen}).
In this case, one has $w(x,y)=- \beta \log |x-y|$ and $v(x)=c_2 x^2 + c_4 x^4$ which leads to \cite{footnote:E0} \begin{eqnarray} \label{eq:quarticW} && W(x,y)= \frac{\beta c_4}{2} (x-y)^2 + \frac{\beta(\beta-2)}{4 (x-y)^2} \; , \\ \label{eq:quarticV} && V(x)= 2 c_4^2 x^6+2 c_2 c_4 x^4+ \left( \frac{1}{2} c_2^2 -3 c_4 - \beta c_4 \frac{3}{2} (N-1) \right) x^2 \; , \\ \label{eq:T3twobody5} && E_0= \frac{\beta c_2}{2} \frac{N(N-1)}{2} + \frac{c_2}{2} N \; . \end{eqnarray} This quantum model is thus related, via $\lambda(x)=x \in \mathbb{R}$, to the random matrix model with a quartic matrix potential \begin{equation} V_0(\lambda) = c_2 \lambda^2 + c_4 \lambda^4 \;. \end{equation} This model is well known in RMT \cite{BIPZ,Eynard2015,EynardHouches,DiFrancesco}. The mapping for this model is summarized in the eighth line of Table \ref{table:mappingsMore}. \subsection{Search for models} Let us summarize how one searches for models. One imposes the condition that the three-body interactions are absent, i.e., that $T_3$ can be written as a two-body term, a one-body term, or a constant. We first consider the case when $T_3$ in Eq. (\ref{T3}) is simply a constant. It means that for all $x,y,z$ \begin{equation} \label{cond1} t_3(x,y,z):= w_{10}(x,y) w_{10}(x,z) + w_{10}(y,x) w_{10}(y,z) + w_{10}(z,x) w_{10}(z,y) = \pm q^2 \;, \end{equation} in which case $T_3=\frac{1}{4} \sum_{i<j<k} (\pm q^2) = \pm \frac{1}{8} \frac{N(N-1)(N-2)}{3} q^2$. Note that this does not involve $v(x)$, hence for now $v(x)$ is arbitrary. \\ A first series of models is obtained by considering $w(x,y)=w(x-y)$ with $w$ even. Inserting into \eqref{cond1} and taking $z,y \to x$ one sees that there is no solution which is differentiable at coinciding points, i.e., with $w'(0)=0$ (recall that $w'(x)$ is an odd function). Hence one needs $w'(x)$ to diverge at $x=0$. One thus writes $w'(z)=1/g(z)$ with $g(z)$ an odd function. Setting $z=x+\epsilon$, one must have, regrouping the terms \begin{equation} \frac{1}{g(\epsilon)}\left[\frac{1}{g(x+\epsilon-y)}-\frac{1}{g(x-y)}\right]+\frac{1}{g(y-x)g(y-x-\epsilon)}=\pm q^{2} \;. \end{equation} We see that the only possibility (for a smooth $g(x)$ at generic nonzero $x$) is $g(\epsilon)=O(\epsilon)$, i.e., a simple pole for $w'(x)$, and taking $\epsilon \to 0$ one finds \begin{equation} \frac{1}{g(x)^{2}}\left[1-\frac{g'(x)}{g'(0)}\right]=\pm q^{2} \;. \end{equation} The solutions are $g(x)=\frac{1}{q} \tan(q x g'(0))$, $g(x) =\frac{1}{q} \tanh(q x g'(0))$, and $g(x) = g'(0) x$. Defining $g'(0)=- 1/\beta$ and $q=- \frac{\beta}{2} p$ one obtains $w(x,y) = - \beta \log|\sin (\frac{p}{2} (x-y) )|$ for the $(-)$ branch, $w(x,y)=- \beta \log|\sinh (\frac{p}{2} (x-y) )|$ for the $(+)$ branch, and $w(x,y)=- \beta \log |x-y|$ for $q=0$. Here $\beta$ and $p$ are for now arbitrary parameters. One checks that these solutions indeed satisfy the condition \eqref{cond1} (yielding nontrivial trigonometric identities). These are, respectively, the models \eqref{periodicgen}, \eqref{hypergen} and \eqref{loggen}. For these models the two-body term from $T_2$ leads to the interaction potential $W^{(1)}(x,y)=\frac{1}{4} w'(x-y)^2 - \frac{1}{2} w''(x-y)$. In the presence of a potential $v(x)$ there is generically another two-body term from $T'_2$ in Eq. (\ref{Tprime2}). It reads $W^{(2)}(x,y) = \frac{1}{4} (v'(x)-v'(y)) w'(x-y)$ which leads to the interaction potentials in \eqref{periodicgen}, \eqref{hypergen} and \eqref{loggen}.
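The trigonometric identities mentioned above are easily verified numerically. The following minimal Python sketch (ours) evaluates $t_3(x,y,z)$ of \eqref{cond1} at random points for the three solutions, and finds the expected constants $0$, $-\beta^{2}p^{2}/4$ and $+\beta^{2}p^{2}/4$, in agreement with the values of $T_3$ in \eqref{loggen}, \eqref{periodicgen} and \eqref{hypergen}.

\begin{verbatim}
import numpy as np

# Sketch (ours): t3(x,y,z) of Eq. (cond1) is constant for the three
# solutions w'(z) found above (log, sin and sinh cases).
rng = np.random.default_rng(1)
beta, p = 3.0, 1.0
w10 = {
    'log ': lambda a, b: -beta / (a - b),
    'sin ': lambda a, b: -beta * (p / 2) / np.tan(p * (a - b) / 2),
    'sinh': lambda a, b: -beta * (p / 2) / np.tanh(p * (a - b) / 2),
}
def t3(f, x, y, z):
    return f(x, y)*f(x, z) + f(y, x)*f(y, z) + f(z, x)*f(z, y)

for name, f in w10.items():
    vals = [t3(f, *rng.uniform(0.0, 2.0, size=3)) for _ in range(4)]
    print(name, np.round(vals, 8))
# expected: 0, -beta^2 p^2/4 = -2.25, +beta^2 p^2/4 = +2.25
\end{verbatim}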
For each of these three models there is a unique family of exceptional potentials $v(x)$ for which $W^{(2)}(x,y)$ is either one-body or constant. To search for them one imposes the necessary condition $\partial_x \partial_y W^{(2)}(x,y)=0$ and solves it for $y \to x$. This leads to the models \eqref{logG}, \eqref{periodicGW} and \eqref{hypermorse}. \\ Interestingly, the models found so far are closely related, up to some change of variable $\lambda=\lambda(x)$, to the logarithmic interaction which appears naturally in the $\beta$ random matrix models of the form \eqref{F}, with an a priori arbitrary matrix potential $V_0(\lambda)$. Thus it is natural to search for $w(x,y)$ parameterized in the form $w(x,y) = - \beta \log|\lambda(x)-\lambda(y)|$, where again, for now, $v(x)$ is arbitrary. Note that there is some redundancy, since e.g. the problem with $\lambda(x)=1/\tilde \lambda(x)$ is equivalent to $w(x,y)=- \beta \log|\tilde \lambda(x)-\tilde \lambda(y)|$ and $v(x) \to v(x) + (N-1) \beta \log|\tilde \lambda(x)|$. One now imposes the condition \eqref{cond1} with $w_{10}(x,y) = - \beta \frac{\lambda'(x)}{\lambda(x)-\lambda(y)}$, and we restrict here to $\lambda(x)$ real. Substituting and taking successively the limits $z \to x$ and $y \to x$, one arrives at the necessary condition $\beta^2 \frac{\lambda'''(x)}{\lambda'(x)} = \pm q^2$. It is convenient to parameterize these models using the parameter $p$ defined now via $q^2=\beta^2 p^2$ \cite{footnoteparam}. The general solutions are $\lambda(x) = a x + b x^2$ (for $q=0$), $\lambda(x)= a_1 \cos p x + a_2 \sin p x$ (for the $-$ branch) and $\lambda(x)= a'_1 \cosh p x + a'_2 \sinh p x$ (for the $+$ branch). The case $\lambda(x)=x$ recovers \eqref{loggen}, while $\lambda(x)=x^2$ gives the new family \eqref{logimagegen} (and upon a translation in $x$ one can always reduce to one of these cases). Similarly for the periodic model one can always choose $a_2=0$ by translation, which leads to \eqref{periodicJac}. Finally for the hyperbolic solutions one can always reduce to either $a'_1=a'_2$, which leads again to \eqref{hypergen}, or to $a'_2=0$ (whenever $|a'_1|>|a'_2|$), which leads to \eqref{hypersecondset}, or to $a'_1=0$ (whenever $|a'_1|<|a'_2|$), which leads to \eqref{hypernewset}. One can check that $T_3$ is indeed a constant for each of these models, again via non trivial trigonometric identities. With the chosen parameterization $w(x,y)=-\beta\log\left|\lambda(x)-\lambda(y)\right|$ the term $T_2$ leads to an interaction $W^{(1)}(x,y)= \frac{\beta}{8} ((\beta-2) \frac{\lambda'(x)^2 + \lambda'(y)^2}{(\lambda(x)-\lambda(y))^2} + 2 \frac{\lambda''(x)-\lambda''(y)}{\lambda(x)-\lambda(y)})$, which simplifies for these models as given in \eqref{loggen}, \eqref{logimagegen}, \eqref{periodicJac}, \eqref{hypergen}, \eqref{hypersecondset}, \eqref{hypernewset}. In each of these sets we can search for exceptional potentials $v(x)$ for which $W^{(2)}(x,y)$ in Eq. (\ref{Tprime2}) is either one-body or constant, i.e., \begin{equation} W^{(2)}(x,y) = - \frac{\beta}{4} \frac{v'(x) \lambda'(x)-v'(y)\lambda'(y) }{\lambda(x)-\lambda(y)} = - \frac{\beta}{4} ( u(x) + u(y) ) \;. \end{equation} This is equivalent to \begin{equation} \label{aga1} v'(x) \lambda'(x)-v'(y)\lambda'(y)= (u(x) + u(y)) (\lambda(x)-\lambda(y)) \;, \end{equation} which is possible only if the cross term on the right hand side, namely $\lambda(x) u(y) - \lambda(y) u(x)$, is a one-body term, meaning that $\partial_x \partial_y (\lambda(x) u(y) - \lambda(y) u(x))=0$.
This implies $u'(x)/\lambda'(x)=u'(y)/\lambda'(y)$ for all $x,y$, so that both ratios are equal to a constant $K_1$, i.e., $u(x) = K_1 \lambda(x) + K_2$. Inserting into \eqref{aga1} gives the necessary condition \begin{equation} \label{vprime} v'(x) = \frac{ K_1 \lambda(x)^2 + K_2 \lambda(x) + K_3}{\lambda'(x)} \quad , \quad u(x) = K_1 \lambda(x) + K_2 \;. \end{equation} Using the specific forms for $\lambda(x)$ obtained above (i.e., for which $T_3$ is a constant), this relation (\ref{vprime}) leads to the models \eqref{x6}, \eqref{period22}, \eqref{hypermorse}, \eqref{hypermorse2} and \eqref{hypermorse3}, which are the most general solutions in each case. \\ Next we turn to the condition that the three-body term in \eqref{cond1} is the sum of one-body terms, i.e., $t_3(x,y,z) = u(x)+u(y)+u(z)$. Substituting and taking successively the limits $z \to x$ and $y \to x$ one arrives at the necessary condition $\beta^2 \frac{\lambda'''(x)}{\lambda'(x)} = 3 u(x)$. Next we expand in $y-x$ and to second order we obtain another necessary condition \begin{equation} -3 \lambda ^{(3)}(x) \lambda ''(x)^2+\lambda '(x) \left(\lambda ^{(3)}(x)^2-\lambda ^{(5)}(x) \lambda '(x)\right)+3 \lambda ^{(4)}(x) \lambda '(x) \lambda''(x) = 0 \;. \end{equation} Multiplying this equation by $\lambda'(x)^{-4}$, it can be integrated once. Multiplying the result by $\lambda'(x)$, it can be rewritten as $\frac{d}{dx}\left[\frac{\lambda^{(3)}(x)}{\lambda'(x)}\right]=C\lambda'(x)$. Integrating once, multiplying by $\lambda'(x)$ and integrating again leads to the simple condition $\lambda''(x)=B+A\lambda(x)+\frac{C}{2}\lambda(x)^{2}$, i.e., the relation given above in Eq. \eqref{We}, where $A, B$ and $C$ are integration constants. We have not explored in full generality the condition that $T_3$ is a two-body term, i.e., that $t_3(x,y,z)$ defined in \eqref{cond1} be a sum of two and one-body potentials. This condition leads to the nonlinear, nonlocal partial differential equation \begin{eqnarray} && \partial_{x}\partial_{y}\partial_{z}t_{3}(x,y,z) = w_{21}(x,y)w_{11}(x,z)+w_{11}(x,y)w_{21}(x,z)+w_{21}(y,x)w_{11}(y,z)\nonumber\\ && \qquad\qquad + w_{11}(y,x)w_{21}(y,z)+w_{21}(z,x)w_{11}(z,y)+w_{11}(z,x)w_{21}(z,y)=0 \;, \end{eqnarray} whose study we leave for future research. In the case of the form $w(x,y)=w(x-y)$ this was done by Calogero \cite{Calogero1975}, who found that the general solution must obey $w''(z) = a \,{\cal P}(z,g_2,g_3)$ where ${\cal P}$ is the Weierstrass ${\cal P}$ function, i.e., a solution of ${\cal P}''=6 {\cal P}^2 - \frac{g_2}{2}$ and $({\cal P}')^2=4 {\cal P}^3 - g_2 {\cal P} - g_3$ \cite{weier}. Integrating twice, the general solution in that case is $w(z)=-\beta \log (\sigma (z;g_2,g_3)) + b z^2$ where $\sigma (z;g_2,g_3)$ is the $\sigma$-Weierstrass function. Thanks to non trivial identities involving Weierstrass functions \cite{Calogero1975}, the absence of a three-body term is indeed obeyed. Note that it is natural to set $a=\beta$ since $\sigma \simeq z$ at small $z$, hence $w(z)$ is again of the logarithmic type at small $z$. The resulting quantum interaction $W(x,y)$ can be expressed as a polynomial in terms of Weierstrass functions \cite{Calogero1975}. Here we only give the simple example \eqref{quad}. In fact, the only solution with $w(x,y)=w(x-y)$ and $w''(0)$ finite is the quadratic form $w(x,y)= a (x-y)^2$.
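Before closing this subsection, we note that these statements are straightforward to test numerically (a minimal sketch, with illustrative values $\beta=p=1$ and well-separated sample points):

\begin{verbatim}
import numpy as np

# Check that t3(x,y,z) in (cond1) is constant for the maps lambda(x)
# above, with w_10(x,y) = -beta*lambda'(x)/(lambda(x) - lambda(y)).
beta, p = 1.0, 1.0

def t3(lam, dlam, x, y, z):
    w10 = lambda s, t: -beta*dlam(s)/(lam(s) - lam(t))
    return w10(x, y)*w10(x, z) + w10(y, x)*w10(y, z) + w10(z, x)*w10(z, y)

maps = {"x^2 ": (lambda u: u**2, lambda u: 2.0*u),
        "cos ": (lambda u: np.cos(p*u), lambda u: -p*np.sin(p*u)),
        "cosh": (lambda u: np.cosh(p*u), lambda u: p*np.sinh(p*u))}
rng = np.random.default_rng(1)
for name, (lam, dlam) in maps.items():
    for _ in range(3):
        x, y, z = (rng.uniform(0.1, 0.6), rng.uniform(0.8, 1.3),
                   rng.uniform(1.5, 2.0))
        print(name, round(float(t3(lam, dlam, x, y, z)), 10))
# One finds 0 for lambda = x^2 (the q = 0 case), -beta^2 p^2 for the
# periodic map and +beta^2 p^2 for the hyperbolic one, for all triples.
\end{verbatim}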
\section{Mean fermion density: LDA versus Coulomb gas} \label{appendix:LDAandCG} In the absence of interactions, $\beta=2$, the LDA prediction for the mean fermion density in the large $N$ limit is (with unit normalization) \begin{equation} \label{eq:LDArhotilde} \tilde \rho(x)= \frac{\sqrt{2}}{N \pi} \sqrt{\mu - V(x)} \;. \end{equation} On the other hand, when there is a map $\lambda(x)$ which maps the ground state of this model to the joint PDF of the eigenvalues of a RMT ensemble of the form \begin{equation} P(\vec \lambda) \propto e^{ - \frac{\beta}{2} \sum_i N \tilde V_0(\lambda_i) + \beta \sum_{i<j} \log|\lambda_i - \lambda_j|} \;, \end{equation} i.e., when the matrix potential in \eqref{F} is scaled as $V_0(\lambda)= \frac{\beta}{2} N \tilde V_0(\lambda)$, then it is possible to use the CG method to obtain the eigenvalue density $\sigma(\lambda)$. It is given as the optimal density, which we (abusively) also denote $\sigma(\lambda)$, minimizing the CG energy functional \begin{equation} {\cal E}[\sigma] = \int d\lambda \tilde V_0(\lambda) \sigma(\lambda) - \int d\lambda d\lambda' \sigma(\lambda) \sigma(\lambda') \log |\lambda - \lambda'| \;, \end{equation} under the constraint that $\int d\lambda \sigma(\lambda)=1$. The connection between the two is $\tilde \rho(x) = \lambda'(x) \sigma(\lambda(x))$. Since the CG density is independent of $\beta$ this allows one to obtain the fermion density for any $\beta$. We have discussed this connection in the text on the example of the Gross-Witten-Wadia model. Here we give some more details for the other cases. The computationally difficult part is to determine the Fermi energy $\mu$. \\ {\bf Hyperbolic model}. Consider the model \eqref{Whyper} discussed in the text with potential $V(x)$ given in \eqref{hypermorseMainText}, which has three parameters $c_0,c_1,c_2$. We will scale them as $c_j=N \frac{\beta}{2} \tilde c_j$. From \eqref{eq:V0MorseGeneral}, it corresponds, in the large $N$ limit, to the matrix potential (dropping subleading terms at large $N$) $\tilde V_{0}(\lambda)=\tilde c_{1}\lambda+\tilde c_{2}\lambda^{-1}+ (1+ \tilde c_0) \log \lambda$. In \cite{TM2013} the minimization equation was solved in the case $\tilde c_1=1$, $1+\tilde c_0=-1$, and it was found that \begin{equation} \label{res1} \sigma(\lambda) = \frac{1}{2 \pi} \frac{\lambda + c}{\lambda^2} \sqrt{ (\lambda-a) (b - \lambda)} \;. \end{equation} In \cite{TM2013} the parameters $a,b,c$ are given as a function of $\mu_1=\tilde c_2$. On the other hand, the LDA prediction for $\beta=2$ with $\lambda(x)=e^x$ gives, in the more general case of the potential~\eqref{hypermorseMainText}, \begin{equation} \label{eq:hyperbolicModelDensityLDAgen} \sigma(\lambda) = \frac{dx}{d \lambda} \frac{\sqrt{2}}{N \pi} \sqrt{\mu - V(x(\lambda))} = \frac{\sqrt{2}}{\pi} \frac{1}{\lambda} \sqrt{\tilde \mu - \left( \frac{\tilde c_1^2}{8} \lambda^2 +\frac{\tilde c_2^2}{8} \frac{1}{\lambda^{2}} + \frac{\tilde c_1}{4} (\tilde c_0-1) \lambda - \frac{\tilde c_2}{4} (\tilde c_0+1) \frac{1}{\lambda} \right) } \end{equation} with $\mu=N^2 \tilde \mu$. Let us recover \eqref{res1} from this result. Plugging $\tilde c_1=1$, $\tilde c_0=-2$ into \eqref{eq:hyperbolicModelDensityLDAgen} yields \begin{equation} \label{eq:hyperbolicModelDensityLDA} \sigma(\lambda) =\frac{1}{2\pi}\frac{1}{\lambda^{2}}\sqrt{-\lambda^{4}+6\lambda^{3}+8\tilde{\mu}\lambda^{2}-2\tilde{c}_{2}\lambda-\tilde{c}_{2}^{2}} \; .
\end{equation} Now, requiring that the expression under the square root in \eqref{eq:hyperbolicModelDensityLDA} can be written in the form \begin{equation} \label{eq:lambdaDoubleRoot} -\lambda^{4}+6\lambda^{3}+8\tilde{\mu}\lambda^{2}-2\tilde{c}_{2}\lambda-\tilde{c}_{2}^{2}=\left(\lambda+c\right)^{2}\left(\lambda-a\right)\left(b-\lambda\right) \end{equation} for some $a,b,c$, one obtains, by comparing the coefficients of powers of $\lambda$ on both sides of the equation, the following relations \begin{equation} 6=a+b-2c,\quad8\tilde{\mu}=-ab+2ac+2bc-c^{2},\quad-2\tilde{c}_{2}=-2abc+ac^{2}+bc^{2},\quad\tilde{c}_{2}^{2}=abc^{2}, \end{equation} whose solution, in terms of $v=\sqrt{ab}$ and $u=\sqrt{a/b}$, is \begin{eqnarray} \label{eq:uv} &&v=2u\frac{3u^{2}-2u+3}{\left(1-u^{2}\right)^{2}},\quad\tilde{c}_{2}=2vu\frac{v-1}{u^{2}+1}=-4u^{2}\frac{\left(u^{2}-6u+1\right)\left(3u^{2}-2u+3\right)}{\left(1-u^{2}\right)^{4}},\nonumber\\ && c=\frac{\tilde{c}_{2}}{v}=-2u\frac{u^{2}-6u+1}{\left(1-u^{2}\right)^{2}} \;, \end{eqnarray} in agreement with \cite{TM2013}, and the (rescaled) Fermi energy \begin{equation} \tilde{\mu}=\frac{1}{8}\left(-v^{2}+2\tilde{c}_{2}\frac{u^{2}+1}{u}-\frac{\tilde{c}_{2}^{2}}{v^{2}}\right) \;, \end{equation} which can be expressed in terms of $\tilde c_2$ alone using the equations in \eqref{eq:uv}. In particular, the agreement with \cite{TM2013} ensures that $\sigma(\lambda)$ is correctly normalized, which shows that the form \eqref{eq:lambdaDoubleRoot} is indeed correct. \\ {\bf Fermions in a box.} This is the case of the matrix potential \eqref{eq:V0BoxGeneral}. Here $\lambda(x)=\frac{1}{2}(1- \cos x)$. The LDA prediction is given by \eqref{eq:LDArhotilde} where $V(x)$ is given in \eqref{period22} (with $\beta=2$) and $\mu$ is determined through the normalization $\int\tilde{\rho}(x)dx=1$. In general, this expression for $\tilde{\rho}$ and the calculation of $\mu$ are very cumbersome. However, choosing $p=1$ and $\beta=2$ in \eqref{period22} and assuming $c_{1}=O\left(1\right),\; c_{2}=O\left(1\right),\; c_{3}=O\left(N\right)$, the potential is simply \begin{equation} V\left(x\right)\simeq\frac{c_{3}N}{2}\cos x-\frac{c_{3}^{2}}{8}\cos^{2}x+\frac{c_{3}^{2}}{16} \end{equation} to leading order for large $N$. We note that up to the additive constant $\frac{c_{3}^{2}}{16}$ this potential coincides (at $N\gg1$) with the potential \eqref{VxGW} with $2N$ particles, if one identifies $\tilde{g} \leftrightarrow \frac{c_{3}}{2N}$. Therefore, the density predicted by the LDA can be immediately deduced from Eqs.~\eqref{mu1}-\eqref{densGW2} with $2N$ particles, by including a factor of 2 in $\tilde{\rho}$ due to the difference between the domains, $[-\pi,\pi]$ for Gross-Witten versus $[0,\pi]$ in the present case. The result is: \begin{equation} \label{eq:RhoLDAfermionsBox} \tilde{\rho}(x)=\begin{cases} \frac{1}{\pi}\left(1-\frac{c_{3}}{2N}\cos x\right)\,, & 0< \frac{c_{3}}{2N}<1,\\[0.1cm] \frac{c_{3}}{N\pi}\left|\sin\left(\frac{x}{2}\right)\right|\sqrt{\left(\frac{2N}{c_{3}}-\cos^{2}\left(\frac{x}{2}\right)\right)_{+}}\;, & \frac{c_{3}}{2N}>1, \end{cases} \end{equation} and the associated Fermi energies are \begin{equation} \mu=\begin{cases} \frac{8N^{2}+c_{3}^{2}}{16}\,, & 0 < \frac{c_{3}}{2N}<1,\\[0.1cm] \frac{c_{3}N}{2}-\frac{c_{3}^{2}}{16}\,, & \frac{c_{3}}{2N}>1. \end{cases} \end{equation} It is assumed above that $c_3 > 0$, but flipping the sign of $c_3$ is equivalent to transforming $x \to \pi-x$.
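Returning briefly to the hyperbolic model above, the normalization of the density \eqref{res1}, with the parameters fixed by \eqref{eq:uv}, is also easily verified numerically (a minimal sketch, for an illustrative value of $u$):

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Normalization check of the density (res1), with a, b, c fixed in
# terms of u = sqrt(a/b) and v = sqrt(a*b) via (eq:uv); u illustrative.
u = 0.5
v = 2.0*u*(3.0*u**2 - 2.0*u + 3.0)/(1.0 - u**2)**2
c = -2.0*u*(u**2 - 6.0*u + 1.0)/(1.0 - u**2)**2
a, b = u*v, v/u

sigma = lambda lam: (lam + c)/(2.0*np.pi*lam**2)*np.sqrt((lam - a)*(b - lam))
print(quad(sigma, a, b)[0])    # ~= 1, as it should be
\end{verbatim}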
The result \eqref{eq:RhoLDAfermionsBox} can be compared with the predictions of \cite{VivoPhD,VMB2008,VMB2010} who studied the model \eqref{F} with matrix potential \eqref{eq:V0BoxGeneral} with (in our notations) \begin{equation} c_{1}=1-\beta,\quad c_{2}=0,\quad c_{3}=-\frac{\beta}{4}pN \;. \end{equation} They obtained the density \begin{equation} \sigma_{p=-\frac{4c_{3}}{\beta N}}\left(\lambda\right)=\begin{cases} \frac{p}{2\pi\sqrt{\lambda\left(1-\lambda\right)}}\left(\frac{4+p}{2p}-\lambda\right)\;, & 0\le\lambda\le1,\quad-4\le p\le4\\[0.1cm] \frac{p}{2\pi\sqrt{\lambda}}\sqrt{\frac{4}{p}-\lambda}\;, & 0\le\lambda\le4/p,\quad p\ge4,\\[0.1cm] \frac{\left|p\right|}{2\pi\sqrt{1-\lambda}}\sqrt{\lambda-\left(1-4/\left|p\right|\right)}\;, & 1-4/\left|p\right|\le\lambda\le1,\quad p\le-4 \end{cases} \end{equation} which, for $\beta=2$, leads to \begin{equation} \lambda'(x)\sigma\left(\lambda(x)\right)=\begin{cases} \frac{1}{\pi}\left(1-\frac{c_{3}}{2N}\cos x\right)\;, & 0\le\lambda\le1,\quad-4\le\frac{2c_{3}}{N}\le4\\[0.1cm] -\cos\frac{x}{2}\frac{c_{3}}{\pi N}\sqrt{-\frac{2N}{c_{3}}-\sin^{2}\frac{x}{2}}\;, & 0\le\lambda\le4/\left|\frac{4c_{3}}{\beta N}\right|,\quad-\frac{2c_{3}}{N}\ge4,\\[0.1cm] \frac{c_{3}}{\pi N}\left|\sin\frac{x}{2}\right|\sqrt{\frac{2N}{c_{3}}-\cos^{2}\frac{x}{2}}\;, & 1-4/\frac{4c_{3}}{\beta N}\le\lambda\le1,\quad-\frac{2c_{3}}{N}\le-4 \;, \end{cases} \end{equation} in agreement with \eqref{eq:RhoLDAfermionsBox}. \section{Number variance for the harmonic trap} \label{appendix:NumberVarianceGBetaE} Here we calculate the number variance for interacting fermions described by the model \eqref{H0}, which corresponds to random matrices in the G$\beta$E, thereby obtaining Eqs.~\eqref{eq:NumberVarianceGOEatoinfinity} and \eqref{eq:NumberVarianceGBetaEab} of the main text. Let us first consider $\text{Var}\left(\mathcal{N}_{\left[a,\infty\right)}\right)$. We aim to calculate the double integral \eqref{eq:VarianceUsingCxy} in the large-$N$ limit, by approximating $C(x,y)\simeq0$ if either $x$ or $y$ is not in the bulk and, for $x$ and $y$ both in the bulk, plugging in $C\left(x,y\right)$ from \eqref{eq:Cxy_for_GbetaE_micro} for $x$ near $y$, and \eqref{eq:Cxy_for_GbetaE} for $x$ far from $y$ (this procedure works because there is a joint regime of validity for both of these approximate expressions for $C(x,y)$). Thus $\text{Var}\left(\mathcal{N}_{\left[a,\infty\right)}\right) \simeq I_1 + I_2$ where \begin{eqnarray} \label{eq:I1def} I_1 \!\!\!\! &\equiv& \!\!\!\! \left(\int_{a+\xi}^{\sqrt{\beta N}}\!\! dx\int_{-\sqrt{\beta N}}^{a}\!\! dy+\int_{a}^{a+\xi}\!\! dx\int_{-\sqrt{\beta N}}^{a-\xi}\!\! dy\right) \frac{1-\frac{xy}{\beta N}}{\beta\pi^{2}\left(x-y\right)^{2}\left(1-\frac{x^{2}}{\beta N}\right)^{1/2}\left(1-\frac{y^{2}}{\beta N}\right)^{1/2}} \,,\nonumber\\\\ \label{eq:I2def} I_2 \!\!\!\! &\equiv& \!\!\!\! \left[N\rho_{N}\left(a\right)\right]^{2} \int_{a}^{a+\xi}dx\int_{a-\xi}^{a}dy\,Y_{2\beta}\left(N\rho_{N}\left(a\right)\left|x-y\right|\right) \,, \end{eqnarray} where we have chosen some cutoff $\xi$ such that $\frac{1}{\sqrt{N}}\ll\xi\ll1$, which justifies the approximation $\rho_N(x)\simeq \rho_N(a)$ that we made in the integral $I_2$. We now calculate \eqref{eq:I1def}.
Rescaling $\tilde{x}=x/\sqrt{\beta N}$, $\tilde{y}=y/\sqrt{\beta N}$, this term can be written as \begin{equation} I_{1}=\frac{1}{\beta\pi^{2}}g\left(\frac{a}{\sqrt{\beta N}},\frac{\xi}{\sqrt{\beta N}}\right) \end{equation} where \begin{eqnarray} \label{eq:gdef} g\left(\tilde{a},z\right)&=& \left(\int_{\tilde{a}+z}^{1}d\tilde{x}\int_{-1}^{\tilde{a}}d\tilde{y}+\int_{\tilde{a}}^{\tilde{a}+z}d\tilde{x}\int_{-1}^{\tilde{a}-z}d\tilde{y}\right)\tilde{C}\left(\tilde{x},\tilde{y}\right)\; ,\\ \tilde{C}\left(\tilde{x},\tilde{y}\right) &=& \frac{1-\tilde{x}\tilde{y}}{\left(\tilde{x}-\tilde{y}\right)^{2}\left(1-\tilde{x}^{2}\right)^{1/2}\left(1-\tilde{y}^{2}\right)^{1/2}}. \end{eqnarray} Now using $-\frac{1}{2}\partial_{\tilde{x}}\partial_{\tilde{y}}\sigma\left(\tilde{x},\tilde{y}\right)=\tilde{C}\left(\tilde{x},\tilde{y}\right)$ where \begin{equation} \label{eq:sigmaDef} \sigma\left(\tilde{x},\tilde{y}\right)= -2\log\left(\frac{\left|\tilde{x}-\tilde{y}\right|}{1-\tilde{x}\tilde{y}+\sqrt{1-\tilde{x}^{2}}\sqrt{1-\tilde{y}^{2}}}\right) \;, \end{equation} the integral \eqref{eq:gdef} is then given in terms of $\sigma$ by \begin{equation} g\left(\tilde{a},z\right) = \frac{1}{2}\left[\sigma\left(\tilde{a}+z,\tilde{a}\right)-\sigma\left(\tilde{a}+z,\tilde{a}-z\right)+\sigma\left(\tilde{a},\tilde{a}-z\right)\right] \;, \end{equation} where we used $\sigma\left(1,\cdots\right)=\sigma\left(\cdots,-1\right)=0$. In the limit $z\ll1$ this becomes \begin{equation} g\left(\tilde{a},z\ll1\right) = \log\frac{4\left(1-\tilde{a}^{2}\right)}{z} + o(1), \end{equation} leading to \begin{equation} \label{eq:I1} I_{1}\simeq\frac{1}{\beta\pi^{2}}\left[\log4+\log\frac{\sqrt{\beta N}\left(1-\tilde{a}^{2}\right)}{\xi}\right]. \end{equation} We now turn to the integral \eqref{eq:I2def}, and we focus on $\beta\in\left\{ 1,2,4\right\}$. After changing integration variables $\tilde{x}=N\rho_{N}\left(a\right)\left(x-a\right)$, $\tilde{y}=N\rho_{N}\left(a\right)\left(y-a\right)$, it becomes \begin{equation} \label{eq:fbetadef} I_{2}=f_{\beta}\left(N\rho_{N}\left(a\right)\,\xi\right),\qquad f_{\beta}\left(z\right)=\int_{0}^{z}d\tilde{x}\int_{-z}^{0}d\tilde{y}\,Y_{2\beta}\left(\tilde{x}-\tilde{y}\right). \end{equation} It is useful to note that $\eta_{\beta}''\left(z\right)=Y_{2\beta}(z)$ where \begin{eqnarray} \eta_{2}\left(z\right)&=&\frac{\text{Ci}(2\pi z)+2\pi z\text{Si}(2\pi z)-\log(2\pi z)+\cos(2\pi z)}{2\pi^{2}},\\ \eta_{1}\left(z\right)&=&\frac{4\eta_{2}\left(z\right)+\text{Is}\left(z\right)-\text{Is}\left(z\right)^{2}}{2},\\ \eta_{4}\left(z\right)&=&\frac{4\eta_{2}\left(2z\right)-\text{Is}\left(2z\right)^{2}}{8}. \end{eqnarray} (For $\beta=1$, note that the argument of $Y_{2\beta}$ in the integral \eqref{eq:fbetadef} is always positive, and then we use $Y_{21}\left(r\right)=\left(s\left(r\right)\right)^{2}-\text{Is}\left(r\right)\text{Ds}\left(r\right)+\frac{1}{2}\text{Ds}\left(r\right)$ for $r>0$.) 
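For instance, for $\beta=2$, using $\text{Ci}'(u)=\cos(u)/u$ and $\text{Si}'(u)=\sin(u)/u$, one finds $\eta_{2}'(z)=\frac{1}{2\pi^{2}}\left[2\pi\,\text{Si}(2\pi z)+\frac{\cos(2\pi z)-1}{z}\right]$ and hence \begin{equation} \eta_{2}''(z)=\frac{1-\cos(2\pi z)}{2\pi^{2}z^{2}}=\left(\frac{\sin(\pi z)}{\pi z}\right)^{2}=Y_{22}(z) \;, \end{equation} as it should be.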
As a result, \begin{equation} -\partial_{\tilde{x}}\partial_{\tilde{y}}\eta_{\beta}\left(\tilde{x}-\tilde{y}\right)=Y_{2\beta}\left(\tilde{x}-\tilde{y}\right) \end{equation} which leads to \begin{eqnarray} f_{\beta}\left(z\right) &=&\int_{0}^{z}d\tilde{x}\int_{-z}^{0}d\tilde{y}\,Y_{2\beta}\left(\tilde{x}-\tilde{y}\right) \nonumber\\ &=&-\left.\eta_{\beta}\left(\tilde{x}-\tilde{y}\right)\right|_{\left(z,0\right)}+\left.\eta_{\beta}\left(\tilde{x}-\tilde{y}\right)\right|_{\left(0,0\right)}-\left.\eta_{\beta}\left(\tilde{x}-\tilde{y}\right)\right|_{\left(0,-z\right)}+\left.\eta_{\beta}\left(\tilde{x}-\tilde{y}\right)\right|_{\left(z,-z\right)} \nonumber\\ &=& \eta_{\beta}\left(2z\right)-2\eta_{\beta}\left(z\right)+\eta_{\beta}\left(0\right). \end{eqnarray} For the purpose of our calculation, since $\xi \gg 1/\sqrt{N}$, we need the $z\gg1$ behavior of $f_{\beta}(z)$. Using \begin{eqnarray} \eta_{2}\left(0\right)&=&\frac{1+\gamma_E}{2\pi^{2}},\qquad\eta_{2}\left(z\gg1\right)\simeq\frac{\pi^{2}z-\log\left(2\pi z\right)}{2\pi^{2}},\\ \eta_{1}\left(0\right)&=&\frac{1+\gamma_E}{\pi^{2}},\qquad\eta_{1}\left(z\gg1\right)\simeq\frac{\pi^{2}z-\log\left(2\pi z\right)+\frac{\pi^{2}}{8}}{\pi^{2}},\\ \eta_{4}\left(0\right)&=&\frac{1+\gamma_E}{4\pi^{2}},\qquad\eta_{4}\left(z\gg1\right)\simeq\frac{2\pi^{2}z-\log\left(4\pi z\right)-\frac{\pi^{2}}{8}}{4\pi^{2}}, \end{eqnarray} we find \begin{equation} \label{eq:fLargez} f_{\beta}\left(z\gg1\right)\simeq\frac{\log\left(\pi z\right)+c_{\beta}-\log2}{\beta\pi^{2}} \end{equation} for $\beta \in \left\{ 1,2,4\right\} $, where the constants $c_\beta$ are given in \eqref{eq_c2} and \eqref{eq_c4}. Finally, by using $\text{Var}\left(\mathcal{N}_{\left[a,\infty\right)}\right) \simeq I_1 + I_2$ together with Eqs.~\eqref{eq:I1}, \eqref{eq:fbetadef} and \eqref{eq:fLargez} and plugging in the density \eqref{eq:densityGBetaE}, we obtain Eq.~\eqref{eq:NumberVarianceGOEatoinfinity} of the main text.\\ For a finite interval $[a,b]$ it is convenient to use $\mathcal{N}_{\left[a,b\right]}=N-\mathcal{N}_{\left]-\infty,a\right]}-\mathcal{N}_{\left[b,\infty\right[}$, which, together with the linearity of the covariance, yields \begin{equation} \label{eq:VarAndCov} \text{Var}\left(\mathcal{N}_{\left[a,b\right]}\right)=\text{Var}\left(\mathcal{N}_{\left]-\infty,a\right]}\right)+\text{Var}\left(\mathcal{N}_{\left[b,\infty\right[}\right)+2\text{Cov}\left(\mathcal{N}_{\left]-\infty,a\right]},\mathcal{N}_{\left[b,\infty\right[}\right) \;.
\end{equation} The covariance is calculated using \eqref{eq:CovGeneral} and then approximating $C(x,y)\simeq0$ if $x$ or $y$ is not in the bulk, and \eqref{eq:Cxy_for_GbetaE} if $x$ and $y$ are both in the bulk (this approximation holds in the entire domain of integration below since we are assuming that $a$ and $b$ are well separated in the bulk) \begin{eqnarray} \label{eq:CovandSigma} \text{Cov}\left(\mathcal{N}_{\left]-\infty,a\right]},\mathcal{N}_{\left[b,\infty\right[}\right)&=&\int_{-\infty}^{a}dx\int_{b}^{\infty}dy\,C\left(x,y\right) \nonumber\\ &\simeq& -\int_{-\sqrt{\beta N}}^{a}dx\int_{b}^{\sqrt{\beta N}}dy\,\frac{1-\frac{xy}{\beta N}}{\beta\pi^{2}\left(x-y\right)^{2}\left(1-\frac{x^{2}}{\beta N}\right)^{1/2}\left(1-\frac{y^{2}}{\beta N}\right)^{1/2}} \nonumber\\ &=&-\frac{1}{\beta\pi^{2}}\int_{-1}^{\tilde{a}}d\tilde{x}\int_{\tilde{b}}^{1}d\tilde{y}\,\tilde{C}\left(\tilde{x},\tilde{y}\right)=-\frac{1}{2\beta\pi^{2}} \sigma\left(\tilde{a},\tilde{b}\right) \; , \end{eqnarray} where we rescaled $\tilde{x}=x/\sqrt{\beta N}$, $\tilde{y}=y/\sqrt{\beta N}$. Finally, plugging \eqref{eq:NumberVarianceGOEatoinfinity}, \eqref{eq:CovandSigma} and \eqref{eq:sigmaDef} into \eqref{eq:VarAndCov}, we obtain Eq.~\eqref{eq:NumberVarianceGBetaEab} of the main text. \section{Number variance for the WL$\beta$E and the J$\beta$E} \label{appendix:WLBetaEandJBetaE} In this Appendix we present a detailed derivation of Eq.~\eqref{eq:WLBetaEVariance} and we give the result for the variance of the fermion number for the models in the third and fourth lines of Table~\ref{table:mappings}, which are not already given in the text. In \cite{SDMS} we found the number variance for the WL$\beta$E with $\beta=2$ for a semi-infinite interval: \begin{equation} 2\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{{\rm LUE}} = \log(\mu)+\log\left(4\frac{a}{\sqrt{2\mu}}\frac{\left(1-\frac{a^{2}}{2\mu}-\frac{\lambda^{2}\mu}{2a^{2}}\right)^{3/2}}{\left(1-\lambda^{2}\right)^{1/2}}\right)+c_{2} + o(1) \;, \end{equation} where $\mu=2N+\gamma+1$ and $\lambda^{2}=\frac{\gamma^{2}-\frac{1}{4}}{\mu^{2}}$. Expressing this result in terms of $N$ (rather than $\mu$), we obtain to leading order for large $N$ (by replacing $\mu\to2N+\gamma$, $\lambda^{2}\to\frac{\gamma^{2}}{\left(2N+\gamma\right)^{2}}$) \begin{equation} \label{eq:LUENumberVariance} 2\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{{\rm LUE}} = \log\left(4\sqrt{2}\sqrt{2+\tilde{\gamma}}\tilde{a}N\frac{\left(1-\frac{2\tilde{a}^{2}}{\left(2+\tilde{\gamma}\right)}-\frac{\tilde{\gamma}^{2}}{8\tilde{a}^{2}\left(2+\tilde{\gamma}\right)}\right)^{3/2}}{\left(1-\frac{\tilde{\gamma}^{2}}{\left(2+\tilde{\gamma}\right)^{2}}\right)^{1/2}}\right)+c_{2} + o(1) \;, \end{equation} where $\tilde{\gamma} = \gamma/N$ and $\tilde{a}=a/\sqrt{4N}$. Finally, we use \eqref{eq:LUENumberVariance} in the conjecture $\beta\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta)}-c_{\beta}=2\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta=2)}-c_{2}+o(1)$ and this leads to Eq.~\eqref{eq:WLBetaEVariance}.
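This replacement is easily validated numerically (a minimal sketch; the values of $\gamma$ and of the rescaled endpoint $\tilde a$ are illustrative, and the common constant $c_2$ is dropped from both expressions):

\begin{verbatim}
import numpy as np

# The mu-form versus the N-form (eq:LUENumberVariance) of the variance;
# their difference must vanish as N grows.
gamma, ta = 3.0, 0.5                       # ta = a/sqrt(4N)
for N in [10**2, 10**3, 10**4, 10**5]:
    a = ta*np.sqrt(4.0*N)
    mu = 2.0*N + gamma + 1.0
    l2 = (gamma**2 - 0.25)/mu**2
    v_mu = np.log(mu) + np.log(4.0*(a/np.sqrt(2.0*mu))
           *(1.0 - a**2/(2.0*mu) - l2*mu/(2.0*a**2))**1.5/np.sqrt(1.0 - l2))
    tg = gamma/N
    v_N = np.log(4.0*np.sqrt(2.0)*np.sqrt(2.0 + tg)*ta*N
          *(1.0 - 2.0*ta**2/(2.0 + tg) - tg**2/(8.0*ta**2*(2.0 + tg)))**1.5
          /np.sqrt(1.0 - tg**2/(2.0 + tg)**2))
    print(N, v_mu - v_N)                   # decays as O(1/N)
\end{verbatim}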
For the WL$\beta$E and a general interval in the bulk, a calculation similar to that which leads to \eqref{eq:WLBetaEVariance} yields the number variance (based on our result for $\beta=2$ in \cite{SDMS}, together with our conjecture \eqref{prediction} with \cite{footnote:gamma}) \begin{equation} \frac{\beta\pi^{2}}{2}{\rm Var}{\cal N}_{[a,b]}^{(\beta)}=\log\left(8N\sqrt{1+\frac{\tilde{\gamma}}{2}}\sqrt{\tilde{a}\tilde{b}\kappa_{\tilde{a}}^{3}\kappa_{\tilde{b}}^{3}}\frac{|\tilde{a}^{2}-\tilde{b}^{2}|}{\tilde{a}^{2}+\tilde{b}^{2}-4\frac{\tilde{a}^{2}\tilde{b}^{2}}{2+{\tilde \gamma}}-\frac{\tilde{\gamma}^{2}}{4\left(2+\tilde{\gamma}\right)}+2\tilde{a}\tilde{b}\kappa_{\tilde{a}}\kappa_{\tilde{b}}}\right)+c_{\beta} + o(1) \end{equation} where \begin{equation} \tilde{a}=\frac{a}{\sqrt{2\beta N}} \;,\quad \tilde{\gamma}=\frac{2\gamma}{N\beta} \; ,\quad \kappa_{\tilde{a}}=\left(1-2\frac{\tilde{a}^{2}}{2+\tilde \gamma}-\frac{\tilde{\gamma}^{2}}{8\left(2+\tilde{\gamma}\right)\tilde{a}^{2}}\right)^{1/2} \end{equation} and $\tilde{b}$ and $\kappa_{\tilde{b}}$ are defined similarly. For the fermions in the hard box potential \begin{equation} V\left(x\right)=\begin{cases} 0 & x\in\left[0,\pi\right]\\ \infty & x\notin\left[0,\pi\right] \end{cases} \end{equation} which can be obtained as the limit of the J$\beta$E for $\gamma_1 = \gamma_2 = 1/2$, we obtained the number variance for semi-infinite and finite intervals in \cite{SDMS} for the case $\beta=2$. Using these results together with the conjecture \eqref{prediction} and its analog $\beta\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta)}-c_{\beta}=2\pi^{2}{\rm Var}{\cal N}_{[0,a]}^{(\beta=2)}-c_{2}+o(1)$ for semi-infinite intervals, we find \begin{eqnarray} {\rm Var}{\cal N}_{[0,a]} &=& \frac{1}{\beta\pi^{2}}\left(\log N+\log\left|\sin a\right|+\log2+c_{\beta}+o\left(1\right)\right)\;,\\ {\rm Var}{\cal N}_{[a,b]}&=&{\rm Var}{\cal N}_{[0,a]}+{\rm Var}{\cal N}_{[0,b]}+\frac{2}{\beta\pi^{2}}\log\left|\frac{\sin\frac{a-b}{2}}{\sin\frac{a+b}{2}}\right|+o(1)\;. \end{eqnarray} \section{Checks of the conjecture for the cumulants near the edge}\label{app:checks} In the text we have conjectured that the cumulants of order $3$ and higher of the number of eigenvalues in an interval of macroscopic size in the bulk are identical for the C$\beta$E and for the G$\beta$E. This conjecture has led to predictions for fermion models. Here we provide a test of this conjecture for $\beta=1,2,4$ by showing that it matches perfectly the rigorous results obtained for the G$\beta$E at the edge by Bothner and Buckingham \cite{BK18}. The methods used in \cite{BK18} and in \cite{FyodorovLeDoussal2020,ForresterFrankel2004} being completely different, this is a quite non-trivial check. In Ref. \cite{BK18} Bothner and Buckingham study the eigenvalues $\lambda_i$ of $N \times N$ random matrices belonging to the GUE, GOE and GSE ensembles near the edge, where, for large $N$, they scale as \begin{equation} \lambda_i \simeq \sqrt{2 N} + \frac{1}{\sqrt{2} N^{1/6}} a_i^\beta \label{airybeta} \end{equation} where the $a_i^\beta$ form the Airy$_\beta$ point process. They study ${\cal N}_{[s,+\infty[}$, the number of points $a_i^\beta$ in the interval $[s,+\infty[$.
For $v$ real, they prove that, for $\beta=1,2,4$ and in the limit $s \to - \infty$ (i.e., towards the bulk) \begin{eqnarray} \label{eq:BKbeta2} \log \left\langle e^{-v{\cal N}_{\left[s,+\infty\right[}}\right\rangle = -\frac{2v}{3\pi}(-s)^{3/2}+\frac{v^{2}}{2\beta\pi^{2}}\log\left(8 k_\beta (-s)^{3/2}\right) + \chi_\beta(v) + o(1) \end{eqnarray} where $k_1=k_2=1$ and $k_4=2$ and \begin{equation} \label{eq:chi} \chi_{\beta}(v)=\begin{cases} \frac{1}{2}\mathcal{G}\left(v\right)+\frac{1}{2}\log\frac{2}{1+e^{v}}\quad, & {\rm for}\quad\beta=1\;,\\[0.2cm] \mathcal{G}\left(\frac{v}{2}\right)\quad, & {\rm for}\quad\beta=2\;,\\[0.2cm] \frac{1}{2}\mathcal{G}\left(\frac{v}{2}\right)+\log\left(\frac{1}{2}\left(\frac{1+\sqrt{1-e^{-v}}}{1-\sqrt{1-e^{-v}}}\right)^{1/4}+\frac{1}{2}\left(\frac{1-\sqrt{1-e^{-v}}}{1+\sqrt{1-e^{-v}}}\right)^{1/4}\right)\;, & {\rm for}\quad\beta=4\;, \end{cases} \end{equation} where $\mathcal{G}\left(v\right)=\log\left(G\left(1+\frac{iv}{\pi}\right)G\left(1-\frac{iv}{\pi}\right)\right)$, and where we recall that $G(z)$ is the Barnes G-function \cite{BarnesFunctionWikipedia}. We now compare these results with our predictions. Let us denote $\lambda_i=e^{i \theta_i}$ with $\theta_i \in [0,2 \pi]$ the $N$ eigenvalues for the C$\beta$E. Consider the number ${\cal N}_{[0,\theta]}$ of eigenvalues with $\theta_i \in [0,\theta]$. The general conjecture for the FCS \cite{FyodorovLeDoussal2020} is (as also given in the text in \eqref{FCS1}) \begin{equation} \label{FCS2} \log\left\langle e^{2\pi\sqrt{\frac{\beta}{2}}t ({\cal N}_{[0,\theta]} - \langle {\cal N}_{[0,\theta]} \rangle) }\right\rangle = 2t^{2}\log N+t^{2}\log\left(4\sin^{2}\frac{\theta}{2}\right)+2\log|A_{\beta}(t)|^{2} \end{equation} up to terms that vanish in the large $N$ limit \cite{foot_t}. One has \cite{ForresterFrankel2004} \begin{equation} A_{\beta}(t)=\begin{cases} 2^{-t^{2}/2}\frac{G\left(1+\frac{it}{\sqrt{2}}\right)G\left(\frac{3}{2}+\frac{it}{\sqrt{2}}\right)}{G(1)G(3/2)}\quad, & {\rm for}\quad\beta=1\\[0.2cm] G(1+it)\quad, & {\rm for}\quad\beta=2\\[0.2cm] \frac{G\left(1+\frac{it}{\sqrt{2}}\right)G\left(\frac{1}{2}+\frac{it}{\sqrt{2}}\right)}{G(1)G(1/2)}\quad, & {\rm for}\quad\beta=4 \end{cases} \end{equation} Our conjecture presented in the text implies the following: consider the number ${\cal N}_{[\lambda,\lambda']}$ of eigenvalues in the G$\beta$E of size $N \times N$. The FCS generating function $\log\left\langle e^{2\pi\sqrt{\frac{\beta}{2}}t {\cal N}_{[\lambda,\lambda']} }\right\rangle$ has the same expression as \eqref{FCS2} up to terms of order $O(t)$ and $O(t^2)$ (which correspond to the first and second cumulants). This means that all cumulants of order $3$ and higher coincide. The same formula holds for the semi-infinite interval, i.e., for ${\cal N}_{[\lambda,+\infty)}$, upon dividing the r.h.s. of formula \eqref{FCS2} by a factor of $2$. It is this prediction for the G$\beta$E, valid for $\lambda$ in the bulk, that we can now compare with the result \eqref{eq:BKbeta2}, valid for $\lambda$ in the edge region, i.e., as in \eqref{airybeta}. Indeed the latter result is valid asymptotically for $s \to - \infty$, which corresponds to the limit towards the bulk. We now show that the matching occurs perfectly (without any intermediate regime). To compare \eqref{FCS2} and \eqref{eq:BKbeta2} we note the identification \begin{equation} v = 2 \pi \sqrt{\frac{\beta}{2}} \, t \quad , \quad {\rm equivalently} \quad t = \sqrt{\frac{2}{\beta}} \frac{v}{2 \pi} \;. \end{equation} We now discuss the three cases separately.
\\ {\it Case $\beta=2$}. In that case $t= \frac{v}{2 \pi}$. One checks that $1/2$ times the last term in \eqref{FCS2} is equal to $\log\left|G\left(1+it\right)\right|^{2}=\log\left|G\left(1+\frac{iv}{2\pi}\right)\right|^{2}$ which is the last term in \eqref{eq:BKbeta2}-\eqref{eq:chi}. Hence the terms of order $v^3$ and higher exactly coincide in the two formulae. \\ {\it Case $\beta=1$}. In that case $t= \frac{v}{\sqrt{2} \pi}$. We need to compare $1/2$ times the last term in \eqref{FCS2}, which is equal to $\log\left|\frac{G\left(1+\frac{it}{\sqrt{2}}\right)G\left(\frac{3}{2}+\frac{it}{\sqrt{2}}\right)}{G(1)G(3/2)}\right|^{2}=\log\left|\frac{G\left(1+\frac{iv}{2\pi}\right)G\left(\frac{3}{2}+\frac{iv}{2\pi}\right)}{G(1)G(3/2)}\right|^{2} $, with the corresponding term in \eqref{eq:BKbeta2}-\eqref{eq:chi}, which reads $\frac{1}{2}\log\left( \frac{2}{1+e^{v}} G\left(1+\frac{iv}{\pi}\right)G\left(1-\frac{iv}{\pi}\right)\right)$. A priori the identification looks hopeless! However, there exists a remarkable ``duplication relation'' between Barnes functions, for $v$ real, \begin{equation} \label{217} \left|\frac{G\left(1+\frac{iv}{2\pi}\right)G\left(\frac{3}{2}+\frac{iv}{2\pi}\right)}{G(1)G(3/2)}\right|^{4}=\left|G\left(1+\frac{iv}{\pi}\right)\right|^{2}\frac{2}{1+e^{v}}e^{v/2}2^{v^{2}/\pi^{2}} \end{equation} which we checked explicitly using Mathematica (it is presumably equivalent to the relation (3.5) in \cite{Duplication}). Hence, once again, the terms of order $v^3$ and higher exactly coincide in the two formulae mentioned above. \\ {\it Case $\beta=4$}. In that case $t= \frac{v}{2 \sqrt{2} \pi}$. One checks that $1/2$ times the last term in \eqref{FCS2} is $\log\left|\frac{G\left(1+\frac{it}{\sqrt{2}}\right)G\left(\frac{1}{2}+\frac{it}{\sqrt{2}}\right)}{G(1)G(1/2)}\right|^{2}=\log\left|\frac{G\left(1+\frac{iv}{4\pi}\right)G\left(\frac{1}{2}+\frac{iv}{4\pi}\right)}{G(1)G(1/2)}\right|^{2}$. This must be compared with the last line of \eqref{eq:chi}. This looks even more hopeless than for $\beta=1$. However, using Mathematica we have discovered the identity, valid for real $v$ (where the right hand side appears to be an even function of $v$), \begin{eqnarray} \label{218} && \log\frac{\left|\frac{G\left(1+\frac{iv}{4\pi}\right)G\left(\frac{1}{2}+\frac{iv}{4\pi}\right)}{G(1)G(1/2)}\right|^{4}}{\left|G\left(1+\frac{iv}{2\pi}\right)\right|^{2}} \nonumber\\ && =2\log\left(\frac{1}{2}\left(\frac{1+\sqrt{1-e^{v}}}{1-\sqrt{1-e^{v}}}\right)^{1/4}+\frac{1}{2}\left(\frac{1-\sqrt{1-e^{v}}}{1+\sqrt{1-e^{v}}}\right)^{1/4}\right)+\frac{v}{4}+\frac{v^{2}}{4\pi^{2}}\log2 \end{eqnarray} whose derivation we leave as a challenge to the reader \cite{foot_bothner}. Thus, also for $\beta=4$, the terms of order $v^3$ and higher exactly coincide in the two formulae mentioned above. \section{Matching the variance near the edge for the harmonic oscillator} \label{appendix:varianceEdge} In this Appendix we show that our bulk result for the variance for the harmonic oscillator for general $\beta$, given in \eqref{eq:NumberVarianceGOEatoinfinity}, matches for $\beta=1,2,4$ the universal edge behavior obtained in \cite{BK18}. The FCS formula \eqref{eq:BKbeta2} in Appendix \ref{app:checks} was obtained in \cite{BK18} for the G$\beta$E. We now translate it to the context of the fermion model in the harmonic potential $V(x)=\frac{1}{2} x^2$. The connection is simply a scale transformation $x_i= \sqrt{ \frac{\beta}{2}} \lambda_i$, see Table \ref{table:mappings}.
For general $\beta$ the right edge is thus at position $x^{+}=\sqrt{\beta N}$ and the width of the edge region is $w_{N}=\frac{\sqrt{\beta}}{2}N^{-1/6}$. To obtain the FCS for the number of fermions ${\cal N}_{\left[a,\infty\right[}$ in the semi-infinite interval $\left[a,\infty\right[$, we can simply replace in the formula \eqref{eq:BKbeta2} \begin{equation} \label{stohata} s \to \hat a = \frac{a-x^{+}}{w_{N}} \quad , \quad {\cal N}_{[s,+\infty[} \to {\cal N}_{\left[a,\infty\right[} \;. \end{equation} We begin with the simplest case, $\beta=2$. Using the expansion of the Barnes-G function~\cite{BarnesFunctionWikipedia} \begin{equation} \label{eq:BarnesGExpansion} \log G\left(1+z\right)=\frac{\log\left(2\pi\right)-1}{2}z-\frac{\left(1+\gamma_E\right)}{2}z^{2}+\sum_{k=2}^{\infty}\left(-1\right)^{k}\frac{\zeta\left(k\right)}{k+1}z^{k+1} \end{equation} we find the leading terms in the expansion of \eqref{eq:BKbeta2} in powers of $v$: \begin{equation} \label{eq:BothnerExpansionGUE} \log \left\langle e^{-v{\cal N}_{\left[a,+\infty\right[}}\right\rangle = -\frac{2v}{3\pi}\left(-\hat a \right)^{3/2}+\frac{\log\left[8\left(- \hat a \right)^{3/2}\right]+1+\gamma_E}{2\pi^{2}}\frac{v^{2}}{2}+ O(v^4). \end{equation} The coefficient of $v^2 /2$ in the expansion of \eqref{eq:BothnerExpansionGUE} corresponds to the second cumulant (the variance) and therefore it gives the asymptotic behavior of the scaling function $ \mathcal{V}_{2}\left(\hat{a}\right)$ from~\eqref{edgeV2} for $\hat a \to -\infty$ as \begin{equation} \label{eq:V2Asymptotic} \mathcal{V}_{2}\left(\hat{a}\right)\simeq 2 \frac{\frac{3}{2}\log\left(-\hat{a}\right)+c_{2}+2\log2}{2\pi^{2}} \;, \end{equation} which, together with \eqref{edgeV2}, matches exactly the bulk result \eqref{asympt2}. Similarly, for $\beta=1,4$ for the harmonic oscillator it was conjectured \cite{MMSV14, MC2020} that there exist universal scaling functions such that in the edge region \begin{equation} \label{eq:VBetaScalingDef} \text{Var}{\cal N}_{\left[a,\infty\right[}\simeq \frac{1}{2} {\cal V}_{\beta}\left(\frac{a-x^{+}}{w_{N}}\right) \;. \end{equation} Applying the correspondence \eqref{stohata} we obtain from \eqref{eq:BKbeta2} \begin{equation} \! \log\left\langle e^{-v{\cal N}_{\left[a,+\infty\right[}}\right\rangle \! = \! \begin{cases} -\frac{2v}{3\pi}\left(-\hat{a}\right)^{3/2}-\frac{v}{4}+\frac{\frac{3}{2}\log\left(-\hat{a}\right)+1+\gamma_{E}+3\log2-\frac{\pi^{2}}{8}}{\pi^{2}}\frac{v^{2}}{2}+O(v^{4}), & \beta=1,\\ -\frac{2v}{3\pi}\left(-\hat{a}\right)^{3/2}+\frac{v}{8}+\frac{\frac{3}{2}\log\left(-\hat{a}\right)+1+\gamma_{E}+4\log2+\frac{\pi^{2}}{8}}{4\pi^{2}}\frac{v^{2}}{2}+O(v^{4}), & \beta=4. \end{cases} \end{equation} Again, the coefficients of $v^2 / 2$ in these expansions give the asymptotic behaviours of $\mathcal{V}_{1}\left(\hat{a}\right)$ and $\mathcal{V}_{4}\left(\hat{a}\right)$, which, together with \eqref{eq:V2Asymptotic}, can be summarized as \begin{equation} \label{eq:VBetaAsymptotic2} \mathcal{V}_{\beta}\left(\hat{a}\right)\simeq 2 \frac{\frac{3}{2}\log\left(-\hat{a}\right)+c_{\beta}+2\log2}{\beta\pi^{2}},\qquad-\hat{a}\gg1,\quad\beta\in\left\{ 1,2,4\right\} \, , \end{equation} which matches the bulk result \eqref{eq:NumberVarianceGOEatoinfinity}. The leading order (logarithmic) term in \eqref{eq:VBetaAsymptotic2} was conjectured in \cite{MMSV14} for any $\beta$ based on the expected matching with the bulk.
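We finally remark that Barnes-function identities such as \eqref{217} are straightforward to confirm numerically to high precision, for instance with the mpmath library (a minimal sketch; the sample points are illustrative):

\begin{verbatim}
from mpmath import mp, mpc, barnesg, exp, pi, fabs

# Numerical check of the Barnes G "duplication relation" (217), real v.
mp.dps = 30
for v in [0.3, 1.0, 2.5]:
    lhs = fabs(barnesg(mpc(1, v/(2*pi)))*barnesg(mpc(1.5, v/(2*pi)))
               /(barnesg(1)*barnesg(1.5)))**4
    rhs = (fabs(barnesg(mpc(1, v/pi)))**2 * 2/(1 + exp(v))
           *exp(v/2)*2**(v**2/pi**2))
    print(v, lhs/rhs)   # ratio equal to 1 to working precision
\end{verbatim}

\end{appendix}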
\section{Introduction} Dwarf galaxies are easily the most common type of galaxy in the Universe, and yet their very origin remains a subject of much debate.
They can be found in vast numbers in the cluster environment - our nearest large cluster, the Virgo cluster, contains over a thousand dwarfs. The understanding of the formation and evolution of cluster dwarfs is hampered by their intrinsic properties of low luminosity and low surface brightness, causing them to be challenging objects to observe and study. Intuitively, it might be expected that their low mass, and hence shallow potential wells, cause them to be extremely sensitive to their local environment. As such, cluster dwarfs ought to provide sensitive indicators of the effects of mechanisms occurring in the high density environment. The Virgo cluster catalogue (\citealp{VCC}) continues to provide an invaluable tool for the study of dwarf galaxies in the Virgo cluster. Complete down to an apparent magnitude of $B_T=18$, and with a surprisingly accurate morphological classification that has, on the whole, stood the test of time, the Virgo cluster catalogue (VCC) clearly demonstrates that dwarf galaxies are far from a homogeneous family of galaxies. Dwarf ellipticals are the most common type of cluster dwarf - typically containing no current star formation, and an old stellar population. Dwarf ellipticals can be additionally split into further categories, with some containing a compact, central stellar nucleus - the nucleated dwarf ellipticals - and dwarf spheroidals, generally considered to be low luminosity analogues of dwarf ellipticals. Not all the dwarfs are `red and dead', with many containing active star formation, with knotty HII regions clearly visible in the Las Campanas Observatory photographic plates utilised to create the VCC. These star forming dwarfs are further split into sub-categories: dwarf irregulars, containing atomic gas and with evidence of rotation (\citealp{Zee2004}), and blue compact dwarfs, whose gas and star formation is centrally concentrated and which also show significant rotation (\citealp{Zee2001b}). A further classification is made for `transition-types', objects whose properties are somewhere mid-way between dwarf ellipticals and dwarf irregulars, and which may form a possible evolutionary link between the two classes. The large variety in dwarf galaxy properties makes a single theory for their creation highly challenging. With the release of the SDSS data set covering the Virgo region, further progress was made in the observation of cluster dwarfs. A series of papers (\citealp{Lisker2006,Lisker2006b,Lisker2007}) have provided a quantitative study of over 400 Virgo dwarf galaxies, suggesting that the VCC dwarf classifications can be further sub-categorised. The improved imaging quality and additional colour information have shown that many dwarf ellipticals thought to be typically `red and dead' in fact contain blue centres indicating recent or current central star formation. Furthermore, dwarf ellipticals that appear smooth and featureless present intricate spiral patterns and discs when an unsharp mask technique is applied. \citet{Lisker2007} stress the importance of recognising these sub-categories and considering them separately when attempting to unravel dwarf galaxy origins, and conclude that multiple environmental mechanisms are required to explain the diverse properties of cluster dwarfs. Current suggested dwarf galaxy formation and evolution scenarios are numerous and varied.
The $\Lambda$CDM paradigm suggests that current day dwarf galaxies are the left over debris from the hierarchical merging of haloes. While $\Lambda$CDM simulations predict the presence of an extended dark matter halo about each dwarf galaxy, they make less detailed predictions about the properties of the baryonic content of a galaxy. It is necessary to include physical mechanisms that influence the visible components of dwarf galaxies in order to bring the simulations into agreement with observations (\citealp{Bullock2000}). These physical mechanisms can be broadly split into two categories; global mechanisms that affect all dwarf galaxies regardless of their location, such as supernovae feedback (\citealp{Dekel1986}), and environmentally dependent mechanisms. One such environmental mechanism, known as harassment, is believed to be of importance in high density environments (\citealp*{Moore1998}, \citealp{Mastropietro2005}). Numerous high speed tidal encounters within a cluster act to morphologically transform medium mass, low surface brightness disc galaxies into smaller galaxies - cluster dwarfs. A second scenario suggests that ram pressure stripping acts on in-falling galaxies that are already dwarfs - late-type dwarfs - and that this can produce galaxies that resemble cluster dwarf ellipticals (\citealp{Zee2004}, \citealp{Boselli2008}). These two scenarios have one thing in common - a newly infalling disc galaxy is subjected to cluster environmental mechanisms causing transformation into an object resembling a cluster dwarf elliptical. Indeed, radial velocity measurements of cluster dwarf ellipticals in the Virgo cluster do indicate that they are far from a virialised population within the cluster potential, bearing signatures of a recent in-fall (\citealp{Conselice2001}). Meanwhile, dwarf galaxies obey their own morphology-density relationship; early-type dwarfs have been said to be the most strongly clustered of all galaxies, whereas gas-rich dwarfs are the most weakly clustered (\citealp{Ferguson1994}). As cluster-centric distance increases, the ratio of dwarf ellipticals to dwarf irregulars decreases (\citealp{Sabatini2005}). These two observations combined may suggest that many cluster dwarf ellipticals have their origin in infalling late-type dwarfs. Further evidence for the transformation of discs is present in the form of hidden disc features, in some cases even including fine spiral arm structure (\citealp{Lisker2006b}, \citealp{Lisker2007}), in Virgo cluster dwarf ellipticals. It is as if the original discy nature of the galaxies has yet to be completely removed by the cluster environmental mechanisms. This is additionally supported by evidence of significant rotation in a sub-sample of the dwarf ellipticals in \citet{Zee2004b}. Remarkably, the rotating dwarf ellipticals and dwarf irregulars both appear to obey the Tully-Fisher relation for bright late-type spirals. If we assume that a substantial fraction of present-day dwarf ellipticals are indeed accreted as dwarf discs into the cluster, the next question is whether it is harassment or ram pressure stripping that is the key environmental influence driving the transformation.
As ram pressure stripping has been shown to remove significant amounts of atomic hydrogen from {\it giant} spiral galaxies within the Virgo cluster (\citealp{Chung2008}, \citealp{Vollmer2002}), it should be of little surprise that ram pressure can strip small late-type dwarfs of their HI gas content, causing a cessation of star formation, and evolving the dwarfs from blue to red colours, typical of cluster dwarf ellipticals (\citealp{Boselli2008}). However, some form of physical morphological transformation may additionally be required to transform dwarfs from thin discs to spheroidals. One suggestion is that both mechanisms are required - ram pressure halts the star formation, while harassment thickens up the disc. In the literature (\citealp{Moore1998}), harassment refers to the effects of repeated and numerous long range tidal encounters in the potential well of the cluster. The tidal forces experienced by a galaxy within a cluster can be broadly split into two categories - tidal heating and tidal shocking. The key difference between these categories is the time-scale on which the galaxy experiences the tidal force. For tidal shocking the time-scale is short - for example, in a high speed tidal encounter between two cluster galaxies, a strong but short-lived tidal force may be experienced by the two galaxies. Tidal heating occurs when the time-scale is far longer - for example, a cluster galaxy that passes close to the cluster centre spends a significant length of time under the influence of the deep potential well of the cluster, and may be entirely dismantled if the resulting tidal forces are in excess of the galaxy's self-gravity. Harassment is the combined effect of both tidal heating (from the tidal forces associated with the deep potential well of the cluster) and tidal shocking (from the tidal forces associated with the high speed galaxy-galaxy encounters) on a cluster galaxy. The aim of this work is to study the effects of harassment on newly infalling, late-type dwarf irregulars. This is accomplished using a new and fast algorithm to model a galaxy cluster's tidal fields that produces similar results to far more complex models set in a cosmological context. In concordance with cosmological models, both the dwarf galaxy models and the cluster tidal field utilise cuspy cold dark matter haloes. The dwarf galaxy models are of significantly lower mass ($\sim 10^9$--$10^{10}$ M$_\odot$) than those of previous studies of harassment on dwarf galaxies; the \citet{Mastropietro2005} dwarf models were of mass $7 \times 10^{10}$ M$_\odot$ in comparison. Studying the effect of harassment on dwarfs in a lower mass regime is interesting for a number of reasons - obviously it extends harassment studies to cover a wider range of the parameter space. Additionally, while larger dwarfs may be harassed into smaller objects resembling dwarf ellipticals (\citealp{Mastropietro2005}), the fate of smaller mass dwarfs is less clear. A weakly harassed low mass dwarf may be able to complete a morphological transformation into an object resembling a dwarf elliptical without significant mass-loss. Alternatively, a heavily harassed low mass dwarf may be entirely destroyed in the process. For these low mass dwarfs, the significance of tidal heating from the potential well of the cluster alone can be estimated using a Roche limit analysis. When the tidal forces of the cluster potential well exceed the dwarf galaxy's own self-gravity, the dwarf will be entirely dismantled.
For our standard dwarf model, and for the cluster potential utilised (see Section \ref{harassmodel}), this can be expected to occur at a radius of less than $\sim 50$ kpc within the cluster. Hence only dwarfs on extremely plunging orbits are significantly influenced by tidal heating. Thus, for the orbits we consider (see Section \ref{simeffects}), whose peri-cluster distance is $\sim 200$ kpc, the effects of tidal heating alone are expected to be weak. We can also analytically estimate the effects of tidal shocking from galaxy-galaxy encounters for our low-mass dwarf models. Although it seems intuitive that the lowest mass galaxies might be expected to be the most significantly affected by tidal encounters, this is not necessarily the case. The strength of the effects of a high-speed encounter can be expected to depend on the dynamical time at the scale-radius, $r_s$, of the dwarf ($t_{dyn} \sim \left[r_s^3/(G\,M(<r_s))\right]^{1/2}$, where $G$ is the gravitational constant). When this dynamical time is long compared to the passage-time of a passing harasser galaxy, then the dwarf can respond effectively to the encounter. It is interesting to note that in the dwarf models considered here, for material surrounding the disc of the galaxy, $t_{dyn}$ is of the same order as the passing time of a harasser galaxy. Despite their lowered mass, these dwarfs are significantly more compact, and hence resilient to harassment. These properties are not peculiar to our models - in this mass regime, this is to be expected. Although tidal heating and shocking are expected to be weak for the dwarfs we consider, strong tidal shocking between galaxies in combination with tidal heating can still produce significant effects. These effects are studied in detail on model dwarf galaxies in Section \ref{simeffects}. We attempt to quantify the likelihood of significant harassment in Section \ref{Monty}, and test how the likelihood varies for a variety of orbits in Section \ref{orbitsection}. \newline \noindent The paper is organised as follows. In Section 2, the numerical code utilised in all simulations is discussed, and in Section 3 the methods used to build model dwarf galaxies are shown. The harassment model is presented in Section 4. In Section 5, the influence of harassment on 11 in-falling dwarf galaxy models is shown. The statistical influences of harassment are found by means of a Monte-Carlo simulation in Section 6, and tested for a variety of orbits. \section[]{The code} In this study we make use of `gf' (\citealp{Williams2001,Williams1998}), which is a Treecode-SPH algorithm that operates primarily using the techniques described in \citet{Hernquist1989}. `gf' has been parallelised to operate simultaneously on multiple processors to decrease simulation run-times. While the Treecode allows for rapid calculation of gravitational accelerations, the SPH code allows us to include an HI gas component in our dwarf galaxy models. In all simulations, the gravitational softening length, $\epsilon$, is fixed for all particles at a value of 100 pc, in common with the harassment simulations of \citet{Mastropietro2005}. Gravitational accelerations are evaluated to quadrupole order, using an opening angle $\theta_c=0.7$. A second order individual particle timestep scheme was utilised to improve efficiency, following the methodology of \citet{Hernquist1989}.
Each particle was assigned a time-step that is a power-of-two division of the simulation block timestep, with a minimum timestep of $\sim$0.5 yrs. Assignment of time-steps for collisionless particles is controlled by the criteria of \citet{Katz1991}, whereas SPH particle timesteps are assigned using the minimum of the gravitational time-step and the SPH Courant conditions with a Courant constant, $C$=0.1 (\citealp{Hernquist1989}). As discussed in \citet{Williams2004}, the kernel radius $h$ of each SPH particle was allowed to vary such that at all times it maintains between 30 and 40 neighbours within 2$h$. In order to realistically simulate shocks within the SPH model, the artificial viscosity prescription of \citet{Gingold1983} is used with viscosity parameters $(\alpha,\beta)$ = (1,2). The equation of state for the gas component of the galaxies is isothermal with a bulk velocity dispersion of 7.5 km s$^{-1}$, based on the measured velocity dispersion of molecular clouds in the local interstellar medium (\citealp{Stark1989}). By choosing an isothermal equation of state, we are intrinsically assuming that stellar feedback processes are balanced by radiative cooling, producing a constant velocity dispersion. `gf' contains a simple star formation prescription, where each SPH particle converts its mass into stellar mass at a rate controlled by a Schmidt law with a power of 1.5. When the global stellar mass formed in this manner reaches a specified mass, a specified number of new N-body star particles are formed. The new particles have their positions chosen statistically such that the recent star formation history of individual SPH particles is taken into account. For details of code testing, please refer to \citet{Williams1998}. \section{Building model dwarf galaxies} Our dwarf galaxy models consist of three components: an NFW dark matter halo (\citealp*{Navarro1996}), an exponential disc of gas and one of stars. The methods used to form each component will be discussed in the following sections. \subsection{The dark matter halo} In the current standard model of galaxy formation, galaxy mass is dominated by an unseen dark matter component surrounding the disc. This provides the additional gravitational force required to explain the observed velocities, while simultaneously producing the observed flat rotation curves. \citet{Navarro1996} suggest that the density profiles of dark haloes have a universal shape (the NFW profile), for haloes that range in mass from dwarf galaxies to galaxy clusters. The NFW profile has the form: \begin{equation} \rho_{NFW}(r) = \frac{\rho_0}{(\frac{r}{r_s})(1+\frac{r}{r_s})^2} \label{NFWdensprof} \end{equation} \noindent where $r_s$ is a characteristic radial scale-length. The profile is truncated at the virial radius, $r_{200}=r_s c$. Here $c$ is the concentration parameter (\citealp{Lokas2001}). $c$ is found to have a range of values in cosmological simulations; however, there is a general trend towards higher values in less massive systems, with some scatter - see figure 8 of \citet{Navarro1996}. A typical value for cluster-mass objects is $c \sim 4$, whereas dwarf galaxies can have $c \sim 20$. For our standard dwarf galaxy model we choose a total mass of $10^{10}$M$_\odot$, and $c$=20.
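The derived properties of this standard halo follow directly from the definitions above, and can be sketched as follows (we assume $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and identify $M_{200}$ with the total mass, neither of which is specified at this point in the text; the encounter speed and impact scale used for the time-scale comparison of Section 1 are equally illustrative):

\begin{verbatim}
import numpy as np

# Derived quantities for the standard halo: M200 ~ 1e10 Msun, c = 20.
# Assumptions: H0 = 70 km/s/Mpc; an encounter speed of ~1000 km/s over
# a ~50 kpc scale is used as an illustrative harasser passage time.
G, H0 = 4.301e-9, 70.0               # Mpc Msun^-1 (km/s)^2 ; km/s/Mpc
M200, c = 1.0e10, 20.0

rho_crit = 3.0*H0**2/(8.0*np.pi*G)                   # Msun/Mpc^3
r200 = (3.0*M200/(4.0*np.pi*200.0*rho_crit))**(1/3)  # Mpc
r_s = r200/c
m = lambda x: np.log(1.0 + x) - x/(1.0 + x)          # NFW enclosed-mass shape
M_rs = M200*m(1.0)/m(c)                              # mass within r_s

kGyr = 977.8                                         # 1 Mpc/(km/s) in Gyr
t_dyn = np.sqrt(r_s**3/(G*M_rs))*kGyr                # dynamical time at r_s
t_enc = (0.05/1000.0)*kGyr                           # harasser passage time

print(f"r200 = {r200*1e3:.1f} kpc, r_s = {r_s*1e3:.2f} kpc")
print(f"t_dyn(r_s) = {t_dyn*1e3:.0f} Myr vs t_enc ~ {t_enc*1e3:.0f} Myr")
\end{verbatim}

\noindent Both time-scales come out at $\sim 50$ Myr, consistent with the order-of-magnitude argument of Section 1.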
Positions and velocities are assigned to the dark matter particles using the distribution function described in \citet{Widrow2000}. Dark matter haloes produced in this manner are tested by evolving the initial conditions using the numerical simulation code, and the evolution of the density profile is observed. Tests show that after 0.5 Gyrs, transient effects have settled, and the halo remains stable. Due to the abrupt truncation of the density distribution at $r_{200}$, $\sim 2 \%$ of the halo mass can expand outwards at the truncation, increasing its radius by $\sim 25 \%$, but this has a negligible effect on the density distribution surrounding the disc. \subsection{The galaxy discs} Within the centre of the NFW halo is a rotating stellar and gas disc. The stellar and gas discs of typical spiral galaxies are observed to be exponential, albeit with roughly a factor of two difference between the gas and stellar scalelengths (\citealp{Boselli2006}): \begin{equation} \label{expdisc} \Sigma(r) = \Sigma_0 \exp (-r/r_s) \end{equation} \noindent where $\Sigma$ is the surface density, $\Sigma_0$ is the central surface density, $r$ is the radius within the disc, and $r_s$ is the scale-length of the disc. In order to choose a reasonable scale-length for the stellar component of the disc, we follow the recipe of \citet{Mo1998} for producing discs within the $\Lambda$CDM paradigm. This recipe has been shown to reproduce the slope and scatter of the Tully-Fisher relationship, as well as the observed scatter in the size-rotation velocity plane. Having first chosen a halo mass, the disc mass $M_d$ is assumed to be a fixed fraction $m_d$ of the halo mass. Additionally, the angular momentum of the disc is assumed to be a fixed fraction $j_d$ of the halo's angular momentum. \citet{Mo1998} find $m_d \sim j_d$ and $m_d \leq$0.05 to reproduce observations, and we choose $m_d$=0.05 for all galaxy models. The scale-length of the disc is then fully defined by the properties of its host halo: \begin{equation} r_s = \frac{\lambda G M_{200}^{3/2}}{2 V_{200} | E |^{1/2}} \left(\frac{j_d}{m_d}\right) \end{equation} where $M_{200}$, $V_{200}$, and $E$ are the total mass, circular velocity, and energy of the halo, and $\lambda$ is the spin parameter of the halo. We choose $\lambda$=0.05 for all disc models presented in this paper unless otherwise stated (the mean $\lambda$ of the \citet{Mo1998} probability function). For our standard galaxy model, this produces $r_s$=0.86 kpc. Knowing this, we can solve for $\Sigma(r)$ in Equation \ref{expdisc} using $\Sigma_0 = M_d/2 \pi r_s^2$. The scalelength of the gas disc is then simply the stellar disc scalelength multiplied by a factor of two. In order to set up the positions of stellar and gas particles, we integrate over $\Sigma(r)$ in concentric rings using a standard integral such that \begin{equation} \label{discmwirad} M(r') = \int_0^{r'} \Sigma(r) 2 \pi r dr = 2 \pi \Sigma_0 {r_s}^2 \left[ \left(\frac{-r'}{r_s}-1\right)e^{-r'/r_s}+1\right] \end{equation} \noindent As with the dark matter halo, we can solve for $r'(M)$ numerically; then, by producing a random number between 0 and 1 for the normalised $M$, we obtain the radius of each particle (a sketch of this inverse-transform sampling is given below). These particles are initially laid down in a single plane. Real discs have a finite thickness, but we cannot solve for this until we have defined the velocity dispersion throughout the disc.
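\noindent The radial sampling described above is a standard inverse-transform method; a minimal sketch follows. The cumulative mass fraction $M(<r)/M_d = 1-(1+r/r_s)e^{-r/r_s}$ is taken directly from Equation \ref{discmwirad}; the outer truncation at $20 r_s$ is an assumption for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def sample_disc_radii(n, r_s, r_max_factor=20.0, seed=1):
    # Draw n radii from an exponential disc by inverting M(<r)/M_d
    rng = np.random.default_rng(seed)
    r_max = r_max_factor * r_s

    def mass_frac(r):
        x = r / r_s
        return 1.0 - (1.0 + x) * np.exp(-x)

    # Uniform deviates, rescaled so no particle lies beyond r_max
    u = rng.uniform(0.0, 1.0, n) * mass_frac(r_max)
    # mass_frac is monotonic, so root-bracketing inversion is safe
    return np.array([brentq(lambda r, ui=ui: mass_frac(r) - ui,
                            0.0, r_max) for ui in u])

# e.g. 10000 stellar radii for the standard model (r_s = 0.86 kpc)
radii = sample_disc_radii(10000, 0.86)
\end{verbatim}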
Defining this velocity dispersion is necessary in order to ensure that the disc is Toomre stable (\citealp{Toomre1964}). The Toomre stability criterion is defined as \begin{equation} \label{Toomreeqn} Q \equiv \frac{\kappa\sigma_r}{3.36 G \Sigma} > 1 \end{equation} \noindent Here $\Sigma$ is the surface density, $\sigma_r$ is the radial velocity dispersion, and $\kappa$ is the epicyclic frequency, defined using the epicyclic approximation (\citealp{SpringelWhite1999}). Next we use $\sigma_\phi^2 = \frac{\sigma_r^2}{\gamma^2}$ where $\gamma^2 \equiv \frac{4}{\kappa^2 R} \frac{d\Phi}{dR}$ (also \citealp{SpringelWhite1999}), and $\sigma_z = 0.6 \, \sigma_r$ (\citealp{Shlosman1993}). This completely defines the minimum values of the necessary velocity dispersions throughout the disc. In practice, $Q>1.5$ is required throughout the stellar disc to ensure stability. The gas disc has an intrinsic velocity dispersion, due to its isothermal nature, that automatically satisfies the Toomre criterion at all radii. Once more following \citet{SpringelWhite1999}, we now use $z_d = \frac{\sigma_r^2}{\pi G \Sigma}$ for the vertical scale height of the disc, and distribute the particles vertically out of the disc following Spitzer's isothermal sheet solution \begin{equation} \rho(R,z) = \frac{\Sigma(r)}{2z_d} {\rm sech}^2(z/z_d). \end{equation} Finally, the circular velocities of disc particles are calculated. The value of the potential is calculated in thin spherical shells for the combined dark matter, stars and gas in a one-off N-body calculation, including the effects of gravitational softening. This is necessary as the disc's own self-gravity can influence the radial orbits of disc particles close to the centre of the halo. The gradient of the potential is then used to calculate the circular velocity at each radius. As a result, inner disc particles have circular velocities raised beyond that of the halo alone at the same radius. In all standard dwarf galaxy model simulations, a gas-rich dwarf irregular is simulated, containing $4 \times 10^8$ M$_\odot$ of gas and $1 \times 10^8$ M$_\odot$ of stars. Such gas-rich systems can be found in Virgo cluster dwarf galaxy observations (\citealp{Boselli2006}). However, the paper's conclusions are additionally tested using a pure stellar galaxy disc model, to verify the significance of the gas-to-stellar ratio on the influences of harassment - see Section \ref{diffmodel}. To ensure long-term stability, the complete three-component galaxy models are evolved in isolation for 2.5 Gyrs. Transient effects are found to fully settle in $<$1 Gyr, and only evolved models are introduced into the cluster harassment models. A summary of the standard dwarf galaxy model's properties is provided in Table \ref{galprops}.
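\noindent The dispersion relations above can be collected into a short routine; the following is a minimal sketch, in which the epicyclic frequency is evaluated for a flat rotation curve purely for illustration (in practice $\kappa$ and $d\Phi/dR$ are measured from the combined numerical potential, radius by radius).
\begin{verbatim}
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def disc_dispersions(R, sigma_surf, v_circ, Q=1.5):
    # Radial dispersion enforcing the Toomre criterion (Equation 5) at Q = 1.5
    kappa = np.sqrt(2.0) * v_circ / R   # flat-rotation-curve approximation
    sigma_r = Q * 3.36 * G * sigma_surf / kappa
    # gamma^2 = (4 / kappa^2 R) dPhi/dR = 2 for a flat rotation curve
    sigma_phi = sigma_r / np.sqrt(2.0)
    sigma_z = 0.6 * sigma_r
    # sech^2 scale height, mirroring the relation used in the text
    z_d = sigma_r**2 / (np.pi * G * sigma_surf)
    return sigma_r, sigma_phi, sigma_z, z_d
\end{verbatim}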
\begin{table} \centering \begin{tabular}{|c|c|c|} \hline Component & Parameter & Parameter value\\ \hline Halo& Total mass & 10$^{10}$ M$_\odot$ \\ & Concentration $c$ & 20\\ & Virial radius $r_{200}$& 44 kpc\\ & Particle number & 200000\\ Total disc & Mass & 5 $\times 10^8$ M$_\odot$\\ & Particle number & 50000 \\ Stellar disc & Disc fraction & 0.2 \\ & Particle number & 10000 \\ & Scalelength $r_d$ & 0.87 kpc \\ Gas disc & Disc fraction & 0.8 \\ & Particle number & 40000 \\ & Scalelength $r_d$ & 1.74 kpc \\ \hline \end{tabular} \caption{Summary of the key parameters of the standard dwarf galaxy model} \label{galprops} \end{table} \section{The harassment model} \label{harassmodel} The potential well of a large galaxy cluster, such as Virgo, applies a significant gravitational force on nearby galaxies. The Local Group itself is thought to be infalling towards the cluster at $\sim$200 km s$^{-1}$ as a result of the Virgo over-density (\citealp{Tammann1985}). However, it is the {\it tidal} force of the cluster that will strip or deform an in-falling galaxy. Observationally, clusters have typical mass-to-light ratios of several hundred (\citealp{Sheldon2009}). As a result, the combined mass of the individual galaxies, even including their dark matter haloes, does not account for the total mass of a cluster. We therefore model the tidal field of a Virgo-like cluster using a two-component model: the first is a dynamical tidal field associated with individual `harasser' galaxies, and the second is a static tidal field representing the remaining cluster mass that is not associated with galaxies - referred to herein as the background cluster potential. The potential field of both the harassers and the background is represented using the analytical form for an NFW density distribution (\citealp{Lokas2001}) \begin{equation} \label{potNFW} \Psi = -g_c G M_{200}\frac{\ln(1+r/r_s)}{r} \end{equation} \noindent where $g_c=1/[\ln(1+c)-c/(1+c)]$. Modelling the total cluster potential as two components has an additional advantage - it is easy to separate and study the effects of the background cluster potential from the additional effects of high-speed galaxy-galaxy encounters. The background cluster potential is fully defined once its virial mass and concentration have been chosen. We choose $c$=4, and $M_{200}$=1.6$\times$10$^{14}$M$_\odot$. This choice of cluster mass is in agreement with the model Virgo cluster potential of \citet{Vollmer2001}, and with galaxy line-of-sight velocity measurements in \citet{VCC} ($\sim 10^{14}$M$_\odot$). Each harasser galaxy then has its own individual NFW potential field. To make this field dynamical, a one-off N-body simulation of a cluster-mass halo is conducted, and the time-dependent coordinates of a fraction of the particles are logged. This log provides the position-evolution of the centre of each of the harassing haloes, upon which each individual galaxy potential field is superimposed. Once more, a mass and concentration are required for each harasser's potential field to be defined. For mass, we use the Schechter function (\citealp{Schechter1976}) with parameters provided in \citet{Sandage1985}, and assume a constant mass-to-light ratio, i.e.
\begin{equation} M_{tot} = \frac{\phi_\star}{L_\star} \left(\frac{M}{L}\right)_K \int_{L_{min}}^{\infty} \left( \frac{L}{L_\star} \right)^{\alpha + 1} \exp \left( -\frac{L}{L_\star} \right) dL \label{massfunct} \end{equation} \noindent where $\alpha=-1.25$, and $(M/L)_K=20$. The fitting parameters provided in \citet{Sandage1985} are based on the Virgo cluster catalogue (\citealp{VCC}), and are therefore complete down to a minimum galaxy luminosity of $\sim 2.5 \times 10^7 L_\odot$ (assuming a distance modulus of 31.0 for Virgo). We choose to resolve all harasser galaxies down to one-tenth the mass of our infalling dwarf model ($10^9 M_\odot$). Completing the integral in Equation \ref{massfunct} produces a total mass in galaxies that is 14.5$\%$ of the total cluster mass. This is in reasonable agreement with the $\sim 10 \%$ of a cluster's mass found to be locked up in sub-structure in $\Lambda$CDM simulations - see \citet{Gill2004}. This process produces 733 harassing galaxies in total. The cumulative distribution of galaxy masses produced can be seen in Figure \ref{galmassdistrib}. The vast majority of harasser galaxies are dwarfs, with $\sim 65 \%$ of the total number of harassers having a mass less than that of the standard dwarf galaxy model, while only $\sim 9 \%$ of the harassers have a mass greater than 10 times that of the model dwarf. The maximum mass galaxies produced are $\sim 10^{12} M_\odot$. The value of the mass-to-light ratio chosen is consistent with, if not an upper limit for, galaxies in this mass range (see \citealp{Gilmore2007}). The concentration $c$ of each halo is chosen to follow the trend of higher concentration for lower mass objects, as found in $\Lambda$CDM simulations. This is achieved by a fit to the concentration values with mass found in \citet{Navarro1996}, producing $c = -3 \log (M_{halo}/M_\odot) + 52$. This fit produces $c \sim 15$ for a $6 \times 10^{11}$ M$_\odot$ mass halo, and $c \sim 20$ for a $1 \times 10^{10}$ M$_\odot$ mass halo. It should be noted that the use of analytical potentials for the harassing galaxies' tidal fields has advantages and disadvantages over previous harassment models. \citet{Mastropietro2005} and \citet{Moore1999} utilise full $\Lambda$CDM simulations of the dark matter in cluster-mass objects, while \citet{Moore1998} utilises softened point masses to represent the tidal fields of harassers. These previous methods have the advantage that the harassing galaxies can interact and react to the tidal potential of the harassed galaxy model, whereas analytical potentials on fixed paths of motion cannot. This essentially violates Newton's third law - the dwarf can respond to the cluster potential field, but the cluster potential field cannot respond to the dwarf. However, we are primarily interested in studying the effects of the harassers on the model galaxy, and not vice versa. As will be shown in Section \ref{locencs}, encounter velocities are typically very high, and as a result the tidal forces in a high-speed encounter are brief, and strongly peaked. The ability of the harasser to alter its trajectory in response to the dwarf model's tidal field is inconsequential to the brief, strong tidal forces that the model is subjected to at these velocities.
Typically the dwarf model suffers a slight deviation in its orbit as the result of a strong tidal encounter, and the same might be expected for the harasser galaxy. However, this will have a negligible effect on the strength and shape of the peaked tidal forces, due to the rapid passing time and high velocity of the encounter. Additionally, the use of analytical potentials has advantages - the tidal forces are resolved with effectively unlimited spatial resolution. Only the self-gravity of the galaxy model's own particles has a limited spatial resolution, due to softened gravity. Analytical potentials are also computationally very cheap, allowing a large number of simulations to be conducted rapidly, a necessity for harassment simulations due to their highly stochastic behaviour (see Section \ref{Monty}). \begin{figure} \centering \includegraphics[width=5.9cm,angle=-90]{750halomassdistrib.eps} \caption{Cumulative mass distribution of harasser galaxies, binned in $10^{10} M_\odot$ width bins} \label{galmassdistrib} \end{figure} \section{Effects of harassment on model dwarf galaxies} \label{simeffects} The harassment model is utilised for 8 infalls (Runs 1-8) of the standard dwarf galaxy model. Initially, the dwarf galaxy models are positioned at the outskirts of the cluster, randomly orientated on a shell at the virial radius ($r_{apo}$=1.1 Mpc), such that each dwarf galaxy is presented with a different set of tidal encounters as it infalls. They are each given an azimuthal velocity of 250 km s$^{-1}$. If all harassing galaxies are removed, this produces a plunging orbit with a pericentre of $\sim$ 200 kpc and a peak velocity of $\sim$ 1600 km s$^{-1}$. This choice of orbit is consistent with the range of orbits utilised for infalling galaxies in \citet{Vollmer2002}. The orbit is chosen to model a dwarf galaxy infalling into the cluster {\it for the first time} at the current epoch. Models are evolved for 2.5 Gyrs, providing time for the infalling galaxy to make one pass of the cluster centre on this orbit. In order to complete a second pass, the model would need to be evolved for $>$ 6 Gyrs, in which case the assumption of constant cluster mass becomes invalid. Hence the model is most applicable to a first infall into the cluster environment. The effects of less plunging orbits are simulated and discussed in Section \ref{Monty}. \subsection{Tidal histories} As we know both the position-evolution and the properties of all harasser galaxies, we can measure the exact tidal force, from the background cluster potential and from harasser galaxies, felt by an infalling galaxy at any instant. All tidal force measurements are presented in units of the Roche limit of the isolated dwarf galaxy model at the edge of its stellar disc - a discussion of the use of Roche limits is provided in the following section. In Figure \ref{run4_tidehist}, the tidal evolution of Run 4 is presented as a function of time. We refer to these figures as {\it `tidal histories'}, and Run 4 can be considered a typical infall, for reasons that will be further discussed in Section \ref{Monty}. Clearly the background cluster potential alone does not produce tidal forces beyond the dwarf galaxy's Roche limit, even at the pericentre of the orbit; as discussed and quantified in the Introduction, this would require a far more deeply plunging orbit (a simple estimate of this is sketched below).
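\noindent This estimate can be made explicit with a simple Roche-type comparison: the tidal acceleration across the dwarf, from the gradient of the background cluster's gravitational field, is set against the dwarf's self-gravity at the edge of its stellar disc. The following minimal sketch is illustrative only - the adopted disc-edge radius and cluster virial radius are assumptions, and the exact Roche-limit normalisation used for the figures may differ in detail.
\begin{verbatim}
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def m_nfw(r, m200, c, r200):
    # Enclosed NFW mass (Msun)
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return m200 * mu(r * c / r200) / mu(c)

def cluster_tide(R, d, m200=1.6e14, c=4.0, r200=1100.0):
    # |Tidal acceleration| across a dwarf of size d at cluster radius R,
    # from the numerical derivative of g(R) = G M(<R) / R^2
    g = lambda r: G * m_nfw(r, m200, c, r200) / r**2
    dR = 1e-3 * R
    return d * abs(g(R + dR) - g(R - dR)) / (2.0 * dR)

# Dwarf self-gravity at an assumed stellar disc edge of ~3 r_d
r_edge = 3.0 * 0.87
m_enc = m_nfw(r_edge, 1e10, 20.0, 44.0) + 5e8   # halo + whole disc (crude)
g_self = G * m_enc / r_edge**2

# Background tide at pericentre (~200 kpc), in units of self-gravity:
print(cluster_tide(200.0, r_edge) / g_self)     # well below unity
\end{verbatim}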
However, it is the sum of both the background and the individual encounters that can produce tidal forces exceeding the Roche limit. \begin{figure} \centering \includegraphics[scale=0.34,angle=-90]{tideevol.eps} \caption{A tidal history for Run 4. Tidal forces are measured in units of the isolated dwarf galaxy model's Roche limit at the edge of its stellar disc. The Gaussian-like curve represents the contribution from the background cluster potential, and points are the contribution from individual galaxy-galaxy encounters.} \label{run4_tidehist} \end{figure} \subsection{Mass loss} Although the net tidal force rises above the Roche limit, it does so only briefly, and hence the complete dismantling of the dwarf does not occur. This form of tidal force is therefore in the `tidal-shocking' category (see the Introduction for further discussion). The use of tidal forces expressed in units of the Roche limit of the dwarf is directly relevant for tidal heating, but not so for tidal shocking. As a result, it should be clearly noted that presenting all tidal forces in units of the Roche limit serves merely as a more intuitive and comprehensible yard-stick for the strength of the high-speed tidal encounters. In Figure \ref{run4_npartevol}, the number evolution of dark matter and stellar particles within a sphere of radius 15 kpc, centred on the stellar disc, is presented. Clearly the stellar disc is far from completely dismantled, and loses $< 10 \%$ of its original mass. The dark matter halo is affected far more significantly, losing $\sim 60 \%$ of its original mass. To test that this quantification of dark matter loss is robust to increased resolution, we repeat the infall with the gravitational resolution improved by a factor of 10 (the softening length, $\epsilon$, is reduced by a factor of 10, to 10 pc). We also improve the dark matter mass resolution by a factor of 2.5, by conducting the high resolution simulation with a dark matter halo consisting of 500,000 dark matter particles. The high resolution models are found to have negligible differences in dark matter and stellar mass losses from the lower resolution studies. To confirm how much of the dark matter loss is from the background potential alone, and how much is lost because of the inclusion of the harassing galaxies, the simulation is repeated without the harassing galaxies. In this simulation, the galaxy model loses only $\sim 25 \%$ of its dark matter - shown as the dotted line in Figure \ref{run4_npartevol}. Therefore galaxy-galaxy interactions are responsible for more than half of the dark matter losses. As discussed in \citet{Moore1998}, the reason that the stars are less significantly affected is their circular, centralised orbits, whereas the eccentric orbits of dark matter particles render them far more susceptible to tidal disruption. The net result of enhanced dark matter losses over stellar losses is a decrease in the dynamical mass-to-light ratio by roughly a factor of 2 (the halo retains $\sim 40 \%$ of its mass within this aperture while the stellar disc, and hence the light, retains $\sim 90 \%$, giving a ratio of $0.4/0.9 \approx 0.44$ of the original). With currently available data sets, it would be a significant observational challenge to detect a systematic difference in the dynamical mass-to-light ratio of isolated dwarfs in comparison to cluster dwarfs at this level, due to the inherent observational and theoretical difficulties in obtaining these ratios. Such a measurement would, however, provide a significant test of our understanding of dark matter.
As discussed in \citet{Moore1998}, it is difficult to see how cluster galaxies can avoid losing a sizeable fraction of their dark matter within the $\Lambda$CDM paradigm. \begin{figure} \centering \includegraphics[width=9.0cm]{npartevol_log.eps} \caption{Number evolution of dark matter particles (solid), and star particles (dashed), within a sphere of radius 15 kpc centred on the stellar disc for Run 4. The dotted line shows the final number of dark matter particles at the end of a simulation with the background cluster potential alone, i.e. no harassers included} \label{run4_npartevol} \end{figure} \subsection{Little morphological transformation} The final harassed stellar disc {\it does not} appear to be significantly transformed from the original disc. The disc remains flattened and dominated by rotation. The isolated dwarf galaxy has a peak circular velocity V$_{peak} \sim $80 km s$^{-1}$, and a velocity dispersion $\sigma$ out of the plane of the disc of $\sim$ 10 km s$^{-1}$. After harassment, $V_{peak}$ falls slightly, and $\sigma$ is raised to 14 km s$^{-1}$. The net effect is to reduce the ($V_{peak}/\sigma$) ratio by less than half, to $\sim 4.6$; the disc therefore remains significantly dominated by rotation. The post-harassment stellar discs, which show induced spiral structure and bars, were initially smooth and featureless - see Figure \ref{spiralinduced}. It is interesting to note that in \citet{Lisker2007}, spiral structure in dwarf ellipticals is observed, and presented as evidence for the origin of dwarf ellipticals in disc galaxies. However, in these simulations such features originate {\it because} of high-speed tidal encounters. The stars that are stripped are arranged along very low surface brightness streams, that move along similar orbits to that of the galaxy through the cluster. Note that in Figure \ref{spiralinduced}, the extended low surface brightness streams are not visible even at surface brightness limits of $\mu_B > 31 $ mag arcsec$^{-2}$. To form these images, a stellar mass-to-light ratio of $\sim 6$ is assumed, following \citet{Mastropietro2005}. These surface brightnesses are well beneath the surface brightness limits of current optical Virgo galaxy surveys. For example, the INT wide-field survey reaches $\mu_B \sim 26$ mag arcsec$^{-2}$ (\citealp{Davies2005}). \begin{figure} \centering \includegraphics[scale=0.9]{xy_stars_bw_contours.eps} \caption{B-band surface brightness plot of the face-on stellar disc in Run 3 after 2.5 Gyrs of harassment. The grey-scale colour bar indicates surface brightness in units of mag arcsec$^{-2}$. The box size is 5 $\times$ 5 kpc. White isophotes are displayed with values of 25, 28, and 31 mag arcsec$^{-2}$ (from galaxy centre to edge of disc)} \label{spiralinduced} \end{figure} Although we have concentrated on one typical harassment infall simulation, 6 out of 8 of the dwarf galaxy infalls are affected in a very similar manner to the results presented above. In Table \ref{8simeffects}, the effects of harassment in terms of mass loss and change in ($V_{peak}/\sigma$) are summarised for all 8 simulations. The exceptions are Runs 5 and 6, for which, in both cases, the galaxy model has at least one encounter of $\sim 4$ times the Roche limit or greater.
To better understand the frequency of such encounters, we utilise Monte-Carlo simulations (see Section \ref{Monty}). \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Run & Tides & DM & Star & ($V_{peak}/\sigma$) & (M/L)$_{dyn}$\\ \hline 1& 2.4,2.5,1.4& 30 & 82 & 45 & 37 \\ 2& 2.5 & 59 & 100 & 59 & 59\\ 3& 1.1,1.8,2.3 & 65 & 94 & 69 & 70\\ 4& 1.1,1.2,1.6 & 41 & 93 & 55 & 44\\ 5& 1.8,1.7,5.6 & 5 & 22 & 30 & 22\\ 6& 3.7,1.9,1.8 & 8& 34 & 44 & 23 \\ 7& 1.6,1.1,1.2 & 33 & 91 & 57 & 36 \\ 8& 2.3 & 47 & 98 & 58 & 48 \\ \hline \end{tabular} \caption{Summary of the 8 dwarf galaxy infalls; {\it column 1} is the run description, {\it column 2} is a summary of all tidal encounters with tidal strength greater than the Roche limit of the isolated dwarf galaxy, {\it columns 3} and {\it 4} are the percentages of dark matter and stellar mass remaining at the end of the simulation, found within a 15 kpc radius sphere, and {\it columns 5} and {\it 6} are the final ($V_{peak}/\sigma$) and final (M/L)$_{dyn}$ as a percentage of the original} \label{8simeffects} \end{table} \subsection{Effects on gas and star formation} \label{gaseffects} In Figure \ref{gasspiralinduced}, the effects of high-speed tidal encounters on the gas component of the infalling dwarf can be seen. Prior to the infall ({\it left panel}) the gas disc is a smooth exponential disc. Tidal encounters can induce spiral structure, which is stronger than that seen in the stellar disc due to the dissipative nature of the gas ({\it central panel}). The deformation of the disc from a circular shape causes shocking and barring of the gas disc, resulting in enhanced star formation within the spiral arms. Loss of angular momentum along the spiral arms can additionally cause radial inflow of the gas into the central regions of the dwarf irregular. However, the spiral structure is only temporarily induced and, once the driving mechanism has stopped, the disc becomes smooth once more ({\it right panel}), leaving only a central knot of gas. The short-lived enhancement of gas densities within the spiral arms and the central region results in a total galaxy star formation rate that is bursty in nature (Figure \ref{burstysfrs}). The star formation rates of the dwarf irregular models decline steadily as they convert their gas reservoir into stars. However, tidal encounters can cause both a brief lowering (when the gas disc is stretched and lowered in density) and a brief raising (when the gas disc is compressed) of the star formation rates. It should be noted that the simplicity with which star formation and the associated feedback processes have been modelled here provides only qualitative predictions for star formation rates. For example, a prescription for supernova feedback and gas cooling is not specifically included. Instead, it is assumed that energy input from stellar feedback is balanced by radiative cooling, to produce an isothermal, single-phase gas representing the diffuse, atomic gas content of the galaxy. The artificial viscosity of the gas component does enable it to shock, and this is important for spiral-arm formation, and for radial gas inflow towards the galaxy centre. We do not include a prescription for gas ionisation from background UV photons, which may be important in galaxies of this mass (\citealp{Mayer2006}). Hence, the star-formation rates seen in these simulations can only be considered qualitative at best, based purely on the density of the gas in the disc.
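\noindent To make the adopted prescription explicit, a minimal sketch of the per-particle Schmidt-law conversion (described in Section 2) is given below. The rate normalisation \texttt{c\_sfr} and reference density \texttt{rho\_ref} are illustrative assumptions; the text specifies only the Schmidt-law power of 1.5 and the spawning of collisionless star particles once a global mass threshold is reached.
\begin{verbatim}
import numpy as np

def schmidt_law_step(rho_gas, m_gas, dt, c_sfr=0.05, rho_ref=1.0):
    # dM_*/dt = c_sfr * m_gas * (rho_gas / rho_ref)**1.5 per SPH particle;
    # c_sfr [Gyr^-1] and rho_ref are illustrative, not values from the paper
    sfr = c_sfr * m_gas * (rho_gas / rho_ref) ** 1.5
    return np.minimum(sfr * dt, m_gas)  # cannot convert more gas than exists

# Once the summed stellar mass formed this way exceeds a set threshold,
# new collisionless star particles are spawned, positioned according to
# the recent star-formation history of the contributing SPH particles.
\end{verbatim}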
However, it is likely that the missing gas physics is of little consequence, as infalling dwarf galaxies will be most significantly influenced by ram pressure stripping in the cluster environment. The effects of ram pressure stripping will be investigated in detail in a subsequent paper (\citealp{Smith2010a}). For now, we restrict our analysis to the use of a simple analytical prediction for the efficiency of ram pressure stripping (\citealp{GunnGott1972}). Despite its simplicity, the Gunn and Gott formalism has been shown to provide reasonable first-order predictions for the gas truncation radius, in agreement with numerous full numerical simulations of ram pressure stripping (\citealp{Abadi1999}, \citealp{Vollmer2002}, \citealp{Mayer2006}, \citealp{Jachym2007}, \citealp{Roediger2007}). We use the intra-cluster medium radial density profile of \citet{Vollmer2002}, with the varying infall velocity of our infalling dwarfs, to predict the radius within the cluster at which the dwarf's gas disc will be completely stripped. This method predicts that the dwarf will be completely stripped at a cluster radius of $\sim 350$ kpc, {\it before} it completes its first pass of the cluster centre. Thus, it is unlikely that the majority of infalling dwarfs will still contain appreciable amounts of gas by the time they experience harassment. The typical location within the cluster at which an infalling dwarf experiences harassment is discussed further in Section \ref{locencs}. Additionally, this excludes the possibility that a central nucleus could form from the radial inflow of gas seen in the harassment simulations (although it should be noted that nuclei tend to be observed more frequently in brighter dwarf galaxies (\citealp{Lisker2007}), which may be able to maintain a truncated HI disc after ram pressure stripping, and hence could allow the formation of nuclei in this manner). \begin{figure*} \centering \includegraphics[scale=0.5]{xy_gas.eps} \caption{Column density plot of the face-on gas disc in Run 4; ({\it left}) before harassment, ({\it centre}) post tidal encounter, ({\it right}) at the end of the simulation. The grey-scale colour bar indicates column density in units of M$_\odot$ pc$^{-2}$. The box size is 10 $\times$ 10 kpc} \label{gasspiralinduced} \end{figure*} \subsection{Dependency on disc properties} \label{diffmodel} Our harassment model is ideal for testing how a galaxy's response to harassment depends on specific properties, such as gas fraction, surface brightness, and mass. As will be shown in Section \ref{Monty}, a tidal history such as that of Run 4 can be considered a statistically typical infall. Different galaxy models can therefore be made to infall along the same trajectory, and be subjected to an identical tidal history. Using this method we test how different dwarf galaxies respond to harassment: (i) Model A contains no gas, (ii) Model B is a lower surface brightness disc, and (iii) Model C is $\sim 6$ times less massive. \begin{figure} \centering \includegraphics[width=9.0cm]{SFR_comp.eps} \caption{Bursty star formation rates of the 8 infalling dwarf irregular models. Star formation rates are shown in M$_\odot$ yr$^{-1}$} \label{burstysfrs} \end{figure} \subsubsection{Model parameters} Model A is similar to the standard dwarf model, except that its disc mass is purely stellar. Each gas particle is replaced with a stellar particle of identical mass.
Model B is also similar, except that we utilise a spin parameter $\lambda=0.1$. This is double the value of the standard dwarf galaxy model, and forces the exponential disc scalelengths of the stellar and gas discs to also double, to $r_d=1.72$ kpc (stars) and 3.48 kpc (gas). As a result, the surface brightness of Model B is a factor of four lower ($25 \%$ of that of Model A, since $\Sigma_0 \propto r_s^{-2}$ at fixed disc mass) at any given fraction of the disc scalelength. Model C has a dark matter halo of $1.6 \times 10^{9}$ M$_\odot$. As its dark matter halo is less massive, its concentration is increased slightly, to $c=24$. Following the \citet{Mo1998} recipe for this halo produces a stellar disc with an exponential scale length $r_d=0.43$ kpc. Once more, the scalelength of the gas disc is double that of the stellar disc. \subsubsection{Response to typical harassment} In Table \ref{diffgalresp} the key effects of harassment are summarised for ease of comparison. Particles are counted within a 15 kpc radius to calculate the dark matter and stellar losses in Model A. A sphere of double this radius was used for Model B, to match the fact that its stellar disc has doubled in diameter. A sphere of only 7.5 kpc was used for Model C, to account for its smaller stellar disc. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline Run & DM & Star & ($V_{peak}/\sigma$) & (M/L)$_{dyn}$\\ \hline Standard& 41 & 93 & 55 & 44 \\ Model A (star only)& 42 & 91 & 61 & 46\\ Model B (LSB)& 41 & 75 & 34 & 55\\ Model C (low mass)& 37 & 97 & 73 & 38\\ \hline \end{tabular} \caption{Summary of the effects of typical harassment on the varying dwarf galaxy models; {\it column 1} is the run description, {\it columns 2} and {\it 3} are the percentages of dark matter and stellar mass remaining at the end of the simulation, and {\it columns 4} and {\it 5} are the final ($V_{peak}/\sigma$) and final (M/L)$_{dyn}$ as a percentage of the original} \label{diffgalresp} \end{table} Firstly, comparing the standard model to Model A, there appears to be very little change in the effects of harassment in terms of mass loss, mass-to-light ratio, or stellar dynamics. A visual inspection of the stellar disc also reveals little difference, with mild enhancement of spiral structure in both models. The inclusion of a gas component has made little difference. This suggests that the effects of harassment would not change significantly if ram-pressure stripping had removed the gas from the galaxy prior to harassment (see Section \ref{gaseffects} for further discussion). However, a deeper study of the combined effects of harassment and ram pressure stripping is deferred to a future paper (\citealp{Smith2010a}). The same cannot be said for the low surface brightness disc of Model B. The halo has suffered similar losses to the standard model - this is to be expected, as their haloes are identical and suffer the same tidal interactions. But the stellar disc suffered an additional $\sim 15 \%$ loss, and the ($V_{peak}/\sigma$) ratio is reduced by an additional $10\%$ relative to the standard model. The simplest explanation is that a larger stellar disc represents a larger target for harassing galaxies. In addition, the self-gravity of the disc is reduced to $25 \%$ of its former value. This is in agreement with the simulations of \citet{Moore1999}, where a low surface brightness giant spiral suffers considerably more harassment than a high surface brightness spiral.
The surface brightness (or disc scale-length, for exponential discs) therefore appears to be an additional parameter controlling a galaxy's sensitivity to harassment, as it controls the cross-sectional area the galaxy presents to harassers. Finally, we compare the low mass dwarf (Model C) to the standard model. Despite Model C having considerably less mass, its dark matter halo has not suffered substantially greater mass loss. Furthermore, its stellar disc losses are a smaller mass fraction than in the standard model, and the ($V_{peak}/\sigma$) ratio suffers only a minor reduction in comparison to the other models. This further supports the conclusion that the cross-sectional area of a galaxy is a strong parameter controlling its response to harassment. \section{A Monte-Carlo simulation of harassment} \label{Monty} Column 2 of Table \ref{8simeffects} summarises the tidal histories of each infalling dwarf galaxy model. There is a large range in the strength and frequency of tidal encounters. This brings into question just how commonly strong encounters occur, or in fact just how typical our `typical' simulation really is. To attempt to answer this question, and to gain additional insight into the principles of harassment, we conduct a 1000 galaxy infall Monte-Carlo simulation. Clearly, conducting 1000 full N-body/SPH simulations of infalling dwarf galaxy models would be extremely time-consuming. Instead, 1000 N-body particles with mass equal to the dwarf galaxy (and with the same assumed Roche limit) are allowed to infall into the cluster model. Initially, each particle is randomly positioned on a spherical surface of radius 1.1 Mpc. They are given an initial velocity of 250 km s$^{-1}$, and the direction of the velocity vector on the spherical surface is randomised. Hence the initial orbital parameters ($r_{apo}$ and $v_{azi}$) of each particle's orbit are identical to those used in the full numerical simulations. However, the range of tidal histories that an infalling dwarf galaxy encounters is now sampled to a far greater degree of completeness than is possible with only 8 galaxy model infalls. The tidal history of each particle is then recorded, allowing for a statistical analysis; by comparison with the tidal histories of the full simulations, conclusions can be drawn. \subsection{Frequency of strong encounters} \begin{figure} \centering \includegraphics[width=5.9cm,angle=-90]{maxtides_cumm.eps} \caption{Cumulative frequency plot of the peak strength tidal encounters experienced by infalling dwarf galaxies on plunging orbits} \label{cummfreqplunge} \end{figure} To understand, statistically, what the typical tidal history of an infalling dwarf galaxy looks like, we log the peak tidal force that each infalling Monte-Carlo particle is subjected to throughout its orbit. In Figure \ref{cummfreqplunge}, a cumulative frequency plot of the peak strength encounter is shown. It can be seen that half of the infalling dwarfs never experience a peak tidal force greater than twice the Roche limit of the dwarf model. In fact, peak tidal forces in the range of 1-2 Roche limits are those most commonly experienced by the infalling galaxies. In the full numerical simulations, both Run 4 and Run 7 have this type of tidal history, so these can be considered typical infalls.
It should be noted that even tidal histories whose peak tidal forces are as large as 3 Roche limits (encompassing two-thirds of infalls) do not show significantly stronger effects of harassment (see Table \ref{8simeffects}). Significant tidal encounters, causing strong transformation or stellar losses, occur for tidal forces that exceed $\sim 4$ Roche limits in the full numerical simulations. These occur in less than $25 \%$ of the infalls, and cannot be considered statistically typical. Hence mild harassment can be considered the norm for newly accreted dwarf galaxies, at least for the orbits considered so far (see Section \ref{orbitsection} for the dependency of this conclusion on orbital parameters). \begin{figure} \centering \includegraphics[width=6.0cm,angle=-90]{tidesvsmass.eps} \caption{Peak tidal strength of encounters plotted against the mass of the harassing galaxy. There is no indication of stronger tidal encounters occurring at the high end of the galaxy mass function} \label{strengthvsmass} \end{figure} \subsection{What drives strong tidal encounters?} We are also in a position to answer the question of whether strong tidal forces are dominated by encounters with the rarer, more massive harassing galaxies. In Figure \ref{strengthvsmass}, the strength of each encounter is plotted against the mass of the harassing galaxy. There is a large degree of scatter in the plot, and little trend for stronger tidal encounters with increased mass of the harasser. Strong tides are therefore not predominantly caused by encounters with the rarer, more massive cluster galaxies. A far stronger trend is seen when we instead plot tidal strength against the minimum separation between a harassing galaxy and the infalling dwarf - see Figure \ref{strengthvsrad}. This clearly indicates that strong tides are in fact driven by close encounters with harasser galaxies. This is contrary to statements in \citet{Moore1996} suggesting that the bulk of the evolution is caused by encounters with galaxies brighter than $\sim L_\star$. In fact, the strongest encounters seen may not be well modelled by the simulations in this paper, as the proximity of the galaxy-galaxy encounter is such that a collision of the baryonic contents is likely. If the colliding galaxies both contain a dissipative gas component, then there is the possibility of a dissipative (wet) merger. Fortunately, this is not a common event: $<6 \%$ of all encounters occur at separations of less than 15 kpc. The dwarf galaxy model initially has a gas disc that extends to $\sim 15$ kpc; hence passes closer than this, between similar galaxies, are likely to cause a collision rather than a tidal encounter. The maximum tidal strength encountered at a separation greater than 15 kpc is 5 times the Roche limit of the dwarf galaxy. Thus infalling galaxies that experience tidal forces as strong as this (e.g. Run 5) can still be considered to be well modelled by the simulations in this paper. Furthermore, it is not unreasonable to assume that the infalling dwarf galaxies are ram pressure stripped of their gas content before such collisions occur. In this case, the collisionless stellar component of the dwarf galaxy will only be affected by gravitational tides, as modelled in these simulations, and the results of even stronger tides would still remain reasonable.
This has interesting implications for the possibility of collisionless (dry) mergers. Newly infalling galaxies are not captured by the tidal potentials of the harasser galaxies, but merely deviate from their original trajectories. \begin{figure} \centering \includegraphics[width=9.0cm]{tidesvsrad.eps} \caption{Tidal strength of encounters plotted against the minimum separation between the dwarf galaxy and the harassing galaxy. There is a strong trend for increasing strength of tidal encounters with decreasing separation.} \label{strengthvsrad} \end{figure} \subsection{Location and speed of encounters} \label{locencs} While ram pressure stripping is believed to be significant only in the inner regions of the cluster, how common are high speed tidal encounters in the outskirts of the cluster? In this model, harassment is found to be similarly concentrated towards the cluster centre. Statistically, the average radius of an encounter is $\sim 200 \pm 150$ kpc within the cluster. Only $\sim 7 \%$ occur at radii beyond 500 kpc (beyond the core radius as defined in \citet{Sabatini2005}). Hence, the effects of harassment are very unlikely to be observed in the outskirts of the cluster, unless the dwarf galaxy has already made a pass of the cluster centre (i.e. it is a member of the back-splash galaxy population; \citealp{Gill2005}). The average velocity of an encounter is very high, with a large standard deviation: $\sim$ 2000 $\pm$ 600 km s$^{-1}$. Hence the short-lived strong tides seen in Figure \ref{run4_tidehist} are virtually the standard, and consequently galaxy-galaxy capture events are very rare. Chance low speed encounters are vanishingly rare, with $< 4 \%$ occurring at velocities of $<500$ km s$^{-1}$. At first glance, these encounters appear to occur at very high velocity, considering the total velocity dispersion of the cluster is $\sim 1000 $ km s$^{-1}$. However, the elliptical orbits of the harasser galaxies cause the velocity dispersion in the central region of the cluster, where the majority of encounters occur, to be higher. In addition, the plunging orbit of the infalling dwarf is far from the orbit of a virialised galaxy population, raising encounter velocities further. \subsection{Strong sensitivity to orbit} \label{orbitsection} \subsubsection{A less plunging orbit} The orbit utilised in the full numerical simulations, and in the Monte-Carlo simulation described above, is not unrealistic, but it is a plunging one. We therefore repeat the Monte-Carlo simulation with a less plunging orbit. This allows us to test how strongly the tidal histories of infalling dwarf galaxies depend on their orbit through the cluster. For this we use the average orbital parameters for sub-structure in cluster-mass objects of a $\Lambda$CDM simulation - see \citet{Gill2004}. Despite a wide variety of cluster properties, the orbits of the satellite populations were found to be similar, with an average ellipticity $e=0.61$ and an average pericentre distance $r_{peri}=0.35 r_{vir}$ (with minimal scatter in both quantities). Here ellipticity is defined as $e=1-r_{peri}/r_{apo}$, and $r_{vir}$ is the virial radius of the cluster. For the cluster model presented in this paper, this fixes $r_{peri}=385$ kpc and $r_{apo}=987$ kpc. Hence the orbit is less eccentric, and does not plunge so deeply into the cluster centre.
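\noindent The azimuthal launch speed required to realise this orbit follows from energy and angular-momentum conservation between apocentre and pericentre in the background potential (Equation \ref{potNFW}). A minimal sketch is given below; the cluster virial radius of $\sim$1.1 Mpc is an assumption, so the result should be read as approximate.
\begin{verbatim}
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def psi_nfw(r, m200=1.6e14, c=4.0, r200=1100.0):
    # Background cluster potential, Equation (8), in (km/s)^2
    g_c = 1.0 / (np.log(1.0 + c) - c / (1.0 + c))
    return -g_c * G * m200 * np.log(1.0 + r * c / r200) / r

def v_apo(r_apo, r_peri):
    # 0.5 v_a^2 + Psi(r_a) = 0.5 v_p^2 + Psi(r_p), with r_a v_a = r_p v_p
    dpsi = psi_nfw(r_apo) - psi_nfw(r_peri)
    return np.sqrt(2.0 * dpsi / ((r_apo / r_peri) ** 2 - 1.0))

print(v_apo(987.0, 385.0))  # ~450-470 km/s for these cluster parameters
\end{verbatim}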
To produce this orbit, all of the Monte-Carlo particles representing galaxies are positioned at the apocentre of the orbit, and given a 450 km s$^{-1}$ azimuthal velocity. This produces a peak velocity at pericluster distance of $\sim 1250$ km s$^{-1}$. The result is significantly weaker harassment. In Figure \ref{cummfreqtypical}, a cumulative frequency plot of the peak tidal forces encountered by a dwarf galaxy following a typical orbit is shown. Now $70\%$ of galaxy infalls never experience a tidal force greater than two times the Roche limit, compared to $50 \%$ for the previous, more plunging orbit. Such a tidal history typically resulted in only minor stellar losses, and minor disc heating, in the full numerical simulations - see Table \ref{8simeffects}. Runs 4 and 7 still represent the most statistically common results of harassment. Strong tidal encounters that cause significant stripping and morphological transformation (at tidal forces $\sim 4$ times the dwarf galaxy's Roche limit or more) occur in less than $15 \%$ of the infalls, compared to $25 \%$ for the previously utilised plunging orbits. Clearly, the influences of harassment are strongly dependent on the orbital parameters of the orbiting galaxies, and this can be simply understood - the deeper the orbit plunges into the cluster, the more harassing galaxies are encountered. However, there must additionally be a dependency on the time spent in the cluster core. Hence it might be expected that the strongest harassment will occur for much less eccentric orbits, that are close to circular, and that remain near to the cluster centre throughout. \begin{figure} \centering \includegraphics[width=9.0cm]{maxtides_cumm2.eps} \caption{Cumulative frequency plot of the peak strength tidal encounters experienced by infalling dwarf galaxies on typical orbits for substructure within $\Lambda$CDM simulations of cluster-mass dark matter haloes} \label{cummfreqtypical} \end{figure} This does indeed seem to be the case for the dwarf-harassment simulations in \citet{Mastropietro2005}. Galaxies that have large apocentric distances ($\sim r_{vir}$) all suffer only very mild harassment, with similar dark matter and stellar losses to the dwarf galaxies in these simulations ($\sim 10 \%$ stellar and $\sim 60-70 \%$ dark matter lost). Although the harassment model used in this study involves the analytical calculation of tidal forces, it produces similar results to a full $\Lambda$CDM simulation of a cluster, although a direct comparison is not possible due to the significantly lower mass of the dwarf models in this study. Meanwhile, in \citet{Mastropietro2005}, almost all orbits whose apocentre is $\sim 0.1$ r$_{vir}$, and which hence never leave the dense cluster core, suffer significant mass loss ($\sim 50 \%$ and greater stellar loss, and $\sim 95 \%$ dark matter loss). A low eccentricity, small apocentric orbit is additionally more likely to make a number of repeat passes of the cluster core. If a galaxy makes a second pass, the chances of a strong encounter should double, provided the cluster mass and number of harassers have not changed significantly over this time. The strongly harassed dwarf galaxies in \citet{Mastropietro2005} complete numerous orbits over the length of the simulation. Similar conclusions are also drawn in \citet{Gill2004}.
However, this form of orbit is {\it highly unlikely} to occur for galaxies that are {\it newly accreted} into the cluster at the current epoch. The cluster now has a larger mass, and the galaxy's orbit must have high eccentricity to pass close to the cluster core, where the effects of harassment are strong. This results in a long-period orbit with a large apocentre, for which a second pass of the cluster centre would take $>6$ Gyrs (for the NFW background potential used in this study). Dwarfs that make numerous repeat cluster centre passes must have formed close to the cluster core, or at least have infallen when the cluster was significantly less massive. Such dwarfs could form their own sub-class of dwarf, as discussed in the `Discussion and Summary' section. \subsubsection{A small apocentric orbit} To test the influence of harassment on galaxies that are not newly infalling into the cluster, we repeat the Monte-Carlo simulations, only this time we place each Monte-Carlo particle closer to the centre of the cluster, at a radius of 200 kpc. Each is provided with a purely azimuthal velocity of 660 km s$^{-1}$, producing an orbit that, in the absence of harasser galaxies, is virtually circular (zero eccentricity). A cumulative frequency plot of the peak tidal force experienced by each of the Monte-Carlo particles is presented in Figure \ref{montecarlo_central}. \begin{figure} \centering \includegraphics[width=9.0cm]{maxtides_cumm3.eps} \caption{Cumulative frequency plot of the peak strength tidal encounters experienced by dwarf galaxies initially on a circular orbit, close to the cluster centre at a radius of 200 kpc} \label{montecarlo_central} \end{figure} Clearly, strong tidal encounters occur significantly more often for such a small apocentric orbit. There are now {\it no} Monte-Carlo particles that experience a peak tidal force of less than one Roche limit over 2.5 Gyrs of evolution. In fact, $>$70$\%$ of all particles experience at least one tidal encounter with a force greater than 4 times the Roche limit of the dwarf, which has been shown to cause significant dark matter and stellar losses. Hence, for this type of orbit, strong harassment is the norm. To test the effects of such tidal encounters on small dwarf galaxies, we place the standard model dwarf galaxy at a 200 kpc radius and provide it with the same 660 km s$^{-1}$ azimuthal velocity. We then evolve the dwarf galaxy for 2.5 Gyrs under the influences of harassment. We repeat this test 4 times, with a different velocity vector each time (although all have a purely azimuthal velocity of 660 km s$^{-1}$), so that the 4 models each undergo different tidal histories. The results can be seen in Table \ref{centraltable}.
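\noindent For reference, the adopted azimuthal speed is comparable to the local circular velocity of the background cluster potential at this radius; a minimal check is sketched below. The virial radius of $\sim$1.1 Mpc is again an assumption, and the modest difference from 660 km s$^{-1}$ is within the uncertainty of the adopted cluster parameters.
\begin{verbatim}
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def v_circ(r, m200=1.6e14, c=4.0, r200=1100.0):
    # Circular velocity of the background NFW potential [km/s]
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    m_enc = m200 * mu(r * c / r200) / mu(c)
    return np.sqrt(G * m_enc / r)

print(v_circ(200.0))  # ~690-730 km/s, comparable to the 660 km/s adopted
\end{verbatim}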
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Run & Apo-Peri & DM & Star & ($V_{peak}/\sigma$) & (M/L)$_{dyn}$\\ \hline Run 1& 220-100 & 12 & 64 & 38 & 19 \\ Run 2& 200-50 & 1 & 3 & - & -\\ Run 3& 240-70 & 1 & 8 & - & -\\ Run 4& 240-120 & 28 & 85 & 55 & 33\\ \hline \end{tabular} \caption{Summary of the effects of harassment on the standard dwarf galaxy model for small apocentric orbits (r$<$250 kpc); {\it column 1} is the run description, {\it column 2} gives the apocentre and pericentre of the orbit, {\it columns 3} and {\it 4} are the percentages of dark matter and stellar mass remaining at the end of the simulation, and {\it columns 5} and {\it 6} are the final ($V_{peak}/\sigma$) and final (M/L)$_{dyn}$ as a percentage of the original} \label{centraltable} \end{table} Run 1 suffers strong harassment, in a similar manner to the rare strong encounters of the more plunging orbits (see Run 6 of Table \ref{8simeffects}), with $\sim 90 \%$ dark matter losses and a substantial decrease in ($V_{peak}/\sigma$) and (M/L)$_{dyn}$. Run 2 and Run 3 are both destroyed by tidal encounters, making a measurement of their final ($V_{peak}/\sigma$) and (M/L)$_{dyn}$ impossible. The Monte-Carlo tests demonstrate that disruption by the cluster background potential alone is a rare occurrence: $<$1$\%$ of Monte-Carlo particles experience a tidal force above 90$\%$ of the model's Roche limit from the cluster background potential alone. Runs 2 and 3 are instead dismantled by repeated strong tidal encounters, and pass the cluster centre several times over the period of their orbits. Run 4 suffers mild harassment - similar to the typical harassment of the plunging orbits (e.g. Run 1 and Run 7 of Table \ref{8simeffects}). It can be considered one of the $<$30$\%$ of the Monte-Carlo particles that avoid tidal encounters of greater than four times the Roche limit, and hence is fairly atypical of harassment in this regime. In general, the dwarfs are typically destroyed, or suffer significant dark matter and stellar losses, when undergoing small apocentric orbits. The final stellar discs of the two surviving galaxies show significant heating, and have increased substantially in vertical scale-height. It is estimated that, if viewed edge-on, the ratio of their short axis to long axis would be $\sim 0.8$ and $\sim 0.5$ in Run 1 and Run 4 respectively - hence Run 1 has become close to spherical in its stellar distribution. Once more, a direct comparison with the results of \citet{Mastropietro2005} is not possible, due to the differing mass regime of the dwarf galaxy models, but substantial dark matter and stellar losses of equivalent magnitudes ($\sim$90 $\%$ and $\sim$50 $\%$ respectively, and greater) are seen. For comparison, Gal 1, 2, 4, and 7-12 of table 1 in \citet{Mastropietro2005} have approximately similar orbital characteristics and mass loss. However, one difference is that none of Mastropietro's dwarfs were considered completely destroyed by the end of their simulation. The lower mass dwarfs are more susceptible to disruption by the combined action of the background cluster potential (which is continuously stronger than in the plunging orbit case) and numerous strong tidal encounters. In this situation, dark matter that is liberated by a strong encounter is quickly swept away by the stronger tidal field of the background cluster potential.
This brings into question the fate of the dark matter and stars from such disrupted galaxies. One possibility is that a fraction of the dark matter and stars is absorbed into the central elliptical of the cluster, while any stars that are not absorbed in this manner contribute to the low surface brightness background observed in the cluster (\citealp{Mihos2005}). Indeed, the filaments and streams of stars observed at low surface brightness between the giant central elliptical galaxies in Virgo may have their origin in the merging of heavily disrupted, harassed dwarfs with the giant ellipticals. \section{Discussion and Summary} As discussed in \citet{Lisker2007}, it is important that we recognise the various sub-classes of dwarf galaxy if we are to understand their origin. If we split the cluster dwarfs into nucleated and non-nucleated dwarfs, we find significant variations between the two populations. In order to recover the intrinsic, three-dimensional shape of dwarf galaxies, flattening distributions combined with the surface brightness test are utilised (\citealp{Binggeli1995}, \citealp{Lisker2007}). Comparisons of {\it non-nucleated} dwarf ellipticals with dwarf irregulars and late-type spirals show no statistically significant difference in their flattening distributions, as measured by a standard Kolmogorov-Smirnov test. However, \citet{Binggeli1995} find that {\it nucleated} dwarfs may be as oblate as classical ellipticals, and \citet{Lisker2007} find they are more akin to E3 or E4 type giant ellipticals. Further evidence for a different evolutionary history is reflected in their spatial distribution. Nucleated dwarfs are found to be concentrated towards the centres of the Virgo and Fornax clusters, like the giant E and S0 galaxies. Meanwhile, non-nucleated dwarfs are distributed like the late-type spirals and irregulars, suggesting they are far from virialised within the cluster (\citealp{Ferguson1989}). A similar conclusion is drawn in \citet{Lisker2007}, who use radial velocity profiles to suggest that the nucleated dwarf ellipticals are indeed virialised, whereas the non-nucleated dwarfs still bear the signatures of a recent infall. This accumulation of evidence could suggest that nucleated dwarfs formed in situ within the cluster, whereas the non-nucleated population has been accreted from the cluster outskirts. The research presented in this paper suggests that harassment does not provide a significant mechanism to convert newly infalling late-type dwarfs into dwarf ellipticals. Ram pressure stripping of late-type dwarfs may provide a far more effective mechanism to form the non-nucleated dwarfs (\citealp{Boselli2008}), and will be studied in further detail in \citet{Smith2010a}, in combination with the harassment model presented in this paper. However, it is less clear how ram pressure stripping could have formed the nuclei found in the nucleated dwarf population. The simulations of \citet{Mastropietro2005} demonstrate that harassment may be far more effective on a more centrally concentrated, and virialised, dwarf population that formed in situ in the cluster. The early cluster environment would be expected to have lower intra-cluster medium densities, and so ram pressure stripping would have been far less efficient than it is today.
If their inter-stellar medium is not stripped, then tidal encounters could efficiently drive gas into their centres (\citealp{Moore1998}), and could facilitate the formation of central nuclei. Perhaps these galaxies were the progenitors of current day nucleated dwarf ellipticals, whose oblate shapes may reflect the stronger harassment they have incurred. \subsection{Key results} \begin{enumerate} \item In general ($>75 \%$ of infalls), dwarf galaxy models that infall from the outskirts of the cluster suffer only mild harassment. Typical stellar losses are $\sim 10 \%$, while typical dark matter losses are $\sim 60 \%$, resulting in a reduction in dynamical mass-to-light ratio by a factor $\sim 2$. Typical ($V_{peak}/\sigma$) ratios are also reduced by approximately the same factor, but final stellar discs remain rotationally dominated, and appear as thickened discs, often with tidally induced spiral structure. \item In rarer cases ($< 25 \%$ of infalls), dwarf galaxy models infalling from the cluster outskirts suffer strong harassment, losing $\sim 70 \%$ of their stars and $\sim 90 \%$ of their dark matter. \item A Monte-Carlo simulation reveals that strong harassment occurs in $< 25 \%$ of infalls for the deeply plunging orbit, and $<15 \%$ of infalls for a more typical orbit of substructure within a $\Lambda$CDM cluster-mass halo. The effects of harassment are strongly dependent on orbital parameters, as these dictate the time spent in the cluster centre and the number-density of harassers encountered. \item For a low eccentricity, small apocentric orbit, strong harassment is typical, occurring in $>70 \%$ of galaxies on such orbits over 2.5 Gyrs. This typically results in strong mass loss ($>90\%$ dark matter, $>50 \%$ stars) or complete disruption of these low mass dwarf models. \item Loss of stellar mass is caused by impulsive heating of the stellar disc by short-lived tidal interactions. Stars that are stripped from the disc by harassment form very low surface brightness streams ($\mu_B > 31$ mag arcsec$^{-2}$). \item The Monte-Carlo simulation reveals that average tidal encounters occur at high velocities ($\sim 2000$ km s$^{-1}$), with chance low speed encounters being very rare ($< 4 \%$ of encounters have a relative velocity $<500$ km s$^{-1}$). \item For a plunging infall, galaxy-galaxy encounters are centrally concentrated, typically occurring at a radius $\sim 1/5$th the cluster virial radius. Encounters in the outer cluster are rare ($\sim 7 \%$ occur at a cluster radius $>500$ kpc). Therefore the effects of harassment are unlikely to be observed in the outskirts of the cluster for newly accreted galaxies - with the exception of the back-splash population that has already completed one near pass of the cluster centre. \item Strong tidal interactions are not primarily caused by encounters with massive harasser galaxies, but are driven by near-pass encounters with more typical cluster galaxies. \item Spiral structure such as that observed in \citet{Lisker2006} can be induced by tidal encounters, and therefore is not necessarily an indication that the original galaxy contained the same spiral structure. \end{enumerate} \section{Conclusions} Harassment of small late-type dwarf galaxies {\it that have recently been accreted into the cluster} is not an effective mechanism for forming the thickened discs of dwarf ellipticals. 
Despite their low mass, they are also very concentrated in mass, and hence are robust against the effects of harassment. In previous studies (\citealp{Moore1998}), harassment is the effect of repeated long-range fast encounters. For dwarfs in the low-mass but more concentrated regime studied here, harassment is only significant in repeated, or even individual, close-range high speed encounters where tidal forces are the strongest, and these are rare for objects infalling into the cluster at the current epoch. Currently it would be a challenge to observe the typical effects of harassment on dwarfs that have recently infallen into the cluster (very low surface brightness extragalactic features, mild disc thickening, and a $\sim 50 \%$ reduction in dynamical mass-to-light ratios). However, harassment appears far more effective on dwarf galaxies that formed in situ within the cluster as it formed, possibly forming nucleated cluster dwarfs. Alternatively, harassment of larger, low surface brightness spirals into dwarf ellipticals in the manner suggested by \citet{Moore1998} is another possibility. However, the simulations of \citet{Moore1999} demonstrate that harassment is not effective on all disc galaxies - while a giant low surface brightness disc is effectively transformed, a high surface brightness giant like our own galaxy is only mildly affected. Therefore, if cluster dwarf ellipticals have their origins in larger disc galaxies, a very large population of specifically low surface brightness field disc galaxies is required in order to form the vast numbers of dwarf ellipticals seen in clusters today. The formation of dwarf ellipticals from dwarf irregulars via ram pressure stripping is another convincing possibility, as discussed in \citet{Boselli2008}. Ram-pressure stripping need only terminate their star formation to produce objects that appear very much like dwarf ellipticals today (\citealp{Zee2004}). \section*{Acknowledgments} Greatest thanks to Dr M. Fellhauer for a critical reading of the original version of the paper, and to Dr L. Cortese for numerous helpful discussions and support. Thanks also to Rhys Taylor, whose input was vital in developing simulation visualisation tools. Finally, thanks to the School of Physics and Astronomy, Cardiff University, who enabled our usage of the Coma Cluster Super Computer for all simulations presented in this work. This research was enabled and funded by an STFC studentship.
\section{Introduction} \label{intro} Given a bichromatic point set $P = P_r \cup P_b$, where $P_r$ is the set of $n$ red points and $P_b$ is the set of $m$ blue points, the basic separability problem is to find a separator $S$ such that the points in $P_r$ and $P_b$ lie on two different sides of $S$. The motivation for studying this separability problem for a bichromatic point set stems from its various applications in facility location, VLSI layout design, image analysis, data mining, computer graphics and other classification-based real-life scenarios \cite{cristianini2000introduction,dobkin1996computing,duda2012pattern,eckstein2002maximum,edmonds2003mining}. The bichromatic separability problem also has applications in detecting obstacle-free separators. In the literature, different types of separators, such as hyperplanes \cite{megiddo1983linear}, circles \cite{o1986computing}, rectangles \cite{cortes2009bichromatic,eckstein2002maximum,van2009identifying} and squares \cite{cabello2008covering,sheikhi2015separating}, have been studied to optimize the objective function of the corresponding problem. In this paper, we focus on designing space-efficient algorithms for the following problems: \vspace{-0.1in} \begin{itemize} \item[\bluec{P1}] Computing an \blue{{\it arbitrarily oriented monochromatic rectangle of maximum size} ($LMR$)} in $\mathbb{R}^2$, where a rectangle $U$ is said to be monochromatic if it contains points of only one color in the proper interior of $U$, and the \blue{{\em size}} of $U$ is the number of points of that color inside or on the boundary of the rectangle $U$. \vspace{-0.05in} \item[\bluec{P2}] Computing an \blue{{\it arbitrarily oriented rectangle of maximum weight} $(LWR)$} in $\mathbb{R}^2$, where each point in the set $P_b$ (resp. $P_r$) is associated with a negative (resp. positive) real-valued weight, and the weight of a rectangle $U$ is the sum of the weights of all the points inside or on the boundary of $U$. \vspace{-0.05in} \item[\bluec{P3}] Computing a \blue{{\it monochromatic axis parallel cuboid\footnote{a solid which has six rectangular faces at right angles to each other}} ($LMC$)} of maximum size in $\mathbb{R}^3$. \end{itemize} \vspace{-0.1in} A rectangle of arbitrary orientation in $\mathbb{R}^2$ is called \blue{red} if it does not contain any blue point in its interior{\footnote{blue points may appear on the boundary}}. The \blue{largest red rectangle ($LRR$)} is a red rectangle of maximum size. Similarly, the \blue{largest blue rectangle ($LBR$)} is defined. The \blue{largest monochromatic rectangle ($LMR$)} is either the $LRR$ or the $LBR$, depending on which one is of maximum size. Here, the objective is to compute the $LRR$. In $\mathbb{R}^3$, we similarly define the \blue{largest axis parallel red cuboid ($LRC$)}, i.e. a cuboid containing the maximum number of red points and no blue point in its interior$^1$. We use $x(p)$ and $y(p)$ to denote the $x$- and $y$-coordinate of a point $p\in P$ respectively. Several variations of this problem are well studied in the literature. In the well-known {\it maximum empty rectangle} (MER) problem, a set $P$ of $n$ points is given; the goal is to find a rectangle (axis parallel/arbitrary orientation) of maximum area that does not contain any point of $P$ in its interior (see \cite{aggarwal1987fast,chazelle1986computing, naamad1984maximum,orlowski1990new} for MER of fixed orientation, and \cite{CND,asish} for MER of arbitrary orientation). 
For the fixed orientation version, the best-known algorithm runs in $O(n\log^2 n)$ time and $O(n)$ space \cite{aggarwal1987fast}. For the arbitrary orientation version, the best-known algorithm runs in $O(n^3)$ time using $O(n)$ space \cite{CND}. For the bichromatic version of the problem, Liu and Nediak \cite{liu2003planar} designed an algorithm for finding an axis parallel $LRR$ in $O(n^2\log n+nm+m\log m)$ time using $O(n)$ space. Backer and Keil \cite{backer2009bichromatic} improved the time complexity to $O((n+m)\log^3 (n+m))$ using $O(n\log n)$ space, adopting the divide-and-conquer approach of Aggarwal and Suri \cite{aggarwal1987fast}. They also proposed an $O((n+m)\log (m+n))$ time algorithm for finding an axis-parallel {\em red} square of maximum size. Recently, Bandyapadhyay and Banik \cite{bandyapadhyay2017polynomial} proposed an algorithm for finding the $LRR$ in arbitrary orientation using $O(g(n,m)\log(n+m)+n^2)$ time and $O(n^2+m^2)$ space, where $g(n,m)\in O(m^2(n+m))$ and $g(n,m)\in \Omega(m(n+m))$. Other variations of the $LRR$ problem studied in the literature are as follows. For a given bichromatic (red, blue) point set, Armaselu and Daescu \cite{Daescucccg16} considered the problem of finding a rectangle of maximum area containing all red points and the minimum number of blue points. In $\mathbb{R}^2$, the axis-parallel version of this problem can be solved in $O(m\log m +n)$ time, and the arbitrarily oriented version requires $O(m^3 + n\log n)$ time. In $\mathbb{R}^3$, the axis-aligned version of the problem can be solved in $O(m^2(m+n))$ time. Eckstein {\it et al.} \cite{eckstein2002maximum} considered the axis-parallel version of the $LRR$ problem in higher ($d \geq 3$) dimensions. They showed that, if the dimension $d$ is not fixed, the problem is $NP$-hard. They presented an $O(n^{2d+1})$ time algorithm for any fixed dimension $d \geq 3$. Later, Backer and Keil \cite{backer2010mono} improved the time bound of the problem to $O(n^d\log^{d-2} n)$. Cort{\'e}s {\it et al.} \cite{cortes2009bichromatic} considered the problem of removing as few points as possible from a given bichromatic point set such that the remaining points can be enclosed by two axis-parallel rectangles $A_R$ and $A_B$ (which may or may not be disjoint), where $A_R$ (resp. $A_B$) contains all the remaining red (resp. blue) points. They solved this problem in $O(n^2\log n)$ time using $O(n)$ space. The problem of separating a bichromatic point set by two disjoint axis-parallel rectangles, such that each of the rectangles is monochromatic, was solved in $O(n\log n)$ time by Moslehi and Bagheri \cite{moslehi2016separating} (if such a solution exists). If these two rectangles are of arbitrary orientation, then they solved the problem in $O(n^2\log n)$ time. Bitner {\it et al.} \cite{bitner2010minimum} studied the problem of computing the minimum separating circle, which is the smallest circle containing all the points of red color and as few points as possible of blue color in its interior. The proposed algorithm runs in $O(nm\log m+n\log n)$ time using $O(n+m)$ space. They also presented an algorithm for finding the largest separating circle in $O(nm\log m+k(n+m)\log (n+m))$ time using $O(n+m)$ space, where $k$ is the number of separating circles containing the smallest possible number of points from the blue point set. 
The problem of covering a bichromatic point set with two disjoint monochromatic disks has been studied by Cabello {\it et al.} \cite{cabello2013covering}, where the goal is to enclose as many points as possible in each of the monochromatic disks. They solved the problem in $O(n^{\frac{11}{3}} \polylog~n)$ time. If the covering objects are unit disks or unit squares, then the problem can be solved in $O(n^{\frac{8}{3}}\log^2 n)$ and $O(n\log n)$ time respectively \cite{cabello2008covering}. Weighted bichromatic problems have also been studied in the literature. The smallest maximum-weight circle for weighted points in the plane has been addressed by Bereg {\it et al.} \cite{bereg2015smallest}. For $m$ negative weight points and $n$ positive weight points, they solved the problem in $O(n(n+m)\log (n+m))$ time using linear space. For a weighted point set, Barbay {\it et al.} \cite{barbay2014maximum} provided an $O(n^2)$ time algorithm to find the maximum weight axis-parallel square. \subsection*{Our Contribution} Given a bichromatic point set $P=P_r \cup P_b$ in a rectangular region $\cal A \subseteq$ $\mathbb{R}^2$, where $P_r$ and $P_b$ are the sets of $n$ red points and $m$ blue points respectively, we design an in-place algorithm\footnote{An in-place algorithm is an algorithm where the input is given in an array, the execution of the algorithm is performed using only $O(1)$ extra workspace, and after the execution of the algorithm all the input elements are present in the array.} for finding an arbitrarily oriented $LMR$ in $O(m(n + m)(m\sqrt{n} + m \log m + n\log n))$ time, using $O(1)$ extra workspace. We also show that the axis-parallel version of the $LMR$ problem in $\mathbb{R}^3$ (called the $LMC$ problem) can be solved in an in-place manner in $O(m^3\sqrt{n}+m^2n\log n)$ time using $O(1)$ extra workspace. As a prerequisite for the above problems, we propose an algorithm for constructing a k-d tree on a set of $n$ points in $\mathbb{R}^k$, given in an array of size $n$, in an in-place manner such that the orthogonal range counting query can be performed using $O(1)$ extra workspace. The construction and query times of this data structure are $O(n\log n)$ and $O(n^{1-1/k})$, respectively. Finally, we show that if the points in $P_r$ (resp. $P_b$) have positive (resp. negative) real-valued weights, then a rectangle of arbitrary orientation with maximum weight (called $LWR$) can be computed in $O(m^2(n+m)\log (n+m))$ time using $O(n)$ space. \section{In-place k-d tree} \label{in-place} To perform the orthogonal range reporting query, Bentley~\cite{Bentley75} invented the k-d tree in 1975. It is a binary tree in which every node stores a $k$-dimensional point. Every non-leaf node can be thought of as being associated with one of the $k$ dimensions of the corresponding point, with a hyperplane perpendicular to that dimension's axis; implicitly, this hyperplane splits the space into two half-spaces. Points to the negative side of this \emph{splitting hyperplane} are represented by the left subtree of that node, and points in the positive side of the hyperplane are represented by the right subtree\footnote{For a hyperplane $x_i=c$, its {\it negative} (resp. {\it positive}) side is the half-space $x_i<c$ (resp. $x_i > c$), where $x_i$ is the $i$-th coordinate of a $k$ dimensional point $(x_1, x_2, \ldots, x_k)$}. Depending on the level of a node going down the tree, the splitting dimension is chosen one after another in a cyclic manner. 
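For readers unfamiliar with the structure, the following minimal pointer-based Python sketch illustrates this classical construction (all names are ours, and the median is found by sorting only for brevity); the in-place variant developed below avoids both the pointers and the recursion.

\begin{verbatim}
# Classical (pointer-based) k-d tree construction; illustrative sketch
# only, for contrast with the in-place variant developed in this section.
from typing import List, Optional, Tuple

Point = Tuple[float, ...]

class Node:
    def __init__(self, point: Point, axis: int):
        self.point = point      # splitting point stored at this node
        self.axis = axis        # splitting dimension at this node
        self.left: Optional['Node'] = None    # negative side of hyperplane
        self.right: Optional['Node'] = None   # positive side of hyperplane

def build(points: List[Point], depth: int = 0) -> Optional[Node]:
    if not points:
        return None
    axis = depth % len(points[0])          # choose dimensions cyclically
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median along the current axis
    node = Node(points[mid], axis)
    node.left = build(points[:mid], depth + 1)
    node.right = build(points[mid + 1:], depth + 1)
    return node

root = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
\end{verbatim}

Sorting at every level is only for brevity of the sketch; linear-time median selection yields the usual $O(n\log n)$ construction time.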
Each node $v$ of the tree is implicitly associated with a rectangular region of space, called \emph{cell($v$)}. The cell corresponding to the root of the tree is the entire $\mathbb{R}^k$. A child's cell is contained within its parent's cell, and it is determined by the splitting hyperplanes stored at the predecessor nodes. Br\"{o}nnimann~{\it et al.}~\cite{BCC04} mentioned an in-place version of the k-d tree. We note that their procedures for both constructing and querying the data structure are recursive, and need to remember the subarray and the cell in which the recursive call is invoked. As a result, there is a hidden $O(\log n)$ space requirement for the system stack. We present an alternate variant of the in-place $k$-d tree data structure that supports both reporting and counting queries for an orthogonal query range with the same query time as the classical one. The advantage of this in-place variant is that both the construction and query algorithms are non-recursive, and it takes only $O(1)$ extra workspace during the execution apart from the array containing the input points. The in-place organization of this data structure is similar to the in-place min-max priority search tree proposed by De {\it et al.}~\cite{de2013place}. \subsection{Construction of in-place k-d tree} \label{preprocessing2d} Let us consider that a set $P$ of $n$ points in $\mathbb{R}^k$ is given in an array $P[1,\ldots,n]$. We propose an in-place algorithm to construct the k-d tree $\cal T$ in the array $P$. Here, $\cal T$ is a binary tree of height $h = \lfloor \log n \rfloor$, such that the levels $0,1,\ldots,h-1$ are full and level $h$ consists of $\varkappa=n - (2^h-1)$ nodes which are aligned as far as possible to the left. At the end of the construction, the tree $\cal T$ is stored implicitly in the given array $P$. In other words, we store the root of the tree in $P[1]$, its left and right children in $P[2]$ and $P[3]$, etc. This allows us to navigate to $\mathord{\emph{parent}}(P[i])$, which is at $P[\lfloor\frac{i}{2}\rfloor]$, and to $\mathord{\emph{left-child}}(P[i])$ and $\mathord{\emph{ right-child}}(P[i])$, if they exist, which are at $P[2i]$ and $P[2i+1]$, respectively. \begin{figure} \noindent\begin{minipage}[b]{.42\textwidth} \centering{\includegraphics[width=2.2in]{level_wise_tree.pdf}} \centerline{$(a)$} \end{minipage} \hfill \begin{minipage}[b]{.58\textwidth} \centering{\includegraphics[width=4in]{array_representation_level-wise_construction.pdf}} \centerline{$(b)$} \end{minipage} \caption{(a) k-d tree after constructing its $(i-1)$-th level (striped), and (b) its array representation up to $i$-th level}\vspace{-0.15in} \label{2d-tree-1} \vspace{-0.1in} \end{figure} Note that there are $2^i$ nodes in level $i\neq h$ of the tree $\cal T$. As a subtree that is rooted at level $i$ and full down to level $h$ contains $2^{h-i}$ nodes of level $h$, there are $k_i = \lfloor{\frac{\varkappa}{2^{h-i}}}\rfloor$ nodes at level $i$ ($0<i<h$) that are roots of subtrees, each of size $K_1^i= 2^{h+1-i} - 1$. If $k_i=2^i$, then all the subtrees at level $i$ are full, and the number of nodes in each subtree is $2^{h+1-i}-1$. Otherwise, we have $k_i< 2^i$, and level $i$ ($0<i<h$) consists of, from left to right, \begin{itemize} \item $k_i$ nodes which are roots of subtrees, each of size $K_1^i= 2^{h+1-i} - 1$, \item one node that is the root of a subtree of size $K_2^i = 2^{h-i} - 1 + \varkappa - k_i\cdot2^{h-i}$, and \item $2^i - k_i-1$ nodes which are roots of subtrees, each of size $K_3^i = 2^{h-i} - 1$. 
\end{itemize} See Figure \ref{2d-tree-1} for an illustration. Here, we introduce the notions of {\em block} and {\em block median}. Assume that $0<i<h$. We refer to the portion of the array $P[(2^i +(j-1)K_1^i),\ldots, (2^i+jK_1^i-1)]$ as block $B_j^i$, for $j\leq k_i$. The portion of the array $P[(2^i +k_iK_1^i),\ldots, (2^i +k_iK_1^i+K_2^i-1)]$ is referred to as block $B_{k_i+1}^i$, and the portions $P[(2^i +k_iK_1^i+K_2^i + (j-1)K_3^i), \ldots (2^i +k_iK_1^i+K_2^i + jK_3^i-1)]$ are referred to as blocks $B_j^i$, for all $j>k_i+1$. For $i=0$, we refer to the whole array $P[1,\ldots, n]$ as $B_1^0$. For $i=h$, we refer to the array element $P[2^h+j]$ as block $B_j^h$, where $1\leq j\leq \varkappa$. For a block $B_j^i$ ($0<i<h$) of size $K_1^i$ (resp. $K_3^i$), we define the block median $m_j^i$ as the point in $B_j^i$ whose $(i \mod k)$-th coordinate value is the $\lceil\frac{K_1^i}{2}\rceil$-th (resp. $\lceil\frac{K_3^i}{2}\rceil$-th) smallest among all the points in $B_j^i$. If the size of $B_j^i$ is $K_2^i$, then depending on whether $K_2^i-(2^{h-i}-1)< 2^{h-i-1}-1$ or $K_2^i-(2^{h-i}-1)\geq 2^{h-i-1}-1$, we define the block median $m_j^i$ as the point in $B_j^i$ whose $(i \mod k)$-th coordinate value is the $(K_2^i- (2^{h-i-1}-1))$-th or the $2^{h-i}$-th smallest among all the points in $B_j^i$. For block $B_1^0$, depending on whether $n-(2^{h}-1)< 2^{h-1}-1$ or $n-(2^{h}-1)\geq 2^{h-1}-1$, we define the block median $m_1^0$ as the point in $B_1^0$ whose first coordinate value is the $(n- (2^{h-1}-1))$-th or the $2^{h}$-th smallest among all the points in $B_1^0$. Our algorithm constructs the tree level by level. After constructing the $(i-1)$-th level of the tree, it maintains the following invariants: \begin{invariant} \begin{itemize} \item[(i)] The subarray $P[1, \ldots, 2^i-1]$ stores levels $0,1,\ldots, i-1$ of the tree. \vspace{-0.05in} \item[(ii)] Block $B_j^i$ contains all the elements of the $j$-th leftmost subtree rooted at level $i$, for $j\in\{1,\ldots,2^i\}$ $(j\in \{1,\ldots, \varkappa\}$ when $i=h)$. \end{itemize} \end{invariant} At the first iteration of the algorithm, we find the block median $m_1^0$ using the linear time in-place median finding algorithm of Carlsson and Sundstr{\"o}m \cite{CarlssonS95}, and swap it with $P[1]$. Next, we arrange the subarray $P[2,\ldots, n]$ such that all the elements whose first coordinate value is greater than that of $m_1^0$ appear before all the elements whose first coordinate value is less than that of $m_1^0$. We can do this arrangement in linear time using $O(1)$ extra space. Note that after the first iteration, both the invariants are maintained. Assuming that the tree is constructed up to level $(i-1)$, we now construct level $i$ by doing the following: \begin{enumerate} \item Find the block median $m_j^i$ of each block $B_j^i$ and swap it with the first location of block $B_j^i$. Using the median finding algorithm of \cite{CarlssonS95}, this needs a total of $O(n)$ time for all the blocks in this ($i$-th) level. \item Now, depending on the median value $m_j^i$, we arrange the elements of each block $B_j^i$ such that all the elements having $(i \mod k)$-th coordinate value greater than that of $m_j^i$ appear before all the elements having $(i \mod k)$-th coordinate value less than that of $m_j^i$. Thus the block $B^i_j$ splits into two parts, named the {\it first half-block} and the {\it second half-block}. This step again needs time proportional to the size of each block, and hence $O(n)$ time in total. 
\item Now, we need to move all the medians $m_j^i$ stored at the first position of each block to their correct positions in level $i$ of the tree. To do this, we proceed as follows. First, we move the last block median $m_{2^i}^i$ next to $m_{2^i-1}^i$ by two swaps: (i) swap $m_{2^i}^i$ with the first element of the second half-block of $B_{2^i-1}^i$, and (ii) swap $m_{2^i}^i$ with the first element of the first half-block of $B_{2^i-1}^i$. Thus, after this swapping step, the elements of block $B_{2^i-1}^i$ remain partitioned by $m_{2^i-1}^i$ exactly as before. Now, we move the pair ($m_{2^i-1}^i$, $m_{2^i}^i$) just after $m_{2^i-2}^i$. It can be shown that, for the move of each element of this pair, we need a pair of swaps as explained above. Next, we move $m_{2^i-2}^i$, $m_{2^i-1}^i$ and $m_{2^i}^i$ by swapping (as mentioned above) next to $m_{2^i-3}^i$. We continue in this way until all the block medians $\{m_j^i \mid 1 \leq j \leq 2^i\}$ are placed consecutively. Using $O(1)$ space, this can be done in linear time\footnote{The reason is that, during this step of execution, each element is moved backward from its present position in the array at most once.}. \end{enumerate} Step 3 ensures that both the invariants are maintained after this iteration. At the end of the $h$-th iteration, we have the tree $\cal T$ stored implicitly in the array $P$. The correctness of this algorithm follows by observing that the invariants are correctly maintained. As there are $O(\log n)$ iterations and each iteration takes $O(n)$ time, in total the algorithm takes $O(n \log n)$ time. \begin{lemma} \label{preprocess} Given a set of $n$ points in $\mathbb{R}^k$ in an array $P$, the in-place construction of the k-d tree takes $O(n\log n)$ time and $O(1)$ extra workspace. \end{lemma} \subsection{Orthogonal range counting query in the in-place k-d tree}\label{counting2d} For simplicity of exposition, we illustrate the range counting query for points in $\mathbb{R}^2$. We can easily generalize it for points in $\mathbb{R}^k$, for any fixed $k$. Given a rectangular range $Q = [\alpha, \beta] \times [\gamma, \delta]$ as a query, the objective is to count the elements of $P$ that lie in the rectangular range $Q$. In the traditional model, to answer a counting query in $O(\sqrt{n})$ time, each node of the preprocessed k-d tree stores its subtree size. In our case, we cannot afford to store the subtree size along with each node of the in-place k-d tree. However, if we know the level $\ell$ of a node $P[t]$, then we can compute the subtree size on the fly as follows. Note that $P[t]$ is the $r$-th leftmost node at the $\ell$-th level of the tree $\cal T$, where $r=t-(2^{\ell}-1)$. Depending on whether $r\leq k_{\ell}$, $r= k_{\ell}+1$ or $r \geq k_{\ell}+2$, the subtree size of the node corresponding to $P[t]$ is $K_1^{\ell}$, $K_2^{\ell}$ or $K_3^{\ell}$. We remind the reader that $k_{\ell} = \lfloor{\frac{\varkappa}{2^{h-\ell}}}\rfloor$, $K_1^{\ell}= 2^{h+1-\ell} - 1$, $K_2^{\ell}= 2^{h-\ell} - 1 + \varkappa - k_{\ell}\cdot2^{h-\ell}$ and $K_3^{\ell}= 2^{h-\ell} - 1$, where $\varkappa=n-(2^h-1)$. 
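For concreteness, a minimal Python sketch of this index arithmetic (the function names are ours) is given below; it checks itself against a direct recursive count over all implicit trees with up to 129 nodes.

\begin{verbatim}
def subtree_size(t, n):
    """Size of the subtree rooted at 1-based index t of the implicit
    tree on n nodes, computed on the fly from t and n alone."""
    h = n.bit_length() - 1            # height: floor(log2 n)
    kappa = n - (2**h - 1)            # number of nodes on level h
    lev = t.bit_length() - 1          # level of node t: floor(log2 t)
    if lev == h:
        return 1
    r = t - (2**lev - 1)              # t is the r-th leftmost node of its level
    k = kappa // 2**(h - lev)         # number of full subtrees on this level
    if r <= k:
        return 2**(h + 1 - lev) - 1                         # K1
    if r == k + 1:
        return 2**(h - lev) - 1 + kappa - k * 2**(h - lev)  # K2
    return 2**(h - lev) - 1                                 # K3

def direct(t, n):                     # brute-force check
    return 0 if t > n else 1 + direct(2 * t, n) + direct(2 * t + 1, n)

assert all(subtree_size(t, n) == direct(t, n)
           for n in range(1, 130) for t in range(1, n + 1))
\end{verbatim}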
On the other hand, the traditional query algorithm is recursive, starting from the root of the tree. At a node $v$, (i) if $Q\cap cell(v)=\emptyset$, then it returns 0; (ii) else if $cell(v)\subseteq Q$, then it returns the subtree size of $v$; (iii) otherwise, it recursively counts in the two children of $v$ and returns the sum of these counts. The main issue in implementing this algorithm in the in-place model is that it needs $O(\log n)$ space for the system stack in order to keep track of the cell of the current node. To tackle this situation, we have a new geometric observation which leads to a non-recursive algorithm in the in-place model. When we are at a node $v$, we know that the cell of $\mathord{\emph{parent}}(v)$ has non-empty intersection with $Q$. Hence, we can test whether the cells of both children of $\mathord{\emph{parent}}(v)$ intersect the query region $Q$ by checking whether the splitting hyperplane stored at $\mathord{\emph{parent}}(v)$ intersects $Q$. If the splitting hyperplane does not intersect $Q$, then the unique child whose cell has non-empty intersection with $Q$ can be identified by checking on which side of the hyperplane the region $Q$ lies. The following observation plays a crucial role here. \begin{observation}\label{obj:1} If the left (resp. right, bottom, and top) boundary of $cell(v)$ intersects the query region $Q$, then the left (resp. right, bottom, and top) boundary of $cell(v')$ corresponding to the left (resp. right, left, right) child ($v'$) of node $v$ also intersects the query region $Q$. \end{observation} To decide whether $cell(v)\subseteq Q$, we do the following. Throughout the query algorithm, we keep a four-tuple $(L,R,B,T)$, each entry being able to store one coordinate value of the given input points. Initially, all of them are set to $NULL$. Throughout the query algorithm, this four-tuple maintains the following invariant: \begin{invariant} When we are at a node $\mathord{\it Current}$, the entry $L$ (resp. $R$, $B$, and $T$) is non-NULL if and only if the left (resp. right, bottom, and top) boundary of $cell(\mathord{\it Current})$ intersects the query region $Q$. More specifically, if the value stored at $L$ (resp. $R$, $B$, and $T$) is non-NULL\footnote{the split-value of some ancestor of $\mathord{\it Current}$}, then it represents the left (resp. right, bottom, and top) boundary of the cell corresponding to the lowest level ancestor $v$ of the node $\mathord{\it Current}$ such that the left (resp. right, bottom, and top) boundary of $cell(v)$ intersects the query region $Q$. \end{invariant} At a node $v$, if all the entries in the four-tuple are non-NULL, then $cell(v)\subseteq Q$. We present our algorithm as pseudocode in Algorithm~\ref{AlgoRCount}. It is similar to the algorithm \textsf{\sc Explore}\ in~\cite{de2013place}. It uses two variables $\mathord{\it Current}$ and $\mathord{\it state}$ that satisfy the following: \noindent \begin{itemize} \item $\mathord{\it Current}$ is a node in $\cal T$. \item $\mathord{\it state} \in \{0,1,2\}$. \item If $\mathord{\it state} = 0$, then {\it either} $cell(\mathord{\it Current}) \subseteq Q$ {\it or} both the children of $\mathord{\it Current}$ need to be processed to compute $cell(\mathord{\it Current}) \cap Q$. 
\item If $\mathord{\it state} = 1$, then all elements of the set $Q \cap \left( \{\mathord{\it Current}\} \cup {\cal T}_{{\mathord{\emph{left-child}}(\mathord{\it Current})}} \right)$ have been counted, where $ {\cal T}_{{\mathord{\emph{left-child}}(\mathord{\it Current})}}$ is the left subtree of $\mathord{\it Current}$ in the tree $ {\cal T}$. \item If $\mathord{\it state} = 2$, then all elements of the set $Q \cap {\cal T}_{\mathord{\it Current}}$ have been counted. \end{itemize} \begin{algorithm}[h] \SetAlgoLined \scriptsize \SetKwData{Counnt}{Count} \SetKwComment{Comment}{\*}{} \KwIn{The root $p$ of $\cal T$ and a rectangular query range $Q = [\alpha, \beta] \times [\gamma, \delta]$.} \KwOut{Count of all the points $q$ in $\cal T$ that lie in $Q$.} $\mathord{\it Current} = p$; $\mathord{\it state} = 0$; $\mathord{\it Count}=0$; $level=0$; 4-Tuple=$(L,R,T,B)=(NULL,NULL,NULL,NULL)$\; \While{$\mathord{\it Current} \neq p$ or $\mathord{\it state} \neq 2$} {\If{$\mathord{\it state} = 0$} { \If{$L\neq NULL \bigwedge R\neq NULL \bigwedge T\neq NULL \bigwedge B\neq NULL $} { $\mathord{\it Count}=\mathord{\it Count} + SubtreeSize({\cal T}_{\mathord{\it Current}})$ \; \If{($level \mod 2 =0$)} {$L=val(\mathord{\it Current})$\; \If{($val(\mathord{\it Current}) = R$)} {$R=NULL$\;}} \If{($level \mod 2 =1$)} {$B=val(\mathord{\it Current})$\; \If{($val(\mathord{\it Current}) = T$)} {$T=NULL$\;}} \If{$\mathord{\it Current}$ is the $\mathord{\emph{left-child}}$ of its parent}{$\mathord{\it state} = 1$;} \Else{$\mathord{\it state} = 2$;} $\mathord{\it Current} = \mathord{\emph{parent}}(\mathord{\it Current})$; $\mathord{\it level}=\mathord{\it level}-1$\; } \Else{ \If{($val(\mathord{\it Current})$ lies in $Q$)}{$\mathord{\it Count}=\mathord{\it Count}+1$\;} \If{$(\mathord{\it Current}$ has a left child $)\bigwedge$ (full or a part of $Q$ is in the left/bottom half-space of the splitting hyperplane at $\mathord{\it Current}$ )} { \If{the splitting hyperplane at $\mathord{\it Current}$ intersects $Q$} {{\bf if} ($level \mod 2 =0$ and $R = NULL$) {\bf then} $R=val(\mathord{\it Current})$\; {\bf if} ($level \mod 2 =1$ and $T = NULL$) {\bf then} $T=val(\mathord{\it Current})$\;} $\mathord{\it Current} = {\mathord{\emph{left-child}}(\mathord{\it Current})}$; $\mathord{\it level}=\mathord{\it level}+1$\; } \Else{$\mathord{\it state} = 1$;} } } \Else{\If{$\mathord{\it state} = 1$} { {\If{$(\mathord{\it Current}$ has a right child) $ \bigwedge$ (full or part of $Q$ is in the right/top half-space of the splitting hyperplane at $\mathord{\it Current}$ )} {\If{the splitting hyperplane at $\mathord{\it Current}$ intersects $Q$} {{\bf if} ($level \mod 2 =0$ and $L = NULL$) {\bf then} $L=val(\mathord{\it Current})$\; {\bf if} ($level \mod 2 =1$ and $B = NULL$) {\bf then} $B=val(\mathord{\it Current})$\;} $\mathord{\it Current} = {\mathord{\emph{ right-child}}(\mathord{\it Current})}$; $\mathord{\it level}=\mathord{\it level}+1$\; $\mathord{\it state} = 0$\; }} \Else{$\mathord{\it state} = 2$; } } \Else{\Comment{// $\mathord{\it state} = 2$ and $\mathord{\it Current} \neq p$} \If{($\mathord{\it Current}$ is the $\mathord{\emph{left-child}}$ of its parent) $\bigwedge$ (the splitting hyperplane at $\mathord{\it Current}$ intersects $Q$)} {$\mathord{\it state} = 1$;} {{\bf if} ($level \mod 2 =0$ and $L = NULL$) {\bf then} $L=val(\mathord{\it Current})$\; {\bf if} ($level \mod 2 =1$ and $B = NULL$) {\bf then} $B=val(\mathord{\it Current})$\;} $\mathord{\it Current} = \mathord{\emph{parent}}(\mathord{\it Current})$; 
$\mathord{\it level}=\mathord{\it level}-1$\; } } } \normalsize \caption{RangeCounting} \label{AlgoRCount} \end{algorithm} The four-tuple $(L, R, T, B)$ is updated as follows. While searching with the query rectangle $Q$ with $\mathord{\it state}=0$, when $Q$ is split by the split-line of the current node and the search proceeds towards one subtree of that node, we store the split-value (corresponding to the split-line) in the corresponding entry of the four-tuple, provided it was not set earlier (i.e., it contains a $NULL$ value). During the backtracking, i.e., when $\mathord{\it state}=2$, if the split-value of the current node matches the corresponding entry of the four-tuple, then that entry is reset to $NULL$. Now, if the backtracking reaches a node from its left child, we set $\mathord{\it state}=1$; since the right child of that node needs to be processed, we set the corresponding entry of the four-tuple to the split-value stored at that node. The correctness of the algorithm follows from the maintenance of the invariants and Observation~\ref{obj:1}. In the worst case, we may visit all the nodes whose corresponding cells overlap the orthogonal query rectangle $Q$. As the number of cells stabbed by $Q$ can be shown to be $O(\sqrt{n})$~\cite{Berg2008}, we have the following result. \begin{lemma}\label{count} Given the in-place $2$-d tree maintained in the array $P$ of size $n$, the rectangular range counting query can be performed in $O(\sqrt{n})$ time using $O(1)$ extra workspace. \end{lemma} We can generalize the above algorithm for points in $\mathbb{R}^k$. The only difference is that we need a $2k$-tuple instead of a four-tuple. Assuming $k$ is a fixed constant, we have the following: \begin{lemma}\label{count2} Given the in-place k-d tree maintained in the array $P$ of size $n$, the orthogonal range counting query can be performed in $O(n^{1-1/k})$ time using $O(1)$ extra workspace. \end{lemma} \section{$LMR$ problem in arbitrary orientation}\label{twod} In this section, we describe the method of identifying an arbitrarily oriented red rectangle of largest size for a given bichromatic point set $P=P_r\cup P_b$ in $\mathbb{R}^2$. The $LRR$ problem was solved by Bandyapadhyay and Banik \cite{bandyapadhyay2017polynomial}, considering the blue points as obstacles, using the following observation: \begin{observation} \label{obj1} \cite{bandyapadhyay2017polynomial} At least one side of an $LRR$ must contain two points $p,q$ such that $p \in P_b$ and $q\in P_r\cup P_b$, and each of the other three sides either contains at least one point of $P_b$ or is open (unbounded) (see Figure \ref{obs2ex}). \end{observation} \begin{figure}[h] \centering{\includegraphics[scale=0.5]{obs2example}} \caption{Example of $LRR$} \label{obs2ex} \end{figure} For the sake of formulation of our problem, let us make the {\it general position assumption} that no three points are collinear. We will use $\cal A$ to denote the convex hull of the point set $P$. \begin{definition} A pair of points $(p,q)$ is said to be a {\em candidate pair} if $p \in P_b$ and $q \in P_r\cup P_b$. \end{definition} \begin{definition} A rectangle with one of its boundaries defined by a candidate pair, and each of the other three boundaries containing at least one point in $P_b$, is referred to as a {\em candidate $LRR$}, or $cLRR$ in short. \end{definition} We consider each candidate pair $(p,q)$, and define the line $\ell_{pq}$ passing through $p$ and $q$. 
We process each side of $\ell_{pq}$ separately to compute all the $cLRR$s with $(p,q)$ on one of their boundaries, by sweeping a line parallel to $\ell_{pq}$ among the points in $P$ on that side of $\ell_{pq}$, as stated below. After considering all the candidate pairs in $P$, we report the $LRR$. We describe the method of processing the points in $P$ above\footnote{A point $(\alpha,\beta)$ is said to be above the line $ax+by+c = 0$ if $a\alpha + b\beta +c > 0$; otherwise the point $(\alpha,\beta)$ is below the said line.} $\ell_{pq}$. A similar method works for processing the points in $P$ below $\ell_{pq}$. \subsection{Processing a candidate pair $(p, q)$} \label{propq1} Without loss of generality, we consider $\ell_{pq}$ as the $x$-axis, and $x(p) < x(q)$. Let $P'$ be the array containing the subset of $P$ lying above the $x$-axis. Let $P_b'$ and $P_r'$ denote the blue and red point sets in $P'$ respectively, with $m'= |P_b'|$ and $n'=|P_r'|$. We sort the points of $P_b'$ with respect to their $y$-coordinates, and construct a range tree $\cal T$ with the red points in $P_r'$, considering $\ell_{pq}$ as the $x$-axis. Observe that each $cLRR$ above $\ell_{pq}$ with $(p,q)$ on one of its sides corresponds to a maximal empty rectangle ($MER$) \cite{cccgDeN11} among the points in $P_b'$ whose bottom side is aligned with the $x$-axis and which contains $p$ and $q$. We sweep a horizontal line $H$ in a bottom-up manner to identify all these $cLRR$s. During the sweep, we maintain an interval ${\cal I}=[\alpha,\beta]$. $\cal I$ is initialized to $[x_{min},x_{max}]$ at the beginning of the sweep, where $x_{min}$ and $x_{max}$ are the points of intersection of the line $\ell_{pq}$ (the $x$-axis) with the boundary of $\cal A$. For each point $\theta \in P_b'$ encountered by the sweep line, if $x(\theta) \not\in {\cal I}$, the sweep proceeds to process the next point. Otherwise, we have a $cLRR$ with horizontal span $[\alpha,\beta]$ and top boundary containing $\theta$\footnote{Needless to say, its bottom boundary contains the points $p$ and $q$.}. Its size is determined in $O(\log n')$ time by performing a rectangular range counting query in $\cal T$. Now, \vspace{-0.1in} \begin {itemize} \item if $x(\theta) \in [x(p), x(q)]$ then the sweep stops; \vspace{-0.05in} \item otherwise, \vspace{-0.1in} \begin{itemize} \item if $\alpha \leq x(\theta) \leq x(p)$ then we set $\alpha =x(\theta)$, \vspace{-0.05in} \item if $x(q) \leq x(\theta) \leq \beta$ then we set $\beta= x(\theta)$, \end{itemize} \end {itemize} and the sweep continues. Finally, after considering all the points in $P_b'$, the sweep stops. For a detailed description of our proposed method, see Algorithm \ref{rangeT}. A similar method is adopted for the points below $\ell_{pq}$. 
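The following minimal Python sketch mirrors this sweep for a single candidate pair, assuming the coordinate system has been rotated so that $\ell_{pq}$ is the $x$-axis and only the points above it are passed in. A naive $O(n')$ scan stands in for the range counting structure, and, in line with Observation \ref{obj1}, the final open-top rectangle clipped by $\cal A$ is also counted; all names are ours.

\begin{verbatim}
def best_clrr_above(p, q, blue_above, red_above, x_min, x_max):
    """Size of the largest cLRR above the x-axis with the candidate
    pair (p, q) on its bottom side; points are (x, y) tuples, y > 0."""
    xp, xq = min(p[0], q[0]), max(p[0], q[0])
    alpha, beta = x_min, x_max            # current horizontal span I

    def count_red(y_top):                 # red points in [alpha,beta] x [0,y_top]
        return sum(1 for (x, y) in red_above
                   if alpha <= x <= beta and y <= y_top)

    best = 0
    for (x, y) in sorted(blue_above, key=lambda b: b[1]):  # bottom-up sweep
        if not (alpha <= x <= beta):
            continue                      # blue point outside the current span
        best = max(best, count_red(y))    # cLRR with top boundary through (x, y)
        if xp <= x <= xq:
            break                         # every taller rectangle would contain it
        if x < xp:
            alpha = x                     # shrink the span from the left
        else:
            beta = x                      # shrink the span from the right
    else:
        best = max(best, count_red(float('inf')))  # open-top rectangle
    return best
\end{verbatim}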
\begin{algorithm} \SetAlgoLined \SetKwData{I}{I}\SetKwData{size}{size}\SetKwData{clrr}{cLRR}\SetKwData{Stop}{Stop}\SetKwData{sizec}{size(cLRR)} \small \KwIn{An array $P=P_b \cup P_r$ of points above $\ell_{pq}$; $P_b$ is the $y$-sorted set of blue points, and $P_r$ corresponds to the range tree $\cal T$ for the red points.} \tcc{$\ell_{pq}$ is the line through the candidate pair $(p,q)$} \KwOut{size of the largest $cLRR$ with $(p,q)$ on its bottom boundary} $\alpha \leftarrow x_{min}$ \tcc*[r]{$x_{min}$ is the left intersection point of the line $\ell_{pq}$ with the boundary of $\cal A$} $\beta \leftarrow x_{max}$ \tcc*[r]{$x_{max}$ is the right intersection point of the line $\ell_{pq}$ with the boundary of $\cal A$} \I $\leftarrow [\alpha,\beta]$\; \size $\leftarrow 0$ \tcc*[r]{number of red points in a rectangular range} \sizec $\leftarrow 0$ \tcc*[r]{\size of optimum \clrr} \For(\tcc*[f]{$H$ is the sweepline}){each point $\theta=(x_\theta,y_\theta) \in P_b$ encountered by $H$ in order} { \If{$x_\theta \in \text{\I}$}{ define a $cLRR$ with its bottom boundary given by the candidate pair $(p,q)$, top boundary at $\theta$, left and right boundaries at $\alpha$ and $\beta$ respectively\; determine \size of \clrr \tcc*[r]{using a rectangular range query in $\cal T$} \If{\size $>$ \sizec}{ \sizec $\leftarrow$ \size\; } \If{$x_\theta \in [x_p,x_q]$}{ \Stop\; } \If{$\alpha \leq x_\theta \leq x_p$}{ $\alpha \leftarrow x_\theta$\;} \If{$x_q \leq x_\theta \leq \beta$}{ $\beta \leftarrow x_\theta$\; } } } return \sizec\; \caption{\textsf{\sc LRR-Primitive-Algorithm-candidate-pair-$(p,q)$}} \label{rangeT} \end{algorithm} \begin{lemma} \label{l} The above algorithm computes the $LRR$ in $O(m(m+n)(m\log n + m\log m +n\log n))$ time using $O(n\log n)$ extra space. \end{lemma} \begin{proof} The space complexity follows from the space needed for maintaining the range tree $\cal T$. We now analyze the time complexity. For each candidate pair $(p,q)$, (i) the preprocessing steps (sorting the points in $P_b'$, and constructing $\cal T$ with the points in $P_r'$) need $O(n'\log n' + m'\log m')$ time, and (ii) during the sweep, reporting the size of each $cLRR$ needs $O(\log n')$ time\footnote{the time for the counting query for a rectangle in a range tree using fractional cascading.}. Since $O(m')$ $cLRR$s may be reported for the candidate pair $(p, q)$, the total processing time for $(p, q)$ is $O(m'\log n' + m'\log m'+ n'\log n')$ in the worst case. The result follows from the fact that we have considered $O(m(n + m))$ candidate pairs, and that $m'= O(m)$ and $n'= O(n)$ in the worst case. \end{proof} The same method is followed to compute the $LBR$. Finally, the $LMR$ is reported by comparing the sizes of the $LRR$ and the $LBR$. Lemma \ref{l} shows that both the time and space complexities of our proposed algorithm for computing the $LMR$ improve upon those of the algorithm of Bandyapadhyay and Banik \cite{bandyapadhyay2017polynomial} for the same problem. It needs to be mentioned that we can implement the algorithm for the $LRR$ problem in an in-place manner by replacing the range tree with the in-place implementation of the 2-d tree described in Section \ref{in-place} for the range counting. Thus, the preprocessed data structure (the sorted array of $P_b$ and the 2-d tree for $P_r$) can be stored in the input array $P$ without any extra space. Using the results in Lemmata \ref{preprocess} and \ref{count}, we have the following result. \begin{theorem} \label{lrr} In the in-place setup, one can compute an $LMR$ in $O(m(m+n)(m\sqrt{n}+m\log m+n\log n))$ time using $O(1)$ extra space. 
\end{theorem} \section{$LWR$ problem in arbitrary orientation} In this section, we consider a weighted variation of {\bf P1}. Here, each point in $P_r$ is associated with a non-zero positive weight and each point in $P_b$ is associated with a non-zero negative weight. Our goal is to report a rectangle $LWR$ of arbitrary orientation such that the sum of the weights of the points inside that rectangle (including its boundary) is maximum among all possible rectangles in that region. Unlike problem {\bf P1}, here the optimum rectangle may contain points of both colors. \begin{observation} At least one side of the $LWR$ must contain two points $p,q\in P_r$, and each of the other three sides either contains a point of $P_r$ or is open. A point $p\in P_r$ may appear at a corner of the solution rectangle $LWR$; in that case, $p$ is considered to be present on both the adjacent sides of the $LWR$. \end{observation} We consider all possible pairs of points $p,q\in P_r$ and define the line $\ell_{pq}$ joining $p$ and $q$. We process each side of $\ell_{pq}$ separately to compute all the candidate $LWR$s, denoted as $cLWR$s, among the points in $P$ lying on that side of $\ell_{pq}$. After considering all possible pairs of points, we report the $LWR$. We now describe the processing of the set of points $P' \subseteq P$ that lie above $\ell_{pq}$. \subsection{Processing a point-pair $(p, q)$} As earlier, assume $\ell_{pq}$ to be the $x$-axis. Consider a rectangle $R$ whose bottom side is aligned with $\ell_{pq}$ (see Figure \ref{fig4}); its top side passes through $p_\theta$, and its left and right sides pass through $p_b$ and $p_c$ respectively. We can measure the weight of the rectangle $R$ as follows: \begin{figure}[h] \centerline{\includegraphics[scale=0.5]{rectangle_update.pdf}} \caption{Update of $LWR$} \label{fig4} \end{figure} \begin{description} \item Let $U = \{u_i, i = 1, 2, \ldots,n\}$ be the projections on $\ell_{pq}$ of all the points having $y$-coordinate (distance from $\ell_{pq}$) less than or equal to that of $p_\theta$. Each member $u_i$ is assigned a weight equal to the weight of its corresponding point $p_i$. Now, compute the cumulative sum of weights $W(u_i)$ at each projected point of $U$ from left to right. Observe that the weight of the rectangle $R$ is equal to $W(u_c)-W(u_\alpha)$, where $u_\alpha$ is the rightmost point in $U$ to the left of $p_b$. \end{description} Thus, in order to get a maximum weight rectangle with its top boundary passing through the point $p_\theta$ and having $(p, q)$ on its bottom boundary, we need to search for an element $u_{\alpha}\in U$ having $x$-coordinate less than $\min(x(p_\theta),x(p))$ and minimum cumulative weight, and an element $u_{\beta}\in U$ having $x$-coordinate greater than $\max(x(p_\theta),x(q))$ and maximum cumulative weight. \blue{The weight of the rectangle with $(p,q),p_\alpha, p_\theta, p_\beta$ on its bottom, left, top and right boundaries will be $W(u_{\beta})-W(u_{\alpha})$.} We sweep a horizontal line (see Figure \ref{fig4}) among the points in $P'$. During the sweep, we create a projection $u_i$ of each point $p_i\in P'$, assign its weight $w(u_i) = w(p_i)$, and store it in a dynamically maintained weight balanced leaf search binary tree $\cal T$ \cite{overmars1983}. Its leaves correspond to the projections of all the points already encountered by the sweep line (see Figure \ref{fig5}). Each internal node $u$ in $\cal T$ maintains three pieces of information, namely $EXCESS$, $MAX$ and $MIN$. 
$MAX$ and $MIN$ store the maximum and minimum of the $W(u_i)$ values stored in the subtree rooted at the node $u$ of $\cal T$. The $EXCESS$ field is initialized to zero. Each projected point $u_j$ at a leaf also stores the cumulative sum of weights $W(u_j)$. During the sweep, when a new point $p_i \in P'$ is encountered by the sweep line, $u_i$ is inserted in $\cal T$. Now, for all $u_j$ with $x(u_j) > x(u_i)$, the cumulative sum of weights needs to be updated as $\hat{W}(u_j)=W(u_j)+w(u_i)$. We use the $EXCESS$ field to defer this update as follows. \begin{description} \item While tracing the search path to insert $u_i$ (= $x(p_i)$) in $\cal T$, if the search goes from a node $v$ to its left child, then we add $w(u_i)$ to the $EXCESS$ field of the right child $z$ of $v$. This is in anticipation that, while processing another point $u_j \in P'$, if the search goes through $z$, then the $EXCESS$ field of $z$ will be propagated to the $EXCESS$ fields of its two children (setting the $EXCESS$ field of $z$ to $0$). \end{description} \begin{figure}[h] \centering \includegraphics[scale=0.55]{tree.pdf} \caption{Search Path in $\cal T$} \label{fig5} \end{figure} After the insertion of $u_i$ in $\cal T$, we trace back up to the root of $\cal T$ and update the $MAX$ and $MIN$ fields (if necessary) of each node on the search path. If the (weight-)balance condition at any node is violated, time linear in the size of the subtree rooted at that node is spent to rebuild that subtree in a (weight-)balanced manner. Now, if $p_i \in P_r$, then we find the $cLWR$ of maximum weight with $(p,q)$ on its bottom boundary and $p_i$ on its top boundary by identifying (i) an element $u_\alpha \in {\cal T}$ with $W(u_\alpha) = \min\{W(u)\mid x(u) < \min(x(p), x(p_i))\}$ using the $MIN$ fields of the nodes on the search path, and (ii) an element $u_\beta\in {\cal T}$ with $W(u_\beta) = \max\{W(u)\mid x(u) > \max(x(q),x(p_i))\}$ using the $MAX$ fields of the nodes on the search path. As mentioned earlier, the weight of the rectangle on $\ell_{pq}$ with $p_\alpha,p_i,p_\beta \in P_r$ on its left, top, and right sides respectively, is $W(u_\beta)- W(u_\alpha)$. The iteration continues until all the points of $P'$ are considered by the sweep line. \begin{lemma} \label{fixed_side} The $cLWR$ of maximum weight with $(p,q)$ on one of its sides can be computed in $O((n+m)\log(n+m))$ time. \end{lemma} \begin{proof} This follows from the fact that the amortized insertion time of a point in $\cal T$ is $O(\log n)$ \cite{overmars1983}. While rebuilding a subtree due to the violation of the balance condition, the setting of the $EXCESS$, $MIN$ and $MAX$ fields of its nodes can also be done in time linear in the size of that subtree, and a subtree is rebuilt only after a number of updates proportional to its size \cite{overmars1983}. \end{proof} The algorithm proposed above is not in-place; it uses a preprocessed data structure occupying $O(n)$ extra space. Lemma \ref{fixed_side} and the fact that we need to consider $\ell_{pq}$ for each pair $p,q \in P_r$ yield the following result: \begin{theorem} An $LWR$ of arbitrary orientation for a set of weighted points can be computed in $O(m^2(n + m) \log(n + m))$ time using $O(n)$ workspace. \end{theorem} 
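To make the role of the $EXCESS$, $MIN$ and $MAX$ fields concrete, the following Python sketch implements the same bookkeeping on a static tree built over the $x$-ranks of all points (known in advance), ``activating'' a rank when the sweep line reaches the corresponding point. This offline simplification replaces the dynamic weight-balanced leaf-search tree used above; all names are ours.

\begin{verbatim}
BIG = 1e18   # sentinel keeping not-yet-swept leaves out of MIN/MAX queries

class LwrTree:
    def __init__(self, n):
        self.n = n
        self.mn = [BIG] * (4 * n)     # MIN field: min cumulative weight W
        self.mx = [-BIG] * (4 * n)    # MAX field: max cumulative weight W
        self.lz = [0.0] * (4 * n)     # EXCESS field: deferred addition

    def _push(self, v):               # propagate EXCESS to the children
        for c in (2 * v, 2 * v + 1):
            self.lz[c] += self.lz[v]
            self.mn[c] += self.lz[v]
            self.mx[c] += self.lz[v]
        self.lz[v] = 0.0

    def _pull(self, v):
        self.mn[v] = min(self.mn[2 * v], self.mn[2 * v + 1])
        self.mx[v] = max(self.mx[2 * v], self.mx[2 * v + 1])

    def add(self, lo, hi, w, v=1, l=0, r=None):   # W += w on ranks [lo, hi]
        if r is None:
            r = self.n - 1
        if hi < l or r < lo or lo > hi:
            return
        if lo <= l and r <= hi:
            self.lz[v] += w; self.mn[v] += w; self.mx[v] += w
            return
        self._push(v)
        m = (l + r) // 2
        self.add(lo, hi, w, 2 * v, l, m)
        self.add(lo, hi, w, 2 * v + 1, m + 1, r)
        self._pull(v)

    def _reveal(self, i, v, l, r):    # let leaf i join the MIN/MAX aggregates
        if l == r:
            self.mn[v] -= BIG; self.mx[v] += BIG  # drop the sentinel offsets
            return
        self._push(v)
        m = (l + r) // 2
        if i <= m:
            self._reveal(i, 2 * v, l, m)
        else:
            self._reveal(i, 2 * v + 1, m + 1, r)
        self._pull(v)

    def activate(self, rank, w):      # sweep line reaches the point of this rank
        self.add(rank, self.n - 1, w)        # all W at and right of it grow by w
        self._reveal(rank, 1, 0, self.n - 1)

    def min_max(self, lo, hi, v=1, l=0, r=None):  # (min W, max W) on [lo, hi]
        if r is None:
            r = self.n - 1
        if hi < l or r < lo or lo > hi:
            return (BIG, -BIG)
        if lo <= l and r <= hi:
            return (self.mn[v], self.mx[v])
        self._push(v)
        m = (l + r) // 2
        a = self.min_max(lo, hi, 2 * v, l, m)
        b = self.min_max(lo, hi, 2 * v + 1, m + 1, r)
        return (min(a[0], b[0]), max(a[1], b[1]))
\end{verbatim}

Every point met by the sweep is activated with its signed weight; when the point is red, one takes the minimum of \texttt{min\_max} over the prefix of ranks left of $\min(x(p),x(p_i))$ and the maximum over the suffix right of $\max(x(q),x(p_i))$, and their difference is the desired $W(u_\beta)-W(u_\alpha)$.

\section{Computing the largest axis-parallel monochromatic cuboid in $\mathbb{R}^3$} \label{3d} We now propose an in-place algorithm for computing a monochromatic axis-parallel cuboid containing the maximum number of points. 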
Here, the input is a set of bichromatic points $P=P_r\cup P_b$ inside a $3D$ axis-parallel region $\cal A$ bounded by six axis-parallel planes, where $P_r$ is the set of $n$ {\em red} points and $P_b$ is the set of $m$ {\em blue} points. The input points are given in an array, also called $P$. The $x,y,z$ coordinates of a point $p_i\in P$ are denoted by $x(p_i)$, $y(p_i)$ and $z(p_i)$ respectively, along with its color information $c(p_i)$ = red/blue. A cuboid is said to be a candidate for the $LRC$ if each of its faces either coincides with a face of $\cal A$ or passes through a blue point, and its interior does not contain any blue point. Such a cuboid will be referred to as a $cLRC$. The objective is to identify an $LRC$, which is a $cLRC$ containing the maximum number of red points. Similarly, a blue cuboid containing the maximum number of blue points ($LBC$) can be defined. The $LMC$ is either the $LRC$ or the $LBC$, whichever contains more points. We compute all possible maximal empty cuboids \cite{NB} among the $m$ blue points. Each one is a $cLRC$; we perform a range query to count the number of red points it contains. In our algorithm, three types of $cLRC$s inside $\cal A$ are considered separately. \begin{description} \item[type-1:] the $cLRC$s with both top and bottom faces aligned with the top and bottom faces of $\cal A$, \item[type-2:] the $cLRC$s whose top face is aligned with the top face of $\cal A$, but whose bottom face passes through a blue point in $P_b$, and \item[type-3:] the $cLRC$s whose top face passes through some blue point in $P_b$. The bottom face may pass through another blue point in $P_b$ or may coincide with the bottom face of $\cal A$. \end{description} As preprocessing, we first split the array $P$ into two parts, namely $P_r$ and $P_b$, such that $P_r = P[1,\ldots,n]$ and $P_b=P[n+1,\ldots,n+m]$. We construct an in-place 2-d tree $\cal T$ with the points in $P_r$ considering their $(x,y)$ coordinates, which will be used for the range-counting queries for the $cLRC$s. We also sort the points in $P_b$ in decreasing order of their $z$-coordinates. Thus, the preprocessing needs $O(m\log m + n\log n)$ time. In \cite{DumitrescuJ13,kaplan2008}, it is proved that the number of maximal empty hyper-rectangles among a set of $n$ points in $\mathbb{R}^d$ is $O(n^d)$. In the following subsections, we analyze the processing of these three types of $cLRC$s in an in-place manner. The largest among the {\em type-$i$} $cLRC$s will be referred to as the {\em type-$i$} $LRC$, for $i=1,2,3$. \subsection{Computation of {\it type-1} $LRC$} \label{type1} As both the top and bottom faces of the {\em type-1} $cLRC$s are aligned with the top and bottom faces of $\cal A$, if we consider the projections of the points in $P_b$ on the top face of $\cal A$, then each maximal empty axis-parallel rectangle ($MER$) on the top face of $\cal A$ corresponds to a {\it type-1} $cLRC$. 
Thus, the problem reduces to computing all the $MER$s among the points in the array $P_b$ in an in-place manner and, for each $MER$, counting the number of points of $P_r$ inside the corresponding {\em type-1} $cLRC$ using the 2-d tree $\cal T$ built on the projections of the points in $P_r$ on the top face of $\cal A$. \begin{lemma}\label{res-type1} The number of {\em type-1} $cLRC$s is $O(m^2)$ in the worst case, and the one of maximum size can be computed in $O(m^2\sqrt{n}+n\log n)$ time. \end{lemma} \begin{proof} The first part of the result, i.e., the bound on the number of {\em type-1} $cLRC$s, follows from \cite{naamad1984maximum}. For the second part, observe that (i) we can generate all the $MER$s with bottom boundary passing through a point $b_i$ on the top face of $\cal A$ using the method described in Section \ref{twod} in $O(m)$ time, and (ii) for each $MER$, the number of projected red points inside that $MER$ can be obtained in $O(\sqrt{n})$ time using the 2-d tree $\cal T$. The result then follows from the fact that $\cal T$ can be generated in $O(n\log n)$ time (see Section \ref{preprocessing2d}). \end{proof} \subsection{Computation of {\it type-2} $LRC$} \label{type-2} Now we describe the in-place method of computing the largest {\em type-2} $cLRC$, whose top face is aligned with the top face of $\cal A$ but whose bottom face passes through a point in $P_b$. We will use $p_1,p_2, \ldots, p_m$ to denote the points in $P_b$ in decreasing order of their $z$-coordinates. We consider each point $p_i \in P_b$ in order and compute $LRC(p_i)$, the largest {\it type-2} red cuboid whose bottom face passes through $p_i$. Let $H(p_i)$ denote the horizontal plane passing through $p_i$. Let $B_i=\{b_1, b_2, \ldots, b_{i-1}\}$, $i<m$, be the set containing the projections on the plane $H(p_i)$ of all the blue points having $z$-coordinate larger than $z(p_i)$. Similarly, let $R_i=\{r_1,r_2, \ldots\}$ be the set of projections on the plane $H(p_i)$ of all the red points having $z$-coordinate larger than $z(p_i)$. Thus, $LRC(p_i)$ corresponds to a rectangle on the plane $H(p_i)$ that contains $p_i$, but no point of $B_i$ in its interior, and that contains the maximum number of points of $R_i$. As in the earlier section, we partition the array $P$ into two contiguous blocks $P_b$ and $P_r$. The block $P_b$ contains all the blue points in decreasing order of their $z$-coordinates. The block $P_r$ contains all the red points. The blue points are processed in top-to-bottom order. Global counters $LRC$ and $MAX_r$ are maintained to store the largest red cuboid detected so far and its size. While processing a point $p_i \in P_b$, let $B_i$ denote the blue points with $z$-coordinates greater than $z(p_i)$. We split $P_r$ into two parts. The left part contains an in-place 2-d tree ${\cal T}_i$ with all the red points having $z$-coordinates greater than $z(p_i)$. The right part of $P_r$ contains the red points with $z$-coordinates less than $z(p_i)$. 
We can compute all the $MER$s using the set $B_i$ as in Section \ref{type1}. For each generated $MER$, if it contains $p_i$ in its interior, then we perform the range counting query in ${\cal T}_i$ to compute the number of red points inside it. $LRC$ and $MAX_r$ are updated, if necessary. Thus, we have the following result. \begin{lemma} \label{res-type2} The {\em type-2} $LRC$ can be computed in $O(m^3\sqrt{n}+mn\log n)$ time. \end{lemma} \begin{proof} The time complexity of processing each point $p_i \in P_b$ follows from Lemma \ref{res-type1}. Since $m$ blue points are processed, the result follows. \end{proof} \subsection{Computation of {\it type-3} $LRC$} \label{type-3} Here also we use $p_1,\ldots,p_m$ to denote the points in $P_b$ in decreasing order of their $z$-coordinates, and the algorithm processes the members of $P_b$ in this order. We now describe the {\it phase of processing} of a point $p_i \in P_b$. It involves generating all the {\it type-3} $cLRC$s whose top face passes through $p_i$; their bottom face may pass through another blue point $p_j\in P_b$ or may coincide with the bottom face of $\cal A$. Consider the horizontal plane $H(p_i)$ passing through $p_i\in P_b$ and sweep it downwards until it hits the bottom face of ${\cal A}$. During this phase, when the sweeping plane touches $H(p_j)$ (i.e., hits a point $p_j \in P_b$), the points lying between the two horizontal planes $H(p_i)$ and $H(p_j)$ will participate in computing the $cLRC$s with top and bottom faces passing through $p_i$ and $p_j$, respectively. Let $B_{ij}= \{b_i,\ldots,b_j\}$ be the projections of these blue points $p_i,\ldots,p_j$ $(1 \leq i <j\leq m)$ on the plane $H(p_i)$. Similarly, consider the projections $R_{ij}$, on the plane $H(p_i)$, of the red points that lie between the planes $H(p_i)$ and $H(p_j)$. Our objective is to determine a $cLRC$ corresponding to an $MER$ on the plane $H(p_i)$ with the points in $B_{ij}$ as obstacles that contains the maximum number of points in $R_{ij}$. In the phase of processing $p_i \in P_b$, the points of $P$ above $H(p_i)$ do not participate in this processing. Those points of $P_b$ (resp. $P_r$) are separately stored at the beginning of the array $P_b$ (resp. $P_r$). From now onwards, by $P_b$ (resp. $P_r$) we will mean the blue (resp. red) points below $H(p_i)$. We consider two mutually orthogonal axis-parallel lines $x=x(p_i)$ and $y=y(p_i)$ on the plane $H(p_i)$ that partition $H(p_i)$ into four quadrants. The blue points that belong to the $\theta$-th quadrant are denoted by $P_b^\theta$, and are stored consecutively in the array $P_b[i+1, \ldots, m]$. We use $m_\theta=|P_b^\theta|$. While processing the point $p_j \in P_b$ during the sweep in this phase, we use $B_{ij}^\theta$ to denote the projections of the subset of points in $P_b^\theta$ that lie between the planes $H(p_i)$ and $H(p_j)$, $\theta=1,2,3,4$. The members of $B_{ij}^\theta$ are stored in the consecutive locations of the array $P_b^\theta$ in decreasing order of their $z$-coordinates. We maintain four index variables $\chi_\theta$, $\theta=1,2,3,4$, where $\chi_\theta$ indicates the last point hit by the sweeping plane in the $\theta$-th quadrant. Thus, $p_j \in P_b\setminus (\cup_{\theta=1}^4 B_{ij}^\theta)$, and is obtained by comparing the $z$-coordinates of the points $\{P_b[\chi_\theta+1], \theta=1,2,3,4\}$; the sweep, moving downwards, next hits the one among these with the largest $z$-coordinate. We will use $R_{ij}$ to denote the projection of the points in $P_r$ lying between $H(p_i)$ and $H(p_j)$. These are stored at the beginning of the array $P_r$.
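The sweep bookkeeping just described is easy to visualize with a small (non-in-place) Python sketch; the container names are illustrative, whereas the paper realizes the same bookkeeping inside the array $P_b$ itself.

\begin{verbatim}
def quadrant(p, pi):
    """Quadrant (1..4) of the projection of p w.r.t. the two axis-parallel
    lines x = x(p_i), y = y(p_i); general position assumed for ties."""
    if p[0] >= pi[0]:
        return 1 if p[1] >= pi[1] else 4
    return 2 if p[1] >= pi[1] else 3

def sweep_order(pi, blues_below):
    """Yield (theta, p_j) in the order the downward sweep from H(p_i)
    hits them.  Points are (x, y, z) tuples."""
    Q = {t: [] for t in (1, 2, 3, 4)}
    for p in blues_below:
        Q[quadrant(p, pi)].append(p)
    for t in Q:
        Q[t].sort(key=lambda p: -p[2])   # decreasing z within each quadrant
    chi = {t: 0 for t in (1, 2, 3, 4)}   # next unprocessed index per quadrant
    while any(chi[t] < len(Q[t]) for t in Q):
        # the sweep, moving downwards, first reaches the largest remaining z
        t = max((u for u in Q if chi[u] < len(Q[u])),
                key=lambda u: Q[u][chi[u]][2])
        yield t, Q[t][chi[t]]
        chi[t] += 1
\end{verbatim}

The next point hit by the downward sweep is always the unprocessed head with the largest $z$-coordinate, which is exactly how $p_j$ is selected in Algorithm \ref{TYPE-3}.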
In each quadrant $\theta$, we define the unique maximal closest stair $\textit{STAIR}_\theta$ around $p_i$ with a subset of points of $B_{ij}^\theta$ as in \cite{cccgDeN11,NB}. The projection points of $B_{ij}^\theta$ that determine $\textit{STAIR}_\theta$ are stored at the beginning of the sub-array $P_b^\theta$ in order of their $y$-coordinates\footnote{The remaining elements ($B_{ij}^\theta \setminus \textit{STAIR}_\theta$) are stored just after $\textit{STAIR}_\theta$ in a contiguous manner in $P_b^\theta$, so that the first unprocessed element in quadrant $\theta$ is obtained at $P_b[\chi_\theta+1]$.}. Thus, $\bigcup_{\theta=1}^4 \textit{STAIR}_\theta$ forms an empty ortho-convex polygon $OP$ on $H(p_i)$ (see Figure \ref{3dtype2i}(a)). As a consequence, the problem of finding a {\it type-3} $LRC$, with top and bottom faces passing through $p_i$ and $p_j$ respectively, maps to finding an $MER$ inside this ortho-convex polygon that contains both $b_i$ and $b_j$ and the maximum number of points of $R_{ij}$. \remove{The pseudo-code of computing Algorithm $\textit{STAIR}_1$ is given in Algorithm \ref{stair1}. $\textit{STAIR}_\theta$, $\theta=2,3,4$ can be computed similarly. } \begin{figure}[ht] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=.65\linewidth]{3dtype2i.pdf} \centerline{(a)} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=.65\linewidth]{3dtypeiii.pdf} \centerline{(b)} \end{minipage} \caption{(a) Empty ortho-convex polygon around $p_i$ (b) Extracting the region in $OP$ for generating MERs with top and bottom face passing through $p_i$ and $p_j$} \label{3dtype2i} \end{figure} Thus we need to: (i) construct the in-place 2-d tree ${\cal T}_{ij}$ with the points in $R_{ij}$, (ii) compute all maximal empty rectangles in $OP$ that contain both $b_i$ and $b_j$, (iii) for each generated maximal empty rectangle ($MER$) perform the rectangular range counting query in ${\cal T}_{ij}$, and (iv) update $OP$ by inserting $b_j$ in the corresponding $\textit{STAIR}$ for processing the next blue point $p_{j+1} \in P_b$ during this phase. Tasks (i) and (iii) are performed as described in Sections \ref{preprocessing2d} and \ref{counting2d}, respectively. Task (ii) is explained in Section \ref{compMER3} (also see Algorithm \ref{TYPE-3}). Task (iv) is explained in Section \ref{OPx} (also see Algorithm \ref{updatestair3d}).
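Before detailing the in-place realization, the following brute-force Python sketch can serve as a specification of tasks (ii) and (iii) for one pair $(p_i,p_j)$: since every maximal empty rectangle has each side supported by a blue point or by the boundary, it suffices to scan grid-aligned candidate rectangles, keep those that are empty of blue points and contain both projections, and count the red projections inside. All names are illustrative; the in-place, output-sensitive computation is given in Algorithms \ref{TYPE-3} and \ref{MER2d}.

\begin{verbatim}
from itertools import product

def type3_lrc_for_pair(pi, pj, blues_between, reds_between, box):
    """Best empty rectangle on H(p_i) containing both b_i and b_j, among the
    projections B_ij of the blue points strictly between H(p_i) and H(p_j),
    maximizing the number of projected red points R_ij it covers.
    box = (xmin, xmax, ymin, ymax).  Any empty rectangle extends to a maximal
    one with sides on these coordinates, so scanning them attains the optimum."""
    bi, bj = (pi[0], pi[1]), (pj[0], pj[1])
    B = [(p[0], p[1]) for p in blues_between]
    R = [(p[0], p[1]) for p in reds_between]
    xs = sorted({box[0], box[1]} | {b[0] for b in B})
    ys = sorted({box[2], box[3]} | {b[1] for b in B})
    best, best_cnt = None, -1
    for x1, x2 in product(xs, repeat=2):
        if x1 >= x2:
            continue
        for y1, y2 in product(ys, repeat=2):
            if y1 >= y2:
                continue
            # b_i and b_j must lie on or inside the rectangle (top/bottom faces)
            if not (x1 <= bi[0] <= x2 and y1 <= bi[1] <= y2):
                continue
            if not (x1 <= bj[0] <= x2 and y1 <= bj[1] <= y2):
                continue
            # interior must be empty of blue points
            if any(x1 < b[0] < x2 and y1 < b[1] < y2 for b in B):
                continue
            cnt = sum(1 for r in R if x1 < r[0] < x2 and y1 < r[1] < y2)
            if cnt > best_cnt:
                best_cnt, best = cnt, (x1, x2, y1, y2)
    return best, best_cnt
\end{verbatim}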
\begin{algorithm} \SetKwData{maxi}{maximum} \KwIn{The arrays $P_r$ and $P_b$} \KwOut{TYPE-3 $LRC$ of maximum size} Sort the points in $P_b$ in decreasing order of their $z$-coordinates\; \For (\tcc*[f]{Compute MER$(p_i)$}){$i \leftarrow 1$ \KwTo $m$}{ Partition the points in $P_b[i+1,i+2,\ldots,m]$ into $P_b^\theta$, $\theta=1,2,3,4$\; $P_b^\theta, \theta \in \{1,2,3,4\}$ are sorted in decreasing order of their $z$-coordinates\; $m_1$,$m_2$,$m_3$,$m_4$: index of the last point in each of $P_b^\theta, \theta \in \{1,2,3,4\}$ respectively\; $\nu_1$,$\nu_2$,$\nu_3$,$\nu_4$: index of the last point in each of $STAIR_\theta, \theta \in \{1,2,3,4\}$ respectively\; $\chi_1,\chi_2,\chi_3,\chi_4$: variables to indicate the next sweep line in $P_b^\theta, \theta \in \{1,2,3,4\}$ respectively\; $\chi_1,\chi_2,\chi_3,\chi_4$ initialized with $1,m_1+1,m_2+1,m_3+1$ respectively\; $count=i$\; \While{$count\neq m$} { $count=count+1$\; $z$=\maxi$\{z(P_b[\chi_1]), z(P_b[\chi_2]),z(P_b[\chi_3]),z(P_b[\chi_4])\}$\; Let the maximum be attained for $P_b[\chi_\theta]$, in quadrant $\theta$\; Compute\_MAX\_MER($i,\chi_\theta,\theta,R_{max}$) \tcc*[r]{call Algorithm \ref{MER2d}.} \If {$|R_{max}| > size_{max}$} {$size_{max}=|R_{max}|$; $C=R_{max}$;} $\textsf{\sc Update\_Stair}_\theta(\chi_\theta)$\; $\chi_\theta=\chi_\theta+1$\; } } \caption{TYPE-3\_LRC($size_{max},C$)} \label{TYPE-3} \end{algorithm} \subsubsection{Computation of $MER(p_i,p_j)$} \label{compMER3} Without loss of generality, assume that $b_j$ (the projection of $p_j$ on the plane $H(p_i)$) is in the first quadrant. If $b_j$ is in some other quadrant, then the situation is tackled similarly. If there exists any point in $STAIR_1$ which dominates $b_j$, i.e., if there exists any blue point $p$ in $STAIR_1$ such that $x(p)< x(b_j)$ and $y(p)<y(b_j)$, then no axis-parallel $cLRC$ is possible whose top face passes through $p_i$ and bottom face passes through $p_j$. Therefore we assume that $b_j$ is not dominated by any point in $STAIR_1$. We now determine the subset of points in $\textit{STAIR}_1 \cup \textit{STAIR}_2$ that can appear on the north boundary of an $MER$ containing both $b_i$ and $b_j$. \begin{figure}[ht] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=.8\linewidth]{3dtypeiiiup.pdf} \centerline{(a)} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=.65\linewidth]{fig_stair.pdf} \centerline{(b)} \end{minipage} \caption{(a) Update of $STAIR_1$ after processing $b_j$ and (b) the corresponding array update} \label{figtype3} \end{figure} Let $\textit{STAIR}_1 = \{ b_k, k=1,2, \ldots,\nu_1 \} \subseteq B_{ij}^1$. Let $b_\alpha \in STAIR_1$ be such that $y(b_\alpha) = \max\{y(b_k) \mid b_k\in STAIR_1,~ y(b_k) < y(b_j)\}$ (i.e., the $y$-coordinate of $b_\alpha$ is maximum among all the points in $STAIR_1$ whose $y$-coordinate is less than the $y$-coordinate of $b_j$). Similarly, let $b_\beta \in STAIR_1$ be such that $y(b_\beta) = \min\{y(b_k) \mid b_k \in STAIR_1,~ x(b_k) < x(b_j)\}$ (i.e., the $y$-coordinate of $b_\beta$ is minimum among all the points in $STAIR_1$ whose $x$-coordinate is less than the $x$-coordinate of $b_j$). We define $Q=\{b_{\alpha+1}, b_{\alpha+2}, \ldots, b_{\beta-1}\}$ $=\{b_k \in \textit{STAIR}_1 \mid x(b_k) > x(b_j)~\text{and}~ y(b_k) > y(b_j)\}$ (see Figure~\ref{3dtype2i}(b)). All the axis-parallel $MER$s in $OP$ with north boundary passing through $b_k$, $k \in \{ \alpha+1,\alpha+2,\ldots,\beta \}$, and containing $b_i$ in their proper interior will also contain $b_j$.
We draw the projections of $b_j$ and $b_\beta$ on $\textit{STAIR}_2$. Let these two points be $\mu$ and $\nu$, respectively. If $x(\mu) = x(\nu)$, then no point on $\textit{STAIR}_2$ can appear on the north boundary of a desired axis-parallel $MER$. But if $x(\mu) < x(\nu)$, then all the points $p \in \textit{STAIR}_2$ satisfying $x(\mu) < x(p) < x(\nu)$ can appear on the north boundary of a desired axis-parallel $MER$. In Figure \ref{3dtype2i}(b), the set of points that can appear on the north boundary of an $MER$ are marked with empty dots. The method of computing an axis-parallel $MER$ with a point $p \in \textit{STAIR}_1 \cup \textit{STAIR}_2$ on its north boundary is given in Algorithm \ref{MER2d}. \begin{algorithm} \SetKwInOut{Kw}{Work-Area} \SetKwData{index}{index-of} \SetKwData{maxi}{maximum}\SetKwData{mini}{minimum} \SetKwData{size}{size} \small{ \KwIn{$STAIR_1$ = $B[1,2,\ldots,\nu_1]$, $STAIR_2$ = $B[m_1+1,m_1+2,\ldots,\nu_2]$, $STAIR_3$ = $B[m_2+1,m_2+2,\ldots,\nu_3]$, $STAIR_4$ = $B[m_3+1,m_3+2,\ldots,\nu_4]$, where the array $B=P_b[i+1, \ldots, m]$; $m_\theta$ = number of points of $B$ in $\theta$-th quadrant\; } \Kw{$M$: location to compute the size of the axis-parallel $MER$ containing $p_i,p_j$; $R$: stores the $(north,south,east,west)$ sides of a rectangle \tcc*{$b_i$: projection of $p_i$} } \KwOut{$R_{max}$ \tcc*[r]{red rectangle containing maximum red points} } $MAX\_size = 0$\; $\alpha$ = \index {\mini{$y(B[k])$}}: $\forall ~k \in \{m_1+1,\ldots,\nu_2\}$ and $y(B[k])>y(B[j])$\; $\beta$ = \index {\maxi{$y(B[k])$}}: $\forall ~k \in \{m_1+1,\ldots,\nu_2\}$ and $x(B[k])>x(B[j])$; $\beta=\beta+1$\; $\mu$ = \index {\mini{$y(B[k])$}}: $\forall ~k \in \{m_2+1,\ldots,\nu_3\}$ and $y(B[k])>y(B[j])$\; $\nu$ = \index {\maxi{$y(B[k])$}}: $\forall ~k \in \{m_2+1,\ldots,\nu_3\}$ and $y(B[k])<y(B[\beta])$\; \For(\tcc*[f]{Call MER with the feasible points of $STAIR_1$ as top boundary }){$k \leftarrow \alpha$ \KwTo $\beta$}{ $north=B[k]$; $east=B[k-1]$\; $\theta=$ \index {\maxi{$y(B[\ell])$}}: $\forall ~\ell \in \{m_1+1,\ldots,\nu_2\}$ and $y(B[\ell])<y(B[k])$\; $\psi=$ \index {\maxi{$x(B[\ell])$}}: $\forall ~\ell \in \{m_3+1,\ldots,\nu_4\}$ and $x(B[\ell])<x(B[k])$\; $\phi=$ \index {\maxi{$y(B[\ell])$}}: $\forall ~\ell \in \{m_2+1,\ldots,\nu_3\}$ and $x(B[\ell])<x(B[\theta])$\; $\phi'=$ \index {\mini{$y(B[\ell])$}}: $\forall ~\ell \in \{m_2+1,\ldots,\nu_3\}$ and $y(B[\ell])>y(B[\psi])$\; \If(\tcc*[f] {Only one MER is possible}){$\phi'>\phi$} {$west =B[\theta]$; $south=B[\psi]$; $R = (north,east, south,west)$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } \If(\tcc*[f] {Multiple $(\geq 2)$ MERs are possible}){$\phi'\leq \phi$} { $south=B[\phi']$\; \For{$\ell=\phi'$ \KwTo $\phi$}{ $west=B[\ell]$; $R=(north,east,south,west)$; $south=B[\ell]$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } $west=B[\alpha]$; $R=(north,east,south,west)$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } } \For(\tcc*[f]{Call MER with the feasible points of $STAIR_2$ as top boundary }){$k \leftarrow \mu$ \KwTo $\nu$}{ $west=B[k-1]$; $north=B[k]$; $east=B[\theta']$\; $\theta_1$ =
\index{\maxi{$y(B[\ell])$}}: $\forall ~\ell \in \{\theta, \ldots, \nu_1\}$ and $y(B[\ell]) < y(B[\mu])$\; $\psi_1=$ \index {\maxi{$x(B[\ell])$}}: $\forall ~\ell \in \{m_2+1,\ldots,\nu_3\}$ and $x(B[\ell])>x(B[k-1])$\; $\phi_1=$ \index {\maxi{$y(B[\ell])$}}: $\forall ~\ell \in \{m_3+1,\ldots,\nu_4\}$ and $x(B[\ell])<x(B[\theta_1])$\; $\phi_2=$ \index {\mini{$y(B[\ell])$}}: $\forall ~\ell \in \{m_2+1,\ldots,\nu_3\}$ and $y(B[\ell])>y(B[\psi_1])$\; \If(\tcc*[f] {Only one MER is possible}){$\phi_2>\phi_1$} {$east =B[\theta_1]$; $south=B[\psi_1]$; $R = (north,east, south,west)$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } \If(\tcc*[f] {Multiple $(\geq 2)$ MERs are possible}){$\phi_2\leq \phi_1$}{ $south=B[\phi_2]$\; \For{$\ell=\phi_2$ \KwTo $\phi_1$}{ $west=B[\ell]$; $R=(north,east,south,west)$; $south=B[\ell]$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } $west=B[\alpha_1]$; $R=(north,east,south,west)$\; \size=\textsf{\sc In-Place-Rectangular-Counting-Query-2d-Tree}(${\cal T}_i,R$)\; {\bf if} {\size $>$ MAX\_size} {\bf then} {MAX\_size $\leftarrow$ \size; $R_{max} \leftarrow R$\;} } } } \caption{Compute\_MAX\_MER($i,j,\theta,R_{max}$)} \label{MER2d} \end{algorithm} \subsubsection{Updating $OP$} \label{OPx} After computing the set of axis-parallel $MER$s in $OP$ containing both the projected points $b_i$ and $b_j$ in their interior, instead of recomputing the whole ortho-convex polygon to process the next point $p_{j+1} \in P_b$, we update $OP$ by inserting $b_j$ into the respective $STAIR$ (see Figure \ref{figtype3}(a)). Without loss of generality, assume that $b_j$ lies in the first quadrant. After inserting $b_j$ in $STAIR_1$, none of the points in $Q \subseteq STAIR_1$ will participate in forming an $MER$ while processing points $p_k \in P_b$ with $z(p_k)<z(p_j)$. So, we need to remove the members of $Q$ from $\textit{STAIR}_1$. This can be done by using the algorithm for stable sorting \cite{KP92}, where the elements in $Q$ assume the value 1 of the given (0, 1)-valued selection function $f$, and stably move to the end of $\textit{STAIR}_1$. A simple procedure for this task is given in \cite{BMM07} in the context of stably selecting a sorted subset. We tailored that procedure for our purpose as follows. We maintain two index variables $\alpha$ and $\beta$; $\alpha +1$ and $\beta -1$ indicate the starting and ending positions of $Q$, respectively. Now, two cases may arise depending on whether $|Q|=0$ or not. \begin{description} \item[$|Q| \neq 0$]: See Case 1 of Figure \ref{figtype3}(b). Here, we need to remove $Q$ from $STAIR_1$ and appropriately insert $b_j$ into the stair. We do this as follows: \begin{itemize} \item First, by swapping $b_j$ and $b_{\alpha +1}$, we insert $b_j$ in the proper position. \item Now, we need to move $b_{\alpha+1}, \ldots, b_{\beta-1}$ out of $STAIR_1$. This can be done by a sequence of swap operations: swap($P[r], P[r-(\beta-\alpha-2)]$), starting from $r=\beta$ until $r=\nu_1$, where $\nu_1$ denotes the end of $STAIR_1$. \item Finally, we set $\nu_1$ as $\nu_1-(\beta-\alpha-2)$. \end{itemize} \item[$|Q| = 0$]: See Case 2 of Figure \ref{figtype3}(b). Here, we need only to insert $b_j$ into the stair.
We do this by first swapping $(P[\nu_1+1],P[j])$ and then performing a sequence of swaps ($P[r], P[r+1]$), starting from $r=\nu_1$ down to $r=\beta$. Finally, we set $\nu_1$ as $\nu_1+1$. \end{description} Clearly, this update of $OP$ needs $O(|P_b^1|)$ time in the worst case. \begin{algorithm} \SetKwData{index}{index-of} \SetKwData{maxi}{maximum}\SetKwData{mini}{minimum} \KwIn{$STAIR_1$ corresponding to $p_i$, the projection $b_j$} \KwOut{updated $STAIR_1$} $\alpha=$ \index {\maxi{$y(b_k)$}}: $\forall ~k \in STAIR_1$ and $y(b_k)<y(b_j)$\; $\beta=$ \index {\mini{$y(b_k)$}}: $\forall ~k \in STAIR_1$ and $x(b_k)<x(b_j)$\; \If{$(\beta- \alpha) > 1 $} {swap($P[j], P[\alpha +1]$)\; $k= \beta - \alpha-2$\; \For{$r\leftarrow \beta$ \KwTo $\nu_1$} {swap($P[r], P[r-k]$)\; } $\nu_1= \nu_1 -k$\; } \Else{ swap($P[\nu_1+1],P[j]$)\; \For{$r \leftarrow \nu_1$ \KwTo $\beta$} {swap($P[r], P[r+1]$)\; } $\nu_1= \nu_1 +1$\; } \caption{ $\textsf{\sc Update\_Stair}_1$($j$)} \label{updatestair3d} \end{algorithm} After computing the largest {\it type-3} axis-parallel $LRC$ with $p_i$ on its top boundary, we need to sort the points again with respect to their $z$-coordinates for the processing of $p_{i+1}$. \remove{ \begin{lemma}\label{count3} The number of {\em type-3} $cLRC$ is $O(m^3)$ in the worst case. \end{lemma} \begin{proof} Let us consider the $cLRC$s' with top and bottom faces passing through $p_i,p_j$ ($\in P_b$) respectively. Surely, the other four sides of each of these $cLRC$s' are defined by the points $p_k \in P_b$, $k \neq i,j$ lying inside the horizontal slab defined by $H(p_i)$ and $H(p_j)$. Let $B_{ij}$ be the projections of these points on $H(p_i)$. Observe that, (i) each $cLRC$, if exists, corresponds to an MER on $H(p_i)$ among the points $B_{ij}$, and (ii) if there exists any point of $B_{ij}$ in the region bounded by the lines $X=x(p_i)$, $X=x(p_j)$, $Y=y(p_i)$, $Y=y(p_j)$, then no $cLRC$ exists with $p_i$ and $p_j$ on its top and bottom boundaries respectively. As earlier (Section \ref{compMER3}), consider the $STAIR$s constructed by the projections of the points in $B_{ij}$ on the plane $H(p_i)$. Consider the shaded region of $OP$ in Figure \ref{count3x} that contains both the points $b_i,b_j$ in its interior. Each MER in this region contributes a $CLRC$. Let $b_j$ lies in the first quadrant defined by $b_i$; $b_{\phi_1}, b_{\phi_2}$ be a pair of consecutive points in $STAIR_1$ that generates $\eta(b_{\phi_1}, b_{\phi_2})$ number of $MER$s (as described in Section \ref{compMER3}), which can be at most $j-i = O(m)$. After processing $p_j$, we consider $B_{i(j+1)} = B_{ij} \cup \{b_j\}$, and there exists no MER with its one corner defined by $b_{\phi_1}, b_{\phi_2}$. Thus, if $b_{\phi_1}, b_{\phi_2}, \ldots, b_{\phi_k})$ are the consecutive points on $STAIR_1$ such that $(b_{\phi_k}, b_{\phi_{k+1}})$ defines MER(s) in $OP$, then after processing $p_j$, the corners of $STAIR_1$ defined by the points $b_{\phi_1}, b_{\phi_2}, \ldots, b_{\phi_k})$ can be deleted as described in Subsection \ref{OPx}. During the processing of $p_i$ (defining the top boundary), each points $p_j$ in quadrant 1 generates at most 2 corners in $STAIR_1$. Thus at most $O(m)$ corners are generated in $STAIR_1$, and they can generate at most $O(m^2)$ MERs. The same argument holds for other $STAIR$s also. Considering all the points $p_i, i=1,2,\ldots, m$ defining the top boundary, the result follows.
\end{proof} \begin{figure}[t] \vspace{-0.1in} \centering \includegraphics[scale=0.5]{3d.pdf}\vspace{-0.1in} \caption{Computation of {\it type-3} $LRC$ }\vspace{-0.15in} \label{count3x} \end{figure} } Thus we have the following result: \begin{lemma} \label{res-type3} The time required for processing $p_i$ is $O(m^2+C_i'\sqrt{n} +mn\log n)$ in the worst case, where $C_i'$ is the number of {\em type-3} axis-parallel $LRC$s with $p_i$ on its top boundary. \end{lemma} \begin{proof} The worst case time required for computing $MER(p_i,p_j)$ is $O(|P_{ij}|+C_{ij})$, where $P_{ij}$ denotes the number of points inside the horizontal slab bounded by $H(p_i)$ and $H(p_j)$, and $C_{ij}$ denotes the number of axis-parallel $MER$s containing both $b_i$ and $b_j$ inside $OP$ with the projection of points $B_{ij}$ on $H(p_i)$. In order to compute the largest {\em type-3} axis-parallel $LRC$ with $p_i$ on its top boundary, we need to compute $MER(p_i,p_j)$ for all $j>i$, $C_i'=\displaystyle\sum_{j=i+1}^m C_{ij}$, and $\displaystyle\sum_{j=i+1}^m |P_{ij}| =O((m-i)^2)$. For each of the $C_i'$ cuboids, the in-place counting query in the corresponding ${\cal T}_{ij}$ requires $O(\sqrt{n})$ time (using Lemma \ref{count}). The last part of the time complexity follows from the fact that, for every point $p_j$, $j=i+1,\ldots,m$, we need to construct the in-place 2-d tree. Also, after the processing of each $p_i\in P_b$, the sorting step takes $O(n\log n)$ time. \end{proof} Lemmas \ref{res-type1}, \ref{res-type2} and \ref{res-type3} lead to the following result. \begin{theorem} The worst case time complexity of our in-place algorithm for computing the axis-parallel largest monochromatic cuboid ($LMC$) is $O(m^3\sqrt{n}+m^2n\log n)$, and it takes $O(1)$ extra space. \end{theorem} \subsection*{Acknowledgment:} The authors acknowledge the valuable constructive suggestions given by the reviewer regarding the presentation of the paper.
\section{Introduction} All stars later than F5 possess convective zones that drive hot coronae heated to 1--10 MK. From this standpoint, the Sun has a moderately heated corona (1--3 MK) extending from the transition zone to a few solar radii. The heated solar corona is observed in the soft X-ray (SXR) and EUV bands, and its radiation plays a critical role in controlling the thermodynamics and chemistry of the Earth's upper atmosphere (Meier 1991). The corona’s variable radiative output is associated with flares and coronal mass ejections that affect space weather and, eventually, life on Earth. Variations in the radiation affect radio signal propagation and satellite drag, thereby impacting communication, navigation, surveillance, and space debris collision avoidance. Predicting the spectral irradiance from the global Sun is therefore a major goal of the national space weather program. Having this capability requires an understanding of the puzzling physical mechanism that heats the outermost part of the solar atmosphere, the solar corona, to multi-million degree temperatures. Stellar SXR observations have revealed that coronal heating processes are not unique to the Sun, but are common in magnetically active stars. Therefore, understanding the origin of high-temperature plasma in solar/stellar coronal environments is one of the fundamental problems of solar physics and stellar astrophysics. While stellar observations show a large variety of coronal environments characterized by up to four orders of magnitude larger heating rates (for example on RS CVn stars or coronal giants), higher spatial and spectral resolution EUV/SXR observations of the solar corona provide the critical data for resolving this puzzle. Specifically, first SXR Yohkoh and later SOHO observations of the global Sun have revealed that the solar corona represents a highly inhomogeneous environment filled with plasma frozen into magnetic structures of two basic configurations: open and closed. Magnetically open structures extend from the solar photosphere into the heliosphere, while closed structures appear as loop-like structures filled with relatively dense (10$^9$ cm$^{-3}$) and hot (a few MK) plasma emitting in EUV lines of highly ionized metals. While the quiet-Sun regions are associated with weak magnetic fields (a few Gauss), EUV/SXR emitting plasma in active regions (AR) is formed in magnetic structures that can be traced back to strong (over 1 kG) surface magnetic fields. The strongest magnetic field in ARs is usually associated with hotter ($>$ 5 MK) and denser plasma, which is observed as higher contrast in AIA and SXR images, while regions with weaker fields show signatures of cooler plasma. This association clearly relates the problem of coronal heating to the energy stored and released in the solar coronal magnetic field. Energy is likely supplied to the magnetic field from the mechanical energy of photospheric convective motions. The coronal loops observed in the AR core are usually shorter and denser, with higher temperatures, and are associated with stronger magnetic fields. The footpoints of core loops are observed in EUV structures called "moss" (Fletcher \& De Pontieu 1999; De Pontieu et al. 2013). Studies of the temperature evolution of AR coronal loops over time suggested that their emission in EUV results from impulsive heating events occurring at sub-resolution scales (or strands), and ignited a new heating scenario of coronal loops through "nanoflare storms" (Klimchuk 2006).
The recent evidence in favor of impulsive heating in coronal loops comes from observations of the time lag between emission peaks in high-temperature lines and in cooler lines, suggesting that these loops can be explained by so-called long nanoflare storms occurring in many strands within a coronal loop (Klimchuk 2009; Viall \& Klimchuk 2012). Recent high spatial resolution SDO and the latest High-resolution Coronal Imager (Hi-C) observations of one active region imply that a magnetic loop is not a monolithic structure, but consists of many (possibly hundreds) of unresolved "strands," with the fundamental flux tubes thinner than 15 km (Peter et al. 2013; Brooks et al. 2013). Moreover, the nanoflare scenario was further specified from analysis of cool, dense and dynamic loops observed by Hi-C in lower parts of coronal loops (Winebarger et al. 2013). Two leading theories provide an explanation for how “nanoflares” release magnetic energy in the corona. Magnetic energy dissipated in coronal loops is supplied by the photospheric convection either in the form of upward-propagating MHD waves (Asgari-Targhi \& van Ballegooijen 2012) or through the formation of current sheets driven by twisting and braiding of coronal field lines forming a nanoflare storm (Parker 1988). In either of these proposed scenarios, energy can be dissipated at small scales on a single "strand" (a flux tube) in a series of transient heating events. Two important questions are: What is the time scale between two successive "nanoflares" (or the frequency of nanoflares) within an AR coronal loop? To what extent are waves or current sheets responsible for nanoflare heating? These two theories predict distinctive scaling laws of the heating rates with the magnetic field and characteristic spatial scales of coronal loops (Mandrini et al. 2000). All coronal loop models presented to date can be divided into three categories. Early models of equilibrium loops by Rosner et al. (1978) and Craig et al. (1978) treated loops as symmetric, semi-circular monolithic structures with uniform cross section in static equilibrium. These and later studies of individual loops were successful in explaining many signatures of SXR and EUV loops (Porter \& Klimchuk 1995; Cargill \& Priest 1980; Aschwanden \& Schrijver 2002; Winebarger et al. 2003; Reep et al. 2013). This approach is useful in studying the detailed response of individual loops to different heating scenarios; however, it is difficult to compare them directly to observations of active regions with collections of loops "contaminated" by selection and line of sight (LOS) effects. Another approach is to construct three-dimensional MHD models of an active region that accommodate the above-mentioned effects (Lionello et al. 2005; Gudiksen \& Nordlund 2005; Bourdin et al. 2013). These models are extremely useful in understanding the general geometry and dynamics of magnetic structures and can be directly compared to observations. However, they are computationally expensive, especially when it comes to resolving physically important scales, as well as in treating thermal conduction at small scales in individual loops. The third class of emerging models incorporates the advantages of individual loop models with geometry and LOS effects. This class includes forward models of active regions (Lundquist, Fisher \& McTiernan 2008a; 2008b; Patsourakos \& Klimchuk 2008; Airapetian \& Klimchuk 2009).
Airapetian \& Klimchuk (2009) have developed a new class of impulsive coronal heating models that are based on introducing magnetic field extrapolations of active regions using HMI/SDO magnetograms. They make use of the "0D" HD code, EBTEL, which provides a computationally fast way to derive loop-averaged temperature and density and construct 2D synthetic images of an active region driven by nanoflare storms. However, that model assumed a uniform cross section of the modeled loops as well as uniform heating along each loop. In the current paper, we have significantly expanded the capabilities of forward models of active regions to construct realistic synthetic images of individual ARs and the global Sun by applying our state-of-the-art fully non-linear 1D hydrodynamic code. First, we developed a fundamentally new class of active region models based on parametrically specified impulsive heating of individual strands (flux tubes) comprising coronal loops. We begin with constructing a "magnetic skeleton" of an active region using the most sophisticated methods to extrapolate Non-Linear Force-Free coronal magnetic fields (NLFFF) from high resolution vector HMI/SDO and SOLIS observations (Tadesse et al. 2013). We then study how the entire active region (with LOS projection effects) responds to the heating function (volumetric heating rate) scaled with magnetic field and spatial scale parameters, and find the best match between synthetic and actual (reconstructed) DEMs obtained by SDO. \section{The Magnetic Skeleton of The Solar Coronal Active Region, AR 11117} To construct synthetic EM images of specific ARs in the EUV and SXR bands, we first need to construct a 3D equilibrium magnetic loop model of an entire AR, or the "magnetic skeleton" of an active region. The magnetic skeleton in the solar corona can be realistically constructed by using SDO/HMI vector magnetograms and extrapolating them into the inner solar corona. Reliable magnetic field measurements are still restricted to the level of the photosphere, where the inverse Zeeman effect in Fraunhofer lines is observable. As an alternative to measurements in these super-photospheric layers, we must rely on numerical computations of the field, known as extrapolations (Amari et al., 2006), that use the observed photospheric magnetic field vector as a boundary condition. These numerical computations can be carried out using potential field, force-free field, or magneto-hydrodynamic (MHD) models. Force-free models do include electric currents, and so they can include free magnetic energy. Force-free models make the simplifying assumption that these currents are field-aligned. A force-free model gives a static representation of the state of the solar corona at a given instant. This is a good approximation in the low-$\beta$ corona because the vanishing Lorentz force does not allow currents perpendicular to the magnetic field. By applying a force-free model to a time sequence of magnetograms we can study the changes in magnetic configuration that result from a flare or eruption. \begin{figure}[h!] \includegraphics*[width=\linewidth]{Fig1.png} \caption{Tracing of the magnetic field lines for the global Sun using our NLFFF extrapolation algorithm (Tadesse et al. 2013).} \end{figure} In nonlinear force-free field (NLFFF) models, there are no forces in the plasma which can effectively balance the Lorentz force, $\vec{J} \times \vec{B}$ (where $\vec{J}$ and $\vec{B}$ have the standard definitions of current density and magnetic field, respectively).
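For orientation, the simplest member of this model hierarchy, a potential (current-free, $\alpha=0$) field, can be extrapolated in closed form by the standard Fourier method: each horizontal Fourier mode of $B_z$ decays with height as $e^{-|k|z}$. The Python sketch below illustrates this baseline; the grid, pixel size, and test bipole are illustrative assumptions, and the NLFFF optimization actually used in this work is considerably more involved.

\begin{verbatim}
import numpy as np

def potential_bz(bz0, dx, heights):
    """Extrapolate B_z(x, y, z) from B_z(x, y, 0) assuming a potential field.
    Each Fourier mode decays as exp(-|k| z): one FFT, a mode-by-mode
    attenuation, and an inverse FFT per requested height."""
    ny, nx = bz0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)      # angular wavenumbers
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    fz0 = np.fft.fft2(bz0)
    return [np.real(np.fft.ifft2(fz0 * np.exp(-kmag * z))) for z in heights]

# Example: a bipolar "active region" on a 128x128 grid with 1 Mm pixels.
x, y = np.meshgrid(np.arange(128), np.arange(128))
bz0 = np.exp(-((x - 54)**2 + (y - 64)**2) / 50.0) \
    - np.exp(-((x - 74)**2 + (y - 64)**2) / 50.0)    # +/- polarity pair
layers = potential_bz(bz0, dx=1.0, heights=[0.0, 5.0, 10.0])   # Mm
\end{verbatim}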
In contrast to potential-field models, NLFFF extrapolation is a realistic way to model the non-potential coronal fields in active regions. \begin{figure}[h!] \includegraphics*[width=\linewidth]{Fig2.png} \caption{SDO image of AR 11117 and its magnetic skeleton used for populating individual strands with plasma.} \end{figure} We use an optimization procedure to calculate 3-D magnetic field solutions into the corona from the photospheric boundary. We implement Cartesian or spherical geometry depending on the size of the region of interest. We have developed IDL tools which help us trace the magnetic field. To describe the equilibrium structure of the static coronal magnetic field, the force-free assumption is appropriate: \begin{equation} \vec{\nabla} \times \vec{B} = \alpha \vec{B} \end{equation} \begin{equation} \vec{\nabla} \cdot \vec{B} = 0 \end{equation} subject to the boundary condition $B = B_{obs}$ on the photosphere, where $B$ is the magnetic field and $B_{obs}$ is the measured vector field on the photosphere. Using the three components of $B$ as a boundary condition requires consistent magnetograms, as outlined in Aly (1989). The photospheric vector magnetograms, obtained by the Synoptic Optical Long-term Investigations of the Sun survey (SOLIS)/Vector Spectromagnetograph (VSM) or by HMI/SDO, are used as the boundary conditions. However, such measured data are generally inconsistent with the above force-free assumption. Therefore, one has to apply some transformations to these data before nonlinear force-free extrapolation codes can be applied. This procedure is known as preprocessing. The preprocessing scheme modifies the boundary data so that they are consistent with the necessary conditions for a force-free field, namely so that integrals representing the net force and torque on the coronal volume above the photosphere are closer to zero (Wiegelmann et al. 2006; Tadesse et al. 2009). We solve the force-free equations using an optimization principle (Wheatland et al. 2000; Wiegelmann 2004) in spherical geometry (Wiegelmann 2007; Tadesse et al. 2009, 2012, 2013). For our test calculations, we have selected AR 11117, observed by SDO on Oct 26, 2010 at 04:00 UT. The image in 171 Å is presented in Figure 2. Using the described technique, we have constructed a "magnetic skeleton" of the active region containing over 12,000 strands. We then imposed a background heating rate in each strand and evolved it using time-dependent hydrodynamics until it reached equilibrium. Coronal loops are treated as bundles of magnetic field lines (or elementary flux tubes) that expand into the corona but are rooted in the solar photosphere. Their lengths are much greater than their widths and their orientation is along the direction of the magnetic field. The expansion factor (cross section) of each individual flux tube is controlled by the condition of magnetic flux conservation along the tube, specified at the photosphere from the local magnetic field derived from a magnetogram and the minimum size of the magnetic element resolved by HMI observations, $\sim$350 km. \section{ARC7: 1-D Hydrodynamic Model of the AR 11117} Once the magnetic skeleton of the active region is constructed, we populate each strand of the active region with an initial atmospheric state. To do this we apply uniform background heating, $E_{bg}$, that provides a temperature of 0.5 MK. This allows density and temperature to reach a steady-state equilibrium.
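A minimal sketch of this initialization, assuming the standard RTV scaling laws (Rosner et al. 1978) that we also use below to initialize the loops, inverts $T_{max} \simeq 1.4\times10^3 (pL)^{1/3}$ and $E_H \simeq 9.8\times10^4\, p^{7/6} L^{-5/6}$ (cgs units, with $L$ the loop half-length) to obtain the base pressure and background heating rate for a strand of a given length; the helper name is illustrative, not the actual ARC7 interface.

\begin{verbatim}
def rtv_background_state(half_length_cm, t_max_K=5.0e5):
    """Pressure [dyn cm^-2] and background heating rate [erg cm^-3 s^-1]
    that keep a loop of the given half-length at the target apex temperature,
    under the RTV scaling laws (cgs units)."""
    p = (t_max_K / 1.4e3) ** 3 / half_length_cm        # from T_max = 1.4e3 (pL)^(1/3)
    e_bg = 9.8e4 * p ** (7.0 / 6.0) * half_length_cm ** (-5.0 / 6.0)
    return p, e_bg

# Example: a 58 Mm loop (half-length 29 Mm = 2.9e9 cm) held at 0.5 MK
p0, e_bg = rtv_background_state(2.9e9)   # ~1.6e-2 dyn cm^-2, ~1e-5 erg cm^-3 s^-1
\end{verbatim}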
To simulate the thermodynamics in each coronal loop driven by a storm of impulsive "nanoflare" events, we use a time-dependent heating rate applied to each strand. The heating rate we use has a general form allowing us to model energy release due to a number of different physical mechanisms. The time dependence of the impulsive heating from each nanoflare was modeled as a triangular pulse, with a maximum value given by \begin{equation} \centering E_H = E_{bg} + g(t) E_0 \end{equation} After heating has been applied for a specified duration, we continue to simulate the strands as they cool. The duration of each pulse, as well as the number of heating pulses applied and the cooling time, are also free parameters. Varying these allows us to study the frequency with which nanoflare heating occurs. The physical size of a grid cell varies with the loop length, with a grid resolution of a few tens of km at the loop base. The heating function, $E_0$, is scaled with the local value of the magnetic field within each {\it n}th cell as $B_{n}^{\alpha}$ and with the physical extent of the cell as $\Delta s_{n}^{\beta}$. Therefore, in each cell the local heating function is defined as \begin{equation} \centering E_0=\epsilon~B_{n}^{\alpha}~\Delta{s}_{n}^{\beta} \end{equation} In the low solar corona, the magnetic forces dominate over gas pressure. In this regime, plasma is constrained to flow along magnetic field lines and the magnetic field remains static over the time scales which we simulated. Thus, the full 3D MHD equations can be well-approximated by one-dimensional hydrodynamics, with that dimension being the axis of the magnetic field lines. To model solar coronal loop dynamics, we solve the 1D hydrodynamic equations using a modified form of the ARC7 code (Allred \& MacNeice 2012). ARC7 was created to solve the equations of MHD in 2.5D geometry. It solves the equations of MHD explicitly using a flux-corrected transport algorithm that is second-order accurate in time and space. A radiative loss term is included in the energy conservation equation. This term is proportional to $n_{e}^2~{\Lambda}(T)$, where $n_e$ is the electron number density and ${\Lambda}(T)$ is the radiative loss function, obtained from the CHIANTI package (Dere et al. 2009). Field-aligned thermal conduction is included in the energy conservation equation and is assumed to have the classical Spitzer formulation. However, during our impulsive heating simulations temperature gradients can occasionally become large enough that the Spitzer formula predicts fluxes which would exceed the free electron streaming rate. This is unphysical, so we cap the heat flux at the free-streaming rate. In order to capture the effect of the expansion of the magnetic field from the footpoints into the corona, we scale the cross-sectional area of ARC7’s grid cells so that magnetic flux is conserved. At the loop boundaries (i.e., footpoints), we have implemented a non-reflecting boundary condition so that waves can pass through. The boundary of our loops is held at a temperature of 20,000 K and starts with sufficient mass density so that material can be evaporated into the corona in response to impulsive heating without significantly changing the boundary density. We perform an impulsive heating simulation using the following algorithm. An initial background heating rate is specified to obtain an equilibrium temperature of $\sim$0.5 MK. We use the RTV scaling laws (Rosner et al.
1978) to set up a starting atmospheric state within loops, depending on the background heating rate and loop length. We allow ARC7 to evolve the loop until it reaches equilibrium. We then turn on the impulsive heating term, which linearly ramps the heating function up until it reaches a maximum value and then linearly ramps it down over a time $\Delta t$. The maximum value of the heating function is assumed to have the form $Q_0~ B^{\alpha}~{\Delta}s^{\beta}$ (cf. the equation for $E_0$ above), where $Q_0$ is a coefficient, $B$ is the magnetic field strength and $\Delta s$ is the length of the element along a flux tube. We also specify $n$, ${\Delta}t_{int}$, and ${\Delta}t_{cool}$, where $n$ is the number of heating pulses we applied during the simulation, ${\Delta}t_{int}$ is the time interval between heating pulses and ${\Delta}t_{cool}$ is the time we allow the loop to cool after the impulsive heating has been applied. We have chosen to use ARC7 because of its high-speed performance. As noted, ARC7 is a 2.5D MHD code. Our proposed method requires hydrodynamics in only one spatial dimension, because plasma is frozen into the magnetic field in the low-$\beta$ lower corona. We have simplified ARC7 to take advantage of these assumptions, which results in a vast improvement in performance. Using a standard single-processor computer, we can model the evolution of a single loop in response to impulsive heating in a few seconds. Our model active regions have on the order of $10^4$ individual strands. We performed these strand simulations in parallel using 100 processors simultaneously on NASA’s Pleiades supercomputer and completed the simulations over an entire active region in about an hour. To reproduce the magnetic structure of the active region, we have used 12,800 individual strands and ran individual trains of nanoflares (low-frequency events) on each of them. We selected the duration of a nanoflare as $\Delta t = 200$ s, with the time interval between two successive events $\tau = 200$ s. For this example we have used $\alpha$ = 2 and $\beta$ = 2. \begin{figure}[h!] \includegraphics*[width=\linewidth]{Fig3.png} \caption{Distribution of plasma temperature and density along a typical loop with a length of 58 Mm during a train of 5 nanoflares within one flux tube of the coronal loop, for the case of background heating only (dashed line) and for the case of nanoflare heating (solid line) at its peak.} \end{figure} \noindent In the selected extrapolation model of the active region, the loop lengths vary between 5 Mm and 200 Mm. We ran simulations with 5 consecutive pulses followed by another 5000 s of cooling time. Figure 3 shows the temperature and density at the flare peak in a single strand compared with the background temperature and density. The temporal evolution of the peak of that strand is shown in Figure 4. \begin{figure}[h!] \includegraphics*[width=\linewidth]{Fig4.png} \caption{Temporal evolution of the peak plasma temperature and density in the same 58 Mm loop during the train of 5 nanoflares.} \end{figure} \section{Synthetic DEM Images of the AR 11117} We combined the results of all of our HD simulations to form a 2D picture of the DEM for that active region. We calculated the DEM in each grid cell of each strand using our 1D simulations. We then averaged these DEMs in time over the duration of the simulations.
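Schematically, the per-pixel DEM construction amounts to an $n_e^2\,ds$-weighted histogram in $\log T$, averaged over snapshots; the following Python sketch shows the idea with illustrative array names (in the actual pipeline the cells come from the 1D strand solutions projected onto HMI pixels).

\begin{verbatim}
import numpy as np

def dem_los(ne, T, ds, logT_edges):
    """Differential emission measure along one line of sight
    [cm^-5 per dex]: ne, T, ds are per-cell electron density [cm^-3],
    temperature [K], and path length [cm] of the intersected cells."""
    ne, T, ds = map(np.asarray, (ne, T, ds))
    em = ne**2 * ds                                   # per-cell emission measure
    dem, _ = np.histogram(np.log10(T), bins=logT_edges, weights=em)
    return dem / np.diff(logT_edges)                  # divide by bin width in dex

def time_averaged_dem(snapshots, logT_edges):
    """snapshots: iterable of (ne, T, ds) triples, one per output time,
    mimicking randomly phased, independent nanoflare trains."""
    dems = [dem_los(ne, T, ds, logT_edges) for ne, T, ds in snapshots]
    return np.mean(dems, axis=0)
\end{verbatim}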
This time-averaging captures the assumption that the impulsive heating events occur at random intervals and are independent of each other. These time-averaged DEMs were projected along the line of sight back onto HMI pixels, forming a 2D representation of the temperature and density structure of that AR. Once the DEM is known, we calculated the optically-thin radiation spectrum, $I(\lambda)$, using the most recent version of the CHIANTI atomic database package (Dere et al. 2009). \begin{figure}[h!] \centering \includegraphics*[width=\linewidth]{Fig5.png} \caption{Left panel: DEM for AR 11117 reconstructed from AIA images (see the text). Right panel: time-averaged DEM for AR 11117 from our 1D HD model, projected along the line of sight onto HMI pixels.} \end{figure} The right panel of Figure 5 shows an example of the DEM constructed from our simulations. Our model results can be compared with observations in two ways. First, we can convolve our DEM with the AIA filter passbands to produce synthetic images which can be compared directly with AIA images. We can also construct a DEM from AIA images and compare that directly with our simulated DEM. Developing methods for constructing DEMs from AIA images is a very active topic of research. We have used the tool developed by Hannah \& Kontar (2012). This tool uses a regularized inversion method and has the advantage that it provides uncertainties in both the DEM and temperature (i.e., it provides both horizontal and vertical error estimates). The AIA images were obtained and processed using SolarSoftWare (SSW) IDL packages. We downloaded level 1 AIA images for all passbands for the time interval over which our HMI magnetogram was observed using the SSW routines vso$\textunderscore$search and vso$\textunderscore$get. Next, we converted them to level 1.5 and co-aligned them with the HMI magnetogram using the aia$\textunderscore$prep function. Finally, we ran the DEM construction program data2dem$\textunderscore$reg provided by Hannah \& Kontar (2012) on all pixels that were modeled in our simulations. The left panel of Figure 5 shows this DEM reconstruction at a temperature of log T = 6.5. Our simulated DEM distribution (Figure 6) will be compared with this observationally derived DEM for the active region in the near future. \begin{figure}[h!] \centering \includegraphics*[width=\linewidth]{Fig6.png} \caption{Simulated Differential Emission Measure distribution for the entire coronal active region AR 11117.} \end{figure} \section{Conclusions} We have constructed the first realistic synthetic EM images of the entire coronal active region AR 11117, driven by a storm of nanoflares. Each nanoflare event was modeled using our 1D fully non-linear time-dependent single-fluid hydrodynamic code. We simulated the response of the entire active region to a storm of nanoflares specified by an impulsive (time-dependent) heating function occurring on over 12,000 strands within the active region. The heating function is scaled with the magnetic field and spatial scale parameters with $\alpha$=2, $\beta$=2 power indices. The reconstructed DEM for this AR will be compared with the observationally derived DEM for the active region in the near future. We will also construct DEMs for a range of $\alpha$ and $\beta$ values to determine the sensitivity of its shape to the specified shape of the heating function. \normalsize
\section{Matrix multiplication on Systolic Arrays} \label{s:gemm} SAs consist of an array of Processing Elements (PEs) organized in $R$ rows and $C$ columns, as shown in Fig.~\ref{f:ws-base}(a). Each PE consists of a multiplier and an adder and necessary registers to appropriately pipeline the operation. SAs are fed by local memory banks placed on the west edge (for the input features) and the north edge (for the weights of the convolution kernels) of the array, while output results are collected on the south edge. The dataflow selected for the SA determines the structure of the PEs and how matrix multiplication $A\times B$ is executed. Fig.~\ref{f:ws-base}(b) shows how the SA performs matrix multiplication using the Weight-Stationary (WS) dataflow~\cite{scalesim}. WS is generally preferred over other dataflows, since it exploits high spatio-temporal reuse of the weights~\cite{tpu, meissa}. \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.44\columnwidth]{figures/figure1a.png} & \includegraphics[width=0.40\columnwidth]{figures/figure1b.png} \\ {\small (a) Typical SA organization} & {\small (b) Weight-Stationary Dataflow} \\ \multicolumn{2}{c}{\includegraphics[width=0.5\columnwidth]{figures/figure1c.png}} \\ \multicolumn{2}{c}{{\small (c) Tiled matrix multiplication}} \end{tabular} \caption{The organization of a typical SA and the structures of the Weight-Stationary dataflow and of tiled matrix multiplication.} \label{f:ws-base} \end{figure} Matrix multiplication $X_{T, M}\! =\! A_{T, N} \!\times\! B_{N, M}$ can be mapped to a SA, if the SA is large enough to accommodate in parallel a column of $A$ and a row of $B$. This can happen when the spatial dimensions $N$ and $M$ of matrix multiplication match the size of the SA, i.e., $R = N$ and $C = M$. In this case, and assuming a WS dataflow, as shown in Fig.~\ref{f:ws-base}(b), matrix $B$ is first pre-loaded in the SA by loading a new row per cycle. Thus, $R$ cycles are needed to complete the loading. Once $B$ is loaded, matrix $A$ is streamed in from the left edge of the array. The first arriving element would reach the rightmost column of the SA after $C-1$ cycles. After the top row is filled with the incoming data elements, it takes $R-1$ cycles to reduce the result of all PEs of the same column. For the reduction operation, the result of the multiplication and addition in each PE is first registered at the borders of the PE and it then moves downwards to the next PE of the same column. The SA becomes empty when the reduction is finished on the rightmost column for all incoming skewed columns of $A$. Since $A$ consists of $T$ rows, the overall latency $L$ of computing matrix multiplication equals: \begin{equation} L = 2R + C + T - 2 \label{e:latency-base} \end{equation} When the size of matrices $A$ and $B$ is larger than the size of the SA, i.e., $N > R$ and/or $M > C$, matrix multiplication is executed in tiles, as shown in Fig.~\ref{f:ws-base}(c), where each tile (sub-matrix) matches the size of the SA. The partial sums for each tile that reach the bottom of the SA, get accumulated into the corresponding output accumulator located below the SA (see Fig.~\ref{f:ws-base}(a)). According to~\cite{scalesim}, the latency of tiled matrix multiplication is equal to \begin{equation} L_{\text{total}} = L \times \left\lceil \frac{N}{R} \right\rceil \times \left\lceil \frac{M}{C} \right\rceil. 
\label{e:latency-total-base} \end{equation} $L$ is the latency for computing the product of two tiles $A^{\text{sub}}_{T, R}\times B^{\text{sub}}_{R, C}$ and it is given by~\eqref{e:latency-base}, and $\lceil \frac{N}{R} \rceil \times \lceil \frac{M}{C} \rceil$ represents the total number of tiles. \section{Conclusions} \label{s:conclusions} Merging the pipeline stages of an SA creates an interesting tradeoff for the computation of GEMM. On one hand, the number of cycles required to complete the matrix multiplication is reduced proportionally to the collapsed pipeline depth. On the other hand, the clock period should increase to accommodate the larger combinational delay of the merged pipeline stages. Utilizing carry-save adders in parallel to the multiply-add components of the PEs allows us to efficiently control the clock frequency degradation. Since the clock frequency reduction is smaller than the reduction achieved in the number of cycles for certain CNN layers, the proposed ArrayFlex SA can minimize the total execution time. The reduced clock frequencies and the marginal hardware overhead incurred also allow for a reduction in power. Currently, this work focuses on dense computations. Nevertheless, since sparse layers can be mapped to GEMM blocks and executed by SAs using efficient peripheral circuitry, we plan -- as future work -- to also explore the applicability of ArrayFlex to sparse layers. \section{Evaluation} \label{s:eval} In the experimental evaluation, the goal is to highlight: (a) when SAs operating in shallow mode make sense, (b) the latency and power savings in these cases, and (c) the area overhead incurred in offering pipeline-depth reconfigurability. To answer these questions, we developed parameterized models of a conventional SA and ArrayFlex in SystemVerilog RTL. Both SAs operate on 32-bit quantized inputs and weights executing single-batch inference of various CNNs that consist of matrix multiplications of different sizes. The additions in each column of the SAs are performed at 64 bits. The SAs were implemented using Cadence's digital implementation flow using a 28 nm standard-cell library. Conventional SAs operating only with a normal pipeline in a non-configurable manner can reach a clock frequency of 2 GHz. The proposed configurable SAs support one normal and two shallow pipeline modes. In normal pipeline mode ($k=1$), the proposed SA operates at 1.8 GHz. The two shallow pipeline modes allow for collapsing $k=2$ or $k=4$ pipeline stages. In these cases, the clock frequency is configured at 1.7 GHz and 1.4 GHz, respectively. Collapsing three pipeline stages is not supported, since three does not exactly divide the size of the SA, which is a power of two in both dimensions. \begin{figure}[ht] \centering \begin{tabular}{cc} \includegraphics[width=0.32\columnwidth]{figures/figure6a.png} & \includegraphics[width=0.3424\columnwidth]{figures/figure6b.png} \\ \end{tabular} \caption{The physical layouts of 8$\times$8 PEs using the conventional SA (left) and the proposed ArrayFlex design (right).} \label{f:layout} \end{figure} To estimate the area cost of reconfigurability, Fig.~\ref{f:layout} highlights the physical layout of a conventional SA, relative to the ArrayFlex design, using 8$\times$8 PEs. From the physical layout of both SAs, it is evident that the area of ArrayFlex is increased in both dimensions. The area overhead per PE for this design is approximately 16\%.
This extra area is consumed by the carry-save adder and the bypass multiplexers, while some marginal area is consumed by the two configuration bits per PE. \begin{figure*} \centering \includegraphics[width=0.94\textwidth]{figures/figure7.png} \caption{The execution time of each CNN layer of ConvNeXt~\cite{convnext} using the conventional and the proposed ArrayFlex SAs. Size of both SAs: 128$\times$128 PEs.} \label{f:convnext} \end{figure*} \subsection{Performance evaluation} Initially, the aim is to highlight the effectiveness of configuring the pipeline depth per CNN layer in a way that minimizes the total execution time. Fig.~\ref{f:convnext} illustrates the execution time per CNN layer of ConvNeXt~\cite{convnext} using SAs that consist of 128$\times$128 PEs. The proposed ArrayFlex SA selects the optimal pipeline depth based on the structure of each CNN layer. For the first 11 layers, it is advantageous to operate under normal pipeline mode. This means that both the conventional SA and ArrayFlex require the same number of cycles to finish the matrix multiplication of each layer. Thus, since the conventional SA operates at a higher clock frequency, it finishes earlier in these cases. For layers 12--46, the proposed SA works optimally under a shallow pipeline mode of $k=2$, while, for layers 47--55, $k=4$ is the best configuration. In those cases, the execution time required by ArrayFlex is less than the execution time on the conventional SA. Interestingly, the best pipeline organization per CNN layer is approximated fairly accurately (assuming continuous values) by Equation~\eqref{e:opt-k}. For ArrayFlex, the execution time savings per layer range between 1.5\% and 26\%, while the \textit{total execution time} \textit{for all layers} is 11\% less than the time required by the conventional SA. \begin{figure}[htb] \centering \begin{tabular}{cc} \includegraphics[width=0.44\columnwidth]{figures/figure8a.png} & \includegraphics[width=0.44\columnwidth]{figures/figure8b.png} \\ {\small (a) 128$\times$128 SAs} & {\small (b) 256$\times$256 SAs} \end{tabular} \caption{The normalized execution times for \textit{complete runs} (i.e., execution of all layers) for three CNNs using (a) 128$\times$128 and (b) 256$\times$256 SAs. The times are normalized for visual clarity, since the execution time of ConvNeXt is significantly higher than the execution times of the other two CNNs.} \label{f:apps} \end{figure} Similar behavior is observed under other CNN models and different SA sizes. Fig.~\ref{f:apps} depicts the normalized total execution time of three CNNs, ResNet-34~\cite{resnet}, MobileNet~\cite{mobilenet}, and ConvNeXt~\cite{convnext}, using 128$\times$128 and 256$\times$256 SAs. In all cases, the proposed ArrayFlex design, which configures the pipeline depth and the corresponding clock frequency to the characteristics of each CNN layer, achieves lower execution latency, ranging between 9\% and 11\%. The savings increase for larger SAs, since more CNN layers prefer a shallow pipeline configuration with $k=4$. This behavior is in line with Equation~\eqref{e:opt-k} that "predicts" higher values for $\hat{k}$ when the size of the SA increases, i.e., with larger values of $R$ and $C$. \subsection{Power consumption evaluation} One other equally important attribute of the proposed ArrayFlex architecture is that it reduces execution time \textit{without} increasing power. 
ArrayFlex has a larger switched capacitance than a conventional SA, due to the extra hardware required to enable pipeline-depth configurability. Furthermore, it operates at a lower clock frequency than a conventional SA in all pipeline modes. The latter property partially amortizes the power cost of the additional hardware. However, in normal pipeline mode, ArrayFlex still consumes more power than a conventional SA. This behavior changes in shallow pipeline mode, whereby the clock frequency is further reduced and additional power is saved by the clock gating of the bypassed registers. Therefore, the power profile of ArrayFlex strongly depends on the selected pipeline mode, which is decided independently for each CNN layer. \begin{figure}[htb] \centering \begin{tabular}{cc} \includegraphics[width=0.44\columnwidth]{figures/figure9a.png} & \includegraphics[width=0.44\columnwidth]{figures/figure9b.png} \\ {\small (a) 128$\times$128 SAs} & {\small (b) 256$\times$256 SAs} \end{tabular} \caption{The power of the SAs for \textit{complete runs} (i.e., execution of all layers) for three CNNs using (a) 128$\times$128 and (b) 256$\times$256 SAs. The power of the SRAMs and any other peripheral circuitry outside the SAs is omitted.} \label{f:apps-power} \end{figure} Fig.~\ref{f:apps-power} depicts the average power consumption of both SAs under comparison when executing inference on the ResNet-34~\cite{resnet}, MobileNet~\cite{mobilenet}, and ConvNeXt~\cite{convnext} CNNs. For ArrayFlex, the power cost of each pipeline mode is shown separately. ArrayFlex operates in shallow pipeline mode in the majority of the CNN layers of each application. Consequently, this behavior translates to \textit{overall power savings} that range between 13\% and 15\% for SAs of size 128$\times$128 PEs, and increase to 17\%--23\% for SAs of size 256$\times$256 PEs. The combined effect of reduced power and less execution time makes ArrayFlex 1.4$\times$--1.8$\times$ more efficient in terms of energy-delay product than a conventional SA. \section{Introduction} \label{s:intro} The predictive quality of deep learning models has increased significantly with Convolutional Neural Networks (CNNs)~\cite{convnext}. CNNs have enabled remarkable performance in many application fields, such as computer vision~\cite{convnext, mobilenet}, natural language processing~\cite{NLP_CNN}, and robotics~\cite{CNN-SLAM}. This widespread adoption of CNNs has triggered the need to accelerate them directly in hardware. To do so, CNN layers are mapped to General Matrix Multiplication (GEMM) kernels~\cite{cudnn}. GEMMs are at the heart of deep learning hardware and they naturally map onto Systolic Arrays (SAs)~\cite{why-systolic}. Tensor-processing units~\cite{tpu} and other related architectures~\cite{scalesim, auto-sa, meissa, factored-sa} are characteristic examples of newly designed SAs. Systolic arrays have also been implemented as configurable architectures that support arbitrary bit-width arithmetic precision to enable sub-word parallelism~\cite{reconfig-bitwidth}. Furthermore, configurable SAs can support various dataflows~\cite{eyriss2,hetero-sa} that are more amenable to the different forms of convolutions found across different CNNs, and even across different layers within one model. Coarse-grained SA reconfiguration allows for the partitioning of the hardware resources among multiple CNNs that execute concurrently on different parts of the SA~\cite{dataflow_mirroring, planaria, sara}.
In the same context, SAs have been customized to handle sparse matrices~\cite{sparse-tpu}. In this work, we focus on customizing the \textit{pipeline} structure of SAs with the goal of reducing the execution latency of matrix multiplication. Throughput can always be increased by adding more processing elements and increasing the input and output bandwidth of the SA. In contrast, optimizing the execution \textit{latency} requires architectural re-organization of the SA that should also adjust to the structure of each CNN layer. Reducing latency is important in applications executed at the edge~\cite{edge} and a necessity for applications that also require real-time responses~\cite{edge-2}. Moreover, for small batch sizes, reducing the latency can also reduce the time to the final result. This is critical in RNNs, which are harder to batch than CNNs~\cite{rnn-batch}. The proposed SA architecture, named \emph{ArrayFlex}, can configure its pipeline structure between normal and various shallow pipeline depths. In shallow mode, two or more adjacent pipeline stages are joined by bypassing intermediate pipeline stage(s)~\cite{collapse, transparent}. This merging effectively reduces the number of cycles needed to complete matrix multiplication. On the other hand, the clock frequency is reduced to avoid timing violations due to the increased logic depth. This two-sided tradeoff allows us to identify the best possible configuration per CNN layer that minimizes the total execution latency in absolute time. Overall, the contributions of this work can be summarized as follows: \begin{itemize} \item ArrayFlex introduces a configurable pipeline architecture for SAs that can adjust its pipeline depth to the size of the corresponding matrix multiplication while aiming to minimize the total execution latency. \item When shallow pipeline mode is beneficial, power is correspondingly reduced, since transparent registers remain clock-gated and the design as a whole operates at a lower clock frequency. \item Extensive evaluations using state-of-the-art CNN applications demonstrate that the proposed architecture reduces the latency by 11\%, on average, while also consuming 13\%--23\% less power, as compared to SAs with a fixed pipeline organization. This amounts to a substantial improvement in overall energy efficiency. \end{itemize} The rest of the paper is organized as follows: Section~\ref{s:gemm} revisits the basics of computing matrix multiplication on SAs. Section~\ref{s:transparent} introduces the proposed ArrayFlex SA architecture with configurable pipeline depth. Experimental results are presented in Section~\ref{s:eval} and conclusions are drawn in Section~\ref{s:conclusions}. \section{A Systolic Array with Configurable Transparent Pipelining} \label{s:transparent} In this work, we aim to adjust the pipeline depth of the vertical and horizontal pipelines of the SA, in order to optimally calibrate them to the size of the systolic array ($R$ and $C$) and the size $T$ of matrix $A$, which are crucial to the overall latency of the computation. \begin{figure}[thb] \centering \includegraphics[width=0.78\columnwidth]{figures/figure2a.png} \\ {\small (a) Normal pipeline; $k=1$} \\ \vskip 0.2cm \includegraphics[width=0.78\columnwidth]{figures/figure2b.png} \\ {\small (b) Shallow pipeline; $k=2$} \caption{(a) In normal pipeline mode, each data item moves to the next PE, either horizontally or vertically, in one clock cycle.
(b) In shallow pipeline mode, the dataflow of every group of $k=2$ PEs is merged into a single-cycle operation. Merging is possible by bypassing the intermediate pipeline registers. The input and output dataflow skew is altered to match the shallower pipeline structure.} \label{f:transparent} \end{figure} To reduce the $R-1$ cycles spent in the reduction operation in each column of the SA, we can configure the vertical reduction pipeline to operate in shallow mode, where two or more adjacent pipeline stages are merged by making the intermediate registers transparent. Registers in transparent mode bypass the input data to the next stage, thereby joining two adjacent combinational logic circuits into one pipeline stage. Up to $k$ adjacent stages can be merged in the vertical direction. Pipeline collapsing is also performed in the horizontal dataflow. Instead of letting the input stream move one column to the right in each cycle, we allow it to broadcast to $k$ columns when operating in shallow pipeline mode~\cite{risset}. The normal pipeline mode that corresponds to the case of $k=1$ is shown in Fig.~\ref{f:transparent}(a). Similarly, Fig.~\ref{f:transparent}(b) depicts an example of shallow pipeline operation assuming $k=2$. In this case, the result of the top row of PEs is added transparently to the result of the second row of PEs in the same clock cycle. The same operation occurs for every two adjacent PEs. To align the shallow pipelines of the SA with the arrival of the input data, their arrival skew should be altered. The first (and last) elements of matrix $A$ arrive in batches of $k$ words. It should be stressed that this change does not fundamentally alter the operation of the systolic array, since the required input and output bandwidth remains the same, equal to $R$ and $C$ words per cycle (i.e., equal to the number of rows and columns of the SA). \subsection{Latency vs. clock frequency tradeoff} Using this approach, and assuming that we can collapse/merge $k$ intermediate PEs into the same pipeline stage in both the vertical and the horizontal directions, the number of cycles spent in the reduction operation reduces from $R-1$ to $\frac{R}{k} - 1$, and the number of cycles spent in the broadcast of the first data element to the rightmost column of the SA reduces from $C-1$ to $\frac{C}{k}-1$. Thus, the overall latency of computing a matrix product $A^{\text{sub}}_{T, R}\times B^{\text{sub}}_{R, C}$ (as needed by each sub-matrix of the original $A\times B$ product) becomes \begin{equation} L(k) = R + \frac{R}{k}+\frac{C}{k}+T-2 \label{e:lat-new} \end{equation} The number of cycles needed for all tiles is then equal to \begin{equation} L_{\text{total}}(k) = L(k) \times \left\lceil \frac{N}{R} \right\rceil \times \left\lceil \frac{M}{C} \right\rceil. \label{e:lat-total-new} \end{equation} Overall, the higher the amount of collapsing (i.e., the higher the value of $k$), the larger the reduction in the number of cycles needed to complete matrix multiplication. On the other hand, to enable pipeline collapsing within the SA, one must slow down the clock frequency to avoid timing violations due to the increased logic depth. Column collapsing only affects the delay marginally. However, row collapsing requires $k$ additions to be performed in series in the same clock cycle. Therefore, for each shallow pipeline configuration, there is an equivalent throttling of the operating clock frequency to adjust the clock period to the logic depth of the selected configuration.
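To make the cycle-count model concrete, the following short Python sketch evaluates Equations~\eqref{e:lat-new} and \eqref{e:lat-total-new}; the variable names mirror the symbols used above and, as in the configurations evaluated later, $R$ and $C$ are assumed to be divisible by $k$:
\begin{verbatim}
import math

def cycles_per_tile(R, C, T, k):
    # Eq. (L(k)): cycles for one T x R by R x C tile with
    # pipeline collapsing depth k (R and C divisible by k).
    return R + R // k + C // k + T - 2

def total_cycles(N, M, T, R, C, k):
    # Eq. (L_total(k)): cycles over all tiles of the full product.
    return cycles_per_tile(R, C, T, k) * math.ceil(N / R) * math.ceil(M / C)
\end{verbatim}
Larger $k$ lowers the cycle count of every tile, while the clock period grows; the next subsections quantify this tradeoff.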
Our goal is to select the best possible $k$ that minimizes the total execution latency in absolute time (i.e., clock cycles $\times$ clock period), given the size of the systolic array $R\times C$ and the size of the matrix multiplication, as determined by $N, M$, and $T$. \subsection{The Organization of Configurable PEs} The minimum clock period that the design can operate at is determined by the maximum logic delay between any two pipeline registers, plus any clocking overhead (sum of the register clock-to-Q delay and the setup time of the flip-flops). In the baseline case ($k=1$), the maximum combinational delay remains inside the borders of one PE and is equal to the delay of the multiplier plus the delay of the adder. When collapsing $k$ pipeline stages into one, the maximum combinational delay again involves the delay of one multiplier, plus the delay of $k$ carry-propagate adders in series, plus the delay of the bypass multiplexers. To avoid this significant delay overhead when collapsing adjacent pipeline stages, we augment the PEs of the SA with an additional 3:2 carry-save stage that is only enabled during shallow pipeline mode. The organization of two enhanced PEs of the same column is shown in Fig.~\ref{f:new-pe}. The 3:2 carry-save adder is composed of parallel full-adders, one at each bit position. \begin{figure} \centering \includegraphics[width=0.45\columnwidth]{figures/figure3.png} \caption{The organization of two enhanced, configurable PEs of the same column. Registers are bypassed in the vertical and horizontal directions according to the pipeline configuration. In shallow pipeline mode, reduction is performed using a series of 3:2 carry-save adders ending with a carry-propagate adder.} \label{f:new-pe} \end{figure} When PEs are collapsed, the registers placed in the horizontal direction are bypassed (and clock-gated) by additional multiplexers controlled by configuration bits loaded in parallel to the weights of matrix $B$. In the vertical direction, to add the $k$ products produced by the multipliers of each PE, we utilize $k$ carry-save adder stages. The carry-save adders are connected in series through additional bypass multiplexers placed in the vertical direction. At the last stage, where pipeline collapsing ends and the result needs to be saved in the corresponding pipeline register, the result of the carry-save adders is converted into a single operand using the carry-propagate adder of each PE. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.31\columnwidth]{figures/figure4a.png} & \includegraphics[width=0.58\columnwidth]{figures/figure4b.png} \\ (a) Normal pipeline & (b) Shallow pipeline; $k=2$ \\ \end{tabular} \caption{Example of active paths for (a) a normal pipeline ($k=1$), and (b) a shallow pipeline ($k=2$).} \label{f:transparent-example} \end{figure} This configuration is shown in Fig.~\ref{f:transparent-example}(b) for $k=2$. The products of the first and the second row in each column are added using carry-save adders. The final sum is produced by the carry-propagate adders of the last row. The carry-propagate adders of the top row are not used, while the registers that are bypassed are clock-gated to save power. Each PE needs two configuration bits that independently configure the transparency (bypassing) of the pipeline registers in each direction. Separate configuration bits per PE are needed since each PE can play a different role depending on the selected pipeline mode.
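As a behavioral illustration of this reduction scheme, the following Python sketch (with function names of our own choosing and unbounded integer widths) shows one 3:2 carry-save stage and the redundant reduction along a collapsed column:
\begin{verbatim}
def csa_3to2(x, y, z):
    # One 3:2 carry-save stage: parallel full adders, with no
    # carry propagation across bit positions (x + y + z == s + c).
    s = x ^ y ^ z                           # per-bit sum
    c = ((x & y) | (x & z) | (y & z)) << 1  # per-bit carry, shifted
    return s, c

def collapsed_column_sum(products):
    # Chains 3:2 CSAs over the k products of a collapsed block and
    # resolves the redundant form with a single carry-propagate add,
    # as done by the carry-propagate adder of the last PE.
    s, c = products[0], 0
    for p in products[1:]:
        s, c = csa_3to2(s, c, p)
    return s + c
\end{verbatim}
Each loop iteration corresponds to one PE of the collapsed block; only the final addition propagates carries, which is why the critical path grows with $k$ only through the short carry-save stages.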
The incoming input and the weight stored in each PE have the same bit width. However, the vertical connections, including the carry-save adders and the carry-propagate adders, have double the bit width, in order to accommodate the full product of the multiplier. The 3:2 carry-save adder and the bypass multiplexers participate in the operation of each PE even when configured for a normal pipeline, i.e., $k=1$. As shown in Fig.~\ref{f:transparent-example}(a), the product of each multiplier is first added to the result of the previous PE of the same column through the carry-save adder before finalizing the addition with the carry-propagate adder. This extra hardware, placed in series between the multiplier and the adder, inevitably increases the minimum delay relative to a conventional PE that does not offer any pipeline reconfiguration. The experimental results show that this delay overhead is marginal and does not limit the applicability of the proposed approach. \subsection{Minimizing the total execution time} To identify the optimal value of $k$ that best fits the examined configuration, we first need a rough model of how the ArrayFlex clock period scales with $k$. When collapsing $k$ pipeline stages of the SA, the maximum combinational delay involves the delay of $k$ bypass multiplexers in the horizontal direction, the delay of the multiplier ($d_{\text{mul}}$) of the rightmost PE of the collapsed pipeline block, plus the delay of $k$ cascaded 3:2 carry-save adders ($d_{\text{CSA}}$) and bypass multiplexers ($d_{\text{mux}}$) in the vertical direction. To this delay, we add the delay of the final carry-propagate adder ($d_{\text{add}}$) of the last row of the collapsed pipeline block and any flip-flop clocking overhead ($d_{\text{FF}}$). Overall, we can roughly estimate that the minimum clock period that can be achieved by a $k$-collapsed pipeline is: \begin{equation} T_{\text{clock}}(k) = d_{\text{FF}} + d_{\text{mul}} + d_{\text{add}} + k ( d_{\text{CSA}} + 2d_{\text{mux}} ) \label{e:clock} \end{equation} In practice, the design supports a maximum pipeline collapsing depth $k_{\text{max}}$. When collapsing fewer than $k_{\text{max}}$ pipeline stages, the combinational paths that still exist in the design but are not used are considered false paths. We provide this information explicitly to the static timing analyzer. The latency in absolute time $T_{\text{abs}}(k)$ of computing a complete matrix multiplication using an SA with $k$-collapsible pipeline is the product of the latency in clock cycles $L_{\text{total}}(k)$ given in Equation~\eqref{e:lat-total-new} and the minimum clock period $T_{\text{clock}}(k)$ that corresponds to each $k$ (as given by Equation~\eqref{e:clock}): \begin{equation} T_{\text{abs}}(k) = L_{\text{total}}(k)\times T_{\text{clock}}(k) \end{equation} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.35\columnwidth]{figures/figure5a.png} & \includegraphics[width=0.35\columnwidth]{figures/figure5b.png} \\ {\small (a) ResNet-34 Layer 20} & {\small (b) ResNet-34 Layer 28} \\ \end{tabular} \caption{The execution time of computing layers (a) 20 and (b) 28 of ResNet-34~\cite{resnet} as an equivalent matrix multiplication using a configurable SA under various pipeline collapsing depths $k$.
The execution time of the conventional (non-configurable) SA that operates only in normal pipeline mode under the highest clock frequency is depicted as a straight line.} \label{f:example} \end{figure} To explore the interesting interplay between computation latency in cycles and the configurable pipeline depth of the columns of the systolic array, we performed a simple experiment. We measured the execution time required to compute layers 20 and 28 of the ResNet-34 CNN~\cite{resnet} as matrix multiplications using a configurable SA that consists of $(R, C) = (132, 132)$ rows and columns. The values of $R$ and $C$ were selected to be divisible by all the examined values of $k$, i.e., 1, 2, 3, and 4. The sizes of the corresponding matrix multiplications for computing layers 20 and 28 are $(M, N, T)=(256, 2304, 196)$ and $(M, N, T)=(512, 2304, 49)$, respectively. In both cases, we examined various pipeline collapsing depths. The obtained results in each case are depicted in Fig.~\ref{f:example}. For each pipeline collapsing depth $k$, we scaled the clock frequency accordingly to match the combinational delay for each case. The execution latencies that correspond to a conventional (non-configurable) SA are shown as straight lines in both cases of Fig.~\ref{f:example}. The conventional SA operates using a normal pipeline at the highest clock frequency, since it does not suffer any delay overhead associated with configurability. According to Fig.~\ref{f:example}(a), the execution time for layer 20 is minimized at $k=2$. In this case, the reduction of clock cycles and the increase of the clock period find their optimal match. Collapsing the pipeline deeper, i.e., $k=3$, still reduces the execution time, relative to a conventional SA, but the savings are smaller. For layer 28, as depicted in Fig.~\ref{f:example}(b), deeper pipeline collapsing offers the best execution time. In this case, utilizing a pipeline collapse depth of $k=4$ is the best choice. To identify the optimal pipeline depth $\hat{k}$ that minimizes $T_{\text{abs}}(k)$, we take the derivative of $T_{\text{abs}}(k)$ with respect to $k$ and set it equal to zero. This leads to: \begin{equation} \hat{k} = \sqrt{\left ( \frac{R+C}{R+T-2}\right) \left(\frac{d_{\text{FF}} + d_{\text{mul}} + d_{\text{add}}}{d_{\text{CSA}} + 2d_{\text{mux}}}\right )} \label{e:opt-k} \end{equation} Even though $k$ is a discrete variable, Eq.~\eqref{e:opt-k} gives a simple analytical model that leads to one interesting conclusion: the pipeline depth of the SA should be judged not only on the delay profile of the adder and the multiplier hardware blocks, but also on the size of the matrix multiplication (dimension $T$) relative to the size of the SA. For instance, the first layers of a CNN identify coarse features of the input using a wide search area. This leads to large values for dimension $T$ in the corresponding matrix multiplication. As a result, $\hat{k}$ cannot easily reach values larger than one. This means that the best choice is to use an architecture with a normal pipeline, i.e., with $k=1$. In contrast, in the last CNN layers, it is common for the size of the input features to decrease and for their number to increase~\cite{convnext, mobilenet}. Effectively, in these layers the value of $T$ drops and using shallow pipelining (higher $k$) is a better choice. Allowing for pipeline collapse, which also reduces the clock frequency, not only reduces the overall execution time, but it also saves dynamic power.
Under shallow pipelining, the clocking power is also reduced, since more registers remain clock-gated.
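Putting the cycle-count and clock-period models together, the following sketch (reusing \texttt{total\_cycles} from the earlier snippet) selects the per-layer pipeline depth in the way described above; the delay values are placeholders standing in for numbers obtained by characterizing the target cell library:
\begin{verbatim}
import math

def clock_period(k, d_FF, d_mul, d_add, d_CSA, d_mux):
    # Eq. (T_clock(k)): minimum clock period of a k-collapsed pipeline.
    return d_FF + d_mul + d_add + k * (d_CSA + 2 * d_mux)

def k_hat(R, C, T, d_FF, d_mul, d_add, d_CSA, d_mux):
    # Eq. (k-hat): continuous optimum of the pipeline depth.
    return math.sqrt(((R + C) / (R + T - 2)) *
                     ((d_FF + d_mul + d_add) / (d_CSA + 2 * d_mux)))

def best_k(N, M, T, R, C, delays, k_options=(1, 2, 4)):
    # Discrete choice: minimize T_abs(k) = cycles x clock period.
    return min(k_options,
               key=lambda k: total_cycles(N, M, T, R, C, k)
                             * clock_period(k, *delays))
\end{verbatim}
The discrete search over \texttt{k\_options} mirrors the per-layer configuration step of ArrayFlex, while \texttt{k\_hat} reproduces the continuous approximation of Equation~\eqref{e:opt-k}.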
\section{Introduction} An important aspect of cognitive radio and dynamic spectrum access is how to distribute resources among users. Resource allocation in such systems differs from that in conventional wireless systems because the available resources change with time and location, making an optimal allocation hard to achieve. In dynamic spectrum access scenarios, management of spectrum is usually divided between two approaches \cite{Lehr}: (i) property rights/licensed/exclusive use and (ii) open access/unlicensed/spectrum commons. In the latter, all devices are treated equally and sharing of the spectrum is done through protocols and etiquettes. Although these protocols and etiquettes may be designed to achieve some overall system performance, quality of service cannot be guaranteed in this scenario, as everything is best effort. In the licensed scenario, devices can buy and trade access to spectrum. In such a scenario there exist central entities which grant access to spectrum on a legal basis and under central control. This approach has led to numerous publications on spectrum trading, including demand \cite{Niyato2008}, pricing \cite{Ileri} and auctions of spectrum \cite{Gandhi2007}. In such a scenario, quality of service can be guaranteed, and in fact must be guaranteed in order for the market to function. A natural problem for spectrum holders is then how to maximize revenue. In this paper we assume secondary users can buy access to spectrum for a given time period. When requesting spectrum, the secondary users issue a statement about the necessary quality-of-service for their operation. In this paper we limit our focus to the case where such a QoS metric consists of a signal-to-interference-plus-noise ratio (SINR) requirement. Each user is willing to pay a certain amount for its service. The goal of the spectrum holder is then to maximize the number of users that can be granted access to spectrum, while still guaranteeing the QoS for these users. This becomes an admission control and spectrum allocation problem, which is difficult to solve. The difficulty of such a problem lies in the nature of the wireless channel: interference. The cumulative nature of interference makes the coupling between decision variables non-linear, and the resulting optimization problem is usually non-convex (and NP-hard). A central approach is usually assumed to solve such problems close to optimality, and techniques such as genetic algorithms or geometric programming can be used \cite{4275017}. Another approach is to solve the problem distributively by investigating the Karush-Kuhn-Tucker (KKT) necessary conditions for optimality \cite{wang}. However, finding the KKT multipliers is complex, and in addition the objective function must be continuously differentiable and the problem has to be convex for the KKT conditions to also be sufficient conditions for optimality \cite{Boyd2004}. Based on a recent work \cite{optimus}, we use a constraint transformation to decouple the joint admission control and spectrum allocation problem into two disjoint problems. We show that the admission control problem must be solved by a central entity in order to guarantee any QoS to the secondary users, while the spectrum allocation problem can be solved in a distributed manner close to, and in some cases to, optimality. This has the advantage of decreasing the complexity of the central controller and the option of delegating part of the processing to the users while still retaining central control.
The rest of this paper is organized as follows: Section \ref{sec:rel-work} provides an overview of related work, Section \ref{sec:pre} defines the system model and problem statement, Section \ref{sec:prob} presents the constraint transformation used in access control, Section \ref{sec:mkp} describes the heuristic algorithm presented in this paper, Section \ref{sec:channel-selection} describes the channel selection algorithm, Section \ref{sec:performance} provides performance evaluations and Section \ref{sec:conc} concludes the paper. \section{Related Work} \label{sec:rel-work} Due to the reuse factor used in early mobile phone systems and in many of today's broadcast systems, the spectrum allocation problem (SAP) has been extensively studied in the literature, see for instance \cite{hale} and references therein. However, the SAP described for cellular networks with reuse factors differs from ours in that those problems dealt with fixed topologies and fixed resource allocations. E.g., each base station was allocated an amount $x$ of spectrum, and the problem was to find this $x$ for each base station. Also, for cellular networks with fixed topology, solution time is not of the essence as the topology does not change. In more dynamic networks, such as dynamic spectrum access networks, a popular approach to simplifying the problem is to simplify the interference model. One frequently used approach is to approximate the interference model by a graph with pairwise constraints, e.g., node $a$ cannot transmit on the same frequency as node $b$. This is done in, for instance, \cite{Subramanian} and \cite{4658258}. However, the inefficiency of such graph-based models has also been analyzed quite extensively \cite{Gronkvist:2001:CGI:501416.501453}\cite{Moscibroda:2006p137}. To model interference and successful reception we use the physical SINR model \cite{4146676}, which accounts for accumulated interference as opposed to graph-based models. In recent years this model has been extensively studied for link scheduling in wireless networks, where approximation guarantees on the optimal solution have been shown to exist \cite{Goussevskaia:2007:CGS:1288107.1288122}\cite{Moscibroda:2007:WCW:1236360.1236362}\cite{Halldorsson:2011:WCO:2133036.2133155}. The best known solutions yield a constant-factor $\mathcal{O}(1)$ approximation of the optimal solution. We extend these works by considering multiple channels and individual SINR requirements. \cite{optimus} considers the case of distributing spectrum among users with cumulative interference and a universal SINR constraint for all users. They assume a common set of channels $K$ available to each user, where $|K| \sim 1000$, and the goal is to allocate a fraction of the total number of channels to each user according to some objective function. To overcome the cumulative interference problem they propose a novel linear relaxation of the interference constraints such that the problem becomes a linear program. The linear relaxation of the interference constraints introduced in \cite{optimus} forms the basis of our work. Compared to \cite{optimus}, we extend their work by considering individual channel sets and SINR requirements for each user, and also assume the individual channel sets can be arbitrarily small. Instead of allocating a fraction of a common set of channels to each user, we assume a user can obtain a maximum of one channel. These assumptions change the problem from a linear program in \cite{optimus} to a binary quadratic constraint (BQC) problem.
In \cite{Xiang2010} spectrum allocation among femtocells is considered, where the goal is to maximize the overall system rate while guaranteeing a minimum SINR at each femtocell user. This may seem like a more general approach than the one considered in this paper. However, such a formulation has certain drawbacks: (i) with such a formulation each femtocell user \textit{must} transmit; otherwise this problem is equivalent to the general sum-rate problem, which is known to be unfair (i.e., most of the resources are allocated to users with high channel gain). (ii) It is known that in many cases some users must stay silent for a feasible solution to exist. Thus the requirement that each user must transmit on some channel limits the cases where a feasible solution exists. However, from a spectrum holder's perspective, if there is demand then a feasible solution exists, as one could simply allocate all spectrum to the highest bidder. The work most closely related to ours is \cite{Hoang2008}. In \cite{Hoang2008} the goal is to allocate spectrum to as many cognitive users as possible while achieving a SINR requirement at each user and satisfying some interference limit at the primary users. It is shown that the general problem formulation is NP-hard and thus a greedy heuristic is proposed. As the problem formulation is still NP-hard in this paper, we also rely on a heuristic algorithm to solve the optimization problem efficiently. However, we consider a case where primary users are only protected through a maximum power constraint at each user, such as in the TV white spaces. By this reduction we can use a constraint transformation that allows us to utilize heuristics with a much lower complexity bound, and by comparing sufficient and necessary conditions we are able to upper bound the optimal solution of our problem in certain cases. \section{System Model and Problem Formulation} \label{sec:pre} \subsection{System Model} A set of $N$ transmitters wants to allocate spectrum to support transmission to their receivers. A pair consisting of a transmitter and a desired receiver is denoted as a user. These users can be one-hop links in ad-hoc networks, or they can be access-point-to-terminal links. In the latter case, if different terminals connected to the same access point use different frequencies, a user has to be defined for each terminal to support this scenario. However, in general this does not affect the spectrum allocation framework, as it only depends on how interference between these users is defined. To acquire access to spectrum, the users contact a spectrum holder through some means of communication. The accepted approach to granting access to secondary spectrum is through a database, as currently done by the FCC \cite{FCC4}. We assume the system consists of a set of channels $\mathcal{K}$. By providing location information to a spectrum holder database, each user has potential access to a subset $\mathcal{K}_i$ of channels along with a maximum power constraint $P_i$ which guarantees that primary users are not affected. Thus we have that $\mathcal{K}_i \subseteq \mathcal{K}$ and $K_i = |\mathcal{K}_i|$. Each user can select one channel among the $K_i$ channels, where each channel has the same bandwidth and propagation characteristics.
The SINR of user $i$ on channel $k$ is given as \begin{equation} SINR_{i}^k = \frac{g_{i,i}P_i}{\sigma^2 + \sum_{j\neq i} a_j^kg_{j,i}P_j} \end{equation} where $a_j^k \in \{0,1\}$ is 1 if user $j$ transmits on channel $k$, $g_{j,i}$ is the channel gain between node $j$ and node $i$, and $\sigma^2$ is the noise variance. Let $S_i = g_{i,i}P_i$ and $I_{j,i} = g_{j,i}P_j$. The SINR$_i^k$ is then given as \begin{equation} SINR_{i}^k = \frac{S_i}{\sigma^2 + \sum_{j\neq i} a_j^kI_{j,i}} \end{equation} We assume each user has a SINR requirement $\beta_i$. \begin{definition} Let $\mathbb{A}$ define a spectrum allocation, \begin{equation} \mathbb{A} = \{a_i^k\}, 1\leq i\leq N,1\leq k \leq K_i \end{equation} where $a_i^k=1$ indicates that user $i$ selects channel $k$, $a_i^k = 0$ otherwise. \end{definition} \begin{definition} With the SINR requirements of each user, a spectrum allocation $\mathbb{A}$ is \textbf{successful} if $a_i^k=1$ implies $SINR_i^k\geq \beta_i$. \end{definition} \subsection{Problem Formulation} As mentioned above, each user has a SINR requirement. When contacting the spectrum holder's database, each user is willing to pay a fee for access to the spectrum, given that its SINR requirement is met. Providing a price function versus time and performance is out of the scope of this paper; we only assume the price is a non-decreasing function of SINR. The goal of the spectrum holder is thus to maximize its revenue by allowing an optimal set of users access to the spectrum under their individual SINR constraints. Since a user can only use one frequency, we introduce a variable $x_i \in [0,1]$ where \begin{equation} x_i = \sum_k a_i^k \end{equation} The problem can then be formally defined as follows \begin{align} \max& \sum_i r_ix_i \label{eq:org-prob}\\ \text{s.t.:}& \nonumber \\ &a_i^k(SINR_i^k -\beta_i)\geq 0 \label{eq:constraint-non-linear} \\ &x_i = \sum_k a_i^k \\ &0\leq x_i\leq 1, a_i^k\in\{0,1\} \label{eq:a-0-1} \end{align} where $r_i$ is the revenue of the spectrum holder given that user $i$ is granted access to the spectrum. This problem is a non-convex 0,1 problem, as (\ref{eq:constraint-non-linear}) is a non-convex constraint and the values of $a_i^k$ are either 0 or 1. The difficulty of the problem lies in the large unknown non-convex solution space due to the large number of possible variable configurations. For instance, if there are 10 users with 5 available channels each, there are $5^{10}$ possible combinations. As noted in numerous papers, this problem belongs to the class of NP-hard problems. Therefore the time required to solve the problem to optimality increases exponentially with the number of constraints. The approach used in this paper consists of a decoupling of the problem into an access control problem and a channel allocation problem. The reason for this approach is twofold: first, through a constraint transformation, a sufficient condition for a feasible solution can be stated which significantly reduces the solution space and for which efficient heuristics exist. Second, given a set of users which is known to contain a feasible solution, channel selection can be performed in a distributed manner that in general performs close to optimally and in certain cases is optimal. Thus the channel selection part of the problem can be done both centrally and distributively. This has the advantage of being able to offload some of the processing requirements at the central entity to the users in the field.
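Both the admission control step and the channel selection step developed in the remainder of the paper ultimately target allocations that are \textbf{successful} in the sense of Definition 2. For concreteness, this check can be sketched in a few lines of Python (notation of our own: \texttt{alloc[i]} holds the channel selected by user $i$, or \texttt{None} if user $i$ is silent):
\begin{verbatim}
def is_successful(alloc, S, I, sigma2, beta):
    # Verifies Definition 2 directly from the SINR expression above:
    #   S[i]    = g_ii * P_i  (received signal power)
    #   I[j][i] = g_ji * P_j  (interference power from j at i)
    #   beta[i] = SINR requirement of user i
    for i, k in enumerate(alloc):
        if k is None:
            continue
        interference = sum(I[j][i] for j, kj in enumerate(alloc)
                           if j != i and kj == k)
        if S[i] / (sigma2 + interference) < beta[i]:
            return False
    return True
\end{verbatim}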
Access control, on the other hand, cannot be performed in a distributed manner while still guaranteeing QoS at the users, as ``global'' information about the users is required. In Section \ref{sec:performance}, we show how limiting the knowledge at each user to only certain neighboring users, as would be the case in a distributed implementation, affects performance. \section{Access Control} \label{sec:prob} \subsection{Constraint Transformation with Equal Channel Sets} To overcome the large solution space in the original problem (\ref{eq:org-prob})-(\ref{eq:a-0-1}), we transform constraint (\ref{eq:constraint-non-linear}) to a binary quadratic constraint, which, although not linear, reduces the possible combinations of variables to $2^{N}$, for which an efficient heuristic algorithm can find good solutions. The constraint transformation was introduced in \cite{optimus}, but we have adapted it to our problem. Instead of transforming (\ref{eq:constraint-non-linear}) from a non-convex constraint to a linear constraint, which is possible in \cite{optimus} due to the fractional channel set allocation, we transform the constraint to a binary quadratic constraint due to our single-tone approach. The constraint given in (\ref{eq:constraint-non-linear}) can be approximated by \begin{equation} x_i(1 + \sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}) \leq K_i \label{eq:linearized-c-t} \end{equation} where $I_i^{\max} = \frac{S_i}{\beta_i}-\sigma^2$ and $I^{+}_{j,i} = \min(I_i^{\max},I_{j,i})$. The problem then becomes \begin{align} &\max \sum_i r_ix_i \label{eq:max-z-t}\\ &\text{s.t.} \nonumber \\ &x_i(1 + \sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}) \leq K_i, x_i\in\{0,1\}, i=1,...,N \label{eq:x-0-1-t} \end{align} If $x_i$ equals 1, this means user $i$ can achieve its SINR requirement by selecting some channel. The solution does not, however, identify the channel that should be selected. Let $z'$ be the solution to the optimization problem given in (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t}), i.e., the number of users that can obtain their SINR requirement. We then have the following proposition: \begin{proposition} $z'$ is a feasible solution to the original optimization problem (\ref{eq:org-prob})-(\ref{eq:a-0-1}). \label{prop:1} \end{proposition} \begin{IEEEproof} See Appendix \ref{app:proof-prop1} \end{IEEEproof} \subsection{Constraint Transformation With Unequal Channel Sets} Although the solution to (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t}) is a feasible solution to the original problem, we conjecture that the constraint given in (\ref{eq:linearized-c-t}) is too restrictive in some cases. This is especially the case when the different users have different subsets of channels available to them, which is the case in many secondary scenarios, as spectral resources can depend on location and device specifications. In the proof of Proposition \ref{prop:1}, a feasible solution is shown to exist by bounding the number of blocked channels, where all users are assumed to be able to block all channels at any given user if they are allowed to transmit. However, if two users do not share any common channels, they cannot block each other. Also, if two users share only a subset of channels, it is less likely that they will contribute to the blocking of channels. Thus we propose a new constraint for this scenario, which takes into account the probability of two users blocking each other based on their respective channel sets and which yields a feasible solution in the asymptotic regime almost surely.
The new constraint is given by \begin{equation} x_i(1 + \sum_{j\neq i} x_j\frac{I^{+}_{j,i}|\mathcal{K}_j\cap \mathcal{K}_i|}{I_i^{\max}K_j}) \leq K_i \label{eq:linearized-c} \end{equation} and thus the new optimization problem becomes \begin{align} &\max \sum_i r_ix_i \label{eq:max-z}\\ &\text{s.t.} \nonumber \\ &x_i(1 + \sum_{j\neq i} x_j\frac{I^{+}_{j,i}|\mathcal{K}_j\cap \mathcal{K}_i|}{I_i^{\max}K_j}) \leq K_i, i=1,...,N \\ &x_i\in\{0,1\}, i=1,...,N \label{eq:x-0-1} \end{align} \begin{proposition} Let $z^{*}$ be the solution to the problem given in (\ref{eq:max-z})-(\ref{eq:x-0-1}). $z^{*}$ is a feasible solution to the original optimization problem (\ref{eq:org-prob})-(\ref{eq:a-0-1}) almost surely when $N\rightarrow \infty$. \label{prop:a} \end{proposition} \begin{IEEEproof} See Appendix \ref{app:proof-propA} \end{IEEEproof} We now have two new 0,1 non-linear problems to compute the maximum revenue that a spectrum holder can achieve while satisfying the requirements of a successful allocation. Specifically, the problems are modified versions of the multidimensional 0,1 knapsack problem (MKP), with the difference that instead of linear constraints we have binary quadratic constraints. As MKPs are NP-hard \cite{kellerer}, so are our problems. Thus solving the above problems is not easy. By exploiting the structure of our particular problems we use a Lagrange relaxation inspired by the one introduced in \cite{magazine1984319} for the standard MKP. A heuristic based on Lagrange relaxation has desirable properties such as low complexity (running time is $\mathcal{O}(2N^2)$ for our problem) and an upper bound for $z^{*}$ without resorting to an LP relaxation of the problem. \section{A Heuristic using Lagrange Relaxation} \label{sec:mkp} To simplify notation we set $a_{ij} = \frac{I^{+}_{j,i}|\mathcal{K}_j\cap \mathcal{K}_i|}{I_i^{\max}K_j}$ (or $a_{ij} = \frac{I^{+}_{j,i}}{I_i^{\max}}$ if solving (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t})). A Lagrange relaxation of our problem is \begin{align} \max_{\mathbf{x}}&\bigl\{\sum_{i=1}^N r_ix_i + \sum_{i=1}^N \lambda_i (K_i-x_i(1+\sum_{j\neq i}^N a_{ij}x_j))\bigr\}\label{eq:rl-prob} \\ \text{s.t.:}&\nonumber \\ &\mathbf{x}\in\{0,1\}^N \text{ and } \mathbf{\lambda}\geq 0. \label{eq:rl-x} \end{align} Let $\mathbf{x'}$ be a solution to (\ref{eq:max-z})-(\ref{eq:x-0-1}) (or (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t})) with $z(\mathbf{x'})$ as its value. Let RL$(\mathbf{x'})$ be the value of the Lagrange relaxation problem. Clearly RL$(\mathbf{x'})\geq z(\mathbf{x'})$, since when $\mathbf{x'}$ satisfies (\ref{eq:linearized-c}), $K_i-x_i(1+\sum_{j\neq i}^N a_{ij}x_j)\geq 0$. Therefore the optimal value of (\ref{eq:rl-prob})-(\ref{eq:rl-x}) is at least as large as $z^{*}$, and it is therefore a relaxation. It is easy to see that this maximization problem is equivalent to the following \begin{align} \max_{\mathbf{x}}&\bigl\{\sum_{i=1}^N \bigl(r_i-\lambda_i(1+\sum_{j\neq i}^N a_{ij}x_j)\bigr)x_i\bigr\} \label{eq:mod-langrange}\\ \text{s.t.:}&\nonumber \\ &\mathbf{x}\in\{0,1\}^N \label{eq:x-0-1-2} \end{align} since the $\lambda_iK_i$ are constants. It is also easy to see that (\ref{eq:mod-langrange})-(\ref{eq:x-0-1-2}) has the trivial solution \begin{equation} x^{*}_i = \left\lbrace\begin{array}{c c}{1} & {\text{if }\bigl(r_i-\lambda_i(1+\sum_{j\neq i}^N a_{ij}x_j)\bigr)>0} \\ {0} & {\text{otherwise}}\end{array}\right.
\label{eq:determine-x} \end{equation} The difficulty lies in finding values for the Lagrange multipliers such that the solution $\mathbf{x}$ is feasible for the original problem and for which the inequality \begin{equation} \sum_{i=1}^N \lambda_i (K_i-x_i(1+\sum_{j\neq i}^N a_{ij}x_j)) \geq 0 \end{equation} is as close to equality as possible. When equality holds, the solution $\mathbf{x}$ is the optimal solution to the BQC problem \cite{everett}. \subsection{An Upper Bound} Before we start with the process of finding the Lagrange multipliers, we provide a result on an upper bound for the optimal value of the BQC problem. \begin{proposition} Let $\mathbf{x}$ be the solution and $\mathbf{\lambda}$ the multiplier values of a Lagrange relaxation. Then an upper bound for the optimal value of the BQC problem is given by \begin{equation} z^{*} \leq z(\mathbf{x}) + \sum_{i=1}^N \lambda_i (K_i-x_i(1+\sum_{j\neq i}^N a_{ij}x_j)) \label{eq:upper-bound} \end{equation} \end{proposition} Proof of this proposition can be found in \cite[Thm 4.1]{magazine1984319}. \subsection{Finding the Lagrange Multipliers} Clearly the quality of the solution depends on the Lagrange multipliers, as a low value of the summation in (\ref{eq:upper-bound}) provides a small gap to the optimal value. Finding Lagrange multipliers for integer programming has been investigated in, e.g., \cite{everett} and \cite{SenjuandToyoda}, and specifically for the MKP in \cite{magazine1984319}. We adapt the algorithm proposed in \cite{magazine1984319} to our modified version of the MKP. The algorithm consists of the following steps:\newline \textit{Step 0 (initialize and normalize)}\newline Let $\lambda_i = 0$ and $x_i = 1$, $i=1,...,N$. \newline Normalize the coefficients:\begin{align}a_{ij}=& a_{ij}/K_i \hspace{0.2cm}\text{for }j=1,...,N \text{ and }i=1,...,N \nonumber\\ a_{ii} =& 1/K_i \hspace{0.2cm}\text{for }i=1,...,N\nonumber \\ b_i =& 1 \hspace{0.2cm}\text{for }i=1,...,N\nonumber \end{align} Compute $y_i = \sum_{j = 1}^N a_{ij}$ for $i=1,...,N$. \newline \newline \textit{Step 1 (determine the most violated constraint)}\newline If $y_i\leq 1$ for all $i$, then $\mathbf{x}$ is feasible and we can stop. Else find the most violated constraint: \begin{align} i^{*} &= \arg\max_{i} y_i \nonumber \end{align} \textit{Step 2 (determine the variable to remove)}\newline Compute \begin{equation} j^{*} = \arg\max_j \frac{a_{i^{*}j}}{r_j}x_j \nonumber \end{equation} \textit{Step 3 (increase the $\lambda$s)}\newline Set $\lambda_{j^{*}} = \lambda_{j^{*}}+a_{i^{*}j^{*}}$\newline Set $x_{j^{*}} = 0$, $y_{j^{*}} = 0$ and $y_i = y_i-a_{ij^{*}}$ for all $i$.\newline If $y_i\leq 1$ for all $i$, then stop; otherwise go to step 1.\newline The idea of the algorithm is as follows: find the most violated constraint and find the variable that contributes the most to violating this constraint as a fraction of the revenue it would bring the spectrum holder. Set this variable to zero along with its constraint, and reduce all other constraints by the contribution this variable had to these constraints. \begin{algorithm}[t] \caption{Channel Selection} \label{algo:1} \begin{algorithmic}[1] \STATE Consider any $\mathbf{x}$ that is a solution to (\ref{eq:max-z})-(\ref{eq:x-0-1}). \STATE For each user $i$ with $x_i = 1$ select a random channel from $\mathcal{K}_i$. \FOR{\textbf{each} user $i$ with $x_i = 1$} \STATE Compute $\omega_i^k = \sum_{j\neq i} a_j^k I_{j,i}$.
\STATE Select $k^{*} = \arg\min_k \omega_i^k$ \ENDFOR \STATE Repeat lines 3--6 until no more adjustments can be performed. \end{algorithmic} \end{algorithm} \section{Channel Selection} \label{sec:channel-selection} Through the heuristic algorithm presented in the previous two sections, we have found a set of users that are allowed to transmit, i.e., a set of users that can transmit simultaneously while achieving their SINR requirements. However, the solution to this problem does not actually find the channel to be used by each user with $x_i = 1$. In a single-channel network where power control is used to achieve SINR targets at different users, a simple greedy distributed algorithm converges if a feasible solution exists \cite{Foschini1993}. Unfortunately, such a result does not exist for a system consisting of multiple channels. In the general case this is not a trivial problem. However, by assuming that the channel gains between users are reciprocal (such that $g_{i,j} = g_{j,i}$), a simple greedy algorithm is guaranteed to converge. Such an algorithm is given in Algorithm \ref{algo:1}. \begin{proposition} Algorithm \ref{algo:1} will converge. \label{prop:2} \end{proposition} \begin{IEEEproof} See Appendix \ref{app:proof-prop2} \end{IEEEproof} An example of a scenario where such an assumption can hold is a network of wireless APs, such as Wi-Fi networks. In such a network, the APs have about the same coverage area and thus the channel gain between AP $i$ and AP $j$ is approximately the same as between AP $j$ and AP $i$. \section{Performance Evaluation} \label{sec:performance} In this paper we have presented a heuristic approach to solving the simultaneous SINR problem. This process consisted of two main steps: simplifying the constraints in the original problem to binary quadratic constraints (BQCs) and using a heuristic algorithm for solving the BQCs. We are thus interested in evaluating how the transformation from the original non-convex constraints to BQCs affects the optimal value, as well as how the heuristic performs compared to the optimal value of the BQCs. Also, Proposition \ref{prop:a} says that the solution obtained from the maximization problem (\ref{eq:max-z})-(\ref{eq:x-0-1}) is a feasible solution almost surely for $N\rightarrow \infty$. Thus, for finite values of $N$ we are interested in how many of the users with $x_i = 1$ actually achieve their SINR requirements when Algorithm \ref{algo:1} terminates. These issues are investigated in this section, as well as the impact of a distributed implementation of the access control algorithm. \begin{table}[t] \centering \caption{Set of SINR targets and their revenue value for the two subproblems, (i) maximizing number of satisfied users and (ii) maximizing revenue} \begin{tabular}{c|c|c|c|c|c} \hline SINR Targets & 0 dB & 3 dB & 6 dB & 9 dB & 12 dB \\ \hline \hline $c$ Value (Max Sat Users) & 1 & 1 & 1 & 1 & 1 \\ \hline $c$ Value (Max Revenue) & 1 & 2 & 3 & 4 & 5 \\ \hline \hline \end{tabular} \label{tab:sinr-targets} \end{table} \subsection{Analytical Analysis of Geometric Signal Propagation} We start with a theoretical analysis of the transformation from the problem in (\ref{eq:org-prob})-(\ref{eq:a-0-1}) to the one given in (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t}). The constraint given in (\ref{eq:linearized-c-t}) is a sufficient condition for a feasible solution.
If the channel gain can be modeled as a function of distance such that the triangle inequality holds\footnote{Let $x,y,z$ be points in a Euclidean space; then $d(x,y)<d(x,z)+d(y,z)$ holds.}, and all users have the same SINR requirement, we can bound the optimal solution through a necessary condition as follows. \begin{proposition} Assume the optimal solution to (\ref{eq:org-prob})-(\ref{eq:a-0-1}) can achieve $OPT$. Then the optimal solution to (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t}), $OPT'$, can achieve \begin{equation} \frac{OPT}{\min\{2^{\alpha}-1,10\}}-1\leq OPT' \end{equation} where $\alpha$ is the pathloss exponent. \label{prop:G} \end{proposition} \begin{proof}See Appendix \ref{app:proof-propG}\end{proof} Thus, under these conditions we can find a constant-factor approximation of the optimal solution. However, if the received signal powers do not satisfy the triangle inequality, it has been shown that it is NP-hard to approximate the optimal solution of a single-channel scheduling problem to within $N^{1-\eta}$ for $\eta>0$ \cite[Theorem 6.1]{5062108}. As adding channels increases the decision complexity, this also holds for our problem. \begin{figure}[t] \centering \includegraphics[width = 0.8\columnwidth]{sol_value_nonasymptotic_6_18_K_5_w_zoom.pdf} \caption{Solution of the original problem (GA) compared to the GA solution value of the problem given in (13)-(15) as well as the solution value of the BQC problem (10)-(11) (GA, heuristic, upper bound and channel allocation) for the max sat problem.} \label{fig:opt-value-nona} \end{figure} \begin{figure}[t] \centering \includegraphics[width = 0.8\columnwidth]{sol_value_asymptotic_1_18_w_zoom.pdf} \caption{Solution of the original problem (GA) compared to the solution value of the BQC problem (13)-(15) (GA, heuristic, upper bound and channel allocation) for the max sat problem.} \label{fig:opt-value} \end{figure} \subsection{Simulation Setup} To evaluate the issues mentioned above we simulate an environment of constant user density. Specifically, we have a density of 1/800 [users/m$^2$]. The rest of the user parameters are set so that the optimal value of the original problem is less than the number of users for most instances. We create the environment by positioning the users randomly in a square according to a uniform distribution in two dimensions. The distance from a user to its receiver is given by a Gaussian distribution with mean 10 and variance 5. The transmit power of each user is set to 1 W and the noise variance is set to $10^{-8}$ W. The SINR requirement of a specific user is chosen at random from a set of SINR targets, which is given in Table \ref{tab:sinr-targets}. We consider two subproblems, (i) maximizing the number of satisfied users, in which case the revenue of each user is equal, and (ii) maximizing revenue, in which case satisfying larger SINR targets leads to a larger revenue. To provide a reference for the optimal value of the simplified problem relative to the original optimization problem, we solve (\ref{eq:org-prob})-(\ref{eq:a-0-1}) using a genetic algorithm (GA) from the MATLAB Global Optimization Toolbox. GAs are particularly suitable when the structure of the solution space of the problem is unknown and little theoretical analysis of the problem has been done. To benchmark the quality of the heuristic algorithm, we also solve the constraint-transformed problems through the GA function in MATLAB. The population size of the GA was set to ten times the number of decision variables.
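For reference, the scenario generation just described can be sketched as follows (Python with NumPy; the mapping from distances to the channel gains $g_{j,i}$ is not fully specified above, so the sketch stops at the geometry, powers, and SINR targets):
\begin{verbatim}
import numpy as np

def make_scenario(N, seed=0):
    # One instance of the simulation setup: density 1/800 users/m^2,
    # uniform placement in a square, Gaussian TX-RX distance
    # (mean 10, variance 5), fixed power/noise, SINR targets as above.
    rng = np.random.default_rng(seed)
    side = np.sqrt(800.0 * N)                  # area = N / density
    tx = rng.uniform(0.0, side, size=(N, 2))   # transmitter positions
    dist = rng.normal(10.0, np.sqrt(5.0), N)   # TX-RX link distances
    beta_db = rng.choice([0, 3, 6, 9, 12], N)  # SINR targets [dB]
    beta = 10.0 ** (beta_db / 10.0)
    P, sigma2 = 1.0, 1e-8                      # power [W], noise [W]
    return tx, dist, beta, P, sigma2
\end{verbatim}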
\begin{figure} \centering \includegraphics[width = 0.8\columnwidth]{sat_vs_heur.pdf} \caption{Actually satisfied users after Algorithm 1 terminates vs the solution value of the heuristic.} \label{fig:sat-vs-z} \end{figure} \subsection{Simulation Results} Fig. \ref{fig:opt-value-nona} shows the solution values for an environment with $5$ available channels where the goal is to maximize the number of satisfied users. Since all users have the same spectral resources, the assumptions behind the constraint transformation in (\ref{eq:linearized-c}) do not hold and, as can be seen from the results, the solution values obtained from (\ref{eq:max-z})-(\ref{eq:x-0-1}) are not valid solutions. On the other hand, the solutions obtained from the constraint transformation in (\ref{eq:linearized-c-t}) are valid solutions, but we can see that the constraint in (\ref{eq:linearized-c-t}) is only a sufficient condition for a valid solution, as the gap between the GA performance of the original problem in (\ref{eq:org-prob})-(\ref{eq:a-0-1}) and the performance of (\ref{eq:max-z-t})-(\ref{eq:x-0-1-t}) is roughly 2 for high numbers of users. For the results in Figs. \ref{fig:opt-value}--\ref{fig:d-vs-c-no} we let the spectral resources at each user vary such that its available channels are drawn as a random subset of $K_i$ channels from the total number of channels $K$. In the simulations $K_i$ is a uniformly distributed number between $2$ and $K$, and $K$ is set to 10. In Fig. \ref{fig:opt-value} the solution values of the different algorithms are plotted for the max sat problem. For 2 and 3 users, the optimal value of the BQC problem can actually be greater than that of the GA because, when the number of channels at any user is at least 2, the BQC problem is trivial for 2 users and will always set $x = 1$, as can be seen from (\ref{eq:linearized-c}). For large values of $N$ we see that the solutions obtained from the BQC problem can surpass the GA solution of the original problem. A possible explanation for this is that, with an increased number of users, the solution space for the original problem is so large that it becomes less likely that the GA will find the global optimum. With this in mind, the results indicate that the constraint transformation captures the performance of the system well. In Fig. \ref{fig:sat-vs-z}, the difference between the optimal value of the heuristic and the number of users actually achieving their SINR requirements when Algorithm \ref{algo:1} terminates is given, normalized against the former. This further validates the constraint approximation done in this paper, as for numbers of users greater than 5 there is less than a 2\% difference between the optimal value of the heuristic and the number of users that actually achieve their SINR. Note that in the simulations here we have not assumed reciprocal channel gains (as was assumed in the proof of Proposition \ref{prop:2}), and some of the gap can be attributed to this fact. \begin{figure}[t] \centering \includegraphics[width = 0.8\columnwidth]{sol_value_asymptotic_1_18_revenue.pdf} \caption{Solution of the original problem (GA) compared to the solution value of the BQC problem (13)-(15) (GA, heuristic, upper bound and channel allocation) for the max revenue problem.} \label{fig:opt-value-revenue} \end{figure} In the previous results the goal has been to maximize the number of satisfied users. In Fig. \ref{fig:opt-value-revenue} the goal is to maximize the revenue, where each user's SINR requirement yields a corresponding profit as given in Table \ref{tab:sinr-targets}.
Again, the constraint transformation performs within the same percentage of the GA solution of the original problem, between 10\% and 15\% for 10--18 users. \begin{figure} \centering \includegraphics[width = 0.8\columnwidth]{percent_of_centralized.pdf} \caption{Distributed performance as a percentage of the centralized solution after Algorithm 1 terminates.} \label{fig:d-vs-c-no} \end{figure} In Fig. \ref{fig:d-vs-c-no} we investigate the effect of taking only a subset of interfering users into account at each user. As our sufficient condition is based on bounding the size of different sets of interfering users, limiting the number of users that contribute to the interference at each user will increase the number of users the heuristic algorithm allows to transmit. The question is how this affects the actual channel allocation based on the users that the constraint transformation allows to transmit. In Fig. \ref{fig:d-vs-c-no} we have plotted the number of users that actually achieve their target SINR after Algorithm \ref{algo:1} terminates for different neighbor degrees, as a percentage of the number of users that achieve their SINR targets after Algorithm \ref{algo:1} terminates with global knowledge at the heuristic. Based on the user density we have calculated the distance within which $x$ neighboring users are expected to be found. Thus, for Neighbors = $x$, on average $x$ neighbors will be taken into account when solving the optimization problem. As can be seen from Fig. \ref{fig:d-vs-c-no}, for a low number of users, limiting the number of neighbors taken into account actually increases the number of satisfied users even after Algorithm \ref{algo:1} terminates. Thus, we can assume that the sufficient condition is somewhat restrictive. However, for an increasing number of users, limiting the number of neighbors that are taken into account severely degrades performance. This result limits the usefulness of the sufficient condition in a distributed implementation. The result that best captures the benefit of the heuristic algorithm is given in Table \ref{tab:time}, which shows the time taken to solve the problem for the different approaches. As the SINR requirements of the users change with time, the time it takes the algorithm to find a good solution is important. E.g., we see that for 12 users it takes the GA algorithm 30 minutes to solve the optimization problem. In this time it is likely that some of the users' transmission parameters have changed, such as the SINR requirement and worst-case signal strength, rendering the solution suboptimal. Due to the time required to solve the original problem through a GA, simulation results are only provided for a maximum of 18 users. However, as our constraint transformation allows us to utilize a heuristic algorithm with a time complexity of $\mathcal{O}(2N^2)$, this approach can be used to provide good estimates of the max sat problem and the max revenue problem for large-scale networks. \begin{table}[t!]
\caption{Time to solve the problem} \centering \begin{tabular}{c c c} \hline Number of users & Algorithm & Time (s) \\ \hline \hline \multirow{3}{*}{2} & GA & 1.982 \\ & BQC GA & 0.422 \\ & Heuristic & 0.0001 \\ \hline \multirow{3}{*}{6} & GA & 125.7 \\ & BQC GA & 1.072 \\ & MKP Heuristic & 0.0004 \\ \hline \multirow{3}{*}{12} & GA & 1832 \\ & BQC GA & 3.922 \\ & MKP Heuristic & 0.0009 \\ \hline \multirow{3}{*}{18} & GA & 16813 \\ & BQC GA & 7.305 \\ & MKP Heuristic & 0.0015 \\ \hline \end{tabular} \label{tab:time} \end{table} \section{Conclusion and Future Work} \label{sec:conc} In this paper we have presented a heuristic algorithm for solving the spectrum allocation problem under SINR requirements. Specifically, we investigated how many users can transmit and achieve their SINR targets simultaneously, and how to maximize the profit of a spectrum holder. As this problem is a non-convex integer problem, we transformed it into two binary quadratic constraint problems, one for which the solution is guaranteed to be a feasible solution to the original optimization problem and one which is a feasible solution asymptotically almost surely. To solve the BQC problems, we presented a heuristic algorithm based on Lagrange relaxation which bounds the gap between the solution value of the heuristic and the optimal value of the BQC problem. Through simulation results we showed that this approach yields solutions on average at a gap of 10\% from the solutions obtained by a genetic algorithm for the original non-linear problem. \appendices \bibliographystyle{ieeetr} \section{Proof of Proposition \ref{prop:1}} \label{app:proof-prop1} We need to show that for each $x_i=1$ there exists a channel $k$ such that SINR$_i^k = \frac{S_i}{\sigma^2 + \sum_{j\neq i}a_j^k I_{j,i}}\geq \beta_i$. Let $\omega_i^k = \sum_{j\neq i}a_j^k I_{j,i}$ represent the accumulated interference at user $i$ on channel $k$. From the definition of $I_i^{\max}$, it is clear that SINR$_i^k\geq \beta_i$ is equivalent to $\omega_i^k\leq I_i^{\max}$. We define a channel $k$ as blocked if $\omega_i^k > I_i^{\max}$. To prove the proposition we need to prove that if $x_i=1$ (i.e., user $i$ transmits) there exists at least one channel that is not blocked. We bound the set of blocked channels as follows \begin{align} \Phi = &\{k\,|\,\omega_i^k>I_i^{\max}\} \nonumber \\ = &\{k\,|\,\exists j: a_j^k = 1 \text{ and } I_{j,i}>I_i^{\max}\}\cup \nonumber\\ &\{k\,|\,\forall j \text{ with } a_j^k = 1: I_{j,i}\leq I_i^{\max}, \text{ and } \omega_i^k>I_i^{\max}\}\nonumber \\ = &\Phi_1 \cup \Phi_2 \end{align} $\Phi_1$ contains the channels blocked by strong interferers, i.e., those that can block a channel single-handedly, whereas $\Phi_2$ contains those channels that are blocked by accumulated interference. We now bound the sizes of $\Phi_1$ and $\Phi_2$. We first bound $\Phi_1$.
\begin{align}
|\Phi_1| &\leq \sum_{k=1}^{K_i}\sum_{\substack{j\neq i \\ I_{j,i}>I_i^{\max}}} a_j^k \leq \sum_{\substack{j\neq i \\ I_{j,i}>I_i^{\max}}}x_j
\end{align}
To bound $|\Phi_2|$ we use the fact that
\begin{align}
\sum_{k\in \Phi_2}\omega_i^k &= \sum_{k\in\Phi_2}\sum_{j\neq i} a_j^kI_{j,i} = \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}\sum_{k\in\Phi_2}a_j^k \nonumber \\
&\leq \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j
\end{align}
Since $\omega_i^k>I_i^{\max}$ for every channel $k\in\Phi_2$, we have
\begin{equation}
\sum_{k\in\Phi_2}I_i^{\max}<\sum_{k\in\Phi_2} \omega_i^k \leq \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j
\end{equation}
Since $I_i^{\max}$ is fixed, it follows that
\begin{equation}
|\Phi_2|<\frac{1}{I_i^{\max}}\sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j
\end{equation}
Thus the number of blocked channels is at most
\begin{align}
|\Phi| &= |\Phi_1|+|\Phi_2|<\sum_{\substack{j\neq i \\ I_{j,i}>I_i^{\max}}}x_j + \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}\frac{I_{j,i}}{I_i^{\max}}x_j \nonumber \\
&= \sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}
\end{align}
Hence, if $\sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}\leq K_i-1$ there exists at least one channel on which user $i$ can achieve its SINR requirement.\qed
\section{Proof of Proposition \ref{prop:a}}
\label{app:proof-propA}
Consider the sets $\Phi_1$ and $\Phi_2$ defined in Appendix \ref{app:proof-prop1}. We now bound them again. We bound $|\Phi_1|$ as
\begin{align}
|\Phi_1| &\leq \sum_{k=1}^{K_i}\sum_{\substack{j\neq i \\ I_{j,i}>I_i^{\max}}} a_j^k \leq \sum_{\substack{j\neq i \\ I_{j,i}>I_i^{\max}}}x_j\,\text{Prob}(j,i)
\end{align}
where $\text{Prob}(j,i)$ is the probability that $j$ contributes to blocking on any of $i$'s channels. We note that the same can be done for $\Phi_2$, so that
\begin{align}
\sum_{k\in \Phi_2}\omega_i^k \leq \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j\,\text{Prob}(j,i)
\end{align}
and
\begin{equation}
\sum_{k\in\Phi_2}I_i^{\max}<\sum_{k\in\Phi_2} \omega_i^k \leq \sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j\,\text{Prob}(j,i)
\end{equation}
so that
\begin{equation}
|\Phi_2|<\frac{1}{I_i^{\max}}\sum_{\substack{j\neq i \\ I_{j,i}\leq I_i^{\max}}}I_{j,i}x_j\,\text{Prob}(j,i)
\end{equation}
The total number of blocked channels can now be bounded as
\begin{align}
|\Phi| < \sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}\text{Prob}(j,i) \sim \sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}\frac{|\mathcal{K}_j\cap\mathcal{K}_i|}{K_j}
\end{align}
Clearly, as $N\rightarrow \infty$,
\begin{equation}
\sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}\text{Prob}(j,i)-\sum_{j\neq i} x_j\frac{I^{+}_{j,i}}{I_i^{\max}}\frac{|\mathcal{K}_j\cap\mathcal{K}_i|}{K_j}\rightarrow 0
\end{equation}
due to the central limit theorem. Thus it follows that if $\sum_{j\neq i} x_j\frac{I^{+}_{j,i}|\mathcal{K}_j\cap\mathcal{K}_i|}{I_i^{\max}K_j}\leq K_i-1$, there exists at least one channel on which user $i$ can achieve its SINR requirement almost surely.\qed
\section{Proof of Proposition \ref{prop:2}}
\label{app:proof-prop2}
We prove that a greedy algorithm such as the one given in Algorithm \ref{algo:1} converges under the assumption of reciprocal channel gains (i.e. $g_{ij} = g_{ji}$, where $g_{ij}$ is the pathloss between users $i$ and $j$). Consider a utility function $u_i^{k}$ for each user $i$ that depends on the channel $k$ as follows:
\begin{equation}
u_i^{k} = \sum_{j\neq i}a_i^k a_j^k g_{ji}P_jP_i.
\end{equation}
Clearly, minimizing $\omega_i^k$ in Algorithm \ref{algo:1} is equivalent to minimizing $u_i^k$; a sketch of this best-response step is given below.
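The following Python sketch implements one sweep of this best-response step. It is a minimal illustration under stated assumptions: the variable names are ours, and each user is simplified to occupy a single channel rather than a general binary allocation $a_i^k$.
\begin{verbatim}
import numpy as np

def best_response_pass(alloc, g, P, K):
    """One sweep of the greedy dynamics: each user moves to the
    channel minimising omega_i^k = sum_{j != i} a_j^k g[j, i] P[j].
    alloc[i] is user i's current channel, g the (symmetric) gain
    matrix, P the transmit powers, K the number of channels.
    Returns True if any user changed channel, i.e. the improvement
    path has not yet terminated."""
    changed = False
    N = len(P)
    for i in range(N):
        omega = np.zeros(K)
        for j in range(N):
            if j != i:
                omega[alloc[j]] += g[j, i] * P[j]
        best = int(np.argmin(omega))
        if omega[best] < omega[alloc[i]]:
            alloc[i] = best          # strict improvement only
            changed = True
    return changed
\end{verbatim}
The potential-function argument below makes precise why repeated sweeps of this kind terminate.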
Consider a potential function $P(\mathbb{A})$ defined for an allocation $\mathbb{A}$ as
\begin{equation}
P(\mathbb{A}) = \sum_{i=1}^N\sum_{k=1}^{K_i} u_i^k
\end{equation}
We prove convergence by showing that the game forms a generalized ordinal potential game. Convergence is then guaranteed, as all finite generalized ordinal potential games have a pure-strategy equilibrium \cite[Corollary 2.2]{Monderer1996} and a finite improvement path \cite[Lemma 2.3]{Monderer1996}. Now, user $i$ changes its allocation from channel $k$ to $k'$ given that
\begin{equation}
\omega_i^{k'} = \sum_{j\neq i}a_{j}^{k'}g_{ji}P_j<\sum_{j\neq i}a_{j}^{k}g_{ji}P_j = \omega_i^{k}
\end{equation}
Multiplying each side by $P_i$ gives
\begin{equation}
u_i^{k'} =\sum_{j\neq i}a_{j}^{k'}g_{ji}P_jP_i<\sum_{j\neq i}a_{j}^{k}g_{ji}P_jP_i = u_{i}^k
\end{equation}
Assume that, before user $i$ changed from $k$ to $k'$, user $l$ transmits on $k$ and user $m$ transmits on $k'$. Since user $i$ no longer transmits on $k$, clearly $\omega_l^k \text{ (after)}<\omega_l^k \text{ (before)}$. Define $\Delta u_i$, $\Delta u_l$ and $\Delta u_m$ as
\begin{equation*}
\Delta u_i = u_{i}^{k} - u_{i}^{k'} = \sum_{j\neq i}a_{j}^{k}g_{ji}P_jP_i-\sum_{j\neq i}a_{j}^{k'}g_{ji}P_jP_i>0,
\end{equation*}
\begin{equation*}
\Delta u_l = u_l(\text{before $i$ changed}) - u_l(\text{after $i$ changed}) = a_{l}^{k}g_{il}P_lP_i
\end{equation*}
and
\begin{align}
\Delta u_m &= u_m(\text{before $i$ changed}) - u_m(\text{after $i$ changed}) \nonumber \\
&= -a_{m}^{k'}g_{im}P_mP_i \nonumber
\end{align}
$\Delta u_l$ corresponds to the users that gain from user $i$'s change from channel $k$ to $k'$ in terms of increased SINR, and $\Delta u_m$ to the users that lose in terms of decreased SINR. Using the reciprocity $g_{ij}=g_{ji}$, the total change over all affected users is thus
\begin{align}
\Delta P(\mathbb{A},\mathbb{A'}) &= \Delta u_i + \sum_{l\neq i} \Delta u_l + \sum_{m\neq i} \Delta u_m \nonumber \\
&=\sum_{j\neq i}a_{j}^{k}g_{ji}P_jP_i-\sum_{j\neq i}a_{j}^{k'}g_{ji}P_jP_i \nonumber \\
&\quad+ \sum_{j\neq i}a_{j}^{k}g_{ji}P_jP_i -\sum_{j\neq i}a_{j}^{k'}g_{ji}P_jP_i \nonumber \\
&= 2\biggl(\sum_{j\neq i}a_{j}^{k}g_{ji}P_jP_i-\sum_{j\neq i}a_{j}^{k'}g_{ji}P_jP_i\biggr)>0
\end{align}
Thus, $u_i^k>u_i^{k'}$ implies $P(\mathbb{A})>P(\mathbb{A'})$, and the game is therefore a generalized ordinal potential game.\qed
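A quick numerical sanity check of this property is sketched below, on a random reciprocal-gain instance; all names and values are illustrative. Every strictly improving unilateral move must strictly decrease the potential.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 4
g = rng.uniform(0.1, 1.0, (N, N))
g = (g + g.T) / 2                 # reciprocity: g[i, j] == g[j, i]
np.fill_diagonal(g, 0.0)
P = rng.uniform(0.5, 2.0, N)
alloc = rng.integers(0, K, N)     # alloc[i] = channel of user i

def potential(a):
    # P(A) = sum over i and j != i on the same channel of g[j,i] P_j P_i
    same = a[:, None] == a[None, :]
    return float(np.sum(same * g * np.outer(P, P)))

for i in range(N):
    omega = np.array([np.dot((alloc == k) * P, g[:, i])
                      for k in range(K)])
    k_new = int(np.argmin(omega))
    if omega[k_new] < omega[alloc[i]]:
        new = alloc.copy()
        new[i] = k_new
        assert potential(new) < potential(alloc)  # ordinal potential
        alloc = new
\end{verbatim}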
\section{Proof of Proposition \ref{prop:G}}
\label{app:proof-propG}
To prove the proposition we first introduce a necessary condition for a successful spectrum allocation:
\begin{lemma}
Let $\mathbf{x}$ be a successful spectrum allocation. Then for each $x_i$ the following holds:
\begin{equation}
x_i\bigl(1+\sum_{j\neq i} x_j \frac{I_{j,i}^{+}}{I^{\max}}\bigr) \leq \min(2^{\alpha}+1,10)K_i
\label{eq:nec}
\end{equation}
\end{lemma}
The proof is similar to the proof of Lemma 3 in \cite{optimus}. By comparing the necessary condition (\ref{eq:nec}) with the sufficient condition (\ref{eq:linearized-c-t}) we can bound the optimal value. Denote $\frac{I_{j,i}^{+}}{I^{\max}}$ by $a_{j,i}$ and $\min(2^{\alpha}+1,10)$ by $C$. We now define two sets:
\begin{align}
&\mathcal{L}_s = \{i \mid x_i = 1,\; x_i(1+\sum_{j\in \mathcal{L}_s,j\neq i}x_ja_{j,i})\leq K_i\} \\
&\mathcal{L}_n = \{i \mid x_i = 1,\; x_i(1+\sum_{j\in \mathcal{L}_n,j\neq i}x_ja_{j,i})\leq CK_i\}
\end{align}
Thus, $|\mathcal{L}_s|$ is the number of users satisfying the sufficient constraints and $|\mathcal{L}_n|$ is the number of users satisfying the necessary constraints. Clearly, the optimal value (OPT) of (\ref{eq:org-prob})--(\ref{eq:a-0-1}) is bounded as $|\mathcal{L}_s|\leq \text{OPT}\leq |\mathcal{L}_n|$. Let $\epsilon_i$ be the smallest $a_{j,i}$ in the sum over $\mathcal{L}_n$, i.e. $\epsilon_i = \min_{j\in \mathcal{L}_n}a_{j,i}$. Then we have
\begin{align}
&|\mathcal{L}_n|\epsilon_i\leq 1+ \sum_{j\in \mathcal{L}_n,j\neq i}x_ja_{j,i} \leq CK_i \nonumber \\
\Rightarrow\; & \epsilon_i\leq \frac{CK_i}{|\mathcal{L}_n|} \nonumber
\end{align}
with equality in the worst case, which maximizes $|\mathcal{L}_n|$. Consider
\begin{align}
1+ \sum_{j\in \mathcal{L}_n,j\neq i}x_ja_{j,i} &= 1+\sum_{j\in \mathcal{L}_s,j\neq i}x_ja_{j,i}+\sum_{j\in \mathcal{L}_n\backslash\mathcal{L}_s,j\neq i}x_ja_{j,i} \nonumber \\
&\geq 1+\sum_{j\in \mathcal{L}_s,j\neq i}x_ja_{j,i}+(|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\epsilon_i
\end{align}
Thus we have
\begin{align}
&1+\sum_{j\in \mathcal{L}_s,j\neq i}x_ja_{j,i}+(|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\epsilon_i \nonumber \\
&\leq K_i + (|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\epsilon_i \leq CK_i
\end{align}
Summing over all $i$ yields
\begin{align}
C\sum_iK_i &\geq \sum_i \bigl(K_i+(|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\epsilon_i\bigr)\nonumber \\
&\geq \sum_i K_i+(|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\sum_i\epsilon_i \nonumber \\
&\geq \sum_i K_i+(|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\frac{C}{|\mathcal{L}_n|}\sum_iK_i \nonumber \\
\Rightarrow C&\geq 1 + (|\mathcal{L}_n\backslash\mathcal{L}_s|-1)\frac{C}{|\mathcal{L}_n|}
\end{align}
Since $\mathcal{L}_s$ is a subset of $\mathcal{L}_n$, $|\mathcal{L}_n\backslash\mathcal{L}_s| = |\mathcal{L}_n|-|\mathcal{L}_s|$, and we get
\begin{equation}
|\mathcal{L}_n|\leq C(|\mathcal{L}_s|+1)
\end{equation}\qed
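As a concrete reading of this guarantee (a worked example, reading $\alpha$ as the pathloss exponent, which is an assumption on notation here): combining $|\mathcal{L}_s|\leq \text{OPT}\leq |\mathcal{L}_n|$ with the bound above gives
\begin{equation*}
\text{OPT} \leq C(|\mathcal{L}_s|+1), \quad\text{i.e.}\quad |\mathcal{L}_s| \geq \frac{\text{OPT}}{C}-1,
\end{equation*}
so any allocation satisfying the sufficient constraints admits at least $\text{OPT}/C - 1$ users. For instance, if $\alpha = 3$ then $C = \min(2^3+1,10) = 9$, and the sufficient-condition solution is within a factor of $9$ of the optimum, up to an additive term of one user.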